Block is developing a mesh-based peer-to-peer network to run AI inference across idle GPUs. The project aims to eliminate reliance on centralized cloud providers for open-source model computation. Engineering roadmaps for the first half of 2026 detail a Goose AI agent platform operating on a gossip-based mesh. Node discovery occurs via Nostr, the decentralized social protocol.
- Block develops a peer-to-peer mesh network utilizing the Nostr protocol to run decentralized AI inference on idle consumer GPUs.
- Jack Dorsey cut roughly 40% of Block staff in 2025, citing AI-driven productivity gains, while funding technical pushes into open-source tools like llama.cpp and Goose.
- Decentralized node discovery via Nostr removes dependency on cloud providers such as Amazon and Google, prioritizing data sovereignty over raw centralized processing speed.
Cloud Dependency Scrapped via Mesh Architecture
Engineers built on open-source tools optimized for consumer hardware. Block used llama.cpp for local inference and integrated Nostr for decentralized node discovery. Gossip protocols let nodes share information with their neighbors without central coordination.
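Block has not published its gossip implementation, but the mechanic is straightforward to illustrate. The sketch below is a minimal push-gossip simulation (all names and parameters are hypothetical): each informed node forwards an update to a few random peers per round, and the update reaches the whole network in a handful of rounds with no coordinator.

```python
import random

def gossip_rounds(num_nodes: int, fanout: int = 3, seed: int = 0) -> int:
    """Simulate push-gossip: each informed node forwards the update to
    `fanout` random peers per round. Returns the number of rounds until
    every node is informed. Assumes num_nodes > fanout."""
    rng = random.Random(seed)
    informed = {0}          # node 0 originates the update
    rounds = 0
    while len(informed) < num_nodes:
        targets = set()
        for _ in informed:  # each informed node picks random peers
            targets.update(rng.sample(range(num_nodes), fanout))
        informed |= targets
        rounds += 1
    return rounds

print(gossip_rounds(100))   # converges in roughly O(log n) rounds
```

Because every node talks only to a random handful of peers, the protocol tolerates churn: a dead neighbor simply fails to forward, and the update reaches that region of the mesh through another path.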
Roadmap entries specified that Goose users running spare GPU capacity could voluntarily contribute compute to a shared pool. Other participants tapped the network to run architectures such as Llama and Mistral. The strategy aimed to eliminate infrastructure costs and data-privacy concerns tied to commercial cloud providers.
Documentation for mesh-llm noted that models that did not fit on one machine were automatically distributed. Michael Neale, Block’s principal engineer for applied AI, maintained the reference implementations. Neale’s project demonstrated pipeline parallelism and expert sharding with low cross-node latency.
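The pipeline-parallel idea behind mesh-llm can be sketched without any of its internals: split a model's transformer layers into contiguous shards sized to each node's memory, so activations flow node-to-node in one direction. The helper below is an illustrative sketch, not Block's code; the function name and the memory-proportional heuristic are assumptions.

```python
def partition_layers(num_layers: int, node_mem_gb: list[float]) -> list[tuple[int, int]]:
    """Assign contiguous [start, end) layer ranges to nodes, sized in
    proportion to each node's available memory. The last node absorbs
    any rounding remainder so every layer is placed exactly once."""
    total = sum(node_mem_gb)
    shards, start = [], 0
    for i, mem in enumerate(node_mem_gb):
        if i == len(node_mem_gb) - 1:
            end = num_layers
        else:
            end = start + round(num_layers * mem / total)
        shards.append((start, end))
        start = end
    return shards

# e.g. a 32-layer model across two 24 GB cards and one 16 GB card
print(partition_layers(32, [24, 24, 16]))  # [(0, 12), (12, 24), (24, 32)]
```

Contiguous shards matter for latency: each node sends activations to exactly one successor per token, so cross-node traffic stays at one hop per pipeline stage.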
Workforce Cuts Funded Technical Efficiency
Technical pushes at Block reflected Jack Dorsey’s broader advocacy for open-weight AI. By using MIT-licensed tools like Goose and llama.cpp, the company framed distributed inference as a sovereignty-focused alternative to proprietary services. Roadmap text suggested a transaction-based model for compute contributions.
Developers explored payment for offering spare compute via an application wallet. Dorsey called the Goose platform a “superpower” in internal comments on the product’s performance.

Restructuring at Block coincided with the technical push. Dorsey axed roughly 40 percent of the workforce in February 2025. He cited AI-driven productivity gains as the primary factor in the cuts. Goose served as a central tool for Block’s internal code generation.
Sovereignty Defined Market Differentiation
Distributed inference lacked novelty, but Block pursued sovereignty as a differentiator. Existing projects like Ray and vLLM offered similar capabilities, yet those platforms required coordination servers or managed cloud infrastructure.
Nostr-based discovery allowed Block’s network to bootstrap without a single point of failure. Gossip protocols ensured information spread despite node churn. Predictability remained a primary trade-off. Peer-to-peer networks introduced variable response times compared to centralized clouds. Block optimized for independent rails over raw speed.
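Bootstrapping over Nostr means a node can announce itself by publishing an ordinary Nostr event to public relays, with no registry to fail. Block's event format is not public; the sketch below invents a capacity-offer event (kind 31337 and the tag layout are hypothetical), while the event-id computation follows the real NIP-01 rule of hashing the canonical JSON serialization. Signing is omitted.

```python
import hashlib
import json
import time

def build_capacity_event(pubkey_hex: str, models: list[str], kind: int = 31337) -> dict:
    """Build an unsigned Nostr-style event advertising spare GPU capacity.
    The kind number and tags are illustrative assumptions; only the id
    computation (sha256 over NIP-01 canonical JSON) matches the spec."""
    created_at = int(time.time())
    tags = [["model", m] for m in models]   # which architectures this node serves
    content = "gpu-capacity-offer"
    serialized = json.dumps(
        [0, pubkey_hex, created_at, kind, tags, content],
        separators=(",", ":"), ensure_ascii=False)
    event_id = hashlib.sha256(serialized.encode()).hexdigest()
    return {"id": event_id, "pubkey": pubkey_hex, "created_at": created_at,
            "kind": kind, "tags": tags, "content": content}

event = build_capacity_event("ab" * 32, ["llama-3", "mistral"])
```

Peers watching relays for this kind can assemble a node list without any central index; if a relay goes down, the same event can be republished to any other relay.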
Research Phase Limited Immediate Release
Full mesh capability stayed in research and design phases through early 2026. Block listed the feature as “explore” rather than a production release. The company simultaneously shipped llama.cpp integration directly into Goose, letting users run inference locally without external dependencies.
Goose gained adoption among developers after its open-source release in January 2025. The repository reached 34,000 stars on GitHub. Community contributions remained active as the mesh plan evolved. Block built the plumbing; success now depends on whether edge-device operators contribute idle capacity.
Chain Street’s Take
Block is building the plumbing for a sovereign AI future. Centralized compute is the current bottleneck. Mesh networks solve for cost and privacy.
The wager hinges on idle capacity. Verification and reputation will decide if the network scales. Latency is the enemy of peer-to-peer systems. Block is choosing independence over speed. Developers will likely prefer open rails to Big Tech convenience. The plumbing is ready. The market will soon decide if it wants to use it.