ChainStreet
WHERE CODE MEETS CAPITAL

Block Builds Peer-to-Peer Network for AI Inference

Jack Dorsey’s company scraps cloud dependency to run open-source models on idle GPUs via Nostr discovery.

Block is developing a mesh-based peer-to-peer network to run AI inference across idle GPUs. The project aims to eliminate reliance on centralized cloud providers for open-source model computation. Engineering roadmaps for the first half of 2026 detail a Goose AI agent platform operating on a gossip-based mesh. Node discovery occurs via Nostr, the decentralized social protocol.

Key Takeaways
  • Block develops a peer-to-peer mesh network utilizing the Nostr protocol to run decentralized AI inference on idle consumer GPUs.
  • Jack Dorsey cut 40% of Block staff in 2025 to fund technical pushes into open-source tools like llama.cpp and Goose.
  • Decentralized node discovery via Nostr eliminates cloud dependency on Amazon or Google, prioritizing data sovereignty over raw centralized processing speeds.

Cloud Dependency Scrapped via Mesh Architecture

Engineers utilized open-source tools optimized for consumer hardware. Block relied on llama.cpp for local inference and integrated Nostr for decentralized discovery. Gossip protocols allowed nodes to share information with neighbors without central coordination.
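
The gossip pattern described here can be sketched in a few lines of Python. The topology, round structure, and function names below are illustrative assumptions, not Block's implementation: each node that has heard a fact forwards it to one random neighbor per round, and the fact spreads without any central coordinator.

```python
import random

def gossip_round(neighbors, informed, rng):
    """One push-gossip round: every informed node tells one random neighbor."""
    heard = set()
    for node in informed:
        heard.add(rng.choice(neighbors[node]))
    return informed | heard

def spread(neighbors, origin, rng, max_rounds=100):
    """Spread a fact from `origin`; return the informed set and round count."""
    informed = {origin}
    rounds = 0
    while len(informed) < len(neighbors) and rounds < max_rounds:
        informed = gossip_round(neighbors, informed, rng)
        rounds += 1
    return informed, rounds

# A small fully connected mesh of five nodes.
nodes = ["a", "b", "c", "d", "e"]
neighbors = {n: [m for m in nodes if m != n] for n in nodes}
informed, rounds = spread(neighbors, "a", random.Random(42))
```

On this toy mesh every node hears the fact within a handful of rounds; real gossip systems add digests and anti-entropy passes, which the sketch omits.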

Roadmap entries specified that Goose users running spare GPU capacity could voluntarily contribute compute to a shared pool. Other participants tapped the network to run architectures such as Llama and Mistral. The strategy aimed to eliminate infrastructure costs and data-privacy concerns tied to commercial cloud providers.

Documentation for mesh-llm noted that models that did not fit on one machine were automatically distributed. Michael Neale, Block’s principal engineer for applied AI, maintained the reference implementations. Neale’s project demonstrated pipeline parallelism and expert sharding with low cross-node latency.
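
Pipeline parallelism of this kind can be pictured as a greedy layer-partitioning step. The function name and the memory-budget heuristic below are assumptions for illustration, not mesh-llm's actual algorithm: consecutive model layers are packed onto each node until its memory budget is exhausted, producing one contiguous pipeline stage per node.

```python
def partition_layers(layer_sizes_gb, node_capacities_gb):
    """Greedily assign consecutive model layers to nodes so each pipeline
    stage fits within its node's memory budget (pipeline parallelism)."""
    assignments, current, used, node = [], [], 0.0, 0
    for i, size in enumerate(layer_sizes_gb):
        if current and used + size > node_capacities_gb[node]:
            assignments.append(current)
            node += 1
            if node >= len(node_capacities_gb):
                raise ValueError("model does not fit on the available nodes")
            current, used = [], 0.0
        current.append(i)
        used += size
    assignments.append(current)
    return assignments

# Four 2 GB layers across two 5 GB nodes -> two contiguous pipeline stages.
stages = partition_layers([2, 2, 2, 2], [5, 5])
```

Contiguity matters: activations only cross the network at stage boundaries, which is what keeps cross-node latency low in a pipeline-parallel setup.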

Workforce Cuts Funded Technical Efficiency

Technical pushes at Block reflected Jack Dorsey’s broader advocacy for open-weight AI. By using MIT-licensed tools like Goose and llama.cpp, the company framed distributed inference as a sovereignty-focused alternative to proprietary services. Roadmap text suggested a transaction-based model for compute contributions.

Developers explored paying contributors for spare compute through an application wallet. Dorsey called the Goose platform a “superpower” in internal comments on the product’s performance. The restructuring at Block coincided with the technical push: Dorsey cut roughly 40 percent of the workforce in February, citing AI-driven productivity gains as the primary factor. Goose served as a central tool for Block’s internal code generation.

Sovereignty Defined Market Differentiation

Distributed inference lacked novelty, but Block pursued sovereignty as a differentiator. Existing projects like Ray and vLLM offered similar capabilities, yet those platforms required coordination servers or managed cloud infrastructure.

Nostr-based discovery allowed Block’s network to bootstrap without a single point of failure. Gossip protocols ensured information spread despite node churn. Predictability remained a primary trade-off. Peer-to-peer networks introduced variable response times compared to centralized clouds. Block optimized for independent rails over raw speed.
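
Discovery over Nostr can be pictured as nodes publishing availability events that peers filter by kind and tags. The event kind number, tag names, and selection logic below are hypothetical sketches, not Block's schema:

```python
import json
import time

OFFER_KIND = 31337  # hypothetical Nostr event kind for GPU capacity offers

def make_offer(pubkey, vram_gb, models):
    """Build a Nostr-style event advertising spare GPU capacity."""
    return {
        "pubkey": pubkey,
        "kind": OFFER_KIND,
        "created_at": int(time.time()),
        "tags": [["vram", str(vram_gb)]] + [["model", m] for m in models],
        "content": json.dumps({"status": "available"}),
    }

def discover(events, min_vram_gb):
    """Filter a relay's event stream for offers meeting a VRAM floor."""
    found = []
    for ev in events:
        if ev["kind"] != OFFER_KIND:
            continue
        vram = next((int(t[1]) for t in ev["tags"] if t[0] == "vram"), 0)
        if vram >= min_vram_gb:
            found.append(ev["pubkey"])
    return found

relay = [make_offer("npub_alice", 24, ["llama"]),
         make_offer("npub_bob", 8, ["mistral"])]
capable = discover(relay, min_vram_gb=16)
```

A real node would sign its events and subscribe to multiple relays with a filter, so no single relay is a point of failure; this sketch models only the selection step.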

Research Phase Limited Immediate Release

Full mesh capability stayed in research and design phases through early 2026. Block listed the feature as “explore” rather than a production release. The company simultaneously shipped llama.cpp integration directly into Goose. Integration allowed users to run inference locally without external dependencies.

Goose gained adoption among developers after its open-source release in January 2025. The repository reached 34,000 stars on GitHub. Community contributions remained active as the mesh plan evolved. Block built the plumbing; success now depends on whether edge-device operators contribute idle capacity.

Chain Street’s Take

Block is building the plumbing for a sovereign AI future. Centralized compute is the current bottleneck. Mesh networks solve for cost and privacy.

The wager hinges on idle capacity. Verification and reputation will decide if the network scales. Latency is the enemy of peer-to-peer systems. Block is choosing independence over speed. Developers will likely prefer open rails to Big Tech convenience. The plumbing is ready. The market will soon decide if it wants to use it.

Frequently Asked Questions

01

What is the Block P2P AI network?

Block's P2P network is a decentralized system designed to run large language models across a mesh of idle consumer GPUs. The infrastructure utilizes the Nostr protocol for node discovery and gossip protocols to coordinate computation without central servers. This architecture allows developers to execute models like Llama and Mistral while bypassing traditional cloud providers.
02

Why does this matter for the AI industry?

Distributed inference reduces the current market reliance on centralized hardware providers like Nvidia or cloud giants like Microsoft. Block provides a sovereign alternative that protects user data by keeping model execution on local or peer-controlled devices. This shift challenges the closed-ecosystem dominance of proprietary AI services through open-source MIT-licensed tools.
03

How will Block execute this infrastructure?

Engineering teams are integrating the Goose agent platform with llama.cpp to enable expert sharding and pipeline parallelism across the mesh. Principal engineer Michael Neale is leading the development of the mesh-llm documentation to optimize cross-node latency on consumer hardware. The company plans to utilize application-based wallets to facilitate payments for participants contributing spare compute capacity.
04

What are the risks of decentralized inference?

Peer-to-peer networks suffer from variable response times and high latency compared to the predictable speeds of centralized data centers. Reliability is a significant concern because nodes can churn or disconnect from the gossip network during critical computation tasks. Maintaining model integrity across unverified third-party hardware requires robust reputation systems that are still in the research phase.
05

What happens next for Goose and Nostr?

Block will move the mesh capability from the exploration phase into a production release as community adoption on GitHub grows. Integration of the Lightning Network for compute payments could further incentivize edge-device operators to join the decentralized pool. The evolution of the Nostr protocol will determine if this decentralized discovery model can scale to support enterprise-level AI workloads.

Alex Reeve

Alex Reeve is a contributing writer for ChainStreet.io. Her articles provide timely insights and analysis across the interconnected crypto and AI industries, including regulatory updates, market trends, token economics, institutional developments, platform innovations, stablecoins, meme coins, policy shifts, and the latest advancements in AI models, tools, and applications, along with their broader implications for technology and markets.

The views and opinions expressed by Alex in this article are her own and do not necessarily reflect the official position of ChainStreet.io, its management, editors, or affiliates. This content is provided for informational and educational purposes only and does not constitute financial, investment, legal, or tax advice. Readers should conduct their own research and consult qualified professionals before making any decisions related to digital assets, cryptocurrencies, or financial matters. ChainStreet.io and its contributors are not responsible for any losses incurred from reliance on this information.