Introducing AXL: Peer-to-Peer Communication for AI
If AI agents are going to work together, they need to be able to find each other. Right now, that is harder than it sounds.

Modern agent frameworks standardize what agents say to each other. MCP defines how they share tools, A2A defines interoperable communication patterns, ERC-8004 handles on-chain identity, and there are many others. But none of these standards solves a more basic problem: how agents reach each other in the first place. All of them assume every agent has a publicly routable HTTP endpoint, a DNS name, or a reverse proxy. For cloud-hosted services, that is usually fine. For researchers on home machines, developers behind corporate NATs, or open-source projects that want to run without centralized infrastructure, it is not.
AXL is a lightweight binary that connects users to a decentralized mesh network, lets them spin up new networks of their own, exposes a simple local HTTP API as the application interface, and handles all the hard networking underneath. No port forwarding, no public IP, no root privileges. Your agents become reachable by running a single process.
A concrete example is our collaborative autoresearch demo, where multiple AI research agents run LLM training experiments on separate GPUs and share what they learn over AXL. Each agent edits a local train.py, runs a fixed five-minute experiment, measures validation bits per byte, and broadcasts the result to peers. If another agent finds a meaningfully better approach, peers can adopt that winning train.py, validate it locally, and continue from the stronger baseline. There is no central coordinator, hosted queue, or shared server in the loop. AXL gives the agents a peer-to-peer channel for both breakthroughs and dead ends, so useful discoveries propagate across the swarm while failed experiments help everyone avoid repeating the same work.
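The adopt-if-better step in that loop can be sketched in a few lines. This is an illustrative sketch, not code from the demo: the message fields ("bpb", "train_py") and the 0.01 adoption margin are assumptions, and lower validation bits per byte means a better model.

```python
def should_adopt(local_bpb: float, peer_bpb: float, margin: float = 0.01) -> bool:
    """Lower bits per byte is better; adopt only on a meaningful win.

    The 0.01 margin is a hypothetical threshold, not a value from the demo.
    """
    return peer_bpb < local_bpb - margin


def merge_result(baseline: dict, broadcast: dict) -> dict:
    """Keep whichever train.py has the better validated score.

    The message fields ("bpb", "train_py") are illustrative assumptions.
    """
    if should_adopt(baseline["bpb"], broadcast["bpb"]):
        return {"bpb": broadcast["bpb"], "train_py": broadcast["train_py"]}
    return baseline
```

Dead ends matter just as much as wins here: a broadcast that fails the margin check still tells every peer which direction not to explore.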
Why this is hard
Agents today run on laptops, cloud VMs, and GPU rigs behind every kind of network configuration. The internet was not designed for machines behind NAT to accept inbound connections. That is why every “remote” agent deployment eventually hits the same wall: tunneling services, firewall rules, DNS setup, and OAuth infrastructure just to let two machines talk.
The result is that most multi-agent systems either run on a single machine or require Kubernetes. The gap between “works on my laptop” and “works across two laptops” is, in practice, enormous.
What AXL does
AXL packages the full P2P networking stack into a single node that any application can talk to over localhost.
At the base sits Yggdrasil, a decentralized IPv6 overlay network that forms a dynamic spanning tree as nodes join. On top of that, a gVisor userspace TCP stack handles connections without touching the kernel, which is why AXL needs no TUN device and no elevated permissions. The node manages its own ed25519 keypair, derives a stable peer ID from the public key, and uses that ID as the permanent address for the machine on the network.
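To make "stable peer ID derived from the public key" concrete, here is one common way such IDs are built. This is purely illustrative; AXL's actual derivation scheme may differ, and the 32-byte input stands in for a raw ed25519 public key.

```python
import hashlib


def peer_id_from_pubkey(pubkey: bytes) -> str:
    # Illustrative only: derive a stable ID as the hex SHA-256 digest
    # of the raw public key. The same key always yields the same ID,
    # so the ID can serve as a permanent network address.
    return hashlib.sha256(pubkey).hexdigest()
```

The useful property is determinism: as long as the node keeps its keypair, its address never changes, no matter which physical network it is on.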
The application layer never touches any of this. AXL exposes a local HTTP API on localhost:9002. To send data to another node, POST to /send with the destination peer ID. To check for inbound messages, GET /recv. To see who else is on the network, GET /topology. The node handles dialing, encryption, and routing over the mesh.
You do not need to think about the networking layers when you use AXL. You point your application at localhost, address messages by peer ID, and the node does the rest.
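A minimal client sketch against the local API, using only the endpoints named above (/send, /recv, /topology on localhost:9002). The /send body fields ("dest", "data") are illustrative assumptions; consult the API reference for the real schema.

```python
import json
import urllib.request

AXL_API = "http://localhost:9002"  # local AXL node, per the defaults above


def send_request(peer_id: str, payload: dict) -> tuple[str, bytes]:
    # Build the POST /send request. The body fields ("dest", "data")
    # are illustrative assumptions, not the documented schema.
    body = json.dumps({"dest": peer_id, "data": payload}).encode()
    return f"{AXL_API}/send", body


def send(peer_id: str, payload: dict) -> None:
    url, body = send_request(peer_id, payload)
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req).close()


def recv() -> list:
    # Poll the node for inbound messages.
    with urllib.request.urlopen(f"{AXL_API}/recv") as resp:
        return json.loads(resp.read())


def topology() -> dict:
    # Ask the node who else is currently on the mesh.
    with urllib.request.urlopen(f"{AXL_API}/topology") as resp:
        return json.loads(resp.read())
```

Everything network-related (dialing, encryption, routing) stays on the other side of that localhost boundary.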
Protocol support
Raw messaging is useful, but the real leverage comes from protocol-aware routing.
AXL ships with built-in support for MCP and A2A. When a message arrives over the mesh, the node inspects its envelope and routes it to the right handler: MCP requests go to a local MCP router that can dynamically register and deregister services, A2A requests go to a local A2A server that exposes registered services as agent skills, and everything else lands in the general message queue.
From the application side, calling a remote peer’s MCP service is a single HTTP request to /mcp/{peer_id}/{service}. Calling a remote A2A agent is a request to /a2a/{peer_id}. The node wraps the JSON-RPC body in a transport envelope, sends it over Yggdrasil, waits for the response, unwraps it, and returns the result. The application sees a normal JSON-RPC response. It never needs to know the other party is on a different continent, behind a NAT, or running on a laptop with no public IP.
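Concretely, a remote MCP call is just an HTTP POST with a standard JSON-RPC 2.0 body. The helpers below build the URL shapes named above and a plain JSON-RPC envelope; the peer ID, service name, and method are placeholders.

```python
import json

AXL_API = "http://localhost:9002"  # local node API, as above


def mcp_url(peer_id: str, service: str) -> str:
    # Endpoint shape from the post: /mcp/{peer_id}/{service}.
    return f"{AXL_API}/mcp/{peer_id}/{service}"


def a2a_url(peer_id: str) -> str:
    # Endpoint shape from the post: /a2a/{peer_id}.
    return f"{AXL_API}/a2a/{peer_id}"


def jsonrpc_body(method: str, params: dict, req_id: int = 1) -> bytes:
    # A plain JSON-RPC 2.0 request. The node wraps this in AXL's
    # transport envelope before it crosses the mesh, and unwraps the
    # response, so the application only ever sees JSON-RPC.
    return json.dumps({"jsonrpc": "2.0", "id": req_id,
                       "method": method, "params": params}).encode()
```

POSTing `jsonrpc_body(...)` to `mcp_url(...)` returns an ordinary JSON-RPC response, exactly as if the service were local.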
This turns AXL into the missing transport binding for the open agent stack. MCP defines the tools, A2A defines the agents, and AXL makes them reachable.
Why this matters
Running a node enables agent-to-agent communication without specialized infrastructure. Any developer running an MCP server locally can make it available to collaborators by running AXL alongside it. No tunneling service, no cloud deployment, no DNS. The server stays on their machine; the mesh handles reachability.
Use cases aren’t limited to agentic applications. AXL can move arbitrary data between machines over the open internet, which lowers the barrier to running common ML jobs across heterogeneous hardware: distributed inference that shards a model across peers, collective-communication calls for distributed training, multi-agent coding swarms, or information exchange between autonomous systems. AXL provides the transport layer; the application defines the protocol.
This is the gap AXL is built to close. It turns “deploy to a server so others can reach you” into “run a node and you are already reachable.”
Try it
AXL is open source and available now.
git clone https://github.com/gensyn-ai/axl
make build
openssl genpkey -algorithm ed25519 -out private.pem
./node -config node-config.json
See the documentation for configuration, API reference, and examples. Run a public node to help bootstrap the network, or spin up your own mesh in isolation. We are excited to see what you build.