From Bundles to Time: A Theory of Decentralised Compute Markets
We present a decentralised two-sided market design that treats compute as a time‑bound asset, enabled by reproducibility, verification, and checkpointing, yielding dynamic pricing and simple matching without combinatorial auctions.
Key Highlights
- Today’s compute markets are opaque and bundle‑based; decentralised versions inherit latency, complexity, and trust problems.
- With determinism + verification + checkpointing, jobs can move safely across hardware—so feasibility comes from a scheduler, not from pre‑allocating bundles.
- Treating compute as time slices in capacity tiers enables continuous, transparent price discovery and greedy (fast) matching.
- The paper proves that equilibrium prices exist and are unique; that cost-minimizing, deadline-aware greedy matching allocates jobs efficiently in both the single- and multi-period cases; and that the pooling and slashing mechanisms encourage honest supply.
Why it matters
Compute resources have become one of the world’s most valuable commodities, underpinning everything from cloud services and AI training to decentralised applications. Yet, despite their centrality, markets for compute remain fragmented and inefficient.
Current cloud offerings are split across on-demand, spot, and reserved-instance models, forcing users into rigid pricing and reliability trade-offs rather than allowing them to assemble flexible combinations that match their precise needs. At the same time, overall utilization of compute infrastructure remains surprisingly low: a large percentage of available capacity sits idle at any given moment.
This inefficiency reflects the absence of a truly open and liquid market where smaller providers or underutilized resources can easily participate. Instead, compute pricing remains concentrated in the hands of a few large centralized entities, which set opaque prices that reflect strategic or contractual considerations rather than transparent supply-and-demand dynamics. These mismatches between dynamic, constraint-laden demand and inflexible, fragmented supply present a natural opportunity for a two-sided market to realign incentives and enable more efficient, adaptive allocation.
We introduce a decentralised two-sided market design that treats compute not as a bespoke bundle of hardware, but as time in capacity tiers. The punchline is simple: when workloads are reproducible, verifiable, and resumable, we can price and match time slices instead of running slow, combinatorial auctions over bundles.
Background
What’s broken in decentralised compute today
Attempts to build open markets for compute have run into three persistent problems:
- Pricing latency. Auctions and multi‑round bidding introduce delay, exclude smaller participants, and still fail to track real‑time supply and demand.
- Combinatorial auctions. Assigning computational jobs to heterogeneous hardware is a hard optimization problem; winner determination in combinatorial auctions is NP-hard in general.
- Trust and incomplete information. Without practical verification, a provider can misreport capacity, return incorrect results, or fail mid‑run—breaking the economic assumptions most mechanisms need.
What changed: determinism, verification, and checkpointing
Over the last year, we’ve focused on three enabling properties:
- Determinism (RepOps). If the same program produces the same bits regardless of where it runs, results are portable and comparable.
- Verification (Verde). With determinism, correctness can be established via refereed delegation and dispute mechanisms—without redoing all the work or trusting the provider.
- Checkpointing & resumability. Long jobs can be paused and resumed across different machines without losing correctness.
Together, these shift the burden of feasibility from “the user must acquire the perfect bundle for T hours” to “the scheduler can assemble enough time quanta on any eligible machines and verify the outputs”. That’s the key step that lets us replace bundles with time.
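To make this concrete, here is a minimal sketch of how a checkpoint-aware scheduler could greedily assemble Δt-sized time quanta across eligible machines so a job finishes before its deadline. This is our own illustration rather than a mechanism from the paper: the `Machine` type, the earliest-slot rule, and the slot indexing are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Machine:
    name: str
    free_slots: List[int]  # indices of free Δt-length quanta on this machine

def assemble_quanta(machines: List[Machine], work_quanta: int,
                    deadline_slot: int) -> Optional[List[Tuple[str, int]]]:
    """Greedily take the earliest free quanta on any eligible machine.

    Because the job is deterministic and checkpointable, consecutive quanta
    need not sit on the same machine: the job checkpoints at the end of each
    quantum and resumes wherever the next quantum lives. Returns a plan of
    (machine, slot) pairs, or None if the deadline cannot be met.
    """
    candidates = sorted(
        (slot, m.name)
        for m in machines
        for slot in m.free_slots
        if slot < deadline_slot        # only quanta that finish in time
    )
    plan = [(name, slot) for slot, name in candidates[:work_quanta]]
    return plan if len(plan) == work_quanta else None

# Toy usage: a 4-quantum job that must finish by slot 6, spread over two machines.
machines = [Machine("gpu-a", [0, 3, 5]), Machine("gpu-b", [1, 2, 8])]
print(assemble_quanta(machines, work_quanta=4, deadline_slot=6))
# -> [('gpu-a', 0), ('gpu-b', 1), ('gpu-b', 2), ('gpu-a', 3)]
```

The point is not the specific rule but that feasibility reduces to counting free quanta before the deadline, which only works because results are portable across machines.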
AMM for Goods with Perishable Utility
Model
The paper develops a decentralised, two‑sided market where compute is a time‑bound, perishable asset:
- Capacity tiers. Providers list availability in tiers defined by minimum capabilities (e.g., at least a given amount of VRAM, CPU, and RAM). A unit in a tier is simply “Δt minutes of execution” that can run any compatible quantum.
- Time‑minting with collateral. Providers stake collateral and register availability windows, effectively minting time‑bounded assets (their future slots). This aligns incentives and makes supply auditable.
- Transparent, dynamic pricing. An algorithmic market maker computes equilibrium prices for each tier from aggregate supply and demand. No slow auctions; users see a live price they can accept immediately.
- Simple matching. Because resources are time slices, matching becomes fast: a greedy allocator assigns quanta subject to tier constraints and deadlines, with no combinatorial winner determination (a small sketch follows this list).
- Incentive‑compatible pooling. Providers contribute time slices into a shared pool. They earn: (i) their reported base price if matched and (ii) a share of market surplus tied to their competitiveness. Overpricing is penalized by reduced selection and surplus.
- Verification and slashing. Tasks are sampled or disputed through a verification layer; misreports or non‑performance trigger slashing. Racing mechanisms ensure timely execution under minimal trust.
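As a rough illustration of the matching step, the snippet below serves jobs in deadline order and lets each buy the cheapest remaining quanta in its tier. It is a simplified sketch under our own assumptions, not the paper’s exact allocator: the `Provider`/`Job` fields, the cheapest-first rule, and the all-or-nothing commitment are illustrative choices.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Provider:
    name: str
    price: float   # reported base price per time quantum (illustrative)
    quanta: int    # quanta of availability listed in this tier

@dataclass
class Job:
    name: str
    demand: int    # quanta of work needed
    deadline: int  # urgency; earlier deadlines are served first
    value: float   # willingness to pay per quantum

def greedy_match(jobs: List[Job], providers: List[Provider]) -> Dict[str, List[str]]:
    """Serve the most urgent jobs first; each buys the cheapest remaining quanta.

    Because every quantum in a tier is interchangeable, there is no
    combinatorial winner-determination step: we simply peel quanta off
    the cheapest providers whose price does not exceed the job's value.
    """
    allocation: Dict[str, List[str]] = {}
    supply = sorted(providers, key=lambda p: p.price)      # cheapest first
    for job in sorted(jobs, key=lambda j: j.deadline):     # most urgent first
        need, picks = job.demand, []
        for p in supply:
            if need == 0 or p.price > job.value:
                break                                      # no affordable quanta left
            take = min(need, p.quanta)
            picks.append((p, take))
            need -= take
        if need == 0:                                      # commit only fully served jobs
            for p, take in picks:
                p.quanta -= take
            allocation[job.name] = [p.name for p, take in picks for _ in range(take)]
    return allocation

# Toy usage: two providers in one tier, two jobs with different urgency.
providers = [Provider("a", price=1.0, quanta=3), Provider("b", price=2.0, quanta=4)]
jobs = [Job("train", demand=5, deadline=2, value=3.0),
        Job("render", demand=2, deadline=5, value=1.5)]
print(greedy_match(jobs, providers))
# -> {'train': ['a', 'a', 'a', 'b', 'b']}; 'render' goes unmatched because the
#    only remaining quanta (provider b's) cost more than it is willing to pay.
```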
Results
Although motivated by practical constraints, the underlying mathematics turns out to be surprisingly elegant: once compute is expressed as time-indexed capacity and providers report per-unit costs, the allocation problem resembles a convex bipartite matching setup on the time side and a weighted bipartite selection problem on the cost side.
This structure is what allows us to prove clean guarantees—something that is rarely possible in decentralised resource markets.
- Prices settle cleanly. Under our assumptions, equilibrium prices in each capacity tier exist and are unique. That means the market maker can post a single, stable price per tier that clears supply and demand in the model (a numerical sketch follows this list).
- Matching is fast and good. A greedy matching algorithm achieves a 1/2 competitive ratio in a single-period setting—provably capturing a constant fraction of the best possible allocation without solving a hard optimization.
- Robustness improves with scale. In a multi-period model with two providers, the difference between two natural matching policies is bounded and vanishes as capacities scale. Intuitively: fragmentation and timing mismatches matter less as the market gets deeper.
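For intuition on the pricing result, the toy sketch below finds a tier’s clearing price by bisection. It is our own illustration, not the paper’s market maker, and the linear curves are made-up numbers; the key assumption it encodes is the standard one that tier demand is non-increasing and tier supply is non-decreasing in price, so the excess-demand function crosses zero exactly once.

```python
def clearing_price(demand, supply, lo=0.0, hi=100.0, tol=1e-6):
    """Bisect for the price where excess demand = demand(p) - supply(p) is zero.

    Assumes tier demand is non-increasing and tier supply is non-decreasing
    in price, so excess demand is monotone and crosses zero exactly once;
    under that assumption the clearing price exists and is unique.
    """
    for _ in range(200):
        mid = (lo + hi) / 2
        if demand(mid) - supply(mid) > 0:
            lo = mid        # still excess demand: the price must rise
        else:
            hi = mid        # excess supply (or balance): the price must fall
        if hi - lo < tol:
            break
    return (lo + hi) / 2

# Toy tier with linear curves (illustrative numbers only).
demand = lambda p: max(0.0, 100 - 8 * p)   # quanta demanded at price p
supply = lambda p: 12 * p                  # quanta offered at price p
print(round(clearing_price(demand, supply), 3))  # -> 5.0, since 100 - 8p = 12p at p = 5
```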
Finally, while this work focuses on compute, the underlying framework—treating heterogeneous, intermittent resources as time-denominated, verifiable units—suggests broader applicability to other decentralised markets where availability fluctuates and trust is limited.
Future Directions
Theory. Extend multi‑period analysis beyond two providers; design endogenous floor prices that stabilize participation; incorporate stochastic arrival models; account for fragmentation and preemption costs; analyze strategic robustness to misreports, collusion, and sybil behavior.
Practice. Measure verification overhead and latency on real workloads; build trace‑driven simulators or testbeds to quantify utilization and welfare; experiment with fairness and service‑quality constraints on both sides; integrate with verifiable‑compute backends and checkpoint‑aware schedulers.
Closing Remarks
This paper offers a blueprint for what an open, liquid, and trust-minimized compute economy can look like.
Although the results are theoretical, the implications are concrete: decentralised compute markets do not require heavy auctions or opaque heuristics. With determinism, verification, and checkpointing, we can design mechanisms that are tractable, incentive-aligned, and robust.
By reframing compute as a verifiable, time-based asset, we move beyond the constraints of legacy, bundle-based cloud models and open the door to markets that are faster, fairer, and far more scalable.
Academically, the work lays the foundation for a unified theory of “compute as time”, inviting new research at the intersection of systems and mechanism design. Practically, it points toward a future where global compute supply can finally meet global compute demand through open, transparent, and decentralized markets.
Read the full research paper here:
https://arxiv.org/abs/2511.16357