Gensyn
CheckFree: fault tolerant training without checkpoints
Research

This is an academic paper describing CheckFree, a novel recovery method for failures in distributed training that does not require checkpointing or redundant computation.
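A minimal sketch of the checkpoint-free recovery pattern described above, assuming the lost pipeline stage is rebuilt from the weights of its surviving neighbours. The `recover_stage` helper, the elementwise averaging rule, and the toy data are illustrative assumptions, not the paper's exact algorithm.

```python
# Sketch: recover a failed pipeline stage without reading a checkpoint,
# by re-initialising it from the weights of its neighbouring stages.
# The averaging rule below is an illustrative assumption.

def recover_stage(stages: dict[int, list[float]], failed: int) -> list[float]:
    """Rebuild the weights of `failed` from its surviving neighbours."""
    neighbours = [stages[i] for i in (failed - 1, failed + 1) if i in stages]
    if not neighbours:
        raise RuntimeError("no surviving neighbour to recover from")
    # Elementwise mean of the neighbouring stages' parameter vectors.
    return [sum(ws) / len(ws) for ws in zip(*neighbours)]

# Four pipeline stages, each holding a toy parameter vector.
stages = {0: [0.1, 1.0], 1: [0.4, 0.2], 2: [0.9, 0.9], 3: [0.6, 0.8]}
del stages[2]                        # stage 2 fails and loses its state
stages[2] = recover_stage(stages, 2)
print(stages[2])                     # [0.5, 0.5]: training resumes from here
```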
NoLoCo: training large models with no all-reduce
Research

This is an academic paper describing NoLoCo, a novel optimisation method for distributed training that replaces the global all-reduce synchronisation step with gossip-based averaging.
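A minimal sketch of gossip-style synchronisation, assuming the simplest variant in which randomly paired workers average their parameters each round; `gossip_round` and the pairing rule are illustrative, not NoLoCo's actual update.

```python
import random

# Sketch: instead of a global all-reduce, workers are matched into random
# disjoint pairs and each pair averages its parameters, so no step ever
# requires every node to synchronise at once.

def gossip_round(params: list[list[float]]) -> None:
    workers = list(range(len(params)))
    random.shuffle(workers)
    for a, b in zip(workers[::2], workers[1::2]):
        avg = [(x + y) / 2 for x, y in zip(params[a], params[b])]
        params[a] = avg
        params[b] = list(avg)

params = [[float(i)] for i in range(8)]   # 8 workers, one-parameter models
for _ in range(20):
    gossip_round(params)
print([round(p[0], 3) for p in params])   # values drift towards the global mean
```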
Diverse Expert Ensembles: embarrassingly parallel LLMs from diverse experts
Research

This is an academic paper that finds benefits from heterogeneity (differing model sizes and numbers of training steps) when training embarrassingly-parallel ensembles of expert models.
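To make "embarrassingly parallel" concrete: the experts are trained with no communication between them and only combined at inference time. A minimal sketch, assuming a plain average of per-expert output distributions as the combining rule (the function and toy data are illustrative, not the paper's method):

```python
# Sketch: combine independently trained experts by averaging their output
# distributions. The experts may differ in size and training length.

def ensemble(dists: list[list[float]]) -> list[float]:
    """Average the next-token distributions of independent experts."""
    n = len(dists)
    return [sum(col) / n for col in zip(*dists)]

# Three heterogeneous experts predicting over a four-token vocabulary.
expert_outputs = [
    [0.70, 0.10, 0.10, 0.10],
    [0.40, 0.30, 0.20, 0.10],
    [0.55, 0.15, 0.20, 0.10],
]
print(ensemble(expert_outputs))   # [0.55, 0.183..., 0.166..., 0.1]
```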
RL Swarm: a framework for collaborative RL
Product

This is open source code (MIT Licence) for peer-to-peer nodes that perform collaborative reinforcement learning over the internet, accessible to anyone on consumer or datacentre hardware.
SkipPipe: a communication efficient method for decentralised training
Research

This is an academic paper describing SkipPipe, a communication-efficient method for pipeline-parallel training. It introduces a scheduling algorithm that maximises performance and fault tolerance whilst minimising the impact of layer skips on convergence.
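A minimal sketch of the layer-skipping idea, assuming each microbatch samples which stages it traverses (with the first and last stage pinned, an assumption made here for simplicity). The paper's contribution is a scheduler that chooses such paths well; this toy sampler does not implement it.

```python
import random

# Sketch: each microbatch visits only a subset of pipeline stages,
# cutting communication and tolerating unavailable stages. Which
# stages to skip is chosen at random here, NOT by SkipPipe's scheduler.

def sample_path(n_stages: int, keep: float, down: set[int]) -> list[int]:
    """Pick the stages a microbatch traverses; all others are skipped."""
    return [s for s in range(n_stages)
            if s not in down
            and (s in (0, n_stages - 1) or random.random() < keep)]

random.seed(0)
for mb in range(4):                 # stage 3 is down; paths route around it
    print(f"microbatch {mb}: stages {sample_path(8, keep=0.75, down={3})}")
```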
Verde: a verification system for machine learning over untrusted nodes
Research

This is an academic paper describing Verde, a verification protocol for machine learning programs, as well as the underlying Reproducible Operators (RepOps) system that enables it.
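A minimal sketch of why reproducible operators matter for verification, assuming a simple hash-and-compare dispute check: if every node produces bitwise-identical per-step outputs, the first mismatching hash pinpoints a single operation for a referee to re-run. The `first_divergence` helper and the hashing scheme are illustrative; Verde's actual protocol is more involved.

```python
import hashlib

# Sketch: hash each training step's output on two nodes and locate the
# first divergence. Bitwise reproducibility (RepOps-style) is what makes
# an exact hash comparison meaningful across different hardware.

def step_hashes(outputs: list[bytes]) -> list[str]:
    return [hashlib.sha256(o).hexdigest() for o in outputs]

def first_divergence(a: list[str], b: list[str]) -> int | None:
    for i, (ha, hb) in enumerate(zip(a, b)):
        if ha != hb:
            return i          # only this one step needs re-execution
    return None

honest  = [b"step0", b"step1", b"step2", b"step3"]
cheater = [b"step0", b"step1", b"bogus", b"step3"]
print(first_divergence(step_hashes(honest), step_hashes(cheater)))  # -> 2
```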
GPT@home: Why the Future of Training is Decentralized
Article

AI training costs are projected to reach $100B per run. Gensyn's decentralized infrastructure enables efficient training across edge devices at massive scale, making model development collaborative and accessible.