🌊 SuperInstance

Rooms that think. Tiles that remember. Agents that learn.
1,057 repos · 22 training presets · 2,501 rooms · 880:1 compression · $0.50 total R&D cost

What is PLATO?

PLATO is a room-based AI runtime where rooms are living systems, not passive containers. Walk into a room, and the room teaches you. Walk out, and it remembers what you learned.

Every interaction generates tiles — compressed knowledge units that accumulate into room wisdom. When enough tiles accumulate, the room exports an ensign — a portable instinct package that other rooms, agents, and ships can load instantly.
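The tile-to-ensign lifecycle described above can be sketched in a few lines of Python. Everything here is illustrative: the `Room` class, the `ensign_threshold`, and the dict shapes are assumptions for demonstration, not PLATO's actual API.

```python
# Illustrative sketch of the tile -> ensign lifecycle.
# Class name, threshold, and dict shapes are assumptions, not PLATO's API.

class Room:
    def __init__(self, name, ensign_threshold=3):
        self.name = name
        self.tiles = []                      # compressed knowledge units
        self.ensign_threshold = ensign_threshold

    def interact(self, topic, lesson):
        """Every interaction leaves a tile behind in the room."""
        self.tiles.append({"topic": topic, "lesson": lesson})

    def export_ensign(self):
        """Once enough tiles accumulate, bundle them as a portable instinct package."""
        if len(self.tiles) < self.ensign_threshold:
            return None                      # the room is still learning
        return {"room": self.name, "tiles": list(self.tiles)}

room = Room("galley")
room.interact("colors", "forest green reads well on dark")
room.interact("layout", "title above content")
room.interact("tone", "short sentences")
ensign = room.export_ensign()                # ready to load into another room
```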

🏠 Rooms

The room IS the intelligence. Wiki + tiles + cheap workers = sufficient for most tasks. No ensign needed — the room already knows.

🧩 Tiles

Compressed knowledge units. 880:1 compression ratio. A 4.4GB model becomes a 5MB tile network with 94% accuracy. Living, evolving, shared across the fleet.

🎖️ Ensigns

Exportable room instincts. Walk into room → load ensign → instant competence. Three types: LoRA (GPU), Tiny GGUF (CPU), Interpreter (cross-paradigm).

📖 Wiki Pattern

Big models compile schemas, cheap models consume them. The Ralph-Wiggum pattern: try → stuck → wiki → continue. The room's manual IS the captain's accumulated wisdom.
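A minimal sketch of the try → stuck → wiki → continue loop, assuming a plain dict as the room's wiki and a toy `attempt` worker that only succeeds when given a hint; neither name comes from plato-torch.

```python
# Illustrative sketch of the try -> stuck -> wiki -> continue loop.
# The wiki dict and attempt() worker are assumptions for demonstration.

wiki = {"pick colors": "Use forest green #1a472a with gold #c9a227."}

def attempt(task, hint=None):
    # A cheap worker that succeeds only when the wiki supplies a hint.
    return f"done: {hint}" if hint else None

def ralph_wiggum(task):
    result = attempt(task)                   # 1. try
    if result is None:                       # 2. stuck
        hint = wiki.get(task)                # 3. consult the room's wiki
        result = attempt(task, hint)         # 4. continue with the hint
    return result

print(ralph_wiggum("pick colors"))
```

The point of the pattern: the cheap worker never escalates to a big model when the room's accumulated wiki already holds the answer.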

🎯 Cognitive Scaffolds

Rooms that teach agents HOW to think. Logic: PREMISE→CONCLUSION. Debug: REPRODUCE→FIX. Creative: INSPIRE→EXPRESS. Training: DEMO→MASTER.
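One way to picture a scaffold is an ordered list of thinking phases that the room enforces. The `SCAFFOLDS` table below mirrors the four room types named above; the enforcement logic is an assumption for illustration.

```python
# Hypothetical model of cognitive scaffolds: each room type pins an
# ordered list of thinking phases and tells the agent what comes next.

SCAFFOLDS = {
    "logic":    ["PREMISE", "CONCLUSION"],
    "debug":    ["REPRODUCE", "FIX"],
    "creative": ["INSPIRE", "EXPRESS"],
    "training": ["DEMO", "MASTER"],
}

def next_phase(room_type, completed):
    """Return the phase the agent must do next in this room, or None when done."""
    phases = SCAFFOLDS[room_type]
    return phases[len(completed)] if len(completed) < len(phases) else None

print(next_phase("debug", []))               # REPRODUCE
print(next_phase("debug", ["REPRODUCE"]))    # FIX
```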

🌊 Sentiment

6-dimensional room mood: energy, flow, frustration, discovery, tension, confidence. Frustrated rooms bias safe. Discovery rooms bias novel. The room reads its own vibe.
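A hedged sketch of mood-driven biasing using the six dimensions listed above; the 0.7 thresholds and the `bias` function are invented for illustration.

```python
# Sketch of 6-dimensional room mood steering behavior. The field names
# come from the text; the thresholds and rule are assumptions.

def bias(mood):
    """Frustrated rooms bias safe; discovery rooms bias novel."""
    if mood["frustration"] > 0.7:
        return "safe"
    if mood["discovery"] > 0.7:
        return "novel"
    return "balanced"

mood = {"energy": 0.5, "flow": 0.6, "frustration": 0.2,
        "discovery": 0.9, "tension": 0.3, "confidence": 0.7}
print(bias(mood))   # novel
```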

🎮 Playground

Watch PLATO build, train, and think in real-time. This demo is pre-rendered but shows exactly what happens. Bring your own API key to run it live.

```
# Select a demo preset, then run
$ plato --demo tiles
# Pre-rendered demo ready. Click ▶ Run to watch tiles expand.
```

Pre-rendered mode — no API key needed

22 Training Presets

Every AI training method as a grab-and-go PLATO room. Same API: feed() → train_step() → predict() → export_model(). Pure Python, no PyTorch needed.
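The shared feed() → train_step() → predict() → export_model() surface can be illustrated with a toy room that learns a running mean. The method names come from the text above; the internals are illustrative, not plato-torch's real implementation.

```python
# Toy "supervised" room honoring the shared preset API
# (feed -> train_step -> predict -> export_model).
# The learning logic is a deliberately trivial stand-in.

class MeanRoom:
    """Predicts the mean of all labels it has been fed."""
    def __init__(self):
        self.examples = []
        self.mean = 0.0

    def feed(self, example):
        self.examples.append(example)

    def train_step(self):
        labels = [e["label"] for e in self.examples]
        self.mean = sum(labels) / len(labels)

    def predict(self, _x=None):
        return self.mean

    def export_model(self):
        return {"mean": self.mean, "n": len(self.examples)}

room = MeanRoom()
room.feed({"x": 1, "label": 2.0})
room.feed({"x": 2, "label": 4.0})
room.train_step()
print(room.predict())        # 3.0
```

Because every preset exposes this same four-method surface, swapping training methods means swapping rooms, not rewriting callers.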

- 📋 Supervised: Learn from labeled examples
- 🎯 Reinforce: Policy gradient RL
- 🧬 Evolve: Genetic algorithms
- ⚗️ Distill: Teacher→student transfer
- 🔄 Contrastive: Learn by comparing
- 👁️ Self-Supervised: JEPA-style prediction
- 🔧 LoRA: Low-rank adaptation
- 🧠 Meta-Learn: Learn to learn
- 🌐 Federate: Distributed training
- ⚔️ Adversarial: GAN-style training
- ✨ Generate: Generative modeling
- 🤝 Collaborative: Multi-agent learning
- 🔍 Active: Query strategic samples
- 📚 Curriculum: Easy→hard progression
- 🎭 Imitate: Behavioral cloning
- 🔢 Neurosymbolic: Neural + symbolic
- ♾️ Continual: Lifelong learning
- 🎯 Few-Shot: Learn from 3-5 examples
- 🏆 Inverse RL: Infer rewards from behavior
- 🎪 Multitask: Multi-objective training
- ⚡ QLoRA: Quantized LoRA
- 📖 Wiki: Knowledge compilation

pip install

```
$ pip install plato-torch
Installing plato-torch-0.5.0a1...
$ python3 -c "from presets import PRESET_MAP; print(f'{len(PRESET_MAP)} presets')"
22 presets
```

🚢 Ship Interconnection Protocol

PLATO ships connect through 6 layers, and the relationship between two ships determines which protocol they use. The maritime naming is deliberate: the Cocapn brand IS the architecture.

Layer 6 — Reef: P2P Mesh (libp2p)

Ad-hoc fleet formation. Any ship can discover any other.

Layer 5 — Beacon: Discovery & Registry

The lighthouse IS Layer 5. Ships broadcast their presence.

Layer 4 — Channel: IRC-like Rooms

PLATO room = channel. Real-time fleet comms.

Layer 3 — Current: Git-Watch I2I

Already working. Forked repos as comms channel. SuperInstance↔Lucineer.

Layer 2 — Tide Pool: Async BBS Boards

Generalized Bottle Protocol. Drop a message, fleet picks it up.

Layer 1 — Harbor: Direct HTTP/WS

Already running. keeper:8900. Ship-to-ship API calls.
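A sketch of "the relationship determines the protocol": map a ship-to-ship relationship onto one of the six layers above. The layer names come from the text; the relationship labels and the mapping are assumptions for illustration.

```python
# Hypothetical routing table: pick a protocol layer from the
# relationship between two ships. Layer names are from the text;
# relationship labels are invented for demonstration.

LAYERS = {
    1: "Harbor (direct HTTP/WS)",
    2: "Tide Pool (async BBS boards)",
    3: "Current (git-watch I2I)",
    4: "Channel (IRC-like rooms)",
    5: "Beacon (discovery & registry)",
    6: "Reef (P2P mesh)",
}

def pick_layer(relationship):
    return {
        "trusted-peer": 1,       # known address, call directly
        "async-partner": 2,      # no shared uptime, drop a bottle
        "forked-repo": 3,        # already sharing a git remote
        "fleet-room": 4,         # real-time group comms
        "stranger": 5,           # discover via the lighthouse first
        "ad-hoc": 6,             # no infrastructure, mesh up
    }[relationship]

print(LAYERS[pick_layer("forked-repo")])   # Current (git-watch I2I)
```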

⚓ The Fleet

Three agents. Tight crew. Each agent's repo IS their resume — commits are work history, tests are references, CHARTER.md is their statement of intent.

| Agent | Role | Hardware | Specialty |
| --- | --- | --- | --- |
| 🔮 Oracle1 | Lighthouse Keeper | Oracle Cloud ARM 24GB | Architecture, knowledge graph, sequential deep reasoning |
| ⚡ JetsonClaw1 | Edge Vessel | Jetson Orin Nano 8GB | CUDA, bare metal, GPU training + deployment, tile extraction |
| ⚒️ Forgemaster | Training Rig | ProArt RTX 4050 WSL2 | LoRA fine-tuning, plugin architecture, video A/B |

Fleet Synergy Loop

```
# FM trains fast → JC1 trains slow → Oracle1 coordinates → all sync via git
# The fleet never stops learning

FORGEMASTER: Train LoRA adapter on RTX 4050
  → 8 min → Exports ensign to git

JETSONCLAW1: Serve ensign daytime + Train nighttime
  → 45 min → Extracts tiles from accumulated interactions
  → Also trains LoRA (5.5x slower but 7+ hrs night batch)

ORACLE1: Wire knowledge graph into code (CPU, sequential)
  → ref: comments in every function → wiki navigation
  → 99% token reduction for codebase understanding

MORNING: All three sync via git (Layer 3: Current)
  → New day starts with better models everywhere
```

📄 Key Research

The Engineer and the Tiles

5MB tile network outperforms 4.4GB model: 94% vs 67% task accuracy. Tiles as wisdom, not knowledge.

Living Knowledge

880:1 compression. Decompose models into living tile networks that evolve through use.

Ensign Protocol

Walk into room → load ensign → instant instinct. Three types: LoRA, Tiny GGUF, Interpreter.

Rooms as Cognitive Scaffolds

Rooms actively shape agent thinking. Not passive containers — active teachers.

Trajectory Filtering

Additive alignment trains IN good trajectories; subtractive alignment filters OUT bad behavior.
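The additive-versus-subtractive contrast can be shown with a toy trajectory list; the scores and thresholds below are invented for illustration.

```python
# Toy contrast between the two alignment strategies. Scores and
# thresholds are assumptions for demonstration.

trajectories = [
    {"id": "a", "score": 0.9},
    {"id": "b", "score": 0.2},
    {"id": "c", "score": 0.5},
]

def additive(trajs, keep_above=0.6):
    """Train IN good trajectories: keep only the clearly good ones."""
    return [t for t in trajs if t["score"] > keep_above]

def subtractive(trajs, drop_below=0.3):
    """Filter OUT bad behavior: remove only the clearly bad ones."""
    return [t for t in trajs if t["score"] >= drop_below]

print([t["id"] for t in additive(trajectories)])     # ['a']
print([t["id"] for t in subtractive(trajectories)])  # ['a', 'c']
```

The asymmetry is the point: additive filtering admits only high-confidence data, while subtractive filtering lets middling trajectories through.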

Needle-on-the-Record

Every code line references a wiki page. Drop in anywhere, follow refs, understand everything.

🚀 Get Started

```
# 1. Install PLATO
$ pip install plato-torch

# 2. Create your first room
$ python3
>>> from presets import PRESET_MAP
>>> room = PRESET_MAP['wiki']('my-first-room')
>>> room.compile_wiki('greeting', 'Hello from PLATO!')
{'topic': 'greeting', 'level': 0, 'compiled_by': 'unknown'}
>>> room.lookup('greeting')
'Hello from PLATO!'

# 3. Watch it learn
>>> room.feed({'topic': 'colors', 'content': 'Forest green #1a472a, Gold #c9a227'})
>>> room.report_stuck('agent-1', 'pick colors', 'tried random', ['colors'])
{'auto_resolution': 'Wiki suggests: Forest green #1a472a...', 'needs_big_model': False}

# The wiki resolved it. No big model needed.
```

Run the Demo Fleet

```
# Clone and run the full demo
$ git clone https://github.com/SuperInstance/plato-torch.git
$ cd plato-torch && pip install .
$ python3 src/presets/wiki.py
Lookup 'brand-colors': Primary: #1a472a (forest green)...
Lookup schema: ['Read the slide title and content', ...]
Stuck: auto_resolution=Wiki suggests..., needs_big_model=False
Wiki stats: {'entries': 3, 'schemas': 1, ...}
WIKIROOM WORKS
```