🌿 Getting Started with SAGE
Welcome! SAGE is an early alpha — things will be rough around the edges, but the core works and we're building fast. Here's everything you need to get running.
📋 Prerequisites
System Requirements
- macOS (Apple Silicon / arm64) or Linux (x86_64)
- ~1 GB free disk space (for the model download)
- No GPU required — runs on CPU
Optional: Install Ollama
SAGE ships with an embedded SmolLM2 1.7B model that works out of the box — no extra setup needed. For access to larger models like Qwen, Llama, or Mistral, install Ollama and use the --ollama flag.
Default (SmolLM2): Works immediately, no dependencies. Good for getting started.
With --ollama flag: SAGE uses whatever Ollama model you choose (e.g. qwen2.5:14b). Better output quality for complex tasks.
📦 Install
One command:
curl -fsSL https://whatssage.ai/install.sh | bash
This downloads the sage binary for your platform and installs it to ~/.sage/bin/sage. It also adds ~/.sage/bin to your PATH.
Open a new terminal after install, then verify:
$ sage version
sage v0.2.0-alpha
💬 Your First Chat
sage chat
SAGE uses the embedded SmolLM2 1.7B model by default. To use Ollama instead, run sage chat --ollama.
Here's what you'll see:
SAGE Chat
Engine: smollm2 (embedded)
Brain: ~/.sage/brain.bin (0 active cells)
you> What's the deal with neural cellular automata?
sage> Neural Cellular Automata are grids of cells where each cell
updates based on its neighbors using small neural networks.
They can self-organize, self-repair, and encode information
as stable patterns — like a living memory substrate...
┌─ Brain ───────────────────┐
│ · · · · · · · · · · ·     │
│ · · · ██·· · · · · ·      │
│ · · ████████· · · · ·     │
│ · · · ·██· · · · · · ·    │
│ · · · · · · · · · · ·     │
└───────────────────────────┘
The brain visualization at the bottom shows SAGE's NCA grid in real-time. As you chat, you'll see patterns form and grow — that's knowledge being encoded into the neural cellular automata.
Tip: Use sage chat --ollama to use Ollama models, or sage chat --ollama --model llama3.2:3b to pick a specific one. Smaller models are faster; larger models give better answers.
🧠 Understanding the Brain
The brain visualization shows SAGE's NCA (Neural Cellular Automata) grid — a 256×256 grid of cells that stores knowledge as self-organizing patterns.
- Dim dots → inactive cells, waiting
- Bright dots / blocks → active cells encoding knowledge
- Clusters → related concepts that have self-organized together
- Growth over time → the more you chat, the more the grid fills with learned patterns
This isn't just a pretty animation. The grid is the actual knowledge store. When you ask SAGE a question, it queries the grid for relevant patterns and uses them to augment the LLM's response. The patterns are compact (~128 KB for the whole brain) and can be shared across the network.
Think of it like: The LLM handles language. The NCA grid handles memory. Together, they give SAGE both fluency and recall — without needing a massive context window.
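To make the NCA idea concrete, here is a toy sketch of the mechanics: every cell repeatedly updates from its 3×3 neighborhood using one shared update rule. SAGE's real rule is a learned neural network and is not documented here; the hand-written threshold rule below is purely illustrative.

```python
# Toy NCA: each cell updates from its neighbors via one shared rule.
# SAGE's actual (learned) update rule is an assumption-free stand-in here.

SIZE = 16  # SAGE uses 256x256; a small grid keeps the demo readable

def step(grid):
    """One synchronous NCA update over the whole grid."""
    new = [[0.0] * SIZE for _ in range(SIZE)]
    for y in range(SIZE):
        for x in range(SIZE):
            # Sum the 3x3 neighborhood (toroidal wrap-around).
            s = sum(
                grid[(y + dy) % SIZE][(x + dx) % SIZE]
                for dy in (-1, 0, 1)
                for dx in (-1, 0, 1)
            )
            # Stand-in for the per-cell neural network: activate when the
            # neighborhood is active enough, clamp to [0, 1].
            new[y][x] = min(1.0, max(0.0, 0.4 * s - 0.1))
    return new

grid = [[0.0] * SIZE for _ in range(SIZE)]
grid[8][8] = 1.0  # seed one active cell, like a freshly encoded fact
for _ in range(4):
    grid = step(grid)

active = sum(cell > 0.5 for row in grid for cell in row)
print(f"active cells after 4 steps: {active}")
```

Running this, the single seed grows into a stable cluster — the same qualitative behavior you watch in the brain box above.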
🌐 Joining the Network
sage node start
This starts your SAGE node and connects it to the peer-to-peer network. Here's what happens:
- LAN discovery — finds other SAGE nodes on your local network via mDNS
- Bootstrap connection — connects to bootstrap.whatssage.ai:4001 to find internet peers
- Knowledge sync — exchanges compact NCA pattern diffs with other nodes
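To see why pattern diffs stay small, here is a hypothetical sketch of a diff sync: only the cells that changed since the last exchange are packed and sent, not the whole grid. SAGE's actual wire format is not specified here — the `(x, y, value)` record layout below is an assumption for illustration.

```python
# Hypothetical pattern-diff codec: ship only changed cells.
# The record layout (x: u16, y: u16, value: f32 = 8 bytes) is assumed,
# not SAGE's documented format.
import struct

def encode_diff(old, new):
    """Pack every changed cell as an (x, y, value) record."""
    out = bytearray()
    for y, (row_old, row_new) in enumerate(zip(old, new)):
        for x, (a, b) in enumerate(zip(row_old, row_new)):
            if a != b:
                out += struct.pack("<HHf", x, y, b)
    return bytes(out)

def apply_diff(grid, blob):
    """Replay a received diff onto a copy of the local grid."""
    grid = [row[:] for row in grid]
    for off in range(0, len(blob), 8):
        x, y, v = struct.unpack_from("<HHf", blob, off)
        grid[y][x] = v
    return grid

old = [[0.0] * 256 for _ in range(256)]
new = [row[:] for row in old]
for i in range(40):          # pretend a chat turn touched 40 cells
    new[10][i] = 1.0

diff = encode_diff(old, new)
print(len(diff), "bytes")     # 40 changed cells x 8 bytes = 320 bytes
synced = apply_diff(old, diff)
```

A few dozen touched cells lands squarely in the 200–2000 byte range the next section mentions, versus ~256 KB to resend a full 256×256 float grid.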
What gets shared (and what doesn't)
Shared: Compressed NCA pattern diffs — tiny (200–2000 bytes), anonymous representations of what was learned. Think statistical patterns, not text.
NOT shared: Your conversations, your prompts, your data. Raw text never leaves your machine. PII filtering runs before any encoding.
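The PII filtering step can be pictured as a scrubbing pass over text before anything is encoded. SAGE's actual filter is not documented here; the regex patterns below are example stand-ins, not the real rule set.

```python
# Illustrative PII scrub: redact obvious identifiers before encoding.
# These three patterns are examples only, not SAGE's actual filter.
import re

PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\+?\d[\d\s().-]{7,}\d"), "[PHONE]"),
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "[IP]"),
]

def scrub(text: str) -> str:
    """Replace each PII match with a placeholder token."""
    for pattern, token in PII_PATTERNS:
        text = pattern.sub(token, text)
    return text

cleaned = scrub("Mail me at alice@example.com from 10.0.0.5")
print(cleaned)  # Mail me at [EMAIL] from [IP]
```

Only what survives this kind of pass would ever reach the encoder — and even then, only as grid patterns, never as text.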
You can also run in local-only mode — all the learning, none of the network:
sage chat # Just chat, no network. Brain still learns locally.
⌨️ Keyboard Shortcuts
| Key | Action |
|---|---|
| PageUp / PageDown | Scroll through chat history |
| Shift + ←→ | Navigate within input |
| Shift + ↑↓ | Scroll output |
| Ctrl + C | Cancel current generation |
| /quit or /exit | Exit chat |
🔧 Troubleshooting
"Killed" on macOS
macOS may kill unsigned binaries. Fix it:
codesign -s - ~/.sage/bin/sage
Bad output quality
If responses are short, repetitive, or nonsensical — the embedded SmolLM2 may be struggling with the topic. Try using Ollama with a larger model for better results:
# Install Ollama, then:
sage chat --ollama
"command not found: sage"
Make sure ~/.sage/bin is in your PATH. Open a new terminal after install, or run:
export PATH="$HOME/.sage/bin:$PATH"
Node won't connect to peers
- Check your firewall allows outbound connections
- Verify bootstrap.whatssage.ai:4001 is reachable: nc -zv bootstrap.whatssage.ai 4001
- Try sage node start --no-mdns if mDNS causes issues
Brain file issues
If things get weird, you can reset the brain. You'll re-learn from the network when you connect:
rm ~/.sage/brain.bin
🎉 What's Next
You're up and running. Here's where to go from here:
- Blog — Research updates and technical deep dives
- Research — NCA training results and experimental data
- Whitepaper — The full technical architecture
- GitHub — Source code, issues, contributions
- Discord — Chat with the community, report bugs, follow progress
Remember: SAGE is early alpha. Things will break. Output quality varies. The NCA research is promising but experimental. We're building in the open — your feedback and bug reports on Discord make a real difference.
GitHub · Discord · whatssage.ai