LOCAL-FIRST AI WORKSTATION · WINDOWS

Your Personal AI,
Running on Your Machine

AgentRobbi is a complete AI workstation for Windows. Download one installer, run local LLMs entirely on your machine, and get focused Runners for research, coding, writing, and more. No cloud. No API keys. No subscriptions.

100%
Local Processing
0
API Costs
11
AI Runners
AgentRobbi — Research Runner
Research the best local AI models for my project and save a report
Research Runner active…
workspace/research/ai-models-2026.md
# Local AI Model Comparison 2026

## gemma-4-E4B-it-Q4_K_M (recommended)
- Size: 3.3 GB · Context: 128k
- Runs on 8 GB+ RAM, CPU or GPU
- Best for: code, writing, analysis

## gemma-4-E2B-it-Q4_K_M (lightweight)
- Size: 1.8 GB · Context: 128k
- Runs on 4 GB+ RAM
- Best for: quick tasks, low-RAM machines
Report saved to workspace/research/
WHAT YOU GET

Everything in One Installer

AgentRobbi bundles a local LLM runtime, a focused workspace, and built-in Runners into a single Windows installer. No Docker, no WSL, no manual setup. Double-click and you're running local AI in minutes.

🤖

Local LLM Runtime

Powered by an embedded local-first runtime — runs on any modern CPU or GPU. OpenAI-compatible REST API. No separate install, no version conflicts.
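Because the runtime exposes an OpenAI-compatible REST API, any standard chat-completions client can talk to it. A minimal sketch using only the Python standard library; the base URL, port, and model name below are illustrative assumptions, not documented defaults — check your AgentRobbi settings for the real endpoint:

```python
import json
import urllib.request

# Assumed local endpoint; AgentRobbi's actual port may differ.
BASE_URL = "http://localhost:8080/v1"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build a standard OpenAI-style chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(model: str, prompt: str) -> str:
    """POST the payload to the local runtime and return the reply text."""
    payload = build_chat_request(model, prompt)
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Standard OpenAI response shape: first choice's message content.
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(chat("gemma-4-E4B-it-Q4_K_M", "Summarize my last research report."))
```

Because the request and response shapes follow the OpenAI convention, existing OpenAI client libraries should also work if pointed at the local base URL.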

🦞

11 AI Runners

Research, Code, Content Creator, Document, Workspace, Memory, and more. Each Runner has focused prompts, tools, and workflows built in.

🔒

100% Private

Nothing leaves your machine. No telemetry, no cloud calls, no accounts. Your conversations, files, and models stay local forever.

🧠

Semantic Memory

Local vector search keeps your context across sessions — stored in a local database on your machine, never the cloud.
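At its core, semantic memory of this kind is nearest-neighbor search over embedding vectors. A toy sketch of the idea — AgentRobbi's actual embedding model and storage format are not documented here, so the store below is just an in-memory list:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def recall(query_vec: list[float],
           store: list[tuple[str, list[float]]],
           k: int = 3) -> list[str]:
    """Return the k stored snippets most similar to the query embedding."""
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# Usage: embed past conversation snippets once, then recall by query embedding.
store = [("prefers markdown reports", [0.9, 0.1]),
         ("working on a Rust project", [0.1, 0.9])]
recall([1.0, 0.0], store, k=1)
```

A real implementation would persist the vectors in a local database and use a proper embedding model rather than hand-written vectors, but the ranking step is the same.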

📁

Workspace & Files

Research reports, code, documents, and notes are saved to ~/Documents/AgentRobbi/workspace — readable, editable, yours.
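Since Runner output is plain files under a fixed workspace root, you can script against it directly. A small sketch; the subfolder layout beyond workspace/research/ (seen earlier on this page) is an assumption:

```python
from pathlib import Path

# Workspace root as stated on this page.
WORKSPACE = Path.home() / "Documents" / "AgentRobbi" / "workspace"

def report_path(category: str, slug: str) -> Path:
    """Where a report in a given category would land,
    e.g. workspace/research/ai-models-2026.md."""
    return WORKSPACE / category / f"{slug}.md"

def list_reports(category: str) -> list[Path]:
    """All markdown reports saved under one workspace category."""
    folder = WORKSPACE / category
    return sorted(folder.glob("*.md")) if folder.exists() else []
```

Everything is ordinary markdown on disk, so any editor, git repo, or sync tool works on it unchanged.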

🔍

Web Search

Optional web search (free tier providers) — used only by the Research Runner when local context isn't enough.

SETUP IN MINUTES

How It Works

The installer handles everything. No Docker, no WSL, no PATH wrangling.

1
Download the installer
Download AgentRobbi-Setup-x.y.z.exe from GitHub Releases and run it. NSIS handles the rest — no admin rights required for per-user install.
2
Run onboarding
AgentRobbi detects your RAM and GPU, recommends the right local model, and downloads it from HuggingFace in the background.
3
AI starts automatically
The local LLM server, the embeddings server, and the workspace API all launch as background services — no terminal windows.
4
Pick a Runner and go
Select Research, Code, or Content Creator from the sidebar to activate focused AI tools, workflows, and starter prompts.
WHAT YOU NEED

System Requirements

Local LLMs run on CPU — no GPU required. A GPU makes it faster.

Minimum (CPU only)

  • Windows 11 64-bit
  • 8 GB RAM
  • 5 GB free disk space (models download separately)
  • Internet for initial model download

Recommended (GPU)

  • Windows 11 64-bit
  • 16–32 GB RAM
  • NVIDIA RTX (8 GB+ VRAM) or AMD RX 7000 series
  • SSD with 10 GB free

No Docker. No WSL. No Python. No Node.js required on the host machine.

PRICING

Free to Use, One-Time Upgrade

AgentRobbi is free and open-source. The Pro bundle adds priority support, a custom Runner starter pack, and early access to new features.

Free / Open Source
$0
Forever free · MIT / Apache
  • ✓ Full AgentRobbi desktop app
  • ✓ All 11 built-in Runners
  • ✓ Unlimited local AI usage
  • ✓ Semantic memory
  • ✓ Windows 11 (macOS & Linux on the roadmap)
Download Free

Stripe handles all payments. Card details never touch our servers.