The Door
What happens when AI doesn't automate from outside, but inhabits the same address space as the apps you're using? We built an operating system to find out.
The Traditional Model
In most systems, AI automates from outside. It connects via WebDriver protocols, sends commands through HTTP APIs, parses screenshots to understand state. The AI is a client. The app is a server. There's a wall between them.
This creates fundamental friction:
- AI can't see what you see without verbose dumps
- AI can't verify its changes without you running them
- Every interaction is a round-trip through an abstraction layer
- The debugging loop is open—write code, hope it works
A Different Architecture
FunctionServer inverts this. Apps aren't compiled binaries or isolated processes. They're JavaScript running in the browser's VM. And AI connects directly to that VM via WebSocket.
// From a terminal, execute JS in the user's browser
eye 'document.title'
→ "FunctionServer"
eye 'Lens.grep("fetchData")'
→ "42:async function fetchData"
eye 'Lens.setLine(42, "fixed")'
→ "✓ L42"
When AI calls getBoundingClientRect(), it touches the same DOM element the user sees. When it patches window.openSubmenu, that's the real running function. When it injects CSS, the user sees it instantly.
The AI doesn't automate the OS. It inhabits it.
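The patching described above can be sketched in a few lines. This is a minimal illustration, not FunctionServer's actual code: `openSubmenu` is a stand-in global defined here for the example, and the console log stands in for whatever feedback hook the real system uses.

```javascript
// Stand-in for a function the running app already defines.
globalThis.openSubmenu = (id) => `open:${id}`;

// An AI-side patch replaces the running function in place; the very
// next call from the UI picks up the new behavior, no reload needed.
const originalOpenSubmenu = globalThis.openSubmenu;
globalThis.openSubmenu = (id) => {
  console.log(`[patch] openSubmenu(${id})`); // e.g. a visual-feedback hook
  return originalOpenSubmenu(id);
};

console.log(globalThis.openSubmenu("file")); // → "open:file", via the patch
```

Because patch and app share one VM, there is no serialization boundary: the wrapper closes over the original function object directly.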
What This Changes
The Debugging Loop Collapses
Traditional: write code → deploy → refresh → inspect → guess → repeat.
FunctionServer: inspect live → measure → test fix → verify → commit.
The AI can query element positions, check computed styles, inject test fixes, and verify they worked—all before touching source files. The browser becomes a REPL you can poke from anywhere.
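The inspect → measure → test fix → verify loop looks roughly like this. A stand-in element object is used here so the flow is visible outside a browser; in FunctionServer the same calls would hit the live DOM.

```javascript
// Stand-in for a real DOM node (illustrative, not the real API surface).
const el = {
  style: { marginLeft: "13px" },
  getBoundingClientRect() {
    return { left: parseInt(this.style.marginLeft, 10) };
  },
};

const before = el.getBoundingClientRect(); // 1. measure the live state
if (before.left !== 0) {
  el.style.marginLeft = "0px";             // 2. inject a candidate fix
}
const after = el.getBoundingClientRect();  // 3. verify in place
console.log(after.left === 0 ? "fix verified" : "try again");
// Only now, with the fix proven live, does the AI touch source files.
```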
Token Efficiency Improves Dramatically
Instead of reading 1000-line files to make one-line changes, AI uses Lens:
Lens.grep('fetchData') // Find it
Lens.setLine(42, 'new code') // Fix it
Lens.save() // Save it
Three calls. Zero file reads. 60x fewer tokens than traditional file operations.
AI Gets Eyes
When AI works in FunctionServer, it can see. The AI Eyes system shows humans what AI is looking at and editing—purple highlights for inspection, green flashes for edits. You can watch the AI's focus saccade across the screen as it navigates, searches, and fixes.
Errors Become Conversations
Guardian monitors console errors. When something breaks, a toast appears: "Error detected—Get AI help." Click it, and the AI receives the error context. It can investigate, fix, and verify—proactively.
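One way a Guardian-style monitor could work is by wrapping `console.error` with a noise throttle. This is a hedged sketch, not Guardian's real implementation; `showToast` is a hypothetical stand-in for the toast UI.

```javascript
const seen = new Map();   // error signature -> count, for throttling noise
const toasts = [];
const showToast = (msg) => toasts.push(msg); // stand-in for the real toast UI

const originalError = console.error;
console.error = (...args) => {
  originalError(...args);                    // still reach the real console
  const sig = String(args[0]);
  const count = (seen.get(sig) ?? 0) + 1;
  seen.set(sig, count);
  if (count === 1) {                         // surface each error once
    showToast(`Error detected. Get AI help: ${sig}`);
  }
};

console.error("TypeError: x is undefined");
console.error("TypeError: x is undefined");  // repeat: counted, not re-toasted
```

Clicking the toast would then hand `sig` (plus stack and context) to the AI as its starting point.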
Projects Ship, Not Just Generate
With GitHub OAuth integration, AI doesn't just write code—it ships:
Lens.project("particle simulator")
// Creates ~/repos/particle-simulator/
// Initializes git, creates skeleton
// Creates GitHub repo, pushes
// Opens in Studio, ready to edit
One command. Idea to GitHub repository.

A Note on Architecture
Live patching exists. Browser automation exists. MITM injection exists. So what's actually new here?
In traditional setups, apps are compiled binaries or isolated processes. You automate them from outside via WebDriver protocols. The AI is a client, the app is a server.
In FunctionServer, apps are JavaScript artifacts running in the same VM that the eye bridge accesses. When AI calls getBoundingClientRect(), it touches the same DOM element the user sees. When it patches window.openSubmenu, that's the real running function.
This raises an interesting question: how do you version control a system that's constantly being reshaped?
You don't. Each app becomes its own repository. What you version is the protocol—the ALGO API, the conventions for file type registration, the pubsub message format. The OS defines the rules. Users and AI agents shape whatever they want within them.
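To make "version the protocol, not the system" concrete, here is what a pubsub message under such a convention might look like. Every field name here is an illustrative assumption, not FunctionServer's documented format; the point is that apps agree on the envelope while their contents stay free-form.

```javascript
// Hypothetical message envelope: the stable, versioned part of the OS.
const message = {
  v: 1,                            // protocol version, the part that is pinned
  topic: "fs.file.saved",          // namespaced event topic
  app: "studio",                   // publishing app
  payload: {                       // app-defined, deliberately unversioned
    path: "~/repos/particle-simulator/main.js",
  },
  ts: Date.now(),                  // publish timestamp (ms since epoch)
};

console.log(JSON.stringify(message.topic)); // subscribers match on topic
```

Apps and AI agents can then evolve freely as long as they speak this envelope, which is exactly the tradeoff the paragraph above describes.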
The Tools
FunctionServer is a collection of tools that work together:
- Eye — WebSocket bridge to the browser VM. Execute any JavaScript in ~25ms.
- Lens — Token-efficient code editing. Grep, view context, edit surgically, run and verify.
- Studio — IDE built for FunctionServer. Every feature optimized for AI.
- Guardian — Console monitoring. Catches errors, offers AI help, throttles noise.
- AI Eyes — Visual feedback. Shows humans what AI is looking at and editing.
- GitHub Auth — OAuth Device Flow. One-click sign-in, automatic repo creation.
- Lens.project() — One-command project creation. Idea to GitHub in seconds.
Individually, each tool solves a specific problem. Together, they form something more: an environment where AI is a developer, not just a code generator.
The Uncomfortable Question
Yes, this means an AI can manipulate your browser. That's why there's auth. Your session token, your control.
The upside: an AI can now help you in ways that weren't possible before. It can see what you see. It can try things and check if they worked. It can debug CSS without asking you to "open DevTools and tell me what you see."
Is that worth the tradeoff? For us, watching AI fix a bug, commit the change, and push to GitHub—all from a terminal on a laptop—yeah, it's worth it.
"I'm trying to free your mind, Neo. But I can only show you the door. You're the one that has to walk through it."
We built the door. Come walk through.
Try it: functionserver.com/app
The Happy Path: AI-first development guide
Lens: Token-efficient editing