Lens: Surgical Code Editing
Traditionally, AI editing code means reading entire files, generating replacements, and hoping the edit worked. Lens inverts this: grep to find, view context, edit one line, run to verify. The result is roughly 60x fewer tokens.
The Core Workflow
Three commands. No file reads. Surgical precision:
Lens.grep('fetchData') // Find the function
→ "42:async function fetchData"
Lens.setLine(42, 'async function fetchData(url) {') // Fix it
→ "✓ L42"
Lens.save() // Save it
→ "✓ saved"
Compare this to reading 847 lines, mentally parsing them, generating 847 new lines, and writing them back. Lens edits at line granularity.
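Line-granular editing is easy to picture as operations over an array of lines. Here is a minimal in-memory sketch of that idea — a hypothetical mock for illustration, not the real Lens implementation:

```javascript
// Hypothetical mock of line-granular editing over an in-memory buffer.
const buffer = {
  lines: [
    'async function fetchData {',
    '  return fetch("/api");',
    '}',
  ],
  // grep: return "lineNo:content" for each matching line (1-indexed)
  grep(pattern) {
    return this.lines
      .map((text, i) => `${i + 1}:${text}`)
      .filter((line) => line.includes(pattern))
      .join('\n');
  },
  // setLine: replace exactly one line, confirm with a compact status
  setLine(n, text) {
    this.lines[n - 1] = text;
    return `✓ L${n}`;
  },
};

const found = buffer.grep('fetchData');   // "1:async function fetchData {"
const status = buffer.setLine(1, 'async function fetchData(url) {'); // "✓ L1"
```

The point of the compact return strings is that confirmation costs a handful of tokens, not a re-read of the file.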
How AI Sees
When you look at FunctionServer's Studio, you see a desktop with windows, icons, and buttons. When AI looks through Lens, it sees something different:
What You See
Pixels, colors, spatial layout
What AI Sees
Lens.look('body')
0123456789012345678901234
0│Studio - app.js
1│+ New Save ▶ Run
2│
3│1:function hello() {
4│2: return "world";
5│3:}
Lens.at(1, 16)
→ BUTTON:"Run" [click]
Text grid with coordinates + actions
The AI doesn't parse pixels or decode screenshots. It receives a text grid with coordinates—every element addressable by row and column. Want to click "Run"? Lens.click(1, 16). Want to know what's there first? Lens.at(1, 16).
This isn't OCR or computer vision. It's a native text representation designed for how language models actually work.
The Problem: Tokens Are Expensive
When an AI edits code through traditional tools, the workflow looks like this:
- Read entire file (1000+ tokens)
- Parse and understand structure
- Generate entire new file (1000+ tokens)
- Write to disk
- Repeat for every change
A simple one-line fix costs thousands of tokens. And the AI is blind—it can't see if the edit worked without reading the file again.
The Solution: Surgical Operations
Lens provides a different workflow:
Traditional (high tokens)
// Read entire file
Read file.js // 847 lines
// Rewrite entire file
Write file.js // 847 lines
// ~3000 tokens for one edit
Lens (minimal tokens)
Lens.grep('fetchData')
// → "42:async function fetchData"
Lens.setLine(42, 'new code')
// → "✓ L42"
Lens.save()
// → "✓ saved"
// ~50 tokens total
That's 60x fewer tokens for the same edit.
The Lens API
Code Navigation
| Command | Purpose | Output |
|---|---|---|
| Lens.code() | View with line numbers | Numbered lines |
| Lens.line(42) | Get specific line | Line content |
| Lens.line(42, 5) | Get lines 42-46 | 5 lines |
| Lens.grep('pattern') | Find in code | Matching lines with numbers |
Surgical Editing
| Command | Purpose | Output |
|---|---|---|
| Lens.setLine(42, 'code') | Replace one line | ✓ L42 |
| Lens.insertLine(42, 'code') | Insert at line | ✓ +L42 |
| Lens.deleteLine(42) | Delete line | ✓ -L42 |
| Lens.replace('a', 'b') | Find/replace all | ✓ Replaced |
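Insert and delete shift the line numbers of everything below them, which a small mock makes concrete. This is an illustrative sketch only; the return strings follow the table above:

```javascript
// Illustrative mock of insert/delete semantics over a line array (not real Lens).
const lines = ['a', 'b', 'd'];

function insertLine(n, text) {
  lines.splice(n - 1, 0, text); // new text becomes line n; later lines shift down
  return `✓ +L${n}`;
}

function deleteLine(n) {
  lines.splice(n - 1, 1); // later lines shift up
  return `✓ -L${n}`;
}

insertLine(3, 'c'); // "✓ +L3"; lines are now a, b, c, d
deleteLine(1);      // "✓ -L1"; lines are now b, c, d
```

Because numbers shift, a batch of edits is safest applied bottom-up, so earlier operations don't invalidate later targets.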
DOM Viewport
Lens can render any DOM element to a compact text grid:
Lens.look('#studio-container', {width: 60, height: 20})
// → "127 elements
// 0123456789012345678901234567890123456789
// 0│+ New Examples... ▶ Run 💾
// 1│Files ↻ API Reference Git
// 2│📄 app.js ALGO.createWindow Commit
// 3│📄 utils.js Create a new window History
// ..."
The AI can "see" the UI without parsing verbose HTML. Then target elements by position:
Lens.click(2, 3) // Click "app.js" at row 2, col 3
// → "✓ clicked"
Position Inspection
Discover what actions are available at any position:
Lens.at(2, 3) // What's at row 2, col 3?
// → "A:\"app.js\" [click,href:/files/app.js]"
Or list all interactive elements in a container:
Lens.actions('#toolbar')
// → "0:button:\"Run\"
// 1:button:\"Save\"
// 2:a:\"Settings\"
// 3:input:\"Search...\""
Lens.do('#toolbar', 0) // Click "Run" button
// → "✓ clicked"
Why this matters: Instead of guessing what elements exist or parsing HTML, the AI can inspect a location and see exactly what actions are available. This enables exploration of unfamiliar UIs.
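The indexed-action pattern can be mocked with a flat element list. The structure below is hypothetical (the real Lens reads the live DOM), but it shows why an indexed list is cheap to emit and trivial to act on:

```javascript
// Hypothetical mock: list interactive elements and dispatch one by index.
const toolbar = [
  { tag: 'button', label: 'Run',  onClick: () => 'ran' },
  { tag: 'button', label: 'Save', onClick: () => 'saved' },
];

// actions: one compact line per element — index, tag, label
function actions(els) {
  return els.map((el, i) => `${i}:${el.tag}:"${el.label}"`).join('\n');
}

// doAction: execute the element's handler by index, confirm compactly
function doAction(els, index) {
  els[index].onClick();
  return '✓ clicked';
}

const listing = actions(toolbar); // 0:button:"Run"\n1:button:"Save"
const result = doAction(toolbar, 0); // "✓ clicked"
```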
State & Helpers
| Command | Purpose | Output |
|---|---|---|
| Lens.state() | Compact system state | w:Studio\|Shell e:0 u:william |
| Lens.dash() | Dashboard with icons | [Studio\|Shell] 🎮📝💻 |
| Lens.define(name, fn) | Register helper | ✓ name |
| Lens.call(name, args) | Call helper | Result |
| Lens.look(sel, opts) | Render to text grid | Numbered rows |
| Lens.at(row, col) | What's at position? | Element + actions |
| Lens.click(row, col) | Click at position | ✓ clicked "text" |
| Lens.actions(sel) | List all interactive elements | Indexed action list |
| Lens.do(sel, index) | Execute by index | ✓ clicked |
Persistent Helpers
Define reusable functions that persist across calls:
Lens.define('pos', () => `P(${px},${py})`)
// → "✓ pos"
Lens.call('pos')
// → "P(3,5)"
// Define more complex helpers
Lens.define('move', '(dx,dy) => { px+=dx; py+=dy; render(); return Lens.call("pos"); }')
Lens.call('move', 1, 0)
// → "P(4,5)"
Why helpers matter: Instead of re-sending the same code every call, define it once. Reduces token usage by 80%+ for repeated operations.
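Under the hood, a helper store needs little more than a name-to-function map. A minimal sketch, assuming define-once/call-many semantics as described above (the names here are illustrative):

```javascript
// Minimal helper registry sketch (not the real Lens internals).
const helpers = new Map();

// define: store the function once, confirm with its name
function define(name, fn) {
  helpers.set(name, fn);
  return `✓ ${name}`;
}

// call: look up by name and invoke with any arguments
function call(name, ...args) {
  return helpers.get(name)(...args);
}

const defined = define('sum', (a, b) => a + b); // "✓ sum"
const result = call('sum', 2, 3);               // 5
```

The token savings come from the asymmetry: the function body is sent once at define time, and every later call costs only the name and arguments.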
Batch Operations
Chain multiple operations in a single call:
Lens.batch([
['setLine', 42, 'const x = await fetch(url);'],
['setLine', 43, 'const data = await x.json();'],
['save'],
['run']
])
// → "✓setLine ✓setLine ✓save ✓run"
Four operations. One API call. One response.
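Batch dispatch is essentially mapping over [method, ...args] tuples and joining the compact confirmations. A sketch under that assumption, with stubbed operations:

```javascript
// Sketch of batch dispatch: each entry is [method, ...args]; results are joined.
const api = {
  setLine: (n, text) => '✓setLine',
  save: () => '✓save',
  run: () => '✓run',
};

function batch(ops) {
  return ops.map(([method, ...args]) => api[method](...args)).join(' ');
}

const out = batch([
  ['setLine', 42, 'const x = await fetch(url);'],
  ['setLine', 43, 'const data = await x.json();'],
  ['save'],
  ['run'],
]); // "✓setLine ✓setLine ✓save ✓run"
```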
Compact State
Get system state in minimal tokens:
Lens.state()
// → "w:Studio|Shell e:0 u:william"
Lens.dash()
// → "[Studio|Shell|(Settings)] 🎮📝💻🌐⚙️"
One line tells you: active windows, minimized windows (in parens), error count, current user, and available apps. No JSON parsing needed.
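Encoding state this way is a straightforward string format. A hypothetical encoder for the line shown above (field names are assumptions for illustration):

```javascript
// Hypothetical encoder for the compact state line: w:<windows> e:<errors> u:<user>.
function encodeState({ windows, errors, user }) {
  return `w:${windows.join('|')} e:${errors} u:${user}`;
}

const line = encodeState({
  windows: ['Studio', 'Shell'],
  errors: 0,
  user: 'william',
}); // "w:Studio|Shell e:0 u:william"
```

A fixed, positional format like this is also easy for a model to pattern-match without a JSON parse step.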
Why This Matters
"The best interface for AI isn't a better API. It's an interface designed from scratch for how AI actually works—in tokens, not pixels."
Lens inverts the traditional assumption. Instead of adapting AI to human interfaces, we built an interface for AI that humans can also use.
For AI:
- See code with line numbers for addressing
- Edit surgically without reading/rewriting entire files
- See UI state without parsing HTML
- Know immediately if edits worked
- Batch operations to minimize round-trips
For Humans:
- Watch AI navigate code in real-time
- See exactly which lines are being touched
- Faster iteration cycles
- Lower costs (fewer tokens = cheaper)
Live Collaboration
When AI uses Lens, you see everything happen in real-time. Buttons click. Text appears. Windows open. It's not a transcript of what happened—it's happening right now, in your browser.
Ask the AI to "open Studio and fix the bug in app.js" and watch as it:
- Opens the Studio window
- Navigates to app.js
- Searches for the bug with Lens.grep('error')
- Edits the specific line with Lens.setLine()
- Saves and runs to verify
You can interrupt, redirect, or take over at any point. It's pair programming where your partner can see your screen.
Part of a Larger System
Lens doesn't work alone. It's part of FunctionServer's AI development environment:
Eye: The Bridge
Lens runs through eye—a WebSocket bridge that gives AI direct access to the browser's JavaScript VM. No HTTP overhead. Just raw JavaScript execution in ~25ms.
// From the terminal, execute JS in your browser
eye 'Lens.grep("fetchData")'
→ "42:async function fetchData"
eye 'Lens.setLine(42, "fixed")'
→ "✓ L42"
AI Eyes: Visual Feedback
When AI uses Lens, you see it working. Purple highlights show what the AI is inspecting. Green flashes indicate edits. Watch the AI's focus move across the screen as it searches, reads, and modifies code.
Guardian: Error Awareness
Guardian monitors the console. When errors occur, it offers AI assistance. The AI can investigate, use Lens to navigate to the problem, fix it surgically, and verify the fix—all proactively.
GitHub: Ship, Don't Just Generate
After editing with Lens, AI can commit and push—all through the same API:
Lens.save() // Save changes
Lens.commit("Fix fetchData") // Commit
Lens.push() // Push to GitHub
From edit to deployed. One namespace. Three calls.
The Complete Workflow
Here's what AI-first development looks like:
- Create or open project — Lens.project("myapp") or getFileFromDisk("~/repos/myapp/app.js")
- Find the code — Lens.grep("handleSubmit")
- Understand context — Lens.line(42, 10)
- Edit surgically — Lens.setLine(45, "fixed code")
- Run and verify — Lens.run()
- Save — Lens.save()
- Commit and push — Lens.commit("message") then Lens.push()
Each step is one call. Each call returns immediate confirmation. The debugging loop collapses from minutes to seconds.
Design Principles
The ideas behind Lens generalize beyond code editing:
- Addressable content — Everything has coordinates (line numbers, row/col positions)
- Compact output — Status in one line, not JSON blobs
- Surgical operations — Edit the smallest possible unit
- Batch support — Multiple operations per call
- Verification built-in — Every operation confirms success
- Visual feedback — Humans see what AI is doing
These principles apply to any interface where AI and humans collaborate in real-time.
Try it: functionserver.com/app
The Happy Path: AI-first development guide
The Door: Architecture philosophy