Lens: Surgical Code Editing

Traditional AI code editing means reading entire files, generating full replacements, and hoping the edit worked. Lens inverts this: grep to find, view context, edit one line, run to verify. 60x fewer tokens.

The Core Workflow

Three commands. No file reads. Surgical precision:

Lens.grep('fetchData')           // Find the function
→ "42:async function fetchData"

Lens.setLine(42, 'async function fetchData(url) {')  // Fix it
→ "✓ L42"

Lens.save()                        // Save it
→ "✓ saved"

Compare this to reading 847 lines, mentally parsing them, generating 847 new lines, and writing them back. Lens edits at line granularity.
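
The contrast is easy to make concrete: treat the file as an array of lines and give every operation a single index. A minimal sketch in plain JavaScript (an illustrative in-memory buffer, not the real Lens internals):

```javascript
// A tiny line buffer with grep/setLine, sketching Lens-style
// surgical edits. Names and behavior are illustrative only.
const buffer = [
  'import { api } from "./api";',
  'async function fetchData {',        // bug: missing parameter list
  '  return api.get("/data");',
  '}',
];

// grep: return "lineNo:content" for every matching line (1-indexed)
function grep(pattern) {
  return buffer
    .map((line, i) => [i + 1, line])
    .filter(([, line]) => line.includes(pattern))
    .map(([n, line]) => `${n}:${line}`);
}

// setLine: replace exactly one line, return a compact confirmation
function setLine(n, code) {
  buffer[n - 1] = code;
  return `✓ L${n}`;
}

console.log(grep('fetchData'));  // → ["2:async function fetchData {"]
console.log(setLine(2, 'async function fetchData(url) {'));  // → "✓ L2"
```

Each call touches one index and returns a few characters of confirmation, which is where the token savings come from.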

How AI Sees

When you look at FunctionServer's Studio, you see a desktop with windows, icons, and buttons. When AI looks through Lens, it sees something different:

What You See

Studio - app.js
+ New Save Run
1  function hello() {
2    return "world";
3  }

Pixels, colors, spatial layout

What AI Sees

Lens.look('body')
 0123456789012345678901234
0│Studio - app.js
1│+ New    Save    ▶ Run
2│
3│1:function hello() {
4│2:  return "world";
5│3:}

Lens.at(1, 16)
→ BUTTON:"Run" [click]

Text grid with coordinates + actions

The AI doesn't parse pixels or decode screenshots. It receives a text grid with coordinates—every element addressable by row and column. Want to click "Run"? Lens.click(1, 16). Want to know what's there first? Lens.at(1, 16).

This isn't OCR or computer vision. It's a native text representation designed for how language models actually work.
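
One way to picture such a representation: flatten UI elements into a row/column grid and answer position queries against it. A toy sketch (the element shape and function names are invented for illustration, not the real Lens renderer):

```javascript
// Toy UI elements with absolute text-grid positions.
const ui = [
  { row: 0, col: 0,  text: 'Studio - app.js' },
  { row: 1, col: 0,  text: '+ New' },
  { row: 1, col: 16, text: '▶ Run', action: 'click' },
];

// look: paint each element's text into a fixed-width row buffer,
// then prefix each row with its index, like the grids shown above.
function look(elements, width = 25) {
  const rows = [];
  for (const el of elements) {
    while (rows.length <= el.row) rows.push(' '.repeat(width));
    const r = rows[el.row];
    rows[el.row] =
      (r.slice(0, el.col) + el.text + r.slice(el.col + el.text.length)).slice(0, width);
  }
  return rows.map((r, i) => `${i}│${r.trimEnd()}`).join('\n');
}

// at: report which element covers a (row, col) cell and its actions.
function at(elements, row, col) {
  const el = elements.find(
    e => e.row === row && col >= e.col && col < e.col + e.text.length,
  );
  if (!el) return null;
  return `${el.action ? 'BUTTON' : 'TEXT'}:"${el.text}"${el.action ? ` [${el.action}]` : ''}`;
}

console.log(look(ui));
console.log(at(ui, 1, 16)); // → 'BUTTON:"▶ Run" [click]'
```

The point of the design: every element is addressable by two small integers instead of a DOM path or pixel coordinates.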

The Problem: Tokens Are Expensive

When an AI edits code through traditional tools, the workflow looks like this:

  1. Read entire file (1000+ tokens)
  2. Parse and understand structure
  3. Generate entire new file (1000+ tokens)
  4. Write to disk
  5. Repeat for every change

A simple one-line fix costs thousands of tokens. And the AI is blind—it can't see if the edit worked without reading the file again.

The Solution: Surgical Operations

Lens provides a different workflow:

Traditional (high tokens)

// Read entire file
Read file.js  // 847 lines

// Rewrite entire file
Write file.js // 847 lines

// ~3000 tokens for one edit

Lens (minimal tokens)

Lens.grep('fetchData')
// → "42:async function fetchData"

Lens.setLine(42, 'new code')
// → "✓ L42"

Lens.save()
// → "✓ saved"

// ~50 tokens total

That's 60x fewer tokens for the same edit.

The Lens API

Code Navigation

Command               Purpose                  Output
Lens.code()           View with line numbers   Numbered lines
Lens.line(42)         Get specific line        Line content
Lens.line(42, 5)      Get lines 42-46          5 lines
Lens.grep('pattern')  Find in code             Matching lines with numbers

Surgical Editing

Command                      Purpose            Output
Lens.setLine(42, 'code')     Replace one line   ✓ L42
Lens.insertLine(42, 'code')  Insert at line     ✓ +L42
Lens.deleteLine(42)          Delete line        ✓ -L42
Lens.replace('a', 'b')       Find/replace all   ✓ Replaced

DOM Viewport

Lens can render any DOM element to a compact text grid:

Lens.look('#studio-container', {width: 60, height: 20})
// → "127 elements
//   0123456789012345678901234567890123456789
//  0│+ New              Examples...        ▶ Run  💾
//  1│Files   ↻         API Reference       Git
//  2│📄 app.js          ALGO.createWindow   Commit
//  3│📄 utils.js        Create a new window History
// ..."

The AI can "see" the UI without parsing verbose HTML. Then target elements by position:

Lens.click(2, 3)   // Click "app.js" at row 2, col 3
// → "✓ clicked"

Position Inspection

Discover what actions are available at any position:

Lens.at(2, 3)   // What's at row 2, col 3?
// → "A:\"app.js\" [click,href:/files/app.js]"

Or list all interactive elements in a container:

Lens.actions('#toolbar')
// → "0:button:\"Run\"
//    1:button:\"Save\"
//    2:a:\"Settings\"
//    3:input:\"Search...\""

Lens.do('#toolbar', 0)  // Click "Run" button
// → "✓ clicked"

Why this matters: Instead of guessing what elements exist or parsing HTML, the AI can inspect a location and see exactly what actions are available. This enables exploration of unfamiliar UIs.
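
The actions/do pattern can be sketched as an indexed list over interactive elements, then dispatch by index. A toy version (the element shape and handler names are invented, not the real Lens API surface):

```javascript
// Toy interactive elements in a container, in document order.
const toolbar = [
  { tag: 'button', label: 'Run',      onActivate: () => 'ran' },
  { tag: 'button', label: 'Save',     onActivate: () => 'saved' },
  { tag: 'a',      label: 'Settings', onActivate: () => 'opened' },
];

// actions: one compact indexed line per interactive element.
function actions(elements) {
  return elements.map((el, i) => `${i}:${el.tag}:"${el.label}"`).join('\n');
}

// doAction: activate the element at a previously listed index.
function doAction(elements, index) {
  elements[index].onActivate();
  return '✓ clicked';
}

console.log(actions(toolbar));
console.log(doAction(toolbar, 0)); // → "✓ clicked"
```

Listing first and acting by index means the model never has to guess a selector: the list it just read is the address space it acts on.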

State & Helpers

Command                Purpose                     Output
Lens.state()           Compact system state        w:Studio|Shell e:0 u:william
Lens.dash()            Dashboard with icons        [Studio|Shell] 🎮📝💻
Lens.define(name, fn)  Register helper             ✓ name
Lens.call(name, args)  Call helper                 Result
Lens.look(sel, opts)   Render to text grid         Numbered rows
Lens.at(row, col)      What's at position?         Element + actions
Lens.click(row, col)   Click at position           ✓ clicked "text"
Lens.actions(sel)      List interactive elements   Indexed action list
Lens.do(sel, index)    Execute by index            ✓ clicked

Persistent Helpers

Define reusable functions that persist across calls:

Lens.define('pos', () => `P(${px},${py})`)
// → "✓ pos"

Lens.call('pos')
// → "P(3,5)"

// Define more complex helpers
Lens.define('move', '(dx,dy) => { px+=dx; py+=dy; render(); return Lens.call("pos"); }')
Lens.call('move', 1, 0)
// → "P(4,5)"

Why helpers matter: Instead of re-sending the same code every call, define it once. Reduces token usage by 80%+ for repeated operations.
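
A helper registry of this kind is essentially a name-to-function map, with string definitions evaluated once at registration. A minimal sketch (not the real implementation; `eval` here stands in for however Lens compiles string helpers, and `px`/`py` mirror the cursor variables in the example above):

```javascript
// Registry of named helpers that persist across calls.
const helpers = new Map();

// define: accept a function or a string of source (as the Lens
// examples show), store it once under a name.
function define(name, fn) {
  helpers.set(name, typeof fn === 'string' ? eval(`(${fn})`) : fn);
  return `✓ ${name}`;
}

// call: invoke a previously defined helper by name.
function call(name, ...args) {
  return helpers.get(name)(...args);
}

let px = 3, py = 5;
define('pos', () => `P(${px},${py})`);
define('move', '(dx, dy) => { px += dx; py += dy; return call("pos"); }');

console.log(call('pos'));        // → "P(3,5)"
console.log(call('move', 1, 0)); // → "P(4,5)"
```

After the one-time definition, each subsequent call costs only the name and arguments, which is where the claimed savings on repeated operations come from.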

Batch Operations

Chain multiple operations in a single call:

Lens.batch([
  ['setLine', 42, 'const x = await fetch(url);'],
  ['setLine', 43, 'const data = await x.json();'],
  ['save'],
  ['run']
])
// → "✓setLine ✓setLine ✓save ✓run"

Four operations. One API call. One response.
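
A batch call of this shape is a small dispatcher: each entry names an operation, the remaining items are its arguments, and the per-operation confirmations join into one compact string. A sketch with stubbed operations (the real Lens operations would do actual work):

```javascript
// Stubbed operation table; each op returns its compact confirmation.
const ops = {
  setLine: (n, code) => '✓setLine',
  save:    ()        => '✓save',
  run:     ()        => '✓run',
};

// batch: dispatch each [name, ...args] entry in order,
// join the confirmations into a single response string.
function batch(calls) {
  return calls.map(([name, ...args]) => ops[name](...args)).join(' ');
}

console.log(batch([
  ['setLine', 42, 'const x = await fetch(url);'],
  ['setLine', 43, 'const data = await x.json();'],
  ['save'],
  ['run'],
])); // → "✓setLine ✓setLine ✓save ✓run"
```

One round trip instead of four is what makes the loop fast: latency is paid once, not per operation.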

Compact State

Get system state in minimal tokens:

Lens.state()
// → "w:Studio|Shell e:0 u:william"

Lens.dash()
// → "[Studio|Shell|(Settings)] 🎮📝💻🌐⚙️"

One line tells you: active windows, minimized windows (in parens), error count, current user, and available apps. No JSON parsing needed.
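
Producing that one-liner is just string packing. A sketch with assumed field names (the real state source is the running system, not a plain object):

```javascript
// Pack window list, error count, and user into the compact
// "w:... e:... u:..." format shown above.
function state({ windows, errors, user }) {
  return `w:${windows.join('|')} e:${errors} u:${user}`;
}

console.log(state({ windows: ['Studio', 'Shell'], errors: 0, user: 'william' }));
// → "w:Studio|Shell e:0 u:william"
```

A fixed, positional format like this is cheap to emit and trivial for a model to read back without a JSON parse.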

Why This Matters

"The best interface for AI isn't a better API. It's an interface designed from scratch for how AI actually works—in tokens, not pixels."

Lens inverts the traditional assumption. Instead of adapting AI to human interfaces, we built an interface for AI that humans can also use.

For AI:

  - Token-efficient: text grids, line-level edits, and one-line confirmations instead of screenshots and full-file rewrites
  - Inspectable: any position answers "what is here, and what can I do?"

For Humans:

  - Visible: every AI action renders live in the browser
  - Interruptible: you can redirect or take over at any point

Live Collaboration

When AI uses Lens, you see everything happen in real-time. Buttons click. Text appears. Windows open. It's not a transcript of what happened—it's happening right now, in your browser.

Ask the AI to "open Studio and fix the bug in app.js" and watch as it:

  1. Opens the Studio window
  2. Navigates to app.js
  3. Searches for the bug with Lens.grep('error')
  4. Edits the specific line with Lens.setLine()
  5. Saves and runs to verify

You can interrupt, redirect, or take over at any point. It's pair programming where your partner can see your screen.


Part of a Larger System

Lens doesn't work alone. It's part of FunctionServer's AI development environment:

Eye: The Bridge

Lens runs through eye—a WebSocket bridge that gives AI direct access to the browser's JavaScript VM. No HTTP overhead. Just raw JavaScript execution in ~25ms.

// From the terminal, execute JS in your browser
eye 'Lens.grep("fetchData")'
→ "42:async function fetchData"

eye 'Lens.setLine(42, "fixed")'
→ "✓ L42"

AI Eyes: Visual Feedback

When AI uses Lens, you see it working. Purple highlights show what the AI is inspecting. Green flashes indicate edits. Watch the AI's focus move across the screen as it searches, reads, and modifies code.

Guardian: Error Awareness

Guardian monitors the console. When errors occur, it offers AI assistance. The AI can investigate, use Lens to navigate to the problem, fix it surgically, and verify the fix—all proactively.

GitHub: Ship, Don't Just Generate

After editing with Lens, AI can commit and push—all through the same API:

Lens.save()                    // Save changes
Lens.commit("Fix fetchData")   // Commit
Lens.push()                    // Push to GitHub

From edit to deployed. One namespace. Three calls.


The Complete Workflow

Here's what AI-first development looks like:

  1. Create or open project: Lens.project("myapp") or getFileFromDisk("~/repos/myapp/app.js")
  2. Find the code: Lens.grep("handleSubmit")
  3. Understand context: Lens.line(42, 10)
  4. Edit surgically: Lens.setLine(45, "fixed code")
  5. Run and verify: Lens.run()
  6. Save: Lens.save()
  7. Commit and push: Lens.commit("message") then Lens.push()

Each step is one call. Each call returns immediate confirmation. The debugging loop collapses from minutes to seconds.


Design Principles

The ideas behind Lens generalize beyond code editing:

  - Speak in tokens, not pixels: give the model a compact text representation it can address directly
  - Operate at the smallest meaningful unit, and confirm each operation in a few characters
  - Keep the human in the loop: every action is visible live and can be interrupted or redirected

These principles apply to any interface where AI and humans collaborate in real-time.


Try it: functionserver.com/app

The Happy Path: AI-first development guide

The Door: Architecture philosophy

Code: github.com/williamsharkey/functionserver