The web, as an API,
for your agents.

Your agent says what to do. BabelWrap does it and returns structured JSON.

Navigate, click, fill forms, extract data — all through natural language. No selectors, no browser setup, no HTML parsing.

Works with Claude Desktop, Cursor, Claude Code, and any MCP client.

Python
import httpx, os

API_KEY = os.environ["BABELWRAP_API_KEY"]
BASE    = "https://api.babelwrap.com/v1"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

# Create session, navigate, and extract -- 3 calls
sid = httpx.post(f"{BASE}/sessions", headers=HEADERS, json={}).json()["session_id"]
httpx.post(f"{BASE}/sessions/{sid}/navigate", headers=HEADERS,
    json={"url": "https://news.ycombinator.com"})
stories = httpx.post(f"{BASE}/sessions/{sid}/extract", headers=HEADERS,
    json={"query": "all story titles and their point counts"}).json()["data"]

JavaScript
const API_KEY = process.env.BABELWRAP_API_KEY;
const BASE = "https://api.babelwrap.com/v1";
const h = { "Authorization": `Bearer ${API_KEY}`, "Content-Type": "application/json" };

// Create session, navigate, and extract -- 3 calls
const { session_id: sid } = await fetch(`${BASE}/sessions`, {
  method: "POST", headers: h, body: "{}" }).then(r => r.json());
await fetch(`${BASE}/sessions/${sid}/navigate`, { method: "POST", headers: h,
  body: JSON.stringify({ url: "https://news.ycombinator.com" }) });
const { data: stories } = await fetch(`${BASE}/sessions/${sid}/extract`, {
  method: "POST", headers: h,
  body: JSON.stringify({ query: "all story titles and their point counts" })
}).then(r => r.json());

curl
# Create session, navigate, and extract -- 3 calls
SID=$(curl -s -X POST https://api.babelwrap.com/v1/sessions \
  -H "Authorization: Bearer $BABELWRAP_API_KEY" \
  -H "Content-Type: application/json" -d '{}' | jq -r .session_id)

curl -s -X POST https://api.babelwrap.com/v1/sessions/$SID/navigate \
  -H "Authorization: Bearer $BABELWRAP_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://news.ycombinator.com"}'

curl -s -X POST https://api.babelwrap.com/v1/sessions/$SID/extract \
  -H "Authorization: Bearer $BABELWRAP_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"query": "all story titles and their point counts"}' | jq .data

Built for developers who build with AI

AI Agent Builders

Building agents with Claude, GPT, or LangChain that need to interact with websites? BabelWrap gives them browsing capabilities through a single API.

Backend Developers

Need to extract data, submit forms, or automate workflows on the web? Skip Selenium boilerplate — describe what you want in plain English.

Data & Research Teams

Monitor prices, track competitors, gather public data. BabelWrap handles the browser so your pipelines stay clean.

Stop fighting the DOM

Your agents shouldn't need to know CSS selectors, XPath, or page structure.

Without BabelWrap
# Brittle selectors that break when the site changes
from selenium.webdriver.common.by import By
from bs4 import BeautifulSoup
driver.find_element(By.CSS_SELECTOR,
    "#root > div.app > main > form > div:nth-child(2) > input"
).send_keys("user@example.com")

driver.find_element(By.XPATH,
    "//button[contains(@class, 'btn-primary') and text()='Sign In']"
).click()

# Parse raw HTML to extract data
soup = BeautifulSoup(driver.page_source, "html.parser")
items = soup.select("div.product-card > h3.title")
prices = soup.select("div.product-card > span.price")
With BabelWrap
# Natural language -- works even when the site redesigns
await client.post(f"{BASE}/sessions/{sid}/fill", json={
    "target": "Email address field",
    "value": "user@example.com"
})

await client.post(f"{BASE}/sessions/{sid}/click", json={
    "target": "the Sign In button"
})

# Structured data extraction
resp = await client.post(f"{BASE}/sessions/{sid}/extract", json={
    "query": "all product names and prices"
})
products = resp.json()["data"]

Three steps to web automation

No browser setup. No Playwright boilerplate. Just API calls.

1. Create a Session

One POST request gives you an isolated browser context with its own cookies and state.

2. Describe What You Want

Use natural language: "click the Login button", "fill the email field", "extract all prices".

3. Get Structured Data

Every action returns an LLM-readable snapshot with inputs, actions, forms, and navigation.
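Those three steps translate directly into HTTP calls. A minimal sketch with `httpx`, where the login URL, field descriptions, and credentials are placeholders rather than anything BabelWrap prescribes:

```python
import os

BASE = "https://api.babelwrap.com/v1"

# Step 2 in practice: each action is an endpoint plus a natural-language
# payload. These targets are illustrative descriptions, not selectors.
STEPS = [
    ("navigate", {"url": "https://example.com/login"}),
    ("fill", {"target": "Email address field", "value": "user@example.com"}),
    ("fill", {"target": "Password field", "value": "hunter2"}),
    ("click", {"target": "the Sign In button"}),
]

def run(api_key: str) -> dict:
    """Create a session (step 1), replay STEPS (step 2), and return
    the final structured snapshot (step 3)."""
    import httpx  # imported lazily so STEPS can be inspected without it
    headers = {"Authorization": f"Bearer {api_key}"}
    with httpx.Client(base_url=BASE, headers=headers, timeout=30.0) as client:
        sid = client.post("/sessions", json={}).json()["session_id"]
        try:
            for action, payload in STEPS:
                resp = client.post(f"/sessions/{sid}/{action}", json=payload)
            return resp.json()
        finally:
            client.delete(f"/sessions/{sid}")

if __name__ == "__main__" and "BABELWRAP_API_KEY" in os.environ:
    print(run(os.environ["BABELWRAP_API_KEY"]))
```

Because every step shares the shape `POST /sessions/{sid}/{action}`, the whole flow is just data plus a loop.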

Drop-in MCP server

16 tools, zero configuration. Works with Claude Desktop, Cursor, and any MCP client.

// claude_desktop_config.json
{
  "mcpServers": {
    "babelwrap": {
      "command": "uvx",
      "args": ["babelwrap-mcp"],
      "env": {
        "BABELWRAP_API_KEY": "bw_your_api_key_here"
      }
    }
  }
}
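Cursor reads the same `mcpServers` schema, from a project-level `.cursor/mcp.json` (or `~/.cursor/mcp.json` globally), so the equivalent entry, assuming the same `uvx` launcher, would be:

```json
// .cursor/mcp.json
{
  "mcpServers": {
    "babelwrap": {
      "command": "uvx",
      "args": ["babelwrap-mcp"],
      "env": {
        "BABELWRAP_API_KEY": "bw_your_api_key_here"
      }
    }
  }
}
```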

Native tool support for AI agents

Add BabelWrap to your agent's toolkit in one config change. The MCP server exposes all browser capabilities as tools that Claude, GPT, and other agents can call directly.

Your agent can say: "Navigate to example.com and tell me what you see" — and BabelWrap handles the rest.

babelwrap_new_session babelwrap_navigate babelwrap_click babelwrap_fill babelwrap_submit babelwrap_extract babelwrap_snapshot babelwrap_screenshot babelwrap_press babelwrap_scroll babelwrap_hover babelwrap_upload babelwrap_wait_for babelwrap_back babelwrap_forward babelwrap_close_session

MCP setup guide →

Map a site once, use it forever

Your agent calls linkedin_search_jobs(query="python developer") instead of navigating page by page.

Without Site Mapping
# 5+ API calls, agent must understand each page
sid = create_session()
navigate(sid, "https://linkedin.com/jobs")
fill(sid, "search field", "python developer")
fill(sid, "location field", "NYC")
click(sid, "Search button")
data = extract(sid, "all job listings")
close_session(sid)
With Site Mapping
# 1 call. BabelWrap handles the entire flow.
result = linkedin_search_jobs(
    query="python developer",
    location="NYC"
)
# → [{"title": "Senior Python Dev", "company": "Acme", ...}]
1. Map

Point BabelWrap at any website. An AI agent explores it, discovers pages and actions, and generates replayable recipes.

2. Generate Tools

Recipes become typed MCP tools with named parameters: linkedin_search_jobs(query, location).

3. Call

Your agent calls generated tools directly. Self-healing auto-corrects when sites change their layout.

Pre-mapped sites are free. Every mapped site is added to a public catalog. If someone already mapped a site, your agent can use all its tools at no cost. Learn more →
linkedin_search_jobs linkedin_view_profile github_search_repos amazon_search_products indeed_search_jobs yelp_search_businesses

How site mapping works →

The Snapshot Format

Every action returns a structured snapshot your LLM can immediately reason about. No HTML parsing. No DOM traversal.

{
  "url": "https://example.com/login",
  "title": "Sign In -- Example",
  "content": "Welcome back. Sign in to continue.",
  "inputs": [
    { "id": "email-field", "label": "Email address", "type": "text" },
    { "id": "password-field", "label": "Password", "type": "password" }
  ],
  "actions": [
    { "id": "sign-in-btn", "label": "Sign In", "type": "button", "primary": true },
    { "id": "forgot-pwd", "label": "Forgot password?", "type": "link" }
  ],
  "navigation": ["Home", "Products", "Pricing", "Blog"],
  "forms": [
    { "id": "login-form", "fields": ["email-field", "password-field"], "submit": "sign-in-btn" }
  ],
  "alerts": [],
  "tables": [],
  "lists": [],
  "frames": []
}

What your agent sees

Instead of raw HTML, BabelWrap gives your agent a clean, structured representation of every page. Every field, button, link, form, and table is labeled and ready to act on.

  • inputs — every form field with label, type, and current value
  • actions — all clickable elements (buttons, links)
  • forms — logical groupings with their submit buttons
  • tables — structured table data with headers and rows
  • lists — ordered and unordered list items
  • navigation — site navigation links
  • alerts — error messages, success banners, warnings
  • frames — inputs and actions inside iframes
  • content — up to 15,000 characters of readable page text
Learn more about snapshots →
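Because the snapshot is plain JSON, ordinary code can act on it without touching HTML. As a sketch, a hypothetical helper (not part of the API) that turns the sample login snapshot above into ready-to-send `fill` and `click` payloads:

```python
def plan_form_fill(snapshot: dict, values: dict) -> list[dict]:
    """Turn a snapshot's first form into a list of fill/click payloads.

    `values` maps input labels (as they appear in snapshot["inputs"])
    to the values we want to type.
    """
    form = snapshot["forms"][0]
    labels = {i["id"]: i["label"] for i in snapshot["inputs"]}
    plan = [
        {"action": "fill", "target": labels[fid], "value": values[labels[fid]]}
        for fid in form["fields"]
        if labels.get(fid) in values
    ]
    # The form's submit id points at an entry in snapshot["actions"].
    submit_label = next(
        a["label"] for a in snapshot["actions"] if a["id"] == form["submit"]
    )
    plan.append({"action": "click", "target": submit_label})
    return plan

# The sample login snapshot from above, trimmed to the relevant keys.
snapshot = {
    "inputs": [
        {"id": "email-field", "label": "Email address", "type": "text"},
        {"id": "password-field", "label": "Password", "type": "password"},
    ],
    "actions": [
        {"id": "sign-in-btn", "label": "Sign In", "type": "button", "primary": True},
        {"id": "forgot-pwd", "label": "Forgot password?", "type": "link"},
    ],
    "forms": [
        {"id": "login-form", "fields": ["email-field", "password-field"],
         "submit": "sign-in-btn"}
    ],
}

plan = plan_form_fill(
    snapshot, {"Email address": "user@example.com", "Password": "s3cret"}
)
# Each entry in `plan` is ready to POST to /fill or /click.
```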

What can you build?

From quick data grabs to complex multi-step workflows.

Price Monitoring

Track product prices across e-commerce sites. Get alerts when prices drop.

~3 actions per check

AI Research Agent

Let your agent browse the web to answer questions. Works with Claude Desktop out of the box.

Drop-in MCP integration

Form Automation

Fill and submit forms: contact forms, applications, signups. Handle dropdowns, checkboxes, file uploads.

~5-8 actions per form

Data Extraction

Pull structured data from any page with a natural language query. No selectors needed.

2 actions per page

QA & Testing

Verify login flows, checkout processes, and user journeys programmatically.

End-to-end validation

Competitive Intelligence

Monitor competitor pages, job postings, product launches. Stay ahead of market changes.

Scheduled automation

Full-Site Automation

Map a website once. Your agent calls typed tools like github_search_repos(query) instead of manual navigation.

1 API call per task
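For the price-monitoring case, the only logic left to write is the threshold check; the session, navigate, and extract calls are the "~3 actions". A sketch in which the product URL, the extract query, and the returned `price` field are assumptions about a particular shop, not guaranteed response shapes:

```python
import re

def parse_price(text: str) -> float:
    """Convert a price string like '$1,299.00' to a float."""
    match = re.search(r"[\d,]+(?:\.\d+)?", text)
    if not match:
        raise ValueError(f"no price found in {text!r}")
    return float(match.group().replace(",", ""))

def should_alert(price_text: str, threshold: float) -> bool:
    """Alert when the current price drops below the threshold."""
    return parse_price(price_text) < threshold

def check(api_key: str, url: str, threshold: float) -> bool:
    import httpx  # lazy import; the three actions below are the whole check
    headers = {"Authorization": f"Bearer {api_key}"}
    with httpx.Client(base_url="https://api.babelwrap.com/v1",
                      headers=headers, timeout=30.0) as client:
        sid = client.post("/sessions", json={}).json()["session_id"]
        try:
            client.post(f"/sessions/{sid}/navigate", json={"url": url})
            data = client.post(
                f"/sessions/{sid}/extract",
                json={"query": "the product's current price"},
            ).json()["data"]
            # Assumed shape: {"price": "$1,299.00"} -- depends on the page.
            return should_alert(data["price"], threshold)
        finally:
            client.delete(f"/sessions/{sid}")
```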

Real working code

Navigate to Hacker News and extract the top stories — in 3 API calls.

Python
import asyncio
import httpx
import os

API_KEY = os.environ["BABELWRAP_API_KEY"]
BASE = "https://api.babelwrap.com/v1"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

async def main():
    async with httpx.AsyncClient(timeout=30.0) as client:
        # 1. Create session
        resp = await client.post(f"{BASE}/sessions", headers=HEADERS, json={})
        session_id = resp.json()["session_id"]

        try:
            # 2. Navigate to Hacker News
            await client.post(
                f"{BASE}/sessions/{session_id}/navigate",
                headers=HEADERS,
                json={"url": "https://news.ycombinator.com"},
            )

            # 3. Extract top stories
            resp = await client.post(
                f"{BASE}/sessions/{session_id}/extract",
                headers=HEADERS,
                json={"query": "all story titles and their point counts"},
            )
            for story in resp.json()["data"]:
                print(f"  {story['points']} - {story['title']}")

        finally:
            await client.delete(f"{BASE}/sessions/{session_id}", headers=HEADERS)

asyncio.run(main())

JavaScript
const API_KEY = process.env.BABELWRAP_API_KEY;
const BASE = "https://api.babelwrap.com/v1";
const headers = {
  "Authorization": `Bearer ${API_KEY}`,
  "Content-Type": "application/json",
};

// 1. Create session
const { session_id } = await fetch(`${BASE}/sessions`, {
  method: "POST", headers, body: JSON.stringify({})
}).then(r => r.json());

// 2. Navigate to Hacker News
await fetch(`${BASE}/sessions/${session_id}/navigate`, {
  method: "POST", headers,
  body: JSON.stringify({ url: "https://news.ycombinator.com" })
});

// 3. Extract top stories
const { data: stories } = await fetch(
  `${BASE}/sessions/${session_id}/extract`, {
    method: "POST", headers,
    body: JSON.stringify({ query: "all story titles and their point counts" })
}).then(r => r.json());

stories.forEach(s => console.log(`  ${s.points} - ${s.title}`));

// Clean up
await fetch(`${BASE}/sessions/${session_id}`, { method: "DELETE", headers });

curl
# 1. Create session
SID=$(curl -s -X POST https://api.babelwrap.com/v1/sessions \
  -H "Authorization: Bearer $BABELWRAP_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{}' | jq -r .session_id)

# 2. Navigate to Hacker News
curl -s -X POST https://api.babelwrap.com/v1/sessions/$SID/navigate \
  -H "Authorization: Bearer $BABELWRAP_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://news.ycombinator.com"}' | jq .snapshot.title

# 3. Extract top stories
curl -s -X POST https://api.babelwrap.com/v1/sessions/$SID/extract \
  -H "Authorization: Bearer $BABELWRAP_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"query": "all story titles and their point counts"}' | jq .data

# Clean up
curl -s -X DELETE https://api.babelwrap.com/v1/sessions/$SID \
  -H "Authorization: Bearer $BABELWRAP_API_KEY"

See more examples →

Everything your agent needs

Navigate, read, interact, scroll, wait, upload, and extract. Everything returns a structured snapshot.

Navigation

navigate

Load any URL and get a structured snapshot of the page.

{"url": "https://..."}

back / forward

Navigate browser history without re-entering URLs.

{}

scroll

Scroll the page up or down to reveal more content.

{"direction": "down"}
Interaction

click

Click any element using a natural language description.

{"target": "the Login button"}

fill

Fill any input field with a value, identified by description.

{"target": "Email field", "value": "..."}

submit

Submit a form. Omit target to submit the most prominent form.

{"target": "login form"}

press

Press keyboard keys: Enter, Escape, Tab, arrow keys.

{"key": "Enter"}

hover

Hover over elements to reveal dropdown menus and tooltips.

{"target": "Products menu"}
Data

snapshot

Read the current page state without performing any action.

{}

screenshot

Capture a base64 PNG screenshot for debugging.

{}
Utility

upload

Upload files to file input fields (resumes, documents, images).

{"target": "Resume field", ...}

wait_for

Wait for text, elements, or URL changes before proceeding.

{"text": "Order confirmed"}
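These primitives compose into multi-step flows. For instance, a search interaction chaining `fill`, `press`, `wait_for`, and `extract`; the site, targets, and wait text here are illustrative, not API defaults:

```python
BASE = "https://api.babelwrap.com/v1"

# A search flow: fill the box, press Enter, wait for results, extract.
# Targets and the wait text are natural-language descriptions we chose
# for this example, not constants the API defines.
SEARCH_FLOW = [
    ("navigate", {"url": "https://example.com"}),
    ("fill", {"target": "the search box", "value": "structured snapshots"}),
    ("press", {"key": "Enter"}),
    ("wait_for", {"text": "results"}),
    ("extract", {"query": "all result titles and links"}),
]

def run(api_key: str):
    import httpx  # lazy so the flow table is inspectable without it
    headers = {"Authorization": f"Bearer {api_key}"}
    with httpx.Client(base_url=BASE, headers=headers, timeout=60.0) as client:
        sid = client.post("/sessions", json={}).json()["session_id"]
        try:
            for action, payload in SEARCH_FLOW:
                resp = client.post(f"/sessions/{sid}/{action}", json=payload)
            return resp.json().get("data")
        finally:
            client.delete(f"/sessions/{sid}")
```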

Built for agents, loved by developers

| Feature | Selenium / Playwright | Browser APIs (Browserbase, Steel) | BabelWrap |
| --- | --- | --- | --- |
| Interface | CSS selectors, XPath | CSS selectors, XPath | Natural language |
| Output | Raw HTML / DOM | Raw HTML / DOM | Structured JSON snapshot |
| Agent integration | Custom glue code | Custom glue code | First-class MCP server |
| Element resolution | Breaks when DOM changes | Breaks when DOM changes | LLM-based, adapts automatically |
| Billing | Self-hosted or per-minute | Per-minute browser time | Per-action, starting at $0 |
| Setup | Install browser + driver | API key + SDK | One API key or MCP config |
| Pre-built site tools | None | None | Catalog of mapped sites with typed tools |

Simple, usage-based pricing

Start free. Scale as you grow. No hidden fees.

Free
$0
For experimenting and prototyping
  • 500 actions / month
  • 2 concurrent sessions
  • All 16 tools
  • MCP server access
  • 10 requests / minute
Get Started

All plans include full API and MCP server access. All 16 tools included on every plan. Usage resets on the 1st of each month. Pay only for what you use.

Site Mapping: $10 per site to generate typed tools your agent can call directly. Pre-mapped sites from the public catalog are free to use. Full pricing details →

Get your first structured snapshot in under a minute

Free tier. No credit card. 500 actions/month.

pip install babelwrap
export BABELWRAP_API_KEY="bw_your_key"
python -c "
from babelwrap import BabelWrap
with BabelWrap(api_key='bw_your_key') as bw:
    with bw.create_session() as s:
        s.navigate('https://example.com')
        print(s.extract('main heading and page description'))
"
Get Started Free