What Can This Skill Do?
This Skill takes your Agent's web capabilities to a whole new level.
- 10 sub-agents operating in parallel across 10 different platforms: Xiaohongshu, Weibo, Bilibili, Boss, Huxiu, and more
- Open 100 web pages at once, operate various platform UIs in parallel, read content, and assemble reports
- Doesn't hijack your computer: the Agent searches within sites and operates web pages in the background
- Auto-publishes to social platforms: opens the platform, writes copy, uploads images, and publishes without human intervention
- Everyday web tasks too: finding episodes to stream, checking US visa appointment systems, automated web testing
- Can even auto-solve some captchas
- Automatically distills site operation experience; the Agent gets smoother the more you use it
These capabilities all emerge generically once the Agent is equipped with this Skill — no per-site tuning required.
Compatible with Claude Code, OpenClaw, and any Agent that supports Skills.
Why Do You Need This Skill?
Problems with Existing Agent Web Solutions
Claude Code: By default uses Web Search for searching and Web Fetch for reading pages; after installing Playwright or Chrome DevTools MCP, it can also control a browser. But it only provides tools — the access strategy is left entirely to the model.
The model easily falls into tunnel vision:
- Holding the Search tool, it endlessly tries different keywords in search engines to find non-public web info
- Using fetch, it helplessly requests login-walled, JS-heavy sites (which simply won't load)
- You have to install Playwright or Agent Browser CLI yourself, and will inevitably hit pitfalls multiple times
OpenClaw: Offers a CDP mode, but every site requires a fresh login, and some site components fail to load in CDP mode.
On top of that, neither handles concurrent multi-page operations well, and both may fight you for browser control.
The Ideal Agent Web Solution
- Flexible strategy allocation: switch fluidly between search, static fetch, and browser; swap tools when blocked
- Reuse existing login sessions: don't maintain a separate identity for every site
- Strong generalization: adapt to diverse web tasks and target-site behaviors and anti-bot measures
- Sub-Agent divide and conquer: high concurrency across many pages, running in the background without interference
- Accumulate operational experience: next visit to the same site doesn't start from scratch
Installation
Send this to your Agent and installation completes itself:
Help me install the web-access skill from https://github.com/eze-is/web-access.
This skill was originally designed for Claude Code. Before installing, please
understand its core principles and working logic, then adapt it to your Agent
architecture and local environment so it integrates naturally instead of being
transplanted rigidly.

The Agent will download, configure the environment, and complete the install automatically. No manual steps required.
Prerequisites
- [Required] Install Chrome and update it to the latest version
- In Chrome's address bar, enter chrome://inspect/#remote-debugging and check "Allow remote debugging for this browser instance"
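You can confirm the prerequisite is met by probing Chrome's DevTools HTTP endpoint. A minimal sketch, assuming the endpoint listens on localhost at the conventional port 9222 (your setup may use a different port):

```python
import json
import urllib.request


def devtools_url(port: int = 9222) -> str:
    """Build the DevTools version-endpoint URL for a local Chrome."""
    return f"http://127.0.0.1:{port}/json/version"


def probe_chrome(port: int = 9222, timeout: float = 2.0):
    """Return Chrome's version info if the debugging endpoint answers, else None."""
    try:
        with urllib.request.urlopen(devtools_url(port), timeout=timeout) as resp:
            return json.loads(resp.read().decode("utf-8"))
    except OSError:
        return None


if __name__ == "__main__":
    info = probe_chrome()
    if info is None:
        print("DevTools endpoint not reachable -- is remote debugging enabled?")
    else:
        print("Connected to", info.get("Browser"))
```

If the probe returns version info, the Skill (or any CDP client) can connect; if not, revisit the chrome://inspect setting above.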
How to Use
Type "follow the web-access skill" to manually cue the Agent, or just describe the web task you want done:
- Searching info, reading web pages: "look up xx for me"
- Operating web UIs (filling forms, clicking, uploading): "open xx"
- Scraping or publishing to social platforms: "post xx to the xx platform for me"
- Reading dynamically rendered pages, or any web task that needs a browser
We recommend disabling any extra browser MCP services (Chrome DevTools, Playwright MCP, etc.) to prevent the model from fighting itself across tools.
Skill Design Philosophy
Core Idea
A Skill that unlocks a model's upper bound = Agent strategic philosophy + a minimal, complete toolset + essential factual notes
General-purpose Agent Skills should avoid over-specifying "how" the Agent does things. Instead, focus on:
- Recalibrating the model's strategic philosophy for the target scenario
- Supplying basic tools the Agent is missing
- Surfacing upfront factual notes the model is unlikely to remember
Four-Step Loop
- Define success criteria: What counts as done? What info, actions, or outcomes are expected?
- Start with the most direct path: if you already know the target requires login or has anti-bot defenses, go straight to the browser
- Validate in-flight: every step's result is evidence. A search miss doesn't necessarily mean bad keywords — the target may simply not exist. When hitting popups or login walls, first check whether the content is already in the DOM
- Stop when the success criteria are met: don't over-operate or second-guess
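The loop above can be sketched as a simple escalation driver. The strategy list and the `done` predicate here are illustrative, not part of the Skill's actual implementation:

```python
from typing import Callable, Iterable, Optional


def run_web_task(
    strategies: Iterable[Callable[[], Optional[str]]],
    done: Callable[[Optional[str]], bool],
) -> Optional[str]:
    """Try strategies in order of directness; each result is evidence.

    Stops as soon as the success criterion is met -- never over-operates.
    """
    for attempt in strategies:
        result = attempt()   # e.g. search, static fetch, or browser
        if done(result):     # success criteria were defined up front
            return result
    return None              # the target may simply not exist


# Illustrative usage: for a login-walled site, the browser IS the direct path,
# so the search-engine step would be skipped entirely in practice.
if __name__ == "__main__":
    steps = [
        lambda: None,            # search miss (not necessarily bad keywords)
        lambda: "page content",  # browser succeeds behind the login wall
    ]
    print(run_web_task(steps, done=lambda r: r is not None))
```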
Minimal Complete Toolset
Humans really only do three things on the web: Search (find where the info is), Look (see the content), Do (perform actions on the page).
| Action | Tool | Notes |
|---|---|---|
| Search | Search | Find summaries, discover sources |
| Look | Fetch / curl / browser | AI extraction for public pages, or browser for dynamic pages |
| Do | Browser automation | Click, fill forms, upload, and other interactions |
Why Chrome's built-in CDP: native WebSocket interactions evade most sites' anti-bot detection, naturally support multi-tab parallel background operation within one browser, and connect directly to the user's daily Chrome — so login sessions come for free.
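At the wire level, a CDP interaction is just JSON frames over that WebSocket. The method names below (Target.createTarget with its background parameter) are real CDP methods; the tiny frame builder is only a sketch of the message shape, with actual transport to Chrome's webSocketDebuggerUrl left out:

```python
import itertools
import json
from typing import Optional

_ids = itertools.count(1)  # every CDP command needs a unique id


def cdp_command(method: str, params: Optional[dict] = None,
                session_id: Optional[str] = None) -> str:
    """Serialize one CDP command frame as it would be sent over the WebSocket."""
    msg = {"id": next(_ids), "method": method, "params": params or {}}
    if session_id is not None:
        msg["sessionId"] = session_id  # flat-mode routing to an attached target
    return json.dumps(msg)


# A background tab is a new target created without stealing focus:
open_tab = cdp_command(
    "Target.createTarget",
    {"url": "https://example.com", "background": True},
)
```

Because these frames ride on Chrome's own debugging channel rather than injected scripts, the resulting page behavior is the browser's native behavior, which is what lets it slip past most anti-bot checks.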
Sub-Agent Divide and Conquer
Web tasks often involve multiple independent targets. Using the sub-agent mechanism in frameworks like Claude Code, independent subtasks are dispatched to sub-agents running in parallel, and the main Agent only receives the aggregated results.
Architecturally, all sub-agents share the same Chrome and the same CDP Proxy. Each creates its own background tab and operates via a distinct targetId — no interference, no race conditions.
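One way to picture the no-interference property: in CDP flat mode, every frame arriving from an attached tab carries that tab's sessionId, so a shared connection can be demultiplexed per owner. A hypothetical router (class and field names are illustrative, not the Skill's internals):

```python
import json
from collections import defaultdict
from typing import Dict, List


class SessionRouter:
    """Route incoming CDP messages to the sub-agent that owns each session.

    All sub-agents share one WebSocket to one Chrome; each attached tab has
    a distinct sessionId, so routing on that field keeps mailboxes disjoint.
    """

    def __init__(self) -> None:
        self.inbox: Dict[str, List[dict]] = defaultdict(list)

    def dispatch(self, raw: str) -> None:
        msg = json.loads(raw)
        # Browser-level messages carry no sessionId; park them separately.
        owner = msg.get("sessionId", "__browser__")
        self.inbox[owner].append(msg)
```

Each sub-agent then reads only its own inbox, so two tabs scraping different platforms never see each other's events.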
Caveat: Prompt Wording Traps
When the main Agent writes prompts for its sub-agents, default phrasing can inadvertently steer sub-agent behavior. For example, if you write "research", the main Agent may turn that into "search" in the sub-agent prompt — which anchors the sub-agent to WebSearch instead of using the browser to enter an anti-bot platform.
Experience Accumulation
After operating on a site, the Agent automatically distills an access strategy for that site: platform traits, effective URL patterns, known pitfalls — stored per domain.
The efficiency gap is dramatic (for example, finding a blogger's homepage on Xiaohongshu is about 90% faster with prior experience than without).
Experience files are stamped with a discovery date and treated as "likely useful hints" rather than "guaranteed facts". Sites redesign and anti-bot strategies evolve. If experience-based steps fail, the Agent falls back to the general mode and updates the experience file.
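A per-domain store with a discovery-date stamp could look like the sketch below. The file layout and function names are hypothetical; the actual Skill's format may differ:

```python
import json
import time
from pathlib import Path
from typing import Optional


def save_experience(root: Path, domain: str, hints: dict) -> Path:
    """Store per-domain hints stamped with the date they were discovered."""
    record = {"discovered": time.strftime("%Y-%m-%d"), "hints": hints}
    path = root / f"{domain}.json"
    path.write_text(json.dumps(record, indent=2), encoding="utf-8")
    return path


def load_experience(root: Path, domain: str) -> Optional[dict]:
    """Hints are likely-useful, not guaranteed: the caller should fall back
    to the general mode (and rewrite the file) if the hinted steps fail."""
    path = root / f"{domain}.json"
    if not path.exists():
        return None
    return json.loads(path.read_text(encoding="utf-8"))
```

Keeping the date in the record is what lets the Agent weigh how stale a hint might be when a site has since redesigned.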
Installation Link
🔗 GitHub: https://github.com/eze-is/web-access
MIT-licensed open source, compatible with Claude Code, OpenClaw, and any Agent that supports Skills.