Claude Code Channels Lets You Text Your AI Coder From Telegram and Discord
Anthropic shipped Claude Code Channels, turning its coding agent into an always-on assistant you can message from Telegram and Discord. Built on MCP and the Bun runtime, it directly challenges OpenClaw's grip on the personal AI agent market.
Jan Schmitz
TL;DR: Anthropic just released Claude Code Channels, a feature that hooks Claude Code into Telegram and Discord so developers can text their AI coding agent from anywhere. Built on the Model Context Protocol and the Bun runtime, it turns Claude Code into a persistent, always-on worker. That’s the exact value proposition that made OpenClaw a phenomenon. The difference: no dedicated Mac Mini, no Docker setup, no half-million lines of open-source config. Just a bot token and a flag.
Four months ago, if you wanted an AI agent you could ping from your phone at 2 AM and have it push a hotfix to staging before you woke up, you had one real option: OpenClaw. Peter Steinberger’s open-source project had 210,000 GitHub stars, integrations with every messaging platform that matters, and a passionate community of developers who bought dedicated Mac Minis just to keep it running around the clock.
That calculus changed this morning. Anthropic announced Claude Code Channels, and the developer internet collectively reached for the same phrase: OpenClaw killer.
What actually shipped
Claude Code Channels is not a chatbot skin. It is an architectural extension that turns a Claude Code session into a long-lived process capable of receiving and responding to messages from external platforms.
The concrete version: you start Claude Code with a --channels flag. Behind the scenes, it spins up an MCP server that polls a messaging platform (Telegram or Discord at launch) for incoming messages. When something arrives, the message gets injected into the active session as a <channel> event. Claude processes it with full access to its toolkit: file reads, code edits, test execution, git operations. When it finishes, it pushes a reply back to the messaging platform using a dedicated reply tool.
Your phone becomes a remote control for your development environment.
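The loop described above can be sketched in a few lines. Everything here is illustrative: ChannelEvent, formatChannelEvent, and pollOnce are our names for the moving parts, not Anthropic's actual API.

```typescript
// Hypothetical sketch of the Channels flow: poll the platform, inject each
// message as a <channel> event, push the model's reply back out.

type ChannelEvent = {
  channel: "telegram" | "discord";
  chatId: string;
  text: string;
  receivedAt: string;
};

// Wrap an incoming platform message as the <channel> event text that gets
// injected into the active Claude Code session.
function formatChannelEvent(ev: ChannelEvent): string {
  return [
    `<channel platform="${ev.channel}" chat="${ev.chatId}" at="${ev.receivedAt}">`,
    ev.text,
    `</channel>`,
  ].join("\n");
}

// One pass of the polling loop: fetch new messages, hand each one to the
// session, route the reply back to the originating chat.
async function pollOnce(
  fetchUpdates: () => Promise<ChannelEvent[]>,
  handle: (eventText: string) => Promise<string>,
  sendReply: (chatId: string, reply: string) => Promise<void>,
): Promise<number> {
  const updates = await fetchUpdates();
  for (const ev of updates) {
    const reply = await handle(formatChannelEvent(ev));
    await sendReply(ev.chatId, reply);
  }
  return updates.length;
}
```

In practice `fetchUpdates` would hit the Telegram or Discord API and `handle` would be the live session; the shape of the loop is the point, not the stubs.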
Two platforms ship on day one. Adding more is straightforward, because every channel is just an MCP server that declares the claude/channel capability and communicates over stdio transport. Anthropic publishes the plugin code on GitHub. The community can fork it for Slack or WhatsApp without waiting for an official release.
The feature launched in research preview. Pro and Max subscribers can opt in per session. Team and Enterprise plans need admin approval first.
The technical stack: MCP, Bun, and a polling loop
Three pieces of infrastructure make Channels work, and each one explains why this was not feasible a year ago.
Model Context Protocol as the backbone
The Model Context Protocol has quietly become the shared language of AI tool integration since Anthropic open-sourced it in November 2024. Over 6,400 servers sit on the official registry. Monthly SDK downloads crossed 97 million by the end of 2025. OpenAI, Google, Microsoft: everyone adopted it.
MCP defines how an AI model discovers and invokes external tools. What Channels adds is a new capability type: claude/channel. An MCP server that declares this capability can emit notifications/claude/channel events, which the Claude Code runtime picks up and routes into the active conversation. The channel does not need to know anything about Claude’s internal state. It pushes structured events and waits for replies. Clean separation.
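Concretely, the two names the runtime cares about might look like this as JSON-RPC messages. The strings claude/channel and notifications/claude/channel are the ones named above; the surrounding field layout is an assumption, not Anthropic's published schema.

```typescript
// Illustrative only: field names beyond the capability and method strings
// are our guesses, not a documented spec.

// Capability block a channel's MCP server would declare during initialization.
const channelCapability = {
  capabilities: {
    "claude/channel": {}, // "I can push channel events into the session"
  },
};

// Build the notification the Claude Code runtime routes into the active
// conversation when a message arrives from the platform.
function channelNotification(platform: string, chatId: string, text: string) {
  return {
    jsonrpc: "2.0" as const,
    method: "notifications/claude/channel",
    params: { platform, chatId, text },
  };
}
```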
Anthropic donated MCP to the Agentic AI Foundation under the Linux Foundation in December 2025, co-founded with Block and OpenAI. That move matters here: building Channels on an open standard means nobody can credibly accuse Anthropic of locking developers into a proprietary messaging layer. Anyone can write a channel. The protocol is the protocol.
Why Bun, not Node
The pre-built channel plugins are Bun scripts, and Claude Code itself ships with Bun embedded as a standalone executable. That’s a direct consequence of Anthropic’s acquisition of Bun in December 2025.
The performance numbers tell the story. Bun’s cold start time lands between 8 and 15 milliseconds, compared to Node.js at 60 to 120. For a system that spawns channel subprocesses every time a developer fires up --channels, that gap compounds fast. HTTP throughput sits around 150,000 requests per second versus Node’s 50,000. Native TypeScript execution eliminates transpilation overhead entirely.
Jarred Sumner’s runtime was already popular (82,000 GitHub stars, 7 million monthly downloads, used by Midjourney and Lovable), but Anthropic embedding it directly into Claude Code’s binary turns Bun from a developer choice into plumbing. You do not install Bun to use Channels. It ships baked in.
Persistence without dedicated hardware
The persistence model is where this gets interesting.
A Claude Code session with --channels enabled can run in a background terminal, a tmux session, or a VPS. It sits idle, barely touching resources, until a message arrives from Telegram or Discord. Then it wakes up, does its work, replies, and goes back to sleep.
This is the exact workflow OpenClaw users built, except they needed a dedicated machine (the infamous Mac Mini setup), Docker containers, and a nontrivial amount of configuration. Claude Code Channels flattens all of that into a single process on whatever machine you already have running.
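The "sits idle, barely touching resources" behavior maps naturally onto a poll interval with backoff. A sketch under our own assumptions; Anthropic has not published the actual intervals:

```typescript
// Back off while idle, snap back to fast polling when a message lands.
// The 1s/30s bounds are illustrative choices, not Anthropic's.

function nextDelay(currentMs: number, hadMessages: boolean): number {
  const ACTIVE_MS = 1_000; // poll every second mid-conversation
  const IDLE_MAX_MS = 30_000; // cap the idle interval at 30 seconds
  return hadMessages ? ACTIVE_MS : Math.min(currentMs * 2, IDLE_MAX_MS);
}

// Resident loop: poll, adjust the delay, sleep, repeat until told to stop.
async function runResident(
  poll: () => Promise<number>, // returns how many messages were handled
  shouldStop: () => boolean,
): Promise<void> {
  let delay = 1_000;
  while (!shouldStop()) {
    const handled = await poll();
    delay = nextDelay(delay, handled > 0);
    await new Promise((resolve) => setTimeout(resolve, delay));
  }
}
```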
The OpenClaw context
You cannot understand why Channels matters without understanding what it is responding to.
OpenClaw launched in November 2025 under a different name. Steinberger originally called it “Clawd,” a tribute to the Anthropic model that powered it. Anthropic’s legal team sent a cease-and-desist. The project cycled through “Moltbot” before landing on OpenClaw. The name stuck. The project exploded.
Within three months, OpenClaw had 20,000 GitHub stars. By March 2026, that number sat above 210,000. The appeal was dead simple: a personal AI worker you could message 24/7, over whatever app you already used. Not a chatbot. A worker. People had it applying for jobs while they watched movies, managing social media campaigns, writing and sending emails, building entire applications from a Telegram thread.
The platform list was wild: WhatsApp, Telegram, Slack, Discord, iMessage, Signal, Google Chat, IRC, Microsoft Teams, Matrix, LINE, Twitch. Over 20 integrations in total.
But OpenClaw came with baggage. The codebase ballooned to nearly half a million lines of code. Fifty-three config files. Seventy-plus dependencies. Security was application-level only, meaning a misconfigured agent could access your file system, your credentials, anything on the host machine. For the non-technical crowd drawn in by viral demos, the distance between “look what it can do” and “I can actually set this up safely” was massive.
That gap spawned a whole ecosystem of derivatives:
- NanoClaw stripped OpenClaw down to a single process with OS-level container isolation. 24,400 GitHub stars. MIT license. Built on Anthropic’s own Claude Agent SDK.
- KiloClaw went fully managed: deploy a hosted OpenClaw agent in under 60 seconds, no Docker required, starting at $19/month. Backed by GitLab co-founder Sid Sijbrandij.
- NemoClaw, announced by NVIDIA at GTC on March 16, wrapped OpenClaw in enterprise guardrails using the NVIDIA Agent Toolkit. Jensen Huang called OpenClaw “the operating system for personal AI.”
Then, on February 15, Sam Altman announced that Steinberger was joining OpenAI. “Peter Steinberger is joining OpenAI to drive the next generation of personal agents,” Altman wrote. Steinberger’s own post was characteristically blunt: “What I want is to change the world, not build a large company, and teaming up with OpenAI is the fastest way to bring this to everyone.”
OpenClaw moved to a foundation and remains open source, now sponsored by OpenAI.
Anthropic shipping native messaging integration five weeks later is not a coincidence. It is a direct competitive answer.
Setup: simpler than you’d expect
Getting Channels running takes roughly ten minutes. You need Claude Code v2.1.80 or later and a claude.ai login on a paid plan.
Telegram
- Open BotFather in Telegram and run /newbot to generate a bot and access token.
- In your Claude Code terminal: /plugin install telegram@claude-plugins-official
- Configure the token: /telegram:configure <your-token>
- Restart with the channel flag: claude --channels plugin:telegram@claude-plugins-official
- DM your bot on Telegram to get a pairing code, then enter it: /telegram:access pair <code>
Discord
- Go to the Discord Developer Portal, create a new application, and copy the bot token.
- Under Bot settings, enable Message Content Intent in Privileged Gateway Intents.
- In Claude Code: /plugin install discord@claude-plugins-official, then /discord:configure <your-token>
- Launch: claude --channels plugin:discord@claude-plugins-official
- DM your bot and pair: /discord:access pair <code>
Anthropic also ships a “Fakechat” demo, a localhost-only chat UI for testing the push-and-reply logic before connecting to any external server. Cautious by default. Developers get to understand the event flow before they expose a terminal session to the open internet.
What this actually changes for developers
On the surface, the pitch is convenience. Message your coding agent from your phone. Get a ping when a build finishes. Kick off a deploy from a Telegram thread while walking the dog.
But the bigger change is structural.
Coding assistants, including Claude Code before today, operate in a synchronous loop. You sit down. You type. The model responds. You review. You iterate. Close the laptop and the assistant stops existing.
OpenClaw shattered that pattern by making the agent persistent and reachable from anywhere. Claude Code Channels does the same thing, minus the sysadmin prerequisites.
Picture this: a developer pushes a feature branch before leaving the office. On the train home, they text their Claude Code bot on Telegram: “Run the test suite on feature/auth-refactor and fix any failures.” Claude Code picks up the message, checks out the branch, runs the tests, spots two failing specs, reads the error traces, patches the code, reruns the suite, and messages back: “Fixed two assertion failures in auth_controller_spec.rb. All 847 tests passing. Changes committed to feature/auth-refactor.”
The developer reviews the diff on their phone, approves it, and goes back to their podcast.
OpenClaw users have been living this way for months. They just needed a Mac Mini, Docker, and an afternoon of tinkering to get there. Channels compresses all of that into a bot token and a CLI flag.
The licensing tension
Here is where it gets complicated.
Claude Code is proprietary software. It requires a paid Anthropic subscription: Pro at $20/month, Max at $100 or $200/month, or Enterprise. The model behind it, Claude Opus 4.6, is closed. The infrastructure is Anthropic’s cloud.
But the protocol underneath is open. MCP belongs to the Agentic AI Foundation. The channel plugins sit on GitHub. The spec for writing a custom channel is public documentation. Anyone can build a connector without asking Anthropic.
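That openness is less abstract than it sounds: MCP's stdio transport frames JSON-RPC messages as newline-delimited JSON, so the plumbing of a custom connector reduces to encoding and decoding frames. A minimal sketch; encodeFrame and decodeFrames are our names, not part of any SDK:

```typescript
// Newline-delimited JSON framing, as used by MCP's stdio transport.

function encodeFrame(msg: object): string {
  return JSON.stringify(msg) + "\n";
}

// stdin arrives in arbitrary chunks, so keep any trailing partial line
// and prepend it to the next read.
function decodeFrames(buffer: string): { messages: object[]; rest: string } {
  const lines = buffer.split("\n");
  const rest = lines.pop() ?? ""; // partial line, waiting for more bytes
  const messages = lines
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as object);
  return { messages, rest };
}
```

A real connector would wire decodeFrames to process.stdin and encodeFrame to process.stdout, then implement the claude/channel capability on top.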
Anthropic has been refining this playbook for a while now: proprietary engine, open protocol. The brain is locked down; the nervous system is open source. It is the mirror image of the OpenClaw approach, where everything was open but safety and reliability fell squarely on the developer.
For enterprise buyers, that distinction carries weight. Anthropic’s brand is built on AI safety research. When a CISO evaluates letting developers wire an AI agent into corporate Telegram channels, “built by Anthropic on audited infrastructure” lands very differently than “community-maintained open-source project with application-level security.”
The flip side is lock-in. Build your workflow around Claude Code Channels, and you are tethered to Anthropic’s pricing, uptime, and product roadmap. OpenClaw users own their whole stack. This is the same managed-versus-self-hosted trade-off that has played out for decades. But it cuts deeper when the “managed service” is an autonomous agent with access to your codebase.
Community verdict: swift and loud
The reaction on X was fast and nearly unanimous.
BentoBoi (@BentoBoiNFT) nailed the hardware angle: “Claude just killed OpenClaw with this update. You no longer need to buy a Mac Mini. I say this as someone who owns one lol.” That remark about Mac Minis is not a joke. A sizable group of developers bought dedicated hardware specifically to run OpenClaw around the clock. Channels makes that hardware redundant for anyone already paying for Claude.
Ejaaz (@cryptopunk7213) zeroed in on shipping speed: incorporating texting, thousands of MCP skills, and autonomous bug-fixing in four weeks was, in his words, “fucking crazy.”
Matthew Berman kept it short: “They’ve BUILT OpenClaw.”
The hype is understandable, but the “killer” label oversimplifies things. OpenClaw supports 20+ messaging platforms. Channels supports two. OpenClaw is free and open source. Channels requires a subscription. OpenClaw runs any model, not just Claude, and KiloClaw users can toggle between 500+ models. Channels is Anthropic-only by design.
What Channels actually kills is not OpenClaw as a project, but the primary reason most people adopted OpenClaw: having an always-on AI coding agent you can reach from your phone. For the majority of developers who wanted that capability without the operational overhead, Channels is the path of least resistance.
Power users running multi-model setups with custom integrations across a dozen platforms? OpenClaw’s ecosystem still has no equivalent.
The bigger picture: agents go asynchronous
Step back from the Claude-versus-OpenClaw horse race and a structural shift snaps into focus.
The entire AI agent industry is moving from synchronous to asynchronous. Google launched the Agent2Agent (A2A) protocol with 50+ enterprise partners. NVIDIA is pouring resources into persistent agent infrastructure. OpenAI acquired Steinberger specifically to productize what he proved was possible with OpenClaw.
The direction is consistent across every major player: AI tools are graduating from things you use while sitting at a desk to things that work on your behalf while you do something else.
Claude Code’s own trajectory maps this shift precisely. May 2025: public launch as a terminal coding assistant. By November, it crossed $1 billion in run-rate revenue. It overtook GitHub Copilot and Cursor as the most-used AI coding tool. Agent Teams shipped in February 2026. Voice mode arrived earlier this month. Now Channels.
Each release nudges the product further from “tool I interact with” toward “colleague I delegate to.”
The implications reach well past developer tooling. If this trajectory holds (and every signal says it will), we are heading toward a world where knowledge workers of all kinds run persistent AI agents in the background, reachable by text, handling work that used to demand focused desk time. Coding is just the proving ground because developers tolerate the rough edges.
What comes next
Anthropic labeled Channels a research preview for good reason. Two messaging platforms is a starting point. The MCP architecture means Slack, WhatsApp, and iMessage connectors are straightforward to build. The community will ship them if Anthropic does not.
The more interesting question is what happens when Channels collides with Agent Teams. Right now, a developer texts one Claude Code session. What happens when they can text a team of agents, one for frontend, one for backend, one for infrastructure, and orchestrate them from a group chat? That is the OpenClaw vision at scale, wrapped in enterprise guardrails.
For today, though, the takeaway is concrete: the barrier to having an always-on AI coding partner just dropped from “buy a Mac Mini and spend a weekend on Docker” to “create a Telegram bot and add a flag.”
Whether that makes it an OpenClaw killer or just the opening salvo of a much bigger convergence depends on where this goes from here. Given the pace Anthropic is shipping, we probably will not wait long to find out.