Manus AI got acquired by Meta for somewhere between $2 billion and $3 billion. It claims to be the fastest startup from zero to $100 million ARR in history. Its founders just got barred from leaving China.
And its own help documentation warns users that any cost estimate the AI gives you should be treated as a “hallucination rather than a factual commitment.”
This is that review.
TL;DR — The Short Version for Busy People
Manus AI is not an AI model. It’s a paid middleman between you and AI models you could access directly for a fraction of the cost.
- Developers: Skip Manus entirely. Claude Code gives you the same reasoning engine (Anthropic’s Claude) with direct API pricing. A task that burns $200 in Manus credits costs roughly $5 through the API.
- Non-technical users who need autonomous task execution: Manus works, but only on the $200/month Extended plan. The $20 Standard plan gives you 4,000 credits — that’s approximately four to eight real tasks before you’re locked out for the month.
- Everyone else: ChatGPT Plus or Perplexity Pro at $20/month gives you flat-rate, predictable access without the credit roulette.
The core problem isn’t that Manus is bad software. It’s that the pricing model is structurally hostile to its own target audience.
Jump to full comparison table →
What Manus AI Actually Is (And How It Works)
Strip away the marketing language and here’s what Manus does: it takes your request, sends it to Anthropic’s Claude (primarily Claude 3.5 Sonnet, with fine-tuned versions of Alibaba’s Qwen for specific sub-tasks), breaks it into steps, and executes those steps inside a cloud-based virtual Linux machine.
That’s not a small thing. The execution layer is real engineering. Manus coordinates 29 integrated tools — browser automation, file operations, shell commands, code execution, analytics modules — through what’s called a “CodeAct” approach. Instead of relying on brittle pre-defined API calls, the agent writes and runs disposable Python scripts on the fly to solve problems dynamically.
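Manus's internal harness isn't public, but the CodeAct pattern itself is easy to sketch: instead of calling fixed, pre-defined tools, the model emits a short Python script, the runtime executes it, and the captured output becomes the next observation. Here is a minimal, illustrative sketch with the LLM stubbed out (the `fake_plan` list stands in for model output; a real harness would run each script in an isolated VM, not a bare `exec`):

```python
import io
import contextlib

def run_codeact_step(code: str, state: dict) -> str:
    """Execute one model-generated snippet and capture its output.

    A production harness runs this inside a sandboxed VM; a plain
    exec() is used here purely for illustration -- never do this
    with untrusted model output.
    """
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, state)  # the "disposable script"
    return buf.getvalue()

# Stand-in for the LLM: in Manus this would be a Claude/Qwen call
# that plans the next action and emits code for it.
fake_plan = [
    "prices = [19.99, 24.50, 17.25]",                # step 1: gather data
    "print(f'avg: {sum(prices)/len(prices):.2f}')",  # step 2: analyze it
]

state = {}  # shared namespace persists between steps
observations = [run_codeact_step(code, state) for code in fake_plan]
print(observations[-1])  # fed back to the model as the next observation
```

The point of the pattern is flexibility: the model can handle an edge case no tool author anticipated by simply writing code for it, at the cost of needing a hardened sandbox underneath.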
The architecture was confirmed in March 2025 when a user prompted Manus to output its own internal runtime files, exposing the full system prompts, tool lists, and model configuration. Chief scientist Ji Yichao publicly confirmed the Claude + Qwen stack after the leak. The full system prompts were subsequently published on GitHub.
The Orchestration Tax: Paying for Middlemen
Here’s where it gets uncomfortable. Everything Manus does — the reasoning, the code generation, the decision-making — runs on models built by other companies. Anthropic built Claude. Alibaba built Qwen. Manus built the plumbing that connects them to a sandbox and charges you a premium for access.
That premium is what we’re calling the Orchestration Tax.
A developer who knows their way around a terminal can replicate the core Manus workflow by combining Claude’s API, a Docker container, Playwright for browser automation, and an orchestration framework like LangChain. It would take work — Manus’s 29-tool integration represents thousands of hours of prompt engineering and edge-case handling. But the raw materials are available, and the cost difference is staggering.
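The shape of that DIY stack is a plain plan-execute loop. The sketch below stubs the model call so it runs offline; a real build would use the Anthropic SDK inside `ask_model` and point `execute` at a Docker container (with Playwright handling anything browser-shaped). Every name here is illustrative, not Manus's actual internals:

```python
import subprocess

def ask_model(prompt: str) -> str:
    """Placeholder for the LLM call. A real stack would call the
    Anthropic API here; stubbed so the sketch runs offline."""
    # Pretend the model decided a shell command is the next action.
    return "echo hello-from-agent"

def execute(command: str) -> str:
    """Run the model's chosen command. Manus does this in a cloud VM;
    a DIY stack would target a container instead, e.g. by prefixing
    the command with `docker exec <container_id>`."""
    result = subprocess.run(
        command, shell=True, capture_output=True, text=True, timeout=30
    )
    return result.stdout + result.stderr

task = "Check connectivity to the sandbox"
action = ask_model(task)
observation = execute(action)
# A full agent loops here: feed `observation` back to the model and
# repeat until it declares the task done.
print(observation)
```

What this skeleton omits is exactly what the Orchestration Tax pays for: retries, memory across steps, sub-agent coordination, and the edge-case handling baked into Manus's 29-tool integration.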
The argument in Manus’s favor: building a robust, fault-tolerant agentic execution system is brutally hard. Handling memory persistence across long-running tasks, spinning up secure virtual machines, coordinating sub-agents — a marketing agency is never going to build this in-house. For non-technical teams, the Orchestration Tax buys real convenience.
The argument against: that convenience comes wrapped in a pricing model designed to extract maximum revenue from the people least equipped to monitor their spending.
Manus AI Pricing Explained: The Credit Trap
Manus runs on credits. Not a flat monthly fee. Not a predictable per-task rate. Credits — where the cost of any given task is unknown before you start, unknowable during execution, and non-refundable after completion.
Current Subscription Plans (March 2026)
| Plan | Monthly Price | Credits/Month | Daily Refresh | Annual Option |
|---|---|---|---|---|
| Free | $0 | 1,000 starter (one-time) | 300 (Lite mode only, capped at 1,500/mo) | — |
| Standard | $20 | 4,000 | 300 | ~$16.60/mo |
| Customizable | $40 | 8,000 | 300 | ~$33.20/mo |
| Extended | $200 | 40,000 | 300 | ~$166/mo |
| Team | Custom | Scalable pool | 300 | Custom |
Those numbers look reasonable until you see what tasks actually cost.
What Credits Actually Buy (Cost Breakdown)
| Task | Time | Credits Used |
|---|---|---|
| Simple web search | ~1 min | 10–20 |
| Market research query | ~50 sec | ~59 |
| Data visualization/chart | ~15 min | ~200 |
| 3-day trip itinerary | ~4.5 min | ~152 |
| Wedding invitation webpage | ~25 min | ~360 |
| Complex web app build | ~80 min | ~900+ |
| Large research task (user-reported failure) | N/A | 8,555 (wasted) |
| Single enterprise task (pre-run estimate) | N/A | ~10,000 |
Do the math on the Standard plan. You get 4,000 credits per month. A single complex web app eats 900+. That’s four tasks at most — fewer if anything goes wrong.
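The per-plan ceiling is one line of arithmetic, using the credit allotments from the pricing table and the low end of the complex-build estimate:

```python
# Monthly credit allotments (pricing table) and the low end of the
# "complex web app" cost (consumption table).
plans = {"Standard": 4_000, "Customizable": 8_000, "Extended": 40_000}
complex_build = 900

for name, credits in plans.items():
    print(f"{name}: at most {credits // complex_build} complex builds/month")
# Standard: at most 4 complex builds/month
# Customizable: at most 8 complex builds/month
# Extended: at most 44 complex builds/month
```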
Things go wrong constantly.
The “Hallucination” Admission in Official Docs
This is the part that should be on a warning label.
Manus’s own help documentation explicitly states that any credit-cost estimate generated by the AI itself should be treated as “hallucinations rather than factual commitments.” The company that built the agent is telling you not to trust the agent’s predictions about how much it will charge you.
There is no upfront cost estimate. There is no pause-and-resume. If your credits run out mid-task, Manus stops completely. Not pauses — stops. Your work in progress is permanently lost. No save state. No recovery.
Credit refunds are issued only for verifiable technical bugs or platform malfunctions. Running out of credits mid-task does not qualify.
Monthly subscription credits don’t roll over. The daily 300-credit refresh doesn’t accumulate. The only credits that never expire are purchased add-ons and the initial signup bonus.
Real User Damage and Wasted Credits
The complaints aren’t theoretical. On X, a user named @AndreCRomano published a detailed audit of 109 transactions showing 210,000 credits lost — 47% of total consumption — with support unresponsive. Gergely Orosz, author of The Pragmatic Engineer (one of the most-read engineering newsletters in tech), publicly posted that he stopped using Manus because credits ran out too fast and it cost more than Perplexity.
On Reddit’s r/ManusOfficial, a user documented 8,555 credits wasted on a CSV research task where the AI falsely claimed completion — only 48 of 129 entries were actually processed. Another reported 22,000 credits deducted without any user activity. A thread titled “Credit System Makes Manus Absolutely Unusable” detailed 500 credits spent stripping HTML from five pages with zero viable output.
After the Meta acquisition, user @dmaysing25 reported credit consumption increasing tenfold, with 30,000 credits vanishing in a single stretch.
Who Gets Hurt by This Pricing Model
Credit-based pricing works when the user understands what they’re consuming. AWS charges per gigabyte. Stripe charges per transaction. You know the unit cost before you commit.
Manus charges per autonomous AI decision loop — a unit that is invisible, unpredictable, and controlled entirely by the agent. The user has zero visibility into how many loops a task will require. A developer recognizes when an agent is stuck in a hallucination spiral and kills the process. A marketing manager assumes the spinning wheel means progress while their monthly allocation evaporates.
The people Manus is built for — non-technical users who can’t set up their own agent stack — are the exact people most vulnerable to the pricing model.
The GAIA Benchmark Illusion
Manus’s marketing leans heavily on its GAIA benchmark performance. The claimed scores:
| Level | Manus | OpenAI Deep Research | Previous Best |
|---|---|---|---|
| Level 1 (basic) | 86.5% | 74.3% | 67.9% |
| Level 2 (intermediate) | 70.1% | 69.1% | 67.4% |
| Level 3 (complex) | 57.7% | 47.6% | 42.3% |
Three things you need to know about these numbers.
First: They are self-submitted. Manus ran its own agent against the GAIA test set and submitted its own scores to the Hugging Face leaderboard. No independent organization re-ran Manus’s code against the private test set to verify results. The leaderboard infrastructure is hosted by Hugging Face, but the actual score generation is performed by the competing teams themselves.
Second: The scores are contested. In the same month Manus launched, H2O.ai published a blog post claiming their h2oGPTe agent reclaimed the top GAIA position with 75% overall accuracy. Some sources cite Manus’s Level 1 score at 81.3% rather than 86.5%, without clarifying which test set version or date was used.
Third: GAIA was co-created by Meta AI (FAIR), Meta GenAI, Hugging Face, and AutoGPT teams. Meta subsequently acquired Manus. No evidence of benchmark manipulation has been published. But the relationship between the benchmark creator and the acquirer of the benchmark leader is a fact worth noting.
GAIA contains 466 questions. High scores on a 466-question benchmark do not guarantee production reliability — as the user complaint data makes abundantly clear.
Meta’s $2 Billion Acquisition of Manus AI
Meta did not officially disclose the acquisition price. The Wall Street Journal reported “over $2 billion.” Reuters cited sources placing it between $2 billion and $3 billion. To put that number in perspective, Manus’s Series B in April 2025 valued the company at $500 million post-money after a $75 million raise led by Benchmark. In roughly eight months, the valuation multiplied by at least four.
Why Meta Paid Billions for an Execution Layer
The acquisition logic isn’t about Manus’s models — Manus doesn’t have proprietary models. It’s about three things:
Execution runtime at scale. By December 2025, Manus claimed $125 million or more in revenue run rate, 147 trillion tokens processed, and 80 million virtual execution environments managed. Building that infrastructure from scratch takes years of edge-case discovery. Meta bought the trial-and-error phase pre-completed.
WhatsApp integration potential. Meta’s broader play is converting WhatsApp Business from a messaging interface into an autonomous transactional operating system. Embedding Manus’s execution layer means a small business owner could send a voice note and have the agent handle customer service, scheduling, inventory queries — without a human in the loop.
Speed. Alexandr Wang (former CEO of Scale AI, now Meta’s Chief AI Officer) is building a vertically integrated agentic AI supply chain. The $14.3 billion Scale AI investment in mid-2025 secured the data-labeling layer. Manus secures the downstream execution layer. Waiting to build in-house was not an option in the 2026 market.
The Geopolitical Bomb: China’s Exit Ban on Founders
Manus was founded in Beijing in 2022 as Butterfly Effect Technology by CEO Xiao Hong and chief scientist Ji Yichao. In mid-2025, the company relocated its executive team, operations, and intellectual property to Singapore — a move analysts have termed “Singapore washing” — to attract Western venture capital and avoid US tech sanctions.
The relocation worked for fundraising. It did not work with Beijing.
In March 2026, China’s National Development and Reform Commission summoned both founders to Beijing. After the meeting, they were barred from leaving the country. China’s Ministry of Commerce is investigating whether the relocation of Manus’s codebase and personnel to Singapore prior to the Meta sale constituted an illegal export of core technology without prior government licensing. The investigation falls under the same regulatory framework China invoked regarding TikTok’s potential US sale.
As of March 27, 2026, the investigation is ongoing. Both founders remain under an exit ban. Meta’s public statement: “The transaction adhered fully to applicable laws. We expect a suitable resolution to the inquiry.”
The precedent this sets goes far beyond Manus. Any Chinese-founded AI startup considering a Western acquisition now knows that Singapore incorporation does not shield you from Beijing’s extraterritorial reach.
The “My Computer” Desktop App Privacy Problem
On March 16, 2026, Manus launched its desktop application — “My Computer” — available for macOS and Windows. This shifted Manus from an isolated cloud sandbox to a hybrid architecture where the agent can directly access your local terminal, read and edit local files, alter system settings, and control local applications.
The reasoning (via Claude/Qwen) still runs on Meta’s cloud servers. The execution happens on your machine. That means your local file metadata, directory structures, and potentially file contents must be transmitted to the cloud to provide context to the language model.
Credit consumption is identical whether tasks run in the cloud or on your local machine — the cost is determined by tokens and compute, not file location.
The desktop app includes an approval system: explicit “Allow Once” or “Always Allow” controls scoped to specific folders. Every terminal command requires user confirmation before execution.
For comparison, an open-source agent framework running with local model weights ensures zero data leaves the local hardware. Even when using external APIs, the developer has full visibility into every network request. Manus, as a proprietary Meta product, is an unauditable black box.
On X, reactions to the launch were largely positive from a feature perspective — 7,000 likes on the official announcement, minimal bug reports. But privacy concerns surfaced immediately. The question isn’t whether the feature works. It’s whether you trust Meta’s AI to browse your personal folders.
Manus AI vs. The Field (Alternatives & Competitors)
At the $20/Month Price Point Comparison
| Feature | Manus Standard ($20) | ChatGPT Plus ($20) | Claude Pro ($20) | Perplexity Pro ($20) |
|---|---|---|---|---|
| Core Model | Claude 3.5/3.7 + Qwen | GPT-5.4 | Sonnet 4.6 | Multi-model (user selects) |
| Usage Limit | 4,000 credits/mo | 80 msgs/3hrs (GPT-4o) | 5x free tier (dynamic) | 200 Pro searches/week |
| Autonomous Execution | Yes | No | No | No |
| Response Time | 4–80 minutes | Seconds | Seconds | Seconds |
| Output Type | Files, apps, reports | Conversational | Conversational | Cited research |
| Deep Research | Wide Research (50 cr/sub-task) | 10 runs/month | Extended Thinking | 20 runs/month |
| Cost Predictability | Low — no upfront estimate | High — flat rate | High — flat rate | High — flat rate |
| Credit Rollover | No | N/A | N/A | N/A |
For Developers: Claude Code
If you write code for a living, the Manus calculus collapses entirely. Claude Code is a standalone CLI tool that gives you the same Claude reasoning engine Manus uses, with a 400,000 token context window, multi-file agentic editing, and Zero Data Retention. You pay API rates — usage-based, but with full visibility and hard spending caps you control.
A web-debugging task that eats $200 in Manus credits runs approximately $5 through Claude Code. Not approximately $50. Five dollars.
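Using the two (approximate) figures above, the implied markup works out like this:

```python
manus_cost = 200.0  # same task, paid for in Manus credits (approximate)
api_cost = 5.0      # via direct Claude API access (approximate)

markup = manus_cost / api_cost
overhead_pct = (1 - api_cost / manus_cost) * 100
print(f"Orchestration Tax: {markup:.0f}x the raw API price")
print(f"Share of spend going to the wrapper, not the model: {overhead_pct:.1f}%")
# Orchestration Tax: 40x the raw API price
# Share of spend going to the wrapper, not the model: 97.5%
```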
For Researchers: Perplexity Pro
Perplexity’s $200/month tier offers an autonomous agent (built on open-source infrastructure) that reportedly completes multi-source research tasks five to ten times faster than Manus. The $20 tier provides 200 Pro searches per week with cited, multi-model responses and no credit anxiety.
The OpenClaw Variable (Open Source Agent)
OpenClaw is an open-source agent framework that’s gained significant attention — Jensen Huang called it “the next ChatGPT” at a March 2026 event, a quote that pulled 3,700 likes on X. The developer community overwhelmingly favors its transparency and zero cost. However, it requires meaningful technical skill to set up, demands careful sandboxing (one security audit found 500+ vulnerabilities including 8 critical), and comes with none of the polished UX that makes Manus accessible to non-technical users.
OpenClaw is the right choice for developers who want full control and zero vendor lock-in. It is not a Manus replacement for the marketing manager who needs 50 event posters generated by Friday.
Who Should Actually Use Manus
It works for you if:
You run a marketing agency or small operation where the bottleneck is execution speed, not engineering depth. You need competitor pricing scraped from 15 sites, 50 localized event graphics generated and sorted, or a recruitment pipeline built spanning LinkedIn analysis through outbound email drafting. You cannot build or maintain your own agent infrastructure. You have budget for the $200/month Extended plan and treat credit overruns as an operational cost, not a personal financial risk.
At that scale, Manus saves hours of manual work per task. The Orchestration Tax is worth it when the alternative is hiring a contractor.
Skip it if:
You’re a developer. You are paying a massive convenience fee for an interface you do not need. Claude Code, the Claude API, or an OpenClaw-based custom stack will do the same work at 5% to 20% of the cost.
You’re on a tight budget. The $20 Standard plan is a trap. Four thousand credits sounds like a lot until your first complex task eats a quarter of them. If something goes wrong mid-task — and it will — you lose both the credits and the work product.
You handle sensitive data. Organizations dealing with HIPAA, GDPR, or sensitive intellectual property should not route local desktop data through Meta’s cloud infrastructure. The “My Computer” app’s approval controls are a start, but they don’t change the fundamental architecture: your data travels to Meta’s servers for the AI to reason about it.
You need reliability guarantees. Independent testing — from MIT Technology Review’s 2025 evaluation to current Reddit reports — documents a consistent pattern: the agent cuts corners, enters infinite loops, falsely claims task completion, and cannot predict its own resource consumption. For tasks where failure has real consequences, Manus is not yet trustworthy enough to run unsupervised.
The Troubleshooting Paradox
This is the structural contradiction at the heart of Manus’s business. The product is designed for non-technical users. But when a Python script fails inside the cloud sandbox, the agent frequently hallucinates fixes, entering infinite loops that drain credits without producing output.
A technical user catches this immediately from the error logs and intervenes. A non-technical user stares at the loading animation, assuming the agent is working, while their monthly allocation disappears.
Until Manus implements automatic execution pauses when the agent deviates from the objective — or at minimum, real-time credit consumption alerts with kill-switch access — the non-technical value proposition carries a serious financial asterisk.
Is Manus AI Truly the “Second DeepSeek”?
Tech commentators labeled Manus as China’s “second DeepSeek moment.” The comparison doesn’t hold up under any structural analysis.
DeepSeek’s disruption was algorithmic. Its R1 model uses a 671 billion parameter Mixture-of-Experts architecture with only 37 billion active parameters per query. DeepSeek introduced novel optimizations in MoE routing, Multi-Head Latent Attention, and training compute efficiency that fundamentally altered the economics of building frontier AI models. That is base-layer innovation.
Manus has zero foundational model intellectual property. It depends entirely on Anthropic’s Claude and Alibaba’s Qwen for cognitive capability. If Anthropic restricts API access or Claude’s reasoning degrades, Manus’s product degrades with it. The innovation is in the execution layer — UX, systems engineering, workflow orchestration. That’s application-layer work. Important work. Not the same category of breakthrough as what DeepSeek achieved.
The counterargument: Meta paid $2 billion because in 2026, the market values reliable execution as highly as raw intelligence. The baseline reasoning of Claude 3.7 and DeepSeek-V3 has hit a commodity threshold — “smart enough” for most commercial tasks. The bottleneck is the harness that aims that intelligence at real-world software reliably. Manus built that harness, scaled it to $125 million in revenue, and accumulated millions of proprietary workflow interactions worth of training signal. That’s a real moat — just not the kind DeepSeek built.
Final Verdict
Manus AI is real software that solves real problems for a specific audience. If you’re a non-technical team that needs autonomous task execution and you can absorb a $200/month budget with occasional overruns, it delivers value no chatbot currently matches.
But the pricing model is a bet against its own users. A system where the vendor officially warns you not to trust cost estimates, where interrupted tasks are permanently destroyed, where credits vanish with zero recourse — that’s not a product confidence issue. That’s a structural design choice that prioritizes revenue extraction over user trust.
The Meta acquisition adds a layer of geopolitical uncertainty that no product roadmap can resolve. The founders are under an exit ban in China. The regulatory investigation is ongoing. The long-term ownership and governance of the technology is unclear.
For most people reading this, the answer is simple. Developers: Claude Code. Research-heavy workflows: Perplexity Pro. General AI assistance: ChatGPT Plus or Claude Pro. All of them cost $20/month, all of them charge flat rates, and none of them will silently eat your budget while claiming the bill is a hallucination.
Disclosure: Future Stack Reviews has no affiliate relationship with Manus AI. This review is based on publicly available technical documentation, confirmed third-party testing data, user reports across X/Reddit/LinkedIn/G2, the March 2025 system prompt leak, and Manus’s own published help documentation. Pricing and features are current as of March 27, 2026.
This review draws on pricing data corroborated across 3+ independent sources, technical architecture confirmed by Manus’s chief scientist following the March 2025 code leak, credit consumption data from third-party testing, and user complaints documented on G2, Reddit, and X.
The Meta acquisition valuation ($2B–$3B) was reported by WSJ and Reuters; Meta has not disclosed terms.
GAIA benchmark scores are self-submitted. ARR figures are self-reported and unaudited.
The China investigation and founder exit bans were confirmed by Reuters, FT, NYT, and CNBC (March 25–26, 2026).
For a focused look at AI ad generation, see our AdCreative AI review.
