Opening

The big enterprise announcement from Anthropic this morning will dominate feeds all week. Blackstone, Goldman Sachs, Hellman & Friedman (the backing consortium reads like a capital markets press release, not an AI company blog post). That is worth noticing. Anthropic is not positioning Claude as a product you buy. It is positioning Claude as infrastructure you deploy through a services wrapper, the same way Oracle and SAP moved into enterprise decades ago. The signal is not the funding. The signal is the delivery model: small engineering teams embedded with customers, building against existing workflows, avoiding the systems integrators that slow everything down. For operators building on Claude today, that context matters. The platform you are building on is moving toward enterprise depth, not consumer breadth.

200+ AI Side Hustles to Start Right Now

AI isn't just changing business—it's creating entirely new income opportunities. The Hustle's guide features 200+ ways to make money with AI, from beginner-friendly gigs to advanced ventures. Each comes with realistic income projections and resource requirements. Join 1.5M professionals getting daily insights on emerging tech and business opportunities.

Today's Signals

  • Anthropic launched a new enterprise AI services company backed by Blackstone, Goldman Sachs, Hellman & Friedman, General Atlantic, and Sequoia Capital. Target market: mid-sized companies (community banks, regional health systems, manufacturers) that lack the internal engineering capacity to deploy Claude themselves. Small embedded teams handle the build. (Anthropic, May 4)

  • OpenAI's parallel JV is named The Development Company. It raised $4 billion at a $10 billion valuation from 19 investors, including TPG, Brookfield Asset Management, Advent, and Bain Capital. The structure mirrors Anthropic's: alternative asset managers get preferred access to deploy AI through their portfolio companies. Not a product subscription tier, a separately capitalized entity with equity partners. (TechCrunch, May 4)

  • Anthropic and Amazon expanded their compute collaboration by up to 5 gigawatts of new capacity. That number puts it in the range of large hyperscaler buildouts and signals that inference spend at frontier scale is still accelerating, not plateauing. (Anthropic, Apr 20)

  • The Defense Department reached AI deployment agreements with eight companies for classified networks at Impact Level 6 and 7. The list: Nvidia, Microsoft, AWS, OpenAI, Google, Oracle, SpaceX, and Reflection AI. Capabilities run through GenAI.mil, DoD's central AI platform, covering data synthesis, situational awareness, and warfighter decision support. No dollar figures disclosed. The signal: federal AI procurement is now an active program, not a pilot. (Nextgov, May 1)

  • The HN thread "Show HN: Representing Agents as MCP Servers" drew 58 points and 16 comments. The pattern: wrapping an agent in an MCP server interface so other agents can call it as a tool. It is one of the cleaner architectural ideas for multi-agent composition to appear on HN this cycle. (HN, May 2026)

Your reach is rented. And landlords evict.

One algorithm update. One policy change. One bad quarter for a platform that isn't yours. The audience you spent years building disappears overnight.

beehiiv is what happens when you stop renting and start owning. A list that's yours. Revenue that compounds. Growth tools built in from day one.

30% off your first 3 months with code LIST30. Start building today.

The Drops

[Repo] wrg32786/titus-os: A file-based agent operating system for Claude Code. The core is 15 markdown documents forming a kernel: identity, decision logic, session rhythm, authority matrix, memory layer. No database, no server. Persistent state lives in a vault of markdown files with four-tier staleness rules and local semantic search via all-MiniLM-L6-v2. Seven named sub-agents handle specialized routing. Three paired ledgers track claim verification, trust decay, and failure modes. The architecture treats the agent runtime as a queryable file system rather than a chat loop: durable state that survives context resets. Early-stage (2 stars), pre-launch signal. Worth watching if you are thinking about agent memory as a design problem, not a database problem. (github.com/wrg32786/titus-os)
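The staleness-tier idea generalizes beyond this repo. A minimal sketch of what a file-based vault with tiered staleness rules can look like; the tier names, thresholds, and directory layout here are invented for illustration, not taken from titus-os:

```python
import time
from pathlib import Path

# Hypothetical staleness tiers: max age in seconds before a note is
# flagged for review. These thresholds are illustrative only.
TIERS = {
    "hot": 86_400,            # 1 day
    "warm": 7 * 86_400,       # 1 week
    "cold": 30 * 86_400,      # 30 days
    "archive": None,          # never goes stale
}

def stale_notes(vault: Path, tier: str) -> list[Path]:
    """Return markdown notes in `vault/<tier>/` older than the tier's limit."""
    limit = TIERS[tier]
    if limit is None:
        return []
    cutoff = time.time() - limit
    return [p for p in (vault / tier).glob("*.md")
            if p.stat().st_mtime < cutoff]
```

The point of the pattern: the agent (or a session-open hook) queries for stale notes and re-verifies them before trusting them, so durable state degrades gracefully instead of silently rotting.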

[Repo] rampstackco/claude-skills: 60 reusable Claude Skills covering the full website lifecycle, from brand strategy through SEO, development, QA, and incident response. Each skill follows the same SKILL.md structure with under-250-line constraint per file, so they compose cleanly across projects. MCP integrations include Ahrefs, GitHub, Cloudflare, Vercel, PostHog, and Datadog. 98 reference files included. MIT license. 112 stars in first week. (github.com/rampstackco/claude-skills)
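For orientation, Anthropic's skill format puts a YAML frontmatter block (name, description) at the top of each SKILL.md, with the procedure below it. A hypothetical skeleton; the skill name and body here are invented for illustration, not copied from this repo:

```markdown
---
name: seo-audit
description: Run a technical SEO audit on a page and report issues by severity.
---

# SEO Audit

## When to use
The user asks for an SEO review of a URL or a rendered page.

## Steps
1. Fetch the page; extract title, meta description, headings, canonical tag.
2. Check each element against the rules in `references/checklist.md`.
3. Report findings grouped by severity, worst first.
```

The under-250-line constraint is what makes 60 of these composable: each skill stays small enough to load alongside others without crowding the context window.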

[Skill] anthropics/skills · skill-creator: Anthropic's official skill for building skills. The flow: capture intent, draft a SKILL.md, run parallel test cases (with-skill versus baseline), surface diffs through a browser eval viewer, and refine on human judgment. Inputs are user intent, success criteria, and 2-3 test prompts. Output is a finished SKILL.md plus benchmark data. Role-driven, not slash-invokable. The design choice that matters: the parallel A/B step keeps Claude from overfitting to one example. The human stays in the loop before each iteration. Worth running through if you have been writing skills by feel rather than by measurement. (github.com/anthropics/skills/tree/main/skills/skill-creator)

The Stack

[MCP] HarimxChoi/google-surf-mcp: A search MCP server that hits Google without requiring an API key, bypassing the anti-bot layer via a custom request strategy. 130 stars in four days. The practical value: Claude Code agents can run real-time web search in workflows without burning a Serper or Brave API budget. The tradeoff is stability: rate limits and detection patterns will change over time, and this is not a production SLA you can depend on. Use it for research workloads and internal tooling where a failed search degrades gracefully. The setup is straightforward: stdio MCP server, single config entry in your MCP JSON. Worth adding to a development Claude Code environment this week. (github.com/HarimxChoi/google-surf-mcp)
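Wiring a stdio MCP server into Claude Code is one entry in your MCP config. A hypothetical sketch of that entry; the command and package name are assumptions, so check the repo's README for the actual invocation:

```json
{
  "mcpServers": {
    "google-surf": {
      "command": "npx",
      "args": ["-y", "google-surf-mcp"]
    }
  }
}
```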

The Onboard

1. Write your CLAUDE.md before you write any code. The most common Claude Code failure mode is not a bad prompt. It is an agent that does not know what it is operating inside. A CLAUDE.md with three things (the repo's purpose in one sentence, the tech stack layer the agent is allowed to touch, and the output convention it must follow) does more to cut context drift in long sessions than anything else you can do. Write it before the first task. Keep it under 80 lines. Revise it when the agent makes a mistake that good context would have prevented.
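A hypothetical skeleton covering those three things; the project, paths, and conventions are invented for illustration:

```markdown
# CLAUDE.md

Purpose: invoice-parsing API that turns uploaded PDFs into line-item JSON.

Scope: edit `src/` and `tests/` only. Never touch `migrations/` or CI config.

Output conventions:
- Python 3.11, type hints on every public function
- One pytest test per new function
- Commit messages: imperative mood, under 60 characters
```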

2. Use --continue to preserve session context across terminal restarts. Claude Code drops in-session memory when you close the terminal. Running claude --continue on restart loads the most recent conversation and picks up where the prior session left off. Pair this with a /close command pattern that writes a one-paragraph session summary to a markdown file. Your --continue session then opens with that summary in the first human turn, giving the agent a cold-start context anchor without burning tokens on re-explanation.
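One way to wire the /close half of that pattern is a custom slash command, which Claude Code reads from a markdown file under .claude/commands/. A sketch; the filename and summary-file path are illustrative choices, not a fixed convention:

```markdown
<!-- .claude/commands/close.md -->
Summarize this session in one paragraph: what was built, what is
unfinished, and the first task for the next session. Append the
summary, with today's date, to SESSION_LOG.md, then confirm the
file was written.
```

On restart, `claude --continue` plus a pasted tail of SESSION_LOG.md gives the agent its anchor in one turn.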

3. Be specific about the format you want back. If you ask Claude to "make me a list," you get whatever shape it picks. If you ask for "a table with three columns: name, price, where to buy," you get exactly that. The rule holds for every output: emails, summaries, code, even spoken-style answers. Tell Claude the format up front, not as an afterthought. Vague asks produce vague output, every time.

The Frame

The Chatbot Is Not the Product

Every agent framework that shipped in 2025 defaulted to the same interaction model: prompt in, response out, context cleared. It was the path of least resistance. The chat interface was already built. Users understood it. Developers could ship something in a weekend.

The cost showed up six months later. Agents that reset every session cannot accumulate operational knowledge. They cannot track decisions across projects. They cannot route tasks to the right sub-agent based on a history of what worked. Every session starts cold. The operator compensates by stuffing more context into the system prompt, which hits token limits, raises inference spend, and degrades response quality at the tail end of long context windows. The chat loop is not just a UX choice. It is an architectural constraint that caps what the agent can do.

The repos worth watching right now are not better chat wrappers. They are agent runtimes that treat state as a first-class concern: file-based vaults, session ledgers, staleness rules, semantic recall. The pattern is closer to an operating system than a chatbot. That is not a metaphor. An OS manages resources, schedules processes, and persists state across application lifecycles. The next generation of Claude Code workflows will do the same.

My take: the switch from chat-loop to agent-OS is the same architectural shift as moving from scripts to services. Most operators have not made it yet. The ones who do in the next 90 days will be running materially different workloads by Q4.

Builder's Brief

Freelancer Mini CRM: Paste Zoom transcript, get follow-up email

The premise. Freelancers lose clients because they forget to follow up, not because their work is bad. They tried a Notion CRM, gave up after two weeks, and now track everything in their heads. Buyers are $50-150/hr operators juggling 8 to 15 clients.

The pitch. Paste the Zoom or Otter transcript. Get the extracted summary, the next-step action items, and a draft follow-up email in 10 seconds. No data entry. No template wrestling.

The positioning. Notion CRMs lost the battle for a reason: they require manual entry. Otter and Zoom already export plain-text transcripts. Claude parses a transcript into structured data in one call. The wedge is no-manual-entry-ever, not feature parity.

The build. SQLite with two tables (clients, interactions). One Claude prompt: transcript in, JSON out (client name, topics, commitments, next actions, draft email). Three-panel UI: client list with follow-up badge, paste box with summary, draft email ready to copy. Deploy to Railway. Total build time with Claude Code writing scaffolding: under eight hours.
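A sketch of the data layer, assuming the two-table schema described above. The column names are one reasonable guess, and the Claude call itself is out of frame: `parsed` stands in for the JSON the prompt returns:

```python
import json
import sqlite3

# Hypothetical schema for the two tables named in the build plan.
SCHEMA = """
CREATE TABLE IF NOT EXISTS clients (
    id   INTEGER PRIMARY KEY,
    name TEXT UNIQUE NOT NULL
);
CREATE TABLE IF NOT EXISTS interactions (
    id          INTEGER PRIMARY KEY,
    client_id   INTEGER NOT NULL REFERENCES clients(id),
    topics      TEXT,    -- JSON array
    commitments TEXT,    -- JSON array
    next_action TEXT,
    draft_email TEXT
);
"""

def save_interaction(db: sqlite3.Connection, parsed: dict) -> int:
    """Store one Claude-parsed transcript. `parsed` holds client_name,
    topics, commitments, next_action, draft_email."""
    db.executescript(SCHEMA)
    db.execute("INSERT OR IGNORE INTO clients (name) VALUES (?)",
               (parsed["client_name"],))
    (client_id,) = db.execute("SELECT id FROM clients WHERE name = ?",
                              (parsed["client_name"],)).fetchone()
    cur = db.execute(
        "INSERT INTO interactions "
        "(client_id, topics, commitments, next_action, draft_email) "
        "VALUES (?, ?, ?, ?, ?)",
        (client_id,
         json.dumps(parsed["topics"]),
         json.dumps(parsed["commitments"]),
         parsed["next_action"],
         parsed["draft_email"]))
    db.commit()
    return cur.lastrowid
```

The follow-up badge in the UI falls out of this for free: any client whose latest interaction has a `next_action` and no newer row is overdue.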

The customers. Post a 30-second screen recording in r/freelance showing the paste-to-draft-email flow. Title it factually, no marketing copy. The Indie Hackers Show IH board picks up tools like this. DM 10 people on freelance Twitter who tweet about client-management frustrations. Target operators billing $5K+ monthly.

The math. $19/month. Below the threshold where freelancers think twice. Comparison frame: not other CRMs, but the $200 per hour they lose forgetting to follow up on a warm lead. 100 subscribers is $1,900 MRR. Variable cost per user (Claude API plus Railway): under $3/month. Gross margin north of 84%.
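The unit economics in one pass, using the numbers above and treating $3 as the variable-cost ceiling:

```python
price = 19        # monthly subscription, USD
var_cost = 3.0    # Claude API + Railway per user, stated ceiling
subs = 100

mrr = price * subs                    # 100 subscribers -> $1,900 MRR
margin = (price - var_cost) / price   # ~0.842 at the cost ceiling
```

At the $3 ceiling the margin is 84.2%; it clears 85% once variable cost drops below about $2.85 per user.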

Ship it. One day with Claude Code. Post the demo in r/freelance tonight. First customer before the week is out.

Before You Go

What does your agent remember between sessions? If the answer is nothing, that is not a technical constraint. It is a design choice you made by default. What would change if it remembered everything?

Forward this to one builder who is still running agents like chatbots. They will know what to do with it.

Keep Reading