Summary
Cat Wu argues that Claude's character — low-ego, lighthearted, positive, biased toward action, willing to give honest feedback — is core product surface, not a soft attribute. At Anthropic, the work of shaping it is owned by Amanda; Cat describes the role as "harder than coding because the task is so ambiguous." Implication: AI-native product teams need a discipline of character work alongside engineering and design, and should treat character iterations with the same rigor as feature launches.
What "character" means at Anthropic#
A short list from Cat:
- Low-ego. When told it did something wrong, Claude responds "Oh shoot, like thanks for telling me. Let me fix it."
- Positive. When the user feels stuck on an "insurmountable" task, Claude offers concrete steps and asks if it should start.
- Lighthearted. Cat: "It's like it's lighthearted and fun, but also extremely competent."
- Bias toward action. Coupled with positivity — not just encouragement, but offering to take the next step.
- Honest feedback. Doesn't reflexively agree with everything the user says. (This connects to OpenClaw / open-models conversations where users miss Claude's character specifically because other models sycophantically agree.)
These together describe what Cat calls "a great co-worker."
Why it's product, not polish
Three claims:
- Character is felt at every interaction surface. It's not a tagline; it's what every response sounds like. Users notice instantly when a different model "doesn't have it."
- It's harder than coding. "Coding is easier because you can verify the success. Crafting the character requires a very strong sense of conviction in who Claude should be."
- It's why people miss Claude when migrated. OpenClaw users whose access was capped expressed sadness specifically about the personality, not the capability — implying character is one of the load-bearing reasons for product attachment.
Lenny adds Ben Mann's framing from a prior podcast: "the personality is what makes Claude so good at so many things" — character isn't decoration, it's structural.
Amanda's role
Cat describes Amanda as the person who "molds Claude's character." Two distinct skills:
- Convicted articulation of who Claude should be. Without conviction, character work drifts toward bland averaging.
- Articulating what's successful. Saying why a given response is on-character or off-character is the core eval skill — and the rare ability to do that consistently is what makes character work tractable.
This is the rare-trusted-evaluator pattern: Cat says "there's a handful of people who are much better than others at articulating what makes a specific model or model harness combination good." Amanda for character; the Claude Code team for code-quality vibes.
Eval discipline for character
Character is harder to eval than coding (there is no compiler), but Cat lists two practices:
- Team-lunch vibe checks. Every team member gives qualitative feedback on a new model: "Hey, what is your vibe on the model?" Common signals: "this model is too abrupt," "loves writing memories but quality is uncertain," "doesn't test itself enough."
- Hypothesis → data probe. Vibe-check signals inform which logged data to look at, not the other way around. The team has too much data to mine blind; tacit signal narrows where to look.
This pattern (qualitative-first, data-second) generalizes — see Model Introspection Feedback. Character is the place where it's most clearly load-bearing.
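The hypothesis → data probe pattern above can be sketched in code. This is purely illustrative, not Anthropic's tooling: the types, field names, and the 40-character threshold are assumptions. The point is the shape of the workflow — a qualitative vibe-check signal ("this model is too abrupt") becomes a narrow filter over logged transcripts, instead of mining all logs blind.

```typescript
// Illustrative sketch only; all names and thresholds are hypothetical.
interface LoggedResponse {
  model: string;
  text: string;
}

// Hypothesis from a lunch vibe check: "too abrupt."
// Probe: pull out this model's unusually short replies for human review.
function probeAbruptness(logs: LoggedResponse[], model: string): LoggedResponse[] {
  return logs.filter((r) => r.model === model && r.text.length < 40);
}

const logs: LoggedResponse[] = [
  { model: "new-model", text: "Done." },
  { model: "new-model", text: "Here is a step-by-step plan with context and trade-offs." },
];

// Hand a small, targeted sample to a human reviewer rather than the full log.
console.log(probeAbruptness(logs, "new-model"));
```

The design choice worth noting: the qualitative signal comes first and determines the query, which is the reverse of dashboard-first workflows.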
What can change with new models — and what shouldn't
Cat: "new models force product changes." Most of those changes are removing crutches (see Harness Shrinkage as Models Improve). Character is the place where the opposite discipline applies:
- Capability changes between models; character should be stable.
- Removing prompt sections that enforced character would degrade perceived continuity.
- Character work is about preserving identity across capability jumps, not riding them.
This makes character one of the few harness assets that probably doesn't shrink as models improve.
The "manifesting" verb#
Claude Code's thinking-words list (the verbs shown while Claude is reasoning: "thinking," "considering," "exploring," etc.) surfaced in the source-code leak. Cat's favorite: manifesting. She has it as a sticker.
This is character at its smallest scale — the choice of verb during thinking. Each one is a tiny stylistic call. "Manifesting" reads as gently mystical / wry, fitting the broader low-ego-but-confident character.
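A minimal sketch of what a mechanism like this could look like, to make "character at its smallest scale" concrete. The rotation logic and function names are assumptions, not Anthropic's code; only the example verbs come from the text above.

```typescript
// Hypothetical sketch of a thinking-words rotation; not Anthropic's implementation.
const THINKING_VERBS = [
  "Thinking",
  "Considering",
  "Exploring",
  "Manifesting", // Cat Wu's favorite
] as const;

// Pick a verb for the reasoning spinner. The rng parameter is injectable
// so the choice can be made deterministic in tests.
function pickThinkingVerb(rng: () => number = Math.random): string {
  const i = Math.floor(rng() * THINKING_VERBS.length);
  return `${THINKING_VERBS[i]}…`;
}

console.log(pickThinkingVerb());
```

Even in a sketch this small, every entry in the list is a stylistic call: a curated set of verbs reads very differently from a generic "Loading…".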
Counterpoint / open question
Character is hard to evaluate from outside Anthropic, and there's no clean A/B isolation — character interacts with capability such that it's hard to say how much of "Claude is great" is character vs reasoning. Empirically, model migrations (e.g. OpenClaw users) suggest character does contribute independently. But at the level of a controlled experiment, it's understudied.
Connections
- Cat Wu — articulator
- Anthropic — vendor; Amanda's host
- Claude Code — primary surface for the character
- Harness Shrinkage as Models Improve — character is the exception to the "prompt sections shrink" rule
- Model Introspection Feedback — qualitative-first eval discipline that powers character work
- AI Native Product Cadence — character launches happen at the same cadence as features
- Claude Opus 4.7 — model-specific character tuning happens with each release
- Claude's Constitution / Model Spec — the document side of "who Claude is"; character is the felt-experience side, the constitution is the textual specification
- Model Spec Midtraining (MSM) — character + values now empirically installable via midtraining on synthetic spec documents (May 2026 paper); raises the question of how Amanda's vibe-check eval interacts with MSM-installed traits
- Model Spec Science — empirical study of which spec features generalize best; relevant if Anthropic ever quantifies character consistency across model jumps
- The Bitter Lesson — character is a candidate counterexample: a deliberately hand-crafted asset that may not migrate inward as models scale, unlike the harness scaffolding the bitter lesson dissolves
Open questions
- How is character versioned across model releases? Public commentary doesn't show change-logs at character level.
- Could character be reproduced by competitors via fine-tuning, or is it path-dependent on Anthropic's internal practice?
- For non-coding products like Cowork, does the same character work, or does Cowork need its own character tuning?
Sources
11 articles link here
- Concept: AI Native Product Cadence
Cat Wu's 6mo→1mo→1day cadence at Anthropic: research-preview branding, mission-as-tiebreaker, evergreen launch room, li…
- Concept: Alignment Fine-Tuning (AFT)
Standard post-pretraining stage (SFT + RLHF) for installing values; shallow-alignment failure mode motivates Model Spec…
- Entity: Anthropic
AI safety company / vendor of Claude; mission-as-tiebreaker culture; ~30–40 PMs across teams; Mike Krieger leads Labs r…
- Entity: Cat Wu
Head of Product for Claude Code and Cowork at Anthropic; primary articulator of AI-native product cadence and engineer-…
- Entity: Claude's Constitution / Model Spec
Anthropic Model Spec / Constitution by Askell et al.; document specifying Claude's values + hard constraints (SP1–3, GP…
- Concept: Harness Shrinkage as Models Improve
Prompt scaffolding shrinks each model release; Cat Wu's pruning discipline; Boris Cherny "100 lines of code a year from…
- Essay: Learning to Co-Work with AI: A Software Engineer's Field Guide
Field guide for software engineers in the AI era: 6 skill clusters (taste, harness, alignment-first planning, agent-fri…
- Concept: Model Introspection Feedback
Cat Wu's underrated technique: ask the model why it failed; treat answer as harness-debugging signal not model criticis…
- Concept: Model Spec Science
Empirical study of which Model Spec features best generalize alignment; value explanations > rules alone, specific > ge…
- Concept: Printing Press Software Democratization
Boris Cherny's analogy: 1400s literacy expansion → AI software-writing expansion; domain knowledge displaces coding ski…
- Concept: The Bitter Lesson
Sutton 2019: scaled general methods beat hand-engineered structure; recurring justification across the wiki for dissolv…
