
/insights: The Command That Analyzes How You Code with Claude

Discover how Claude Code generates a comprehensive report of your development habits

By Angelo Lima

Announced in early February 2026 by Anthropic’s Thariq Shihipar, the /insights command is one of the latest additions to Claude Code. The concept: analyze your last 30 days of sessions and generate a detailed report of your habits. Recurring patterns, friction points, inefficient workflows — nothing escapes it.

What is /insights?

/insights is a built-in Claude Code command that analyzes your local session history and produces an interactive HTML report. No configuration needed, no external tracking: everything is based on data already stored on your machine.

# In Claude Code, simply type:
/insights

The report is generated at ~/.claude/usage-data/report.html and opens automatically in your browser.

How It Works Technically

The system processes your data in 6 stages:

Local sessions (~/.claude/projects/)
    ↓
1. Session collection and filtering
    ↓
2. Metadata extraction (tokens, tools, duration, modifications)
    ↓
3. Qualitative transcript analysis via LLM (Haiku)
    ↓
4. Data aggregation across all sessions
    ↓
5. Multi-prompt analysis generating specialized insights
    ↓
6. HTML rendering with interactive visualizations
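The six stages above can be sketched as a toy pipeline. Everything here is illustrative (stub functions, invented field names), not Claude Code's actual implementation:

```python
# Toy sketch of the six-stage pipeline described above.
# All function names, fields, and logic are hypothetical.

def collect(sessions):                       # 1. collection + filtering
    return [s for s in sessions if s["messages"] >= 2]

def extract(session):                        # 2. metadata extraction
    return {"id": session["id"], "tokens": session["tokens"]}

def analyze(session):                        # 3. qualitative LLM pass (stubbed)
    return {"id": session["id"], "facet": "multi-task"}

def aggregate(metas, facets):                # 4. aggregation across sessions
    return {"sessions": len(metas), "tokens": sum(m["tokens"] for m in metas)}

def insights(agg):                           # 5. multi-prompt analysis (stubbed)
    return [f"{agg['sessions']} sessions, {agg['tokens']} tokens consumed"]

def render(agg, notes):                      # 6. HTML rendering
    return "<ul>" + "".join(f"<li>{n}</li>" for n in notes) + "</ul>"

def run(sessions):
    kept = collect(sessions)
    agg = aggregate([extract(s) for s in kept], [analyze(s) for s in kept])
    return render(agg, insights(agg))
```

The point is the shape of the flow: cheap local filtering first, the expensive LLM pass only on what survives, then aggregation before any report is rendered.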

Smart Filtering

Not all sessions are analyzed. Excluded:

  • Subagent sessions (agent-*): noise in the data
  • Internal extraction sessions: internal technical data
  • Sessions with fewer than 2 user messages: not enough context
  • Sessions shorter than 1 minute: accidental starts
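The exclusion rules translate naturally into a single predicate. This is a sketch with hypothetical field names, not Claude Code's real internals:

```python
def should_analyze(session: dict) -> bool:
    """Apply the exclusion rules described above (field names are illustrative)."""
    if session["id"].startswith("agent-"):   # subagent sessions: noise
        return False
    if session.get("internal", False):       # internal extraction sessions
        return False
    if session["user_messages"] < 2:         # not enough context
        return False
    if session["duration_seconds"] < 60:     # accidental starts
        return False
    return True
```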

Data Processing

  • Model used: Haiku (optimal cost/performance ratio)
  • Limit: 50 new sessions analyzed per run
  • Cache: Analyses are cached in ~/.claude/usage-data/facets/<session-id>.json
  • Long sessions: Split into 25,000-character segments, each summarized separately
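The segmentation and caching described above are easy to picture. A minimal sketch, assuming only the 25,000-character limit and the facets path mentioned in this article (the helper names are mine):

```python
from pathlib import Path

SEGMENT_CHARS = 25_000  # long transcripts are split into 25,000-character chunks

def split_transcript(text: str, size: int = SEGMENT_CHARS) -> list[str]:
    """Cut a transcript into fixed-size segments, each summarized separately."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def cached_facet_path(session_id: str) -> Path:
    """Per-session analysis cache location, as described above."""
    return Path.home() / ".claude" / "usage-data" / "facets" / f"{session_id}.json"
```

Because facets are cached per session, a re-run only pays the LLM cost for sessions that haven't been analyzed yet.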

What the Report Contains

1. Statistics Dashboard

A quantified overview of your activity:

  • Sessions and messages: total count, daily average
  • Time spent: cumulative session duration
  • Tokens consumed: input and output
  • Git activity: commits and pushes
  • Active days: frequency and activity streaks
  • Peak hours: when you’re most productive
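A metric like the activity streak can be derived purely from the set of active days. A minimal sketch of one way to compute it (the report's actual logic is not public):

```python
from datetime import date, timedelta

def longest_streak(active_days: set[date]) -> int:
    """Length of the longest run of consecutive active days."""
    best = 0
    for day in active_days:
        # Only start counting at the first day of a run.
        if day - timedelta(days=1) not in active_days:
            length = 1
            while day + timedelta(days=length) in active_days:
                length += 1
            best = max(best, length)
    return best
```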

2. Executive Summary (“At a Glance”)

Four targeted sections:

  • What’s working: your most effective workflows
  • What’s hindering you: recurring friction points identified
  • Quick wins: easy improvements to implement
  • Ambitious opportunities: deeper workflow changes

3. Interactive Visualizations

  • Daily activity chart with timezone selector
  • Tool usage distribution (Read, Edit, Bash, Grep, etc.)
  • Programming language breakdown
  • Satisfaction levels per session
  • Session types: single task, multi-task, iteration, exploration, quick question

4. Friction Analysis

This is the most actionable section. The report categorizes your pain points:

  • Misunderstood request: Claude heads in the wrong direction
  • Wrong approach: a technically incorrect solution is proposed
  • Buggy code: generated code doesn’t work
  • Rejected action: you refused a Claude action
  • Excessive changes: Claude modifies too much
  • Tool failures: tool execution errors

Each friction is documented with concrete examples from your sessions.

5. CLAUDE.md Suggestions

The most valuable section: the report generates copy-paste-ready rules for your CLAUDE.md, based on instructions you repeat often.

Example suggestion:

# Suggestion generated by /insights

## Testing
- Always run tests after modifying a source file
- Use vitest for unit tests, not jest

## Conventions
- Use absolute imports with the @/ alias
- Name files in kebab-case

These suggestions precisely target repetitive patterns detected in your sessions. The idea: tell Claude once what you repeat every day.

6. Feature Recommendations

Based on your usage profile, the report suggests Claude Code features you might not be leveraging:

  • MCP Servers if you frequently interact with external tools
  • Custom Skills if you repeat the same workflows
  • Hooks if you perform manual post-edit actions
  • Headless Mode if you have CI/CD tasks
  • Task Agents if you do complex codebase exploration
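Conceptually, this is a mapping from usage signals to features. A toy rule-based version, with invented signal names, just to make the idea concrete:

```python
def recommend(profile: dict) -> list[str]:
    """Map usage signals to suggested features (signal names are hypothetical)."""
    rules = [
        ("external_tool_calls", "MCP Servers"),
        ("repeated_workflows", "Custom Skills"),
        ("manual_post_edit_steps", "Hooks"),
        ("ci_cd_tasks", "Headless Mode"),
        ("deep_exploration", "Task Agents"),
    ]
    return [feature for signal, feature in rules if profile.get(signal)]
```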

Goal Categories Tracked

The report automatically classifies your sessions by task type:

debug/investigate     │ implement feature    │ fix bug
write script/tool     │ refactor code        │ configure system
create PR/commit      │ analyze data         │ understand codebase
write tests           │ write docs           │ deploy/infra

This classification helps understand how you distribute your time with Claude Code.
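Once each session carries a goal label, the time-distribution view is a straightforward frequency count. A minimal sketch:

```python
from collections import Counter

def goal_distribution(session_goals: list[str]) -> dict[str, float]:
    """Share of sessions per goal category, as percentages."""
    counts = Counter(session_goals)
    total = sum(counts.values())
    return {goal: round(100 * n / total, 1) for goal, n in counts.items()}
```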

Privacy and Data

Important point: everything is local.

  • Analysis runs on your machine via the Anthropic API
  • No source code is uploaded
  • Analysis focuses on interaction patterns, not code content
  • The HTML report stays local, shareable at your discretion
  • Cached facets contain only aggregated metadata

Best Practices

Usage Frequency

Don’t run /insights every day. The sweet spot:

  • Every 2-3 weeks for regular monitoring
  • After a milestone (feature completion, release)
  • After a friction period to identify root causes

Leveraging the Report

  1. Start with frictions: that’s where quick wins hide
  2. Copy relevant CLAUDE.md suggestions into your project
  3. Test recommended features you’re not using yet
  4. Compare reports month over month to measure your progress

A typical loop:

/insights
    ↓
Read identified frictions
    ↓
Copy relevant CLAUDE.md suggestions
    ↓
Test 1-2 recommended features
    ↓
Resume normal work
    ↓
/insights next month → measure evolution

Limitations

  • 50 sessions max per analysis (most recent are prioritized)
  • Haiku model for analysis (good cost/quality ratio, but less nuanced than Opus)
  • 30-day analysis window
  • Report generation can take several minutes on large histories (600+ messages)

My Take

The feature is only a month old as I write this, but it has already surprised me. After my first /insights run, I discovered I was repeating the same formatting instructions in over half my sessions — wasted time that a few lines in CLAUDE.md could have prevented.

It’s a mirror of your habits: sometimes flattering, sometimes brutal. The real value lies in the CLAUDE.md suggestions. Instead of repeating the same instructions session after session, you codify them once and for all. This is exactly the kind of meta-optimization that saves time in the long run.

The fact that everything stays local is a significant plus, especially for those working on proprietary code. We’re still in the early days of this feature — it will be interesting to see how Anthropic evolves it in upcoming versions.


The command is brand new — now is the time to try it and discover what your sessions reveal about you.
