Standard Metrics For AI Collaboration

The AI Collaboration Index (ACI) gives developers, recruiters, hiring managers, and enterprises a common language for AI effectiveness and fluency.

121
ACI Score
The Orchestrator-Sprinter*
* Archetype matching requires Full Report.

One score, easily understood

  • Standardized: normalized to mean 100, standard deviation 15
  • Universal: one number, same meaning everywhere
  • Verifiable: Report-ID for independent confirmation
  • Actionable: track improvement over time
[Chart: ACI scores plotted over time on a 70–130 scale, trending upward to 121]

// ACI: mean 100, std dev 15
//
// If you can't measure it, you can't improve it.
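As a minimal sketch of what "mean 100, std dev 15" implies, assuming standard z-score scaling (the script's actual population statistics are not public, and the raw values below are illustrative):

```javascript
// Sketch of ACI-style normalization (assumed: z-score scaling against
// population statistics; the real script's parameters are not published).
function toAciScale(raw, populationMean, populationStd) {
  const z = (raw - populationMean) / populationStd;
  return Math.round(100 + 15 * z); // mean 100, std dev 15, like IQ scaling
}

// Example: a raw composite of 0.82 against an assumed population
// mean of 0.61 and std dev of 0.15 lands at 121.
console.log(toAciScale(0.82, 0.61, 0.15));
```

On this scale, roughly 68% of the population falls between 85 and 115, so a 121 sits well above one standard deviation from the mean.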

Developers

Measure your AI collaboration skills. Improve over time. Stand out to employers.

  1. Download the ACI script
  2. Run it locally on your project AI transcripts
  3. Get an instant ACI score preview in the CLI
  4. Send the Zip for a verified score, full report, and shareable Report-ID
Request Script

Recruiters

Objective candidate signal. Independent standard. Compare with confidence.

  1. Candidate shares their Report-ID
  2. View the report at aci-metrics.com/verify/Report-ID
  3. Compare candidates fairly
  4. Identify talent and opportunities
  5. Recommend or choose with verified data
Request Sample

Enterprise

Set benchmarks. Drive continuous improvement. Maximize ROI on AI tooling.

  1. Get a live demo
  2. Run a pilot with a small team
  3. Establish baselines
  4. Automate and scale
  5. Observe trends over time
  6. Align with business objectives
Schedule Demo

// good for developers, hiring, enterprise
//
// solves: you get what you measure
//
// solves: no way to measure ROI on AI spend

Works in your terminal

Run the script locally. Your data stays private. Get your estimated ACI score right in your terminal. Share it with us to get your verified score and full report. For on-premises and air-gapped solutions, Contact Us.

$ node aci-score.js ~/.claude/projects/my-project/

════════════════════════════════════════════════════════════════════════
   AI COLLABORATION INDEX (ACI) - Estimate   

   Report-ID: 7xK9-m2Pq-4R8t-W5nZ-cXv4-aB3y                            
   Generated: Jan 21, 2026                                             
════════════════════════════════════════════════════════════════════════

  ACI SCORE (estimate)    121   ███████████░░░░░

  Velocity                128   █████████████░░░
  Quality                 94    ██████░░░░░░░░░░
  Integration             131   ██████████████░░
  Literacy                119   ██████████░░░░░░

───────────────────────────────────────────────────────────────────────
  RAW METRICS (estimates)
───────────────────────────────────────────────────────────────────────

  Timerange:              Dec 15, 2025 – Jan 20, 2026
  Activity:               38.4 h
  Sessions:               47
  Prompts:                23.5 k
  Tasks:                  63
  Commits:                193
  Deployments:            23
  Concurrency:            2.8
  Modality:               1.1
  Steering:               8.2%
  Tokens:                 1.4 B
  Cost:                   --
  Complexity:             110
  Efficiency:             64.3%


  What does this mean? Explainers at aci-metrics.com/docs

───────────────────────────────────────────────────────────────────────
  DETECTED PATTERNS
───────────────────────────────────────────────────────────────────────

   Parallel orchestration     Terse steering     Rapid iteration

───────────────────────────────────────────────────────────────────────
  TIPS & NEXT-STEPS
───────────────────────────────────────────────────────────────────────

   Providing success feedback is useful for session context
   Text highlighting is useful for providing context to a prompt
   Try encoding your personal shortcuts in your CLAUDE.md
   Try working on your multi-agent management skills

═══════════════════════════════════════════════════════════════════════
  © 2026 ACI Metrics  |  aci-metrics.com/terms
═══════════════════════════════════════════════════════════════════════

  
  ● Generated markdown file: CLAUDE-ACI.md
    You can use this file to manage your AI collaboration work.

  ● Generated Zip file: 7xK9-m2Pq-4R8t-W5nZ-cXv4-aB3y.zip
    Full report and verified score now available.


  Upload Zip file now for full report?

    [Y] Yes, upload the Zip to aci-metrics.com/upload
    [N] No, maybe later

  Press Y or N: _

// runs locally, no network traffic, air-gap, scale, automations
//
// send zip file to ACI Metrics for full report

How Does It Work?

The ACI score incorporates four dimensions,
each with its own subtests and scores.

1
Velocity

How fast do you develop, commit and deploy?

2
Quality

How good is your collaborative output?

3
Integration

How well is AI used in your workflow?

4
Literacy

How effective is your collaboration style?
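The four dimensions above roll up into a single composite. The actual weighting is not published, so the sketch below assumes a simple equal-weight average; the pillar names are taken from the sample report:

```javascript
// Sketch of how four pillar scores might combine into one ACI score.
// Equal weighting is an assumption; the real weights are not published.
function aciComposite(pillars) {
  const weights = { velocity: 0.25, quality: 0.25, integration: 0.25, literacy: 0.25 };
  const total = Object.entries(weights)
    .reduce((sum, [name, w]) => sum + w * pillars[name], 0);
  return Math.round(total);
}

// The sample report's sub-scores (128, 94, 131, 119) average to 118;
// the published 121 suggests non-uniform weights or a difficulty adjustment.
console.log(aciComposite({ velocity: 128, quality: 94, integration: 131, literacy: 119 }));
```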

The scores and reports are first generated locally on your machine
using our downloadable ACI scripts and portable model.

Then, if you want, you can send scrubbed and anonymized data to us
for analysis using our best models and reference data.

1

Local Preview

The data capture, statistical analysis and basic AI interpretation runs entirely on your local machine, where the transcripts are. You can get your ACI scores, useful insights, and a Zip file for upload to us for deep analysis and score verification.

2

Verified Report

We provide a deeper report and a verified (adjusted) ACI score using our server-side models and population data. We don't capture code, and we clean and anonymize the data to the highest level at source. Reports are completely anonymous.

Here is the workflow to follow. (No accounts or payments needed.)

  1. Request Script
    Get the ACI scoring script from our repository. (For now, Contact Us to request the script.)
  2. Run Locally
    Execute on your local transcripts — your data stays private
    (Only compatible with Claude Code transcripts at this time)
  3. View ACI Score
    Estimated ACI score with raw metrics and basic AI-powered analysis right in your terminal
  4. Send Zip
    Upload anonymized data for full analysis and verification using our best proprietary models
  5. View Full Report
    Deep AI-powered insights, personalized feedback, tips and next-steps, tools and templates. Request Sample Report
  6. Share Report-ID
    Share your unique Report-ID. View reports at aci-metrics.com/verify.

// download -> run -> score -> send -> report -> share
//
// paste report to CLAUDE.md for a virtuous circle?

The Full Report

The full report is available when you share the Zip file generated by the script you run locally. In addition to the verified score, the AI-assisted analysis provides the following reports.

Executive Summary

Your full collaboration index. Clear data for self-improvement, tracking or sharing with employers.

Sub-Score Deep-Dive

What actions and behaviors are driving each score. Strengths, weaknesses and opportunities to improve.

Pattern Detection

Behavioral signatures identified: parallel orchestration, visual verification, terse steering, flow states.

Task Analysis

Every task evaluated for complexity and cost. Performance adjusted for difficulty. See how you handle routine fixes vs. novel tasks.

Collaboration Style

MBTI-style profile matching. What is your collaboration persona? Are you an Orchestrator, Refiner, Sprinter, or something else?

Tips & Next Steps

Personalized coaching recommendations. Concrete actions to improve each pillar based on your personal work patterns.

Request Sample Report

// Verified score and full report available when you share your Zip file

Developers

Measure your skills. Get actionable tips and next-steps. Improve over time.

Download Script

Recruiters

Objective assessment criteria. Compare and choose with confidence.

Request Sample

Enterprise

Set benchmarks. Measure ROI. Drive improvement.

Schedule Demo

// sounds useful, let's get going