
COMPARE

The only complete AI engineering toolkit.

Per-line attribution is just the start. Obsly AI adds the LLM cost proxy, the live dashboard, the burndown, the menubar, three SaaS views and a multi-vendor enterprise comparison — none of which the alternatives ship.

Built on the open git-ai standard. Compatible with everything else.

Engineering intelligence

DX · LinearB · Faros

Measure team velocity

DORA, SPACE, cycle time
Developer surveys
No per-line AI attribution
No agent / model awareness
No LLM cost tracking
Closed format, vendor lock-in

The open standard

git-ai

Defines how AI authorship is stored

Per-line attribution standard
Local CLI for blame
Open format · refs/notes/ai
No live dashboard
No LLM cost proxy
No multi-vendor enterprise view

The complete toolkit

Obsly AI

Everything teams actually need to manage AI coding

Per-line attribution (uses git-ai)
LLM cost & usage proxy
Live SSE dashboard + burndown
macOS menubar app
3 SaaS views: buyer, vendor, dev
Multi-vendor enterprise comparison

The gap we close

Engineering intelligence measures your team.
git-ai defines the attribution standard.
Nobody else ships the toolkit.

Live dashboards, cost proxy, burndown, menubar, multi-vendor views — all in one install.

HEAD TO HEAD

Obsly AI vs git-ai

git-ai is the open standard. We use it. But that's where the similarities end — Obsly AI ships 17 capabilities that git-ai doesn't.

Obsly AI

22

capabilities shipping today

git-ai

5

capabilities (just the standard + CLI)

What Obsly AI ships

Attribution (the foundation)

  • Per-line attribution via git-ai v3.0.0
  • Hooks for Claude Code, Cursor, Codex, Windsurf
  • Three KPIs: Adoption · Durability · Churn

Cost & usage observability

  • LLM cost & usage proxy (mitmproxy plugin)
  • Cost per developer / repo / branch / model
  • Cross-tool capture: Claude, Cursor, Codex, ChatGPT
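The per-request cost accounting behind this proxy can be sketched in a few lines. This is a simplified illustration, not Obsly AI's actual implementation: the price table is hypothetical (not real vendor pricing), and the real proxy captures token counts from intercepted traffic via its mitmproxy plugin rather than taking them as arguments.

```python
# Hypothetical per-1M-token prices, for illustration only — not real vendor pricing.
PRICES_PER_M = {
    "claude-sonnet": {"input": 3.00, "output": 15.00},
    "gpt-5": {"input": 2.00, "output": 8.00},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost in currency units of one LLM call from its token counts."""
    p = PRICES_PER_M[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

def rollup(records):
    """Aggregate per-call costs by (developer, repo), the shape a
    cost-per-developer/repo dashboard view would consume."""
    totals = {}
    for r in records:
        key = (r["dev"], r["repo"])
        totals[key] = totals.get(key, 0.0) + request_cost(
            r["model"], r["input_tokens"], r["output_tokens"]
        )
    return totals
```

The same aggregation generalizes to branch and model dimensions by widening the grouping key.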

Dashboards & UX

  • Live SSE dashboard (real-time activity)
  • Weekly budget burndown chart
  • macOS menubar app (live AI % indicator)
  • Enterprise dashboard — multi-vendor comparison
  • Vendor dashboard — team performance
  • Personal dashboard — private dev stats
  • Per-line blame web UI with cost on hover
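A live SSE dashboard ultimately pushes frames in the `text/event-stream` wire format that browser `EventSource` clients consume. A minimal sketch of the frame serialization (the event name and payload fields here are illustrative, not Obsly AI's actual schema):

```python
import json

def sse_frame(event: str, payload: dict) -> str:
    """Serialize one Server-Sent Events frame (text/event-stream format).

    Each frame is an `event:` line plus a `data:` line, terminated by a
    blank line — the delimiter EventSource clients use to split frames.
    """
    return f"event: {event}\ndata: {json.dumps(payload)}\n\n"
```

A server streams a sequence of such frames over a long-lived HTTP response with `Content-Type: text/event-stream`.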

Integration & deployment

  • GitHub App + push-based ingest
  • Reports: HTML one-shot, weekly summaries
  • 11 CLI commands (stats, blame, doctor, ...)
  • Self-hosted on-prem option
  • Multilingual UI (EN · ES)
  • Public roadmap with named owners

Free tier

  • Entire toolkit free for individual devs

What git-ai ships

Attribution (the foundation)

  • Per-line attribution standard (defines it)
  • Hooks for the major agents
  • Local CLI for blame
  • Open format · refs/notes/ai
  • Self-hosted (CLI runs locally)

Cost & usage observability

  • No LLM cost proxy
  • No cost per dev / repo / branch
  • No cross-tool usage capture

Dashboards & UX

  • No live dashboard
  • No weekly burndown
  • No menubar app
  • No enterprise / multi-vendor view
  • No vendor / team dashboard
  • No web UI at all
  • No blame UI with cost

Integration & deployment

  • No GitHub App / cloud ingest
  • No reports (HTML, weekly)
  • CLI: blame only, no full toolkit
  • No multilingual UI
  • No public roadmap

git-ai is excellent at what it does — defining and storing the open attribution standard. Obsly AI is the toolkit you build on top of it.

Full comparison · all platforms

Based on public information from each vendor's website (April 2026)

Capability Obsly AI git-ai Exceeds AI DX LinearB Faros
Attribution core (the open standard)
Per-line AI attribution
Open standard (git-ai v3.0.0 git notes) ✓ uses it ✓ defines it ✗ proprietary
Detect agent + model (Claude / Cursor / Codex / Windsurf) ✓ all 4 ✓ all 4 ~ partial ~ ~ ~
The toolkit on top — what no other tool ships
LLM cost & usage proxy (mitmproxy plugin)
Cost per developer / repo / branch / model
Live SSE dashboard (real-time activity)
Weekly budget burndown chart
macOS menubar app (live AI % indicator)
CLI: ai-stats, ai-blame, doctor (11 commands) ~ blame only
Dashboards & analytics
Three SaaS dashboards (Enterprise · Vendor · Personal) ~ one view ~ team only ~ team only ~ team only
Multi-vendor enterprise comparison (buyer view)
Per-developer private view ✓ local CLI
Three KPIs (Adoption · Durability · Churn) ~ basic
DORA / SPACE / cycle time
Integration & deployment
GitHub App (push-based ingest)
Self-hosted on-prem option ✓ CLI only
Source code never transmitted (metadata only) ~ ~ ~ ~
Single install: pipx install obsly-ai ~ cargo install ✗ SaaS only ✗ SaaS only ✗ SaaS only ✗ SaaS only
Pricing & access
Free tier for individual devs ✓ full toolkit ✓ CLI
Pricing (per dev / month) €0 / €19 / Custom €0 / Team / Ent. Contact Contact ($$$) Contact ($$$) Contact ($$$$)

✓ = supported · ~ = partial / unclear · ✗ = not supported · Sources: vendor public sites, April 2026

The honest distinction

git-ai is the open standard. Obsly AI is the toolkit that runs on it.

git-ai defines how AI authorship is stored in refs/notes/ai and ships a CLI to read it locally. That's a great foundation — and we use it. Obsly AI adds everything an organization needs around that data: an LLM cost proxy, a live activity dashboard, weekly burndowns, a macOS menubar, three SaaS views (buyer, vendor, developer), and a multi-vendor enterprise comparison — none of which git-ai ships.
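Because the attribution lives in git notes, any tool can read it back and compute per-commit AI share. A sketch of that consumer side, assuming a simplified JSON note payload — the real git-ai v3.0.0 note schema differs, and in practice the note would be fetched with `git notes --ref=ai show <commit>`:

```python
import json

def ai_line_share(note_json: str) -> float:
    """Fraction of a commit's lines attributed to AI, from its attribution note.

    Assumes a simplified payload for illustration:
      {"lines": [{"range": [start, end], "author": "ai" | "human"}, ...]}
    The actual git-ai v3.0.0 format differs.
    """
    note = json.loads(note_json)
    ai_lines = total_lines = 0
    for span in note["lines"]:
        start, end = span["range"]
        n = end - start + 1  # ranges are inclusive in this sketch
        total_lines += n
        if span["author"] == "ai":
            ai_lines += n
    return ai_lines / total_lines if total_lines else 0.0
```

The point of the open format is exactly this: the data stays readable without any one vendor's tooling.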

Microsoft Copilot Power BI is the system of record for adoption. Obsly AI is the system of record for outcomes.

Microsoft is the source of truth for adoption, engagement, model usage and Copilot Studio MCP transcripts. We do not duplicate any of that. We add the outcome layer — per-line attribution at the commit level, durability over time, the wasted-agent-work KPI — that Microsoft structurally cannot publish themselves. The two datasets coexist in the same Power BI workspace. Read the full Microsoft positioning →

BUILT TO PAIR WITH MICROSOFT COPILOT

Adoption telemetry, meet outcome telemetry.

Microsoft Copilot Power BI ships excellent adoption telemetry through the AppSource template, the Viva Insights M365 reports, and Copilot Studio analytics. Obsly AI extends that picture with the outcome layer — what landed in git and what survived. Two telemetry layers, one workspace, the complete conversation.

MICROSOFT COPILOT — ADOPTION LAYER

Engagement & reach

  • Active users / engaged users (daily, by team)
  • Seat activation, license utilization, dormancy
  • Department-level adoption breakdowns

Model & feature usage

  • Model breakdown (GPT-5, Claude Sonnet, Gemini, Auto resolved)
  • Chat ask / edit / agent / plan mode usage
  • Lines suggested & accepted in the IDE
  • PR-level Copilot review metrics

Copilot Studio agents

  • Per-tool MCP invocations from Copilot Studio agents
  • Conversation transcripts in Dataverse
  • Power BI via Fabric sync

M365 productivity

  • Copilot in Word, Excel, PowerPoint, Outlook, Teams
  • Assisted-hours estimates (Viva Insights research multipliers)

The Microsoft layer answers: "How is the team engaging with Copilot?"

OBSLY AI — OUTCOME LAYER

What lands in git

  • Per-line AI attribution at the commit level (git notes v3.0.0)
  • Lines actually committed to git (post-edit, post-save, post-prune)
  • Lines that passed the quality gate

What survives over time

  • Durability KPI — lines surviving 30 / 60 / 90 days
  • Churn KPI — lines rewritten by humans within 7 days
  • Per-mode outcome — which workflows produce code that lasts
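The two survival KPIs above reduce to simple arithmetic over per-line lifetimes. A hedged sketch of how they might be computed — the record shape (`committed`/`deleted` dates per AI-attributed line) is an assumption for illustration, not Obsly AI's internal data model:

```python
from datetime import date

def durability(lines, as_of, window_days):
    """Share of AI lines, committed at least `window_days` ago, still present.

    `lines` is a list of dicts: {"committed": date, "deleted": date | None}.
    Lines younger than the window are excluded — they haven't had a
    chance to survive it yet.
    """
    eligible = [l for l in lines if (as_of - l["committed"]).days >= window_days]
    if not eligible:
        return 0.0
    survived = [
        l for l in eligible
        if l["deleted"] is None
        or (l["deleted"] - l["committed"]).days >= window_days
    ]
    return len(survived) / len(eligible)

def churn(lines, window_days=7):
    """Share of AI lines removed or rewritten within `window_days` of commit."""
    if not lines:
        return 0.0
    churned = [
        l for l in lines
        if l["deleted"] is not None
        and (l["deleted"] - l["committed"]).days < window_days
    ]
    return len(churned) / len(lines)
```

Running the same computation at 30-, 60- and 90-day windows yields the durability series the dashboard charts.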

Vendor & team accountability

  • Multi-vendor comparison — same Copilot, different outcomes
  • Per-developer private view (visible only to the dev)
  • Cross-tool unified view: Copilot + Cursor + Claude Code, one dashboard

Per-line cross-correlation

  • MCP tool calls correlated with surviving lines per commit
  • Session-to-commit attribution via the LLM proxy + git notes

The Obsly AI layer answers: "How much of what they ship is still in production?"

Together, both layers are what every CIO running an AI program in 2026 needs in front of them.

The Microsoft layer keeps your existing Power BI workspace as the source of truth for adoption. Obsly AI ships as a Power BI semantic model that lives next to it as a second data source — same workspace, no migration, no replacement, compatible with Power BI Premium and Microsoft Fabric. The customer gains the outcome conversation that the Microsoft AE has been asked about repeatedly in QBRs. Read the full Microsoft pairing positioning →

Which one is for you?

Use DX / LinearB / Faros if

  • You need DORA/SPACE metrics for engineering management
  • You want self-reported developer surveys
  • You don't care which lines are AI vs human
  • Budget is not a constraint

Use git-ai if

  • You only need the open attribution standard, nothing else
  • A local CLI for personal blame is enough
  • You don't need cost tracking, dashboards, or multi-vendor views

Use Obsly AI if

  • You want the complete toolkit, not just the standard
  • You need LLM cost tracking on top of attribution
  • You manage multiple vendors and need to compare them
  • You want live dashboards, burndowns, menubar — all in one install
  • You want it open and portable (we use git-ai underneath)

Honest note

Obsly AI is built on the open git-ai v3.0.0 standard. We don't compete with it — we build on it. Your attribution data lives in refs/notes/ai, the format is portable, and if you ever want to stop using Obsly AI you can read your data with any git-ai tool. The same data portability is not available with the other vendors listed here.

Get the complete toolkit.

Free CLI. Eleven commands. One install. Two minutes.

Get started See the toolkit