
FOR TEAMS ALREADY USING MICROSOFT COPILOT

Your Power BI already tells you who's using Copilot.
We tell you what stays.

Microsoft Copilot measures activity in the IDE. Obsly AI measures what lands in git and is still there 30 days later. Both answers live in the same Power BI workspace.

No migration. We don't replace anything Microsoft ships; we add a second data source next to the templates you already have.

TWO LAYERS, ONE TEAM

Why both layers belong on the same dashboard

Two developers, same Copilot license, same agent-mode session count. Microsoft Copilot Power BI shows their adoption activity. Obsly AI shows their delivery outcome. Both numbers are real, both numbers matter, and the conversation gets a lot richer when they sit on the same canvas.

                                       Dev A    Dev B
Agent-mode sessions                       12       12
Accept-button clicks (Microsoft)          47        8
Lines accepted in IDE (Microsoft)        210      847
Lines committed to git (Obsly AI)        142    1,684
Lines surviving 30 days (Obsly AI)       110    1,503
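The divergence in the table reduces to two ratios per developer. A minimal Python sketch using the numbers above; the ratio names ("ide_to_commit", "durability_30d") are ours for illustration, not product metrics:

```python
# Funnel ratios for the two developers in the table above.
# Ratio names are illustrative, not Microsoft or Obsly AI API fields.
devs = {
    "Dev A": {"accepted_ide": 210, "committed": 142, "surviving_30d": 110},
    "Dev B": {"accepted_ide": 847, "committed": 1684, "surviving_30d": 1503},
}

def funnel(d):
    # Share of IDE-accepted lines that reached a commit, and share of
    # committed lines still present after 30 days. The first ratio can
    # exceed 1.0 when agent edits commit lines the accept counter never
    # recorded, which is exactly the Dev B case in the table.
    return {
        "ide_to_commit": d["committed"] / d["accepted_ide"],
        "durability_30d": d["surviving_30d"] / d["committed"],
    }

for name, d in devs.items():
    f = funnel(d)
    print(f"{name}: {f['ide_to_commit']:.0%} of accepted lines committed, "
          f"{f['durability_30d']:.0%} of committed lines survive 30 days")
```

The point of putting both ratios on one canvas: neither layer alone would show that Dev B's adoption numbers understate their delivered, durable output.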

Both layers are right. They answer different questions. The Microsoft layer answers "how is the team engaging with Copilot?" — which is essential for adoption health, license utilization, and feature rollout. The Obsly AI layer answers "how much of what they ship is still in production?" — which is essential for ROI conversations, vendor accountability, and outcome KPIs. A CIO who wants to brief the board needs both. A vendor QBR with both numbers is a structured conversation; with only one, it is a guessing game.

WHAT THE MICROSOFT LAYER MEASURES

The four metrics already in your Copilot Power BI report

Sourced from the GitHub Copilot Usage Metrics API (April 2026 GA) and the Viva Insights M365 Copilot reports. These are the fields the AppSource template, your custom Power BI workspace, and Microsoft's own dashboards pull from. Each one answers a real adoption question — and points to a complementary outcome question we help answer.

ADOPTION SIGNAL

PR activity (commits, merges, reviews)

Sourced from pull_requests.total_merged, total_merged_created_by_copilot, and median_minutes_to_merge_copilot_authored. Tells you how much of your delivery flow has Copilot activity attached to it.

→ Pairs with: durability of those lines after merge

ADOPTION SIGNAL

Lines accepted in the IDE

The total_code_lines_accepted field. Counts the lines of suggestions accepted across all chat-panel modes. A great indicator of how much code volume is flowing through Copilot in a given week.

→ Pairs with: lines that ended up in the commit and survived

ADOPTION SIGNAL

Mode usage (ask / edit / agent / plan)

The chat_panel_agent_mode, chat_panel_edit_mode and related fields. Tells you which workflows your team is adopting and how the mix changes over time.

→ Pairs with: outcome per mode — which workflows produce code that lasts

ADOPTION SIGNAL

Accept events

The total_code_acceptances field. Counts how often the team is engaging with the Accept button across all modes. A strong indicator of whether the team has integrated Copilot into their workflow.

→ Pairs with: how much of each accept actually shipped
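The four adoption fields above can be flattened into one daily row for a Power BI table. A sketch in Python; only the field names (total_code_acceptances, total_code_lines_accepted, chat_panel_agent_mode, chat_panel_edit_mode, pull_requests.*) come from the sections above — the payload shape is an assumption for illustration:

```python
import json

# Hypothetical daily usage-metrics payload. The JSON shape is assumed;
# only the field names are taken from the adoption signals above.
payload = json.loads("""
{
  "day": "2026-04-01",
  "total_code_acceptances": 55,
  "total_code_lines_accepted": 1057,
  "chat_panel_agent_mode": 12,
  "chat_panel_edit_mode": 7,
  "pull_requests": {
    "total_merged": 9,
    "total_merged_created_by_copilot": 4
  }
}
""")

def adoption_row(p):
    # One flat row per day: the four adoption signals plus the share of
    # merged PRs with Copilot attached.
    pr = p["pull_requests"]
    return {
        "day": p["day"],
        "accepts": p["total_code_acceptances"],
        "lines_accepted": p["total_code_lines_accepted"],
        "agent_sessions": p["chat_panel_agent_mode"],
        "copilot_pr_share": pr["total_merged_created_by_copilot"] / pr["total_merged"],
    }

print(adoption_row(payload))
```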

THE FULL CHAIN, FROM PROMPT TO PRODUCTION

Microsoft covers the IDE side. Obsly AI extends through the git side.

Every line of AI-assisted code travels nine steps from the moment the developer opens an agent-mode session to the moment that code is still running in production a month later. The first five steps live in the IDE — Microsoft Copilot is the natural system of record. The last four live in git — that is where Obsly AI extends the picture so the chain is complete.

MICROSOFT MEASURES HERE (steps 1–5)

1. Prompt: dev writes in the chat panel
2. Agent iterates: read · edit · run, N tool calls
3. Accept click: total_code_acceptances ++
4. Save: file written to disk
5. Power BI: daily aggregate of accept counts

OBSLY AI MEASURES HERE (steps 6–9)

6. git commit: PreToolUse / PostToolUse hooks attribute lines per agent
7. Quality gate: Sonar · tests · review, pass or fail?
8. PR merged: refs/notes/ai, git-ai v3.0.0
9. 30-day check: Durability KPI · Churn KPI

Steps 1–5: IDE adoption telemetry → Microsoft Power BI. Steps 6–9: git outcome telemetry → Obsly AI semantic model. Both flow into the same workspace.
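The per-line attribution in step 6 can be pictured as a note payload attached to each commit. A minimal sketch; the JSON shape below is invented for illustration only, the real format is whatever the git-ai v3.0.0 standard defines:

```python
import json

# Invented attribution payload for one commit (imagined as living under
# refs/notes/ai). The actual git-ai v3.0.0 schema may differ; this only
# illustrates summing attributed lines per agent for steps 6-9.
note = json.loads("""
{
  "commit": "abc123",
  "lines": [
    {"file": "api/handler.go", "start": 10, "end": 34, "agent": "copilot-agent"},
    {"file": "api/handler.go", "start": 40, "end": 44, "agent": "human"},
    {"file": "api/routes.go",  "start": 1,  "end": 20, "agent": "copilot-agent"}
  ]
}
""")

def lines_per_agent(note):
    # Count attributed lines per agent; line ranges are inclusive.
    totals = {}
    for span in note["lines"]:
        n = span["end"] - span["start"] + 1
        totals[span["agent"]] = totals.get(span["agent"], 0) + n
    return totals

print(lines_per_agent(note))
```

Once lines carry an agent label at commit time, the 30-day check in step 9 is a straight comparison of these totals against what still exists in HEAD.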

WHERE ADOPTION ENDS AND OUTCOME BEGINS

The questions each layer is built to answer

Adoption questions — answered by Microsoft Copilot Power BI

Is the team using Copilot? Which models? Which modes? What is the seat utilization across departments? How much of our PR activity has Copilot attached to it? Which features are getting traction? These are the questions a Microsoft AE and a Microsoft Partner Manager can answer with the existing templates. They are essential — and Microsoft is the right system of record for them.

Outcome questions — answered by Obsly AI

Of the lines accepted in the IDE, how many landed in the commit? How many passed the quality gate? How many are still in production at 30, 60, 90 days? Which agent / model combinations produce code that lasts longer in our codebase? When two vendors use the same Copilot, why are their durability numbers different? These questions require observing git directly, line by line — that is the layer Obsly AI brings to the table.
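The durability question above reduces to: of the attributed lines committed at least N days ago, what fraction is still present in HEAD? A schematic sketch with made-up records; the function and field names are ours, not an Obsly AI API:

```python
from datetime import date

# Made-up records: AI-attributed lines with their commit date and whether
# each line still exists in HEAD at the check date.
committed = [
    {"line_id": i, "committed_on": date(2026, 1, 10), "alive": i % 4 != 0}
    for i in range(100)  # 100 attributed lines, 75 still alive
]

def durability(records, check_date, window_days):
    # Of lines committed at least `window_days` before `check_date`,
    # return the fraction still present; None if no line is old enough.
    eligible = [r for r in records
                if (check_date - r["committed_on"]).days >= window_days]
    if not eligible:
        return None
    return sum(r["alive"] for r in eligible) / len(eligible)

print(durability(committed, date(2026, 2, 15), 30))  # -> 0.75
```

Running the same function at 30, 60, and 90-day windows, grouped by agent or model label, yields exactly the per-vendor durability comparison described above.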

Together — the conversation a CIO can take to the board

When both layers are visible in the same Power BI workspace, the conversation about Copilot ROI stops being a debate and becomes structured. The Microsoft layer shows the investment is being adopted. The Obsly AI layer shows the investment is producing code that lasts. The two together are what every CIO running an AI program in 2026 needs in front of them at the next quarterly review.

This is the conversation we want with your Microsoft Partner Manager. Obsly AI is not a replacement for any Microsoft product — it is an extension that makes the existing Microsoft Copilot investment more defensible to the board. We ship as a Power BI semantic model that lives next to the Microsoft templates in the same workspace. The customer keeps everything they bought from Microsoft and gains the outcome layer the Microsoft AE has been asked about repeatedly in QBRs.

ONE POWER BI. TWO TELEMETRY LAYERS.

How it lands in your existing workspace

Your analyst already has a Power BI workspace with Microsoft Copilot templates loaded. Obsly AI ships a DAX-queryable semantic model that lives next to them as a second data source. Your analyst drags it into existing reports next to the Microsoft Copilot Metrics chart. No migration. No replacement. Compatible with Power BI Premium, Power BI Service, and Microsoft Fabric.

YOUR POWER BI WORKSPACE

Microsoft data sources:
- GitHub Copilot Metrics App (AppSource · adoption · model usage · accepts)
- M365 Copilot Adoption Report (Viva Insights · Word/Excel/Teams adoption)
- M365 Copilot Impact Report (Viva Insights · assisted hours and multipliers)
- Custom GitHub REST joins (analyst-built commit + PR data)

Obsly AI semantic model:
- Adoption · Durability · Churn: 3 KPIs at the line / commit level
- Per-line agent & model attribution via git-ai v3.0.0 + LLM proxy
- Per-session MCP usage cross-correlated with commits
- ⭐ Wasted-agent-work KPI: cumulative agent activity minus surviving lines

Your Microsoft templates stay. Obsly AI adds a second data source via OData over HTTPS. Standard Power BI integration pattern, certified for Premium and Fabric workspaces.
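Adding the second data source follows the standard Power BI OData pattern. As a sketch, the feed URL an analyst would paste into Get Data → OData feed might look like the one built below; the host and entity set are placeholders, not a documented Obsly AI endpoint:

```python
from urllib.parse import urlencode

# Placeholder host and entity set; the real Obsly AI feed URL and its
# query options would come from the product, not this sketch.
BASE = "https://feed.example-obsly.invalid/odata/DurabilityKpis"

def feed_url(team, window_days):
    # Standard OData v4 query options: filter by team and KPI window,
    # newest rows first. safe="$" keeps the $ prefix unencoded.
    query = urlencode({
        "$filter": f"team eq '{team}' and windowDays eq {window_days}",
        "$orderby": "day desc",
    }, safe="$")
    return f"{BASE}?{query}"

print(feed_url("platform", 30))
```

Because it is plain OData over HTTPS, the same URL works from Power BI Desktop, the Power BI Service, and a Fabric dataflow without a custom connector.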

Pair the outcome layer with your Microsoft Copilot Power BI workspace.

Ten minutes of setup. Zero migration. Built on the same Power BI standards your team already uses.