FOR TEAMS ALREADY USING MICROSOFT COPILOT
Microsoft Copilot measures activity in the IDE. Obsly AI measures what lands in git and is still there 30 days later. Both answers live in the same Power BI workspace.
No migration. We don't replace anything from Microsoft. Just a second data source next to the templates you already have.
TWO LAYERS, ONE TEAM
Two developers, same Copilot license, same agent-mode session count. Microsoft Copilot Power BI shows their adoption activity. Obsly AI shows their delivery outcome. Both numbers are real, both numbers matter, and the conversation gets a lot richer when they sit on the same canvas.
| | Dev A | Dev B |
|---|---|---|
| Agent-mode sessions | 12 | 12 |
| Accept-button clicks (Microsoft) | 47 | 8 |
| Lines accepted in IDE (Microsoft) | 210 | 847 |
| Lines committed to git (Obsly AI) | 142 | 1,684 |
| Lines surviving 30 days (Obsly AI) | 110 | 1,503 |
Both layers are right. They answer different questions. The Microsoft layer answers "how is the team engaging with Copilot?" — which is essential for adoption health, license utilization, and feature rollout. The Obsly AI layer answers "how much of what they ship is still in production?" — which is essential for ROI conversations, vendor accountability, and outcome KPIs. A CIO who wants to brief the board needs both. A vendor QBR with both numbers is a structured conversation; with only one, it is a guessing game.
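The gap between the two layers can be made concrete. A minimal sketch that turns the table above into ratios — the helper and its field names are ours for illustration, not from either product:

```python
# Hypothetical helper: the acceptance-to-durability funnel for the two
# developers in the table above. Field names are illustrative only.
def funnel(accepted_ide, committed, surviving_30d):
    """Return how many IDE-accepted lines landed in git, and the
    30-day durability rate of what actually got committed."""
    return {
        "committed_vs_accepted": round(committed / accepted_ide, 2),
        "durability_30d": round(surviving_30d / committed, 2),
    }

dev_a = funnel(accepted_ide=210, committed=142, surviving_30d=110)
dev_b = funnel(accepted_ide=847, committed=1684, surviving_30d=1503)
```

Dev B committing roughly twice as many lines as the Accept button recorded is exactly the blind spot the git layer covers: agent-mode edits land in files without ever passing through an accept click.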
WHAT THE MICROSOFT LAYER MEASURES
Sourced from the GitHub Copilot Usage Metrics API (April 2026 GA) and the Viva Insights M365 Copilot reports. These are the fields the AppSource template, your custom Power BI workspace, and Microsoft's own dashboards pull from. Each one answers a real adoption question — and points to a complementary outcome question we help answer.
ADOPTION SIGNAL
Sourced from pull_requests.total_merged, total_merged_created_by_copilot, and median_minutes_to_merge_copilot_authored. Tells you how much of your delivery flow has Copilot activity attached to it.
→ Pairs with: durability of those lines after merge
ADOPTION SIGNAL
The total_code_lines_accepted field. Counts the lines of suggested code accepted across all chat-panel modes. A great indicator of how much code volume is flowing through Copilot in a given week.
→ Pairs with: lines that ended up in the commit and survived
ADOPTION SIGNAL
The chat_panel_agent_mode, chat_panel_edit_mode and related fields. Tells you which workflows your team is adopting and how the mix changes over time.
→ Pairs with: outcome per mode — which workflows produce code that lasts
ADOPTION SIGNAL
The total_code_acceptances field. Counts how often the team is engaging with the Accept button across all modes. A strong indicator of whether the team has integrated Copilot into their workflow.
→ Pairs with: how much of each accept actually shipped
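For analysts who want these fields before they reach Power BI, a rough sketch of pulling and aggregating them in Python. The endpoint path and headers follow the public GitHub Copilot metrics API as we understand it; the flat field names match the ones above, but the exact nesting varies by API version, so treat both as assumptions to verify against your schema:

```python
import json
from urllib.request import Request, urlopen

# Endpoint per the GitHub Copilot metrics API as we understand it --
# verify against the API version your org is on.
METRICS_URL = "https://api.github.com/orgs/{org}/copilot/metrics"

def fetch_metrics(org: str, token: str) -> list[dict]:
    """Fetch the daily metrics payload (one dict per day)."""
    req = Request(
        METRICS_URL.format(org=org),
        headers={
            "Accept": "application/vnd.github+json",
            "Authorization": f"Bearer {token}",
        },
    )
    with urlopen(req) as resp:
        return json.load(resp)

def total_acceptances(days: list[dict]) -> int:
    """Sum total_code_acceptances across daily records. The flat field
    name is an assumption; in some schema versions it is nested under
    per-editor / per-model breakdowns."""
    return sum(d.get("total_code_acceptances", 0) for d in days)
```

A week of daily records summed this way gives the engagement number the Microsoft layer reports; the Obsly AI layer then asks how many of those accepts survived in git.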
THE FULL CHAIN, FROM PROMPT TO PRODUCTION
Every line of AI-assisted code travels nine steps from the moment the developer opens an agent-mode session to the moment that code is still running in production a month later. The first five steps live in the IDE — Microsoft Copilot is the natural system of record. The last four live in git — that is where Obsly AI extends the picture so the chain is complete.
WHERE ADOPTION ENDS AND OUTCOME BEGINS
Is the team using Copilot? Which models? Which modes? What is the seat utilization across departments? How much of our PR activity has Copilot attached to it? Which features are getting traction? These are the questions a Microsoft AE and a Microsoft Partner Manager can answer with the existing templates. They are essential — and Microsoft is the right system of record for them.
Of the lines accepted in the IDE, how many landed in the commit? How many passed the quality gate? How many are still in production at 30, 60, 90 days? Which agent / model combinations produce code that lasts longer in our codebase? When two vendors use the same Copilot, why are their durability numbers different? These questions require observing git directly, line by line — that is the layer Obsly AI brings to the table.
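Line-level survival can be approximated with plain git. A minimal sketch of the idea (helper names are ours, not a description of Obsly AI's internals): run `git blame --line-porcelain` on a file today and count how many lines are still attributed to a commit made roughly 30 days ago.

```python
import subprocess

def blame_porcelain(path: str) -> str:
    """Current blame output for a file; each line record begins with
    the SHA of the commit that last touched it."""
    return subprocess.run(
        ["git", "blame", "--line-porcelain", path],
        capture_output=True, text=True, check=True,
    ).stdout

def surviving_lines(porcelain: str, commit_sha: str) -> int:
    """Count lines still attributed to commit_sha.

    In --line-porcelain output each record's header line is
    '<sha> <orig-line> <final-line> [<group-size>]'; content lines
    are tab-prefixed and metadata lines start with a keyword.
    """
    count = 0
    for line in porcelain.splitlines():
        parts = line.split()
        if len(parts) >= 3 and parts[0] == commit_sha and parts[1].isdigit():
            count += 1
    return count
```

Dividing that count by the number of lines the commit originally introduced gives a per-commit durability rate; aggregated by author, agent, or model, it becomes the 30/60/90-day numbers discussed above.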
When both layers are visible in the same Power BI workspace, the conversation about Copilot ROI stops being a debate and becomes structured. The Microsoft layer shows the investment is being adopted. The Obsly AI layer shows the investment is producing code that lasts. The two together are what every CIO running an AI program in 2026 needs in front of them at the next quarterly review.
This is the conversation we want with your Microsoft Partner Manager. Obsly AI is not a replacement for any Microsoft product — it is an extension that makes Microsoft Copilot's existing investment more defensible to the board. We ship as a Power BI semantic model that lives next to the Microsoft templates in the same workspace. The customer keeps everything they bought from Microsoft and gains the outcome layer the Microsoft AE has been asked about repeatedly in QBRs.
ONE POWER BI. TWO TELEMETRY LAYERS.
Your analyst already has a Power BI workspace with Microsoft Copilot templates loaded. Obsly AI ships a DAX-queryable semantic model that lives next to them as a second data source. The customer drags it into existing reports next to the Microsoft Copilot Metrics chart. No migration. No replacement. Compatible with Power BI Premium, Power BI Service, and Microsoft Fabric.
Ten minutes of setup. Zero migration. Built on the same Power BI standards your team already uses.