
Overview

Integrate GitHub Analytics to measure and track engineering performance metrics directly from CORE. Access the four DORA metrics (Deployment Frequency, Lead Time for Changes, Change Failure Rate, and Time to Restore Service) along with additional delivery, stability, and code quality metrics to gain insight into your team’s productivity and engineering excellence.

Setup Guide

  • Authentication Type: OAuth 2.0

How to Connect

  1. Navigate to CORE Dashboard → Integrations
  2. Click on the GitHub Analytics card
  3. Click Connect to authorize CORE
  4. You’ll be redirected to GitHub to authorize the application
  5. Grant the required permissions (read access to repositories and their data)
  6. You’ll be redirected back to CORE once connected

Use Cases

Scenario: Get all four key DORA metrics for your repository
"Calculate all DORA metrics for the 'myorg/myapp' repository
for the last 30 days"
The agent will use all_metrics to calculate:
  • Deployment Frequency: releases per week
  • Lead Time for Changes: time from commit to production
  • Change Failure Rate: percentage of failed deployments
  • Time to Restore Service: recovery time from failures
Scenario: Track how often your team deploys to production
"What's the deployment frequency for 'company/product'?
Show me the last 60 days and compare it to the previous 60 days"
The agent will use deployment_frequency with:
  • days: 60
  • compareWithPrevious: true
This reveals trends in deployment velocity and flags accelerations or slowdowns.
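As a sketch, the arguments behind that deployment_frequency call can be written out as a plain dictionary. The parameter names come from the Common Parameters table in this document; the "tool"/"arguments" envelope is illustrative, not a fixed wire format:

```python
# Hypothetical argument payload for the deployment_frequency tool.
# Parameter names match the Common Parameters table; the envelope
# shape depends on your MCP client.
request = {
    "tool": "deployment_frequency",
    "arguments": {
        "owner": "company",           # repository owner
        "repo": "product",            # repository name
        "days": 60,                   # current 60-day window
        "compareWithPrevious": True,  # also compute the prior 60 days
    },
}
```

With compareWithPrevious set, the tool reports both the current and the preceding period, which is what enables the trend comparison described above.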
Scenario: Understand how long it takes from code commit to production
"Calculate the lead time for changes in 'team/api' repository.
I want to see if we're improving our time to production"
The agent will use lead_time_for_changes to measure the average time from first commit to deployment, helping identify bottlenecks in your delivery pipeline.
Scenario: Evaluate pull request merge times and throughput
"Show me PR metrics for 'org/web-app':
- Average time to merge a PR
- Number of PRs merged per week
- Average PR size"
The agent will use:
  • pr_merge_time: Average hours from PR creation to merge
  • pr_throughput: PRs merged per week (velocity indicator)
  • pr_size: Average lines changed per PR (complexity indicator)
Scenario: Measure overall team productivity and commit activity
"How productive is our team on the 'core' repository?
Show me commit frequency and compare with last month"
The agent will use commit_frequency to show:
  • Commits per week to main branch
  • Week-over-week trends
  • Team activity patterns
Scenario: Build a comprehensive performance dashboard
"Create a performance report for 'company/main-app':
- All DORA metrics for the last quarter
- Stability metrics (change failure rate, hotfix rate, revert rate)
- Code quality (average PR size)
- Compare current period with previous quarter"
The agent will use all_metrics with custom date ranges to generate a comprehensive engineering performance report suitable for stakeholder presentations.
Scenario: Understand your system’s stability and reliability
"How stable is our 'backend/api' repository?
Show me change failure rate, hotfix rate, and revert rate for the last month"
The agent will use:
  • change_failure_rate: Percentage of deployments causing issues
  • hotfix_rate: Percentage of emergency releases
  • revert_rate: Percentage of merged PRs that get reverted
These metrics together indicate system stability and code quality.
Scenario: Find where delays occur in your delivery process
"Analyze delivery metrics for 'team/service':
- How often do we deploy?
- How long does it take from commit to deployment?
- How long do PRs sit before being merged?
- What's our change failure rate?"
The agent will correlate multiple metrics to identify whether delays are in:
  • Code development (commit frequency)
  • Code review (PR merge time)
  • Deployment process (deployment frequency, lead time)
  • Quality assurance (change failure rate)
Scenario: Analyze metrics for specific periods
"Calculate all metrics for 'org/app' from 2024-01-01 to 2024-03-31
(Q1 performance review)"
Using startDate and endDate parameters allows analysis of:
  • Quarterly performance reviews
  • Sprint-specific metrics
  • Post-incident analysis windows
  • Year-over-year comparisons
Scenario: Understand the impact of incidents on metrics
"For 'platform/core', what's the change failure rate?
Use custom incident labels: ['critical', 'p0', 'sev1']"
The agent will calculate failure rate based on your organization’s incident classification system, enabling accurate measurement of production stability.
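The metric itself is easy to state: a deployment counts as a failure when an associated issue carries one of the configured incident labels. A minimal sketch of that definition (hypothetical data; not the integration's actual implementation):

```python
def change_failure_rate(deployments, incident_labels):
    """Percentage of deployments tagged with an incident label.

    Sketch of the metric's definition; `deployments` is assumed to be
    a list of dicts with a "labels" list per deployment.
    """
    labels = set(incident_labels)
    failed = sum(1 for d in deployments if labels & set(d.get("labels", [])))
    return 100.0 * failed / len(deployments) if deployments else 0.0

deploys = [
    {"tag": "v1.0", "labels": []},
    {"tag": "v1.1", "labels": ["sev1"]},  # flagged by a custom label
    {"tag": "v1.2", "labels": ["docs"]},
    {"tag": "v1.3", "labels": []},
]
rate = change_failure_rate(deploys, ["critical", "p0", "sev1"])
# rate == 25.0 -> one of four deployments matched an incident label
```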

Understanding the Metrics

DORA Metrics (Industry Standard)

Deployment Frequency: How often releases happen to production
  • Elite: On-demand
  • High: Weekly releases
  • Medium: Monthly releases
  • Low: Less than monthly
Lead Time for Changes: Time from code commit to production
  • Elite: Less than 1 hour
  • High: 1 hour to 1 day
  • Medium: 1 day to 1 week
  • Low: More than 1 week
Change Failure Rate: Percentage of deployments causing failures
  • Elite: 0-15% failure rate
  • High: 16-30%
  • Medium: 31-45%
  • Low: 46%+
Time to Restore Service: How quickly you recover from failures
  • Elite: Less than 1 hour
  • High: 1-24 hours
  • Medium: 1-7 days
  • Low: More than 7 days
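The tier boundaries above are simple thresholds, so mapping a measured value onto them is mechanical. A small sketch for Lead Time for Changes (hours as input; function name is ours, not part of the integration):

```python
def lead_time_tier(hours):
    """Map a lead-time-for-changes value (in hours) onto the DORA
    performance tiers listed above."""
    if hours < 1:
        return "Elite"    # less than 1 hour
    if hours <= 24:
        return "High"     # 1 hour to 1 day
    if hours <= 24 * 7:
        return "Medium"   # 1 day to 1 week
    return "Low"          # more than 1 week

# lead_time_tier(0.5) -> "Elite"; lead_time_tier(48) -> "Medium"
```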

Delivery Metrics

  • PR Merge Time: Average hours from PR creation to merge (indicates review efficiency)
  • PR Throughput: Number of PRs merged per week (indicates team velocity)
  • Commit Frequency: Number of commits to main branch per week (indicates development activity)
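These delivery metrics are simple aggregates; PR Merge Time, for instance, is just the mean open-to-merge interval expressed in hours. A minimal sketch with made-up timestamps (not the integration's internal code):

```python
from datetime import datetime

def pr_merge_time_hours(prs):
    """Average hours from PR creation to merge, per the definition above."""
    spans = [
        (pr["merged_at"] - pr["created_at"]).total_seconds() / 3600
        for pr in prs
    ]
    return sum(spans) / len(spans) if spans else 0.0

prs = [
    {"created_at": datetime(2024, 1, 1, 9), "merged_at": datetime(2024, 1, 1, 15)},  # 6 h
    {"created_at": datetime(2024, 1, 2, 9), "merged_at": datetime(2024, 1, 2, 19)},  # 10 h
]
# pr_merge_time_hours(prs) -> 8.0
```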

Stability Metrics

  • Hotfix Rate: Percentage of emergency releases (indicates production pressure)
  • Revert Rate: Percentage of merged PRs that get reverted (indicates code quality issues)
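Both stability rates are plain ratios over the populations described above; Revert Rate, for example, is reverted PRs over merged PRs. A one-line sketch with made-up counts:

```python
def revert_rate(merged_prs, reverted_prs):
    """Percentage of merged PRs that were later reverted."""
    return 100.0 * reverted_prs / merged_prs if merged_prs else 0.0

# 3 reverts out of 120 merged PRs -> 2.5%
```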

Integration Notes

  • All metrics are calculated in real-time from GitHub data
  • Analysis includes merged and released code only
  • Custom date ranges provide flexibility for any analysis window
  • Week-over-week comparisons help identify trends
  • Metrics support both organization-owned and personal repositories

Scopes

  • repo - Repository access (read)
  • read:org - Read organization data
  • read:user - Read user profile data

Available MCP Tools

The GitHub Analytics integration provides 10 tools for comprehensive performance tracking:

| Tool Name | Description |
| --- | --- |
| all_metrics | Calculate all four DORA metrics in one call and generate a comprehensive performance report. Supports custom date ranges. |
| deployment_frequency | Calculate deployment frequency - number of releases/deployments per week. DORA metric for delivery speed. |
| lead_time_for_changes | Calculate lead time for changes - time from first commit to production deployment (in hours/days). DORA metric for delivery speed. |
| pr_merge_time | Calculate PR merge time - average time from PR creation to merge (in hours). Delivery speed metric. |
| pr_throughput | Calculate PR throughput - number of PRs merged per week. Delivery speed metric. |
| pr_size | Calculate PR size - average lines changed per PR. Code quality/complexity metric. |
| commit_frequency | Calculate commit frequency - number of commits to main branch per week. Delivery speed metric. |
| change_failure_rate | Calculate change failure rate - percentage of deployments causing production issues. DORA metric for stability. |
| hotfix_rate | Calculate hotfix rate - percentage of emergency releases. Stability metric. |
| revert_rate | Calculate revert rate - percentage of merged PRs that get reverted. Stability metric. |

Tool Parameters

Common Parameters

Most analytics tools accept the following parameters:
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| owner | string | Yes | Repository owner (organization or user) |
| repo | string | Yes | Repository name |
| days | number | No | Number of days to analyze (default: 30) |
| startDate | string | No | Start date in YYYY-MM-DD format (overrides days) |
| endDate | string | No | End date in YYYY-MM-DD format (default: today) |
| compareWithPrevious | boolean | No | Compare with previous period for week-over-week analysis (default: false) |
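To illustrate the precedence described above (startDate/endDate override days), a fixed-window request for a Q1 review could carry arguments like the following. The parameter names are from the table; the repository names are placeholders:

```python
# Hypothetical arguments for a fixed-window analysis. When both
# startDate and endDate are supplied, the 30-day default for `days`
# is ignored, per the Common Parameters table.
args = {
    "owner": "org",
    "repo": "app",
    "startDate": "2024-01-01",  # YYYY-MM-DD
    "endDate": "2024-03-31",
}
```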

Specialized Parameters

  • Commit Frequency: Includes branch parameter (default: main) to analyze specific branches
  • Change Failure Rate: Includes incidentLabels array parameter (default: ["incident", "production", "outage", "bug"]) to identify production incidents
  • Hotfix Rate: Includes hotfixPatterns array parameter (default: ["hotfix", "emergency", "patch"]) to identify emergency releases

Need Help?

Join our Discord community for support in #core-support