
My HNG Journey. Stage Six: Leveraging Python to Expose DORA Metrics

Introduction

For stage 6, we were tasked with exposing DORA (DevOps Research and Assessment) metrics using Python. This experience taught me valuable lessons about DevOps practices and the intricacies of working with APIs. In this article, I'll walk you through the process, explain what each metric means, and highlight some common pitfalls to watch out for.

What are DORA Metrics?
Before we dive into the code, let's briefly discuss what DORA metrics are:

  • Deployment Frequency: How often an organization successfully releases to production.
  • Lead Time for Changes: The time it takes a commit to get into production.
  • Change Failure Rate: The percentage of deployments causing a failure in production.
  • Time to Restore Service: How long it takes to recover from a failure in production.

These metrics help teams measure their software delivery performance and identify areas for improvement.
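To make the definitions concrete, here is a toy calculation of two of the four metrics over a 30-day window (all figures are hypothetical):

```python
# Hypothetical figures for a 30-day window
days = 30
deployments = 15            # successful releases to production
failed_deployments = 3      # deployments that caused a production failure

deployment_frequency = deployments / days               # releases per day
change_failure_rate = failed_deployments / deployments  # fraction of bad deploys

print(deployment_frequency)  # → 0.5
print(change_failure_rate)   # → 0.2
```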

Getting Started
To begin exposing these metrics, you'll need:

  • Python 3.7 or higher
  • A GitHub account and personal access token
  • Basic knowledge of GitHub's API

First, install the necessary libraries:

pip install requests prometheus_client

The Code Structure
I structured my Python script as a class called DORAMetrics. Here's a simplified version of its initialization:

import requests
from datetime import datetime, timedelta
from prometheus_client import Gauge

class DORAMetrics:
    def __init__(self, github_token, repo_owner, repo_name):
        self.github_token = github_token
        self.repo_owner = repo_owner
        self.repo_name = repo_name
        self.base_url = f"https://api.github.com/repos/{repo_owner}/{repo_name}"
        self.headers = {
            'Authorization': f'token {github_token}',
            'Accept': 'application/vnd.github.v3+json'
        }

        # Define Prometheus metrics
        self.deployment_frequency = Gauge('dora_deployment_frequency', 'Deployment Frequency (per day)')
        self.lead_time_for_changes = Gauge('dora_lead_time_for_changes', 'Lead Time for Changes (hours)')
        self.change_failure_rate = Gauge('dora_change_failure_rate', 'Change Failure Rate')
        self.time_to_restore_service = Gauge('dora_time_to_restore_service', 'Time to Restore Service (hours)')

This setup allows us to interact with the GitHub API and create Prometheus metrics for each DORA metric.
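To see how the gauges get their values, here is a minimal sketch of the update path. A stub class stands in for prometheus_client.Gauge, and the class, method bodies, and figures below are hypothetical placeholders for the real update_metrics method used later in the article:

```python
class StubGauge:
    """Stand-in for prometheus_client.Gauge; records the last value set."""
    def __init__(self):
        self.value = None

    def set(self, value):
        self.value = value

class DORAMetricsSketch:
    def __init__(self):
        self.deployment_frequency = StubGauge()

    def get_deployment_frequency(self, days=30):
        return 0.5  # stubbed calculation; the real method queries GitHub

    def update_metrics(self):
        # Compute each metric and push it into its gauge
        self.deployment_frequency.set(self.get_deployment_frequency())

dora = DORAMetricsSketch()
dora.update_metrics()
print(dora.deployment_frequency.value)  # → 0.5
```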

Fetching Data from GitHub
One of the most challenging aspects was retrieving the necessary data from GitHub. Here's how I fetched deployments:

def get_deployments(self, days=30):
    end_date = datetime.now()
    start_date = end_date - timedelta(days=days)

    url = f"{self.base_url}/deployments"
    params = {'per_page': 100}
    deployments = []

    while url:
        response = requests.get(url, headers=self.headers, params=params)
        response.raise_for_status()
        # The deployments endpoint has no 'since' filter, so filter client-side
        for deployment in response.json():
            created_at = datetime.strptime(deployment['created_at'], '%Y-%m-%dT%H:%M:%SZ')
            if created_at >= start_date:
                deployments.append(deployment)
        url = response.links.get('next', {}).get('url')
        params = {}  # the 'next' URL already carries its own query string

    return deployments

This method handles pagination, ensuring we get all deployments within the specified time frame.
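The pagination loop above follows the `next` link from each response until there are no pages left. The same pattern can be seen in isolation with a stubbed fetch function standing in for the requests call (the page names and items are hypothetical):

```python
# Simulated GitHub-style pagination: each "page" returns its links and items
pages = {
    "page1": ({"next": "page2"}, [{"id": 1}, {"id": 2}]),
    "page2": ({"next": "page3"}, [{"id": 3}]),
    "page3": ({}, [{"id": 4}]),
}

def fetch(url):
    """Stand-in for requests.get: returns (links, items) for a page."""
    links, items = pages[url]
    return links, items

def get_all(url):
    results = []
    while url:
        links, items = fetch(url)
        results.extend(items)
        url = links.get("next")  # None when there is no next page, ending the loop
    return results

print(len(get_all("page1")))  # → 4
```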

Calculating DORA Metrics
Let's look at how I calculated the Deployment Frequency:

def get_deployment_frequency(self, days=30):
    deployments = self.get_deployments(days)
    return len(deployments) / days

This simple calculation gives us the average number of deployments per day over the specified period.

Lead Time for Changes
Calculating the Lead Time for Changes was more complex. It required correlating commits with their corresponding deployments:

def get_lead_time_for_changes(self, days=30):
    commits = self.get_commits(days)
    deployments = self.get_deployments(days)

    lead_times = []
    for commit in commits:
        commit_date = datetime.strptime(commit['commit']['author']['date'], '%Y-%m-%dT%H:%M:%SZ')
        for deployment in deployments:
            if deployment['sha'] == commit['sha']:
                deployment_date = datetime.strptime(deployment['created_at'], '%Y-%m-%dT%H:%M:%SZ')
                lead_time = (deployment_date - commit_date).total_seconds() / 3600  # in hours
                lead_times.append(lead_time)
                break

    return sum(lead_times) / len(lead_times) if lead_times else 0

This method calculates the time difference between each commit and its corresponding deployment. It's important to note that not all commits may result in a deployment, so we only consider those that do. The final result is the average lead time in hours.
One challenge I faced here was matching commits to deployments. In some cases, a deployment might include multiple commits, or a commit might not be deployed immediately. I had to make assumptions based on the available data, which might need adjustment for different development workflows.
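With toy payloads shaped like GitHub's commit and deployment responses (the SHAs and dates are hypothetical), the matching logic works out like this; note that the commit without a matching deployment is simply skipped:

```python
from datetime import datetime

# Hypothetical records mimicking the GitHub API payloads
commits = [
    {"sha": "a1", "commit": {"author": {"date": "2024-05-01T10:00:00Z"}}},
    {"sha": "b2", "commit": {"author": {"date": "2024-05-02T09:00:00Z"}}},  # never deployed
]
deployments = [{"sha": "a1", "created_at": "2024-05-01T16:00:00Z"}]

fmt = "%Y-%m-%dT%H:%M:%SZ"
lead_times = []
for commit in commits:
    commit_date = datetime.strptime(commit["commit"]["author"]["date"], fmt)
    for deployment in deployments:
        if deployment["sha"] == commit["sha"]:
            deployment_date = datetime.strptime(deployment["created_at"], fmt)
            lead_times.append((deployment_date - commit_date).total_seconds() / 3600)
            break

avg = sum(lead_times) / len(lead_times) if lead_times else 0
print(avg)  # → 6.0 (hours from commit to deployment)
```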

Change Failure Rate
Determining the Change Failure Rate required analyzing the status of each deployment:

def get_change_failure_rate(self, days=30):
    deployments = self.get_deployments(days)

    if not deployments:
        return 0

    total_deployments = len(deployments)
    failed_deployments = 0

    for deployment in deployments:
        status_url = deployment['statuses_url']
        status_response = requests.get(status_url, headers=self.headers)
        status_response.raise_for_status()
        statuses = status_response.json()

        if statuses and statuses[0]['state'] != 'success':
            failed_deployments += 1

    return failed_deployments / total_deployments if total_deployments > 0 else 0

This method counts the number of failed deployments and divides it by the total number of deployments. The challenge here was defining what constitutes a "failed" deployment. I considered a deployment failed if its most recent status was not "success".
It's worth noting that this approach might not capture all types of failures, especially those that occur after a successful deployment. In a production environment, you might want to integrate with your monitoring or incident management system for more accurate failure detection.
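Stripped of the API calls, the counting logic reduces to a one-liner over the latest status of each deployment (the status values below are hypothetical):

```python
# Hypothetical latest-status value for each of five deployments
latest_statuses = ["success", "failure", "success", "error", "success"]

failed = sum(1 for state in latest_statuses if state != "success")
rate = failed / len(latest_statuses)
print(rate)  # → 0.4
```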

Exposing Metrics with Prometheus
To make these metrics available for Prometheus to scrape, I used the prometheus_client library:

import time
from prometheus_client import start_http_server, Gauge

# In the main execution block
dora = DORAMetrics(github_token, repo_owner, repo_name)
start_http_server(8000)

# Update metrics every 5 minutes
while True:
    dora.update_metrics()
    time.sleep(300)

This starts a server on port 8000 and updates the metrics every 5 minutes.

Common Pitfalls
During this project, I encountered several challenges:

  • API Rate Limiting: GitHub limits the number of API requests you can make. I had to implement pagination and be mindful of how often I updated metrics.
  • Token Permissions: Ensure your GitHub token has the necessary permissions to read deployments and commits.
  • Data Interpretation: Determining what constitutes a "deployment" or "failure" can be subjective. I had to make assumptions based on the available data.
  • Time to Restore Service: This metric was particularly challenging as it typically requires data from an incident management system, which isn't available through GitHub's API alone.
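For the rate-limiting pitfall in particular, GitHub reports your remaining quota in the X-RateLimit-Remaining and X-RateLimit-Reset response headers. A small helper can compute how long to back off before retrying; this is a sketch, and the header values shown are hypothetical:

```python
import time

def seconds_until_reset(headers, now=None):
    """Return how long to wait before retrying, based on GitHub's rate-limit headers."""
    now = time.time() if now is None else now
    if int(headers.get("X-RateLimit-Remaining", "1")) > 0:
        return 0  # quota left; no need to wait
    # X-RateLimit-Reset is a Unix timestamp for when the quota refills
    return max(0, int(headers["X-RateLimit-Reset"]) - int(now))

# Hypothetical headers from an exhausted-quota response
headers = {"X-RateLimit-Remaining": "0", "X-RateLimit-Reset": "1700000100"}
print(seconds_until_reset(headers, now=1700000040))  # → 60
```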

Conclusion
Exposing DORA metrics using Python was an enlightening experience. It deepened my understanding of DevOps practices and improved my skills in working with APIs and data processing.
Remember, these metrics are meant to guide improvement, not as a stick to beat teams with. Use them wisely to foster a culture of continuous improvement in your development process.
Thank you for reading ❤
