Developer Productivity

Exploring the strengths and flaws of different time management methods

Daniel Foo
Better Programming


Photo by Isaac Smith on Unsplash

As the operating cost of engineering teams remains high, executives have started asking questions like: how do engineering leaders measure developer productivity?

While there is no shortage of suggestions on how to measure, almost every approach has its own strengths and flaws.

Let’s start with something simple and classical.

Story Point

Using story points to measure developer productivity is common in Agile software development methodologies, particularly in frameworks like Scrum. Story points are a relative estimation technique used to assess the effort required to complete a user story or a task. They are typically assigned to backlog items during the sprint planning phase.

While story points can be useful for measuring the team’s capacity and velocity, it is important to note that they are not intended to measure individual developer productivity directly. Instead, story points measure the complexity and effort involved in implementing a feature or completing a task.

Here are a few key points to consider when using story points in Agile development:

  1. Relative estimation: Story points are assigned based on the relative complexity and effort required for a particular task compared to others. The focus is comparing items within the team’s backlog rather than assigning absolute values.
  2. Collaborative effort: Estimating story points is a collaborative process involving the entire development team. This approach helps leverage the collective knowledge and experience of the team members.
  3. Team velocity: Story points can be used to measure the team’s velocity over time, which reflects their capacity to deliver work within a given sprint. Velocity provides insights into the team’s progress and helps plan future iterations.
  4. Adaptability and learning: Story points are not meant to be a fixed or static metric. They can change as the team gains more experience and learns about the work they are doing. The team can adjust the estimates and refine their understanding of story points.
  5. Limitations: Story points do not capture the time taken to complete a task or the quality of the work. They are primarily a planning and forecasting tool, allowing teams to make informed decisions during sprint planning and backlog prioritization.
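As a concrete illustration of point 3, team velocity is typically just the average of completed story points across recent sprints. The sprint names and point values below are invented for illustration; they do not come from the article.

```python
# Hypothetical sketch: computing team velocity from completed story points.
# Sprint names and point values are illustrative, not real data.
completed_points_per_sprint = {
    "sprint-1": 21,
    "sprint-2": 18,
    "sprint-3": 24,
}

# Velocity = average story points completed per sprint.
velocity = sum(completed_points_per_sprint.values()) / len(completed_points_per_sprint)
print(f"Average velocity: {velocity:.1f} points/sprint")  # 21.0
```

A rolling average like this is a planning input for the next sprint's capacity, not a judgment of any individual developer.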

It’s worth noting that using story points as a measure of individual developer productivity can lead to unintended consequences, such as encouraging developers to inflate their estimates or promoting competition instead of collaboration within the team. Therefore, it is generally recommended to focus on collective productivity and team performance rather than individual metrics when using story points.

Next, let's look at SPACE, a more generic and holistic framework.

SPACE Framework

Satisfaction

Measuring code reviews can reveal whether developers view their work positively or negatively, for example, whether they are presented with learning, mentorship, or opportunities to shape the codebase.

Performance

Code-review velocity captures the speed of reviews. It can reflect both how quickly an individual developer completes a review and the constraints the team operates under, making it both an individual and a team-level metric.
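One simple way to operationalize code-review velocity is turnaround time: the gap between a pull request being opened and its first review. The record layout and field names ("opened", "first_review") below are assumptions for illustration.

```python
from datetime import datetime

# Hypothetical sketch: code-review turnaround time from PR timestamps.
# The field names and timestamps are illustrative assumptions.
reviews = [
    {"opened": datetime(2023, 6, 1, 9, 0), "first_review": datetime(2023, 6, 1, 13, 0)},  # 4 h
    {"opened": datetime(2023, 6, 2, 10, 0), "first_review": datetime(2023, 6, 3, 10, 0)},  # 24 h
]

# Hours from PR opened to first review, averaged across reviews.
turnarounds = [(r["first_review"] - r["opened"]).total_seconds() / 3600 for r in reviews]
avg_hours = sum(turnarounds) / len(turnarounds)
print(f"Average review turnaround: {avg_hours:.1f} hours")  # 14.0
```

In practice these timestamps would come from a source like the GitHub or GitLab API rather than hard-coded values.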

Activity

The number of code reviews completed is an individual metric capturing how many reviews have been completed in a given time frame and how that activity contributes to the final product.

Communication and collaboration

Code reviews are a way that developers collaborate through code, and a measure or score of the quality or thoughtfulness of code reviews is a great qualitative measure of collaboration and communication.

Efficiency and flow

A code review is important but can cause challenges if it interrupts workflow or if delays cause constraints in the system. Similarly, waiting for a code review can delay a developer’s ability to continue working.

Another one is the famous DORA Metrics from the Accelerate book.

DORA Metrics

DORA (DevOps Research and Assessment) metrics are a set of metrics developed by the DevOps Research and Assessment (DORA) team, now a part of Google Cloud. These metrics aim to measure and assess the effectiveness of software delivery and operational performance within organizations.

DORA metrics are primarily focused on measuring the effectiveness and efficiency of the software delivery process rather than individual developer productivity. They provide insights into the overall performance and health of the development and operations teams and their impact on the business.

The DORA metrics include the following:

Deployment frequency

This metric measures how frequently new features, enhancements, or bug fixes are deployed to production. Higher deployment frequencies indicate a more agile and responsive software delivery process.
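Deployment frequency can be computed directly from a log of deploy dates, for example by counting deployments per ISO week. The dates below are invented for illustration.

```python
from collections import Counter
from datetime import date

# Hypothetical sketch: deployment frequency from a list of deploy dates.
# The dates are illustrative, not real deployment data.
deploy_dates = [
    date(2023, 6, 5), date(2023, 6, 7), date(2023, 6, 8),  # ISO week 23
    date(2023, 6, 13),                                     # ISO week 24
]

# Count deployments per ISO calendar week.
per_week = Counter(d.isocalendar()[1] for d in deploy_dates)
for week, count in sorted(per_week.items()):
    print(f"Week {week}: {count} deployments")
```

Elite-performing teams in the DORA research deploy on demand, often multiple times per day, so a per-day bucket may be more appropriate for them.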

Lead time for changes

This metric measures the time it takes for a code change to move from development to production. It provides insights into the efficiency of the development process, including code review, testing, and release activities.
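A minimal sketch of lead time for changes, assuming it is measured as the gap between a change's commit time and its production deploy time (one common definition). The timestamps are illustrative.

```python
from datetime import datetime
from statistics import median

# Hypothetical sketch: lead time for changes as (deploy time - commit time).
# All timestamps are illustrative assumptions.
changes = [
    (datetime(2023, 6, 1, 9, 0), datetime(2023, 6, 1, 17, 0)),  # 8 h
    (datetime(2023, 6, 2, 9, 0), datetime(2023, 6, 4, 9, 0)),   # 48 h
    (datetime(2023, 6, 5, 9, 0), datetime(2023, 6, 5, 21, 0)),  # 12 h
]

# Lead time per change in hours; the median resists outlier changes.
lead_times_h = [(deploy - commit).total_seconds() / 3600 for commit, deploy in changes]
print(f"Median lead time: {median(lead_times_h):.1f} hours")  # 12.0
```

Using the median rather than the mean keeps one long-running change from dominating the metric.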

Mean Time to Recover (MTTR)

MTTR measures the time it takes to recover from a service outage or failure. It reflects the organization’s ability to detect and resolve incidents quickly, minimizing the impact on users and the business.
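MTTR reduces to a simple average over incident durations. The durations below are invented for illustration.

```python
# Hypothetical sketch: mean time to recover from incident records.
# Durations are in minutes and are illustrative, not real data.
incident_durations_min = [30, 120, 45]

# MTTR = average duration from outage detection to recovery.
mttr = sum(incident_durations_min) / len(incident_durations_min)
print(f"MTTR: {mttr:.0f} minutes")  # 65
```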

Change failure rate

This metric measures the percentage of deployments or changes that result in service disruption or failure. A lower change failure rate indicates a more stable and reliable software delivery process.
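Change failure rate can be derived from a deployment log that flags which deployments led to an incident. The log structure and values below are assumptions for illustration.

```python
# Hypothetical sketch: change failure rate from a deployment log.
# Each entry records whether that deployment caused an incident;
# the data is illustrative, not real.
deployments = [
    {"id": 1, "caused_incident": False},
    {"id": 2, "caused_incident": True},
    {"id": 3, "caused_incident": False},
    {"id": 4, "caused_incident": False},
]

# Percentage of deployments that resulted in a failure.
failures = sum(d["caused_incident"] for d in deployments)
rate = failures / len(deployments) * 100
print(f"Change failure rate: {rate:.0f}%")  # 25%
```

In a real pipeline, the incident flag would come from linking incident tickets or rollbacks back to the deployment that introduced them.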

These metrics are intended to provide a holistic view of the software delivery process and enable organizations to identify areas for improvement, optimize their workflows, and drive continuous improvement. They are especially useful for teams practicing DevOps principles and seeking to enhance collaboration, automation, and the overall efficiency of their software development lifecycle.

While DORA metrics focus on the collective performance of the development and operations teams, they can indirectly provide insights into individual productivity by identifying bottlenecks, inefficiencies, or areas for improvement within the overall process. However, it’s important to consider them as part of a broader assessment, not as the sole measure of individual developer productivity.

What’s New?

A recent development is a new approach to measuring and improving developer productivity that revolves around DevEx, the developer experience.

Here are the three core dimensions of developer experience:

Feedback loops

Software organizations commonly look for ways to optimize their value stream by reducing or eliminating delays in software delivery. Shortening feedback loops, that is, the speed and quality of responses to actions performed, is equally important to improving DevEx. Fast feedback loops allow developers to complete their work quickly with minimal friction.

Cognitive load

Software development is inherently complex, and the ever-growing number of tools and technologies further adds to the cognitive load developers face. Cognitive load impedes developers’ most important responsibility: delivering value to customers. To improve developer experience, teams and organizations should aim to reduce cognitive load by eliminating unnecessary hurdles in the development process.

Flow state

Developers often talk about “getting into the flow” or “being in the zone.” Frequent flow state experiences at work lead to higher productivity, innovation, and employee development. To improve DevEx, teams and organizations should focus on creating the optimal conditions for the flow state.

Summary

This article covered four main approaches: Story Points, the SPACE Framework, DORA Metrics, and the latest one, DevEx Metrics. While some are more specific than others, each approach carries its own strengths and weaknesses.
