My Favorite Quotes from Accelerate

I recently finished Accelerate by Forsgren et al. and I wanted to highlight my favorite quotes.

The authors spent years researching the characteristics of high performing tech teams. Some of these characteristics resonated with me because I see them in my teams and I agree that they contribute to our success. Other characteristics are novel to me and I am excited to incorporate them into my work.

The authors present the characteristics they found in the first half of the book. In the second half they go into detail about how they collected their findings: in connection with a large professional conference, they interviewed thousands of tech professionals from hundreds of companies every year for over four years. My favorite quotes pertain to the first half of the book.

High Level Theme

“Software delivery is an exercise in continuous improvement” and “the best teams keep getting better, and those who fail to improve fall further and further behind.”

I found this quote highly motivating. The authors find that the practices that make a team high performing have a compounding effect. Year after year the high performers continue to grow while low performing teams do not. Over time this leads to a greater and greater gap in performance between the two groups.

“Organizations in all industries are turning away from delivering new products and services using big projects with long lead times. They are using small teams that work in short cycles … to rapidly deliver value to their organization.”

This is true in my field. There have been several rounds of layoffs in the tech industry in the last few years and openings for new positions are at an historic low. As a result, we see leadership calling for teams to do more with less. That makes the advice I found in Accelerate timely, and hopefully something I can use to answer that call.

The book notes that organizations should focus on capabilities instead of maturity when measuring their teams.

“Maturity models are not the appropriate mindset to have. Shifting to a capabilities model of measurement is essential … to accelerate software delivery. The highest performing organizations … never consider themselves ‘mature’ or ‘done’ with their improvement or transformation.”

This is a mindset I plan to adopt in my job and with my teams. Some projects have numerical success criteria and we take targets to improve the number until it gets above some threshold. Once projects exceed that threshold, they typically take future targets to keep the number from dropping below it. Projects in that state are said to be mature. Instead, we should be looking at the capabilities those projects provide and we should move on to filling in the gaps we find.

Measuring Performance

“Measuring performance in the domain of software is hard because the inventory is invisible.”

Unfortunately, software is not like Adam Smith’s pin factory where we can measure how many pins we produced to see how productive we have been. Instead we launch software features that hopefully make our customers more effective and happy. The effort needed to launch each feature is highly variable. The impact of each feature launch is also highly variable. For those two reasons, it is not enough to count the number of features we built (like Smith did with pins) in order to determine if we are performant.

“Our first step is to define a valid, reliable measure of software delivery performance.” The measures the authors reject do the opposite of what a good measure should: they focus on outputs rather than outcomes, and on individual measures rather than team ones.

The book explores three popular bad performance metrics that are not valid or reliable before proposing measures that are good. The three bad measures of productivity are lines of code, velocity, and utilization.

“Rewarding developers for writing lines of code leads to bloated software that incurs higher maintenance costs and higher cost of change.”

Maximizing the number of lines of code developers write is a terrible idea. There are countless ways to increase lines of code changed that lead to no impact. Furthermore, deleting obsolete code often makes a code base more maintainable but would negatively impact this success metric.

“Velocity is a relative and team-dependent measure. Teams inevitably work to game their velocity at the expense of collaboration with other teams.”

Teams will often estimate how many units of work, e.g., story points, each task they are assigned will take to complete. Every week or so, each developer reports how many story points they completed. Velocity is a measure of how many story points each engineer completes. The authors highlight that velocity encourages teams to work only on problems they will get story points for, and therefore to skip collaborating with teams that are in trouble and need help that has no points associated with it.

“Utilization is only good up to a point. Once you get to very high levels of utilization, it takes teams exponentially longer to get anything else done.”

Sticking with story points, utilization is the ratio of story points an engineer is planned to work on this week to the total possible number of points that engineer could theoretically accomplish in a week. It may seem tempting to push this number to 100%. But if an engineer is fully utilized, there is no slack for them to work on emergency unplanned or higher impact work that may arise. This reminds me of a well known trade-off in supply chain management between efficiency (i.e., utilization) and resilience; being too efficient comes at the cost of resilience when emergencies arise.
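The nonlinearity the authors describe shows up even in a toy queueing model. Here is a minimal sketch, assuming an M/M/1 queue (a big simplification of real team workloads), of how the expected wait for a new task grows as utilization approaches 100%:

```python
# In an M/M/1 queue, the average time a new arrival spends waiting is
# proportional to rho / (1 - rho), where rho is utilization. This is a
# simplified model of why fully loaded teams absorb unplanned work so slowly.

def expected_wait(utilization: float, service_time: float = 1.0) -> float:
    """Average queueing delay for a new task, in units of service_time."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return service_time * utilization / (1 - utilization)

for rho in (0.5, 0.8, 0.9, 0.95, 0.99):
    print(f"utilization {rho:.0%}: wait is about {expected_wait(rho):.1f}x service time")
```

At 50% utilization a new task waits about one service time; at 99% it waits about 99. The exact model is an assumption on my part, but it captures the efficiency-versus-resilience trade-off the authors describe.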

“A successful measure of performance should have two key characteristics: a focus on global outcomes; a focus on outcomes not output.”

The authors move on to what good performance metrics should look like: they should be team-based and should not focus on inputs like story points. The authors suggest delivery lead time, deployment frequency, time to restore service after an outage, and change fail rate. These measures all come from lean manufacturing, applied to tech teams instead of manufacturing companies. I think my teams do well with most of these but could improve most on lead time.

“Product delivery lead time is the time it takes to go from code committed to code successfully running in production.”

This measures how fast a software engineer can get their changes to customers. Deployment frequency is a proxy for batch size. The idea is to reduce the size of the batch of changes made at a time in order to increase efficiency and get changes in front of customers more quickly to gather feedback sooner. I think it is misguided to start the clock at code commit time, however. It is a big part of the culture at my company to favor small changes. This means that authors break a large feature request into many small changes, many of which make no visible change for customers. Instead, I think some modification is needed: start the lead time clock when the project is first prioritized by the team.
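To make the distinction concrete, here is a small sketch comparing the two clocks; the feature timeline and event names are hypothetical:

```python
from datetime import datetime

# Hypothetical timeline for one feature that ships as many small commits.
feature = {
    "prioritized":   datetime(2024, 3, 1),   # team commits to the work
    "first_commit":  datetime(2024, 3, 18),  # first code lands
    "in_production": datetime(2024, 4, 2),   # running in front of customers
}

# Lead time as Accelerate defines it: code committed -> running in production.
book_lead_time = feature["in_production"] - feature["first_commit"]

# Modified lead time: the clock starts when the team prioritizes the work.
modified_lead_time = feature["in_production"] - feature["prioritized"]

print(f"commit-to-production lead time: {book_lead_time.days} days")
print(f"prioritization-to-production lead time: {modified_lead_time.days} days")
```

With this (made-up) timeline, the book’s definition reports 15 days while the modified clock reports 32, showing how much of the wait the commit-based definition can hide.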

Most interestingly, the authors find that “there is no trade off between improving performance and achieving higher levels of stability and quality.” As the performance measures improve, so too do stability and quality. I find this encouraging and even more of a reason to invest in improvements to how we measure performance.

Culture

“Organizational culture can exist at three levels: basic assumptions, values, and artifacts.”

“Culture” is not something that is well defined, so I appreciated this quote from the authors about where we can even see it. The authors encourage leaders to push culture from basic assumptions to values to artifacts, e.g., written documents. In doing so, we are forced to discuss what our culture should be like and ultimately record it somewhere (i.e., in artifacts) so that it is discoverable by everyone in the organization. I am interested in spending more time writing down some of what my organization’s culture should be.

The book references Westrum’s Typology of Organisational Cultures to define three types of organizations.

“Pathological organizations are characterized by large amounts of fear and threat. Bureaucratic organizations protect departments. Generative organizations focus on the mission.”

This especially stood out to me given the repeated rounds of layoffs and the lean efficiency I see tech companies pushing for lately. In that environment I worry it is easy to drift into being a pathological organization, as the fear of being laid off motivates many of the decisions we make. That would be a big mistake. As the book points out, generative organizations are the ones best able to accelerate the work of their teams. Our goal should always be to move our organizations away from pathological and towards being more generative.

Technical Practices

“Continuous delivery is a set of capabilities that enable us to get changes into the hands of users.”

The authors focus much of the book on how teams should adopt technical practices that improve “continuous delivery”. The book mentions five principles of continuous delivery, and two in particular stood out to me: “work in small batches” and “computers perform repetitive tasks; people solve problems.”

Working in small batches gets us feedback from our users more quickly so that we can avoid low impact work, i.e., it allows us to fail fast and move on when necessary.

The quote about repetitive tasks is especially prescient in the light of all the AI projects the tech industry is obsessed with right now. It is a good reminder of what computers are good at and what things we want to free up people to do.

“Teams that did well at continuous delivery achieved the following outcomes: higher levels of delivery performance; lower change fail rates; a generative, performance-oriented culture.”

The authors found that when teams deliver continuous small batches of new features to users, they are able to quickly get feedback from customers and rapidly iterate on their products. I am reminded of famous software failures that took the opposite approach, like the FBI’s Sentinel project. Instead of launching something scrappy and iterating, it followed a long waterfall process that was endlessly delayed.

How do we measure quality to see if our technical practices are doing well? The book proposes three proxies for quality:

  1. “The quality and performance of applications, as perceived by those working on them”

  2. “The percentage of time spent on rework or unplanned work”

  3. “The percentage of time spent working on defects identified by end users”

I plan to work with my team to build or improve monitoring to address item 1. I plan to work with each of my teams to define how it wants to measure item 2 on a quarterly basis. My organization already does an excellent job tracking user-identified issues.

“Successful teams had adequate test data to run their fully automated test suites.”

While my teams have excellent unit testing, we struggle with running pipelines and certain UX flows end to end. We rely on staging environments that have stale copies of production data. We should probably invest more time building hermetic tests that rely on test data crafted to execute specific scenarios instead of hoping that they exist in the staging environment.
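Here is a minimal sketch of the idea, with a hypothetical `apply_discount` function standing in for a real pipeline: each test crafts exactly the data its scenario needs rather than hoping a suitable record exists in a staging environment.

```python
# Hypothetical function under test: applies a percentage discount to an order.
def apply_discount(order: dict, percent: float) -> dict:
    discounted = dict(order)
    discounted["total"] = round(order["total"] * (1 - percent / 100), 2)
    return discounted

def make_order(total: float = 100.0) -> dict:
    # Crafted test data: built inside the test, never fetched from staging.
    return {"id": "order-123", "total": total}

def test_ten_percent_discount():
    order = make_order(total=200.0)
    assert apply_discount(order, 10)["total"] == 180.0

def test_zero_discount_leaves_total_unchanged():
    order = make_order()
    assert apply_discount(order, 0)["total"] == 100.0

# The tests are hermetic: they construct their own inputs, so they run the
# same way on a laptop, in CI, or anywhere else, with no shared environment.
test_ten_percent_discount()
test_zero_discount_leaves_total_unchanged()
```

The function and field names are invented for illustration; the point is the pattern of a `make_order`-style builder that removes the dependency on stale staging data.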

Architecture

“The architecture of your software and the services it depends on can be a significant barrier to the stability of the system.”

Architecture is a funny thing. I don’t recall any university I attended offering a course on it. I don’t think professionals have a standard definition or understanding of what it is. There are some books that try to tackle the topic: I have glowing things to say about Clean Architecture, and Fundamentals of Software Architecture presents a (non-exhaustive) overview of different types of architectures. But in general, I feel like architecture is a concept that is neither well defined nor often applied in the software designs I see. This is unfortunate because the authors express (and I strongly agree) that architecture is a key component in accelerating.

“We found that high performance is possible with all kinds of systems, provided that systems are loosely coupled. It allows organizations to increase their productivity as they scale.”

Loose coupling is a key point in Clean Architecture too. It allows teams to change components they own without getting help from other teams. It allows teams to deploy changes to components they own without waiting for other teams to be ready to deploy their changes. In general, it lets teams launch features to customers faster.

The authors found in their data that high performers said: “We can do most of our testing without requiring an integrated environment. We can and do deploy or release our application independently of other applications it depends on.” This encourages us to “design systems that are loosely coupled - that is, can be changed and validated independently of each other.” “We must ensure delivery teams are cross-functional with all the skills necessary to design, develop, test, deploy and operate the system on the same team.” I have worked on a team that inherited a big ball of mud, and I know how frustrating it can be when everything is deployed in one enormous release shared across dozens of teams. In that state, it is too easy for one team to launch a change that causes a bug in another team’s component. Often, all tests for all teams need to be run before submitting a change, which can take a long time and is exacerbated by flaky tests. The list of pain points is endless.

“The goal is for your architecture to support the ability of teams to get their work done without requiring high-bandwidth communication between teams.”

This is my number one quote in the book. I have seen my organization grow in size a lot in the last few years and the need to decouple systems to allow us to accelerate has grown with it. My teams have already prioritized big efforts to decouple systems but I want to spend more time educating my teams on architecture principles. I want to make sure that we are designing future systems that will allow us to scale.

“A goal-oriented generative culture, a modular architecture, continuous delivery and effective leadership can scale deployments per developer per day linearly or better.”

This quote really pulls everything the book is saying into one unified theory. The idea is that if we have all of these great characteristics in our teams, the amount of work we can do will accelerate as the size of our organization grows (reminiscent of an earlier quote about high performers getting better year over year). This is shown in the graph below.

We see that high performing teams (darkest colored line) produce exponentially more output as we increase the number of developers on those teams. Low performing teams actually get less productive as we add developers; the cross-team and cross-component complexities slow everyone down. Our goal should be to build the characteristics that high performers have so that we can grow our teams and actually accelerate what we are able to build.

Product Development

“In larger companies it’s still common to see months spent on budgeting, analysis, and requirements-gathering before work starts.”

Oh boy this is true. My organization spends a long time planning what we will work on each year. In fact we are already starting to talk about it now in March for next year. The authors warn that overplanning runs lots of risks, e.g., those of the Sentinel project mentioned above.

The authors find that in order to accelerate teams, they need to focus on “building and validating prototypes from the beginning”.

“One of the points of Agile development is to seek input from customers throughout the development process. But if a development team isn’t allowed, without authorization from some outside body, to change requirements or specifications in response to what they discover, their ability to innovate is sharply inhibited.”

This means that when we plan too much and release changes in too big a batch, we are unable to accelerate our product development lifecycle. I find this to be incredibly true. There are countless times where I have seen either a project die waiting for its changed requirements to get approved, or a project launch with bad requirements it was unable to change.

“The ability to work and deliver in small batches is especially important because it enables teams to integrate user research into product development and delivery.”

The key takeaway here is one already mentioned in this blog: we need to iterate faster and in smaller batches so that our customers can give us feedback sooner. This creates a virtuous cycle which allows us to improve our products sooner and accelerate. I am excited about trying a lot of these out at work to see how well they help.

Jim Herold

Jim Herold is a Catholic, Husband, Father, and Software Engineer.

He has a PhD in Computer Science with a focus on machine learning and how it improves natural, sketch-based interfaces.

Jim researched at Harvey Mudd and UC Riverside, taught at Cal Poly Pomona and UC Riverside, and worked at JPL NASA and Google.
