Realistic case study on agile development at large scale

“A Practical Approach to Large-Scale Agile Development” by Gary Gruver, Mike Young and Pat Fulghum is a real-world account of how scaling agile software development actually works in a huge software-producing organisation.

A Practical Approach to Large-Scale Agile Development

The authors describe in easy-to-read language the journey of the HP firmware organisation, starting in 2008 and taking around 3 years, with a clear goal in mind: “10x developer productivity improvement”.

In the beginning they were stuck with a waterfall planning process and a huge planning organisation, unable to move as fast in software development as the business expected. A quick summary of activities showed that the organisation was initially spending 25% of developer time in planning sessions for the next years’ releases, and only 5% on innovation. Today, after accomplishing the 10x goal, 40% of developers’ time is spent on innovation.

The book highlights how important the right mix of agile practices, a sound approach to software architecture and organisational measures is for forming a successful team of people striving for common goals. A fascinating read!

Most striking is the unemotional view on agile and how to apply it. They purposefully decided not to have self-organizing teams. So, is agile broken? Can it not be applied in such an environment? Not at all! The authors give good reasons for not applying all the agile patterns from the books, and it works.

The Real Cost of Slow Time vs. Downtime – great presentation

Tammy Everts is an authority on the topic of page load speed and is usually mentioned in the same breath as names like Steve Souders and Stoyan Stefanov. Just recently she released a presentation on “The Real Cost of Slow Time vs. Downtime”.

The Real Cost of Slow Time vs Downtime (Tammy Everts)

In general, the calculation for downtime losses is quite simple:

downtime losses = (minutes of downtime) x (average revenue per minute)
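As a quick illustration, this formula maps directly to a few lines of Python. This is a minimal sketch; the outage length and the revenue figure are made-up example values, not numbers from the talk:

def downtime_losses(minutes_of_downtime, average_revenue_per_minute):
    # Revenue lost while the site is completely unavailable.
    return minutes_of_downtime * average_revenue_per_minute

# Example: a 45-minute outage on a site earning 500 USD per minute on average.
print(downtime_losses(45, 500))  # 22500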

Calculating the cost of slow page performance is far trickier, since the impact is deferred relative to the actual occurrence of slow pages. The talk basically differentiates between short-term losses and long-term losses due to slow pages.

Short-Term losses

  1. Identify your cut-off performance threshold (4.4 seconds is a good industry value)
  2. Measure Time to Interact (TTI) for pages in flows for typical use cases on your site
  3. Calculate the difference between TTI and the cut-off performance threshold
  4. Pick a business metric according to industry best practice. A 1-second delay in page load time correlates to
    1. 2.1% decrease in cart size
    2. 3.5-7% decrease in conversion
    3. 9-11% decrease in page views
    4. 8% increase in bounce rate
    5. 16% decrease in customer satisfaction
  5. Calculate the losses (see the sketch after this list)
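To make the steps concrete, here is a minimal Python sketch of the short-term calculation. The revenue figure, the measured TTI and the choice of conversion loss as the business metric are illustrative assumptions, not numbers from the presentation:

CUTOFF_SECONDS = 4.4               # step 1: cut-off performance threshold
CONVERSION_LOSS_PER_SECOND = 0.07  # step 4: up to 7% conversion decrease per second of delay

def short_term_loss(tti_seconds, revenue_through_flow):
    # Step 3: how far the measured TTI exceeds the cut-off threshold.
    delay = max(0.0, tti_seconds - CUTOFF_SECONDS)
    # Step 5: revenue lost through the degraded conversion rate.
    return revenue_through_flow * delay * CONVERSION_LOSS_PER_SECOND

# Example: a checkout flow with a measured TTI of 6.4 seconds (step 2)
# carrying 1,000,000 USD of monthly revenue.
print(short_term_loss(6.4, 1_000_000))  # 140000.0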

Long-Term losses

The long-term impact is calculated on a customer lifetime value (CLV) basis. The relationship between CLV and performance is, according to studies, interesting: 9% of users will permanently abandon a site that is temporarily down, but 28% of users will never again visit a site showing unacceptable performance.

  1. Identify your site’s performance impact line (8 seconds is a good industry value). Above this threshold the business is really impacted.
  2. Identify the percentage of traffic experiencing performance slower than the impact line.
  3. Identify the CLV for those customers
  4. Calculate the loss, knowing that 28% of these customers will never return to your site (see the sketch below)
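The long-term side can be sketched the same way in Python. The traffic share, customer count and average CLV below are illustrative assumptions; only the 8-second impact line and the 28% never-return rate come from the talk:

NEVER_RETURN_RATE = 0.28  # step 4: 28% of affected users never come back

def long_term_loss(affected_customers, average_clv):
    # Steps 3 and 4: lifetime value of the customers who will never return.
    return affected_customers * NEVER_RETURN_RATE * average_clv

# Example: 5% of 200,000 monthly customers saw pages slower than the
# 8-second impact line (steps 1 and 2), with an average CLV of 120 USD (step 3).
affected = int(200_000 * 0.05)
print(long_term_loss(affected, 120))  # 336000.0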