BLOG | OFFICE OF THE CTO

Slow is the new Down

Published September 14, 2020

We have heard it from our customers: "Slow is the new down." To most modern application owners and operators, poor application performance is as bad as an outage. FAANG companies have accustomed consumers to consistently excellent performance, and those consumers carry the same expectations to every other piece of software they use, especially when competitors in a crowded market offer a good user experience.

Some customers have even told us that slow performance can be worse than being completely down. Something that tantalizingly seems to almost work, prompting retry after retry, is more aggravating than a function that is simply absent. Consider a voice-over-IP call whose quality is so poor that both parties must repeat themselves over and over, versus one that is unavailable outright, which at least sends people to grab a cell phone or landline instead.

The importance of meeting customer experience expectations is recognized across industries. A 2020 survey of the retail industry found that improving the customer experience was a top digital priority for nearly one-third (32%) of respondents, and over 71% cited improving customer experience as the top short-term business outcome they sought from digital transformation efforts.

Now, it's certainly the case that operators and business stakeholders alike care about their users. One of the reasons poor performance goes unaddressed is a lack of visibility into what causes "slow," or what "slow" even means for a given application. Sometimes that lack of visibility is the direct result of a failure to measure anything at all.

A survey from Turbonomic exposes this phenomenon: "When we asked respondents how their organization is measuring application performance, it was promising to see that over 60% are measuring it in some form. But the most common approach was measuring availability, as opposed to managing to Service Level Objectives (SLOs), which typically take the form of response time or transaction throughput. 13% do not measure application performance at all."

But before we laud those who do measure, note what they're measuring. The most common approach to measuring performance was to measure availability. Availability is a measure of up or down. It's not a measure of slow or fast, though we could spend an entire blog (or more) arguing that it should.
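To make the distinction concrete, here is a minimal Python sketch of the two kinds of measurement (our own illustration; the URL, threshold, and sample data are hypothetical). An availability probe answers only up or down, while a response-time SLO of the kind the survey describes asks whether enough requests came back fast enough.

    import urllib.request

    URL = "https://example.com/api/health"  # hypothetical endpoint

    def is_available(url):
        """Availability: a binary up/down answer."""
        try:
            return urllib.request.urlopen(url, timeout=5).status == 200
        except OSError:
            return False

    def meets_latency_slo(samples_ms, threshold_ms=300.0, target=0.95):
        """Response-time SLO: did enough requests come back fast enough?"""
        fast = sum(1 for s in samples_ms if s <= threshold_ms)
        return fast / len(samples_ms) >= target

    # An application can be fully "up" while badly missing its SLO:
    samples_ms = [120, 140, 950, 1100, 135, 980, 125, 1050, 130, 990]
    print(is_available(URL))              # up/down only
    print(meets_latency_slo(samples_ms))  # False: only 5 of 10 under 300 ms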

But it doesn't, and one of the reasons can be found in the measurability of the business costs. The financial impact of downtime is well documented: multiple sources provide detailed breakdowns of its costs across the organization. But for poor performance? We have a few surveys that capture user reactions in the form of abandonment or negative social media, but hard numbers on the actual cost to the business are almost non-existent.

According to Esteban Kolsky, 72% of customers will share a positive experience with six or more people. On the other hand, if a customer is not happy, 13% of them will share their experience with 15 or more.

In general, we can sum up the problem with measuring performance today as "We don't measure the cost of slow. We measure the cost of downtime." People tend to work toward what they are measured on. This is not a new concept; in fact, it's one of the tenets of DevOps, and the reason that methodology shifts measurement toward what matters most. At F5, we plan to help you measure not only on an absolute scale, but also relative to data from other applications, so you can see how your end-users' experience compares with their experience of similar applications.

What matters most is meeting the expectations of end-users, and today that means more than just available; it means fast and reliable too. We plan to give application owners not only data and visualizations of end-user experience, but insights stated in natural language, such as "The changes you pushed to production over the weekend improved the typical Monday morning experience for your end users, great job!" or "Your experience for Chrome users in New York is predicted to get worse than the average for banking applications like yours in four days. Here is the load balancing policy change we recommend you make to your NGINX load balancers in AWS US East. Feel free to make the change yourself, or click here to have us make the change for you."
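To give a flavor of what such a recommendation could look like in practice, here is a minimal NGINX configuration sketch (the upstream name and server addresses are hypothetical, and this is our illustration rather than an actual recommendation from the service). Switching an upstream from the default round-robin policy to least_conn is exactly the kind of small load-balancing policy change an insight might propose:

    # Hypothetical nginx.conf excerpt for an application served out of AWS US East.
    upstream banking_app {
        least_conn;   # send each request to the server with the fewest active connections
        server 10.0.1.10:8080 max_fails=3 fail_timeout=30s;
        server 10.0.1.11:8080 max_fails=3 fail_timeout=30s;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://banking_app;
        }
    }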

If you want answers to questions such as:

  • "Are my end-users having a good experience overall?"
  • "How does their experience compare to applications similar to mine?"
  • "Can I expose a simple health indicator for my application, incorporating adaptive end-user experience, for my support teams to monitor?" (see the sketch at the end of this post)
  • "What steps can I take to improve end-user experience?"
  • "What can I do to keep my end-user experience the same but lower costs?"
  • "Do end-users that have a bad experience leave, or do they come back?"

...then stay tuned for follow-up articles where we will go into more detail.
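Returning to the health-indicator question above, here is a minimal Python sketch of the idea (the endpoint, scoring formula, and thresholds are hypothetical placeholders, not what we will ship): a /health endpoint that folds a toy end-user experience score into the usual up/down signal, so support teams can watch one number.

    from http.server import BaseHTTPRequestHandler, HTTPServer
    import json

    def experience_score(p95_latency_ms, error_rate):
        """Toy composite score in [0, 1]: penalize slow responses and errors."""
        latency_part = max(0.0, 1.0 - p95_latency_ms / 1000.0)  # hits 0 at >= 1 s
        return round(latency_part * (1.0 - error_rate), 2)

    class Health(BaseHTTPRequestHandler):
        def do_GET(self):
            # In a real service these inputs would come from live telemetry.
            score = experience_score(p95_latency_ms=420.0, error_rate=0.01)
            status = "ok" if score >= 0.5 else "degraded"
            body = json.dumps({"status": status, "experience_score": score}).encode()
            self.send_response(200 if status == "ok" else 503)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("", 8080), Health).serve_forever()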