Application acceleration uses a number of technologies to improve application performance and response time over network connections.
Application acceleration was first implemented for web-based applications using a variety of caching techniques on both the browser and the server. Eventually, caching alone became inadequate as a means of improving application performance, and protocol optimization became part of the solution. Optimizations were at first confined to transport-layer protocols such as TCP, but eventually grew to encompass application-layer protocols such as HTTP.
Application acceleration overcomes network effects such as WAN latency, packet loss, and bandwidth congestion. It also addresses application-level problems that adversely affect performance, such as "chatty" protocols (for example HTTP and CIFS/SMB), whose many round trips magnify WAN latency, differences among TCP/IP stack implementations, and web applications that fail to distinguish cacheable from non-cacheable content.
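To see why chatty protocols suffer over a high-latency link, consider a rough model in which total transfer time is the number of protocol round trips multiplied by the round-trip time, plus the time to serialize the payload onto the wire. The Python sketch below uses purely illustrative numbers (they are not taken from any measurement in this article) to show that latency, not bandwidth, dominates when a protocol requires many round trips.

```python
# Illustrative sketch: why per-request round trips dominate transfer time on a
# high-latency WAN link. All figures are examples, not measured values.

def transfer_time_seconds(round_trips, rtt_ms, payload_mb, bandwidth_mbps):
    """Rough model: total time = protocol round trips * RTT + serialization time."""
    latency_cost = round_trips * (rtt_ms / 1000.0)
    serialization_cost = (payload_mb * 8) / bandwidth_mbps
    return latency_cost + serialization_cost

# A "chatty" file-access exchange needing 400 round trips vs. an optimized one
# needing 10, both moving a 5 MB payload over a 100 ms, 50 Mbit/s WAN link.
for label, rtts in (("chatty", 400), ("optimized", 10)):
    t = transfer_time_seconds(rtts, rtt_ms=100, payload_mb=5, bandwidth_mbps=50)
    print(f"{label}: {t:.1f} s")
# chatty: 40.8 s, optimized: 1.8 s -- latency, not bandwidth, is the bottleneck.
```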
F5 achieves application acceleration by combining intelligent compression, WAN optimization, Layer 7 rate shaping, smart caching, SSL acceleration, and other technologies in a complementary and cohesive way.
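As a rough illustration of the compression element (a generic sketch, not a description of F5's implementation), the following snippet shows how gzip can shrink a repetitive text payload such as HTML markup before it crosses the WAN:

```python
# Generic illustration of payload compression (not F5-specific): gzip a
# repetitive HTML-like body and compare sizes before and after.
import gzip

body = b"<div class='row'><span>item</span></div>\n" * 500  # sample markup
compressed = gzip.compress(body)

print(f"original:   {len(body)} bytes")
print(f"compressed: {len(compressed)} bytes "
      f"({100 * len(compressed) / len(body):.1f}% of original)")
```

Repetitive text-based content compresses heavily, which is why intelligent compression pays off most for markup-rich web traffic.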
An application delivery controller is a device that is typically placed in a data center between the firewall and one or more application servers (an area known as the DMZ). First-generation application delivery controllers primarily performed application acceleration and handled load balancing between servers.
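At its core, the load-balancing role reduces to a server-selection policy applied to each incoming request. The sketch below shows simple round-robin selection, one common policy among several; it is purely illustrative and does not reflect how any particular controller implements the feature, and the backend addresses are made up.

```python
# Minimal illustration of round-robin server selection, the core of the
# load-balancing function described above. Backend addresses are hypothetical.
from itertools import cycle

backends = ["10.0.0.11:8080", "10.0.0.12:8080", "10.0.0.13:8080"]
next_backend = cycle(backends).__next__

# Each incoming request is handed to the next server in rotation.
for request_id in range(6):
    print(f"request {request_id} -> {next_backend()}")
```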
The latest generation of application delivery controllers, such as the F5 BIG-IP® product family, handles a much wider variety of functions, including rate shaping and SSL offloading, as well as serving as a web application firewall.
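Rate shaping is commonly built on a token-bucket style policy: tokens accrue at the configured rate, and traffic is admitted only while tokens remain. The sketch below illustrates that general algorithm; it is not the BIG-IP implementation, and the rate and burst values are arbitrary examples.

```python
# Generic token-bucket rate shaper (illustrative only; not how BIG-IP
# implements rate shaping). Tokens accrue at `rate_bps` and each sent byte
# consumes one token; bursts are capped at `burst_bytes`.
import time

class TokenBucket:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8          # bytes added per second
        self.capacity = burst_bytes       # maximum burst size
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, nbytes):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False

bucket = TokenBucket(rate_bps=1_000_000, burst_bytes=64_000)  # example limits
print(bucket.allow(32_000))   # True: within the burst allowance
print(bucket.allow(64_000))   # False: bucket temporarily exhausted
```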
Multiple F5 devices, often deployed across geographically dispersed data centers within the same enterprise, can work in concert because they share a common operating system and control language. This holistic approach is termed application delivery networking.