The Hypertext Transfer Protocol (HTTP), a simple, constrained and ultimately boring application layer protocol, forms the foundation of the World Wide Web. In essence, HTTP enables the retrieval of network-connected resources available across the cyber world, and it has evolved through the decades to deliver a fast, secure and rich medium for digital communication.
This guide highlights the following key aspects of HTTP/2.
What is HTTP/2?
HTTP was originally proposed by Tim Berners-Lee, the pioneer of the World Wide Web, who designed the application protocol with simplicity in mind to perform high-level data communication functions between web servers and clients.
The first documented version of HTTP was released in 1991 as HTTP0.9, which later led to the official introduction and recognition of HTTP1.0 in 1996. HTTP1.1 followed in 1997 and has since received only minor iterative improvements.
In February 2015, the Internet Engineering Task Force (IETF) HTTP Working Group finished revising HTTP and approved the second major version of the application protocol in the form of HTTP/2. In May 2015, the HTTP/2 specification was officially standardized as RFC 7540. HTTP/2 builds on Google's HTTP-compatible SPDY protocol, and the HTTP/2 vs SPDY comparison continues throughout the guide.
What is a Protocol?
The HTTP/2 vs HTTP1 debate should begin with a short primer on the term Protocol, used frequently in this resource. A protocol is a set of rules that govern the data communication mechanisms between clients (for example, web browsers used by internet users to request information) and servers (the machines containing the requested information).
Protocols usually consist of three main parts: Header, Payload and Footer. The Header, placed before the Payload, contains information such as source and destination addresses, along with other details (such as size and type) regarding the Payload. The Payload is the actual information transmitted using the protocol. The Footer follows the Payload and typically carries control data, such as error-checking information, that ensures client-server requests reach the intended recipients and that the Payload is transmitted free of errors.
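As a toy sketch of these three parts (not any real wire format; the field layout is purely illustrative), a packet can be built from a header carrying addresses and payload length, the payload itself, and a footer carrying a checksum the receiver verifies:

```python
import struct
import zlib

def build_packet(src: int, dst: int, payload: bytes) -> bytes:
    """Toy protocol: Header (source, destination, payload length),
    then the Payload, then a Footer with a CRC-32 checksum."""
    header = struct.pack("!HHI", src, dst, len(payload))
    footer = struct.pack("!I", zlib.crc32(payload))
    return header + payload + footer

def parse_packet(packet: bytes) -> bytes:
    """Receiver side: read the header, extract the payload, and use
    the footer checksum to confirm it arrived free of errors."""
    src, dst, length = struct.unpack("!HHI", packet[:8])
    payload = packet[8:8 + length]
    (checksum,) = struct.unpack("!I", packet[8 + length:])
    assert zlib.crc32(payload) == checksum, "payload corrupted in transit"
    return payload

packet = build_packet(src=1, dst=2, payload=b"hello")
print(parse_packet(packet))  # b'hello'
```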
The system is similar to the postal mail service. The letter (Payload) is inserted into an envelope (Header) with the destination address written on it, then sealed with glue and a postage stamp (Footer) before it is dispatched. Transmitting digital information in the form of 1s and 0s is not quite this simple, however, and demands continuous innovation to keep pace with the explosive growth of internet usage.
The HTTP protocol originally comprised two basic commands: GET, to request information from the server, and POST, to send information to the server. This simple and apparently boring set of commands to GET data and POST responses essentially formed the foundation for constructing other network protocols as well. HTTP/2 is yet another move to improve internet user experience and effectiveness, making its implementation a natural step in enhancing an online presence.
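As an illustration, here is what a minimal HTTP/1.1 GET request looks like byte for byte on the wire; the host and path are placeholders:

```python
# A minimal HTTP/1.1 GET request exactly as a client would send it.
# The host name and path are placeholders for illustration.
request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "Connection: close\r\n"
    "\r\n"  # the blank line terminates the header section
)
print(request.encode("ascii"))
```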
Goal of Creating HTTP/2
Since its inception in the early 1990s, HTTP has seen only a few major overhauls. The most recent version, HTTP1.1, has served the cyber world for over 15 years. Web pages in the current era of dynamic information updates, resource-intensive multimedia content formats and intense focus on web performance have placed old protocol technologies in the legacy category. These trends necessitated significant HTTP/2 changes to improve the internet experience.
The primary goal of research and development for a new version of HTTP centered on three qualities rarely associated with a single network protocol without additional networking technologies: simplicity, high performance and robustness. These goals are achieved by introducing capabilities that reduce latency in processing browser requests, using techniques such as multiplexing, compression, request prioritization and server push.
Mechanisms such as flow control, upgrade and error handling work as enhancements to the HTTP protocol for developers to ensure high performance and resilience of web-based applications.
The collective system allows servers to respond efficiently with more content than originally requested by clients, eliminating user intervention to continuously request information until the website is fully loaded in the web browser. For instance, the Server Push capability in HTTP/2 allows servers to respond with a page's full contents, excluding information already available in the browser cache. Efficient compression of HTTP headers minimizes protocol overhead to improve performance with each browser request and server response.
HTTP/2 changes are designed to maintain interoperability and compatibility with HTTP1.1. HTTP/2 advantages are expected to increase over time based on real-world experiments and its ability to address performance related issues in real-world comparison with HTTP1.1 will greatly impact its evolution over the long term.
“…we are not replacing all of HTTP – the methods, status codes, and most of the headers you use today will be the same. Instead, we’re re-defining how it gets used “on the wire” so it’s more efficient, and so that it is more gentle to the internet itself…” – Mark Nottingham, Chair of the IETF HTTP Working Group and member of the W3C TAG. Source
It is important to note that the new HTTP version comes as an extension to its predecessor and is not expected to replace HTTP1.1 anytime soon. HTTP/2 implementation does not enable automatic support for all encryption types available with HTTP1.1, but it definitely opens the door to better alternatives and additional encryption compatibility updates in the near future. However, feature comparisons such as HTTP/2 vs HTTP1 and SPDY vs HTTP/2 present only the latest application protocol as the winner in terms of performance, security and reliability alike.
What Was Wrong With HTTP1.1?
HTTP1.1 was limited to processing only one outstanding request per TCP connection, forcing browsers to use multiple TCP connections to process multiple requests simultaneously.
However, using too many TCP connections in parallel leads to TCP congestion that causes unfair monopolization of network resources. Web browsers using multiple connections to process additional requests occupy a greater share of the available network resources, hence downgrading network performance for other users.
Issuing multiple requests from the browser also causes data duplication on data transmission wires, which in turn requires additional protocols to extract the desired information free of errors at the end-nodes.
The internet industry was naturally forced to hack these constraints with practices such as domain sharding, concatenation, data inlining and spriting, among others. Ineffective use of the underlying TCP connections with HTTP1.1 also leads to poor resource prioritization, causing exponential performance degradation as web applications grow in terms of complexity, functionality and scope.
The web has evolved well beyond the capacity of legacy HTTP-based networking technologies. The core qualities of HTTP1.1 developed over a decade ago have opened the doors to several embarrassing performance and security loopholes.
The Cookie Hack, for instance, allows cybercriminals to reuse a previous working session to compromise account passwords because HTTP1.1 provides no session endpoint-identity facilities. While similar security concerns will continue to haunt HTTP/2, the new application protocol is designed with better security capabilities, such as the improved implementation of new TLS features.
HTTP/2 Feature Upgrades
Multiplexed Streams

A bi-directional sequence of frames exchanged between the server and client over the HTTP/2 protocol is known as a “stream”. Earlier iterations of the HTTP protocol were capable of transmitting only one stream at a time, with some time delay between each stream transmission.
Receiving tons of media content via individual streams sent one by one is both inefficient and resource-consuming. HTTP/2 changes have helped establish a new binary framing layer to address these concerns.
This layer allows client and server to break the HTTP payload down into small, independent and manageable interleaved sequences of frames. This information is then reassembled at the other end.
Binary frame formats enable the exchange of multiple, concurrently open, independent bi-directional sequences without latency between successive streams. This approach presents an array of benefits of HTTP/2 explained below:
- The parallel multiplexed requests and responses do not block each other.
- A single TCP connection is used to ensure effective network resource utilization despite transmitting multiple data streams.
- No need to apply unnecessary optimization hacks – such as image sprites, concatenation and domain sharding, among others – that compromise other areas of network performance.
- Reduced latency, faster web performance, better search engine rankings.
- Reduced OpEx and CapEx in running network and IT resources.
With this capability, data packages from multiple streams are essentially mixed and transmitted over a single TCP connection. These packages are then split at the receiving end and presented as individual data streams. Transmitting multiple parallel requests simultaneously using HTTP version 1.1 or earlier required multiple TCP connections, which inherently bottlenecks overall network performance despite transmitting more data streams at faster rates.
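The mix-and-split mechanism above can be sketched as follows; this toy version assumes simple round-robin interleaving, whereas a real implementation schedules frames by priority and flow-control state:

```python
from collections import defaultdict
from itertools import zip_longest

def multiplex(streams):
    """Interleave frames from several streams round-robin onto one
    'connection', tagging each frame with its stream id."""
    tagged = [[(sid, frame) for frame in frames]
              for sid, frames in streams.items()]
    wire = []
    for column in zip_longest(*tagged):
        wire.extend(item for item in column if item is not None)
    return wire

def demultiplex(wire):
    """Reassemble each stream's payload at the receiving end."""
    out = defaultdict(bytes)
    for sid, frame in wire:
        out[sid] += frame
    return dict(out)

streams = {1: [b"he", b"llo"], 3: [b"wor", b"ld"]}
wire = multiplex(streams)  # frames from both streams share one connection
print(wire)                # [(1, b'he'), (3, b'wor'), (1, b'llo'), (3, b'ld')]
print(demultiplex(wire))   # {1: b'hello', 3: b'world'}
```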
HTTP/2 Server Push
This capability allows the server to send additional cacheable information to the client that isn’t requested but is anticipated in future requests. For example, if the client requests for the resource X and it is understood that the resource Y is referenced with the requested file, the server can choose to push Y along with X instead of waiting for an appropriate client request.
The client places the pushed resource Y into its cache for future use. This mechanism saves a request-respond round trip and reduces network latency. Server Push was originally introduced in Google’s SPDY protocol. Stream identifiers containing pseudo headers such as :path allow the server to initiate the Push for information that must be cacheable. The client must explicitly allow the server to Push cacheable resources with HTTP/2 or terminate pushed streams with a specific stream identifier.
Server Push also proactively updates or invalidates the client’s cache, which is why it is also known as “Cache Push”. A long-term concern centers around the ability of servers to identify possible push-able resources that the client actually does not want.
HTTP/2 implementation presents significant performance gains for pushed resources, with other benefits of HTTP/2 explained below:
- The client saves pushed resources in the cache.
- The client can reuse these cached resources across different pages.
- The server can multiplex pushed resources along with originally requested information within the same TCP connection.
- The server can prioritize pushed resources – a key performance differentiator in HTTP/2 vs HTTP1.
- The client can decline pushed resources to maintain an effective repository of cached resources or disable Server Push entirely.
- The client can also limit the number of pushed streams multiplexed concurrently.
Similar push capabilities were previously approximated with suboptimal techniques such as inlining server responses, whereas Server Push presents a protocol-level solution that avoids the complexity of optimization hacks layered on top of the baseline capabilities of the application protocol itself.
HTTP/2 multiplexes and prioritizes the pushed data stream to ensure better transmission performance, as seen with other request-response data streams. As a built-in security mechanism, the server must be authorized to Push the resources beforehand.
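The push decision can be sketched as below; the resource names and the dependency map are purely illustrative assumptions for what a real server would derive from its knowledge of the page:

```python
# Hypothetical resources; a real server would know these dependencies
# from the page it serves, not from a hard-coded map.
DEPENDENCIES = {"/index.html": ["/style.css", "/app.js"]}
RESOURCES = {
    "/index.html": b"<html>...</html>",
    "/style.css": b"body { /* ... */ }",
    "/app.js": b"console.log('hi');",
}

def handle_request(path):
    """Return the requested resource plus the resources the server
    chooses to push alongside it (the PUSH_PROMISE idea)."""
    pushed = {dep: RESOURCES[dep] for dep in DEPENDENCIES.get(path, [])}
    return RESOURCES[path], pushed

cache = {}
body, pushed = handle_request("/index.html")
cache.update(pushed)  # the client stores pushed resources for later reuse
print(sorted(cache))  # ['/app.js', '/style.css']
```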
Binary Protocol

The latest HTTP version has evolved significantly in terms of capabilities and attributes, such as the transformation from a text protocol to a binary protocol. HTTP1.x used to process text commands to complete request-response cycles. HTTP/2 uses binary commands (in 1s and 0s) to execute the same tasks. This attribute eases complications with framing and simplifies the implementation of commands that were previously confusingly intermixed due to commands containing text and optional spaces.
Although it will probably take more effort to read binary as compared to text commands, it is easier for the network to generate and parse frames available in binary. The actual semantics remain unchanged.
Browsers using HTTP/2 implementation will convert the same text commands into binary before transmitting them over the network. The binary framing layer is not backward compatible with HTTP1.x clients and servers, and it is a key enabler of the significant performance benefits over SPDY and HTTP1.x. Using binary commands enables key business advantages for internet companies and online businesses, as detailed with the benefits of HTTP/2 explained below:
- Low overhead in parsing data – a critical value proposition in HTTP/2 vs HTTP1.
- Less prone to errors.
- Lighter network footprint.
- Effective network resource utilization.
- Eliminating security concerns associated with the textual nature of HTTP1.x such as response splitting attacks.
- Enables other capabilities of the HTTP/2 including compression, multiplexing, prioritization, flow control and effective handling of TLS.
- Compact representation of commands for easier processing and implementation.
- Efficient and robust in terms of processing of data between client and server.
- Reduced network latency and improved throughput.
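To make the binary framing concrete, here is a small sketch of the fixed 9-octet frame header that RFC 7540 places at the front of every HTTP/2 frame: a 24-bit payload length, an 8-bit type, an 8-bit flags field, and a reserved bit plus a 31-bit stream identifier:

```python
import struct

def pack_frame_header(length, ftype, flags, stream_id):
    """Build the 9-byte HTTP/2 frame header per RFC 7540 section 4.1:
    3 bytes length, 1 byte type, 1 byte flags, 4 bytes stream id
    (top bit reserved, so it is masked off)."""
    return struct.pack("!BHBBI", length >> 16, length & 0xFFFF,
                       ftype, flags, stream_id & 0x7FFFFFFF)

def unpack_frame_header(header):
    """Decode the 9-byte header back into its fields."""
    hi, lo, ftype, flags, stream_id = struct.unpack("!BHBBI", header)
    return (hi << 16) | lo, ftype, flags, stream_id & 0x7FFFFFFF

# A DATA frame (type 0x0) with the END_STREAM flag (0x1) on stream 1:
header = pack_frame_header(length=5, ftype=0x0, flags=0x1, stream_id=1)
print(unpack_frame_header(header))  # (5, 0, 1, 1)
```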
Stream Prioritization

HTTP/2 implementation allows the client to assign preference to particular data streams. Although the server is not bound to follow these instructions from the client, the mechanism allows the server to optimize network resource allocation based on end-user requirements.
Stream prioritization works with Dependencies and Weight assigned to each stream. Although all streams are inherently dependent on each other, the dependent streams are also assigned a weight between 1 and 256. The details of stream prioritization mechanisms are still debated.
In the real world however, the server rarely has control over resources such as CPU and database connections. Implementation complexity itself prevents servers from accommodating stream priority requests. Research and development in this area is particularly important for long term success of HTTP/2 since the protocol is capable of processing multiple data streams with a single TCP connection.
This capability can lead to simultaneous arrival of server requests that actually differ in terms of priority from an end-user perspective. Holding off data stream processing requests on a random basis undermines the efficiencies and end-user experience promised by HTTP/2 changes. At the same time, an intelligent and widely adopted stream prioritization mechanism presents benefits of HTTP/2 explained as follows:
- Effective network resource utilization.
- Reduced time to deliver primary content requests.
- Improved page load speed and end-user experience.
- Optimized data communication between client and server.
- Reduced negative effect of network latency concerns.
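Under the weighting scheme described above, sibling streams share connection capacity in proportion to their weights. A minimal sketch (real servers also account for stream dependencies and flow control):

```python
def bandwidth_shares(weights):
    """Split connection capacity among sibling streams in proportion
    to their HTTP/2 weights (each between 1 and 256)."""
    total = sum(weights.values())
    return {name: weight / total for name, weight in weights.items()}

# With twice the weight, the stylesheet gets two thirds of the
# connection and the image one third.
shares = bandwidth_shares({"style.css": 128, "hero.jpg": 64})
print(shares)
```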
Stateful Header Compression
Delivering a high-end web user experience requires websites rich in content and graphics. The HTTP application protocol is stateless, which means each client request must include as much information as the server needs to perform the desired operation. This mechanism causes the data streams to carry multiple repetitive frames of information, so that the server itself does not have to store information from previous client requests.
In the case of websites serving media-rich content, clients push multiple near-identical header frames leading to latency and unnecessary consumption of limited network resource. A prioritized mix of data streams cannot achieve the desired performance standards of parallelism without optimizing this mechanism.
HTTP/2 implementation addresses these concerns with the ability to compress a large number of redundant header frames. It uses the HPACK specification as a simple and secure approach to header compression. Both client and server maintain a list of headers used in previous client-server requests.
HPACK compresses the individual value of each header before it is transferred to the server, which then looks up the encoded information in the list of previously transferred header values to reconstruct the full header information. HPACK header compression for HTTP/2 implementation presents immense performance advantages, including some benefits of HTTP/2 explained below:
- Effective stream prioritization.
- Effective utilization of multiplexing mechanisms.
- Reduced resource overhead – one of the earliest areas of concerns in debates on HTTP/2 vs HTTP1 and HTTP/2 vs SPDY.
- Encodes large headers as well as commonly used headers, which eliminates the need to send the entire header frame itself. The individual transfer size of each data stream shrinks rapidly.
- Not vulnerable to security attacks such as CRIME, which exploit data streams with compressed headers.
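A much-simplified sketch of the indexing idea behind HPACK follows; real HPACK also uses a static table, table size limits and Huffman coding, none of which are modeled here:

```python
class HeaderTable:
    """Toy HPACK-like encoder: both peers keep a table of headers
    already exchanged, so a repeated header can be sent as a small
    index instead of the full name/value text."""

    def __init__(self):
        self.table = []

    def encode(self, headers):
        encoded = []
        for header in headers:
            if header in self.table:
                encoded.append(self.table.index(header))  # index only
            else:
                self.table.append(header)
                encoded.append(header)  # full literal, then indexed
        return encoded

sender = HeaderTable()
first = sender.encode([(":method", "GET"), ("user-agent", "demo")])
second = sender.encode([(":method", "GET"), ("user-agent", "demo")])
print(first)   # full literals on the first request
print(second)  # [0, 1] -- just small indices on the repeat
```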
Similarities With HTTP1.x and SPDY
The underlying application semantics of HTTP, including HTTP status codes, URIs, methods and header fields, remain the same in HTTP/2, the latest iteration of the protocol. HTTP/2 is based on SPDY, Google’s alternative to HTTP1.x. The real differences lie in the mechanisms used to process client-server requests. The following chart identifies a few areas of similarities and improvements among HTTP1.x, SPDY and HTTP/2:
| HTTP1.x | SPDY | HTTP/2 |
| --- | --- | --- |
| SSL not required but recommended. | SSL required. | SSL not required but recommended. |
| Slow encryption. | Fast encryption. | Even faster encryption. |
| One client-server request per TCP connection. | Multiple client-server requests per TCP connection. Occurs on a single host at a time. | Multi-host multiplexing. Occurs on multiple hosts at a single instant. |
| No header compression. | Header compression introduced. | Header compression using improved algorithms that improve performance as well as security. |
| No stream prioritization. | Stream prioritization introduced. | Improved stream prioritization mechanisms used. |
How Does HTTP/2 Work With HTTPS?
HTTPS is used to establish an ultra-secure network connecting computers, machines and servers to process sensitive business and consumer information. Banks processing financial transactions and healthcare institutions maintaining patient records are prime targets for cybercriminal offenses. HTTPS works as an effective layer against persistent cybercrime threats, although not the only security deployment used to ward off sophisticated cyber-attacks infringing high-value corporate networks.
The HTTP/2 browser support includes HTTPS encryption and actually complements the overall security performance of HTTPS deployments. Features such as fewer TLS handshakes, low resource consumption on both client and server sides and improved capabilities in reusing existing web sessions while eliminating vulnerabilities associated with HTTP1.x present HTTP/2 as a key enabler to secure digital communication in sensitive network environments.
HTTPS is not limited to high-profile organizations; cyber security is just as valuable to online business owners, casual bloggers, e-commerce merchants and even social media users. While the HTTP/2 specification does not itself mandate encryption, all major browsers support HTTP/2 only over TLS (and require a modern TLS version), so all online communities, business owners and webmasters must ensure their websites use HTTPS by default.
Usual processes to set up HTTPS include choosing an appropriate web hosting plan; purchasing, activating and installing a security certificate; and finally updating the website to use HTTPS.
The Main Benefits of HTTP/2
Internet speed is not the same across all networks and geographic locations. The increasingly mobile user-base demands seamless high performance internet across all device form factors even though congested cellular networks can’t compete with high speed broadband internet. A completely revamped and overhauled networking and data communication mechanism in the form of HTTP/2 emerged as a viable solution with the following significant advantages.
Web Performance

The term sums up all the advantages of HTTP/2 changes: HTTP/2 benchmark results (see the chapter Performance Benchmark Comparison of HTTPS, SPDY and HTTP/2) demonstrate the performance improvements of HTTP/2 over its predecessors and alternatives alike.
The protocol’s ability to send and receive more data per client-server communication cycle is not an optimization hack but a real, realizable and practical HTTP/2 advantage in terms of performance. The analogy is similar to the idea of vacuum tube trains (Vactrain) in comparison with standard railway: eliminating air resistance from Vactrain tunnels allows the vehicle to travel faster and carry more passengers with improved utilization of the available channels without having to focus on installing bigger engines, reducing weight and making the vehicle more aerodynamic.
Technologies such as Multiplexing create additional space to carry and transmit more data simultaneously – like the double-deck seating compartments of an Airbus A380.
And what happens when the data communication mechanism eliminates all hurdles to improve web performance? The byproduct of superior website performance includes increased customer satisfaction, better search engine optimization, high productivity and resource utilization, expanding user-base, better sales figures and a whole lot more.
Fortunately, adopting the HTTP/2 is far more practical than creating vacuum chambers for large multistory locomotives.
Mobile Web Performance
Millions of internet users access the web from their mobile devices as a primary gateway to the cyber world. The Post-PC era has fueled smartphone adoption, with users accessing web-based services from the palm of their hands and performing most mundane computing tasks on the go instead of sitting in front of desktop computers for prolonged periods of time.
HTTP/2 is designed in the context of present-day web usage trends. Capabilities such as multiplexing and header compression work well to reduce latency in accessing internet services across mobile data networks that offer limited bandwidth per user. HTTP/2 optimizes the web experience for mobile users with high performance and security previously attributed only to desktop internet usage. Its advantages for mobile users promise an immediate positive impact on the way online businesses target customers in the cyber world.
Cheaper High Speed Internet

The cost of internet access has plunged rapidly since the dawn of the World Wide Web. Expanding web access and rising internet speed were always the aim of advancements in internet technologies. Meanwhile, cost improvements appear to have bottlenecked, especially considering the allegations surrounding the monopoly of telecom service providers.
HTTP/2's promise of increased throughput and enhanced data communication efficiency will allow telecom providers to shrink operational expenses while maintaining the standards of high speed internet. The reduced OpEx will encourage service providers to slash pricing for the low-end market and introduce high speed service tiers within the existing pricing model.
Expansive Reach

Densely populated Asian and African markets remain underserved, with limited access to affordable internet. Internet service providers focus their investments on yielding the highest returns from services offered only in urban and developed locations. HTTP/2 advantages leading to large-scale adoption of the advanced application protocol will naturally reduce network congestion, sparing resources and bandwidth for distant, underserved geographic locations.
Media Rich Experience
The modern web experience is all about delivering media-rich content at lightning-fast page load speeds. Internet users ostensibly demand media-rich content and services updated on a regular basis. The cost of the underlying infrastructure, even when delivered via the cloud as a subscription-based solution, is not always affordable for internet startup firms. HTTP/2 advantages and technology features such as Header Compression may not shrink actual file sizes, but they do shave a few bytes of overhead from transmitting resource-consuming, media-rich content between clients and servers.
Improved Mobile Experience
Progressive online businesses follow a Mobile-First strategy to effectively target the exploding mobile user-base. Mobile device hardware limitations are perhaps the biggest constraint on the mobile web experience, impacted by the extended time taken to process browser requests. HTTP/2 cuts load times and mobile network latency to manageable levels.
Improved Technology Utilization
Resource consumption has increased significantly for client and server processing browser requests to deliver media-rich social media content and complex web designs. Although web developers have worked around appropriate optimization hacks, a robust and reliable solution in the form of HTTP/2 was inevitable. Features such as Header Compression, Server Push, Stream Dependencies and Multiplexing all contribute toward improved network utilization as a key HTTP/2 advantage.
Improved Security

HTTP/2 advantages extend beyond performance, as the HPACK algorithm allows HTTP/2 to circumvent prevalent security threats targeting text-based application layer protocols. HTTP/2 carries commands in binary and enables compression of the HTTP header metadata, following a ‘security by obscurity’ approach to protecting sensitive data transmitted between clients and servers. The protocol also boasts full support for encryption and, when used over TLS, requires an improved version of Transport Layer Security (TLS1.2) for better data protection.
Innovation

HTTP/2 embodies innovation and the concept of the high performance web. HTTP/2 underpins the cyber world as we know it today; its changes are primarily based on Google’s SPDY protocol, which took giant leaps ahead of the aging HTTP1.x versions, and it will almost entirely replace SPDY as well as all previous HTTP iterations in the near future. Freedom from complex web optimization hacks also presents HTTP/2 as a viable solution for web developers to produce high performance websites and online services.
HTTP/2 SEO Advantage
The discipline of SEO marketing lies somewhere between art and science. Traditional black hat SEO practices fail to manipulate search engine rankings governed by the increasingly complex proprietary algorithms of popular search engines, so online businesses need to evolve their marketing tactics accordingly. Smarter investments come in the form of thoroughly well-designed websites, not just optimized for speed but built for superior performance, security and user experience from the ground up. These attributes are preferred as means to return search queries with the most accurate information and services, conveniently accessible to a global target audience.
Standardized industry processes for search engine optimization go beyond front-end marketing tactics and encompass the entire lifecycle of client-server communication. SEOs that were once a staple of internet marketing teams no longer enjoy the same positions since the advent of the latest digital communication technologies. Among these, the prevalence of HTTP/2 marks a key tectonic shift forcing web developers and marketers back to the drawing board.
Implementing and optimizing the infrastructure for HTTP/2, with its promising performance advantages, is now a critical enabler of search engine optimization. Online businesses lacking an adequate organic user-base cannot afford to neglect HTTP/2 and the resulting SEO boost as they compete with ever-growing online business empires on the grounds of innovation and high-value online services, which rank even higher with server-side HTTP/2 implementation.
Performance Benchmark Comparison of HTTPS, SPDY and HTTP/2
The following performance benchmark comparisons between HTTPS, SPDY and HTTP/2 portray a clear picture of web performance improvements with the latest application protocol.
HTTP/2 benchmark results confirm that header compression, server push and the other mechanisms used specifically to enhance page speed and user experience consistently deliver in the real world:
Test details: a test comparing HTTPS, SPDY3.1 and HTTP/2 presents the following results:
- Size of client request and server response headers: HTTP/2 benchmarks demonstrate how the header compression mechanism shrinks header size significantly, whereas SPDY only shrinks the header used in the server response for this particular request. HTTPS shrinks header size in neither the request nor the response.
- Size of server response message: Although HTTP/2 server response was larger in size, it provides stronger encryption for improved security as a key tradeoff.
- Number of TCP connections used: HTTP/2 and SPDY use fewer network resources by processing multiple concurrent requests (multiplexing) and therefore reduce latency.
- Page Load Speed: HTTP/2 was consistently faster than SPDY. HTTPS was significantly slower due to its lack of header compression and server push capabilities.
Gearing Up for a Better Internet: HTTP/2 Browser Support and Availability
HTTP/2 is already available with adequate web server, browser and mobile support. Technologies running HTTP1.x are not compromised by implementing HTTP/2 for your website, but they require a quick update to support the new protocol. You can think of networking protocols as spoken languages: communicating in a new language is only possible as long as both sides adequately understand it. Similarly, the client and server must both be updated to support data communication using the HTTP/2 protocol.
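In practice, client and server agree on the protocol during the TLS handshake via ALPN (Application-Layer Protocol Negotiation). A minimal sketch using Python's standard library, with no network connection actually opened:

```python
import ssl

# HTTP/2 support is negotiated during the TLS handshake via ALPN:
# the client advertises the protocols it speaks, and the server picks
# the best one both sides understand. An HTTP/1.1-only peer simply
# never selects "h2", so nothing breaks.
context = ssl.create_default_context()
context.set_alpn_protocols(["h2", "http/1.1"])  # prefer HTTP/2, fall back

# After a real handshake, the wrapped socket's
# selected_alpn_protocol() would return "h2" when both ends
# support HTTP/2.
```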
Internet consumers don’t need to worry about configuring their desktop and mobile web browsers to support HTTP/2. Google Chrome and Firefox have supported the technology for years, and Apple added HTTP/2 browser support with Safari 9 in 2015. Internet Explorer 11 supports the latest application protocol on Windows 10.
Major mobile web browsers including Android’s aptly named Browser, Chrome for Android and iOS, as well as Safari in iOS 8 and above support HTTP/2 for mobile web access. Internet users are advised to install the latest stable releases of mobile and desktop web browsers to experience the maximum performance and security advantages of the application protocol as seen in HTTP/2 benchmarks.
Web Server Support: Apache and Nginx
Online service providers running servers on-premise or in the cloud will have to update and configure web servers to add support for HTTP/2. At Kinsta we’ve already modified our servers accordingly of course! Considering the spoken language analogy described earlier, internet visitors accessing information delivered from these servers can only use HTTP/2 as long as the web server is updated and configured for this purpose.
Nginx offers native support for HTTP/2 since version 1.9.5, whereas Apache servers add HTTP/2 support through the mod_http2 module shipped with Apache 2.4.17 and later. Its predecessor, the mod_spdy module, was developed by Google to support SPDY features such as multiplexing and header compression on Apache 2.2 servers, and that software was later donated to the Apache Software Foundation.
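As an illustrative sketch, enabling HTTP/2 in nginx (1.9.5 or later, built with ALPN-capable OpenSSL) takes one extra token on the listen directive; the server name and certificate paths below are placeholders:

```nginx
server {
    # "http2" upgrades this TLS listener to HTTP/2 for capable clients;
    # older browsers transparently fall back to HTTP/1.1 via ALPN.
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate     /etc/ssl/certs/example.com.pem;
    ssl_certificate_key /etc/ssl/private/example.com.key;
}
```

Reload the configuration after editing, and verify support with your browser's developer tools (the protocol column shows "h2").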