What problems do HTTP/1, HTTP/2, and HTTP/3 solve?

The diagram below summarizes the main characteristics of each generation of HTTP.

[Diagram: main characteristics of HTTP/1, HTTP/2, and HTTP/3]

HTTP/1

HTTP 1.0 was finalized and fully documented in 1996. Each request to the same server requires a separate TCP connection.

HTTP 1.1 was released in 1997. TCP connections can be kept open for reuse (persistent connections), but this does not solve the HOL (Head of Line) blocking problem.

HOL blocking - When the browser's limit on parallel connections to a host is exhausted, subsequent requests must wait for earlier ones to complete.
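
As a concrete illustration, here is a minimal Go sketch (the https://example.com URL is a placeholder) that keeps an HTTP/1.1 connection open for reuse and caps parallel connections per host the way a browser does; once the cap is reached, additional requests queue behind in-flight ones.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	transport := &http.Transport{
		// Typical browsers open at most ~6 parallel connections per host;
		// once the cap is reached, further requests wait for an earlier one
		// to finish (HTTP/1.1-style head-of-line blocking).
		MaxConnsPerHost:     6,
		MaxIdleConnsPerHost: 6,
		// A non-nil empty TLSNextProto map disables HTTP/2, so this sketch
		// stays on HTTP/1.1 with keep-alive (persistent) connections.
		TLSNextProto: map[string]func(string, *tls.Conn) http.RoundTripper{},
	}
	client := &http.Client{Transport: transport}

	for i := 0; i < 3; i++ {
		resp, err := client.Get("https://example.com/") // placeholder URL
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		io.Copy(io.Discard, resp.Body) // drain the body so the connection can be reused
		resp.Body.Close()
		fmt.Println("request", i, "used", resp.Proto)
	}
}
```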

HTTP/2

HTTP 2.0 was released in 2015. Request multiplexing eliminates HOL blocking at the application layer, but HOL blocking still exists at the transport (TCP) layer.

As shown in the figure, HTTP 2.0 introduces the concept of HTTP "streams": an abstraction that allows different HTTP exchanges to be multiplexed over the same TCP connection. Streams do not need to be sent in order.
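
For example, Go's standard net/http client negotiates HTTP/2 via TLS ALPN when the server supports it, and concurrent requests to the same host are then carried as independent streams over one TCP connection. The sketch below (the host and resource paths are placeholders) fires several requests in parallel and prints the negotiated protocol.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"sync"
)

func main() {
	// The default transport negotiates HTTP/2 over TLS when the server
	// supports it, so these concurrent requests become independent streams
	// on a single shared TCP connection rather than queueing as in HTTP/1.1.
	client := &http.Client{}
	paths := []string{"/style.css", "/app.js", "/logo.png"} // hypothetical resources

	var wg sync.WaitGroup
	for _, p := range paths {
		wg.Add(1)
		go func(path string) {
			defer wg.Done()
			resp, err := client.Get("https://example.com" + path) // placeholder host
			if err != nil {
				fmt.Println(path, "failed:", err)
				return
			}
			defer resp.Body.Close()
			io.Copy(io.Discard, resp.Body)
			// resp.Proto reports "HTTP/2.0" when the exchange used a stream
			// on a shared HTTP/2 connection.
			fmt.Println(path, "served over", resp.Proto)
		}(p)
	}
	wg.Wait()
}
```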

Application scenarios:

  • Large websites: HTTP/2 multiplexing lets many requests share one connection, avoiding the head-of-line blocking of HTTP/1.1. This suits complex pages that load a large number of resources such as images, scripts, and style sheets.
  • CDNs: HTTP/2 header compression and binary framing significantly reduce the amount of data on the wire and improve transmission efficiency. More efficient connection multiplexing gives CDNs better performance when delivering large files or streaming media.
  • Mobile applications: HTTP/2 noticeably reduces network latency on mobile devices, making it a good fit for mobile apps and API requests that need fast responses.

HTTP/3

The first draft of HTTP 3.0 was released in 2020. As the successor to HTTP 2.0, it uses QUIC instead of TCP as the underlying transport protocol, eliminating HOL blocking at the transport layer.

QUIC is based on UDP and introduces streams as first-class citizens at the transport layer. QUIC streams share the same QUIC connection, so creating a new stream does not require additional handshakes or slow starts. At the same time, streams are delivered independently, so in most cases packet loss affecting one stream does not affect the others.
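
A hedged sketch of an HTTP/3 client in Go follows. The standard library does not ship an HTTP/3 transport, so this example assumes the third-party quic-go library; the package path, the RoundTripper type name, and the test endpoint are assumptions and may differ between versions.

```go
package main

import (
	"fmt"
	"io"
	"net/http"

	"github.com/quic-go/quic-go/http3" // third-party library (assumption)
)

func main() {
	// HTTP/3 runs over QUIC (UDP), so the TCP-based default transport cannot
	// be used. Here quic-go's HTTP/3 round tripper is plugged into a regular
	// http.Client; the exact type name may vary across quic-go versions.
	rt := &http3.RoundTripper{}
	defer rt.Close()

	client := &http.Client{Transport: rt}
	resp, err := client.Get("https://cloudflare-quic.com/") // public HTTP/3 test endpoint (assumption)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	io.Copy(io.Discard, resp.Body)
	fmt.Println("protocol:", resp.Proto) // expected to report HTTP/3
}
```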

Application scenarios:

  • Real-time applications and games: HTTP/3's fast handshake and low latency make it ideal for applications that require real-time data transfer, such as online gaming, video conferencing, and live streaming.
  • Modern web applications: HTTP/3 provides more efficient connection management and a better user experience, making it a strong choice for modern web applications such as SPAs (single-page applications) and services that issue frequent small requests.
  • Services with higher security requirements: The QUIC protocol comes with encryption and simplifies the TLS handshake process, so HTTP/3 is suitable for services that require fast and secure connection establishment.
