What problems do HTTP/1, HTTP/2, and HTTP/3 solve?

What problems does each generation of HTTP solve? The following diagram illustrates the main characteristics.

[Diagram: main characteristics of HTTP/1, HTTP/2, and HTTP/3]

HTTP/1

HTTP/1.0 was finalized and fully documented in 1996. Each request to the same server requires a separate TCP connection.

HTTP/1.1 was released in 1997. TCP connections can be kept open for reuse (persistent connections), but this does not solve the HOL (head-of-line) blocking problem.

HOL blocking: once the number of parallel requests the browser allows is exhausted, subsequent requests must wait for earlier ones to complete.
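A minimal Python sketch of the practical difference, assuming the third-party requests library and a hypothetical example.com endpoint: standalone calls behave roughly like HTTP/1.0's one-connection-per-request model, while an HTTP/1.1 Session keeps the TCP connection open and reuses it.

```python
import requests

URL = "https://example.com/"  # hypothetical endpoint, for illustration only

# Standalone calls: each requests.get() builds its own connection pool,
# so every request pays TCP (and TLS) setup cost -- similar in spirit to
# HTTP/1.0's one-connection-per-request behavior.
for _ in range(3):
    requests.get(URL)

# HTTP/1.1 persistent connections: a Session keeps the underlying TCP
# connection open, so only the first request pays the setup cost and the
# rest reuse the same connection.
with requests.Session() as session:
    for _ in range(3):
        resp = session.get(URL)
        print(resp.status_code)
```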

HTTP/2

HTTP/2.0 was released in 2015. Request multiplexing eliminates HOL blocking at the application layer, but HOL blocking still exists at the transport (TCP) layer.

As shown in the figure, HTTP/2.0 introduces the concept of HTTP "streams": an abstraction that allows different HTTP exchanges to be multiplexed over the same TCP connection. Streams do not need to be sent in order.
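A rough client-side sketch of stream multiplexing, assuming the third-party httpx library installed with HTTP/2 support (httpx[http2]) and an HTTP/2-capable server at a hypothetical URL: several requests are issued concurrently, yet they can be carried as independent streams over one TCP connection.

```python
import asyncio

import httpx

URL = "https://example.com/"  # hypothetical HTTP/2-capable endpoint

async def main() -> None:
    # http2=True lets the client negotiate HTTP/2 via ALPN during the TLS
    # handshake; requests made through this client can then be multiplexed
    # as separate streams on a single TCP connection.
    async with httpx.AsyncClient(http2=True) as client:
        responses = await asyncio.gather(*(client.get(URL) for _ in range(5)))
        for resp in responses:
            # http_version reports the negotiated protocol, e.g. "HTTP/2".
            print(resp.http_version, resp.status_code)

asyncio.run(main())
```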

Application scenarios:

  • Large websites: HTTP/2 multiplexing lets multiple requests share one connection, avoiding the head-of-line blocking problem of HTTP/1.1. This suits complex pages that load many resources (images, scripts, style sheets, etc.).
  • CDNs: HTTP/2 header compression and binary framing significantly reduce the amount of data on the wire and improve transmission efficiency. More effective connection reuse gives CDNs better performance when serving large files or streaming media.
  • Mobile applications: HTTP/2 noticeably reduces network latency on mobile devices, making it a good fit for mobile apps and API requests that need fast responses.

HTTP/3

The first draft of HTTP/3.0 was released in 2020. As the successor to HTTP/2.0, it uses QUIC instead of TCP as the underlying transport protocol, eliminating HOL blocking at the transport layer.

QUIC is based on UDP and introduces streams as first-class citizens at the transport layer. QUIC streams share the same QUIC connection, so creating a new stream does not require an additional handshake or slow start. At the same time, streams are delivered independently, so in most cases packet loss affecting one stream does not affect the others.
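A toy model (not real protocol code) makes the difference concrete: TCP delivers a single ordered byte stream, so one lost segment stalls everything queued behind it, while QUIC orders each stream independently, so a loss only stalls the stream it belongs to.

```python
# Each packet is (stream_id, sequence_number); pretend ('a', 2) is dropped
# on the first pass and must be retransmitted.
packets = [("a", 1), ("a", 2), ("a", 3), ("b", 1), ("b", 2)]
lost = {("a", 2)}

# TCP-like: all streams share one ordered sequence, so nothing behind the
# hole can be delivered to the application until the retransmission arrives.
delivered_tcp = []
for pkt in packets:
    if pkt in lost:
        break  # head-of-line blocking at the transport layer
    delivered_tcp.append(pkt)

# QUIC-like: ordering is per stream, so the hole in stream 'a' only blocks
# the rest of stream 'a'; stream 'b' is delivered immediately.
delivered_quic = []
blocked_streams = set()
for stream, seq in packets:
    if (stream, seq) in lost or stream in blocked_streams:
        blocked_streams.add(stream)
        continue
    delivered_quic.append((stream, seq))

print("TCP-like delivery :", delivered_tcp)   # only ('a', 1)
print("QUIC-like delivery:", delivered_quic)  # ('a', 1), ('b', 1), ('b', 2)
```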

Application scenarios:

  • Real-time applications and games: HTTP/3's fast handshake and low latency make it ideal for applications that need real-time data transfer, such as online gaming, video conferencing, and live streaming.
  • Modern web applications: HTTP/3's more efficient connection management and better user experience make it a strong choice for modern web applications and service providers, such as single-page applications (SPAs) and workloads with frequent small requests.
  • Services with higher security requirements: QUIC has encryption built in and simplifies the TLS handshake, so HTTP/3 suits services that must establish connections quickly and securely.
