Let's talk about the gRPC you don't know yet

Hello everyone, I am Zhibeijun. It is the last day of another work week, but don't slack off just yet.

Endure hardship now and you will drive a Land Rover; slack off while you are young and you will grow up driving a Xiali.

Next, let's get to the point~

Introduction

I believe most of you have some understanding of the RPC protocol and have used it in projects to some degree, but, like me, you may have been calling the plug-ins wrapped by your platform without knowing much about the underlying principles. Today I would like to take this opportunity to share the RPC framework I have recently been working with, gRPC, and talk about the things you use every day but never quite knew the reasons for.

Overview

RPC (Remote Procedure Call) is a protocol that lets a program on one machine invoke a procedure on a remote machine over the network, so the two machines can exchange data as if the call were local. It can be used without understanding the underlying network details, which makes development easier and service-to-service interaction more efficient.

To make development easier, many frameworks implement the RPC protocol, such as Thrift, Dubbo, and gRPC; the last of these is the subject of this article.

What is gRPC

  • gRPC is a cross-platform (cross-language), high-performance, open-source, general-purpose RPC framework developed by Google.
  • It is built on the HTTP/2 protocol, so the client and server can keep a long-lived connection and exchange data as a binary (byte) stream; a channel-configuration sketch follows the interaction flow below.
  • Client-server interaction process:

The client (gRPC Stub) calls method A to initiate an RPC request

The request content is serialized and compressed with Protobuf

The server (gRPC Server) receives the request, parses the request content, and returns a response after business processing

The response is likewise serialized and compressed with Protobuf

The client receives the response, parses the response content, and the interaction is complete
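
Because all calls are multiplexed over that single long-lived HTTP/2 connection, the client-side channel can be tuned to keep the connection alive between calls. The following is only a minimal sketch, not part of the case below; the address, port, and keep-alive values are arbitrary assumptions.

import java.util.concurrent.TimeUnit;
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

public class ChannelDemo {
    public static void main(String[] args) {
        // One channel = one long-lived HTTP/2 connection that carries many RPCs
        ManagedChannel channel = ManagedChannelBuilder.forAddress("localhost", 9999)
                .usePlaintext()
                // send HTTP/2 PING frames so the idle connection is not torn down
                .keepAliveTime(30, TimeUnit.SECONDS)
                .keepAliveTimeout(10, TimeUnit.SECONDS)
                .keepAliveWithoutCalls(true)
                .build();
        // ... create stubs from the channel and make calls ...
        channel.shutdown();
    }
}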

Practical Cases

The Java version is used for the demonstration here; other languages are similar and you can try them yourself.

POM Dependencies

  • gRPC officially provides a complete dependency configuration that can be referenced directly (the dependencies include the code-generation plug-in). The versions below are for reference only; other versions can also be used.

<!-- gRPC configuration -->
<dependency>
    <groupId>io.grpc</groupId>
    <artifactId>grpc-netty-shaded</artifactId>
    <version>1.29.0</version>
</dependency>
<dependency>
    <groupId>io.grpc</groupId>
    <artifactId>grpc-protobuf</artifactId>
    <version>1.29.0</version>
</dependency>
<dependency>
    <groupId>io.grpc</groupId>
    <artifactId>grpc-services</artifactId>
    <version>1.29.0</version>
</dependency>
<dependency>
    <groupId>io.grpc</groupId>
    <artifactId>grpc-stub</artifactId>
    <version>1.29.0</version>
</dependency>

<!-- proto plugin -->
<plugins>
    <plugin>
        <groupId>org.xolstice.maven.plugins</groupId>
        <artifactId>protobuf-maven-plugin</artifactId>
        <version>0.6.1</version>
        <configuration>
            <protocArtifact>com.google.protobuf:protoc:3.11.0:exe:${os.detected.classifier}</protocArtifact>
            <pluginId>grpc-java</pluginId>
            <pluginArtifact>io.grpc:protoc-gen-grpc-java:1.29.0:exe:${os.detected.classifier}</pluginArtifact>
        </configuration>
        <executions>
            <execution>
                <goals>
                    <goal>compile</goal>
                    <goal>compile-custom</goal>
                </goals>
            </execution>
        </executions>
    </plugin>
</plugins>
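
One thing worth noting: the ${os.detected.classifier} property used in the plugin configuration is normally supplied by the os-maven-plugin build extension; without it, Maven cannot resolve the platform-specific protoc artifact. A minimal sketch of that extension block follows (the version number is only an assumption and should be checked against the plugin documentation):

<build>
    <extensions>
        <extension>
            <groupId>kr.motd.maven</groupId>
            <artifactId>os-maven-plugin</artifactId>
            <version>1.6.2</version>
        </extension>
    </extensions>
</build>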

Writing protobuf files

  • The proto3 syntax is used here. Note that the plugin expects a fixed directory structure for the proto files (by default src/main/proto/*.proto); otherwise compilation will fail.
  • proto files follow a fixed format; the details are easy to look up online.

 syntax = "proto3" ;
// Package path
option java_package = "com.greatom.dockerdemo.rule" ;
option java_multiple_files = true ;
package rule ;
// Declare services and methods
service RuleService {
// Query and update rules
rpc getArchivesDic ( RuleRequest ) returns ( RuleResponse ) ;
// Get the current rule dictionary
rpc getRule ( Request ) returns ( Response ) ;
}
// Define the request object
message RuleRequest {
// message RuleRPCDTO {
// int32 ruleCode = 1 ;
// string administrativeCost = 2 ;
// }
Response ruleRPCDTO = 1 ;
int32 basicId = 2 ;
}
// Define the response object
message RuleResponse {
int32 id = 1 ;
}
message Request {
}
// Define the response message
message Response {
int32 ruleCode = 1 ;
string administrativeCost = 2 ;
}
  • Compile with the Maven plugin by double-clicking the goals in the IDE: protobuf:compile generates the message beans (maven -> Plugins -> protobuf -> protobuf:compile) and protobuf:compile-custom generates the service stubs (maven -> Plugins -> protobuf -> protobuf:compile-custom); the same goals can also be run from the command line.
  • Here only the protobuf:compile goal was executed; the generated Java files were then taken from the target directory (\target\generated-sources\protobuf) and copied into the project's source directory.

Writing interface implementation classes

  • After compilation, the RuleServiceGrpc class is generated. The next step is to write the logic for your own business needs. The two methods defined here are getArchivesDic (update rules) and getRule (query rules); the implementation is as follows.

// Inherits from RuleServiceGrpc.RuleServiceImplBase
// and implements the concrete logic of the interface
@Component
public class RuleGRPCServer extends RuleServiceGrpc.RuleServiceImplBase {

    // Update the rule dictionary
    @Override
    public void getArchivesDic(RuleRequest request, StreamObserver<RuleResponse> responseObserver) {
        Response ruleRPCDTO = request.getRuleRPCDTO();
        RuleDTO ruleDTO = new RuleDTO();
        BeanUtils.copyProperties(ruleRPCDTO, ruleDTO);
        RuleResponse ruleResponse = RuleResponse.newBuilder().setId(1).build();
        responseObserver.onNext(ruleResponse);
        responseObserver.onCompleted();
    }

    // Query the rule dictionary
    @Override
    public void getRule(Request request, StreamObserver<Response> responseObserver) {
        Response response = Response.newBuilder()
                .setRuleCode(1)
                .setAdministrativeCost("2222")
                .build();
        responseObserver.onNext(response);
        responseObserver.onCompleted();
    }
}

Server and Client

  • Server startup class

public static void main(String[] args) throws Exception {
    int port = 9999;
    // Register the service implementation and start the server
    Server server = ServerBuilder.forPort(port)
            .addService(new RuleGRPCServer())
            .build()
            .start();
    System.out.println(String.format("GRpc server started successfully, port number: %d.", port));
    server.awaitTermination();
}

Log --- GRpc server started successfully, port number: 9999.
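
A side note: the grpc-services dependency declared in the POM includes, among other things, the gRPC server reflection service. Registering it is optional, but it lets generic tooling discover the service definitions at runtime. Below is only a hedged sketch of how the server above could register it; everything else stays the same.

import io.grpc.Server;
import io.grpc.ServerBuilder;
import io.grpc.protobuf.services.ProtoReflectionService;

public class ReflectiveServerDemo {
    public static void main(String[] args) throws Exception {
        Server server = ServerBuilder.forPort(9999)
                .addService(new RuleGRPCServer())
                // optional: expose server reflection so CLI tools can inspect RuleService
                .addService(ProtoReflectionService.newInstance())
                .build()
                .start();
        server.awaitTermination();
    }
}
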
  • Client startup class

public static void main(String[] args) throws Exception {
    // 1. Open a communication channel to the server
    ManagedChannel managedChannel = ManagedChannelBuilder.forAddress("localhost", 9999).usePlaintext().build();
    try {
        // 2. Get the stub (proxy) object
        RuleServiceGrpc.RuleServiceBlockingStub rpcDateService = RuleServiceGrpc.newBlockingStub(managedChannel);
        Request rpcDateRequest = Request
                .newBuilder()
                .build();
        // 3. Make the request
        Response rpcDateResponse = rpcDateService.getRule(rpcDateRequest);
        // 4. Print the result
        System.out.println(rpcDateResponse.getRuleCode());
    } finally {
        // 5. Close the channel and release resources
        managedChannel.shutdown();
    }
}

log:
- 16:05:44.628 [grpc-nio-worker-ELG-1-2] DEBUG io.grpc.netty.shaded.io.grpc.netty.NettyClientHandler - [id: 0x8447cc92, L:/127.0.0.1:60973 - R:localhost/127.0.0.1:9999] INBOUND DATA: streamId=3 padding=0 endStream=false length=12 bytes=0000000007086f1203323232
- 16:05:44.648 [grpc-nio-worker-ELG-1-2] DEBUG io.grpc.netty.shaded.io.grpc.netty.NettyClientHandler - [id: 0x8447cc92, L:/127.0.0.1:60973 - R:localhost/127.0.0.1:9999] INBOUND HEADERS: streamId=3 headers=GrpcHttp2ResponseHeaders[grpc-status: 0] padding=0 endStream=true
- Output result -----111
- 16:05:44.664 [grpc-nio-worker-ELG-1-2] DEBUG io.grpc.netty.shaded.io.grpc.netty.NettyClientHandler - [id: 0x8447cc92, L:/127.0.0.1:60973 - R:localhost/127.0.0.1:9999] OUTBOUND GO_AWAY: lastStreamId=0 errorCode=0 length=0 bytes=

In the client log, the INBOUND DATA frame carries the length-prefixed, Protobuf-encoded response, the INBOUND HEADERS frame with grpc-status: 0 indicates the call completed successfully, and the final OUTBOUND GO_AWAY frame is the client shutting down the channel. In other words, the client successfully called the server through gRPC and received the result.

Conclusion

gRPC essentially follows the traditional client/server (C/S) model, which keeps the roles clear and easy to understand.

Another smart choice is that it builds on HTTP/2 instead of a home-grown protocol, which keeps it aligned with how the network stack is evolving and lowers the barrier to adoption.

The most troublesome part is writing and using the proto files: it requires the plugin and extra dependencies, and the workflow is relatively involved, though there may be tools or scripts that simplify it. Still, the generated code is genuinely good~ it takes a fair amount of work off your hands.
