OpenSource For You

An Introduction to gRPC

Remote Procedure Call (RPC) has been around for decades as a mechanism for interprocess communication in applications. The client making a call talks to a stub that works behind the scenes to handle the marshalling/unmarshalling of the request/response data.


The last decade has seen the rise of public APIs and standards like REST for building HTTP based services. Both XML and JSON are frequently used as the data format for these service methods’ request and response data. With the proliferation of these services and the emerging trend towards micro-services, latency becomes an important consideration when creating high performance and efficient services.

To address this, Google had been working for a while to come up with a specification that defines both service interfaces and efficient communication protocols, along with bindings in multiple languages for both the client and server sides. The company later made this available as open source to the general public and the gRPC Project was born. The project helps to implement services in multiple languages with pluggable support for load balancing, health checking and authentication.

The gRPC model is shown in Figure 1.

The service, its methods and the messages are defined via a separate specification known as ProtocolBuffers. This specification is then implemented by the server in one of the supported languages. Most modern programming languages like Java, Python, C#, Node.js and others are supported. You can either generate the server bindings via the tools provided or even dynamically provide the implementation.

On the client side, you can generate the gRPC stub in a client language of your choice, and use that stub directly in your client to invoke the service.

Why use gRPC?

gRPC has several advantages that are worth considering in your application. Some of them are:

It is built on HTTP/2, which provides us with a high speed communication protocol that can take advantage of bi-directional streaming, multiplexing and more.

The latest version of ProtocolBuffers, i.e., version 3, supports many more languages.

The ProtocolBuffers data has a binary format and, hence, provides a much smaller footprint than the JSON/XML payloads that are currently the most popular. This can make a big difference when latency is an issue in your API.
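To get a feel for the size difference, here is a rough sketch of our own (not taken from the article's service): encoding the tiny message {id: 1} as JSON takes eight bytes, while the equivalent proto3 wire encoding is just two bytes, a key byte for field 1 followed by the varint value.

```javascript
// JSON encoding of a tiny message:
const json = JSON.stringify({ id: 1 });   // '{"id":1}'
console.log(Buffer.byteLength(json));     // 8 bytes

// The same message in proto3 wire format for a field declared "int32 id = 1":
// 0x08 = field number 1 with wire type 0 (varint); 0x01 = the value.
const protoBytes = Buffer.from([0x08, 0x01]);
console.log(protoBytes.length);           // 2 bytes
```

The gap only widens for larger messages, since JSON repeats field names as strings in every payload while protobuf sends only the numeric keys.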

gRPC and CNCF

gRPC was released by Google more than a year back and since then, there has been a lot of momentum towards getting the industry to consider this as the glue for microservices communication, especially when latency is a key factor. Recently, it got adopted as a top level project in the Cloud Native Computing Foundation (CNCF), which is a big move towards the wider adoption of gRPC. It joined the likes of Kubernetes, Prometheus and others that are considered to be the foundation of a cloud native application.

Wire Protocol - ProtocolBuffers

ProtocolBuffers is both an interface definition language and a message format. It is primarily used to define the service interface, i.e., what the service methods are and the messages that are exchanged. The message formats are also defined via their individual parts.

This protocol definition file can then be used to generate both the server side and client side bindings, or you can dynamically load the definition and provide the implementations. ProtocolBuffers is currently available in two versions: proto2 and proto3, but we will go with the latter, which is the latest, since it has support for a wide range of languages.

The best way to understand ProtocolBuffers is via an example that we will build over the course of this article. This will include the server and client implementations.

We are going to build a service called OSFY Stats.

This service provides two methods: one to retrieve the top articles from the OSFY website and another to retrieve the stats on a particular article, like the number of views, likes and so on. We are not going to provide a real implementation; this exercise is more about getting an idea of how to build the service specification and message formats as per the ProtocolBuffers standard.

The proto file (osfy_stats.proto) is shown below:

syntax = "proto3";
package osfy_stats;

//Service: define the methods that the gRPC server can expose to the client
service OSFYStatsService {
  rpc toparticles (TopArticlesRequest) returns (TopArticlesResponse);
  rpc articleStats (ArticleStatsRequest) returns (ArticleStatsResponse);
}

//Message type definition for TopArticlesRequest
message TopArticlesRequest {
  int32 numResults = 1;
}

//Message type definition for TopArticlesResponse
message TopArticlesResponse {
  repeated Article articles = 1;
}

//Message type definition for Article
message Article {
  int32 id = 1;
  string title = 2;
  string url = 3;
}

//Message type definition for ArticleStatsRequest
message ArticleStatsRequest {
  int32 id = 1;
}

//Message type definition for ArticleStatsResponse
message ArticleStatsResponse {
  int32 id = 1;
  int32 numViews = 2;
  int32 numLikes = 3;
  int32 numComments = 4;
}

Let us go through the proto file in brief:

At the top of the file, we specify the version of ProtocolBuffers and that the service resides in a package named osfy_stats.

We then define the service named OSFYStatsService and specify the two methods that it exposes, i.e., toparticles and articleStats.

The input and output message formats are specified, and the message formats for each of the request and response messages are defined in the same file. Note that the ProtocolBuffers specification supports various scalar types (int32, float and string) as well as more complex constructs like repeated and nested fields. The unique numbers that you see, i.e., 1, 2, 3, etc., are used by the binary format while encoding and decoding the messages.

The toparticles method takes in the number of top articles that we want as input and returns an array of articles, where each article contains fields like the ID, title and URL.

The articleStats method takes the article's ID as input, and returns a message containing stats like the number of views, likes and comments.
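As a quick illustration of how those field numbers are used on the wire: in the proto3 binary format, each scalar field is prefixed by a key byte computed from its field number and wire type. The formula is from the ProtocolBuffers encoding specification; the helper function name below is our own.

```javascript
// Each protobuf field on the wire is prefixed by a key:
//   key = (fieldNumber << 3) | wireType
// where wireType 0 means a varint (used for int32 fields).
function fieldKey(fieldNumber, wireType) {
  return (fieldNumber << 3) | wireType;
}

// The "int32 id = 1" field of ArticleStatsRequest gets key 0x08,
// and "int32 numViews = 2" gets key 0x10.
console.log(fieldKey(1, 0).toString(16)); // 8
console.log(fieldKey(2, 0).toString(16)); // 10
```

This is why the numbers must be unique within a message: they, not the field names, are what identify each value during encoding and decoding.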

Implementing our server

Now that we have defined the service, we can implement our server. We will be using Node.js for the task, and it is assumed that you have a Node.js environment set up and available on your machine.

We will need a few Node.js libraries, which can be installed via npm:

$ npm install grpc
$ npm install grpcli

Now, let us look at the implementation on the server side. The file server.js is shown below:

const grpc = require('grpc');

const proto = grpc.load('osfy_stats.proto');
const server = new grpc.Server();

let top_articles = [
  { id: 20000, title: 'T1', url: 'URL1' },
  { id: 20001, title: 'T2', url: 'URL2' },
  { id: 20002, title: 'T3', url: 'URL3' },
  { id: 20003, title: 'T4', url: 'URL4' },
  { id: 20004, title: 'T5', url: 'URL5' }
];

//define the callable methods that correspond to the methods defined in the proto file
server.addProtoService(proto.osfy_stats.OSFYStatsService.service, {
  toparticles(call, callback) {
    if (call.request.numResults < 1 || call.request.numResults > 5) {
      callback(new Error('Invalid number of Results provided. It should be in the range of [1-5]'));
    } else {
      var topresults = top_articles.slice(0, call.request.numResults);
      callback(null, { articles: topresults });
    }
  },
  articleStats(call, callback) {
    let article_id = call.request.id;
    //make some calls to the actual API; we return mock stats here
    let numViews = 1000;
    let numLikes = 30;
    let numComments = 5;
    callback(null, { id: article_id, numViews, numLikes, numComments });
  }
});

//Specify the IP and port to start the gRPC server; no SSL in the test environment
server.bind('0.0.0.0:50000', grpc.ServerCredentials.createInsecure());

//Start the server
server.start();
console.log('OSFY Stats GRPC Server is now running on port->', '0.0.0.0:50000');

Let us go through the code in brief:

We load the proto definition file and create a server instance.

For the server instance, we simply bind the service proto.osfy_stats.OSFYStatsService.service and provide the implementations for the two methods: articleStats and toparticles.

We use mock implementations for the two methods; in reality, you would connect to your analytics API for this purpose.
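The mock logic can also be exercised outside gRPC. Here is a standalone sketch of the toparticles behaviour as a plain function (the function wrapper is our own, for illustration only; the data and validation mirror server.js):

```javascript
// Same mock data as server.js
const top_articles = [
  { id: 20000, title: 'T1', url: 'URL1' },
  { id: 20001, title: 'T2', url: 'URL2' },
  { id: 20002, title: 'T3', url: 'URL3' },
  { id: 20003, title: 'T4', url: 'URL4' },
  { id: 20004, title: 'T5', url: 'URL5' }
];

function topArticles(numResults) {
  // mirrors the validation in the gRPC handler
  if (numResults < 1 || numResults > 5) {
    throw new Error('Invalid number of Results provided. It should be in the range of [1-5]');
  }
  return top_articles.slice(0, numResults);
}

console.log(topArticles(2).map(a => a.title)); // [ 'T1', 'T2' ]
```

Keeping the business logic in plain functions like this, with the gRPC handler as a thin wrapper, also makes it easy to unit test the service without starting a server.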

To start the server, all we need to do is run the following command; a message will then confirm that the server is running:

$ node server.js

OSFY Stats GRPC Server is now running on port-> 0.0.0.0:50000

Testing the service

To test the service, we can use the node module grpcli that we installed. To launch grpcli and point it to the proto file and server that is running, use the command given below:

grpcli -f osfy_stats.proto --ip=127.0.0.1 --port=50000 -i

This will initiate the connection, and we can then use the rpc list and rpc call methods to test out the service. The sample calls are shown below:

$ grpcli -f osfy_stats.proto --ip=127.0.0.1 --port=50000 -i
Package: osfy_stats
Service: OSFYStatsService
Host: 127.0.0.1
Port: 50000
Secure: No

[grpc+insecure://127.0.0.1:50000]# rpc list
toparticles(TopArticlesRequest) {
  return TopArticlesResponse;
}
articleStats(ArticleStatsRequest) {
  return ArticleStatsResponse;
}

[grpc+insecure://127.0.0.1:50000]# rpc call articleStats {"id":1}
Info: Calling articleStats on OSFYStatsService
Response:
{
  "id": 1,
  "numViews": 1000,
  "numLikes": 30,
  "numComments": 5
}
[grpc+insecure://127.0.0.1:50000]#

Consuming the gRPC service

Now that we have tested our service, we can write our client code as shown below. The code is similar in the sense that we load the proto file first and then create a client connected to the running server. Once the client is connected, we can directly invoke the service methods:

const grpc = require('grpc');
const proto = grpc.load('osfy_stats.proto');

const client = new proto.osfy_stats.OSFYStatsService('localhost:50000', grpc.credentials.createInsecure());

client.toparticles({"numResults":2}, (error, response) => {
  if (!error) {
    console.log("Total Articles: " + response.articles.length);
    for (const article of response.articles) {
      console.log(article.id + " " + article.title + " " + article.url);
    }
  } else {
    console.log("Error:", error.message);
  }
});

client.articleStats({"id":2}, (error, response) => {
  if (!error) {
    console.log("Article ID : " + response.id + " Views : " + response.numViews + " Likes : " + response.numLikes + " Comments : " + response.numComments);
  } else {
    console.log("Error:", error.message);
  }
});

You can execute the client and see that the two methods exposed by the service are invoked. Since the calls are asynchronous, the responses can arrive in any order, as seen in the output below:

$ node client.js

Article ID : 2 Views : 1000 Likes : 30 Comments : 5
Total Articles: 2

20000 T1 URL1

20001 T2 URL2

Other language bindings

We have seen how we can define the service interface via the ProtocolBuffers format, and then used Node.js to develop both the server side and client side code. The advantage of gRPC is that your server could be implemented in one language, say Node.js, while the client bindings could be in another language, like Python, Java or any of the other supported bindings.

To see the list of languages supported, check out the documentat­ion page (http://www.grpc.io/docs/) and select a language of your choice.

Modern architectures suggest breaking up monolithic applications into multiple micro-services and composing your applications from those services. This results in an explosion of inter-service calls, so it is important to address latency, one of the key factors in providing a high performance system. gRPC provides both an efficient wire transfer protocol and multiple language bindings that make this a possibility. With the recent adoption of gRPC as a top level project in the Cloud Native Computing Foundation, we are likely to see developers increasingly exploring and using it across a wide range of projects.

