Getting started with gRPC-Web
gRPC or "gRPC Remote Procedure Call" (don't you just love recursive acronyms?) is a modern open source high performance RPC framework that can run in any environment. So first off, what is a Remote Procedure Call?
An RPC is when a program starts a procedure in a different space, think one microservice to another, without explicitly implementing details about the remote interaction and gRPC is the implementation provided by Google.
An RPC framework needs a way to serialize and deserialize data between clients and servers; this is handled by an Interface Definition Language. By default gRPC uses Protocol Buffers (proto3), although other formats such as JSON can also be used. Protocol Buffers are Google's language-neutral, platform-neutral, extensible way of serializing structured data.
The schemas for Protocol Buffers are written in .proto files and shared between services. To get started, here's a basic example of one defining a message about a person:
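A minimal sketch of such a schema (the message and field names here are illustrative):

```protobuf
syntax = "proto3";

// A person, with a unique id and any number of email addresses.
message Person {
  string name = 1;
  int32 id = 2;
  repeated string email_addresses = 3;
}
```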
The first non-empty, non-comment line of a proto3 file must be syntax = "proto3"; otherwise the Protocol Buffer compiler will assume you are using the proto2 syntax. Each field in a message definition gets a unique number, which identifies the field in the binary message. Message types can be reused, and the repeated keyword is used for lists.
If you are interested in a complete language guide, this can be found in the proto3 developer documentation.
As its underlying transport layer gRPC uses HTTP/2, which enables features such as streaming and cancellation.
Furthermore, gRPC provides authentication, bidirectional streaming and flow control, blocking and non-blocking bindings, and cancellation and timeouts. It also has server and client implementations for several languages, including Java, Node, Go, and Python. The gRPC-Web package lets you access gRPC services from browsers using an idiomatic API.
A browser does not have enough fine-grained control over a request to implement the HTTP/2 gRPC spec. gRPC-Web resolves this by starting its specification from the point of view of the HTTP/2 spec and then defining the differences.
The basic idea is to have the browser send an HTTP/1.1 or HTTP/2 request with Fetch or XHR and to place a small proxy (for example Envoy) in front of the gRPC backend services to translate the requests and responses. Additional information can be found in this blog post from gRPC.
My version of this Todo app is available on GitHub.
For this example you will need the Protocol Buffers Compiler and the grpc-web plugin. Download the correct versions for your OS and move the binaries to somewhere discoverable from your PATH. Also make sure they are executable.
On macOS you can run these commands after downloading the binaries:
$ unzip ~/Downloads/protoc-3.11.4-osx-x86_64.zip -d ~/Downloads/
$ sudo mv ~/Downloads/bin/protoc /usr/local/bin/protoc
$ sudo mv ~/Downloads/protoc-gen-grpc-web-1.0.7-darwin-x86_64 \
    /usr/local/bin/protoc-gen-grpc-web
$ chmod +x /usr/local/bin/protoc-gen-grpc-web
I ran into an issue with macOS flagging the binaries as untrusted and solved it by running the following commands in the terminal:
$ xattr -d com.apple.quarantine /usr/local/bin/protoc
$ xattr -d com.apple.quarantine /usr/local/bin/protoc-gen-grpc-web
For the gRPC-Web proxy we will run Envoy as a Docker container. The first step is to create a simple envoy.Dockerfile.
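A minimal sketch of such a Dockerfile (the image tag is an assumption; use a current Envoy release):

```dockerfile
FROM envoyproxy/envoy:v1.14-latest

# The official image's default command starts Envoy with the
# configuration at /etc/envoy/envoy.yaml, so copying it in is enough.
COPY envoy.yaml /etc/envoy/envoy.yaml
```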
Next, we need to create an envoy.yaml configuration file.
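A sketch adapted from the example configuration in the gRPC-Web project (exact field names vary between Envoy versions, and the backend address and ports are assumptions):

```yaml
admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }
static_resources:
  listeners:
    - name: listener_0
      address:
        socket_address: { address: 0.0.0.0, port_value: 8080 }
      filter_chains:
        - filters:
            - name: envoy.http_connection_manager
              config:
                codec_type: auto
                stat_prefix: ingress_http
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: local_service
                      domains: ["*"]
                      routes:
                        - match: { prefix: "/" }
                          route:
                            cluster: todo_service
                            max_grpc_timeout: 0s
                      cors:
                        allow_origin: ["*"]
                        allow_methods: GET, PUT, DELETE, POST, OPTIONS
                        allow_headers: keep-alive,user-agent,cache-control,content-type,x-grpc-web,grpc-timeout
                        expose_headers: grpc-status,grpc-message
                        max_age: "1728000"
                http_filters:
                  - name: envoy.grpc_web
                  - name: envoy.cors
                  - name: envoy.router
  clusters:
    - name: todo_service
      connect_timeout: 0.25s
      type: logical_dns
      http2_protocol_options: {}
      lb_policy: round_robin
      # host.docker.internal resolves to the host machine from inside
      # the container on Docker for Mac/Windows.
      hosts:
        - socket_address:
            address: host.docker.internal
            port_value: 9090
```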
In the admin part we define a route to access the administration interface. This could come in handy when debugging.
In the static_resources part of the configuration we set up the rest of our proxy. Because of Envoy's integrated gRPC support, all of the heavy lifting is done by the built-in envoy.grpc_web HTTP filter. We also configure all the necessary headers and CORS here.
Now we can start our proxy while we continue working on the rest of the application. First build the Docker image and then run the container:
$ docker build -t todo/envoy -f ./envoy.Dockerfile .
$ docker run -d -p 8080:8080 -p 9901:9901 todo/envoy
If you check your Docker processes with docker ps you should see your Envoy container running.
Create a todo.proto file with the following content.
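A minimal sketch of such a file (the service, method, and message names are assumptions):

```protobuf
syntax = "proto3";

package todos;

message Empty {}

message Todo {
  string content = 1;
  bool finished = 2;
}

message TodoResponse {
  repeated Todo todos = 1;
}

message AddTodoParams {
  string content = 1;
}

message DeleteTodoParams {
  int32 index = 1;
}

service TodoService {
  rpc getTodos (Empty) returns (TodoResponse);
  rpc addTodo (AddTodoParams) returns (Todo);
  rpc deleteTodo (DeleteTodoParams) returns (TodoResponse);
}
```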
First we define a simple TodoService with 3 methods. Each of these needs a message type for the request and the response.
Now we can compile our protocol buffer to the needed JS files. Assuming you are in the directory containing the todo.proto file, run the following command.
$ protoc todo.proto \
    --js_out=import_style=commonjs:./ \
    --grpc-web_out=import_style=commonjs,mode=grpcwebtext:./
Two new files should now have been added to your working directory: todo_pb.js and todo_grpc_web_pb.js. These contain the stubs needed to continue working on our project.
For our app we will need a couple of dependencies, including Webpack to resolve our dependencies and bundle our frontend app. Create a package.json and run npm install:
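A minimal sketch of such a package.json (the package versions are assumptions; pin whatever is current for you):

```json
{
  "name": "todo-grpc-web",
  "version": "1.0.0",
  "dependencies": {
    "@grpc/proto-loader": "^0.5.3",
    "google-protobuf": "^3.11.4",
    "grpc": "^1.24.2",
    "grpc-web": "^1.0.7"
  },
  "devDependencies": {
    "webpack": "^4.41.0",
    "webpack-cli": "^3.3.10",
    "webpack-dev-server": "^3.10.3"
  }
}
```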
Next let's create our server in server.js.
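A sketch of what this server could look like, using the grpc and @grpc/proto-loader packages to load todo.proto at runtime; the service and method names are assumptions and must match your .proto file:

```javascript
const grpc = require('grpc');
const protoLoader = require('@grpc/proto-loader');

const SERVER_URI = '0.0.0.0:9090';

// Load todo.proto at runtime instead of using pre-generated stubs.
const packageDefinition = protoLoader.loadSync('./todo.proto');
const todoPackage = grpc.loadPackageDefinition(packageDefinition).todos;

// The in-memory list of todos.
const todos = [];

// Each handler receives the request fields in call.request and
// reports its result (or an error) through the callback.
const getTodos = (call, callback) => {
  callback(null, { todos });
};

const addTodo = (call, callback) => {
  const todo = { content: call.request.content, finished: false };
  todos.push(todo);
  callback(null, todo);
};

const deleteTodo = (call, callback) => {
  todos.splice(call.request.index, 1);
  callback(null, { todos });
};

// Register the handlers and start the server.
const server = new grpc.Server();
server.addService(todoPackage.TodoService.service, { getTodos, addTodo, deleteTodo });
server.bind(SERVER_URI, grpc.ServerCredentials.createInsecure());
server.start();
console.log(`Server listening on ${SERVER_URI}`);
```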
At the top we define some static values and import the needed dependencies. After that we load todo.proto. We manage our list in memory using three functions. Each function receives a call and a callback parameter. The message fields sent by the client can be found in call.request, and we use the callback function to send back the result of the operation. At the end we create our service and start the server.
You can now start the backend in your terminal:
$ node server.js
For our client we set up a simple index.html:
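A minimal sketch of such a page (the element ids and the bundle path are assumptions that must match your client code and Webpack output):

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>gRPC-Web Todo</title>
</head>
<body>
  <h1>Todos</h1>
  <ul id="todo-list"></ul>
  <input type="text" id="todo-input" placeholder="What needs to be done?">
  <button id="add-todo">Add</button>
  <!-- The bundle produced by Webpack from client.js -->
  <script src="dist/main.js"></script>
</body>
</html>
```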
Now create client.js; this file will be compiled by Webpack.
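A sketch of what this client could look like; the generated class and message names depend on your .proto file, and the element ids are assumptions that must match your index.html:

```javascript
// Stubs generated by protoc from todo.proto.
const { Empty, AddTodoParams } = require('./todo_pb.js');
const { TodoServiceClient } = require('./todo_grpc_web_pb.js');

// Talk to the Envoy proxy, which forwards to the gRPC backend.
const client = new TodoServiceClient('http://localhost:8080');

// Render the todo list into the page.
const renderTodos = (todos) => {
  const list = document.getElementById('todo-list');
  list.innerHTML = '';
  todos.forEach((todo) => {
    const item = document.createElement('li');
    item.textContent = todo.getContent();
    list.appendChild(item);
  });
};

// Fetch the current list on page load.
client.getTodos(new Empty(), {}, (err, response) => {
  if (err) return console.error(err);
  renderTodos(response.getTodosList());
});

// Add a new todo when the button is clicked, then refresh the list.
document.getElementById('add-todo').addEventListener('click', () => {
  const input = document.getElementById('todo-input');
  const params = new AddTodoParams();
  params.setContent(input.value);
  client.addTodo(params, {}, (err) => {
    if (err) return console.error(err);
    input.value = '';
    client.getTodos(new Empty(), {}, (e, res) => {
      if (!e) renderTodos(res.getTodosList());
    });
  });
});
```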
At the top we import the files previously compiled with protoc; these imports allow us to communicate with the backend. After that we build the logic for our frontend.
Start your client application:
$ npx webpack-dev-server client.js
You can now navigate to http://localhost:8081 and test your application. 🎉
gRPC is already an established way of communicating between microservices in a Cloud Native environment. With the addition of gRPC-Web you can now retain this consistency throughout your entire stack. You also no longer have to deal with the additional overhead of setting up a REST API to act as a translator between your frontend code and your microservices.