An Executor, Networking TS and std::execution interface to grpc::CompletionQueue for writing asynchronous gRPC clients and servers using C++20 coroutines, Boost.Coroutines, Asio's stackless coroutines, callbacks, sender/receiver and more.
- Asio ExecutionContext compatible wrapper around grpc::CompletionQueue
- Executor and Networking TS requirements fulfilling associated executor
- Support for all RPC types: unary, client-streaming, server-streaming and bidirectional-streaming with any mix of Asio CompletionToken as well as TypedSender, including allocator customization
- Support for asynchronously waiting for grpc::Alarms including cancellation through cancellation_slots and StopTokens (see the sketch right after this list)
- Initial support for std::execution concepts through libunifex and Asio: schedule, connect, submit, scheduler, typed_sender and more
- Support for generic gRPC clients and servers (aka. proxies)
- Experimental support for Rust/Golang select-style programming with the help of cancellation safety
- No-Boost version with standalone Asio
- No-Asio version with libunifex
- CMake function to generate gRPC source files: asio_grpc_protobuf_generate
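The alarm support mentioned above can be used as in the following minimal sketch. It assumes a grpc_context like the one created in the examples below; agrpc::wait completes with a bool that is true when the deadline expired and false when the alarm was cancelled, and the function name spawn_alarm_wait is purely illustrative.

```cpp
void spawn_alarm_wait(agrpc::GrpcContext& grpc_context)
{
    boost::asio::co_spawn(
        grpc_context,
        []() -> boost::asio::awaitable<void>
        {
            grpc::Alarm alarm;
            // Suspend for one second without blocking the thread that runs the GrpcContext.
            const bool expired = co_await agrpc::wait(
                alarm, std::chrono::system_clock::now() + std::chrono::seconds(1), boost::asio::use_awaitable);
            (void)expired;  // true: deadline reached, false: the alarm was cancelled
        },
        boost::asio::detached);
}
```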
- Server side 'hello world':
```cpp
std::unique_ptr<grpc::Server> server;

grpc::ServerBuilder builder;
agrpc::GrpcContext grpc_context{builder.AddCompletionQueue()};
builder.AddListeningPort(host, grpc::InsecureServerCredentials());
helloworld::Greeter::AsyncService service;
builder.RegisterService(&service);
server = builder.BuildAndStart();

boost::asio::co_spawn(
    grpc_context,
    [&]() -> boost::asio::awaitable<void>
    {
        grpc::ServerContext server_context;
        helloworld::HelloRequest request;
        grpc::ServerAsyncResponseWriter<helloworld::HelloReply> writer{&server_context};
        co_await agrpc::request(&helloworld::Greeter::AsyncService::RequestSayHello, service, server_context,
                                request, writer);
        helloworld::HelloReply response;
        response.set_message("Hello " + request.name());
        co_await agrpc::finish(writer, response, grpc::Status::OK);
    },
    boost::asio::detached);

grpc_context.run();
```
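A matching client side 'hello world' could look like the sketch below. It mirrors the server example (including the host variable) and obtains the completion queue through agrpc::get_completion_queue; treat the exact helper used here as an assumption of this sketch rather than the only way to write a client.

```cpp
const auto stub = helloworld::Greeter::NewStub(grpc::CreateChannel(host, grpc::InsecureChannelCredentials()));
agrpc::GrpcContext grpc_context{std::make_unique<grpc::CompletionQueue>()};

boost::asio::co_spawn(
    grpc_context,
    [&]() -> boost::asio::awaitable<void>
    {
        grpc::ClientContext client_context;
        helloworld::HelloRequest request;
        request.set_name("world");
        std::unique_ptr<grpc::ClientAsyncResponseReader<helloworld::HelloReply>> reader =
            stub->AsyncSayHello(&client_context, request, agrpc::get_completion_queue(grpc_context));
        helloworld::HelloReply response;
        grpc::Status status;
        // Wait for the server's response and the final status of the RPC.
        co_await agrpc::finish(*reader, response, status);
    },
    boost::asio::detached);

grpc_context.run();
```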
More examples for things like streaming RPCs, double-buffered file transfer with io_uring, libunifex-based coroutines, sharing a thread with an io_context and generic clients/servers can be found in the example directory. Even more examples can be found in another repository.
Tested by CI:
- CMake 3.16.3 (min. 3.14)
- gRPC 1.44.0, 1.16.1 (older versions might work as well)
- Boost 1.79.0 (min. 1.74.0)
- Standalone Asio 1.17.0 (min. 1.17.0)
- libunifex 2022-02-09
- MSVC 19.32 (Visual Studio 17 2022)
- GCC 8.4.0, 9.3.0, 10.3.0, 11.1.0
- Clang 10.0.0, 11.0.0, 12.0.0
- AppleClang 13.0.0.13000029
- C++17 and C++20
For MSVC compilers and asio-grpc before v1.6.0 the following compile definitions need to be set:
```
BOOST_ASIO_HAS_DEDUCED_REQUIRE_MEMBER_TRAIT
BOOST_ASIO_HAS_DEDUCED_EXECUTE_MEMBER_TRAIT
BOOST_ASIO_HAS_DEDUCED_EQUALITY_COMPARABLE_TRAIT
BOOST_ASIO_HAS_DEDUCED_QUERY_MEMBER_TRAIT
BOOST_ASIO_HAS_DEDUCED_QUERY_STATIC_CONSTEXPR_MEMBER_TRAIT
BOOST_ASIO_HAS_DEDUCED_PREFER_MEMBER_TRAIT
```

When using standalone Asio, omit the BOOST_ prefix.
The library can be added to a CMake project using either add_subdirectory or find_package. Once set up, include the individual headers from the agrpc/ directory or the combined header:

```cpp
#include <agrpc/asioGrpc.hpp>
```
As a subdirectory
Clone the repository into a subdirectory of your CMake project. Then add it and link it to your target.
Using Boost.Asio:
```cmake
find_package(gRPC)
find_package(Boost)
add_subdirectory(/path/to/repository/root)
target_link_libraries(your_app PUBLIC gRPC::grpc++ asio-grpc::asio-grpc Boost::headers)
```
Or using standalone Asio:
```cmake
find_package(gRPC)
find_package(asio)
add_subdirectory(/path/to/repository/root)
target_link_libraries(your_app PUBLIC gRPC::grpc++ asio-grpc::asio-grpc-standalone-asio asio::asio)
```
Or using libunifex:
```cmake
find_package(gRPC)
find_package(unifex)
add_subdirectory(/path/to/repository/root)
target_link_libraries(your_app PUBLIC gRPC::grpc++ asio-grpc::asio-grpc-unifex unifex::unifex)
```
As a CMake package
Clone the repository and install it.
```sh
cmake -B build -DCMAKE_INSTALL_PREFIX=/desired/installation/directory .
cmake --build build --target install
```
Locate it and link it to your target.
Using Boost.Asio:
```cmake
# Make sure CMAKE_PREFIX_PATH contains /desired/installation/directory
find_package(asio-grpc)
target_link_libraries(your_app PUBLIC asio-grpc::asio-grpc)
```
Or using standalone Asio:
```cmake
# Make sure CMAKE_PREFIX_PATH contains /desired/installation/directory
find_package(asio-grpc)
target_link_libraries(your_app PUBLIC asio-grpc::asio-grpc-standalone-asio)
```
Or using libunifex:
```cmake
# Make sure CMAKE_PREFIX_PATH contains /desired/installation/directory
find_package(asio-grpc)
target_link_libraries(your_app PUBLIC asio-grpc::asio-grpc-unifex)
```
Using vcpkg
Add asio-grpc to the dependencies inside your vcpkg.json:
```jsonc
{
    "name": "your_app",
    "version": "0.1.0",
    "dependencies": [
        "asio-grpc",
        // To use the Boost.Asio backend add
        // "boost-asio",
        // To use the standalone Asio backend add
        // "asio",
        // To use the libunifex backend add
        // "libunifex"
    ]
}
```
Locate asio-grpc and link it to your target in your CMakeLists.txt:
```cmake
find_package(asio-grpc)

# Using the Boost.Asio backend
target_link_libraries(your_app PUBLIC asio-grpc::asio-grpc)

# Or use the standalone Asio backend
#target_link_libraries(your_app PUBLIC asio-grpc::asio-grpc-standalone-asio)

# Or use the libunifex backend
#target_link_libraries(your_app PUBLIC asio-grpc::asio-grpc-unifex)
```
The vcpkg port provides the following feature:

- boost-container - Use Boost.Container instead of <memory_resource>
See selecting-library-features to learn how to select features with vcpkg.
Using Hunter
See asio-grpc's documentation on the Hunter website: https://hunter.readthedocs.io/en/latest/packages/pkg/asio-grpc.html.
The following CMake options are available:

- ASIO_GRPC_USE_BOOST_CONTAINER - Use Boost.Container instead of <memory_resource>.
- ASIO_GRPC_DISABLE_AUTOLINK - Set before using find_package(asio-grpc) to prevent asio-grpcConfig.cmake from finding and setting up interface link libraries.
asio-grpc is part of grpc_bench. Head over there to compare its performance against other libraries and languages.
Results from the helloworld unary RPC
Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz, Linux, GCC 12.1.0, Boost 1.79.0, gRPC 1.46.3, asio-grpc v1.7.0, jemalloc 5.2.1
Request scenario: string_100B
Results

1 CPU server:
name | req/s | avg. latency | 90 % in | 95 % in | 99 % in | avg. cpu | avg. memory |
---|---|---|---|---|---|---|---|
go_grpc | 48622 | 19.85 ms | 30.16 ms | 33.44 ms | 40.17 ms | 101.95% | 24.58 MiB |
rust_tonic_mt | 41007 | 24.21 ms | 10.57 ms | 11.63 ms | 637.63 ms | 100.66% | 13.74 MiB |
rust_thruster_mt | 40807 | 24.35 ms | 10.70 ms | 12.33 ms | 630.52 ms | 103.65% | 11.51 MiB |
cpp_asio_grpc_unifex | 37288 | 26.69 ms | 28.34 ms | 28.79 ms | 30.88 ms | 103.08% | 29.52 MiB |
cpp_grpc_mt | 37078 | 26.84 ms | 28.55 ms | 29.02 ms | 30.14 ms | 103.15% | 29.05 MiB |
cpp_asio_grpc_callback | 36801 | 27.05 ms | 28.77 ms | 29.22 ms | 30.70 ms | 102.32% | 29.2 MiB |
rust_grpcio | 35994 | 27.67 ms | 29.54 ms | 30.23 ms | 31.14 ms | 102.77% | 17.19 MiB |
cpp_asio_grpc_coroutine | 32393 | 30.74 ms | 32.91 ms | 33.53 ms | 35.01 ms | 102.54% | 27.11 MiB |
cpp_asio_grpc_io_context_coro | 30757 | 32.38 ms | 34.51 ms | 35.06 ms | 36.33 ms | 77.94% | 26.5 MiB |
cpp_grpc_callback | 10800 | 84.36 ms | 108.19 ms | 155.96 ms | 171.98 ms | 101.62% | 65.79 MiB |
2 CPU server:

name | req/s | avg. latency | 90 % in | 95 % in | 99 % in | avg. cpu | avg. memory |
---|---|---|---|---|---|---|---|
cpp_asio_grpc_unifex | 85002 | 9.84 ms | 15.42 ms | 18.54 ms | 27.56 ms | 195.27% | 85.67 MiB |
cpp_asio_grpc_callback | 84842 | 9.90 ms | 15.44 ms | 18.52 ms | 27.20 ms | 197.58% | 80.47 MiB |
cpp_grpc_mt | 84513 | 9.89 ms | 15.41 ms | 18.51 ms | 27.40 ms | 198.88% | 79.34 MiB |
cpp_asio_grpc_coroutine | 80263 | 10.65 ms | 16.77 ms | 19.82 ms | 27.94 ms | 211.12% | 80.35 MiB |
cpp_asio_grpc_io_context_coro | 76454 | 11.35 ms | 18.42 ms | 21.51 ms | 29.58 ms | 158.83% | 75.56 MiB |
cpp_grpc_callback | 74806 | 10.87 ms | 19.12 ms | 23.23 ms | 31.87 ms | 209.88% | 131.14 MiB |
go_grpc | 67022 | 13.04 ms | 20.26 ms | 23.44 ms | 30.73 ms | 197.34% | 24.42 MiB |
rust_thruster_mt | 61205 | 15.14 ms | 43.51 ms | 74.85 ms | 96.47 ms | 201.97% | 13.13 MiB |
rust_grpcio | 60897 | 15.36 ms | 22.58 ms | 25.41 ms | 30.79 ms | 212.83% | 31.49 MiB |
rust_tonic_mt | 59668 | 15.74 ms | 42.14 ms | 62.74 ms | 97.83 ms | 202.56% | 15.2 MiB |
The main workhorses of this library are the agrpc::GrpcContext and its executor_type - agrpc::GrpcExecutor.

The agrpc::GrpcContext implements asio::execution_context and can be used as an argument to Asio functions that expect an ExecutionContext, like asio::spawn.

Likewise, the agrpc::GrpcExecutor satisfies the Executor, Networking TS and Scheduler requirements and can therefore be used in places where Asio/libunifex expects an Executor or Scheduler.
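For example, generic Asio facilities accept the GrpcContext and its executor directly. A minimal sketch (the function name schedule_work is illustrative):

```cpp
void schedule_work(agrpc::GrpcContext& grpc_context)
{
    // Run a function object on the thread that calls grpc_context.run().
    boost::asio::post(grpc_context,
                      []
                      {
                          // start new RPCs or interact with gRPC objects here
                      });
}
```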
The API for RPCs is modeled closely after the asynchronous, tag-based API of gRPC. As an example, the equivalent for grpc::ClientAsyncReader<helloworld::HelloReply>::Read(helloworld::HelloReply*, void*) would be agrpc::read(grpc::ClientAsyncReader<helloworld::HelloReply>&, helloworld::HelloReply&, CompletionToken).

Instead of the void* tag in the gRPC API, the functions in this library expect a CompletionToken. Asio comes with several CompletionTokens already: C++20 coroutine, stackless coroutine, callback and Boost.Coroutine. There is also a special token created by agrpc::use_sender(scheduler) that causes RPC functions to return a TypedSender.
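To illustrate, the read mentioned above can be driven by different tokens. This is a sketch only: the function names are illustrative and it assumes a running grpc_context as well as a reader and response that were set up elsewhere.

```cpp
// C++20 coroutine token
boost::asio::awaitable<void> read_with_coroutine(grpc::ClientAsyncReader<helloworld::HelloReply>& reader,
                                                 helloworld::HelloReply& response)
{
    const bool ok = co_await agrpc::read(reader, response, boost::asio::use_awaitable);
    (void)ok;
}

// Callback token: the handler is invoked with gRPC's `ok` once the operation completes.
void read_with_callback(agrpc::GrpcContext& grpc_context,
                        grpc::ClientAsyncReader<helloworld::HelloReply>& reader,
                        helloworld::HelloReply& response)
{
    agrpc::read(reader, response,
                boost::asio::bind_executor(grpc_context.get_executor(), [](bool ok) { /* ... */ }));
}

// Sender token: returns a TypedSender that can be connected and started, e.g. with libunifex.
auto read_with_sender(agrpc::GrpcContext& grpc_context,
                      grpc::ClientAsyncReader<helloworld::HelloReply>& reader,
                      helloworld::HelloReply& response)
{
    return agrpc::read(reader, response, agrpc::use_sender(grpc_context.get_executor()));
}
```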
If you are interested in learning more about the implementation details of this library then check out this blog article.
Getting started
Start by creating an agrpc::GrpcContext.
For servers and clients:
```cpp
grpc::ServerBuilder builder;
agrpc::GrpcContext grpc_context{builder.AddCompletionQueue()};
```
For clients only:
```cpp
agrpc::GrpcContext grpc_context{std::make_unique<grpc::CompletionQueue>()};
```
Add some work to the grpc_context and run it. Make sure to shut down the server before destructing the grpc_context. Also destruct the grpc_context before destructing the server. A grpc_context can only be run on one thread at a time.
```cpp
grpc_context.run();
server->Shutdown();
} // grpc_context is destructed here before the server
```
It might also be helpful to create a work guard before running the agrpc::GrpcContext to prevent grpc_context.run() from returning early.

```cpp
std::optional guard{asio::require(grpc_context.get_executor(), asio::execution::outstanding_work_t::tracked)};
```
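Resetting the guard later allows grpc_context.run() to return once all remaining work has completed:

```cpp
guard.reset();  // run() returns as soon as there is no more outstanding work
```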
Check out the examples and the API documentation.
Asio-grpc abstracts away the implementation details of asynchronous gRPC handling: crafting working code is easier, faster, less prone to errors and considerably more fun. At 3YOURMIND we have been using asio-grpc reliably in production since its very first release, allowing our developers to effortlessly implement low-latency/high-throughput asynchronous data transfer in time-critical applications.
Our project is a real-time distributed motion capture system that uses your framework to stream data back and forth between multiple machines. Previously I tried to build a bidirectional streaming framework from scratch using only gRPC, but it was hard to maintain and error-prone due to the large amount of service and streaming code. As a developer who has experienced both raw gRPC and asio-grpc, I can tell that your framework is a real game-changer for writing gRPC code in C++. It has made my life much easier. I really appreciate the effort you have put into this project and your superior skills in designing C++ template code.