It's all about GraphQL server benchmarking across many languages. The
benchmarks cover maximum throughput and latency under normal load. For a more
detailed description of the methodology used, the how, and the why, see the
bottom of this page.
Results
Top 5 Ranking

|     | Rate                           | Latency                      | Verbosity                          |
| --- | ------------------------------ | ---------------------------- | ---------------------------------- |
| 1️⃣ | agoo-c (c)                     | agoo (ruby)                  | fastify-mercurius (javascript)     |
| 2️⃣ | ggql-i (go)                    | agoo-c (c)                   | express-graphql (javascript)       |
| 3️⃣ | ggql (go)                      | ggql-i (go)                  | koa-koa-graphql (javascript)       |
| 4️⃣ | agoo (ruby)                    | ggql (go)                    | apollo-server-fastify (javascript) |
| 5️⃣ | fastify-mercurius (javascript) | koa-koa-graphql (javascript) | apollo-server-express (javascript) |
Parameters
Last updated: 2021-08-19
OS: Linux (version: 5.7.1-050701-generic, arch: x86_64)
Install all dependencies: Ruby, Docker, Perfer, Oj, and RSpec.
Build containers

Build all:

```
build.rb
```

Build just the named targets:

```
build.rb [target] [target] ...
```

Run the tests (optional):

```
rspec spec.rb
```

Run the benchmarks, where frameworks is an optional list of frameworks or languages to run (example: ruby agoo-c):

```
benchmarker.rb [frameworks...]
```
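For example, a complete session that builds everything, checks the servers, and then benchmarks only the Ruby frameworks plus agoo-c could look like the following (the invocation style is an assumption; adjust to how you run Ruby scripts in your environment):

```
build.rb
rspec spec.rb
benchmarker.rb ruby agoo-c
```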
Methodology
The performance of a framework includes both latency and the maximum number of
requests that can be handled in a span of time. The assumption is that users
of a framework will choose to run at somewhat less than fully loaded, since
running fully loaded would leave no room for a spike in usage. With that in
mind, the maximum number of requests per second serves as the upper limit for
a framework.
Latency tends to vary significantly, not only randomly but also according to
the load. A typical latency-versus-throughput curve starts at some low-load
value and stays fairly flat through the normal-load region until some
inflection point. From the inflection point up to the maximum throughput,
latency increases.
These benchmarks show the normal-load latency, as that is what most users will
see when using a service. Most deployments do not run near maximum throughput
but instead try to stay in the normal-load region while remaining prepared for
spikes in usage. To accommodate slower frameworks, a rate of 1000 requests per
second is used for determining the median latency. The assumption is that a
rate of 1000 requests per second falls in the normal range for most if not all
of the frameworks tested.
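To make that measurement concrete, here is a minimal Ruby sketch of the idea: pace requests at a fixed 1000 per second and report the median latency. The endpoint, port, and query are placeholders, and the real harness (benchmarker.rb driving perfer) works differently; this only illustrates the concept.

```ruby
# Sketch: median latency at a fixed request rate (not the actual harness).
require 'net/http'

RATE = 1000            # requests per second, the assumed normal-load rate
DURATION = 5           # seconds to sample
INTERVAL = 1.0 / RATE  # pacing interval between requests

uri = URI('http://localhost:6464/graphql')  # hypothetical server endpoint
uri.query = URI.encode_www_form(query: '{hello}')

latencies = []
start = Time.now
while Time.now - start < DURATION
  t0 = Time.now
  Net::HTTP.get_response(uri)               # send one request
  latencies << (Time.now - t0)
  pause = INTERVAL - (Time.now - t0)
  sleep(pause) if pause > 0                 # hold the pace near RATE
end

median = latencies.sort[latencies.size / 2]
puts format('median latency at %d req/sec: %.3f ms', RATE, median * 1000.0)
```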
The perfer benchmarking tool is used for these reasons:

- A rate can be specified for latency determination.
- JSON output makes parsing the results easier.
- perfer needs fewer threads, leaving more for the application being benchmarked.
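Since the results come back as JSON and Oj is already a dependency, collecting them is mostly a parsing exercise. Below is a minimal Ruby sketch; the field names in the sample JSON are hypothetical stand-ins, not perfer's actual output schema.

```ruby
# Sketch: loading a JSON result with Oj. The shape below is assumed,
# not what perfer actually emits.
require 'oj'

raw = '{"framework":"example","rate":100000.0,"latency":{"median_ms":0.09}}'

result = Oj.load(raw, mode: :strict)  # strict mode yields plain Hashes/Arrays
puts format('%-20s %10.0f req/sec  median %.2f ms',
            result['framework'],
            result['rate'],
            result['latency']['median_ms'])
```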