fastify / benchmarks

Fast and low overhead web framework fastify benchmarks.

Home Page: https://github.com/fastify/fastify

License: MIT License

Language: JavaScript 100.00%
Topics: fastify nodejs web framework fastest

benchmarks's Introduction



TL;DR

  • Fastify is a fast and low overhead web framework for Node.js.
  • This package shows how fast it is comparatively.
  • For metrics (cold-start) see metrics.md

Requirements

To be included in this list, the framework should captivate users' interest. We have identified the following minimal requirements:

  • Ensure active usage: a minimum of 500 downloads per week
  • Maintain an active repository with at least one event (comment, issue, PR) in the last month
  • The framework must use the Node.js HTTP module

Usage

Clone this repo. Then

node ./benchmark [arguments (optional)]

Arguments

  • -h: Help on how to use the tool.
  • compare: Get comparative data for your benchmarks.

You may also compare all test results at once, in a single table: benchmark compare -t

You can also extend the comparison table with percentage values based on the fastest result: benchmark compare -p
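For example, a hypothetical session combining the commands above (only the flags themselves are documented here; passing the compare flags through the ./benchmark entry point is an assumption):

node ./benchmark                # run the full benchmark suite
node ./benchmark -h             # show help
node ./benchmark compare        # get comparative data for your benchmarks
node ./benchmark compare -t     # print all results in a single table
node ./benchmark compare -p     # add percentages relative to the fastest result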

Benchmarks

  • Machine: linux x64 | 4 vCPUs | 15.6GB Mem
  • Node: v20.12.1
  • Run: Mon Apr 15 2024 03:53:58 GMT+0000 (Coordinated Universal Time)
  • Method: autocannon -c 100 -d 40 -p 10 localhost:3000 (two rounds; one to warm-up, one to measure)

| Framework | Version | Requests/s | Latency (ms) | Throughput/Mb |
| --- | --- | ---: | ---: | ---: |
| bare | v20.12.1 | 46892.0 | 20.83 | 8.36 |
| polkadot | 1.0.0 | 45604.0 | 21.44 | 8.13 |
| polka | 0.5.2 | 45405.6 | 21.51 | 8.10 |
| fastify | 4.26.2 | 45164.8 | 21.64 | 8.10 |
| 0http | 3.5.3 | 45126.4 | 21.66 | 8.05 |
| rayo | 1.4.6 | 44833.6 | 21.81 | 8.00 |
| server-base | 7.1.32 | 44674.4 | 21.89 | 7.97 |
| connect | 3.7.0 | 44626.4 | 21.91 | 7.96 |
| server-base-router | 7.1.32 | 43204.0 | 22.65 | 7.70 |
| h3 | 1.11.1 | 42976.8 | 22.77 | 7.66 |
| connect-router | 1.3.8 | 42652.8 | 22.94 | 7.61 |
| h3-router | 1.11.1 | 41413.6 | 23.65 | 7.39 |
| hono | 4.2.4 | 40416.0 | 24.24 | 7.21 |
| restana | 4.9.9 | 38696.0 | 25.34 | 6.90 |
| koa | 2.15.3 | 35555.8 | 27.62 | 6.34 |
| take-five | 2.0.0 | 34290.0 | 28.66 | 12.33 |
| koa-isomorphic-router | 1.0.1 | 33973.8 | 28.92 | 6.06 |
| restify | 11.1.0 | 33343.6 | 29.48 | 6.01 |
| koa-router | 12.0.1 | 32594.4 | 30.17 | 5.81 |
| hapi | 21.3.9 | 30410.0 | 32.36 | 5.42 |
| fastify-big-json | 4.26.2 | 11882.0 | 83.61 | 136.71 |
| express | 4.19.2 | 10500.6 | 94.65 | 1.87 |
| express-with-middlewares | 4.19.2 | 10055.2 | 98.85 | 3.74 |
| micro-route | 2.5.0 | N/A | N/A | N/A |
| micro | 10.0.1 | N/A | N/A | N/A |
| microrouter | 3.1.3 | N/A | N/A | N/A |
| trpc-router | 10.45.2 | N/A | N/A | N/A |

benchmarks's People

Contributors

3imed-jaberi, 9ssi7, aboutlo, aichholzer, ardalanamini, cagataycali, dancastillo, dependabot[bot], dotcypress, dougwilson, eomm, fdawgs, giacomorebonato, github-actions[bot], hekike, hnry, hueniverse, jameskyburz, jkyberneees, leizongmin, lukeed, mannil, mcollina, mudrz, pi0, rafaelgss, salesh, sinchang, yusukebe, zekth


benchmarks's Issues

actions to update bench results

🚀 Feature Proposal

Run the benchmark periodically (every month?), updating the README file automatically using GitHub Actions.

This way, we could also test the bench script.

Related to #101

please add units to the table

🚀 Feature Proposal

Just add the units (latency is in ms, I assume?).

Motivation

clarity

Example

Read the table and know the units for a fact instead of assuming it's ms.

Add Restana

In the https://github.com/the-benchmarker/web-frameworks benchmark there are Node.js frameworks which come out faster than fastify (Restana, rayo, polka), whereas here Rayo and Polka come out slower than Fastify. It would be great to add Restana here as well, to see how it does in this benchmark.

Also, it would be interesting to find out why Rayo and Polka are faster in the web-frameworks benchmark but slower here. Maybe this benchmark is somehow biased towards fastify?

Add polka server

I just heard about polka.
Should it be added to the benchmarks?
Maybe @lukeed could add a benchmark here to see how it compares 😄

Chalk v5 requires ESM

Prerequisites

  • I have written a descriptive issue title
  • I have searched existing issues to ensure the regression has not already been reported

Last working version

1.0

Stopped working in version

current master

Node.js version

16

Operating system

Windows

Operating system version (i.e. 20.04, 11.3, 10)

10

💥 Regression Report

The version of chalk was bumped to 5, but chalk 5 requires ESM, so benchmark-compare.js throws an error:

Error [ERR_REQUIRE_ESM]: require() of ES Module benchmarks\node_modules\chalk\source\index.js from benchmarks\benchmark-compare.js not supported.
Instead change the require of index.js in benchmarks\benchmark-compare.js to a dynamic import() which is available in all CommonJS modules.
    at Object.<anonymous> (benchmarks\benchmark-compare.js:8:15) {
  code: 'ERR_REQUIRE_ESM'
}

Steps to Reproduce

npm run compare

Expected Behavior

To not throw an error
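A minimal sketch of the workaround the error message itself suggests, assuming benchmark-compare.js can defer its chalk usage behind an async loader (the function name is illustrative, not the project's actual code):

'use strict'

// Hypothetical sketch: load chalk 5 via dynamic import() instead of require(),
// which is the change the ERR_REQUIRE_ESM message recommends for CommonJS files.
async function loadChalk () {
  const { default: chalk } = await import('chalk')
  return chalk
}

loadChalk().then((chalk) => {
  console.log(chalk.green('chalk 5 loaded from a CommonJS file'))
})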

Markdown table

benchmarks.js compare -t should generate a copy-n-pasteable markdown table. See here.

Setup CI for pull requests

There is an npm test script included in the module. Can someone with privileges set this up in CI so the result shows in pull requests?

It currently fails because, with no CI in place, no one realized that some PRs were not passing before they were merged.

$ npm test

> [email protected] test /Users/doug.wilson/Code/NodeJS/fastify-benchmarks
> standard | snazzy

standard: Use JavaScript Standard Style (https://standardjs.com)
standard: Run `standard --fix` to automatically fix some problems.

/Users/doug.wilson/Code/NodeJS/fastify-benchmarks/benchmark-bench.js
  92:1  error  Expected indentation of 4 spaces but found 2
  93:1  error  Expected indentation of 6 spaces but found 4
  94:1  error  Expected indentation of 4 spaces but found 2

/Users/doug.wilson/Code/NodeJS/fastify-benchmarks/benchmarks/@leizm-web.js
  3:46  error  Extra semicolon
  5:26  error  Extra semicolon

/Users/doug.wilson/Code/NodeJS/fastify-benchmarks/benchmarks/trek-engine-router.js
  26:7   error  Expected literal to be on the right side of ==
  26:11  error  Expected '===' and instead saw '=='

/Users/doug.wilson/Code/NodeJS/fastify-benchmarks/benchmarks/trek-engine.js
  15:7   error  Expected literal to be on the right side of ==
  15:11  error  Expected '===' and instead saw '=='

✖ 9 problems
npm ERR! Test failed.  See above for more details.

response time

Hello Team,

Just wanted to know the response time of fastify in comparison to other Node.js frameworks.

update benchmark table in readme

Could the benchmark table please be updated in the README?

It's been a while, and it would be nice to include the new frameworks.

Thanks!

Results not matching my tests

I'm currently seeing some slightly different numbers when running on my dedicated server. Some frameworks' results are similar, but others vary slightly.

I'm testing on a dedicated server EX41S-SSD: Intel Core i7, 4 GHz, 64 GB RAM, 4C/8T, SSD.

| Framework | Requests/s | Latency | Throughput/Mb |
| --- | ---: | ---: | ---: |
| bare | 67344.0 | 1.41 | 10.53 |
| connect-router | 61968.0 | 1.54 | 9.69 |
| connect | 66796.8 | 1.42 | 10.45 |
| egg.js | 26318.4 | 3.72 | 8.68 |
| express-route-prefix | 47273.6 | 2.03 | 16.45 |
| express-with-middlewares | 30648.0 | 3.17 | 11.08 |
| express | 37750.4 | 2.57 | 5.90 |
| fastify-big-json | 14890.4 | 6.57 | 170.99 |
| fastify | 72889.6 | 1.30 | 11.40 |
| foxify | 72108.8 | 1.31 | 10.25 |
| hapi | 29752.0 | 3.28 | 4.65 |
| koa-router | 45865.6 | 2.11 | 7.17 |
| koa | 55267.2 | 1.74 | 8.64 |
| micro-route | 60528.0 | 1.58 | 9.47 |
| micro | 69984.0 | 1.35 | 10.95 |
| microrouter | 38044.8 | 2.54 | 5.95 |
| polka | 66166.4 | 1.44 | 10.35 |
| rayo | 66070.4 | 1.44 | 10.34 |
| restify | 39241.6 | 2.05 | 6.21 |
| server-base-router | 62409.6 | 1.53 | 9.76 |
| server-base | 54473.6 | 1.77 | 8.52 |
| spirit-router | 57923.2 | 1.30 | 9.06 |
| spirit | 60822.4 | 1.25 | 9.51 |
| take-five | 0.0 | 0.00 | 0.00 |
| total.js | 47382.4 | 2.03 | 12.38 |
| trek-engine | 53059.2 | 1.82 | 7.54 |
| trek-router | 51734.4 | 1.86 | 7.35 |
| vapr | 54422.4 | 1.76 | 7.73 |
| yeps-router | 44278.4 | 2.18 | 6.93 |
| yeps | 56905.6 | 1.69 | 8.90 |

Mention the impact of schema use on performance

Newcomers to fastify are likely to be particularly interested in the performance gain over other frameworks (and the introductory article emphasizes that).

So IMHO, it's better to be as precise and transparent as possible about how to achieve this performance level.

One main advantage of fastify seems to be its stringify module (https://github.com/fastify/fast-json-stringify); however, this module is currently only used if a schema is provided.

In my tests, I noticed a 25% perf gain by specifying a schema.

So I think it should be mentioned somewhere.
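For illustration, a minimal sketch of the difference (routes and payload are invented; the point is only the presence of the response schema, which lets fastify serialize with fast-json-stringify instead of JSON.stringify):

'use strict'

const fastify = require('fastify')()

// Without a schema: serialization falls back to JSON.stringify.
fastify.get('/plain', async () => ({ hello: 'world' }))

// With a response schema: fastify can use fast-json-stringify for this route.
fastify.get('/with-schema', {
  schema: {
    response: {
      200: {
        type: 'object',
        properties: {
          hello: { type: 'string' }
        }
      }
    }
  }
}, async () => ({ hello: 'world' }))

fastify.listen({ port: 3000 })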

Is node v6.11.3 supported?

I tried to run the benchmark on node v6.11.3, and it seems that it needs async/await support to work properly.

$ node -v
v6.11.3
$ npm -v
3.10.10
$ benchmark 
/usr/local/lib/node_modules/fastify-benchmarks/lib/bench.js:7
const doBench = async (handler) => {
                      ^

SyntaxError: Unexpected token (

Maintainers to issue a release

Hi Folks,

I got npm publish rights from @cagataycali! Thanks!

Is there a volunteer that would help in cutting releases of this module, so we can fix #83?

There are probably a bunch of changes to make in the README to make it easily installable.

Thanks

Matteo

[Feature] Add percentage value to cells

Hi! I created a small feature which adds a percentage value to the cells, based on the fastest result. I think it is easier to compare this way. You can check the code here.

$ node benchmark-compare.js -t -p will output a table like this:

┌─────────┬─────────┬────────┬─────────────┬─────────────┬───────────────┐
│         │ Version │ Router │ Requests/s  │ Latency     │ Throughput/Mb │
│         │         │        │ (% of bare) │ (% of bare) │ (% of bare)   │
├─────────┼─────────┼────────┼─────────────┼─────────────┼───────────────┤
│ bare    │ 8.11.2  │ ✗      │ 10714.8     │ 9.13        │ 1.52          │
│         │         │        │ (100.00)    │ (100.00)    │ (100.00)      │
├─────────┼─────────┼────────┼─────────────┼─────────────┼───────────────┤
│ micro   │ 9.3.2   │ ✗      │ 10595.6     │ 9.25        │ 1.66          │
│         │         │        │ (98.89)     │ (101.31)    │ (109.05)      │
├─────────┼─────────┼────────┼─────────────┼─────────────┼───────────────┤
│ fastify │ 1.6.0   │ ✓      │ 10065.21    │ 9.73        │ 1.57          │
│         │         │        │ (93.94)     │ (106.57)    │ (103.29)      │
├─────────┼─────────┼────────┼─────────────┼─────────────┼───────────────┤
│ express │ 4.16.3  │ ✓      │ 5806.8      │ 16.82       │ 0.90          │
│         │         │        │ (54.19)     │ (184.23)    │ (59.47)       │
└─────────┴─────────┴────────┴─────────────┴─────────────┴───────────────┘

If you like it, I'd be happy to create a PR.

Add version column in README

🚀 Feature Proposal

Add a column in the README that tracks the version of the framework that ran the bench.

Motivation

Transparency

Example

| Framework | Requests/s | Latency | Throughput/Mb | Version |
| --- | ---: | ---: | ---: | --- |
| polkadot | 57384.0 | 1.63 | 8.97 | 1.? |
| fastify | 56417.6 | 1.68 | 8.82 | 3.0.1 |
| polka | 54064.8 | 1.76 | 8.46 | 0.5.? |

Bug: Faster benchmark has negative percentage

Sometimes a benchmark is presented with a negative percentage:
Both are awesome but take-five is -12.07% faster than fastify

Edit: I realized that the problem is with the autocannon-compare module, so feel free to close/update the issue.

Errors in log

I'm concerned about certain lines in the benchmark logs which could lead to potentially false results:

// egg.js
2020-08-01T00:47:46.2549900Z 2020-08-01 00:47:46,237 ERROR 2839 [-/undefined/-/3ms GET /] nodejs.EPIPEError: write EPIPE
2020-08-01T00:47:46.2550242Z     at afterWriteDispatched (internal/stream_base_commons.js:154:25)
2020-08-01T00:47:46.2550525Z     at writevGeneric (internal/stream_base_commons.js:137:3)
2020-08-01T00:47:46.2550797Z     at Socket._writeGeneric (net.js:784:11)
2020-08-01T00:47:46.2551057Z     at Socket._writev (net.js:793:8)
2020-08-01T00:47:46.2551293Z     at doWrite (_stream_writable.js:401:12)
2020-08-01T00:47:46.2551543Z     at clearBuffer (_stream_writable.js:519:5)
2020-08-01T00:47:46.2551790Z     at Socket.Writable.uncork (_stream_writable.js:338:7)
2020-08-01T00:47:46.2552070Z     at ServerResponse._flushOutput (_http_outgoing.js:854:10)
2020-08-01T00:47:46.2552330Z     at ServerResponse._flush (_http_outgoing.js:823:22)
2020-08-01T00:47:46.2552541Z     at ServerResponse.assignSocket (_http_server.js:219:8)
2020-08-01T00:47:46.2552766Z errno: "EPIPE"
2020-08-01T00:47:46.2553000Z code: "EPIPE"
2020-08-01T00:47:46.2553237Z syscall: "write"
2020-08-01T00:47:46.2553458Z headerSent: true
2020-08-01T00:47:46.2553677Z name: "EPIPEError"
2020-08-01T00:47:46.2553882Z pid: 2839
2020-08-01T00:47:46.2554301Z hostname: fv-az54
// express with route
2020-08-01T00:49:47.7387707Z (node:2957) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 error listeners added to [Socket]. Use emitter.setMaxListeners() to increase limit
// trek router
2020-08-01T01:20:30.2061204Z   HttpError: write ECONNRESET
2020-08-01T01:20:30.2061702Z       at Array.onError (/home/runner/work/benchmarks/benchmarks/node_modules/trek-engine/lib/engine.js:54:15)
2020-08-01T01:20:30.2062194Z       at listener (/home/runner/work/benchmarks/benchmarks/node_modules/on-finished/index.js:169:15)
2020-08-01T01:20:30.2062680Z       at onFinish (/home/runner/work/benchmarks/benchmarks/node_modules/on-finished/index.js:100:5)
2020-08-01T01:20:30.2063161Z       at callback (/home/runner/work/benchmarks/benchmarks/node_modules/ee-first/index.js:55:10)
2020-08-01T01:20:30.2063648Z       at Socket.onevent (/home/runner/work/benchmarks/benchmarks/node_modules/ee-first/index.js:93:5)
2020-08-01T01:20:30.2063841Z       at Socket.emit (events.js:327:22)
2020-08-01T01:20:30.2064054Z       at errorOrDestroy (internal/streams/destroy.js:108:12)
2020-08-01T01:20:30.2064265Z       at onwriteError (_stream_writable.js:418:5)
2020-08-01T01:20:30.2064464Z       at onwrite (_stream_writable.js:445:5)
2020-08-01T01:20:30.2064653Z       at internal/streams/destroy.js:50:7

Mention @Eomm

micro benchmark

I started playing with the project and created a new micro benchmark with another router, and on my machine it was a little bit faster than the original one.

'use strict'

const micro = require('micro')
const dispatch = require('micro-route/dispatch')

const server = micro(
  dispatch('/', 'GET', (req, res) => {
    return micro.send(res, 200, { hello: 'world' })
  })
)

server.listen(3000)

Does it make sense to send a PR with the new benchmark?

Error while running benchmarks

I tried running fastify-benchmarks by installing it globally, and it threw the error below:

fastify-benchmark
⠼ Started bareexec error: Error: Command failed: node /home/nilesh/.nvm/versions/node/v8.4.0/lib/node_modules/fastify-benchmarks/node_modules/autocannon -c 100 -d 5 -p 10 localhost:3000
module.js:491
    throw err;
    ^

Error: Cannot find module '/home/nilesh/.nvm/versions/node/v8.4.0/lib/node_modules/fastify-benchmarks/node_modules/autocannon'
    at Function.Module._resolveFilename (module.js:489:15)
    at Function.Module._load (module.js:439:25)
    at Function.Module.runMain (module.js:609:10)
    at startup (bootstrap_node.js:158:16)
    at bootstrap_node.js:598:3

✔ Results for bare
{ Error: Command failed: node /home/nilesh/.nvm/versions/node/v8.4.0/lib/node_modules/fastify-benchmarks/node_modules/autocannon -c 100 -d 5 -p 10 localhost:3000
module.js:491
    throw err;
    ^

Error: Cannot find module '/home/nilesh/.nvm/versions/node/v8.4.0/lib/node_modules/fastify-benchmarks/node_modules/autocannon'
    at Function.Module._resolveFilename (module.js:489:15)
    at Function.Module._load (module.js:439:25)
    at Function.Module.runMain (module.js:609:10)
    at startup (bootstrap_node.js:158:16)
    at bootstrap_node.js:598:3

    at ChildProcess.exithandler (child_process.js:270:12)
    at emitTwo (events.js:125:13)
    at ChildProcess.emit (events.js:213:7)
    at maybeClose (internal/child_process.js:927:16)
    at Socket.stream.socket.on (internal/child_process.js:348:11)
    at emitOne (events.js:115:13)
    at Socket.emit (events.js:210:7)
    at Pipe._handle.close [as _onclose] (net.js:545:12)
  killed: false,
  code: 1,
  signal: null,
  cmd: 'node /home/nilesh/.nvm/versions/node/v8.4.0/lib/node_modules/fastify-benchmarks/node_modules/autocannon -c 100 -d 5 -p 10 localhost:3000' }

Should we add metrics to cold start?

🚀 Feature Proposal

I know that providing a fast cold start is not the focus of fastify, but I think we should provide some metrics about fastify beyond route response time.

Besides, these metrics could help us improve the initialization at some point.
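A minimal sketch of one possible cold-start metric, assuming boot time (require plus ready) is what we want to capture; this is illustrative, not the approach used in metrics.md:

'use strict'

// Hypothetical sketch: measure how long a bare fastify instance takes to load
// and become ready, as a simple cold-start metric.
const start = process.hrtime.bigint()
const fastify = require('fastify')()

fastify.ready(() => {
  const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6
  console.log(`fastify ready in ${elapsedMs.toFixed(2)} ms`)
})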

Action required: Greenkeeper could not be activated 🚨

🚨 You need to enable Continuous Integration on all branches of this repository. 🚨

To enable Greenkeeper, you need to make sure that a commit status is reported on all branches. This is required by Greenkeeper because it uses your CI build statuses to figure out when to notify you about breaking changes.

Since we didn’t receive a CI status on the greenkeeper/initial branch, it’s possible that you don’t have CI set up yet. We recommend using Travis CI, but Greenkeeper will work with every other CI service as well.

If you have already set up a CI for this repository, you might need to check how it’s configured. Make sure it is set to run on all new branches. If you don’t want it to run on absolutely every branch, you can whitelist branches starting with greenkeeper/.

Once you have installed and configured CI on this repository correctly, you’ll need to re-trigger Greenkeeper’s initial pull request. To do this, please delete the greenkeeper/initial branch in this repository, and then remove and re-add this repository to the Greenkeeper App’s white list on Github. You'll find this list on your repo or organization’s settings page, under Installed GitHub Apps.

npm run compare throws error deep within autocannon-compare dependency chain

(node:82791) UnhandledPromiseRejectionWarning: TypeError: gammaCollection.betaln is not a function
    at beta (/Users/omni/Documents/git/fastify-benchmarks/node_modules/mathfn/functions/beta.js:43:37)
    at Object.incBeta (/Users/omni/Documents/git/fastify-benchmarks/node_modules/mathfn/functions/beta.js:133:35)
    at StudenttDistribution.cdf (/Users/omni/Documents/git/fastify-benchmarks/node_modules/distributions/distributions/studentt.js:32:17)
    at StudentT.AbstactStudentT.pValue (/Users/omni/Documents/git/fastify-benchmarks/node_modules/ttest/hypothesis/abstact.js:22:32)
    at StudentT.AbstactStudentT.valid (/Users/omni/Documents/git/fastify-benchmarks/node_modules/ttest/hypothesis/abstact.js:42:15)
    at calculate (/Users/omni/Documents/git/fastify-benchmarks/node_modules/autocannon-compare/compare.js:39:17)
    at compare (/Users/omni/Documents/git/fastify-benchmarks/node_modules/autocannon-compare/compare.js:12:15)
    at module.exports.compare (/Users/omni/Documents/git/fastify-benchmarks/lib/autocannon.js:47:16)
    at inquirer.prompt.then (/Users/omni/Documents/git/fastify-benchmarks/benchmark-compare.js:126:22)

Better reflect real-world API with bigger payloads

Hello,

First, congrats on these nice and useful benchmarks!

I made a very simple benchmark here with fastify vs Express and found some interesting data: see fastify/fastify#178 for the background.

The important point is: the size of the payload matters. With a big JSON payload, JSON stringification becomes the bottleneck, and fastify matches Express performance only if a schema is applied to the fastify route; see fastify/fastify#178 (comment).

I would be glad to send a PR to add a new route to the tests, in order to test a bigger payload, or you can directly copy the test data I used in my benchmark if you prefer!
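For reference, a hypothetical sketch of such a route (payload shape and size are invented for illustration), returning a large JSON array with a declared response schema:

'use strict'

const fastify = require('fastify')()

// Hypothetical big-JSON payload: 1000 small objects.
const bigPayload = Array.from({ length: 1000 }, (_, i) => ({ id: i, hello: 'world' }))

fastify.get('/big', {
  schema: {
    response: {
      200: {
        type: 'array',
        items: {
          type: 'object',
          properties: {
            id: { type: 'integer' },
            hello: { type: 'string' }
          }
        }
      }
    }
  }
}, async () => bigPayload)

fastify.listen({ port: 3000 })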

Define a base to accept a framework

In my opinion, we cannot add every framework in the Node ecosystem; there are a lot!
We should define a base rule for whether or not to accept a framework into our list.

I think that the best solution could be the number of downloads, in which case we should define a threshold (at least 1k downloads per week?).

Thoughts?

Failing cron job

The cron job is failing:

Run node_version=$(node --version)
  node_version=$(node --version)
  benchmark_title=$(cat << EOF
  # Benchmarks
  * __Machine:__ $(uname -a) | $(node -r os -p "\`\${os.cpus().length} vCPUs | \${Math.ceil(os.totalmem() / (Math.pow(1024, 3)))}GB\`").
  * __Method:__ \`autocannon -c 100 -d 40 -p 10 localhost:3000\` (two rounds; one to warm-up, one to measure).
  * __Node:__ \`$node_version\`
  * __Run:__ $(date)
  EOF)
  benchmark_table=$(node benchmark-compare.js -t -c)
  strip_readme=$(node -r fs -p 'fs.readFileSync("./README.md", "utf-8").split(/# Benchmarks/)[0]')
  git checkout master
  echo -e "${strip_readme:?}\n${benchmark_title:?}\n\n${benchmark_table}" > README.md
  git add README.md
  git add benchmark-results.json
  git config user.name 'Github Actions'
  git config user.email '<>'
  git commit -m "Add new benchmarks to README.md"
  shell: /bin/bash -e {0}
/home/runner/work/_temp/4d56e358-8cc9-486b-99cb-de45d7b03d5d.sh: line 15: warning: here-document at line 9 delimited by end-of-file (wanted `EOF')
Error: Process completed with exit code 130.

See https://github.com/fastify/benchmarks/runs/1803265902?check_suite_focus=true for more details

add go framework

Node.js performance is not much worse than Go's.
Node.js 12 (V8 7.4 and llhttp) has a significant performance boost.

Add a Go framework to let more people know that fastify is fast enough.

add iris(go) bin(go) beego(go)

Replace autocannon

I was just trying various performance benchmarks comparing fastify etc. and wanted a baseline first, so I picked a high-performance web server written in C++ with asio. I tried running the autocannon command in the README and was very surprised by the results: it showed 62k avg req/s. From my experience I knew that couldn't be true, so I picked a high-performance HTTP benchmark tool for the job, wrk, and there we have it: the numbers doubled to 147k req/s. Other high-performance HTTP benchmark tools confirm this number.

Reason enough for me to benchmark fastify with wrk with 100 clients: ~39177.6 req/s; hey, bombardier and wrk2 confirm that. It's slower than the results stated in the README. How can it be that several well-known high-performance benchmark tools show drastically lower results? Well, it's because autocannon cheats and obviously works incorrectly. It uses pipelining, which basically writes 10 HTTP requests to the same socket buffer without waiting for previous requests. This allows several TCP improvements to take place which are unrealistic in real traffic. So let's disable that: now autocannon likewise shows something around 43439.28 req/s. What does autocannon show with pipelining (autocannon -p 10)? 74773.82 req/s. Remember, the exact same command with 10x pipelining yields 62k req/s for my C++ server. That's ridiculous.

It's pretty clear that fastify is not faster than the high-performance C++ web server; every well-known HTTP benchmark tool clearly shows that, yet autocannon suggests it is. This is mainly because of pipelining, which the other benchmark tools don't use. This is highly misleading and should not be relied on, as seen by the numbers above.

So I wanted to suggest replacing autocannon with wrk, since autocannon is obviously too slow and inaccurate to benchmark HTTP servers.

A side note: Node's built-in HTTP server is decent, but Node's TCP client code is very slow. For example, Node can only send up to 357k packets per second with a single client against a high-performance C++ TCP server, while a C++ TCP client achieves 15m packets per second. That's a hell of a difference. So that's just another indicator that benchmarks driven by Node TCP client code are highly misleading and should be avoided, hence my suggestion to replace autocannon with wrk.
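For concreteness, the two invocations being contrasted above (the flags are the ones shown in the README's Method line; omitting -p leaves autocannon at its default pipelining factor of 1):

autocannon -c 100 -d 40 -p 10 localhost:3000   # with HTTP pipelining (10 requests per write)
autocannon -c 100 -d 40 localhost:3000         # without pipelining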

Add support to ESM

Prerequisites

  • I have written a descriptive issue title
  • I have searched existing issues to ensure the feature has not already been requested

🚀 Feature Proposal

Modules are frequently dropping support for CommonJS, and in order to keep this project up to date, we should support ESM as soon as possible.

Motivation

#216

Example

No response
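A minimal sketch of what one benchmark target could look like after the switch, assuming the package opts into ESM (via "type": "module" or .mjs files); this is illustrative only:

// Hypothetical ESM version of a simple benchmark target, replacing the
// CommonJS require() style used elsewhere in this repo.
import fastify from 'fastify'

const app = fastify()

app.get('/', async () => ({ hello: 'world' }))

app.listen({ port: 3000 })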
