fastify / benchmarks

Benchmarks for Fastify, a fast and low-overhead web framework.

Home Page: https://github.com/fastify/fastify

License: MIT License


Introduction



An efficient server implies a lower cost of the infrastructure, better responsiveness under load, and happy users. How can you efficiently handle the resources of your server, knowing that you are serving the highest number of requests possible, without sacrificing security validations and handy development?

Enter Fastify. Fastify is a web framework highly focused on providing the best developer experience with the least overhead and a powerful plugin architecture. It is inspired by Hapi and Express and as far as we know, it is one of the fastest web frameworks in town.

The main branch refers to the Fastify v5 release, which is not released/LTS yet. Check out the 4.x branch for v4.

Quick start

Create a folder and make it your current working directory:

mkdir my-app
cd my-app

Generate a fastify project with npm init:

npm init fastify

Install dependencies:

npm i

To start the app in dev mode:

npm run dev

For production mode:

npm start

Under the hood, npm init downloads and runs Fastify Create, which in turn uses the generate functionality of Fastify CLI.

Install

To install Fastify in an existing project as a dependency:

Install with npm:

npm i fastify

Install with yarn:

yarn add fastify

Example

// Require the framework and instantiate it

// ESM
import Fastify from 'fastify'

const fastify = Fastify({
  logger: true
})
// CommonJS
const fastify = require('fastify')({
  logger: true
})

// Declare a route
fastify.get('/', (request, reply) => {
  reply.send({ hello: 'world' })
})

// Run the server!
fastify.listen({ port: 3000 }, (err, address) => {
  if (err) throw err
  // Server is now listening on ${address}
})

with async-await:

// ESM
import Fastify from 'fastify'

const fastify = Fastify({
  logger: true
})
// CommonJS
const fastify = require('fastify')({
  logger: true
})

fastify.get('/', async (request, reply) => {
  reply.type('application/json').code(200)
  return { hello: 'world' }
})

fastify.listen({ port: 3000 }, (err, address) => {
  if (err) throw err
  // Server is now listening on ${address}
})

Do you want to know more? Head to the Getting Started.

Note

.listen binds to the local host, localhost, interface by default (127.0.0.1 or ::1, depending on the operating system configuration). If you are running Fastify in a container (Docker, GCP, etc.), you may need to bind to 0.0.0.0. Be careful when deciding to listen on all interfaces; it comes with inherent security risks. See the documentation for more information.

Core features

  • Highly performant: as far as we know, Fastify is one of the fastest web frameworks in town; depending on the code complexity, it can serve up to 76+ thousand requests per second.
  • Extensible: Fastify is fully extensible via its hooks, plugins, and decorators.
  • Schema based: even though it is not mandatory, we recommend using JSON Schema to validate your routes and serialize your outputs; internally, Fastify compiles the schema into a highly performant function.
  • Logging: logs are extremely important but are costly; we chose the best logger to almost remove this cost: Pino!
  • Developer friendly: the framework is built to be very expressive and to help developers in their daily use, without sacrificing performance and security.

Benchmarks

Machine: EX41S-SSD, Intel Core i7, 4Ghz, 64GB RAM, 4C/8T, SSD.

Method: autocannon -c 100 -d 40 -p 10 localhost:3000 * 2, taking the second average

| Framework   | Version | Router? | Requests/sec |
| :---------- | :------ | :-----: | -----------: |
| Express     | 4.17.3  | ✓       | 14,200       |
| hapi        | 20.2.1  | ✓       | 42,284       |
| Restify     | 8.6.1   | ✓       | 50,363       |
| Koa         | 2.13.0  | ✗       | 54,272       |
| Fastify     | 4.0.0   | ✓       | 77,193       |
| -           | -       | -       | -            |
| http.Server | 16.14.2 | ✗       | 74,513       |

Benchmarks taken using https://github.com/fastify/benchmarks. This is a synthetic, "hello world" benchmark that aims to evaluate the framework overhead. The overhead that each framework has on your application depends on your application; you should always benchmark if performance matters to you.

Documentation

Chinese documentation (中文文档)

Ecosystem

  • Core - Core plugins maintained by the Fastify team.
  • Community - Community supported plugins.
  • Live Examples - Multirepo with a broad set of real working examples.
  • Discord - Join our Discord server and chat with the maintainers.

Support

Please visit Fastify help to view prior support issues and to ask new support questions.

Contributing

Whether reporting bugs, discussing improvements and new ideas or writing code, we welcome contributions from anyone and everyone. Please read the CONTRIBUTING guidelines before submitting pull requests.

Team

Fastify is the result of the work of a great community. Team members are listed in alphabetical order.

Lead Maintainers:

Fastify Core team

Fastify Plugins team

Great Contributors

Great contributors on a specific area in the Fastify ecosystem will be invited to join this group by Lead Maintainers.

Past Collaborators

Hosted by

We are an At-Large Project in the OpenJS Foundation.

Sponsors

Support this project by becoming a SPONSOR! Fastify has an Open Collective page where we accept and manage financial contributions.

Acknowledgements

This project is kindly sponsored by:

Past Sponsors:

This list includes all companies that support one or more of the team members in the maintenance of this project.

License

Licensed under MIT.

For your convenience, here is a list of all the licenses of our production dependencies:

  • MIT
  • ISC
  • BSD-3-Clause
  • BSD-2-Clause

People

Contributors

3imed-jaberi, 9ssi7, aboutlo, aichholzer, ardalanamini, cagataycali, dancastillo, dependabot[bot], dotcypress, dougwilson, eomm, fdawgs, giacomorebonato, github-actions[bot], hekike, hnry, hueniverse, jameskyburz, jkyberneees, leizongmin, lukeed, mannil, mcollina, mudrz, pi0, rafaelgss, salesh, sinchang, yusukebe, zekth


Issues

Better reflect real-world API with bigger payloads

Hello,

First, congrats on these nice & useful benchmarks!

I made a very simple benchmark here with Fastify vs Express and found some interesting data: see fastify/fastify#178 for the background.

The important point is that the size of the payload matters: with a big JSON payload, JSON stringification becomes the bottleneck, and Fastify matches Express performance only if a schema is applied to the Fastify route, see fastify/fastify#178 (comment).

I would be glad to send a PR to add a new route to the tests, in order to test a bigger payload, or you can directly copy the test data I used in my benchmark if you prefer!

Error while running benchmarks

I tried running fastify-benchmarks by installing it globally; it threw the error below:

fastify-benchmark
⠼ Started bare
exec error: Error: Command failed: node /home/nilesh/.nvm/versions/node/v8.4.0/lib/node_modules/fastify-benchmarks/node_modules/autocannon -c 100 -d 5 -p 10 localhost:3000
module.js:491
    throw err;
    ^

Error: Cannot find module '/home/nilesh/.nvm/versions/node/v8.4.0/lib/node_modules/fastify-benchmarks/node_modules/autocannon'
    at Function.Module._resolveFilename (module.js:489:15)
    at Function.Module._load (module.js:439:25)
    at Function.Module.runMain (module.js:609:10)
    at startup (bootstrap_node.js:158:16)
    at bootstrap_node.js:598:3

✔ Results for bare
{ Error: Command failed: node /home/nilesh/.nvm/versions/node/v8.4.0/lib/node_modules/fastify-benchmarks/node_modules/autocannon -c 100 -d 5 -p 10 localhost:3000
module.js:491
    throw err;
    ^

Error: Cannot find module '/home/nilesh/.nvm/versions/node/v8.4.0/lib/node_modules/fastify-benchmarks/node_modules/autocannon'
    at Function.Module._resolveFilename (module.js:489:15)
    at Function.Module._load (module.js:439:25)
    at Function.Module.runMain (module.js:609:10)
    at startup (bootstrap_node.js:158:16)
    at bootstrap_node.js:598:3

    at ChildProcess.exithandler (child_process.js:270:12)
    at emitTwo (events.js:125:13)
    at ChildProcess.emit (events.js:213:7)
    at maybeClose (internal/child_process.js:927:16)
    at Socket.stream.socket.on (internal/child_process.js:348:11)
    at emitOne (events.js:115:13)
    at Socket.emit (events.js:210:7)
    at Pipe._handle.close [as _onclose] (net.js:545:12)
  killed: false,
  code: 1,
  signal: null,
  cmd: 'node /home/nilesh/.nvm/versions/node/v8.4.0/lib/node_modules/fastify-benchmarks/node_modules/autocannon -c 100 -d 5 -p 10 localhost:3000' }

add go framework

Node.js performance is not much worse than Go's.
Node.js 12 (V8 7.4 and llhttp) has a significant performance boost.

Add Go frameworks to let more people know that Fastify is fast enough.

Add iris (Go), gin (Go), beego (Go).

response time

Hello Team,

Just wanted to know the response time of Fastify in comparison to other Node.js frameworks?

Errors in log

I'm concerned about certain log lines in the benchmarks which could lead to potentially false results:

// egg.js
2020-08-01T00:47:46.2549900Z 2020-08-01 00:47:46,237 ERROR 2839 [-/undefined/-/3ms GET /] nodejs.EPIPEError: write EPIPE
2020-08-01T00:47:46.2550242Z     at afterWriteDispatched (internal/stream_base_commons.js:154:25)
2020-08-01T00:47:46.2550525Z     at writevGeneric (internal/stream_base_commons.js:137:3)
2020-08-01T00:47:46.2550797Z     at Socket._writeGeneric (net.js:784:11)
2020-08-01T00:47:46.2551057Z     at Socket._writev (net.js:793:8)
2020-08-01T00:47:46.2551293Z     at doWrite (_stream_writable.js:401:12)
2020-08-01T00:47:46.2551543Z     at clearBuffer (_stream_writable.js:519:5)
2020-08-01T00:47:46.2551790Z     at Socket.Writable.uncork (_stream_writable.js:338:7)
2020-08-01T00:47:46.2552070Z     at ServerResponse._flushOutput (_http_outgoing.js:854:10)
2020-08-01T00:47:46.2552330Z     at ServerResponse._flush (_http_outgoing.js:823:22)
2020-08-01T00:47:46.2552541Z     at ServerResponse.assignSocket (_http_server.js:219:8)
2020-08-01T00:47:46.2552766Z errno: "EPIPE"
2020-08-01T00:47:46.2553000Z code: "EPIPE"
2020-08-01T00:47:46.2553237Z syscall: "write"
2020-08-01T00:47:46.2553458Z headerSent: true
2020-08-01T00:47:46.2553677Z name: "EPIPEError"
2020-08-01T00:47:46.2553882Z pid: 2839
2020-08-01T00:47:46.2554301Z hostname: fv-az54
// express with route
2020-08-01T00:49:47.7387707Z (node:2957) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 error listeners added to [Socket]. Use emitter.setMaxListeners() to increase limit
// trek router
2020-08-01T01:20:30.2061204Z   HttpError: write ECONNRESET
2020-08-01T01:20:30.2061702Z       at Array.onError (/home/runner/work/benchmarks/benchmarks/node_modules/trek-engine/lib/engine.js:54:15)
2020-08-01T01:20:30.2062194Z       at listener (/home/runner/work/benchmarks/benchmarks/node_modules/on-finished/index.js:169:15)
2020-08-01T01:20:30.2062680Z       at onFinish (/home/runner/work/benchmarks/benchmarks/node_modules/on-finished/index.js:100:5)
2020-08-01T01:20:30.2063161Z       at callback (/home/runner/work/benchmarks/benchmarks/node_modules/ee-first/index.js:55:10)
2020-08-01T01:20:30.2063648Z       at Socket.onevent (/home/runner/work/benchmarks/benchmarks/node_modules/ee-first/index.js:93:5)
2020-08-01T01:20:30.2063841Z       at Socket.emit (events.js:327:22)
2020-08-01T01:20:30.2064054Z       at errorOrDestroy (internal/streams/destroy.js:108:12)
2020-08-01T01:20:30.2064265Z       at onwriteError (_stream_writable.js:418:5)
2020-08-01T01:20:30.2064464Z       at onwrite (_stream_writable.js:445:5)
2020-08-01T01:20:30.2064653Z       at internal/streams/destroy.js:50:7

Mention @Eomm

Bug: Faster benchmark has negative percentage

Sometimes the benchmark result is presented with a negative percentage:
Both are awesome but take-five is -12.07% faster than fastify

Edit: I realized that problem is with autocannon-compare module, so feel free to close/update issue
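A plausible way such a sign flip happens (a sketch of the idea, not autocannon-compare's actual code; `percentFaster` is a hypothetical helper): if the comparison reports the relative difference of whichever result is passed first, swapping the arguments negates the percentage.

```javascript
// Relative speed difference of a vs b, in percent; negative when a is
// slower than b. A message template that always says "X is N% faster"
// will then print a negative "faster" value.
function percentFaster (aReqSec, bReqSec) {
  return ((aReqSec - bReqSec) / bReqSec) * 100
}

console.log(percentFaster(77193, 74513).toFixed(2))  // 3.60
console.log(percentFaster(74513, 77193).toFixed(2))  // -3.47
```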

[Feature] Add percentage value to cells

Hi! I created a small feature which adds percentage value to the cells based on the fastest choice. I think it is easier to compare this way. You can check the code here.

$ node benchmark-compare.js -t -p will output table like this:

┌─────────┬─────────┬────────┬─────────────┬─────────────┬───────────────┐
│         │ Version │ Router │ Requests/s  │ Latency     │ Throughput/Mb │
│         │         │        │ (% of bare) │ (% of bare) │ (% of bare)   │
├─────────┼─────────┼────────┼─────────────┼─────────────┼───────────────┤
│ bare    │ 8.11.2  │ ✗      │ 10714.8     │ 9.13        │ 1.52          │
│         │         │        │ (100.00)    │ (100.00)    │ (100.00)      │
├─────────┼─────────┼────────┼─────────────┼─────────────┼───────────────┤
│ micro   │ 9.3.2   │ ✗      │ 10595.6     │ 9.25        │ 1.66          │
│         │         │        │ (98.89)     │ (101.31)    │ (109.05)      │
├─────────┼─────────┼────────┼─────────────┼─────────────┼───────────────┤
│ fastify │ 1.6.0   │ ✓      │ 10065.21    │ 9.73        │ 1.57          │
│         │         │        │ (93.94)     │ (106.57)    │ (103.29)      │
├─────────┼─────────┼────────┼─────────────┼─────────────┼───────────────┤
│ express │ 4.16.3  │ ✓      │ 5806.8      │ 16.82       │ 0.90          │
│         │         │        │ (54.19)     │ (184.23)    │ (59.47)       │
└─────────┴─────────┴────────┴─────────────┴─────────────┴───────────────┘

If you like it, I'd be happy to create a PR.

Results not matching my tests

I'm currently seeing some slightly different numbers when running on my dedicated server. Some frameworks' results are similar, but others vary slightly.

I'm testing on a dedicated server EX41S-SSD, Intel Core i7, 4Ghz, 64GB RAM, 4C/8T, SSD.

| Router                   | Requests/s | Latency | Throughput/Mb |
| :----------------------- | ---------: | ------: | ------------: |
| bare                     | 67344.0    | 1.41    | 10.53         |
| connect-router           | 61968.0    | 1.54    | 9.69          |
| connect                  | 66796.8    | 1.42    | 10.45         |
| egg.js                   | 26318.4    | 3.72    | 8.68          |
| express-route-prefix     | 47273.6    | 2.03    | 16.45         |
| express-with-middlewares | 30648.0    | 3.17    | 11.08         |
| express                  | 37750.4    | 2.57    | 5.90          |
| fastify-big-json         | 14890.4    | 6.57    | 170.99        |
| fastify                  | 72889.6    | 1.30    | 11.40         |
| foxify                   | 72108.8    | 1.31    | 10.25         |
| hapi                     | 29752.0    | 3.28    | 4.65          |
| koa-router               | 45865.6    | 2.11    | 7.17          |
| koa                      | 55267.2    | 1.74    | 8.64          |
| micro-route              | 60528.0    | 1.58    | 9.47          |
| micro                    | 69984.0    | 1.35    | 10.95         |
| microrouter              | 38044.8    | 2.54    | 5.95          |
| polka                    | 66166.4    | 1.44    | 10.35         |
| rayo                     | 66070.4    | 1.44    | 10.34         |
| restify                  | 39241.6    | 2.05    | 6.21          |
| server-base-router       | 62409.6    | 1.53    | 9.76          |
| server-base              | 54473.6    | 1.77    | 8.52          |
| spirit-router            | 57923.2    | 1.30    | 9.06          |
| spirit                   | 60822.4    | 1.25    | 9.51          |
| take-five                | 0.0        | 0.00    | 0.00          |
| total.js                 | 47382.4    | 2.03    | 12.38         |
| trek-engine              | 53059.2    | 1.82    | 7.54          |
| trek-router              | 51734.4    | 1.86    | 7.35          |
| vapr                     | 54422.4    | 1.76    | 7.73          |
| yeps-router              | 44278.4    | 2.18    | 6.93          |
| yeps                     | 56905.6    | 1.69    | 8.90          |

Mention the impact of schema use on performance

Newcomers to Fastify are likely to be particularly interested in the performance gain over other frameworks (and the introductory article emphasizes that).

So IMHO, it's better to be as precise and transparent as possible about how to achieve this performance level.

One main advantage of Fastify seems to be its stringify module (https://github.com/fastify/fast-json-stringify); however, currently this module is only used if a schema is provided.

In my tests, I noticed a 25% perf gain by specifying a schema.

So I think it should be mentioned somewhere.
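The gist of why a schema speeds up serialization can be shown with a hand-rolled sketch (fast-json-stringify generates this kind of function for arbitrary schemas; `compileHelloStringifier` here is a hypothetical illustration, not its API):

```javascript
// With a fixed schema such as { hello: { type: 'string' } }, serialization
// can be compiled down to concatenation over the known keys, skipping the
// generic object walk that JSON.stringify must do on every call.
function compileHelloStringifier () {
  return (obj) => `{"hello":${JSON.stringify(obj.hello)}}`
}

const stringify = compileHelloStringifier()
console.log(stringify({ hello: 'world' }))  // {"hello":"world"}
```

In Fastify this kicks in when a route declares a response schema, e.g. `{ schema: { response: { 200: { type: 'object', properties: { hello: { type: 'string' } } } } } }`.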

update benchmark table in readme

Could the benchmark table in the README be updated, please?

It's been a while and it would be nice to include the new frameworks.

Thanks!

Is node v6.11.3 supported?

I tried to run the benchmark on node-v6.11.3, and it seems that it needs async function support to work properly.

$ node -v
v6.11.3
$ npm -v
3.10.10
$ benchmark 
/usr/local/lib/node_modules/fastify-benchmarks/lib/bench.js:7
const doBench = async (handler) => {
                      ^

SyntaxError: Unexpected token (
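The SyntaxError is expected: async functions only landed in Node 7.6, so Node 6's parser rejects lib/bench.js outright. A hypothetical guard (not part of the repo) that would fail fast with a clearer message:

```javascript
// async/await requires Node >= 7.6; on 6.x the parser itself throws before
// any runtime check can run, so a guard like this has to live in an
// entry-point file that avoids async syntax.
const [major, minor] = process.versions.node.split('.').map(Number)
const supportsAsync = major > 7 || (major === 7 && minor >= 6)

if (!supportsAsync) {
  console.error(`Node ${process.versions.node} lacks async/await; please upgrade to >= 7.6`)
  process.exit(1)
}
console.log(supportsAsync)  // true on any Node new enough to run the benchmarks
```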

actions to update bench results

🚀 Feature Proposal

Run the benchmark every so often (monthly?), updating the README file automatically using GitHub Actions.

This way, we could also test the bench script.

Related to #101

Markdown table

benchmarks.js compare -t should generate a copy-and-pasteable Markdown table. See here.

Chalk v5 requires ESM

Prerequisites

  • I have written a descriptive issue title
  • I have searched existing issues to ensure the regression has not already been reported

Last working version

1.0

Stopped working in version

current master

Node.js version

16

Operating system

Windows

Operating system version (i.e. 20.04, 11.3, 10)

10

💥 Regression Report

The version of chalk was bumped to 5, but chalk 5 requires the use of ESM. So benchmark-compare.js throws an error:

Error [ERR_REQUIRE_ESM]: require() of ES Module benchmarks\node_modules\chalk\source\index.js from benchmarks\benchmark-compare.js not supported.
Instead change the require of index.js in benchmarks\benchmark-compare.js to a dynamic import() which is available in all CommonJS modules.
    at Object.<anonymous> (benchmarks\benchmark-compare.js:8:15) {
  code: 'ERR_REQUIRE_ESM'
}

Steps to Reproduce

npm run compare

Expected Behavior

No error should be thrown.
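The workaround the error message itself suggests is a lazy dynamic import(), which is legal from CommonJS. A minimal sketch, with node:path standing in for chalk so it runs without chalk installed (`loadEsm` is a hypothetical helper):

```javascript
// chalk >= 5 is ESM-only, so in a CommonJS file like benchmark-compare.js
// `require('chalk')` throws ERR_REQUIRE_ESM. A dynamic import() works
// instead; it returns a Promise of the module namespace, whose `default`
// export is what require() used to hand back.
async function loadEsm (specifier) {
  const mod = await import(specifier)
  return mod.default ?? mod
}

// node:path stands in for chalk here; for chalk it would be
// `const chalk = await loadEsm('chalk')`.
loadEsm('node:path').then((path) => {
  console.log(typeof path.join)  // function
})
```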

please add units to the table

🚀 Feature Proposal

Just add the units (latency in ms, I assume?).

Motivation

Clarity.

Example

Read the table and know the units for a fact, instead of assuming they are ms.

npm run compare throws error deep within autocannon-compare dependency chain

(node:82791) UnhandledPromiseRejectionWarning: TypeError: gammaCollection.betaln is not a function
    at beta (/Users/omni/Documents/git/fastify-benchmarks/node_modules/mathfn/functions/beta.js:43:37)
    at Object.incBeta (/Users/omni/Documents/git/fastify-benchmarks/node_modules/mathfn/functions/beta.js:133:35)
    at StudenttDistribution.cdf (/Users/omni/Documents/git/fastify-benchmarks/node_modules/distributions/distributions/studentt.js:32:17)
    at StudentT.AbstactStudentT.pValue (/Users/omni/Documents/git/fastify-benchmarks/node_modules/ttest/hypothesis/abstact.js:22:32)
    at StudentT.AbstactStudentT.valid (/Users/omni/Documents/git/fastify-benchmarks/node_modules/ttest/hypothesis/abstact.js:42:15)
    at calculate (/Users/omni/Documents/git/fastify-benchmarks/node_modules/autocannon-compare/compare.js:39:17)
    at compare (/Users/omni/Documents/git/fastify-benchmarks/node_modules/autocannon-compare/compare.js:12:15)
    at module.exports.compare (/Users/omni/Documents/git/fastify-benchmarks/lib/autocannon.js:47:16)
    at inquirer.prompt.then (/Users/omni/Documents/git/fastify-benchmarks/benchmark-compare.js:126:22)

Action required: Greenkeeper could not be activated 🚨

🚨 You need to enable Continuous Integration on all branches of this repository. 🚨

To enable Greenkeeper, you need to make sure that a commit status is reported on all branches. This is required by Greenkeeper because it uses your CI build statuses to figure out when to notify you about breaking changes.

Since we didn’t receive a CI status on the greenkeeper/initial branch, it’s possible that you don’t have CI set up yet. We recommend using Travis CI, but Greenkeeper will work with every other CI service as well.

If you have already set up a CI for this repository, you might need to check how it’s configured. Make sure it is set to run on all new branches. If you don’t want it to run on absolutely every branch, you can whitelist branches starting with greenkeeper/.

Once you have installed and configured CI on this repository correctly, you’ll need to re-trigger Greenkeeper’s initial pull request. To do this, please delete the greenkeeper/initial branch in this repository, and then remove and re-add this repository to the Greenkeeper App’s white list on Github. You'll find this list on your repo or organization’s settings page, under Installed GitHub Apps.

Add version column in README

🚀 Feature Proposal

Add a column in the README that tracks the version of the framework that ran the bench.

Motivation

Transparency

Example

| Router   | Requests/s | Latency | Throughput/Mb | Version |
| :------- | ---------: | ------: | ------------: | :------ |
| polkadot | 57384.0    | 1.63    | 8.97          | 1.?     |
| fastify  | 56417.6    | 1.68    | 8.82          | 3.0.1   |
| polka    | 54064.8    | 1.76    | 8.46          | 0.5.?   |

micro benchmark

I started playing with the project and created a new micro benchmark with another router; on my machine it was a little bit faster than the original one.

'use strict'

const micro = require('micro')
const dispatch = require('micro-route/dispatch')

const server = micro(
  dispatch('/', 'GET', (req, res) => {
    return micro.send(res, 200, { hello: 'world' })
  })
)

server.listen(3000)

Does it make any sense to send PR with new benchmark?

Add Restana

On the https://github.com/the-benchmarker/web-frameworks benchmark there are Node.js frameworks that come out faster than Fastify (Restana, Rayo, Polka). Here, Rayo and Polka come out slower than Fastify. It would be great to add Restana here as well, to see how well it does in this benchmark.

Also, it would be interesting to find out why Rayo and Polka come out faster on the web-frameworks benchmark but are slower here. Maybe this benchmark is somehow biased towards Fastify?

Failing cron job

The cron job is failing:

Run node_version=$(node --version)
  node_version=$(node --version)
  benchmark_title=$(cat << EOF
  # Benchmarks
  * __Machine:__ $(uname -a) | $(node -r os -p "\`\${os.cpus().length} vCPUs | \${Math.ceil(os.totalmem() / (Math.pow(1024, 3)))}GB\`").
  * __Method:__ \`autocannon -c 100 -d 40 -p 10 localhost:3000\` (two rounds; one to warm-up, one to measure).
  * __Node:__ \`$node_version\`
  * __Run:__ $(date)
  EOF)
  benchmark_table=$(node benchmark-compare.js -t -c)
  strip_readme=$(node -r fs -p 'fs.readFileSync("./README.md", "utf-8").split(/# Benchmarks/)[0]')
  git checkout master
  echo -e "${strip_readme:?}\n${benchmark_title:?}\n\n${benchmark_table}" > README.md
  git add README.md
  git add benchmark-results.json
  git config user.name 'Github Actions'
  git config user.email '<>'
  git commit -m "Add new benchmarks to README.md"
  shell: /bin/bash -e {0}
/home/runner/work/_temp/4d56e358-8cc9-486b-99cb-de45d7b03d5d.sh: line 15: warning: here-document at line 9 delimited by end-of-file (wanted `EOF')
Error: Process completed with exit code 130.

See https://github.com/fastify/benchmarks/runs/1803265902?check_suite_focus=true for more details

Define a base to accept a framework

In my opinion, we cannot add every framework in the Node ecosystem; there are a lot!!
We should define a baseline rule for accepting (or not) a framework into our list.

I think the best solution could be the number of downloads, in which case we should define a threshold (at least 1k downloads per week?).

Thoughts?

Maintainers to issue a release

Hi Folks,

I got npm publish rights from @cagataycali! Thanks!

Is there a volunteer that would help in cutting releases of this module, so we can fix #83?

There are probably a bunch of changes to make in the README to make it easily installable.

Thanks

Matteo

Add support to ESM

Prerequisites

  • I have written a descriptive issue title
  • I have searched existing issues to ensure the feature has not already been requested

🚀 Feature Proposal

Modules are increasingly dropping support for CommonJS, and in order to keep up to date, we should support ESM as soon as possible.

Motivation

#216

Example

No response

Should we add metrics to cold start?

🚀 Feature Proposal

I know that providing a fast cold start is not the focus of Fastify, but I think we should provide some metrics about it beyond route response time.

Besides, these metrics could help us improve initialization at some point.

Add polka server

I just heard about polka.
Should it be added to the benchmarks?
Maybe @lukeed could add a benchmark here to see how it compares 😄

Replace autocannon

I was just trying various performance benchmarks comparing fastify etc. and wanted to have a baseline first. So I picked a high-performance web server written in C++ with asio. I tried running the autocannon command in the README and was very surprised by the results. It showed 62k avg req/s. From my experience I know that that couldn't be true, so I picked a high-performance HTTP benchmark tool for the job, wrk, and there we have it: the numbers doubled to 147k req/s. Other high-performance HTTP benchmark tools confirm this number.

Reason enough for me to benchmark fastify with wrk with 100 clients: ~39177.6 req/s; hey, bombardier, and wrk2 confirm that. It's slower than the results stated in the README. How can it be that several well-known high-performance benchmark tools show drastically lower results? Well, it's because autocannon cheats and obviously works incorrectly. It uses pipelining, which basically writes 10 HTTP requests to the same socket buffer without waiting for previous requests. This allows several TCP improvements to take place which are unrealistic in real traffic. So let's disable that: now autocannon shows something around 43439.28 req/s as well. What does autocannon show with pipelining (autocannon -p 10)? 74773.82 req/s. Remember, the exact same command with 10 pipelined requests yields 62k req/s for my C++ server. That's ridiculous.

It's pretty clear that fastify is not faster than the high-performance C++ web server; every well-known HTTP benchmark tool clearly shows that, yet autocannon suggests it is. This is mainly because of pipelining, which other benchmark tools didn't implement. This is highly misleading and should not be used, as seen from the numbers above.

So I wanted to suggest replacing autocannon with wrk, since autocannon is obviously too slow and inaccurate to benchmark HTTP servers.

A side note: Node's built-in HTTP server is decent; however, Node's TCP client code is very slow. For example, Node can only send up to 357k packets per second from a single client against a high-performance C++ TCP server. A C++ TCP client achieves 15m packets per second here. That's a hell of a lot of difference. So that's just another indicator that benchmarks done via Node TCP client code are highly misleading and should be avoided, hence my suggestion to replace autocannon with wrk.

Setup CI for pull requests

There is a npm test script included in the module. Can someone with privileges set this up in CI so the result shows in pull requests?

It currently fails because there is no CI, so no one realized that some PRs were not passing before they were merged.

$ npm test

> [email protected] test /Users/doug.wilson/Code/NodeJS/fastify-benchmarks
> standard | snazzy

standard: Use JavaScript Standard Style (https://standardjs.com)
standard: Run `standard --fix` to automatically fix some problems.

/Users/doug.wilson/Code/NodeJS/fastify-benchmarks/benchmark-bench.js
  92:1  error  Expected indentation of 4 spaces but found 2
  93:1  error  Expected indentation of 6 spaces but found 4
  94:1  error  Expected indentation of 4 spaces but found 2

/Users/doug.wilson/Code/NodeJS/fastify-benchmarks/benchmarks/@leizm-web.js
  3:46  error  Extra semicolon
  5:26  error  Extra semicolon

/Users/doug.wilson/Code/NodeJS/fastify-benchmarks/benchmarks/trek-engine-router.js
  26:7   error  Expected literal to be on the right side of ==
  26:11  error  Expected '===' and instead saw '=='

/Users/doug.wilson/Code/NodeJS/fastify-benchmarks/benchmarks/trek-engine.js
  15:7   error  Expected literal to be on the right side of ==
  15:11  error  Expected '===' and instead saw '=='

✖ 9 problems
npm ERR! Test failed.  See above for more details.
