Comments (11)
> Note that cold start is not such a critical thing anymore unless your application has extremely low traffic

I agree. Although I believe that the more we expose Fastify's metrics, the more confident consumers will be.
> The relevant metric to track is the time between starting the Node.js process.

Good! Perhaps this metric can be the first one to create. @Eomm does fastify/fastify#2195 already get this metric? We can continue in that PR.
> Would it mean without endpoints?

Yes. I mean we could maybe increase the number of routes until we reach some limit and document it, or establish that each route increases the startup time by X.
I'm generically +1 to add that benchmark. Note that cold start is not such a critical thing anymore unless your application has extremely low traffic. (Note that on AWS the process is reused, which allows https://github.com/fastify/aws-lambda-fastify to be more performant than the Express equivalent under traffic.)
The relevant metric to track is the time between starting the Node.js process, calling .listen()
and then closing the server as soon as it's done listening.
As far as Fastify goes, fastify/fastify#1598 sums it up. Our main bottleneck is in the Ajv dependency tree - TL;DR: we are essentially executing a really long regexp that we are not matching.
I started something like this:
fastify/fastify#2195
And this (empty) repo, to check if we can speed up the start:
https://github.com/Eomm/fastify-fast-startup
I would be glad to work on this with you 👍
The hard thing is that schema compilation is the slowest part, so we need to define a set of use cases.
If we don't use schemas, for example, we are ignoring that component.
We can do both, actually.
I'm thinking about:
- Benchmark of raw startup
- Benchmark of startup with schemas
- Benchmark of startup with X plugins

I don't know if exposing the memory consumption when we use a lot of schemas is a good metric, though. What do you think?
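One way to sketch that memory measurement (plain schema objects stand in here for real Ajv-compiled validators, and the schema shape is invented for the example):

```javascript
// Compare heap usage before and after building many schemas.
function buildSchema(i) {
  return {
    type: 'object',
    properties: { [`field${i}`]: { type: 'string' } },
  };
}

const before = process.memoryUsage().heapUsed;
const schemas = Array.from({ length: 1000 }, (_, i) => buildSchema(i));
const after = process.memoryUsage().heapUsed;

console.log(`${schemas.length} schemas -> ~${((after - before) / 1024).toFixed(1)} KiB`);
```

As noted below in the thread, the number depends heavily on how much the schemas are reused, so it is better suited to documenting best practices than to a headline metric.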
> Benchmark of raw startup

Would it mean without endpoints? Maybe we could run a poll on Twitter to define a good tradeoff for a "typical Fastify application".
For example, when I drafted that code I thought of trying the start with 1, 100 and 1000 fake endpoints.
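A sketch of generating those fake endpoints (the route shape matches Fastify's route options, but the URLs and handler are invented placeholders):

```javascript
// Generate N fake GET routes to register before measuring startup,
// e.g. with N = 1, 100 and 1000 as suggested above.
function generateRoutes(n) {
  return Array.from({ length: n }, (_, i) => ({
    method: 'GET',
    url: `/fake-${i}`,
    handler: async () => ({ hello: `route ${i}` }),
  }));
}

// With a real Fastify instance you would then do:
//   for (const route of generateRoutes(1000)) app.route(route);
console.log(generateRoutes(3).map((r) => r.url));
```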
> expose the memory consumption

I think it would not be, because a higher memory consumption:
- should not scare users away from using schemas
- depends too much on the implementation (i.e. whether the user reuses the schemas in the route configuration)

For sure we could measure it to write down some "best practices".
Usually I run Fastify applications in production with:
- between 10 and 20 routes, each with:
  - 3 or 4 hooks
  - some Fastify decorators
  - some request decorators
  - a header schema (highly reused)
  - a body schema (almost always unique)
  - a query-string schema (highly reused)
  - a response schema with up to 4 levels of nested objects
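For illustration, a response schema with 4 levels of nested objects might look like this (all property names here are invented):

```javascript
// Illustrative 4-level nested response schema (JSON Schema shape).
const responseSchema = {
  type: 'object',                       // level 1
  properties: {
    user: {
      type: 'object',                   // level 2
      properties: {
        address: {
          type: 'object',               // level 3
          properties: {
            geo: {
              type: 'object',           // level 4
              properties: {
                lat: { type: 'number' },
                lng: { type: 'number' },
              },
            },
          },
        },
      },
    },
  },
};
console.log(JSON.stringify(responseSchema).length, 'bytes of schema');
```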
OK, would you like to work on a branch in this repo?
I would focus on creating a fake JSON schema generator.
(I don't have write access here, so I would have to open PRs against your branch, though.)
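A minimal sketch of such a fake schema generator (the property-naming scheme and type alternation are assumptions for the example, not an agreed design):

```javascript
// Produce an object schema with a given number of alternating
// string/number properties, all marked as required.
function fakeSchema(propertyCount) {
  const properties = {};
  for (let i = 0; i < propertyCount; i++) {
    properties[`prop${i}`] = { type: i % 2 === 0 ? 'string' : 'number' };
  }
  return { type: 'object', properties, required: Object.keys(properties) };
}

console.log(fakeSchema(2));
```

A real generator would likely also vary nesting depth, since that is what stresses Ajv's compilation.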
> Ok, would you like to work on a branch in this repo?

OK. How is fastify/fastify#2195 related to the work above? Is there something to reuse?
> Is there something to reuse?

I think so: the structure built on benchmark.js (I chose it because it is quite easy to use and bulletproof), and the concept of generating routes.
For sure, feel free to start from a fresh one if you know a better option!
Closing, since the improvements are being discussed in the PR above.