Would you be sad if Step CI did not exist?
No response
How likely would you recommend Step CI to a colleague?
No response
Is there something you don't like about Step CI?
No response
Do you have any ideas that could improve Step CI?
No response
Anything else you want to tell us?
I haven't had time to look at this library comprehensively, but skimming the docs at this early stage is already quite appealing to me.
As someone who has used Artillery for performance testing, these are my takes on it. I'm looking forward to this library.
Documentation
For someone without prior experience in performance testing, having comprehensive documentation is really helpful, and its categories are what allowed me to write performance tests easily. They even have a changelog on their website, which makes it easy to see everything in one place.
WISH LIST
I do wish they explained the configuration syntax in more depth, or at least described all of it in one place.
I also wish they explained how the workers work or how they are used, as I had to figure out on my own that workers are run at all.
The script below is a sample config that logs different UUIDs. What's weird to me is that `before` and `after` don't share the same UUID. That is fine, but for someone using it to associate things as one whole flow, they should at least share some context.
```yaml
before:
  name: 'beforeAll'
  flow:
    - log: 'Before all hook process ID - {{ $uuid }}'
after:
  name: 'afterAll'
  flow:
    - log: 'After all hook process ID - {{ $uuid }}'
scenarios:
  - name: 'Scenario'
    flow:
      - log: 'Start scenario for a virtual user with ID - {{ $uuid }}'
```
Configuration
YAML is a good choice, though I have heard some people wanted to use JSON as well. I am content with YAML and keep everything in it; even my custom payloads and other configs are in YAML format anyway.
I love how the whole config syntax is designed.
I love the `environments` property, where you can create different configs for each environment or override parts of your base config, for example to turn off a plugin for a particular environment or to use different phases.
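A sketch of how such an override might look; the `staging` environment name, target, and phase values here are my own assumptions, not taken from the docs:

```yaml
config:
  target: 'http://localhost:3000'
  phases:
    - duration: 10
      arrivalCount: 10
  environments:
    # hypothetical environment that overrides the base target and phases
    staging:
      target: 'https://staging.example.com'
      phases:
        - duration: 60
          arrivalRate: 5
```

You would then select it on the command line, e.g. `artillery run -e staging script.yml`.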
I love the `processor` property, where you expose a bunch of custom functions that you can call throughout the whole config, though I do wish you could write them in modern syntax using `import` instead of `require`. You can even update `context.vars` with additional serializable values so they are available to the succeeding custom function or the next step of the scenario's flow.
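Such a processor is just a JavaScript module whose exported functions you reference by name from the config. This is only a sketch: `setAuthPayload` and `setCustomHeader` are hypothetical functions of mine (matching the sample scenario further below), and the credentials and header name are made up:

```javascript
// processor.js - a sketch of a custom-functions module (CommonJS).
// Values written into context.vars must be serializable so that
// later functions and flow steps can read them.

// Scenario-level function: prepares a payload for the login request.
function setAuthPayload(context, events, done) {
  context.vars.authPayload = {
    username: 'demo-user',     // hypothetical credentials
    password: 'demo-password',
  };
  return done();
}

// beforeRequest hook: mutates the outgoing request before it is sent.
function setCustomHeader(requestParams, context, events, done) {
  requestParams.headers = requestParams.headers || {};
  requestParams.headers['x-correlation-id'] =
    context.vars.correlationId || 'local-run';
  return done();
}

module.exports = { setAuthPayload, setCustomHeader };
```

The config points at the module with something like `processor: './processor.js'` under `config`.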
I love the `flow` configuration: it has `log`, `loop`, `function`, `think`, and conditional requests as well, and you can even combine them.
```yaml
scenarios:
  - name: 'Scenario'
    flow:
      - function: 'setAuthPayload'
      - post:
          name: 'loginApi'
          url: '/login'
          beforeRequest:
            - 'setCustomHeader'
          json:
            username: '{{ authPayload.username }}'
            password: '{{ authPayload.password }}'
```
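To illustrate combining those flow steps, here is a sketch mixing `loop`, `think`, and a conditional request; the endpoints, loop values, and the `isAdmin` variable are assumptions of mine:

```yaml
scenarios:
  - name: 'Combined flow'
    flow:
      - log: 'Virtual user {{ $uuid }} starting'
      - loop:
          - get:
              url: '/products/{{ $loopElement }}'
          - think: 2            # pause 2 seconds between iterations
        over:
          - 'first-id'
          - 'second-id'
      - get:
          url: '/admin'
          ifTrue: 'isAdmin'     # only sent when the isAdmin variable is truthy
```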
I love how the templating works, where you can just hook into `context.vars` with expressions like `{{ $uuid }}` in the YAML config.
WISH LIST
It would be better if the `phases` ran strictly sequentially.
The config below is a sample phase with an `arrivalCount` of 10 and a duration of 10s.
```yaml
config:
  phases:
    - duration: 10
      arrivalCount: 10
    - duration: 60
      arrivalCount: 20
```
Based on the above config, after 10s it will start the next phase even if the previous phase has not yet finished due to slow network requests, asynchronous functions, long-running tasks, etc.
Having a consistent or configurable arrival rate would be better. They explain that `arrivalCount` happens at a fixed rate of 1s, but `rampTo` and `arrivalRate` don't, and the behavior differs when you combine them with each other or with `maxVusers`.
```yaml
config:
  target: "https://staging.example.com"
  phases:
    - duration: 300
      arrivalRate: 50
    - duration: 300
      arrivalRate: 10
      maxVusers: 50
    - duration: 120
      arrivalRate: 10
      rampTo: 50
    - duration: 60
      arrivalCount: 20
```
Based on the above config, the first phase generates 50 virtual users every second for 5 minutes while the second one generates 10 virtual users every second for 5 minutes, with no more than 50 concurrent virtual users at any given time. The third one ramps up the arrival rate of virtual users from 10 to 50 over 2 minutes. The final one creates 20 virtual users in 60 seconds (one virtual user approximately every 3 seconds).
All of them seem easy at first glance, but I wish the behavior were more consistent, especially for a phase that should not exceed a custom `maxVusers` of 300 but uses a custom arrival rate, like 3 Vusers every 2 seconds, for example.
I remember running into this inconsistency with `rampTo` as well: artilleryio/artillery#843
Consistent configuration across `beforeScenario`, `afterScenario`, and `before`, `after`.
The former two hooks are just lists of custom functions, but it would be better to align them with the `before` and `after` hooks, where you can use any existing syntax like `log` or trigger an HTTP request. I still sometimes feel it should have been that way from the beginning, but at the same time, what you do in those hooks is usually cleanup or initialization, which is a custom function anyway.
```yaml
before:
  name: 'beforeAll'
  flow:
    - log: 'Before all hook process ID - {{ $uuid }}'
after:
  name: 'afterAll'
  flow:
    - log: 'After all hook process ID - {{ $uuid }}'
scenarios:
  - name: 'Scenario'
    beforeScenario:
      - 'customInitializationFunction'
    afterScenario:
      - 'cleanupFunction'
    flow:
      - log: 'Start scenario for a virtual user with ID - {{ $uuid }}'
```
Rename `before` and `after` to Jest-like properties such as `beforeAll` and `afterAll`, or even `beforeAllScenarios` and `afterAllScenarios`, to make it clearer when those hooks run, or just improve the documentation to let users know about this.
Plugins
There are a bunch of useful plugins, but I haven't used most of them; only `metrics-by-endpoint`, `expect`, and the `extendedMetrics` for `http` for additional reporting.
| Metric name | Meaning |
| --- | --- |
| `http.dns` | Time taken by DNS lookups |
| `http.tcp` | Time taken to establish TCP connections |
| `http.tls` | Time taken by completing TLS handshakes |
| `http.total` | Time for the entire response to be downloaded |
The above table is taken from https://www.artillery.io/docs/guides/guides/http-reference#additional-performance-metrics-v2
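For reference, enabling those three is roughly this shape in the config (a sketch from memory, so treat the exact plugin options as assumptions):

```yaml
config:
  plugins:
    metrics-by-endpoint: {}
    expect: {}
  http:
    extendedMetrics: true   # emits http.dns, http.tcp, http.tls, http.total
```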
I also love how you can capture the response of a request via a JSON path expression and use the `expect` plugin to verify it, but I just wish it had more functionality, like Jest's `expect`, configurable in the YAML file.
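A sketch of what I mean, capturing a field from a login response and asserting on it; the `/login` endpoint, the `$.token` path, and the credentials are assumptions of mine:

```yaml
scenarios:
  - name: 'Login and verify'
    flow:
      - post:
          url: '/login'
          json:
            username: 'demo-user'
            password: 'demo-password'
          capture:
            json: '$.token'        # JSON path into the response body
            as: 'authToken'
          expect:
            - statusCode: 200
            - hasProperty: 'token'
```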
WISH LIST
Having the ability to write custom plugins locally. Right now I believe you have to publish a plugin before you can use it, and you have to prefix it with `artillery-plugin-name-of-your-plugin`.
Reporting
I love how you can export the results, create a report locally, and then open an HTML file with graphs.
WISH LIST
I wish they allowed disabling the report output in the CLI while still showing the verbose logs. It creates a lot of noise if you're going to generate a report anyway, and the metrics sometimes get logged in between even while the whole phase is still in progress.
As of right now you can only show/hide logs altogether; there is no log categorization.