stepci / stepci
Automated API Testing and Quality Assurance
Home Page: https://stepci.com
License: Mozilla Public License 2.0
Useful for storing connection data and persisting connections across test steps
version: "1.1"
name: gRPC API
tests:
  example:
    connections:
      example:
        grpc:
          proto: public/helloworld.proto
          host: 0.0.0.0:50051
          tls: {}
    steps:
      - name: gRPC request
        grpc:
          # Using a reference
          connection: example
          service: helloworld.Greeter
          method: SayHello
          data:
            name: world!
          check:
            jsonpath:
              $.message: Hello world!
      - name: gRPC request
        grpc:
          # Inline
          connection:
            proto: public/helloworld.proto
            host: 0.0.0.0:50051
            tls: {}
          service: helloworld.Greeter
          method: SayHello
          data:
            name: world!
          check:
            jsonpath:
              $.message: Hello world!
Pre-request, teardown and checks script support
Example:
version: "1.1"
name: Status Check
tests:
  example:
    scripts:
      example: |
        console.log("hello")
      example2:
        file: script.js
    steps:
      - http:
          url: https://example.com
          method: GET
          hooks:
            beforeRequest:
              - example
            afterRequest:
              - example2
          check:
            status: /^20/
            custom:
              name: example2
See #57 for details
One thing we should think about is providing secret arguments to the runner (like API keys)
The differences are:
Secret variables should not be displayed in the CLI output, unlike env variables
You can only define secrets from the CLI
I'm thinking something like:
stepci run workflow.yml -s variable=$VAR
Mask secret variables in output (low-priority for now)
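A minimal sketch of what masking could look like (illustrative only; maskSecrets is a hypothetical helper, not the runner's implementation):

```javascript
// Replace every occurrence of each secret's value in CLI output with "****".
// Sketch only: a real implementation would also handle encoded variants.
function maskSecrets(output, secrets) {
  return Object.values(secrets).reduce(
    (text, value) => text.split(value).join("****"),
    output
  );
}

console.log(maskSecrets("Authorization: Bearer abc123", { JWT: "abc123" }));
// -> "Authorization: Bearer ****"
```

The same pass could be applied to request and response dumps before they reach the verbose output.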
It seems as though the workflow config option continueOnFail does not actually continue the run. After looking at the code, it seems to be implemented. I might have defined it in the wrong spot, but if it worked, it would be really handy.
I expected all tests to run, but the run stops at the first failing test.
stepci version: 2.4.5
Node.js version: 16.16
version: "1.1"
name: Status Check
env:
  host: example.com
config:
  continueOnFail: true
tests:
  example:
    steps:
      - name: GET request
        http:
          url: https://{{env.host}}1
          method: GET
          check:
            status: /^20/
      - name: GET request
        http:
          url: https://{{env.host}}
          method: GET
          check:
            status: /^20/
HTTP Request
POST http://localhost:3000/users/verify-code HTTP/1.1
Content-Type: application/json
Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VyX2lkIjoiaXBJZUJJaGJHMllEeGdQNHc1N1B6d0s2WDNJMyIsInVzZXJSZWNvcmQiOnsidWlkIjoiaXBJZUJJaGJHMllEeGdQNHc1N1B6d0s2WDNJMyIsImVtYWlsIjoia2F5ZWoudGFrQGdtYWlsLmNvbSIsImVtYWlsVmVyaWZpZWQiOmZhbHNlLCJkaXNhYmxlZCI6ZmFsc2UsIm1ldGFkYXRhIjp7Imxhc3RTaWduSW5UaW1lIjoiTW9uLCAwNyBOb3YgMjAyMiAxODo1MToyNCBHTVQiLCJjcmVhdGlvblRpbWUiOiJNb24sIDA3IE5vdiAyMDIyIDE4OjUxOjI0IEdNVCIsImxhc3RSZWZyZXNoVGltZSI6IlR1ZSwgMDggTm92IDIwMjIgMDE6Mjc6MzggR01UIn0sInRva2Vuc1ZhbGlkQWZ0ZXJUaW1lIjoiTW9uLCAwNyBOb3YgMjAyMiAxODo1MToyNCBHTVQiLCJwcm92aWRlckRhdGEiOlt7InVpZCI6ImtheWVqLnRha0BnbWFpbC5jb20iLCJlbWFpbCI6ImtheWVqLnRha0BnbWFpbC5jb20iLCJwcm92aWRlcklkIjoicGFzc3dvcmQifV19LCJpYXQiOjE2Njc4NzcwODB9.ayF9GYkVlUa7kD5FP4NrkoPiw3Im-p7VJ9BicJgOrYt83TZ1GFxGmBgBEVQFOgnl-Gsk9brMmQAWkrrgshrAac3J87aBusD5st--D4aTr20ql1bq_E5drbwadhuB6G_kWkjh62akZcMgSfNJ9cU9M8shiLSgFSeui1QSUjWVl0SDFojPGvlNKrTeIOmXGRHN58pKqdRCD7f-uI27Dzy3nDk1LpU5swAAnNQUpeuokcj7SoIaH_uq1N-w15hJ6-fe0wmd9WSFoBUO2irHiv1AizInIk2sryQOfjSRrYD-U0qaANFcKj_efyb_oBWKvCN4DvLm821RLQQCH21h7D8Gp68_SpKm989GoUioTa6oiUgcZMnPnwUBRv0l9IUz0bGvjH2q-RlHbysZwDaB_Iibs7kcbpxrAH5rWJRaYazlSjdrjqxcXxLpBlE6EBuIaW8cUDJbj_fKtCfzYGzV6tfjSHeTflW8wN9XZRrve66bXeZPF1sRvpjUG-qejwYYryPmGsmXEqjM9lUYDllOYmUkPrTPQ7fJE-GmtwFyMww8U3RyZvdiNk4NO7e4sCHbkM_ldhDV_SRmZrrAOnl3qX3TvdUbvTCvDZtiyomi4_AqMt9knN2uZEsJ8JFmieUQQ4n0nGjjkF_jzdAQ0coiOEKoAc2KG4qA13xi4ERkO71Rl8k
{"phoneNumber":"+14049060803","verificationCode":"492275"}
Response
HTTP/1.1 500 Internal Server Error
x-powered-by: Express
access-control-allow-origin: *
content-type: application/json; charset=utf-8
content-length: 28
etag: W/"1c-GTLvyKxlvOrEj2GGBdfOAn6LHp0"
date: Tue 08 Nov 2022 03:11:22 GMT
connection: close
{"message":"Incorrect code"}
Checks
Status
✕ 500 (expected 200)
⚠︎ users_tests › Get User in OnBoarding
Step was skipped because previous one failed
We will want to tell users how to set up a dev environment to work on the Step CI codebase. For that we would need an extra section in our readme, which would contain the following:
workflow.yml
version: "1.1"
name: Status Check
env:
  host: example.com
testsFrom:
  - test.yml
test.yml
tests:
  example:
    steps:
      - name: GET request
        http:
          url: https://{{env.host}}
          method: GET
          check:
            status: /^20/
Users who did not visit the website should still be able to see the examples and features
It would be cool if Step CI supported the HAR format
We want to let our users know when a new version of the CLI is available
See: https://raml.org/
We want to have some more examples of how to use Step CI with different tools or in different environments. Here's a list:
(See line 32 in commit ca64626)
We want to be able to convert Insomnia collections to a workflow
Reference: #17
This was an oversight from the last update. Thanks to koki from Twitter for figuring it out!
We are using JWTs for authentication. I am testing on macos (M1)
export JWT=$(genJwtToken)
stepci workflow.yml -s JWT=$JWT
Neither {{env.JWT}} nor {{secrets.JWT}} in the bearer token section ever gets replaced; the runner sends "[object object]" to the testing server.
auth:
  bearer:
    token: {{secret.JWT}}

auth:
  bearer:
    token: {{secrets.JWT}}
Now I am using sed to transform the workflow in memory for JWT before running.
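The sed workaround described above might look like the following sketch (file names and the example token are illustrative; genJwtToken is the user's own helper):

```shell
# Sketch of the sed workaround: resolve the {{secrets.JWT}} placeholder
# before handing the workflow to stepci. Create a demo fragment first:
printf 'auth:\n  bearer:\n    token: {{secrets.JWT}}\n' > workflow.yml
JWT="example-token"   # in practice: export JWT=$(genJwtToken)
# '|' is used as the sed delimiter because base64-encoded JWTs may contain '/'
sed "s|{{secrets.JWT}}|${JWT}|g" workflow.yml > workflow.resolved.yml
cat workflow.resolved.yml
# stepci workflow.resolved.yml   # then run the resolved file
```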
I love the tool. I'd also love to know if I am doing something wrong.
I expected the test to pass. I'm trying to mirror tests out of Swagger.
stepci version: 2.4.5
Node.js version: v16.16
I started with the starting example.
POST http://localhost:3000/schools/getRoster HTTP/1.1
Content-Type: application/json
Authorization: Bearer secrets.ABCJ
{"schoolId":"8eLtalPcqv5Ioae4E4Wn"}
Response
HTTP/1.1 403 Forbidden
x-powered-by: Express
access-control-allow-origin: *
content-type: application/json; charset=utf-8
content-length: 33
etag: W/"21-6WgjtJhAT2yQTa63ODczjQj9Xro"
date: Mon 07 Nov 2022 19:25:56 GMT
connection: close
{"message":"Could not authorize"}
Checks
Status
We want a doc with all our CLI commands and options
You can already retrieve the list from the CLI by running stepci --help
When no arguments are specified we want to output the help screen
This could be accomplished by default command option in yargs
https://github.com/yargs/yargs/blob/main/docs/advanced.md#default-commands
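Independent of yargs, the intended behavior can be sketched as follows (the help text and dispatch function are illustrative, not the actual stepci CLI; with yargs itself this would be a default command registered as '$0'):

```javascript
// Minimal sketch: print the help screen when invoked with no arguments,
// otherwise dispatch the named command. Illustrative help text only.
const HELP = `Usage: stepci <command> [options]

Commands:
  stepci run <workflow>  run a workflow file
`;

function dispatch(argv) {
  // argv excludes the node binary and script path, i.e. process.argv.slice(2)
  if (argv.length === 0) return HELP;        // no arguments -> show help screen
  return `running command: ${argv[0]}`;      // otherwise dispatch normally
}

console.log(dispatch(process.argv.slice(2)));
```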
We want to be able to supply env variables for the Workflow from command line
I'm thinking something like:
stepci run workflow.yml -e variable=$VAR
We want to display request and response information like headers, cookies and body in the CLI for easy debugging
I haven't had time to look at this library comprehensively, but scanning the docs at this early stage is already quite appealing to me.
As someone who has worked with Artillery for performance testing, here are my takes on it. I'm going to keep looking forward to this library.
For someone without prior experience in performance testing, having comprehensive documentation is really helpful, and its categories are what allowed me to write performance tests easily. They even have a changelog on their website, which makes it easy to see everything in one place.
WISH LIST
I do wish they explained the configuration syntax a bit more, or at least had one place that describes all of it.
I also wish they explained how the workers are used; I had to figure out myself that they run workers at all.
The below script is a sample config logging different UUIDs.
What's weird to me is that before and after don't share the same UUID. That is fine, but for someone using them to associate things as one whole flow, they should at least share some context.
before:
  name: 'beforeAll'
  flow:
    - log: 'Before all hook process ID - {{ $uuid }}'
after:
  name: 'afterAll'
  flow:
    - log: 'After all hook process ID - {{ $uuid }}'
scenarios:
  - name: 'Scenario'
    flow:
      - log: 'Start scenario for a virtual user with ID - {{ $uuid }}'
It is a good choice to use YAML, though I heard some people wanted to use JSON as well. I am content with YAML and keep everything in YAML; even custom payloads and other configs are in YAML format anyway.
I love how they designed the whole config syntax.
I love how they have an environments property where you can create different configs for each environment, or override some config from your base config, like turning off a plugin for some environment or having different phases.
I love how they have a processor property where you expose a bunch of custom functions that you can call throughout the whole config, though I do hope it could use modern import syntax instead of require. You can even update context.vars and write additional serializable values into it, to be available in the succeeding custom function or the next flow of the scenario.
I love the flow configuration: having log, loop, function, think, and conditional requests as well, and you can even combine them.
scenarios:
  - name: 'Scenario'
    flow:
      - function: 'setAuthPayload'
      - post:
          name: 'loginApi'
          url: '/login'
          beforeRequest:
            - 'setCustomHeader'
          json:
            username: '{{ authPayload.username }}'
            password: '{{ authPayload.password }}'
I love how the templating works, where you can just hook up with context.vars like {{ $uuid }} in the YAML config.
WISH LIST
It would be better if the phases ran sequentially.
The below config is a sample with a phase of arrivalCount 10 and a duration of 10s.
config:
  phases:
    - duration: 10
      arrivalCount: 10
    - duration: 60
      arrivalCount: 20
Based on the above config, after 10s it will run the next phase even if the previous phase is not yet finished due to slow network requests, asynchronous functions, long-running tasks, etc.
Having a consistent or configurable arrival rate would be better. They explain that arrivalCount happens at a fixed rate of 1s, but rampTo and arrivalRate don't, and the behavior differs when you combine them with each other or with maxVusers.
config:
  target: "https://staging.example.com"
  phases:
    - duration: 300
      arrivalRate: 50
    - duration: 300
      arrivalRate: 10
      maxVusers: 50
    - duration: 120
      arrivalRate: 10
      rampTo: 50
    - duration: 60
      arrivalCount: 20
Based on the above config, the first phase generates 50 virtual users every second for 5 minutes while the second one generates 10 virtual users every second for 5 minutes, with no more than 50 concurrent virtual users at any given time. The third one ramps up the arrival rate of virtual users from 10 to 50 over 2 minutes. The final one creates 20 virtual users in 60 seconds (one virtual user approximately every 3 seconds).
All of them seem easy at first glance, but I wish the behavior were more consistent, especially for a phase that should not exceed a custom maxVusers of 300 but with an arrival rate of, say, 3 Vusers every 2 seconds.
I remember this inconsistency with rampTo as well: artilleryio/artillery#843
Consistent configuration between beforeScenario, afterScenario and before, after.
The former two hooks are just a list of custom functions, but it would be better to align them with the before and after hooks, where you can use any existing syntax like log or trigger an HTTP request. I still sometimes feel that it should have been this way from the beginning, but at the same time, what you do in those hooks is usually some cleanup or initialization, which is a custom function anyway.
before:
  name: 'beforeAll'
  flow:
    - log: 'Before all hook process ID - {{ $uuid }}'
after:
  name: 'afterAll'
  flow:
    - log: 'After all hook process ID - {{ $uuid }}'
scenarios:
  - name: 'Scenario'
    beforeScenario:
      - 'customInitializationFunction'
    afterScenario:
      - 'cleanupFunction'
    flow:
      - log: 'Start scenario for a virtual user with ID - {{ $uuid }}'
Rename before and after to Jest-like properties such as beforeAll and afterAll, or even beforeAllScenarios and afterAllScenarios, to give more meaning to when those hooks run, or just improve the documentation to let users know about them.
There are a bunch of useful plugins, but I haven't used most of them; only metrics-by-endpoint, expect, and the extendedMetrics for http for additional reporting.
Metric name | Meaning
---|---
http.dns | Time taken by DNS lookups
http.tcp | Time taken to establish TCP connections
http.tls | Time taken by completing TLS handshakes
http.total | Time for the entire response to be downloaded
The above table is taken from https://www.artillery.io/docs/guides/guides/http-reference#additional-performance-metrics-v2
I also love how you can capture the response of a request via a JSONPath expression and use the expect plugin to verify it, but I just hope it had more functionality, like Jest's expect, while staying configurable in the YAML file.
WISH LIST
Having the ability to write custom plugins locally. Right now, I believe you have to publish a plugin before you can use it, and you have to prefix it with artillery-plugin-name-of-your-plugin.
I love how you can export the results, create a report locally, and then open an HTML file with a graph.
WISH LIST
I wish they allowed disabling the report in the CLI while still showing the verbose logs. It creates a lot of noise if you're going to generate a report anyway, and the metrics sometimes log in between even while the whole phase is still in progress.
As of right now, you can only show or hide logs altogether; there is no log categorization.
We want to make credentials reusable across multiple tests
version: "1.1"
name: Status Check
env:
  host: example.com
credentials:
  test:
    basic:
      username: Mish
      password: Ushakov
tests:
  example:
    steps:
      - name: GET request
        http:
          url: https://{{env.host}}
          method: GET
          useCredentials: test
          check:
            status: /^20/
It would be awesome to convert a cURL command into the Step CI workflow format
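As a sketch of what such a conversion might produce (the mapping is an assumption, not an implemented feature, and field names beyond url and method follow common HTTP-step shapes rather than a confirmed schema), a command like curl -X POST https://example.com/users -H 'Content-Type: application/json' -d '{"name": "Mish"}' could become:

```yaml
# Hypothetical output of a curl-to-workflow converter
version: "1.1"
name: Converted from cURL
tests:
  example:
    steps:
      - name: POST request
        http:
          url: https://example.com/users
          method: POST
          headers:
            Content-Type: application/json
          body: '{"name": "Mish"}'
```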
Will be important later on if we want to aggregate the logs
The documentation does not mention how one would make a request with a client-side certificate.
Expected: the documentation explains how one would make a request with a client-side certificate.
We want to support chai assertions in our matcher
Please see our current implementation for more
https://github.com/stepci/runner/blob/main/src/matcher.ts
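For orientation, here is a minimal sketch of the style of matching the workflows above rely on (a simplification for illustration, not the actual matcher.ts API): a check value written as /…/ is treated as a regular expression, anything else is compared for equality; chai would add richer assertions on top of this.

```javascript
// Simplified matcher sketch: values wrapped in slashes act as regexes,
// everything else is compared by strict equality.
function checkValue(actual, expected) {
  if (typeof expected === "string" && expected.startsWith("/") && expected.endsWith("/")) {
    const pattern = new RegExp(expected.slice(1, -1));
    return pattern.test(String(actual));
  }
  return actual === expected;
}

console.log(checkValue(200, "/^20/"));                    // regex check, like status: /^20/
console.log(checkValue("Hello world!", "Hello world!"));  // plain equality check
```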
We want to be able to convert Postman collections to a workflow
See: #49
It would be awesome to have a formula for Homebrew.