stepci / stepci

Automated API Testing and Quality Assurance

Home Page: https://stepci.com

License: Mozilla Public License 2.0

Dockerfile 2.04% TypeScript 95.18% Shell 2.78%
actions api-client api-rest api-testing api-testing-framework automated-testing ci continuous-integration github-actions graphql grpc grpc-client integration-testing load-testing qa soap swagger test-automation testing-tools trpc

stepci's Introduction


Note: We just announced a Support Plan for Step CI

Important: For users migrating from Postman or Insomnia, see issues #29 and #30 respectively

Welcome

Step CI is an open-source API Quality Assurance framework

  • Language-agnostic. Configure easily using YAML, JSON or JavaScript
  • REST, GraphQL, gRPC, tRPC, SOAP. Test different API types in one workflow
  • Self-hosted. Test services on your network, locally and in CI/CD
  • Integrated. Play nicely with others

Read the Docs

Try the Online Playground

Join us on Discord

Get started

  1. Install the CLI

    Using Node.js

    npm install -g stepci
    

    Note: Make sure you're using the LTS version of Node.js

    Using Homebrew

    brew install stepci
    
  2. Create example workflow

    workflow.yml

    version: "1.1"
    name: Status Check
    env:
      host: example.com
    tests:
      example:
        steps:
          - name: GET request
            http:
              url: https://${{env.host}}
              method: GET
              check:
                status: /^20/

    Note: You can also use JSON format to configure your workflow (a JSON translation of this example is shown after these steps)

  3. Run the workflow

    stepci run workflow.yml
    
    PASS  example
    
    Tests: 0 failed, 1 passed, 1 total
    Steps: 0 failed, 1 passed, 1 total
    Time:  0.559s, estimated 1s
    
    Workflow passed after 0.559s
    

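For reference, here is the same example workflow in JSON, assuming the JSON file simply mirrors the YAML structure one-to-one:

    {
      "version": "1.1",
      "name": "Status Check",
      "env": { "host": "example.com" },
      "tests": {
        "example": {
          "steps": [
            {
              "name": "GET request",
              "http": {
                "url": "https://${{env.host}}",
                "method": "GET",
                "check": { "status": "/^20/" }
              }
            }
          ]
        }
      }
    }
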
Documentation

Documentation is available on docs.stepci.com

Examples

You can find example workflows under examples/

Community

Join our community on Discord and GitHub

Contributing

As an open-source project, we welcome contributions from the community. If you are experiencing any bugs or want to add some improvements, please feel free to open an issue or pull request

Support Plan

Get Pro-level support with an SLA, onboarding, and prioritized feature requests and bugfixes.

Learn more

Book us with Cal.com

Privacy

By default, the CLI collects anonymous usage data, which includes:

  • Unique user ID
  • OS Name
  • Node Version
  • CLI Version
  • Command (stepci init, stepci run, stepci generate)
  • Environment (Local, Docker, CI/CD)

Note: Usage analytics can be disabled by setting the STEPCI_DISABLE_ANALYTICS environment variable
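For example, before running the CLI in a shell or CI environment (a sketch; the note doesn't specify a required value, so a simple non-empty value is assumed to work):

    export STEPCI_DISABLE_ANALYTICS=1   # assumption: any non-empty value disables analytics
    stepci run workflow.yml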

License

The source code is distributed under the terms of the Mozilla Public License 2.0

stepci's People

Contributors

andrewfarley, chenrui333, dejmedus, dinoosawruss, koki-develop, lemondouble, mishushakov, mschfh, npostulart, plungingchode, umeshchavda05, yu-fuku


stepci's Issues

[Feedback]: What I think about Artillery as an alternative

Would you be sad if Step CI did not exist?

No response

How likely would you recommend Step CI to a colleague?

No response

Is there something you don't like about Step CI?

No response

Do you have any ideas that could improve Step CI?

No response

Anything else you want to tell us?

I haven't had time to look at this library comprehensively, but skimming the docs at this early stage, it already looks quite appealing to me.

As someone who has used Artillery for performance testing, these are my takes on it. I'm looking forward to this library.

Documentation

For someone with no prior experience in performance testing, comprehensive documentation is really helpful, and Artillery's doc categories are what allowed me to write performance tests easily. They even have a changelog on their website, which makes it easy to see everything in one place.

WISH LIST

I do wish they explained the configuration syntax in a bit more detail, or at least described all of it in one place.

I also wish they explained how the workers operate and how they are used; I had to figure out myself that they run workers at all.
The script below is a sample config that logs different UUIDs.

What's weird to me is that before and after don't share the same UUID. That's fine in itself, but for someone using the UUID to associate things into one whole flow, they should at least share some context.

before:
  name: 'beforeAll'
  flow:
    - log: 'Before all hook process ID - {{ $uuid }}'
after:
  name: 'afterAll'
  flow:
    - log: 'After all hook process ID - {{ $uuid }}'
scenarios:
  - name: 'Scenario'
    flow:
      - log: 'Start scenario for a virtual user with ID - {{ $uuid }}'

Configuration

YAML is a good choice, though I've heard some people wanted to use JSON as well. I am content with YAML and just keep everything in it; custom payloads and other configs are in YAML format anyway.

I love how they designed the whole config syntax.

I love the environments property, where you can create different configs for each environment or override parts of your base config, for example to turn off a plugin for one environment or use different phases.

I love the processor property, which lets you expose a bunch of custom functions that you can call throughout the whole config; I just wish it could be written in modern syntax using import instead of require. You can even update context.vars and write additional serializable values into it, making them available to the succeeding custom function or the next flow step of the scenario.

I love the flow configuration: having log, loop, function, think, and conditional requests, and being able to combine them.

scenarios:
  - name: 'Scenario'
    flow:
      - function: 'setAuthPayload'
      - post:
          name: 'loginApi'
          url: '/login'
          beforeRequest:
            - 'setCustomHeader'
          json:
            username: '{{ authPayload.username }}'
            password: '{{ authPayload.password }}'

I love how the templating works: you can hook into context.vars with expressions like {{ $uuid }} right in the YAML config.

WISH LIST

It would be better if the phases ran strictly sequentially.
The config below is a sample whose first phase has an arrivalCount of 10 and a duration of 10 s.

config:
  phases:
    - duration: 10
      arrivalCount: 10
    - duration: 60
      arrivalCount: 20

Based on the above config, after 10 s it will start the next phase even if the previous phase has not finished yet due to slow network requests, asynchronous functions, long-running tasks, etc.


Having a consistent or configurable arrival rate would be better. They explain that arrivalCount arrivals happen at a fixed rate of 1 s, but rampTo and arrivalRate don't, and the behavior differs when you combine them with each other or with maxVusers.

config:
  target: "https://staging.example.com"
  phases:
    - duration: 300
      arrivalRate: 50
    - duration: 300
      arrivalRate: 10
      maxVusers: 50
    - duration: 120
      arrivalRate: 10
      rampTo: 50
    - duration: 60
      arrivalCount: 20

Based on the above config, the first phase generates 50 virtual users every second for 5 minutes while the second one generates 10 virtual users every second for 5 minutes, with no more than 50 concurrent virtual users at any given time. The third one ramps up the arrival rate of virtual users from 10 to 50 over 2 minutes. The final one creates 20 virtual users in 60 seconds (one virtual user approximately every 3 seconds).

All of them seem easy at first glance, but I wish the behavior were more consistent, especially for a phase that should not exceed a custom maxVusers of 300 but has an arrival rate of, say, 3 virtual users every 2 seconds.

I remember running into this inconsistency with rampTo as well: artilleryio/artillery#843


Consistent configuration between beforeScenario/afterScenario and before/after.
The former two hooks are just lists of custom functions, but it would be better to align them with the before and after hooks, where you can use any existing syntax, such as log, or trigger an HTTP request. I sometimes feel it should have been that way from the beginning, but then again, what you do in those hooks is usually cleanup or initialization, which is a custom function anyway.

before:
  name: 'beforeAll'
  flow:
    - log: 'Before all hook process ID - {{ $uuid }}'
after:
  name: 'afterAll'
  flow:
    - log: 'After all hook process ID - {{ $uuid }}'
scenarios:
  - name: 'Scenario'
    beforeScenario:
      - 'customInitializationFunction'
    afterScenario:
      - 'cleanupFunction'
    flow:
      - log: 'Start scenario for a virtual user with ID - {{ $uuid }}'

Rename before and after to Jest-like names such as beforeAll and afterAll (or even beforeAllScenarios and afterAllScenarios) to convey better when those hooks run, or at least improve the documentation to make users aware of them.

Plugins

There are a bunch of useful plugins, but I haven't used most of them; only metrics-by-endpoint, expect, and the extendedMetrics option for http for additional reporting.

Metric name   Meaning
http.dns      Time taken by DNS lookups
http.tcp      Time taken to establish TCP connections
http.tls      Time taken by completing TLS handshakes
http.total    Time for the entire response to be downloaded

The above table is taken from https://www.artillery.io/docs/guides/guides/http-reference#additional-performance-metrics-v2
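For context, enabling those plugins and the extended HTTP metrics looks roughly like this in the Artillery config (a sketch from memory; option names and placement may vary between Artillery versions):

    config:
      plugins:
        expect: {}               # YAML-configurable response assertions
        metrics-by-endpoint: {}  # per-endpoint latency breakdown
      http:
        extendedMetrics: true    # adds http.dns / http.tcp / http.tls / http.total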

I also love how you can capture the response of a request via a JSONPath expression and use the expect plugin to verify it, but I wish it had more functionality, like Jest's expect, while remaining configurable in the YAML file.

WISH LIST

Having the ability to write custom plugins locally. Right now, I believe you have to publish a plugin before you can use it, and you have to prefix its name with artillery-plugin-name-of-your-plugin.

Reporting

I love how you can export the results, create a report locally, and then open it as an HTML file with graphs.

WISH LIST

I wish they allowed disabling the report output in the CLI while still showing the verbose logs. It creates a lot of noise if you're going to generate a report anyway, and the metrics sometimes get logged in between even while the whole phase is still in progress.

As of right now, you can only show or hide logs altogether; there is no log categorization.

Postman import

We want to be able to convert Postman collections to a workflow

Insomnia import

We want to be able to convert Insomnia collections to a workflow

(Docs) Add integration examples

We want to have some more examples of how to use Step CI with different tools or in different environments (one possible shape is sketched after the list). Here's a list:

  • BitBucket
  • Circle CI
  • AWS CodeBuild
  • Travis CI
  • TeamCity
  • Jenkins
  • Cloud Build
  • Dagger
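As one possible shape, a Circle CI job could install the CLI and run the workflow using only the commands documented above (a sketch; the cimg/node:lts image name is an assumption, and any recent Node.js LTS image should do):

    version: 2.1
    jobs:
      api-tests:
        docker:
          - image: cimg/node:lts   # assumed image; any Node.js LTS environment works
        steps:
          - checkout
          - run: npm install -g stepci   # install the CLI as documented in the README
          - run: stepci run workflow.yml # run the workflow
    workflows:
      test:
        jobs:
          - api-tests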

Supply secrets as argument

One thing we should think about is providing secret arguments to the runner (like API keys)

The differences are:

  1. Secret variables should not be displayed in the CLI output, unlike env variables

  2. You can only define secrets from the CLI

I'm thinking something like:

stepci run workflow.yml -s variable=$VAR
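A secret supplied this way could then be referenced from the workflow, presumably through a secrets namespace analogous to env (a sketch; the exact syntax is what this proposal would have to define, and the auth shape is borrowed from the JWT bug report below):

    version: "1.1"
    name: Authenticated Check
    tests:
      example:
        steps:
          - name: GET request
            http:
              url: https://example.com
              method: GET
              auth:
                bearer:
                  token: ${{secrets.variable}}   # assumed reference syntax for the -s flag above
              check:
                status: /^20/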

Load testing

  • Implement in the runner
  • Implement in CLI
  • Documentation

[Bug]: CLI arguments are not being replaced in testing templates

What happened?

We are using JWTs for authentication. I am testing on macOS (M1)

export JWT=$(genJwtToken) && stepci workflow.yml -s JWT=$JWT

{{env.JWT}} in the bearer/token section never gets replaced ("sends to testing server [object object]")
{{secrets.JWT}} in the bearer/token section never gets replaced ("sends to testing server [object object]")

auth:
  bearer:
    token: {{secret.JWT}}

auth:
  bearer:
    token: {{secrets.JWT}}

For now, I am using sed to substitute the JWT into the workflow before running it.
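For reference, that workaround might look like this (a sketch; __JWT__ is a hypothetical placeholder written into the workflow file, and it goes through a temporary file rather than staying strictly in memory):

    export JWT=$(genJwtToken)                             # genJwtToken as in the report above
    sed "s/__JWT__/$JWT/g" workflow.yml > workflow.resolved.yml
    stepci run workflow.resolved.yml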

Love the tool. I'd also love to know if I'm doing something wrong.

What did you expect to happen?

I expected the test to pass. I'm trying to mirror tests out of Swagger.

Version

2.4.5

Environment

v16.16

How can we reproduce this bug?

I started with the getting-started example.

Relevant log output

POST http://localhost:3000/schools/getRoster HTTP/1.1
Content-Type: application/json
Authorization: Bearer secrets.ABCJ

{"schoolId":"8eLtalPcqv5Ioae4E4Wn"}
Response

HTTP/1.1 403 Forbidden
x-powered-by: Express
access-control-allow-origin: *
content-type: application/json; charset=utf-8
content-length: 33
etag: W/"21-6WgjtJhAT2yQTa63ODczjQj9Xro"
date: Mon 07 Nov 2022 19:25:56 GMT
connection: close

{"message":"Could not authorize"}

Checks


Would you be interested in working on a bugfix for this issue?

  • Yes! Assign me

Proposal: Add "connections" directive

Useful for storing connection data and persisting connections across test steps

version: "1.1"
name: gRPC API
tests:
  example:
    connections:
      example:
        grpc:
          proto: public/helloworld.proto
          host: 0.0.0.0:50051
          tls: {}
    steps:
      - name: gRPC request
        grpc:
          # Using a reference
          connection: example
          service: helloworld.Greeter
          method: SayHello
          data:
            name: world!
          check:
            jsonpath:
              $.message: Hello world!
      - name: gRPC request
        grpc:
          # Inline
          connection:
            proto: public/helloworld.proto
            host: 0.0.0.0:50051
            tls: {}
          service: helloworld.Greeter
          method: SayHello
          data:
            name: world!
          check:
            jsonpath:
              $.message: Hello world!

Proposal: Scripts

Pre-request, teardown, and check script support

Example:

version: "1.1"
name: Status Check
tests:
  example:
    scripts:
      example: |
        console.log("hello")
      example2:
        file: script.js
    steps:
      - http:
          url: https://example.com
          method: GET
          hooks:
            beforeRequest:
              - example
            afterRequest:
              - example2
          check:
            status: /^20/
            custom:
              name: example2
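
Here, script.js would just be a plain JavaScript file; a hypothetical example consistent with the proposal:

    // script.js (hypothetical file referenced by example2 above)
    console.log("hello from a file-based script")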

(Docs) Add docs on how to set up the dev environment

We want to tell users how to set up a dev environment to work on the Step CI codebase. For that, we need an extra section in our readme containing the following:

  • List dependencies like Node, NPM, etc.
  • How to install them
  • How to test the code
  • How to run the build

cURL import

It would be awesome to convert a cURL command into the Step CI workflow format (see the sketch below)
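For illustration, a command like

    curl -X POST https://example.com/api -H 'Content-Type: application/json' -d '{"hello": "world"}'

might convert to a workflow along these lines (a sketch; the headers and json fields are assumptions about the workflow syntax, and the exact mapping is what this feature would define):

    version: "1.1"
    name: Imported from cURL
    tests:
      example:
        steps:
          - name: POST request
            http:
              url: https://example.com/api
              method: POST
              headers:
                Content-Type: application/json
              json:
                hello: world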

(Docs) CLI Reference

We want a doc with all our CLI commands and options.
You can already retrieve the list from the CLI by running stepci --help

Proposal: Reusable credentials

We want to make credentials reusable across multiple tests

version: "1.1"
name: Status Check
env:
  host: example.com
credentials:
  test:
    basic:
      username: Mish
      password: Ushakov
tests:
  example:
    steps:
      - name: GET request
        http:
          url: https://{{env.host}}
          method: GET
          useCredentials: test
          check:
            status: /^20/

Allow loading tests from other directories

workflow.yml

version: "1.1"
name: Status Check
env:
  host: example.com
testsFrom:
  - test.yml

test.yml

tests:
  example:
    steps:
      - name: GET request
        http:
          url: https://{{env.host}}
          method: GET
          check:
            status: /^20/

[Bug]: continueOnFail doesn't work as expected

What happened?

It seems as though the workflow config for continueOnFail does not actually continue on failure. After looking at the code, it seems to be implemented. I might have defined it in the wrong spot, but if it worked, it would be really handy.

What did you expect to happen?

I expected all tests to run, not to stop at the first failing test.

Version

2.4.5

Environment

node 16.16

How can we reproduce this bug?

version: "1.1"
name: Status Check
env:
host: example.com
config:
continueOnFail: true
tests:
example:
steps:
- name: GET request
http:
url: https://{{env.host}}1
method: GET
check:
status: /^20/
- name: GET request
http:
url: https://{{env.host}}
method: GET
check:
status: /^20/

Relevant log output

Request (HTTP)

POST http://localhost:3000/users/verify-code HTTP/1.1
Content-Type: application/json
Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VyX2lkIjoiaXBJZUJJaGJHMllEeGdQNHc1N1B6d0s2WDNJMyIsInVzZXJSZWNvcmQiOnsidWlkIjoiaXBJZUJJaGJHMllEeGdQNHc1N1B6d0s2WDNJMyIsImVtYWlsIjoia2F5ZWoudGFrQGdtYWlsLmNvbSIsImVtYWlsVmVyaWZpZWQiOmZhbHNlLCJkaXNhYmxlZCI6ZmFsc2UsIm1ldGFkYXRhIjp7Imxhc3RTaWduSW5UaW1lIjoiTW9uLCAwNyBOb3YgMjAyMiAxODo1MToyNCBHTVQiLCJjcmVhdGlvblRpbWUiOiJNb24sIDA3IE5vdiAyMDIyIDE4OjUxOjI0IEdNVCIsImxhc3RSZWZyZXNoVGltZSI6IlR1ZSwgMDggTm92IDIwMjIgMDE6Mjc6MzggR01UIn0sInRva2Vuc1ZhbGlkQWZ0ZXJUaW1lIjoiTW9uLCAwNyBOb3YgMjAyMiAxODo1MToyNCBHTVQiLCJwcm92aWRlckRhdGEiOlt7InVpZCI6ImtheWVqLnRha0BnbWFpbC5jb20iLCJlbWFpbCI6ImtheWVqLnRha0BnbWFpbC5jb20iLCJwcm92aWRlcklkIjoicGFzc3dvcmQifV19LCJpYXQiOjE2Njc4NzcwODB9.ayF9GYkVlUa7kD5FP4NrkoPiw3Im-p7VJ9BicJgOrYt83TZ1GFxGmBgBEVQFOgnl-Gsk9brMmQAWkrrgshrAac3J87aBusD5st--D4aTr20ql1bq_E5drbwadhuB6G_kWkjh62akZcMgSfNJ9cU9M8shiLSgFSeui1QSUjWVl0SDFojPGvlNKrTeIOmXGRHN58pKqdRCD7f-uI27Dzy3nDk1LpU5swAAnNQUpeuokcj7SoIaH_uq1N-w15hJ6-fe0wmd9WSFoBUO2irHiv1AizInIk2sryQOfjSRrYD-U0qaANFcKj_efyb_oBWKvCN4DvLm821RLQQCH21h7D8Gp68_SpKm989GoUioTa6oiUgcZMnPnwUBRv0l9IUz0bGvjH2q-RlHbysZwDaB_Iibs7kcbpxrAH5rWJRaYazlSjdrjqxcXxLpBlE6EBuIaW8cUDJbj_fKtCfzYGzV6tfjSHeTflW8wN9XZRrve66bXeZPF1sRvpjUG-qejwYYryPmGsmXEqjM9lUYDllOYmUkPrTPQ7fJE-GmtwFyMww8U3RyZvdiNk4NO7e4sCHbkM_ldhDV_SRmZrrAOnl3qX3TvdUbvTCvDZtiyomi4_AqMt9knN2uZEsJ8JFmieUQQ4n0nGjjkF_jzdAQ0coiOEKoAc2KG4qA13xi4ERkO71Rl8k

{"phoneNumber":"+14049060803","verificationCode":"492275"}
Response

HTTP/1.1 500 Internal Server Error
x-powered-by: Express
access-control-allow-origin: *
content-type: application/json; charset=utf-8
content-length: 28
etag: W/"1c-GTLvyKxlvOrEj2GGBdfOAn6LHp0"
date: Tue 08 Nov 2022 03:11:22 GMT
connection: close

{"message":"Incorrect code"}

Checks

Status

  ✕ 500 (expected 200)

⚠︎ users_tests › Get User in OnBoarding

Step was skipped because previous one failed

Would you be interested in working on a bugfix for this issue?

  • Yes! Assign me

[Bug]: How does one provide a client side certificate to a request?

What happened?

The documentation does not mention how one would make a request with a client-side certificate.

What did you expect to happen?

The documentation explains how one would make a request with a client-side certificate.

Version

?

Environment

?

How can we reproduce this bug?

No response

Relevant log output

No response

Would you be interested in working on a bugfix for this issue?

  • Yes! Assign me

HAR support

It would be cool if Step CI supported the HAR format

  • Generate Workflow from a HAR file
  • Export test result as a HAR file

Add tags

Will be important later on if we want to aggregate the logs
