oai / tools.openapis.org

A collection of open-source and commercial tools for creating your APIs with OpenAPI - Sourced from and published for the community

Home Page: https://tools.openapis.org/

Languages: JavaScript 82.71%, Nunjucks 17.10%, CSS 0.19%
Topics: openapi, openapi-spec, openapi3

tools.openapis.org's Introduction

OpenAPI Tooling

This project is provided by the OpenAPI Initiative as a means to centralize ecosystem information on OpenAPI-related tooling. It leverages open-source projects that have gone before to provide a consolidated list of tooling.

The project is split into two features:

  • A list of tooling merged from sources across the interwebs that users can grab and slice and dice as they see fit.
  • A website that allows users to search and inspect the tooling data first-hand.

Each is expanded upon in the sections below.

The project Kanban board for Tooling can be found here: https://github.com/OAI/Projects/projects/4

Roll Call

The following projects are being leveraged to provide the majority of the source information.

| Name | Source | Description |
| --- | --- | --- |
| OpenAPI.Tools | https://github.com/apisyouwonthate/openapi.tools | APIs You Won't Hate's effort to create an uber list of tooling. |
| APIs.guru | https://github.com/apis-guru/awesome-openapi3 | Repository/site based on tagged repositories in GitHub. This repository reuses the build approach rather than pulling the list from the source. |

How Can You Help?

This project is designed to continue the work of APIs.guru and collect data based on repositories tagged with a topic.

If you want your project included in the tooling list, tag it with one or more of the following topics:

  • swagger or openapi2 (For Swagger/OpenAPI 2.0 support).
  • openapi3 (For OpenAPI 3.0 support).
  • openapi31 (For OpenAPI 3.1 support).

If you aren't familiar with topics in GitHub, please follow this guide to add them to your repository.

Note: Collection of the swagger/openapi2 topics is not currently implemented - see dependencies described in this issue.

Tooling List

The tooling list is built in largely the same format as the majority of projects that have blazed a trail in tooling before (which of course this project takes full advantage of).

In order to bring this together in a sensible way a Gulp-based process has been implemented. Gulp was chosen given the relative ease with which functions can be implemented to massage the data stream and to ensure the build is not closely coupled to a (commercial) CI tool. There are a couple of principles around the design worth stating:

  • The transform functions that massage the data are abstracted away from Gulp to enable the build to "lift-and-shift" to a new build tool as required.
  • Pipes between functions are always formatted as YAML to allow for simple dumping of the data for human appraisal.
  • The source data collection is written as independent packages referenced by metadata to allow new sources to be "slotted" in.

Note that if better tools are identified for the build then Gulp should be easy to change.

Environment Variables

Access to the GitHub API is required to run the build. Access is supported through basic authentication using a GitHub username and a personal access token as environment variables.

The following variables are therefore required to run the build:

| Name | Description |
| --- | --- |
| GH_API_USERNAME | GitHub username used to access the GitHub API. |
| GH_API_TOKEN | OAuth/Personal Access Token used to access the GitHub API. |
| GH_API_CONCURRENCY_LIMIT | Number of simultaneous connections to the GitHub API. The recommended value is 2; values greater than 2 appear to result in connections being throttled and the API returning a 403. |

You must export these before running either of the data collection builds.

We've used custom environment variables for GitHub API access rather than default GitHub variables provided by Actions. This provides both a separation-of-concerns between access controls and the build mechanism and enables higher rate limits.

Note: We plan to introduce dotenv to help with the setting of environment variables.

Full Build

The full build takes the following approach:

  • Retrieve each tooling source, including the existing list at src/_data/tools.yaml.
  • Combine source data based on repository name.
  • Normalise property names across sources using simple statistics (Sørensen–Dice, Damerau–Levenshtein distance).
  • Get repository metadata from GitHub.
  • Categorise the tools using Bayesian statistics.
  • Write to src/_data/tools.yaml.
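To illustrate the "combine source data based on repository name" step, here is a hedged sketch; the function and property names are illustrative, not the repository's actual API:

```javascript
// Hypothetical sketch of the merge stage: entries from every source are
// keyed on their repository, and later sources overlay earlier ones.
const merge = (sources) => {
  const byRepo = new Map();
  for (const tool of sources.flat()) {
    const key = (tool.github || tool.name).toLowerCase();
    byRepo.set(key, { ...byRepo.get(key), ...tool });
  }
  return [...byRepo.values()];
};

const tools = merge([
  [{ name: 'Spectral', github: 'stoplightio/spectral', source: 'openapi.tools' }],
  [{ github: 'stoplightio/spectral', language: 'TypeScript', source: 'github' }],
]);
// tools now holds a single merged Spectral entry.
```

The real build also normalises property names across sources (Sørensen–Dice, Damerau–Levenshtein) before merging, which this sketch omits.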

Currently this build is scheduled using GitHub Actions and runs once a week on Sunday.

The schedule will be reviewed as we collect data to see if executing it with greater frequency would be beneficial.

To run the full build locally:

yarn install

GH_API_USERNAME=<username> GH_API_TOKEN=<personal-access-token> GH_API_CONCURRENCY_LIMIT=2
export GH_API_USERNAME GH_API_TOKEN GH_API_CONCURRENCY_LIMIT
yarn run build:data:full

Metadata Update

The goal of the metadata update is to provide consistent repository metadata without sourcing new tooling.

Currently this build is scheduled using GitHub Actions and runs every day.

The schedule will be reviewed as we collect data to see if executing it with greater frequency would be beneficial.

To run the metadata build locally:

# If you haven't done this already
yarn install

# If you haven't done this already
GH_API_USERNAME=<username> GH_API_TOKEN=<personal-access-token> GH_API_CONCURRENCY_LIMIT=2

# If you haven't done this already
export GH_API_USERNAME GH_API_TOKEN GH_API_CONCURRENCY_LIMIT

yarn run build:metadata

Testing Locally

To test locally you can clone or fork the Tooling repository and create yourself a .env file that meets your needs. For example:

export GH_API_USERNAME=<your GitHub username>
export GH_API_TOKEN=<your GitHub personal access token>
export GH_API_CONCURRENCY_LIMIT=2
export TOOLING_REPOSITORY_OWNER=<your GitHub organisation or username>
export TOOLING_REPOSITORY_REPO_NAME=Tooling

With this in hand there are a bunch of options you can pass to either yarn run build:full or yarn run build:metadata:

  • --metadata: If you don't want to run everything you can change the configuration that drives the build (more on this below).
  • --env-file: Supply an alternative .env file as described above.
  • --output-dir: Change where you write the tools.yaml file.
  • --dry-run: Don't do destructive things like closing issues (actually this is all it does right now).

Configuration File

The build is driven by a configuration file, the default being gulpfile.js/metadata.json which is validated at the start of the build using the JSON Schema found in validate-metadata.js.

The purpose of the configuration file is to define what data sources are collected. It contains an array of objects, each with two mandatory properties:

  • title: The name of the data processor. This is recorded in tools.yaml to identify which processor picked the data up.
  • processor: Path to a JavaScript library that implements the data collection logic.

Other properties specific to a particular data source can be defined as required.

These data processors collect data and pass it into the Gulp pipeline for processing. That's all they do - everything else is done downstream in the JavaScript libraries found in the transform directory.
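A data processor might look roughly like this; the module shape and function name below are assumptions for illustration, not the project's actual interface:

```javascript
// Hypothetical processor module shape; the real processors are the
// JavaScript libraries referenced from the configuration file.
const getToolData = async (config) => {
  // Fetch the raw source (a YAML/JSON list, a README, etc.) and map it
  // to the common tool shape consumed downstream by the transforms.
  const raw = [{ name: 'ExampleTool', repository: 'https://github.com/example/tool' }];
  return raw.map((entry) => ({
    name: entry.name,
    github: entry.repository,
    source: config.title, // recorded so tools.yaml shows which processor found it
  }));
};

module.exports = { getToolData };
```

Everything beyond collection (merging, normalisation, categorisation) is then handled by the transform libraries, as described above.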

If you want to test only a subset of data sources in isolation you can create your own configuration file. For example, if you are testing the GitHub issue-sourced data processor you can define just this in the configuration file - you'd only get those tools in the resultant tools.yaml file.

You should also consider whether to test with a subset of master data from src/_data/tools.yaml (as it is voluminous). You can edit your configuration file to point somewhere else e.g.:

{
  "title": "master",
  "url": "<Path to your alternative tools.yaml file>",
  "processor": "../processors/master-processor.js"
}

Website

The website is a static site built from the tooling data. It is exposed by GitHub Pages and can be found here.

The design of the site is intentionally "lean", and provides the tooling list by category (the categorisation being done as described above).

Build

The site uses the eleventy site generator and is rebuilt after each full and metadata build, using the newly-updated data at src/_data/tools.yaml.

To run the site build locally:

yarn install
yarn run build:site

Note the build uses an environment variable HOSTED_AT that allows the site to be deployed to an alternative root URI, amending the "Home" button accordingly. This is for the benefit of GitHub Pages, where the site is deployed at /Tooling. Unless you need to change the URL, just leave this unset.

Running locally

If you want to run the site locally it's just damn simple:

yarn install
yarn run serve

The development server is set to reload on change. Now isn't that convenient.

Contributing

Please refer to the Contributing Guide.

tools.openapis.org's People

Contributors

kinlane avatar mikeralphson avatar philsturgeon avatar priyansh121096 avatar sensiblewood avatar



tools.openapis.org's Issues

Ignore Approach and Implementation

User Story

As a user of tooling-related data I want to ignore anything that appears to be boilerplate code, unmaintained or archived.

As a maintainer of tooling-related data I want to reduce the amount of queries run against projects that appear to be boilerplate code, unmaintained or archived.

Detailed Requirement

Given the "coarse-grained" nature of the data collection approach there is a great deal of opportunity for "dross" to clutter up the tooling dataset. Some examples:

  • Tools removed from sources.
  • Dead repositories with no history or anything particularly interesting about them.
  • Repositories with zero stars that have not changed in eons.
  • Repositories that are tagged but not actually anything to do with OpenAPI.

We therefore need to decide on:

  • The policy for ignoring this stuff.
  • An implementation in the gulp build to sift it out.



General update to README

User Story

As a tooling repository user I want the most up-to-date information possible so I can correctly understand how the site and data collection mechanisms function.

Detailed Requirement

Currently the README is lagging behind the state of the code. It needs a general update that includes:

  • Improve: The build mechanics for data collection.
  • Add: The construct of the site.
  • Add: The build mechanics for the site.
  • Add: Contributing to the repository.

API for OpenAPI tools

User Story

As an architect, I would like to use API instead of HTML pages when I do research.
With queries, I would filter the tools according to my needs.
The API would provide a data source for content creators and other data users (the reason why we do APIs).

Detailed Requirement

Design and publish OAI Tooling API in OpenAPI (preferably) or GraphQL.

Public API would provide:
Filtering

  • categories
  • compliance with standards (OpenAPI 2.0, 3.0, 3.1)
  • full text

Detail of the tool

  • name
  • text description
  • list of functions
  • link to product home page, link to the documentation
  • compliance (yes, no, partial, unknown)
  • licensing (free, commercial)
  • code repository
  • created date
  • last update

Optionally administration API

  • creating new items
  • review new items/changes, etc.

Ensure the repository is always captured correctly

User Story

As a user of tooling data I want the repository to be captured consistently across all sources.

Detailed Requirement

There are a couple of instances where the repository is not captured correctly, for example:

- source: IMPLEMENTATIONS.md
  name: KaiZen OpenAPI Parser
  homepage: https://github.com/RepreZen/KaiZen-OpenAPI-Parser
  language: Java
  curated_description: High-performance Parser, Validator, and Java Object Model for OpenAPI 3.x
  category: Low-Level tooling

This instance is specific to the IMPLEMENTATIONS.md processor - and could be tackled in this array builder - but it would be good to implement it in the main gulp file so it works across all sources.

Speakeasy SDK Generation

Please use this template for tools that cannot be tagged on GitHub. If your tool is on GitHub use the openapi3 and openapi31 tags to allow your data to be collected automatically.

Tool Properties

Please replace all placeholders marked in bold in the bullets below with the requested information. Use plain text for your information.

  • Display name: Speakeasy SDK Creation
  • Description: Enterprise-grade SDKs in 8+ languages: Typescript, Python, Go, Terraform, C#, Java, PHP, Ruby, Unity
  • Repository: If your tool is open source but not on GitHub please provide a link to your repository.
  • Homepage: https://www.speakeasyapi.dev/docs/create-client-sdks

OpenAPI Versions

Please indicate the versions of OpenAPI supported by your tool by marking them true or false below.

  • 3.1: true
  • 3.0: true
  • 2.0: false


Implement a time-based sharding approach to data collection

User Story

As a tooling developer I want data to be collected consistently and without failing due to rate limits applied at any source code repository platform.

Detailed Requirement

GitHub (obviously) applies rate limits on API calls, which we rely on heavily to collect data. As we expand the number of topics we are collecting we need to be cognisant of the limits and amend our approach to spread the collection period over multiple hours.

There's a few approaches:

  1. A simple manual slicing of the workload based on the alphabet (low sophistication, much manual tweaking).
  2. Splitting the build into multiple steps to seed files for later processing (medium sophistication, limited manual tweaking).
  3. Splitting the build as per option 2 and using a dependency mechanism to allow a build to trigger others (high sophistication, largely automated)

Option 3 seems feasible. The most sensible option seems to be:

  • Run a "collection" mechanism to get the superset of repositories we will query for their metadata.
  • Based on the collected data shard the data set into multiple groups, each bound to a given schedule.
  • Write workflow files based on the known rate limits at a given repository platform, target data set and schedule.
  • Allow the builds to run of their own volition.

This approach should scale as we collect more data. The main thing to be aware of is the overall build time limits, although that should be "OK" as we have a fair amount of head room for the time being.

Instructions to add a new tool on the site itself

User Story

As an end-user, I wish to know how to add a tool to the site when I come to it directly, not via the GitHub repo/README

Detailed Requirement

  • At a minimum, a link to the GitHub repo (such as a fork-me ribbon)
  • More comprehensively, a new page on the site with instructions (possibly on a drop-down menu for future expansion, re: API access #60 etc)

Implement throttling for GitHub API access

User Story

As a tooling developer I want calls to the GitHub API to be made at a rate at which I can expect a consistent response.

Detailed Requirement

Both data builds implement a firehose-style approach to accessing the GitHub API i.e. the data sources are collected up and then all requests fire at once, wrapped in a native Promise.all. This appears to result in some occasional "spiky" behaviour from the GitHub API, with either Axios bailing completely or GitHub returning a 403 (Forbidden) or 503 (Service Unavailable).

It's hard to find any documentation on throttling behaviours at the GitHub end, but having some mechanism to "slow down" the rate of consumption is likely to help. Suggest using the Bluebird Promise.map instead and leveraging the concurrency property to only spin up a maximum number of concurrent calls at any one time.
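A plain-Promise equivalent of Bluebird's Promise.map with its concurrency option can be sketched as follows (illustrative only, not the build's actual implementation):

```javascript
// Run fn over items with at most `concurrency` calls in flight at once,
// preserving result order. Mimics Bluebird's Promise.map(items, fn, { concurrency }).
async function mapConcurrent(items, fn, concurrency) {
  const results = new Array(items.length);
  let next = 0;
  async function worker() {
    // Each worker repeatedly claims the next unprocessed index.
    while (next < items.length) {
      const i = next++;
      results[i] = await fn(items[i]);
    }
  }
  const workers = Array.from({ length: Math.min(concurrency, items.length) }, worker);
  await Promise.all(workers);
  return results;
}
```

With concurrency pinned at 2 (the recommended value for the GitHub API elsewhere in this README), the bursts that appear to trigger 403/503 responses should be smoothed out.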

Add Google Analytics

User Story

As the owner of the Tooling website
I want to implement Google Analytics
So I can track visits to the website

Detailed Requirement

Add Google Analytics snippet as per Slack message from Marsh.


"Manually" add tools to the repository

User Story

As a tooling repository user
I want to see all available tools in the repository
So I can make an informed decisions based on the data therein

Detailed Requirement

Currently the repository sweeps existing data sources (Implementations.md, openapi.tools, GitHub tags). We need a means to submit tools outside these sources.

As a first pass we'll use an Issue template and an automated process (OK a bot 🤨︎) to sweep tool request issues, run a DQ process and do PR and merge to main. This will help facilitate the work to do off-main builds as well (which is nice).

Check variables before starting build

User Story

As a system owner I want to ensure my configuration is correct before starting build tasks

Detailed Requirements

Currently if GITHUB_USER and GITHUB_TOKEN are not set the build will not fail until significantly into the build process with a non-obvious error.

Add a simple Gulp task to check that both of these are set before starting the build.
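Such a check could be as simple as the following sketch (a plain function using the variable names from the Environment Variables section; wrapping it in a Gulp task is left out):

```javascript
// Fail fast if required credentials are absent, instead of erroring
// deep inside the build with a non-obvious message.
function checkEnvironment(env = process.env) {
  const required = ['GH_API_USERNAME', 'GH_API_TOKEN'];
  const missing = required.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(', ')}`);
  }
}
```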

Implement workflow

User Story

As a user of tooling data I want updates to source data to be regularly retrieved and applied.

Detailed Requirement

A (GitHub Actions) workflow is required to retrieve source data at regular intervals. Suggest this is executed on a schedule, nominally once per day.


Build strategy for retrieving repository metadata

User Story

As a system owner I want to ensure that metadata is retrieved judiciously and is not wasteful of machine time or resources.

Detailed Requirement

The majority of repositories referenced in the build will change very slowly (if they change at all). Using a "one-in, all-in" full build approach is therefore inappropriate. A more granular approach is required.

The following is suggested:

  • Daily: Update the repository metadata for each repository held in docs/tools.yaml.
  • Weekly: Run the source & merge process to pull in new sources and update the repository metadata.

There is also a fair amount of "noise" in the data. For example:

  • There are many boilerplate repositories that have not changed since they were created and appear abandoned. An "ignore" rule should be implemented for these repositories.
  • Any repositories that return a 403 could be ignored from subsequent runs.

The existing full build therefore needs to be split out as above and then reflected in the GitHub Actions config.

Adding Spring Cloud Gateway for Kubernetes

Tool Properties

  • Display name: Spring Cloud Gateway for Kubernetes
  • Description: Spring Cloud Gateway for Kubernetes provides an implementation of Spring Cloud Gateway, along with integrating other Spring ecosystem projects such as Spring Security, Spring Session, and more. This product includes commercial-only features on top of those open source including OpenAPI auto-generated documentation.
  • Homepage: https://docs.vmware.com/en/VMware-Spring-Cloud-Gateway-for-Kubernetes/index.html

OpenAPI Versions

  • 3.1: false
  • 3.0: true
  • 2.0: true

Search refinements / API

User Story

As a user, I want to quickly find quality and well supported tools that meet my needs and work in my environment. The current categories are broad and the lists contain too many items. I would like to reduce the number of matches using various filtering criteria.

Detailed Requirement

  • Add the following search criteria:
    • popularity (e.g. >= number of 'stars' , watchers, forks). Or possibly derive a popularity score based on multiple properties?
    • last update (e.g. past n-months)
    • supported version (check one or more)
    • operating environment / language
  • Understand this may be challenging to implement with 11ty. Indexing the tools.yaml file using an Apache Solr or Elastic Search server could provide a search API. Another option would be to implement a simple serverless microservice for search and retrieval (e.g. using node.js express)
  • Some of the criteria may not be currently supported by the model

note: suspect this is already on the roadmap....

Allow topics to override primary category

User Story

As a tool developer, I'd like to be able to override the category classification given to my tool. Specifically I'd like https://github.com/mnahkies/openapi-code-generator to be labelled as a "Code Generator" rather than a "Parser"

Context

Currently the category is assigned using https://www.npmjs.com/package/bayes which essentially uses the frequency of tokens in a provided text against the frequency of tokens in already classified text to assign a class.

However, because the current category/class distributions are pretty uneven (>30% are assigned to "Parsers") it seems to have ended up overly biasing assignment to "Parsers". For example, Redoc is assigned "User Interfaces" and "Parsers", but not "Documentation"

And these are all assigned to "Parsers" as well:

  • OpenAPI Server Code Generator (oapi-codegen)
  • OpenAPI Mocker
  • docs
  • php-openapi-faker
  • ...

Rather than "Code Generator" / "Mock" / "Documentation" / "Testing Tools"

I'm not sure if this is inherent to the classification approach / problem space (eg: is the written language used for different types of tool lacking enough distinguishing tokens to give a good signal), or a negative feedback loop from the existing classifications, but either way I think it would be good to have a way to override this behavior.

I'm hopeful that introducing this would over time improve the accuracy of the classification using bayes as a result of the accurate manually labelled data.

Detailed Requirement

Propose adding a way to manually label a primary category for a tool. I see two main options:

  • Field on the tools.yaml entries like manualCategoryOverride
  • Looking for new topics on the source entries like the existing openapi3 / openapi31 ones that indicate the primary category

I see the primary benefit of the first option being that it gives control of curation to the maintainers of this repository, whilst the second option allows tool writers to self-serve. It's possible that both might be desirable, especially to account for entries that aren't scraped from GitHub (though I guess their categories are essentially manually configured already).

I think some amount of rationalization (eg: Testing vs Testing Tools) of the existing categories may be useful as well, and potentially adding a description of each category explaining what is in/out of scope for it.
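The first option might look like this as a tools.yaml entry (the field name is the one proposed above; the entry itself is illustrative):

```yaml
# Hypothetical entry using the proposed override field
- name: openapi-code-generator
  github: https://github.com/mnahkies/openapi-code-generator
  category: Parsers                       # assigned by the classifier
  manualCategoryOverride: Code Generators # manual label, takes precedence
```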

Improve tool modal pop-up

User Story

As a tooling site user I want to glean as much possible information when I select a tool from the categories pages.

Detailed Requirement

The modal pop-up only contains the barebones information atm. It should be updated to include:

  • Source repository platform (GitHub, Gitlab, etc.)
  • Avatar for repository owner
  • Pleasantly formatted dates
  • Excerpt from README, formatted from original Markdown
  • Links to the tool homepage/repository (for ease-of-reference)

Add Traefik Hub

Tool Properties

OpenAPI Versions

  • 3.1: false
  • 3.0: true
  • 2.0: true

Note

We've added the tags to Traefik Hub based on the "how can you help" section in the readme. However, it ended up in the "Server implementations" category, but it'd be a better fit in the "Gateway" category. Also, the metadata is incorrect (name, homepage link, 2.0 support), so I think it's better to add it through an issue now.


Fix pop-up and improve look-and-feel

User Story

As a tooling owner
I want my tools to be available on a fully-functioning website
Which is great to look at
So that users of the site have a great user experience

Detailed Requirement

The site currently has - and has had for some time - a bug that is stopping pop-ups from working correctly. This needs to be resolved.

Then the general look-and-feel needs to be revamped to make it easier to read and navigate.

Implement Bayesian analysis for categorisation

User Story

As a tooling data user I want the data to be categorised to provide easy-to-use references for different tooling types.

Detailed Requirement

In the APIs.guru repository Mike implemented Bayesian analysis to help with data categorisation. This should be implemented in this repository to provide equivalent functionality.
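To illustrate the idea, here is a minimal standalone naive Bayes sketch (the project reportedly uses the npm "bayes" package; this version is illustrative only and omits class priors for brevity):

```javascript
// Train on labelled example texts, then pick the category whose token
// log-likelihood (with Laplace smoothing) best matches the input text.
function trainAndClassify(examples, text) {
  const counts = {}; // category -> token -> count
  const totals = {}; // category -> total token count
  const vocab = new Set();
  const tokenize = (s) => s.toLowerCase().split(/\W+/).filter(Boolean);

  for (const { text: doc, category } of examples) {
    counts[category] = counts[category] || {};
    totals[category] = totals[category] || 0;
    for (const tok of tokenize(doc)) {
      counts[category][tok] = (counts[category][tok] || 0) + 1;
      totals[category] += 1;
      vocab.add(tok);
    }
  }

  let best = null;
  let bestScore = -Infinity;
  for (const category of Object.keys(counts)) {
    let score = 0; // sum of smoothed log-probabilities
    for (const tok of tokenize(text)) {
      const c = counts[category][tok] || 0;
      score += Math.log((c + 1) / (totals[category] + vocab.size));
    }
    if (score > bestScore) { bestScore = score; best = category; }
  }
  return best;
}
```

The issue "Allow topics to override primary category" above discusses the known weaknesses of this approach when the training classes are unevenly distributed.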

Ensure all tools are uniquely identified

User Story

As a tooling user I want all available tools to be uniquely identified for ease of understanding and referencing

Detailed Requirement

At the moment a tool name isn't consistently present, as not all sources (e.g. data from GitHub) provide one.

In order to always provide a reference we'll use the URL instead - GitHub for open source, the homepage for commercial tools - and then hash that baby to create a unique and consistent reference.


Error while running yarn install

Describe the bug
When I run yarn install I get an error

To Reproduce
Steps to reproduce the behavior:

  1. Start with macOS Monterey v12.6.6, 1.6 GHz Dual-Core Intel Core i5
  2. run brew install yarn (successfully installs yarn version 1.22.19)
  3. clone this repo and cd into it, run yarn install
  4. See error
[1/4] 🔍  Resolving packages...
[2/4] 🚚  Fetching packages...
[3/4] 🔗  Linking dependencies...
[4/4] 🔨  Building fresh packages...
warning Error running install script for optional dependency: "/Users/justinblack/programming/tooling/node_modules/glob-watcher/node_modules/fsevents: Command failed.
Exit code: 1
Command: node install.js
Arguments: 
Directory: /Users/justinblack/programming/tooling/node_modules/glob-watcher/node_modules/fsevents
Output:
node:events:492
      throw er; // Unhandled 'error' event
      ^

Error: spawn node-gyp ENOENT
    at ChildProcess._handle.onexit (node:internal/child_process:286:19)
    at onErrorNT (node:internal/child_process:484:16)
    at process.processTicksAndRejections (node:internal/process/task_queues:82:21)
Emitted 'error' event on ChildProcess instance at:
    at ChildProcess._handle.onexit (node:internal/child_process:292:12)
    at onErrorNT (node:internal/child_process:484:16)
    at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
  errno: -2,
  code: 'ENOENT',
  syscall: 'spawn node-gyp',
  path: 'node-gyp',
  spawnargs: [ 'rebuild' ]
}

Node.js v20.5.0"
info This module is OPTIONAL, you can safely ignore this error
✨  Done in 47.66s.

Expected behavior
No errors

Screenshots
N/A

Desktop (please complete the following information):

  • OS: macOS Monterey v12.6.6
  • Browser: N/A
  • Version: macOS Monterey v12.6.6

Additional context
I see that the module is optional, but why is there an error?
Running build instructions before making a PR on my branch: https://github.com/spacether/Tooling/tree/updates_openapi_json_schema_generator

Phuoc Nguyen . Hello World

Please use this template for tools that cannot be tagged on GitHub. If your tool is on GitHub use the openapi3 and openapi31 tags to allow your data to be collected automatically.

Tool Properties

Please replace all placeholders marked in bold in the bullets below with the requested information. Use plain text for your information.

  • Display name: Enter the display name for the tool in the repository
  • Description: Provide a description of your tool. This will provide users with valuable information about your tool and be used as a means to classify your tool correctly.
  • Repository: If your tool is open source but not on GitHub, please provide a link to your repository.
  • Homepage: Provide a link to your homepage. If you have provided a repository you can leave this blank if you wish.

OpenAPI Versions

Please indicate the versions of OpenAPI supported by your tool by marking them true or false below.

  • 3.1: false
  • 3.0: false
  • 2.0: false

Tool in more categories

User Story

As a user, I select a category, pick one of the tools, and display its detail. I cannot see whether the tool also belongs to other categories; I would need to browse other categories to check (but which ones?). For instance, Postman could fall into Testing, Editor, Documentation, Mock, and maybe some others. Knowing which categories a tool covers would be useful to me.

As a maintainer, I want to enter a tool and select all applicable categories.

Detailed Requirement

When entering the product, the maintainer selects all applicable categories.
When a user opens the tool detail, they will see all of its categories in one place.
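The requirement above can be sketched in plain JavaScript: a tool record carries an array of categories, and a single index serves both the per-category listings and the "all categories in one place" view on the tool detail page. The `categories` field and tool shape are illustrative, not the repository's actual schema.

```javascript
// Hypothetical tool records with multi-category support.
const tools = [
  { name: 'Postman', categories: ['Testing', 'Editor', 'Documentation', 'Mock'] },
  { name: 'Spectral', categories: ['Linters'] },
];

// Build a category -> tool-names index so every category page lists the
// tool, while the detail page can simply render tool.categories.
function indexByCategory(tools) {
  const index = new Map();
  for (const tool of tools) {
    for (const category of tool.categories) {
      if (!index.has(category)) index.set(category, []);
      index.get(category).push(tool.name);
    }
  }
  return index;
}

const index = indexByCategory(tools);
console.log(index.get('Testing')); // [ 'Postman' ]
```

With this shape, Postman appears under all four of its categories without duplicating the underlying record.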

Implement web UI for tooling data

User Story

As a tooling data user I want to be able to browse and discover the available tooling data.

Detailed Requirement

This is going to be light on detail... but this simply covers cutting in the first drop of the web interface for the tooling data.

Rough overview:

  • Refactor structure to move tools.yaml away from docs to src/_data
  • Introduce eleventy, tailwindcss, etc packages.
  • Create structure for eleventy assets.
  • Create Actions support for cutting new website once tools.yaml has been updated.

Adding API portal

Tool Properties

  • Display name: API portal
  • Description: API portal enables API consumers to find APIs they can use in their applications by assembling a dashboard and detailed API documentation views from OpenAPI documentation ingested at the source URLs. It also allows for creating, managing, and consuming API keys.
  • Homepage: https://docs.vmware.com/en/API-portal-for-VMware-Tanzu/index.html

OpenAPI Versions

  • 3.1: false
  • 3.0: true
  • 2.0: true

Implement notifications on build for Discord

User Story

As a tooling user I want to see when new tools are added to the repository on the #tooling Discord channel.

Detailed Requirement

As part of marketing activity it'd be great to get some outbound notifications when new tools are added to the repository.

To that end our new Discord environment seems like a good place to start:

  • Add statistics generation to the build and a diff on old/new docs/tools.yaml.
  • Use stats & new tools to drive messaging.

This is most likely best done as a "summary", possibly with the statistics checked back into the repository to drive a changelog.
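The diff step described above could look something like the following sketch: compare the previous and current tool lists (as would be parsed from docs/tools.yaml) and build a summary message suitable for posting to a Discord webhook. The function name and record shape are illustrative only.

```javascript
// Compare old and new tool lists and produce a human-readable summary,
// or null when there is nothing to announce.
function summariseNewTools(oldTools, newTools) {
  const known = new Set(oldTools.map((t) => t.name));
  const added = newTools.filter((t) => !known.has(t.name));
  if (added.length === 0) return null; // no message, no Discord post
  return `${added.length} new tool(s) added: ${added.map((t) => t.name).join(', ')}`;
}

const previous = [{ name: 'Spectral' }];
const current = [{ name: 'Spectral' }, { name: 'Redocly CLI' }];
console.log(summariseNewTools(previous, current));
// 1 new tool(s) added: Redocly CLI
```

The same diff output could be checked back into the repository to drive the changelog mentioned above.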

Update openapi json schema generator

User Story

Update openapi json schema generator

Detailed Requirement

Now fill in the blanks:

  • How would this work?
    3.1 document processing now works, this needs to be added
  • What's the approach to implementing it?
  • Does it reference any existing issues you know about? (include a link please)
  • Any caveats, considerations or quid pro quos?

Data build is failing with 403 returned by GitHub API

Describe the bug

The build currently fails due to a new restriction at the GitHub API.

A 403 is returned where it should not be, i.e. the permissions applied by GitHub should not result in a 403 being returned.

This behaviour was experienced in the past when there were too many active connections to the GitHub API; the connection count has previously been tuned down to 2 using the environment variables in the project.

To Reproduce

Steps to reproduce the behaviour:

Run either yarn run build:data:metadata or yarn run build:data:full via the GitHub Actions workflow.

Expected behaviour

The build should run to completion.

Additional context

This needs fixing to get the build up-and-running again, which will address #96 and #92 raised by @spacether as the JSON Schema Generator repository is correctly tagged using the openapi31 topic.

Actions:

  • Investigate whether the 403 is deterministic, i.e. always happening on a specific repository with specific permissions, or whether the GitHub API's behaviour has genuinely changed.
  • Ideally, fix the build so it runs consistently within GitHub API parameters.
  • If that is not possible, amend the build to continue to completion rather than erroring on a 403, and work out a prioritisation mechanism for the next run to ensure repositories that were missed get swept up.
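The "continue past 403s" behaviour in the last bullet could be sketched as a small per-repository handler: a 403 (or 429) skips the repository and queues it for priority on the next run, while any other unexpected status still fails the build. Names and shapes here are hypothetical, not the project's actual build code.

```javascript
// Decide what to do with a GitHub API response for one repository.
// 403/429 are treated as rate limiting: skip now, retry next run.
function handleApiResult(repo, status, skipped) {
  if (status === 200) return 'processed';
  if (status === 403 || status === 429) {
    skipped.push(repo); // queue for prioritised sweep on the next run
    return 'skipped';
  }
  throw new Error(`Unexpected status ${status} for ${repo}`);
}

const skipped = [];
console.log(handleApiResult('OAI/Tooling', 200, skipped)); // processed
console.log(handleApiResult('example/repo', 403, skipped)); // skipped
console.log(skipped); // [ 'example/repo' ]
```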

Implement merge process between published `tools.yaml` and source data

User Story

As a consumer of tooling data I want to see updates to metadata merged into the master list when they are updated in source data.

Detailed Requirement

The current implementation of the build process is only an initial build that grabs all data from the in-scope repositories, merges and normalises it based on some simple analytics, and then stores it in the docs directory. Whilst this is fine as a repeatable process, it doesn't make for long-term state management of the Tooling repository, i.e.:

  • If a tool disappears there's no archive.
  • The data quality cannot be reviewed and improved over time.
  • The hit on the GitHub API is such that the rate limits are quickly exceeded.

We therefore need to implement a merge process that mines the source data as now and applies any updates, but then only selectively hits the GitHub API (or other repository providers, when implemented) to update the statistics on the tools.

A suggested design approach to investigate is using the cache control directives available on GitHub and seeing whether we can only selectively hit the API when new metadata is available.
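The suggested cache-control approach could work roughly as follows: remember the ETag returned for each repository URL, send it back as `If-None-Match` on the next run, and treat a 304 as "metadata unchanged, reuse the cached record" (GitHub's documentation states that conditional requests answered with 304 do not count against the rate limit). This is a minimal sketch under those assumptions, not the project's implementation.

```javascript
// Build conditional-request headers from a simple url -> etag cache.
function conditionalHeaders(cache, url) {
  const etag = cache[url];
  return etag ? { 'If-None-Match': etag } : {};
}

// Interpret the response: 304 means reuse the cached body; otherwise
// store the new ETag so the next run can be conditional too.
function applyResponse(cache, url, status, etag, body, cachedBody) {
  if (status === 304) return cachedBody; // unchanged, rate limit untouched
  cache[url] = etag;
  return body;
}

const cache = { 'https://api.github.com/repos/OAI/Tooling': '"abc123"' };
console.log(conditionalHeaders(cache, 'https://api.github.com/repos/OAI/Tooling'));
// { 'If-None-Match': '"abc123"' }
```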

Jawaker tokens generator

> Please use this template for tools that cannot be tagged on GitHub. If your tool is on GitHub use the openapi3 and openapi31 tags to allow your data to be collected automatically.

Tool Properties

Please replace all placeholders marked in bold in the bullets below with the requested information. Use plain text for your information.

  • Display name: Enter the display name for the tool in the repository
  • Description: Provide a description of your tool. This will provide users with valuable information about your tool and be used as a means to classify your tool correctly.
  • Repository: If your tool is open source but not on GitHub, please provide a link to your repository.
  • Homepage: Provide a link to your homepage. If you have provided a repository you can leave this blank if you wish.

OpenAPI Versions

Please indicate the versions of OpenAPI supported by your tool by marking them true or false below.

  • 3.1: false
  • 3.0: false
  • 2.0: false

Category Clean-up: Parsers, Schema Validators, Validator

Looking at the page I see categories for "Parsers, Schema Validators, Validator" and at least the lonely "Validator" looks like it should be a "Schema Validator". I am wondering about the difference between Parsers and Schema Validators. And I am also wondering how tooling for linting and diffing would fit into that.

Maybe it would be helpful to add a brief description to each category?

Register a tool in the repository at a given version of OpenAPI

User Story

As a tooling maker I want to register my tool's support for a given version of OpenAPI.

Detailed Requirement

The library gulpfile.js/lib/processors/awesome-openapi3-processor.js implements the method APIs.guru developed for collecting tooling through GitHub tags. This mechanism has legs as it is:

  • Sustainable.
  • Self-supporting.
  • Extensible.

The proposal is to extend this so tooling makers can attest to a given version. We can dedup across versions in the build process, i.e. the build may see a repo tagged several times, but the metadata will only be pulled once.

Proposed values (for discussion):

  • swagger.
  • openapi2.
  • openapi3 (currently supported tag).
  • openapi30.
  • openapi31.

This should allow the publication/ingestion mechanism to remain fairly straightforward and automated. The approach of course needs to be socialised with the community in an effort to get traction on tagging across repositories and repository providers.
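The dedup step mentioned above could be sketched as collating topic hits by repository, folding each topic tag into a per-version support flag so a repo tagged with several topics yields one record. Topic-to-version names are illustrative.

```javascript
// Map proposed GitHub topics to version flags (illustrative values).
const topicToVersion = {
  swagger: 'v2', openapi2: 'v2', openapi3: 'v3', openapi30: 'v3_0', openapi31: 'v3_1',
};

// Collate topic search hits so each repository appears once, with the
// union of all versions its tags attest to.
function collate(hits) {
  const byRepo = new Map();
  for (const { repo, topic } of hits) {
    if (!byRepo.has(repo)) byRepo.set(repo, { repo, versions: new Set() });
    byRepo.get(repo).versions.add(topicToVersion[topic]);
  }
  return byRepo;
}

const hits = [
  { repo: 'example/parser', topic: 'openapi30' },
  { repo: 'example/parser', topic: 'openapi31' }, // same repo, second tag
  { repo: 'example/editor', topic: 'swagger' },
];
const merged = collate(hits);
console.log([...merged.get('example/parser').versions]); // [ 'v3_0', 'v3_1' ]
```

The metadata fetch then runs once per entry in the collated map, however many topics matched.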

Clarification (simplification) of tool categories

User Story

As a user, I need to orient myself in the tool categories quickly. Currently, some categories are synonymous. Some categories are fine-grained (such as data validators), while others are coarse (Server, Security, Testing).

As a tool provider, I want to know which category to pick or which to check if my product is already listed.

Detailed Requirement

I would suggest setting categories according to API lifecycle phases plus Security and Learning, which relate to all of them:

  • Design
  • Mocking / Virtualization
  • Implementation
  • Testing
  • Documentation
  • Deployment / Runtime
  • Security
  • Learning

Maybe "Use Case Area" (or a more fitting term) would describe the categories best.

Gateway category not included in site deployment

Describe the bug

The Gateway category returns 404 when clicked on.

To Reproduce

Steps to reproduce the behaviour:

  1. Locate the Gateway category on the homepage.
  2. Click on the category and it is missing in action (404).

Additional context

Need to add `git add docs/categories` to the GitHub Actions workflows for the metadata and full builds, to sweep up newly discovered categories.
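A possible shape for that workflow step, assuming the build commits its outputs back to the repository (the step name and commit message are illustrative, and the real workflow file may differ):

```yaml
- name: Commit updated data and categories
  run: |
    git add docs/tools.yaml
    git add docs/categories   # sweep up newly discovered category pages
    git commit -m "Update tooling data" || echo "Nothing to commit"
    git push
```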

Add environment variable validation to `validate-metadata.js`

User Story

As a developer of the Tooling project I want to ensure that environment variables are validated before the rest of the build executes.

Detailed Requirement

When a build kicks off, the first step is to validate that environment variables are present, which is executed by validate-metadata.js. However, any validation of the values themselves is performed downstream.

To make processing more efficient this should be moved upstream so that initialisation and type checking can be performed before any further steps.
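The upstream check could look like the following sketch: declare each expected variable with a type and whether it is required, fail fast on anything missing or malformed, and hand back coerced values. The variable names and spec format are hypothetical, not the actual contents of validate-metadata.js.

```javascript
// Validate presence and type of environment variables before the build runs.
function validateEnv(env, spec) {
  const values = {};
  for (const [name, { type, required }] of Object.entries(spec)) {
    const raw = env[name];
    if (raw === undefined) {
      if (required) throw new Error(`Missing required environment variable: ${name}`);
      continue; // optional and absent: fine
    }
    if (type === 'integer') {
      const parsed = Number.parseInt(raw, 10);
      if (Number.isNaN(parsed)) throw new Error(`${name} must be an integer, got "${raw}"`);
      values[name] = parsed;
    } else {
      values[name] = raw;
    }
  }
  return values;
}

// Hypothetical spec: a token plus the connection limit mentioned above.
const spec = {
  GITHUB_TOKEN: { type: 'string', required: true },
  MAX_CONNECTIONS: { type: 'integer', required: false },
};
const values = validateEnv({ GITHUB_TOKEN: 'x', MAX_CONNECTIONS: '2' }, spec);
console.log(values.MAX_CONNECTIONS); // 2
```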

Paypal

Please use this template for tools that cannot be tagged on GitHub. If your tool is on GitHub use the openapi3 and openapi31 tags to allow your data to be collected automatically.

Tool Properties

Please replace all placeholders marked in bold in the bullets below with the requested information. Use plain text for your information.

  • Display name: Enter the display name for the tool in the repository
  • Description: Provide a description of your tool. This will provide users with valuable information about your tool and be used as a means to classify your tool correctly.
  • Repository: If your tool is open source but not on GitHub, please provide a link to your repository.
  • Homepage: Provide a link to your homepage. If you have provided a repository you can leave this blank if you wish.

OpenAPI Versions

Please indicate the versions of OpenAPI supported by your tool by marking them true or false below.

  • 3.1: false
  • 3.0: false
  • 2.0: false

Investigate source code repository providers and package registries for potential tooling sources

User Story

As a user of tooling data I want to gather tooling data from all significant source code repository providers.

Detailed Requirement

The current data sources tend to focus, not necessarily intentionally, on GitHub. It would be great to expand the search to take in, for example, GitLab, Bitbucket, npm, etc. to gather new tools, data sources, and additional metrics (note the APIs.guru openapi3 source already tapped into some of these).

Research and analysis therefore needs to be undertaken to work out:

  • What sources are readily available.
  • How to normalise them across what we already have.
  • What opportunities there are to tempt tooling providers to publish their tools (through tagging etc.) so we can consume.

This should all be collated in a Google Doc (or similar) so the work can then be broken down into issues.

Data fixes for duplicate data

Describe the bug

There are duplicates on the website under the Parsers section.

To Reproduce

Expected behavior

Should be no duplicates.

Additional context

  • Need to revisit data collection approach and see how such entries should be merged.
  • Script and run a data fix to merge.
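A data-fix script along the lines of the second bullet could merge entries whose names normalise to the same key, keeping the union of their fields. The normalisation rule, record shape, and URLs below are illustrative assumptions.

```javascript
// Normalise a tool name so near-duplicates collapse to one key.
function normalise(name) {
  return name.toLowerCase().replace(/[^a-z0-9]/g, '');
}

// Merge duplicate entries: later entries fill gaps in the first-seen
// record but do not overwrite fields that are already populated.
function mergeDuplicates(tools) {
  const merged = new Map();
  for (const tool of tools) {
    const key = normalise(tool.name);
    const existing = merged.get(key) || {};
    merged.set(key, { ...tool, ...existing });
  }
  return [...merged.values()];
}

const tools = [
  { name: 'Swagger-Parser', repository: 'https://github.com/example/parser' },
  { name: 'swagger parser', homepage: 'https://example.com' }, // duplicate
];
const result = mergeDuplicates(tools);
console.log(result.length); // 1
```

Whether to keep the first-seen name or the most popular variant is exactly the kind of policy the data-collection revisit in the first bullet would need to settle.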
