
elixir-cloud-aai / foca


Opinionated Flask microservice archetype for quick OpenAPI-based microservice development

License: Apache License 2.0

Languages: Python (99.35%), Dockerfile (0.65%)

Topics: api, flask, mongodb, openapi, openapi3, rest-api, swagger

foca's Introduction

ELIXIR Cloud & AAI


Welcome to the GitHub presence of the ELIXIR Cloud and Authentication & Authorization Infrastructure (AAI) project.


Who we are


  • We are a Driver Project of the Global Alliance for Genomics and Health (GA4GH), an international organization developing policies and technical standards to enable the responsible sharing of sensitive data across international boundaries. As such, the majority of our work is directly related to either the implementation or the further development/framing of these policies and standards, in particular those handled by the following GA4GH Work Streams:

  • We are also a subgroup of ELIXIR, a multinational Europe-based initiative that unites life science laboratories and organizations to establish a common infrastructure that supports and integrates scalable, sustainable bioinformatics and data analysis services for member states and beyond. Within the ELIXIR network, we are responsible for leveraging a common cloud computing infrastructure in line with international community standards. ELIXIR is a strategic partner of the Global Alliance for Genomics and Health.

Our mission

ELIXIR Cloud & AAI develops services towards establishing a federated cloud computing network that enables the analysis of population-scale genomic and phenotypic data across participating, international nodes.

Our solutions

This section is still in an early stage - check back soon!

[Figure: schematic overview of the ELIXIR Cloud & AAI services]

Note: Implementations & services shown here are for reference only and include both currently unavailable (and possibly unplanned) implementations and ones developed by independent organizations. We neither endorse, nor are we endorsed by, any external organization.

Our audience

Our solutions will have benefits for multiple stakeholders in the handling and analysis of personalized health and other big data. Key benefits for each target audience are listed below.

Click on the chat button at the top of the page to get in touch with us and discuss how you can be among the first to make use of our products!

Note that the listed points reflect our vision for the years 2025 and beyond. Moreover, for several of them we will require help from other GA4GH work streams and the corresponding implementers. See the section on FAIRness for more info on that. Also have a look at GA4GH's strategic roadmap.

Data analysts, experimentalists & healthcare practitioners

  • Analyze your data in the cloud - no need to install anything or buy and maintain expensive IT infrastructure!
  • Bring your own data or analyze available data sets - safely and securely!
  • Select from a wide range of available workflows - or just use your own! Or perhaps you don't deal with workflows but are looking for a solution to run individual compute jobs on cloud infrastructure? Sure, that's possible, too!
  • Reproduce your analysis with just a few button clicks to increase your confidence - or why not reproduce other people's analysis to build on top of it?
  • Tired of collecting metadata about your data and analyses? Our products help in digitizing and, to some extent, automating a lot of this work!

Workflow engine developers

  • You would like to write a new workflow engine but are scared of having to implement compute backends for a wide array of diverse IT infrastructure solutions? Or you already wrote one but have a hard time maintaining your compute backends and keeping up with the technologies? Our tools allow you to focus on writing the code that interprets your workflows, generates your DAGs and schedules execution - on (almost) any backend! Talk to us about implementing a TES client for your product.
  • You would like to increase your user base and make it easy for people to run workflows in your language? Talk to us about implementing a WES shim around your new or existing engine.

IT infrastructure specialists & system administrators

  • You are developing IT compute infrastructure solutions and would like to increase their adoption? Talk to us about implementing a TES shim around your products and allow them to be hooked up to the federated compute networks that we help to build - and which we project will handle a lot of the big data analysis in the personalized medicine sector and beyond!
  • You are managing a compute cluster or data center at a university, hospital, research center or in a company? Talk to us about implementing or deploying a TES or DRS instance to add your nodes or data to the network. Consumers of our services will be able to access them without hassle!

FAIR infrastructure

Section coming soon!

Supported projects

We provide services and technical support for the following projects and initiatives, which in turn test our products and drive future development. Check out the links for more details:

Collaborators

Apart from the GA4GH Cloud community as a whole, we are working together closely with the following projects that develop similar services:

Contact


Also see the list of individual members to see some actual people involved in this project, including contact information.

foca's People

Contributors

alohamora, anuragxxd, dependabot[bot], kushagra189, lakig, lvarin, rahuljagwani, sarthakgupta072, stikos, uniqueg, vipulchhabra99


foca's Issues

Check for content types accepted by client

API endpoints can serve different content types, e.g., application/json, text/plain etc. Clients can ask a service to respond with one or more specific accepted content types.

It would be nice if FOCA implemented a solution (e.g., a decorator) that compares the client's desired content type(s) against the ones that the service offers for the respective endpoint (as defined in the API specs). An appropriate error response should be returned if a client requests a content type that the endpoint cannot deliver. If the client does not explicitly ask for a specific content type (i.e., if the Accept: <content-type> header is missing), the default response should be given.
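A minimal sketch of what such a decorator could look like, assuming plain Flask; in FOCA, the `offered` list would be derived from the endpoint's definition in the parsed API specs rather than passed by hand:

from functools import wraps

from flask import request


def validates_content_types(offered):
    """Reject requests whose Accept header matches none of the content
    types the endpoint offers; requests without an Accept header get the
    default response."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            # no Accept header sent: serve the default content type
            if not request.accept_mimetypes:
                return func(*args, **kwargs)
            # werkzeug's MIMEAccept resolves wildcards such as */*
            if request.accept_mimetypes.best_match(offered) is None:
                return {"msg": "Not Acceptable"}, 406
            return func(*args, **kwargs)
        return wrapper
    return decorator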

Other important points to think about:

Add API reference

Many packages offer an API reference for users to browse the modules, classes and functions of the package. This should be auto-generated from the code, making use of the typing definitions and docstrings. It should also be auto-updated upon merges into the default branch dev (probably by updating .travis.yml). Ideally, the API reference should be hosted inside this repo; if this is inconvenient, it could alternatively be hosted elsewhere, e.g., at Read the Docs. A link to the API reference should be added to the main docs (README.md).

Implement a new FOCA class

This issue aims to implement a main FOCA() class that will be called by end users. It would take a specs parameter (of type Dict[str, OpenAPIConfig], to correspond with issue #40) as well as any other config options for database registration, Celery, custom errors and handlers, etc., and would return the configured, ready-to-go app object.

Expected Behaviour

In practice, the user (i.e., the person who sets up a microservice based on FOCA) will write their app config file in YAML, parse it into a dictionary, and pass it to the FOCA() constructor, as briefly described at the end of issue #40. FOCA will then validate that all the sections it requires adhere to the schema defined in the config class, and raise an appropriate error otherwise. The user will of course want to add params that are relevant only to their particular service; they can go ahead and do so, and these will be added to app.config as well, inside the class, as a "free" untyped dictionary field, without any further validation (which isn't possible anyway, because we don't know what params the user wants to add). Instead, the user is responsible for validating those custom, non-essential params and for coming up with an error-handling strategy for them. So the policy is: if you want to configure FOCA, you have to stick with our vocabulary; beyond that, any config handling is up to you. The config file for a FOCA-powered app would then look something like this:

foca:  # this is the only reserved keyword, and everything under here will be validated before adding to app.config
  database:
    # any_database_params
  specs:
    # any_spec_params, see #40 for examples
  errors:
    # any_custom_errors_and_params (let's see how we can generalize that)
  celery:
    # any_celery_params
  log:
    # any_log_params
  security:
    # any_security_params
  server:
    # any_general_Flask_Connexion_Gunicorn_params (e.g., host, port...)


# everything below is service-specific and will be added to the config class and app.config without validation


service_specific_section_1:  # will be added to app.config without validation
  param_1: my_param_1
  param_2: my_param_2
service_specific_section_2:
  param_1: my_param_1
  param_2: my_param_2
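A minimal sketch of how the reserved foca section could be split off before validation; the helper name and the validation hook are illustrative:

import yaml


def load_app_config(path):
    """Split the reserved 'foca' section (to be validated against the
    config schema) from the service-specific remainder (added to
    app.config as-is, without validation)."""
    with open(path) as config_file:
        config = yaml.safe_load(config_file) or {}
    foca_section = config.pop('foca', {})
    # validation of foca_section against the FOCA config schema goes here
    return foca_section, config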

Inventory of common code

Make an inventory of modules, classes & functions that are common between various services (cwl-WES, proTES and proWES, and to a lesser extent TEStribute).

Add list items here:
  • - api.register_openapi
  • - config.config_parser
  • - config.log_config
  • - database.db_utils
  • - errors.errors
  • - factories.celery_app
  • - factories.connexion_app
  • - security.cors
  • - security.decorators
  • - Dockerfile

Note for Contributors: While adding the common modules, please write unit tests along with them; let's shoot for 100% coverage.

Add OpenAPI registration module

Problem

Users should have the ability to

  • register multiple OpenAPI specs passed as absolute paths (relative paths, as currently defined in the corresponding cwl-WES, proTES and proWES modules, won't work easily because the anchor point relative to which relative paths are interpreted would be inside the app's directory tree and NOT in that of the FOCA package)
  • provide OpenAPI specs of version 3 or version 2; providing specs of both versions for the same service should be supported
  • provide OpenAPI specs in either YAML or JSON formats; as connexion requires YAML format, specs provided in JSON format will need to be dumped in YAML format
  • add one or more fields to the root of the specification or overwrite an existing specs section, e.g., to allow the addition (or replacement) of a security (OpenAPI 3) or securityDefinitions (OpenAPI 2) field
  • add fields to the Operation Object of each Path Info Object in the required paths field (or overwrite them), which is required for both OpenAPI 3 and OpenAPI 2; this is mostly to support the addition (or replacement) of the x-swagger-router-controller field
  • provide output file paths for each modified spec file (ignore if no modifications are done, i.e., if a YAML file is provided and no fields are added)

Proposed solution

  • Nothing particular has to be done to support providing both OpenAPI 2 and 3 specs, as connexion already supports that since version v2.0.0

  • For the specs format, each spec should first be loaded with yaml.safe_load(); if that fails, json.load() should be tried; if that fails as well (i.e., both the YAML and JSON loaders fail), an error InvalidSpecs (needs to be defined in the errors.py module) should be raised.
    It should look something like this:

    import json
    
    import yaml
    
    from foca.errors.errors import InvalidSpecs
    
    try:
        specs = yaml.safe_load(specs_file)
    except yaml.parser.ParserError:
        try:
            specs = json.load(specs_file)
        except json.decoder.JSONDecodeError:
            raise InvalidSpecs

    At this point, if no error is raised, we will now have a dict() with the OpenAPI contents.

  • For adding content to the root of the specs (or replacing a section), just loop over the list of dictionaries to be added and update the specs dictionary like so:

    # example values:
    # specs = {'something': 'something'}
    # append = [{'other_thing': 'other_thing'}, {'yet_another_thing': {'a': 1, 'b': 2}}]
    for item in append:
        specs.update(item)
  • For adding fields to all Operation Objects (or replacing them, if they already exist), one could do something like:

    # example values:
    # field = 'x-swagger-router-controller'
    # value = 'controllers.ga4gh.wes'
    try:
        for path_item_object in specs['paths'].values():
            for operation_object in path_item_object.values():
                operation_object[field] = value
    except (AttributeError, KeyError):
        raise InvalidSpecs  
  • Finally, if specs have been modified or were supplied as JSON (which connexion cannot process), the specs dictionary has to be dumped back to a file with yaml.safe_dump(specs, output_file). If no output file path is supplied for a modified spec file, the file extension of the original input file (e.g., /path/to/my/specs.json) should be stripped (yielding, e.g., /path/to/my/specs) and a suffix with a .yaml extension appended. The suffix should be parameterized, but not exposed to the user; use .modified.yaml as the default, i.e., the previous example would yield /path/to/my/specs.modified.yaml. If an output file path is specified, the contents of specs should of course be dumped there instead.
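    The default output path derivation could then look like this minimal sketch (the suffix being the internal parameter mentioned above):

    from pathlib import Path


    def default_out_path(in_path, suffix='.modified.yaml'):
        """Strip the original extension and append the suffix, e.g.,
        /path/to/my/specs.json -> /path/to/my/specs.modified.yaml."""
        return str(Path(in_path).with_suffix('')) + suffix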

  • To simplify usage and allow validation of the parameters that need to be passed, a class could be defined for the config params for the spec files (i.e., the output file path, the stuff to be added to the root and to the Operation Objects); a call to its constructor could look something like this:

    my_specs_config = OpenAPIConfig(
        out_file='/path/to/my/modified/specs.yaml',
        append=[
            {
                'security': {
                    'jwt': {
                        'type': 'apiKey',
                        'name': 'Authorization',
                        'in': 'header',
                    },
                },
            },
            {
                'my_other_root_field': 'some_value',
            },
        ],
        add_operation_fields={
            'x-swagger-router-controller': 'controllers.ga4gh.wes',
            'some-other-custom-field': 'some_value',
        },
    )

    All fields should be optional, so that OpenAPIConfig() can be called without any parameters.
    The register_openapi() function could then be called with something like this:

    register_openapi({
        '/path/to/my/specs.yaml': my_specs_config,
        '/path/to/my/other_specs.json': OpenAPIConfig(),
    })

    This would register two OpenAPI spec files with connexion:

    • The first one is a YAML file that has security definitions and another field appended to its root, and two fields added to each operation; the modified file is written to the specified output file path
    • The second one is a JSON file that is converted to YAML but has no other additions; as no output file path is specified for the modified file, it will be written to the same directory that the spec file was read from (/path/to/my/other_specs.json), but under a modified name: /path/to/my/other_specs.modified.yaml

    In practice, of course, the app admin (i.e., the person who wants to, say, set up cwl-WES or another FOCA-based service) would have all these values in a properly structured, FOCA-compatible app config, so they would just pass something like this to the FOCA constructor (which we still have to implement):

    import yaml
    
    with open('/path/to/config.yaml') as config_file:
        config = yaml.safe_load(config_file)
    
    app = FOCA(**config)

    That should be pretty much all the code that is needed in the final app. Of course we will need to work on the model/class to not only configure OpenAPI specs, but also set up custom errors and handlers, configure the database etc.

  • Make sure to properly document the behavior in the functions'/methods' docstrings and, of course, write tests for the expected behavior

Build FOCA Dockerfile in Petstore example

Currently, the Dockerfile in the Petstore example uses the latest FOCA image. This is not ideal, as changes to the FOCA code base will not be reflected immediately when starting the Petstore app. This also presents a risk of the code base and the Petstore example diverging, even if the repo is freshly pulled. It would thus be better to build the FOCA image on the fly or, alternatively, to overwrite the installed foca package in the FOCA container with the one that is currently being worked on.

Thanks to @git-anurag-hub for pointing this out!

Create Dockerfiles for each supported Python version

Currently, FOCA has been tested for Python versions >=3.6,<=3.9, but there is only a single Dockerfile, using python:3.6-slim-stretch as its base image. Ideally, app developers should be able to pick which of the supported Python versions they want to use, and thus pick the corresponding FOCA Docker image as the base image for their application.

To do so, prepare Dockerfiles for each of the supported minor Python versions in a directory docker, named Dockerfile_py3.6 etc.:

  • Python 3.6
  • Python 3.7
  • Python 3.8
  • Python 3.9

Then, adapt the Travis CI configuration in .travis.yml to build and push all of the images with tags v0.6.0-py3.6 etc., and with tag latest always pointing to the latest FOCA image built for the latest Python version (currently 3.9).

Generalize the database module

Database name, collection names and some indexing options can likely be generalized and passed via the config.

Basically, the database model should be extended by another field collections of type Dict[str, CollectionConfig], where the keys are collection names and the values of model CollectionConfig (still to be written) list the options for configuring the collection: basically ascending/descending order and additional primary key fields. You can have a look at cwl-WES to see what we typically need when registering database collections. A look at the MongoDB (or PyMongo) documentation might also help to see what's supported. I think it's not that much, so perhaps we can support in CollectionConfig everything that PyMongo offers (more power for the user). Anyway, once the models are done, it should be easy to refactor the database module accordingly.
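A minimal sketch of what these models could look like, assuming pydantic-style config models; the field names are illustrative:

from typing import Dict, List, Optional, Tuple

from pydantic import BaseModel


class IndexConfig(BaseModel):
    """A single index: (field, direction) pairs plus any further options
    passed through to PyMongo's create_index()."""
    keys: List[Tuple[str, int]] = [('id', 1)]  # 1 ascending, -1 descending
    options: Dict = {}


class CollectionConfig(BaseModel):
    """Per-collection settings; keyed by collection name in the db model
    via Dict[str, CollectionConfig]."""
    indexes: Optional[List[IndexConfig]] = None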

Package project

Create the necessary files for uploading the project to PyPI (setup.py and possibly MANIFEST.in and setup.cfg)

Extend documentation

Add more details about FOCA usage, in particular:

  • How to write a FOCA config and set up a FOCA-backed app
  • How to configure specific FOCA features (errors, databases/collections etc)
  • How to access FOCA- and app-specific configuration parameters within the app (in Flask application context and otherwise)
  • How to use optional utility/helper functions that come with FOCA

Improve config handling

Ideally, the config handling should be done along those lines: https://www.hackerearth.com/ja/practice/notes/samarthbhargav/a-design-pattern-for-configuration-management-in-python/

The downside of writing a class for this is that it is not as easy for the user to change as a YAML file is. However, the upside is that it is easy to validate config params and that params can be accessed with dot notation: e.g., we could do something like config = app.config["app"] at the top of a module and then call individual config params with config.database.some_param instead of get_conf(app.config, "app", "database", "some_param") or, worse, app.config["app"]["database"]["some_param"] - which doesn't validate anything and doesn't allow for a streamlined way of raising InvalidConfig errors (these would most likely just show up as KeyErrors or AttributeErrors, which are not too bad, but could certainly be better).

In our case, we can have the best of both worlds: YAML and class. We define a class for all params that FOCA needs, ideally with reasonable defaults for every single one of them. In practice, the user (i.e., the person who sets up a microservice based on FOCA) will write their app config file in YAML, parse it into a dictionary, and pass it to the FOCA() constructor, as briefly described at the end of issue #40. FOCA will then validate that all the sections it requires adhere to the schema defined in the config class, and raise an appropriate error otherwise. The user will of course want to add params that are relevant only to their particular service; these will be added to app.config as well, inside the class, as a "free" untyped dictionary field, without any further validation (which isn't possible anyway, because we don't know what params the user wants to add). Instead, the user is responsible for validating those custom, non-essential params and for coming up with an error-handling strategy for them.
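A minimal sketch of the dot-notation access (and validation) that the class-based approach buys us, with illustrative pydantic models:

from pydantic import BaseModel


class DatabaseConfig(BaseModel):
    host: str = 'mongodb'  # reasonable defaults for every param
    port: int = 27017


class Config(BaseModel):
    database: DatabaseConfig = DatabaseConfig()


config = Config(**{'database': {'port': 12345}})
assert config.database.port == 12345  # dot notation instead of nested keys
# Config(**{'database': {'port': 'oops'}}) raises a validation error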

P.S.: This is the gist of a discussion in the #foca Slack group.

Gunicorn support

For scalability, and if possible, FOCA should support Gunicorn out of the box, similar to how it has been implemented in, e.g., cwl-WES.
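A minimal sketch of what a Gunicorn entry point could look like, assuming that foca() returns a Connexion app exposing the underlying Flask app via .app (as in the snippet in the issue below):

# wsgi.py - run with: gunicorn wsgi:application
from foca.foca import foca

connexion_app = foca('config.yaml')
application = connexion_app.app  # the underlying Flask/WSGI app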

Implement foca main function with class-based implementation

The current implementation is a single function comprising multiple individual steps. These steps could be separated out into individual methods; a sketch follows the code below.

def foca(config: Optional[str] = None) -> App:
    """Set up and initialize FOCA-based microservice.

    Args:
        config: Path to application configuration file in YAML format. Cf.
            :py:class:`foca.models.config.Config` for required file structure.

    Returns:
        Connexion application instance.
    """

    # Parse config parameters and format logging
    conf = ConfigParser(config, format_logs=True).config
    logger.info("Log formatting configured.")
    if config:
        logger.info(f"Configuration file '{config}' parsed.")
    else:
        logger.info("Default app configuration used.")

    # Add permission specs
    conf = _create_permission_config(conf)

    # Create Connexion app
    cnx_app = create_connexion_app(conf)
    logger.info("Connexion app created.")

    # Register error handlers
    cnx_app = register_exception_handler(cnx_app)
    logger.info("Error handler registered.")

    # Enable cross-origin resource sharing
    if conf.security.cors.enabled:
        enable_cors(cnx_app.app)
        logger.info("CORS enabled.")
    else:
        logger.info("CORS not enabled.")

    # Register OpenAPI specs
    if conf.api.specs:
        cnx_app = register_openapi(
            app=cnx_app,
            specs=conf.api.specs,
        )
    else:
        logger.info("No OpenAPI specifications provided.")

    # Register MongoDB
    if conf.db:
        cnx_app.app.config['FOCA'].db = register_mongodb(
            app=cnx_app.app,
            conf=conf.db,
        )
        logger.info("Database registered.")
    else:
        logger.info("No database support configured.")

    # Create Celery app
    if conf.jobs:
        create_celery_app(cnx_app.app)
        logger.info("Support for background tasks set up.")
    else:
        logger.info("No support for background tasks configured.")

    return cnx_app
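A minimal sketch of what the class-based split could look like; the class and method names are illustrative, and the helpers (ConfigParser, create_connexion_app, etc.) are those used in the function above:

class Foca:
    """Each setup step from foca() becomes a private method so it can be
    tested and overridden in isolation."""

    def __init__(self, config: Optional[str] = None) -> None:
        self.conf = ConfigParser(config, format_logs=True).config
        self.conf = _create_permission_config(self.conf)

    def create_app(self) -> App:
        cnx_app = create_connexion_app(self.conf)
        cnx_app = register_exception_handler(cnx_app)
        cnx_app = self._setup_security(cnx_app)
        cnx_app = self._setup_api(cnx_app)
        cnx_app = self._setup_db(cnx_app)
        cnx_app = self._setup_jobs(cnx_app)
        return cnx_app

    def _setup_security(self, cnx_app: App) -> App:
        if self.conf.security.cors.enabled:
            enable_cors(cnx_app.app)
        return cnx_app

    def _setup_api(self, cnx_app: App) -> App:
        if self.conf.api.specs:
            cnx_app = register_openapi(app=cnx_app, specs=self.conf.api.specs)
        return cnx_app

    def _setup_db(self, cnx_app: App) -> App:
        if self.conf.db:
            cnx_app.app.config['FOCA'].db = register_mongodb(
                app=cnx_app.app,
                conf=self.conf.db,
            )
        return cnx_app

    def _setup_jobs(self, cnx_app: App) -> App:
        if self.conf.jobs:
            create_celery_app(cnx_app.app)
        return cnx_app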

Clean up dependencies

Currently, all dependencies are listed in requirements.txt. However, many of these are not required for actual FOCA functionality, but rather for developing FOCA (e.g., to run tests, calculate and publish code coverage, lint the code, build/release packages, etc.). End users of FOCA likely do not need these packages; they simply bloat the FOCA Docker image and derived app images.

To clean up dependencies, first remove any packages from requirements.txt that are used for building and distributing packages (currently only twine) and add them to the setup_requires section inside setup.py. Then move all other packages that are not required for FOCA functionality (i.e., packages only used for testing, linting, etc.) into a new file requirements_dev.txt. To make sure the CI environment has all the packages it needs to run tests, add an install section (see, e.g., here) to the Travis CI config, with commands that first install the packages in requirements.txt (Travis does that automatically if no install section is provided and the language is set to python, but I think it will not do so if install is overridden) and then install the development packages in requirements_dev.txt. While you are at it, you can add python-semantic-release to requirements_dev.txt as well and then remove the line pip install python-semantic-release==7.15.0 from the Travis config (as it will then be installed along with the other packages in the development dependencies file). Lastly, check whether any changes to the documentation (README.md, examples/petstore/README.md) are required as a consequence of this change (there may not be, but it's important to verify).

Note: I think that similarly to Travis, Read the Docs (used for building API docs) by default will install packages from requirements.txt, but not any custom environment files, so if it needs any of these packages, doc building may fail. If that is the case (triggering the CI by pushing, e.g., a feature branch will tell you), please let me know and I will figure out the necessary changes in the Read the Docs configuration.

Generate random identifier

A function that generates random string identifiers is duplicated across several FOCA-based services and should therefore be provided as a utility function in FOCA.
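A minimal sketch of such a utility; the charset and length parameters are illustrative defaults:

import string
from random import choices


def generate_id(charset=string.ascii_uppercase + string.digits, length=6):
    """Return a random identifier of `length` characters drawn from
    `charset`."""
    return ''.join(choices(charset, k=length))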

Update docs

  • mention all features
    • Connexion factory
    • CORS support
    • OpenAPI 2.x/3.x spec registration
    • Endpoint protection / JWT validation
    • Database (MongoDB)
    • Background tasks (RabbitMQ & Celery)
  • describe usage of main FOCA class
  • link to example app (#50)
  • add some ELIXIR Cloud & AAI branding, contacts etc.

Solution for adding app_config.py and app_config.yaml to config

Since app_config.py and app_config.yaml can differ between services, we need a solution for adding those files to the FOCA archetype.

Based on some discussions, the following mechanism has been proposed so as not to break anything in existing deployments:

  1. check if WES_CONFIG is defined; if so, use that
  2. check if the app's config directory contains an app_config.yaml file; if so, use that
  3. otherwise use FOCA's default app_config.yaml

Note: WES_CONFIG is just specific to cwl-WES.

(1) and (3) are already in place. We need to figure out a solution for (2), which doesn't break any existing deployments.
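A minimal sketch of the proposed resolution order; the function name is illustrative, and the environment variable would need to be parameterized per service rather than hard-coding the cwl-WES-specific WES_CONFIG:

import os


def resolve_config_path(app_config_dir, default_path, env_var='WES_CONFIG'):
    """(1) env var, if set; (2) app_config.yaml in the app's config
    directory, if present; (3) FOCA's default app_config.yaml."""
    if os.environ.get(env_var):
        return os.environ[env_var]
    candidate = os.path.join(app_config_dir, 'app_config.yaml')
    if os.path.isfile(candidate):
        return candidate
    return default_path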

See the app_config.* files in these locations for reference:

https://github.com/elixir-cloud-aai/cwl-WES/tree/dev/cwl_wes/config
https://github.com/elixir-cloud-aai/proWES/tree/dev/pro_wes/config
https://github.com/elixir-cloud-aai/proTES/tree/dev/pro_tes/config

CORS support should be optional

While I don't fully understand the implications, I think CORS should not be enabled for every microservice and thus should be made optional.

It's rather an easy fix but needs some changes all around (code, docstrings, possibly examples).

In principle, this is what needs to be done, as far as I can tell:

  • A config parameter enable_cors should be added to the SecurityConfig model (set default to True) in foca/models/config.py; update docstring accordingly, including example and, if necessary, dependent examples (search file for any mentions of SecurityConfig)
  • In the main FOCA file (foca/foca.py), the value of the config parameter should be read to decide whether to call enable_cors() in line 49 (i.e., add an if ... around that call and the log message, maybe add another log message to say that CORS is disabled in the else ... block)
  • Check if any documentation needs to be updated

Format all docstrings in Google style

To allow the Sphinx engine to properly parse docstrings when building the API docs (https://foca.readthedocs.io/en/latest/), rewrite all docstrings to follow Python PEP 257 conventions and the Google style for Python docstrings, as described here and here.

The foca.api.register_openapi module already adheres to the conventions/style and can be used as an example (raw file, rendered docs).

Also make sure that all modules, classes, methods and functions have a docstring, as per PEP 257.
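For reference, a Google-style docstring skeleton on a hypothetical function, as Sphinx (via the napoleon extension) parses it:

def add_pet(name: str, tag: str = 'dog') -> dict:
    """Register a pet in the store.

    Args:
        name: Name of the pet.
        tag: Kind of pet.

    Returns:
        The stored pet object.

    Raises:
        ValueError: If `name` is empty.
    """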

Use the list below to ensure that all new or refactored modules stick with the documentation style.

  • foca/foca.py @stikos (merged)
  • foca/api/register_openapi.py @uniqueg (merged)
  • foca/config/config_parser.py @uniqueg (merged)
  • foca/database/db_utils.py @uniqueg (merged)
  • foca/database/register_mongodb.py @uniqueg (merged)
  • foca/errors/errors.py @uniqueg (merged)
  • foca/factories/celery_app.py @uniqueg (merged)
  • foca/factories/connexion_app.py @uniqueg (merged)
  • foca/models/config.py @kushagra189 (merged)
  • foca/security/auth.py @uniqueg (merged)
  • foca/security/cors.py @uniqueg (merged)

Create a utility to merge two or more OpenAPI yaml files

We need a solution that builds a complete OpenAPI file from either two or more parts that are in themselves incomplete (e.g., one part defines the core structure, one part the endpoints and one part the models) OR at most one complete spec and one or more incomplete parts (e.g., one complete spec plus one with additional endpoints and models). The second option is the problem we need to solve, because we have a complete spec (DRS or TRS) and a part with additional endpoints (and perhaps models). In each case, all parts need to be valid YAML.

Note that this solution should actually be quite simple, because it is just like a dictionary update. So there's no need to actually parse Swagger/OpenAPI: just parse the files as YAML into dictionaries and then update the first dictionary with the second, third, etc. You just need to make sure that dictionaries are merged in a way that preserves existing dictionary keys, e.g., existing endpoints in the first file should not be overwritten if the second, third, etc. files contain endpoints, too, unless they define the same endpoint (in which case we would assume that the parts to be merged want to override the already existing endpoints). It is actually the same behavior that the original ConfigParser in cwl-WES implemented: you could supply multiple YAML config files.
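A minimal sketch under these assumptions (plain deep merge of YAML-parsed dictionaries, later files winning on key conflicts while sibling keys are preserved):

import yaml


def merge_yaml(*paths):
    """Parse each YAML file and deep-merge it into the result; existing
    keys are only overwritten when a later file defines the same key."""
    def deep_update(base, new):
        for key, value in new.items():
            if isinstance(base.get(key), dict) and isinstance(value, dict):
                deep_update(base[key], value)
            else:
                base[key] = value
        return base

    merged = {}
    for path in paths:
        with open(path) as yaml_file:
            deep_update(merged, yaml.safe_load(yaml_file) or {})
    return merged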

Fix docstring formatting & set private members

Some docstrings are rendered in a slightly warped way in the API docs, mostly in cases where highlighting features are used (italics, bold, bullet points, refs to arguments/classes/methods/functions). Check the Sphinx docs and Google style reference to fix these.

Moreover, some helper functions/methods are shown that are better not exposed in the public API to keep it as easily accessible and focused as possible.

Add decorator for protecting endpoints

We need to remove the following redundant code by adding a corresponding module to the FOCA Python package; a sketch of such a decorator follows the list below.

While addressing this, we need to keep the following aspects in mind:

  • check for the required parameters that are essential to pass through the decorator
  • for unit tests relying on service/HTTP calls to identity provider, mock up responses in cases where valid token requirement is essential for the code to proceed.
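A minimal sketch of such a decorator, with the actual token validation against the identity provider elided (that part is what the mocked-up responses in the unit tests would stand in for):

from functools import wraps

from flask import request
from werkzeug.exceptions import Unauthorized


def protect_endpoint(func):
    """Reject requests that do not carry a Bearer token; decoding and
    validating the token against the identity provider would follow."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        scheme, _, token = request.headers.get(
            'Authorization', '').partition(' ')
        if scheme.lower() != 'bearer' or not token:
            raise Unauthorized('No valid access token supplied.')
        # decode/validate `token` against the identity provider here
        return func(*args, **kwargs)
    return wrapper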

Add documentation

  • Write a README.md that describes what the package is about.
  • Put code coverage service badge in docs

Add integration tests for petstore example app

  • Integration tests for the petstore example app endpoints need to be added to Travis CI (as mentioned in #57).
    There seems to be a port/host declaration error that needs to be fixed.
  • The following example tests need to be added:
  - >
    test $(curl -sL -w '%{http_code}' -X POST 'http://localhost/pets' -H 'accept: application/json' -H 'Content-Type: application/json' -d "{\"name\":\"karl\",\"tag\":\"frog\"}" -o /dev/null) == '200'
  - >
    test $(curl -sL -w '%{http_code}' -X GET 'http://localhost/pets' -H 'accept: application/json' -H 'Content-Type: application/json' -o /dev/null) == '200'
  - >
    test $(curl -sL -w '%{http_code}' -X GET 'http://localhost/pets/1' -H 'accept: application/json' -H 'Content-Type: application/json' -o /dev/null) == '200'
  - >
    test $(curl -sL -w '%{http_code}' -X DELETE 'http://localhost/pets/1' -H 'accept: application/json' -H 'Content-Type: application/json' -o /dev/null) == '204'

  • Additions might be needed if any other endpoints are implemented for the example application.

Connexion-compatible token validation

Since version 2.0, Connexion helps with the validation of Swagger 2 / OpenAPI 3 security schemas. While this is in principle welcome, it interferes with the functionality of the dedicated auth/security decorator implemented in FOCA.

In order to be compatible with Connexion, an x-{auth_method}TokenInfo field needs to be added to the securityDefinitions (Swagger 2) or securitySchemes (OpenAPI 3) objects, where {auth_method} is one of several options, depending on the version of the OpenAPI specification.

This will require major refactoring in various places (the decorator in the security.auth module needs to be re-factored or re-implemented, the API registration needs to be updated, and likely a lot of tests will be affected, too).
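For illustration, a sketch of the token-info function that such a field would point Connexion at; it receives the raw token and must return a dict of token claims, or None to reject the request (decode_and_verify is a hypothetical helper):

from typing import Optional


def validate_token(token: str) -> Optional[dict]:
    """Resolved by Connexion via the x-{auth_method}TokenInfo field."""
    claims = decode_and_verify(token)  # hypothetical: decode the JWT and
                                       # verify it with the identity provider
    return claims  # a dict of claims, or None (-> 401)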

Update flask and other deprecated dependencies

Running tests and upgrading to new Python versions causes a lot of issues. Apparently, the patches in old versions of dependencies like flask are not properly functional. Hence, to sustain our development cycle, we need to upgrade these packages.

Generalization of the errors module

Generalize the current behavior of the errors module by passing an error model to FOCA's register_error_handlers() function, and then pass that model to the different handlers. Also find a way to pass actual error parameters (like the exact error message) to a single handler that matches them to the model. Lastly, write unit tests for all of this. This way, we (see the sketch after the list):

  • only need a single handler
  • can define the model in each service according to how it is defined in the specs
  • don't require the service developer to overwrite error handlers one by one
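A minimal sketch of what the single generic handler could look like; the error-model structure is illustrative:

import json

from flask import Response

# illustrative error model, as a service might define it to match its specs
ERROR_MODEL = {
    'NotFound': {'msg': 'The requested resource was not found.',
                 'status': 404},
    'default': {'msg': 'An unexpected error occurred.', 'status': 500},
}


def handle_problem(exception):
    """Single generic handler: look up the raised exception in the error
    model and serialize the matching member."""
    member = ERROR_MODEL.get(type(exception).__name__, ERROR_MODEL['default'])
    return Response(
        response=json.dumps(member),
        status=member['status'],
        mimetype='application/problem+json',
    )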

Create example app

Write a pet store example app with FOCA.

  • use FOCA to create app
  • implement endpoint logic
  • add detailed markdown description
  • add link to description and short summary to README.md
  • package as a test and add as an integration test
  • add Dockerfile
  • add docker-compose.yaml for all services

Endpoint logic can already be implemented, but requires #42 to be resolved in order to finish.
