hackillinois / api

The Official API supporting HackIllinois

Home Page: https://api.hackillinois.org

License: Other

Go 93.56% Dockerfile 0.13% Makefile 0.63% Shell 0.76% Python 4.92%

api's Introduction

HackIllinois API


This repository contains the code which runs the backend services supporting HackIllinois.

  1. Developer Environment Setup
  2. Building, Testing, and Running the API
  3. Release Deployment
  4. Contributing
  5. Documentation

Developer Environment Setup

Before working on the API, there are a few steps necessary to set up your developer environment.

Installing Dependencies

We highly recommend using Ubuntu 18.04 when working on API development. The API is written in Go and makes use of MongoDB for storing data. You will have to install both of these before working on the API. You will also need a few common development tools, including make and git.

Installing Development Tools

Both make and git can be installed from the default Ubuntu package repositories. Run the following commands to install both tools. You may need to run them with sudo.

apt-get update
apt-get install build-essential git

Installing Go

Follow the Go Installation Instructions for installing Go. Run go version to ensure that Go has been properly installed.

Installing MongoDB

Follow the MongoDB Installation Instructions for installing MongoDB. Once MongoDB is installed ensure mongod is running. If it is not running then start the service.

Downloading the API source and dependencies

Run the following command to retrieve the API and all of its dependencies. The API source will be cloned to a folder called api in your current directory.

git clone https://github.com/HackIllinois/api.git

First time API setup

After downloading the API source code you will want to build the entire repository and generate an Admin token for testing endpoints. This can be done by moving into the API folder and running:

make setup

You should see your Admin token near the bottom of the output. If this process hangs, ensure that mongod is running.

This Admin JWT should be passed as the Authorization header when making a request via curl, Postman, or a similar tool.

Useful tools for development

There are a couple of other tools that are useful, though not necessary, for working on the API. The first is a GUI tool for viewing and modifying the database; options include MongoDB Compass and Robo 3T. You will also want to install Postman for making requests to the API when testing.

Building, Testing, and Running the API

To simplify API development, make is used for building, testing, and running the API. All make commands can be run from the root of the repository, and they will properly find and operate on all of the services.

Building the API

Run the following command from the root of the repository. The gateway and all services will be built into bin.

make all

Testing the API

Run the following command from the root of the repository. The gateway and all services will have their tests run and any errors will be reported.

make test

Running the API

Run the following command from the root of the repository. Note that this command will not rebuild the API so you must first build the API to ensure your binaries are up to date.

make run

API Container

There are also make targets provided for building a containerized version of the API for usage in production deployments.

Building the API Container

Building a container requires that docker and go have already been installed. The following command should be run from the root of the api repository.

make container

Running the API Container

You can obtain all released versions and the latest version of the container from DockerHub. The API container takes the name of the service to run as its command argument in the following docker run command. Ensure that the correct environment variables are set to load the configuration file and overwrite any secret configuration variables.

docker run -d --env-file env.config hackillinois/api:latest <servicename>

Contributing

For contributing guidelines see the CONTRIBUTING.md file in the repository root.

Documentation

We use MkDocs for our documentation and host it at HackIllinois Docs.

api's People

Contributors

asankaran, asehgal220, benpankow, brianstrauch, cktang88, dependabot[bot], devsatpathy, ethan-lord, ezhang887, heesooy, hsiehby, lauzadis, nydauron, patrickkan, pattyjogal, pradyumnashome, takshpsingh, tanyongzhi, timothy-gonzalez, yashosharma


api's Issues

Use presigned urls for uploads

Right now we use presigned urls for downloading files. We should also use them for uploading files. This would eliminate the need to move large files through our gateway and upload services, greatly reducing load on the API.

  • Upload files using pre-signed URLs

Account Generator

Since OAuth is the only login mechanism supported in the API, we need a way to log in to the API when developing locally. The token generator provides a method for generating tokens that can be included with authenticated requests to the API. However, we also need to insert the user info associated with the ID generated in the token into the database.

  • Create an account generator under utilities/accountgen/
  • Include command line options for user id, email, name, etc.
  • Modify make setup to insert an admin account into the database

High quality, easy to read documentation

The current documentation for the API is barely adequate, and it is important that documentation is available in an easy to read format for API consumers. This documentation should be accessible outside this repository at an external site. Setting up a solution that can build static documentation files, which can then be uploaded to an S3 bucket and served as a website, would be a good approach.

  • Define how we will be serving documentation to API consumers and how we will be generating this documentation
  • Write the updated documentation
  • Update the build system to deploy documentation on merges into master

Switch order of Auth and Identification Middlewares

The auth middleware should come before the identification middleware. The identification middleware attempts to decode the user's id from the user's JWT. It makes more sense to check if the token is valid before we try to decode it.

  • Switch the order of auth and identification middlewares

Push Notifications

Currently the API does not have any method of sending notifications to users except via email. Prior versions of the API had an endpoint which clients could GET periodically for notifications. A better solution would be to push notifications to mobile and web clients. We don't want to maintain 3 separate code paths for iOS, Android, and Web, so we should look into an external service that will accept our notification and relay it to all 3 front ends. AWS provides a service that does this, but we should look into all our options.

  • Determine what service to use to relay cross platform push notifications
  • Add a new non-essential service to API named notification
    • Add the service to the build and deployment system
  • Have endpoints for creating notifications, and getting notifications (we still want a polling option)
  • Integrate the third party push notifications service
  • Add notification service routes to the gateway

Revocable API token

Currently the only way to revoke a user's token is to change the secret used to generate and validate tokens, which invalidates everyone's token. We need a way to blacklist specific tokens. One potential solution would be to maintain a set of blacklisted tokens in a redis cache. This would give us higher performance than querying the database for every request, and allow us to invalidate individual tokens. An issue with this implementation is that it requires us to know the exact token we want to revoke. It would be better if we could revoke all tokens for a user generated before a certain time. This could still be stored in redis for high performance.

  • Define a method for blacklisting / revoking user tokens
  • Implement token revocation while not introducing a database operation to every request

Make decisions expire

All ACCEPTED decisions should expire after a defined amount of time. Once an ACCEPTED decision has expired, users should not be allowed to rsvp for the event.

  • Add an expiration timestamp to ACCEPTED decisions
  • Block rsvp on expired decisions

Registrations to CSV scripts

Experience wants to be able to view registrations in Excel, so we should have a script that pulls all the registrations from the API and converts the JSON response into a CSV file.

  • Add a registration to csv utility under utilities/registrationcsv/
  • Add cli flags for output file name and registration filters

Health check

Currently sending a GET / request to the gateway just returns a simple string. Instead it should return the health of each microservice. If a service is available it is considered healthy; otherwise it is unhealthy.

  • Return health of microservices in response to GET / requests

Define level of access for Staff members in the API

Currently the Staff role is not actually used in the gateway for authenticating requests. There are a number of endpoints that are currently restricted to Admin users that can be opened up to Staff as well.

  • Define endpoints that staff should have access to
  • Update these endpoints in the gateway

Set ContentType for S3 Uploads

Currently the upload service does not set the content type for S3 uploads, which creates issues when later retrieving the file.

See ReflectionsProjections/api#14 for the fix.

Add a caching layer

The ideal implementation would be to add caching to the database methods defined in the common package. This would allow all services to continue using the database methods they currently use while getting the benefits of caching. Different caching models will need to be discussed to decide how we write to and invalidate the cache.
Caching should be done with redis. It is fast, simple to run locally, and can be run in a managed deployment on AWS.

  • Add a generic caching layer to all database methods

Change endpoints for setting roles

Currently the auth service has endpoints for retrieving the roles of a user as a list and setting the list of roles the user has. The endpoint which sets the roles of a user to a given list should be removed. Instead two new endpoints should be added. One which takes a single role and adds it to the user and one which takes a single role and removes it from a user.

  • Remove PUT /auth/roles/
  • Add PUT /auth/roles/add/
  • Add PUT /auth/roles/remove/

Note that this is an interface breaking change and will require updating other services for the new interface.

Using a modern build system

Super cool this project has gotten to the point where a build system is required! You guys have put in a ton of work and the contributions to arbor are appreciated. I'd like to caution against using make just from an accessibility and maintainability perspective. Using something that is a bit easier to add and modify as the project changes will make it easier for others to contribute.

A few recommendations:
Since the api is a monorepo, a tool like bazel or please would make sense. I'd probably recommend please (https://please.build/index.html) for this project since it doesn't add a JDK dependency and has the same nice monorepo behavior as bazel, but if you are considering bazel take a look at dazel (https://github.com/nadirizr/dazel).

Mage is also nice if you want something closer to makefiles (https://github.com/magefile/mage)

Cap the maximum size of file uploads

We don't want to let users upload arbitrarily large files to our s3 buckets. The gateway does cap all incoming requests at 16 MB, but that is still larger than we need to allow. All uploads are currently resumes, and there isn't any reason a single resume should be over 4 MB. We could probably lower the limit to 2 MB and be okay as well.

  • Block uploads over 4 MB

Allow config to specify a domain for Staff & a single Admin user

Right now adding the first Admin user to the API requires manually modifying the database. We should have a config variable that allows an Admin or Superuser role to be granted to a single user. It would also be useful to automatically grant the Staff role to all users with an email at a specified domain.

This needs to avoid the potential security issue of an OAuth provider allowing an email without verification allowing anyone to obtain admin rights. We need to verify that the user actually owns the email we are getting from the OAuth provider. One solution to this is to send a verification email to the user with a link that triggers a granting of the role.

  • The admin role should be granted to the user with the specified email
  • The staff role should be granted to all users with an email at the specified staff domain

Write development guide

We want to give new contributors clear instructions on how they can set up their developer environment and start working on issues.

  • Write guide for setting up a development environment
  • Include some basic information about interacting with the API, generating tokens for local development, etc.

Fix socket leaks

Currently there are many http.Get, http.Post, etc. calls where we never close the response body, causing the socket for the connection to remain open. Eventually services run out of available sockets and start rejecting connections.

The best long term fix is to use a higher level requests library that manages these issues for us or to add wrapper functions to our common library which defer closing the socket.
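The wrapper approach can be sketched like this; getBody is a hypothetical helper, not an existing function in common:

```go
package main

import (
	"io/ioutil"
	"net/http"
	"net/http/httptest"
)

// getBody performs a GET and always closes the body via defer, so the
// socket is released back to the connection pool even on read errors.
func getBody(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close() // the call the current code is missing
	return ioutil.ReadAll(resp.Body)
}

// newTestServer lets the sketch be exercised without the network.
func newTestServer(body string) *httptest.Server {
	return httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte(body))
	}))
}
```

Services calling getBody instead of raw http.Get would never leak the socket, because the Close is handled in one place.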

Add better logging

Currently the logging in the API is the default logging provided by arbor. It logs the incoming requests to the gateway and the outgoing responses to these requests. More logging information is needed for logs to be useful in production.

We should add a logging package to the common package, which takes different logging levels such as ERROR, WARN, INFO, DEBUG. A log level should be configurable so that DEBUG logs can be disabled in production.

Microservices should use the logging package to log any errors that are encountered. It might also be useful to log other information as well. What exactly to log still needs to be defined.

  • Add a logging package to common
  • Use the logging package in microservices to log errors and other important information

Token Generator

Currently when testing the API, developers must manually create a token. It would be much easier if developers could generate tokens with specified information encoded into them.

  • Create a token generator under utilities/tokengen/
  • Include command line options for token secret, id, roles and other relevant parameters
  • Add a make target to invoke the generator to create a default Admin token for developers

Make user an Attendee on check-in override

If a user is given an override to check-in they should have the same permissions in the API as anyone else who checked in.

  • Add the role Attendee to a user upon checkin override

Admin Utilities

We lack utilities for managing the user base of the API, such as role assignment. Basic querying of statistics and event management shouldn't need to be done at the bare bones level.

Expand Rsvp information

Currently Rsvps only hold a yes/no value signifying if an applicant has chosen to attend the event. It looks like we will want to gather more information from the attendee if they indicate they will be attending the event. This would be like a second mini registration used to determine project interests. The exact fields still need to be defined.

  • Add registration like fields to rsvp for determining project interest

gopkg.in/mgo.v2 is now unmaintained

On July 5th, 2018, gopkg.in/mgo.v2 was marked as unmaintained. github.com/globalsign/mgo is now the primary active fork. No features seem to have been removed from the interface. So we should be able to just change the imports and be okay.

Mark users as attending an event only when the event is active

Currently we do not check the event start and end times when checking attendees into events. Attendees should only be able to check in to an event if it is currently occurring.

  • Verify, using the current time during the request, that a user is allowed to be marked as checked-in.

Add a GetAllMailingLists endpoint

This endpoint should return a list of all mailing lists stored in the database.

The endpoint should be GET /mail/list/.

A rough response struct would be:

{
    "mailList": [
        "mailList1",
        "mailList2",
        "mailList3"
    ]
}
  • Add GET /mail/list/ endpoint

Move common to a separate repo

We should consider moving the common library into a separate repo and treating it as a Go dependency in the main API. This should help with future versioning.

Gracefully handle requests to services which are down

When a service is down the gateway should return a 503 (or maybe 504) to the user along with a json body containing an error message. There is an open issue upstream for this, arbor-dev/arbor#35. We should consider implementing this functionality there.

  • Return 503 / 504 response when a user tries to access a service that is down
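A sketch of what the gateway could return; the json error body shape below is an assumption, not something defined by this issue:

```go
package main

import (
	"encoding/json"
	"net/http"
)

// unavailableBody builds the json error body returned for a downed service.
func unavailableBody(service string) []byte {
	body, _ := json.Marshal(map[string]string{
		"error":   "service unavailable",
		"service": service,
	})
	return body
}

// serviceUnavailable writes the 503 response the gateway would send
// instead of an opaque proxy error.
func serviceUnavailable(w http.ResponseWriter, service string) {
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(http.StatusServiceUnavailable)
	w.Write(unavailableBody(service))
}
```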

Refactor Auth service

Currently the auth service makes a separate HTTP request for each element of user data it fetches from the oauth provider (i.e. name, email, etc.). We should be able to get all of the user's information in a single HTTP request and return a struct containing all of it.

The auth service is also a good example of where Go's interfaces can be used. Every oauth provider should conform to an OauthProvider interface that exposes methods for getting the authorization url, obtaining an oauth token, and retrieving user information.

  • Refactor auth service to minimize HTTP requests
  • Refactor auth service to take advantage of Go's interface features
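The proposed interface might look like the following; the method names and the fakeProvider type are illustrative, not taken from the codebase:

```go
package main

// UserInfo holds everything fetched from the provider, so a single
// request can populate it.
type UserInfo struct {
	ID    string
	Name  string
	Email string
}

// OauthProvider is implemented once per provider (GitHub, Google, ...).
type OauthProvider interface {
	AuthorizationURL(redirectURI string) string
	ExchangeCode(code string) (string, error)
	FetchUserInfo(token string) (UserInfo, error)
}

// fakeProvider demonstrates the shape of a conforming implementation.
type fakeProvider struct{}

func (fakeProvider) AuthorizationURL(redirectURI string) string {
	return "https://provider.example/authorize?redirect_uri=" + redirectURI
}

func (fakeProvider) ExchangeCode(code string) (string, error) {
	return "token-" + code, nil
}

func (fakeProvider) FetchUserInfo(token string) (UserInfo, error) {
	return UserInfo{ID: "1", Name: "Test User", Email: "test@example.com"}, nil
}
```

The auth controllers would then hold an OauthProvider value and never care which provider is behind it.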

Refine error handling

API integration for consumers would be easier if error handling were unified across the codebase and errors were presented in a hierarchical order, e.g. ApiError indicates an unhandled error, UnprocessableRequestError suggests that the request a user made is not valid, etc.

My proposed solution is:

  • Custom error types are created for the api
  • Current calls to errors.New and usage of library errors are replaced with our own error types
  • All error types have a type field to allow for easy checking of what error was returned by the API
  • All error types have a source field
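A minimal sketch of such error types, assuming the field and constructor names below (they are not from the actual codebase):

```go
package main

import "fmt"

// ApiError is the base custom error type; Type enables programmatic checks
// and Source records which service produced the error.
type ApiError struct {
	Type    string
	Source  string
	Message string
}

// Error makes ApiError satisfy the standard error interface.
func (e ApiError) Error() string {
	return fmt.Sprintf("%s (%s): %s", e.Type, e.Source, e.Message)
}

// UnprocessableRequestError constructs the error for invalid client requests.
func UnprocessableRequestError(source, message string) ApiError {
	return ApiError{Type: "UNPROCESSABLE_REQUEST", Source: source, Message: message}
}
```

Callers can switch on the Type field instead of matching error strings.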

Add event favorites

We want to allow users to mark events as "favorites" so that mobile apps can display these events at the top of the event list.

  • Allow users to favorite events
  • Allow users to retrieve a list of their favorite events

Migrate QR Codes to User Service

Currently we have QR Code generation in the checkin service, but it makes more sense for it to be in the user service since it is used in many places and it is a way to identify a user.

  • Move QR Code generation to the user service
  • Change QR Code fields to only contain user information, ie. userid, email, etc.

Parallelize stat retrieval

The stat service currently makes GET requests to each registered service in serial. This can be greatly sped up by making these requests in parallel.

  • Parallelize stat retrieval from registered services
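The fan-out can be sketched with a sync.WaitGroup; the fetch callback stands in for the real HTTP call to each registered service:

```go
package main

import "sync"

// gatherStats calls fetch for every service concurrently and collects the
// results, instead of issuing the requests in serial.
func gatherStats(services []string, fetch func(string) string) map[string]string {
	var (
		mu    sync.Mutex
		wg    sync.WaitGroup
		stats = make(map[string]string, len(services))
	)
	for _, svc := range services {
		wg.Add(1)
		go func(svc string) {
			defer wg.Done()
			result := fetch(svc) // the slow network call happens in parallel
			mu.Lock()
			stats[svc] = result
			mu.Unlock()
		}(svc)
	}
	wg.Wait()
	return stats
}
```

Total latency becomes roughly the slowest single request rather than the sum of all of them.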

Add delayed sending of emails

  • Allow api caller to specify a timestamp to send the mail at

One possible implementation would be to have a goroutine dedicated to sending delayed messages. When a delayed message is created, it would be passed to the dedicated goroutine, where the mail would be sent at the specified time. Note that this goroutine should not busy-wait, since we don't want to use CPU resources unnecessarily. Channels will likely be very useful in implementing this.

The delayed mail should generate the list of users and their substitution based on the state of the mailing list at the time of sending (as opposed to when the delay send request is created).

Add integration tests

Currently we only test the service level functionality within each microservice. As a result, all the controller logic in the API has only been manually tested. We should have integration tests for the controller logic of each microservice. These integration tests will either have to spin up the entire API or mock the services required by the service being tested.

  • Add integration tests for end to end testing of controller logic

Make common/database generic

This package was originally designed to be generic, but over time mongo-specific types became part of the interface. The Database interface should be refactored to not return *mgo.ChangeInfo; instead we should return our own struct similar to *mgo.ChangeInfo, which implementations of the interface will populate. Additionally, while the Database interface does return a generic error type, users of the interface currently import mgo.ErrNotFound to check if the returned error was a not-found error. We should have our own NotFound error that can be returned and checked against instead, and we may need to provide other common error types as well. We will also need a way to abstract away the need to pass bson.M structs into queries; we can use map[string]interface{} instead.

  • Remove mongo specific return types for the Database interface
  • Create a set of common error types to return instead of database specific errors
  • Abstract away the use of query types such as bson.M
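The mongo-free interface could be sketched as follows; the names ChangeInfo, QueryMap, and ErrNotFound are proposals for this issue, not existing identifiers:

```go
package main

import "errors"

// ErrNotFound replaces mgo.ErrNotFound so callers never import mongo types.
var ErrNotFound = errors.New("database: not found")

// ChangeInfo mirrors the useful fields of *mgo.ChangeInfo without exposing it.
type ChangeInfo struct {
	Updated int
	Removed int
}

// QueryMap replaces bson.M in the public interface.
type QueryMap map[string]interface{}

// Database is the mongo-free interface services would program against.
type Database interface {
	FindOne(collection string, query QueryMap, result interface{}) error
	Update(collection string, selector QueryMap, update QueryMap) (ChangeInfo, error)
}
```

A mongo-backed implementation would translate QueryMap to bson.M and mgo.ErrNotFound to ErrNotFound internally.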

Health Checks

  • Health checks would allow services to mitigate the effect of task definitions that suffer from unexpected bugs that render them unable to serve further requests

Data Generator

Currently when setting up a development environment, all mongo collections start empty, making it difficult to quickly test endpoints. Developers must first add the appropriate data to the collections by making other requests. We should have a data generator that creates some fake data and inserts it directly into the mongo collections, bypassing the API. This will allow new developers to get started working immediately. This data generator should also be designed to be easy to update with changing schemas.

  • Create a data generator under utilities/datagen/
  • Include command line options for number of documents and other relevant parameters
  • Add a make target to invoke the generator with appropriate parameters

Error with redirect_uri parameters

The first url does not work, but the second does. The difference is the ordering of url parameters.

Does not work: https://accounts.google.com/o/oauth2/v2/auth?client_id=985459119454-r05c58fgdkk9o6sq7oupo3e9b44hb8q4.apps.googleusercontent.com&redirect_uri=https://localhost:8080/#/register/&scope=email&response_type=code

Works: https://accounts.google.com/o/oauth2/v2/auth?response_type=code&scope=profile%20email&client_id=985459119454-r05c58fgdkk9o6sq7oupo3e9b44hb8q4.apps.googleusercontent.com&redirect_uri=http://localhost:8080/#/register

The issue is in the parsing of the redirect_uri parameter when there is a # in the url. We should definitely put the redirect_uri parameter last in the url and we should also use a url building library instead of string concatenation.

return "https://accounts.google.com/o/oauth2/v2/auth?client_id=" + config.GOOGLE_CLIENT_ID + "&redirect_uri=" + redirect_uri + "&scope=profile%20email&response_type=code", nil

Define contributing guidelines for new members

As the number of people contributing to the API increases, it is important that we define specific contribution guidelines to make tackling issues simple for new contributors.

  • Define contributing guidelines

Enumerate roles

Create enum for roles and use that instead of string representations.

Write uploads to /tmp when not running in production mode

  • Use the environment variable IS_PRODUCTION to determine if running in production
  • If in production PUT requests should upload files to the specified s3 bucket and GET requests should return a presigned url authorizing the downloading of the file from the s3 bucket
  • If not in production PUT requests should write the file to disk in a location on /tmp and GET requests should return the local path of the file

The logic for determining if running in production should be done at the service level. I recommend having separate functions for the local file handling and having that code path get executed instead of the current s3 code path when not running in production.
