kong / kong

🦍 The Cloud-Native API Gateway and AI Gateway.

Home Page: https://konghq.com/install/#kong-community

License: Apache License 2.0

Topics: api-gateway, nginx, luajit, microservices, api-management, serverless, apis, consul, docker, reverse-proxy, cloud-native, microservice, kong, devops, kubernetes, kubernetes-ingress-controller, kubernetes-ingress, ai, artificial-intelligence, ai-gateway

Kong's Introduction


Kong (or Kong API Gateway) is a cloud-native, platform-agnostic, scalable API gateway distinguished by its high performance and its extensibility via plugins. It also provides advanced AI capabilities with multi-LLM support.

By providing functionality for proxying, routing, load balancing, health checking, authentication (and more), Kong serves as the central layer for orchestrating microservices or conventional API traffic with ease.

Kong runs natively on Kubernetes thanks to its official Kubernetes Ingress Controller.


Installation | Documentation | Discussions | Forum | Blog | Builds


Getting Started

Let’s test drive Kong by adding authentication to an API in under 5 minutes.

We suggest using the docker-compose distribution via the instructions below, but there is also a docker installation procedure if you’d prefer to run the Kong API Gateway in DB-less mode.

Whether you’re running in the cloud, on bare metal, or using containers, you can find every supported distribution on our official installation page.

  1. Clone the Docker repository and navigate to the compose folder:

     $ git clone https://github.com/Kong/docker-kong
     $ cd docker-kong/compose/

  2. Start the Gateway stack:

     $ KONG_DATABASE=postgres docker-compose --profile database up

The Gateway is now available on the following ports on localhost:

  • :8000 - send traffic to your service via Kong
  • :8001 - configure Kong using Admin API or via decK
  • :8002 - access Kong Manager, Kong's management web UI

Next, follow the quick start guide to tour the Gateway features.
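If you prefer DB-less mode, the Gateway can also be driven entirely by a declarative configuration file. Below is a minimal sketch of such a kong.yml — the service name, upstream URL, route path, and credential are placeholders invented for illustration; key-auth is one of the bundled authentication plugins:

```yaml
_format_version: "3.0"

services:
  - name: example-service        # placeholder name
    url: http://httpbin.org      # placeholder upstream
    routes:
      - name: example-route
        paths:
          - /example
    plugins:
      - name: key-auth           # bundled authentication plugin

consumers:
  - username: demo-user
    keyauth_credentials:
      - key: demo-api-key        # placeholder credential
```

With a file like this in place, requests to :8000/example are proxied to the upstream only when a valid apikey is presented.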

Features

By centralizing common API functionality across all your organization's services, the Kong API Gateway creates more freedom for engineering teams to focus on the challenges that matter most.

The top Kong features include:

  • Advanced routing, load balancing, health checking - all configurable via a RESTful admin API or declarative configuration.
  • Authentication and authorization for APIs using methods like JWT, basic auth, OAuth, ACLs and more.
  • Proxy, SSL/TLS termination, and connectivity support for L4 or L7 traffic.
  • Plugins for enforcing traffic controls, rate limiting, request/response transformations, logging, monitoring, and more, including a plugin developer hub.
  • Plugins for AI traffic to support multi-LLM implementations and no-code AI use cases, with advanced AI prompt engineering, AI observability, AI security and more.
  • Sophisticated deployment models like Declarative Databaseless Deployment and Hybrid Deployment (control plane/data plane separation) without any vendor lock-in.
  • Native ingress controller support for serving Kubernetes.

Plugin Hub

Plugins provide advanced functionality that extends the use of the Gateway. Many of the Kong Inc. and community-developed plugins like AWS Lambda, Correlation ID, and Response Transformer are showcased at the Plugin Hub.

Contribute to the Plugin Hub and ensure your next innovative idea is published and available to the broader community!

Contributing

We ❤️ pull requests, and we’re continually working hard to make it as easy as possible for developers to contribute. Before beginning development with the Kong API Gateway, please familiarize yourself with the following developer resources:

Use the Plugin Development Guide for building new and creative plugins, or browse the online version of Kong's source code documentation in the Plugin Development Kit (PDK) Reference. Developers can build plugins in Lua, Go or JavaScript.
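As a rough sketch of what plugin code looks like (the plugin name, priority, and header below are invented for illustration, and this is not a complete plugin — a real one also ships a schema and a rockspec):

```lua
-- handler.lua: minimal custom plugin skeleton (names are placeholders).
local MyPlugin = {
  VERSION  = "0.1.0",
  PRIORITY = 1000,  -- controls execution order relative to other plugins
}

-- Called for every proxied request during the access phase.
function MyPlugin:access(conf)
  -- kong.request and kong.response are provided by the PDK.
  local header = kong.request.get_header("x-demo-header")
  if not header then
    return kong.response.exit(400, { message = "x-demo-header is required" })
  end
end

return MyPlugin
```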

Releases

Please see the Changelog for more details about a given release. The SemVer Specification is followed when versioning Gateway releases.

Join the Community

Konnect Cloud

Kong Inc. offers commercial subscriptions that enhance the Kong API Gateway in a variety of ways. Customers of Kong's Konnect Cloud subscription take advantage of additional gateway functionality, commercial support, and access to Kong's managed (SaaS) control plane platform. The Konnect Cloud platform features include real-time analytics, a service catalog, developer portals, and so much more! Get started with Konnect Cloud.

License

Copyright 2016-2024 Kong Inc.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

   https://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

Kong's People

Contributors

add-sp, bungle, catbro666, chronolaw, darrenjennings, dependabot[bot], dndx, fffonion, flrgh, gszr, hanshuebner, hbagdi, hishamhm, hutchic, james-callahan, javierguerragiraldez, jschmid1, kikito, locao, mayocream, oowl, outsinre, p0pr0ck5, samugi, sonicaghi, starlightibuki, subnetmarco, thibaultcha, tieske, windmgc


Kong's Issues

"bin/kong migrate" should be embedded into "bin/kong start"

The bin/kong migrate command should be embedded into bin/kong start and it should try to migrate the database to the latest version.

If a database already exists, then bin/kong start should alert the user, asking them to migrate with a message like:

Kong cannot start because your database needs to be migrated. Would you like to start a migration to a new schema (y/n)?

In this way we're making the first execution less painful, and we're also alerting the user when a newer version of Kong is running against an older schema, without incurring runtime exceptions later on.

Finish Cassandra DAO

  • Don't rely on models anymore, use it in a more functional way, such as:
factory.apis:insert {
  name = "api 1",
  public_dns = "hello.com",
  target_url = "http://httpconsole.org"
}
  • Static queries, prepared on start
  • Cassandra constraints: unique checks, exists checks for unique and foreign entities since not natively supported by Cassandra.
  • INSERT, UPDATE, SELECT, DELETE + tests
  • Dynamic SELECT (allowed fields)
  • deserialize SELECT output (json encoded/type)
  • Metrics INCREMENT + DELETE
  • Check all UNIQUE and FOREIGN fields from schema are written into __exists, __unique (tests or in :prepare)
  • Paginated SELECT
  • Proper error types
  • immutable property for values that cannot be updated

Keep track of migrations

In order to have a fully working migrations system, Kong should keep track of the last migration in order to properly revert, or properly go up.

write standard script for daemons

upstart is dead; systemd is the new hotness.

Write a standard systemd service script for running Kong as a daemon, which would use the CLI from #3.

Alternatives include Upstart, runit, and OpenRC.

Leverage Lapis utilities

Lapis has built-in cache, JSON encode/decode, base64... We should leverage them instead of our custom methods: http://leafo.net/lapis/reference/utilities.html

Eventually, it also has schemas and migrations, though unfortunately they only support PostgreSQL. Writing a Cassandra adapter would be a great contribution from Mashape. We could discuss this with the author of Lapis.

(cross referencing #18)

How to distribute (release process) Kong?

Find a way to ship Kong and an easy way to install/run Cassandra for test purposes.

Ideas to experiment:

  • See if Kong can be bundled in Openresty
  • Deb/RPM packages
  • Chef cookbook?
  • Bash script for installation
  • Use the Makefile
  • Docker/Vagrant
  • Ship a Cassandra test cluster in Docker/Vagrant

Things required to run Kong:

  • Lua
  • Luarocks
  • Openresty
  • Cassandra
  • Our luarocks dependencies to install
  • A Cassandra cluster

An idea could be that on the website, the user lands on a Download page asking for their distribution; once they select it, we display how to install Kong on that distribution (Deb vs. RPM, for example) as a list of commands they can copy/paste.


installation from source: missing dependency: libpcre3-dev

I'm installing from scratch (started with nothing), installed luarocks, ran sudo make, and got the following:

Error: Failed installing dependency: http://luarocks.org/repositories/rocks/lrexlib-pcre-2.7.2-1.src.rock - Could not find expected file pcre.h for PCRE -- you may have to install PCRE in your system and/or pass PCRE_DIR or PCRE_INCDIR to the luarocks command. Example: luarocks install lrexlib-pcre PCRE_DIR=/usr/local

this is resolved with:

sudo apt-get install libpcre3-dev

Do not rely on configuration enabled_plugins order

Currently we rely on the enabled_plugins property to determine which plugins must be executed, and the defined order matters in the execution (authentication must be executed before ratelimiting for example).

Execution order of the core plugins should be hard-coded, not depend on the user. And since they are core plugins, we could change this property to disabled_plugins, eventually.

[GH] Communication channels

As an open source project, we need to choose our communication channels. Having something like:

Communication

If you need help, use Stack Overflow. (Tag 'kong')
If you'd like to ask a general question, use Stack Overflow.
If you found a bug, open an issue.
If you have a feature request, open an issue.
If you want to contribute, submit a pull request.

And/or use Gitter, and/or a freenode channel.

Inconsistent error messages from DAO

I expect DAO errors to be returned in the following form:

{
  database = true,
  message = "localhost could not be resolved (3: Host not found)"
}

But sometimes they show up in an inconsistent format, like:

{
  database = true,
  message = {
    database = true,
    message = "localhost could not be resolved (3: Host not found)"
  }
}

This specific example can be replicated by setting an invalid IP address/host in the Cassandra hosts property, starting the server and making a request.

Review queries with "ALLOW FILTERING"

Review all the queries done with ALLOW FILTERING on and make sure there is a better way of executing those queries by modeling the primary keys and indexes in a better way.
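As a hypothetical illustration (not the actual Kong schema): a query filtered on a non-key column forces ALLOW FILTERING, while a dedicated lookup table keyed on that column serves the same query directly:

```sql
-- Requires ALLOW FILTERING: name is neither the partition key nor indexed.
SELECT * FROM apis WHERE name = 'api 1' ALLOW FILTERING;

-- Better modeling: a lookup table whose partition key is the queried column.
CREATE TABLE IF NOT EXISTS apis_by_name (
  name text,
  id uuid,
  PRIMARY KEY (name, id)
);
SELECT * FROM apis_by_name WHERE name = 'api 1';
```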

/bin/kong global script

The global script for Kong should invoke commands such as migrate, start, stop etc...

We need to consider if we are going to write it in Lua or Bash. Lua will be installed on the machine anyways to run Kong and we could leverage tools already written such as migrate and seed.

The Faker sometimes fails in development

Failure → ./spec/unit/dao/cassandra_spec.lua @ 308
Cassandra DAO #dao #cassandra :update() plugins should return nil if no entity was found to update in DB
./spec/unit/dao/cassandra_spec.lua:322: Expected to be falsy, but value was:
(table): {
  [message] = 'Plugin already exists'
  [unique] = true }
make[1]: *** [run-integration-tests] Error 1
make: *** [test-all] Error 2

Base DAO exception (concatenate field)

I'm receiving the following exception when creating a plugin:

curl -d "name=authentication&api_id=ID&value={\"authentication_type\":\"query\"}" 127.0.0.1:8001/plugins/
/usr/local/share/lua/5.1/kong/dao/cassandra/base_dao.lua:100: attempt to concatenate field 'message' (a table value)
stack traceback:
    /usr/local/share/lua/5.1/kong/dao/cassandra/base_dao.lua:100: in function '_check_unique'
    /usr/local/share/lua/5.1/kong/dao/cassandra/plugins.lua:79: in function '_check_unicity'
    /usr/local/share/lua/5.1/kong/dao/cassandra/plugins.lua:115: in function 'insert'
    .../local/share/lua/5.1/kong/web/routes/base_controller.lua:69: in function 'handler'
    /usr/local/share/lua/5.1/lapis/application.lua:393: in function 'resolve'
    /usr/local/share/lua/5.1/lapis/application.lua:402: in function </usr/local/share/lua/5.1/lapis/application.lua:400>
    [C]: in function 'xpcall'
    /usr/local/share/lua/5.1/lapis/application.lua:400: in function 'dispatch'
    /usr/local/share/lua/5.1/lapis/nginx.lua:181: in function 'serve'
    content_by_lua(nginx.conf:102):2: in function <content_by_lua(nginx.conf:102):1>

site: general TODO + discussions

Currently @rainum and I are partially on the website, and @thefosk is writing the documentation.

Both will need to be reviewed.

TODO list:

  • 404 Page #43
  • Landing page
  • Benchmarking "blog post"
  • Contribute page
  • Support page
  • Plugins page
  • Download page
    • Download instructions for each platform
    • Subscribe form
  • Docs
    • Tutorials
    • Screencasts
  • Remove alpha banner

Infinite loop on spec_helpers.stop_kong() if tailing logs

This:

os.execute("while ! [ `ps aux | grep nginx | grep -c -v grep` -gt 0 ]; do sleep 1; done")

never ends if I'm running the tests while tailing nginx_tmp/logs/error.log because nginx_tmp makes grep think that nginx is still running.
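One way to avoid the false positive (a sketch, assuming pgrep is available on the system) is to match the process name exactly instead of grepping the full ps output, so a tail session whose arguments contain nginx_tmp is not counted:

```shell
# Wait until no process named exactly "nginx" remains.
# pgrep -x matches the process name only, so `tail -f nginx_tmp/logs/error.log`
# does not produce a false positive the way `ps aux | grep nginx` does.
while pgrep -x nginx > /dev/null; do
  sleep 1
done
echo "nginx has stopped"
```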

Use plain form parameters when valuing the properties of a plugin

Right now the plugin-related properties are sent as a JSON object in a value form field, like:

curl -d 'name=ratelimiting&api_id=ID&value={"period":"second", "limit": 10}' http://127.0.0.1:8001/plugins/

I believe that accepting those parameters in plain form values instead of encoding them into a JSON object will make the interface cleaner, so the same request becomes:

curl -d 'name=ratelimiting&api_id=ID&period=second&limit=10' http://127.0.0.1:8001/plugins/

Internally nothing will change in the way we validate the schema. Thoughts? @thibaultcha

Show a nice error when Cassandra is not available

Right now the following log shows up:

nginx: [error] init_by_lua error: /usr/local/share/lua/5.1/kong/dao/cassandra/base_dao.lua:272: Failed to prepare statement: DELETE FROM apis WHERE id = ?;. Error: Failed to read frame header from nil: closed
stack traceback:
    [C]: in function 'error'
    /usr/local/share/lua/5.1/kong/dao/cassandra/base_dao.lua:272: in function 'prepare'
    /usr/local/share/lua/5.1/kong/dao/cassandra/factory.lua:78: in function 'prepare'
    /usr/local/share/lua/5.1/kong.lua:60: in function 'init'
    init_by_lua:1: in main chunk

Cassandra schema possible improvements

Mainly just writing thoughts down here, and trying to discuss the limitations of a future schema. Not a priority at the moment.

The current schema was built using different column families for accounts, applications, apis, plugins. We should probably handle relations the way Cassandra handles them:

Relations

CREATE TYPE applications(
  public_key text, -- This is the public
  secret_key text, -- This is the secret key, it could be an apikey or basic password
  created_at timestamp
);

CREATE TABLE IF NOT EXISTS accounts(
  id uuid,
  provider_id text,
  applications set<applications>,
  created_at timestamp,
  PRIMARY KEY (id)
);

CREATE INDEX ON accounts(applications);

Here, the index would allow us to query the accounts table by application's values (especially public_key, as it is the only value that will be queried), but it needs to happen like this:

SELECT * FROM accounts WHERE applications CONTAINS ('abcd'); -- 'abcd' being a public_key
  • This model is a better fit for relations in Cassandra, less data duplication
  • Not 100% sure about the efficiency of querying a User Defined Type column vs. a plain text field as we do currently. Also, as mentioned, not sure whether a set can be paginated if it has many entities...
  • We still have to check the unicity of a public_key, like currently

The same applies for plugins. They are currently a table on their own, but a plugin is attached to an API, and optionally to an application.

Community plugins

Plugins from the community will have to use the value property and encode their data to store things. We could provide them with a way of creating a table, or a UDT.

Finish refactoring proxy/controllers

  • Refactor controllers to support the new DAO
  • Refactor proxy to support the new DAO
    • Refactor ratelimiting (careful, new algorithm)
    • Refactor authentication
      • Support the public_key, secret_key model chosen with unicity and encryption for BASIC and API Key
  • Integration tests must run in kong_tests keyspace too
  • Gather constants in constants.lua such as header names

Default folders and configuration location

When installing Kong with a package manager, we can create default folders for the output and for the configuration file. This will require to implement the following changes in the bin/kong script:

  • If the system already has a configuration file at /etc/kong.yml, then it takes precedence over any other configuration file unless explicitly specified with -c
  • If the system has /var/log/kong folder, then it takes precedence over any other output folder, unless explicitly specified with -o

Better code coverage setup

Right now coverage runs through busted, during unit testing, and does not include Kong's core (main.lua and core/).

One solution would be to unit test Kong's core.

An ideal way would be to run them during our integration tests. Having branch testing would be even better.

Caching/Invalidation strategy for APIs, Consumers and Plugins

Kong fetches Api, Application and Plugin data from the datastore on every request. The performance of the system can be improved by implementing either a caching strategy, or an invalidation strategy.

The caching strategy is easier to build, but inefficient, since data will be fetched again every n seconds, even the data that didn't change.

The invalidation strategy is more efficient, but harder to build, since every node needs to invalidate the data, and there needs to be a mechanism that keeps the invalidations in sync on every node. There are two ways of implementing the invalidation strategy:

  • Every time an Api, Application or Plugin is updated/deleted, the system iterates over all the nodes in the cluster and issues a PURGE HTTP request to invalidate the data. This means each node needs to be aware of the other nodes in the cluster, so we need to introduce the concept of cluster and cluster nodes in the Lua code. This can have performance issues on large installations: in a 100-node system it means 100 PURGE requests per invalidation.
  • Another option is storing the invalidation data in the datastore, in an appropriate invalidations table. Each node is responsible for periodically checking the table, invalidating only the data that has been inserted into it, and recording in the same table the number of nodes that have deleted the data. When that number matches the number of nodes in the cluster, the data is assumed to have been invalidated on every node and can be removed from the table. This can lead to problems when an invalidation happens at the same time as nodes are being added or removed.
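The second option could be sketched as a table like the following (hypothetical names, not an actual Kong schema):

```sql
-- Each row records one invalidated entity; every node polls this table.
CREATE TABLE IF NOT EXISTS invalidations (
  entity_type text,      -- 'api', 'application' or 'plugin'
  entity_id uuid,
  invalidated_at timestamp,
  nodes_done set<text>,  -- identifiers of nodes that have already purged the entry
  PRIMARY KEY (entity_type, entity_id)
);

-- A node that has purged an entry adds itself to nodes_done:
UPDATE invalidations SET nodes_done = nodes_done + {'node-1'}
  WHERE entity_type = 'api' AND entity_id = 123e4567-e89b-12d3-a456-426614174000;
```

When the size of nodes_done reaches the cluster size, the row can be deleted.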

This issue is a work in progress and more options may be available.

Replace default error responses

The 500 error response returns an html page, it should be JSON. There's probably other default error pages that nginx may serve that should be converted as well.

[screenshot: nginx's default HTML 500 error page]

Lua CLI

Let's switch our current CLI from a bash script to a Lua script. Here are reasons why and features we can accomplish:

  • By having a Lua binary, we are closer to Kong's source code and can leverage any module (migrations.lua, dao factories...)
  • luarocks install kong would install Kong with our .rockspec, copying the bin into the user's $PATH. This solves our #51 issue by having this desired "simple way" to install Kong.
  • We can separate every CLI command in sub-scripts, which is way more maintainable
  • We can install plugins via a kong install <plugin> command that leverages luarocks. It could download the plugin, verify the signature, and prompt questions in the terminal to configure the installed plugin (ex: Do you want to enable this freshly installed plugin on your machine? [y/n], etc.)
  • We can provide native kong start/stop scripts with nicer stdout/stderr outputs.
  • We can migrate from the CLI (including the first migration) instead of wrapping a lua script from bash or migrating once kong is running (in Nginx like currently).
  • We can unit test the CLI with busted
  • A Lua CLI can be easily extended in the future; adding support for "add-user" or cluster-related information will be much easier.

TODO list:

  • Global kong bin with sub-binaries for each command
  • Convert existing scripts to Lua (start/stop/migrate/rollback/reset/seed/drop)
  • Default config loading system (/etc/kong/kong.yml or {rocks_path}/kong/conf/kong.yml)
  • Convert the Makefile
  • Clean up tools.utils and cmd.utils now that we have a CLI utils file.
  • Switch first migrations from kong.lua to be bundled in the start command.
  • Configuration validation

installation from source: missing dependency: luasec, libpcre3-dev

luasec

failed at:

Error: Failed installing dependency: http://luarocks.org/repositories/rocks/busted-2.0.rc6-0.rockspec - Failed installing dependency: http://luarocks.org/repositories/rocks/mediator_lua-1.1-3.rockspec - Error fetching file: Failed downloading https://github.com/Olivine-Labs/mediator_lua/archive/v1.1.tar.gz - Unsupported protocol - install luasec to get HTTPS support.

attempting to install with:

sudo luarocks install luasec

results in the following error:

Error: Could not find expected file libssl.a, or libssl.so, or libssl.so.* for OPENSSL -- you may have to install OPENSSL in your system and/or pass OPENSSL_DIR or OPENSSL_LIBDIR to the luarocks command. Example: luarocks install luasec OPENSSL_DIR=/usr/local

this is resolved with:

sudo luarocks install luasec OPENSSL_LIBDIR=/usr/lib/`gcc -print-multiarch`

libpcre3-dev

failed at:

Error: Failed installing dependency: http://luarocks.org/repositories/rocks/lrexlib-pcre-2.7.2-1.src.rock - Could not find expected file pcre.h for PCRE -- you may have to install PCRE in your system and/or pass PCRE_DIR or PCRE_INCDIR to the luarocks command. Example: luarocks install lrexlib-pcre PCRE_DIR=/usr/local

this is resolved with:

sudo apt-get install libpcre3-dev
