styrainc / opa-kafka-plugin


License: Apache License 2.0


Open Policy Agent plugin for Kafka authorization


Open Policy Agent (OPA) plugin for Kafka authorization.

Prerequisites

  • Kafka 2.7.0+
  • Java 11 or above
  • OPA installed and running on the brokers

Installation

Download the latest OPA authorizer plugin jar from Releases (or Maven Central) and put the file (opa-authorizer-{$VERSION}.jar) somewhere Kafka can find it - either directly in Kafka's libs directory or in a separate plugin directory provided to Kafka at startup, e.g.:

CLASSPATH=/usr/local/share/kafka/plugins/*

To activate the opa-kafka-plugin, add the authorizer.class.name property to server.properties:

authorizer.class.name=org.openpolicyagent.kafka.OpaAuthorizer


The plugin supports the following properties:
| Property Key | Example | Default | Description |
| --- | --- | --- | --- |
| opa.authorizer.url | http://opa:8181/v1/data/kafka/authz/allow | | URL of the OPA policy decision to query. [required] |
| opa.authorizer.allow.on.error | false | false | Whether to fail open (allow) or fail closed (deny) when the call to OPA fails. |
| opa.authorizer.cache.initial.capacity | 5000 | 5000 | Initial decision cache size. |
| opa.authorizer.cache.maximum.size | 50000 | 50000 | Maximum decision cache size. |
| opa.authorizer.cache.expire.after.seconds | 3600 | 3600 | Decision cache expiry in seconds. |
| opa.authorizer.metrics.enabled | true | false | Whether to expose JMX metrics for monitoring. |
| super.users | User:alice;User:bob | | Super users that are always allowed. |
| opa.authorizer.truststore.path | /path/to/mytruststore.p12 | | Path to the PKCS12 truststore for HTTPS requests to OPA. |
| opa.authorizer.truststore.password | ichangedit | changeit | Password for the truststore. |
| opa.authorizer.truststore.type | PKCS12, JKS, or any type your JVM supports | PKCS12 | Type of the truststore. |
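For reference, a minimal server.properties fragment wiring these properties together might look like this (the OPA URL and cache values are illustrative, not recommendations):

```properties
# Activate the OPA authorizer plugin
authorizer.class.name=org.openpolicyagent.kafka.OpaAuthorizer

# Policy decision to query (illustrative endpoint)
opa.authorizer.url=http://opa:8181/v1/data/kafka/authz/allow

# Fail closed if the call to OPA fails
opa.authorizer.allow.on.error=false

# Decision cache (values shown are the defaults)
opa.authorizer.cache.initial.capacity=5000
opa.authorizer.cache.maximum.size=50000
opa.authorizer.cache.expire.after.seconds=3600

# Expose JMX metrics
opa.authorizer.metrics.enabled=true
```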

Usage

Example structure of the input data provided by opa-kafka-plugin to Open Policy Agent:

{
    "action": {
        "logIfAllowed": true,
        "logIfDenied": true,
        "operation": "DESCRIBE",
        "resourcePattern": {
            "name": "alice-topic",
            "patternType": "LITERAL",
            "resourceType": "TOPIC",
            "unknown": false
        },
        "resourceReferenceCount": 1
    },
    "requestContext": {
        "clientAddress": "192.168.64.1",
        "clientInformation": {
            "softwareName": "unknown",
            "softwareVersion": "unknown"
        },
        "connectionId": "192.168.64.4:9092-192.168.64.1:58864-0",
        "header": {
            "data": {
                "clientId": "rdkafka",
                "correlationId": 5,
                "requestApiKey": 3,
                "requestApiVersion": 2
            },
            "headerVersion": 1
        },
        "listenerName": "SASL_PLAINTEXT",
        "principal": {
            "name": "alice-consumer",
            "principalType": "User"
        },
        "securityProtocol": "SASL_PLAINTEXT"
    }
}
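A policy can key off any of these input fields. As an illustrative sketch only (the principal and topic names below are assumptions, not part of the shipped sample policy), a rule matching the input above could look like:

```rego
package kafka.authz

default allow = false

# Illustrative rule: let principal "alice-consumer" DESCRIBE topics
# whose name starts with "alice-" (names chosen to match the example input)
allow {
    input.requestContext.principal.principalType == "User"
    input.requestContext.principal.name == "alice-consumer"
    input.action.operation == "DESCRIBE"
    input.action.resourcePattern.resourceType == "TOPIC"
    startswith(input.action.resourcePattern.name, "alice-")
}
```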

The following table summarizes the supported resource types and operation names.

| input.action.resourcePattern.resourceType | input.action.operation |
| --- | --- |
| CLUSTER | CLUSTER_ACTION |
| CLUSTER | CREATE |
| CLUSTER | DESCRIBE |
| GROUP | READ |
| GROUP | DESCRIBE |
| TOPIC | CREATE |
| TOPIC | ALTER |
| TOPIC | DELETE |
| TOPIC | DESCRIBE |
| TOPIC | READ |
| TOPIC | WRITE |
| TRANSACTIONAL_ID | DESCRIBE |
| TRANSACTIONAL_ID | WRITE |

These are handled by the method authorizeAction and passed to OPA with an action that identifies the accessed resource and the performed operation. patternType is always LITERAL.

Creating a topic first checks CLUSTER + CREATE. If that is denied, Kafka then checks TOPIC with its name + CREATE.

When doing an idempotent write to a topic, and the first request for operation=IDEMPOTENT_WRITE on resourceType=CLUSTER is denied, Kafka calls the method authorizeByResourceType to check whether the user has the right to write to any topic. If yes, the idempotent write is granted by Kafka's ACL implementation. To allow for a similar check, the call is mapped to OPA with patternType=PREFIXED, resourceType=TOPIC, and name="".

{
  "action": {
    "logIfAllowed": true,
    "logIfDenied": true,
    "operation": "DESCRIBE",
    "resourcePattern": {
      "name": "",
      "patternType": "PREFIXED",
      "resourceType": "TOPIC",
      "unknown": false
    },
    "resourceReferenceCount": 1
  },
  ...
}
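A policy that wants to grant this resource-type check could match the mapped request explicitly. The following is an illustrative sketch, not part of the shipped sample policy; tighten it to your own conventions:

```rego
# Illustrative sketch: grant the authorizeByResourceType probe, which
# arrives as a PREFIXED topic resource with an empty name (see above)
allow {
    input.action.resourcePattern.resourceType == "TOPIC"
    input.action.resourcePattern.patternType == "PREFIXED"
    input.action.resourcePattern.name == ""
}
```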

It's likely possible to use all of the different resource types and operations described in the Kafka API docs:

  • https://kafka.apache.org/24/javadoc/org/apache/kafka/common/acl/AclOperation.html
  • https://kafka.apache.org/24/javadoc/org/apache/kafka/common/resource/ResourceType.html

Security protocols:

| Protocol | Description |
| --- | --- |
| PLAINTEXT | Un-authenticated, non-encrypted channel |
| SASL_PLAINTEXT | Authenticated, non-encrypted channel |
| SASL_SSL | Authenticated, SSL channel |
| SSL | SSL channel |

More info:

https://kafka.apache.org/24/javadoc/org/apache/kafka/common/security/auth/SecurityProtocol.html

Policy sample

With the sample policy (rego) you get, out of the box, a structure where an "owner" can have one user per type (consumer, producer, mgmt). The owner and user type are separated by -.

  • Username structure: <owner>-<type>
  • Topic name structure: <owner->.*


Example:
User alice-consumer will be...

  • allowed to consume on topic alice-topic1
  • allowed to consume on topic alice-topic-test
  • denied to produce on any topic
  • denied to consume on topic bob-topic

See sample rego
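The core of that ownership convention might be sketched in Rego roughly as follows. This is a simplified illustration, not the sample policy itself (the real policy also handles producers, mgmt users, and certificate-based usernames):

```rego
package kafka.authz

default allow = false

# Simplified sketch: a "<owner>-consumer" user may READ topics
# whose name starts with "<owner>-"
allow {
    username := input.requestContext.principal.name
    endswith(username, "-consumer")
    owner := trim_suffix(username, "consumer")  # keeps the trailing "-"
    input.action.operation == "READ"
    input.action.resourcePattern.resourceType == "TOPIC"
    startswith(input.action.resourcePattern.name, owner)
}
```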

Build from source

Using gradle wrapper: ./gradlew clean test shadowJar

The resulting jar (with dependencies embedded) will be named opa-authorizer-{$VERSION}-all.jar and stored in build/libs.

Logging

Set the log level with log4j.logger.org.openpolicyagent=INFO in config/log4j.properties. Use DEBUG or TRACE for debugging.

In a busy Kafka cluster it can pay off to tweak the cache, since the plugin may otherwise produce a lot of log entries in Open Policy Agent, especially if decision logs are turned on. If the policy isn't updated very often, it's recommended to cache aggressively to improve performance and reduce the number of log entries.
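For example, a deliberately aggressive cache configuration could look like this (illustrative values only; tune them to your cluster size and policy-update cadence):

```properties
# Cache many decisions and keep them for a day (illustrative values)
opa.authorizer.cache.maximum.size=500000
opa.authorizer.cache.expire.after.seconds=86400
```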

Monitoring

The plugin exposes some metrics that can be useful in operation.

  • opa.authorizer:type=authorization-result
    • authorized-request-count: number of allowed requests
    • unauthorized-request-count: number of denied requests
  • opa.authorizer:type=request-handle
    • request-to-opa-count: number of HTTP requests sent to OPA to get authorization results
    • cache-hit-rate: cache hit rate; the cache miss rate is 1 - cache-hit-rate
    • cache-usage-percentage: the ratio of current cache size to maximum cache capacity

Community

For questions, discussions and announcements related to Styra products, services and open source projects, please join the Styra community on Slack!

opa-kafka-plugin's People

Contributors

akatona84, anderseknert, iamatwork, irodzik, kelvk, lukasz-kaminski, mdanielolsson, quangminhtran94, scholzj, xhl1988


opa-kafka-plugin's Issues

1.5.1 release doesn't have .jar artifact

would it be possible to add a .jar artifact for 1.5.1 as it does for 1.5.0?

  • It is easier to upgrade the existing pipeline using the 1.5.0 artifact.
  • I am having issues building on my M1 Mac.
  • some clients frown when 3rd party software has to be built, unfortunately.

If there is another place to find the artifact, that would be great -- I didn't find it in my Maven repo search.

Thank you.

Export metrics for Authorizer

Currently, there are no metrics recorded related to authorization. Therefore, it is hard to keep track of how the authorizer is doing. I propose some metrics that should be recorded here:

  • Number of authorized/unauthorized requests: This can be useful to check if there is an abnormal number of unauthorized requests coming to our cluster
  • Cache hit rate: This can be useful to tune cache configuration such as expiration time
  • Cache capacity usage: Because caching is very important for performance

Slack discussion: https://cloud-native.slack.com/archives/CMH3Q3SNP/p1641481042006600

Add option to "chain" authorizers

This feature request would introduce the possibility of gradual rollout of OPA in an enterprise environment which has multiple independent teams using one central production Kafka cluster.

As an example, in our organization we use ACLs for topic access. Moving hundreds and hundreds of topics from topic ACLs to OPA en masse would be next to impossible. What would be great for us is if the authorizer consulted the ACL authorizer kafka.security.authorizer.AclAuthorizer first and, if no ACL exists, then forwarded the request to OPA for authorization.

That would allow us to remove ACLs from a small group of topics and add them to OPA in small chunks, team by team and code base by code base.

Some background is available in this slack thread: https://openpolicyagent.slack.com/archives/CBR63TK2A/p1664790781658039

Allow cache invalidation on policy update (or custom event)

Since caching is essential for good performance it makes sense to cache aggressively - both in terms of cache size but also in keeping the TTL of cached objects high. The downside of this is of course that policy (or policy data) changes take a long time to have an effect.

To tackle this while maintaining high performance, we should look into the possibilities to invalidate the cache on changes.

Some options to explore could be:

  • allow programmatic invalidation at some predefined endpoint.
  • hook into the currently unimplemented [add|get|remove]Acls methods on the authorizer interface to do cache invalidation.
  • use OPA's status API mechanism to know when policy or data changed.

Publishing 1.1.0 to Maven Central

It looks like the last version available from Maven Central is 1.0.0, but the GitHub repo seems to already have 1.1.0. Any chance you can push the 1.1.0 release to Maven Central as well? Thanks

Create example policies

We should include a couple of useful example policies that may be used as a base for custom policy development.

Create first release

Once verified on latest Kafka, create first release in git and on Github and setup a release flow in the Github actions for the repository.

Wrong Request sent by Kafka to OPA

I ran a local kafka 3.4.0 broker, and enabled mTLS with self signed certificates and using OPA policies as ACLs. I am using the example OPA policy posted in the project :

Policy

package kafka.authz

import future.keywords.in

default allow = false

allow {
    inter_broker_communication
}

allow {
    consume(input.action)
    on_own_topic(input.action)
    as_consumer
}

allow {
    produce(input.action)
    on_own_topic(input.action)
    as_producer
}

allow {
    create(input.action)
    on_own_topic(input.action)
}

allow {
    any_operation(input.action)
    on_own_topic(input.action)
    as_mgmt_user
}

allow {
    input.action.operation == "READ"
    input.action.resourcePattern.resourceType == "GROUP"
}

allow {
    describe(input.action)
}

inter_broker_communication {
    input.requestContext.principal.name == "ANONYMOUS"
}

inter_broker_communication {
    input.requestContext.securityProtocol == "SSL"
    input.requestContext.principal.principalType == "User"
    username == "localhost"
}

consume(action) {
    action.operation == "READ"
}

produce(action) {
    action.operation == "WRITE"
}

create(action) {
    action.operation == "CREATE"
}

describe(action) {
    action.operation == "DESCRIBE"
}

any_operation(action) {
    action.operation in ["READ", "WRITE", "CREATE", "ALTER", "DESCRIBE", "DELETE"]
}

as_consumer {
    regex.match(".*-consumer", username)
}

as_producer {
    regex.match(".*-producer", username)
}

as_mgmt_user {
    regex.match(".*-mgmt", username)
}

on_own_topic(action) {
    owner := trim(username, "-consumer")
    regex.match(owner, action.resourcePattern.name)
}

on_own_topic(action) {
    owner := trim(username, "-producer")
    regex.match(owner, action.resourcePattern.name)
}

on_own_topic(action) {
    owner := trim(username, "-mgmt")
    regex.match(owner, action.resourcePattern.name)
}

username = cn_parts[0] {
    name := input.requestContext.principal.name
    startswith(name, "CN=")
    parsed := parse_user(name)
    cn_parts := split(parsed.CN, ".")
} else = input.requestContext.principal.name {
    true
}

parse_user(user) = {key: value |
    parts := split(user, ",")
    [key, value] := split(parts[_], "=")
}

Producer

On running a console producer to produce to the topic bob-topic:
bin/kafka-console-producer.sh --bootstrap-server localhost:9094 --producer.config bob.producer.config --topic bob-topic

I get the error:
org.apache.kafka.common.errors.ClusterAuthorizationException: Cluster authorization failed.

This is the request received by OPA from the Kafka Broker :

{"client_addr":"127.0.0.1:51806","level":"info","msg":"Received request.","req_body":"{"input":{"requestContext":{"clientAddress":"/127.0.0.1","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.4.0"},"connectionId":"127.0.0.1:9094-127.0.0.1:51818-0","header":{"name":{"clientId":"console-producer","correlationId":2,"requestApiKey":22,"requestApiVersion":4},"headerVersion":2},"listenerName":"SSL", "principal":{"principalType":"User", "name": "CN=bob-producer"},"securityProtocol": "SSL"},
"action":{"resourcePattern":{"resourceType":"TOPIC", "name":"" ,"patternType": "PREFIXED" ,"unknown":false}, "operation": "WRITE", "resourceReferenceCount":0, "logIfAllowed":true, "logIfDenied":true}}}" , "req_id":8,"req_method":"POST","req_params":{},"req_path":"/v1/data/kafka/authz/allow","time":"2023-03-30T11:31:25-07:00"}

Notice the
{"resourcePattern":{"resourceType":"TOPIC", "name":"" ,"patternType": "PREFIXED" ,"unknown":false}
which I believe is the cause of the error.

If it was
{"resourcePattern":{"resourceType":"TOPIC", "name":"bob-topic" ,"patternType": "LITERAL" ,"unknown":false}
then OPA would have allowed the access.

Is this a bug, or am I doing something wrong?

Not implemented methods of Authorizer

Some methods of the Authorizer are not implemented by OpaAuthorizer but are called by Kafka under certain circumstances:
Writing to a topic with ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG=true but no transactional_id produces multiple requests to OPA:

  • DESCRIBE TOPIC "mytopic" (granted by my rules)
  • IDEMPOTENT_WRITE CLUSTER, which is denied by my rules.
  • WRITE TOPIC "hardcode" - this comes from Kafka's authorizeByResourceType(..) at Authorizer.java:182, which is called by KafkaApis.scala:2131 in case the previous request was denied.

This leads to multiple retries by the client, and exceptions in Kafka's log.

The comment at Authorizer.authorizeByResourceType(..) states "check if the caller is authorized to perform the given ACL operation on at least one resource of the given type".
A quick fix might be to implement the remaining methods to return an empty collection.

I tried to implement a better solution, which maps Authorizer.authorizeByResourceType to a call to OPA with PatternType.PREFIXED and resource.name="", so OPA can make the decision like AclAuthorizer would.
https://github.com/iamatwork/opa-kafka-plugin/tree/imlpement-authorize-by-resource-type

ERROR kafka.server.KafkaApis - [KafkaApi-3] Unexpected error handling request RequestHeader(apiKey=INIT_PRODUCER_ID, apiVersion=4, clientId=kafka-opa-decisionlog-test-on-lm-98b61dc9-b098-409d-9583-08aa5650f0ce-write-idemPotent-, correlationId=2) -- InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) with context RequestContext(header=RequestHeader(apiKey=INIT_PRODUCER_ID, apiVersion=4, clientId=kafka-opa-decisionlog-test-on-lm-98b61dc9-b098-409d-9583-08aa5650f0ce-write-idemPotent-, correlationId=2), connectionId='10.30.7.136:9092-10.21.72.98:50773-349', clientAddress=/10.21.72.98, principal=User:XXXXX, listenerName=ListenerName(OUTSIDE), securityProtocol=SSL, clientInformation=ClientInformation(softwareName=apache-kafka-java, softwareVersion=3.0.0), fromPrivilegedListener=false, principalSerde=Optional[org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder@6f9751a0])
scala.NotImplementedError: an implementation is missing
	at scala.Predef$.$qmark$qmark$qmark(Predef.scala:344)
	at com.bisnode.kafka.authorization.OpaAuthorizer.acls(OpaAuthorizer.scala:68)
	at org.apache.kafka.server.authorizer.Authorizer.authorizeByResourceType(Authorizer.java:213)
	at com.bisnode.kafka.authorization.OpaAuthorizer.authorizeByResourceType(OpaAuthorizer.scala:34)
	at kafka.server.AuthHelper.$anonfun$authorizeByResourceType$1(AuthHelper.scala:77)
	at kafka.server.AuthHelper.$anonfun$authorizeByResourceType$1$adapted(AuthHelper.scala:76)
	at scala.Option.forall(Option.scala:420)
	at kafka.server.AuthHelper.authorizeByResourceType(AuthHelper.scala:76)
	at kafka.server.KafkaApis.handleInitProducerIdRequest(KafkaApis.scala:2131)
	at kafka.server.KafkaApis.handle(KafkaApis.scala:191)
	at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:75)
	at java.base/java.lang.Thread.run(Thread.java:829)

use learnings from kafka plugin for pulsar plugin

Sorry, this is probably not the perfect place for this question...

Are there any plans to use the learnings from building this plugin (topic-level authorization for Kafka) for a plugin for the geo-replicated messaging and streaming platform Apache Pulsar? See https://pulsar.apache.org/

Pulsar's adoption is growing strongly, and since v2.7 it also supports topic-level policies: https://pulsar.apache.org/docs/2.11.x/admin-api-topics/

For authorization: https://pulsar.apache.org/docs/2.11.x/security-extending/

Add installer script

Add script to deploy jar + selected policy in correct location - and update configuration accordingly - in a single step.

Have the installer print what it's doing so that it's clear and easily reversible.

Incompatibility with Kafka v3.0+

Hi everyone,

I'm having a dependency issue while trying to run this setup.

  • OPA version: 0.35+
  • Kafka version: 3.X
  • Opa Kafka plugin version: 1.4.0
[2022-03-05 17:41:58,778] ERROR [KafkaApi-2] Unexpected error handling request RequestHeader(apiKey=UPDATE_METADATA, apiVersion=7, clientId=2, correlationId=0) -- UpdateMetadataRequestData(controllerId=2, controllerEpoch=3, brokerEpoch=276, ungroupedPartitionStates=[], topicStates=[], liveBrokers=[UpdateMetadataBroker(id=2, v0Host='', v0Port=0, endpoints=[UpdateMetadataEndpoint(port=9093, host='kafka', listener='CLIENT', securityProtocol=2), UpdateMetadataEndpoint(port=9092, host='kafka', listener='OUTSIDE', securityProtocol=1)], rack=null)]) with context RequestContext(header=RequestHeader(apiKey=UPDATE_METADATA, apiVersion=7, clientId=2, correlationId=0), connectionId='127.0.0.1:9092-127.0.0.1:51473-0', clientAddress=/127.0.0.1, principal=User:ANONYMOUS, listenerName=ListenerName(OUTSIDE), securityProtocol=SSL, clientInformation=ClientInformation(softwareName=unknown, softwareVersion=unknown), fromPrivilegedListener=true, principalSerde=Optional[org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder@2caf3347]) (kafka.server.KafkaApis)
com.google.common.util.concurrent.ExecutionError: java.lang.ExceptionInInitializerError
	at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2049)
	at com.google.common.cache.LocalCache.get(LocalCache.java:3951)
	at com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4848)
	at org.openpolicyagent.kafka.OpaAuthorizer.allowAccess$1(OpaAuthorizer.scala:154)
	at org.openpolicyagent.kafka.OpaAuthorizer.doAuthorize(OpaAuthorizer.scala:165)
	at org.openpolicyagent.kafka.OpaAuthorizer.authorizeAction(OpaAuthorizer.scala:121)
	at org.openpolicyagent.kafka.OpaAuthorizer.$anonfun$authorize$1(OpaAuthorizer.scala:49)
	at scala.collection.StrictOptimizedIterableOps.map(StrictOptimizedIterableOps.scala:99)
	at scala.collection.StrictOptimizedIterableOps.map$(StrictOptimizedIterableOps.scala:86)
	at scala.collection.convert.JavaCollectionWrappers$JListWrapper.map(JavaCollectionWrappers.scala:103)
	at org.openpolicyagent.kafka.OpaAuthorizer.authorize(OpaAuthorizer.scala:49)
	at kafka.server.AuthHelper.$anonfun$authorize$1(AuthHelper.scala:49)
	at kafka.server.AuthHelper.$anonfun$authorize$1$adapted(AuthHelper.scala:46)
	at scala.Option.forall(Option.scala:420)
	at kafka.server.AuthHelper.authorize(AuthHelper.scala:46)
	at kafka.server.AuthHelper.authorizeClusterOperation(AuthHelper.scala:54)
	at kafka.server.KafkaApis.handleUpdateMetadataRequest(KafkaApis.scala:336)
	at kafka.server.KafkaApis.handle(KafkaApis.scala:175)
	at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:75)
	at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.lang.ExceptionInInitializerError
	at org.openpolicyagent.kafka.AllowCallable.call(OpaAuthorizer.scala:271)
	at org.openpolicyagent.kafka.AllowCallable.call(OpaAuthorizer.scala:268)
	at com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4853)
	at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3529)
	at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2278)
	at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2155)
	at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2045)
	... 19 more
Caused by: com.fasterxml.jackson.databind.JsonMappingException: Scala module 2.10.5 requires Jackson Databind version >= 2.10.0 and < 2.11.0
	at com.fasterxml.jackson.module.scala.JacksonModule.setupModule(JacksonModule.scala:61)
	at com.fasterxml.jackson.module.scala.JacksonModule.setupModule$(JacksonModule.scala:46)
	at com.fasterxml.jackson.module.scala.DefaultScalaModule.setupModule(DefaultScalaModule.scala:17)
	at com.fasterxml.jackson.databind.ObjectMapper.registerModule(ObjectMapper.java:835)
	at com.fasterxml.jackson.databind.cfg.MapperBuilder.addModule(MapperBuilder.java:243)
	at org.openpolicyagent.kafka.AllowCallable$.<clinit>(OpaAuthorizer.scala:264)
	... 26 more

Immediately after this error the broker of course shuts down. In case logs are necessary, let me know.
Worth mentioning: there are no issues on versions below 3.0.

Not sure whether you encountered this in testing.

Principal data serialization

We were looking at replacing some custom authorization policies with OPA policies. The authorizations being replaced are based on OAuth 2.0 authentication, and thus the principal is derived from KafkaPrincipal, i.e., a subclass. The OAuth principal carries information on the claims from the OAuth JWT, which may be used for authorization in the rego policies.

However, the authorizer currently converts the principal explicitly to a KafkaPrincipal before serializing it to JSON and sending the request to OPA. This way we lose all the extra information from the JWT.

Would it be possible to change the principal serialization to support a more generic serialization supporting KafkaPrincipal subclasses?
