go-redis / redis_rate

Rate limiting for go-redis

Home Page: https://redis.uptrace.dev/guide/go-redis-rate-limiting.html

License: BSD 2-Clause "Simplified" License

Languages: Go 99.09%, Makefile 0.91%
Topics: gcra, leaky-bucket, rate-limiting, redis

redis_rate's People

Contributors

alphacentory, anmic, beihai0xff, danehammer, gggwvg, heynemann, johannesvietze, nilyang, rbg, renovate-bot, sashati, vmihailenco


redis_rate's Issues

Does the lua script have a length limit?

I got the following error when I used the Allow function:
ERR Protocol error: expected '$', got 'n'
res, err := limiter.Allow(client.Ctx, key, limit)
It works fine once I remove all of the comments from the lua script, so is there a limit on script length?

question

Is reset an epoch value? If so, I found a slight bug. I am writing unit tests for my own project and noticed something.

for i := 0; i < 2; i++ {
    for a := 0; a < 10; a++ {
        // ... requests here
        // ... sleep here when unauth
    }
}

If you run that, parse reset into a date, and sleep for

then - now

it is a fraction of a second off.
Luckily, adding 1 to reset literally solves the problem.

-- Sorry this might be caused by not waiting for nanoseconds. That is my fault.

Unable to import library in project

When I attempt to go get the library

$ go get gopkg.in/go-redis/rate.v5
package gopkg.in/go-redis/rate.v5: unrecognized import path "gopkg.in/go-redis/rate.v5" (parse https://gopkg.in/go-redis/rate.v5?go-get=1: no go-import meta tags)

If I clone via the github url and attempt to use the library I get:

code in directory /go/src/github.com/go-redis/redis_rate expects import "gopkg.in/go-redis/rate.v5"

So as far as I can tell, it is not currently possible to import and use this library.
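
(For reference, newer releases appear to be plain Go modules fetched from the GitHub path, e.g. go get github.com/go-redis/redis_rate/v9 — the module path used by the test output later on this page — which avoids the gopkg.in redirect entirely.)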

is it concurrency safe?

My limit is 60. When I have 6 concurrent requests, the 60th request is not restricted; requests only get restricted once there are more than 60. This issue was not present in the v6 version, but is present in the v9 and v10 versions.

README.md Example Does Not Work - Rate Library Function Signature Changed

The example in the README.md no longer works. The call rate.NewLimiter has a new method signature which requires an additional parameter.

Here are the errors I get when I try to run the example code verbatim:

main/main.go:49: not enough arguments in call to rate.NewLimiter
main/main.go:51: limiter.Fallback undefined (type *rate.Limiter has no field or method Fallback)
main/main.go:51: undefined: timerate in timerate.NewLimiter
main/main.go:54: cannot use limiter (type *rate.Limiter) as type *redis_rate.Limiter in argument to handler
main/main.go:58: cannot use limiter (type *rate.Limiter) as type *redis_rate.Limiter in argument to statusHandler

Feature Request: As part of the response return the total number of attempts

The current structure is

type Result struct {
	Limit *Limit
	Allowed bool
	Remaining int
	RetryAfter time.Duration
	ResetAfter time.Duration
}

It would be nice if the structure also returned the number of attempts that were made e.g

type Result struct {
	Limit *Limit
	Allowed bool
	Remaining int
	TotalNumberOfAttempts int // tracks how many requests were made after the request has been rate limited
	RetryAfter time.Duration
	ResetAfter time.Duration
}

This would allow for more robust logging around user behavior, making it easy to tune limit values to suit the application's needs. Obviously this would also require a change to the lua script to track that...

Action Required: Fix Renovate Configuration

There is an error with this repository's Renovate configuration that needs to be fixed. As a precaution, Renovate will stop PRs until it is resolved.

Error type: undefined. Note: this is a nested preset so please contact the preset author if you are unable to fix it yourself.

Redis Pool Timeout over 1500 rps

The library gives redis: connection pool timeout when rps crosses 1500. The default value for PoolTimeout is ReadTimeout + 1 second, which is 4 seconds in my case. For instance, if I hit a particular microservice with 1800 rps for 10 seconds (a total of 18K requests), the library returned redis timeouts for around 5K requests. Is there any solution for this, apart from increasing the timeout?

why burst_offset is necessary

In allowN's script I found burst_offset, which, curiously, does not seem necessary. In fact, increment alone seems enough to calculate the remaining tokens.
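
For what it's worth, here is a rough Go rendering of the GCRA decision that the Lua script appears to implement (an illustration under that assumption, not the library's code); it suggests burst_offset is exactly what lets more than one request through per emission interval:

package main

import (
	"fmt"
	"time"
)

// gcraAllow sketches the allow decision: the theoretical arrival time (tat) advances by one
// emission interval per allowed request, and a request passes while now >= newTAT - burstOffset.
// With burstOffset removed (zero), allowAt would equal newTAT, and since now is always before
// tat + emissionInterval every request would be rejected; burstOffset = burst * emissionInterval
// is what grants an allowance of up to `burst` back-to-back requests.
func gcraAllow(tat, now time.Time, rate, burst int, period time.Duration) (bool, time.Time) {
	emissionInterval := period / time.Duration(rate)
	burstOffset := time.Duration(burst) * emissionInterval

	if tat.Before(now) {
		tat = now
	}
	newTAT := tat.Add(emissionInterval)
	allowAt := newTAT.Add(-burstOffset)
	if now.Before(allowAt) {
		return false, tat // denied: tat is left unchanged
	}
	return true, newTAT // allowed: newTAT is what gets stored back in Redis
}

func main() {
	tat := time.Now()
	for i := 0; i < 4; i++ {
		ok, next := gcraAllow(tat, time.Now(), 1, 3, time.Second)
		fmt.Println(i, ok) // with burst 3, the first 3 calls pass immediately, the 4th is denied
		tat = next
	}
}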

redis_rate.PerDay() ??

Hi, thank you for developing this really useful rate limiter!! I'm now about to use this library😃

And I thought: what about supporting redis_rate.PerDay()?
As you know, currently we only have PerSecond(), PerMinute() and PerHour(). However, some people (not only me, maybe) might want to set limits per day.

If it doesn't sound like a bad idea, I would like to work on it😊
Would be really happy if you gave me a reply...!
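
For what it's worth, the implementation would presumably be a small constructor in the same shape as the existing per-unit helpers; a sketch under that assumption (not the library's code), assuming the redis_rate.Limit value type shown elsewhere on this page and import "time":

// Hypothetical PerDay, mirroring PerSecond/PerMinute/PerHour.
func PerDay(rate int) redis_rate.Limit {
	return redis_rate.Limit{
		Rate:   rate,
		Burst:  rate,
		Period: 24 * time.Hour,
	}
}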

v is nil, which causes a panic

https://github.com/go-redis/redis_rate/blob/8eadf45ee4d9d7a53189c2a968ead40521a76322/rate.go#L107C28-L107C28
Using allowN.Eval in place of allowN.Run makes the test pass.

values := []interface{}{limit.Burst, limit.Rate, limit.Period.Seconds(), n}
// v, err := allowN.Run(ctx, l.rdb, []string{redisPrefix + key}, values...).Result()
v, err := allowN.Eval(ctx, l.rdb, []string{redisPrefix + key}, values...).Result()
if err != nil {
	return nil, err
}
values = v.([]interface{})
From github.com/redis/go-redis/v9 script.go:
// Run optimistically uses EVALSHA to run the script. If script does not exist
// it is retried using EVAL.
func (s *Script) Run(ctx context.Context, c Scripter, keys []string, args ...interface{}) *Cmd {
	r := s.EvalSha(ctx, c, keys, args...)
	if HasErrorPrefix(r.Err(), "NOSCRIPT") { // maybe the bug is here?
		return s.Eval(ctx, c, keys, args...)
	}
	return r
}

change rate on the fly

Hi,
is it possible to change the limit on the fly?
Please provide an example of changing the rate.
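
In the v9-style API used elsewhere on this page, the limit is an argument to every Allow call rather than state stored on the Limiter, so a hedged sketch of "changing the rate on the fly" is simply passing a different Limit on the next call (the helper name and key are illustrative, not part of the library):

// allowWithDynamicRate re-reads the rate on every call; nothing about the rate is stored
// in the Limiter, only the GCRA state for the key lives in Redis.
func allowWithDynamicRate(ctx context.Context, limiter *redis_rate.Limiter, key string, ratePerSec int) (*redis_rate.Result, error) {
	// ratePerSec can come from config, a feature flag, etc. and may differ between calls.
	return limiter.Allow(ctx, key, redis_rate.PerSecond(ratePerSec))
}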

RetryAfter returns unexpected values

Hi,

First of all, thanks for your awesome work.

I'm playing with the package and I encountered an unexpected (for me) behaviour.

Given the following example:

package main

import (
	"log"
	"time"

	"github.com/go-redis/redis/v7"
	"github.com/go-redis/redis_rate/v8"
)

func main() {
	const (
		perPeriod = 1
		period    = 10 * time.Second
	)

	rdb := redis.NewClient(&redis.Options{
		Addr: "localhost:6379",
	})
	defer rdb.Close()

	limiter := redis_rate.NewLimiter(rdb)
	limit := &redis_rate.Limit{
		Burst:  perPeriod,
		Rate:   perPeriod,
		Period: period,
	}

	for {
		r, _ := limiter.Allow("key", limit)
		log.Println(r.RetryAfter)
		time.Sleep(time.Second / 5)
	}
}

I get the following output:

2020/11/12 09:36:51 -1ns
2020/11/12 09:36:51 9.795602008s
2020/11/12 09:36:51 9.592261001s
2020/11/12 09:36:51 9.385677009s
2020/11/12 09:36:51 9.178840011s
2020/11/12 09:36:52 8.974091008s
2020/11/12 09:36:52 8.770157009s
2020/11/12 09:36:52 8.566715002s
2020/11/12 09:36:52 8.362652003s
2020/11/12 09:36:52 8.158610999s
2020/11/12 09:36:53 7.952590003s
2020/11/12 09:36:53 7.747349008s
2020/11/12 09:36:53 7.544164001s
2020/11/12 09:36:53 7.341191008s
2020/11/12 09:36:53 7.134954005s
2020/11/12 09:36:54 6.929434001s
2020/11/12 09:36:54 6.726941004s
2020/11/12 09:36:54 6.520847007s
2020/11/12 09:36:54 6.316104009s
2020/11/12 09:36:54 6.113963007s
2020/11/12 09:36:55 5.911712005s
2020/11/12 09:36:55 5.708161011s
2020/11/12 09:36:55 5.50307101s
2020/11/12 09:36:55 5.299281001s
2020/11/12 09:36:55 5.095963001s
2020/11/12 09:36:56 -1ns
2020/11/12 09:36:56 14.689835011s
2020/11/12 09:36:56 14.486311003s
2020/11/12 09:36:56 14.281031012s
2020/11/12 09:36:56 14.075329005s
2020/11/12 09:36:57 13.869887009s
2020/11/12 09:36:57 13.666203007s
2020/11/12 09:36:57 13.461514011s
2020/11/12 09:36:57 13.25852701s
2020/11/12 09:36:58 13.056652009s
2020/11/12 09:36:58 12.854091003s
2020/11/12 09:36:58 12.651368007s
2020/11/12 09:36:58 12.446737006s
2020/11/12 09:36:58 12.244395002s
2020/11/12 09:36:59 12.042552009s
2020/11/12 09:36:59 11.838225007s
2020/11/12 09:36:59 11.633044004s
2020/11/12 09:36:59 11.428791001s
2020/11/12 09:36:59 11.224536001s
2020/11/12 09:37:00 11.017291009s
2020/11/12 09:37:00 10.814773008s
2020/11/12 09:37:00 10.612246006s
2020/11/12 09:37:00 10.409669011s
2020/11/12 09:37:00 10.207376003s
2020/11/12 09:37:01 10.005622997s
2020/11/12 09:37:01 9.802719011s
2020/11/12 09:37:01 9.598593011s
2020/11/12 09:37:01 9.396362006s
2020/11/12 09:37:01 9.19143401s
2020/11/12 09:37:02 8.988325998s
2020/11/12 09:37:02 8.783522009s
2020/11/12 09:37:02 8.579552009s
2020/11/12 09:37:02 8.376174002s
2020/11/12 09:37:02 8.173828005s
2020/11/12 09:37:03 7.971050009s
2020/11/12 09:37:03 7.765211999s
2020/11/12 09:37:03 7.56295301s
2020/11/12 09:37:03 7.359366998s
2020/11/12 09:37:03 7.154352009s
2020/11/12 09:37:04 6.951729997s
2020/11/12 09:37:04 6.745942011s
2020/11/12 09:37:04 6.543076008s
2020/11/12 09:37:04 6.337754011s
2020/11/12 09:37:04 6.13114801s
2020/11/12 09:37:05 5.925363004s
2020/11/12 09:37:05 5.723029002s
2020/11/12 09:37:05 5.518610998s
2020/11/12 09:37:05 5.313848003s
2020/11/12 09:37:05 5.104800999s
2020/11/12 09:37:06 -1ns
2020/11/12 09:37:06 14.694633007s

The first time, the operation is allowed again after half the period (after 5 seconds instead of 10).
On the next tries it takes exactly 10 seconds to allow the operation, which is fine. What bothers me is that the RetryAfter value starts at almost 15 seconds and the next operation is allowed while RetryAfter is still around 5 seconds.

Is there any reason for this or am I doing something wrong?

Thanks,
Ivan

Tests are failing when actual redis server is used

I was porting this module to get it working with github.com/gomodule/redigo. After running the tests, a few of them were failing. It was surprising since I did not change any logical code.

I then ran these tests with a live redis server instead and noticed that the same tests are failing.

Here is the diff if you want to test this yourself

 func rateLimiter() *redis_rate.Limiter {
-	ring := redis.NewRing(&redis.RingOptions{
-		Addrs: map[string]string{"server0": ":6379"},
-	})
-	if err := ring.FlushDB(context.TODO()).Err(); err != nil {
-		panic(err)
-	}
-	return redis_rate.NewLimiter(ring)
+	client := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
+	return redis_rate.NewLimiter(client)
 }
 
 func TestAllow(t *testing.T) {

The below tests are failing:

go test ./...
--- FAIL: TestAllowN_IncrementZero (0.00s)
    rate_test.go:82: 
        	Error Trace:	rate_test.go:82
        	Error:      	Not equal: 
        	            	expected: 0
        	            	actual  : 10
        	Test:       	TestAllowN_IncrementZero
--- FAIL: TestRetryAfter (0.00s)
    assertion_compare.go:240: 
        	Error Trace:	rate_test.go:121
        	Error:      	"998632997" is not less than or equal to "1000000"
        	Test:       	TestRetryAfter
        	Messages:   	[]
--- FAIL: TestAllowAtMost (0.00s)
    rate_test.go:133: 
        	Error Trace:	rate_test.go:133
        	Error:      	Not equal: 
        	            	expected: 0
        	            	actual  : 1
        	Test:       	TestAllowAtMost
--- FAIL: TestAllowAtMost_IncrementZero (0.00s)
    rate_test.go:190: 
        	Error Trace:	rate_test.go:190
        	Error:      	Not equal: 
        	            	expected: 0
        	            	actual  : 10
        	Test:       	TestAllowAtMost_IncrementZero
FAIL
FAIL	github.com/go-redis/redis_rate/v9	0.007s
FAIL

Possible Performance Issue

I'm testing this implementation in production right now on a server with 30 QPS.

I'm running the slowlog command, here is a snippet of the output:

1) 1) (integer) 1935
     2) (integer) 1587949200
     3) (integer) 11632
     4) 1) "evalsha"
        2) "822c0f0d1e578396a10ba06ccc5b63c1be3cb8ed"
        3) "2"
        4) "LIMIT:1a6eabc2-62e1-4b6a-94b8-ae41b74574d1"
        5) "{LIMIT:1a6eabc2-62e1-4b6a-94b8-ae41b74574d1}:S"
        6) "1587949200025"
        7) "30"
        8) "60000"
     5) "10.11.3.252:54926"
     6) ""
  2) 1) (integer) 1934
     2) (integer) 1587858734
     3) (integer) 12448
     4) 1) "evalsha"
        2) "822c0f0d1e578396a10ba06ccc5b63c1be3cb8ed"
        3) "2"
        4) "LIMIT:2d17915e-fc01-4a51-b424-60165b8b0171"
        5) "{LIMIT:2d17915e-fc01-4a51-b424-60165b8b0171}:S"
        6) "1587858734731"
        7) "30"
        8) "60000"
     5) "10.11.167.140:57346"
     6) ""
  3) 1) (integer) 1933
     2) (integer) 1585314000
     3) (integer) 14452
     4) 1) "evalsha"
        2) "822c0f0d1e578396a10ba06ccc5b63c1be3cb8ed"
        3) "2"
        4) "LIMIT:f7e74bd2-05da-4bc8-845c-527bd31951a5"
        5) "{LIMIT:f7e74bd2-05da-4bc8-845c-527bd31951a5}:S"
        6) "1585314000009"
        7) "30"
        8) "60000"
     5) "10.11.3.252:53512"
     6) ""

I can see that the lua script takes over 10ms per request. I'm using ElastiCache on AWS (cache.r5.large, redis engine 4.0.10). This is a single node without replication. 10ms is a lot of time, and I wonder if the script can be optimized further. This would bottleneck other parts of my production (> 1000 QPS). The redis node itself is still seriously underutilized, but this blocking call would stampede the system. I love the go-redis package and have been using it for years.

Any thoughts or suggestions on how to bring down the latency on the script itself? Or perhaps using a different implementation altogether is the best approach.

Update: it starts to degrade in performance just above 60 QPS and then has trouble recovering once the QPS comes down, with spikes in timeouts (50ms read timeout) every 5 minutes.

questions about allowN and allowAtMost

Q1: Why does the code in function allowN:

if remaining < 0 then
  local reset_after = tat - now
  local retry_after = diff * -1
  return {
    0, -- allowed
    0, -- remaining
    tostring(retry_after),
    tostring(reset_after),
  }
end

check whether remaining is less than zero? Shouldn't it be less than one when signalling that the current request cannot be processed, like in function allowAtMost:

if remaining < 1 then
  local reset_after = tat - now
  local retry_after = emission_interval - diff
  return {
    0, -- allowed
    0, -- remaining
    tostring(retry_after),
    tostring(reset_after),
  }
end

Q2: I cannot understand the specific difference between these two functions; can you explain in more detail? Thank you~

Rate and Burst configurations are not clear

Hi, there. Thank you for providing such a simple and useful library.

Some of the configuration options confuse me. What do 'Rate' and 'Burst' mean? How can I set them to achieve 'a maximum of 2 requests per 10 minutes'?

Hope to add some explanatory comments.
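
For the concrete example asked about, a hedged sketch: Rate is how many requests are replenished per Period, and Burst caps how many can be consumed back-to-back, so "a maximum of 2 per 10 minutes" would look roughly like this (assuming the Limit struct and v9-style Allow shown elsewhere on this page; the key is illustrative):

limit := redis_rate.Limit{
	Rate:   2,                // requests replenished every Period
	Burst:  2,                // at most 2 may be consumed back-to-back
	Period: 10 * time.Minute,
}
res, err := limiter.Allow(ctx, "user:42", limit)

Note that GCRA admits up to Burst immediately and then keeps refilling at Rate per Period, so the very first window can see slightly more than 2 (the "Way too many events allowed when burst is not 1" issue later on this page shows the same effect); lowering Burst to 1 makes the limit stricter.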

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Open

These updates have all been created already. Click a checkbox below to force a retry/rebase of any.

Detected dependencies

gomod
go.mod
  • go 1.19
  • github.com/redis/go-redis/v9 v9.0.2
  • github.com/stretchr/testify v1.8.1

  • Check this box to trigger a request for Renovate to run again on this repository

Way too many events allowed when burst is not 1

Run:

package main

import (
	"log"
	"time"

	"github.com/go-redis/redis/v7"
	"github.com/go-redis/redis_rate/v8"
)

func main() {
	const (
		perPeriod = 17
		period    = 30 * time.Second
	)

	rdb := redis.NewClient(&redis.Options{
		Addr: "localhost:6379",
	})
	defer rdb.Close()

	limiter := redis_rate.NewLimiter(rdb, &redis_rate.Limit{
		Burst:  perPeriod,
		Rate:   perPeriod,
		Period: period,
	})

	allowed := 0
	start := time.Now()

	for i := 0; i < 30; i++ {
		r, _ := limiter.Allow("key")
		if r.Allowed {
			allowed++
		}

		time.Sleep(time.Second / 4)

		r, _ = limiter.Allow("key")
		if r.Allowed {
			allowed++
		}

		time.Sleep(3 * time.Second / 4)
	}

	log.Printf("Allowed %d in %v, expected %d", allowed, time.Since(start), perPeriod)
}

And I get:

2019/08/31 18:25:53 Allowed 32 in 30.073504489s, expected 17

The rate was set to 17 per 30 seconds, but each run allows much more than 17. Change burst to 1:

2019/08/31 18:28:47 Allowed 18 in 30.079242485s, expected 17

Which makes a little more sense. If you make the burst go even higher, then you can end up allowing every event, but I'm assuming a burst above the rate is not a correct configuration.

Export rediser interface

Hi, we're using redis_rate to build a middleware rate limiter, and we keep a copy of the rediser interface to describe our redis client. Would it be possible to export this interface so we can share it, or is it unexported for a good reason?

Update godoc version in repo about

The godoc URL in the repo description points to an older version of the docs; it should navigate to https://pkg.go.dev/github.com/go-redis/redis_rate/v9

What's the difference between rate and burst?

	limit := redis_rate.Limit{
		Rate:   limitCfg.Burst,
		Burst:  limitCfg.Burst,
		Period: time.Duration(limitCfg.Second) * time.Second,
	}

Many times the same values are used. What's the difference between them?
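
A short, hedged way to see the difference: Rate is the sustained refill speed per Period, while Burst is the instantaneous capacity; when they are equal the distinction disappears, which is why many examples set them to the same value. They matter separately once they differ, e.g.:

// Up to 20 requests can pass at once, after which requests are admitted at roughly 10 per second.
limit := redis_rate.Limit{
	Rate:   10,
	Burst:  20,
	Period: time.Second,
}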

Add Reset method

Add a method to reset a given key.

func (l *Limiter) Reset(key string) error
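
A minimal sketch of what this could look like, assuming the limiter state is stored under the redisPrefix-namespaced key referenced in the snippet quoted earlier on this page, and that the client exposes Del as in the Rediser interface shown below (names here are illustrative, not the library's API):

// Reset drops the stored GCRA state for a key, so the next Allow starts from a clean slate.
func (l *Limiter) Reset(ctx context.Context, key string) error {
	return l.rdb.Del(ctx, redisPrefix+key).Err()
}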

Add HTTP middleware

It'd be nice to provide a simple HTTP middleware to avoid the boilerplate of adding the headers.
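
A rough sketch of such a middleware, assuming the v9-style API where Allow takes a context and Result.Allowed is a count (0 when the request is rejected); the handler shape, key function, and header names are illustrative choices, not part of the library:

package middleware

import (
	"net/http"
	"strconv"

	"github.com/go-redis/redis_rate/v9"
)

// RateLimit wraps a handler and rejects requests once the per-key limit is exhausted,
// exposing RateLimit-Remaining and Retry-After headers along the way.
func RateLimit(limiter *redis_rate.Limiter, limit redis_rate.Limit, key func(*http.Request) string) func(http.Handler) http.Handler {
	return func(next http.Handler) http.Handler {
		return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			res, err := limiter.Allow(r.Context(), key(r), limit)
			if err != nil {
				http.Error(w, err.Error(), http.StatusInternalServerError)
				return
			}
			w.Header().Set("RateLimit-Remaining", strconv.Itoa(res.Remaining))
			if res.Allowed < 1 { // nothing was allowed: reject and tell the client when to retry
				w.Header().Set("Retry-After", strconv.Itoa(int(res.RetryAfter.Seconds())))
				http.Error(w, "rate limit exceeded", http.StatusTooManyRequests)
				return
			}
			next.ServeHTTP(w, r)
		})
	}
}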

Latency benchmarks

I was wondering if there is any publicly available data on latency benchmarks for the redis_rate client.

From the code, any Allow() call in the rate limiter runs EVALSHA, where Redis pushes arguments onto a Lua stack, runs the Lua script, and then pops the value off, which has significant latency implications for Redis. Running the script also blocks other clients.

I have verified this by running the SLOWLOG command where the evalsha cmd in Allow() takes up to 15ms.

Rate limiter not working as expected

Perhaps this is an implementation issue. See the following file which shows the per second rate not working. If you set the rate limiter to 100 per second and you have 101 requests, it should take more than a second to finish since only 100/101 could have run in the first second and the last request would have to wait until the next second.

package main

import (
	"context"
	"github.com/go-redis/redis/v8"
	"github.com/go-redis/redis_rate/v9"
	"github.com/stretchr/testify/require"
	"os"
	"sync"
	"sync/atomic"
	"testing"
	"time"
)

type Limiter interface {
	Allow(ctx context.Context) (time.Duration, error)
}

type Rediser interface {
	Eval(ctx context.Context, script string, keys []string, args ...interface{}) *redis.Cmd
	EvalSha(ctx context.Context, sha1 string, keys []string, args ...interface{}) *redis.Cmd
	ScriptExists(ctx context.Context, hashes ...string) *redis.BoolSliceCmd
	ScriptLoad(ctx context.Context, script string) *redis.StringCmd
	Del(ctx context.Context, keys ...string) *redis.IntCmd
}

func NewRedisLimiter(r Rediser, key string, perSec int) Limiter {
	return &redisLimiter{
		limiter: redis_rate.NewLimiter(r),
		key:     key,
		perSec:  perSec,
	}
}

type redisLimiter struct {
	limiter *redis_rate.Limiter
	key     string
	perSec  int
}

func (l *redisLimiter) Allow(ctx context.Context) (time.Duration, error) {
	r, err := l.limiter.Allow(ctx, l.key, redis_rate.PerSecond(l.perSec))
	if err != nil {
		return 0, err
	}

	return r.RetryAfter, nil
}

func TestRedisLimiter_Allow(t *testing.T) {
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()

	radd := os.Getenv("REDIS_ADDR") // set this in the env to host:port
	opts := redis.Options{Addr: radd}
	rc := redis.NewClient(&opts)
	defer rc.Close()

	perSecond := 100 // set the per second rate

	var val int64
	limiter := NewRedisLimiter(rc, "TestRedisLimiter_Allow", perSecond)
	runs := perSecond + 1 // one more than the per second rate (last request should be in the next second)
	wg := sync.WaitGroup{}
	wg.Add(runs)
	start := time.Now()
	for i := 0; i < runs; i++ {
		go func() {
			defer wg.Done()

			retryAfter, err := limiter.Allow(ctx)
			require.NoError(t, err)

			for retryAfter > 0 {
				time.Sleep(retryAfter)
				retryAfter, err = limiter.Allow(ctx)
				require.NoError(t, err)
			}

			atomic.AddInt64(&val, 1)
		}()
	}
	wg.Wait()
	elapsed := time.Since(start)
	require.GreaterOrEqual(t, elapsed, time.Second) // one more than the per second rate (last request should be in the next second)
	require.Equal(t, runs, int(val))
}

Here is a docker-compose.yml that should be able to run the test via docker compose up test after go mod init && go mod tidy:

version: "3.8"

services:
  test:
    image: golang:1.18
    volumes:
      - .:/app
    working_dir: /app
    environment:
      - REDIS_ADDR=redis:6379
    command: sh -c "go test -coverprofile=cover.out ./... -race && go tool cover -html=cover.out -o cover.html"
    depends_on:
      redis:
        condition: service_healthy


  redis:
    image: redis:alpine
    expose:
      - "6379"
    healthcheck:
      test: ["CMD", "redis-cli","ping"]

The only thing I can think of is that maybe the first second doesn't really count somehow? I tried 100 per second for 1000 total and am getting a little over 9 seconds.

how to clear old keys?

If at any point we want to clear the data (created and used by the rate limiter) from redis, how do we do it?

Something like:

rateLimiter.Clear("key") or rateLimiter.Flush()
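
There is no such helper shown on this page, but since the limiter state is just a Redis key, a hedged workaround is to delete it directly; the "rate:" prefix below is an assumption about how the library namespaces its keys (check redisPrefix in the source):

// Assumed key prefix; verify against redisPrefix in rate.go before relying on it.
err := rdb.Del(ctx, "rate:"+key).Err()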

How to deal with this speed limit scheme?

Usage Plans:

Plan type   Rate (requests per second)   Burst
Default     0.0055                       20

How do you use your library to handle this kind of restriction?
I am a client making requests to another server's API.
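
One hedged way to express that plan with this library: Rate must be an integer, so convert 0.0055 requests/second to a longer period (0.0055 × 3600 ≈ 20 per hour) and use the plan's burst directly; a sketch assuming the Limit struct shown elsewhere on this page:

limit := redis_rate.Limit{
	Rate:   20,        // ~0.0055 requests/second, expressed per hour
	Burst:  20,        // the plan's burst
	Period: time.Hour,
}
res, err := limiter.Allow(ctx, "api-key", limit)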

Check if available without limiting

It would be nice to have something like "IsAllowed" which would return a boolean. This is for being able to block or allow a request without "dropping into the bucket". I'm using this for the creation of a process which will at times require polling, and I want to "drop into the bucket" only when the process has been successfully created, not right when the request is made.

Greetings
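
One hedged option with the current API is to call AllowN with n set to 0, which the TestAllowN_IncrementZero test quoted above exercises; it reports the limiter state without consuming anything (the key is illustrative):

// Peek at the limiter state without spending a token (n = 0); only "drop into the bucket"
// later, once the process has actually been created.
res, err := limiter.AllowN(ctx, "process:create:"+userID, limit, 0)
if err != nil {
	return err
}
isAllowed := res.Remaining > 0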
