
jobs's Introduction

Jobs

Development Status

Jobs is no longer being actively developed. I will still try my best to respond to issues and pull requests, but in general you should not expect much support. No new features will be added. Still, Jobs is reasonably well-tested, and it is probably fine to use it for low-traffic hobby sites. If you are looking for something more serious for production use cases, consider alternatives such as RabbitMQ.

Jobs follows semantic versioning but offers no guarantees of backwards compatibility until version 1.0.

About

Jobs is a persistent and flexible background jobs library for Go.


Jobs is powered by Redis and supports the following features:

  • A job can encapsulate arbitrary functionality. A job can do anything which can be done in a Go function.
  • A job can be one-off (only executed once) or recurring (scheduled to execute at a specific interval).
  • A job can be retried a specified number of times if it fails.
  • A job is persistent, with protections against power loss and other worst case scenarios. (See the Guarantees section below)
  • Work on jobs can be spread amongst any number of concurrent workers across any number of machines.
  • Provided it is persisted to disk, every job will be executed at least once, and in ideal network conditions will be executed exactly once. (See the Guarantees section below)
  • You can query the database to find out e.g. the number of jobs that are currently executing or how long a particular job took to execute.
  • Any job that permanently fails will have its error captured and stored.

Why is it Useful?

Jobs is intended to be used in web applications. It is useful for cases where you need to execute some long-running code, but you don't want your users to wait for the code to execute before rendering a response. A good example is sending a welcome email to your users after they sign up. You can use Jobs to schedule the email to be sent asynchronously, and render a response to your user without waiting for the email to be sent. You could use a goroutine to accomplish the same thing, but in the event of a server restart or power loss, the email might never be sent. Jobs guarantees that the email will be sent at some time and allows you to spread the work between different machines.

Installation

Jobs requires Go version >= 1.2. If you do not already have it, follow the official Go installation instructions.

Jobs requires access to a Redis database. If you plan to have multiple worker pools spread out across different machines, they should all connect to the same Redis database. If you only want to run one worker pool, it is safe to install Redis locally and run it on the same machine. In either case, if you need to install Redis, follow these instructions:

  • Install Redis.
  • Follow the instructions in the section called "Installing Redis more properly".
  • Make sure you understand how Redis Persistence works and have edited your config file to get your desired persistence. We recommend using both RDB and AOF and setting fsync to either "always" or "everysec".

After that, you can install Jobs like you would any other Go package: go get github.com/albrow/jobs. If you want to update the package later, use go get -u github.com/albrow/jobs. Then you can import Jobs like you would any other Go package by adding import github.com/albrow/jobs to your Go source file.

Quickstart Guide

Connecting to Redis

You can configure the connection to Redis by editing Config.Db. Here are the options:

  • Address is the address of the Redis database to connect to. Default is "localhost:6379".
  • Network is the type of network to use to connect to the Redis database. Default is "tcp".
  • Database is the Redis database number to use for storing all data. Default is 0.
  • Password is a password to use for connecting to a Redis database via the AUTH command. If empty, Jobs will not attempt to authenticate. Default is "" (an empty string).

You should edit Config.Db during program initialization, before running Pool.Start or scheduling any jobs. Here's an example of how to configure Jobs to use database #10 and authenticate with the password "foobar":

func main() {
	// Configure database options at the start of your application
	jobs.Config.Db.Database = 10
	jobs.Config.Db.Password = "foobar"
}
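The same Config.Db options can also point Jobs at a unix socket instead of TCP. Here is a minimal sketch, assuming Redis is listening on /tmp/redis.sock (the same address the test suite uses):

func main() {
	// Connect over a unix socket instead of TCP
	jobs.Config.Db.Network = "unix"
	jobs.Config.Db.Address = "/tmp/redis.sock"
}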

Registering Job Types

Jobs must be organized into discrete types. Here's an example of how to register a job which sends a welcome email to users:

// We'll specify that we want the job to be retried 3 times before finally failing
welcomeEmailJobs, err := jobs.RegisterType("welcomeEmail", 3, func(user *User) error {
	msg := fmt.Sprintf("Hello, %s! Thanks for signing up for foo.com.", user.Name)
	if err := emails.Send(user.EmailAddress, msg); err != nil {
		// The returned error will be captured by a worker, which will then log the error
		// in the database and trigger up to 3 retries.
		return err
	}
	return nil
})

The final argument to the RegisterType function is a HandlerFunc which will be executed when the job runs. HandlerFunc must be a function which accepts either zero or one arguments and returns an error.
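For example, here is a sketch of registering a job type whose handler takes no arguments at all. The type name "cleanupTempFiles" and the removeExpiredTempFiles function are hypothetical, used only for illustration:

// A handler with zero arguments; the job carries no data.
cleanupJobs, err := jobs.RegisterType("cleanupTempFiles", 3, func() error {
	// removeExpiredTempFiles is a hypothetical function; return its error
	// so the worker can record it and trigger retries.
	return removeExpiredTempFiles()
})
if err != nil {
	// Handle err
}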

Scheduling a Job

After registering a job type, you can schedule a job using the Schedule or ScheduleRecurring methods like so:

// The priority argument lets you choose how important the job is. Higher
// priority jobs will be executed first.
job, err := welcomeEmailJobs.Schedule(100, time.Now(), &User{EmailAddress: "foo@example.com"})
if err != nil {
	// Handle err
}

You can use the Job object returned by Schedule or ScheduleRecurring to check on the status of the job or cancel it manually.
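ScheduleRecurring works the same way but adds a frequency. Here is a minimal sketch, assuming the frequency is passed as a time.Duration after the start time and that nil can be used as the data argument for a handler that takes no arguments (cleanupJobs is the hypothetical type registered above):

// Run the job once every 24 hours, starting now, with priority 100.
job, err := cleanupJobs.ScheduleRecurring(100, time.Now(), 24*time.Hour, nil)
if err != nil {
	// Handle err
}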

Starting and Configuring Worker Pools

You can schedule any number of worker pools across any number of machines, provided every machine agrees on the definition of the job types. If you want, you can start a worker pool on the same machines that are scheduling jobs, or you can have each worker pool running on a designated machine. Since each pool is assigned an id based on a unique hardware identifier, you must only run one worker pool per machine.

To create a new pool with the default configuration, just pass in nil:

pool, err := jobs.NewPool(nil)
if err != nil {
	// Handle err
}

You can also specify a different configuration by passing in *PoolConfig. Any zero values in the config you pass in will fall back to the default values. Here's how you could start a pool with 10 workers and a batch size of 10, while leaving the other options at their defaults:

pool, err := jobs.NewPool(&jobs.PoolConfig{
	NumWorkers: 10,
	BatchSize: 10,
})
if err != nil {
	// Handle err
}

After you have created a pool, you can start it with the Start method. Once started, the pool will continuously query the database for new jobs and delegate those jobs to workers. Any program that calls Pool.Start() should also wait for the workers to finish before exiting. You can do so by wrapping Close and Wait in a defer statement. Typical usage looks something like this:

func main() {
	pool, err := jobs.NewPool(nil)
	if err != nil {
		// Handle err
	}
	defer func() {
		pool.Close()
		if err := pool.Wait(); err != nil {
			// Handle err
		}
	}()
	if err := pool.Start(); err != nil {
		// Handle err
	}
}

You can also call Close and Wait at any time to manually stop the pool from executing new jobs. In this case, any jobs that are currently being executed will still finish.
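If you want the pool to shut down gracefully when the process receives an interrupt signal, one way to arrange that is to catch the signal yourself and then call Close and Wait. This is a sketch, not a feature of Jobs itself, and it assumes the os, os/signal, and syscall packages are imported:

func main() {
	pool, err := jobs.NewPool(nil)
	if err != nil {
		// Handle err
	}
	if err := pool.Start(); err != nil {
		// Handle err
	}
	// Wait for SIGINT or SIGTERM, then stop accepting new jobs and let
	// in-progress jobs finish before exiting.
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, os.Interrupt, syscall.SIGTERM)
	<-sigs
	pool.Close()
	if err := pool.Wait(); err != nil {
		// Handle err
	}
}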

Testing

To run the tests, make sure you have Redis running and accepting unix socket connections on the address /tmp/redis.sock. The tests will use database #14. WARNING: After each test is run, database #14 will be completely erased, so make sure you do not have any important data stored there.

To run the tests, just run go test. If anything fails, please report an issue and describe what happened.

Contributing

See Contributing.md

Guarantees

Persistence

Since Jobs is powered by Redis, there is a chance that you can lose data with the default Redis configuration. To get the best persistence guarantees, you should set Redis to use both AOF and RDB persistence modes and set fsync to "always". With these settings, Redis is more or less as persistent as a database like Postgres. If you want better performance and are okay with a slightly greater chance of losing data (i.e. jobs not executing), you can set fsync to "everysec".

Read more about Redis persistence.

Atomicity

Jobs is carefully written using Redis transactions and lua scripting so that all database changes are atomic. If Redis crashes in the middle of a transaction or script execution, it is possible that your AOF file can become corrupted. If this happens, Redis will refuse to start until the AOF file is fixed. It is relatively easy to fix the problem with the redis-check-aof tool, which will remove the partial transaction from the AOF file. In effect, this guarantees that modifications of the database are atomic, even in the event of a power loss or hard reset, with the caveat that you may need to use the redis-check-aof tool in the worst case scenario.

Read more about Redis transactions and scripts.

Job Execution

Jobs guarantees that a job will be executed at least once, provided it has been persisted on disk. (See the section on Persistence directly above.) A job can only be picked up by one pool at a time because a pool atomically pops (gets and immediately moves) the next available jobs from the database. A job can only be executed by one worker at a time because jobs are delegated to workers via a shared channel. Each worker pool checks on the health of all the other pools when it starts. If a pool crashes or is otherwise disconnected, any jobs it had grabbed from the database that did not yet finish will be re-queued and picked up by a different pool.

This is in no way an exhaustive list, but here are some known examples of scenarios that may cause a job to be executed more than once:

  1. If there is a power failure or hard reset while a worker is in the middle of executing a job, the job may be stuck in a half-executed state. Since there is no way to know how much of the job was successfully completed, the job will be re-queued and picked up by a different pool, where it may be partially or fully executed more than once.
  2. If a pool becomes disconnected, it will be considered stale and its jobs will be re-queued and reclaimed by a different pool. However, if the stale pool is able to partly or fully execute jobs without a reliable internet connection, any jobs belonging to the stale pool might be executed more than once. You can increase the StaleTimeout parameter for a pool to make this scenario less likely.

License

Jobs is licensed under the MIT License. See the LICENSE file for more information.

jobs's People

Contributors

albrow, cdrage, epelc, soroushjp, utrack, wuyongzhi


jobs's Issues

Make database/storage pluggable

It would be nice to be able to plug in any database that implements an interface instead of having to use Redis. For example, BoltDB could be an interesting backend that could also be embeddable.

Job runtime error messages should include a stack trace

If you have a job which panics, the only error that's recorded is a short error message. However, if it is a runtime error such as a nil pointer access or incorrect use of slices, this makes things very hard to debug because you do not have the stack trace. You end up with an error like the following in the Redis hash for your job, which is very vague in the case of runtime errors.

runtime error: invalid memory address or nil pointer dereference

I've found where this happens, and I think the stack should either be logged or also sent to Redis as the error message. But I'd be fine if it was just logged, since this won't happen often.
https://github.com/albrow/jobs/blob/master/worker.go#L39

Here's an example of logging the stack in case you've never done this before. If you want more examples, search for HTTP recovery middlewares, as they all have to do this.

stack := make([]byte, 1024*8)
stack = stack[:runtime.Stack(stack, false)]
log.Println("Panic stack:", err, string(stack))
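For reference, here is a minimal sketch of how that stack capture could sit inside a deferred recover. The executeWithRecovery wrapper is hypothetical (not the library's actual worker code), and it assumes the fmt, log, and runtime packages are imported:

// executeWithRecovery runs a job function, recovers from any panic, and
// logs the recovered value together with a stack trace.
func executeWithRecovery(job func() error) (err error) {
	defer func() {
		if r := recover(); r != nil {
			// Capture up to 8KB of the current goroutine's stack.
			stack := make([]byte, 1024*8)
			stack = stack[:runtime.Stack(stack, false)]
			log.Printf("recovered from panic: %v\n%s", r, stack)
			err = fmt.Errorf("job panicked: %v", r)
		}
	}()
	return job()
}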

Passing params / string name of job.

Similarly to Resque, you only have to pass in the name of the function ("HelloWorldJob") and a list of params { foo:"bar" }, etc.

Is this possible with jobs? (or perhaps in future #14 implementation?)

Rough example (more pseudo-code than Golang lol):

job, err := Schedule("HelloWorldJob", 100, time.Now(), "{EmailAddress: "foo@example.com"}")
if err != nil {
    // Handle err
}

instead of

job, err := welcomeEmailJobs.Schedule(100, time.Now(), &User{EmailAddress: "foo@example.com"})
if err != nil {
    // Handle err
}

That means that within whatever program (whether it is an HTTP server or whatnot), I don't have to define a package-level variable for every job type, like:

var (
 sche *jobs.Type
)

every time.

FindById and job identifiers/human names

So jobs.FindById attempts to retrieve a job by its unique ID, which is generated randomly, so it's not really possible to fetch a job by "some-identifier". Is there any way to list jobs by their readable name?

Error when calling Destroy() from within a job handler function

I've been seeing errors like the following lately. @albrow I was wondering if you could help me diagnose them.

ERR Error running script (call to f_7be0eed842fab7944ce1393b76bf8f46826c6656): @user_script:20: user_script:20: attempt to concatenate local 'Status' (a boolean value)

I found the line where this happens here.

I'm trying to destroy the job from within my handler function once it reaches a certain state. Is it not safe to do this?

When I look up the job id I just get finished and time fields using hgetall.

redis:6379[3]> hgetall jobs:bX8Vye9LGk80SDkbawj9qt3vvkpi
1) "finished"
2) "1435185638756948580"
3) "time"
4) "1435200038500419354"
redis:6379[3]> 

Is it supposed to leave the job in Redis after you destroy it? From the destroy_script.lua docs it doesn't sound like it.

Job status is changed to executing but job has not started execution

Hi,
I scheduled a job for 9/15/2017, 1:58:36 PM IST (as an epoch timestamp). The job moved to the executing status at that time but remained in the executing state for the next 5 minutes without actually starting. It finally started at 9/15/2017, 2:03:08 PM IST.
There were no other jobs to be executed, as this was the very first job.
Please have a look at this, as a delay this long will lead to problems in our application.

Screenshot: job details while the job is stuck waiting to execute.

Screenshot: the job once it has started.

super slow

I benchmarked a few job processing libraries that use Redis as a backend: https://github.com/gocraft/work#benchmarks

Most can do about 10,000 to 20,000 jobs/second. Using default options, the albrow/jobs library clocked in at 40 jobs/second. I was able to increase that by increasing the number of workers and the batch size, but I wasn't sure what you think good values for those params are.

Benchmark code: https://github.com/gocraft/work/blob/master/benches/bench_jobs/main.go

I'd love it if you could review my benchmark to see if I made a glaring mistake, and also what your thoughts are about good params for batch size.

Allow job handler functions to return an error

Errors could then be picked up by the worker and logged, and, if appropriate, the job would be queued for retry. Currently the only way to get this behavior is to panic when there are errors, which seems unidiomatic.

Panic in xen container

I have a Xen container, and this is the output of ip addr show:

ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: venet0: <BROADCAST,POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
    link/void 
    inet 127.0.0.2/32 scope host venet0
    inet V.W.X.Y/32 brd V.W.X.Y scope global venet0:0

This is a default container created using Proxmox, and I get a panic in getHardwareAddr.

panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xb code=0x1 addr=0x20 pc=0x4737a4]

Depending on such a thing is not a good idea. Why not allow setting the hardware id via a method? The developer should take care of uniqueness within the pool.

Add Redis Password as a Configuration Option

Currently, you cannot connect to a Redis database that is protected by a password. However, it's not hard at all to add this feature. The basic idea is to add a config variable Config.Db.Password. If set, then all connections will issue the AUTH command when they are initialized to authenticate with the database.
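A minimal sketch of the idea using redigo (assuming the github.com/garyburd/redigo/redis package is imported; this is illustrative, not the library's actual connection code):

// Dial a connection and, if a password is configured, issue AUTH
// before the connection is used.
conn, err := redis.Dial(jobs.Config.Db.Network, jobs.Config.Db.Address)
if err != nil {
	// Handle err
}
if password := jobs.Config.Db.Password; password != "" {
	if _, err := conn.Do("AUTH", password); err != nil {
		// Handle err
	}
}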

Reschedule with freq

Currently Reschedule() only allows you to set the start time. It doesn't let you change the frequency, which means I need to destroy the original job and create a new one. This is more error-prone and slower because I need to update reference job IDs in my DB to do this. It might also be a good idea to let us change a job's retry count.

Export job.freq

It would be useful if we could see the job frequency for recurring jobs.

job not exec with panic


My code:

var sche scheduler

type scheduler struct {
    tp           *jobs.Type
    golbalHeader map[string]string
}

func main() (err error) {
    tp, err := jobs.RegisterType(DEFAULT_JOB_TYPE, RETRY_TIMES, sche.getHandler())
    if err != nil {
        utils.Log().Error(err)
        return
    }
    sche.tp = tp
    pool, err := jobs.NewPool(nil)
    if err != nil {
        utils.Log().Error(err)
        return
    }
    defer func() {
        pool.Close()
        if err = pool.Wait(); err != nil {
            utils.Log().Error(err)
            return
        }
    }()
    if err = pool.Start(); err != nil {
        utils.Log().Error(err)
        return
    }
    return
}

... 



func (p *scheduler) getHandler() handler {
    return func(req *models.AddReq) error {
        utils.Log().Info("job start")
        post := gorequest.New().Post(req.Target)
        for k, v := range p.golbalHeader {
            post = post.Set(k, v)
        }
        for k, v := range req.Header {
            post = post.Set(k, v)
        }
        _, body, errs := post.Send(string(req.Body)).End()
        if len(errs) > 0 {
            utils.Log().Error(errs)
            return errs[0]
        }
        var rep response
        err := json.Unmarshal([]byte(body), &rep)
        if err != nil {
            utils.Log().Error(err)
            return err
        }
        if rep.Code != 0 {
            utils.Log().Error(rep.Code, rep.Message)
            return errors.New(rep.Message)
        }
        utils.Log().Info(rep)
        return nil
    }
}

Find a better way to purge stale pools and re-queue stale jobs

Currently, there is a process during initialization in which a pool pings all the other pools to determine whether any of them have gone down. If they have, any jobs belonging to those pools that were still executing are considered stale and are re-queued. This prevents jobs from remaining stale, as long as, whenever a worker pool machine goes down, it is either rebooted or another machine takes its place.

It would be better if this process occurred periodically instead of just on initialization. The frequency of the pings should be configurable.
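A rough sketch of what the periodic version might look like (pingLoop, purgeStalePools, and the done channel are hypothetical stand-ins, and the interval would come from the pool config):

// pingLoop pings the other pools on a fixed interval instead of only at
// initialization. purgeStalePools stands in for the existing
// check-and-requeue logic; done is closed when the pool shuts down.
func pingLoop(done <-chan struct{}) {
	ticker := time.NewTicker(30 * time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			if err := purgeStalePools(); err != nil {
				// Handle err
			}
		case <-done:
			return
		}
	}
}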

Support redis sentinel

@albrow Have you thought about adding sentinel support? I see you use the redigo package, which is what I also use. But it hasn't been updated much lately (the last commit was 3 months ago, and the one before that was 6 months old).

I was thinking about switching to go-redis/redis, which has built-in sentinel support and, I think, better connection pooling. As you can see it's a very active project, and they are also about to release v3, which has an even nicer API.

I know redigo has a fork which seems to support sentinel but to me it really seems like a dead project.

Use go generate to convert lua script files to strings

Over at albrow/zoom, it was brought to my attention that reading scripts from a file at runtime can cause problems for people using certain dependency managers, or people who want to release a binary executable. See PRs albrow/zoom#9, albrow/zoom#10, and albrow/zoom#11.

The solution I came up with was to use a small script (compatible with go generate) to read the contents from the .lua file and write them to a generated file as strings. I'm going to port a similar solution over to albrow/jobs. It will be a little bit more complicated because jobs uses templates for string constants in the lua scripts, but the general idea is the same. This is targeted for version 0.3.0 and will hopefully be released in the next couple days.

Error Handling?

Is it possible to add a common error handler for all jobs that return an error, so that I can, say, log my errors during development and send emails in production? Right now in development I just return the error and don't know when or where the job failed.
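One workaround today (a sketch, not a built-in feature of Jobs; wrapHandler and the logging behavior are hypothetical) is to wrap each handler in a common function before registering it:

// wrapHandler logs every error returned by a handler before passing it
// back to the worker, so all failures go through one place.
func wrapHandler(h func(u *User) error) func(u *User) error {
	return func(u *User) error {
		err := h(u)
		if err != nil {
			log.Printf("job failed for %s: %v", u.EmailAddress, err)
		}
		return err
	}
}

The wrapped function would then be passed to jobs.RegisterType as usual; in production the log call could be swapped for sending an email.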

Use sets instead of sorted sets where appropriate

Every job has a status. Currently there is a sorted set for each status, where all jobs that have that status are stored. Technically only the "queued" set needs to have a score (the priority) since that allows higher-priority jobs to get picked up first. Using sets instead of sorted sets when possible should reduce memory usage.
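To illustrate the difference in redigo terms (the key names and variables here are illustrative, not necessarily what the library uses):

// Only the queued jobs need a score (the priority), so a sorted set makes
// sense there; a plain set is enough for statuses where order doesn't matter.
conn.Send("ZADD", "jobs:queued", priority, jobID)
conn.Send("SADD", "jobs:executing", jobID)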

Examples?

Is it possible for an example to be created (full .go files, etc.)? I'm pretty new to Go and I love this implementation (I come from using Resque), but I'm finding it difficult to set everything up.

The only example I found that could possibly help me learn was this: #28

So I've got it up and running with the above issue, and I've split off workers / pools (I want to have N workers on distributed machines, adding new jobs via a back-end API).

However, it's a bit messy now. Is there a straightforward "hello world" example that encompasses all current features of jobs other than #28?

Sorry about all this! Pretty new to Go >.>

Use a unique machine identifier for generating pool ids

Currently, a rebooted machine gets a new pool id. If it was in the middle of executing any jobs when it rebooted, those jobs become stale. To detect this, the new pool attempts to ping the old pool and, when it doesn't get a response, re-queues the stale jobs. This is a little more work than is necessary.

If the same machine always gets the same pool id, it would allow a machine to quickly clean up after itself in the event that it is rebooted. On initialization, it could check for any jobs in the executing state with its own pool id. We wouldn't need to try to ping the old pool to determine that those jobs are stale.

Add a changelog

I think it'd be nice if there was a changelog. This can either be done by hand, or it can be automated if you enforce a syntax for commits, i.e. changeType(component): msg. For an example of an automated changelog, take a look at the angular-material project. Their changelog is generated from the commit messages, and then I think they tweak any problems with it before a release. That might be a bit much for this project, but it lets you know what can be done. Either way, I think a basic changelog would be really nice.

@albrow What do you think about this? I think it'd definitely make this easier to use in production.

Intercept UNIX signals

Currently, if you cause a worker pool process to quit by sending a UNIX signal (e.g. by pressing ctrl-c), it will quit immediately without waiting for jobs to finish executing. This is good behavior for testing purposes (because it lets me simulate hard failures), but ideally the process should intercept certain types of signals and wait before exiting.

Support environmental prefix

Problem: The current version of the package does not support environment prefixes. Hence, if a common Redis instance is shared between two environments (let's say alpha and beta), there is no mechanism to ensure that consumers/workers are executing jobs from the producers of that particular environment and not some other.
Proposed solution: One approach is to prefix the environment string to all the Redis keys. Hence, there should be a feature to set a prefix which varies per environment. Also, before implementing that at the Go layer, the hard-coded keys in the Lua scripts should be made configurable, in the sense that they should accept the prefix as one of their arguments.
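A rough sketch of the kind of key helper this implies (the prefix value and key layout are hypothetical, the strings package is assumed to be imported, and the same prefix would also need to be passed into the Lua scripts):

// prefixedKey builds a Redis key under an environment prefix,
// e.g. prefixedKey("alpha", "jobs", "queued") -> "alpha:jobs:queued".
func prefixedKey(prefix string, parts ...string) string {
	return prefix + ":" + strings.Join(parts, ":")
}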

Use an interface for jobs instead of a non-type-safe handler function (would break backwards-compatibility)

I would like to change the public API in a pretty major way by using an interface for jobs instead of a handler function (which is not typesafe). The current implementation feels a little bit messy and unidiomatic to me.

Jobs follows semantic versioning, which means breaking changes are fair game until version 1.0. However, I never put a warning in the README about this, so I wanted to make sure it was okay with everyone before making this breaking change. I would also be happy to hear feedback on the approach.

The basic idea is to create an interface that might look something like this:

type Job interface {
    Execute() error
    JobId() string
    SetJobId(string)
    JobStatus() Status
    SetJobStatus(Status)
}

Most of these have straightforward implementations which could be covered with an embeddable DefaultJob or JobData type, similar to the approach I use in zoom.

type DefaultJob struct {
    Id string
    Status Status
}

func (j DefaultJob) JobId() string {
    return j.Id
}

func (j *DefaultJob) SetJobId(id string) {
	j.Id = id
}

// etc for other getters and setters

So job type declarations would now look like this:

type EmailJob struct {
    User *model.User
    jobs.DefaultJob
}

func (j EmailJob) Execute() error {
    msg := fmt.Sprintf("Hello, %s! Thanks for signing up for foo.com.", j.User.Name)
    if err := emails.Send(j.User.EmailAddress, msg); err != nil {
        return err
    }
    return nil
}

I can leverage zoom as a library to easily serialize all the exported fields in any job struct. So when a job gets retrieved from the database, all the struct fields will be filled in. Then the worker will just call the Execute method.

There are a few advantages to this approach:

  1. Type-safety: If you don't embed a DefaultJob or provide your own implementations of the methods needed, or if you don't define an Execute method, the compiler will tell you, whereas previously these types of omissions would be runtime errors. Workers can also execute jobs by calling the Execute method instead of jumping through fiery hoops with reflection.
  2. Flexibility: The Execute function can safely access any exported properties of the job type, so in effect this solves the multiple argument problem.
  3. Idiomaticness: Using an empty interface as an argument to RegisterJob just feels wrong.

Let me know what you think. If I don't hear any objections I'll plan on converting to the new implementation sometime in the coming weeks.

Implement spread out retries

Currently, if a job fails it will be immediately queued for retry. This is appropriate in some but not all circumstances. For example, if a third-party API is down for a few hours, retrying the job immediately would cause it to be retried many times before permanently failing. It would be better to spread out the retries over time. E.g. the first retry is immediate, the next one is 15 minutes later, the next one is 1 hour later, etc.
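A sketch of one possible schedule (the exact delays and the doubling behavior are illustrative, not a committed design; the time package is assumed to be imported):

// retryDelay returns how long to wait before the given retry attempt:
// immediate, then 15 minutes, then 1 hour, doubling after that.
func retryDelay(retry uint) time.Duration {
	switch retry {
	case 0:
		return 0
	case 1:
		return 15 * time.Minute
	default:
		return time.Duration(1<<(retry-2)) * time.Hour
	}
}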

Endless loop?

What would be the best way / practical way to have an endless loop of the pool? From the current looks of it / examples, the pool will automatically close when there are no new jobs.

I know that you can simply add:

for {
}

To the end since the pools are running as goroutines.

But shouldn't this behavior be by default? Or maybe I'm just too used to using Resque haha.
