
modular_rl's Introduction

This repository implements several algorithms:

  • Trust Region Policy Optimization [1]
  • Proximal Policy Optimization (i.e., TRPO, but using a penalty instead of a constraint on KL divergence), where each subproblem is solved with either SGD or L-BFGS
  • Cross Entropy Method

TRPO and PPO are implemented with neural-network value functions and use GAE [2].
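
For reference, a minimal sketch of the generalized advantage estimator described in [2]; the function name and default coefficients are illustrative, not taken from this codebase:

    import numpy as np

    def compute_gae(rewards, values, gamma=0.99, lam=0.97):
        # rewards: length-T array of per-step rewards for one rollout.
        # values: length-(T+1) array of value predictions, with a bootstrap
        #         value for the state after the last step appended.
        deltas = rewards + gamma * values[1:] - values[:-1]
        advantages = np.zeros(len(rewards))
        running = 0.0
        for t in reversed(range(len(rewards))):
            running = deltas[t] + gamma * lam * running
            advantages[t] = running
        return advantages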

This library is written in a modular way to allow code sharing between the TRPO and PPO variants, and to reuse the same code for different kinds of action spaces.

Dependencies:

  • keras (2.0.2)
  • theano (0.9.0)
  • tabulate
  • numpy
  • scipy

To run the algorithms implemented here, you should put modular_rl on your PYTHONPATH, or run the scripts (e.g. run_pg.py) from this directory.

Good parameter settings can be found in the experiments directory.

You can learn about the various parameters by running one of the experiment scripts with the -h flag, while also providing the (required) env and agent arguments. (Those arguments determine which other parameters are available.) For example, to see the parameters of TRPO:

./run_pg.py --env CartPole-v0 --agent modular_rl.agentzoo.TrpoAgent -h

To see the parameters of CEM:

./run_cem.py --env=Acrobot-v0 --agent=modular_rl.agentzoo.DeterministicAgent  --n_iter=2

[1] J. Schulman, S. Levine, P. Moritz, M. Jordan, P. Abbeel, "Trust region policy optimization." arXiv preprint arXiv:1502.05477 (2015).

[2] J. Schulman, P. Moritz, S. Levine, M. Jordan, P. Abbeel, "High-dimensional continuous control using generalized advantage estimation." arXiv preprint arXiv:1506.02438 (2015).

modular_rl's People

Contributors

breakend, finbarrtimbers, joschu, pcmoritz


modular_rl's Issues

Error when using saved weights to continue learning

I am getting the following warning when I try to save the weights. Here I am loading the weights from a previously trained model.

{'warnflag': 1, 'task': 'STOP: TOTAL NO. of ITERATIONS EXCEEDS LIMIT', 'nit': 26, 'funcalls': 30}
got zero gradient. not updating
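
(For context: the dictionary above looks like the info dict returned by scipy.optimize.fmin_l_bfgs_b, which is presumably what the L-BFGS subproblem uses here; warnflag 1 means the optimizer stopped because it hit its iteration limit rather than converging.)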

This is the code that I am using

# (Imports reconstructed for completeness -- the original snippet omitted them;
#  StandEnv is the issue author's custom environment.)
import argparse, sys, logging
import cPickle
import numpy as np
import h5py
import gym
from tabulate import tabulate
from modular_rl import *

if __name__ == "__main__":
    parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
    update_argument_parser(parser, GENERAL_OPTIONS)
    parser.add_argument("--agent", required=True)
    parser.add_argument("--plot", action="store_true")
    parser.add_argument('--visualize', dest='visualize', action='store_true', default=False)
    args, _ = parser.parse_known_args([arg for arg in sys.argv[1:] if arg not in ('-h', '--help')])
    env = StandEnv(args.visualize)
    # Load the last saved agent snapshot from a previous run.
    hdf = h5py.File('a.h5', 'r')
    snapnames = hdf['agent_snapshots'].keys()
    snapname = snapnames[-1]
    agent = cPickle.loads(hdf['agent_snapshots'][snapname].value)
    agent.stochastic = False
    env_spec = env.spec

    agent_ctor = get_agent_cls(args.agent)
    update_argument_parser(parser, agent_ctor.options)
    args = parser.parse_args()

    args.timestep_limit = 200
    cfg = args.__dict__
    np.random.seed(args.seed)
    if args.use_hdf:
        hdf, diagnostics = prepare_h5_file(args)
    gym.logger.setLevel(logging.WARN)

    COUNTER = 0
    def callback(stats):
        global COUNTER
        COUNTER += 1
        # Print stats
        print "*********** Iteration %i ****************" % COUNTER
        print tabulate(filter(lambda (k,v): np.asarray(v).size == 1, stats.items()))  # pylint: disable=W0110
        # Store to hdf5
        if args.use_hdf:
            for (stat, val) in stats.items():
                if np.asarray(val).ndim == 0:
                    diagnostics[stat].append(val)
                else:
                    assert val.ndim == 1
                    diagnostics[stat].extend(val)
            if args.snapshot_every and ((COUNTER % args.snapshot_every == 0) or (COUNTER == args.n_iter)):
                hdf['/agent_snapshots/%0.4i' % COUNTER] = np.array(cPickle.dumps(agent, -1))
        # Plot
        if args.plot:
            animate_rollout(env, agent, min(500, args.timestep_limit))

    run_policy_gradient_algorithm(env, agent, callback=callback, usercfg=cfg)

    if args.use_hdf:
        hdf['env_id'] = env_spec.id
        try: hdf['env'] = np.array(cPickle.dumps(env, -1))
        except Exception: print "failed to pickle env"  # pylint: disable=W0703
    env.close()

License?

I'm interested in implementing TRPO with GAE and would love to be able to refer to this codebase as I go along.

I can't do that, however, without a license, and I couldn't find one in the repository. Am I missing it somewhere? If not, would you be willing to attach a license here? Thanks!

Why Using timesteps in the evaluation of the value function?

Hello John,
After reading your paper on TRPO and viewing your code on GitHub, I am a little confused about the steps involved in predicting the value function. Here, you concatenate the time-step to the observation.
Why are you doing this? Is it mandatory?
Hoping to receive feedback from you.
Regards.
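
For readers with the same question, a minimal sketch of the idea being asked about; the helper name and normalization are assumptions for illustration, not this repo's exact code:

    import numpy as np

    def value_function_input(ob, t, timestep_limit):
        # Append the normalized elapsed time to the observation before feeding
        # it to the value-function network: in a finite-horizon rollout the
        # expected remaining return depends on how many steps are left, so time
        # is a useful extra feature for the baseline.
        return np.append(ob, float(t) / timestep_limit)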

[Q] load_snapshot

I was wondering whether this option can be used with run_pg.py to train a network starting from the weights of an h5 file that was already saved.

I found it in sim_agent.py but did not see it in run_pg.py.
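
The policy-gradient config dumps quoted in other issues below do include a load_snapshot entry, so run_pg.py appears to accept the option; a hedged example invocation (the snapshot path is hypothetical and this exact usage is not confirmed in this thread):

    ./run_pg.py --env=CartPole-v0 --agent=modular_rl.agentzoo.TrpoAgent --load_snapshot=/path/to/snapshot.h5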

env: HumanoidStandup-v1 error

Using Theano backend.
[2016-05-28 11:20:04,559] Making new env: HumanoidStandup-v1
policy gradient config {'lam': 0.97, 'cg_damping': 0.1, 'env': 'HumanoidStandup-v1', 'plot': False, 'activation': 'tanh', 'agent': 'modular_rl.agentzoo.TrpoAgent', 'outfile': '/tmp/a.h5', 'max_kl': 0.01, 'timestep_limit': 1000, 'video': 1, 'snapshot_every': 0, 'parallel': 0, 'n_iter': 1500, 'load_snapshot': '', 'filter': 1, 'use_hdf': 0, 'seed': 0, 'hid_sizes': [64, 64], 'timesteps_per_batch': 25000, 'gamma': 0.995, 'metadata': ''}
Traceback (most recent call last):
  File "run_pg.py", line 61, in <module>
    run_policy_gradient_algorithm(env, agent, callback=callback, usercfg = cfg)
  File "/Users/lmj/develop/gym-modular_rl/modular_rl/core.py", line 88, in run_policy_gradient_algorithm
    paths = get_paths(env, agent, cfg, seed_iter)
  File "/Users/lmj/develop/gym-modular_rl/modular_rl/core.py", line 106, in get_paths
    paths = do_rollouts_serial(env, agent, cfg["timestep_limit"], cfg["timesteps_per_batch"], seed_iter)
  File "/Users/lmj/develop/gym-modular_rl/modular_rl/core.py", line 143, in do_rollouts_serial
    path = rollout(env, agent, timestep_limit)
  File "/Users/lmj/develop/gym-modular_rl/modular_rl/core.py", line 127, in rollout
    rew = agent.rewfilt(rew)
  File "/Users/lmj/develop/gym-modular_rl/modular_rl/agentzoo.py", line 103, in rewfilt
    return self.rewfilter(rew)
  File "/Users/lmj/develop/gym-modular_rl/modular_rl/filters.py", line 31, in __call__
    if update: self.rs.push(x)
  File "/Users/lmj/develop/gym-modular_rl/modular_rl/running_stat.py", line 11, in push
    assert x.shape == self._M.shape
AssertionError

modular_rl without Gym?

I'm not sure if this is the right place to put this, but I'm curious about running modular_rl's TRPO and CEM algorithms with environments that are not in Gym. Are there any examples or pointers on how to do something like that?

Thanks. Really appreciate the work you've done here.
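
A minimal sketch of the duck-typed interface such an environment would need, based on what the rollout code in these issues' tracebacks calls (env.step, env.reset, plus observation_space/action_space); the class and its dynamics are hypothetical, and whether Gym's space classes are strictly required is an assumption here:

    import numpy as np
    from gym import spaces  # only the Box space class is borrowed from Gym here

    class MyCustomEnv(object):
        def __init__(self):
            self.observation_space = spaces.Box(low=-1.0, high=1.0, shape=(4,))
            self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(1,))

        def reset(self):
            self.t = 0
            return np.zeros(4)

        def step(self, action):
            # Toy dynamics: random observations, a quadratic action penalty,
            # and a fixed 200-step horizon.
            self.t += 1
            ob = np.random.uniform(-1.0, 1.0, size=4)
            reward = -float(np.square(action).sum())
            done = self.t >= 200
            return ob, reward, done, {}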

parallel rollout

The parallel rollout has not been implemented, though there is a flag for it. In my study the rollouts take more time than training, since I am working with 3D images. Are there any plans to implement it in this repo?

Linesearch in TRPO

Why don't you use a line search here at all? Could you also explain the benefits of expected_improve_rate compared with the simple line search used in the rllab code?
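
For reference, a sketch of the backtracking line-search pattern that uses an expected-improvement ratio, illustrating the general technique the question refers to; parameter names and thresholds are illustrative rather than guaranteed to match this repo:

    import numpy as np

    def linesearch(f, x0, fullstep, expected_improve_rate,
                   max_backtracks=10, accept_ratio=0.1):
        # f: objective to minimize over parameter vectors; fullstep: the step
        # proposed by the TRPO update. Shrink the step until the actual
        # improvement is at least accept_ratio times the improvement predicted
        # by the local linear model (expected_improve_rate * step fraction).
        fval = f(x0)
        for stepfrac in 0.5 ** np.arange(max_backtracks):
            xnew = x0 + stepfrac * fullstep
            actual_improve = fval - f(xnew)
            expected_improve = expected_improve_rate * stepfrac
            if expected_improve > 0 and actual_improve / expected_improve > accept_ratio:
                return xnew
        return x0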

VideoRecorder encoder exited with status -11

Hi John,

I am working on an LfD paper. I need to record a trajectory, and I wanted to use the pg algorithm from your modular_rl package with the OpenAI Gym environment 'Humanoid-v1'.

I installed all the required libraries, including Keras 1.0.1 with the Theano backend, but when I ran:

python run_pg.py --gamma=0.995 --lam=0.97 --agent=modular_rl.agentzoo.TrpoAgent --max_kl=0.01 --cg_damping=0.1 --activation=tanh --n_iter=250 --seed=0 --timesteps_per_batch=50000 --env=Humanoid-v1

it returns:

Using cuDNN version 5110 on context None
Mapped name None to device cuda: GeForce GTX 780 Ti (0000:02:00.0)
Using Theano backend.
[2017-05-22 11:00:06,066] Making new env: Humanoid-v1
policy gradient config {'lam': 0.97, 'cg_damping': 0.1, 'env': 'Humanoid-v1', 'plot': False, 'activation': 'tanh', 'agent': 'modular_rl.agentzoo.TrpoAgent', 'outfile': '/tmp/a.h5', 'max_kl': 0.01, 'timestep_limit': 1000, 'video': 1, 'snapshot_every': 0, 'parallel': 0, 'n_iter': 250, 'load_snapshot': '', 'filter': 1, 'use_hdf': 0, 'seed': 0, 'hid_sizes': [64, 64], 'timesteps_per_batch': 50000, 'gamma': 0.995, 'metadata': ''}
[rawvideo @ 0x993f40] Estimating duration from bitrate, this may be inaccurate
Input #0, rawvideo, from 'pipe:':
Duration: N/A, start: 0.000000, bitrate: N/A
Stream #0:0: Video: rawvideo (RGB[24] / 0x18424752), rgb24, 500x500, 67 tbr, 67 tbn, 67 tbc
Traceback (most recent call last):
  File "run_pg.py", line 60, in <module>
    run_policy_gradient_algorithm(env, agent, callback=callback, usercfg = cfg)
  File "/home/pipeline3d/modular_rl-master/modular_rl/core.py", line 88, in run_policy_gradient_algorithm
    paths = get_paths(env, agent, cfg, seed_iter)
  File "/home/pipeline3d/modular_rl-master/modular_rl/core.py", line 106, in get_paths
    paths = do_rollouts_serial(env, agent, cfg["timestep_limit"], cfg["timesteps_per_batch"], seed_iter)
  File "/home/pipeline3d/modular_rl-master/modular_rl/core.py", line 142, in do_rollouts_serial
    path = rollout(env, agent, timestep_limit)
  File "/home/pipeline3d/modular_rl-master/modular_rl/core.py", line 125, in rollout
    ob,rew,done,envinfo = env.step(action)
  File "/home/pipeline3d/gym/gym/core.py", line 99, in step
    return self._step(action)
  File "/home/pipeline3d/gym/gym/wrappers/monitoring.py", line 34, in _step
    done = self._after_step(observation, reward, done, info)
  File "/home/pipeline3d/gym/gym/wrappers/monitoring.py", line 184, in _after_step
    self.video_recorder.capture_frame()
  File "/home/pipeline3d/gym/gym/monitoring/video_recorder.py", line 121, in capture_frame
    self._encode_image_frame(frame)
  File "/home/pipeline3d/gym/gym/monitoring/video_recorder.py", line 171, in _encode_image_frame
    self.encoder.capture_frame(frame)
  File "/home/pipeline3d/gym/gym/monitoring/video_recorder.py", line 306, in capture_frame
    self.proc.stdin.write(frame.tobytes())
IOError: [Errno 32] Broken pipe
[2017-05-22 11:01:09,642] VideoRecorder encoder exited with status -11

It seems to be a problem with the VideoRecorder in OpenAI Gym, but I don't even need to record video. How can I fix this issue? I only need to record the joint angles, which in this particular environment should be the first 22 elements of the observation vector, and the actions, which should be the joint torques.

Thank you
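
One data point from elsewhere in these issues: the Hopper-v1 report further down invokes run_pg.py with --video=0, and the config dump above shows a 'video' setting, which suggests video recording can be switched off from the command line, e.g.:

    python run_pg.py --gamma=0.995 --lam=0.97 --agent=modular_rl.agentzoo.TrpoAgent --max_kl=0.01 --cg_damping=0.1 --activation=tanh --n_iter=250 --seed=0 --timesteps_per_batch=50000 --env=Humanoid-v1 --video=0

Whether that fully avoids the VideoRecorder code path is not confirmed in this thread.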

Line search does not check KL constraint satisfaction

The TRPO paper (Appendix C) states that "we use a line search to ensure improvement of the surrogate objective and satisfaction of the KL divergence constraint". However, in the current codebase, the linesearch function only checks whether the candidate update improves the surrogate objective; it deviates from the paper in that it does not check the KL constraint.

https://github.com/joschu/modular_rl/blob/master/modular_rl/trpo.py#L92

Could you help me understand the reasoning behind this? @joschu

Thanks!

Running TRPO with RNN

Hi,

Can TRPO be used with RNNs, in particular LSTMs or GRUs? This could be useful for partially observed locomotion tasks. What modifications would be required to add RNN layers to the algorithm?

'Dense' object has no attribute 'W'

Hi there,

I'm trying to reproduce the results. When running the code, I first ran into a Monitor error caused by updates to the gym environments, which I fixed myself. But this time, another error occurred, as follows:

python run_pg.py --gamma=0.995 --lam=0.97 --agent=modular_rl.agentzoo.TrpoAgent --max_kl=0.01 --cg_damping=0.1 --activation=tanh --n_iter=250 --seed=0 --timesteps_per_batch=5000 --env=Pendulum-v0 --outfile=$outdir/Pendulum-v0.h5

Using TensorFlow backend.
[2017-03-15 19:09:35,124] Making new env: Pendulum-v0
Traceback (most recent call last):
  File "run_pg.py", line 35, in <module>
    agent = agent_ctor(env.observation_space, env.action_space, cfg)
  File "/home/kudo/openai/modular_rl/modular_rl/agentzoo.py", line 118, in __init__
    policy, self.baseline = make_mlps(ob_space, ac_space, cfg)
  File "/home/kudo/openai/modular_rl/modular_rl/agentzoo.py", line 36, in make_mlps
    Wlast = net.layers[-1].W
AttributeError: 'Dense' object has no attribute 'W'

It seems that this error is due to changes in Keras. Do you have an idea how to solve it? And is there an updated version of your library?

Thanks in advance.
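
A pointer grounded in a later issue on this page ('Error using set_value and get_value in agentzoo.py'): changing the attribute access reportedly gets past this particular error on Keras 2, where the Dense weight attribute was renamed, though that issue notes several other incompatibilities remain:

    Wlast = net.layers[-1].kernel   # Keras 2 renamed Dense's weight attribute from W to kernel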

Atari training and LBFGS gpu memory overhead

Hi John,

I'm trying to apply TRPO to a vision-based robotics control task, but I constantly run out of GPU memory in the NnRegression class, in fit, during the baseline calculation. The input is one 128x128 greyscale image plus 14 joint observations. The problem appears even when I try a smaller number of iterations and switch to 96x96 images. Replacing the L-BFGS optimizer helped to some extent: there were no more crashes, but convergence and computation time became worse.

Did you encounter similar memory issues during Atari training, and if so, how did you solve them? The input in the Atari games is at least 4 times larger than in my case, so the volume of observation data stored in the paths should be even larger, or at least comparable to mine.

Wlast.set_value(Wlast.get_value(borrow=True)*0.1)

Hi John,

I have read your TRPO paper and I'm trying to reproduce the Fisher-vector product calculation in C. Lines 36-37 in agentzoo.py confuse me. I copied the weights into my code, fed ob_no into the network, and checked its outputs against prob_np. It turned out that the mean values in prob_np are the original neural network outputs, not multiplied by 0.1. (I use the Theano backend, the Swimmer-v1 test case, and an 8-64-64-2 network.) Also, the *0.1 scaling is not mentioned in the TRPO paper. I was wondering whether you could shed some light on this issue.

    Wlast = net.layers[-1].W
    Wlast.set_value(Wlast.get_value(borrow=True)*0.1)

Thank you in advance!

thanks
Patrick

python 3 support?

Currently, using Python 3 gives:

  File "run_pg.py", line 44
    print "*********** Iteration %i ****************" % COUNTER
                                                    ^
SyntaxError: Missing parentheses in call to 'print'

Problem in net.add(ConcatFixedStd())

Hi there,
I am trying to use your code to train a new pybullet environment. Here is my pip list:
Keras (2.0.2)
Markdown (2.6.11)
mock (2.0.0)
numpy (1.13.3)
pbr (3.1.1)
Pillow (5.0.0)
pip (9.0.1)
protobuf (3.5.1)
pybullet (1.7.4)
pyglet (1.2.4)
Pyste (0.9.10)
PyYAML (3.12)
requests (2.18.4)
roboschool (1.0, /home/laraki/roboschool)
scikit-learn (0.19.1)
scipy (1.0.0)
setuptools (20.7.0)
six (1.11.0)
sklearn (0.0)
statistics (1.0.3.5)
tabulate (0.8.2)
tensorflow (1.4.1)
tensorflow-tensorboard (0.4.0rc3)
tflearn (0.3.2)
Theano (0.9.0)
unity-lens-photos (1.0)
urllib3 (1.22)
Werkzeug (0.14.1)
wheel (0.29.0)

Traceback:
sudo KERAS_BACKEND=theano python run_pg.py --gamma=0.995 --lam=0.97 --agent=modular_rl.agentzoo.TrpoAgent --max_kl=0.01 --cg_damping=0.1 --activation=tanh --n_iter=250 --seed=0 --timesteps_per_batch=5000 --env=InvertedPendulumBulletEnv-v0 --outfile=$outdir/InvertedPendulum-v0.h
pybullet build time: Dec 6 2017 15:03:38
Using Theano backend.
Traceback (most recent call last):
  File "run_pg.py", line 36, in <module>
    agent = agent_ctor(env.observation_space, env.action_space, cfg)
  File "/home/laraki/roboschool/agent_zoo/modular_rl/modular_rl/agentzoo.py", line 118, in __init__
    policy, self.baseline = make_mlps(ob_space, ac_space, cfg)
  File "/home/laraki/roboschool/agent_zoo/modular_rl/modular_rl/agentzoo.py", line 38, in make_mlps
    net.add(ConcatFixedStd())
  File "/usr/local/lib/python2.7/dist-packages/keras/models.py", line 455, in add
    output_tensor = layer(self.outputs[0])
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/topology.py", line 554, in __call__
    output = self.call(inputs, **kwargs)
TypeError: call() takes exactly 3 arguments (2 given)

I fixed all the issues related to net.layers, but I don't know how to fix this one. Any ideas?
Thanks in advance
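
A possible explanation, inferred from the traceback rather than confirmed in this thread: the topology.py frame shows that Keras 2 invokes custom layers as self.call(inputs, **kwargs), while a Keras-1-style layer defines call(self, x, mask) with a required positional mask argument; the three-argument method then fails when only two arguments are supplied. Adapting ConcatFixedStd's call to the two-argument form, e.g. def call(self, x), is the kind of change that would be needed, but that is an assumption, not a verified fix.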

Problems with Keras/Tensorflow backend

Hi,
When trying to run your example, I get a TypeError using the TensorFlow backend.
./run_cem.py --env=Acrobot-v1 --agent=modular_rl.agentzoo.DeterministicAgent --n_iter=2

TypeError: Input 'b' of 'MatMul' Op has type float32_ref that does not match type float64 of argument 'a'.

Do you have an idea how to fix this?
I run everything in a virtual python environment.
Find attached the output of pip list and the error output.
pip_list.txt
log.txt

In my keras.json I use the default settings: floatx: "float32", backend: "tensorflow".
I am new to TensorFlow and to OpenAI, so it might also just be because of that.

Thanks,
Andreas
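
One observation from elsewhere on this page: the README lists Theano 0.9.0 as the expected backend, and another report below runs the scripts with KERAS_BACKEND=theano set on the command line, e.g.:

    KERAS_BACKEND=theano ./run_cem.py --env=Acrobot-v1 --agent=modular_rl.agentzoo.DeterministicAgent --n_iter=2

Whether switching the backend resolves this particular float32/float64 mismatch is not confirmed here.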

Error using set_value and get_value in agentzoo.py

Pip log:

funcsigs (1.0.2)
gym (0.8.1)
h5py (2.7.0)
Keras (2.0.4)
keras-rl (0.3.0)
mock (2.0.0)
numpy (1.12.1)
pbr (3.0.0)
pip (9.0.1)
protobuf (3.3.0)
pyglet (1.2.4)
PyYAML (3.12)
requests (2.13.0)
scipy (0.19.0)
setuptools (27.2.0)
six (1.10.0)
tabulate (0.7.7)
tensorflow (1.1.0)
Theano (0.9.0)
Werkzeug (0.12.1)
wheel (0.29.0)

Traceback:

Using TensorFlow backend.
[2017-05-05 15:59:47,006] Making new env: CartPole-v0
Traceback (most recent call last):
  File "./run_pg.py", line 34, in <module>
    agent = agent_ctor(env.observation_space, env.action_space, cfg)
  File "/media/Resources/major-project/modular_rl/modular_rl/agentzoo.py", line 118, in __init__
    policy, self.baseline = make_mlps(ob_space, ac_space, cfg)
  File "/media/Resources/major-project/modular_rl/modular_rl/agentzoo.py", line 42, in make_mlps
    Wlast.set_value(Wlast.get_value(borrow=True)*0.1)
AttributeError: 'Variable' object has no attribute 'set_value'
[2017-05-05 15:59:47,041] Finished writing results. You can upload them to the scoreboard via gym.upload('/tmp/a.h5.dir')

I am trying to run the examples. I installed the latest tensorflow and keras packages, and the above error appeared.

I also tried using older versions (keras=1.0.1 and tensorflow=0.7.1) but still couldn't get it to work.

Changing Wlast = net.layers[-1].W to Wlast = net.layers[-1].kernel fixes the first issue, but there are a lot of other issues too.

Python 3 compatibility

Are you considering making this repo compatible with both Python 2 and 3? I have a fork with a couple of changes to support both Python versions, which is the one I made for my own study in Python 3 (my other code is in Python 3). If you are interested, I'll do some more testing and clean-up and create a pull request.

Action is not within action space error while running Hopper-v1.

I tried to reproduce Hopper-v1 on Ubuntu with the latest commit, but I am getting the following error/warning. Could you please help me figure out what I am doing wrong?

(TheanoEnv)sarv@sarv-HP:~/summer16/modular_rl$ python run_pg.py --gamma=0.995 --lam=0.97 --agent=modular_rl.agentzoo.TrpoAgent --max_kl=0.01 --cg_damping=0.1 --activation=tanh --n_iter=5 --seed=0 --timesteps_per_batch=25000 --env=Hopper-v1 --outfile=./output/Hopper --video=0
Using Theano backend.
[2016-05-28 19:30:58,104] Making new env: Hopper-v1
policy gradient config {'lam': 0.97, 'cg_damping': 0.1, 'env': 'Hopper-v1', 'plot': False, 'activation': 'tanh', 'agent': 'modular_rl.agentzoo.TrpoAgent', 'outfile': './output/Hopper', 'max_kl': 0.01, 'timestep_limit': 1000, 'video': 0, 'snapshot_every': 0, 'parallel': 0, 'n_iter': 5, 'load_snapshot': '', 'filter': 1, 'use_hdf': 0, 'seed': 0, 'hid_sizes': [64, 64], 'timesteps_per_batch': 25000, 'gamma': 0.995, 'metadata': ''}
[2016-05-28 19:31:04,885] Action '[ 0.28600657 1.49202406 -0.23161876]' is not contained within action space 'Box(3,)'.
[2016-05-28 19:31:04,887] Action '[ 0.26071644 -0.87524641 -2.58158255]' is not contained within action space 'Box(3,)'.
[2016-05-28 19:31:04,889] Action '[ 2.35838938 -1.43223417 0.06147718]' is not contained within action space 'Box(3,)'.
[2016-05-28 19:31:04,892] Action '[-0.06937394 1.51615119 1.47848463]' is not contained within action space 'Box(3,)'.
[2016-05-28 19:31:04,897] Action '[-1.92997825 -0.30192277 0.13340396]' is not contained within action space 'Box(3,)'.
[2016-05-28 19:31:04,900] Action '[ 1.23534644 1.18095422 -0.37543115]' is not contained within action space 'Box(3,)'.
