
Long-Term Evolution Project of Reinforcement Learning

Home Page: https://docs.rllte.dev/

License: MIT License

Languages: Python 51.67%, Makefile 0.05%, Shell 0.08%, CMake 0.22%, C++ 46.10%, C 1.87%, Dockerfile 0.01%
Topics: benchmark-framework, long-term-project, machine-learning, reinforcement-learning, robotics, deep-learning, gym, huggingface, phasic-policy-gradient, drq-v2

rllte's Introduction



RLLTE: Long-Term Evolution Project of Reinforcement Learning

| English | δΈ­ζ–‡ |

Overview

Inspired by the long-term evolution (LTE) standard project in telecommunications, RLLTE aims to provide development components and standards for advancing RL research and applications. Beyond delivering top-notch algorithm implementations, RLLTE also serves as a toolkit for developing algorithms.


An introduction to RLLTE.

Why RLLTE?

  • 🧬 Long-term evolution for providing latest algorithms and tricks;
  • 🏞️ Complete ecosystem for task design, model training, evaluation, and deployment (TensorRT, CANN, ...);
  • 🧱 Module-oriented design for complete decoupling of RL algorithms;
  • πŸš€ Optimized workflow for full hardware acceleration;
  • βš™οΈ Support custom environments and modules;
  • πŸ–₯️ Support multiple computing devices like GPU and NPU;
  • πŸ’Ύ Large number of reusable benchmarks (RLLTE Hub);
  • πŸ‘¨β€βœˆοΈ Large language model-empowered copilot (RLLTE Copilot).

⚠️ Since the construction of RLLTE Hub requires massive computing power, we have to upload the training datasets and model weights gradually. A progress report can be found in Issue#30.

See the project structure below:

For more detailed descriptions of these modules, see API Documentation.

Quick Start

Installation

  • Prerequisites

Currently, we recommend Python>=3.8, and users can create a virtual environment by:

conda create -n rllte python=3.8
  • with pip recommended

Open a terminal and install rllte with pip:

pip install rllte-core # basic installation
pip install rllte-core[envs] # for pre-defined environments
  • with git

Open a terminal and clone the repository from GitHub with git:

git clone https://github.com/RLE-Foundation/rllte.git

After that, run the following commands to install the package and its dependencies:

pip install -e . # basic installation
pip install -e .[envs] # for pre-defined environments

For more detailed installation instructions, see Getting Started.

Fast Training with Built-in Algorithms

RLLTE provides implementations of well-recognized RL algorithms and a simple interface for building applications.

On NVIDIA GPU

Suppose we want to use DrQ-v2 to solve a task from the DeepMind Control Suite; it suffices to write a train.py like:

# import `env` and `agent` module
from rllte.env import make_dmc_env 
from rllte.agent import DrQv2

if __name__ == "__main__":
    device = "cuda:0"
    # create env, `eval_env` is optional
    env = make_dmc_env(env_id="cartpole_balance", device=device)
    eval_env = make_dmc_env(env_id="cartpole_balance", device=device)
    # create agent
    agent = DrQv2(env=env, eval_env=eval_env, device=device, tag="drqv2_dmc_pixel")
    # start training
    agent.train(num_train_steps=500000, log_interval=1000)

Run train.py and you will see the following output:

On HUAWEI NPU

Similarly, if we want to train an agent on a HUAWEI NPU, it suffices to replace cuda with npu:

device = "cuda:0" -> device = "npu:0"

Three Steps to Create Your RL Agent

Developers only need three steps to implement an RL algorithm with RLLTE. The following example illustrates how to write an Advantage Actor-Critic (A2C) agent to solve Atari games.

  • Firstly, select a prototype:
from rllte.common.prototype import OnPolicyAgent
  • Secondly, select necessary modules to build the agent:
from rllte.xploit.encoder import MnihCnnEncoder
from rllte.xploit.policy import OnPolicySharedActorCritic
from rllte.xploit.storage import VanillaRolloutStorage
from rllte.xplore.distribution import Categorical
  • Run the .describe function of the selected policy and you will see the following output:
OnPolicySharedActorCritic.describe()
# Output:
# ================================================================================
# Name       : OnPolicySharedActorCritic
# Structure  : self.encoder (shared by actor and critic), self.actor, self.critic
# Forward    : obs -> self.encoder -> self.actor -> actions
#            : obs -> self.encoder -> self.critic -> values
#            : actions -> log_probs
# Optimizers : self.optimizers['opt'] -> (self.encoder, self.actor, self.critic)
# ================================================================================

This will illustrate the structure of the policy and indicate the optimizable parts. Finally, merge these modules and write an .update function:

from torch import nn
import torch as th

class A2C(OnPolicyAgent):
    def __init__(self, env, tag, seed, device, num_steps) -> None:
        super().__init__(env=env, tag=tag, seed=seed, device=device, num_steps=num_steps)
        # create modules
        encoder = MnihCnnEncoder(observation_space=env.observation_space, feature_dim=512)
        policy = OnPolicySharedActorCritic(observation_space=env.observation_space,
                                           action_space=env.action_space,
                                           feature_dim=512,
                                           opt_class=th.optim.Adam,
                                           opt_kwargs=dict(lr=2.5e-4, eps=1e-5),
                                           init_fn="xavier_uniform"
                                           )
        storage = VanillaRolloutStorage(observation_space=env.observation_space,
                                        action_space=env.action_space,
                                        device=device,
                                        storage_size=self.num_steps,
                                        num_envs=self.num_envs,
                                        batch_size=256
                                        )
        dist = Categorical()
        # set all the modules
        self.set(encoder=encoder, policy=policy, storage=storage, distribution=dist)
    
    def update(self):
        for _ in range(4):
            for batch in self.storage.sample():
                # evaluate the sampled actions
                new_values, new_log_probs, entropy = self.policy.evaluate_actions(obs=batch.observations, actions=batch.actions)
                # policy loss part
                policy_loss = - (batch.adv_targ * new_log_probs).mean()
                # value loss part
                value_loss = 0.5 * (new_values.flatten() - batch.returns).pow(2).mean()
                # update
                self.policy.optimizers['opt'].zero_grad(set_to_none=True)
                (value_loss * 0.5 + policy_loss - entropy * 0.01).backward()
                nn.utils.clip_grad_norm_(self.policy.parameters(), 0.5)
                self.policy.optimizers['opt'].step()

Then train the agent by

from rllte.env import make_atari_env
if __name__ == "__main__":
    device = "cuda"
    env = make_atari_env("PongNoFrameskip-v4", num_envs=8, seed=0, device=device)
    agent = A2C(env=env, tag="a2c_atari", seed=0, device=device, num_steps=128)
    agent.train(num_train_steps=10000000)

As shown in this example, only a few dozen lines of code are needed to create RL agents with RLLTE.

Algorithm Decoupling and Module Replacement

RLLTE allows developers to replace the assembled modules of implemented algorithms for performance comparison and algorithm improvement, and both built-in and custom modules are supported. Suppose we want to compare the effect of different encoders; it suffices to invoke the .set function:

from rllte.xploit.encoder import EspeholtResidualEncoder
encoder = EspeholtResidualEncoder(...)
agent.set(encoder=encoder)
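
For instance, reusing the A2C agent defined in the previous section on Atari, the full swap might look like the following sketch (the EspeholtResidualEncoder constructor arguments are assumed to mirror MnihCnnEncoder's):

from rllte.env import make_atari_env
from rllte.xploit.encoder import EspeholtResidualEncoder

if __name__ == "__main__":
    device = "cuda"
    env = make_atari_env("PongNoFrameskip-v4", num_envs=8, seed=0, device=device)
    # A2C is the custom agent class defined in the previous section
    agent = A2C(env=env, tag="a2c_atari", seed=0, device=device, num_steps=128)
    # replace the default encoder before training; the arguments below are
    # assumed to mirror MnihCnnEncoder's (observation_space, feature_dim)
    encoder = EspeholtResidualEncoder(observation_space=env.observation_space, feature_dim=512)
    agent.set(encoder=encoder)
    agent.train(num_train_steps=10000000)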

RLLTE is an extremely open framework that allows developers to try anything. For more detailed tutorials, see Tutorials.

Function List (Part)

RL Agents

Type Algo. Box Dis. M.B. M.D. M.P. NPU πŸ’° πŸ”­
On-Policy A2C βœ”οΈ βœ”οΈ βœ”οΈ βœ”οΈ βœ”οΈ βœ”οΈ βœ”οΈ ❌
On-Policy PPO βœ”οΈ βœ”οΈ βœ”οΈ βœ”οΈ βœ”οΈ βœ”οΈ βœ”οΈ ❌
On-Policy DrAC βœ”οΈ βœ”οΈ βœ”οΈ βœ”οΈ βœ”οΈ βœ”οΈ βœ”οΈ βœ”οΈ
On-Policy DAAC βœ”οΈ βœ”οΈ βœ”οΈ βœ”οΈ βœ”οΈ βœ”οΈ βœ”οΈ ❌
On-Policy DrDAAC βœ”οΈ βœ”οΈ βœ”οΈ βœ”οΈ βœ”οΈ βœ”οΈ βœ”οΈ βœ”οΈ
On-Policy PPG βœ”οΈ βœ”οΈ βœ”οΈ ❌ βœ”οΈ βœ”οΈ βœ”οΈ ❌
Off-Policy DQN βœ”οΈ ❌ ❌ ❌ βœ”οΈ βœ”οΈ βœ”οΈ ❌
Off-Policy DDPG βœ”οΈ ❌ ❌ ❌ βœ”οΈ βœ”οΈ βœ”οΈ ❌
Off-Policy SAC βœ”οΈ ❌ ❌ ❌ βœ”οΈ βœ”οΈ βœ”οΈ ❌
Off-Policy SAC-Discrete ❌ βœ”οΈ ❌ ❌ βœ”οΈ βœ”οΈ βœ”οΈ ❌
Off-Policy TD3 βœ”οΈ ❌ ❌ ❌ βœ”οΈ βœ”οΈ βœ”οΈ ❌
Off-Policy DrQ-v2 βœ”οΈ ❌ ❌ ❌ ❌ βœ”οΈ βœ”οΈ βœ”οΈ
Distributed IMPALA βœ”οΈ βœ”οΈ ❌ ❌ βœ”οΈ ❌ ❌ ❌
  • Dis., M.B., M.D.: Discrete, MultiBinary, and MultiDiscrete action space;
  • M.P.: Multi processing;
  • 🐌: Developing;
  • πŸ’°: Support intrinsic reward shaping;
  • πŸ”­: Support observation augmentation.

Intrinsic Reward Modules

Type Modules
Count-based PseudoCounts, RND
Curiosity-driven ICM, GIRM, RIDE
Memory-based NGU
Information theory-based RE3, RISE, REVD

See Tutorials: Use Intrinsic Reward and Observation Augmentation for usage examples.
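
As a quick sketch (mirroring the RE3 reproduction script quoted in the issues below; constructor arguments may differ across releases), an intrinsic reward module is attached with the same .set interface:

from rllte.agent import PPO
from rllte.env import make_envpool_atari_env
from rllte.xplore.reward import RE3

if __name__ == "__main__":
    device = "cuda:0"
    env = make_envpool_atari_env(device=device, num_envs=8)
    agent = PPO(env=env, device=device, tag="ppo_atari")
    # create the intrinsic reward module and attach it to the agent
    re3 = RE3(observation_space=env.observation_space,
              action_space=env.action_space,
              device=device,
              num_envs=8,
              storage_size=1000)
    agent.set(reward=re3)
    agent.train(num_train_steps=5000)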

RLLTE Ecosystem

Explore the ecosystem of RLLTE to facilitate your project:

  • Hub: Fast training APIs and reusable benchmarks.
  • Evaluation: Reasonable and reliable metrics for algorithm evaluation.
  • Env: Packaged environments for fast invocation.
  • Deployment: Convenient APIs for model deployment.
  • Pre-training: Methods of pre-training in RL.
  • Copilot: Large language model-empowered copilot.

API Documentation

View our well-designed documentation: https://docs.rllte.dev/

How To Contribute

Contributions to this project are welcome! Before you begin writing code, please read CONTRIBUTING.md for guidance.

Cite the Project

If you use RLLTE in your research, please cite it as follows:

@article{yuan2023rllte,
  title={RLLTE: Long-Term Evolution Project of Reinforcement Learning}, 
  author={Mingqi Yuan and Zequn Zhang and Yang Xu and Shihao Luo and Bo Li and Xin Jin and Wenjun Zeng},
  year={2023},
  journal={arXiv preprint arXiv:2309.16382}
}

Acknowledgment

This project is supported by The Hong Kong Polytechnic University, Eastern Institute for Advanced Study, and FLW-Foundation. EIAS HPC provides a GPU computing platform, and the HUAWEI Ascend Community provides an NPU computing platform for our testing. Some code in this project is borrowed from or inspired by several excellent projects, and we highly appreciate them. See ACKNOWLEDGMENT.md.

Miscellaneous

↳ Stargazers, thanks for your support!

Stargazers repo roster for @RLE-Foundation/rllte

↳ Forkers, thanks for your support!

Forkers repo roster for @RLE-Foundation/rllte

↳ Star History

Star History Chart

rllte's People

Contributors

dongdinglin, heodel, roger-creus, shihaoluo, williamyangxu


rllte's Issues

[Bug]: DAAC trained on MultiBinary envs but returns floats when doing inference?

πŸ› Bug

Note: this is on the current pip version; I haven't tried the git repo version.
Also note I have a newer version of gymnasium and matplotlib than this repo specifies:

rllte-core 0.0.1b3 has requirement gymnasium[accept-rom-license]==0.28.1, but you have gymnasium 0.29.0.
rllte-core 0.0.1b3 has requirement matplotlib==3.6.0, but you have matplotlib 3.7.1.

I'll start out by explaining the shapes in the training env.

...
print("SHAPE1 " + repr(env.action_space))
envs = [make_env(env_id, seed + i) for i in range(num_envs)]
envs = AsyncVectorEnv(envs)
envs = RecordEpisodeStatistics(envs)
envs = TransformReward(envs, lambda reward: np.sign(reward))

print("SHAPE2 " + repr(envs.action_space))
return TorchVecEnvWrapper(envs, device)

The printout of the above is:

SHAPE1 MultiBinary(12)
SHAPE2 Box(0, 1, (8, 12), int8)

The code snippet above is from the end of a function called make_retro_env that I created; it returns the wrapped environments.

After training using

env = make_retro_env("SuperHangOn-Genesis", "junior-map", num_envs=8, distributed=False)
eval_env = make_retro_env("SuperHangOn-Genesis", "junior-map", num_envs=1, distributed=False)
print("SHAPE3 " + repr(env.action_space))

model = DAAC(env=env, 
              eval_env=eval_env, 
              device='cuda',
              )
model.train(num_train_steps=1000000)

Note this prints out "SHAPE3 MultiBinary(12)"

When I load the .pth that was automatically saved during training, using

agent = th.load("./logs/default/2023-08-02-12-49-12/model/agent.pth", map_location=th.device('cuda'))
action = agent(obs)
print("action " + repr(action))

The tensors look like this:

action tensor([[ 9.9971e-02, -2.7629e-01,  4.2010e-03,  3.1142e-02, -1.2863e-01,
          3.5272e-04,  1.9941e-01,  2.8625e-01,  3.2863e-01, -6.0946e-01,
         -1.7830e-01,  1.3129e-01]], device='cuda:0')

I'm not sure if I did something wrong, or if perhaps this bug is fixed in the current repo or related to my library versions. If that is the case, let me know.
Thanks.

To Reproduce

No response

Relevant log output / Error message

No response

System Info

({'OS': 'Windows-10-10.0.19045-SP0 10.0.19045', 'Python': '3.8.16', 'Stable-Baselines3': '2.0.0', 'PyTorch': '2.0.0', 'GPU Enabled': 'True', 'Numpy': '1.23.5', 'Cloudpickle': '2.2.1', 'Gymnasium': '0.29.0', 'OpenAI Gym': '0.26.2'}, '- OS: Windows-10-10.0.19045-SP0 10.0.19045\n- Python: 3.8.16\n- Stable-Baselines3: 2.0.0\n- PyTorch: 2.0.0\n- GPU Enabled: True\n- Numpy: 1.23.5\n- Cloudpickle: 2.2.1\n- Gymnasium: 0.29.0\n- OpenAI Gym: 0.26.2\n')

Checklist

  • I have checked that there is no similar issue in the repo
  • I have read the documentation
  • I have provided a minimal working example to reproduce the bug
  • I've used the markdown code blocks for both code and stack traces.

[Progress Report] Construction of RLLTE Data Hub

Due to the high computing power required for training, we will gradually upload data to the data hub and report the progress in this issue. We will also adjust the training priority according to demand, and you can leave a message here.

[Question] AttributeError: 'Categorical' object has no attribute 'dist'

❓ Question

When I try the code in intrinsic_reward_shaping.ipynb, I get an error.

The detailed error is as follows:

Traceback (most recent call last):
  File "/home/x1/anaconda3/envs/x2/lib/python3.8/code.py", line 90, in runcode
    exec(code, self.locals)
  File "", line 1, in <module>
  File "/opt/PyCharm/pycharm-professional-2022.2.4/pycharm-2022.2.4/plugins/python/helpers/pydev/_pydev_bundle/pydev_umd.py", line 198, in runfile
    pydev_imports.execfile(filename, global_vars, local_vars)  # execute the script
  File "/opt/PyCharm/pycharm-professional-2022.2.4/pycharm-2022.2.4/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "/home/x1/PycharmProjects/ddpg-pytorch/test_rllte/test1.py", line 24, in <module>
    agent.train(num_train_steps=5000)
  File "/home/x1/PycharmProjects/ddpg-pytorch/rllte/rllte/common/prototype/on_policy_agent.py", line 112, in train
    eval_metrics = self.eval(num_eval_episodes)
  File "/home/x1/PycharmProjects/ddpg-pytorch/rllte/rllte/common/prototype/on_policy_agent.py", line 231, in eval
    actions, _ = self.policy(obs, training=False)
  File "/home/x1/anaconda3/envs/x2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/x1/anaconda3/envs/x2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/x1/anaconda3/envs/x2/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py", line 328, in _fn
    return fn(*args, **kwargs)
  File "/home/x1/anaconda3/envs/x2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/x1/anaconda3/envs/x2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/x1/PycharmProjects/ddpg-pytorch/rllte/rllte/xploit/policy/on_policy_shared_actor_critic.py", line 147, in forward
    def forward(self, obs: th.Tensor, training: bool = True) -> Tuple[th.Tensor, Dict[str, th.Tensor]]:
  File "/home/x1/PycharmProjects/ddpg-pytorch/rllte/rllte/xplore/distribution/categorical.py", line 99, in mean
    return self.dist.probs.argmax(axis=-1)
AttributeError: 'Categorical' object has no attribute 'dist'

Can you help me solve this problem?

Checklist

  • I have checked that there is no similar issue in the repo
  • I have read the documentation
  • If code there is, it is minimal and working
  • If code there is, it is formatted using the markdown code blocks for both code and stack traces.

[Bug]: RE3 crashes when self.k > self.idx

πŸ› Bug

In the example in the documentation here, RE3 crashes in re3.py (line 174) when self.k > self.idx. This can happen once the storage size has been reached and self.idx starts from 0 again. This is the line:

intrinsic_rewards[:, i] = th.log(th.kthvalue(dist, self.k + 1, dim=1).values + 1.0)
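
For reference, a minimal standalone sketch (not RLLTE source; variable names are illustrative) of the failure mode and one possible guard:

import torch as th

# th.kthvalue needs at least k + 1 elements along the reduction dim, so the
# call fails while fewer than self.k + 1 samples are available in the storage.
num_envs, k = 8, 3
for num_stored in (0, 2, 100):            # the storage fills up over time
    dist = th.rand(num_envs, num_stored)  # pairwise distances to stored samples
    if dist.size(1) > k:                  # enough neighbours for the (k+1)-th value
        rewards = th.log(th.kthvalue(dist, k + 1, dim=1).values + 1.0)
    else:                                 # possible guard: return zeros for now
        rewards = th.zeros(num_envs)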

To Reproduce

from rllte.agent import PPO
from rllte.env import make_envpool_atari_env 
from rllte.xplore.reward import RE3

if __name__ == "__main__":
    # env setup
    device = "cuda:0"
    env = make_envpool_atari_env(device=device, num_envs=8)
    # create agent
    agent = PPO(env=env, 
                device=device,
                tag="ppo_atari")
    # create intrinsic reward
    re3 = RE3(observation_space=env.observation_space,
              action_space=env.action_space,
              device=device,
              num_envs=8,
              storage_size=100
     )
    # set the module
    agent.set(reward=re3)
    # start training
    agent.train(num_train_steps=5000)

Relevant log output / Error message

IndexError: kthvalue(): Expected reduction dim 1 to have non-zero size.

System Info

No response

Checklist

  • I have checked that there is no similar issue in the repo
  • I have read the documentation
  • I have provided a minimal working example to reproduce the bug
  • I've used the markdown code blocks for both code and stack traces.

[Bug]: Installation difficulties because of `arch==5.3.0` and `envpool` dependencies

πŸ› Bug

Hello,

I'm having problems properly installing and using RLLTE. The fixed arch==5.3.0 dependency makes it impossible to install using Python 3.11. Changing it to arch>=5.3.0 resolves that issue and installs without errors (maybe the restriction is no longer required). I used the reward branch and installed with pip install -e ..

However, even after installing correctly, no reward module (from rllte.xplore.reward) can be imported due to the missing envpool dependency. Simply installing envpool is not possible, as neither pip nor conda have it available. Installing rllte with [envs] fails for the same reason.

Is there some workaround for these issues, or would it be possible to somehow remove the dependency on envpool when only the reward modules are needed? It doesn't seem like it is required for that part of the code.
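
One common pattern for this (a sketch of the requested change, not the actual rllte code; the guard constant and function body are assumptions for illustration) is to make the envpool import optional so the rest of the package still loads:

# sketch: guard the envpool import so modules that don't need it keep working
try:
    import envpool
    HAS_ENVPOOL = True
except ModuleNotFoundError:
    envpool = None
    HAS_ENVPOOL = False

def make_envpool_atari_env(*args, **kwargs):
    # only the envpool-backed factories would actually require the package
    if not HAS_ENVPOOL:
        raise ImportError("envpool is required for make_envpool_atari_env")
    ...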

To Reproduce

  • Clone repository and checkout the reward branch
  • Setup environment with Python 3.11: conda create -n rllte python=3.11
  • Change arch==5.3.0 to arch>=5.3.0 in pyproject.toml
  • Install with pip install -e .
  • Run python:
from rllte.xplore.reward import ICM

Relevant log output / Error message

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\Onyszkiewicz\Documents\workspace\rllte\rllte\xplore\reward\__init__.py", line 27, in <module>
    from .icm import ICM as ICM
  File "C:\Users\Onyszkiewicz\Documents\workspace\rllte\rllte\xplore\reward\icm.py", line 35, in <module>
    from rllte.common.prototype import BaseReward
  File "C:\Users\Onyszkiewicz\Documents\workspace\rllte\rllte\common\prototype\__init__.py", line 26, in <module>
    from .base_agent import BaseAgent as BaseAgent
  File "C:\Users\Onyszkiewicz\Documents\workspace\rllte\rllte\common\prototype\base_agent.py", line 47, in <module>
    from rllte.common.type_alias import VecEnv
  File "C:\Users\Onyszkiewicz\Documents\workspace\rllte\rllte\common\type_alias.py", line 36, in <module>
    from rllte.env.utils import Gymnasium2Torch, GymObs
  File "C:\Users\Onyszkiewicz\Documents\workspace\rllte\rllte\env\__init__.py", line 26, in <module>
    from .testing import make_bitflipping_env as make_bitflipping_env
  File "C:\Users\Onyszkiewicz\Documents\workspace\rllte\rllte\env\testing\__init__.py", line 26, in <module>
    from .box import make_box_env as make_box_env
  File "C:\Users\Onyszkiewicz\Documents\workspace\rllte\rllte\env\testing\box.py", line 33, in <module>
    from rllte.env.utils import Gymnasium2Torch
  File "C:\Users\Onyszkiewicz\Documents\workspace\rllte\rllte\env\utils.py", line 28, in <module>
    import envpool
ModuleNotFoundError: No module named 'envpool'

System Info

No response

Checklist

  • I have checked that there is no similar issue in the repo
  • I have read the documentation
  • I have provided a minimal working example to reproduce the bug
  • I've used the markdown code blocks for both code and stack traces.
