Clean single-file implementation of offline RL algorithms in JAX

License: MIT License


jax-corl's Introduction

JAX-CORL

This repository aims to be a JAX version of CORL: clean single-file implementations of offline RL algorithms with solid performance reports.

  • 🌬️ Pursuing fast training: we speed up training with JAX functions such as jit and vmap.
  • 🔪 As simple as possible: we implement only the minimum requirements.
  • 💠 Focus on a few battle-tested algorithms: refer here.
  • 📈 Solid performance report (README, Wiki).

JAX-CORL complements the single-file RL ecosystem by covering the offline x JAX combination:

  • CleanRL: Online x PyTorch
  • purejaxrl: Online x JAX
  • CORL: Offline x PyTorch
  • JAX-CORL(ours): Offline x JAX

Algorithms

| Algorithm | Implementation | Training time (CORL) | Training time (ours) | wandb |
|---|---|---|---|---|
| AWAC | algos/awac.py | 4.46h | 11m (24x faster) | link |
| IQL | algos/iql.py | 4.08h | 9m (28x faster) | link |
| TD3+BC | algos/td3_bc.py | 2.47h | 9m (16x faster) | link |
| CQL | algos/cql.py | 11.52h | 56m (12x faster) | link |
| DT | algos/dt.py | 42m | 11m (4x faster) | link |

Training time is measured for 1,000,000 update steps without evaluation on halfcheetah-medium-expert-v2 (there is little difference across D4RL MuJoCo environments). Our training time includes jit compilation time. The computations were performed on four GeForce GTX 1080 Ti GPUs. The PyTorch times are measured with the CORL implementations.

Reports for D4RL mujoco

Normalized Score

Here, we use the D4RL MuJoCo control tasks as the benchmark. We report the mean and standard deviation of the average normalized score over 5 episodes and 5 seeds. We plan to extend the verification to other D4RL benchmarks such as AntMaze. For the source of the hyperparameters and the validity of the performance, please refer to the Wiki.

| env | AWAC | IQL | TD3+BC | CQL | DT |
|---|---|---|---|---|---|
| halfcheetah-medium-v2 | $41.56\pm0.79$ | $43.28\pm0.51$ | $48.12\pm0.42$ | $48.65\pm0.49$ | $42.63\pm0.53$ |
| halfcheetah-medium-expert-v2 | $76.61\pm9.60$ | $92.87\pm0.61$ | $92.99\pm0.11$ | $53.76\pm14.53$ | $70.63\pm14.70$ |
| hopper-medium-v2 | $51.45\pm5.40$ | $52.17\pm2.88$ | $46.51\pm4.57$ | $77.56\pm7.12$ | $60.85\pm6.78$ |
| hopper-medium-expert-v2 | $51.89\pm2.11$ | $53.35\pm5.63$ | $105.47\pm5.03$ | $90.37\pm31.29$ | $109.07\pm4.56$ |
| walker2d-medium-v2 | $68.12\pm12.08$ | $75.33\pm5.2$ | $72.73\pm4.66$ | $80.16\pm4.19$ | $71.04\pm5.64$ |
| walker2d-medium-expert-v2 | $91.36\pm23.13$ | $109.07\pm0.32$ | $109.17\pm0.71$ | $110.03\pm0.72$ | $99.81\pm17.73$ |

How to use this codebase for your research

This codebase can be used independently as a baseline for D4RL projects. It is also designed to be flexible, allowing users to develop new algorithms or adapt it to datasets other than D4RL.

For researchers interested in using this code for their projects, we provide a detailed explanation of the code's shared structure:

Data structure
class Transition(NamedTuple):
    observations: jnp.ndarray
    actions: jnp.ndarray
    rewards: jnp.ndarray
    next_observations: jnp.ndarray
    dones: jnp.ndarray

def get_dataset(...) -> Transition:
    ...
    return dataset

The code includes a Transition class, defined as a NamedTuple, which contains fields for observations, actions, rewards, next observations, and done flags. The get_dataset function is expected to output data in the Transition format, making it adaptable to any dataset that conforms to this structure.
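
For example, a D4RL-backed get_dataset in this format might look like the following minimal sketch. It reuses the Transition NamedTuple defined above and assumes the d4rl and gym packages with the standard qlearning_dataset keys; the exact helper used in this repository may differ.

import d4rl
import gym
import jax.numpy as jnp

def get_dataset(env_name: str = "halfcheetah-medium-expert-v2") -> Transition:
    # load a D4RL dataset and repack it into the Transition NamedTuple above
    env = gym.make(env_name)
    dataset = d4rl.qlearning_dataset(env)
    return Transition(
        observations=jnp.asarray(dataset["observations"]),
        actions=jnp.asarray(dataset["actions"]),
        rewards=jnp.asarray(dataset["rewards"]),
        next_observations=jnp.asarray(dataset["next_observations"]),
        dones=jnp.asarray(dataset["terminals"]),
    )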

Trainer class
class Trainer(NamedTuple):
    actor: TrainState
    critic: TrainState
    # hyperparameters
    discount: float = 0.99
    ...

    def update_actor(agent, batch: Transition):
        ...
        return agent

    def update_critic(agent, batch: Transition):
        ...
        return agent

    @partial(jax.jit, static_argnames=("n_jitted_updates",))
    def update_n_times(agent, data, n_jitted_updates):
        for _ in range(n_jitted_updates):
            batch = data.sample()
            agent = agent.update_actor(batch)
            agent = agent.update_critic(batch)
        return agent

def create_trainer(...):
    # initialize models...
    return Trainer(
        actor=actor,
        critic=critic,
    )

For each algorithm, we provide a Trainer class (e.g. TD3BCTrainer for TD3+BC) which encompasses all necessary components of the algorithm: models, hyperparameters, and update logic. The Trainer class can also be used outside of the provided files, as long as the create_trainer function is implemented to meet its specifications. Note: we have not yet followed this pattern for CQL due to technical issues; this will be handled in the near future.
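
To make the pattern concrete, here is a self-contained toy sketch in the same style: a trivial behavior-cloning trainer run on random data in place of a D4RL dataset. It is not the repository's actual code; BCTrainer, Policy, and the argument lists are illustrative only.

from functools import partial
from typing import NamedTuple

import jax
import jax.numpy as jnp
import optax
from flax import linen as nn
from flax.training.train_state import TrainState

class Transition(NamedTuple):
    observations: jnp.ndarray
    actions: jnp.ndarray

class Policy(nn.Module):
    action_dim: int

    @nn.compact
    def __call__(self, obs):
        return nn.Dense(self.action_dim)(nn.relu(nn.Dense(64)(obs)))

class BCTrainer(NamedTuple):
    actor: TrainState

    def update_actor(agent, batch: Transition) -> "BCTrainer":
        def loss_fn(params):
            pred = agent.actor.apply_fn(params, batch.observations)
            return jnp.mean((pred - batch.actions) ** 2)  # behavior-cloning loss

        grads = jax.grad(loss_fn)(agent.actor.params)
        return agent._replace(actor=agent.actor.apply_gradients(grads=grads))

    @partial(jax.jit, static_argnames=("n_jitted_updates", "batch_size"))
    def update_n_times(agent, data: Transition, rng, n_jitted_updates: int, batch_size: int) -> "BCTrainer":
        # several gradient steps are fused into one jit-compiled call
        for _ in range(n_jitted_updates):
            rng, sub = jax.random.split(rng)
            idx = jax.random.randint(sub, (batch_size,), 0, len(data.observations))
            batch = jax.tree_util.tree_map(lambda x: x[idx], data)
            agent = agent.update_actor(batch)
        return agent

def create_trainer(obs_dim: int, action_dim: int, lr: float = 3e-4) -> BCTrainer:
    model = Policy(action_dim=action_dim)
    variables = model.init(jax.random.PRNGKey(0), jnp.zeros((1, obs_dim)))
    actor = TrainState.create(apply_fn=model.apply, params=variables, tx=optax.adam(lr))
    return BCTrainer(actor=actor)

# toy usage with random data standing in for a D4RL dataset
data = Transition(observations=jnp.ones((1000, 17)), actions=jnp.zeros((1000, 6)))
trainer = create_trainer(obs_dim=17, action_dim=6)
trainer = trainer.update_n_times(data, jax.random.PRNGKey(1), n_jitted_updates=8, batch_size=256)

Because the trainer is a NamedTuple of TrainStates, it is a JAX pytree and can be passed straight through jit; update_n_times then fuses several gradient steps into a single compiled call.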

See also

Great Offline RL libraries

  • CORL: Comprehensive single-file implementations of offline RL algorithms in PyTorch.

Implementations of offline RL algorithms in JAX

Single-file implementations

  • CleanRL: High-quality single-file implementations of online RL algorithms in PyTorch.
  • purejaxrl: High-quality single-file implementations of online RL algorithms in JAX.

Cite JAX-CORL

@article{nishimori2024jaxcorl,
  title={JAX-CORL: Clean Single-file Implementations of Offline RL Algorithms in JAX},
  author={Soichiro Nishimori},
  year={2024},
  url={https://github.com/nissymori/JAX-CORL}
}

Credits

  • This project is inspired by CORL, clean single-file implementations of offline RL algorithms in PyTorch.
  • I would like to thank @JohannesAck for his TD3-BC codebase and helpful advice.
  • The IQL implementation is based on implicit_q_learning.
  • AWAC implementation is based on jaxrl.
  • CQL implementation is based on JaxCQL.
  • DT implementation is based on min-decision-transformer.


jax-corl's Issues

pmap

import time
from functools import partial
from typing import Any, Callable, NamedTuple, Optional, Sequence, Tuple

import d4rl
import flax
import flax.linen as nn
import gym
import jax
import jax.numpy as jnp
import numpy as np
import optax
import tqdm
import wandb
from flax.training.train_state import TrainState
from omegaconf import OmegaConf
from pydantic import BaseModel
from tqdm import tqdm

class TD3BCConfig(BaseModel):
    # general config
    env_name: str = "hopper"
    data_quality: str = "medium-expert"
    total_updates: int = 1000000
    updates_per_epoch: int = 8  # how many updates per epoch; also how frequently we evaluate the policy
    num_test_rollouts: int = 5
    batch_size: int = 256
    data_size: int = 1000000
    seed: int = 0
    # network config
    num_hidden_layers: int = 2
    num_hidden_units: int = 256
    critic_lr: float = 1e-3
    actor_lr: float = 1e-3
    # TD3-BC specific
    policy_freq: int = 2  # update actor every policy_freq updates
    polyak: float = 0.995  # target network update rate
    alpha: float = 2.5  # BC loss weight
    policy_noise_std: float = 0.2  # std of policy noise
    policy_noise_clip: float = 0.5  # clip policy noise
    gamma: float = 0.99  # discount factor

conf_dict = OmegaConf.from_cli()
config = TD3BCConfig(**conf_dict)

def default_init(scale: Optional[float] = jnp.sqrt(2)):
    return nn.initializers.orthogonal(scale)

class DoubleCritic(nn.Module):
    """
    For twin Q networks
    """

    @nn.compact
    def __call__(self, state, action, rng):
        sa = jnp.concatenate([state, action], axis=-1)
        # first Q head
        x_q = nn.Dense(config.num_hidden_units, kernel_init=default_init())(sa)
        x_q = nn.LayerNorm()(x_q)
        x_q = nn.relu(x_q)
        for i in range(1, config.num_hidden_layers):
            x_q = nn.Dense(config.num_hidden_units, kernel_init=default_init())(x_q)
            x_q = nn.LayerNorm()(x_q)
            x_q = nn.relu(x_q)
        q1 = nn.Dense(1, kernel_init=default_init())(x_q)

        # second Q head
        x_q = nn.Dense(config.num_hidden_units, kernel_init=default_init())(sa)
        x_q = nn.LayerNorm()(x_q)
        x_q = nn.relu(x_q)
        for i in range(1, config.num_hidden_layers):
            x_q = nn.Dense(config.num_hidden_units, kernel_init=default_init())(x_q)
            x_q = nn.LayerNorm()(x_q)
            x_q = nn.relu(x_q)
        q2 = nn.Dense(1, kernel_init=default_init())(x_q)
        return q1, q2

class TD3Actor(nn.Module):
    action_dim: int
    max_action: float

    @nn.compact
    def __call__(self, state, rng):
        x_a = nn.relu(
            nn.Dense(config.num_hidden_units, kernel_init=default_init())(state)
        )
        for i in range(1, config.num_hidden_layers):
            x_a = nn.Dense(config.num_hidden_units, kernel_init=default_init())(x_a)
            x_a = nn.relu(x_a)
        action = nn.Dense(self.action_dim, kernel_init=default_init())(x_a)
        action = self.max_action * jnp.tanh(
            action
        )  # scale to [-max_action, max_action]
        return action

class TD3BCUpdateState(NamedTuple):
    critic: TrainState
    actor: TrainState
    critic_params_target: flax.core.FrozenDict
    actor_params_target: flax.core.FrozenDict
    update_idx: jnp.int32

def init_params(model_def: nn.Module, inputs: Sequence[jnp.ndarray]):
    return model_def.init(*inputs)

def initialize_update_state(
    observation_dim, action_dim, max_action, rng, devices
) -> TD3BCUpdateState:
    critic_model = DoubleCritic()
    actor_model = TD3Actor(action_dim=action_dim, max_action=max_action)
    rng, rng1, rng2 = jax.random.split(rng, 3)
    # initialize critic and actor parameters
    critic_params = init_params(
        critic_model, [rng1, jnp.zeros(observation_dim), jnp.zeros(action_dim), rng1]
    )
    critic_params_target = init_params(
        critic_model, [rng1, jnp.zeros(observation_dim), jnp.zeros(action_dim), rng1]
    )

    actor_params = init_params(actor_model, [rng2, jnp.zeros(observation_dim), rng2])
    actor_params_target = init_params(actor_model, [rng2, jnp.zeros(observation_dim), rng2])

    critic_train_state: TrainState = TrainState.create(
        apply_fn=critic_model.apply,
        params=critic_params,
        tx=optax.adam(config.critic_lr),
    )
    actor_train_state: TrainState = TrainState.create(
        apply_fn=actor_model.apply,
        params=actor_params,
        tx=optax.adam(config.actor_lr),
    )
    update_state = TD3BCUpdateState(
        critic=critic_train_state,
        actor=actor_train_state,
        critic_params_target=critic_params_target,
        actor_params_target=actor_params_target,
        update_idx=0,
    )
    # replicate the update state across devices for pmap
    return jax.device_put_replicated(update_state, devices)

class Transitions(NamedTuple):
    states: jnp.ndarray
    actions: jnp.ndarray
    next_states: jnp.ndarray
    rewards: jnp.ndarray
    dones: jnp.ndarray

def initialize_data(
    dataset: dict, rng: jax.random.PRNGKey
) -> Tuple[Transitions, np.ndarray, np.ndarray]:
    """
    This part is D4RL specific. Please change it for your own dataset.
    As long as you can convert your dataset into Transitions, it should work.
    """
    rng, subkey = jax.random.split(rng)
    data = Transitions(
        states=jnp.asarray(dataset["observations"]),
        actions=jnp.asarray(dataset["actions"]),
        next_states=jnp.asarray(dataset["next_observations"]),
        rewards=jnp.asarray(dataset["rewards"]),
        dones=jnp.asarray(dataset["terminals"]),
    )
    # shuffle data and select the first data_size samples
    rng, rng_permute, rng_select = jax.random.split(rng, 3)
    perm = jax.random.permutation(rng_permute, len(data.states))
    data = jax.tree_map(lambda x: x[perm], data)
    assert len(data.states) >= config.data_size
    data = jax.tree_map(lambda x: x[: config.data_size], data)
    # normalize states and next_states
    obs_mean = jnp.mean(data.states, axis=0)
    obs_std = jnp.std(data.states, axis=0)
    data = data._replace(
        states=(data.states - obs_mean) / obs_std,
        next_states=(data.next_states - obs_mean) / obs_std,
    )
    return data, obs_mean, obs_std

def update_actor(
    update_state: TD3BCUpdateState, batch: Transitions, rng: jax.random.PRNGKey
) -> TD3BCUpdateState:
    """
    Update the actor using the loss:
    L = - Q(s, a) * lambda + BC(s, a)
    """
    actor, critic = update_state.actor, update_state.critic

    def get_actor_loss(
        actor_params: flax.core.frozen_dict.FrozenDict,
    ) -> jnp.ndarray:
        predicted_action = actor.apply_fn(actor_params, batch.states, rng=None)
        critic_params = jax.lax.stop_gradient(update_state.critic.params)
        q_value, _ = critic.apply_fn(
            critic_params, batch.states, predicted_action, rng=None
        )  # TODO: this will also affect the critic update :/

        mean_abs_q = jax.lax.stop_gradient(jnp.abs(q_value).mean())
        loss_lambda = config.alpha / mean_abs_q

        bc_loss = jnp.square(predicted_action - batch.actions).mean()
        loss_actor = -1.0 * q_value.mean() * loss_lambda + bc_loss
        return loss_actor

    actor_grad_fn = jax.value_and_grad(get_actor_loss)
    actor_loss, actor_grads = actor_grad_fn(actor.params)
    actor_grads = jax.lax.pmean(actor_grads, axis_name="i")  # average gradients across devices
    actor = actor.apply_gradients(grads=actor_grads)
    return update_state._replace(actor=actor)

def update_critic(
    update_state: TD3BCUpdateState,
    batch: Transitions,
    max_action,
    rng: jax.random.PRNGKey,
) -> TD3BCUpdateState:
    """
    Update the critic using the loss:
    L = (Q(s, a) - (r + gamma * min(Q_1'(s', a'), Q_2'(s', a'))))^2
    """
    actor, critic = update_state.actor, update_state.critic

    def critic_loss(
        critic_params: flax.core.frozen_dict.FrozenDict,
    ) -> jnp.ndarray:
        q_pred_1, q_pred_2 = critic.apply_fn(
            critic_params, batch.states, batch.actions, rng=None
        )  # twin Q networks
        target_next_action = actor.apply_fn(
            update_state.actor_params_target, batch.next_states, rng=None
        )
        policy_noise = (
            config.policy_noise_std
            * max_action
            * jax.random.normal(rng, batch.actions.shape)
        )
        target_next_action = target_next_action + policy_noise.clip(
            -config.policy_noise_clip, config.policy_noise_clip
        )
        target_next_action = target_next_action.clip(-max_action, max_action)
        q_next_1, q_next_2 = critic.apply_fn(
            update_state.critic_params_target,
            batch.next_states,
            target_next_action,
            rng=None,
        )  # twin target Q networks
        target = batch.rewards[..., None] + config.gamma * jnp.minimum(
            q_next_1, q_next_2
        ) * (1 - batch.dones[..., None])  # take the min of the two Q networks
        target = jax.lax.stop_gradient(target)
        value_loss_1 = jnp.square(q_pred_1 - target)
        value_loss_2 = jnp.square(q_pred_2 - target)
        value_loss = (value_loss_1 + value_loss_2).mean()
        return value_loss

    critic_grad_fn = jax.value_and_grad(critic_loss)
    critic_loss, critic_grads = critic_grad_fn(critic.params)
    critic_grads = jax.lax.pmean(critic_grads, axis_name="i")  # average gradients across devices
    critic = critic.apply_gradients(grads=critic_grads)
    return update_state._replace(critic=critic)

@partial(jax.pmap, axis_name="i")
def update_steps_fn(
    update_state: TD3BCUpdateState,
    rng: jax.random.PRNGKey,
    batch: Transitions,  # per device: [updates_per_epoch, batch_size, ...]
    max_action: float,
):
    def update_step_fn(
        update_state: TD3BCUpdateState,
        inputs: Tuple[jax.random.PRNGKey, Transitions],
    ):
        rng, step_batch = inputs
        rng, critic_rng, actor_rng = jax.random.split(rng, 3)
        # update critic
        update_state = update_critic(update_state, step_batch, max_action, critic_rng)
        # update actor if policy_freq is met
        new_update_state = update_actor(update_state, step_batch, actor_rng)
        update_state = jax.lax.cond(
            update_state.update_idx % config.policy_freq == 0,
            lambda: new_update_state,
            lambda: update_state,
        )
        # update target parameters by polyak averaging
        critic_params_target = jax.tree_map(
            lambda target, live: config.polyak * target + (1.0 - config.polyak) * live,
            update_state.critic_params_target,
            update_state.critic.params,
        )
        actor_params_target = jax.tree_map(
            lambda target, live: config.polyak * target + (1.0 - config.polyak) * live,
            update_state.actor_params_target,
            update_state.actor.params,
        )
        return (
            update_state._replace(
                critic_params_target=critic_params_target,
                actor_params_target=actor_params_target,
                update_idx=update_state.update_idx + 1,
            ),
            None,
        )

    # scan over (rng, batch) pairs, one batch per update step
    rngs = jax.random.split(rng, config.updates_per_epoch)
    update_state, _ = jax.lax.scan(update_step_fn, update_state, (rngs, batch))
    return update_state

@jax.jit
def get_action(
    actor: TrainState,
    obs: jnp.ndarray,
    low: float,
    high: float,
) -> jnp.ndarray:
    action = actor.apply_fn(actor.params, obs, rng=None)
    action = action.clip(low, high)
    return action

def evaluate(
    subkey: jax.random.PRNGKey,
    actor: TrainState,
    env: gym.Env,
    obs_mean,
    obs_std,
) -> float:
    episode_rews = []
    for _ in range(config.num_test_rollouts):
        obs = env.reset()
        done = False
        episode_rew = 0.0
        while not done:
            obs = jnp.array((obs - obs_mean) / obs_std)
            action = get_action(
                actor=actor,
                obs=obs,
                low=env.action_space.low,
                high=env.action_space.high,
            )
            action = action.reshape(-1)
            obs, rew, done, info = env.step(action)
            episode_rew += rew
        episode_rews.append(episode_rew)
    return (
        env.get_normalized_score(np.mean(episode_rews)) * 100
    )  # average normalized score

if __name__ == "__main__":
    # set up the environment, in this case D4RL. Please change to your own environment.
    env = gym.make(f"{config.env_name}-{config.data_quality}-v2")
    dataset = d4rl.qlearning_dataset(env)
    observation_dim = dataset["observations"].shape[-1]
    action_dim = env.action_space.shape[0]
    max_action = env.action_space.high

    devices = jax.local_devices()
    device_num = jax.device_count()
    rng = jax.random.PRNGKey(config.seed)
    rng, data_rng, model_rng = jax.random.split(rng, 3)
    # initialize data and update state
    data, obs_mean, obs_std = initialize_data(dataset, data_rng)
    update_state = initialize_update_state(
        observation_dim, action_dim, max_action, model_rng, devices
    )

    # wandb.init(project="train-TD3-BC", config=config)
    steps = 0
    epochs = int(config.total_updates // config.updates_per_epoch)
    start_time = time.time()
    for _ in tqdm(range(epochs)):
        steps += 1
        rng, batch_rng, update_rng, eval_rng = jax.random.split(rng, 4)
        # sample one batch per update step on each device
        batch_size = int(config.batch_size // device_num)
        batch_idx = jax.random.randint(
            batch_rng,
            (config.updates_per_epoch * device_num * batch_size,),
            0,
            len(data.states),
        )
        batch: Transitions = jax.tree_map(lambda x: x[batch_idx], data)
        batch: Transitions = jax.tree_map(
            lambda x: x.reshape(
                device_num, config.updates_per_epoch, batch_size, *x.shape[1:]
            ),
            batch,
        )  # [device_num, updates_per_epoch, batch_size, ...]
        update_rng = jax.random.split(update_rng, device_num)
        # update parameters; max_action is replicated to shape (device_num, action_dim) for pmap
        update_state = update_steps_fn(
            update_state,
            update_rng,
            batch,
            jnp.array(max_action).reshape(1, -1).repeat(device_num, axis=0),
        )
        """
        eval_dict = {}
        normalized_score = evaluate(
            eval_rng,
            update_state.actor,
            env,
            obs_mean,
            obs_std,
        )  # evaluate actor
        eval_dict[f"normalized_score_{config.env_name}"] = normalized_score
        eval_dict[f"step"] = steps
        print(eval_dict)
        wandb.log(eval_dict)
        """
    print(f"training time: {time.time() - start_time}")
    # wandb.finish()

Performance report

I would like to put out a performance report like CORL does. Since training is fast, maybe 10 seeds per setting.

For now, there is a little of this for TD3+BC in #16.

[CQL] How to speed up?

    @partial(jax.jit, static_argnames=("self", "bc", "batch_size", "n"))
    def train_n_step(self, dataset, batch_size, n, bc=False):
        for _ in range(n):
            batch = batch_to_jax(subsample_batch(dataset, batch_size))
            metrics = self.train(batch, bc)
        return metrics

I added the method above and tried the "forjit" approach (not kept in any repo). Below is a comparison of the time taken for 1000 steps.

| method | time | jit time |
|---|---|---|
| forloop (original) | 3.8s | - |
| forjit(10) | 5.6s | 74s |
| forjit(100) | 3.3s | 740s (approx.) |

forjit(100) is fast per step, but the jit compilation takes so long that it ends up no faster than the original. The original may simply be a very good implementation. For now, I will set this aside and finish up IQL first.
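
For reference, this is the general shape of the "forjit" pattern being timed above, written against a placeholder train_step rather than JaxCQL's actual trainer (illustrative names only): the Python loop is unrolled at trace time, so compile time grows roughly with n, while the compiled call then executes n updates at once.

from functools import partial

import jax
import jax.numpy as jnp

def train_step(params, batch):
    # placeholder update: one gradient step on a dummy quadratic loss
    grads = jax.grad(lambda p: jnp.mean((batch @ p) ** 2))(params)
    return params - 1e-3 * grads

@partial(jax.jit, static_argnames=("n",))
def train_n_steps(params, batches, n):
    # "forjit": the loop is unrolled during tracing, so jit time scales with n
    for i in range(n):
        params = train_step(params, batches[i])
    return params

params = jnp.ones((8,))
batches = jnp.ones((10, 256, 8))
params = train_n_steps(params, batches, n=10)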

[Doc] Explain hyperparameters.

Since we basically use the same hyperparameters as those of reliable implementations, it would be helpful for potential users to know where the hyperparameters come from.

[TODO] Preparation for the publication

Let's unify the overall presentation.

Let's proceed while making sure performance does not drop for any algorithm.

Models

  • Align the MLP with the current CQL implementation
  • Drop ensemblize and switch to a DoubleCritic!
  • Standardize on TanhPolicy.
  • Drop target_critic_params in TD3BC
  • Standardize on hidden_dims

Data preprocessing

  • Decide consistently whether to normalize states
  • Decide consistently whether to normalize rewards

Let's unify variable and function names

  • tau, polyak
  • gamma, discount
  • key, rng
  • critic_def, critic_model
  • dones_float, dones, mask
  • states, observations

Let's unify function names

Let's add type annotations

  • awac
  • cql
  • iql
  • td3bc

Let's write comments carefully

  • awac
  • cql
  • iql
  • td3bc

Let's think a bit more about the configs

  • awac
  • cql
  • iql
  • td3bc

Let's simply improve performance

  • CQL
  • AWAC
    • Compare with CORL and others
    • Implementation
    • eval

report

  • script
  • wandb

Miscellaneous tasks toward release

  • requirements.txt
  • Makefile
  • License

[TD3-BC] Verifying training time across implementation styles

#10
general

  • algo: TD3-BC
  • env: Hopper
  • total updates: 1,000,000
  • data size: 1,000,000
  • batch size: 256
  • update_per_step for forjit and scan: 16
| method | time | steps/sec |
|---|---|---|
| torch | 6333s (approx.) | 157 |
| forloop | 14608s (approx.) | 68 (approx.) |
| forjit | 760.47s | 1315.78 |
| scan | 722.01s | 1385.04 |
  • Based on this verification, scan is the best.
  • 8.8x faster than CORL.

UPDATE
When a cond is involved, forjit is faster; it is better to fall back to a plain Python if.

The current fastest so far is 430s; see #16.
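
For reference, this is the general shape of the scan-based style that came out fastest above, again written against a placeholder update rather than the repository's TD3+BC trainer (illustrative names only); lax.scan compiles the step body once, so compile time stays flat as the number of updates per jitted call grows.

import jax
import jax.numpy as jnp

def train_step(params, batch):
    # placeholder update: one gradient step on a dummy quadratic loss
    grads = jax.grad(lambda p: jnp.mean((batch @ p) ** 2))(params)
    return params - 1e-3 * grads

@jax.jit
def train_scan(params, batches):
    def body(carry, batch):
        return train_step(carry, batch), None

    params, _ = jax.lax.scan(body, params, batches)
    return params

params = jnp.ones((8,))
batches = jnp.ones((16, 256, 8))  # 16 updates per jitted call
params = train_scan(params, batches)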

[Doc] Compare the results with those reported in reliable papers.

  • AWAC: IQL paper, original
  • IQL: TD7, original
  • TD3+BC: TD7, original
  • CQL: CAL-QL, original
  • DT: IQL, original

Overall, the scores reported in the original papers are much better than those reported in other papers. Given the sensitivity of these algorithms to hyperparameters and small implementation details, it is difficult to reproduce some results perfectly, but it is still worthwhile to report the differences.
