zju3dv / animatable_nerf


Code for "Animatable Implicit Neural Representations for Creating Realistic Avatars from Videos" TPAMI 2024, ICCV 2021

License: Other


animatable_nerf's Introduction

News

  • 01/21/2024 We release the Mobile-Stage dataset and SyntheticHuman++ dataset.
  • 01/12/2024 Animatable Neural Fields has been accepted to TPAMI.
  • 07/09/2022 This repository includes the implementation of Animatable SDF (now dubbed Animatable Neural Fields).
  • 07/09/2022 We release the extended version of Animatable NeRF. We evaluated three different versions of Animatable Neural Fields: vanilla Animatable NeRF, a version where the neural blend weight field is replaced with a displacement field, and a version where the canonical NeRF model is replaced with a neural surface field (the output is an SDF instead of volume density, also using a displacement field). We also provide an evaluation framework for comparing reconstruction quality.
  • 10/28/2021 To make comparisons with Animatable NeRF on the Human3.6M dataset easier, we provide the quantitative results here, which also contain the results of other methods, including Neural Body, D-NeRF, Multi-view Neural Human Rendering, and Deferred Neural Human Rendering.

Animatable Implicit Neural Representations for Creating Realistic Avatars from Videos

[teaser figure]

Animatable Implicit Neural Representations for Creating Realistic Avatars from Videos
Sida Peng, Zhen Xu, Junting Dong, Qianqian Wang, Shangzhan Zhang, Qing Shuai, Hujun Bao, Xiaowei Zhou
TPAMI 2024

Animatable Neural Radiance Fields for Modeling Dynamic Human Bodies
Sida Peng, Junting Dong, Qianqian Wang, Shangzhan Zhang, Qing Shuai, Xiaowei Zhou, Hujun Bao
ICCV 2021

Any questions or discussions are welcome!

Installation

Please see INSTALL.md for manual installation.

Run the code on Human3.6M

Since the license of the Human3.6M dataset does not allow us to distribute its data, we cannot release the processed Human3.6M dataset publicly. If you are interested in the processed data, please email me.

We provide the pretrained models here.

Test on Human3.6M

The command lines for testing are recorded in test.sh.

Take the test on S9 as an example.

  1. Download the corresponding pretrained models, and put them at $ROOT/data/trained_model/deform/aninerf_s9p/latest.pth and $ROOT/data/trained_model/deform/aninerf_s9p_full/latest.pth.

  2. Test on training human poses:

    python run.py --type evaluate --cfg_file configs/aninerf_s9p.yaml exp_name aninerf_s9p resume True
  3. Test on unseen human poses:

    python run.py --type evaluate --cfg_file configs/aninerf_s9p.yaml exp_name aninerf_s9p_full resume True aninerf_animation True init_aninerf aninerf_s9p test_novel_pose True

Visualization on Human3.6M

Take the visualization on S9 as an example.

  1. Download the corresponding pretrained models, and put them at $ROOT/data/trained_model/deform/aninerf_s9p/latest.pth and $ROOT/data/trained_model/deform/aninerf_s9p_full/latest.pth.

  2. Visualization:

    • Visualize novel views of the 0-th frame
    python run.py --type visualize --cfg_file configs/aninerf_s9p.yaml exp_name aninerf_s9p resume True vis_novel_view True begin_ith_frame 0
    • Visualize views of dynamic humans with the 3rd camera
    python run.py --type visualize --cfg_file configs/aninerf_s9p.yaml exp_name aninerf_s9p resume True vis_pose_sequence True test_view "3,"
    • Visualize mesh
    # generate meshes
    python run.py --type visualize --cfg_file configs/aninerf_s9p.yaml exp_name aninerf_s9p vis_posed_mesh True
  3. The results of visualization are located at $ROOT/data/novel_view/aninerf_s9p and $ROOT/data/novel_pose/aninerf_s9p.
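For quick inspection, the sketch below (not part of the repository) stitches rendered frames into a video with OpenCV. It assumes the renderer writes per-frame .png images directly into the result directory; the glob pattern and output name are illustrative and should be adapted to the actual layout on disk.

import glob
import cv2

# Assumed layout: one .png per rendered frame in the novel-view result directory.
frame_paths = sorted(glob.glob('data/novel_view/aninerf_s9p/*.png'))
frames = [cv2.imread(p) for p in frame_paths]

h, w = frames[0].shape[:2]
writer = cv2.VideoWriter('aninerf_s9p_novel_view.mp4',
                         cv2.VideoWriter_fourcc(*'mp4v'), 10, (w, h))
for frame in frames:
    writer.write(frame)  # all frames must share the same resolution
writer.release()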

Training on Human3.6M

Take the training on S9 as an example. The command lines for training are recorded in train.sh.

  1. Train:

    # training
    python train_net.py --cfg_file configs/aninerf_s9p.yaml exp_name aninerf_s9p resume False
    
    # training the blend weight fields of unseen human poses
    python train_net.py --cfg_file configs/aninerf_s9p.yaml exp_name aninerf_s9p_full resume False aninerf_animation True init_aninerf aninerf_s9p
  2. Tensorboard:

    tensorboard --logdir data/record/deform

Run the code on ZJU-MoCap

If you want to download the ZJU-MoCap dataset, please fill in the agreement, and email me ([email protected]) and cc Xiaowei Zhou ([email protected]) to request the download link.

We provide the pretrained models here.

Test on ZJU-MoCap

The command lines for testing are recorded in test.sh.

Take the test on 313 as an example.

  1. Download the corresponding pretrained models, and put them at $ROOT/data/trained_model/deform/aninerf_313/latest.pth and $ROOT/data/trained_model/deform/aninerf_313_full/latest.pth.

  2. Test on training human poses:

    python run.py --type evaluate --cfg_file configs/aninerf_313.yaml exp_name aninerf_313 resume True
  3. Test on unseen human poses:

    python run.py --type evaluate --cfg_file configs/aninerf_313.yaml exp_name aninerf_313_full resume True aninerf_animation True init_aninerf aninerf_313 test_novel_pose True

Visualization on ZJU-MoCap

Take the visualization on 313 as an example.

  1. Download the corresponding pretrained models, and put them at $ROOT/data/trained_model/deform/aninerf_313/latest.pth and $ROOT/data/trained_model/deform/aninerf_313_full/latest.pth.

  2. Visualization:

    • Visualize novel views of the 0-th frame
    python run.py --type visualize --cfg_file configs/aninerf_313.yaml exp_name aninerf_313 resume True vis_novel_view True begin_ith_frame 0
    • Visualize views of dynamic humans with the 0th camera
    python run.py --type visualize --cfg_file configs/aninerf_313.yaml exp_name aninerf_313 resume True vis_pose_sequence True test_view "0,"
    • Visualize mesh
    # generate meshes
    python run.py --type visualize --cfg_file configs/aninerf_313.yaml exp_name aninerf_313 vis_posed_mesh True
  3. The results of visualization are located at $ROOT/data/novel_view/aninerf_313 and $ROOT/data/novel_pose/aninerf_313.

Training on ZJU-MoCap

Take the training on 313 as an example. The command lines for training are recorded in train.sh.

  1. Train:

    # training
    python train_net.py --cfg_file configs/aninerf_313.yaml exp_name aninerf_313 resume False
    
    # training the blend weight fields of unseen human poses
    python train_net.py --cfg_file configs/aninerf_313.yaml exp_name aninerf_313_full resume False aninerf_animation True init_aninerf aninerf_313
  2. Tensorboard:

    tensorboard --logdir data/record/deform

Extended Version

Additional training and test command lines are recorded in train.sh and test.sh.

Moreover, we compiled a list of all the commands to run in extension.sh, using the S9 sequence of the Human3.6M dataset.

This includes training, evaluating, and visualizing the original Animatable NeRF implementation and all three extended versions.

Here we list the portion of the commands for the SDF-PDF configuration:

# extension: anisdf_pdf

# evaluating on training poses for anisdf_pdf
python run.py --type evaluate --cfg_file configs/sdf_pdf/anisdf_pdf_s9p.yaml exp_name anisdf_pdf_s9p resume True

# evaluating on novel poses for anisdf_pdf
python run.py --type evaluate --cfg_file configs/sdf_pdf/anisdf_pdf_s9p.yaml exp_name anisdf_pdf_s9p resume True test_novel_pose True

# visualizing novel view of 0th frame for anisdf_pdf
python run.py --type visualize --cfg_file configs/sdf_pdf/anisdf_pdf_s9p.yaml exp_name anisdf_pdf_s9p resume True vis_novel_view True begin_ith_frame 0

# visualizing animation of 3rd camera for anisdf_pdf
python run.py --type visualize --cfg_file configs/sdf_pdf/anisdf_pdf_s9p.yaml exp_name anisdf_pdf_s9p resume True vis_pose_sequence True test_view "3,"

# generating posed mesh for anisdf_pdf
python run.py --type visualize --cfg_file configs/sdf_pdf/anisdf_pdf_s9p.yaml exp_name anisdf_pdf_s9p vis_posed_mesh True

# training base model for anisdf_pdf
python train_net.py --cfg_file configs/sdf_pdf/anisdf_pdf_s9p.yaml exp_name anisdf_pdf_s9p resume False

To run Animatable NeRF on other officially supported datasets, simply change the --cfg_file and exp_name parameters.

Note that for Animatable NeRF with a pose-dependent displacement field (NeRF-PDF) and Animatable Neural Surface with a pose-dependent displacement field (SDF-PDF), there is no need to train the blend weight fields for unseen human poses.

MonoCap dataset

MonoCap is a dataset assembled by the authors of Animatable SDF from the DeepCap and DynaCap datasets.

Since the licenses of the DeepCap and DynaCap datasets do not allow us to distribute their data, we cannot release the processed MonoCap dataset publicly. If you are interested in the processed data, please download the raw data from here and email me for instructions on how to process them.

SyntheticHuman Dataset

SyntheticHuman is a dataset created by the authors of Animatable SDF. It contains multi-view videos of 3D humans rendered from characters in the RenderPeople dataset, along with the ground-truth 3D models.

Since the license of the RenderPeople dataset does not allow distribution of the 3D models, we cannot release the processed SyntheticHuman dataset publicly. If you are interested in this dataset, please email me for instructions on how to generate the data.

Citation

If you find this code useful for your research, please cite us using the following BibTeX entries.

@article{peng2024animatable,
    title={Animatable Implicit Neural Representations for Creating Realistic Avatars from Videos},
    author={Peng, Sida and Xu, Zhen and Dong, Junting and Wang, Qianqian and Zhang, Shangzhan and Shuai, Qing and Bao, Hujun and Zhou, Xiaowei},
    journal={TPAMI},
    year={2024},
    publisher={IEEE}
}

@inproceedings{peng2021animatable,
  title={Animatable Neural Radiance Fields for Modeling Dynamic Human Bodies},
  author={Peng, Sida and Dong, Junting and Wang, Qianqian and Zhang, Shangzhan and Shuai, Qing and Zhou, Xiaowei and Bao, Hujun},
  booktitle={ICCV},
  year={2021}
}

animatable_nerf's People

Contributors

dendenxu, pengsida


animatable_nerf's Issues

About the open-source release date

Another great work following Neural Body!

I would like to ask roughly when the code will be open-sourced.

Why need to retrain the blend weight fields of unseen human poses?

Congrats on this great work, and thanks a lot for open-sourcing this project!

  1. I wonder why we need to retrain the blend weight fields for unseen human poses. It seems we could directly use the NeRF and the neural blend weight field for 3D animation.

  2. And what does 'Rh' mean? Is it the same as global_orient in SMPL?

    # transform smpl from the world coordinate to the smpl coordinate
    params_path = os.path.join(self.data_root, cfg.params,
                               '{}.npy'.format(i))
    params = np.load(params_path, allow_pickle=True).item()
    Rh = params['Rh'].astype(np.float32)
    Th = params['Th'].astype(np.float32)
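For reference, a minimal sketch (not from this repository) of how Rh and Th are typically applied in Neural Body-style data loaders, assuming Rh is an axis-angle rotation of the whole body and Th a global translation; cv2.Rodrigues turns the axis-angle vector into a rotation matrix. Whether Rh coincides with SMPL's global_orient depends on how the parameters were fitted, so treat this purely as an illustration:

import cv2
import numpy as np

params = np.load('0.npy', allow_pickle=True).item()     # illustrative path
Rh = params['Rh'].astype(np.float32)                    # (1, 3) axis-angle rotation
Th = params['Th'].astype(np.float32)                    # (1, 3) translation

R = cv2.Rodrigues(Rh)[0].astype(np.float32)             # (3, 3) rotation matrix
world_pts = np.random.rand(100, 3).astype(np.float32)   # stand-in for world-space points
smpl_pts = np.dot(world_pts - Th, R)                    # points in the SMPL coordinate frame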

Confusion about the code 'pind = pnorm < norm_th'

Thank you very much for open-sourcing such a great paper! While studying the code I ran into a question. In lines 121-127 here:

    pnorm = init_pbw[:, -1]
    norm_th = cfg.norm_th
    pind = pnorm < norm_th
    pind[torch.arange(len(pnorm)), pnorm.argmin(dim=1)] = True
    pose_pts = pose_pts[pind][None]
    viewdir = viewdir[pind[0]]
    dists = dists[pind[0]]

I don't quite understand why the last dimension of the blend weights (the value of the last joint), pnorm, is used to constrain the points. What is the physical meaning of the pind variable produced by this constraint?

Looking forward to your reply, and thanks in advance!

blend weight matrix

Hi, thanks for your great work!

I'm wondering how to get SMPL's blend weight matrix (w in Equation (3) of the paper).
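For reference, a minimal sketch (not from this repository) of one way to obtain the SMPL skinning weights via the smplx package; the model path is an assumption, and the same (6890, 24) matrix can also be read from the 'weights' entry of the official SMPL model pickle:

import smplx

# Hypothetical model path; point this at a directory containing the SMPL model files.
model = smplx.create('data/smpl_models', model_type='smpl', gender='neutral')
weights = model.lbs_weights          # torch tensor of shape (6890, 24)
print(weights.shape)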

from psbody.mesh import Mesh

Hi. I have installed meshlite, but I still get this error:

from psbody.mesh import Mesh
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named mesh

In prepare_blend_weights.py.

Dataset Release

Hi,

great work, I am excited to try out your codebase! :)
I was wondering if you have the dataset available somewhere as the link in the README (Project Page | Video | Paper | Data) does not work for me?
Are you planning to release the data soon, too?

Best,
Katja

Question of custom data

Hi, I'm having some problems using my own data with prepare_lbs_meta.py. I tried to test the code with the CAPE dataset. The parameters of CAPE are:

for k, v in cape_param.items():
    print(k)
    print(v.shape)

# printed output:
# transl   (3,)
# v_cano   (6890, 3)
# pose     (72,)
# v_posed  (6890, 3)

I performed the following processing to fit prepare_lbs_meta.py:

vertices = cape_param["v_posed"].reshape(6890, 3)

shapes = betas.reshape(1, 10)
Th = cape_param["transl"].reshape(1, 3)
Rh = cape_param["pose"][:, :3].reshape(1, 3)
poses = cape_param["pose"].reshape(1, 72)
poses[:, :3] = 0
save_dict = {"shapes":shapes, "poses":poses, "Rh":Rh, "Th":Th, }

out_param = os.path.join(param_dir, f'{i:04d}')
out_verti = os.path.join(vertices_dir, f'{i:04d}')
np.save(out_param, save_dict)
np.save(out_verti, vertices)

But the result of get_tpose_blend_weights() does not seem right:

[result image: from left to right, the posed vertices, the GT T-pose vertices, and the processed T-pose vertices]

I checked the code against SMPL and there didn't seem to be any problems.

Am I doing it the right way? Can you give me some advice? Thanks

No such file or directory: 'data/zju_mocap/CoreView_394/lbs/bigpose_vertices.npy'

Hi, Sida

I was trying to extract mesh results for sequences 392, 393, and 394 of the ZJU-MoCap dataset in order to do some comparisons. Since 392, 393, and 394 do not come with a preprocessed lbs folder, I ran tools/prepare_blend_weights.py to generate it. However, when generating meshes it still says "lbs/bigpose_vertices.npy" is missing. Could you share these files for 392, 393, and 394, or the script that generates bigpose_vertices.npy?

Thanks!

About testing on People_Snapshot

Hello, thank you very much for open-sourcing this excellent work. The datasets used in the paper are all multi-view videos; have you tested on single-camera videos? I trained on People_Snapshot and ran novel-view tests, but the results are not very good. Below are some of my test results.

[result images]

These results differ a lot from Neural Body's. I hope you can help me analyze the reason. Thank you very much!

Novel Poses

Thank you so much for sharing the code. Could you please let me know how I could input random poses to the trained model?

EOFError: Ran out of input

Hello, Sida. I have encountered an EOFError. Since I am using Windows, I assume that is what caused the problem. Is there any help you can give me? Thanks.


About using a custom dataset

Hi Sida, thank you very much for open-sourcing this work. I would like to ask how to train on my own data and how to process it. Is there any documentation you could provide? Thanks a lot.

About Equation 5 and its implementation in the code

Thank you very much for open-sourcing such excellent work. I have a question I would like to ask:

  1. In Equation (5), bw is obtained as norm(Fw + w_smpl), where norm(wi) = wi / sum(w).

  2. The code uses:
    bw = self.bw_fc(net)
    bw = torch.log(smpl_bw + 1e-9) + bw
    bw = F.softmax(bw, dim=1)

  3. When I simplify the code, the result is bw = norm(Fw * w_smpl). I am not sure whether I made a mistake in the calculation; I hope the authors can clarify this. Thank you very much.
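For reference, expanding the softmax in the snippet above (writing smpl_bw as $w$, the network output as $f$, and ignoring the $10^{-9}$ term) gives

\operatorname{softmax}_i(\log w_i + f_i)
  = \frac{\exp(\log w_i + f_i)}{\sum_j \exp(\log w_j + f_j)}
  = \frac{w_i\, e^{f_i}}{\sum_j w_j\, e^{f_j}}
  = \operatorname{norm}\bigl(w \odot e^{f}\bigr)_i ,

i.e. the code computes a normalized elementwise product of w_smpl with the exponentiated network output; whether this is meant to match the additive form written in Equation (5) is exactly what the question asks.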

Surface-guided rendering of AniSDF

Thanks for sharing the code.
It seems that the surface-guided rendering of AniSDF is not implemented in the code. Did I miss something?

questions about how to process H3.6m (lbs/bweights/*.npy)

When loading the Human3.6M dataset, I found that I cannot change the batch_size from 1 to any number larger than 1. It turns out that this is because the size of the parameter "pbw" changes between samples. The pbw is loaded from the lbs/bweights folder, which is provided by you. I want to process other datasets as well; what should I do? How can I make the pbw blend weights have the same size (n_batch, d, h, w, 25)? @pengsida

RuntimeError: cuDNN error / cannot join current thread

While I was trying to visualize novel views of the 0th frame on Human3.6M, I encountered the following error: cuDNN error.
My setup is:

  • Ubuntu: 20.04.4
  • CUDA: 10.1
  • torch: v1.4.0

cuda version

Is it necessary to use CUDA 10.0 to run animatable_nerf?

Thank you.

Can the ZJU-MoCap dataset be used for training?

Hello, thank you for open-sourcing this work.

I just applied for the Human3.6M dataset and have not received a reply yet. I would like to ask:

  1. Can I train with the ZJU-MoCap dataset?
  2. If so, the ZJU-MoCap dataset lacks the lbs-related content; how can I obtain it?

dataset about animatable sdf

Following your README, I directly ran the training code for Animatable SDF on Human3.6M. This reports an error: no such file or directory: 'data/h36m/S9/Posing/lbs/weights.npy'.

I cannot find weights.npy in the corresponding directory of the Human3.6M dataset. Do I need to generate this weights.npy separately?

A question about quantitative results

Thanks for the quantitative results you provided.

I found a file called "metrics.npy" in it. At first I thought the file recorded results in terms of MSE, PSNR, and SSIM, but what I read is only a 23-dimensional array. When we test the model, a file also called "metrics.npy" is produced, which does include the results in terms of MSE, PSNR, and SSIM.

Can you tell me what the npy file you uploaded contains?
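For reference, a minimal sketch (the path is illustrative) of how to inspect such an .npy file and see whether it stores a plain array or a pickled dictionary of metrics:

import numpy as np

data = np.load('metrics.npy', allow_pickle=True)
try:
    content = data.item()                      # succeeds if the file stores a pickled dict
    for key, value in content.items():
        print(key, np.asarray(value).shape)
except ValueError:
    print('plain array:', data.shape, data.dtype)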

Custom Dataset

Thanks for sharing the code. I am just wondering whether it is possible to run this on a custom dataset.

render unseen pose

Hi, Sida,
I've trained a model using HumanNeRF, which also has a canonical-pose representation. How can I use your method to render some unseen poses with my model?

Question about the last value of skinning weights in canonical/observation space

Hi,

First thanks for the nice code&paper!

I am trying to use your code on our own data. One thing I noticed is that the observation/canonical skinning weights (stored in lbs/bweights as well as in lbs/tbw.npy) are pre-computed and their last dimension has 25 values. Since the original SMPL skinning weights have only 24 values, my question is: how exactly is this 25th value defined, and how can I compute it properly?

thanks in advance!

Novel view render

Hi Sida,
Very nice work!
I have a question about the code for rendering a 360-degree view of the human. I'm very interested in it but could not understand it. Could anyone explain how this code works?

def gen_path(RT, center=None):
    lower_row = np.array([[0., 0., 0., 1.]])

    # transfer RT to camera_to_world matrix
    RT = np.array(RT)
    RT[:] = np.linalg.inv(RT[:])

    RT = np.concatenate([RT[:, :, 1:2], RT[:, :, 0:1],
                         -RT[:, :, 2:3], RT[:, :, 3:4]], 2)

    up = normalize(RT[:, :3, 0].sum(0))  # average up vector
    z = normalize(RT[0, :3, 2])
    vec1 = normalize(np.cross(z, up))
    vec2 = normalize(np.cross(up, vec1))
    z_off = 0

    if center is None:
        center = RT[:, :3, 3].mean(0)
        z_off = 1.3

    c2w = np.stack([up, vec1, vec2, center], 1)

    # get radii for spiral path
    tt = ptstocam(RT[:, :3, 3], c2w).T
    rads = np.percentile(np.abs(tt), 80, -1)
    rads = rads * 1.3
    rads = np.array(list(rads) + [1.])

    render_w2c = []
    for theta in np.linspace(0., 2 * np.pi, cfg.render_views + 1)[:-1]:
        # camera position
        cam_pos = np.array([0, np.sin(theta), np.cos(theta), 1] * rads)
        cam_pos_world = np.dot(c2w[:3, :4], cam_pos)
        # z axis
        z = normalize(cam_pos_world -
                      np.dot(c2w[:3, :4], np.array([z_off, 0, 0, 1.])))
        # vector -> 3x4 matrix (camera_to_world)
        mat = viewmatrix(z, up, cam_pos_world)

        mat = np.concatenate([mat[:, 1:2], mat[:, 0:1],
                              -mat[:, 2:3], mat[:, 3:4]], 1)
        mat = np.concatenate([mat, lower_row], 0)
        mat = np.linalg.inv(mat)
        render_w2c.append(mat)

    return render_w2c

Some question about zju_mocap dataset

Hi, thanks for your great work! I have some questions about the dataset:

  1. The npy files in the new_vertices folder are the SMPL meshes in world coordinates, right? I wonder what unit they use, meters or millimeters?
  2. Are the provided smpl_param (pose parameters) in the coordinate system defined by SMPL or in world coordinates? In other words, if I use smplx to get a mesh with the provided parameters, e.g.
    smpl_layer(pose, shape, th)
    will I get the same result as in the vertices folder?
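For reference, a minimal sketch (not from this repository) of how one could check this numerically with the smplx package. The model path, file paths, and parameter keys are assumptions, and the coordinate conventions of the stored parameters may well differ from plain SMPL output, which is exactly what the question is about:

import numpy as np
import torch
import smplx

# Hypothetical paths and key names; adjust them to your copy of the data.
model = smplx.create('data/smpl_models', model_type='smpl', gender='neutral')

params = np.load('CoreView_313/new_params/0.npy', allow_pickle=True).item()
poses = torch.from_numpy(params['poses']).float().reshape(1, 72)
betas = torch.from_numpy(params['shapes']).float().reshape(1, 10)
transl = torch.from_numpy(params['Th']).float().reshape(1, 3)

out = model(betas=betas,
            global_orient=poses[:, :3],
            body_pose=poses[:, 3:],
            transl=transl)
verts = out.vertices.detach().numpy()[0]               # (6890, 3)

stored = np.load('CoreView_313/new_vertices/0.npy')    # stored vertices for the same frame
print('max abs difference:', np.abs(verts - stored).max())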

visualize blend weight

Hi Sida,
I'd like to ask how I can visualize the initial blend weights or the learned blend weights.

confused about the yaml files

Hello, I read your ICCV 2021 paper and the new preprint, and I am confused about the experiment settings. In the paper "Animatable Implicit Neural Representations for Creating Realistic Avatars from Videos":

  1. Section 3.2.1 describes the method of the ICCV 2021 paper, and its corresponding config file is configs/aninerf_s9p.yaml.
  2. The config file for Section 3.2.2 is configs/aligned_nerf_pdf/aligned_aninerf_pdf_s9p.yaml (termed PDF).
  3. The config file for "PDF + SDF" is configs/sdf_pdf/anisdf_pdf_s9p.yaml.

Am I right? Also, I don't know what the config file configs/aligned_nerf_lbw/aligned_aninerf_lbw_s9p.yaml refers to...

What are A and big_A in blend weight?

Thank you for your excellent work.
I have a question: what are the meanings of A and big_A?

They appear around line 229 of ./lib/datasets/tpose_pdf_dataset.py:

    meta = {'A': A,
            'big_A': self.big_A,}

latent code

What is the meaning of the 'latent codes'? There is no detailed introduction to their origin and meaning in the paper.

Getting black image result

After downloading the Animatable NeRF (TPAMI) release provided by the author, I have been getting weird results, such as completely black images.
Could you give me any help?

When installing the libraries pointnet2 and PCPR to run NHR and NT, some errors occur

Hi, when I tried to install the pointnet2 and PCPR libraries to run the code in your project, I met these problems:

  1. Although I installed pointnet2 successfully, when I run the code I get this error:

[error screenshot]

  2. When I try to install PCPR, I cannot install it successfully.

[error screenshot]

How can I install them and run the code successfully? Thanks.

RuntimeError: cublas runtime error: the GPU program failed to execute at /pytorch/aten/src/THC/THCBlas.cu:368

While I was trying to visualize novel views of the 0th frame on Human3.6M, I changed the way I install PyTorch, but I still encounter the following error:
RuntimeError: cublas runtime error : the GPU program failed to execute at /pytorch/aten/src/THC/THCBlas.cu:368

I used pip install torch==1.4.0 torchvision==0.5.0 to install torch (exactly as described on the PyTorch website).

And my setup is:

  • Ubuntu: 20.04.4
  • CUDA: 10.1
  • torch: v1.4.0

I think that the PyTorch version does match the system CUDA.
Has anyone encountered the same problem and solved it?

Segmentation Fault(core dumped) error

Hello,
I tried to run your code but I get the error message "Segmentation Fault (core dumped)".

I used pytorch==1.11.0, CUDA 11.3, and torchvision 0.12.0.

Does your code only work with PyTorch 1.4.0 and CUDA 10?
