xharlie / disn

(latest updates and bug fixes) DISN: Deep Implicit Surface Network for High-quality Single-view 3D Reconstruction

Languages: Python 4.10%, Shell 0.33%, C++ 82.52%, Cuda 0.13%, CMake 0.69%, Makefile 1.04%, C 6.42%, Assembly 0.33%, JavaScript 0.14%, Batchfile 0.10%, HTML 2.68%, CSS 0.41%, Objective-C 0.08%, SWIG 0.09%, NASL 0.65%, PHP 0.17%, Pascal 0.12%
Topics: 3d-reconstruction, 3d-vision, camera-pose-estimation, mesh-generation, neurips-2019, nips-2019, nips-paper, paper, reconstructed-models, sdf, shapenet, shapenet-dataset, shapenetcore, single-view

disn's People

Contributors: xharlie

disn's Issues

pre-trained cam model cannot be loaded, cameraprediction/distratio/fc1/biases missing

When using the pre-trained cam model to predict the camera pose, it reports that the model cannot be loaded:
'Model loaded in file: checkpoint/cam_DISN/latest.ckpt
2022-03-24 20:48:42.002209: W tensorflow/core/framework/op_kernel.cc:1401] OP_REQUIRES failed at save_restore_v2_ops.cc:184 : Not found: Key cameraprediction/distratio/fc1/biases not found in checkpoint
Fail to load overall modelfile: checkpoint/cam_DISN'

How can I obtain this 'cameraprediction/distratio/fc1/biases' value?
Thanks a lot.
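
One way to diagnose this is to list the variables actually stored in the checkpoint and compare them against what the graph expects. A minimal sketch using TensorFlow's checkpoint inspection utility (the path is taken from the error message above):

# List every variable name and shape in the checkpoint, so the missing
# 'cameraprediction/distratio/fc1/biases' key can be compared against
# what is really stored there (e.g. a renamed scope).
import tensorflow as tf  # TF 1.x, as used by this repo

for name, shape in tf.train.list_variables("checkpoint/cam_DISN/latest.ckpt"):
    print(name, shape)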

camera location

Thanks for this great work!

I'm trying to compute the camera location from the predicted matrices, but I can't get the right values.
Could you please tell me which matrix I should use and how to get the camera location (x, y, z)?
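
For reference, a minimal sketch assuming a standard column-vector world-to-camera extrinsic [R | t] (which may differ from this repo's row-vector convention): the camera center in world coordinates is C = -R^T t.

# Recover the camera center from a 3x4 world-to-camera extrinsic [R | t]:
# x_cam = R @ x_world + t, so the center satisfies R @ C + t = 0,
# i.e. C = -R.T @ t.
import numpy as np

def camera_center(RT):
    R, t = RT[:, :3], RT[:, 3]
    return -R.T @ t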

create_img_h5 creating h5 file

Hi, I am trying to create .h5 files per the instructions, with the command below:
python -u preprocessing/create_img_h5.py

This program is expected to create .h5 files, but it actually expects an existing .h5 file, as this line shows:
sdf_fl = os.path.join(sdf_dir, vals, obj, "ori_sample.h5")

I need some guidance on what is missing, since my sdf_dir contains .sdf files, not .h5 files.
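
If a working sample is available, one way to see what create_img_h5.py expects is to inspect an existing ori_sample.h5 (a hedged sketch; the path is illustrative, and the file is presumably produced by the SDF preprocessing step rather than shipped alongside the .sdf files):

# Print every group/dataset name inside the expected sample file,
# to see which datasets preprocessing should have produced.
import h5py

with h5py.File("ori_sample.h5", "r") as f:
    f.visit(print)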

Question about the dataset and reconstructed meshes.

Recently I have been trying to reproduce the results of your amazing work, but I have two questions.

  1. I downloaded the dataset (SDF ground truth and marching-cube objs) from the link provided in README.md, but found that only part of the models in ShapeNet Core v1 appear in the downloaded files. I counted the missing models; the results are as follows. Is it because the pre-processing procedure cannot handle those models?

Counts:
class 03001627 remove 199/6778 records
class 02958343 remove 4405/7496 records
class 04256520 remove 14/3173 records
class 02691156 remove 79/4045 records
class 03636649 remove 2/2318 records
class 04401088 remove 490/1052 records
class 04530566 remove 88/1939 records
class 03691459 remove 24/1618 records
class 02933112 remove 29/1572 records
class 04379243 remove 125/8509 records
class 03211117 remove 2/1095 records
class 02828884 remove 8/1816 records
class 04090263 remove 1/2372 records

  2. Since you only provided the /isosurface/computeMarchingCubes executable, I don't know the detailed procedure inside it. I think it is basically a Marching Cubes algorithm that extracts a surface from DISN's SDF predictions. After retraining the network (strictly following the instructions in the README, using ground-truth camera parameters) and generating the meshes, I use trimesh.load to load each mesh and mesh.is_watertight to test whether it is watertight (see the sketch below). However, I find that nearly 10% of the meshes are not watertight, which seems to contradict the Marching Cubes algorithm in my view. Could you please explain why?
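
A minimal sketch of the watertightness check described above, using trimesh (the path is a placeholder):

# Load a reconstructed mesh and test whether it is watertight.
import trimesh

mesh = trimesh.load("isosurf.obj", force="mesh")
print(mesh.is_watertight)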

I'm hoping to receive your reply :)

different evaluation results

Thanks for uploading the code.
I set up the environment and downloaded the pre-trained model SDF_DISN.
Following the given config, I generated the iso objects for the watercraft category and ran the test code.

min_cf:3.0742388, arg_cf view:12, avg emd:4.351789, min_emd:1.5337949, arg_em view:12
1 /home/pingzi/nvme/datasets/DISN_datasets/march_cube_objs_v1/04530566/ffffe224db39febe288b05b36358465d/isosurf.obj avg cf:34.068142, min_cf:1.9948646, arg_cf view:6, avg emd:2.78306, min_emd:1.1900781, arg_em view:6
1 /home/pingzi/nvme/datasets/DISN_datasets/march_cube_objs_v1/04530566/5fb24b87514df43a82b0247bfa21216b/isosurf.obj avg cf:14.799217, min_cf:3.808662, arg_cf view:7, avg emd:2.4966927, min_emd:1.8077142, arg_em view:18
1 /home/pingzi/nvme/datasets/DISN_datasets/march_cube_objs_v1/04530566/8605c975778dc8634c634743f56177d4/isosurf.obj avg cf:11.427704, min_cf:3.1768718, arg_cf view:6, avg emd:2.4085941, min_emd:1.7498852, arg_em view:5
1 /home/pingzi/nvme/datasets/DISN_datasets/march_cube_objs_v1/04530566/98da594a65640370c8333f6c4d99e2c8/isosurf.obj avg cf:19.673376, min_cf:10.119444, arg_cf view:13, avg emd:4.026104, min_emd:2.98802, arg_em view:13
cat_nm:watercraft, cat_id:04530566, avg_cf:19.050472438177316, avg_emd:2.8231411160461466
done!

The resulting CD loss is 19.0504 and the EMD loss is 2.823, which is very different from your paper (CD: 10.87, EMD: 2.55).
I don't know what is wrong.
Could you help me to figure it out?

I visualized the models and they look correct, but sometimes there are outliers. Did you run any algorithm to filter outliers? (See the sketch after the screenshots.)

[Three screenshots of the visualized models]
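
Not necessarily what the authors did, but a common way to drop floating outlier pieces is to split the mesh into connected components and keep the largest one (a sketch using trimesh; the paths are placeholders):

# Split into connected components and keep the one with the most faces,
# discarding small floating outliers.
import trimesh

mesh = trimesh.load("isosurf.obj", force="mesh")
parts = mesh.split(only_watertight=False)
largest = max(parts, key=lambda m: len(m.faces))
largest.export("isosurf_clean.obj")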

Loss in training

Hi, I am trying to train on my newly rendered dataset, starting from your SDF_DISN checkpoint, but I found that the SDF loss hovers around 420-450 and stops dropping.
I use 224 x 224 untextured images with a white background from the bookshelf category (02871439) as the dataset.

More of the log:
-- 320 / 471 -- accuracy: 0.593915, sdf_loss_realvalue: 0.026241, sdf_loss: 451.525377, regularization: 0.893302, overall_loss: 452.418687, lr: 0.000100 time: 8.39, , fetch time per b: 0.00,
-- 340 / 471 -- accuracy: 0.581656, sdf_loss_realvalue: 0.025718, sdf_loss: 422.088892, regularization: 0.893302, overall_loss: 422.982202, lr: 0.000100 time: 8.35, , fetch time per b: 0.00,
-- 360 / 471 -- accuracy: 0.590842, sdf_loss_realvalue: 0.026418, sdf_loss: 456.267525, regularization: 0.893302, overall_loss: 457.160835, lr: 0.000100 time: 8.39, , fetch time per b: 0.00,
-- 380 / 471 -- accuracy: 0.589285, sdf_loss_realvalue: 0.025546, sdf_loss: 420.783688, regularization: 0.893303, overall_loss: 421.676999, lr: 0.000100 time: 8.36, , fetch time per b: 0.00,
-- 400 / 471 -- accuracy: 0.590341, sdf_loss_realvalue: 0.026002, sdf_loss: 436.560864, regularization: 0.893304, overall_loss: 437.454175, lr: 0.000100 time: 8.42, , fetch time per b: 0.00,

Am I doing this right?

Thank you

Question about training data and projections

Hi, thank you for your great work. I have a question regarding training data and how projections work:

If I render my own dataset and save the camera parameters in "rendering_metadata.txt", is that enough to get correct projections? Do I need to change intrinsic parameters somewhere in the code to match my rendering settings?

Thanks in advance.

ValueError: Invalid checkpoint state loaded from

Hi, if I use a newly rendered 2D image dataset, should I just go through the last two bullet-point commands in the section "Camera parameters estimation network"?

But I'm getting

Traceback (most recent call last):
  File "cam_est/train_sdf_cam.py", line 814, in <module>
    train()
  File "cam_est/train_sdf_cam.py", line 483, in train
    ckptstate = tf.train.get_checkpoint_state(PRETRAINED_MODEL_PATH)
  File "/home/johnG/anaconda3/envs/tf_1.13.0/lib/python3.6/site-packages/tensorflow/python/training/checkpoint_management.py", line 278, in get_checkpoint_state
    + checkpoint_dir)
ValueError: Invalid checkpoint state loaded from

when running this command

 python -u cam_est/train_sdf_cam.py --log_dir cam_est/checkpoint/{your checkpoint dir} --gpu 0 --loss_mode 3D --learning_rate 1e-4 --src_h5_dir {your new rendered images' h5 directory} --img_h 224 --img_w 224 

Any idea how to solve this?
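
For what it's worth, in TensorFlow 1.x this ValueError is raised by get_checkpoint_state when a 'checkpoint' state file exists in the directory but lists no valid model_checkpoint_path, so checking the state file may help (a hedged sketch; the directory is a placeholder for your PRETRAINED_MODEL_PATH):

# Check whether the 'checkpoint' state file exists and what it points to.
import os
import tensorflow as tf  # TF 1.x

ckpt_dir = "checkpoint/SDF_DISN"  # placeholder pretrained-model directory
print(os.path.exists(os.path.join(ckpt_dir, "checkpoint")))
state = tf.train.get_checkpoint_state(ckpt_dir)  # None if no state file exists
print(state)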

Unable to run demo.py

Is there a requirements file to figure out which versions of Python and TensorFlow the demo code runs with?

If I enable --cam_est as mentioned in the README, TF fails to load the checkpoint even though I downloaded the data from the mentioned link. The following is the error I get, and later it fails on session.run:
"Fail to load overall modelfile: cam_est/checkpoint/cam_DISN/latest.ckpt"

If I don't enable --cam_est, I get the same issue when loading the SDF_DISN checkpoint. Does anybody else face this issue, or have any clue as to why I am facing it?

I am using TensorFlow 1.15.

Vega FEM version?

Thanks for open-sourcing this code! I'm trying to find the transformation between the original ShapeNet objects and the data you generate via the provided binaries, for metric calculations. Can you advise which version of Vega FEM you compiled and provided in this repo?
(link to Vega FEM library versions)

Am I doing something wrong?

Thanks for sharing this with the community!

Should we expect a slight misalignment between the SDFs in "sdf_dir" and the marching-cube reconstructed ground-truth models in "norm_mesh_dir"?

If I project the SDF surface samples (obtained as np.abs(pc_sdf_sample[:,3]) < 0.005) and the mesh vertices onto the renders using
trans_mat = np.linalg.multi_dot([K, RT, rot_mat, W2O_mat, norm_mat])
I notice a slight misalignment for the mesh vertices (see the second attached screenshot; a sketch of the projection follows the screenshots).

Is this expected, or am I missing something? Thanks!!

[Two screenshots: projected SDF surface samples, and mesh vertices showing the slight misalignment]
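
For reference, a minimal sketch of this projection under assumed shapes (all matrices below are identity placeholders; in practice they come from the dataset's camera metadata, and the row-vector convention follows the 4x3 trans_mat printed by the repo's demo log):

# Compose the normalized-object-to-pixel transform and apply it to
# homogeneous points. Every matrix here is a placeholder with a
# plausible shape, not the repo's actual values.
import numpy as np

K = np.eye(3, 4)      # placeholder intrinsics (3x4)
RT = np.eye(4)        # placeholder extrinsics
rot_mat = np.eye(4)   # placeholder axis-alignment rotation
W2O_mat = np.eye(4)   # placeholder world-to-object transform
norm_mat = np.eye(4)  # placeholder normalization transform

trans_mat = np.linalg.multi_dot([K, RT, rot_mat, W2O_mat, norm_mat]).T  # 4x3

pts = np.random.rand(100, 3)                                # surface samples
homo_pc = np.concatenate([pts, np.ones((100, 1))], axis=1)  # N x 4
proj = homo_pc @ trans_mat                                  # N x 3
uv = proj[:, :2] / proj[:, 2:3]                             # pixel coordinates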

Can't reproduce the results with cam estimation

Hello @Xharlie,

Thanks for sharing this impressive work.

I was trying to reproduce the results in your paper using images from your 'ShapenetRender_more_variation' dataset. I got the expected results, matching the paper, with ground-truth camera parameters. However, the results are quite different from the paper when I use the estimated camera parameters, as shown in the image below.

[Screenshot: reconstructions obtained with estimated camera parameters]

I am using the demo.py script with the provided checkpoints for both the DISN module and the camera estimation module.

Please let me know if I am missing something.

Best Regards,
Sami

Could you please provide a detailed requirements.txt

It can easily be exported via pip freeze > requirements.txt if you are using a virtual environment.

I am stuck linking to some dynamic libraries; it seems I am missing some dependencies.

The following is the result I get from pip freeze > requirements.txt:

absl-py==0.8.0
astor==0.8.0
backcall==0.1.0
certifi==2019.9.11
decorator==4.4.0
gast==0.3.2
grpcio==1.16.1
h5py==2.8.0
ipykernel==5.1.2
ipython==7.8.0
ipython-genutils==0.2.0
jedi==0.15.1
joblib==0.13.2
jupyter-client==5.3.3
jupyter-core==4.5.0
Markdown==3.1.1
mkl-fft==1.0.14
mkl-random==1.1.0
mkl-service==2.3.0
networkx==2.3
nose==1.3.7
numpy==1.16.5
parso==0.5.1
pexpect==4.7.0
pickleshare==0.7.5
prompt-toolkit==2.0.9
protobuf==3.9.2
ptyprocess==0.6.0
Pygments==2.4.2
pymesh==1.0.2
pymesh2==0.2.1
python-dateutil==2.8.0
pyzmq==18.1.0
scipy==1.3.1
six==1.12.0
tensorboard==1.10.0
tensorflow==1.10.0
termcolor==1.1.0
tornado==6.0.3
traitlets==4.3.2
trimesh==2.37.20
wcwidth==0.1.7
Werkzeug==0.16.0

Besides these, I built PyMesh from source and installed OpenCV via conda install -c menpo opencv.

Problem running the code

Hi @Xharlie @walsvid, I am trying to run demo.py. I have installed all the libraries mentioned in requirements.txt. My LIB_PATH is export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:./isosurface/:./Users/dhornala.bharadwaj/Documents/DISN-master/DISN-master/isosurface/tbb/tbb2018_20180822oss/lib/intel64/gcc4.7:/opt/intel/lib/intel64:/opt/intel/mkl/lib/intel64:/usr/local/lib64:/usr/local/lib:/usr/local/cuda/lib64

When I run the command python -u demo/demo.py --cam_est --log_dir checkpoint/SDF_DISN --cam_log_dir cam_est/checkpoint/cam_DISN --img_feat_twostream --sdf_res 256, I get this error:
InvalidArgumentError (see above for traceback): No OpKernel was registered to support Op 'Resampler' with these attrs. Registered devices: [CPU], Registered kernels:

[[Node: resampler/Resampler = Resampler[T=DT_FLOAT](ResizeBilinear_1, Minimum)]]
Please advise. Thanks!

libtbb_preview.so.2: cannot open shared object file: No such file or directory

When I try to use ./isosurface/computeMarchingCubes or generate SDF data,
I get this error:
error while loading shared libraries: libtbb_preview.so.2: cannot open shared object file: No such file or directory

I have already installed libtbb, and I can find libtbb.so.2. But how do I get libtbb_preview.so.2?

Thanks!

libtbb.so "file too short" error

Hi Xharlie,

I could not install pymesh2 or pymesh correctly; I tried different PyMesh packages and none of them worked. After building PyMesh from source as mentioned in PyMesh/PyMesh#59,
I got the following error when I ran python preprocessing/create_point_sdf_grid.py --thread 8:
Traceback (most recent call last):
File "preprocessing/create_point_sdf_grid.py", line 5, in
import pymesh
File "/home/ubuntu/anaconda3/envs/tf13/lib/python3.6/site-packages/pymesh2-0.3-py3.6-linux-x86_64.egg/pymesh/init.py", line 18, in
from .Mesh import Mesh
File "/home/ubuntu/anaconda3/envs/tf13/lib/python3.6/site-packages/pymesh2-0.3-py3.6-linux-x86_64.egg/pymesh/Mesh.py", line 5, in
import PyMesh
ImportError: /home/ubuntu/DISN/isosurface/tbb/tbb2018_20180822oss/lib/intel64/gcc4.7/libtbb.so: file too short
I cannot tell whether the error is caused by the libtbb.so file being truncated ("file too short") or by the PyMesh library itself.

Any help would be greatly appreciated

Different evaluation results

Thank you for sharing the code and model.

I used the pre-trained model to generate the predicted camera trans_mat.
Then I used the SDF_DISN model to generate objects in the chair category.
I evaluated CD, EMD, and IoU using the given code
and got 'avg_cf:9.560469307306612, avg_emd:2.9551014315527326 iou_avg: 0.5158811568751639',
which is worse than the values reported in the paper (7.71, 2.67, 54.3).
After I ran the clean-small-parts code, I tested again:
'cat_id:03001627, avg_cf:17.86277202301404, avg_emd:3.270745912187432
cat_id: 03001627, iou_avg: 0.5100998931483642'
However, the result was worse than before cleaning small parts.

Am I doing anything wrong? How can I get the values reported in the paper?
Could anyone help me? Thanks a lot.

In the inference stage for new data

After training, I want to create 3D objects from new images.

However, as I understand the code, the values of sdf_params are needed to reconstruct an object.
How can I get these values with only the test images?

LIB_PATH ?

" Change corresponding libary path in your system in isosurface/LIB_PATH "

I can't understand it.

Here is your LIB_PATH file:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:./isosurface/:/home/xharlie/dev/isosurface/tbb/tbb2018_20180822oss/lib/intel64/gcc4.7:/opt/intel/lib/intel64:/opt/intel/mkl/lib/intel64:/usr/local/lib64:/usr/local/lib:/usr/local/cuda/lib64

I can't understand this part: /home/xharlie/dev/isosurface/tbb/tbb2018_20180822oss/lib/intel64/gcc4.7

What is tbb? What is tbb2018_20180822oss? Can you explain? Thanks.

how to render to get an image

My question is: how do I examine the rotation result?
If we have an obj, we can sample points from it; with the pose we get, we can render an image from those points. Am I correct?
Could you provide the script to reproduce the results in your paper?

some questions about code and training

Q1: TOTAL_POINTS / 214669.0 in test/create_sdf.py (line 72)
Why did you divide TOTAL_POINTS by 214669.0?
Is it because of the amount of GPU memory?
How did you get the number 214669.0? (See the sketch after these questions.)

Q2: How many epochs did you train the camera estimation and the SDF estimation (with ground-truth camera pose), respectively?
Q3: How long does each epoch take when training the SDF estimation with the ground-truth camera pose?
Thanks a lot^_^
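
A guess at the rationale for Q1, sketched below: the dense SDF query grid is split into fixed-size batches so that each forward pass fits in GPU memory, and 214669 would then be the per-batch point count (compare the demo log elsewhere on this page, "all_pts (80, 1, 212183, 3)": a 257^3 grid divided by 214669 rounds up to 80 batches). The numbers are illustrative, not confirmed:

# Batch an (sdf_res + 1)^3 query grid into fixed-size chunks for inference.
import math

res = 257           # sdf_res 256 -> 257 samples per axis
total = res ** 3    # 16,974,593 query points
batch = 214669      # divisor used in test/create_sdf.py
print(math.ceil(total / batch))  # 80 batches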

log???

nohup: ignoring input
/home/cvpruser/zsf/luzhiqing/3D/DISN/models/models
pid: 16227
Namespace(alpha=False, augcolorback=False, augcolorfore=False, backcolorwhite=False, batch_size=1, binary=False, cam_est=True, cam_log_dir='cam_est/checkpoint/cam_DISN', cat_limit=168000, category='all', create_obj=False, decay_rate=0.9, decay_step=200000, gpu='0', img_feat_onestream=False, img_feat_twostream=True, img_h=137, img_w=137, iso=0.0, learning_rate=0.0001, log_dir='checkpoint/SDF_DISN', loss_mode='3D', max_epoch=1, multi_view=False, num_classes=1024, num_points=1, num_sample_points=1, rot=False, sdf_res=256, shift=False, store=False, tanh=False, test_lst_dir='./data/filelists', threedcnn=False, view_num=24)
RESULT_OBJ_PATH: ./demo/
HOSTNAME: o
checkpoint/SDF_DISN
here we use our cam est network to estimate cam parameters:
Tensor("Placeholder:0", shape=(), dtype=bool, device=/device:GPU:0)
WARNING:tensorflow:From /home/cvpruser/zsf/luzhiqing/luFF/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
--- cam Get model_cam and loss
odict_keys(['vgg_16/conv1/conv1_1', 'vgg_16/conv1/conv1_2', 'vgg_16/pool1', 'vgg_16/conv2/conv2_1', 'vgg_16/conv2/conv2_2', 'vgg_16/pool2', 'vgg_16/conv3/conv3_1', 'vgg_16/conv3/conv3_2', 'vgg_16/conv3/conv3_3', 'vgg_16/pool3', 'vgg_16/conv4/conv4_1', 'vgg_16/conv4/conv4_2', 'vgg_16/conv4/conv4_3', 'vgg_16/pool4', 'vgg_16/conv5/conv5_1', 'vgg_16/conv5/conv5_2', 'vgg_16/conv5/conv5_3', 'vgg_16/pool5', 'vgg_16/fc6', 'vgg_16/fc7', 'vgg_16/fc8'])
x (1, 3) y (1, 3) z (1, 3)
matrix (1, 3, 3)
trans_mat (1, 4, 3)
homo_pc.get_shape() (1, 1, 4)
pc_xyz.get_shape() (1, 1, 3)
homo_pc.get_shape() (1, 1, 4)
pc_xyz.get_shape() (1, 1, 3)
gt_xy, pred_xy (1, 1, 2) (1, 1, 2)
WARNING:tensorflow:From /home/cvpruser/zsf/luzhiqing/3D/DISN/cam_est/model_cam.py:232: get_regularization_losses (from tensorflow.contrib.losses.python.losses.loss_ops) is deprecated and will be removed after 2016-12-30.
Instructions for updating:
Use tf.losses.get_regularization_losses instead.
--- Get training operator
2019-10-31 21:58:05.544934: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-10-31 21:58:05.604962: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2100105000 Hz
2019-10-31 21:58:05.669197: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x571e4f0 executing computations on platform Host. Devices:
2019-10-31 21:58:05.669311: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): ,
WARNING:tensorflow:From /home/cvpruser/zsf/luzhiqing/luFF/lib/python3.7/site-packages/tensorflow/python/training/saver.py:1266: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
model_cam loaded in file: cam_est/checkpoint/cam_DISN/latest.ckpt
pred_trans_mat_val [[[-66.20531 3.0475807 -0.37013677]
[-15.545382 -81.84503 -0.22653782]
[-45.68309 -1.9617825 0.24275717]
[100.045494 99.63471 1.4153072 ]]]
Tensor("Placeholder:0", shape=(), dtype=bool)
--- Get model and loss
homo_pc.get_shape() (1, 212183, 4)
pc_xyz.get_shape() (1, 212183, 3)
point_vgg_conv1 (1, 212183, 64)
point_vgg_conv2 (1, 212183, 128)
point_vgg_conv3 (1, 212183, 256)
point_vgg_conv4 (1, 212183, 512)
point_vgg_conv5 (1, 212183, 512)
point_img_feat (1, 212183, 1, 1472)
net2 (1, 212183, 1, 512)
globalfeats_expand (1, 212183, 1, 1024)
gt_sdf (1, 212183, 1)
pred_sdf (1, 212183, 1)
WARNING:tensorflow:From /home/cvpruser/zsf/luzhiqing/3D/DISN/models/model_normalization.py:285: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
Model loaded in file: checkpoint/SDF_DISN/latest.ckpt
2019-10-31 21:58:12.730879
all_pts (80, 1, 212183, 3)

I want to know: when I run
nohup python -u demo/demo.py --cam_est --log_dir checkpoint/SDF_DISN --cam_log_dir cam_est/checkpoint/cam_DISN --img_feat_twostream --sdf_res 256 &> log/create_sdf.log &
the only terminal output is:
[2] 16227
Could you give me an answer? Why? What is it?

Colab Demo

Is it possible to get a demo on Google Colab, so that people (me :)) can run it without physically having Ti GPU cards?

Or is this not possible?

Training from Scratch

Hi there,

Thanks for releasing the code; it is amazing work!
I tried to train the network from scratch, following all the steps mentioned in the README file, but I couldn't match the results of the pretrained model.

I was wondering which hyperparameters were used for the pretrained model. Are they the same as the defaults in train_sdf.py?
How many epochs did you train to get the best accuracy?
Also, which dataset was used for training: the old one or the new one mentioned in the README?

code problem: why is sdf_pt_rot output twice in sdf_train?

In ./train/sdf_train.py, lines 417/418 and 427/428:
why is "batch_data['sdf_pt_rot'][bid,:,:]" output twice, once named 'pred.obj' and once named 'gt.obj'? They are the same.
Also, during training, batch_data['sdf_pt_rot'] equals batch_data['sdf_pt'].
So no 3D obj is produced during sdf_train, right? Just 1D pred_sdf values of shape [B, 2048, 1], right?
I have only run sdf_train so far and haven't run 'inference sdf and create mesh objects'.
