
river-zhang / sifu

[CVPR 2024 Highlight] Official repository for paper "SIFU: Side-view Conditioned Implicit Function for Real-world Usable Clothed Human Reconstruction"

Home Page: https://river-zhang.github.io/SIFU-projectpage/

License: MIT License

Languages: Python 98.64%, GLSL 0.89%, Shell 0.47%
Topics: 3d-reconstruction, 3d-vision, clothed-humans, clothed-people-digitalization, computer-vision, digital-human, pifu, pifuhd, python, pytorch

sifu's People

Contributors: river-zhang

sifu's Issues

infer error

Thanks for your great work! When I ran the inference command python -m apps.infer -cfg ./configs/sifu.yaml -gpu 0 -in_dir ./examples_ -out_dir ./results -loop_smpl 100 -loop_cloth 200 -hps_type pixie, it showed this error:
[screenshot of the error]
and when I use faulthandler, it shows:
[screenshot of the faulthandler output]
Could you help me with this problem? Thanks a lot!

Problems with dependencies and environment

Good afternoon!

I tried to set up the environment on a clean Ubuntu 20.04 (WSL on Windows 11), following the instructions:

git clone https://github.com/River-Zhang/SIFU.git
sudo apt-get install libeigen3-dev ffmpeg
cd SIFU
conda env create -f environment.yaml
conda activate sifu
pip install -r requirements.txt

However, I encountered several missing dependencies, conflicts, and other issues. For example, I first needed to install the build essentials with:
sudo apt-get install build-essential

And for the mkl package, it is necessary to install version 2024.0.0, otherwise an error occurs (pytorch/pytorch#123097).

Eventually, I forked the repository https://github.com/levnikolaevich/SIFU and tried to fix the packages, but I gave up when I encountered an error trying to execute the command:

python -m apps.infer -cfg ./configs/sifu.yaml -gpu 0 -in_dir ./examples -out_dir ./results -loop_smpl 100 -loop_cloth 200 -hps_type pixie

The error message was:

Traceback (most recent call last):
  File "/home/lev/miniconda3/envs/sifu/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/lev/miniconda3/envs/sifu/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/lev/SIFU/apps/infer.py", line 30, in <module>
    from lib.common.render import query_color, image2vid
  File "/home/lev/SIFU/lib/common/render.py", line 17, in <module>
    from pytorch3d.renderer import (
  File "/home/lev/miniconda3/envs/sifu/lib/python3.8/site-packages/pytorch3d/renderer/__init__.py", line 3, in <module>
    from .blending import (
  File "/home/lev/miniconda3/envs/sifu/lib/python3.8/site-packages/pytorch3d/renderer/blending.py", line 9, in <module>
    from pytorch3d import _C
ImportError: libcudart.so.10.1: cannot open shared object file: No such file or directory
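
This ImportError typically indicates that the installed pytorch3d build expects the CUDA 10.1 runtime while the active environment provides a different one. A minimal diagnostic sketch (an assumption about the cause, not the authors' fix), run inside the activated conda env, to see which CUDA runtime PyTorch itself was built against:

import torch

# The CUDA runtime this PyTorch build expects; a pytorch3d build must match it.
print("torch:", torch.__version__, "| built for CUDA:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())

If the reported CUDA version disagrees with the one pytorch3d was compiled against, rebuilding or reinstalling pytorch3d for the installed PyTorch/CUDA combination is the usual remedy.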

=============

Could you please commit an environment.yml file, created in an environment where everything works for you, with the following command?

conda env export > environment.yml

Then, you can remove the requirements.txt file, which currently lists the package pymeshlab twice.

Thank you in advance, and congratulations on your acceptance to CVPR 2024!

"conda env create -f environment.yaml" failed

The work you did is great and interesting. I have finished reading your paper and want to try to reproduce the relevant functions.

But I encountered some problems.

Have you ever encountered the following problem when configuring the virtual environment? It stopped at "Solving environment: | " and kept spinning in circles. I wonder if anyone else has run into this situation.

[screenshot: instal_new_env_SIFU]

Looking forward to replies from you or other friends.

results not good

Hi, the results don't seem that good on my custom data. Could you give me some advice on how to improve the performance?
[screenshots of the results]

PyMCubes version in requirements.txt

First of all, thanks to the authors for sharing the code for this great work!

For others trying to set up the environment but failing to install the pip package PyMCubes, here's a quick solution:

Apparently a few weeks ago they released v0.1.6, which dropped support for numpy below v2.0. Therefore, pinning PyMCubes to version 0.1.4 in requirements.txt should solve this (PyMCubes==0.1.4).

Also, during my setup there was a conflict between open3d and ipywidgets. I dropped the version specification for the latter.

There's no cloth on the refine.obj

Hi!
Thanks for your great work!
I ran the command:

python -m apps.infer -cfg ./configs/sifu.yaml -gpu 0 -in_dir ./examples -out_dir ./results -loop_smpl 100 -loop_cloth 200 -hps_type pixie

Then I got 7 obj files, but there's no cloth on any of them.
How can I get obj files with clothes? Did I miss any file used for refinement?
[screenshot of the output files]

refine.obj:
[screenshot of refine.obj]

CUDA out of memory

"Is it the case that the code specifies using GPU number 0, and if I want to use GPU number 1, does the command 'CUDA_VISIBLE_DEVICES=1 python -m apps.infer -cfg ./configs/sifu.yaml -gpu 0 -in_dir ./examples -out_dir ./results -loop_smpl 100 -loop_cloth 200 -hps_type pixie' have no effect, and manual modification of the code is required?"

Table 6 in the paper

Hi authors, could you provide some details on your evaluation protocol for Table 6? Specifically,

  1. what is the rendering size?
  2. how many views per subject?
  3. which backbone for lpips?

Thank you very much!
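
For reference on question 3, the commonly used lpips package takes the backbone as a constructor argument, and the choice affects the reported scores. A sketch of typical usage (an assumption about tooling, not necessarily the authors' setup):

import lpips
import torch

loss_fn = lpips.LPIPS(net='alex')            # 'alex' or 'vgg' are the common backbones
img0 = torch.rand(1, 3, 512, 512) * 2 - 1    # LPIPS expects inputs scaled to [-1, 1]
img1 = torch.rand(1, 3, 512, 512) * 2 - 1
print(loss_fn(img0, img1).item())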

About train

(sifu3) pp@ys:~/SIFU$ python -m apps.train -cfg ./configs/train/sifu.yaml
load from ./data/cape/train.txt
total: 152
load from ./data/cape/val.txt
total: 36
ICON:
w/ Global Image Encoder: True
Image Features used by MLP: ['normal_F', 'normal_B']
Geometry Features used by MLP: ['sdf', 'cmap', 'norm', 'vis', 'sample_id']
Dim of Image Features (local): 6
Dim of Geometry Features (ICON): 7
Dim of MLP's first layer: 78

GPU available: True, used: True
TPU available: None, using: 0 TPU cores
Resume MLP weights from ./data/ckpt/sifu.ckpt
Resume normal model from ./data/ckpt/normal.ckpt
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1]

| Name | Type | Params

0 | netG | HGPIFuNet | 413 M
1 | reconEngine | Seg3dLossless | 0

411 M Trainable params
1.3 M Non-trainable params
413 M Total params
1,652.498 Total estimated model params size (MB)
Validation sanity check: 0%| | 0/1 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/home/pp/miniconda3/envs/sifu3/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/pp/miniconda3/envs/sifu3/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/pp/SIFU/apps/train.py", line 154, in <module>
    trainer.fit(model=model, datamodule=datamodule)
  File "/home/pp/miniconda3/envs/sifu3/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 499, in fit
    self.dispatch()
  File "/home/pp/miniconda3/envs/sifu3/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 546, in dispatch
    self.accelerator.start_training(self)
  File "/home/pp/miniconda3/envs/sifu3/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 73, in start_training
    self.training_type_plugin.start_training(trainer)
  File "/home/pp/miniconda3/envs/sifu3/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 114, in start_training
    self._results = trainer.run_train()
  File "/home/pp/miniconda3/envs/sifu3/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 607, in run_train
    self.run_sanity_check(self.lightning_module)
  File "/home/pp/miniconda3/envs/sifu3/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 856, in run_sanity_check
    _, eval_results = self.run_evaluation(max_batches=self.num_sanity_val_batches)
  File "/home/pp/miniconda3/envs/sifu3/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 712, in run_evaluation
    for batch_idx, batch in enumerate(dataloader):
  File "/home/pp/miniconda3/envs/sifu3/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 628, in __next__
    data = self._next_data()
  File "/home/pp/miniconda3/envs/sifu3/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1333, in _next_data
    return self._process_data(data)
  File "/home/pp/miniconda3/envs/sifu3/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1359, in _process_data
    data.reraise()
  File "/home/pp/miniconda3/envs/sifu3/lib/python3.8/site-packages/torch/_utils.py", line 543, in reraise
    raise exception
IndexError: Caught IndexError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/pp/miniconda3/envs/sifu3/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 302, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/pp/miniconda3/envs/sifu3/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 58, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/pp/miniconda3/envs/sifu3/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 58, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/pp/SIFU/lib/dataset/PIFuDataset.py", line 217, in __getitem__
    subject = self.subject_list[mid].split("/")[1]
IndexError: list index out of range
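
The failing line splits each split-file entry on "/" and takes index 1, so a quick, hypothetical sanity check of the split files (assuming entries are meant to look like "cape/subject_id") is:

# Hypothetical check: PIFuDataset.__getitem__ does subject_list[mid].split("/")[1],
# so every non-empty line in the split files needs at least one "/".
for split in ("./data/cape/train.txt", "./data/cape/val.txt"):
    with open(split) as f:
        for i, line in enumerate(f, 1):
            entry = line.strip()
            if entry and "/" not in entry:
                print(f"{split}:{i}: unexpected entry {entry!r}")

The error could also come from subject_list[mid] itself if the rotation/subject bookkeeping disagrees with the split length, so this is only a first check.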

IndexError: index 9259 is out of bounds for axis 0 with size 5023

config default configs/sifu.yaml
input images examples/

(sifu) ubuntu@VM-32-4-ubuntu:/opt/xxxx/SIFU$ python -m apps.infer -cfg ./configs/sifu.yaml -gpu 0 -in_dir ./examples -out_dir ./results -loop_smpl 100 -loop_cloth 200 -hps_type pixie
Traceback (most recent call last):
  File "/opt/miniconda3/envs/sifu/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/opt/miniconda3/envs/sifu/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/opt/xxxx/SIFU/apps/infer.py", line 31, in <module>
    from lib.renderer.mesh import compute_normal_batch
  File "/opt/xxxx/SIFU/lib/renderer/mesh.py", line 27, in <module>
    model_path=SMPLX().model_dir,
  File "/opt/xxxx/SIFU/lib/dataset/mesh_util.py", line 1074, in __init__
    self.smplx_front_flame_vid = self.smplx_flame_vid[np.load(self.front_flame_path)]
IndexError: index 9259 is out of bounds for axis 0 with size 5023
(sifu) ubuntu@VM-32-4-ubuntu:/opt/xxxx/SIFU$

THUman2.0 dataset's UV files

Hello, could you please share the modified PIFu's rendering code for rendering the THUman2.0 dataset's UV files? The original code was for rendering RenderPeople's dataset.

Being stuck in creating the conda environment

Hi there! When I was creating the environment and installing the packages, I got stuck at "Solving environment", watching this "/" rotating forever.
I use Anaconda 22.9.0 and Ubuntu 20.04.6. How can I solve this problem?
[screenshot]

test error

Traceback (most recent call last):
  File "/home/j222/anaconda/envs/sifu/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/j222/anaconda/envs/sifu/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/j222/SIFU-main/apps/train.py", line 18, in <module>
    from lib.dataset.PIFuDataModule import PIFuDataModule
  File "/home/j222/SIFU-main/lib/dataset/PIFuDataModule.py", line 3, in <module>
    from .PIFuDataset import PIFuDataset
  File "/home/j222/SIFU-main/lib/dataset/PIFuDataset.py", line 31, in <module>
    import vedo
  File "/home/j222/anaconda/envs/sifu/lib/python3.8/site-packages/vedo/__init__.py", line 31, in <module>
    from vedo.plotter import *
  File "/home/j222/anaconda/envs/sifu/lib/python3.8/site-packages/vedo/plotter.py", line 4, in <module>
    import vtk
  File "/home/j222/anaconda/envs/sifu/lib/python3.8/site-packages/vtk.py", line 47, in <module>
    from vtkmodules.vtkIOXdmf2 import *
ImportError: /home/j222/anaconda/envs/sifu/lib/python3.8/site-packages/vtkmodules/../../.././././libcurl.so.4: undefined symbol: nghttp2_option_set_no_rfc9113_leading_and_trailing_ws_validation

Wrong body generation

Hello, thank you for providing the open-source code. Recently, when I attempted to replicate it, I encountered an issue where the SMPL human body was generated correctly, but the clothed human body erroneously appeared as a square.
[screenshots of the results]
My running command is as follows:
[screenshot of the command]
Has anyone encountered similar issues? Is it due to the files I downloaded or some other reason? Could someone assist me? Thank you very much.

Questions about training

Great job! Several questions about the code:

  1. The default value of "mcube_res" is inconsistent between these two places:
    https://github.com/River-Zhang/SIFU/blob/ca76e36e929da7dd0f0d7e617bee19867bcfb046/README.md?plain=1#L87
    https://github.com/River-Zhang/SIFU/blob/ca76e36e929da7dd0f0d7e617bee19867bcfb046/configs/train/sifu.yaml#L68

  2. I noticed that you modified the rendering script for the dataset. Based on your script, I re-rendered the dataset; I think the script was modified mainly to obtain side-view data of the human body. I would like to know whether the texture training data was rendered using the script from ICON or from the PIFu training data.

'uv_render_path': osp.join(self.root, render_folder, 'uv_color', f'{rotation:03d}.png'),
'uv_mask_path': uv_mask_path,
'uv_pos_path': uv_pos_path,
'uv_normal_path': osp.join(self.root, render_folder, 'uv_normal', '%02d.png' % (0)),
'image_mask_path':image_mask_path,
'param_path':param_path,

  3. Are the geometric and texture stages trained together, i.e. error = BCE_loss + color_loss?

  4. Will there be a detailed document about the training, as well as the code for texture reconstruction, including testing and inference?

Optimization issue

Are there any ways to reduce VRAM consumption to 12 GB? Perhaps by offloading models partially or completely to the CPU? Thanks in advance!

Testing error

Hi there, the inference works fine, but I ran into an error while testing. I am not sure what caused this error or how to fix it.
My GPU is a Tesla V100-SXM2-32GB.
Here is the error info:

(sifu) lby@ubuntu:~/code/SIFU$ python -m apps.train -cfg ./configs/train/sifu.yaml -test
ICON:
w/ Global Image Encoder: True
Image Features used by MLP: ['normal_F', 'normal_B']
Geometry Features used by MLP: ['sdf', 'cmap', 'norm', 'vis', 'sample_id']
Dim of Image Features (local): 6
Dim of Geometry Features (ICON): 7
Dim of MLP's first layer: 78

GPU available: True, used: True
TPU available: None, using: 0 TPU cores
Resume MLP weights from ./data/ckpt/sifu.ckpt
Resume normal model from ./data/ckpt/normal.ckpt
load from ./data/cape/test.txt
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Testing: 0it [00:00, ?it/s]../aten/src/ATen/native/cuda/MultinomialKernel.cu:109: binarySearchForMultinomial: block: [7,0,0], thread: [96,0,0] Assertion `cumdist[size - 1] > static_cast<scalar_t>(0)` failed.
../aten/src/ATen/native/cuda/MultinomialKernel.cu:109: binarySearchForMultinomial: block: [7,0,0], thread: [97,0,0] Assertion `cumdist[size - 1] > static_cast<scalar_t>(0)` failed.
......(omit)
../aten/src/ATen/native/cuda/MultinomialKernel.cu:109: binarySearchForMultinomial: block: [2,0,0], thread: [95,0,0] Assertion `cumdist[size - 1] > static_cast<scalar_t>(0)` failed.
Traceback (most recent call last):
  File "/home/lby/miniconda3/envs/sifu/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/lby/miniconda3/envs/sifu/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/lby/code/SIFU/apps/train.py", line 157, in <module>
    trainer.test(model=model, datamodule=datamodule)
  File "/home/lby/miniconda3/envs/sifu/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 915, in test
    results = self.__test_given_model(model, test_dataloaders)
  File "/home/lby/miniconda3/envs/sifu/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 973, in __test_given_model
    results = self.fit(model)
  File "/home/lby/miniconda3/envs/sifu/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 499, in fit
    self.dispatch()
  File "/home/lby/miniconda3/envs/sifu/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 540, in dispatch
    self.accelerator.start_testing(self)
  File "/home/lby/miniconda3/envs/sifu/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 76, in start_testing
    self.training_type_plugin.start_testing(trainer)
  File "/home/lby/miniconda3/envs/sifu/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 118, in start_testing
    self._results = trainer.run_test()
  File "/home/lby/miniconda3/envs/sifu/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 786, in run_test
    eval_loop_results, _ = self.run_evaluation()
  File "/home/lby/miniconda3/envs/sifu/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 725, in run_evaluation
    output = self.evaluation_loop.evaluation_step(batch, batch_idx, dataloader_idx)
  File "/home/lby/miniconda3/envs/sifu/lib/python3.8/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 160, in evaluation_step
    output = self.trainer.accelerator.test_step(args)
  File "/home/lby/miniconda3/envs/sifu/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 195, in test_step
    return self.training_type_plugin.test_step(*args)
  File "/home/lby/miniconda3/envs/sifu/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 134, in test_step
    return self.lightning_module.test_step(*args, **kwargs)
  File "/home/lby/code/SIFU/apps/ICON.py", line 686, in test_step
    chamfer, p2s = self.evaluator.calculate_chamfer_p2s(num_samples=1000)
  File "/home/lby/code/SIFU/lib/dataset/Evaluator.py", line 167, in calculate_chamfer_p2s
    sample_points_from_meshes(self.tgt_mesh, num_samples))
  File "/home/lby/code/pytorch3d/pytorch3d/ops/sample_points_from_meshes.py", line 100, in sample_points_from_meshes
    sample_face_idxs += mesh_to_face[meshes.valid].view(num_valid_meshes, 1)
RuntimeError: numel: integer multiplication overflow
Testing:   0%|          | 0/450 [00:06<?, ?it/s]
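
The failed cumdist[size - 1] > 0 assertions come from the multinomial face sampling inside sample_points_from_meshes, which typically means the mesh being sampled has zero total face area (empty or fully degenerate faces); the later numel overflow is a follow-on symptom of the corrupted CUDA state. A rough, hypothetical check of one ground-truth scan with trimesh (the path is a placeholder):

import trimesh

mesh = trimesh.load("path/to/one_cape_scan.obj", process=False)  # placeholder path
print("faces:", len(mesh.faces), "| total area:", mesh.area)
print("zero-area faces:", int((mesh.area_faces == 0).sum()))

If the scans load with zero area, the dataset preprocessing (scaling/units) is the first thing to re-check.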

Solution to being stuck in "Solving environment"

Thanks to @levnikolaevich, here is his solution which was tested on Ubuntu 20.04:

sudo apt-get update && \
sudo apt-get upgrade -y && \
sudo apt-get install unzip libeigen3-dev ffmpeg build-essential nvidia-cuda-toolkit
mkdir -p ~/miniconda3 && \
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh && \
bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3 && \
rm -rf ~/miniconda3/miniconda.sh && \
~/miniconda3/bin/conda init bash && \
~/miniconda3/bin/conda init zsh

========= close and re-open current shell =========

git clone https://github.com/River-Zhang/SIFU.git
cd SIFU

-->add "mkl=2024.0.0" in the "environment.yaml" pytorch/pytorch#123097

conda env create -f environment.yaml 
conda activate sifu

Changes in requirements.txt:
--> delete the duplicate "pymeshlab" entry (there is already "pymeshlab==2022.2.post4")
--> change git+https://github.com/YuliangXiu/rembg.git@hf to git+https://github.com/YuliangXiu/rembg.git (the "hf" tag no longer exists)
--> add open3d==0.17.0 mediapipe einops gdown
--> set numpy==1.24.4, because of "ImportError: numpy.core.multiarray failed to import (auto-generated because you didn't call 'numpy.import_array()' after cimporting numpy; use 'numpy._import_array' to disable if you are certain you don't need it)."

pip install -r requirements.txt

Originally posted by @levnikolaevich in #20 (comment)

Regarding the issue during testing

Hello, I would like to ask whether the left and right normal maps of the SMPL model are required when testing with the CAPE dataset. I noticed that the dataset does not include them.

Inference error with README default params

Body Fitting --- normal: 0.104 | silhouette: 0.054 | Total: 0.158
d5acc65c50a71c7c54eb38323966f6c1_smpl.obj correct.

ICON infer error.
d5acc65c50a71c7c54eb38323966f6c1_recon.obj view:
[screenshot of the recon mesh]

Cloth Refinement --- cloth:3.38728 | stiffness:0.66788 | rigid:0.04957 | laplacian:0.28668 | Total: 4.39141

d5acc65c50a71c7c54eb38323966f6c1_cloth.gif view:
[gif preview of the cloth result]

Thanks, great work!

About training from scratch.

Hi, thank you for your great work.
I want to train your code from scratch, but there is no description of the training procedure.
Do you have a plan to provide a training description?
Thank you in advance.

missing smplx_vertex_lmkid.npy

Hi, thank you for releasing the code. I tried the demo code but got this error.

FileNotFoundError: [Errno 2] No such file or directory: './data/smpl_related/smpl_data/smplx_vertex_lmkid.npy'

Could you please share this file?

Thanks!

model load error

sifu.ckpt loads successfully, but normal.ckpt fails to load with:
EOFError: Ran out of input
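
An EOFError: Ran out of input while unpickling usually means the checkpoint file is empty or truncated (for example an interrupted download). A minimal check, using the checkpoint path shown in the training logs above:

import os

path = "./data/ckpt/normal.ckpt"   # path reported in the resume logs
print("size (bytes):", os.path.getsize(path))

# Optionally reproduce the error outside the pipeline:
# import torch; torch.load(path, map_location="cpu")

If the size is 0 or far smaller than expected, re-download the checkpoint.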

THuman2.0 test list

Hi, thanks for the great work!
Is it possible to provide the THuman2.0 test list? Thanks!

Processing for THuman2.0

Hello, you mentioned optimizing the rendering process for THuman2.0 in your work on GTA (see "Question about numbers, evaluation" #9). Could you provide details on this? Has your modified dataset processing script been made publicly available in the code repository? Do I need to reprocess the THuman2.0 dataset with this script? Previously, when I tried to reproduce your code using ICON's data, I couldn't achieve experimental results consistent with those in the paper.
