
skumra / robotic-grasping

Antipodal Robotic Grasping using GR-ConvNet. IROS 2020.

License: Other

Python 99.34% Shell 0.66%
deep-learning grasping robotic-manipulation robotics

robotic-grasping's People

Contributors

nayoung-oh, shirinj, skumra


robotic-grasping's Issues

About Depth

In image.py, I found img[r, c] = np.sqrt(x ** 2 + y ** 2 + z ** 2).
As far as I know, the depth value in the pcd file should be "z".
In the final version, does the extraction of depth information use sqrt(x ** 2 + y ** 2 + z ** 2) rather than "z"? And why?
Hope you can answer, thank you!
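For reference, a minimal sketch contrasting the two conventions being asked about. The points, rows and cols arrays below are hypothetical stand-ins for data read from a PCD file, not the repository's own variables:

import numpy as np

# Hypothetical example data: three camera-frame points and their pixel indices.
points = np.array([[0.1, 0.0, 0.8], [0.0, 0.2, 0.9], [0.05, 0.05, 1.0]])
rows = [100, 200, 300]
cols = [150, 250, 350]

depth_z = np.zeros((480, 640))       # depth taken as the z coordinate (distance to the image plane)
depth_range = np.zeros((480, 640))   # depth taken as the Euclidean distance from the camera origin (the expression quoted above from image.py)
for (x, y, z), r, c in zip(points, rows, cols):
    depth_z[r, c] = z
    depth_range[r, c] = np.sqrt(x ** 2 + y ** 2 + z ** 2)

# The two conventions agree only for points on the optical axis (x = y = 0);
# away from the centre, the Euclidean distance is strictly larger than z.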

Questions on validation

I have no problems with the training process, but when running validation with evaluate.py, the following error is reported:

File "/home/amax/baiyong/robotic-grasping/utils/dataset_processing/grasp.py", line 309, in rotate
    R = np.array(
ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 2 dimensions. The detected shape was (2, 2) + inhomogeneous part.
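For context, this error message matches a behaviour change in NumPy 1.24+, where building an array from ragged nested sequences raises a ValueError instead of a deprecation warning. A minimal reproduction of the same class of error (an illustration of the NumPy behaviour, not the repository's code):

import numpy as np

# On NumPy >= 1.24 this raises:
# ValueError: setting an array element with a sequence. The requested array has an
# inhomogeneous shape after 2 dimensions. The detected shape was (2, 2) + inhomogeneous part.
np.array([[1.0, 2.0],
          [np.zeros(2), 3.0]])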

Questions regarding setup

Hello,

First, thank you for this impressive contribution to the robotics community and for making it open source!

I had a couple of questions regarding setting up GR-Convnet on a new machine:

  1. What versions of the dependencies listed in the "requirements.txt" file are required?
  2. What version of CUDA is most compatible with this version of GR-ConvNet and/or GR-ConvNet v2?

I also could not find a link to GR-ConvNet v2 (the version published in your 2022 paper). The link in the paper refers to this repo, but I could not find a commit indicating that the model from that paper has been added.

Thank you again for your contributions and I hope to hear from you soon!

Camera connection issue

I have launched the camera and the output looks like this:
NODES
/camera/
realsense2_camera (nodelet/nodelet)
realsense2_camera_manager (nodelet/nodelet)

ROS_MASTER_URI=http://011602p0023.local:11311

process[camera/realsense2_camera_manager-1]: started with pid [26131]
process[camera/realsense2_camera-2]: started with pid [26132]
[ INFO] [1617368659.444530186]: Initializing nodelet with 8 worker threads.
[ INFO] [1617368659.487527704]: RealSense ROS v2.2.14
[ INFO] [1617368659.487555222]: Running with LibRealSense v2.35.2
[ INFO] [1617368659.513961826]:
[ INFO] [1617368659.680539226]: Device with serial number 925622071435 was found.

[ INFO] [1617368659.680587294]: Device with physical ID 2-2-34 was found.
[ INFO] [1617368659.680608696]: Device with name Intel RealSense D435 was found.
[ INFO] [1617368659.681496020]: Device with port number 2-2 was found.
[ INFO] [1617368659.686264464]: getParameters...
[ INFO] [1617368659.902394120]: setupDevice...

I am assuming 925622071435 is the device ID, so I replaced the device ID with 925622071435 in run_calibration.py and camera.py, and then ran:

[baxter - http://011602p0023.local:11311] baxter@baxter:~/catkin_ws/src/robotic-grasping$ python run_calibration.py
Traceback (most recent call last):
  File "run_calibration.py", line 13, in <module>
    calibration.run()
  File "/home/baxter/catkin_ws/src/robotic-grasping/hardware/calibrate_camera.py", line 112, in run
    self.camera.connect()
  File "/home/baxter/catkin_ws/src/robotic-grasping/hardware/camera.py", line 32, in connect
    cfg = self.pipeline.start(config)
RuntimeError: No device connected

Please help me if I am missing something.

Trained network, poor results

Hi Sulabh,

I managed to train the network on the Cornell dataset with the default settings (only on depth images), and the best accuracy obtained was 88%. I have seen in your paper that you obtained a best accuracy of 95.4%.
Do you have any suggestions or ideas on how I can improve the accuracy besides using the RGB images? How much should the accuracy improve if I use the RGB images as well?
Additionally, I tried to run the run_offline.py inference script on the model I just trained, using only depth, and it gave me an error. I inspected the code; it seemed that the batch dimension was missing, so I added it with unsqueeze.
Now the program runs successfully, but the results are poor on random pictures: scores around 0.1 and grasp points outside the objects. Do you have any idea what could be going wrong? Maybe the format of the pictures is different than during training?
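For anyone hitting the same missing-batch-dimension error, a minimal sketch of the unsqueeze fix described above; the shapes are assumptions for illustration, not the repository's exact tensor sizes:

import torch

x = torch.zeros(1, 300, 300)   # hypothetical single depth image: (channels, H, W)
x_batched = x.unsqueeze(0)     # add the batch dimension expected by the network: (1, channels, H, W)
print(x_batched.shape)         # torch.Size([1, 1, 300, 300])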

Thanks,
György

'ResidualBlock' object has no attribute 'dropout'

When I run evaluate.py (--network trained-models/cornell-randsplit-rgbd-grconvnet3-drop1-ch32/epoch_19_iou_0.98 --dataset jacquard --dataset-path Samples --jacquard-output --iou-eval),

I get the error: 'ResidualBlock' object has no attribute 'dropout'.

Thanks

trained models using rgb-image

This is a great grasp detection method, but the depth images from my camera are poor. Can you provide the trained models that use only RGB images? Thank you in advance.

Only input depth

Hi, I know the meaning of q, but I don't know how it is obtained or on what principle; can you explain it? Also, when I use only depth input with python run_realtime --use-rgb 0 (jacquard-d-grconvnet3-drop0-ch32), the output q, width and angle images are all blue, as if there is nothing there.

The environment in which the programs run.

Which version of Ubuntu do all the modules (inference and control modules) run on, especially the Baxter Gazebo simulation? It seems that the Baxter Gazebo simulation can only run on ROS Indigo (Ubuntu 14.04). Thanks.

no file

Hi, I am very interested in your work, but I ran into the following error while training the network, and I am looking forward to your help.
FileNotFoundError: No such file: '/home/aistudio/robotic/dataset/01/pcd0151d.tiff'.
I found that the downloaded dataset does not contain any .tiff files.

Unable to access tool_position.npy file

There are a few .npy files and a folder called saved_data that the code in this repository requires, but I cannot find them in the repository. Could you please help me with this?

About zoom rgb image and tiff image

In utils/dataset_processing/image.py, the class Image has a zoom function for zooming the RGB image and the tiff (depth) image.
Something here is confusing me. I tried a factor value of 0.5, and the two images before and after zooming look very different.
Before zooming:
[image: img]
After zooming:
[image: img-re]
It looks like the pencil is much closer to the camera in the zoomed image, but the distance value at the same position on the pencil in the zoomed tiff image is almost equal to that in the original tiff image. I don't understand why this data augmentation approach makes sense.

Question regarding augmented dataset size described in paper

Hello,

First of all, thank you very much for your contributions to the robotics community, and for making the code for your paper publicly available.

I have a (hopefully simple) question regarding the augmented dataset size described in your paper.
Specifically, the augmented dataset size described in the following lines:

The extended version of Cornell Grasp Dataset comprises of 1035 RGB-D images with a resolution of 640×480 pixels of 240 different real objects with 5110 positive and 2909 negative grasps. The annotated ground truth consists of several grasp rectangles representing grasping possibilities per object.
However, it is a small dataset for training our GR-ConvNet model, therefore we create an augmented dataset using random crops, zooms, and rotations which effectively has 51k grasp examples. Only positively labeled grasps from the dataset were considered during training.

Would you be able to please let me know how the number 51k was obtained?
From trying to recreate the same calculations, I have only been able to get either 39k or 46k depending on how some quantities are defined.

Thank you very much for your time!

Question about the abstract class GraspModel

pos_pred, cos_pred, sin_pred, width_pred = self(xc)

I am new to Python.
Why can this code, which is in the abstract class GraspModel, call the forward method of the subclass GenerativeResnet?
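This is standard PyTorch behaviour rather than anything specific to this repository: calling a module instance, e.g. self(xc), goes through nn.Module.__call__, which dispatches to the forward defined by the concrete subclass. A minimal sketch with made-up class names:

import torch
from torch import nn

class Base(nn.Module):
    def compute(self, x):
        # self(x) invokes nn.Module.__call__, which calls the subclass's forward()
        return self(x)

class Child(Base):
    def forward(self, x):
        return x * 2

print(Child().compute(torch.ones(1)))  # tensor([2.])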

Seeking some advice

Hi,

Thanks for your amazing GR-ConvNet, it's simple but so powerful!
We want to apply GR-ConvNet to our own dataset. In our dataset there are plenty of overlapping grasp rects; their grasp poses are almost the same, but their grasp angles are different (imagine a can: when grasping from the top, the grasp angle can vary between 0 and 360 degrees).

In GR-ConvNet, when the grasp rect is drawn, only one angle is drawn. Is it necessary to keep all the grasp angle information, and will this affect the results?

Sincere thanks for your advice!

Regards

Availability of pretrained models

Hi Sulabh,

great work, congratulations! Can you please make available the pretrained model cornell_rgbd_iou_0.96 which is referred to in run_grasp_generator.py?

Thanks,
György Salló

The accuracy is far lower than the accuracy mentioned in the paper

I ran train_network.py to train on the Jacquard dataset, and the test accuracy of the resulting network is much lower than the accuracy in the paper. What is going on?
This is my parameter setting:
python train_network.py --dataset jacquard --dataset-path xxx --description train_jacquard

The accuracy tested on the Cornell Grasping Dataset is almost 0, while the accuracy tested on the Jacquard Dataset is only 75%.

How to set parameters to train on my own dataset

Hello, thank you very much for your work in this field. I would like to ask some questions about this model.
Now I want to train this model on my own dataset, but the model trained with the default parameters is very poor. I saw that you mentioned the importance of parameter settings in other issues; could you be specific about how and which parameters I should set?
Thank you for your reply !

Question about length and width of GraspRectangle

In the file utils/dataset_processing/grasp.py (line 231), the length attribute is described as "Rectangle length (i.e. along the axis of the grasp)", and in line 170, this attribute is used to draw the width image width_out[rr, cc] = gr.length. I wanted to ask if this is just the way the variables are named (the width for the width images is actually the GraspRectangle's length and not the GraspRectangle's width) or is there any other reason for this? Thanks in advance for your help and for making the code available.

Question about width_img

Hi, thanks for your code. In the file utils/data/grasp_data.py (line 75), this line of code seems to normalize the width_img, but why is output_size/2 used as the divisor?

unable to load pre-trained models "ModuleNotFoundError: No module named 'inference'"

@skumra
when I try to load the trained model checkpoint, it says "ModuleNotFoundError: No module named 'inference'"
Please help!

model = torch.load("path_to_checkpoint")
Traceback (most recent call last):
  File "/opt/miniconda3/envs/graspfinder/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3441, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-20-a26a68e6c721>", line 1, in <module>
    a = torch.load("/Users/shrinath.deshpande/dextra/robotic-grasping/trained-models/jacquard-d-grconvnet3-drop0-ch32/epoch_48_iou_0.93")
  File "/opt/miniconda3/envs/graspfinder/lib/python3.7/site-packages/torch/serialization.py", line 607, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "/opt/miniconda3/envs/graspfinder/lib/python3.7/site-packages/torch/serialization.py", line 882, in _load
    result = unpickler.load()
  File "/opt/miniconda3/envs/graspfinder/lib/python3.7/site-packages/torch/serialization.py", line 875, in find_class
    return super().find_class(mod_name, name)
  File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import
    module = self._system_import(name, *args, **kwargs)
ModuleNotFoundError: No module named 'inference'
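This kind of error typically occurs because the checkpoints store the whole pickled model object, so unpickling them requires the repository's inference package to be importable. A hedged workaround sketch, assuming the repository has been cloned to the (hypothetical) path below:

import sys
sys.path.insert(0, "/path/to/robotic-grasping")  # directory containing the inference/ package

import torch
model = torch.load("path_to_checkpoint", map_location="cpu")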

Licensing

G'day @skumra

Super awesome to see you making use of the GG-CNN project! I'm curious to try out some of the architecture changes you've made :-)

I just wanted to point out that the GG-CNN project is licensed under the BSD-3 licence. This is a super permissive licence that basically lets you do anything you want (yay!), on the proviso that the original copyright notice and licence are copied along with the redistribution. As such, can you please include the original BSD licence with the copied files?

Also, I'd like to encourage you to consider a more permissive licence like MIT or BSD. From a research point of view, the parasitic nature of GPL makes using your code in future work much more difficult.

Running the script [python evaluate.py xxx] fails

Hello, I have some problems.

When I run the script:

python evaluate.py --network ./output/models/200701_1113_training_cornell/epoch_47_iou_0.88 --dataset cornell --dataset-path ./datasets/cornell --iou-eval --use-depth 1 --use-rgb 1

it throws the following error:
[error screenshot]

But when I change the parameter, it runs:

python evaluate.py --network ./output/models/200701_1113_training_cornell/epoch_47_iou_0.88 --dataset cornell --dataset-path ./datasets/cornell --iou-eval --use-depth 1 --use-rgb 0

How can I solve this problem? It seems it can only use the depth information for evaluation.

In addition, if possible, can you upload the newest code?

Thank you very much.

Some trouble when I run run_grasp_generator.py

I am a new guy in this research field, and I ran into some trouble when running run_grasp_generator.py.
I hope I can get your help.
Could you upload camera_pose.txt and camera_depth_scale.txt?
Or tell me how to generate these files in my environment, or how you created them.
Actually, I don't understand this code in grasp_generator.py:

# Load camera pose and depth scale (from running calibration)

self.cam_pose = np.loadtxt('saved_data/camera_pose.txt', delimiter=' ')
self.cam_depth_scale = np.loadtxt('saved_data/camera_depth_scale.txt', delimiter=' ')
homedir = os.path.join(os.path.expanduser('~'), "grasp-comms")
self.grasp_request = os.path.join(homedir, "grasp_request.npy")
self.grasp_available = os.path.join(homedir, "grasp_available.npy")
self.grasp_pose = os.path.join(homedir, "grasp_pose.npy")

Thank you very much, and thanks for your time!
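For anyone in the same situation, a minimal sketch of how such files could be written in your own environment, assuming (based only on the np.loadtxt calls quoted above) that camera_pose.txt holds a 4x4 camera-to-robot homogeneous transform and camera_depth_scale.txt a single scale factor:

import os
import numpy as np

os.makedirs('saved_data', exist_ok=True)

cam_pose = np.eye(4)               # replace with the transform obtained from your own calibration
cam_depth_scale = np.array([1.0])  # replace with your camera's depth scale

np.savetxt('saved_data/camera_pose.txt', cam_pose, delimiter=' ')
np.savetxt('saved_data/camera_depth_scale.txt', cam_depth_scale, delimiter=' ')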

vis_problem

[visualization screenshot]

Are the images shown during the visualization process feature maps? If not, what are these pictures?

Questions about width and length of class Grasp

In the function: load_from_jacquard_file(utils\dataset_processing\grasp.py:line 92)

x, y, theta, w, h = [float(v) for v in l[:-1].split(';')]
grs.append(Grasp(np.array([y, x]), -theta / 180.0 * np.pi, w, h).as_gr)

and in the class 'Grasp'(line 359), the init function is:

def __init__(self, center, angle, quality, length=60, width=30):

Why is width 30 there rather than the scalar 'w'? The order of those scalars in the init function should perhaps be:

Grasp(np.array([y, x]), -theta / 180.0 * np.pi, h, w).as_gr
def __init__(self, center, angle, quality, width=30, length=60):

i.e., swapping the positions of 'h' and 'w'.

How to set --batches-per-epoch when training on the Jacquard dataset

Hello, thank you very much for your work.
Now I want to train this model on the Jacquard dataset, and I want to know how to set the --batches-per-epoch parameter for different datasets (Cornell and Jacquard). You mentioned in an issue that batches-per-epoch × batch size = number of samples. Then why did you set batches-per-epoch to 1000 and batch size to 8 for Cornell training? That would mean 8000 samples, which doesn't seem right to me.
Thank you for your reply !

generate_cornell_depth

When I run the command: python -m utils.dataset_processing.generate_cornell_depth /Cornel_Grasping_dataset/archive

This is what I get:

(venv) nicholasward2@nicholasward2-Alienware-Aurora-R7:~/robotic-grasping$ python -m utils.dataset_processing.generate_cornell_depth /Cornel_Grasping_dataset/archive
Matplotlib is building the font cache; this may take a moment.

Then it stops running immediately. Does anyone have any suggestions on how to fix this issue?

About trained_model evaluate

Dear skumra:

Thanks for your amazing work. I have a problem when I run your evaluate.py file; I hope you can help me. Thanks a lot.

When I run evaluate.py, I get an abnormal IoU result:
IOU Results: 4/5449 = 0.000734
The command I use is below:
python evaluate.py --network trained-models/cornell-randsplit-rgbd-grconvnet3-drop1-ch32/epoch_19_iou_0.98 --dataset jacquard --dataset-path ../Jacquard --iou-eval --jacquard-output
I don't know why the IoU result is so low; I did not change the model or the code.
Could you help me? Thank you!

About the path of the trained network in evaluate.py

Thank you for your amazing work!
While reproducing this code, I successfully ran train_network.py on the Cornell dataset, but I ran into some trouble reproducing evaluate.py.
Firstly, during a normal run, I believe the trained models are saved in the logs folder, so what is the purpose of the trained-models folder? (PS: maybe my guess is wrong.)
Secondly, I have passed the logs path (D:/code/RGB/robotic-grasping-master/robotic-grasping-master/logs/201229_2242_) and the trained-models path as the default for --network in evaluate.py, but I get this error: FileNotFoundError: [Errno 2] No such file or directory: 'D'. (PS: I have tried other file paths, but the same error is reported, just with a different first letter; for example, with trained-models the error is [Errno 2] No such file or directory: 't'.)
My mother tongue is not English. I'm sorry for my mistakes in grammar and words. I hope you can give me an answer to my question. Thank you.

AttributeError: Couldn't find function center in BoundingBoxes or BoundingBox

After running

python train_network.py --dataset cornell --dataset-path /home/ubuntu/机械臂入门教程/DATASETS/Cornell_Grasping_Dataset --description training_cornell

root : INFO Beginning Epoch 00
Traceback (most recent call last):
  File "/home/ubuntu/机械臂入门教程/robotic-grasping/train_network.py", line 345, in <module>
    run()
  File "/home/ubuntu/机械臂入门教程/robotic-grasping/train_network.py", line 318, in run
    train_results = train(epoch, net, device, train_data, optimizer, args.batches_per_epoch, vis=args.vis)
  File "/home/ubuntu/机械臂入门教程/robotic-grasping/train_network.py", line 163, in train
    for x, y, _, _, _ in train_data:
  File "/home/ubuntu/miniconda3/envs/bot_play/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 634, in __next__
    data = self._next_data()
  File "/home/ubuntu/miniconda3/envs/bot_play/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1346, in _next_data
    return self._process_data(data)
  File "/home/ubuntu/miniconda3/envs/bot_play/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1372, in _process_data
    data.reraise()
  File "/home/ubuntu/miniconda3/envs/bot_play/lib/python3.9/site-packages/torch/_utils.py", line 644, in reraise
    raise exception
AttributeError: Caught AttributeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/ubuntu/miniconda3/envs/bot_play/lib/python3.9/site-packages/torch/utils/data/_utils/worker.py", line 308, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/ubuntu/miniconda3/envs/bot_play/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 51, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/ubuntu/miniconda3/envs/bot_play/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 51, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/ubuntu/机械臂入门教程/robotic-grasping/utils/data/grasp_data.py", line 65, in __getitem__
    depth_img = self.get_depth(idx, rot, zoom_factor)
  File "/home/ubuntu/机械臂入门教程/robotic-grasping/utils/data/cornell_data.py", line 52, in get_depth
    center, left, top = self._get_crop_attrs(idx)
  File "/home/ubuntu/机械臂入门教程/robotic-grasping/utils/data/cornell_data.py", line 37, in _get_crop_attrs
    center = gtbbs.center
  File "/home/ubuntu/机械臂入门教程/robotic-grasping/utils/dataset_processing/grasp.py", line 43, in __getattr__
    raise AttributeError("Couldn't find function %s in BoundingBoxes or BoundingBox" % attr)
AttributeError: Couldn't find function center in BoundingBoxes or BoundingBox

Questions on evaluation metrics

The paper mentions that a difference of less than 30 degrees between the angle of the predicted grasp rectangle and the ground truth is considered a successful grasp; however, this does not seem to be reflected in the code.
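For reference, the rectangle metric as commonly described in the grasp-detection literature counts a prediction as correct when its angle differs from some ground-truth rectangle by less than 30 degrees and their Jaccard index (IoU) exceeds 25%. A hedged sketch of that check, not necessarily identical to the repository's implementation:

import numpy as np

def angle_close(pred_angle, gt_angle, threshold_deg=30.0):
    # Grasp angles are equivalent modulo pi, since a parallel-jaw gripper is symmetric.
    diff = abs((pred_angle - gt_angle + np.pi / 2) % np.pi - np.pi / 2)
    return diff < np.deg2rad(threshold_deg)

def grasp_correct(pred_angle, gt_angle, iou):
    return angle_close(pred_angle, gt_angle) and iou > 0.25

print(grasp_correct(np.deg2rad(10), np.deg2rad(-15), iou=0.3))  # True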

Data augmentation for Cornell Dataset

I could not find the code to augment the data for the Cornell dataset. Could you point me in the right direction on how to go about augmenting images along with the bboxes? The standard data augmentation resources available on the net assume horizontal bboxes. If you still have the data augmentation code, it would be very helpful.

Edit: Found it. I had misunderstood some parts of the code earlier.

Question about detection speed

Hi !
I have a question about evaluation that came up while studying your code recently.

INFO:root:CUDA detected. Running with GPU acceleration.
INFO:root:Loading Cornell Dataset...
INFO:root:Validation size: 89
INFO:root:Done
INFO:root:
Evaluating model iou_0.97
INFO:root:Average evaluation time per image: 134.32957349198588ms

L:\WS\gr-convnet>python evaluate.py --dataset cornell --dataset-path L:/WS/Cornell --network iou_0.97 --iou-eval
INFO:root:CUDA detected. Running with GPU acceleration.
INFO:root:Loading Cornell Dataset...
INFO:root:Validation size: 89
INFO:root:Done
INFO:root:
Evaluating model iou_0.97
INFO:root:Average evaluation time per image: 138.0717566843783ms
INFO:root:IOU Results: 85/89 = 0.955056

L:\WS\gr-convnet>python evaluate.py --dataset cornell --dataset-path L:/WS/Cornell --network iou_0.97 --iou-eval
INFO:root:CUDA detected. Running with GPU acceleration.
INFO:root:Loading Cornell Dataset...
INFO:root:Validation size: 89
INFO:root:Done
INFO:root:
Evaluating model iou_0.97
INFO:root:Average evaluation time per image: 136.26244630706444ms
INFO:root:IOU Results: 85/89 = 0.955056

Above are some logs from when I run evaluate.py. I used your trained models to evaluate. I get about 136 ms per image, but in your paper it is only 20 ms. How can I reproduce that result?

Thanks.

angle after gaussian filter

Hi, thanks for your code. The predicted angle image is passed through a Gaussian filter. I think the angle values will be changed by that. Why is it needed? Thanks.

Camera Calibration Issue

Hello! First, thanks for providing this wonderful repository!

While trying to reproduce the result with a physical robot, I think I found minor issues in the camera calibration part (the calibration does not work well for me...).
In the provided camera_calibration.py file, it calculates the transformation matrix from camera to robot using inv(self.robot2camera)
camera_pose = np.linalg.inv(self.world2camera)

However, I think it is not valid when the translation part (t) exists.
Therefore, I modified the calibration method a little bit by calculating the camera-to-robot transformation matrix directly, and it worked well in my case.
R, t = self._get_rigid_transform(np.asarray(self.measured_pts), np.asarray(new_observed_pts))
(I also modified the calculation of the calibration error.)

Could you explain what I misunderstood in the original calibration file, or is my approach valid for the camera calibration?

Thanks in advance.

Strange cross shape in quality prediction

[figure: object image and grasp quality prediction]
The figure above shows the object image and the result of the grasp quality prediction (q_img in the code). It can be seen that the red cube becomes a slightly larger cross in the grasp quality prediction, resulting in a wrong grasp prediction (the red line above the red square). Does anyone know why this happens or how to avoid it?

How to get the highest accuracy

How is the highest accuracy in the paper obtained: by running the evaluation code, or is it just the highest accuracy observed during validation while training?

depth

Hi, when you use a depth image as your model input, do you rescale the depth values to 0~225? Thanks.

I get nothing when I run run_calibration.py

Hello! When I run run_calibration.py, it falls into an infinite loop, probably because of the following code:
[code screenshot]
I don't understand it; could you help me?

Could you upload camera_pose.txt? I have completed the camera calibration by other means; could I create the file from the transform matrix I obtained?

Thank you very much! Thanks for your help!

datasets

Hi, congratulations! How do you split the Cornell dataset into the image-wise and object-wise splits? Thanks!

About calibrate

Hi, thanks for your code. I am a bit curious about calibration and would like to learn about the calibration theory. Could you tell me what calibration method is used? Thank you for your reply!
