microsoft / samples-for-ai

Samples for getting started with deep learning across TensorFlow, CNTK, Theano and more.

License: MIT License

Python 47.53% C# 52.47%

samples-for-ai's Introduction

Samples for AI


Samples for AI is a collection of deep learning samples and projects. It covers many classic deep learning algorithms and applications across different frameworks, making it a good entry point for beginners getting started with deep learning.

Samples are provided in Visual Studio solution format so that users can get started with deep learning. Each solution has one or more sample projects, and solutions are grouped by the deep learning framework they use:

  • CNTK (both BrainScript and Python languages)
  • TensorFlow
  • PyTorch
  • Caffe2
  • Keras
  • MXNet
  • Chainer
  • Theano

Getting Started

The one-click installer for setting up deep learning frameworks has been moved here; please visit that page for details.

Run Samples

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repositories using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

Related Projects

Open Platform for AI: an open-source platform that provides complete AI model training and resource management capabilities. It is easy to extend and supports on-premises, cloud, and hybrid environments at various scales.

NeuronBlocks: an NLP deep learning modeling toolkit that helps engineers build DNN models as easily as playing with Lego blocks. Its main goal is to minimize the development cost of building NLP deep neural network models, covering both the training and inference stages.

License

Most of the sample scripts come from the official GitHub repository of each framework, and they are distributed under different licenses.

The CNTK scripts are under the MIT license.

The TensorFlow sample scripts are under the Apache 2.0 license. There are no changes to the original code.

For the Caffe2 scripts, different versions were released under different licenses. The current master branch is under the Apache 2.0 license, but versions 0.7 and 0.8.1 were released under the BSD 2-Clause license. The scripts in our solution are based on the Caffe2 GitHub source tree at versions 0.7 and 0.8.1, under the BSD 2-Clause license.

The Keras scripts are under the MIT license.

The Theano scripts are under the BSD license.

The MXNet scripts are under the Apache 2.0 license. There are no changes to the original code.

The Chainer scripts are under the MIT license.

samples-for-ai's People

Contributors

cclauss, chicm-ms, childishchange, chris-lauren, dyg111, ericsk, guozhongxin, hgrui, imfing, kant, leochangcn, linmajia, liusecone, microsoft-github-policy-service[bot], microsoftopensource, mintyiqingchen, naykun, nuaajeff, pr0crustes, qfyin, qnguyen12, shishaochen, shivg7706, somedaywilldo, squirrelsc, suntobright, tenyuuk, tobeyqin, wangcunxiang, wayfear


samples-for-ai's Issues

Typos in website

At the footer, both "Tearms [Terms] of Use" and "Tradermarks [Trademarks]" are misspelled.

yolo_tensorflow test.py cannot be used

As the README says, typing python test.py only shows:

Traceback (most recent call last):
  File "test.py", line 83, in
    main()
  File "test.py", line 42, in main
    saver.restore(sess, tf.train.latest_checkpoint("checkpoint"))
  File "C:\Users\acer\AppData\Roaming\Python\Python36\site-packages\tensorflow\python\training\saver.py", line 1534, in restore
    raise ValueError("Can't load save_path when it is None.")
ValueError: Can't load save_path when it is None.

What happened?

mnist.py Bug

When I run mnist.py, I get:
TypeError("int() argument must be a string, a bytes-like object or a number, not 'NoneType'",)
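This TypeError is what Python raises whenever int() is handed None, which usually means some upstream lookup (a flag, an environment variable, or a parsed field) silently returned None instead of a value. A small sketch reproducing the message (HYPOTHETICAL_BATCH_SIZE is purely illustrative, not a variable the sample uses):

```python
import os

# None when the variable is unset -- a typical silent source of None.
batch = os.environ.get("HYPOTHETICAL_BATCH_SIZE")
try:
    int(batch)
except TypeError as e:
    # Matches the error reported above: a None reached int().
    print(e)
```

Tracing back where the None came from (an unset config value, a failed download, a missing command-line argument) is usually the real fix.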

bug

It can't be used with CUDA 10 and cuDNN 7.6, and numpy can't be installed.

Missing project subtype

Trying to open the TensorflowExamples solution, I get the following error for each project (other than "yolo_tensorflow"):

There is a missing project subtype.
Subtype: '{D22814C2-A430-4A53-8052-A3A64BFB2240}' is unsupported by this installation.

I see this error reported on the Microsoft Docs site in March 2019, but no solution was posted. From what I've gathered, I think this is an error stemming from using VS 2019, which might not have the older SDKs installed, though I might be completely wrong.
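Project subtypes in a Visual Studio solution are identified by GUIDs listed in each project file's <ProjectTypeGuids> element; when the matching workload isn't installed, VS reports the GUID as unsupported. A hedged sketch of how one might inspect a project file to see which subtypes it requires (the helper and the second GUID in the sample string are illustrative assumptions):

```python
import re

def project_type_guids(project_xml):
    # Pulls the GUID list out of a <ProjectTypeGuids> element, the way
    # one would inspect a .pyproj/.csproj to see which Visual Studio
    # components it depends on.
    m = re.search(r"<ProjectTypeGuids>([^<]+)</ProjectTypeGuids>", project_xml)
    return m.group(1).split(";") if m else []

sample = ("<ProjectTypeGuids>{D22814C2-A430-4A53-8052-A3A64BFB2240};"
          "{888888A0-9F3D-457C-B088-3A5042F75D52}</ProjectTypeGuids>")
print(project_type_guids(sample))
```

Listing the GUIDs at least identifies which component is missing, so the corresponding workload or an older VS version can be installed.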


StyleTransfer inference not working

Run inference.py from the StyleTransferTraining project.

Actual result: it fails with the error AttributeError("module 'vgg' has no attribute 'read_img'",)
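An AttributeError like this often means Python imported a different vgg module than the one shipped with the sample, for example a vgg package installed from pip shadowing the local vgg.py. One way to check which file actually gets imported, sketched with a hypothetical helper:

```python
import importlib.util

def locate_module(name):
    # Returns the file Python would import for `name`, or None if the
    # module cannot be found at all.
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None

# If this prints a path outside the StyleTransferTraining directory,
# another 'vgg' module is shadowing the sample's vgg.py.
print(locate_module("vgg"))
```

Running the script from the StyleTransferTraining directory (so its vgg.py is first on sys.path) or uninstalling the conflicting package would resolve that kind of shadowing, assuming that is the cause here.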

Training model fails with exception in VS Code

Repro Steps

  1. Follow the steps here to install VS Code, Python (Anaconda), and the Tools for AI extension
  2. When VS Code launches, use the install link to set up the environment
  3. From the Tools for AI Start Page, click the HERE link to download this sample
  4. Press F5

Expected

Successful execution

Actual

Successfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.
Successfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.
Successfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.
Successfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.
Extracting input/train-images-idx3-ubyte.gz
Extracting input/train-labels-idx1-ubyte.gz
Extracting input/t10k-images-idx3-ubyte.gz
Extracting input/t10k-labels-idx1-ubyte.gz
Initialized!
Step 0 (epoch 0.00), 3.0 ms
Minibatch loss: 8.334, learning rate: 0.010000
Minibatch error: 85.9%
Validation error: 84.6%
Step 100 (epoch 0.12), 124.2 ms
Minibatch loss: 3.254, learning rate: 0.010000
Minibatch error: 7.8%
Validation error: 7.7%
Step 200 (epoch 0.23), 127.0 ms
Minibatch loss: 3.382, learning rate: 0.010000
Minibatch error: 10.9%
Validation error: 4.3%
Step 300 (epoch 0.35), 127.2 ms
Minibatch loss: 3.157, learning rate: 0.010000
Minibatch error: 4.7%
Validation error: 2.9%
Step 400 (epoch 0.47), 128.8 ms
Minibatch loss: 3.201, learning rate: 0.010000
Minibatch error: 6.2%
Validation error: 2.6%
Step 500 (epoch 0.58), 128.1 ms
Minibatch loss: 3.179, learning rate: 0.010000
Minibatch error: 6.2%
Validation error: 2.6%
Step 600 (epoch 0.70), 128.0 ms
Minibatch loss: 3.119, learning rate: 0.010000
Minibatch error: 3.1%
Validation error: 2.2%
Step 700 (epoch 0.81), 128.1 ms
Minibatch loss: 2.991, learning rate: 0.010000
Minibatch error: 3.1%
Validation error: 2.3%
Step 800 (epoch 0.93), 127.8 ms
Minibatch loss: 3.048, learning rate: 0.010000
Minibatch error: 4.7%
Validation error: 2.1%
Step 900 (epoch 1.05), 131.8 ms
Minibatch loss: 2.905, learning rate: 0.009500
Minibatch error: 1.6%
Validation error: 1.7%
Step 1000 (epoch 1.16), 127.7 ms
Minibatch loss: 2.876, learning rate: 0.009500
Minibatch error: 1.6%
Validation error: 1.8%
Step 1100 (epoch 1.28), 145.9 ms
Minibatch loss: 2.823, learning rate: 0.009500
Minibatch error: 0.0%
Validation error: 1.5%
Step 1200 (epoch 1.40), 136.1 ms
Minibatch loss: 2.930, learning rate: 0.009500
Minibatch error: 3.1%
Validation error: 1.5%
Step 1300 (epoch 1.51), 143.6 ms
Minibatch loss: 2.815, learning rate: 0.009500
Minibatch error: 3.1%
Validation error: 1.6%
Step 1400 (epoch 1.63), 142.0 ms
Minibatch loss: 2.814, learning rate: 0.009500
Minibatch error: 3.1%
Validation error: 1.5%
Step 1500 (epoch 1.75), 132.6 ms
Minibatch loss: 2.860, learning rate: 0.009500
Minibatch error: 3.1%
Validation error: 1.3%
Step 1600 (epoch 1.86), 134.6 ms
Minibatch loss: 2.724, learning rate: 0.009500
Minibatch error: 1.6%
Validation error: 1.4%
Step 1700 (epoch 1.98), 135.9 ms
Minibatch loss: 2.663, learning rate: 0.009500
Minibatch error: 1.6%
Validation error: 1.6%
Step 1800 (epoch 2.09), 124.8 ms
Minibatch loss: 2.663, learning rate: 0.009025
Minibatch error: 1.6%
Validation error: 1.4%
Step 1900 (epoch 2.21), 124.4 ms
Minibatch loss: 2.638, learning rate: 0.009025
Minibatch error: 1.6%
Validation error: 1.3%
Step 2000 (epoch 2.33), 124.5 ms
Minibatch loss: 2.633, learning rate: 0.009025
Minibatch error: 3.1%
Validation error: 1.2%
Step 2100 (epoch 2.44), 129.6 ms
Minibatch loss: 2.571, learning rate: 0.009025
Minibatch error: 0.0%
Validation error: 1.1%
Step 2200 (epoch 2.56), 133.8 ms
Minibatch loss: 2.575, learning rate: 0.009025
Minibatch error: 1.6%
Validation error: 1.1%
Step 2300 (epoch 2.68), 127.3 ms
Minibatch loss: 2.581, learning rate: 0.009025
Minibatch error: 1.6%
Validation error: 1.1%
Step 2400 (epoch 2.79), 122.5 ms
Minibatch loss: 2.517, learning rate: 0.009025
Minibatch error: 1.6%
Validation error: 1.3%
Step 2500 (epoch 2.91), 133.5 ms
Minibatch loss: 2.474, learning rate: 0.009025
Minibatch error: 0.0%
Validation error: 1.2%
Step 2600 (epoch 3.03), 124.5 ms
Minibatch loss: 2.470, learning rate: 0.008574
Minibatch error: 1.6%
Validation error: 1.6%
Step 2700 (epoch 3.14), 120.9 ms
Minibatch loss: 2.477, learning rate: 0.008574
Minibatch error: 1.6%
Validation error: 1.1%
Step 2800 (epoch 3.26), 120.6 ms
Minibatch loss: 2.500, learning rate: 0.008574
Minibatch error: 4.7%
Validation error: 1.1%
Step 2900 (epoch 3.37), 122.4 ms
Minibatch loss: 2.504, learning rate: 0.008574
Minibatch error: 1.6%
Validation error: 1.0%
Step 3000 (epoch 3.49), 121.3 ms
Minibatch loss: 2.415, learning rate: 0.008574
Minibatch error: 3.1%
Validation error: 0.9%
Step 3100 (epoch 3.61), 121.8 ms
Minibatch loss: 2.396, learning rate: 0.008574
Minibatch error: 3.1%
Validation error: 1.0%
Step 3200 (epoch 3.72), 120.6 ms
Minibatch loss: 2.348, learning rate: 0.008574
Minibatch error: 0.0%
Validation error: 1.1%
Step 3300 (epoch 3.84), 121.2 ms
Minibatch loss: 2.351, learning rate: 0.008574
Minibatch error: 1.6%
Validation error: 1.1%
Step 3400 (epoch 3.96), 127.5 ms
Minibatch loss: 2.303, learning rate: 0.008574
Minibatch error: 1.6%
Validation error: 1.2%
Step 3500 (epoch 4.07), 138.8 ms
Minibatch loss: 2.286, learning rate: 0.008145
Minibatch error: 0.0%
Validation error: 0.9%
Step 3600 (epoch 4.19), 159.9 ms
Minibatch loss: 2.251, learning rate: 0.008145
Minibatch error: 0.0%
Validation error: 0.9%
Step 3700 (epoch 4.31), 149.2 ms
Minibatch loss: 2.230, learning rate: 0.008145
Minibatch error: 0.0%
Validation error: 1.0%
Step 3800 (epoch 4.42), 145.7 ms
Minibatch loss: 2.215, learning rate: 0.008145
Minibatch error: 0.0%
Validation error: 0.9%
Step 3900 (epoch 4.54), 140.0 ms
Minibatch loss: 2.228, learning rate: 0.008145
Minibatch error: 1.6%
Validation error: 1.0%
Step 4000 (epoch 4.65), 135.8 ms
Minibatch loss: 2.235, learning rate: 0.008145
Minibatch error: 3.1%
Validation error: 1.1%
Step 4100 (epoch 4.77), 135.8 ms
Minibatch loss: 2.168, learning rate: 0.008145
Minibatch error: 0.0%
Validation error: 0.9%
Step 4200 (epoch 4.89), 136.9 ms
Minibatch loss: 2.159, learning rate: 0.008145
Minibatch error: 0.0%
Validation error: 1.1%
Step 4300 (epoch 5.00), 165.6 ms
Minibatch loss: 2.209, learning rate: 0.007738
Minibatch error: 3.1%
Validation error: 0.9%
Step 4400 (epoch 5.12), 172.2 ms
Minibatch loss: 2.148, learning rate: 0.007738
Minibatch error: 1.6%
Validation error: 1.1%
Step 4500 (epoch 5.24), 148.4 ms
Minibatch loss: 2.193, learning rate: 0.007738
Minibatch error: 4.7%
Validation error: 0.9%
Step 4600 (epoch 5.35), 134.1 ms
Minibatch loss: 2.085, learning rate: 0.007738
Minibatch error: 0.0%
Validation error: 1.0%
Step 4700 (epoch 5.47), 130.8 ms
Minibatch loss: 2.070, learning rate: 0.007738
Minibatch error: 0.0%
Validation error: 1.0%
Step 4800 (epoch 5.59), 130.4 ms
Minibatch loss: 2.055, learning rate: 0.007738
Minibatch error: 0.0%
Validation error: 0.9%
Step 4900 (epoch 5.70), 142.3 ms
Minibatch loss: 2.080, learning rate: 0.007738
Minibatch error: 1.6%
Validation error: 1.1%
Step 5000 (epoch 5.82), 127.7 ms
Minibatch loss: 2.071, learning rate: 0.007738
Minibatch error: 3.1%
Validation error: 0.9%
Step 5100 (epoch 5.93), 147.5 ms
Minibatch loss: 2.024, learning rate: 0.007738
Minibatch error: 1.6%
Validation error: 1.1%
Step 5200 (epoch 6.05), 148.0 ms
Minibatch loss: 2.056, learning rate: 0.007351
Minibatch error: 4.7%
Validation error: 1.0%
Step 5300 (epoch 6.17), 141.4 ms
Minibatch loss: 1.970, learning rate: 0.007351
Minibatch error: 0.0%
Validation error: 0.8%
Step 5400 (epoch 6.28), 134.5 ms
Minibatch loss: 1.958, learning rate: 0.007351
Minibatch error: 0.0%
Validation error: 1.0%
Step 5500 (epoch 6.40), 131.6 ms
Minibatch loss: 1.961, learning rate: 0.007351
Minibatch error: 0.0%
Validation error: 1.0%
Step 5600 (epoch 6.52), 140.4 ms
Minibatch loss: 1.927, learning rate: 0.007351
Minibatch error: 0.0%
Validation error: 0.9%
Step 5700 (epoch 6.63), 163.3 ms
Minibatch loss: 1.913, learning rate: 0.007351
Minibatch error: 0.0%
Validation error: 0.8%
Step 5800 (epoch 6.75), 146.6 ms
Minibatch loss: 1.897, learning rate: 0.007351
Minibatch error: 0.0%
Validation error: 0.9%
Step 5900 (epoch 6.87), 134.0 ms
Minibatch loss: 1.890, learning rate: 0.007351
Minibatch error: 0.0%
Validation error: 0.9%
Step 6000 (epoch 6.98), 135.4 ms
Minibatch loss: 1.889, learning rate: 0.007351
Minibatch error: 0.0%
Validation error: 1.0%
Step 6100 (epoch 7.10), 141.3 ms
Minibatch loss: 1.868, learning rate: 0.006983
Minibatch error: 0.0%
Validation error: 0.9%
Step 6200 (epoch 7.21), 143.3 ms
Minibatch loss: 1.844, learning rate: 0.006983
Minibatch error: 0.0%
Validation error: 0.9%
Step 6300 (epoch 7.33), 158.5 ms
Minibatch loss: 1.837, learning rate: 0.006983
Minibatch error: 0.0%
Validation error: 0.9%
Step 6400 (epoch 7.45), 147.4 ms
Minibatch loss: 1.839, learning rate: 0.006983
Minibatch error: 1.6%
Validation error: 0.9%
Step 6500 (epoch 7.56), 145.9 ms
Minibatch loss: 1.810, learning rate: 0.006983
Minibatch error: 0.0%
Validation error: 0.9%
Step 6600 (epoch 7.68), 142.3 ms
Minibatch loss: 1.821, learning rate: 0.006983
Minibatch error: 1.6%
Validation error: 0.9%
Step 6700 (epoch 7.80), 131.6 ms
Minibatch loss: 1.784, learning rate: 0.006983
Minibatch error: 0.0%
Validation error: 0.9%
Step 6800 (epoch 7.91), 131.9 ms
Minibatch loss: 1.774, learning rate: 0.006983
Minibatch error: 0.0%
Validation error: 0.9%
Step 6900 (epoch 8.03), 133.5 ms
Minibatch loss: 1.757, learning rate: 0.006634
Minibatch error: 0.0%
Validation error: 0.7%
Step 7000 (epoch 8.15), 129.1 ms
Minibatch loss: 1.753, learning rate: 0.006634
Minibatch error: 0.0%
Validation error: 0.9%
Step 7100 (epoch 8.26), 132.9 ms
Minibatch loss: 1.734, learning rate: 0.006634
Minibatch error: 0.0%
Validation error: 0.8%
Step 7200 (epoch 8.38), 133.1 ms
Minibatch loss: 1.736, learning rate: 0.006634
Minibatch error: 0.0%
Validation error: 1.0%
Step 7300 (epoch 8.49), 136.3 ms
Minibatch loss: 1.749, learning rate: 0.006634
Minibatch error: 3.1%
Validation error: 0.8%
Step 7400 (epoch 8.61), 150.9 ms
Minibatch loss: 1.700, learning rate: 0.006634
Minibatch error: 0.0%
Validation error: 0.7%
Step 7500 (epoch 8.73), 142.8 ms
Minibatch loss: 1.699, learning rate: 0.006634
Minibatch error: 0.0%
Validation error: 0.9%
Step 7600 (epoch 8.84), 136.3 ms
Minibatch loss: 1.740, learning rate: 0.006634
Minibatch error: 1.6%
Validation error: 0.8%
Step 7700 (epoch 8.96), 135.7 ms
Minibatch loss: 1.666, learning rate: 0.006634
Minibatch error: 0.0%
Validation error: 1.0%
Step 7800 (epoch 9.08), 151.3 ms
Minibatch loss: 1.659, learning rate: 0.006302
Minibatch error: 0.0%
Validation error: 0.9%
Step 7900 (epoch 9.19), 138.0 ms
Minibatch loss: 1.646, learning rate: 0.006302
Minibatch error: 0.0%
Validation error: 0.8%
Step 8000 (epoch 9.31), 146.6 ms
Minibatch loss: 1.646, learning rate: 0.006302
Minibatch error: 0.0%
Validation error: 0.9%
Step 8100 (epoch 9.43), 130.6 ms
Minibatch loss: 1.634, learning rate: 0.006302
Minibatch error: 0.0%
Validation error: 0.9%
Step 8200 (epoch 9.54), 129.0 ms
Minibatch loss: 1.618, learning rate: 0.006302
Minibatch error: 0.0%
Validation error: 0.9%
Step 8300 (epoch 9.66), 131.6 ms
Minibatch loss: 1.607, learning rate: 0.006302
Minibatch error: 0.0%
Validation error: 0.8%
Step 8400 (epoch 9.77), 142.1 ms
Minibatch loss: 1.597, learning rate: 0.006302
Minibatch error: 0.0%
Validation error: 0.8%
Step 8500 (epoch 9.89), 153.5 ms
Minibatch loss: 1.606, learning rate: 0.006302
Minibatch error: 0.0%
Validation error: 0.9%
Test error: 0.7%
Backend MacOSX is interactive backend. Turning interactive mode on.

Then the debugger broke at line 409 of mnist.py with the following:

Exception has occurred: SystemExit
exception: no description
  File "/Users/seank/Desktop/MNIST/mnist.py", line 409, in <module>
    tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
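The SystemExit here is expected rather than a crash: in TensorFlow 1.x, tf.app.run wraps the return value of main in sys.exit, so even a successful run ends by raising SystemExit, and a debugger configured to break on raised exceptions stops on it. A simplified model of that behaviour (run_app is a hypothetical stand-in, not TensorFlow's actual source):

```python
import sys

def run_app(main, argv):
    # Simplified model of tf.app.run: after parsing flags it calls
    # sys.exit(main(argv)), so every run terminates via SystemExit.
    sys.exit(main(argv))

def main(argv):
    print("training finished")
    return None  # None/0 means success

try:
    run_app(main, [sys.argv[0]])
except SystemExit as e:
    # The debugger breaks here even though e.code indicates success.
    print("exited with", e.code)
```

Since the log above ends with "Test error: 0.7%", the training actually completed; continuing past the exception (or disabling break-on-SystemExit in the debugger) should let the process exit cleanly.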

Unable to Start mnist.py

Hello,

I tried to run mnist.py with 'Start without Debugging', but I get an error:

  File "samples-for-ai-master\examples\tensorflow\MNIST\mnist.py", line 37, in <module>
    import tensorflow as tf
  File "Python\Python36\lib\site-packages\tensorflow\__init__.py", line 22, in <module>
    from tensorflow.python import pywrap_tensorflow  # pylint: disable=unused-import
  File "Python\Python36\lib\site-packages\tensorflow\python\__init__.py", line 49, in <module>
    from tensorflow.python import pywrap_tensorflow
  File "Python\Python36\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in <module>
    from tensorflow.python.pywrap_tensorflow_internal import *
  File "Python\Python36\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 121, in <module>
    TFE_ContextOptionsSetServerDef = _pywrap_tensorflow_internal.TFE_ContextOptionsSetServerDef
AttributeError: module 'tensorflow.python._pywrap_tensorflow_internal' has no attribute 'TFE_ContextOptionsSetServerDef'

I think this is caused by the fact that I am running Python 3.6. Any suggestions on how to fix this issue?
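Failures inside pywrap_tensorflow_internal at import time usually point to a mismatch between the installed TensorFlow wheel and the Python interpreter (or a corrupted install), and reinstalling a wheel built for the interpreter in use is the usual remedy. A small hedged pre-flight check (the tag format is the standard CPython ABI tag pip uses; treating it as the diagnostic here is an assumption):

```python
import sys

def interpreter_tag():
    # The ABI tag pip uses to select a compatible wheel,
    # e.g. "cp36" for CPython 3.6.
    return "cp{}{}".format(sys.version_info.major, sys.version_info.minor)

print("Running", interpreter_tag(),
      "- make sure the installed tensorflow wheel was built for this tag.")
```

If the wheel's tag (visible in its filename, e.g. tensorflow-...-cp36-...whl) does not match the running interpreter, pip install a matching build in a clean environment.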

Trained weights not applied to the sampling process of ScreenerNet?

In the ScreenerNet example, the resampling in the training process seems like it should use the trained per-sample weights, but the train loader appears to simply retrieve samples in the ordinary PyTorch DataLoader manner in the file sent.py in the ScreenerNet dir. So is the only effect of ScreenerNet the gradient updates of the main network? Also, I don't see a Prioritized Experience Replay (PER) process in the code (i.e., adjusting the weights used for sampling).

I am confused by this; it may be due to my own misunderstanding. Please give me an explanation.
@TobeyQin
