
tensorflow-1-public's Introduction

DeepLearning.AI TensorFlow Developer

Welcome to the public repo for this course.

Below is the list of assignments and ungraded labs for each course.

Want to contribute?

At this time we are not accepting Pull Requests, but if you have any suggestions or spot any typos, please raise an issue.

If you find a bug that is blocking you in any way, consider joining our community, where our mentors and team will help you. You can also find more information on our community in this Reading Item within Coursera.


C1 - Introduction to TensorFlow for Artificial Intelligence, Machine Learning, and Deep Learning

Week 1

Assignment

  • Housing Prices (C1W1_Assignment.ipynb)

Ungraded Labs

  1. Hello World Neural Network (C1_W1_Lab_1_hello_world_nn.ipynb)

Week 2

Assignment

  • Handwriting Recognition (C1W2_Assignment.ipynb)

Ungraded Labs

  1. Beyond Hello World, A Computer Vision Example (C1_W2_Lab_1_beyond_hello_world.ipynb)
  2. Callbacks (C1_W2_Lab_2_callbacks.ipynb)

Week 3

Assignment

  • Improve MNIST with Convolutions (C1W3_Assignment.ipynb)

Ungraded Labs

  1. Improving Accuracy with Convolutions (C1_W3_Lab_1_improving_accuracy_using_convolutions.ipynb)
  2. Exploring Convolutions (C1_W3_Lab_2_exploring_convolutions.ipynb)

Week 4

Assignment

  • Handling Complex Images (C1W4_Assignment.ipynb)

Ungraded Labs

  1. Image Generator (C1_W4_Lab_1_image_generator_no_validation.ipynb)
  2. Image Generator with Validation (C1_W4_Lab_2_image_generator_with_validation.ipynb)
  3. Compacted Images (C1_W4_Lab_3_compacted_images.ipynb)

C2 - Convolutional Neural Networks in TensorFlow

Week 1

Assignment

  • Cats vs. Dogs (C2W1_Assignment.ipynb)

Ungraded Labs

  1. Using more sophisticated images with Convolutional Neural Networks (C2_W1_Lab_1_cats_vs_dogs.ipynb)

Week 2

Assignment

  • Cats vs. Dogs using Augmentation (C2W2_Assignment.ipynb)

Ungraded Labs

  1. Cats vs. Dogs with Augmentation (C2_W2_Lab_1_cats_v_dogs_augmentation.ipynb)
  2. Horses vs. Humans with Augmentation (C2_W2_Lab_2_horses_v_humans_augmentation.ipynb)

Week 3

Assignment

  • Horses vs. Humans using Transfer Learning (C2W3_Assignment.ipynb)

Ungraded Labs

  1. Exploring Transfer Learning (C2_W3_Lab_1_transfer_learning.ipynb)

Week 4

Assignment

  • Multi-class Classifier (C2W4_Assignment.ipynb)

Ungraded Labs

  1. Classifying Rock, Paper, and Scissors (C2_W4_Lab_1_multi_class_classifier.ipynb)

C3 - Natural Language Processing in TensorFlow

Week 1

Assignment

  • Explore the BBC News Archive (C3W1_Assignment.ipynb)

Ungraded Labs

  1. Simple Tokenizing (C3_W1_Lab_1_tokenize_basic.ipynb)
  2. Simple Sequences (C3_W1_Lab_2_sequences_basic.ipynb)
  3. Sarcasm (C3_W1_Lab_3_sarcasm.ipynb)

Week 2

Assignment

  • Categorizing the BBC News Archive (C3W2_Assignment.ipynb)

Ungraded Labs

  1. Positive or Negative IMDB Reviews (C3_W2_Lab_1_imdb.ipynb)
  2. Sarcasm Classifier (C3_W2_Lab_2_sarcasm_classifier.ipynb)
  3. IMDB Review Subwords (C3_W2_Lab_3_imdb_subwords.ipynb)

Week 3

Assignment

  • Exploring Overfitting in NLP (C3W3_Assignment.ipynb)

Ungraded Labs

  1. IMDB Subwords 8K with Single Layer LSTM (C3_W3_Lab_1_single_layer_LSTM.ipynb)
  2. IMDB Subwords 8K with Multi Layer LSTM (C3_W3_Lab_2_multiple_layer_LSTM.ipynb)
  3. IMDB Subwords 8K with 1D Convolutional Layer (C3_W3_Lab_3_Conv1D.ipynb)
  4. IMDB Reviews with GRU (and optional LSTM and Conv1D) (C3_W3_Lab_4_imdb_reviews_with_GRU_LSTM_Conv1D.ipynb)
  5. Sarcasm with Bidirectional LSTM (C3_W3_Lab_5_sarcasm_with_bi_LSTM.ipynb)
  6. Sarcasm with 1D Convolutional Layer (C3_W3_Lab_6_sarcasm_with_1D_convolutional.ipynb)

Week 4

Assignment

  • Writing Shakespeare with LSTMs (C3W4_Assignment.ipynb)

Ungraded Labs

  1. NLP with Irish Music (C3_W4_Lab_1.ipynb)
  2. Generating Poetry from Irish Lyrics (C3_W4_Lab_2_irish_lyrics.ipynb)

C4 - Sequences, Time Series and Prediction

Week 1

Assignment

  • Create and Predict Synthetic Data (C4W1_Assignment.ipynb)

Ungraded Labs

  1. Time Series (C4_W1_Lab_1_time_series.ipynb)
  2. Forecasting (C4_W1_Lab_2_forecasting.ipynb)

Week 2

Assignment

  • Predict with a DNN (C4W2_Assignment.ipynb)

Ungraded Labs

  1. Preparing Features and Labels (C4_W2_Lab_1_features_and_labels.ipynb)
  2. Single Layer Neural Network (C4_W2_Lab_2_single_layer_NN.ipynb)
  3. Deep Neural Network (C4_W2_Lab_3_deep_NN.ipynb)

Week 3

Assignment

  • Using RNNs and LSTMs for time series (C4W3_Assignment.ipynb)

Ungraded Labs

  1. Recurrent Neural Network (RNN) (C4_W3_Lab_1_RNN.ipynb)
  2. Long Short-Term Memory (LSTM) (C4_W3_Lab_2_LSTM.ipynb)

Week 4

Assignment

  • Daily Minimum Temperatures in Melbourne - Real Life Data (C4W4_Assignment.ipynb)

Ungraded Labs

  1. Long Short-Term Memory (LSTM) (C4_W4_Lab_1_LSTM.ipynb)
  2. Sunspots (C4_W4_Lab_2_Sunspots.ipynb)
  3. Sunspots - DNN Only (C4_W4_Lab_3_DNN_only.ipynb)

tensorflow-1-public's People

Contributors

andres-zartab, cfav-dev, chrismoroney, mubsi


tensorflow-1-public's Issues

Course 3 - Week 3 - Lesson 1c Using Deprecated Function

Hi Deep Learning.AI Team,

For the below code,

train_dataset = train_dataset.padded_batch(BATCH_SIZE, train_dataset.output_shapes)
test_dataset = test_dataset.padded_batch(BATCH_SIZE, test_dataset.output_shapes)

it uses the deprecated attribute "output_shapes", so users running TF 2.0 cannot run the cell.
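In recent TF 2.x releases, `padded_batch` can infer the padded shapes, so the usual fix is simply `train_dataset.padded_batch(BATCH_SIZE)` with the second argument dropped. Conceptually, padded batching pads each batch to its longest member; a pure-Python sketch of that behavior (names are illustrative, not the TF implementation):

```python
def padded_batch(sequences, batch_size, pad_value=0):
    """Group sequences into batches, padding each batch to its longest member."""
    batches = []
    for i in range(0, len(sequences), batch_size):
        batch = sequences[i:i + batch_size]
        max_len = max(len(s) for s in batch)
        batches.append([s + [pad_value] * (max_len - len(s)) for s in batch])
    return batches

batches = padded_batch([[1, 2], [3], [4, 5, 6], [7]], batch_size=2)
# -> [[[1, 2], [3, 0]], [[4, 5, 6], [7, 0, 0]]]
```

Note that each batch is padded only to its own longest element, which is exactly why TF no longer needs the dataset-wide `output_shapes` hint.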

C2W1_Assignment.ipynb: Expected Output is wrong

The number of images should be 12500, not 12501

Expected Output:

There are 12501 images of dogs.
There are 12501 images of cats.

In the Dog and Cat directory in the Zip there are 12501 files.
However, after extracting the Zip to /tmp, Thumbs.db is deleted by the code below, so the file count is 12500.

# Deletes all non-image files (there are two .db files bundled into the dataset)
!find /tmp/PetImages/ -type f ! -name "*.jpg" -exec rm {} +
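To double-check the corrected count, you can count only the `.jpg` files left after that cleanup; a small sketch (the directory argument is whatever path you extracted to, e.g. the lab's /tmp/PetImages/Dog):

```python
from pathlib import Path

def count_jpgs(directory):
    """Count .jpg files under a directory, i.e. what the `find` cleanup leaves behind."""
    return sum(1 for _ in Path(directory).rglob("*.jpg"))
```

After the cleanup cell runs, this should report 12500 for each of the Dog and Cat directories.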

Lab for C2_W2_Lab_1_cats_v_dogs_augmentation.ipynb

Access denied with the following error:

Cannot retrieve the public link of the file. You may need to change
the permission to 'Anyone with the link', or have had many accesses. 

Please allow access so that these (fairly large) files can be downloaded without hitting this error.

tensorflow-1-public/C1/W2/assignment/C1_W2_Assignment_Solution.ipynb fix?


I don't understand this 'fix'.

In my experiments, logs.get('accuracy') always returns None, so you will never exit the callback by achieving a target accuracy.

As noted in the previous issue, a fix would be to use 'loss' which does return a real value. Note, the grader may need to be updated as well.
[Edit]
I see that logs.get('accuracy') does run successfully in the Colab environment.

So, I guess this issue should really be to update the Coursera environment?

Example: In Coursera, Exercise 2, I have:

class myCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):
        print(f"accuracy: {logs.get('accuracy')}")
        if logs.get('loss') < 0.01:
            print("\nReached 99% accuracy so cancelling training!")
            self.model.stop_training = True

I get output like this:
Epoch 9/9 59424/60000 [============================>.] - ETA: 0s - loss: 0.0126 - acc: 0.9957accuracy: None 60000/60000 [==============================] - 10s 162us/sample - loss: 0.0126 - acc: 0.9957

logs.get('accuracy') is returning None, so I used logs.get('loss') instead.

I guess I am not really sure what's going on. Is there a compile/other flag that allows logs.get('accuracy') to work on Coursera?

The differences between the Colab and Coursera environments are confusing. Why not do it all on Coursera?

Also, curiously, the model.fit() output includes accuracy in its printout.
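One plausible explanation, given that the epoch printout above shows acc: 0.9957: older TF 1.x versions log this metric under the key 'acc', while newer versions use 'accuracy'. A callback that checks both keys works in either environment; a minimal sketch of that guard:

```python
def get_accuracy(logs):
    """Return the accuracy metric whether the backend logs it as
    'accuracy' (newer TF) or 'acc' (TF 1.x); None if neither is present."""
    return logs.get('accuracy', logs.get('acc'))

# Works with either key, and degrades to None instead of crashing
print(get_accuracy({'acc': 0.9957}))       # TF 1.x-style logs
print(get_accuracy({'accuracy': 0.95}))    # TF 2.x-style logs
```

Inside the callback you would then write `acc = get_accuracy(logs)` and test `acc is not None and acc > 0.99`.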

NLP Lab

seed_text = "Laurence went to dublin"
next_words = 100
  
for _ in range(next_words):
	token_list = tokenizer.texts_to_sequences([seed_text])[0]
	token_list = pad_sequences([token_list], maxlen=max_sequence_len-1, padding='pre')
	predicted = model.predict_classes(token_list, verbose=0)
	output_word = ""
	for word, index in tokenizer.word_index.items():
		if index == predicted:
			output_word = word
			break
	seed_text += " " + output_word
print(seed_text)

predict_classes has been deprecated, based on these answers: predict_classes
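The commonly recommended replacement is to take the argmax of `model.predict`, e.g. `predicted = np.argmax(model.predict(token_list, verbose=0), axis=-1)`. The argmax step itself is simple; a pure-Python sketch for a single prediction row:

```python
def predict_class(probabilities):
    """Index of the highest-probability class in one prediction row,
    i.e. what predict_classes used to return per sample."""
    return max(range(len(probabilities)), key=lambda i: probabilities[i])

print(predict_class([0.1, 0.7, 0.2]))  # class 1 has the highest probability
```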

tensorflow-1-public/C1/W2/assignment/C1_W2_Assignment_Solution.ipynb logs.get('accuracy')

In the callback, the ungraded lab demonstrated:
class myCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):
        if logs.get('loss') < 0.4:
            ...

In the graded assignment, the expectation seems to be that logs.get('accuracy') should be used!

I suggest:

  1. the ungraded and graded lab use the same method.
  2. the method be 'loss' rather than accuracy because:
    a) the model.fit routine spits out loss at each iteration.
    b) accuracy has not yet been defined in the course

This will require rewriting the instructions to clarify.
Any change here will require coordination with the grader.

Minor typo in C4_W2_Lab_3_deep_NN.ipynb

In the 3rd to last markdown cell, the fifth word is missing a letter. Alternatively, the typo appears on line 696 of the raw ipynb file.

You can get the preictions again and overlay it on the validation set.

should be:

You can get the predictions again and overlay it on the validation set.

Small correction in the ReLU explanation

I think the following sentence:

In other words, it only passes values 0 or greater to the next layer in the network

should be updated as below:

In other words, it only passes values greater than 0 to the next layer in the network

Because the test is:

if x > 0:
    return x
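For reference, ReLU is commonly defined as max(0, x): positive values pass through and everything else becomes 0. A one-line sketch:

```python
def relu(x):
    """Rectified linear unit: pass positive values through, clamp the rest to 0."""
    return x if x > 0 else 0

print(relu(3.5), relu(-2), relu(0))  # negative inputs and 0 both map to 0
```

Since non-positive inputs are mapped to 0 rather than dropped, the next layer still receives a value (namely 0) for them, which is where the two phrasings in this issue diverge.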

C3W1_Assignment.ipynb: wrong remove_stopwords implementation

The notebook mentions you should get a vocabulary that contains 29714 words, but that vocab still contains stop words. That's because a basic implementation of remove_stopwords just removes the stop words without considering ones with punctuation attached. So if a sentence contains a word like about. (about being a stop word), then when fit_on_texts runs, about will end up in the vocab.
With an implementation using re, I got a vocab of 29608 words.

for word in stopwords:
    sentence = re.sub(rf'\b{word}\b', '', sentence)

Another approach is using a TextVectorization layer. With the following standardize method I got the same number of words, 29608 + 1 ('').

def custom_standardization(input_data):
    
    text = tf.strings.lower(input_data)
    text = tf.strings.regex_replace(text, '[%s]' % re.escape(string.punctuation), ' ')

    for word in stopwords:
        text = tf.strings.regex_replace(text, rf'\b{word}\b', '')

    return text
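The word-boundary approach above can be made self-contained; a minimal sketch (the stopword list here is a tiny illustrative subset, and the trailing whitespace cleanup is an assumption about the desired output):

```python
import re

def remove_stopwords(sentence, stopwords):
    """Remove stop words using word boundaries, so punctuation-adjacent
    occurrences like 'about.' are still matched."""
    for word in stopwords:
        sentence = re.sub(rf'\b{word}\b', '', sentence)
    # collapse the gaps left behind by the removed words
    return re.sub(r'\s+', ' ', sentence).strip()

cleaned = remove_stopwords("tell me about. the news", ["about", "the", "me"])
print(cleaned)  # 'about' is removed even though it was written as 'about.'
```

Because `\b` matches between a letter and punctuation, `about.` loses its `about` here, which a plain split-on-spaces implementation would miss.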

Typo

At, 'Build the model' second paragraph line 5.

"For the input_shape, you can specify None (like in the lecture video) if you want to be the model to be more flexible with the number of timesteps."

tensorflow-1-public/C1/W2/ungraded_labs/C1_W2_Lab_1_beyond_hello_world.ipynb Accuracy vs Loss

image

The use of the term 'accuracy' here seems incorrect?

  1. logs.get('loss') is going to return the 'sparse_categorical_crossentropy' loss.
  2. accuracy = #correct_predictions / #total_predictions

I recommend just saying "Loss < xx%".
This is going to be related to the next couple of issues that relate to the graded assignment.

Let's note at this point that the ungraded lab is using logs.get('loss').

C1_W4_Lab_2_image_generator_with_validation.ipynb name error

Set up matplotlib fig, and size it to fit 4x4 pics

next_horse_pix = [os.path.join(train_horse_dir, fname)
for fname in val_horse_names[pic_index-8:pic_index]]
next_human_pix = [os.path.join(train_human_dir, fname)
for fname in val_human_names[pic_index-8:pic_index]]

should be changed as below:

next_horse_pix = [os.path.join(train_horse_dir, fname)
for fname in train_horse_names[pic_index-8:pic_index]]
next_human_pix = [os.path.join(train_human_dir, fname)
for fname in train_human_names[pic_index-8:pic_index]]

C1_W2_Lab_1_beyond_hello_world.ipynb: some portions still tailored for mnist dataset

In the exploration exercises some portions are still pertaining to mnist, instead of fashion_mnist.

  • mnist = tf.keras.datasets.mnist -> mnist = tf.keras.datasets.fashion_mnist
    • Exercises 2, 4, 5, and 7: no impact on the global meaning, just for consistency
    • Exercise 3: the dimensions in the error message don't match the explanation
    • Exercise 6: just might have an issue with the 128-neuron hidden layer not having the same behavior as the baseline 512-neuron layer in terms of loss evolution
  • Exercise 8, the 95% in the text should be changed to 60% (95% was for mnist)

Error when downloading dataset

Access denied with the following error:

Cannot retrieve the public link of the file. You may need to change
the permission to 'Anyone with the link', or have had many accesses. 

You may still be able to access the file from the browser:

output not cleared in installed version

When I opened the first assignment, there were error messages on the output of some of the cells.

This does not appear in the GitHub version, but it does in the file installed on Coursera. It looks like once the file was installed, it was perhaps run without a library install?
Anyway, you just need to clear the output and save the file on the Coursera image.

I would provide an image of the error messages, but once I ran the file, the copy in my container changed.

Small Typo in C3W3_Assignment.ipynb

In the global variables markdown cell, OOV_TOKEN line, it should be written as:
Defaults to \"\<OOV>\".
There is an extra \ in the middle.

Typo in C4_W4_Lab_3_Sunspots_CNN_RNN_DNN

In the C4_W4_Lab_3_Sunspots_CNN_RNN_DNN notebook, there's a small typo. It says:

As mentioned in the lectures, if your results don't good, you can try tweaking the parameters here and see if the model will learn better.

The part which doesn't look right is don't good. I'm not sure what the author intended to write, but it seems like it's missing a verb like look:

As mentioned in the lectures, if your results don't look good, you can try tweaking the parameters here and see if the model will learn better.

tensorflow-1-public/C1/W2/assignment/C1_W2_Assignment_Solution.ipynb 9 Epochs vs 10

In the assignment, there are these instructions:

Some notes:

  • It should succeed in less than 10 epochs, so it is okay to change epochs to 10, but nothing larger
  • When it reaches 99% or greater it should print out the string "Reached 99% accuracy so cancelling training!"
  • If you add any additional variables, make sure you use the same names as the ones used in the class

This should be changed to 9 epochs. (also need to account for changing to loss vs accuracy...)

The grader checks to see if you run more than 9 epochs and fails the notebook if you have.

While the callback might prevent you from running the full 10, it does not always work. This is in part because the version of the code I was using (from the live course last week) did not have the warning about if logs.get('accuracy'), and I was using logs.get('loss') < 0.01, which is not the same as accuracy...

Note, any changes here need to be tested with the grader.

ValueError: Data cardinality is ambiguous:

The below section of the lab notebook "C3_W2_Lab_1_imdb.ipynb" causes a ValueError:
Code

num_epochs = 10

# Train the model
model.fit(padded, training_labels_final, epochs=num_epochs, validation_data=(testing_padded, testing_labels_final))

Error:

774/782 [============================>.] - ETA: 0s - loss: 0.4963 - accuracy: 0.7408

---------------------------------------------------------------------------

ValueError                                Traceback (most recent call last)

<ipython-input-35-f101d07e60e0> in <module>()
      2 
      3 # Train the model
----> 4 model.fit(padded, training_labels_final, epochs=num_epochs, validation_data=(testing_padded, testing_labels_final))

1 frames

/usr/local/lib/python3.7/dist-packages/keras/engine/data_adapter.py in _check_data_cardinality(data)
   1651                            for i in tf.nest.flatten(single_data)))
   1652     msg += "Make sure all arrays contain the same number of samples."
-> 1653     raise ValueError(msg)
   1654 
   1655 

ValueError: Data cardinality is ambiguous:
  x sizes: 0
  y sizes: 25000
Make sure all arrays contain the same number of samples.
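The "x sizes: 0" part of the message suggests `padded` is empty, e.g. because an earlier cell that builds it was skipped or re-run out of order. A quick check before `fit` can surface this sooner; a pure-Python sketch (the function name is illustrative):

```python
def check_cardinality(x, y):
    """Raise early, with a clearer message, if features and labels
    disagree on sample count (what Keras checks at the end of an epoch)."""
    if len(x) != len(y):
        raise ValueError(f"Sample count mismatch: x has {len(x)}, y has {len(y)}")
    return len(x)

print(check_cardinality([[1], [2]], [0, 1]))  # matching counts pass
```

Calling `check_cardinality(padded, training_labels_final)` right before `model.fit` would fail immediately rather than after a partial epoch.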


TypeError: '<' not supported between instances of 'NoneType' and 'float' while using callbacks to stop training after a certain mae

Code for W4 Time Series Notebook on using a callback to stop training after a certain mae
model: LSTM

class myCallback(tf.keras.callbacks.Callback):
  def on_epoch_end(self, epoch, logs={}):
    '''
    Halts the training when a certain metric is met

    Args:
      epoch (integer) - index of epoch (required but unused in the function definition below)
      logs (dict) - metric results from the training epoch
    '''

    # Check the validation set MAE
    if(logs.get('val_mae') < 5.2):

      # Stop if threshold is met
      print("\nRequired val MAE is met so cancelling training!")
      self.model.stop_training = True

# Instantiate the class
callbacks = myCallback()

returns the error 'TypeError: '<' not supported between instances of 'NoneType' and 'float' on line ' if(logs.get('val_mae') < 5.2):'

Any help on the above will be appreciated!
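`logs.get('val_mae')` returns None when that key is absent from the logs, typically because `model.fit` was called without `validation_data` (so no val_* keys exist) or the model was compiled without `metrics=['mae']`. Guarding against None avoids the TypeError either way; a pure-Python sketch of the check (threshold taken from the snippet above):

```python
def should_stop(logs, threshold=5.2):
    """Stop only when val_mae is actually present and below the threshold."""
    val_mae = logs.get('val_mae')
    return val_mae is not None and val_mae < threshold

print(should_stop({'val_mae': 5.0}))  # metric present and good
print(should_stop({'loss': 0.1}))     # metric missing -> no crash, keep training
```

Inside the callback, `if should_stop(logs): self.model.stop_training = True` replaces the bare comparison.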

C2_W4_Lab_1_multi_class_classifier.ipynb No module named 'keras_preprocessing'

In the "Prepare the ImageDataGenerator" section, you need to replace
"from keras_preprocessing.image import ImageDataGenerator"
with
"from tensorflow.keras.preprocessing.image import ImageDataGenerator"
Without this, I'm getting the error:

---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
in <cell line: 1>()
----> 1 from keras_preprocessing.image import ImageDataGenerator
2 from tensorflow.keras.preprocessing.image import ImageDataGenerator
3
4 TRAINING_DIR = "tmp/rps-train/rps"
5 training_datagen = ImageDataGenerator(

ModuleNotFoundError: No module named 'keras_preprocessing'


NOTE: If your import is failing due to a missing package, you can
manually install dependencies using either !pip or !apt.

To view examples of installing some common dependencies, click the
"Open Examples" button below.
---------------------------------------------------------------------------

Syntax error in the window function

Hello, in the notebook C4_W4_Lab_2_Sunspots.ipynb , in the windowed_dataset function, I think that line:
ds = ds.map(lambda w: (w[:-1], w[1:]))
should be like this:
ds = ds.map(lambda w: (w[:-1], w[-1:]))
Can you please confirm?
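Neither mapping is a syntax error; they set up different tasks. `w[1:]` makes the label the whole input shifted one step ahead (sequence-to-sequence), while `w[-1:]` keeps only the final value (sequence-to-vector), and which one is right depends on the model's output shape in that lab. A pure-Python illustration of the two:

```python
window = [10, 20, 30, 40, 50]

# Sequence-to-sequence: the label is the input shifted one step ahead
seq2seq = (window[:-1], window[1:])   # ([10, 20, 30, 40], [20, 30, 40, 50])

# Sequence-to-vector: the label is only the final value
seq2vec = (window[:-1], window[-1:])  # ([10, 20, 30, 40], [50])
```

If the model's last layer emits one value per window, `w[-1:]` is the matching target; if it emits a value per timestep, `w[1:]` is.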

AttributeError: 'NoneType' object has no attribute 'set_model'

How do I fix the following error? It happens when I try to run the final codeblock where you have to call the function. Please help, I'm kinda stuck here

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-13-1733272e84a3> in <module>
----> 1 hist = train_mnist(x_train, y_train)

<ipython-input-12-1769252edc00> in train_mnist(x_train, y_train)
     23     # Fit the model for 10 epochs adding the callbacks
     24     # and save the training history
---> 25     history = model.fit(x_train, y_train, epochs=10, callbacks=[callbacks])
     26 
     27     ### END CODE HERE

/opt/conda/lib/python3.8/site-packages/keras/utils/traceback_utils.py in error_handler(*args, **kwargs)
     65     except Exception as e:  # pylint: disable=broad-except
     66       filtered_tb = _process_traceback_frames(e.__traceback__)
---> 67       raise e.with_traceback(filtered_tb) from None
     68     finally:
     69       del filtered_tb

/opt/conda/lib/python3.8/site-packages/keras/callbacks.py in set_model(self, model)
    283       model.history = self._history
    284     for callback in self.callbacks:
--> 285       callback.set_model(model)
    286 
    287   def _call_batch_hook(self, mode, hook, batch, logs=None):

AttributeError: 'NoneType' object has no attribute 'set_model'

C1_W2_Lab_1_beyond_hello_world.ipynb Error in tf.keras.activation.softmax

input to softmax function: [1. 3. 4. 2.]

ValueError                                Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_21880\3343928682.py in <module>
      5
      6 # feed the inputs to a softmax activation function
----> 7 outputs = tf.keras.activations.softmax(inputs)
      8 print(f'output of softmax function: {outputs.numpy()}')
      9

d:\resoluteai\anaconda\lib\site-packages\tensorflow\python\util\traceback_utils.py in error_handler(*args, **kwargs)
    151   except Exception as e:
    152     filtered_tb = _process_traceback_frames(e.__traceback__)
--> 153     raise e.with_traceback(filtered_tb) from None
    154   finally:
    155     del filtered_tb

d:\resoluteai\anaconda\lib\site-packages\keras\activations.py in softmax(x, axis)
     90     output = e / s
     91   else:
---> 92     raise ValueError(
     93       f"Cannot apply softmax to a tensor that is 1D. Received input: {x}"
     94     )

ValueError: Cannot apply softmax to a tensor that is 1D. Received input: [1. 3. 4. 2.]

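Newer Keras versions require a batch dimension for softmax, so the usual fix is to add one, e.g. `inputs = tf.constant([[1., 3., 4., 2.]])` (note the extra brackets). The math itself, sketched in plain Python for one row:

```python
import math

def softmax(row):
    """Softmax over one row: exponentiate each value, then normalize so the row sums to 1."""
    exps = [math.exp(v) for v in row]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([1.0, 3.0, 4.0, 2.0])
print(probs)  # largest input (4.0, index 2) gets the largest probability
```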

Getting error while trying to download file from drive

Executing this command as per C1_W4_Lab_1_image_generator_no_validation.ipynb: !gdown --id 1onaG42NZft3wCE1WH0GDEbUhu75fedP5

Getting the following error:

Access denied with the following error:

 	Cannot retrieve the public link of the file. You may need to change
	the permission to 'Anyone with the link', or have had many accesses. 

You may still be able to access the file from the browser:

	 https://drive.google.com/uc?id=1onaG42NZft3wCE1WH0GDEbUhu75fedP5 
