
A collection of infrastructure and tools for research in neural network interpretability.

License: Apache License 2.0


lucid's Introduction

Lucid


Lucid is a collection of infrastructure and tools for research in neural network interpretability.

We're not currently supporting tensorflow 2!

If you'd like to use Lucid in Colab, which defaults to TensorFlow 2, add this magic to a cell before you import tensorflow:

%tensorflow_version 1.x

Lucid is research code, not production code. We provide no guarantee it will work for your use case. Lucid is maintained by volunteers who are unable to provide significant technical support.


Notebooks

Start visualizing neural networks with no setup. The following notebooks run right from your browser, thanks to Colaboratory. It's a Jupyter notebook environment that requires no setup to use and runs entirely in the cloud.

You can run the notebooks on your local machine, too. Clone the repository and find them in the notebooks subfolder. You will need to run a local instance of the Jupyter notebook environment to execute them.

Tutorial Notebooks

Feature Visualization Notebooks

Notebooks corresponding to the Feature Visualization article

Building Blocks Notebooks

Notebooks corresponding to the Building Blocks of Interpretability article





Differentiable Image Parameterizations Notebooks

Notebooks corresponding to the Differentiable Image Parameterizations article


Activation Atlas Notebooks

Notebooks corresponding to the Activation Atlas article

  • Collecting activations
  • Simple activation atlas
  • Class activation atlas
  • Activation atlas patches

Miscellaneous Notebooks



Recommended Reading

Related Talks

Community

We're in #proj-lucid on the Distill slack (join link).

We'd love to see more people doing research in this space!


Additional Information

License and Disclaimer

You may use this software under the Apache 2.0 License. See LICENSE.

This project is research code. It is not an official Google product.

Special consideration for TensorFlow dependency

Lucid requires tensorflow, but does not explicitly depend on it in setup.py. Due to the way tensorflow is packaged and some deficiencies in how pip handles dependencies, specifying either the GPU or the non-GPU version of tensorflow will conflict with the version of tensorflow you may already have installed.

If you don't want to add your own dependency on tensorflow, you can specify which tensorflow version you want lucid to install by selecting from extras_require like so: lucid[tf] or lucid[tf_gpu].

In actual practice, we recommend you use your already installed version of tensorflow.

lucid's People

Contributors

1wheel, abhinavsp0730, arvind, badryoubiidrissi, bmiselis, colah, csvoss, dependabot[bot], dguliani, gabgoh, holt59, jacobhilton, kmader, ludwigschubert, marlonjan, merajat, michaelpetrov, mihaimaruseac, ncammarata, progamergov, rcshubhadeep, samuelmarks, shancarter, stefsietz, taneta, teonbrooks, ttumiel, tylersuard, zanarmstrong, znah


lucid's Issues

Rethinking Abstractions: T, endpoints, "deferred tensors", ...

In this issue, we'll discuss some weaknesses of Lucid's present abstractions. We also present a couple of ideas for possible alternatives, but don't presently have a strong view on the right path forward.

(A lot of these thoughts were developed in conversation with @ludwigschubert.)


Introduction

A lot of the weird stuff about Lucid comes from us having different needs than most TF users. A normal TF workflow looks something like "define a graph, then train it for a while". Our needs are often very different: create one graph for 30 seconds, then throw it away. Create another similar graph, then throw it away too. Moreover, these graphs often have a composable structure, where we want to be able to talk about parts of the graph independent of a particular instantiation and use them over and over again.

At a very high level, Lucid's answer is to use closures. For example, when I declare an objective:

obj = objectives.channel("mixed4a", 37)

I'm approximately creating the closure:

obj = lambda T: tf.reduce_sum(T("mixed4a")[..., 37])

Where T is an accessor, kind of like $ in jQuery: it allows you to conveniently access lots of things you might need, without writing a lot of code.

(This isn't quite true: the closure is actually further wrapped in a convenience Objective object, which allows objectives to be added and multiplied without explicitly escaping the closure. More on this in a minute.)
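For concreteness, here is a minimal sketch of that wrapping (illustrative only, not Lucid's actual implementation): the raw closure lives inside an Objective object whose + and * compose the underlying closures, so users never have to escape the closure themselves.

import tensorflow as tf

class Objective(object):
  def __init__(self, objective_func):
    self.objective_func = objective_func   # closure: T -> scalar tf.Tensor

  def __call__(self, T):
    return self.objective_func(T)

  def __add__(self, other):
    return Objective(lambda T: self(T) + other(T))

  def __rmul__(self, scalar):
    return Objective(lambda T: scalar * self(T))

def channel(layer, n_channel):
  # mirrors the objectives.channel("mixed4a", 37) example above
  return Objective(lambda T: tf.reduce_sum(T(layer)[..., n_channel]))

obj = channel("mixed4a", 37) + 0.5 * channel("mixed4a", 38)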

At a very high level, I believe that closures like this are the right abstraction for us. However, I also have a few concerns:

  • The role of T is a bit confused and may make things unnecessarily centralized.
  • If we want to wrap things up as objects, it may make sense to have a more general DeferredTensor class for these kinds of closures.
  • Our API is a bit inconvenient for people not buying into the full optvis framework.
    • For example, how do I just get the activations of a layer for a given input?

The role of T

Generally, our closures take an argument T, a special function which allows them to access things like neural networks layers:

obj = lambda T:   ...   T("mixed4a")  ...

The fact that T is getting passed in as an argument might make you believe it has a lot of special state. That's true to some extent -- it's kind of a grab bag of things -- but in its main usage it's equivalent to something like:

def T(name):
  return tf.get_default_graph().get_tensor_by_name("import/" + name)

Thank goodness we don't have to type that all the time, it's quite a mouthful! But it could just as easily be global.

A little bit more state comes from us wanting T to have special names for some nodes, to make them more user friendly. For the most part, this comes from the imported model. From that perspective, it might be more intuitive to do something like model["mixed4a"] instead of T("mixed4a").

There are some things, like the pre-transformation input or the global step that we probably want to get from somewhere else. That said, we could probably do something like render.global_step() if we wanted to get rid of T.

So, what are the pros/cons of the present T closure-argument setup?

Pros: It seems to me that the biggest one is actually preventing users from shooting themselves in the foot by trying to access graphs that haven't been imported or constructed.

Cons: Centralized and annoying to extend; passing around unnecessary variables; alternative setups might make error messages / debugging better.


Closures / Wrapped closures

Before we talk about APIs for dealing with our closures, I'd like to clarify their role a bit:

  • We create closures to defer TF operations so they can be run in association with a graph that doesn't exist yet (or possibly re-run with multiple graphs).
  • This is mostly a separate issue from the T issue.

(Observation: These deferring closures form a monad. Most of the interesting API options are ways of reifying Functor/Monad operations in Python.)

Overview of Approaches

Broadly, there are three ways of handling the closures we create:

  1. Our API creates actual tensor objects, and we rely on our users to wrap API calls in a closure. We presently do this for parameterizations, such as:
param_f = lambda: param.image(...)

This is the most transparent option, but can be a bit tedious.

  2. Our API returns closures. This makes some simple use cases convenient, but can be a bit annoying. For example, when we did objectives this way, we needed to do stuff like this:
obj = lambda: channel(..., 3)() + channel(..., 4)()
  3. Our API returns closures wrapped into a special type of object, like Objective right now. This allows us to have the above just be:
obj = channel(..., 3) + channel(..., 4)

In its most general sense, this would suggest a kind of DeferredTensor object. As we'll discuss shortly, this might have a number of interesting benefits.

My sense is that we should either do 1 or 3, and that 2 is a kind of unhappy intermediate version. Ideally, it would be nice for all of lucid to be consistent in our choices here.

The "DeferredTensor" approach (option 3)

There are a number of interesting benefits that arise from option 3 (DeferredTensor):

  • A lot of error checking could be done at creation of the DeferredTensor object.

    • For example, if we switched from T to model, model[layer] could check if the appropriate layer exists in the model before generating a DeferredTensor object.
  • DeferredTensor could carry additional metadata.

    • It could track which models need to be imported, so that they could be automatically imported.
    • It could track the maximum batch an objective refers to, so that parameterizations could automatically scale.
  • DeferredTensor could provide operator overloading / automatic coercion / etc to make the closures more convenient to manipulate. This carries the risk of becoming less transparent / confusing / abstraction creep.

    • We could get automatic coercion to Tensors if we wanted by registering with tensorflow. But I worry this would open the door to lots of really annoying subtle bugs, where someone accidentally coerces to a TF Tensor in the wrong context and then people get surprised with graph mixing errors later on.

Resulting API Possibilities

If we went with the closures-are-user-responsibility route, but got rid of T:

# Get layers by indexing model instead of accessing T:
obj = lambda: L2(model["mixed4a", ..., 37])

If we went the DeferredTensor route (and also got rid of T):

# Deferred tensor convenience functions
obj = model["mixed4a", ..., 37].L2

# No model arg to render_vis -- it can be inferred
render_vis(obj)

# Evaluate layers in a standalone way:
model["mixed4a"].isolated_eval(...)

Consider having `param`s return closures by default

Currently param_f is the only argument to render that users manually need to wrap in a lambda.
I understand we need the lambda so we can create the input on the same graph as the model, but I think we should be able to return lambdas from our convenience constructors such as param.images.image.

Happy to handle the coding if @colah agrees this is a good idea, but not confident that it is.
(I haven't created many custom parameterizations so far.)

Put frequently used stuff into top-level namespace

Typical TF import:

import tensorflow as tf

Typical lucid import:

import lucid.misc.io.showing as show
from lucid.misc.io.loading import load
from lucid.optvis import objectives
from lucid.optvis import render
from lucid.optvis import style
from lucid.optvis.param.random import image_sample

Improve test coverage from 70%

~~61%~~ ~~64%~~ 70%

Remaining:

lucid/optvis/transform.py                        74     57    23%
lucid/optvis/objectives.py                      203    133    34%
lucid/optvis/param/lowres.py                     23     12    48%
lucid/misc/environment.py                        14      7    50%
lucid/misc/io/showing.py                         47     17    64%
lucid/optvis/render.py                           90     22    76%
lucid/optvis/param/spatial.py                    37      8    78%
lucid/misc/io/saving.py                          41      7    83%
lucid/misc/io/serialize_array.py                 47      8    83%
lucid/misc/io/reading.py                         58      9    84%
lucid/optvis/param/resize_bilinear_nd.py         37      6    84%
lucid/misc/channel_reducer.py                    26      3    88%
lucid/optvis/param/images.py                     17      2    88%
lucid/misc/io/writing.py                         29      3    90%
lucid/optvis/param/color.py                      21      2    90%
lucid/misc/gradient_override.py                  36      3    92%
lucid/modelzoo/vision_base.py                    30      1    97%
lucid/misc/io/loading.py                         46      1    98%

Loading other models

Hi, thanks for providing the powerful visualization. I just wonder whether we can also apply Lucid to other models such as VAEs or GANs? If not, is there any possible alternative method to achieve this goal? Thanks.

Python 3 compatibility

/srv/venv/lib/python3.6/site-packages/lucid/scratch/web/svelte.py in build_svelte(html_fname)
     33     print(subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT))
     34   except subprocess.CalledProcessError as exception:
---> 35     print("svelte build failed!\n" + exception.output)
     36   return js_fname
     37 

TypeError: must be str, not bytes
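A likely fix for this traceback (a sketch; the actual patch may differ): subprocess output is bytes under Python 3, so decode it before concatenating with a str.

import subprocess

def run_and_report(cmd):
  try:
    print(subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT))
  except subprocess.CalledProcessError as exception:
    # exception.output is bytes under Python 3; decode before string concatenation
    print("svelte build failed!\n" + exception.output.decode("utf-8"))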

When using notebooks/building-blocks/SemanticDictionary.ipynb, which manually installs 0.0.5 (not the latest commit, where that has been fixed):

File "/srv/venv/lib/python3.6/site-packages/lucid/scratch/web/svelte.py", line 31
    print cmd
            ^
SyntaxError: Missing parentheses in call to 'print'. Did you mean print(cmd)?

int32 vs int64 issue

When np.int defaults to int32, 1e10 is out of bounds. Error message below, PR on the way.

~\MYPATH\lucid\misc\gradient_override.py in register_to_random_name(grad_f)
     68     String that gradient function was registered to.
     69   """
---> 70   grad_f_name = grad_f.__name__ + "_" + hex(np.random.randint(0, 1e10))[2:]
     71   tf.RegisterGradient(grad_f_name)(grad_f)
     72   return grad_f_name

mtrand.pyx in mtrand.RandomState.randint()

ValueError: high is out of bounds for int32
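Two possible fixes, sketched (the actual PR may do something different): keep the bound within int32 range, or make the dtype explicit so 1e10 stays valid.

import numpy as np

def random_suffix():
  # option 1: cap the bound at what int32 can hold
  return hex(np.random.randint(0, 2**31 - 1))[2:]
  # option 2: request an int64 explicitly
  # return hex(np.random.randint(0, int(1e10), dtype=np.int64))[2:]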

Specific usage

After each iteration of model training, can I compare the iteration results in the form of images using Lucid, and if so, how?

After some layers, the image output by render.render_vis becomes grey

When I try to use Lucid on my own network, the output of a lot of filters is just grey (there are some pixels that vary, but the returned image looks grey to a human). The deeper the layer, the more filters are grey (in the highest layers everything returned is completely grey). Is there something I am doing wrong?

LookupError: No gradient defined for operation 'import/dense_1/BiasAdd' (op type: Dequantize)


Here 'import/BiasAdd' does not have a gradient, but 'import/BiasAdd/(BiasAdd)' might. During backprop, Lucid takes 'import/BiasAdd' and returns the 'No gradient' error above. I have built a Keras model (.h5) and then converted it to .pb. Now I am trying to get the relevance for each channel of my CNN, but the gradient is not backpropagating because of this error. Please help me find the relevance.

Rethink render_vis outputs

Initially noted because render_vis is hard to test, I'd like to rethink some of its output.
At the moment we have no way of accessing the numerical values of the objectives or print_objectives.

I could see the results of render_vis containing all of this metadata.

This could fit into a configurable stopping criterion, with defaults based on thresholds, but also offering running to convergence (in some default metric) or running with an arbitrary stopping-criterion lambda function. This function could have access to the objective values/metadata described above and, for example, allow optimizing until a certain class probability becomes bigger than another class's probability, etc.
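A rough sketch of what such a user-supplied stopping criterion could look like, assuming render_vis passed it the step number and the objective values collected so far (a proposal, not the current API):

def should_stop(step, objective_values):
  # hard cap on steps, or stop once the objective has effectively converged
  if step >= 2048:
    return True
  recent = objective_values[-10:]
  return len(recent) == 10 and max(recent) - min(recent) < 1e-4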

Consider eliminating explicit `.load_graphdef()` call?

Hey @colah @znah ;
should we consider automatically calling load_graphdef when instantiating a modelzoo class?
To me this boils down to:

What can you currently do with an instantiated modelzoo.Model that you couldn't do with just the class?

It feels like this could simplify the current API, but it may also hide the fact that a graph definition may need to be downloaded.
Looking forward to your opinions!
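One way to do this while keeping instantiation cheap would be lazy loading, sketched below (illustrative only, not the current API): the graph definition is fetched on first use, so callers no longer need an explicit load_graphdef() call, at the cost of a possibly surprising download later.

class Model(object):
  def __init__(self):
    self._graph_def = None

  @property
  def graph_def(self):
    if self._graph_def is None:
      self._graph_def = self._load_graphdef()   # may trigger a download
    return self._graph_def

  def _load_graphdef(self):
    raise NotImplementedError("subclasses fetch and parse the frozen graph")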

No meaningful Visualizations in InceptionV3

Hey,

I wanted to visualize a custom InceptionV3 that I trained, but there were no interpretable visualizations.
Therefore, I visualized the Keras InceptionV3 trained on ImageNet. But with this model, too, I could not find any good visualizations. The closest one is mixed6/concat. To reproduce my problem, I created a Colab notebook:

https://colab.research.google.com/drive/1oIEeHZWxyU3vsqFpaQG1rs4YjHyYzfa1

If anyone knows what I am doing wrong, or what the problem is, that would be great!

Spatial/per-pixel channel objective?

Would it theoretically be possible to create an objective that uses either a control map or a callback function that changes the channel objective on a coordinate-based criterion? A simple example would be a vertical gradient between two channels.

I guess this would be computationally expensive, but maybe there are also reasons why this is not possible at all?
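Something along these lines seems doable with a custom objective closure. A rough sketch, assuming Lucid's T accessor and TF 1.x (untested; the vertical weighting is just one example of a control map):

import tensorflow as tf

def vertical_blend_channel(layer, ch_a, ch_b):
  def inner(T):
    acts = T(layer)                              # [batch, height, width, channels]
    h = tf.shape(acts)[1]
    w = tf.linspace(0.0, 1.0, h)[None, :, None]  # 0 at the top row, 1 at the bottom
    return tf.reduce_mean((1.0 - w) * acts[..., ch_a] + w * acts[..., ch_b])
  return inner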

Dead Link to Image

In https://github.com/tensorflow/lucid/blob/master/notebooks/building-blocks/SemanticDictionary.ipynb there is a dead link in the section "Spritemaps" (https://storage.googleapis.com/lucid-static/building-blocks/sprite_mixed4d_channel.jpeg).

This is what I get instead of the actual image:

<Error>
<Code>AccessDenied</Code>
<Message>Access denied.</Message>
<Details>
Anonymous caller does not have storage.objects.get access to lucid-static/building-blocks/sprite_mixed4d_channel.jpeg.
</Details>
</Error>

I would appreciate a working link. No biggy though.

Add CI

Should build PRs, run tests, publish tagged commits to PyPI.

Blocked by: #1 #2 #3 #4

show() regressions

In the module lucid.misc.io.showing:

  • No support for w to specify output width. showing.image() and showing.images() take w arguments but they don't do anything. showing.show() does not take a w argument.

    To fix this, _image_url() should probably take a w argument that causes appropriate zooming if set.

  • showing.show() takes a domain argument, but on the images() path it has no effect.

    This is simply because images() doesn't pass domain through to _image_url():

    url = _image_url(array)

Running on gpu

When I use Lucid on my own PC (in a notebook), it doesn't use a GPU. Is there some command I should use with render_vis so it knows to use a GPU?
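For reference: TF 1.x picks up a GPU automatically when the GPU build is installed (e.g. pip install tensorflow-gpu, or lucid[tf_gpu] as noted above); render_vis itself needs no extra flag. A quick sanity check (sketch):

import tensorflow as tf

print(tf.test.is_gpu_available())   # True if TF can see a usable GPU
print(tf.test.gpu_device_name())    # e.g. "/device:GPU:0", or "" if none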

Notebook feedback: Users would like architecture diagram

User feedback on the lucid tutorial notebook:

It's a great tutorial. It would be great if you could add a line to view the pre-trained model architecture so that we have an idea of the names of the different layers/filters. I tried multiple ways, but with the Lucid model zoo I was not able to view the complete architecture.

I tried checking model.summary() etc.
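A possible workaround (a sketch, using the modelzoo model from the tutorial): list the node names in the frozen graph to discover layer names.

import lucid.modelzoo.vision_models as models

model = models.InceptionV1()
model.load_graphdef()

print([node.name for node in model.graph_def.node])
# some models also ship a curated list of layer names alongside the graph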

Sprite enhancements

Things I'd like; maybe together with the asset loader:

pre-loading hints

I'd like to be able to tell the asset loader to go ahead and request my asset. For example on hovering over an example chooser.

debouncing

We added the grey loading screen to invalidate the current sprite so we don't show outdated images while the new ones are loading. This works great, but it would be even better if it only set the grey loading background after a short delay, such as 100 ms, to prevent flashing.

Kit fox vs. tench

Please close if there's a banal explanation for this, but I noticed that the InceptionV1 labels do not match the labels according to the tensorflow/models ImageNet data preprocessing.

In lucid,

>>> model = models.InceptionV1()
>>> model.load_graphdef()
>>> model.labels[:10]
['dummy',
 'kit fox',
 'English setter',
 'Siberian husky',
 'Australian terrier',
 'English springer',
 'grey whale',
 'lesser panda',
 'Egyptian cat',
 'ibex']

In the download of ImageNet, I see

% head -n 10 <...>/imagenet-tfrecords/labels.txt 
0:background
1:tench, Tinca tinca
2:goldfish, Carassius auratus
3:great white shark, white shark, man-eater, man-eating shark, Carcharodon carcharias
4:tiger shark, Galeocerdo cuvieri
5:hammerhead, hammerhead shark
6:electric ray, crampfish, numbfish, torpedo
7:stingray
8:cock
9:hen

Is one of these the standard?

Add code linting reporter?

I am considering adding a code quality reporter once we're a public repo.

The main objection would be that it's extra work to configure, and we may not always agree with default stylistic choices. For example, most linters will flag our usage of the T function as (technically correct) unidiomatic.

Leaving this here for discussion, @colah @znah. :-)

Importing model and visualizing it with Lucid

I'm trying to open an autoencoder model I've trained myself in Lucid, and I'm using the notebook Importing a graph into modelzoo as a reference.

I'm mostly unsure about how to use the provided class:

  model_path = 'nasnet_mobile_graphdef_frozen.pb.modelzoo'
  image_shape = [224, 224, 3]
  image_value_range = (0, 1)
  input_name = 'input' 

What should I define as image_shape and image_value_range? Which images do these refer to? The output of a certain convolutional layer?

Also, what is input_name for?
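For what it's worth, my reading of that notebook (take this as an assumption, not an authoritative answer): the fields describe the network's input, not an intermediate layer. image_shape is the [height, width, channels] the input placeholder expects, image_value_range is the pixel range the model was trained with (e.g. (0, 1), (-1, 1), or (0, 255)), and input_name is the name of the input node in the graph definition. A sketch of the class, reusing the values quoted above:

import lucid.modelzoo.vision_base as vision_base

class FrozenAutoencoder(vision_base.Model):
  model_path = 'nasnet_mobile_graphdef_frozen.pb.modelzoo'
  image_shape = [224, 224, 3]     # input height, width, channels
  image_value_range = (0, 1)      # pixel range used during training
  input_name = 'input'            # input placeholder node in the graph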

Getting multiple images per step in optvis.render?

Hi, first of all thank you for the great work. :)

I have a question: what does it mean if I get multiple images per threshold from render_vis()? The numpy array vis in this line has a shape of (1, 128, 128, 3) in the tutorial. With a custom model, however, I got the shape (64, 100, 221, 6). It corresponds to a height of 100, a width of 221, and 6 channels; I get that. But why am I getting 64 images?

Thank you.

Add test coverage reporter

Once we're public, I'd like to take on measuring code coverage (probably abysmal atm) and integrate something like Coveralls (free for open-source software).

Let's discuss whether you want this, too, here! :-)

A bug in ActivationGrid.ipynb, and my solution

When I run this demo, I can't get the expected picture (screenshot omitted), and when I open the demo in Jupyter, another error shows up (screenshot omitted).
So, to show the picture in a terminal, we can
import matplotlib.pyplot as plt
and then, as shown in the screenshot (omitted), add those three lines above the green line.
With that change, we obtain the demo output (screenshot omitted).
I'd like to open a pull request to fix this bug which is not really a bug.. LOL
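Since the screenshots are not preserved here, a minimal matplotlib display along those lines (the exact three lines are an assumption) would be:

import matplotlib.pyplot as plt

def show_array(img):
  # display an image array outside the notebook, e.g. from a terminal run
  plt.imshow(img)
  plt.axis("off")
  plt.show()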

collapse_alpha needs known shape

Let's assert that the tensor shape coming into collapse_alpha is known at graph-creation time.

Currently you can get an exception if you ask for a collapse_alpha transform to be applied after a transform that stochastically changes t_image's shape (such as random_scale).
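A sketch of the proposed check (the helper name is illustrative):

def _assert_static_shape(t_image):
  # fail at graph-construction time rather than at run time
  if not t_image.get_shape().is_fully_defined():
    raise ValueError(
        "collapse_alpha requires a statically known input shape; apply it "
        "before transforms that change the shape stochastically (e.g. random_scale).")
  return t_image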

Feature request: blur transform

Not sure if this really makes sense, but a blur transform could be an interesting addition to lucid.optvis.transform - maybe it would allow influencing the behaviour of the high-frequency patterns and/or the general scale of features.
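A rough sketch of what such a transform could look like, following the transform-returns-inner pattern used elsewhere in lucid.optvis.transform (untested; a Gaussian kernel would be smoother than this box blur):

import tensorflow as tf

def blur(kernel_size=3):
  def inner(t_image):
    # t_image: [batch, height, width, channels]
    return tf.nn.avg_pool(t_image,
                          ksize=[1, kernel_size, kernel_size, 1],
                          strides=[1, 1, 1, 1],
                          padding="SAME")
  return inner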

Unify module name plurality?

transform vs objectives

I feel there are good arguments either way, but I think we should decide on using either plural or singular for these modules.

This could also be a good test case (because non-essential) for a deprecation strategy. I can think of aliasing the module in an __init__ file and issuing a DeprecationWarning

Enhanced interactive outputs/logging

  • Logging needs may change per function call; how can users specify that? (module-wide log levels may not be precise enough)
  • "log" images—same or different system? custom log handler? etc.
  • image output may be surprising, why not display stats about image on hover etc

show images during render_vis() process

We seem to have disabled showing images during the render_vis process. I would favor reinstating this, using @enjalot's fantastic misc.show.images().

It only uses general IPython stuff, so it should work in both Colab and other Jupyter/IPython frameworks. I think on the command line it will just do nothing, although we should test it.

Examples in optvis readme don't work

Maybe the documentation is outdated, but it looks like several of the examples are using methods that do not seem to be implemented:
https://github.com/tensorflow/lucid/tree/master/lucid/optvis

e.g.

obj = objectives.channel("mixed4a_pre_relu", 2)
param_f = lambda: tf.concat([
    param.rgb_sigmoid(param.naive([1, 128, 128, 3])),
    param.fancy_colors(param.naive([1, 128, 128, 8])/1.3),
    param.rgb_sigmoid(param.laplacian_pyramid([1, 128, 128, 3])/2.),
    param.fancy_colors(param.laplacian_pyramid([1, 128, 128, 8])/2./1.3),
], 0)
render_vis(model, obj, param_f)

feature-visualization

Sorry for bothering you,
I'm trying to do feature visualization for VGG16 in PyTorch, but my resulting pictures lose much of their color information; some results are as follows:
(example images omitted)

My loss function is the mean of the output; do I need any other constraints?

Support for tf.SavedModel

It looks like modelzoo could directly support the new SavedModel standard. We would still need the metadata entries in modelzoo, but no longer require manual freezing of Variables into Constants in the graph definition.

Random thoughts on modelzoo

  • I'd really like to have a convenient way to evaluate a model once without going through the whole rigamarole of setting up a graph.

    • Maybe model.isolated_eval(layer, input) (see the sketch after this list).
    • Use cases to consider -- ask Chris for example notebooks
    • Most common scenario: want to get activations for a single thing.
    • Second most common scenario: we want to get a gradient. Not sure what a super convenient framework for that would look like. Think about attribution workloads here.
  • We need to get model.labels back. Done!

  • It might make sense to attach expensive, precomputed assets to models. For example feature visualization spritemaps.

    • Maybe model.precomputed.vis_spritemaps["mixed4d"] gives me a url?
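A rough sketch of what model.isolated_eval could boil down to (hypothetical, not the current API; assumes a modelzoo-style model with graph_def, input_name, and image_shape attributes, and skips value-range preprocessing for brevity):

import tensorflow as tf

def isolated_eval(model, layer, img):
  with tf.Graph().as_default(), tf.Session() as sess:
    t_input = tf.placeholder(tf.float32, [None] + list(model.image_shape))
    tf.import_graph_def(model.graph_def, {model.input_name: t_input}, name="import")
    t_layer = sess.graph.get_tensor_by_name("import/%s:0" % layer)
    return sess.run(t_layer, {t_input: img[None]})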
