jaumeamoresds / nbmodular
Convert notebooks to modular code

Home Page: https://jaumeamoresds.github.io/nbmodular/

License: MIT License

data-science ipython-magic modularization nbdev notebooks software-engineering


nbmodular

Convert data science notebooks with poor modularity to fully modular notebooks that are automatically exported as Python modules.

Motivation

In data science, development tends to be experimental and notebook-based, with little regard for software-engineering practices such as modularity. It can be challenging to start working on someone else's notebooks when the code is not organized into separate functions and a great deal of it is duplicated across notebooks. This makes it difficult to understand the logic as semantically separate units, to see the commonalities and differences between the notebooks, and to extend, generalize, and configure the current solution.

Objectives

nbmodular is a library conceived with the objective of helping convert the cells of a notebook into separate functions with clear dependencies in terms of inputs and outputs. This is done through a combination of tools that semi-automatically understand the data flow in the code, based on mild assumptions about its structure. It also helps test the current logic and compare it against the modularized solution, to make sure that the refactored code is equivalent to the original.

Features

  • Convert cells to functions.
  • The logic of a single function can be written across multiple cells.
  • Functions can be either regular functions or unit test functions.
  • Functions and tests are exported to separate python modules.
  • TODO: use nbdev to sync the exported python module with the notebook code, so that changes to the module are reflected back in the notebook.
  • Processed cells can continue to operate as cells or be only used as functions.
  • A pipeline function is automatically created and updated. This pipeline provides the data-flow from the first to the last function call in the notebook.
  • Functions act as nodes in a dependency graph. These nodes can optionally hold the values of local variables for inspection outside of the function. This is similar to having a single global scope, which is the original situation. Since this is memory-consuming, storing local variables is optional.
  • Local variables are persisted in disk, so that we may decide to reuse previous results without running the whole notebook.
  • TODO: Once we are able to construct a graph, we may be able to draw it or show it in text, and pass it to DAG processors that can run functions sequentially or in parallel.
  • TODO: if we have the dependency graph and persisted inputs / outputs, we may decide to only run those cells that are predecessors of the current one, i.e., the ones that provide the inputs needed by the current cell.
  • TODO: if we associate a hash code to input data, we may only run the cells when the input data changes. Similarly, if we associate a hash code with AST-converted function code, we may only run those cells whose code has been updated.
  • TODO: the output of a test cell can be used for assertions, where we require that the current output is the same as the original one.
  • TODO: Compare the result of the pipeline with the result of running the original notebook.
  • TODO: Currently, AST processing is used for assessing whether variables are modified in the cell or are just read. This only gives an estimate. We may want to compare the values of existing variables before and after running the code in the cell. We may also use a type checker such as mypy to assess whether a variable is immutable in the cell (e.g., mark the variable as Final and see if mypy complains).
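The hashing idea in the TODO items above can be sketched in a few lines (an illustration of the idea, not nbmodular's implementation): hashing the dump of the AST rather than the raw source makes the hash insensitive to formatting and comments, so a cell would only need to be re-run when its logic actually changes.

```python
import ast
import hashlib

def code_hash(source: str) -> str:
    """Hash the AST dump of a cell's code: formatting and comments
    do not affect the hash, while actual code changes do."""
    tree = ast.parse(source)
    return hashlib.sha256(ast.dump(tree).encode()).hexdigest()

# Reformatting or commenting does not change the hash...
assert code_hash("a = 1+2") == code_hash("a = 1 + 2  # same logic")
# ...but changing the logic does:
assert code_hash("a = 1+2") != code_hash("a = 1+3")
```

The same scheme could be applied to input data by hashing a serialized form of it.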

Install

pip install nbmodular

Usage

Load ipython extension

This allows us to use the following magic commands, among others:

  • function <name_of_function_to_define>
  • print <name_of_previous_function>
  • function_info <name_of_previous_function>
  • print_pipeline

Let’s go one by one

function

Using the magic command function allows us to:

  • Run the code in the cell normally, and at the same time detect its input and output dependencies and define a function with this input and output:
%%function get_initial_values
a = 2
b = 3
c = a+b
print (a+b)
5

The code in the previous cell runs as it normally would, and at the same time defines a function named get_initial_values, which we can show with the magic command print:

def get_initial_values(test=False):
    a = 2
    b = 3
    c = a+b
    print (a+b)

This function is defined in the notebook space, so we can invoke it:

get_initial_values()
5

The inputs and outputs of the function change dynamically every time we add a new function cell. For example, if we add a new function get_d:

%%function get_d
d = 10
def get_d():
    d = 10

And then a function add_all that depends on the previous two functions:

%%function add_all
a = a + d
b = b + d
c = c + d
f = %function_info add_all
print(f.code)
def add_all(d, b, c, a):
    a = a + d
    b = b + d
    c = c + d
from sklearn.utils import Bunch
from pathlib import Path
import joblib
import pandas as pd
import numpy as np

def test_index_pipeline (test=True, prev_result=None, result_file_name="index_pipeline"):
    result = index_pipeline (test=test, load=True, save=True, result_file_name=result_file_name)
    if prev_result is None:
        prev_result = index_pipeline (test=test, load=True, save=True, result_file_name=f"test_{result_file_name}")
    for k in prev_result:
        assert k in result
        if type(prev_result[k]) is pd.DataFrame:    
            pd.testing.assert_frame_equal (result[k], prev_result[k])
        elif type(prev_result[k]) is np.ndarray:
            np.testing.assert_array_equal (result[k], prev_result[k])
        else:
            assert result[k]==prev_result[k]
def index_pipeline (test=False, load=True, save=True, result_file_name="index_pipeline"):

    # load result
    result_file_name += '.pk'
    path_variables = Path ("index") / result_file_name
    if load and path_variables.exists():
        result = joblib.load (path_variables)
        return result

    b, c, a = get_initial_values (test=test)
    d = get_d ()
    add_all (d, b, c, a)

    # save result
    result = Bunch (b=b,c=c,a=a,d=d)
    if save:    
        path_variables.parent.mkdir (parents=True, exist_ok=True)
        joblib.dump (result, path_variables)
    return result
def add_all(d, b, c, a):
    a = a + d
    b = b + d
    c = c + d

We can see that the outputs from get_initial_values and get_d change as needed. We can look at all the functions defined so far by using print all:

def get_initial_values(test=False):
    a = 2
    b = 3
    c = a+b
    print (a+b)
    return b,c,a

def get_d():
    d = 10
    return d

def add_all(d, b, c, a):
    a = a + d
    b = b + d
    c = c + d

Similarly, the outputs from the last function add_all change after we add other functions that depend on it:

print (a, b, c, d)
12 13 15 10

print

We can see each of the defined functions with print my_function, and list all of them with print all:

def get_initial_values(test=False):
    a = 2
    b = 3
    c = a+b
    print (a+b)
    return b,c,a

def get_d():
    d = 10
    return d

def add_all(d, b, c, a):
    a = a + d
    b = b + d
    c = c + d
    return b,c,a

def print_all(b, d, a, c):
    print (a, b, c, d)

print_pipeline

As we add functions to the notebook, a pipeline function is defined. We can print this pipeline with the magic print_pipeline:

def index_pipeline (test=False, load=True, save=True, result_file_name="index_pipeline"):

    # load result
    result_file_name += '.pk'
    path_variables = Path ("index") / result_file_name
    if load and path_variables.exists():
        result = joblib.load (path_variables)
        return result

    b, c, a = get_initial_values (test=test)
    d = get_d ()
    b, c, a = add_all (d, b, c, a)
    print_all (b, d, a, c)

    # save result
    result = Bunch (b=b,d=d,c=c,a=a)
    if save:    
        path_variables.parent.mkdir (parents=True, exist_ok=True)
        joblib.dump (result, path_variables)
    return result

This shows the data flow in terms of inputs and outputs.

We can also run it:

self = %cell_processor
self.function_list
[FunctionProcessor with name get_initial_values, and fields: dict_keys(['original_code', 'name', 'call', 'tab_size', 'arguments', 'return_values', 'unknown_input', 'unknown_output', 'test', 'data', 'defined', 'permanent', 'signature', 'norun', 'created_variables', 'loaded_names', 'previous_variables', 'argument_variables', 'read_only_variables', 'posterior_variables', 'all_variables', 'idx', 'previous_values', 'current_values', 'all_values', 'code'])
     Arguments: []
     Output: ['b', 'c', 'a']
     Locals: dict_keys(['a', 'b', 'c']),
 FunctionProcessor with name get_d, and fields: dict_keys(['original_code', 'name', 'call', 'tab_size', 'arguments', 'return_values', 'unknown_input', 'unknown_output', 'test', 'data', 'defined', 'permanent', 'signature', 'norun', 'created_variables', 'loaded_names', 'previous_variables', 'argument_variables', 'read_only_variables', 'posterior_variables', 'all_variables', 'idx', 'previous_values', 'current_values', 'all_values', 'code'])
     Arguments: []
     Output: ['d']
     Locals: dict_keys(['d']),
 FunctionProcessor with name add_all, and fields: dict_keys(['original_code', 'name', 'call', 'tab_size', 'arguments', 'return_values', 'unknown_input', 'unknown_output', 'test', 'data', 'defined', 'permanent', 'signature', 'norun', 'created_variables', 'loaded_names', 'previous_variables', 'argument_variables', 'read_only_variables', 'posterior_variables', 'all_variables', 'idx', 'previous_values', 'current_values', 'all_values', 'code'])
     Arguments: ['d', 'b', 'c', 'a']
     Output: ['b', 'c', 'a']
     Locals: dict_keys(['a', 'b', 'c']),
 FunctionProcessor with name print_all, and fields: dict_keys(['original_code', 'name', 'call', 'tab_size', 'arguments', 'return_values', 'unknown_input', 'unknown_output', 'test', 'data', 'defined', 'permanent', 'signature', 'norun', 'created_variables', 'loaded_names', 'previous_variables', 'argument_variables', 'read_only_variables', 'posterior_variables', 'all_variables', 'idx', 'previous_values', 'current_values', 'all_values', 'code'])
     Arguments: ['b', 'd', 'a', 'c']
     Output: []
     Locals: dict_keys([])]
def get_initial_values(test=False):
    a = 2
    b = 3
    c = a+b
    print (a+b)
    return b,c,a

def get_d():
    d = 10
    return d

def add_all(d, b, c, a):
    a = a + d
    b = b + d
    c = c + d
    return b,c,a

def print_all(b, d, a, c):
    print (a, b, c, d)
index_pipeline()
{'d': 10, 'b': 13, 'a': 12, 'c': 15}

function_info

We can get access to many of the details of each of the defined functions by calling function_info on a given function name:

get_initial_values_info = %function_info get_initial_values

This allows us to see:

  • The name and value (at the time of running) of the local variables, arguments and results from the function:
get_initial_values_info.arguments
[]
get_initial_values_info.current_values
{'a': 2, 'b': 3, 'c': 5}
get_initial_values_info.return_values
['b', 'c', 'a']

We can also inspect the original code written in the cell…

print (get_initial_values_info.original_code)
a = 2
b = 3
c = a+b
print (a+b)

the code of the defined function:

print (get_initial_values_info.code)
def get_initial_values(test=False):
    a = 2
    b = 3
    c = a+b
    print (a+b)
    return b,c,a

…and the AST trees:

print (get_initial_values_info.get_ast (code=get_initial_values_info.original_code))
Module(
  body=[
    Assign(
      targets=[
        Name(id='a', ctx=Store())],
      value=Constant(value=2)),
    Assign(
      targets=[
        Name(id='b', ctx=Store())],
      value=Constant(value=3)),
    Assign(
      targets=[
        Name(id='c', ctx=Store())],
      value=BinOp(
        left=Name(id='a', ctx=Load()),
        op=Add(),
        right=Name(id='b', ctx=Load()))),
    Expr(
      value=Call(
        func=Name(id='print', ctx=Load()),
        args=[
          BinOp(
            left=Name(id='a', ctx=Load()),
            op=Add(),
            right=Name(id='b', ctx=Load()))],
        keywords=[]))],
  type_ignores=[])
None
print (get_initial_values_info.get_ast (code=get_initial_values_info.code))
Module(
  body=[
    FunctionDef(
      name='get_initial_values',
      args=arguments(
        posonlyargs=[],
        args=[
          arg(arg='test')],
        kwonlyargs=[],
        kw_defaults=[],
        defaults=[
          Constant(value=False)]),
      body=[
        Assign(
          targets=[
            Name(id='a', ctx=Store())],
          value=Constant(value=2)),
        Assign(
          targets=[
            Name(id='b', ctx=Store())],
          value=Constant(value=3)),
        Assign(
          targets=[
            Name(id='c', ctx=Store())],
          value=BinOp(
            left=Name(id='a', ctx=Load()),
            op=Add(),
            right=Name(id='b', ctx=Load()))),
        Expr(
          value=Call(
            func=Name(id='print', ctx=Load()),
            args=[
              BinOp(
                left=Name(id='a', ctx=Load()),
                op=Add(),
                right=Name(id='b', ctx=Load()))],
            keywords=[])),
        Return(
          value=Tuple(
            elts=[
              Name(id='b', ctx=Load()),
              Name(id='c', ctx=Load()),
              Name(id='a', ctx=Load())],
            ctx=Load()))],
      decorator_list=[])],
  type_ignores=[])
None

Now, we can define another function in a cell that uses variables from the previous function.

cell_processor

This magic gives us access to the CellProcessor class that manages the logic behind the above magic commands, which can come in handy:

cell_processor = %cell_processor

Merging function cells

In order to explore intermediate results, it is convenient to split the code of a function across different cells. This can be done by passing the flag --merge True:

%%function analyze
x = [1, 2, 3]
y = [100, 200, 300]
z = [u+v for u,v in zip(x,y)]
z
[101, 202, 303]
def analyze():
    x = [1, 2, 3]
    y = [100, 200, 300]
    z = [u+v for u,v in zip(x,y)]
%%function analyze --merge True
product = [u*v for u, v in zip(x,y)]
def analyze():
    x = [1, 2, 3]
    y = [100, 200, 300]
    z = [u+v for u,v in zip(x,y)]
    product = [u*v for u, v in zip(x,y)]

Test functions

By passing the flag --test we can indicate that the logic in the cell is dedicated to testing other functions in the notebook. Test functions are defined with the well-known pytest library in mind as the test engine.

This has the following consequences:

  • The analysis of dependencies is not associated with variables found in other cells.
  • Test functions do not appear in the overall pipeline.
  • The data variables used by the test function can be defined in separate test data cells, which in turn are converted to functions. These functions are called at the beginning of the test cell.

Let’s see an example:

a = 5
b = 3
c = 6
d = 7
add_all(d, a, b, c)
(12, 10, 13)
# test function add_all
assert add_all(d, a, b, c)==(12, 10, 13)
def test_add_all():
    b,c,a,d = test_input_add_all()
    # test function add_all
    assert add_all(d, a, b, c)==(12, 10, 13)
def test_input_add_all(test=False):
    a = 5
    b = 3
    c = 6
    d = 7
    return b,c,a,d

Test functions are written to a separate test module, whose name carries the prefix test_:

!ls ../tests
index.ipynb  test_example.py

Imports

In order to include libraries in our Python module, we can use the magic imports. These will be written at the beginning of the module:

import pandas as pd

Imports can be indicated separately for the test module by passing the flag --test:

import matplotlib.pyplot as plt

Defined functions

Functions can also be included already defined, with a signature and return values. The only caveat is that, if we want the function to be executed, the variables in the argument list need to be created outside of the function. Otherwise, we need to pass the flag --norun to avoid errors:

def myfunc (x, y, a=1, b=3):
    print ('hello', a, b)
    c = a+b
    return c

Although the internal code of the function is not executed, it is still parsed into an AST. This allows nbmodular to provide very tentative warnings about names not found in the argument list:

def other_func (x, y):
    print ('hello', a, b)
    c = a+b
    return c
Detected the following previous variables that are not in the argument list: ['b', 'a']

Let’s do the same but running the function:

a=1
b=3
def myfunc (x, y, a=1, b=3):
    print ('hello', a, b)
    c = a+b
    return c
hello 1 3
myfunc (10, 20)
hello 1 3

4
myfunc_info = %function_info myfunc
myfunc_info
FunctionProcessor with name myfunc, and fields: dict_keys(['original_code', 'name', 'call', 'tab_size', 'arguments', 'return_values', 'unknown_input', 'unknown_output', 'test', 'data', 'defined', 'permanent', 'signature', 'norun', 'created_variables', 'loaded_names', 'previous_variables', 'argument_variables', 'read_only_variables', 'posterior_variables', 'all_variables', 'idx', 'previous_values', 'current_values', 'all_values', 'code'])
    Arguments: ['x', 'y', 'a', 'b']
    Output: ['c']
    Locals: dict_keys(['c'])
myfunc_info.c
4

Storing local variables in memory

By default, when we run a function cell, its local variables are stored in a dictionary called current_values:

my_new_local = 3
my_other_new_local = 4

The stored variables can be accessed by calling the magic function_info:

my_new_function_info = %function_info my_new_function
my_new_function_info.current_values
{'my_new_local': 3, 'my_other_new_local': 4}

This default behaviour can be overridden by passing the flag --not-store:

my_second_variable = 100
my_second_other_variable = 200
my_second_new_function_info = %function_info my_second_new_function
my_second_new_function_info.current_values
{}

(Un)packing Bunch I/O

from sklearn.utils import Bunch
x = Bunch (a=1, b=2)
c = 3
a = 4
def bunch_processor(x, day):
    a = x["a"]
    b = x["b"]
    c = 3
    a = 4
    x["a"] = a
    x["c"] = c
    x["day"] = day
    return x
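Since Bunch is a dict subclass, the behaviour of the generated bunch_processor above can be illustrated with a plain dict (a hypothetical usage sketch; the day argument stands for whatever value was bound in the notebook):

```python
# Same body as the generated bunch_processor shown above: the Bunch fields
# are unpacked into locals, updated, and packed back before returning.
def bunch_processor(x, day):
    a = x["a"]
    b = x["b"]
    c = 3
    a = 4
    x["a"] = a
    x["c"] = c
    x["day"] = day
    return x

result = bunch_processor({"a": 1, "b": 2}, day="Monday")
print(result)  # {'a': 4, 'b': 2, 'c': 3, 'day': 'Monday'}
```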

Function’s info object holding local variables

df = pd.DataFrame (dict(Year=[1,2,3], Month=[1,2,3], Day=[1,2,3]))
fy = '2023'
def days (df, fy, x=1, /, y=3, *, n=4):
    df_group = df.groupby(['Year','Month']).agg({'Day': lambda x: len (x)})
    df_group = df.reset_index()
    print ('other args: fy', fy, 'x', x, 'y', y)
    return df_group
other args: fy 2023 x 1 y 3
Stored the following local variables in the days current_values dictionary: ['df_group']
Detected the following previous variables that are not in the argument list: ['x', 'df', 'fy']

An info object named <function_name>_info is created in memory, and can be used to access local variables:

days_info.df_group
index Year Month Day
0 0 1 1 1
1 1 2 2 2
2 2 3 3 3

There is more information in this object: previous variables, code, etc.

days_info.current_values
{'df_group':    index  Year  Month  Day
 0      0     1      1    1
 1      1     2      2    2
 2      2     3      3    3}
days_info
FunctionProcessor with name days, and fields: dict_keys(['original_code', 'name', 'call', 'tab_size', 'arguments', 'return_values', 'unknown_input', 'unknown_output', 'test', 'data', 'defined', 'permanent', 'signature', 'not_run', 'previous_values', 'current_values', 'returns_dict', 'returns_bunch', 'unpack_bunch', 'include_input', 'exclude_input', 'include_output', 'exclude_output', 'store_locals_in_disk', 'created_variables', 'loaded_names', 'previous_variables', 'argument_variables', 'read_only_variables', 'posterior_variables', 'all_variables', 'idx'])
    Arguments: ['df', 'fy', 'x', 'y']
    Output: ['df_group']
    Locals: dict_keys(['df_group'])

The function can also be called directly:

days (df*100, 100, x=4)
other args: fy 100 x 4 y 3
index Year Month Day
0 0 100 100 100
1 1 200 200 200
2 2 300 300 300

nbmodular's People

Contributors

jaumeamoresds

nbmodular's Issues

Allow different values for arguments that use defaults

  • When using keyword arguments, do not initialize variables that already exist in memory
  • Allow the values of arguments to be indicated when calling the cell magic:
%%function --input-values a=1 b=2
def add_and_print (a, b=10):
   c = a + b
   print (c)

In-place changing of the code in a previous function cell

Currently, this creates another code cell at the end of the list, keeping the previous ones. If --merge is not added, the previous ones are just marked as not valid. Let's see this with an example:

%%function 
def add1 (x):
	y = x + 1
	return y
	
%%function --merge
def add1 (x, x2):
	y2 = x2 + 1
	return y, y2
	
%%function
def add1 (x, x2):
	y = x + 1
	y2 = x2 + 1
	print (f'{x} + 1 = {y}')
	print (f'{x2} + 1 = {y2}')
	return y, y2

The result is a list with three code cells: the first two with valid=False, and the last one with valid=True.
Imagine the third cell replaced the first one, i.e., the code from the third definition is written into the first cell. We could add a flag --replace-cell that makes it do so, but we would also need an index indicating which cell to replace.

We could also add two magic lines:

  • %list_cells function_name, listing the code cells existing for a particular function_name, along with the function indexes, to know which one needs to be replaced.
  • %delete_function_cell function_name idx, deleting the function cell in position idx of the list of function cells self.code_cells[function_name].

indicate pipeline name

set_pipe (function_name, pipeline_name)

if pipeline_name is not None and pipeline_name not in self.pipelines:
   self.pipelines.append (pipeline_name)

for pipeline in self.pipelines:
   self.create_pipeline (pipeline)

def create_pipeline (pipeline=None):
    if pipeline is None:
        pipeline = self.file_name_without_extension
    function_list = self.get_function_list (pipeline=pipeline)

get_function_list (self, ..., pipeline=None):
   if pipeline is not None:
        function_list = (... and f.pipeline==pipeline)

Explain the advantage of using a logger

We just initialize it before the function, use it, and it will automatically be added to the function's arguments.
With the visitor we might not even need to initialize it beforehand, provided we don't run the function.
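A minimal sketch of what this could look like (hypothetical names; the point is that logger is created outside the function cell, so dependency analysis would classify it as an input and add it to the argument list):

```python
import logging

# Created in a previous notebook cell:
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("nbmodular")

# Inside the function cell, `logger` is simply used; since it was created
# earlier, it would be detected as an input and become an argument.
def preprocess(data, logger):
    logger.info("preprocessing %d rows", len(data))
    return [x * 2 for x in data]

preprocess([1, 2, 3], logger)
```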

debug function

Add a magic debug_function which doesn't run the function but instead debugs it. It could use the variables in memory or those passed as arguments:

%%debug_function --input-values a=1 b=2
def add_and_print(a, b):
    c = a + b
    print (c)

long term issues (Pool of issues)

  • Hierarchical objects with current values as attributes
  • Copy previous values and restore the variables to have the previous values after running the function
  • optional: warning message when a return statement contains a function call rather than a variable name
  • allow local variables to be excluded from / included in storage, to avoid memory issues
    • This could be done in several ways:
      • Delete the variable (del), with the disadvantage that we won't be able to inspect it later on.
      • Delete the variable only when a new cell magic is executed, so that we can still inspect the variables created in the last cell, and then move on to execute the next cell, at which point we remove previous variables that were memory-consuming.
      • More in the long-term future, we might also delete variables based on how much memory they consume, using some threshold parameter.
  • allow to store previous_values in _info object as follows:
    1. store the values in locals(), using the same code that is used now for storing current_values in info_object. This code is run before the code from the cell is run,
      so that the locals() are those from before the cell, i.e., the previous values
    2. use the same trick as the one used in keep_variables_in_memory: introduce a boolean flag created_previous_values in dict => although it should work here.
    3. if the flag does not exist, run second code that, instead of storing values in locals, stores them in disk => although this should not happen here.
  • Write ipython script where magic functions are written using https://stackoverflow.com/questions/10361206/how-to-run-ipython-magic-from-a-script
  • Using the AST, see if the first time a variable is stored, in the same statement it is also loaded. If that's the case, or if there was a load for the same variable prior to the current statement, the variable should be in the previous_variable list. Otherwise, it shouldn't. See if ast.walk preserves the ordering of statements when traversing the tree. I think so.
  • Have function_info object be attached to function object, <my_function>.info = current function info. Have the current values attached directly to the function object.

short term issues (pool of issues)

  • Exclude function from pipeline
  • Explain --data separately later: it just allows having test=False as an argument; see examples where this can be useful.
  • (?) Magic that updates pipeline function (evaluates it)
  • (?) Remove names that are calls - no need to evaluate those objects. See defined function myfunc (don't remember what this is about)

I think these are already covered:

  • Set logging level with magic line
  • Indicate name of input/output to be used in pipeline, in case it is not the same as names in arguments/return_values
  • Indicate name of python file and path to it (relative to lib_path) as magic line.

Do linear scan for previous_variables

  • loaded names that have first been stored cannot be previous variables
  • this requires an ordered scan using a tree visitor. See ast_examples.ipynb in nbs/dev_tutorials folder
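The ordered scan can be sketched with an ast.NodeVisitor (illustrative only, not nbmodular's actual visitor). Note that ast.walk traverses breadth-first, so it does not preserve statement order; a visitor that descends in execution order does:

```python
import ast

class PreviousVariableScanner(ast.NodeVisitor):
    """Linear scan: a name loaded before its first store must come
    from outside the cell, i.e. it is a 'previous variable'."""

    def __init__(self):
        self.stored = set()
        self.previous = []

    def visit_Assign(self, node):
        # Visit the right-hand side first, matching execution order,
        # so that e.g. `x = x + 1` correctly marks `x` as previous.
        self.visit(node.value)
        for target in node.targets:
            self.visit(target)

    def visit_Name(self, node):
        if isinstance(node.ctx, ast.Load):
            if node.id not in self.stored and node.id not in self.previous:
                self.previous.append(node.id)
        else:
            self.stored.add(node.id)

scanner = PreviousVariableScanner()
scanner.visit(ast.parse("y = x + 1\nz = y + w"))
assert scanner.previous == ["x", "w"]  # y is created locally, x and w are not
```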

TO CHECK

  • keyword arguments

  • avoid calling previous code when merging! This can be very inefficient, and wrong if the code has side effects.

    • option 1) join the AST trees from different cells, but only run the last cell
    • option 2) use current_values: if previous_values is in current_values of previous cells, it cannot be a previous value
    • experiment with cases: x loaded in cell 1 (prev value), y created in cell 1, z loaded in cell 2 (new prev value), y loaded in cell 2 (not prev value, since it is in created_values of prev cell)
  • magic line %set allows to set any param.
