
blackarbsceo / adv_fin_ml_exercises


Experimental solutions to selected exercises from the book Advances in Financial Machine Learning by Marcos Lopez De Prado

License: MIT License

Makefile 0.14% Jupyter Notebook 97.69% Python 2.18%

adv_fin_ml_exercises's Introduction

Advances in Financial Machine Learning Exercises

Experimental solutions to selected exercises from the book Advances in Financial Machine Learning by Marcos Lopez De Prado

Make sure to run python setup.py install in your environment so that the src scripts, including bars.py and snippets.py, can be found by the Jupyter notebooks and any other scripts you develop.

Additional AFML Projects and Resources

Other GitHub projects and links inspired by the book have been shared by readers. I'd like to collect them here in the spirit of collaboration and idea sharing. If you have more to add, please let me know.

Github Projects

Article Links

Project Organization

├── LICENSE
├── Makefile           <- Makefile with commands like `make data` or `make train`
├── README.md          <- The top-level README for developers using this project.
├── data
│   ├── external       <- Data from third party sources.
│   ├── interim        <- Intermediate data that has been transformed.
│   ├── processed      <- The final, canonical data sets for modeling.
│   └── raw            <- The original, immutable data dump.
│
├── docs               <- A default Sphinx project; see sphinx-doc.org for details
│
├── models             <- Trained and serialized models, model predictions, or model summaries
│
├── notebooks          <- Jupyter notebooks. Naming convention is a number (for ordering),
│                         the creator's initials, and a short `-` delimited description, e.g.
│                         `1.0-jqp-initial-data-exploration`.
│
├── references         <- Data dictionaries, manuals, and all other explanatory materials.
│
├── reports            <- Generated analysis as HTML, PDF, LaTeX, etc.
│   └── figures        <- Generated graphics and figures to be used in reporting
│
├── requirements.txt   <- The requirements file for reproducing the analysis environment, e.g.
│                         generated with `pip freeze > requirements.txt`
│
├── setup.py           <- makes project pip installable (pip install -e .) so src can be imported
├── src                <- Source code for use in this project.
│   ├── __init__.py    <- Makes src a Python module
│   │
│   ├── data           <- Scripts to download or generate data
│   │   └── make_dataset.py
│   │
│   ├── features       <- Scripts to turn raw data into features for modeling
│   │   └── build_features.py
│   │
│   ├── models         <- Scripts to train models and then use trained models to make
│   │   │                 predictions
│   │   ├── predict_model.py
│   │   └── train_model.py
│   │
│   └── visualization  <- Scripts to create exploratory and results oriented visualizations
│       └── visualize.py
│
└── tox.ini            <- tox file with settings for running tox; see tox.testrun.org

Project based on the cookiecutter data science project template.

adv_fin_ml_exercises's People

Contributors

blackarbsceo, dependabot[bot], lionelyoung, mccarv


adv_fin_ml_exercises's Issues

Question: Chapter 16

Has anyone gotten to this chapter? I've been trying to change the code to support short positions, as in question 4. As far as I understand, since we are relying on the fact that ω = V⁻¹a / (a′V⁻¹a) is optimal for a diagonal covariance matrix (i.e., there is no correlation between assets and therefore no possible diversification), when we change the constraint a′ω = 1 to a′ω = 0 the solution collapses to ω_n = 0. Which other constraint could be added in order to force the problem to take a position? A minimum expected return? Or should the objective function be changed?
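For reference, a minimal numpy sketch of the closed-form portfolio the question refers to; the covariance matrix V and characteristic vector a below are hypothetical inputs, purely for illustration:

import numpy as np

V = np.diag([0.04, 0.09, 0.16])   # hypothetical diagonal covariance matrix, as in the exercise
a = np.ones(3)                    # hypothetical characteristic vector

Vinv_a = np.linalg.solve(V, a)    # V^-1 a
w = Vinv_a / (a @ Vinv_a)         # omega = V^-1 a / (a' V^-1 a)
print(w, a @ w)                   # the constraint a'w = 1 holds by construction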

FracDiff for -d values

While trying to apply the fractionally differentiated logic to a time series, I found that a negative value of d makes the weight series tend to infinity and the differences between consecutive weights very small. I believe the idea of applying a negative value of d to the already fractionally differentiated series is to recover the original values. This can be achieved by fixing the length of the weight list. I'm using the following setup:

import numpy as np

def getWeights_FFD(d):
    # fixed-length variant: always keep the first 50 weights instead of
    # dropping insignificant weights with a threshold
    w = [1.]
    for k in range(1, 50):
        w_ = -w[-1] / k * (d - k + 1)
        w.append(w_)
    w = np.array(w[::-1]).reshape(-1, 1)
    return w

Any comments on this approach?

getTEvents: why abs() in diff = np.log(gRaw).diff().dropna().abs()?

Hi, I am new to this community.

I find your getTEvents function in Labeling and MetaLabeling is different from the code provided by the author (code snippets [2.5.2.1]).

np.log(gRaw).diff().dropna().

I understand you would like to convert the prices into returns, and then compare the cumulative returns with the volatility threshold.

But why add .abs() at the end?
It turns all the negative returns into positive returns, and I cannot understand the logic behind that.

Can you explain your rationale?

Thanks
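For comparison, here is a sketch of a symmetric CUSUM filter along the lines of the author's snippet 2.5.2.1: it keeps the sign of the returns and tracks positive and negative cumulative sums separately, so no .abs() is needed. This is an illustration only, not the notebook's code.

import numpy as np
import pandas as pd

def get_t_events_cusum(gRaw, h):
    # gRaw: price series with a DatetimeIndex; h: threshold on cumulative log-returns
    tEvents, sPos, sNeg = [], 0.0, 0.0
    diff = np.log(gRaw).diff().dropna()
    for i in diff.index:
        sPos = max(0.0, sPos + diff.loc[i])   # running positive cumulative sum
        sNeg = min(0.0, sNeg + diff.loc[i])   # running negative cumulative sum
        if sNeg < -h:
            sNeg = 0.0
            tEvents.append(i)
        elif sPos > h:
            sPos = 0.0
            tEvents.append(i)
    return pd.DatetimeIndex(tEvents)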

How useful are features (other than time and price itself) in LSTM deep learning for trading?

Recently a colleague had a short argument with me about whether it actually makes sense to use extra features (e.g. indicators, moving averages, etc.) in a trading prediction model with LSTM cells.

He is of the opinion that the price already contains all of this information (given a long enough sequence). I couldn't follow. Under what circumstances is this thesis right? Or rather, why? Isn't there something the net could miss in the price alone?

If the theory is true, doesn't stationarizing the price data nearly destroy this extra information?
If so, is there an alternative to the regular methods of stationarizing? (By regular I mean taking the difference in movement per bar.)

(Note: the argument is not about news as features, but about features similar to indicators, etc.)

Exercises

Hi guys, is anyone trying to do all the exercises?
I'm thinking about starting, using Python notebooks.

Features for meta-labeling

What are you guys using as features for the meta-labeling? So far I just have a list with financial indicators. Does anybody have a good idea what could add value?

Potential bug in PurgedKFold class

The following notebook:

Adv_Fin_ML_Exercises/notebooks/07. Cross Validation in Finance.ipynb

Where it says:

maxT1Idx=self.t1.index.searchsorted(self.t1[test_indices].max())

I believe it should be:

maxT1Idx=self.t1.index.searchsorted(self.t1.index[test_indices].max())

Note that the book has exactly the same line as the notebook; even so, I think it should be as I propose.

sys.path is being populated with all directories and files below src

Not a huge deal, but in Tick, Volume, Dollar Bars.ipynb the sys.path is being populated with every file and directory under /src instead of just the directories as I think is intended.

script_dirs = list(Path(script_dir).glob('./*/'))
for sdir in script_dirs: 
    sys.path.append(sdir.as_posix())

Suggest replacing with the snippet below for just the directories

import os
import sys

for dirpath, _, _ in os.walk('./src'):
    sys.path.append(dirpath)

Thanks for starting this project. I am behind you and the others in my studying and implementation but I hope to catch up and contribute.

Optimal way of scaling for an LSTM regressor?

Hey,

Currently I am just scaling the whole DataFrame (not just column-wise) and avoiding scaling the targets (I drop the column and add it back later). I'm using StandardScaler() and I am concerned that this is suboptimal...

What is a better way to do it?
Thanks for any answers.
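For reference, one common pattern is to split chronologically first, fit the scaler on the training window only, and keep the target out of the transform. A minimal sketch, assuming a DataFrame df with a 'target' column (both names are hypothetical):

import pandas as pd
from sklearn.preprocessing import StandardScaler

feature_cols = [c for c in df.columns if c != 'target']   # hypothetical target column name
split = int(len(df) * 0.7)                                 # chronological split, no shuffling
train, test = df.iloc[:split], df.iloc[split:]

scaler = StandardScaler()                                  # standardises each column independently
X_train = scaler.fit_transform(train[feature_cols])        # fit on the training window only
X_test = scaler.transform(test[feature_cols])              # reuse the training statistics on test
y_train, y_test = train['target'].values, test['target'].values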

Issue with function getEvents

events = getEvents(close, tEvents, ptsl, target, minRet, cpus, t1=t1) takes a seemingly infinite time to process on a Windows 10 machine. Need help!
My machine specs:
CPython 3.5.5
IPython 6.4.0

compiler : MSC v.1900 64 bit (AMD64)
system : Windows
release : 10
machine : AMD64
processor : Intel64 Family 6 Model 94 Stepping 3, GenuineIntel
CPU cores : 8
interpreter: 64bit

pandas 0.22.0
pandas_datareader 0.6.0
dask 0.18.1
numpy 1.14.5
sklearn 0.19.1
statsmodels 0.9.0
scipy 1.1.0
ffn (0, 3, 4)
matplotlib 2.2.2
seaborn 0.8.1

NameError: name 'data_dir' is not defined

I tried to run the 'Tick, Volume, Dollar Volume Bars' notebook, but got the following error:

  • NameError: name 'data_dir' is not defined
    should I define data_dir in the notebook?

Thanks.
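For reference, a minimal sketch of one way to define it, assuming the raw tick data lives under the project's data/raw folder (the exact path is a guess; adjust it to wherever your data actually sits):

from pathlib import Path

# hypothetical location relative to the notebooks folder; adjust as needed
data_dir = Path('../data/raw')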

getWeights_FFD is incomplete!

As defined in the book, the FFD version of getWeights computes X̃_t = Σ_{k=0}^{l*} ω̃_k X_{t−k}, i.e. the sum runs over a bounded window of weights rather than over all k = 0, …, T − 1.

Thus a limit/size parameter is required for getWeights_FFD in notebook 05. Fractionally Differentiated Features!

A complete implementation can be found at

https://github.com/philipperemy/fractional-differentiation-time-series/blob/master/fracdiff/fracdiff.py

viz.

import numpy as np

def get_weight_ffd(d, thres, lim):
    w, k = [1.], 1
    ctr = 0
    while True:
        w_ = -w[-1] / k * (d - k + 1)
        if abs(w_) < thres:
            break
        w.append(w_)
        k += 1
        ctr += 1
        if ctr == lim - 1:
            break
    w = np.array(w[::-1]).reshape(-1, 1)
    return w
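For reference, a usage sketch applying those weights to a price series with a rolling dot product; this is my own illustration (not from the linked repo), and series is assumed to be a pandas Series of prices:

import numpy as np
import pandas as pd

def frac_diff_ffd(series, d, thres=1e-5, lim=500):
    # apply the fixed-width weights from get_weight_ffd via a rolling dot product
    w = get_weight_ffd(d, thres, lim)
    width = len(w)
    values = series.values.astype(float)
    out = {}
    for i in range(width - 1, len(values)):
        window = values[i - width + 1:i + 1].reshape(-1, 1)
        out[series.index[i]] = np.dot(w.T, window)[0, 0]
    return pd.Series(out)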

questions about volume imbalance bar implementation

Hi, in the book the VIB is defined as a T*-contiguous subset of ticks such that the expected-imbalance condition is met,
and E₀[T], 2v⁺, E₀[v_t], etc. are estimated from exponentially weighted moving averages of prior bars.
But I think your code might not match the definition the author gives. Did I miss anything? Thanks!

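For reference, a rough sketch of the sampling rule as I read it: close a bar when the running signed-volume imbalance |θ| exceeds E₀[T]·|2v⁺ − E₀[v_t]|, with both expectations updated by EWMAs over completed bars. This is my own illustration under those assumptions (the column names 'price' and 'v', the warm-up estimate, and the EWMA span are arbitrary choices), not the repo's implementation:

import numpy as np
import pandas as pd

def volume_imbalance_bars(ticks, ewm_span=100, init_expected_T=100):
    b = np.sign(ticks['price'].diff()).replace(0, np.nan).ffill().fillna(1.0)  # tick rule
    signed_v = b * ticks['v']                                                  # b_t * v_t

    bar_ids, theta, count = [], 0.0, 0
    expected_T = float(init_expected_T)
    expected_imbalance = signed_v.iloc[:init_expected_T].mean()  # crude warm-up for 2v+ - E0[v_t]
    T_history, imb_history = [], []

    for i, sv in enumerate(signed_v.values):
        theta += sv
        count += 1
        if abs(theta) >= expected_T * abs(expected_imbalance):
            bar_ids.append(i)                      # close a bar at tick i
            T_history.append(count)
            imb_history.append(theta / count)      # average signed volume per tick in this bar
            expected_T = pd.Series(T_history).ewm(span=ewm_span).mean().iloc[-1]
            expected_imbalance = pd.Series(imb_history).ewm(span=ewm_span).mean().iloc[-1]
            theta, count = 0.0, 0
    return bar_ids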

Problem 1.7 on pitfalls of Sharpe ratio

Problem 1.7 is very surprising: "You read a journal article that describes an investment strategy. In a backtest, it achieves an annualized Sharpe ratio in excess of 2, with a confidence level of 95%. Using their dataset, you are able to reproduce their result in an independent backtest. Why is this discovery likely to be false?"
Does anyone understand it? I understand that this could be the case if the strategy were the result of backtest overfitting, but there is no indication of that, is there?
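For what it's worth, one common reading is selection bias under multiple testing: reproducing the result on the authors' own dataset says nothing about how many strategy variants were tried before this one was selected. A quick, purely illustrative calculation of how fast the familywise false-positive rate grows with the number of independent trials N:

# probability that at least one of N independent trials clears a 5% significance bar by chance
for N in (1, 10, 20, 100):
    print(N, round(1 - 0.95 ** N, 3))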

Question about applyPtSlOnT1

In the method [3.2] applyPtSlOnT1, there is:

df0=(df0/close[loc]-1)*events_.at[loc,'side'] # path returns

This line is the same as in the book; however, what is the point of multiplying the returns by the predicted side, *events_.at[loc,'side']? Wouldn't this change the sign of the returns and thus create an error when the barriers are computed on the next line? I know this is not a bug, so I'm sorry about asking here.

Mistakes in crossing moving averages

The crossing moving averages in the Labeling and MetaLabeling for Supervised Classification.ipynb notebook don't work correctly. In the current version, the following example would trigger a down cross, where there is none:

>>> df = pd.DataFrame(
>>>     data=[[645.16614, 645.166140, 645.166140],
>>>                [460.31423, 539.536477, 546.578455]],
>>>     columns=['price', 'fast', 'slow'],
>>>     index=['2014-03-10 18:33:42', '2014-03-10 19:33:58']
>>> )
>>>
>>> def get_down_cross_(df):
>>>     crit1 = df.fast.shift(1) > df.slow
>>>     crit2 = df.fast < df.slow
>>>     return df.fast[(crit1) & (crit2)]
>>> 
>>> get_down_cross_(df)
2014-03-10 19:33:58    539.536477
Name: fast, dtype: float64

The slower moving average should be shifted as well, like so (as well as the upper and lower Bollinger Bands):

def get_up_cross(df):
    crit1 = df.fast.shift(1) < df.slow.shift(1)
    crit2 = df.fast > df.slow
    return df.fast[(crit1) & (crit2)]

def get_down_cross(df):
    crit1 = df.fast.shift(1) > df.slow.shift(1)
    crit2 = df.fast < df.slow
    return df.fast[(crit1) & (crit2)]

def get_up_cross(df, col):
    # col is price column
    crit1 = df[col].shift(1) < df.upper.shift(1)
    crit2 = df[col] > df.upper
    return df[col][(crit1) & (crit2)]

def get_down_cross(df, col):
    # col is price column    
    crit1 = df[col].shift(1) > df.lower.shift(1)
    crit2 = df[col] < df.lower
    return df[col][(crit1) & (crit2)]

Where is the top data directory?

Is the top-level "data" directory missing?
Also, after executing 'python ../src/data/make_dataset.py', it only prints "2018-09-04 13:35:22,438 - main - INFO - making final data set from raw data" and no data is generated...

Tick Bars

Unless I'm misunderstanding something, I believe there's an error in the function that calculates the indices for tick bars.

I think the ts += i in the for-loop should be ts += 1 so that you accumulate a count of ticks, and not a sum of index values.

def  tick_bars(df, column, m):
    '''
    compute tick bars
    
    # args
        df: pd.DataFrame()
        column: name for price data
        m: int(), threshold value for ticks
    # returns
        idx: list of indices
    '''
    t = df[column]
    ts = 0
    idx = []
    for i, x in enumerate(tqdm(t)):
        ts += i
        if ts >= m:
            idx.append(i)
            ts = 0
            continue   
    return idx
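For reference, a corrected sketch with the fix suggested above (counting ticks rather than summing index values); otherwise it is meant to be identical to the notebook's function:

from tqdm import tqdm

def tick_bars_fixed(df, column, m):
    '''
    compute tick bars, accumulating a count of ticks

    # args
        df: pd.DataFrame()
        column: name for price data
        m: int(), threshold value for ticks
    # returns
        idx: list of indices
    '''
    t = df[column]
    ts = 0
    idx = []
    for i, x in enumerate(tqdm(t)):
        ts += 1                 # count ticks instead of summing index values
        if ts >= m:
            idx.append(i)
            ts = 0
    return idx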

Ch3 Notebook: Trend Follow Strategy

I just noticed now that your input vector for the random forest (trend following) has only one feature. X = ma_side.values.reshape(-1,1)

This will be a problem since the model won't be able to split meaningfully on nodes (there is only one feature). So at best the tree can do a single split, 1 or 0. The model doesn't have enough information to learn from the features. Maybe adding some of the previous sides, as well as things like autocorrelation, would help.

I suspect that this is the reason the trend-following RF always predicts to trade.

Would love to hear your thoughts.
Best Regards
Jacques Joubert

Sequentializing data

One topic I do not find in the book is sequentializing data (e.g. for LSTMs, RNNs, ...).

I see a few difficulties:

  • When using information-driven bars on multiple symbols, the bars will not be generated simultaneously. So when exactly should the sequentializing be done? What should be done with unfinished bars?
  • Sequentializing introduces a large overlap between samples. I assume this also needs to be taken into account for sequential bootstrapping and sample weights?

Any thoughts or resources on this?

Meta-labelling

If one uses e.g. an RF model to learn the side (instead of a simple moving-average model), then how is meta-labelling carried out? Since we train the 1st (primary) model (RF) on X_train, do we also want to predict on this same X_train to know the side in order to train a 2nd model? This strategy is employed by Jacques Joubert (@Jackal08) in his notebook on Quantopian on the MNIST data set (as well as on a validation and hold-out set). However, the 1st model's accuracy is then heavily inflated, which I think would propagate to the 2nd model.

Further, Prado suggests that one can change the horizontal barriers when meta-labelling, but I can't see why that makes sense: the 1st model is trained on one particular configuration of barriers, and if we meta-label according to a different configuration, the 1st model is not evaluated properly. Does anyone have ideas as to why Prado suggests this?

Bars notebook - possible corrections

In my work running the code in the Bars notebook, I found that the following corrections may be required.

  1. At Volume Bars, 2nd block
    At > v_bar_df = volume_bar_df(df, 'v', 'price', volume_M)
    Corrected to > v_bar_df = volume_bar_df(df, 'v', volume_M)
    ERROR message: TypeError: volume_bar_df() takes 3 positional arguments but 4 were given

  2. At Dollar Value Bars, 1st block
    def dollar_bars()
    At > t = df[column]
    Corrected to > t = df[dv_column]
    Reason: to match argument name

  3. At Dollar Value Bars, 2nd block
    At > dv_bar_df = dollar_bar_df(df, 'dv', 'price', dollar_M)
    Corrected to: dv_bar_df = dollar_bar_df(df, 'dv', dollar_M)
    ERROR message: TypeError: volume_bar_df() takes 3 positional arguments but 4 were given

  4. Initialisation block
    In my environment, at > import pandas_datareader.data as web
    After "import pandas as pd" I had to add the following line before the datareader line.
    pd.core.common.is_list_like = pd.api.types.is_list_like
    ERROR message: python datareader cannot import name 'is_list_like'
    CHANGE made as per Answer 55 at https://stackoverflow.com/questions/50394873/import-pandas-datareader-gives-importerror-cannot-import-name-is-list-like

  5. Suggestions:
    a) QtConsole: I have found it very helpful to add the following line to the Initialisation block so as to open a QtConsole in the current notebook kernel for interactive work without cluttering the notebooks.
    %qtconsole
    For example, I saved a lot of the variable datasets to csv files for inspection; also checking dtypes and the like.
    b) Variable Inspector for Notebooks: Install jupyter_contrib_nbextensions (and the jupyter_nbextensions_configurator)
    c) Switch plots between inline and interactive: In the QtConsole, switch between the two with:
    %matplotlib qt > #interactive plotting in separate window
    %matplotlib inline > #normal charts inside notebooks

  6. utils conflict
    a) In my environment, I also found that I needed to rename the file utils both in the src folder and in the Notebook.

  • The name utils must already be in use elsewhere in my environment, so the line from utils_gh import cprint generated "ImportError: cannot import name 'cprint'".
  • After I made the file name changes, there was no problem and cprint works as intended.

I trust that these may assist.

Thank you for sharing your implementations in the two notebooks. They have greatly assisted my understanding and contribute towards the possibility of applying the work from the book.

On reading discussion in some of the other comments, I am encouraged to find that I am not the only one who finds some of the notation obscure or ambiguous.

Equity Curve and Live Trading

I'm tackling converting the triple barrier code into an equity curve and live trading

Are you open to using this repository to collaborate? What are your thoughts?

Speed improvements for sampling

The code in the book for estimating uniqueness and building the indicator matrix is quite crude and assumes a relatively small number of signals (and/or bars). The major slowdown most likely comes from large memory usage and therefore swapping. Switching to sparse matrices fixes the problem even for large numbers of signals and bars:

import numpy as np
import scipy.sparse
from scipy.sparse import csr_matrix
from tqdm import tqdm

def getIndMatrixSparse(barIx, t1):
    # sparse indicator matrix: rows are bar timestamps, columns are events (t1 entries)
    rows = barIx[(barIx >= t1.index[0]) & (barIx <= t1.max())]
    cols = t1
    indM = csr_matrix((len(rows), len(cols)), dtype=float)
    with tqdm(total=len(cols)) as pbar:
        for i, (entry, exit) in enumerate(t1.items()):
            offsets = rows.searchsorted([entry, exit])
            indM[offsets[0]:offsets[1], i] = 1.
            pbar.update(1)
    return indM


def getAvgUniquenessSparse(indM):
    # Average uniqueness from indicator matrix
    c = indM.sum(axis=1)                   # concurrency
    u = csr_matrix(indM.multiply(1 / c))   # uniqueness
    filtered = u.multiply(u > 0)
    # sparse workaround for a mean across axis 1 ignoring the 0's - equivalent to df.mean(skipna=True)
    (x, y, z) = scipy.sparse.find(filtered)
    countings = np.bincount(y)
    sums = np.bincount(y, weights=z)
    averages = sums / countings
    return averages
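Hypothetical usage, for reference (assuming bars is a bar DataFrame indexed by timestamp and t1 maps each event's start time to its first-touch time, as in the book's getIndMatrix):

# hypothetical inputs: `bars.index` = bar timestamps, `t1` = event start -> first-touch times
indM = getIndMatrixSparse(bars.index, t1)
avgU = getAvgUniquenessSparse(indM)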

I can open a PR, although I can't rerun the notebook, so I'm not sure where and how to add it. I couldn't find a way to divide by c on a csr matrix directly; multiplying by 1/c yields a coo_matrix, so it needs another conversion to csr before the mean calculation. If someone is better versed in scipy.sparse, I'm happy to improve on it. Right now it does the average-uniqueness calculation for roughly 800K x 10K in about 30 ms on my laptop.

Ch3 Notebook: Random Forest Model

In the chapter 3 notebook, I noticed that the train/test split is as follows: X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5). This function shuffles the data by default. I believe this is a problem because it introduces look-ahead bias into the time series (maybe we could consider it a mild form of label leakage). This is also the reason purged k-fold and embargo CV are introduced in later chapters.

To correct this the code needs to read X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, shuffle=False)

Ch3 Notebook: Adjust the getBins function

Question 3.3, pg 55: Adjust the getBins function (Snippet 3.5) to return a 0 whenever the vertical barrier is the one touched first.

We need to correct this line of code in the getBins function:
out['bin'] = np.sign(out['ret'])
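For reference, a sketch of one way to do this, assuming the original vertical-barrier series (the t1 passed into getEvents) is still available so we can tell when it was the barrier touched first; the return calculation mirrors the book's getBins, and the function name and signature here are my own:

import numpy as np
import pandas as pd

def get_bins_adjusted(events, close, vertical_t1):
    # events:       output of getEvents, 't1' = time of the first barrier touched
    # close:        price series
    # vertical_t1:  the original vertical-barrier series passed into getEvents
    events_ = events.dropna(subset=['t1'])
    px = events_.index.union(events_['t1'].values).drop_duplicates()
    px = close.reindex(px, method='bfill')
    out = pd.DataFrame(index=events_.index)
    out['ret'] = px.loc[events_['t1'].values].values / px.loc[events_.index] - 1
    out['bin'] = np.sign(out['ret'])
    # exercise 3.3: whenever the first touch is the vertical barrier, label 0
    touched_vertical = events_['t1'] >= vertical_t1.reindex(events_.index)
    out.loc[touched_vertical, 'bin'] = 0
    return out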

Strange DataFrame doesn't work with specific functions

After I convert the predictions from my Keras model into a DataFrame, it looks like one, but it doesn't support the same methods and attributes as a normal DataFrame. Here is my example; what can I do about it?


def  CreateMetaLabelsRegression(pred_y,train_y,X_Dates):

    print(pred_y[:5])         # print the array to see how they are structured
    pred_y_df = pd.DataFrame(pred_y,columns = [["Profit","Loss"]]) 
    


    # all of the following wont work... __
    mask = np.where((pred_y_df["Profit"]>0) &(pred_y_df["Loss"]<0))
    pred_y_df["Direction"] = pred_y_df["Direction"].loc[mask] =1

    pred_y_df["Direction"] = np.where((pred_y_df["Profit"]>0) &(pred_y_df["Loss"]<0),1,np.nan)



train_Y_metalabels = CreateMetaLabelsRegression(pred_y,Y_Data,X_Dates)


this is the output:

[[3.4223695e-04 5.1086456e-05]
[3.6414166e-04 6.1540290e-05]
[3.4799695e-04 5.7447331e-05]
[3.4469913e-04 6.4573884e-05]
[3.4438877e-04 6.0803453e-05]]


TypeError Traceback (most recent call last)
in
13 pred_y_df["Direction"] = pred_y_df["Direction"].loc[mask] =1
14 display(pred_y_df.head())
---> 15 train_Y_metalabels = CreateMetaLabelsRegression(pred_y,Y_Data,X_Dates)

in CreateMetaLabelsRegression(pred_y, train_y, X_Dates)
10
11
---> 12 mask = np.where((pred_y_df["Profit"]>0) &(pred_y_df["Loss"]<0))
13 pred_y_df["Direction"] = pred_y_df["Direction"].loc[mask] =1
14 display(pred_y_df.head())

c:\users\ben_z\appdata\local\programs\python\python37\lib\site-packages\pandas\core\frame.py in __getitem__(self, key)
2956 if self.columns.nlevels > 1:
2957 return self._getitem_multilevel(key)
-> 2958 return self._get_item_cache(key)
2959
2960 # Do we have a slicer (on rows)?

c:\users\ben_z\appdata\local\programs\python\python37\lib\site-packages\pandas\core\generic.py in _get_item_cache(self, item)
3268 res = cache.get(item)
3269 if res is None:
-> 3270 values = self._data.get(item)
3271 res = self._box_item_values(item, values)
3272 cache[item] = res

c:\users\ben_z\appdata\local\programs\python\python37\lib\site-packages\pandas\core\internals\managers.py in get(self, item)
958 raise ValueError("cannot label index with a null key")
959
--> 960 return self.iget(loc)
961 else:
962

c:\users\ben_z\appdata\local\programs\python\python37\lib\site-packages\pandas\core\internals\managers.py in iget(self, i)
975 Otherwise return as a ndarray
976 """
--> 977 block = self.blocks[self._blknos[i]]
978 values = block.iget(self._blklocs[i])
979 if values.ndim != 1:

TypeError: only integer scalar arrays can be converted to a scalar index
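For what it's worth, the nested column list looks like the likely culprit: columns=[["Profit","Loss"]] builds MultiIndex-style columns, and the later pred_y_df["Profit"] lookup then fails much as the traceback shows. A minimal reproduction sketch (my guess at the cause, not a confirmed fix):

import numpy as np
import pandas as pd

pred_y = np.array([[3.4223695e-04, 5.1086456e-05],
                   [3.6414166e-04, 6.1540290e-05]])

broken = pd.DataFrame(pred_y, columns=[["Profit", "Loss"]])  # nested list -> MultiIndex-like columns
fixed = pd.DataFrame(pred_y, columns=["Profit", "Loss"])     # flat list -> ordinary columns

mask = (fixed["Profit"] > 0) & (fixed["Loss"] < 0)           # works on the flat-column frame
fixed["Direction"] = np.where(mask, 1, np.nan)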

Tick, Volume

v_bar_df = volume_bar_df(df, 'v', 'price', volume_M) should be
v_bar_df = volume_bar_df(df, 'v', volume_M)

Thanks for sharing this code.
It is perfect for learning.
