scikit-learn-contrib / imbalanced-learn

A Python Package to Tackle the Curse of Imbalanced Datasets in Machine Learning

Home Page: https://imbalanced-learn.org

License: MIT License

Python 96.75% Makefile 0.07% Shell 2.45% TeX 0.73%
data-analysis data-science machine-learning python statistics

imbalanced-learn's Introduction

scikit-learn-contrib

scikit-learn-contrib is a GitHub organization for gathering high-quality scikit-learn-compatible projects. It also provides a template for establishing new scikit-learn-compatible projects.

Vision

With the explosion of the number of machine learning papers, it becomes increasingly difficult for users and researchers to implement and compare algorithms. Even when authors release their software, it takes time to learn how to use it and how to apply it to one's own purposes. The goal of scikit-learn-contrib is to provide easy-to-install and easy-to-use high-quality machine learning software. With scikit-learn-contrib, users can install a project by pip install sklearn-contrib-project-name and immediately try it on their data with the usual fit, predict and transform methods. In addition, projects are compatible with scikit-learn tools such as grid search, pipelines, etc.
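
A minimal sketch of that workflow, using imbalanced-learn (the project documented on this page) as the concrete example; it assumes a recent release that exposes the fit_resample API:

# pip install imbalanced-learn
# A hedged sketch of the usual scikit-learn-contrib workflow: install the
# package, then use the familiar scikit-learn-style estimator interface.
from sklearn.datasets import make_classification
from imblearn.under_sampling import RandomUnderSampler

# Toy imbalanced problem: ~90% of the samples are in class 0.
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)

sampler = RandomUnderSampler(random_state=0)   # scikit-learn-style estimator
X_res, y_res = sampler.fit_resample(X, y)      # balanced output

Samplers also compose with scikit-learn tooling such as grid search when used through imblearn's own pipeline (see the pipeline sketches further down this page).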

Projects

If you would like to include your own project in scikit-learn-contrib, take a look at the workflow.

A simple but efficient density-based clustering algorithm that can find clusters of arbitrary size, shape and density in two dimensions. Higher-dimensional data are first reduced to 2-D using t-SNE. The algorithm relies on a single parameter, K, the number of nearest neighbors.

Read The Docs, Read the Paper

Maintained by: Mohamed Abbas

Large-scale linear classification, regression and ranking.

Maintained by Mathieu Blondel and Fabian Pedregosa.

Fast and modular Generalized Linear Models with support for models missing in scikit-learn.

Maintained by Mathurin Massias, Pierre-Antoine Bannier, Quentin Klopfenstein and Quentin Bertrand.

A Python implementation of Jerome Friedman's Multivariate Adaptive Regression Splines.

Maintained by Jason Rudy and Mehdi.

Python module to perform under-sampling and over-sampling with various techniques.

Maintained by Guillaume Lemaitre, Fernando Nogueira, Dayvid Oliveira and Christos Aridas.

Factorization machines and polynomial networks for classification and regression in Python.

Maintained by Vlad Niculae.

Confidence intervals for scikit-learn forest algorithms.

Maintained by Ariel Rokem, Kivan Polimis and Bryna Hazelton.

A high performance implementation of HDBSCAN clustering.

Maintained by Leland McInnes, jc-healy, c-north and Steve Astels.

A library of sklearn compatible categorical variable encoders.

Maintained by Will McGinnis and Paul Westenthanner.

Python implementations of the Boruta all-relevant feature selection method.

Maintained by Daniel Homola.

Pandas integration with sklearn.

Maintained by Israel Saeta Pérez.

Machine learning with logical rules in Python.

Maintained by Florian Gardin, Ronan Gautier, Nicolas Goix and Jean-Matthieu Schertzer.

A Python implementation of the stability selection feature selection algorithm.

Maintained by Thomas Huijskens.

Metric learning algorithms in Python.

Maintained by CJ Carey, Yuan Tang, William de Vazelhes, Aurélien Bellet and Nathalie Vauquier.

imbalanced-learn's People

Contributors

bganglia, chkoar, discdiver, dvro, fmfn, glemaitre, hayesall, klizter, kmike, massich, matteding, microsheep, mr-c, nada-adel-mohamady, nv-jpt, orausch, osanai-hisashi, paulochf, pinnacleai, prakhyath07, proinsias, pulkitmaloo, rasbt, sadrasabouri, seanbenhur, shaform, shihab-shahriar, solegalli, ssaamm, tthost


imbalanced-learn's Issues

Issues using SMOTE

Hi,
First of all, thank you for providing this nice library.

I have an imbalanced dataset that I've loaded using pandas.
When I supply the dataset as input to SMOTE, I get the following error:

ValueError: Expected n_neighbors <= n_samples, but n_samples = 1, n_neighbors = 6

Thanks in advance
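
A hedged sketch, assuming a recent imbalanced-learn release: SMOTE interpolates between a sample and its k nearest minority neighbors, so each minority class needs more than k_neighbors samples. With a single sample in a class (n_samples = 1, as in the error above) it cannot interpolate at all; for very small classes, lower k_neighbors:

import numpy as np
from collections import Counter
from imblearn.over_sampling import SMOTE

rng = np.random.RandomState(0)
X = rng.randn(103, 4)
y = np.array([0] * 100 + [1] * 3)              # only 3 minority samples

n_min = min(Counter(y).values())               # size of the rarest class
smote = SMOTE(k_neighbors=min(5, n_min - 1))   # keep k below the class size
X_res, y_res = smote.fit_resample(X, y)
print(Counter(y_res))                          # both classes now have 100 samples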

Visualisation issue

First, my system is based on the current Anaconda stable release.
I hit two issues when calling the visualisation module:

svmsmote = SVM_SMOTE(random_state=1, svm_args={'class_weight' : 'auto'})

I am not entirely sure, but svm_args should probably be passed as **kwargs rather than as a svm_args argument:

svm_args = {'class_weight': 'auto'}
svmsmote = SVM_SMOTE(random_state=1, **svm_args)

The second issue is probably just a matter of the matplotlib version; I had to comment out the property:

#"examples.download": True,

Guillaume Lemaitre

Refactor the package

Hi,

I would like to take the long road and refactor the package so that it follows the scikit-learn template more closely.
What I mean by that is something like this.

I propose to add the following support:

  • sphinx/numpy-style documentation.
  • proper testing.
  • continuous integration via Travis. If somebody wants to set up AppVeyor, that would be nice, but I have no interest in it myself.
  • refactoring the package into modules.
  • proper coverage.

The full package will benefit from it: on the user side through proper documentation, and on the developer side by ensuring backward compatibility.

I will probably open a pull request for this. It might take a while, but if anybody wants to jump in, I would be happy.

Flatten for nn_num

Hey,
In smote.py, the code tries to flatten an integer (nn_num) and fails in make_samples.

Get specific array indices for sampled elements

Hi,

I am currently extracting samples from some data using under-sampling. However, before sampling I apply some data transformations.

After the sampling is done, I want to use the selected samples with the untransformed data (e.g. normalize before clustering/KNN, but then work with the non-normalized samples).

Is it possible to get the indices of the selected sample rows in any way?
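
A sketch assuming a recent imbalanced-learn release, where samplers expose the indices of the retained rows through the sample_indices_ attribute after fitting (older releases used a return_indices=True constructor flag instead):

import numpy as np
from imblearn.under_sampling import RandomUnderSampler

rng = np.random.RandomState(0)
X_scaled = rng.randn(200, 5)               # transformed (e.g. normalized) data
X_raw = X_scaled * 10.0 + 3.0              # stand-in for the untransformed data
y = np.array([0] * 180 + [1] * 20)

rus = RandomUnderSampler(random_state=0)
X_res, y_res = rus.fit_resample(X_scaled, y)

idx = rus.sample_indices_                  # row indices into the original arrays
X_raw_selected = X_raw[idx]                # same rows, untransformed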

Installation

I have issues installing your package (I am using the Anaconda distribution). After following your setup instructions, the package can only be imported as "unbalanced_dataset", not "UnbalancedDataset". Therefore the test example from your notebook also does not work for me. I also cannot import functions like this: from unbalanced_dataset import SVM_smote. Do you have any idea how to fix this?
Thanks for your time

Update: I was able to import SMOTETomek using "from unbalanced_dataset import SMOTETomek". When trying to oversample a dataset with 3 features and a binary outcome label (n=128 for y==0 and n=3 for y==1), a ValueError is raised: "Expected n_neighbors <= 3. Got 6". In this dataset I have only 3 samples of the minority class.

Create setup.py in order to exploit the notebook

Hi,

I think we need to create a setup.py to avoid problems when executing the notebook. I will try to do something about that.

I moved the notebook into a notebook folder instead of test. I will share the pull request when it is ready.

Cheers,

Trying to create a new SMOTE instance in Python 3

Hi, guys!

For Python 3 we should replace "basestring" with "str" (in the file "master/unbalanced_dataset/base_sampler.py"). I don't know if it was supposed to support both Python 2.x and 3.x; just reporting. =) ... Keep up the good work =)

Install Error: "SyntaxError: invalid character in identifier"

Really keen to get going with this!

Error when using
python setup.py install

  File "C:/ManualGitHubClones/UnbalancedDataset/setup.py", line 34, in <module>
    _VERSION_GLOBALS = load_version()
  File "C:/ManualGitHubClones/UnbalancedDataset/setup.py", line 18, in load_version
    exec(fp.read(), globals_dict)
  File "<string>", line 1
    """
      ^
SyntaxError: invalid character in identifier

It highlights this:

exec(fp.read(), globals_dict) in the function load_version()

My setup is Win 8.1, 64-bit, Python 3.5, Anaconda 4.

Awkward comment in over_sampling.py

I would like to know if there is anything wrong with the generation of the synthetic samples. I ask because of the words """ FIX THIS SHIT!!! """ in the comments on lines 254 to 256, as shown below:

        # --- Generating synthetic samples
        # Use static method make_samples to generate minority samples
        # FIX THIS SHIT!!!#
        sx, sy = self.make_samples(x=minx,
                                   nn_data=minx,
                                   y_type=self.minc,
                                   nn_num=nns,
                                   n_samples=int(self.ratio * len(miny)),
                                   step_size=1.0,
                                   random_state=self.rs,
                                   verbose=self.verbose)

Modify plot_unbalanced_dataset.ipynb

The code has changed from unbalanced_dataset to imblearn, while plot_unbalanced_dataset.ipynb in the example directory remains the old version, which is really annoying.

Regarding generation of new samples

When calculating the number of new samples for an imbalanced dataset: say I have 12 samples in the minority class and 50 in the other. If I set ratio=0.3 I expect 3 new samples, and it works fine. If I set ratio=0.4 the expected number of new samples is 4, but 8 are generated, which I do not understand. Please look into this issue.

Better examples and visualization.

Examples and visualization are lacking, especially with the addition of new algorithms. Ideally we could include examples, API usage, plots and other relevant material in one nice IPython notebook.

Ensemble methods only return a list, not an array?

Hey, thanks for this awesome library. It would be awesome to see it integrated into sklearn some day.

I'm currently exploring sampling methods somewhat like a hyperparameter search, by running each sampler on my dataset before classification. It makes it through all of the over-sampler methods listed in the IPython notebook examples, but when it gets to EasyEnsemble (and BalanceCascade), my code breaks because the return type of X_train has changed.

I'm not sure why it's changing, or whether it's expected, but it doesn't seem to be documented or mentioned in any code comments. Later in the example both ensemble methods are given special treatment. Perhaps I should go and read the paper, but if there is an unexpected difference in what your library outputs, I really think it should be documented.

        elif sampler:
            sampler.ratio = float(np.count_nonzero(y==1)) / float(np.count_nonzero(y==0))
            print "Using {1} sampling method with ratio {0}".format(sampler.ratio,str(sampler))
            X_train, y_train = sampler.fit_transform(X_train,y_train)

        print "Training: Feature space holds %d observations and %d features" % X_train.shape

This code normally runs fine, but breaks on ensemble methods:

  File "code.py", line 168, in run_labelkfold
    print "Training: Feature space holds %d observations and %d features" % X_train.shape
AttributeError: 'list' object has no attribute 'shape'

SMOTE ratio and number of synthetic samples generated

Hi,

ratio = float(np.count_nonzero(y==1)) / float(np.count_nonzero(y==0))

Using the ratio given in the notebook for SMOTE generates as many new synthetic samples as there are majority samples. This means that the smox/smoy sets are then unbalanced towards the minority.

Is this how SMOTE should be implemented, or should it generate balanced datasets instead?

Oversampling from sequential data

Hey there! I would like to know how I should handle sequential data, that is, where the design matrix X has shape (n_samples, sequence_len, n_feature_dims) and the target vector Y holds the class labels (Yi in {0, 1, 2, ..., K}), with shape (n_samples,).

What I'm currently doing is ignoring the second dimension (the sequence length) by selecting a fixed sequence position, such as X[:, 0, :], which yields a matrix of shape (n_samples, n_feature_dims). However, I'm not sure this is the correct way to proceed, because once I sample, it is no longer possible to know which sequence a sample belongs to.

Do you know of any workarounds?
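
One possible workaround (a sketch, not an official API): flatten the sequence axis before resampling so each row carries its whole sequence, then restore the 3-D shape afterwards. Samplers that only copy or drop rows, such as RandomOverSampler here, keep every resampled row tied to a real sequence:

import numpy as np
from imblearn.over_sampling import RandomOverSampler

rng = np.random.RandomState(0)
n_samples, seq_len, n_dims = 60, 10, 4
X = rng.randn(n_samples, seq_len, n_dims)   # (n_samples, sequence_len, n_feature_dims)
y = np.array([0] * 50 + [1] * 10)

X_flat = X.reshape(n_samples, seq_len * n_dims)   # one row per full sequence
ros = RandomOverSampler(random_state=0)
X_res, y_res = ros.fit_resample(X_flat, y)

X_seq = X_res.reshape(-1, seq_len, n_dims)        # back to 3-D sequences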

Bug in SMOTE SVM

Hey,

There is a bug in SMOTE SVM: it can happen that none of the support vectors is considered in danger, and the interpolation with the k-NN then fails.

I opened a pull request with a version that solves this issue; however, I think it could be made nicer.

Regards

Support indices after under-sampling (NearMiss)

Hi there,
First of all, thank you for such a beautiful tool. I am trying under-sampling with NearMiss-1. Is there any way to get the indices of the selected majority samples, e.g. get_support(indices=False) in scikit-learn?

Regards

Simplify SMOTE's resample method

The resample method in the SMOTE object is too complex. While it works just fine, it could be simplified and made clearer.

Problem with scikit-learn version 0.16.1

Greetings,

I've got a problem with scikit-learn version 0.16.1 and the n_jobs parameter used by kd_tree.
Your library seems to need scikit-learn version 0.17.0, so you should update your requirements.

Update to scikit-learn version 0.17.0

sudo pip install --upgrade scikit-learn

Anyway, great job folks!

Claude Coulombe

API design

This is not really an issue but a discussion about the API.

I really liked the old API with just the resample method: the estimator keeps statistics in the fit method, and the resample method performs the resampling on the already-fitted data.

Following the old API, we could propose a new mixin like ResamplerMixin, which would always implement a resample method, and whose fit_transform method would return self.fit(X, y).resample(). That way, with a small modification to scikit-learn's pipeline object, we could adapt all the estimators of the UnbalancedDataset package (a rough sketch follows below).

On the other hand, following the new API, I propose that BaseSampler inherit from scikit-learn's BaseEstimator and work from there in the pipeline object.
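
A rough sketch of the mixin idea above (hypothetical code for discussion, not an existing API; subclasses would implement fit and resample):

from sklearn.base import BaseEstimator

class ResamplerMixin(object):
    """Mixin for the proposed API: fit learns statistics, resample resamples."""

    def resample(self):
        raise NotImplementedError

    def fit_transform(self, X, y):
        # As suggested above: fit_transform == self.fit(X, y).resample()
        return self.fit(X, y).resample()

class BaseSampler(BaseEstimator, ResamplerMixin):
    """Inherits get_params/set_params from BaseEstimator, per the new API."""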

How to apply the SMOTE module to my dataset

Hi,

I am a beginner in Python. I use a neural network for binary classification, and I have an imbalanced dataset. How can I apply the SMOTE (oversampling) module to my dataset to increase the minority class?

Thanks,
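
A minimal sketch, assuming a recent imbalanced-learn release (older versions used fit_transform or fit_sample instead of fit_resample):

from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE

# Toy imbalanced binary problem standing in for your data.
X, y = make_classification(n_samples=500, weights=[0.9, 0.1], random_state=0)
print("before:", Counter(y))

X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print("after:", Counter(y_res))   # minority class oversampled to parity

X_res and y_res then replace X and y when training the neural network; only the training split should be resampled, never the test data.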

Combination of over and undersampling

Hi,

Would it be possible to add a new class for performing random over- and under-sampling combined?

Considering a binary classification problem {1: 2000, 0: 500}, can we sample in such a way that we bring it to {1: 1000, 0: 1000}, using random under-sampling for class 1 and random over-sampling for class 0, either with a single function from one class or by combining multiple random samplers?

Maybe we could have two ratio parameters in the init, one for over-sampling and one for under-sampling, or there might be a better way (see the sketch after this message).

The reason I ask is that a combination of under/over-sampling with random sampling is often useful for ML problems where synthetic data may not work well, e.g. failure prediction. Do let me know in case such a method already exists.

Regards,
Dipanjan
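
A hedged sketch, assuming a recent imbalanced-learn release whose samplers accept a per-class sampling_strategy dict; the numbers follow the {1: 2000, 0: 500} example above:

import numpy as np
from collections import Counter
from imblearn.over_sampling import RandomOverSampler
from imblearn.under_sampling import RandomUnderSampler

rng = np.random.RandomState(0)
X = rng.randn(2500, 3)
y = np.array([1] * 2000 + [0] * 500)

# Over-sample class 0 from 500 to 1000 ...
over = RandomOverSampler(sampling_strategy={0: 1000}, random_state=0)
X_mid, y_mid = over.fit_resample(X, y)

# ... then under-sample class 1 from 2000 to 1000.
under = RandomUnderSampler(sampling_strategy={1: 1000}, random_state=0)
X_res, y_res = under.fit_resample(X_mid, y_mid)
print(Counter(y_res))   # {0: 1000, 1: 1000}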

Installation Instructions

Hello,

I've installed this by placing it in my ~/anaconda/lib/python2.7/site-packages/ directory. How can I actually use it? Can you give the example used to generate the README image?

EDIT: I have just read the visualization examples and found everything I needed there. Thank you!

Index error when calling fit_transform on a large dataset

I have X_train_features.shape == (30962, 15637) and y_train.shape == (30962,).

type(X_train_features) is scipy.sparse.csr.csr_matrix.

I get an index error:

IndexError                                Traceback (most recent call last)
<ipython-input-44-fa5e3a9ff626> in <module>()
      5 os = OverSampler(ratio=ratio, verbose=verbose)
      6 
----> 7 osx, osy = os.fit_transform(X_train_features, y_train)

C:\Python27\lib\site-packages\unbalanceddataset-0.1-py2.7.egg\unbalanced_dataset\unbalanced_dataset.pyc in fit_transform(self, x, y)
    260 
    261         self.fit(x, y)
--> 262         self.out_x, self.out_y = self.resample()
    263 
    264         return self.out_x, self.out_y

C:\Python27\lib\site-packages\unbalanceddataset-0.1-py2.7.egg\unbalanced_dataset\over_sampling.pyc in resample(self)
     52 
     53         # Start with the majority class
---> 54         overx = self.x[self.y == self.maxc]
     55         overy = self.y[self.y == self.maxc]
     56 

C:\Python27\lib\site-packages\scipy\sparse\csr.pyc in __getitem__(self, key)
    305             row, col = self._index_to_arrays(row, col)
    306 
--> 307         row = asindices(row)
    308         col = asindices(col)
    309         if row.shape != col.shape:

C:\Python27\lib\site-packages\scipy\sparse\csr.pyc in asindices(x)
    224                     x = x.astype(idx_dtype)
    225             except:
--> 226                 raise IndexError('invalid index')
    227             else:
    228                 return x

IndexError: invalid index

Combining all SMOTES

There is no reason to have SMOTE, bSMOTE1, bSMOTE2 and SVM_SMOTE as separate objects. They could easily be condensed into one SMOTE class that takes the type as an argument.

Completed; will close this issue.

PEP8

Make the core files more PEP 8 compliant.

Issues with oversampling (SMOTE) example

I tried to reproduce the oversampling result from https://github.com/fmfn/UnbalancedDataset/blob/master/example/plot_unbalanced_dataset.ipynb. However, I am getting the following error:

Traceback (most recent call last):

File "", line 1, in
smox, smoy = smote.fit_transform(x, y)

File "/unbalanced_dataset/unbalanced_dataset.py", line 262, in fit_transform
self.out_x, self.out_y = self.resample()

File "/unbalanced_dataset/over_sampling.py", line 262, in resample
n_samples=int(self.ratio * len(miny)),

ValueError: invalid literal for int() with base 10: 'autoautoautoautoautoautoautoautoautoautoautoautoautoautoautoautoautoautoautoautoautoautoautoautoautoautoautoautoautoautoautoautoautoautoautoautoautoautoautoautoautoautoautoautoautoautoautoautoautoauto'

Did anyone get the same error?! (It looks like ratio was left at its string default 'auto', so self.ratio * len(miny) repeated the string before int() failed.)

Random state problem

Great library!

I noticed, though, that the seeding is all done via numpy while some of the functions still use the standard "random" module, which means the seeding doesn't work for certain things.

For example, when using UnderSampler with replacement=False, the selection of an index in that class's "resample" method is done using the non-numpy "sample" function, while with replacement=True the index selection is done using numpy.random.choice.

That seems to lead to random states not actually being preserved, since the random state given to a class like UnderSampler is passed to numpy.random.seed. In my case I'm only interested in down-sampling and was able to fix it by changing this line in UnderSampler:

UnderSampler.resample (line 70)
indx = sample(range((self.y == key).sum()), num_samples)

to

indx = np.random.choice(range((self.y == key).sum()), size=num_samples, replace=False)

Hopefully that makes sense. I see the sample function is used elsewhere too, so I thought I'd point it out.
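
The scikit-learn idiom for this (a sketch of the general fix rather than the exact patch): thread one RandomState object through every draw instead of mixing random.sample with the global numpy seed:

import numpy as np
from sklearn.utils import check_random_state

def undersample_indices(y, key, num_samples, random_state=None):
    """Pick num_samples row indices of class `key` without replacement."""
    rng = check_random_state(random_state)   # accepts int, RandomState or None
    pool = np.flatnonzero(y == key)          # candidate indices for this class
    return rng.choice(pool, size=num_samples, replace=False)

With every sampler method drawing from the same rng, a fixed random_state reproduces the same resampling regardless of which code path runs.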

Only integer arrays with one element can be converted to an index error

Hi,

Thanks for providing this wonderful package.
Coming to the issue: when I try to use under-sampling, or SMOTE with the SVM variant, I encounter the following error:

File "smote.py", line 54, in <module>
  smox, smoy = US.fit_transform(data, labels)
File "/usr/local/lib/python3.4/dist-packages/UnbalancedDataset-0.1-py3.4.egg/unbalanced_dataset/unbalanced_dataset.py", line 262, in fit_transform
  self.out_x, self.out_y = self.resample()
File "/usr/local/lib/python3.4/dist-packages/UnbalancedDataset-0.1-py3.4.egg/unbalanced_dataset/under_sampling.py", line 73, in resample
  underx = concatenate((underx, self.x[self.y == key][indx]), axis=0)
TypeError: only integer arrays with one element can be converted to an index

Here is the part of my code where I'm using the under-sampling:

print("Loading the Dataset......\n")
f1 = open("withallfeatures_7000tfidf.csv", "rb")
data = pickle.load(f1)

print("Loading the labels......\n")
f2 = open("tweetlabels.txt", "r")
labels = []
for x in f2:
    labels.append(x)

verbose = False
US = UnderSampler(verbose=verbose)
smox, smoy = US.fit_transform(data, labels)

Can anyone help me solve this issue?

Thanks in advance

Add default imbalanced dataset

It would be nice to have a module for loading imbalanced datasets from the web.

The list (pp. 40) proposed by Z. Ding could be used for that purpose. Some txt files were available at some point, but we could take the original data directly.
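
Later imbalanced-learn releases grew exactly this feature; a sketch assuming the imblearn.datasets module is available (it downloads a benchmark collection of imbalanced datasets on first use and caches it locally):

from imblearn.datasets import fetch_datasets

datasets = fetch_datasets(filter_data=["ecoli"])   # dict of Bunch objects
ecoli = datasets["ecoli"]
print(ecoli.data.shape, ecoli.target.shape)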

SMOTE is not taking ratio into consideration

Hi,

I have tried SMOTE with various parameters, e.g. ratio and kind = 'borderline1' / 'borderline2' / 'svm', but for every value of kind the number of output minority class samples is always close to double the input minority class size. The ratio passed as a parameter to SMOTE is not taken into consideration. Please check.

Note: ratio is in [0.95, 1.9, 2.85, 3.8]

Issues with using SMOTE

First of all, I would like to thank you for this great implementation; I have found it useful on many occasions.
At the moment I am trying to use SMOTE on my dataset, which consists of vectorized text data.
However, I am getting the following error:
TypeError: ufunc 'multiply' did not contain a loop with signature matching types dtype('S32') dtype('S32') dtype('S32')

I am using up to 1000 features. Here is the loop that gives me trouble:

skf = StratifiedKFold(y, 2)
for train, test in skf:
    X_train = vect.fit_transform(x[train])
    X_train = X_train.toarray()
    Y_train = y[train]
    smote = SMOTE()
    smox, smoy = smote.fit_transform(X_train, Y_train)

Thank you very much for your support!
Thank you very much for your support!

cannot be used in a Pipeline

Hello,

I was hoping to use the UnderSampler in a scikit-learn Pipeline, but ran into problems with the dimensions of the output.

Looking more closely, the UnderSampler (and presumably the others) does not fully conform to the scikit-learn specification for estimators.

from sklearn.utils.estimator_checks import check_estimator
check_estimator(UnderSampler)

TypeError: Cannot clone object '<unbalanced_dataset.under_sampling.UnderSampler object at 0x7fd29f441ad0>' (type <class 'unbalanced_dataset.under_sampling.UnderSampler'>): it does not seem to be a scikit-learn estimator as it does not implement a 'get_params' methods.

MemoryError

Hi... I have four classes consisting of 369 samples for the 1st, 291 for the 2nd, 332 for the 3rd, and 520336 for the 4th class. It's a text classification task, but I have converted the data to a sparse matrix and the labels to a numpy array. I have also used the updated package for handling sparse matrices, as in issue 26. When I try to undersample the dataset using random under-sampling, there is memory trouble.
Here are the error messages:
Traceback (most recent call last):
File "C:/Python34/pen_uppm/classificationResamplingDataset.py", line 81, in
hasilUnderSampling = test_rest(vecTraining, y_training)
File "C:/Python34/pen_uppm/classificationResamplingDataset.py", line 77, in test_rest
usx, usy = US.fit_transform(x, y)
File "C:\Python34\lib\site-packages\unbalanceddataset-0.1-py3.4.egg\unbalanced_dataset\unbalanced_dataset.py", line 264, in fit_transform
self.out_x, self.out_y = self.resample()
File "C:\Python34\lib\site-packages\unbalanceddataset-0.1-py3.4.egg\unbalanced_dataset\under_sampling.py", line 74, in resample
underx = concatenate((underx, self.x[self.y == key][indx]), axis=0)
File "C:\Python34\lib\site-packages\unbalanceddataset-0.1-py3.4.egg\unbalanced_dataset\utils.py", line 36, in concatenate
return sp.vstack(l).todense()
File "C:\Python34\lib\site-packages\scipy\sparse\base.py", line 605, in todense
return np.asmatrix(self.toarray(order=order, out=out))
File "C:\Python34\lib\site-packages\scipy\sparse\coo.py", line 274, in toarray
B = self._process_toarray_args(order, out)
File "C:\Python34\lib\site-packages\scipy\sparse\base.py", line 793, in _process_toarray_args
return np.zeros(self.shape, dtype=self.dtype, order=order)
MemoryError

Is it caused by the 4th class containing 520336 samples? From the traceback, the concatenate utility calls sp.vstack(l).todense(), which materializes the whole result as a dense matrix.
Please help me, thanks in advance.

Better organization.

We have several methods scattered across one big file; all must be imported and used separately. I suggest we split them into submodules (over, under, mix, ensemble).

See pull request #10

SMOTE on pipeline

Hi. I was wondering how I can use the SMOTETomek class in sklearn's pipelines. Is it working, or is it a WIP?
Thank You!
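
A sketch assuming a recent imbalanced-learn release: samplers do not fit into scikit-learn's own Pipeline (resampling changes the number of rows between steps), but imblearn ships a Pipeline that accepts them:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from imblearn.combine import SMOTETomek
from imblearn.pipeline import Pipeline

X, y = make_classification(n_samples=500, weights=[0.9, 0.1], random_state=0)

pipe = Pipeline([
    ("resample", SMOTETomek(random_state=0)),   # applied only during fit
    ("clf", LogisticRegression()),
])
pipe.fit(X, y)
print(pipe.score(X, y))   # the prediction path skips the sampler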

What should I do to handle categorical variables?

First, thanks for sharing these tools with us.
I want to generate synthetic samples with the SMOTE algorithm, but some of my features are categorical, like region, gender and so on. I want to know how to handle these categorical variables so that the generated samples have the same types. I can't find any explanation in the documentation.
Thank you!
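
Newer imbalanced-learn releases provide a SMOTE variant for mixed data; a sketch assuming SMOTENC is available (categorical_features lists the indices of the categorical columns, whose values are copied from neighbors rather than interpolated):

import numpy as np
from imblearn.over_sampling import SMOTENC

rng = np.random.RandomState(0)
X = np.hstack([
    rng.randn(100, 2),                     # two continuous columns
    rng.randint(0, 3, size=(100, 1)),      # one categorical column (as codes)
])
y = np.array([0] * 90 + [1] * 10)

smote_nc = SMOTENC(categorical_features=[2], random_state=0)
X_res, y_res = smote_nc.fit_resample(X, y)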
