
This project is a fork of phdowling/msda.



mSDA

Python implementation of (linear) Marginalized Stacked Denoising Autoencoder (mSDA), as well as dense Cohort of Terms (dCoT), which is a dimensionality-reduction algorithm based on mSDA.

Based on Matlab code by Minmin Chen. For the original papers and code, see http://www.cse.wustl.edu/~mchen/.
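The key idea behind mSDA (from Chen et al.'s papers linked above) is that each denoising layer has a closed-form solution: rather than explicitly corrupting the inputs many times, the expected reconstruction loss under feature dropout with probability p is marginalized out analytically, yielding a linear map W = PQ⁻¹ computed from the data's scatter matrix. The following is only an illustrative numpy sketch of one such layer, not this repository's implementation: the helper name `mda_layer` is hypothetical, it omits the bias column used in the original Matlab code, and it assumes a dense input matrix.

```python
import numpy as np

def mda_layer(X, p, reg=1e-5):
    """One marginalized denoising layer (linear mDA), bias omitted.

    X: (d, n) data matrix with documents as columns.
    p: probability of dropping (corrupting) each feature.
    """
    d = X.shape[0]
    q = np.full(d, 1.0 - p)                 # per-feature survival probability
    S = X @ X.T                             # scatter matrix
    Q = S * np.outer(q, q)                  # E[corrupted scatter], off-diagonal terms
    np.fill_diagonal(Q, q * np.diag(S))     # a feature co-occurs with itself with prob q, not q^2
    P = S * q[np.newaxis, :]                # E[X X_corrupted^T]
    # W = P Q^{-1}; solve the (regularized) symmetric system instead of inverting
    W = np.linalg.solve(Q + reg * np.eye(d), P.T).T
    return np.tanh(W @ X), W
```

Layers are then stacked by feeding each layer's nonlinear output into the next; with p = 0 the mapping reduces to (approximately) the identity, which is a useful sanity check.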

This code has not been extensively tested, so do not yet rely on it to produce correct representations. Watch this repository to stay up to date.

Example usage, performing dimensionality reduction on text:

from gensim.corpora import Dictionary, MmCorpus

from linear_msda import mSDA

# load your corpus; it should be in bag-of-words format (as in e.g. gensim)
preprocessed_bow_documents = MmCorpus("test_corpus.mm")

# load your dictionary
id2word = Dictionary.load("...")

dimensions = 1000

# select prototype word IDs, e.g. by finding the most frequent terms
prototype_ids = [id_ for id_, freq in sorted(id2word.dfs.items(), key=lambda item: item[1], reverse=True)[:dimensions]]

# initialize mSDA / dCoT
msda = mSDA(noise=0.5, num_layers=3, input_dimensionality=len(id2word), output_dimensionality=dimensions, prototype_ids=prototype_ids)

# train on our corpus, generating the hidden representations
msda.train(preprocessed_bow_documents, chunksize=10000)

# get a hidden representation of new text: (note: this is slow)
mytext = "add some text here"
bow = preprocess(mytext) # remove stopwords, generate bow, etc.
representation = msda[bow]

# this also works for whole corpora in the same format, as in gensim (and is much more efficient)
mycorpus_raw = ["add some text here", "another text", "this is a document"]
corpus = [preprocess(doc) for doc in mycorpus_raw]
representations = msda[corpus]
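The `preprocess` helper in the example above is left to the user. A minimal pure-Python stand-in is sketched below; it is hypothetical (a real pipeline would more likely tokenize with gensim and call `id2word.doc2bow`, reusing the dictionary the model was trained with, and use a proper stopword list):

```python
import re
from collections import Counter

def preprocess(text, token2id, stopwords=frozenset({"a", "an", "the", "is", "this"})):
    """Hypothetical stand-in for preprocess(): lowercase, tokenize,
    drop stopwords and out-of-vocabulary tokens, and return the
    bag-of-words as sorted (token_id, count) pairs -- the same format
    gensim's Dictionary.doc2bow produces."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(token2id[t] for t in tokens
                     if t not in stopwords and t in token2id)
    return sorted(counts.items())
```

Whatever implementation is used, the essential point is that queries must be mapped through the same dictionary as the training corpus, or the feature IDs will not line up.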

Note that this implementation is significantly more efficient when documents are transformed in bulk. Transforming documents one at a time can be orders of magnitude slower than bulk-processing the same data.
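To take advantage of this, a document stream can be grouped into batches before transformation. A minimal stdlib sketch (the `chunked` helper is not part of this library):

```python
from itertools import islice

def chunked(iterable, chunksize):
    """Yield successive lists of up to chunksize items from iterable."""
    it = iter(iterable)
    while True:
        batch = list(islice(it, chunksize))
        if not batch:
            return
        yield batch
```

Each batch can then be transformed in one call, e.g. `representations = msda[batch]` inside a `for batch in chunked(corpus, 10000):` loop, rather than indexing `msda` once per document.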
