Yongfu He's Projects
Text autoencoder with LSTMs
LSTM autoencoder for sentence embedding generation
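The encoder half of such a model can be illustrated with a minimal NumPy sketch: run an LSTM cell over a sequence of token embeddings and take the final hidden state as a fixed-size sentence embedding. The shapes, weight names, and single-layer setup here are illustrative assumptions, not the repository's actual code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_encode(seq, Wx, Wh, b, hidden):
    """Run a single-layer LSTM over seq of shape (T, d) and return
    the final hidden state as a fixed-size sentence embedding."""
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    for x in seq:
        z = Wx @ x + Wh @ h + b              # stacked gate pre-activations, (4*hidden,)
        i, f, g, o = np.split(z, 4)          # input, forget, candidate, output gates
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)           # update cell state
        h = o * np.tanh(c)                   # emit hidden state
    return h

# Illustrative usage with random weights and a random 5-token "sentence"
rng = np.random.default_rng(0)
d, hidden = 8, 16
Wx = rng.normal(size=(4 * hidden, d)) * 0.1
Wh = rng.normal(size=(4 * hidden, hidden)) * 0.1
b = np.zeros(4 * hidden)
embedding = lstm_encode(rng.normal(size=(5, d)), Wx, Wh, b, hidden)
```

A full autoencoder would add a decoder LSTM that reconstructs the input sequence from this embedding, with the reconstruction error as training signal.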
A CNN with an attention module, built while attending the Brains, Minds and Machines summer course

Caffe2 is a lightweight, modular, and scalable deep learning framework.
This project predicts the S&P 500 volatility index (^VIX) time series using an LSTM.
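A common first step in this kind of LSTM forecasting is converting the price series into supervised (window, next value) pairs. A minimal sketch, with the window length as an illustrative assumption:

```python
import numpy as np

def make_windows(series, lookback):
    """Turn a 1-D series into (samples, lookback) input windows X
    and next-step targets y for supervised sequence training."""
    X, y = [], []
    for t in range(len(series) - lookback):
        X.append(series[t:t + lookback])   # past `lookback` observations
        y.append(series[t + lookback])     # value to predict
    return np.array(X), np.array(y)
```

Each row of `X` would then be fed to the LSTM (typically reshaped to `(samples, lookback, 1)` for Keras) to predict the corresponding entry of `y`.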
Timeseries Forecasting via Recurrent Neural Nets [Keras + Theano]
Deep Learning for Spatio-Temporal Data
Deep fusion: deeply-fused nets, and a study of their connection to ensembling
Anomaly detection with GANs
Unsupervised learning models for text: (1) an LSTM language model; (2) an LSTM autoencoder
MATLAB implementation of the GIRAF algorithm for convolutional structured low-rank matrix recovery
GPU implementation of the Kernel Recursive Least Squares (KRLS) algorithm
Code and documentation for the winning solution to the Grasp-and-Lift EEG Detection challenge
My code for the IJCAI-18 CVR Prediction Contest
Deep Learning library for Python. Convnets, recurrent neural networks, and more. Runs on TensorFlow or Theano.
Keras implementation of LSTM Variational Autoencoder
TensorFlow LSTM-autoencoder implementation
LSTM based Autoencoder for extracting high-level representations from sequential categorical data
LSTM-MATLAB is a Long Short-Term Memory (LSTM) implementation in MATLAB, meant to be succinct, illustrative, and for research purposes only. It is accompanied by a reference paper: Revisit Long Short-Term Memory: An Optimization Perspective, NIPS deep learning workshop, 2014.
Code accompanying the book "Machine Learning for Hackers"
An accessible, non-trivial example of deep learning with financial time series, using Keras on top of TensorFlow
Code to train state-of-the-art Neural Machine Translation systems.
OSPABP: UAV flight data anomaly detection based on oversampling projection approximation basis pursuit
Scripts to Analyze Pronto's Data Release
The "Python Machine Learning" book code repository and info resource
We want to force temporal models, such as LSTMs and GRUs, to retain a better representation of the past (better memory). We do this by reconstructing previous hidden representations with autoencoders.
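The idea above amounts to adding an autoencoding penalty on past hidden states to the task loss. A minimal NumPy sketch, where the encoder/decoder weights, the tanh bottleneck, and the weighting `lam` are illustrative assumptions rather than the project's actual objective:

```python
import numpy as np

def memory_regularized_loss(task_loss, h_prev, We, Wd, lam=0.1):
    """Augment a task loss with a penalty that forces the model to
    keep enough information to reconstruct the previous hidden state."""
    z = np.tanh(We @ h_prev)               # encode the previous hidden state
    h_rec = Wd @ z                         # decode (reconstruct) it
    rec_loss = np.mean((h_prev - h_rec) ** 2)
    return task_loss + lam * rec_loss      # total objective to minimize
```

In training, gradients of this combined objective flow back into the recurrent weights, encouraging hidden states that remain reconstructable and hence preserve information about earlier timesteps.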