jaswar / nnlib

GPU-accelerated, C/C++ neural network library.

Home Page: https://jaswar.github.io/nnlib/

License: MIT License

Languages: CMake 5.37%, C++ 71.12%, C 3.69%, Cuda 15.32%, Shell 4.50%
Topics: gpu, machine-learning, neural-network, cpp, cuda

nnlib's Introduction

Overview

nnlib is a GPU-accelerated, static, C/C++ neural network library with autograd support. It was designed to work in one of the following two modes:

  • CPU-only: All operations take place on CPU and all data is stored in the main memory. Single Instruction Multiple Data (SIMD) instruction sets, AVX and AVX2, are used to increase performance.
  • GPU-accelerated: Most operations are performed on the GPU and most data is stored in the GPU's memory. The library uses the GPU to accelerate parallelizable tasks such as matrix multiplication. A CUDA-capable Nvidia GPU is required to run the library in this mode.

Supported functionalities

  • Layers:
    • Fully connected
  • Activation functions:
    • Linear
    • ReLU
    • Sigmoid
  • Loss functions:
    • Mean Squared Error
    • Binary Cross Entropy
    • Categorical Cross Entropy
  • Metrics:
    • All of the above loss functions
    • Categorical accuracy
    • Binary accuracy
  • Optimizers:
    • Stochastic Gradient Descent

Setup

The library is currently supported on both Linux and Windows.

Linux

  1. Install a C++ compiler. Both g++ and clang++ have been tested.
# For g++
sudo apt install g++
# For clang++
sudo apt install clang  
  2. Install CMake version 3.16 or newer. It can be downloaded from the official site: https://cmake.org/download/.
  3. If you do not wish to use GPU acceleration, go to step 4. Otherwise, continue with the following:
    1. Make sure that you have a CUDA-capable Nvidia GPU. Only those GPUs are supported by the library.
    2. Install the NVIDIA CUDA Toolkit using the installation guide.
    3. Verify that CUDA was installed by running nvcc -V in the terminal.
  4. Clone the repository using the following command:
git clone https://github.com/Jaswar/nnlib.git
  5. Build and install the library. Within the cloned repository, run:
cd scripts
sudo chmod +x build.sh
./build.sh

This will install the library in the ./install directory in the main directory of the cloned repository (so not inside scripts).

Windows

The following is the recommended way to set up the library on Windows. The Visual Studio setup is required to install CUDA and run the library in GPU-accelerated mode. Visual Studio is, however, not required for CPU-only mode; a C++ compiler such as Clang or GCC is sufficient.

  1. Download and install Visual Studio 2017 or higher.
  2. Enable Desktop development with C++ in the Visual Studio Installer.
  3. If you do not wish to use GPU-acceleration, go to step 4. Otherwise, continue with the steps below:
    1. Make sure that you have a CUDA-Capable Nvidia GPU. Only those GPUs are supported by the library.
    2. Install the NVIDIA CUDA Toolkit using the installation guide.
    3. Verify that CUDA was installed by running nvcc -V in the terminal.
  4. Clone the repository using the following command:
git clone https://github.com/Jaswar/nnlib.git
  5. It is recommended to build the library and the examples using the provided scripts in the scripts directory. Git Bash can be used for this purpose; if you do not have Git Bash installed, a tool such as Cygwin or MinGW is an alternative.

The library can then be installed using the following commands (inside the cloned repository):

cd scripts
./build.sh

This will install the library in the install folder in the main directory.

Running an example

  1. Install the library as described in the Setup.
  2. Build an example. Here we will build the MNIST example (assuming we start in the main directory):
cd scripts

# If you are on Linux give execute permissions to the file
sudo chmod +x build_example.sh

./build_example.sh -c Release -p <path_to_nnlib_install> mnist

Here, <path_to_nnlib_install> should be replaced with the absolute path to the install folder that was created in the library installation step. Running these commands will build the mnist example in ./examples/mnist/build.
  3. Run the example. It expects the absolute path to a MNIST_train.txt file, which can be downloaded from here, as the first and only argument. Run the example with the following command:

cd examples/mnist/build
./mnist_nnlib <path_to_MNIST_train.txt>

Documentation

The whole project is documented at https://jaswar.github.io/nnlib.

nnlib's People

Contributors

  • jaswar

Stargazers

  • 江小黑
  • Paul Huebner
  • Mikhail Vlasenko
nnlib's Issues

Activation function improvements

Some activation functions can be improved by using Tensor operations (instead of designated methods in the form of evaluators).

This is especially true for sigmoid and linear activation functions where both the forward and backward steps can be defined with tensor operations (ReLU would require a definition of 2 new tensor operations).

This will likely cause the activation functions to use more auxiliary space, so maybe some kind of basic memory manager could be used (that would contain some working spaces that are shared by activation functions/losses).

Methods to reshape a tensor

Create methods to enable a tensor to be reshaped:

  • Shape, size, and location of the tensor should be private variables with special methods for controlled access.
  • Create an expandDimension method that adds a new dimension of size 1 to the shape. Possibly also add a collapseDimension method to remove a dimension of size 1.
  • Create a reshape method to change the shape of the tensor. It will simply update the shape to the new value, so it should only allow new shapes that result in the same total size.

Metrics improvements

Concerns the following changes:

  • An abstract Metric class exists
  • All loss functions are metrics
  • Accuracy is a metric

New metrics, including:

  • Binary accuracy
  • Precision
  • Recall
  • TP/TN/FP/FN

Support for testing networks

This should include:

  • Method to split the X and y tensors into train, test, validation sets
  • Use validation set during training

Fix Readme

3rd point of Running an example is wrongly formatted.

Autograd support

Implement a backpropagation engine that could derive the gradient equations automatically based on the forward-propagation of data.

Printing improvement

Printing the state of the epoch (progress bar) at each iteration causes a serious performance issue. This should be fixed.

Move Session to a static object

Possibly move the session object to a static constant expression using constexpr. That way it will not have to be initialized anew with every tensor.

Remove HAS_CUDA

Remove the HAS_CUDA macro definition and replace it with add_definitions(-D__CUDA__) (or similar) if CUDA is found in the CMakeLists.txt file.

Improve imports

Simplify some of the imports, such as ../../../../include/activation.h in linear_activation.cpp; they should be changed to <activation.h>.
