Language performance comparison for image classification with deep neural networks (fully connected, convolutional, MobileNet, ConvNeXt, among others), with GPU acceleration
C++ is obviously and undeniably faster than Python when run on the CPU. How do they compare, however, when all the heavy lifting is shifted to the GPU? Not to mention there are multiple frameworks for deep neural networks - which one is the fastest, and which one is the most tiresome to use? ...Does anyone still use Matlab?
Those, among others, are the questions that were keeping me awake at night, so I started this project in hopes of answering them once and for all.
This project is the basis of my BSc thesis. I guesstimate it is about half-finished. Until it is done, it stands as proof that I know how to install NVIDIA drivers on Linux (and also that I know PyTorch, LibTorch, and TensorFlow - but the first is way more impressive, isn't it?).
All benchmarks were run on a GTX 1050 with CUDA 11.8, with drivers suitable for Ubuntu, using:
- Python (3.11)
  - PyTorch
  - TensorFlow
  - NumPy
  - Pandas
  - multiprocessing
- C++ (14)
  - LibTorch
  - cuDNN
  - CUDA
- Matlab (R2023a)
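For context, the benchmarks boil down to timing model runs across these stacks. Below is a minimal sketch of such a timing harness - my own illustration, not the project's actual code, and the names (`benchmark`, `warmup`, `iters`) are made up for the example:

```python
import time
import statistics

def benchmark(fn, warmup=3, iters=10):
    """Time fn() over several iterations and return the mean in milliseconds.

    Note: if fn launches GPU work (e.g. via PyTorch), the device must be
    synchronized before reading the clock (torch.cuda.synchronize());
    otherwise only the asynchronous kernel launch gets measured.
    """
    for _ in range(warmup):  # warm-up runs: caches, cuDNN autotuning, JIT
        fn()
    times = []
    for _ in range(iters):
        start = time.perf_counter()
        fn()
        times.append((time.perf_counter() - start) * 1000.0)
    return statistics.mean(times)

# Example: time a trivial CPU-bound function
mean_ms = benchmark(lambda: sum(range(100_000)))
print(f"{mean_ms:.3f} ms per call")
```

The warm-up runs matter in practice: the first call into a framework tends to be dominated by one-off setup costs, which would otherwise skew the mean.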
Refer to chapters 5 and 6 in the final report.
Some plots are attached below, since they tend to draw attention.
TL;DR: PyTorch turned out to be not much slower than TensorFlow (and sometimes even faster), but was far nicer to work with - it's very pythonic. I can't recommend it enough.