
This project is forked from limuhit/imagecompression.


ImageCompression

This is the code for the paper "Learning Convolutional Networks for Content-weighted Image Compression".

This project is based on a modified version of the Caffe framework. Currently, we only offer the compiled pycaffe Python package for testing. The package was compiled under Windows 10 with VS 2015 and CUDA 8.0, and is available at "https://drive.google.com/open?id=0B-XAj3Bp3YhHbU5pdTNfc3l5OGs". After downloading it, put it into your Python library path and make sure that "import caffe" works. We recommend using Anaconda as the Python environment: simply move the "caffe" directory from the uncompressed folder into "AnacondaInstallPath/Lib/site-packages". After that, install the Python packages "numpy", "protobuf", "lmdb", and "cv2" before running the code, otherwise you may run into errors. These packages can be installed with pip or conda; for example, type "pip install numpy" on the command line to install numpy.
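Before running anything, it can help to verify that the dependencies import cleanly. The following sketch is not part of the repository; the package names follow the paragraph above (note that "cv2" is provided by the opencv-python distribution, and "protobuf" imports as "google.protobuf").

```python
# Sanity check: report which of the required packages are importable.
# This script is illustrative and not part of the original repository.
import importlib

def check_imports(names):
    """Return a dict mapping each package name to True if it imports."""
    status = {}
    for name in names:
        try:
            importlib.import_module(name)
            status[name] = True
        except ImportError:
            status[name] = False
    return status

if __name__ == "__main__":
    required = ["caffe", "numpy", "google.protobuf", "lmdb", "cv2"]
    for name, ok in check_imports(required).items():
        print(f"{name}: {'OK' if ok else 'MISSING'}")
```

If "caffe" is reported missing, double-check that the downloaded "caffe" directory sits directly inside "AnacondaInstallPath/Lib/site-packages".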

The following shows how to use the test code for our image compression model.

The file "test_imp.py" is used to test different models with different compression ratios. One model for one ratio. For each model, this file can generate the compressed image, calculate the PSNR metrics and put the compressed images under the directory "model/img". The importance map for each image is transformed as a black white picture and saved in the folder "model/img/imp".

The compression ratio of our model consists of two parts: one for the importance map and one for the binary codes. The file "create_lmdb_for_binary_codes.py" extracts the context of each binary code and stores the context cubes in an LMDB database, which is later used to estimate the symbol probabilities for arithmetic coding. This database must be created for each model before testing its compression ratio. The file "create_lmdb_for_imp_map.py" prepares the corresponding data for the importance map. Once the data is prepared, run "test_entropy_encoder.py" to calculate the compression ratios of the binary codes and the importance map. Adding the importance map ratio and the binary codes ratio gives the final ratio of our model.
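The rate bookkeeping described above can be sketched as follows. The function names and the per-pixel accounting are illustrative assumptions for this sketch, not the repository's API; "test_entropy_encoder.py" performs the actual measurement.

```python
# Illustrative sketch: the ideal arithmetic-coding cost of a symbol stream,
# and combining the two rate components into one bits-per-pixel figure.
import math

def ideal_code_length_bits(symbol_probs):
    """Ideal arithmetic-coding length: sum of -log2(p) over coded symbols."""
    return sum(-math.log2(p) for p in symbol_probs)

def total_bpp(imp_map_bits, binary_code_bits, height, width):
    """Final ratio = importance-map bits plus binary-code bits, per pixel."""
    return (imp_map_bits + binary_code_bits) / (height * width)
```

For example, two equiprobable binary symbols cost one bit each under an ideal coder, and the final rate is simply the sum of both bit budgets divided by the pixel count.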

If you have problems testing our model, please contact me at "[email protected]".

If you use the code, please cite the paper "Learning Convolutional Networks for Content-weighted Image Compression":

@article{li2017learning,
  title={Learning Convolutional Networks for Content-weighted Image Compression},
  author={Li, Mu and Zuo, Wangmeng and Gu, Shuhang and Zhao, Debin and Zhang, David},
  journal={arXiv preprint arXiv:1703.10553},
  year={2017}
}

