
StegaStamp: Invisible Hyperlinks in Physical Photographs

Project Page: http://www.matthewtancik.com/stegastamp

CVPR 2020

Matthew Tancik, Ben Mildenhall, Ren Ng
University of California, Berkeley

License: MIT License

Introduction

This repository is a code release for the accompanying arXiv report. The project explores hiding data in images while maintaining perceptual similarity. Our contribution is the ability to extract the data after the encoded image (StegaStamp) has been printed and photographed with a camera (steps that introduce image corruptions). This repository contains the code and pretrained models needed to replicate the results shown in the paper, as well as the code necessary to train the encoder and decoder models.

Citation

If you find our work useful, please consider citing:

    @inproceedings{2019stegastamp,
        title={StegaStamp: Invisible Hyperlinks in Physical Photographs},
        author={Tancik, Matthew and Mildenhall, Ben and Ng, Ren},
        booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
        year={2020}
    }

Installation

  • Clone repo and install submodules
git clone --recurse-submodules https://github.com/tancik/StegaStamp.git
cd StegaStamp
  • Install tensorflow (tested with tf 1.13)
  • Python 3 required
  • Download dependencies
pip install -r requirements.txt
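
To sanity check the environment after installation, a minimal sketch like the following (not part of the repo) confirms Python 3 and a TensorFlow 1.x build, matching the tested setup:

import sys
import tensorflow as tf

# The repo targets Python 3 and TensorFlow 1.x (tested with tf 1.13).
assert sys.version_info.major == 3, "Python 3 is required"
print("TensorFlow version:", tf.__version__)  # expect a 1.x release, e.g. 1.13.*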

Training

Encoder / Decoder

  • Set dataset path in train.py
TRAIN_PATH = DIR_OF_DATASET_IMAGES
  • Train model
bash scripts/base.sh EXP_NAME

The training is performed in train.py. There are a number of hyperparameters, many of which correspond to the augmentation parameters. scripts/base.sh provides a good starting place.
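
TRAIN_PATH should point at a directory of training images. A quick check such as the following (an illustrative sketch; the path is a placeholder) confirms the images are visible before launching a run:

import glob

# Placeholder path; set this constant in train.py to your own dataset directory.
TRAIN_PATH = '/data/train_images/'
files_list = glob.glob(TRAIN_PATH + '*.jpg') + glob.glob(TRAIN_PATH + '*.png')
print("Found %d training images" % len(files_list))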

Pretrained network

Run the following in the base directory to download the trained network used in the paper:

wget http://people.eecs.berkeley.edu/~tancik/stegastamp/saved_models.tar.xz
tar -xJf saved_models.tar.xz
rm saved_models.tar.xz

Detector

The training code for the detector model (used to segment StegaStamps) is not included in this repo. The model used in the paper was trained using the BiSeNet model released here. CROP_WIDTH and CROP_HEIGHT were set to 1024; all other parameters were left at their defaults. The dataset was generated by randomly placing warped StegaStamps onto larger images.
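
The snippet below is a hedged sketch (not the authors' released code) of that data-generation step: it warps a StegaStamp with a random homography, composites it onto a larger background image, and returns both the composite and the segmentation mask that a detector would be trained to predict.

import cv2
import numpy as np

def place_stamp(stamp, background, max_jitter=0.3):
    # stamp: HxWx3 StegaStamp image; background: larger HxWx3 image.
    h, w = stamp.shape[:2]
    H, W = background.shape[:2]
    # Perturb each corner of the stamp to simulate a perspective warp.
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    jitter = (np.random.rand(4, 2) - 0.5) * 2 * max_jitter * np.float32([w, h])
    # Random placement inside the background (assumes the background is larger than the stamp).
    offset = np.float32([np.random.randint(0, W - w), np.random.randint(0, H - h)])
    dst = np.float32(src + jitter + offset)
    M = cv2.getPerspectiveTransform(src, dst)
    warped = cv2.warpPerspective(stamp, M, (W, H))
    mask = cv2.warpPerspective(np.full((h, w), 255, np.uint8), M, (W, H))
    composite = background.copy()
    composite[mask > 0] = warped[mask > 0]
    return composite, mask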

The exported detector model can be downloaded with the following command:

wget http://people.eecs.berkeley.edu/~tancik/stegastamp/detector_models.tar.xz
tar -xJf detector_models.tar.xz
rm detector_models.tar.xz

Tensorboard

To visualize training, run the following command and navigate to http://localhost:6006 in your browser.

tensorboard --logdir logs

Encoding a Message

The script encode_image.py can be used to encode a message into an image or a directory of images. The default model expects a utf-8 encoded secret that is <= 7 characters (a 100-bit message, of which 56 bits are payload after ECC).
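
For reference, the size limit follows from simple bit accounting (the exact error-correction parameters are an assumption here): 7 utf-8 characters give 56 payload bits, which grow to the 100-bit message the encoder expects once error-correction parity and padding are added.

secret = "Hello"  # up to 7 utf-8 characters
payload_bits = len(secret.encode("utf-8")) * 8
print(payload_bits, "payload bits (must be <= 56); the encoder consumes a 100-bit message")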

Encode a message into an image:

python encode_image.py \
  saved_models/stegastamp_pretrained \
  --image test_im.png  \
  --save_dir out/ \
  --secret Hello

This will save both the StegaStamp and the residual that was applied to the original image.
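
To inspect the residual by eye, a small sketch like this (the output filenames are assumptions based on the example above) amplifies the difference, since the perturbation is designed to be perceptually subtle:

import numpy as np
from PIL import Image

# Filenames follow the example above; check out/ for the exact names encode_image.py writes.
enc_img = Image.open("out/test_hidden.png").convert("RGB")
orig_img = Image.open("test_im.png").convert("RGB").resize(enc_img.size)  # match the encoder's output size
diff = np.asarray(enc_img, np.int16) - np.asarray(orig_img, np.int16)
amplified = np.clip(diff * 5 + 128, 0, 255).astype(np.uint8)  # scale around mid-gray for visibility
Image.fromarray(amplified).save("out/residual_amplified.png")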

Decoding a Message

The script decode_image.py can be used to decode a message from a StegaStamp.

Example usage:

python decode_image.py \
  saved_models/stegastamp_pretrained \
  --image out/test_hidden.png

Detecting and Decoding

The script detector.py can be used to detect and decode StegaStamps in an image. This is useful in cases where multiple StegaStamps are present or a StegaStamp does not fill the frame of the image.

To use the detector, make sure to download the detector model as described above. The recommended input video resolution is 1920x1080.

python detector.py \
  --detector_model detector_models/stegastamp_detector \
  --decoder_model saved_models/stegastamp_pretrained \
  --video test_vid.mp4

Add the --save_video FILENAME flag to save out the results.

The --visualize_detector flag can be used to visualize the output of the detector network. The mask corresponds to the segmentation mask; the colored polygons are fit to this segmentation mask using a set of heuristics. The detector outputs can be noisy and are sensitive to the size of the StegaStamp. Further optimization of the detection network is not explored in the paper.
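
As an illustration of that polygon-fitting step (a sketch of the general technique, not the repo's exact heuristics), contours of the segmentation mask can be approximated and filtered down to plausible quadrilaterals:

import cv2

def fit_quads(mask, min_area=1000):
    # mask: uint8 binary segmentation output (255 where a StegaStamp is predicted).
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    quads = []
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue  # drop small, noisy detections
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) == 4:
            quads.append(approx.reshape(4, 2))
    return quads  # each quad can then be perspective-rectified and passed to the decoder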
