
dessa-oss / deepfake-detection

534 stars · 17 watchers · 189 forks · 10.25 MB

Towards deepfake detection that actually works

Home Page: http://deepfake-detection.dessa.com/projects

License: MIT License

Dockerfile 2.08% Python 84.90% Shell 13.01%
machine-learning deep-learning deep-fakes pytorch resnet-18 resnet transfer-learning new-york-times deepfake-detection deepfake deepfakes deepfake-dataset faceforensics

deepfake-detection's Introduction

parallel coordinates plot

Read the technical deep dive: https://www.dessa.com/post/deepfake-detection-that-actually-works

Visual DeepFake Detection

In our recent article, we make the following contributions:

  • We show that the model proposed by the current state of the art in video manipulation detection (FaceForensics++) does not generalize to real-life videos randomly collected from YouTube.
  • We show the need for the detector to be constantly updated with real-world data, and propose an initial solution towards reliable deepfake video detection.

Our PyTorch implementation conducts extensive experiments demonstrating that the datasets produced by Google and detailed in the FaceForensics++ paper are not sufficient for making neural networks generalize to real-life face manipulation techniques. It also provides a current mitigation for this behavior, which relies on adding more data.

Our PyTorch model is based on a ResNet18 pre-trained on ImageNet, which we fine-tune to solve the deepfake detection problem. We also conduct large-scale experiments using Dessa's open-source scheduler and experiment manager, Atlas.

Setup

Prerequisites

To run the code, your system should meet the following requirements: RAM >= 32 GB, GPUs >= 1

Steps

  1. Install nvidia-docker.
  2. Install ffmpeg, e.g. sudo apt install ffmpeg.
  3. Clone this repository.
  4. If you haven't already, install Atlas.
  5. Once you've installed Atlas, activate your environment if you haven't already, and navigate to your project folder.

That's it, you're ready to go!
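The frames subfolders used later in the Datasets section are produced with ffmpeg (installed in step 2). A minimal sketch of that extraction step; the output paths, file naming, and the 1 frame-per-second rate here are illustrative assumptions, not the repo's exact settings:

```python
# Sketch: extract frames from a video with ffmpeg via subprocess.
import shutil
import subprocess
from pathlib import Path

def ffmpeg_frames_cmd(video: Path, out_dir: Path, fps: int = 1) -> list:
    # Build the ffmpeg invocation; %05d numbers the extracted frames.
    return [
        "ffmpeg", "-i", str(video),
        "-vf", f"fps={fps}",
        str(out_dir / "frame_%05d.png"),
    ]

def extract_frames(video: Path, out_dir: Path, fps: int = 1) -> None:
    out_dir.mkdir(parents=True, exist_ok=True)
    if shutil.which("ffmpeg") is None:
        raise RuntimeError("ffmpeg not found; see setup step 2")
    subprocess.run(ffmpeg_frames_cmd(video, out_dir, fps), check=True)

# Example invocation (paths are hypothetical):
cmd = ffmpeg_frames_cmd(Path("video.mp4"), Path("real/frames"))
```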

Datasets

Half of the dataset used in this project comes from the FaceForensics deepfake detection dataset.

To download this data, please make sure to fill out the Google Form to request access.

The dataset we collected from YouTube is accessible on S3 for download.

To automatically download and restructure both datasets, please execute:

bash restructure_data.sh faceforensics_download.py

Note: you need to have received the download script from the FaceForensics++ authors before executing the restructure script.

Note 2: we created restructure_data.sh to perform a split that replicates our exact experiments, available in the UI above; feel free to change the splits as you wish.
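For readers who want to change the splits, a deterministic train/val split over frame files can be sketched as below; the 10% validation fraction and the file naming are assumptions for illustration, not the script's exact values:

```python
# Sketch: reproducible random train/val split of a list of frame files.
import random

def split_frames(frame_paths, val_fraction=0.1, seed=42):
    # Sort first so the split is stable regardless of input order,
    # then shuffle deterministically and carve off the validation slice.
    paths = sorted(frame_paths)
    rng = random.Random(seed)
    rng.shuffle(paths)
    n_val = max(1, int(len(paths) * val_fraction))
    return paths[n_val:], paths[:n_val]  # (train, val)

# Example with hypothetical frame names:
train, val = split_frames([f"frame_{i:05d}.png" for i in range(100)])
```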

Walkthrough

Before starting to train/evaluate models, we first need to build the Docker image that our experiments will run in. We have already prepared a Dockerfile for this inside custom_docker_image. To build the image, execute the following commands in a terminal:

cd custom_docker_image
nvidia-docker build . -t atlas_ff

Note: if you change the image name, please make sure you also modify line 16 of job.config.yaml to match the new docker image name.

Inside job.config.yaml, change the data path on the host from /media/biggie2/FaceForensics/datasets/ to the absolute path of your datasets folder.

The folder containing your datasets should have the following structure:

datasets
├── augment_deepfake        (2)
│   ├── fake
│   │   └── frames
│   ├── real
│   │   └── frames
│   └── val
│       ├── fake
│       └── real
├── base_deepfake           (1)
│   ├── fake
│   │   └── frames
│   ├── real
│   │   └── frames
│   └── val
│       ├── fake
│       └── real
├── both_deepfake           (3)
│   ├── fake
│   │   └── frames
│   ├── real
│   │   └── frames
│   └── val
│       ├── fake
│       └── real
├── precomputed             (4)
└── T_deepfake              (0)
    ├── manipulated_sequences
    │   ├── DeepFakeDetection
    │   ├── Deepfakes
    │   ├── Face2Face
    │   ├── FaceSwap
    │   └── NeuralTextures
    └── original_sequences
        ├── actors
        └── youtube

Notes:

  • (0) is the dataset downloaded using the FaceForensics repo scripts.
  • (1) is a reshaped version of the FaceForensics data matching the structure expected by the codebase; subfolders called frames contain frames extracted with ffmpeg.
  • (2) is the augmented dataset, collected from YouTube, available on S3.
  • (3) is the combination of the base and augmented datasets.
  • (4) precomputed is created automatically during training; it holds cached cropped frames.
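Before launching training, it can help to verify that your datasets folder matches the layout above. A small sanity-check sketch (not part of the repo) that reports which expected subdirectories are missing:

```python
# Sketch: check the datasets tree for the expected subfolders.
from pathlib import Path

# Relative paths each dataset variant should contain, per the tree above.
EXPECTED = [
    "{name}/fake/frames",
    "{name}/real/frames",
    "{name}/val/fake",
    "{name}/val/real",
]

def missing_dirs(datasets_root,
                 names=("base_deepfake", "augment_deepfake", "both_deepfake")):
    root = Path(datasets_root)
    return [
        rel.format(name=name)
        for name in names
        for rel in EXPECTED
        if not (root / rel.format(name=name)).is_dir()
    ]
```

Running `missing_dirs("/path/to/datasets")` returns an empty list when the structure is complete.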

Then, to run all the experiments presented in the article, launch the script hparams_search.py using:

python hparams_search.py
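hparams_search.py samples hyperparameters through a generate_params() helper before submitting jobs; a hypothetical stand-in is sketched below. The parameter names and ranges here are assumptions for illustration; the real ones live in the script itself.

```python
# Sketch: random hyperparameter sampling for a search loop.
import random

def generate_params(seed=None):
    rng = random.Random(seed)
    return {
        # Log-uniform sample between 1e-5 and 1e-3.
        "learning_rate": 10 ** rng.uniform(-5, -3),
        "batch_size": rng.choice([16, 32, 64]),
        "weight_decay": 10 ** rng.uniform(-6, -3),
        "n_epochs": rng.randint(5, 20),
    }
```

Each sampled dict would then be passed to one submitted job, so the search explores a different configuration per job.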

Results

In the following pictures, the title of each subplot has the form real_prob, fake_prob | prediction | label.
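That title format can be reproduced from a model's two-class output roughly as follows; this sketch assumes raw logits in (real, fake) order, which is an assumption about the codebase rather than its documented convention:

```python
# Sketch: format a "real_prob, fake_prob | prediction | label" subplot title.
import math

def subplot_title(logits, label):
    # Softmax over the two logits gives (real_prob, fake_prob).
    exps = [math.exp(x) for x in logits]
    real_prob, fake_prob = (e / sum(exps) for e in exps)
    prediction = "fake" if fake_prob > real_prob else "real"
    return f"{real_prob:.2f}, {fake_prob:.2f} | {prediction} | {label}"
```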

Model trained on FaceForensics++ dataset

For models trained on the paper dataset alone, we notice that the model only learns to detect the manipulation techniques mentioned in the paper and misses all the manipulations present in the real-world data.


Model trained on Youtube dataset

Models trained on the YouTube data alone learn to detect real-world deepfakes, and also pick up the easy deepfakes in the paper dataset. These models, however, fail to detect any other type of manipulation (such as NeuralTextures).


Model trained on Paper + Youtube dataset

Finally, models trained on the combination of both datasets learn to detect both the real-world manipulation techniques and the other methods mentioned in the FaceForensics++ paper.


For a more in-depth explanation of these results, please refer to the article we published. More results can be seen in the interactive UI.

Help improve this technology

Please feel free to fork this work and keep pushing on it.

If you also want to help improve the deepfake detection datasets, please share your real/forged samples at [email protected].

LICENSE

© 2020 Square, Inc. ATLAS, DESSA, the Dessa Logo, and others are trademarks of Square, Inc. All third party names and trademarks are properties of their respective owners and are used for identification purposes only.

deepfake-detection's People

Contributors

marctyndel, rayhane-mamah


deepfake-detection's Issues

code execution backdoor

We discovered a malicious backdoor in the project's dependencies; affected versions are 56ccf40~c2439e3855df9296a2476e940adf35afbb833c20. The backdoor comes from the request package: the DeepFake-Detection/custom_docker_image/requirements.txt file lists request as a dependency.


Even though request has been deleted from PyPI, many mirror sites have not completely removed the package, so it can still be installed. For example: https://mirrors.neusoft.edu.cn/pypi/web/simple/request/

Downloading and installing the package from such a mirror site leaves the system vulnerable.


Analysis of the malicious functionality of the request package:

1. Remote download of malicious code.
When the request package is installed, its setup.py is executed automatically. That setup.py contains the logic for the attacker to remotely download and execute malicious code. The C2 domain name is encoded and obfuscated; the decrypted C2 address is https://dexy.top/request/check.so.
2. Dropping and persisting a remote-access trojan.
The malicious code loaded remotely during installation of the request package does two things:
- It drops the remote-access trojan into the .uds folder of the current user's HOME directory, under the name _err.log (for example /root/.uds/_err.log). The trojan script is base64-encoded and compressed, which reduces its size and hinders analysis.
- It implants malicious backdoor commands in .bashrc to achieve persistence.
3. Issuing credential-stealing instructions.
The attacker uses the remote-access trojan to issue Python instructions that steal sensitive information (Coinbase account credentials). Decrypted, the stealing instruction requests the C2 service http://dexy.top/x.pyx and remotely loads the stealer trojan. Among its functions, the remotely loaded stealer harvests browser cookies, Coinbase accounts and passwords, and other secrets.

Repair suggestion: replace request in requirements.txt with requests
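The suggested repair can be complemented by a simple guard against the typosquatted name; a minimal sketch (suggested here, not shipped with the repo) that flags "request" in a requirements file:

```python
# Sketch: flag known-typosquatted package names in requirements text.
def flag_typosquats(requirements_text, bad=("request",)):
    flagged = []
    for line in requirements_text.splitlines():
        # Strip comments and simple == pins, keeping the bare package name.
        # (A real checker would handle all PEP 508 specifier forms.)
        name = line.split("#")[0].split("==")[0].strip().lower()
        if name in bad:
            flagged.append(name)
    return flagged
```

Running this over custom_docker_image/requirements.txt before building the image would catch the bad dependency.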

Problem building docker container

I am trying to replicate the test, but it is failing at
nvidia-docker build . -t atlas_ff

at the command:
RUN pip install -v --no-cache-dir apex/

I am running on an Azure NC6_Promo instance with Ubuntu 18.04 and NVIDIA K80. Running python 3.6.12 in a conda environment. nvidia-smi output at end of this post. I could successfully install apex on the host instance but the docker build is failing. Help!

Error is:

Processing ./apex
  Created temporary directory: /tmp/pip-req-build-2yeom3ry
  Added file:///workspace/apex to build tracker '/tmp/pip-req-tracker-odo6vywe'
  Running setup.py (path:/tmp/pip-req-build-2yeom3ry/setup.py) egg_info for package from file:///workspace/apex
  Created temporary directory: /tmp/pip-pip-egg-info-95qvevi0
  Running command python setup.py egg_info
  Traceback (most recent call last):
    File "<string>", line 1, in <module>
    File "/tmp/pip-req-build-2yeom3ry/setup.py", line 35, in <module>
      _, bare_metal_major, _ = get_cuda_bare_metal_version(cpp_extension.CUDA_HOME)
    File "/tmp/pip-req-build-2yeom3ry/setup.py", line 14, in get_cuda_bare_metal_version
      raw_output = subprocess.check_output([cuda_dir + "/bin/nvcc", "-V"], universal_newlines=True)
  TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'

Warning: Torch did not find available GPUs on this system.
 If your intention is to cross-compile, this is not an error.
By default, Apex will cross-compile for Pascal (compute capabilities 6.0, 6.1, 6.2),
Volta (compute capability 7.0), Turing (compute capability 7.5),
and, if the CUDA version is >= 11.0, Ampere (compute capability 8.0).
If you wish to cross-compile for a single specific architecture,
export TORCH_CUDA_ARCH_LIST="compute capability" before running setup.py.

ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
Exception information:
Traceback (most recent call last):
  File "/opt/conda/lib/python3.6/site-packages/pip/_internal/cli/base_command.py", line 228, in _main
    status = self.run(options, args)
  File "/opt/conda/lib/python3.6/site-packages/pip/_internal/cli/req_command.py", line 182, in wrapper
    return func(self, options, args)
  File "/opt/conda/lib/python3.6/site-packages/pip/_internal/commands/install.py", line 324, in run
    reqs, check_supported_wheels=not options.target_dir
  File "/opt/conda/lib/python3.6/site-packages/pip/_internal/resolution/legacy/resolver.py", line 183, in resolve
    discovered_reqs.extend(self._resolve_one(requirement_set, req))
  File "/opt/conda/lib/python3.6/site-packages/pip/_internal/resolution/legacy/resolver.py", line 388, in _resolve_one
    abstract_dist = self._get_abstract_dist_for(req_to_install)
  File "/opt/conda/lib/python3.6/site-packages/pip/_internal/resolution/legacy/resolver.py", line 340, in _get_abstract_dist_for
    abstract_dist = self.preparer.prepare_linked_requirement(req)
  File "/opt/conda/lib/python3.6/site-packages/pip/_internal/operations/prepare.py", line 483, in prepare_linked_requirement
    req, self.req_tracker, self.finder, self.build_isolation,
  File "/opt/conda/lib/python3.6/site-packages/pip/_internal/operations/prepare.py", line 91, in _get_prepared_distribution
    abstract_dist.prepare_distribution_metadata(finder, build_isolation)
  File "/opt/conda/lib/python3.6/site-packages/pip/_internal/distributions/sdist.py", line 40, in prepare_distribution_metadata
    self.req.prepare_metadata()
  File "/opt/conda/lib/python3.6/site-packages/pip/_internal/req/req_install.py", line 555, in prepare_metadata
    self.metadata_directory = self._generate_metadata()
  File "/opt/conda/lib/python3.6/site-packages/pip/_internal/req/req_install.py", line 535, in _generate_metadata
    details=self.name or "from {}".format(self.link)
  File "/opt/conda/lib/python3.6/site-packages/pip/_internal/operations/build/metadata_legacy.py", line 73, in generate_metadata
    command_desc='python setup.py egg_info',
  File "/opt/conda/lib/python3.6/site-packages/pip/_internal/utils/subprocess.py", line 242, in call_subprocess
    raise InstallationError(exc_msg)
pip._internal.exceptions.InstallationError: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.87.01 Driver Version: 418.87.01 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla K80 On | 00000001:00:00.0 Off | 0 |
| N/A 49C P8 28W / 149W | 0MiB / 11441MiB | 0% Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+

Migrate to python 3+

There are a lot of issues when using Python 2.7, such as not being able to use foundations. Why don't you make it Python 3 compatible?

foundations.submit broken

Hi,

I'm using your custom_docker_image to repeat the experiment described at https://www.dessa.com/post/deepfake-detection-that-actually-works. However, the Foundations package I found on PyPI does not seem to have a .submit member, and I can't find another foundations module that contains it.

It's regarding this section in hparams_search.py.

for job_ in range(NUM_JOBS):
    print "packaging job " + str(job_)
    hyper_params = generate_params()
    print hyper_params
    Foundations.submit(scheduler_config='scheduler', job_directory='.',
                       command='main.py', params=hyper_params,
                       stream_job_logs=False)

403 Forbidden Error in link

The link in restructure_data.sh to the file for downloading the prestructured YouTube data
THIS LINK

is giving a 403 Forbidden (access denied) error.
How can I get around this?

Youtube Video URL?

Hi, thanks for creating this repository. Would it be possible to provide a list of URLs of the DeepFake videos downloaded from YouTube? Thanks!

CMake must be installed to build the following extensions: dlib

(conda) alex@G5-5587:~/DeepFake-Detection/custom_docker_image$ nvidia-docker build . -t atlas_ff
Sending build context to Docker daemon 1.942MB
Step 1/15 : FROM pytorch/pytorch:1.3-cuda10.1-cudnn7-runtime
---> ba2da111b833
Step 2/15 : LABEL maintainer "NVIDIA CORPORATION [email protected]"
---> Using cache
---> 1778b16c889e
Step 3/15 : RUN apt-get update && apt-get install -y wget git libsm6 libxext6 libxrender-dev libgtk2.0-dev && rm -rf /var/lib/apt/lists/*
---> Using cache
---> 0474a3577e36
Step 4/15 : RUN apt-get update && apt-get upgrade -y && apt-get install bzip2
---> Using cache
---> 703132040c49
Step 5/15 : RUN wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda.sh
---> Using cache
---> 4b283d8d79b1
Step 6/15 : RUN bash miniconda.sh -b -p /miniconda
---> Using cache
---> 73cd40916b1a
Step 7/15 : RUN /bin/bash -c "source ~/.bashrc"
---> Using cache
---> 26aa20bb0b99
Step 8/15 : RUN conda init
---> Using cache
---> 8b9a20aba2bc
Step 9/15 : RUN /bin/bash -c "source ~/.bashrc"
---> Using cache
---> 77e20334a1d7
Step 10/15 : RUN conda remove -c anaconda pyyaml -y
---> Using cache
---> d361c1c04368
Step 11/15 : COPY requirements.txt /tmp
---> Using cache
---> f797667b3857
Step 12/15 : RUN pip install --no-cache-dir -r /tmp/requirements.txt && rm /tmp/requirements.txt
---> Running in 59d72d57cd2f
Collecting numpy==1.17.3
Downloading numpy-1.17.3-cp36-cp36m-manylinux1_x86_64.whl (20.0 MB)
Collecting dlib==19.15.0
Downloading dlib-19.15.0.tar.gz (3.3 MB)
Collecting pathlib==1.0.1
Downloading pathlib-1.0.1.tar.gz (49 kB)
Collecting Pillow==6.2.0
Downloading Pillow-6.2.0-cp36-cp36m-manylinux1_x86_64.whl (2.1 MB)
Collecting tqdm==4.25.0
Downloading tqdm-4.25.0-py2.py3-none-any.whl (43 kB)
Collecting scikit-learn==0.21.3
Downloading scikit_learn-0.21.3-cp36-cp36m-manylinux1_x86_64.whl (6.7 MB)
Collecting matplotlib==3.1.1
Downloading matplotlib-3.1.1-cp36-cp36m-manylinux1_x86_64.whl (13.1 MB)
Requirement already satisfied: requests in /opt/conda/lib/python3.6/site-packages (from -r /tmp/requirements.txt (line 8)) (2.24.0)
Collecting jsonschema==3.0.2
Downloading jsonschema-3.0.2-py2.py3-none-any.whl (54 kB)
Collecting dill==0.2.8.2
Downloading dill-0.2.8.2.tar.gz (150 kB)
Collecting google-api-python-client==1.7.3
Downloading google_api_python_client-1.7.3-py2.py3-none-any.whl (55 kB)
Collecting google-auth-httplib2==0.0.3
Downloading google_auth_httplib2-0.0.3-py2.py3-none-any.whl (6.3 kB)
Collecting google-cloud-storage==1.10.0
Downloading google_cloud_storage-1.10.0-py2.py3-none-any.whl (52 kB)
Collecting PyYAML==5.1.2
Downloading PyYAML-5.1.2.tar.gz (265 kB)
Collecting freezegun==0.3.12
Downloading freezegun-0.3.12-py2.py3-none-any.whl (12 kB)
Collecting flask-restful==0.3.7
Downloading Flask_RESTful-0.3.7-py2.py3-none-any.whl (24 kB)
Collecting Flask==1.1.1
Downloading Flask-1.1.1-py2.py3-none-any.whl (94 kB)
Collecting Flask-Cors==3.0.8
Downloading Flask_Cors-3.0.8-py2.py3-none-any.whl (14 kB)
Collecting pandas==0.23.3
Downloading pandas-0.23.3-cp36-cp36m-manylinux1_x86_64.whl (8.9 MB)
Collecting promise==2.2.1
Downloading promise-2.2.1.tar.gz (19 kB)
Collecting redis==3.3.11
Downloading redis-3.3.11-py2.py3-none-any.whl (66 kB)
Collecting slackclient==1.3.0
Downloading slackclient-1.3.0-py2.py3-none-any.whl (18 kB)
Collecting Werkzeug==0.16.0
Downloading Werkzeug-0.16.0-py2.py3-none-any.whl (327 kB)
Collecting boto3==1.9.86
Downloading boto3-1.9.86-py2.py3-none-any.whl (128 kB)
Collecting boto==2.49.0
Downloading boto-2.49.0-py2.py3-none-any.whl (1.4 MB)
Collecting python-jose==3.0.1
Downloading python_jose-3.0.1-py2.py3-none-any.whl (25 kB)
Collecting python-keycloak==0.17.6
Downloading python-keycloak-0.17.6.tar.gz (18 kB)
Collecting tabulate==0.8.3
Downloading tabulate-0.8.3.tar.gz (46 kB)
Collecting docker==4.0.2
Downloading docker-4.0.2-py2.py3-none-any.whl (138 kB)
Collecting joblib>=0.11
Downloading joblib-1.0.1-py3-none-any.whl (303 kB)
Collecting scipy>=0.17.0
Downloading scipy-1.5.4-cp36-cp36m-manylinux1_x86_64.whl (25.9 MB)
Collecting kiwisolver>=1.0.1
Downloading kiwisolver-1.3.1-cp36-cp36m-manylinux1_x86_64.whl (1.1 MB)
Collecting pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1
Downloading pyparsing-2.4.7-py2.py3-none-any.whl (67 kB)
Collecting cycler>=0.10
Downloading cycler-0.10.0-py2.py3-none-any.whl (6.5 kB)
Collecting python-dateutil>=2.1
Downloading python_dateutil-2.8.1-py2.py3-none-any.whl (227 kB)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /opt/conda/lib/python3.6/site-packages (from requests->-r /tmp/requirements.txt (line 8)) (1.25.11)
Requirement already satisfied: idna<3,>=2.5 in /opt/conda/lib/python3.6/site-packages (from requests->-r /tmp/requirements.txt (line 8)) (2.10)
Requirement already satisfied: chardet<4,>=3.0.2 in /opt/conda/lib/python3.6/site-packages (from requests->-r /tmp/requirements.txt (line 8)) (3.0.4)
Requirement already satisfied: certifi>=2017.4.17 in /opt/conda/lib/python3.6/site-packages (from requests->-r /tmp/requirements.txt (line 8)) (2020.6.20)
Requirement already satisfied: six>=1.11.0 in /opt/conda/lib/python3.6/site-packages (from jsonschema==3.0.2->-r /tmp/requirements.txt (line 9)) (1.15.0)
Collecting pyrsistent>=0.14.0
Downloading pyrsistent-0.18.0-cp36-cp36m-manylinux1_x86_64.whl (117 kB)
Requirement already satisfied: setuptools in /opt/conda/lib/python3.6/site-packages (from jsonschema==3.0.2->-r /tmp/requirements.txt (line 9)) (50.3.0.post20201006)
Collecting attrs>=17.4.0
Downloading attrs-21.2.0-py2.py3-none-any.whl (53 kB)
Collecting httplib2<1dev,>=0.9.2
Downloading httplib2-0.19.1-py3-none-any.whl (95 kB)
Collecting google-auth>=1.4.1
Downloading google_auth-1.32.1-py2.py3-none-any.whl (147 kB)
Collecting uritemplate<4dev,>=3.0.0
Downloading uritemplate-3.0.1-py2.py3-none-any.whl (15 kB)
Collecting google-api-core<2.0.0dev,>=0.1.1
Downloading google_api_core-1.31.0-py2.py3-none-any.whl (93 kB)
Collecting google-resumable-media>=0.3.1
Downloading google_resumable_media-1.3.1-py2.py3-none-any.whl (75 kB)
Collecting google-cloud-core<0.29dev,>=0.28.0
Downloading google_cloud_core-0.28.1-py2.py3-none-any.whl (25 kB)
Collecting pytz
Downloading pytz-2021.1-py2.py3-none-any.whl (510 kB)
Collecting aniso8601>=0.82
Downloading aniso8601-9.0.1-py2.py3-none-any.whl (52 kB)
Collecting click>=5.1
Downloading click-8.0.1-py3-none-any.whl (97 kB)
Collecting itsdangerous>=0.24
Downloading itsdangerous-2.0.1-py3-none-any.whl (18 kB)
Collecting Jinja2>=2.10.1
Downloading Jinja2-3.0.1-py3-none-any.whl (133 kB)
Collecting websocket-client<1.0a0,>=0.35
Downloading websocket_client-0.59.0-py2.py3-none-any.whl (67 kB)
Collecting s3transfer<0.2.0,>=0.1.10
Downloading s3transfer-0.1.13-py2.py3-none-any.whl (59 kB)
Collecting botocore<1.13.0,>=1.12.86
Downloading botocore-1.12.253-py2.py3-none-any.whl (5.7 MB)
Collecting jmespath<1.0.0,>=0.7.1
Downloading jmespath-0.10.0-py2.py3-none-any.whl (24 kB)
Collecting future<1.0
Downloading future-0.18.2.tar.gz (829 kB)
Collecting ecdsa<1.0
Downloading ecdsa-0.17.0-py2.py3-none-any.whl (119 kB)
Collecting rsa
Downloading rsa-4.7.2-py3-none-any.whl (34 kB)
Collecting pyasn1-modules>=0.2.1
Downloading pyasn1_modules-0.2.8-py2.py3-none-any.whl (155 kB)
Collecting cachetools<5.0,>=2.0.0
Downloading cachetools-4.2.2-py3-none-any.whl (11 kB)
Collecting packaging>=14.3
Downloading packaging-21.0-py3-none-any.whl (40 kB)
Collecting googleapis-common-protos<2.0dev,>=1.6.0
Downloading googleapis_common_protos-1.53.0-py2.py3-none-any.whl (198 kB)
Collecting protobuf>=3.12.0
Downloading protobuf-3.17.3-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl (1.0 MB)
Collecting google-crc32c<2.0dev,>=1.0; python_version >= "3.5"
Downloading google_crc32c-1.1.2-cp36-cp36m-manylinux2014_x86_64.whl (38 kB)
Collecting importlib-metadata; python_version < "3.8"
Downloading importlib_metadata-4.6.1-py3-none-any.whl (17 kB)
Collecting MarkupSafe>=2.0
Downloading MarkupSafe-2.0.1-cp36-cp36m-manylinux2010_x86_64.whl (30 kB)
Collecting docutils<0.16,>=0.10
Downloading docutils-0.15.2-py3-none-any.whl (547 kB)
Collecting pyasn1>=0.1.3
Downloading pyasn1-0.4.8-py2.py3-none-any.whl (77 kB)
Requirement already satisfied: cffi>=1.0.0 in /opt/conda/lib/python3.6/site-packages (from google-crc32c<2.0dev,>=1.0; python_version >= "3.5"->google-resumable-media>=0.3.1->google-cloud-storage==1.10.0->-r /tmp/requirements.txt (line 13)) (1.14.0)
Collecting zipp>=0.5
Downloading zipp-3.5.0-py3-none-any.whl (5.7 kB)
Collecting typing-extensions>=3.6.4; python_version < "3.8"
Downloading typing_extensions-3.10.0.0-py3-none-any.whl (26 kB)
Requirement already satisfied: pycparser in /opt/conda/lib/python3.6/site-packages (from cffi>=1.0.0->google-crc32c<2.0dev,>=1.0; python_version >= "3.5"->google-resumable-media>=0.3.1->google-cloud-storage==1.10.0->-r /tmp/requirements.txt (line 13)) (2.20)
Building wheels for collected packages: dlib, pathlib, dill, PyYAML, promise, python-keycloak, tabulate, future
Building wheel for dlib (setup.py): started
Building wheel for dlib (setup.py): finished with status 'error'
ERROR: Command errored out with exit status 1:
command: /opt/conda/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-eg8b9tta/dlib/setup.py'"'"'; file='"'"'/tmp/pip-install-eg8b9tta/dlib/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(file);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-4dcee7yg
cwd: /tmp/pip-install-eg8b9tta/dlib/
Complete output (53 lines):
running bdist_wheel
running build
running build_py
package init file 'dlib/__init__.py' not found (or not a regular file)
running build_ext
Traceback (most recent call last):
File "/tmp/pip-install-eg8b9tta/dlib/setup.py", line 118, in get_cmake_version
out = subprocess.check_output(['cmake', '--version'])
File "/opt/conda/lib/python3.6/subprocess.py", line 356, in check_output
**kwargs).stdout
File "/opt/conda/lib/python3.6/subprocess.py", line 423, in run
with Popen(*popenargs, **kwargs) as process:
File "/opt/conda/lib/python3.6/subprocess.py", line 729, in __init__
restore_signals, start_new_session)
File "/opt/conda/lib/python3.6/subprocess.py", line 1364, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'cmake': 'cmake'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-install-eg8b9tta/dlib/setup.py", line 257, in <module>
'Topic :: Software Development',
File "/opt/conda/lib/python3.6/site-packages/setuptools/__init__.py", line 153, in setup
return distutils.core.setup(**attrs)
File "/opt/conda/lib/python3.6/distutils/core.py", line 148, in setup
dist.run_commands()
File "/opt/conda/lib/python3.6/distutils/dist.py", line 955, in run_commands
self.run_command(cmd)
File "/opt/conda/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/opt/conda/lib/python3.6/site-packages/wheel/bdist_wheel.py", line 290, in run
self.run_command('build')
File "/opt/conda/lib/python3.6/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/opt/conda/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/opt/conda/lib/python3.6/distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/opt/conda/lib/python3.6/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/opt/conda/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/tmp/pip-install-eg8b9tta/dlib/setup.py", line 127, in run
cmake_version = self.get_cmake_version()
File "/tmp/pip-install-eg8b9tta/dlib/setup.py", line 123, in get_cmake_version
"\n
*****************************************************************\n")
RuntimeError:


CMake must be installed to build the following extensions: dlib



ERROR: Failed building wheel for dlib
Running setup.py clean for dlib
Building wheel for pathlib (setup.py): started
Building wheel for pathlib (setup.py): finished with status 'done'
Created wheel for pathlib: filename=pathlib-1.0.1-py3-none-any.whl size=14348 sha256=885ebf81f964aa087218ca21360c47a5172c8d305fd376b8de1c7da3b0b5982d
Stored in directory: /tmp/pip-ephem-wheel-cache-p1up0y1v/wheels/e1/32/91/afe2cabe6f77819de11759f2a07d538cd521ef3a9dd81ba0b4
Building wheel for dill (setup.py): started
Building wheel for dill (setup.py): finished with status 'done'
Created wheel for dill: filename=dill-0.2.8.2-py3-none-any.whl size=76450 sha256=0f4359d78bf10161226a5fa5898490ceb1f95af283c5b180340ccac4ecc20c95
Stored in directory: /tmp/pip-ephem-wheel-cache-p1up0y1v/wheels/b7/c6/4a/b50701e335c856eeb495d321cdbe8020386461de08b31e63d6
Building wheel for PyYAML (setup.py): started
Building wheel for PyYAML (setup.py): finished with status 'done'
Created wheel for PyYAML: filename=PyYAML-5.1.2-cp36-cp36m-linux_x86_64.whl size=44103 sha256=53b9059b2cd42f611c12125ffdeb243c5b9074373fc03d4de965d79c9dd10c1c
Stored in directory: /tmp/pip-ephem-wheel-cache-p1up0y1v/wheels/d8/9b/e7/75af463b873c119dd444151fc54a8e190c87993593e1fa194a
Building wheel for promise (setup.py): started
Building wheel for promise (setup.py): finished with status 'done'
Created wheel for promise: filename=promise-2.2.1-py3-none-any.whl size=21290 sha256=a6e9e0fd9d07313d9f91789201a19ac407c736acb7a4b104401eef73a2201e6c
Stored in directory: /tmp/pip-ephem-wheel-cache-p1up0y1v/wheels/aa/7b/df/e774aae29b66b522e411f69c9a97e2e37ab541ad877069df58
Building wheel for python-keycloak (setup.py): started
Building wheel for python-keycloak (setup.py): finished with status 'done'
Created wheel for python-keycloak: filename=python_keycloak-0.17.6-py3-none-any.whl size=26886 sha256=23718f53b24fe8fd697489d50d0ebb699d7041797a6158d50658dece709456ce
Stored in directory: /tmp/pip-ephem-wheel-cache-p1up0y1v/wheels/90/b4/44/7a78a6bdc1f263977d4b3b21bf44199b377691471b8c0a8dc0
Building wheel for tabulate (setup.py): started
Building wheel for tabulate (setup.py): finished with status 'done'
Created wheel for tabulate: filename=tabulate-0.8.3-py3-none-any.whl size=23377 sha256=a5b811736e9506c4930de858c505df4a488ed902e12efba23135218392756b92
Stored in directory: /tmp/pip-ephem-wheel-cache-p1up0y1v/wheels/20/2a/38/bbf076580de21676f282b7a40156184b9ef5bc8ebca4dbddb1
Building wheel for future (setup.py): started
Building wheel for future (setup.py): finished with status 'done'
Created wheel for future: filename=future-0.18.2-py3-none-any.whl size=491059 sha256=00ab083d63f2d84e330e431c8bd97065521cd5c71a6bfb3ffa73e7474bee7398
Stored in directory: /tmp/pip-ephem-wheel-cache-p1up0y1v/wheels/6e/9c/ed/4499c9865ac1002697793e0ae05ba6be33553d098f3347fb94
Successfully built pathlib dill PyYAML promise python-keycloak tabulate future
Failed to build dlib
Installing collected packages: numpy, dlib, pathlib, Pillow, tqdm, joblib, scipy, scikit-learn, kiwisolver, pyparsing, cycler, python-dateutil, matplotlib, pyrsistent, attrs, jsonschema, dill, httplib2, pyasn1, pyasn1-modules, cachetools, rsa, google-auth, google-auth-httplib2, uritemplate, google-api-python-client, packaging, pytz, protobuf, googleapis-common-protos, google-api-core, google-crc32c, google-resumable-media, google-cloud-core, google-cloud-storage, PyYAML, freezegun, zipp, typing-extensions, importlib-metadata, click, Werkzeug, itsdangerous, MarkupSafe, Jinja2, Flask, aniso8601, flask-restful, Flask-Cors, pandas, promise, redis, websocket-client, slackclient, jmespath, docutils, botocore, s3transfer, boto3, boto, future, ecdsa, python-jose, python-keycloak, tabulate, docker
Attempting uninstall: numpy
Found existing installation: numpy 1.17.2
Uninstalling numpy-1.17.2:
Successfully uninstalled numpy-1.17.2
Running setup.py install for dlib: started
Running setup.py install for dlib: finished with status 'error'
ERROR: Command errored out with exit status 1:
command: /opt/conda/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-eg8b9tta/dlib/setup.py'"'"'; file='"'"'/tmp/pip-install-eg8b9tta/dlib/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(file);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' install --record /tmp/pip-record-low0q4rl/install-record.txt --single-version-externally-managed --compile --install-headers /opt/conda/include/python3.6m/dlib
cwd: /tmp/pip-install-eg8b9tta/dlib/
Complete output (55 lines):
running install
running build
running build_py
package init file 'dlib/__init__.py' not found (or not a regular file)
running build_ext
Traceback (most recent call last):
File "/tmp/pip-install-eg8b9tta/dlib/setup.py", line 118, in get_cmake_version
out = subprocess.check_output(['cmake', '--version'])
File "/opt/conda/lib/python3.6/subprocess.py", line 356, in check_output
**kwargs).stdout
File "/opt/conda/lib/python3.6/subprocess.py", line 423, in run
with Popen(*popenargs, **kwargs) as process:
File "/opt/conda/lib/python3.6/subprocess.py", line 729, in __init__
restore_signals, start_new_session)
File "/opt/conda/lib/python3.6/subprocess.py", line 1364, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'cmake': 'cmake'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/tmp/pip-install-eg8b9tta/dlib/setup.py", line 257, in <module>
    'Topic :: Software Development',
  File "/opt/conda/lib/python3.6/site-packages/setuptools/__init__.py", line 153, in setup
    return distutils.core.setup(**attrs)
  File "/opt/conda/lib/python3.6/distutils/core.py", line 148, in setup
    dist.run_commands()
  File "/opt/conda/lib/python3.6/distutils/dist.py", line 955, in run_commands
    self.run_command(cmd)
  File "/opt/conda/lib/python3.6/distutils/dist.py", line 974, in run_command
    cmd_obj.run()
  File "/opt/conda/lib/python3.6/site-packages/setuptools/command/install.py", line 61, in run
    return orig.install.run(self)
  File "/opt/conda/lib/python3.6/distutils/command/install.py", line 545, in run
    self.run_command('build')
  File "/opt/conda/lib/python3.6/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/opt/conda/lib/python3.6/distutils/dist.py", line 974, in run_command
    cmd_obj.run()
  File "/opt/conda/lib/python3.6/distutils/command/build.py", line 135, in run
    self.run_command(cmd_name)
  File "/opt/conda/lib/python3.6/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/opt/conda/lib/python3.6/distutils/dist.py", line 974, in run_command
    cmd_obj.run()
  File "/tmp/pip-install-eg8b9tta/dlib/setup.py", line 127, in run
    cmake_version = self.get_cmake_version()
  File "/tmp/pip-install-eg8b9tta/dlib/setup.py", line 123, in get_cmake_version
    "\n*******************************************************************\n")
RuntimeError:
*******************************************************************
 CMake must be installed to build the following extensions: dlib
*******************************************************************

----------------------------------------

ERROR: Command errored out with exit status 1: /opt/conda/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-eg8b9tta/dlib/setup.py'"'"'; __file__='"'"'/tmp/pip-install-eg8b9tta/dlib/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-low0q4rl/install-record.txt --single-version-externally-managed --compile --install-headers /opt/conda/include/python3.6m/dlib Check the logs for full command output.
The command '/bin/sh -c pip install --no-cache-dir -r /tmp/requirements.txt && rm /tmp/requirements.txt' returned a non-zero code: 1
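The root cause is stated in the log itself: CMake must be installed to build the dlib extension, and the image running the `pip install` step does not have it. A likely fix, assuming a Debian/Ubuntu-based base image (the base image and layer order are assumptions, not taken from the repo's Dockerfile), is to install `cmake` and a C++ toolchain before the failing `pip install` layer:

```dockerfile
# Hypothetical Dockerfile fragment -- adapt to the repo's actual Dockerfile.
# cmake and a compiler must be on PATH before pip builds dlib from source.
RUN apt-get update && \
    apt-get install -y --no-install-recommends cmake build-essential && \
    rm -rf /var/lib/apt/lists/*

# The step that failed in the log, now with cmake available:
RUN pip install --no-cache-dir -r /tmp/requirements.txt && rm /tmp/requirements.txt
```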

Inference

I'm quite new to deep learning with PyTorch, so could you please provide inference code?

Reproducing the results from the README file

Hi there, how can I reproduce the results shown in the README file? Is it possible to reproduce those results (photos plus the corresponding labels indicating whether each photo is classified as fake) within Python, or does one have to use Atlas? Also, is it possible to insert and test custom photos, or only those provided by you?

Link to article

The link to the article is broken. Can you provide an alternative link?

Not able to download faceforensics dataset

I requested access from FaceForensics and downloaded the script, placed it in the folder, and ran:

bash restructure_data.sh faceforensics_download.py

The command seems to download the YouTube dataset correctly, but gives the following error for the FaceForensics part:

Downloading videos of dataset "original_sequences/youtube"
Traceback (most recent call last):
  File "/home/parag/anaconda3/envs/Deepfake/lib/python3.6/urllib/request.py", line 1318, in do_open
    encode_chunked=req.has_header('Transfer-encoding'))
  File "/home/parag/anaconda3/envs/Deepfake/lib/python3.6/http/client.py", line 1262, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/home/parag/anaconda3/envs/Deepfake/lib/python3.6/http/client.py", line 1308, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/home/parag/anaconda3/envs/Deepfake/lib/python3.6/http/client.py", line 1257, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/home/parag/anaconda3/envs/Deepfake/lib/python3.6/http/client.py", line 1036, in _send_output
    self.send(msg)
  File "/home/parag/anaconda3/envs/Deepfake/lib/python3.6/http/client.py", line 974, in send
    self.connect()
  File "/home/parag/anaconda3/envs/Deepfake/lib/python3.6/http/client.py", line 946, in connect
    (self.host,self.port), self.timeout, self.source_address)
  File "/home/parag/anaconda3/envs/Deepfake/lib/python3.6/socket.py", line 724, in create_connection
    raise err
  File "/home/parag/anaconda3/envs/Deepfake/lib/python3.6/socket.py", line 713, in create_connection
    sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "faceforensics_download.py", line 258, in <module>
    main(args)
  File "faceforensics_download.py", line 189, in main
    FILELIST_URL).read().decode("utf-8"))
  File "/home/parag/anaconda3/envs/Deepfake/lib/python3.6/urllib/request.py", line 223, in urlopen
    return opener.open(url, data, timeout)
  File "/home/parag/anaconda3/envs/Deepfake/lib/python3.6/urllib/request.py", line 526, in open
    response = self._open(req, data)
  File "/home/parag/anaconda3/envs/Deepfake/lib/python3.6/urllib/request.py", line 544, in _open
    '_open', req)
  File "/home/parag/anaconda3/envs/Deepfake/lib/python3.6/urllib/request.py", line 504, in _call_chain
    result = func(*args)
  File "/home/parag/anaconda3/envs/Deepfake/lib/python3.6/urllib/request.py", line 1346, in http_open
    return self.do_open(http.client.HTTPConnection, req)
  File "/home/parag/anaconda3/envs/Deepfake/lib/python3.6/urllib/request.py", line 1320, in do_open
    raise URLError(err)
urllib.error.URLError: <urlopen error [Errno 111] Connection refused>
By pressing any key to continue you confirm that you have agreed to the FaceForensics terms of use as described at:
http://falas.cmpt.sfu.ca:8100/webpage/FaceForensics_TOS.pdf
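A `ConnectionRefusedError: [Errno 111]` on the very first request (fetching the file list) usually means the FaceForensics download server is down or unreachable from your network, rather than a problem with the script itself. A quick reachability check, using the hostname from the TOS URL printed above, is:

```shell
# Probe the FaceForensics server (host taken from the TOS URL in the output).
# If this reports unreachable, retry later or contact the dataset maintainers.
if curl -sI --max-time 10 http://falas.cmpt.sfu.ca:8100/ > /dev/null 2>&1; then
    echo "server reachable"
else
    echo "server unreachable, retry later"
fi
```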
