
mvtec-anomaly-detection's Introduction

Anomaly Detection

This project proposes an end-to-end framework for semi-supervised Anomaly Detection and Segmentation in images based on Deep Learning.

Method Overview

The proposed method uses a thresholded pixel-wise difference between the reconstructed image and the input image to localize anomalies. The threshold is determined in two stages: first, a subset of anomaly-free training images (the validation images) is used to compute candidate pairs of minimum defect area and threshold; then, a subset of both anomaly-free and anomalous test images is used to select the best pair for classifying and segmenting the remaining test images.
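
To make the localization step concrete, here is a minimal sketch of thresholding residual maps and discarding small connected components. It assumes a trained Keras autoencoder and inputs scaled to [0, 1]; the function name and arguments are illustrative, not the project's actual API.

import numpy as np
from skimage import morphology

def segment_anomalies(model, images, threshold, min_area):
    """Localize anomalies via thresholded reconstruction error."""
    reconstructions = model.predict(images)
    # pixel-wise residual maps between inputs and reconstructions
    resmaps = np.abs(images - reconstructions)
    masks = []
    for resmap in resmaps:
        binary = resmap.squeeze() > threshold
        # drop connected components smaller than min_area (likely noise)
        masks.append(morphology.remove_small_objects(binary, min_size=min_area))
    return np.stack(masks)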

It is inspired to a great extent by the papers MVTec AD — A Comprehensive Real-World Dataset for Unsupervised Anomaly Detection and Improving Unsupervised Defect Segmentation by Applying Structural Similarity to Autoencoders. The method is divided into three steps: training, finetuning and testing.


NOTE: Why Semi-Supervised and not Unsupervised?

The method proposed in the MVTec paper is unsupervised: a subset containing only anomaly-free training images (the validation set) is used during the validation step to determine the threshold for classifying and segmenting test images. However, the validation algorithm depends on a user-supplied parameter, the minimum defect area, whose definition remains unclear and unexplained in the aforementioned paper. Because the choice of this parameter can greatly influence classification and segmentation results, and in an effort to automate the process and remove the need for user input, we developed a finetuning algorithm that uses the validation set to compute the thresholds corresponding to a wide range of discrete minimum defect areas. Subsequently, a small subset of anomalous and anomaly-free test images (the finetuning set) is used to select the best minimum defect area and threshold pair, which is finally used to classify and segment the remaining test images. Since our method relies on test images for finetuning, we describe it as semi-supervised.
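
As a rough illustration of this finetuning idea (not the actual implementation in finetune.py), the sketch below finds, for each candidate minimum area, the smallest threshold at which no anomaly-free validation resmap retains a large enough connected component, then scores each admissible (area, threshold) pair on the labeled finetuning subset via a caller-supplied function. All names are hypothetical.

import numpy as np
from skimage import morphology

def select_area_threshold_pair(val_resmaps, score_pair, min_areas, thresholds):
    """val_resmaps: residual maps of anomaly-free validation images.
    score_pair(min_area, threshold): caller-supplied evaluation on the
    small labeled finetuning subset. thresholds is assumed sorted ascending."""
    candidates = []
    for area in min_areas:
        for t in thresholds:
            # a pair is admissible once no anomaly-free image is flagged
            flagged = any(
                morphology.remove_small_objects(r > t, min_size=area).any()
                for r in val_resmaps
            )
            if not flagged:
                candidates.append((area, t))
                break
    # pick the pair that scores best on the finetuning subset
    return max(candidates, key=lambda pair: score_pair(*pair))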

Dataset

The proposed framework has been tested successfully on the MVTec dataset.

Models

There are a total of five models based on the Convolutional Auto-Encoder (CAE) architecture implemented in this project: mvtecCAE, baselineCAE, inceptionCAE, resnetCAE and skipCAE.

NOTE:

mvtecCAE, baselineCAE and inceptionCAE are comparable in performance.

WARNING:

resnetCAE and skipCAE are still being tested, as they are prone to overfitting, which in the case of convolutional auto-encoders manifests as copying the inputs without filtering out the defective regions.

Prerequisites

Dependencies

The main libraries used in this project with their corresponding versions are listed below:

  • tensorflow == 2.1.0
  • ktrain == 0.21.3
  • scikit-image == 0.16.2
  • scikit-learn == 0.23.2

For more information, refer to requirements.txt.

Installation

Before installing dependencies, we highly recommend setting up a virtual environment (e.g., an Anaconda environment).

  1. Make sure pip is up-to-date with: pip install -U pip
  2. Install TensorFlow 2 if it is not already installed (e.g., pip install tensorflow==2.1).
  3. Install ktrain: pip install ktrain
  4. Install scikit-image: pip install scikit-image
  5. Install scikit-learn: pip install scikit-learn

The above should be all you need on Linux systems and cloud computing environments like Google Colab and AWS EC2. If you are using ktrain on a Windows computer, you can follow the more detailed instructions provided here that include some extra steps.

Download the Dataset

  1. Download the MVTec dataset here and save it to a directory of your choice (e.g., in /Downloads).
  2. Extract the compressed image files.
  3. Create a folder in the project directory to store the image files.
  4. Move the extracted image files to that folder.

Directory Structure using mvtec dataset

For the scripts to work properly, the folder containing the training and test images must follow a specific structure. When using the MVTec dataset, here is how the directory structure should look:

├── bottle
│   ├── ground_truth
│   │   ├── broken_large
│   │   ├── broken_small
│   │   └── contamination
│   ├── test
│   │   ├── broken_large
│   │   ├── broken_small
│   │   ├── contamination
│   │   └── good
│   └── train
│       └── good
...

Directory Structure using your own dataset

To train with your own dataset, you need a comparable directory structure. For example (a small layout-checking helper is sketched after the tree below):

├── class1
│   ├── test
│   │   ├── good
│   │   ├── defect
│   └── train
│       └── good
├── class2
│   ├── test
│   │   ├── good
│   │   ├── defect
│   └── train
│       └── good
...
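
Before training on a custom dataset, it can help to sanity-check the layout. Below is a small, hypothetical helper (not part of the project's scripts) that verifies each class folder contains train/good and test/good:

from pathlib import Path

def check_dataset_layout(root):
    for class_dir in sorted(p for p in Path(root).iterdir() if p.is_dir()):
        assert (class_dir / "train" / "good").is_dir(), f"{class_dir}: missing train/good"
        assert (class_dir / "test" / "good").is_dir(), f"{class_dir}: missing test/good"
        defects = [p.name for p in (class_dir / "test").iterdir()
                   if p.is_dir() and p.name != "good"]
        print(f"{class_dir.name}: OK, defect categories: {defects}")

# e.g., check_dataset_layout("mvtec")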

Usage

Training (train.py)

During training, the CAE trains exclusively on defect-free images and learns to reconstruct (predict) defect-free training samples.

usage: train.py [-h] -d [-a] [-c] [-l] [-b] [-i]

optional arguments:

-h, --help show this help message and exit

-d , --input-dir directory containing training images

-a , --architecture architecture of the model to use for training: 'mvtecCAE', 'baselineCAE', 'inceptionCAE' or 'resnetCAE'

-c , --color color mode for preprocessing images before training: 'rgb' or 'grayscale'

-l , --loss loss function to use for training: 'mssim', 'ssim' or 'l2'

-b , --batch batch size to use for training

-i, --inspect generate inspection plots after training

Example usage:

python3 train.py -d mvtec/capsule -a mvtecCAE -b 8 -l ssim -c grayscale

NOTE:

There is no need for the user to pass a number of epochs, since the training process implements an early-stopping strategy.
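
For reference, an SSIM-based reconstruction loss can be expressed in a few lines with TensorFlow's built-in tf.image.ssim. This is a generic sketch of the idea, not necessarily the exact loss implementation used by train.py:

import tensorflow as tf

def ssim_loss(y_true, y_pred):
    # tf.image.ssim returns a similarity score in [-1, 1];
    # subtract from 1 so identical images give a loss of 0
    return 1.0 - tf.reduce_mean(tf.image.ssim(y_true, y_pred, max_val=1.0))

# e.g., model.compile(optimizer="adam", loss=ssim_loss)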

Finetuning (finetune.py)

This script uses a subset of defect-free training images and a subset of both defective and defect-free test images to determine good values for the minimum defect area and threshold pair that will be used during testing for classification and segmentation.

usage: finetune.py [-h] -p [-m] [-t]

optional arguments: -h, --help show this help message and exit

-p , --path path to saved model

-m , --method method for generating resmaps (residual maps): 'ssim' or 'l2'

-t , --dtype datatype for processing resmaps: 'float64' or 'uint8'

Example usage:

python3 finetune.py -p saved_models/mvtec/capsule/mvtecCAE/ssim/13-06-2020_15-35-10/mvtecCAE_b8_e39.hdf5 -m ssim -t float64
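
For intuition, a per-pixel SSIM resmap can be obtained with scikit-image's structural_similarity by requesting the full similarity map. The project's processing module may differ in detail; this sketch assumes grayscale float images in [0, 1]:

from skimage.metrics import structural_similarity

def ssim_resmap(img, reconstruction):
    # full=True returns the per-pixel similarity map alongside the score
    score, sim_map = structural_similarity(
        img, reconstruction, data_range=1.0, full=True
    )
    return 1.0 - sim_map  # large values where the reconstruction deviates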

Testing (test.py)

This script classifies test images using the minimum defect area and threshold pair approximated during the finetuning step.

usage: test.py [-h] -p [-s]

optional arguments: -h, --help show this help message and exit

-p , --path path to saved model

-s, --save save segmented images

Example usage:

python3 test.py -p saved_models/mvtec/capsule/mvtecCAE/ssim/13-06-2020_15-35-10/mvtecCAE_b8_e39.hdf5
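
The decision rule this implies is simple: an image is flagged anomalous if any connected component survives the threshold and minimum-area filtering. A hypothetical one-liner, reusing masks such as those produced by the segment_anomalies sketch in the Method Overview section:

import numpy as np

def classify_from_masks(masks):
    # 1 = anomalous (some region survived filtering), 0 = defect-free
    return np.array([int(mask.any()) for mask in masks])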

Project Organization

├── mvtec                       <- folder containing all mvtec classes.
│   ├── bottle                  <- subfolder of a class (contains additional subfolders /train and /test).
│   ├── ...
├── autoencoder                 <- directory containing modules for training: autoencoder class and methods as well as custom losses and metrics.
├── processing                  <- directory containing modules for preprocessing images before training and processing images after training.
├── results                     <- directory containing finetuning and test results.
├── readme.md                   <- readme file.
├── requirements.txt            <- requirement text file containing used libraries.
├── saved_models                <- directory containing saved models, training history, loss and learning plots and inspection images.
├── train.py                    <- training script to train the auto-encoder.
├── finetune.py                 <- approximates a good value for minimum area and threshold for classification.
└── test.py                     <- test script to classify images of the test set using finetuned parameters.

Authors

License

This project is licensed under the MIT License - see the LICENSE.md file for details.

Acknowledgments

mvtec-anomaly-detection's Issues


nb_training_images_aug = args.images
AttributeError: 'Namespace' object has no attribute 'images'

about train

  Hello, when I finish training, it displays: “Please invoke the Learner.lr_plot() method to visually inspect the loss plot to help identify the maximal learning rate associated with falling loss.” What should I do next?
  Looking forward to your reply. Thank you.

Results

First of all, thank you very much for publishing your code.

Could you please write down the evaluation results (AUC, accuracy or any reasonable metric you'd like) per category or at least the mean value?

Also, there is this page where you can do it publicly.

Thank you very much in advance.

Training and validation loss curve remains flat when trained on MVTec data

Greetings,
I trained the network on MVTec tile data (tried on various other datasets as well). The details are as follows:

Command

python <Root folder>/train.py -d <data folder>/capsule -a mvtecCAE -l ssim -b 32

config.py

ROT_ANGLE = 5
W_SHIFT_RANGE = 0.05
H_SHIFT_RANGE = 0.05
FILL_MODE = "nearest"
BRIGHTNESS_RANGE = [0.95, 1.05]
VAL_SPLIT = 0.2

# Learning Rate Finder parameters
START_LR = 1e-7
LR_MAX_EPOCHS = 10
LRF_DECREASE_FACTOR = 0.85

# Training parameters
EARLY_STOPPING = 12
REDUCE_ON_PLATEAU = 6

# Finetuning parameters
FINETUNE_SPLIT = 0.1
STEP_MIN_AREA = 5
START_MIN_AREA = 5
STOP_MIN_AREA = 1005

Env

appdirs==1.4.4
argon2-cffi==20.1.0
astunparse==1.6.3
async-generator==1.10
attrs==20.3.0
backcall==0.2.0
black==19.10b0
bleach==3.2.0
CacheControl==0.12.6
cachetools==4.2.1
cchardet==2.1.7
certifi==2020.12.5
cffi==1.14.5
chardet==4.0.0
click==7.1.2
colorama==0.4.3
contextlib2==0.6.0
cycler==0.10.0
decorator==4.4.2
distlib==0.3.0
distro==1.4.0
fastprogress==1.0.0
filelock==3.0.12
flatbuffers==1.12
fvcore==0.1.3.post20210226
gast==0.3.3
google-auth==1.27.0
google-auth-oauthlib==0.4.2
google-pasta==0.2.0
grpcio==1.32.0
h5py==2.10.0
html5lib==1.0.1
idna==2.10
imageio==2.9.0
iopath==0.1.4
ipaddr==2.2.0
ipython==7.21.0
ipython-genutils==0.2.0
jedi==0.18.0
jieba==0.42.1
joblib==1.0.1
Keras==2.4.3
keras-bert==0.86.0
keras-embed-sim==0.8.0
keras-layer-normalization==0.14.0
keras-multi-head==0.27.0
keras-pos-embd==0.11.0
keras-position-wise-feed-forward==0.6.0
Keras-Preprocessing==1.1.2
keras-self-attention==0.46.0
keras-transformer==0.38.0
kiwisolver==1.3.1
ktrain==0.25.4
langdetect==1.0.8
lockfile==0.12.2
Markdown==3.3.4
matplotlib==3.3.4
msgpack==0.6.2
networkx==2.5
numpy==1.20.1
oauthlib==3.1.0
opt-einsum==3.3.0
packaging==20.9
pandas==1.2.2
parso==0.8.1
pathspec==0.8.1
pep517==0.8.2
pexpect==4.8.0
pickleshare==0.7.5
Pillow==8.1.1
pkg-resources==0.0.0
portalocker==2.2.1
progress==1.5
prompt-toolkit==3.0.16
protobuf==3.15.3
ptyprocess==0.7.0
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycparser==2.20
Pygments==2.8.0
pyparsing==2.4.7
python-dateutil==2.8.1
pytoml==0.1.21
pytz==2021.1
PyWavelets==1.1.1
PyYAML==5.4.1
regex==2020.11.13
requests==2.25.1
requests-oauthlib==1.3.0
retrying==1.3.3
rsa==4.7.2
sacremoses==0.0.43
scikit-image==0.18.1
scikit-learn==0.23.2
scipy==1.6.1
sentencepiece==0.1.91
seqeval==0.0.19
six==1.15.0
SSIM-PIL==1.0.12
syntok==1.3.1
tabulate==0.8.9
tensorboard==2.4.1
tensorboard-plugin-wit==1.8.0
tensorflow-estimator==2.4.0
tensorflow-gpu==2.4.1
termcolor==1.1.0
threadpoolctl==2.1.0
tifffile==2021.2.26
tokenizers==0.9.3
toml==0.10.2
tqdm==4.58.0
traitlets==5.0.5
transformers==3.5.1
typed-ast==1.4.2
typing-extensions==3.7.4.3
urllib3==1.26.3
wcwidth==0.2.5
webencodings==0.5.1
Werkzeug==1.0.1
Whoosh==2.7.4
wrapt==1.12.1
yacs==0.1.8

Machine details

Ubuntu 20.04.2 LTS
Memory: 15.5GB
GPU: NVIDIA Corporation GM107M [GeForce GTX 960M]

Issue

No significant learning is happening
(attached image: loss_plot)

Sample output

(attached image: crack_000_inspection)

Additional info

(attached image: lr_plot)

I am getting similar results on all other categories in the MVTec dataset too.
I also have other questions, such as why the validation loss is lower than the training loss.

I get the feeling that I am not setting something up right.
Kindly help.

Did you use the MVTec dataset to do defect detection?

Did you use the MVTec dataset to do defect detection, or did you reproduce the SSIM CNN autoencoder from the papers Improving Unsupervised Defect Segmentation by Applying Structural Similarity to Autoencoders and MVTec AD — A Comprehensive Real-World Dataset for Unsupervised Anomaly Detection?

Paper Request

Hello AdneneBoumessouer! Thanks for your great work on industrial anomaly detection. Have you published any papers on this project, and how can I cite it when borrowing from your work?

No such file or directory when running test.py with -s to save segmented images

When using -s or --save (without any further path information), the script tells me that the path ...MVTec-Anomaly-Detection-master\results\mvtec/bottle\baselineCAE\l2\04-10-2021_09-32-18\test\ssim_float64\segmentation\broken_large\000_seg.png is not available. When I use -s with a path to a folder (e.g., to ...\results\mvtec\bottle\baselineCAE\l2\04-10-2021_09-32-18\test\ssim_float64\segmentation), the script tells me that there are unrecognized arguments. How do I properly use the image segmentation?

Kindly asking for help.

The detail information of reconstruction map is fuzzy

Hello, I have a question. Why is the reconstructed image missing details? For example, for hazelnut, the reconstruction details are fuzzy. The same is true of the example images you have published: the words "500" on the capsule are very vague.

hist_dict = dict((key, self.hist.history[key]) for key in self.hist_keys) KeyError: 'ssim'

Hi,

I am running a simple training test; I have all the prerequisites installed, and I am training on the default MVTec dataset with the project organized as described in the readme.
This is the line of code I run to train:
python train.py -d mvtec/capsule -a mvtecCAE -b 8 -l ssim -c grayscale

It starts nicely with a round of 10 epochs, followed by (I do not know why) a round of 1024 epochs, where it specifies a max learning rate of 0.002. At around the 30th epoch of the second round, I get the following error and the program stops:

hist_dict = dict((key, self.hist.history[key]) for key in self.hist_keys) KeyError: 'ssim'

"PIL.UnidentifiedImageError: cannot identify image file <_io.BytesIO object at 0x7f9de4207678>"

File "/usr/local/lib/python3.6/dist-packages/ktrain/core.py", line 563, in lr_find verbose=verbose) File "/usr/local/lib/python3.6/dist-packages/ktrain/lroptimize/lrfinder.py", line 119, in find callbacks=[callback]) File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py", line 108, in _method_wrapper return method(self, *args, **kwargs) File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py", line 1063, in fit steps_per_execution=self._steps_per_execution) File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/data_adapter.py", line 1117, in __init__ model=model) File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/data_adapter.py", line 916, in __init__ **kwargs) File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/data_adapter.py", line 786, in __init__ peek, x = self._peek_and_restore(x) File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/data_adapter.py", line 920, in _peek_and_restore return x[0], x File "/usr/local/lib/python3.6/dist-packages/keras_preprocessing/image/iterator.py", line 65, in __getitem__ return self._get_batches_of_transformed_samples(index_array) File "/usr/local/lib/python3.6/dist-packages/keras_preprocessing/image/iterator.py", line 230, in _get_batches_of_transformed_samples interpolation=self.interpolation) File "/usr/local/lib/python3.6/dist-packages/keras_preprocessing/image/utils.py", line 114, in load_img img = pil_image.open(io.BytesIO(f.read())) File "/usr/local/lib/python3.6/dist-packages/PIL/Image.py", line 2944, in open "cannot identify image file %r" % (filename if filename else fp)

train end error.

Found 223 images belonging to 1 classes.
Found 24 images belonging to 1 classes.
INFO:autoencoder.autoencoder:initiating learning rate finder to determine best learning rate.
simulating training for different learning rates... this may take a few moments...
Epoch 1/10
13/13 [==============================] - 152s 12s/step - loss: 0.6025 - mssim: 0.3975
Epoch 2/10
13/13 [==============================] - 152s 12s/step - loss: 0.5974 - mssim: 0.4026
Epoch 3/10
13/13 [==============================] - 156s 12s/step - loss: 0.5921 - mssim: 0.4079
Epoch 4/10
13/13 [==============================] - 155s 12s/step - loss: 0.5702 - mssim: 0.4298
Epoch 5/10
13/13 [==============================] - 156s 12s/step - loss: nan - mssim: nan
Epoch 6/10
13/13 [==============================] - 155s 12s/step - loss: nan - mssim: nan
Epoch 7/10
13/13 [==============================] - 148s 11s/step - loss: nan - mssim: nan
Epoch 8/10
13/13 [==============================] - 146s 11s/step - loss: nan - mssim: nan
Epoch 9/10
13/13 [==============================] - 144s 11s/step - loss: nan - mssim: nan
Epoch 10/10
13/13 [==============================] - 143s 11s/step - loss: nan - mssim: nan

done.
Visually inspect loss plot and select learning rate associated with falling loss
INFO:autoencoder.autoencoder:lr with minimum loss divided by 10: 8.09E-04
INFO:autoencoder.autoencoder:lr with minimum numerical gradient: 8.38E-05
C:\Users\Administrator\Desktop\MVTec-Anomaly-Detection-master\autoencoder\autoencoder.py:231: RuntimeWarning: invalid value encountered in less
self.lr_opt_i = np.argwhere(segment < optimal_loss)[0][0]
Traceback (most recent call last):
File "train.py", line 238, in
main(args)
File "train.py", line 81, in main
autoencoder.find_lr_opt(train_generator, validation_generator)
File "C:\Users\Administrator\Desktop\MVTec-Anomaly-Detection-master\autoencoder\autoencoder.py", line 194, in find_lr_opt
self.custom_lr_estimate()
File "C:\Users\Administrator\Desktop\MVTec-Anomaly-Detection-master\autoencoder\autoencoder.py", line 231, in custom_lr_estimate
self.lr_opt_i = np.argwhere(segment < optimal_loss)[0][0]
IndexError: index 0 is out of bounds for axis 0 with size 0

Val_mssim very low

@AdneneBoumessouer Thanks for your hard work.
I tried to run your code, but the accuracy is very low. I don't know what is happening; I tried three variants and none gave good results.

python3 train.py -d mvtec/pill -a baselineCAE -b 32 -l mssim -c rgb (I also tested with ssim, l2, grayscale)
python3 train.py -d mvtec/pill -a mvtecCAE -b 32 -l mssim -c rgb
python3 train.py -d mvtec/pill -a inceptionCAE -b 32 -l mssim -c rgb

-> Epoch 00013: Reducing Max LR on Plateau: new max lr will be 0.08725637942552567 (if not early_stopping).
Restoring model weights from the end of the best epoch.
8/8 [==============================] - 10s 1s/step - loss: 0.4127 - mssim: 0.8223 - val_loss: 0.8789 - val_mssim: 0.3548
Epoch 00013: early stopping
Weights from best epoch have been loaded into model.

INFO:autoencoder.autoencoder:loss_plot.png successfully saved.
INFO:autoencoder.autoencoder:lr_schedule_plot.png successfully saved.
INFO:autoencoder.autoencoder:training history has been successfully saved as csv file.
INFO:autoencoder.autoencoder:training files have been successfully saved at: /content/drive/Shared drives/1_New/Acuity/MVTec-Anomaly-Detection/saved_models/mvtec/pill/baselineCAE/mssim/13-10-2020_11-35-32
INFO:__main__:done.

And then, using finetune:
python3 finetune.py -p "saved_models/mvtec/pill/baselineCAE/mssim/13-10-2020_11-35-32/baselineCAE_b32_e0.hdf5" -m ssim -t float64

Last, using test.py:
python3 test.py -p "saved_models/mvtec/pill/baselineCAE/mssim/13-10-2020_11-35-32/baselineCAE_b32_e0.hdf5"

              filenames  predictions  truth  accurate_predictions

0 color/000.png 0 1 False
1 color/001.png 0 1 False
2 color/002.png 0 1 False
3 color/003.png 1 1 True
4 color/004.png 0 1 False
5 color/005.png 0 1 False
6 color/006.png 0 1 False
7 color/007.png 1 1 True
8 color/008.png 1 1 True
9 color/009.png 1 1 True
10 color/010.png 0 1 False
11 color/011.png 0 1 False
12 color/012.png 0 1 False
13 color/013.png 0 1 False
14 color/014.png 0 1 False
15 color/015.png 0 1 False
16 color/016.png 0 1 False
17 color/017.png 0 1 False
18 color/018.png 1 1 True
19 color/019.png 1 1 True
20 color/020.png 0 1 False
21 color/021.png 0 1 False
22 color/022.png 0 1 False
23 color/023.png 0 1 False
24 color/024.png 0 1 False
25 combined/000.png 0 1 False
26 combined/001.png 1 1 True
27 combined/002.png 0 1 False
28 combined/003.png 0 1 False
29 combined/004.png 0 1 False
30 combined/005.png 0 1 False
31 combined/006.png 1 1 True
32 combined/007.png 1 1 True
33 combined/008.png 1 1 True
34 combined/009.png 1 1 True
35 combined/010.png 1 1 True
36 combined/011.png 1 1 True
37 combined/012.png 1 1 True
38 combined/013.png 0 1 False
39 combined/014.png 0 1 False
40 combined/015.png 1 1 True
41 combined/016.png 0 1 False
42 contamination/000.png 0 1 False
43 contamination/001.png 0 1 False
44 contamination/002.png 0 1 False
45 contamination/003.png 0 1 False
46 contamination/004.png 0 1 False
47 contamination/005.png 0 1 False
48 contamination/006.png 0 1 False
49 contamination/007.png 0 1 False
50 contamination/008.png 0 1 False
51 contamination/009.png 0 1 False
52 contamination/010.png 0 1 False
53 contamination/011.png 1 1 True
54 contamination/012.png 1 1 True
55 contamination/013.png 0 1 False
56 contamination/014.png 0 1 False
57 contamination/015.png 0 1 False
58 contamination/016.png 0 1 False
59 contamination/017.png 0 1 False
60 contamination/018.png 1 1 True
61 contamination/019.png 0 1 False
62 contamination/020.png 1 1 True
63 crack/000.png 0 1 False
64 crack/001.png 0 1 False
65 crack/002.png 1 1 True
66 crack/003.png 0 1 False
67 crack/004.png 0 1 False
68 crack/005.png 0 1 False
69 crack/006.png 1 1 True
70 crack/007.png 0 1 False
71 crack/008.png 0 1 False
72 crack/009.png 0 1 False
73 crack/010.png 1 1 True
74 crack/011.png 1 1 True
75 crack/012.png 0 1 False
76 crack/013.png 1 1 True
77 crack/014.png 0 1 False
78 crack/015.png 0 1 False
79 crack/016.png 0 1 False
80 crack/017.png 0 1 False
81 crack/018.png 0 1 False
82 crack/019.png 1 1 True
83 crack/020.png 0 1 False
84 crack/021.png 0 1 False
85 crack/022.png 1 1 True
86 crack/023.png 1 1 True
87 crack/024.png 0 1 False
88 crack/025.png 0 1 False
89 faulty_imprint/000.png 0 1 False
90 faulty_imprint/001.png 0 1 False
91 faulty_imprint/002.png 1 1 True
92 faulty_imprint/003.png 0 1 False
93 faulty_imprint/004.png 0 1 False
94 faulty_imprint/005.png 0 1 False
95 faulty_imprint/006.png 1 1 True
96 faulty_imprint/007.png 1 1 True
97 faulty_imprint/008.png 0 1 False
98 faulty_imprint/009.png 0 1 False
99 faulty_imprint/010.png 0 1 False
100 faulty_imprint/011.png 0 1 False
101 faulty_imprint/012.png 1 1 True
102 faulty_imprint/013.png 0 1 False
103 faulty_imprint/014.png 0 1 False
104 faulty_imprint/015.png 0 1 False
105 faulty_imprint/016.png 1 1 True
106 faulty_imprint/017.png 0 1 False
107 faulty_imprint/018.png 1 1 True
108 good/000.png 0 0 True
109 good/001.png 0 0 True
110 good/002.png 0 0 True
111 good/003.png 0 0 True
112 good/004.png 0 0 True
113 good/005.png 0 0 True
114 good/006.png 0 0 True
115 good/007.png 0 0 True
116 good/008.png 0 0 True
117 good/009.png 0 0 True
118 good/010.png 0 0 True
119 good/011.png 0 0 True
120 good/012.png 1 0 False
121 good/013.png 0 0 True
122 good/014.png 1 0 False
123 good/015.png 0 0 True
124 good/016.png 0 0 True
125 good/017.png 1 0 False
126 good/018.png 0 0 True
127 good/019.png 0 0 True
128 good/020.png 0 0 True
129 good/021.png 0 0 True
130 good/022.png 0 0 True
131 good/023.png 0 0 True
132 good/024.png 0 0 True
133 good/025.png 0 0 True
134 pill_type/000.png 0 1 False
135 pill_type/001.png 0 1 False
136 pill_type/002.png 0 1 False
137 pill_type/003.png 0 1 False
138 pill_type/004.png 0 1 False
139 pill_type/005.png 0 1 False
140 pill_type/006.png 0 1 False
141 pill_type/007.png 1 1 True
142 pill_type/008.png 1 1 True
143 scratch/000.png 1 1 True
144 scratch/001.png 0 1 False
145 scratch/002.png 0 1 False
146 scratch/003.png 0 1 False
147 scratch/004.png 1 1 True
148 scratch/005.png 0 1 False
149 scratch/006.png 0 1 False
150 scratch/007.png 0 1 False
151 scratch/008.png 0 1 False
152 scratch/009.png 0 1 False
153 scratch/010.png 0 1 False
154 scratch/011.png 0 1 False
155 scratch/012.png 0 1 False
156 scratch/013.png 0 1 False
157 scratch/014.png 0 1 False
158 scratch/015.png 1 1 True
159 scratch/016.png 1 1 True
160 scratch/017.png 0 1 False
161 scratch/018.png 0 1 False
162 scratch/019.png 0 1 False
163 scratch/020.png 0 1 False
164 scratch/021.png 1 1 True
165 scratch/022.png 1 1 True
166 scratch/023.png 0 1 False
test results: {'min_area': 770, 'threshold': 0.27000000000000013, 'TPR': 0.2907801418439716, 'TNR': 0.8846153846153846, 'score': 0.587697763229678, 'method': 'ssim', 'dtype': 'float64'}

The result is not good at all.
Can you give me some advice?

keras.__version__

Great job!
In your code you use "from tensorflow import keras", so keras.__version__ is 2.2.4-tf. However, in the requirements.txt file the Keras version is 2.3.1, so I used "import keras" directly instead of "from tensorflow import keras", but this causes some problems. I would appreciate some advice, thanks!

'self.lr_mult = (end_lr / start_lr) ** (1 / num_batches) ZeroDivisionError: division by zero'

Hi,
I have this problem and I'd like to know how to fix it, please.

Traceback (most recent call last):
File "train.py", line 238, in <module>
    main(args)
File "train.py", line 81, in main
    autoencoder.find_lr_opt(train_generator, validation_generator)
File "/home/tom/projects/lucas/textile/MVTec/MVTec-Anomaly-Detection/autoencoder/autoencoder.py", line 191, in find_lr_opt
    restore_weights_only=True,
File "/root/anaconda3/envs/mvtec/lib/python3.6/site-packages/ktrain/core.py", line 553, in lr_find
    verbose=verbose)
File "/root/anaconda3/envs/mvtec/lib/python3.6/site-packages/ktrain/lroptimize/lrfinder.py", line 94, in find
    self.lr_mult = (end_lr / start_lr) ** (1 / num_batches)
ZeroDivisionError: division by zero

AttributeError: module 'tensorflow_core.python.keras.api._v2.keras.activations' has no attribute 'swish'

Just installed following the instructions and ran the example. Hit this error.

(kt) D:\Sample\MVTec-Anomaly-Detection>python train.py -d mvtec/capsule -a mvtecCAE -b 8 -l ssim -c grayscale
2020-10-30 17:31:52.414545: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
Traceback (most recent call last):
  File "train.py", line 11, in <module>
    from autoencoder.autoencoder import AutoEncoder
  File "D:\Sample\MVTec-Anomaly-Detection\autoencoder\autoencoder.py", line 14, in <module>
    import ktrain
  File "D:\bin\miniconda3\envs\kt\lib\site-packages\ktrain\__init__.py", line 2, in <module>
    from . import imports as I
  File "D:\bin\miniconda3\envs\kt\lib\site-packages\ktrain\imports.py", line 219, in <module>
    import transformers
  File "D:\bin\miniconda3\envs\kt\lib\site-packages\transformers\__init__.py", line 135, in <module>
    from .pipelines import (
  File "D:\bin\miniconda3\envs\kt\lib\site-packages\transformers\pipelines.py", line 47, in <module>
    from .modeling_tf_auto import (
  File "D:\bin\miniconda3\envs\kt\lib\site-packages\transformers\modeling_tf_auto.py", line 45, in <module>
    from .modeling_tf_albert import (
  File "D:\bin\miniconda3\envs\kt\lib\site-packages\transformers\modeling_tf_albert.py", line 24, in <module>
    from .activations_tf import get_tf_activation
  File "D:\bin\miniconda3\envs\kt\lib\site-packages\transformers\activations_tf.py", line 53, in <module>
    "swish": tf.keras.activations.swish,
AttributeError: module 'tensorflow_core.python.keras.api._v2.keras.activations' has no attribute 'swish'

(kt) D:\Sample\MVTec-Anomaly-Detection>

'ZeroDivisionError: division by zero'

Hi,
I downloaded the MVTec dataset and am trying to launch the training using carpet images.
However, when I run the train.py script with python3 train.py -d mvtec/carpet/ -a mvtecCAE -b 2 -l ssim -c grayscale, it returns the error reported below:

Found 0 images belonging to 2 classes.
Found 0 images belonging to 2 classes.
INFO:autoencoder.autoencoder:initiating learning rate finder to determine best learning rate.
simulating training for different learning rates... this may take a few moments...
Traceback (most recent call last):
File "train.py", line 238, in <module>
    main(args)
File "train.py", line 81, in main
    autoencoder.find_lr_opt(train_generator, validation_generator)
File "/home/thomas/MVTec-Anomaly-Detection/autoencoder/autoencoder.py", line 191, in find_lr_opt
    restore_weights_only=True,
File "/root/anaconda3/envs/mvtec/lib/python3.6/site-packages/ktrain/core.py", line 553, in lr_find
    verbose=verbose)
File "/root/anaconda3/envs/mvtec/lib/python3.6/site-packages/ktrain/lroptimize/lrfinder.py", line 94, in find
    self.lr_mult = (end_lr / start_lr) ** (1 / num_batches)
ZeroDivisionError: division by zero

How can I fix it?

MVTec dataset cannot be downloaded

I think the FTP server for the MVTec dataset might have gone down. The site (ftp://guest:[email protected]/mvtec_anomaly_detection/mvtec_anomaly_detection.tar.xz) is offline now.

Can you provide a downloadable link for the MVTec dataset? Thank you very much.

run finetune error

I trained on the MVTec wood dataset and then an error occurred when running finetune.py.
Trained using:
python train.py -d mvtec/wood -a mvtecCAE -b 32 -l mssim -c rgb --inspect
Finetuned using:
python finetune.py -p saved_models/mvtec/wood/mvtecCAE/mssim/17-11-2020_11-21-44/mvtecCAE_b32_e88.hdf5 -m ssim -t float64

(error screenshot attached)

Use of Repository. Unable to contact

Good afternoon,

I've been trying to contact you to ask for permission to use the code in this repository as inspiration for solving a problem proposed in a math modelling contest by the Universidad Complutense de Madrid. I am a second-year math student with an interest in applying machine learning to real-world problems, and I really admire the structure and reasoning in your code.

Thank you in advance, and sorry for reaching out in such an unconventional and public way.
