luizgh / sigver_wiwd
Learned representation for Offline Handwritten Signature Verification. Models and code to extract features from signature images.
License: BSD 2-Clause "Simplified" License
I want to change the classifier from SVM to a different one. Please guide me on where the classifiers are used.
Running Ubuntu 16.04. When I run python example.py,
I get the error below:
Traceback (most recent call last):
File "example.py", line 10, in <module>
from preprocess.normalize import preprocess_signature
File "/home/jash/Documents/Miscellaneous/sigver_wiwd/preprocess/normalize.py", line 1, in <module>
import cv2
ImportError: libhdf5.so.10: cannot open shared object file: No such file or directory
Could you please help?
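For reference, this error usually means the OpenCV build was linked against an HDF5 version that is not installed. A possible fix on Ubuntu 16.04 (a sketch; the package names are an assumption and vary by release):

```shell
# Install the HDF5 runtime that cv2 links against (Ubuntu 16.04 ships libhdf5-10)
sudo apt-get update
sudo apt-get install -y libhdf5-10

# Alternative: reinstall OpenCV from conda-forge, which bundles a matching HDF5
# conda install -c conda-forge opencv
```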
I have recently been working with this code repository after reading the paper "Learning features for offline handwritten signature verification using deep convolutional neural networks" by Luiz G. Hafemann, Robert Sabourin, and Luiz S. Oliveira. The problem I am facing is: how can I generate the .npy files placed in the data/ directory of this project, so that we can verify a user-defined signature image other than "some_signature.png"? I am asking because example.py compares the .npy files at test time, and I would like to use my own image for signature verification.
My question might sound basic, as I am new to this field.
Thank you in anticipation.
Best Regards
Tahir
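For what it's worth, here is a minimal sketch of producing and reusing an .npy file like those in data/ (the repository's preprocessing calls are shown in comments; the array shape below is arbitrary, only the save/load mechanics are demonstrated):

```python
import numpy as np

# In the repository the array would come from the preprocessing step, e.g.:
#   from scipy.misc import imread
#   from preprocess.normalize import preprocess_signature
#   original = imread('my_signature.png', flatten=1)
#   processed = preprocess_signature(original, canvas_size=(952, 1360))
# Here a zero array stands in for the preprocessed signature image.
processed = np.zeros((150, 220), dtype=np.uint8)

# Save it the same way data/processed.npy was produced, then reload it
np.save("processed.npy", processed)
loaded = np.load("processed.npy")
print(loaded.shape)  # (150, 220)
```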
Hi,
there are problems downloading the pre-trained models.
This link doesn't work:
https://www.etsmtl.ca/Unites-de-recherche/LIVIA/Recherche-et-innovation/Projets/Signature-Verification
Could you fix it?
Thanks very much for your work.
Hi,
I was trying to run example.py from PyCharm on Windows.
Unfortunately I am getting the following error:
RuntimeError: To use MKL 2018 with Theano you MUST set "MKL_THREADING_LAYER=GNU" in your environement.
Could you help with how to set this?
Thanks in advance.
Regards,
Vivek
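One way to set the variable (a sketch) is from Python itself, before Theano is imported; in PyCharm it can alternatively be added under Run > Edit Configurations > Environment variables:

```python
import os

# Theano reads this variable at import time, so it must be set first
os.environ["MKL_THREADING_LAYER"] = "GNU"

# import theano  # only import Theano after the variable is set
print(os.environ["MKL_THREADING_LAYER"])  # GNU
```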
Sir,
I tried to read the "signet.pkl" file in Python 3.6, but I am getting this error:
code:
from six.moves import cPickle
with open('signet.pkl', 'rb') as f:
    model_params = cPickle.load(f)
error:
Traceback (most recent call last):
File "<ipython-input-39-0593160b4786>", line 3, in <module>
model_params = cPickle.load(f)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 0: ordinal not in range(128)
I tried open('signet.pkl', 'rb').read().decode('utf8'), but it is not working.
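The file was pickled under Python 2, so its byte strings cannot be decoded as ASCII under Python 3's defaults. Passing encoding='latin1' to pickle.load is the usual fix (a sketch; a stand-in pickle file is used here instead of signet.pkl):

```python
import pickle

def load_py2_pickle(path):
    # encoding='latin1' decodes Python 2 byte strings without errors and is
    # the recommended choice for pickles containing numpy arrays
    with open(path, "rb") as f:
        return pickle.load(f, encoding="latin1")

# Round-trip demonstration with a stand-in file written in protocol 2 (Python 2's format)
with open("demo.pkl", "wb") as f:
    pickle.dump({"name": "signet"}, f, protocol=2)
print(load_py2_pickle("demo.pkl"))  # {'name': 'signet'}
```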
Hi,
Will you make available the features extracted using the SPP models for each of the four datasets?
Thanks
I have a dataset of handwriting and I want to train the model. Can you provide information on how to train it?
sigver_wiwd/signet_spp_300dpi.py
Line 19 in 3e509df
As shown in Table 1 of your paper, this pooling layer is defined as "pool3-s2-p0", so the code should be as follows, right?
net['large_pool4'] = MaxPool2DLayer(net['large_conv4'], pool_size=3, stride=2)
If not, may I ask why?
I cannot download the models from the links provided... They seem to be broken :(
Sir, I am getting this error:
No such file or directory: 'models/signet.pkl'
IOErrorTraceback (most recent call last)
in ()
----> 1 model = CNNModel(signet, model_weight_path)
/content/drive/My Drive/sigver_wiwd-master/cnn_model.py in init(self, model_factory, model_weight_path)
18 model_weights_path (str): A file containing the trained weights
19 """
---> 20 with open(model_weight_path, 'rb') as f:
21 if six.PY2:
22 model_params = cPickle.load(f)
IOError: [Errno 2] No such file or directory: 'models/signet.pkl'
How do I generate data/processed.npy?
Thank you
Hey, if we add a new user with a different class, do we need to train the model again?
I went through the source files and could not find the training script. I would like to retrain the network on a few different datasets, as the accuracy of the pre-trained model(s) on my data is not very good, and I think there can be significant improvements after retraining on the data I am working with. Could you please provide the training script used for the published work (and to train the provided models), or a brief guide on how to go about this? Thank you for your assistance.
Hello! I want to try running this code, but my machine is not powerful enough. Would it be possible to run it in Google Colaboratory?
Dear Sir,
Please answer the following questions regarding training the WI classifier (CNN):
1. Were both genuine signatures and forgeries of each user in the development set used? If both were used, were they given the same label or different labels?
2. Were all forgeries, across all users, given the same label?
Thanks,
Chunky
Hello,
I am trying to train and test a WD classifier using features extracted by SigNet and SigNet-F(0.95); however, my error scores are usually a bit higher (1.5-2.5%) than the scores reported on Vv (Table 5, EER with a global threshold in the paper). I am using:
C=1, class_weight='balanced', gamma=2**(-11)
According to the scikit-learn documentation, the 'balanced' option should compensate for the class skew. Both kernels give similar results.
Is this normal behaviour, or is there something else I might be missing?
Another question: GPDS-960 consists of 960 users according to the research group, but your paper states that there are 881 users, and in the extracted-features dataset I can also see some missing user_ids. What is the reason for that?
Thanks a lot!
EDIT: I could replicate and even surpass the CEDAR scores by using:
gamma='scale'
which hints that my implementation is correct, in my opinion.
EDIT 2: The errors are also slightly higher for MCYT. For training with 10 signatures and an RBF-kernel SVM, my 10-fold average error is around 3.91%, compared to 2.87% reported in the paper.
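For comparison, the WD setup described above can be sketched with scikit-learn; the synthetic 2048-d vectors below stand in for SigNet features, while C, gamma, and class_weight follow the values quoted above:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Stand-in feature vectors: genuine signatures of one user vs. other users/forgeries
genuine = rng.normal(loc=0.5, scale=0.1, size=(10, 2048))
forgeries = rng.normal(loc=0.0, scale=0.1, size=(200, 2048))
X = np.vstack([genuine, forgeries])
y = np.array([1] * 10 + [0] * 200)

# class_weight='balanced' rescales C by class frequency to handle the skew
clf = SVC(C=1, kernel="rbf", gamma=2 ** -11, class_weight="balanced")
clf.fit(X, y)
print(clf.score(X, y))  # training accuracy on the synthetic data
```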
Hi, is there any way I can use signet_spp_300dpi and signet_spp_600dpi with TensorFlow? I do not want to use Theano; my current setup is in TensorFlow.
I would appreciate your suggestions.
Thanks
Hello,
I am trying to load the model weights you made available, but I cannot seem to access the bias term of the convolutional layers.
When I run:
lasagne.layers.get_all_params(model.model['fc2'])
Those are the params I get:
[W, beta, gamma, mean, inv_std, W, beta, gamma, mean, inv_std, W, beta, gamma, mean, inv_std, W, beta, gamma, mean, inv_std, W, beta, gamma, mean, inv_std, W, beta, gamma, mean, inv_std, W, beta, gamma, mean, inv_std]
and
[l.shape for l in lasagne.layers.get_all_param_values(model.model['fc2'])]
I get:
[(96, 1, 11, 11), (96,), (96,), (96,), (96,), (256, 96, 5, 5), (256,), (256,), (256,), (256,), (384, 256, 3, 3), (384,), (384,), (384,), (384,), (384, 384, 3, 3), (384,), (384,), (384,), (384,), (256, 384, 3, 3), (256,), (256,), (256,), (256,), (3840, 2048), (2048,), (2048,), (2048,), (2048,), (2048, 2048), (2048,), (2048,), (2048,), (2048,)]
As we can see, these are the weights of the convolutional layers and the batch-normalization parameters; this method does not seem to return a bias term among them.
Any idea how I can get the bias information?
I am asking because I want to port this model to TensorFlow.
Best regards,
Victor.
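Regarding the missing bias: in Lasagne, wrapping a layer with batch_norm removes its bias (b=None), since the BatchNormLayer's beta plays that role. When porting to TensorFlow, the inference-time batch-norm parameters can be folded into an effective scale and bias; a sketch of the algebra with stand-in values:

```python
import numpy as np

rng = np.random.default_rng(1)
c = 8                                   # number of channels
z = rng.normal(size=(5, c))             # stand-in conv outputs (bias-free, b=None)
gamma, beta = rng.normal(size=c), rng.normal(size=c)
mean, inv_std = rng.normal(size=c), rng.random(c) + 0.5

# Batch norm at inference: y = gamma * (z - mean) * inv_std + beta
y_bn = gamma * (z - mean) * inv_std + beta

# Folded form usable as a plain scale + bias in the ported model
scale = gamma * inv_std
bias_eff = beta - gamma * mean * inv_std
y_folded = scale * z + bias_eff

print(np.allclose(y_bn, y_folded))  # True
```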
I want to export this model to the 'pb' (protobuf) format so that I can use it on Google Cloud Platform.