THE SCIENCE OF SIGNATURE DETECTION: UNDERSTANDING THE TECHNIQUES (PART-III)
Hello readers, this is the final part of this blog series. In the previous post, we discussed the techniques and theory related to forged signature detection. In this part, we will cover the techniques I have implemented and the results obtained from the signature detection model.
Here I have used a CNN, a Siamese network and a VGG16 network to build this model. A CNN (Convolutional Neural Network) is a deep learning algorithm used to identify whether a signature is genuine or forged. The structure of a convolutional neural network is similar to the pattern of neurons in the human brain: it assigns different weights and bias values to different parts of the image, which helps in distinguishing one image from another. Here the CNN model is trained on various signature images of the same person and is then tested on other images to determine whether they are forged or genuine. The figure below shows the entire flow of the proposed neural network.
[Figure: Proposed Block Diagram]
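To illustrate how a CNN applies weights to different parts of an image, here is a minimal NumPy sketch of a single convolution step. The kernel values are illustrative stand-ins, not the trained weights of the model:

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a kernel over the image, computing a weighted sum at each position."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # each output pixel is a weighted combination of one image patch
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# a 5x5 toy image patch and a 3x3 kernel that responds to vertical edges
image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.array([[1.0, 0.0, -1.0]] * 3)
features = conv2d(image, kernel)
print(features.shape)  # (3, 3)
```

In the real model this operation is stacked in many layers, with the kernel weights learned during training rather than fixed by hand.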
- A Siamese network is an architecture of two simultaneously running neural networks which take different inputs and whose outputs are combined into a single value.
- A training dataset is created containing a positive and a negative class, and the model is trained on this dataset.
- Each image is passed through a neural network, which extracts the features of both images.
- Then the difference between the feature vectors of the two images is calculated and passed through the sigmoid function.
- This function gives a resultant value between 0 and 1 (if the value is close to 0 the images do not match, and if it is close to 1 they match).
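The steps above can be sketched as follows. The feature vectors and the weights `w`, `b` are hypothetical stand-ins, since the real model learns both the feature extractor and these weights during training:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def siamese_score(feat_a, feat_b, w, b):
    """Score similarity of two feature vectors: near 1 ~ match, near 0 ~ no match."""
    diff = np.abs(feat_a - feat_b)       # element-wise feature difference
    return sigmoid(np.dot(w, diff) + b)  # squash the weighted sum into (0, 1)

# hypothetical 4-dimensional features from the two network branches
feat_genuine = np.array([0.9, 0.1, 0.4, 0.7])
feat_same    = np.array([0.9, 0.1, 0.4, 0.7])  # identical signature
feat_forged  = np.array([0.1, 0.8, 0.9, 0.2])  # very different signature

w = np.full(4, -2.0)  # large feature differences push the score toward 0
b = 3.0               # zero difference gives sigmoid(3), close to 1

score_same = siamese_score(feat_genuine, feat_same, w, b)
score_forged = siamese_score(feat_genuine, feat_forged, w, b)
print(score_same, score_forged)  # high score for a match, low for a mismatch
```
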
- VGG16 is a 16-layer model that is pre-trained on a large variety of images, which we can use for transfer learning.
- Two folders are created, one containing training images and the other containing testing images. Both folders contain forged and genuine signatures of a person: 5 genuine and 5 forged.
- The images are passed through the 16 CNN layers of two parallel networks, which are modified according to the user's needs.
- Then the difference between the feature vectors of the two images is calculated and passed through the sigmoid function.
- This function gives a resultant value between 0 and 1 (if the value is close to 0 the images do not match, and if it is close to 1 they match).
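The two-folder layout described above (train and test, each with 5 genuine and 5 forged signatures per person) can be set up and enumerated with the standard library. The folder and file names here are assumptions for illustration only:

```python
from pathlib import Path
import tempfile

# build a throwaway copy of the assumed layout: train/ and test/,
# each holding 5 genuine and 5 forged placeholder signature images
root = Path(tempfile.mkdtemp())
for split in ("train", "test"):
    for label in ("genuine", "forged"):
        folder = root / split / label
        folder.mkdir(parents=True)
        for i in range(5):
            (folder / f"person1_sig{i}.png").touch()  # empty placeholder file

def list_pairs(split_dir):
    """Return (path, label) pairs for every image in a split."""
    pairs = []
    for label_dir in sorted(Path(split_dir).iterdir()):
        for img in sorted(label_dir.glob("*.png")):
            pairs.append((img, label_dir.name))
    return pairs

train_pairs = list_pairs(root / "train")
print(len(train_pairs))  # 10 images: 5 genuine + 5 forged
```
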
- In this work, pre-processing is first done on the images, and the dataset is divided into train and test data.
- The model used is sequential, which allows you to build the model layer by layer. The first 2 layers are convolutional layers which deal with the input images. The activation function used here for the convolutional layers is ReLU (Rectified Linear Unit).
- A flatten layer is used as the connection between the convolutional and dense layers. The dense layer produces the output. One of the activation functions used here is softmax, which normalizes the output values so that they sum to 1; the output is thus represented in a probabilistic manner, which the model uses to make the final prediction for the image.
- Once the layers of the model are ready, the next step is to compile the model. Compilation takes three parameters: optimizer, metrics and loss.
- The next step is to train the model; for this the fit() function is used, with the train data, train labels, batch size and epoch value as parameters. A higher epoch value generally improves the model, though training for too long can cause overfitting.
- Lastly, the predict function is used. The test data is passed to it, and it returns an array of probabilistic values for the images, which helps determine whether each image is forged or genuine.
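The softmax output and the final prediction step can be sketched in NumPy; the logit values below are made up for illustration:

```python
import numpy as np

def softmax(logits):
    """Normalize raw scores into probabilities that sum to 1."""
    e = np.exp(logits - np.max(logits))  # subtract the max for numerical stability
    return e / e.sum()

# hypothetical raw outputs of the dense layer for one test image:
# index 0 = genuine, index 1 = forged
logits = np.array([2.0, 0.5])
probs = softmax(logits)

classes = ["genuine", "forged"]
prediction = classes[int(np.argmax(probs))]
print(probs, prediction)  # probabilities summing to 1, and the chosen class
```
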
In the preprocessing section, preprocessing is first performed on the images to make them fit as inputs for our model. All three models require the input images to be of the same size, but since the scanned images vary widely in size, we resize them all to a fixed size of 105×105 using a PyTorch transform or OpenCV (cv2). The images are then inverted so that the background pixels have a value of 0. Figure 4 shows the flow of phases in training and testing the data.
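The resize-and-invert step can be sketched in pure NumPy; a simple nearest-neighbour resize stands in for the PyTorch or cv2 call, just to show the idea:

```python
import numpy as np

def preprocess(image, size=105):
    """Resize a grayscale scan to size x size and invert so background pixels become 0."""
    h, w = image.shape
    # nearest-neighbour resize: map each output pixel back to a source pixel
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    resized = image[rows[:, None], cols]
    # scanned signatures have a white (255) background; invert so background -> 0
    return 255 - resized

# a fake 300x400 scan: white background with one dark "stroke"
scan = np.full((300, 400), 255, dtype=np.uint8)
scan[100:120, 50:350] = 30
out = preprocess(scan)
print(out.shape)  # (105, 105)
```

After this step the background is 0 and the ink pixels carry the large values, which is the form all three models expect as input.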
