Feature-Fusion Guidelines for Image-Based Multi-Modal Biometric Fusion

Dane Brown, Karen Bradshaw

Abstract


The feature level, unlike the match score level, lacks multi-modal fusion guidelines. This work demonstrates a new approach to improved image-based biometric feature-fusion. The approach extracts and combines face, fingerprint, and palmprint features at the feature level for improved human identification accuracy. Feature-fusion guidelines, proposed in our recent work, are extended by adding a new face segmentation method and the support vector machine classifier. The new face segmentation method improves the face identification equal error rate (EER) by 10%. The support vector machine classifier, combined with the new feature selection approach proposed in our recent work, outperforms other classifiers when using a single training sample. The feature-fusion guidelines take the form of strengths and weaknesses observed in the applied feature processing modules during preliminary experiments. These guidelines are used to implement an effective biometric fusion system at the feature level, using a novel feature-fusion methodology, reducing the EER on two groups of three datasets: SDUMLA face, SDUMLA fingerprint, and IITD palmprint; and MUCT face, MCYT fingerprint, and CASIA palmprint.
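The pipeline the abstract summarises (per-modality feature extraction, feature-level fusion, and EER evaluation) can be sketched minimally as follows. The normalise-and-concatenate fusion recipe and all function names here are illustrative assumptions for exposition, not the authors' actual implementation.

```python
# Illustrative sketch only: a common baseline for feature-level fusion is to
# normalise each modality's feature vector and concatenate them, then
# evaluate a verifier by its equal error rate (EER). This is an assumed
# recipe, not the paper's method.
import math

def zscore(vec):
    """Z-score normalise one modality's feature vector so modalities
    contribute on a comparable scale."""
    mean = sum(vec) / len(vec)
    std = math.sqrt(sum((x - mean) ** 2 for x in vec) / len(vec)) or 1.0
    return [(x - mean) / std for x in vec]

def fuse_features(*modality_vectors):
    """Feature-level fusion: concatenate normalised per-modality vectors
    (e.g. face + fingerprint + palmprint) into one fused vector."""
    fused = []
    for vec in modality_vectors:
        fused.extend(zscore(vec))
    return fused

def equal_error_rate(genuine, impostor):
    """EER: the operating point where the false accept rate (FAR) and
    false reject rate (FRR) are (approximately) equal."""
    best_eer, best_gap = 1.0, float("inf")
    for t in sorted(set(genuine + impostor)):
        far = sum(s >= t for s in impostor) / len(impostor)
        frr = sum(s < t for s in genuine) / len(genuine)
        gap = abs(far - frr)
        if gap < best_gap:
            best_gap, best_eer = gap, (far + frr) / 2
    return best_eer

# Usage: fuse two (hypothetical) modality feature vectors.
face = [0.2, 0.9, 0.4]
finger = [12.0, 3.5, 7.1]
fused = fuse_features(face, finger)  # 6-dimensional fused vector
```

In a full system, the fused vector would feed a classifier such as the support vector machine used in the paper; it is omitted here to keep the sketch dependency-free.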


DOI: http://dx.doi.org/10.18489/sacj.v29i1.436

Copyright (c) 2017 Dane Brown, Karen Bradshaw

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.