Feature-Fusion Guidelines for Image-Based Multi-Modal Biometric Fusion

Authors

  • Dane Brown, Rhodes University
  • Karen Bradshaw, Rhodes University

DOI:

https://doi.org/10.18489/sacj.v29i1.436

Abstract

Unlike the match score level, the feature level lacks established multi-modal fusion guidelines. This work demonstrates a new approach to improved image-based biometric feature-fusion. The approach extracts and combines face, fingerprint, and palmprint features at the feature level for improved human identification accuracy. The feature-fusion guidelines proposed in our recent work are extended by adding a new face segmentation method and a support vector machine (SVM) classifier. The new face segmentation method improves the face identification equal error rate (EER) by 10%. The SVM classifier, combined with the feature selection approach proposed in our recent work, outperforms other classifiers when using a single training sample. The feature-fusion guidelines take the form of strengths and weaknesses observed in the applied feature processing modules during preliminary experiments. These guidelines are used to implement an effective biometric fusion system at the feature level, using a novel feature-fusion methodology, reducing the EER on two groups of three datasets: SDUMLA face, SDUMLA fingerprint, and IITD palmprint; and MUCT face, MCYT fingerprint, and CASIA palmprint.
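To illustrate the general idea of feature-level fusion with an SVM, the sketch below concatenates per-modality feature vectors and classifies with a single training sample per subject. This is a minimal, hedged illustration only: the synthetic feature vectors, their dimensions, and the plain L2-normalise-then-concatenate fusion are assumptions standing in for the paper's actual segmentation, feature extraction, and feature selection modules, which are not reproduced here.

```python
# Minimal sketch of feature-level biometric fusion (assumed pipeline:
# normalise each modality's feature vector, concatenate, train an SVM
# on one sample per subject). Synthetic data replaces real biometrics.
import numpy as np
from sklearn.preprocessing import normalize
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects = 5

def fake_features(dim):
    # One enrolment template per subject, plus a noisy probe of each
    # (stand-ins for real face/fingerprint/palmprint features).
    templates = rng.normal(size=(n_subjects, dim))
    probes = templates + 0.1 * rng.normal(size=(n_subjects, dim))
    return templates, probes

face_t, face_p = fake_features(64)      # hypothetical face features
finger_t, finger_p = fake_features(32)  # hypothetical fingerprint features
palm_t, palm_p = fake_features(48)      # hypothetical palmprint features

def fuse(face, finger, palm):
    # Feature-level fusion: L2-normalise each modality so no single
    # modality dominates, then concatenate into one feature vector.
    return np.hstack([normalize(face), normalize(finger), normalize(palm)])

X_train = fuse(face_t, finger_t, palm_t)  # one training sample per subject
X_test = fuse(face_p, finger_p, palm_p)
y = np.arange(n_subjects)

clf = SVC(kernel="linear").fit(X_train, y)
acc = clf.score(X_test, y)
print(f"probe identification accuracy: {acc:.2f}")
```

A linear kernel is a reasonable default when only one training sample per class is available, since there is too little data to fit a more flexible decision boundary.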

Author Biographies

Dane Brown, Rhodes University

Computer Science PhD student

Karen Bradshaw, Rhodes University

Computer Science Professor

Published

2017-07-08

Section

Research Papers (general)