Abstract
Counterfeit documents, human trafficking, identity theft, terrorist
attacks and cyber-attacks are among the many challenging problems and
threats to which the whole world is exposed. In today's era of advancing
technologies, more people and more devices are connected and can easily
communicate with each other via the Internet. Although global
communication is a necessity, it also entails the risk of exposing
personal or highly classified data to malicious exploitation. The need
for increased security has made biometrics a necessary alternative. The
growing role of biometric methods has led countries such as India, the
United Arab Emirates and Japan to implement biometric systems for
applications in national ID cards, border security, immigration control,
law enforcement, retail stores, banks and government facilities.
Amongst the various biometric modalities, the human iris is regarded as
the most accurate, and as such has drawn considerable attention and
gained momentum for over a decade, owing to the uniqueness, reliability
and stability of iris features over a person's lifetime, the high
accuracy achieved for authentication, and the ease of image acquisition. A
typical iris recognition system (IRS) consists of four modules namely iris
segmentation, normalisation, feature extraction and template matching.
Each module relies on traditional automated algorithms that have been
used successfully to uniquely identify and verify a person within a
large database of enrolled individuals. The drawback of the classical
iris segmentation algorithm, for instance, is that it assumes the pupil
and iris boundaries are concentric circles, that is, that they share the
same centre, which is not generally the case. The
normalisation stage uses the rubber sheet model to transform the
segmented iris from a Cartesian plane to polar coordinates to cater for...
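As an illustration, the rubber sheet mapping described above can be sketched as follows. This is a minimal NumPy sketch, not the thesis's implementation: the function name `rubber_sheet_normalise`, the `(x, y, r)` circle parameterisation, the grid resolutions and the use of nearest-neighbour sampling on a single-channel image are all assumptions made here for clarity. Note that the pupil and iris circles need not be concentric; each radial sample is interpolated between the two boundaries independently per angle.

```python
import numpy as np

def rubber_sheet_normalise(image, pupil, iris, radial_res=64, angular_res=256):
    """Rubber-sheet model: remap the iris annulus onto a fixed-size
    polar grid. `pupil` and `iris` are (x, y, r) circles; they do not
    have to share a centre."""
    xp, yp, rp = pupil
    xi, yi, ri = iris
    theta = np.linspace(0, 2 * np.pi, angular_res, endpoint=False)
    r = np.linspace(0, 1, radial_res)
    # Boundary points on the pupil and iris circles for each angle.
    x_in = xp + rp * np.cos(theta)
    y_in = yp + rp * np.sin(theta)
    x_out = xi + ri * np.cos(theta)
    y_out = yi + ri * np.sin(theta)
    # Linear interpolation between the two (possibly non-concentric)
    # boundaries: r = 0 lies on the pupil, r = 1 on the iris boundary.
    xs = (1 - r)[:, None] * x_in[None, :] + r[:, None] * x_out[None, :]
    ys = (1 - r)[:, None] * y_in[None, :] + r[:, None] * y_out[None, :]
    # Nearest-neighbour sampling, clipped to the image bounds.
    h, w = image.shape
    xs = np.clip(np.round(xs).astype(int), 0, w - 1)
    ys = np.clip(np.round(ys).astype(int), 0, h - 1)
    return image[ys, xs]  # shape (radial_res, angular_res)
```

Because the output grid has a fixed size regardless of pupil dilation or camera distance, templates extracted from it can be compared directly in the matching stage.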
D.Phil. (Electrical and Electronic Engineering)