Background knowledge for the face recognition work described here

 Face Recognition System : ViaFace

  (previously efaceguard)

  Face recognition is one of the most notable technologies for the security and HCI industries. It is based on core technology from the fields of computer vision, pattern recognition, machine learning, and image processing.

 Samsung IT R&D Center built a face recognition system in 2002, after 3 years of research. It was a prototype named 'ViaFace'. It was later handed over to Samsung Data System and has since been used in various fields and industries, including the large apartment brand 'Raemian' and an airport in Mexico. The system comes in two types: Verification (1:1) and Identification (1:N, i.e. surveillance). The verification system is used mostly at apartment entrances, while the identification system is used in some companies and in public places such as airports to pick out fraud suspects among general passengers.

 

  UI of ViaFace. First, a user registers his or her normal face data in the DB. After that, whenever the user stands in front of the ViaFace camera, the system matches the user's facial features, extracted from the camera image, against the registered data.

Face Identification (surveillance system)

Face Verification

 What I've done : Face Detection

 I was responsible for the face detection algorithm module at the IT R&D Center. Although I was also involved in the recognition system to some extent, my main research area was face and eye detection. In particular, the result of eye detection is very important to the result of the recognition process.

1. Face Detection :

    (97~98% success rate on the FERET face set, with a very low FAR (False Acceptance Rate))

  •Image Enhancement: Noise reduction of the input image with various forms of histogram equalization, and better labeling of the eye area in the binary image via various filters, including morphological operators (a minimal sketch follows this list).

  •Face Candidate Extraction: I worked to come up with a multiple-operator filter, optimized through numerous comparison tests between a morphological operator, which is weak for eyes behind glasses but robust to illumination, and a second-order Gaussian filter, which is comparatively robust for glasses but weaker under illumination changes. On the other hand, I decided not to use neural-network or knowledge-based extraction after experiments and tests, because compared to the filter-based approach they took more computation time, which made it impossible to optimize them for better output.

  •Face Verification (Face Candidate Qualification): Face/non-face classification was applied. For this, we employed Principal Component Analysis (PCA, linear classification and a classical method used mainly for recognition), a Support Vector Machine (SVM, non-linear classification), and the Mahalanobis Distance (MHD, which computes a confidence against a reference image). After various tests, we obtained good verification results by applying a weighted combination of the individual classifiers (see the second sketch after this list).

  •Iris Localization: This was optimized after a lot of trial and error, based on iris segmentation using an edge-based Hough transform and a second-order Gaussian filter.
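
As a rough illustration of the enhancement and eye-area labeling steps above, here is a minimal sketch using OpenCV: histogram equalization to normalize illumination, a morphological black-hat filter to emphasize small dark regions such as eyes, and then thresholding plus connected-component labeling. The function name, kernel size, threshold, and input file are illustrative placeholders, not the actual ViaFace implementation.

```python
import cv2

def label_eye_candidates(gray):
    """Enhance a grayscale face image and label dark, eye-like blobs.

    Minimal sketch: histogram equalization for illumination normalization,
    a morphological black-hat to emphasize small dark regions (eye areas),
    then thresholding to a binary candidate map. Parameters are placeholders.
    """
    equalized = cv2.equalizeHist(gray)                       # normalize illumination
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    blackhat = cv2.morphologyEx(equalized, cv2.MORPH_BLACKHAT, kernel)
    _, binary = cv2.threshold(blackhat, 40, 255, cv2.THRESH_BINARY)
    num_labels, labels = cv2.connectedComponents(binary)     # label the candidate blobs
    return binary, num_labels, labels

if __name__ == "__main__":
    img = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)       # hypothetical input image
    binary, n, _ = label_eye_candidates(img)
    print(f"{n - 1} candidate blobs found")
```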

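The candidate-qualification step can be pictured as a weighted score fusion. The sketch below is hypothetical rather than the ViaFace code: it assumes a pre-trained PCA model (reconstruction error as a face-likeness cue), a pre-trained linear SVM, and a Mahalanobis reference model, and it simply combines the three cues with hand-picked weights.

```python
import numpy as np

def face_confidence(x, pca_mean, pca_basis, svm_w, svm_b, mhd_mean, mhd_cov_inv,
                    weights=(0.4, 0.4, 0.2)):
    """Fuse PCA, SVM, and Mahalanobis cues into one face/non-face score.

    Hypothetical sketch (not the original code). Assumes:
      - pca_mean, pca_basis: mean face and top eigenfaces (rows = eigenvectors)
      - svm_w, svm_b: a linear SVM already trained on face vs. non-face patches
      - mhd_mean, mhd_cov_inv: reference-face mean and inverse covariance
    `x` is a flattened, intensity-normalized candidate patch.
    """
    # 1) PCA cue: a small reconstruction error means the patch is face-like.
    coeffs = pca_basis @ (x - pca_mean)
    recon = pca_mean + pca_basis.T @ coeffs
    pca_score = 1.0 / (1.0 + np.linalg.norm(x - recon))

    # 2) SVM cue: signed distance to the separating hyperplane, squashed to (0, 1).
    svm_score = 1.0 / (1.0 + np.exp(-(svm_w @ x + svm_b)))

    # 3) Mahalanobis cue: distance to the reference-face distribution.
    d = x - mhd_mean
    mhd_score = 1.0 / (1.0 + np.sqrt(d @ mhd_cov_inv @ d))

    w1, w2, w3 = weights                       # placeholder weights
    return w1 * pca_score + w2 * svm_score + w3 * mhd_score

# A candidate is accepted as a face when face_confidence(...) exceeds a tuned threshold.
```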
Some detection tests; the detector is supposed to be robust to various gestures and various lighting environments.

1-1 Eye detection with a normal expression in a normal environment

 

Real-time detection with various expressions

 

Real-time detection in a dark environment and with hidden eyebrows

2. Face Recognition :

 •Need for an on-chip face detection algorithm in the camera

  : Manipulating the input image in the detection layer of a real-time system can put a heavy load on the system. Therefore, what I did was have the camera find the face candidate area first, so that only that area needs to be equalized (see the first sketch after this list).

 •Recognition Algorithm: research initially started along two tracks, which later converged.

   1) Starting from Modular Eigenfaces: Local Feature Analysis (LFA) and 2nd-order statistics were used. After that, Elastic Bunch Graph Matching (EBGM), which used a Gabor filter bank (8 orientations, 5 scales), was applied (see the Gabor-bank sketch after this list).

   2) Starting from a Dual Eigenspace: Independent Component Analysis (ICA) was applied after computing the within-class and between-class covariance with Linear Discriminant Analysis (LDA, i.e. Fisherfaces). However, the results of practical tests were not as good as those of the LFA method.

   3) Based on the tests, we decided to use a combined vector that fuses both the Eigenface and Fisherface features.

 •Compensation: I was able to generate a variety of reference faces using 3D model synthesis.
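
To illustrate the load-reduction idea from the on-chip detection point above: if the camera (or an earlier stage) already reports a face-candidate rectangle, only that region needs to be equalized instead of the whole frame. The sketch below is a minimal illustration; the function name, rectangle, and file name are placeholders.

```python
import cv2

def equalize_face_roi(gray, roi):
    """Equalize only the reported face-candidate rectangle, not the whole frame.

    Sketch of the load-reduction idea: `roi` = (x, y, w, h), as it might be
    reported by an on-chip detector; only that patch is histogram-equalized.
    """
    x, y, w, h = roi
    gray[y:y + h, x:x + w] = cv2.equalizeHist(gray[y:y + h, x:x + w])
    return gray

if __name__ == "__main__":
    frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)    # hypothetical frame
    frame = equalize_face_roi(frame, (120, 80, 160, 160))    # placeholder rectangle
```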

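The Gabor "jet" used in the EBGM track (8 orientations, 5 scales) can be sketched as a small filter bank. The code below builds the bank with OpenCV's getGaborKernel and samples the absolute filter responses at one landmark; the kernel size, wavelengths, and other parameters are illustrative guesses, not the original ViaFace settings.

```python
import cv2
import numpy as np

def gabor_jet(gray, point, ksize=31, wavelengths=(4, 6, 8, 11, 16)):
    """Compute a Gabor 'jet' (8 orientations x 5 scales) at one landmark.

    Illustrative sketch of an EBGM-style feature: filter the image with a
    bank of Gabor kernels and sample the absolute responses at `point`.
    Kernel size and wavelengths are placeholder values.
    """
    x, y = point
    gray = np.float32(gray)
    jet = []
    for lam in wavelengths:                      # 5 scales
        for k in range(8):                       # 8 orientations
            theta = k * np.pi / 8.0
            kernel = cv2.getGaborKernel((ksize, ksize), sigma=0.56 * lam,
                                        theta=theta, lambd=lam, gamma=0.5,
                                        psi=0, ktype=cv2.CV_32F)
            response = cv2.filter2D(gray, cv2.CV_32F, kernel)
            jet.append(abs(response[y, x]))
    return np.array(jet)                         # 40-dimensional feature vector
```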
What I've done : Color Segmentation

Skin Color Segmentation :

 •The need to eliminate non-face regions quickly led me to research skin color segmentation.

  : The essential step is to build an optimized LUT (Look-Up Table) that decides whether an incoming pixel is skin-colored or not. To build the LUT, numerous skin and non-skin patch images and their color information are used for training. A two-layer (skin/non-skin) array (the LUT) is accumulated over the HS (hue-saturation) plane. The two distributions are then refined in one of two ways: with the EM (Expectation-Maximization) algorithm or with a SOM (Self-Organizing Mixture model). When a pixel comes in, it is compared against the skin and non-skin mixture models and is finally classified as skin or non-skin (a minimal sketch of the LUT idea is shown below).
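
A minimal version of that HS-plane look-up table can be sketched as two 2-D histograms plus a likelihood-ratio test, as below. This only illustrates the training and classification flow; the EM/SOM refinement described above is omitted, and the bin counts and decision threshold are placeholders.

```python
import cv2
import numpy as np

H_BINS, S_BINS = 32, 32   # placeholder resolution of the HS look-up table

def build_skin_lut(skin_patches, nonskin_patches):
    """Accumulate skin and non-skin HS histograms and return a ratio LUT.

    Sketch of the two-layer LUT idea only; the EM/SOM refinement of the
    distributions is omitted. Patches are lists of small BGR images.
    """
    skin_hist = np.zeros((H_BINS, S_BINS), np.float64)
    nonskin_hist = np.zeros((H_BINS, S_BINS), np.float64)
    for patches, hist in ((skin_patches, skin_hist), (nonskin_patches, nonskin_hist)):
        for bgr in patches:
            hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
            h = hsv[..., 0].astype(np.int32) * H_BINS // 180   # OpenCV hue is [0, 180)
            s = hsv[..., 1].astype(np.int32) * S_BINS // 256
            np.add.at(hist, (h.ravel(), s.ravel()), 1.0)
    skin_hist /= skin_hist.sum() + 1e-9
    nonskin_hist /= nonskin_hist.sum() + 1e-9
    return skin_hist / (nonskin_hist + 1e-9)                   # likelihood-ratio LUT

def is_skin_pixel(lut, bgr_pixel, threshold=1.0):
    """Classify one BGR pixel with the HS ratio LUT (threshold is a placeholder)."""
    hsv = cv2.cvtColor(np.uint8([[bgr_pixel]]), cv2.COLOR_BGR2HSV)[0, 0]
    h = int(hsv[0]) * H_BINS // 180
    s = int(hsv[1]) * S_BINS // 256
    return lut[h, s] > threshold
```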

 However, skin color segmentation is a somewhat dated technique, because of its weakness under varied lighting environments and its dependency on the particular camera type. Unfortunately, my skin model could not be fed into our project. But I am sure that, with more robust models, color segmentation could be very useful in some areas.

What I've done : Eye Detection & Compensation

Eye Detection & Compensation : To crop the face region at an exact ratio and to get exact eye locations, we need to compensate the eye coordinates. There are many methods to try; among them, a second-order Gaussian filter and an edge-based circular Hough transform were mixed for the compensation. The second-order Gaussian filter pushes lower-brightness areas toward the lowest value and higher-brightness areas toward the highest, while the Hough transform estimates the most probable eye coordinates by finding the parameters of the most accumulated circle-like area. Mixing these with other methods, we can compensate the eye coordinates (a minimal sketch follows).
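
The circular-Hough part of this compensation can be sketched with OpenCV's HoughCircles: search a small window around the rough eye position for the strongest circle-like (iris-like) region and snap the coordinate to its center. This is a hypothetical sketch; the window size, radii, and Hough parameters are placeholders.

```python
import cv2

def refine_eye_center(gray, rough_xy, win=24):
    """Refine a rough eye coordinate with an edge-based circular Hough transform.

    Sketch only: crop a window around the rough position, look for the
    strongest circle (iris-like region), and return its center in image
    coordinates. Falls back to the rough position if no circle is found.
    Window size, radii, and Hough parameters are placeholder values.
    """
    x0, y0 = rough_xy
    x1, y1 = max(x0 - win, 0), max(y0 - win, 0)
    patch = gray[y1:y0 + win, x1:x0 + win]
    patch = cv2.GaussianBlur(patch, (5, 5), 0)               # suppress noise edges
    circles = cv2.HoughCircles(patch, cv2.HOUGH_GRADIENT, dp=1, minDist=win,
                               param1=80, param2=15, minRadius=4, maxRadius=14)
    if circles is None:
        return rough_xy                                      # keep the rough estimate
    cx, cy, _ = circles[0][0]                                # strongest circle
    return int(x1 + cx), int(y1 + cy)
```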

 

What I've done : Stereo Image Reconstruction (IVR)

Stereo image reconstruction : This was an extension of my BS thesis, "Simple Enhanced Block-Matching Algorithm for Intermediate View Reconstruction". I simply tried to do fast feature extraction from stereoscopic images. IVR is one of the basic techniques in stereo image reconstruction, and the algorithm in my BS thesis was a very simple one. The reason I include some images here is just that it is also related to computer vision :p (a toy block-matching sketch is shown below).
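
As a toy version of the block-matching step behind intermediate view reconstruction: for each block of the left image, search a horizontal range in the right image for the best SAD match and record the disparity; an intermediate view can then be interpolated from the disparity map (that interpolation step is omitted here). The block size and search range are arbitrary placeholders.

```python
import numpy as np

def block_matching_disparity(left, right, block=8, max_disp=32):
    """Estimate a per-block disparity map with a simple SAD block search.

    Toy sketch of the block-matching step used for intermediate view
    reconstruction (IVR); `left` and `right` are rectified grayscale images
    as numpy arrays. Block size and search range are placeholders.
    """
    left = left.astype(np.float32)
    right = right.astype(np.float32)
    h, w = left.shape
    disp = np.zeros((h // block, w // block), np.float32)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = left[y:y + block, x:x + block]
            best_d, best_cost = 0, np.inf
            for d in range(min(max_disp, x) + 1):            # search leftwards in the right image
                cand = right[y:y + block, x - d:x - d + block]
                cost = np.abs(ref - cand).sum()              # sum of absolute differences
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[by, bx] = best_d
    return disp
```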

4 view

4 view

 7 view

7 view

 

 

Link to a guide on the terminology and theory, and to the theses mentioned above

 

 

   All contents, including images and UI formats, are copyright kihwan23, 2005. 'kihwan23' is Kihwan (KAY) Kim's nickname and trademark :)