
Modelling Uncertainty in Representation of Facial Features for Face Recognition

Face recognition can be defined as the identification of individuals from images of their faces by using a stored database of faces labeled with the identities of the persons. The task involves the detection of faces in cluttered backgrounds, the extraction of features from the face regions, and finally recognition, as presented in (Hsu et al., 2002). It is a difficult problem because of factors such as pose, facial expression, hair style, make-up etc., which affect the appearance of an individual's facial features. In addition to these facial variations, the lighting, background, and scale changes also make this task even more challenging. Additional problematic conditions include noise, occlusion, and many other possible factors. Many methods have been proposed for face recognition within the last two decades. Among these techniques, the appearance-based methods are very popular because of their efficiency in handling these problems (Chellappa et al., 1995); in particular, the linear appearance-based methods have been studied extensively since the early 1990s. The defining characteristic of appearance-based algorithms is that they directly use the pixel intensity values of the face image as the features on which to base the recognition decision. The pixel intensities that are used as features are represented by single-valued variables. However, when a face is captured in different orientations or under different lighting conditions, these values do change.

The single-valued variables may not be able to capture this variability of the facial features. In symbolic data analysis (Diday, 1993), interval-valued data are analyzed. Therefore, there is a need to focus the research efforts towards extracting features which are robust to variations due to illumination, orientation and facial expression changes, by representing the face images as symbolic objects of interval type variables (Hiremath & Prabhakar, 2005). The representation of face images as symbolic objects accounts for the variations of human faces under different lighting conditions, orientations and facial expressions, and it also reduces the dimensionality of the image space. In (Hiremath & Prabhakar, 2005), a symbolic PCA approach for face recognition is presented, in which symbolic PCA is used to compute a set of subspace basis vectors for symbolic faces, which are then projected into the compressed subspace. This method requires a smaller number of features to achieve the same recognition rate. However, symbolic PCA encodes only second order statistics, i.e. pixel-wise covariance among the pixels, and is insensitive to the dependencies among multiple (more than two) pixels in the patterns. As these second order statistics provide only partial information on the statistics of both natural images and human faces, it might become necessary to incorporate higher order statistics as well. The kernel PCA (Scholkopf et al., 1998) is capable of deriving low-dimensional features that encode the nonlinear relations among the pixel intensity values, such as the relationships among three or more pixels, which capture important information for recognition. The kernel PCA is extended to symbolic data analysis as symbolic kernel PCA (Hiremath & Prabhakar, 2006) for face recognition, and the experimental results show an improved recognition rate as compared to the symbolic PCA method. The extension of symbolic analysis to other face recognition techniques is discussed further below.
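As a point of reference for the discussion above, the following minimal sketch contrasts conventional PCA (second-order statistics only) with kernel PCA on vectorized face images. It is illustrative only, not the symbolic variant of Hiremath & Prabhakar; the data shapes and parameter values are assumptions, and scikit-learn is used for brevity.

```python
# Illustrative sketch: linear PCA vs. kernel PCA features for
# vectorized face images (not the symbolic variant discussed above).
import numpy as np
from sklearn.decomposition import PCA, KernelPCA

rng = np.random.default_rng(0)
faces = rng.random((100, 32 * 32))   # 100 face images flattened to 1024-D (placeholder data)

# Linear PCA captures only second-order (pixel-wise covariance) statistics.
pca = PCA(n_components=20)
linear_features = pca.fit_transform(faces)

# Kernel PCA with an RBF kernel implicitly encodes higher-order
# relations among pixel intensities via the kernel trick.
kpca = KernelPCA(n_components=20, kernel="rbf", gamma=1e-3)
nonlinear_features = kpca.fit_transform(faces)

print(linear_features.shape, nonlinear_features.shape)  # (100, 20) (100, 20)
```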

Figure 6. Results of face detection: a) facial feature extraction; b) detected face in box.

Experimental Results: The MATLAB 6.0 implementation of the above described procedure on a Pentium IV at 2.6 GHz yields good detection performance. The search area for facial feature extraction is confined to the total area covered by the support regions, about 0.67D² (where D is the distance between the eyes), which is considerably small compared to the image size. This reduced search area cuts down the detection time to a great extent.

Sample detection results are shown in Figure 7 and Figure 8, with detected faces occluded by hair, very small face sizes, faces occluded by a hand, and dark shadows on faces. A sample comparison with different state-of-the-art detectors, proposed by (Shih and Liu, 2004) and (Schneiderman and Kanade, 2000), shows that the fuzzy face model approach based on skin color segmentation (the H-D method) is comparable to the others in terms of detection rate and very low in both detection time and false detections (Figure 9).

Table 4. Comparison of the detection time, dataset and detection rate of the S-L, S-K and H-D methods on the MIT-CMU dataset.

Figure 7. Sample detection results for single as well as multiple human faces of various sizes.

Figure 8. Sample images with expressions, lighting conditions, and complex backgrounds.

Optimization of Feature Sets: A small set of geometrical features is sufficient for the face recognition task. The facial features detected based on the face model are ordered and normalized, and a geometrical feature vector is constructed with the distances, areas, evaluation values and fuzzy membership values. Normalization is done with respect to the distance between the eyes. It is demonstrated that the resultant vector is invariant to scale, rotation, and facial expressions; this vector uniquely characterizes each human face despite such changes, and hence it is well suited for the face recognition system. Further, it is a 1-dimensional feature vector of reduced dimensionality compared to the other methods (Turk & Pentland, 1991; Belhumeur et al., 1997), which are based on 2-dimensional feature sets in the intensity space. In (Hiremath and Danti, Dec 2004), the method of optimization of feature sets is presented, and it is described below.

3.1 Geometrical Facial Features

The geometrical feature set contains a total of 26 features, of which 12 are obtained from the face detection stage and the remaining 14 projected features are determined by the projection of facial features such as the eyes, eyebrows, nose, mouth and ears.

Facial Features: Using the face detector based on the Lines-of-Separability face model (Hiremath, P.S. & Danti, A., Dec 2004; Dec 2005), the geometrical facial features listed in Table 5 are extracted.

Projected Features: These are the features obtained by projecting the facial features perpendicularly onto the Diagonal Reference Line (DRL), as shown in Figure 10. The DRL is

the line bisecting the first quadrant in the HRL-VRL plane and is the locus of points (x, y) equidistant from the HRL and the VRL. The equation of the DRL is given by Ax + By + C = 0, where the coefficients A, B and C are determined from the HRL and the VRL.

Table 5. List of geometrical features extracted from face detection: the evaluation values and fuzzy membership values of the eyes, the left and right eyebrows, the nose and the mouth, and the overall evaluation value of the face.

Figure 10. The HRL, VRL and DRL of the face model, with the projection of the facial features onto the DRL.

Distance Ratio Features: The distance ratios are computed as follows. Let (x_k, y_k) be the centroid K of the k-th feature (e.

g. the left eyebrow in Figure 10). Let P be the projection of the point K on the DRL, and let M denote the intersection of the HRL and the VRL. Then the following distances are computed:

|KP| (perpendicular distance) (18)
|MK| (radial distance) (19)
|MP| (diagonal distance) (20)

In this

notation, R_LEb denotes the distance ratio obtained by the projection of the left eyebrow. Similarly, the distance ratios R_LE, R_RE, R_REb, R_Nose, R_Mouth, R_LEar and R_REar are determined for the remaining facial features.

Distance Ratio Features in Combination: The projections onto the DRL are also used to compute the distance ratios for pairs of facial features:

R_LE2RE (Left Eye to Right Eye) (22)
R_LEb2REb (Left Eyebrow to Right Eyebrow) (23)
R_N2M (Nose to Mouth) (24)
R_LEar2REar (Left Ear to Right Ear) (25)
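A minimal sketch of the projection geometry follows. It assumes, as the garbled equations (18)-(20) suggest, that M is the intersection of the HRL and VRL, that P is the foot of the perpendicular from a feature centroid K onto the DRL, and that the ratio is formed as |MP|/|MK|; the function name and that exact ratio are assumptions for illustration.

```python
import numpy as np

def distance_ratio(K, M, u_drl):
    """Project the feature centroid K onto the DRL (unit direction u_drl
    through M) and return the perpendicular, radial and diagonal
    distances plus their ratio. The ratio |MP| / |MK| is an assumed
    reading of the chapter's equations (18)-(20)."""
    K, M, u_drl = map(np.asarray, (K, M, u_drl))
    MK = K - M
    t = MK @ u_drl                   # signed length of the projection on the DRL
    P = M + t * u_drl                # foot of the perpendicular from K
    d_perp = np.linalg.norm(K - P)   # perpendicular distance |KP|
    d_rad = np.linalg.norm(MK)       # radial distance |MK|
    d_diag = np.linalg.norm(P - M)   # diagonal distance |MP|
    return d_perp, d_rad, d_diag, d_diag / d_rad

# Example: DRL bisecting the first quadrant of the HRL-VRL frame.
u = np.array([1.0, 1.0]) / np.sqrt(2.0)
print(distance_ratio(K=(4.0, 1.0), M=(0.0, 0.0), u_drl=u))
```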

Area Features: The triangular areas formed by the eyes, eyebrows, nose and mouth are shown in Figure 11, and the areas covered by these triangles are used to determine the area ratio features. In Figure 11(a), e1 and e2 denote the right and left eyes respectively, and (x1, y1), (x2, y2), (x3, y3) and (x4, y4) are the centroids of the right eye, left eye, nose and mouth respectively. The triangular area A_en formed by the eyes and nose, and the triangular area A_em formed by the eyes and mouth, are computed from these centroids.

Figure 11. Triangular area features: (a) areas formed by the eyes, nose and mouth; (b) areas formed by the eyebrows, nose and mouth.

Then the ratio of the areas covered by the eyes, nose and mouth is given by equation (27), A_Eyes = A_en / A_em. In Figure 11(b), b1 and b2 denote the right and left eyebrows respectively, and n and m denote the nose and mouth respectively; the coordinates (x1, y1), (x2, y2), (x3, y3) and (x4, y4) are their respective centroids. The triangular area A_ebn formed by the eyebrows and nose, and the triangular area A_ebm formed by the eyebrows and mouth, are computed in the same manner. Then the ratio of the areas covered by the eyebrows, nose and mouth is given by equation (29), A_Eyebrows = A_ebn / A_ebm. The projected features are listed in Table 6.

R_LE        Distance ratio by left eye
R_RE        Distance ratio by right eye
R_LEb       Distance ratio by left eyebrow
R_REb       Distance ratio by right eyebrow
R_Nose      Distance ratio by nose
R_Mouth     Distance ratio by mouth
R_LEar      Distance ratio by left ear
R_REar      Distance ratio by right ear
R_LE2RE     Distance ratio by left and right eyes
R_LEb2REb   Distance ratio by left and right eyebrows
R_N2M       Distance ratio by nose and mouth
R_LEar2REar Distance ratio by left and right ears
A_Eyes      Area ratio by eyes, nose and mouth
A_Eyebrows  Area ratio by eyebrows, nose and mouth

Table 6. List of projected features.

The geometrical feature vector thus comprises 26 features, of which 12 are from Table 5 and 14 are from Table 6.
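A minimal sketch of the area-ratio computation is given below, using the shoelace formula for the triangle areas; treating equations (27) and (29) as the ratios A_en/A_em and A_ebn/A_ebm is an assumption based on the surrounding text, and the coordinate values are placeholders.

```python
def tri_area(p1, p2, p3):
    """Area of the triangle with vertices p1, p2, p3 (shoelace formula)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return 0.5 * abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))

def area_ratio(left, right, nose, mouth):
    """Ratio of the triangle closed at the nose to the triangle closed at
    the mouth, formed with a left/right feature pair (eyes or eyebrows)."""
    a_n = tri_area(left, right, nose)   # e.g. A_en (assumed form)
    a_m = tri_area(left, right, mouth)  # e.g. A_em (assumed form)
    return a_n / a_m                    # e.g. A_Eyes, eqn (27) (assumed form)

# Example with centroid coordinates from a detected face (placeholder values).
print(area_ratio((30, 40), (70, 40), (50, 70), (50, 90)))  # 0.6
```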

3.2 Optimization of Feature Sets

Three subsets of the 26 features, in different combinations, are considered for optimization. The subsets A, B and C consist of 14, 6 and 14 features, respectively, as given below:

Subset A = (R_LE, R_RE, R_LEb, R_REb, R_Nose, R_Mouth, R_LEar, R_REar, R_LE2RE, R_LEb2REb, R_N2M, R_LEar2REar, A_Eyes, A_Eyebrows) (30)
Subset B = (a combination of 6 of the above features) (31)
Subset C = (the evaluation values and fuzzy membership values of the facial features, together with the area ratios A_Eyes and A_Eyebrows) (32)

Every feature subset is evaluated by requiring maximal distances between the classes and minimal distances between the patterns of one class. Here each class represents one person,

and the different images of the person are considered as patterns. The effectiveness of a feature subset is determined by an evaluation function F: for each feature f_i, with M_i and D_i denoting the mean and variance of its feature values, F is obtained by summing, over the features of the subset, the ratio of the dispersion of the sample standard deviations to the dispersion of the sample means. For illustration, experiments are carried out on a face database which contains 40 subjects, or classes, each with 10 variations.
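Since the exact definition of F is not legible in the source, the sketch below implements one plausible reading as an assumption: per feature, the average within-class standard deviation divided by the dispersion of the class means, summed over the subset, so that a lower F indicates tighter, better-separated classes, consistent with the discussion that follows.

```python
import numpy as np

def subset_score(X, labels):
    """Assumed variant of the evaluation function F: for each feature,
    the average within-class standard deviation divided by the standard
    deviation of the class means, summed over all features in the
    subset. Lower F => classes are tighter and further apart."""
    X = np.asarray(X, dtype=float)   # shape (n_patterns, n_features)
    classes = np.unique(labels)
    means = np.array([X[labels == c].mean(axis=0) for c in classes])
    stds = np.array([X[labels == c].std(axis=0) for c in classes])
    return float(np.sum(stds.mean(axis=0) / means.std(axis=0)))

# Example: 40 classes x 10 patterns each, a 14-feature subset (placeholder data).
rng = np.random.default_rng(1)
labels = np.repeat(np.arange(40), 10)
X = rng.normal(loc=labels[:, None], scale=0.3, size=(400, 14))
print(subset_score(X, labels))  # small value: well-separated classes
```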

In Figure 12, the F values along the y-axis are plotted against the feature subsets along the x-axis. The lower the F value, the stronger the discriminating power of the feature subset. The Subset C is well optimized, with the lowest F values compared to the Subsets A and B, and hence it corresponds to a better feature subset.

Figure 12. Optimization of feature subsets.

The above feature Subset C is considered as the most optimized geometrical feature vector. The relative geometrical distances between the facial features such as the eyes, nose, mouth and eyebrows vary proportionally with respect to scaling, rotation and facial expressions, and hence their feature values remain stable in the optimized feature vector. Figure 13 illustrates the invariance property of the

feature vectors for the images shown in Figure 13(a). In Figure 13(b), the feature vectors exhibit negligible variations in the feature values across the original, scaled, rotated and varied-expression images of the same person.

Figure 13. Illustration of the invariance property: a) different images of the same person (original, scaled, rotated, varied expression); b) the corresponding feature vectors.

In geometric feature based face recognition, a human face is described by several features, and these features have different distributions. Geometrical features always have the merit of reducing the huge space that is normally required for face image representation, which in turn improves the recognition speed considerably (Zhao et al., 2000). In (Hiremath and Danti, Jan 2006), the geometric-Gabor feature extraction is proposed for face recognition, and it is described in this section.

4.1 Geometric-Gabor Feature Extraction

For recognizing a face, the geometrical features and the Gabor features are combined to form the feature vector for recognition.

The optimized geometrical feature set (Subset C) is considered as the set of geometrical features for face recognition. The Gabor features are obtained by applying the Gabor filters at the locations of the facial features obtained by our face detector, and the filter responses at these locations are taken as the Gabor features. The Gabor feature extraction process is as given below. The local information around the locations of the facial features is captured by the Gabor filters, each of which is a sinusoid modulated by a 2D Gaussian function and tuned to a particular orientation and frequency. The Gabor filters resemble the receptive field profiles of the simple cells in the visual cortex, offering tunable orientation, radial frequency bandwidth and center frequency. The localization in both space and frequency yields a certain amount of robustness against translation, distortion, rotation and scaling.
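The sketch below samples Gabor responses at detected feature locations. The kernel parameters (wavelength, orientations, Gaussian spread), the function names, and the use of the response magnitude are assumptions, since the chapter's exact filter bank settings are not recoverable here.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(lam, theta, sigma, size=31):
    """2D Gabor kernel: a complex sinusoid of wavelength lam and
    orientation theta, modulated by a Gaussian of spread sigma."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    gauss = np.exp(-(xr**2 + yr**2) / (2.0 * sigma**2))
    return gauss * np.exp(1j * 2.0 * np.pi * xr / lam)

def gabor_features(gray, points, lams=(4, 8), n_orient=4, sigma=3.0):
    """Magnitudes of the Gabor responses at the facial feature points,
    concatenated over wavelengths and orientations."""
    feats = []
    for lam in lams:
        for k in range(n_orient):
            kern = gabor_kernel(lam, k * np.pi / n_orient, sigma)
            resp = fftconvolve(gray, kern, mode="same")
            feats.extend(np.abs(resp[r, c]) for (r, c) in points)
    return np.array(feats)

img = np.random.default_rng(2).random((64, 64))   # placeholder face region
print(gabor_features(img, [(20, 20), (20, 44), (40, 32)]).shape)  # (24,)
```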

The symbolic analysis approach has also been extended to face recognition techniques based on linear discriminant analysis, two-dimensional discriminant analysis, independent component analysis, factorial discriminant analysis and kernel discriminant analysis (Hiremath & Prabhakar, Dec 2006, Jan 2006, Aug 2006, Sept 2006). It is quite obvious that the literature on face recognition is vast, with methods addressing a broad range of issues in face detection and recognition. However, the objective of the study in the present chapter is the modeling of uncertainty in the representation of facial features, typically arising due to the variations in the conditions under which the images of a person are captured, as well as the variations in personal attributes such as the age, expression or mood of the person at the time of capturing the image. Two approaches, namely, the fuzzy geometric approach

and symbolic data analysis, are considered for face recognition to model this uncertainty of information.

2. Fuzzy Face Model for Face Detection

In (Hiremath and Danti, Dec 2005), the detection of multiple frontal human faces based on facial feature extraction, using the fuzzy face model and fuzzy rules, is proposed. Faces are first segmented by the skin color segmentation method, in which the 2D chromatic color space of the components Cb and Cr is derived by a statistical sampling technique. Each potential face region is then verified for a face: initially, the eyes are searched, and then the fuzzy face model is constructed by dividing the facial area into quadrants by two reference lines drawn through the eyes. The facial features are searched in the fuzzy face model using the fuzzy rules, and then the face is detected by the process of defuzzification. An overview of this fuzzy geometric approach is given below.

2.1 Skin Color Segmentation

Human skin color, with the exception of very black skin, falls in a narrow color space (Hal, 2002).

Taking advantage of this knowledge, the skin regions are segmented using the skin color space as follows.

Skin Color Space: The YCbCr color model is used to build the skin color space. It includes all possible facial skin color regions, and only the chromatic color components Cb and Cr are used for segmentation, using the sigma control limits (Hiremath and Danti, Feb 2006). The procedure to build the skin color space is described as follows. The input images are in RGB colors. The RGB color space is sensitive to lighting conditions, and hence the luminance is not useful for skin segmentation. The RGB images are converted into the YCbCr color space, in which the luminance and chrominance components are separated (Jain, A.K., 2001). The skin color space is developed by considering a large sample of facial skin patches cropped manually from the face images of people of multiple races. The skin samples are then filtered using a low pass filter (Jain, 2001) to remove noise. The lower and

upper control limits of the pixel values for the chromatic red and blue color planes are determined based on the two-and-half sigma limits using equation (1):

lcl_c = μ_c − 2.5σ_c,  ucl_c = μ_c + 2.5σ_c  (1)

where c denotes the color plane (i.e. the red plane Cr or the blue plane Cb) of the skin sample image of size m × n, and μ_c and σ_c are the mean and standard deviation of the color values in the plane c. The limits lcl_c and ucl_c of the plane c are then used as threshold values for the segmentation of skin pixels as given below:

P(x, y) = 1, if (lcl_Cr ≤ Cr(x, y) ≤ ucl_Cr) and (lcl_Cb ≤ Cb(x, y) ≤ ucl_Cb); 0, otherwise  (2)

where Cr(x, y) and Cb(x, y) are the chromatic red and blue values of the pixel at (x, y) in the red and blue planes of the test image. Hence the lower and upper control limits for the red and blue colors transform a color image into a binary skin image P, such that the white pixels belong to the skin region and the black pixels belong to the non-skin region, as shown in Figure 1(b). Regarding the computation of the control limits, experimental results show that narrower limits increase the probability of omission of facial skin pixels, whereas wider limits include more non-skin pixels; the two-and-half sigma limits are a trade-off between the omission of facial skin pixels and the growth of the segmented area.
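A minimal sketch of this segmentation follows, under the stated assumptions: skin samples give the mean and standard deviation of Cb and Cr, and a pixel is skin if both chromatic values fall within μ ± 2.5σ. The RGB-to-YCbCr coefficients used are the standard JPEG ones, and the sample data are placeholders.

```python
import numpy as np

def rgb_to_cbcr(rgb):
    """Chromatic components of the YCbCr model (standard JPEG transform);
    luminance Y is discarded, as in the method described above."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return cb, cr

def control_limits(skin_samples, k=2.5):
    """Lower/upper control limits mu +/- k*sigma (eqn 1) for Cb and Cr,
    estimated from manually cropped skin sample pixels."""
    cb, cr = rgb_to_cbcr(skin_samples)
    return [(v.mean() - k * v.std(), v.mean() + k * v.std()) for v in (cb, cr)]

def segment_skin(image, limits):
    """Binary skin map P (eqn 2): 1 where both Cb and Cr lie inside
    their control limits, 0 elsewhere."""
    cb, cr = rgb_to_cbcr(image)
    (lcb, ucb), (lcr, ucr) = limits
    return ((cb >= lcb) & (cb <= ucb) & (cr >= lcr) & (cr <= ucr)).astype(np.uint8)

rng = np.random.default_rng(3)
patches = rng.integers(80, 220, size=(500, 3))   # placeholder skin sample pixels
image = rng.integers(0, 256, size=(48, 48, 3))   # placeholder test image
print(segment_skin(image, control_limits(patches)).mean())  # fraction of skin pixels
```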

In the experiments, the two-and-half sigma control limits are found to be flexible enough to absorb moderate variations of the lighting conditions in the images. The results of the skin color segmentation are shown in Figure 1(b). The skin color segmentation leads to a faster face detection process, as the search area for the facial features is comparatively small. A comparison of different skin color segmentation methods is given in Table 2.

Table 1. Statistical measures (mean and standard deviation) of the chromatic color components Cb and Cr of the skin samples.

Figure 1. Results of skin color segmentation: a) original image; b) YCbCr (Hiremath & Danti, Feb 2006); c) RGB (Wang-Yuan method); d) HSV (Bojic method); e) YCbCr (Chai method); f) YUV (Yao method); g) YIQ (Yao method).

Table 2.

Comparison of the average time, segmented skin area, and number of candidate facial features for the different skin color segmentation methods, including YCbCr (Chai & Ngan, 1999) and YCbCr (Hiremath & Danti, Feb 2006).

The binary skin segmented image obtained above is preprocessed by performing a binary opening operation to remove isolated noisy pixels. Further, the white regions may

contain black holes; these black holes may be of any size and are filled completely. The binary skin image is labeled using the region labeling algorithm, and for each labeled region the feature moments, orientation θ, major axis length, minor axis length and area are computed (Jain, A.K., 2001; Gonzalez, R.C., et al., 2002). By the observation of several real faces, the orientation of a face region lies within a limited range of degrees in the case of frontal face images.

Only such regions are retained for further processing; the other regions are unlikely to be face regions and are removed from the binary skin image. The observation of several real faces also revealed that the ratio of the height to the width of each face region is approximately 2, and that a region is unlikely to contain a detectable face whenever its area is less than 500 pixels. Hence, only the regions whose area is more than 500 pixels are retained. The resulting binary skin image, after preprocessing and applying the above constraints, is expected to contain the potential face regions (Figure 2).

Figure 2. a) Original image; b) gray scale image; c) Sobel filtered binary image.

2.2 Face Detection

Each potential face region in the binary skin image is converted into gray scale and verified for the presence of a face by the process of facial feature extraction using the fuzzy rules (Hiremath and Danti, Dec 2005). The overall face detection process, which detects multiple faces in an input image, is described in Figure 3: skin color segmentation; obtain face regions; select a face region; search the eyes and construct the fuzzy face model with respect to the eyes; search the other facial features; and display the detected faces in boxes.

Figure 3. Overview of the multiple face detection process.
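The preprocessing chain just described can be sketched with SciPy's morphology and labeling routines as follows; the area threshold (500 pixels) and height-to-width ratio (about 2) follow the text, while implementing the ratio test on the bounding box with a fixed tolerance is an assumption.

```python
import numpy as np
from scipy import ndimage

def candidate_face_regions(skin, min_area=500, ratio=2.0, tol=0.8):
    """Clean the binary skin map and keep plausibly face-shaped regions:
    binary opening removes isolated noisy pixels, black holes are
    filled, and labeled regions are filtered by area and by the
    height/width ratio of their bounding box."""
    skin = ndimage.binary_opening(skin)        # remove isolated noisy pixels
    skin = ndimage.binary_fill_holes(skin)     # fill black holes of any size
    labeled, _ = ndimage.label(skin)
    keep = np.zeros(skin.shape, dtype=bool)
    for i, sl in enumerate(ndimage.find_objects(labeled), start=1):
        region = labeled[sl] == i
        h, w = sl[0].stop - sl[0].start, sl[1].stop - sl[1].start
        if region.sum() >= min_area and abs(h / w - ratio) <= tol:
            keep[sl] |= region
    return keep

rng = np.random.default_rng(4)
demo = rng.random((120, 80)) > 0.4             # placeholder binary skin map
print(candidate_face_regions(demo).sum())      # pixels kept as potential faces
```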

Preprocessing of Face Regions: The potential face region is filtered using the Sobel edge filter, binarized using a simple global thresholding, and then labeled. In the labeled image, the potential facial feature blocks are clearly visible in the potential face region under consideration, as shown in Figure 2(c). Further, for each facial feature block, its centroid, orientation θ, bounding rectangle and the length of the semi-major axis are computed (Jain, A.K., 2001).

Feature Extraction: The feature blocks of the potential face region in the labeled image are evaluated in order to decide which combination of feature blocks corresponds to the facial features, and the procedure is anchored on the orientation of the face. All the feature blocks are evaluated for eyes. Initially, any two feature blocks are selected arbitrarily, and we assume them as probable eye candidates. Let (x1, y1) and (x2,

y2) be, respectively, the centers of the right feature block and the left feature block. The line passing through the centers of both the feature blocks is called the horizontal reference line (HRL), as shown in Figure 4, and is given by the equation

Ax + By + C = 0  (3)

where A = y2 − y1, B = x1 − x2, and C = x2y1 − x1y2. The slope angle θ_HRL between the HRL and the x-axis is given by

θ_HRL = tan⁻¹((y2 − y1) / (x2 − x1))  (4)

Figure 4. Fuzzy face model with support regions for the eyebrows, nose and mouth in the four quadrants.

Since a face in a too skewed orientation is not considered in this model, the slope angle θ_HRL is constrained to lie within a permissible range. If the current pair of feature blocks does not satisfy this orientation constraint, then another pair of feature blocks from the remaining feature blocks is taken up for matching. Only for the accepted pairs of features, the normalized lengths of the semi-major axes l1 and l2 are computed by dividing the length of the semi-major axis of each feature block by the distance d between the two features, where d is the Euclidean distance between their centers (equation (5)).
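A sketch of this eye-pair geometry, under the reconstruction above, is given below: line coefficients from the two centers, the slope angle, the inter-eye distance, and the normalized semi-major axis lengths. The function name and the orientation threshold are placeholders, since the permissible range in the source is not recoverable.

```python
import numpy as np

def eye_pair_geometry(c1, c2, semi_major1, semi_major2, max_tilt_deg=45.0):
    """HRL through the two candidate eye centers: coefficients (A, B, C)
    of Ax + By + C = 0, slope angle, inter-eye distance d, and the
    semi-major axis lengths normalized by d. max_tilt_deg stands in for
    the chapter's (unrecoverable) orientation limit."""
    (x1, y1), (x2, y2) = c1, c2
    A, B, C = y2 - y1, x1 - x2, x2 * y1 - x1 * y2     # line through the centers (eqn 3)
    theta = np.degrees(np.arctan2(y2 - y1, x2 - x1))  # slope angle of the HRL (eqn 4)
    d = float(np.hypot(x2 - x1, y2 - y1))             # distance between the eyes (eqn 5)
    accepted = abs(theta) <= max_tilt_deg             # reject too-skewed candidate pairs
    l1, l2 = semi_major1 / d, semi_major2 / d         # normalized semi-major axes
    return (A, B, C), theta, d, (l1, l2), accepted

print(eye_pair_geometry((30, 40), (70, 44), 6.0, 5.5))
```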

Let θ1 and θ2 be the orientations of the above accepted feature blocks. The evaluation function E_Eyes is computed using equation (6) to check whether the current pair of features is a potential eye pair or not. E_Eyes is based on the negative exponential distribution, whose parameters are determined by experimentation to optimize a higher detection rate with fewer false detections. Hence, the higher the evaluation value E_Eyes, the higher is the probability of the two selected feature blocks being the eyes. For each accepted potential eye pair candidate, the fuzzy face model is constructed and the other facial features are searched.

Construction of Fuzzy Face Model: It is assumed that every human face is symmetric about the vertical axis. The fuzzy face model is constructed with respect to the above potential eye pair candidate. The vertical reference line (VRL) is the line perpendicular to the HRL at the midpoint of the two eye candidates. Let (p, q) be the midpoint of the two eye candidates. Then the equation of the VRL is given by equation (7):

Bx − Ay + (Aq − Bp) = 0  (7)

These two reference lines (HRL and VRL) are used to partition the facial area into quadrants. The support regions of the eyebrows, nose and mouth are empirically estimated in terms of the distance d between the centers of the two eyes, on the basis of the observations from several face images. V_Eyebrow, V_Nose and V_Mouth denote the vertical distances of the centers of the eyebrows, nose and mouth from the HRL, which are estimated as 0

.3D, 0.6D and 1.0D, respectively. Likewise, the horizontal distances of the centers of the eyebrows, nose and mouth from the VRL are estimated as 0.5D, 0.05D and 0.1D, respectively. These distances define the facial feature support regions, which confine the search area for the facial features. This completes the construction of the fuzzy face model with respect to the selected potential eye pair candidate in the given image, as shown in Figure 4. Further, the fuzzy face model is used to determine which feature blocks correspond to the remaining facial features.

Searching Eyebrows, Nose and Mouth: The searching process proceeds to locate the other potential facial features, namely the eyebrows, nose and mouth, with respect to the above potential eye pair candidate, within their respective support regions. For illustration, we take the left eyebrow feature as an example to search. Let a feature block K

be a potential left eyebrow feature. The horizontal distance h_LEb and the vertical distance v_LEb of the centroid of the feature block K from the VRL and HRL, respectively, are computed using equation (8) as the perpendicular distances of the centroid from the two reference lines.

Table 3. Empirically determined vertical and horizontal distances of the facial features (normalized by D).

Using these two quantities to represent the location of the potential left eyebrow feature, the fuzzy membership values μ_h and μ_v, respectively, are defined using the trapezoidal fuzzy membership function. In particular, the membership function μ_v is defined using equation (9) and Table 3:

μ_v(x) = 0, for x ≤ a or x ≥ d; (x − a)/(b − a), for a ≤ x ≤ b; 1, for b ≤ x ≤ c; (d − x)/(d − c), for c ≤ x ≤ d  (9)

where a, b, c and d are the trapezoid breakpoints taken from Table 3. Similarly, the membership function μ_h is defined.

The support region for the potential left eyebrow feature is the set of values h_LEb and v_LEb whose fuzzy membership values are non-zero. Figure 5(a) shows the graph of the trapezoidal fuzzy membership function μ_v for the vertical distance of the j-th feature, and the support region for the left eyebrow is shown in Figure 5(b). To evaluate the feature block K in the support region for the left eyebrow, the value of the evaluation function E_K is given by equation (10). The E_K value ranges from 0 to 1 and represents the probability that the feature block K is a left eyebrow.

Figure 5. a) Trapezoidal fuzzy membership function μ_v for the vertical distance of the j-th facial feature; b) support region for the left eyebrow in the I quadrant of the face model.
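A sketch of the trapezoidal membership evaluation for a candidate feature block follows, assuming the standard trapezoid parameterization of equation (9); the breakpoint values shown are placeholders standing in for the (garbled) entries of Table 3, and the function names are illustrative.

```python
def trapezoid(x, a, b, c, d):
    """Standard trapezoidal membership function (assumed form of eqn 9):
    0 outside [a, d], rising on [a, b], 1 on [b, c], falling on [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    if x <= c:
        return 1.0
    return (d - x) / (d - c)

def eyebrow_memberships(h, v, D):
    """Memberships mu_h and mu_v of a candidate left-eyebrow block from
    its horizontal/vertical distances to the VRL/HRL, normalized by the
    inter-eye distance D. Breakpoints are placeholders for Table 3."""
    mu_h = trapezoid(h / D, 0.02, 0.24, 0.65, 0.90)
    mu_v = trapezoid(v / D, 0.10, 0.20, 0.45, 0.60)
    return mu_h, mu_v

print(eyebrow_memberships(h=30.0, v=18.0, D=60.0))  # (mu_h, mu_v) in [0, 1]
```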

Similarly, the evaluation value is computed for all the feature blocks present in the support region, to obtain the set of E_K values with their corresponding fuzzy membership values μ_K. The membership value μ_LEb corresponding to E_LEb is obtained by the min-max fuzzy composition (Wang & Yuan, 2000) given by equations (11) and (12). The feature block having the evaluation value E_LEb with the corresponding μ_LEb found in the support region of the left eyebrow is accepted as the left eyebrow. Similarly, the right eyebrow, nose and mouth are searched in their respective support regions, determined by appropriately defining the fuzzy membership functions of the distances (horizontal and vertical) from the centroids of the features, and their fuzzy evaluation values are computed. The overall fuzzy evaluation E for the fuzzy face model is defined as the weighted sum of the fuzzy evaluation values of the potential facial features, namely, of the eyes, left eyebrow, right eyebrow, nose and mouth, respectively, where the weights are adjusted empirically, as given in equation (13):

E = 0.4E_Eyes + w_LEb E_LEb + w_REb E_REb + 0.2E_Nose + w_Mouth E_Mouth  (13)

The membership value μ_E corresponding to E is obtained by the min-max composition (equation (14)). This evaluation is computed for every potential eye pair candidate to get a set of fuzzy faces. These fuzzy faces are represented by the set of E values with their corresponding membership values μ_E. If there is more than one E value corresponding to the maximum membership value, the maximum among those E values is the defuzzified evaluation value E_D of the face.
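The selection logic described above can be sketched as follows; since only the 0.4 (eyes) and 0.2 weights of equation (13) are legible in the source, the remaining weights in this sketch are assumptions chosen so that the weights sum to one.

```python
def overall_evaluation(E_eyes, E_leb, E_reb, E_nose, E_mouth):
    """Weighted sum of the feature evaluations (eqn 13). The 0.1/0.1/0.2
    split over the eyebrows and mouth is an assumption."""
    return 0.4 * E_eyes + 0.1 * E_leb + 0.1 * E_reb + 0.2 * E_nose + 0.2 * E_mouth

def defuzzify(candidate_faces):
    """candidate_faces: list of (E, mu) pairs, one per potential eye-pair
    candidate. Among the candidates with the maximum membership mu, the
    maximum E is the defuzzified evaluation value E_D."""
    mu_max = max(mu for _, mu in candidate_faces)
    return max(E for E, mu in candidate_faces if mu == mu_max)

E = overall_evaluation(0.9, 0.7, 0.8, 0.6, 0.75)
print(E, defuzzify([(E, 0.8), (0.55, 0.8), (0.9, 0.3)]))  # accept the face if E_D > 0.7
```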

Finally, the potential eyes, eyebrows, nose and mouth features are accepted as a face if the defuzzified evaluation value E_D of the face region exceeds the threshold value 0.7; otherwise this face region is rejected. The face detection results are shown in Figure 6, where (a) displays the feature extraction, in which the facial features are shown in bounding boxes (Jain, 2001), and (b) shows the detected face in a rectangular box (Hiremath, P.S. & Danti, A., Feb 2006). The above procedure is repeated for every potential face region in the image.