Blurred lines in face recognition

Face recognition has progressed apace from a clichéd trope of science fiction to a reality of the modern world, with widespread use in photography databases, social media, and the security world. However, as with any tool, there are those who would abuse it for nefarious ends. New research published in the International Journal of Biometrics investigates one such abuse, in which a third party might “spoof” the face of a legitimate user to gain access to systems and services to which they are not entitled, and offers a suggestion as to how such spoofing might be detected.

Sandeep Kumar, Sukhwinder Singh, and Jagdish Kumar of the Punjab Engineering College in Chandigarh, India, explain how biometrics, including face recognition, has come to the forefront of security in all sorts of realms, from unlocking a person’s smartphone to securing sensitive premises. The key to precluding face recognition spoofing lies in determining whether the face presented to the security camera or device is “live” rather than a photograph or video recording standing in for the actual person.

The team has turned to an improved SegNet-based architecture that estimates “blur” from the local minimum and maximum to the left and right of each edge, calculating the blur of horizontal and vertical edges. A flat image, such as a photograph or a video display, presented to a security camera or device would be wholly in focus, whereas with a three-dimensional object, such as a real face, “depth of field” comes into play: the eyes would be sharply in focus, assuming the camera focused on that part of the face, but the curved sides of the head would be slightly out of focus because they do not lie in the same plane relative to the camera lens as the eyes. Since it is technically impossible for the whole of a three-dimensional object presented to a camera to be in focus at once, detecting the blur of the parts of the object in front of or behind the focal plane is key to discerning whether a real face, rather than a flat image, is in front of the camera.
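To illustrate the general idea of blur estimation from edge profiles, the following Python sketch measures the width between the local minimum and maximum bracketing each strong edge and compares the blur in the centre of a face image with the blur at its sides. It is not the authors’ improved SegNet pipeline, only a simplified stand-in for the underlying cue; the thresholds, the three-column region split, and names such as liveness_score are illustrative assumptions.

"""
Minimal sketch of edge-width blur estimation as a face liveness cue.
Not the published SegNet-based method; thresholds and region split are
illustrative assumptions.
"""
import numpy as np
import cv2  # assumed available; any grayscale image source would do


def edge_width(row, x, max_half=15):
    """Width between the local minimum and local maximum of the intensity
    profile bracketing the edge at column x of a 1-D row of pixels."""
    r = row.astype(np.float32)
    if r[min(x + 1, len(r) - 1)] < r[max(x - 1, 0)]:
        r = -r  # falling edge: negate so every edge is treated as rising
    left = x
    while left > 0 and x - left < max_half and r[left - 1] <= r[left]:
        left -= 1            # walk left to the local minimum
    right = x
    while right < len(r) - 1 and right - x < max_half and r[right + 1] >= r[right]:
        right += 1           # walk right to the local maximum
    return right - left      # wider span = blurrier edge


def region_blur(gray, grad_thresh=40, step=25):
    """Average edge width over a subsample of strong horizontal-gradient pixels."""
    grad = np.abs(cv2.Sobel(gray.astype(np.float32), cv2.CV_32F, 1, 0, ksize=3))
    ys, xs = np.where(grad > grad_thresh)
    widths = [edge_width(gray[y], x) for y, x in zip(ys[::step], xs[::step])]
    return float(np.mean(widths)) if widths else 0.0


def liveness_score(face_gray):
    """Compare blur in the centre of the face with blur at its sides.
    A flat photo tends to be uniformly sharp (small difference); a real,
    curved face tends to be blurrier towards the periphery (larger difference)."""
    h, w = face_gray.shape
    centre = face_gray[:, w // 3: 2 * w // 3]
    sides = np.hstack([face_gray[:, : w // 3], face_gray[:, 2 * w // 3:]])
    return region_blur(sides) - region_blur(centre)

Under these assumptions, a flat photograph held up to the camera would yield a score near zero, while a genuine face, with its slightly out-of-focus periphery, would yield a larger positive score; the published approach instead feeds such blur cues into an improved SegNet architecture to make the live-versus-spoof decision.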

The team’s proof of principle offers up to 97 percent accuracy when tested against standard benchmarks, an improvement on earlier algorithms. Moreover, it can determine the “liveness” of a presented face within about one second. The researchers are now working on improving their system’s detection abilities by also looking at shading, another characteristic of a real face that is obvious to a person looking at it but difficult for a computer to detect via a camera.

Kumar, S., Singh, S. and Kumar, J. (2021) ‘Face spoofing detection using improved SegNet architecture with a blur estimation technique’, Int. J. Biometrics, Vol. 13, Nos. 2/3, pp.131–149.