An uncomfortable fact of modern security is that too many people pass through transit hubs and public events for all of them to be screened both efficiently and thoroughly. As a result, attention has focused on automated systems that can screen crowds without human intervention. Automated biometric scans have a serious limitation, however: they can only identify people who have already been classified as threats. The Department of Homeland Security hopes to overcome that limitation with a program dubbed "Hostile Intent," which would automate the identification of individuals whose behavior suggests they pose a threat.
The program has a foundation in both real science and prior experience. The human brain constantly balances input from conscious decisions and reflexive responses. When the two conflict, the resulting tension can produce subtle alterations in the timing and appearance of conscious actions, as well as in physiological responses. Trained observers can read those cues to identify aberrant behavior; a report on the Hostile Intent program in New Scientist indicates that the DHS has already deployed such human examiners in airports, where they have apprehended a number of drug smugglers and money launderers.
The big leap will be shifting that sort of expertise to an automated system. At least in the case of facial expressions, a significant amount of work has already been done. Researchers in the Netherlands have demonstrated a system that recognizes and rates facial expressions, and it appears capable of operating in real time, as shown by videos the researchers made of a mood-based Pong game.