Lies Against the Machine
By Elle Proust | October 4, 2019

Lie detection machines are a mainstay of crime fiction, where they serve as effective tools for law enforcement. Outside of fiction they are less useful, both because of legal restrictions and because they do not work especially well. As with many other areas of life, Artificial Intelligence (“AI”) has been put forward as a potential solution. Three recent examples: European Dynamics provides data-science-based tools to border agents in Europe; research scientists at Arizona State University have built an AI-based immigration agent called ‘AVATAR’, intended for use at the US/Mexico border; and Converus has developed an AI-based lie detector, ‘EyeDetect’, to screen potential employees. Beyond having names that sound like the villain company in a robot sci-fi film, these technologies are concerning both for the lack of transparency about their scientific efficacy and for the apparently little thought given to ensuring they are applied fairly.

Background

Human fallibility at discerning lies is what has driven the search for scientific methods of detection. A 2003 literature review of over 200 studies found that humans could accurately judge whether a statement was true or false only around 54% of the time, barely better than a coin flip. Policing, immigration and other areas of law enforcement would be greatly improved by accurate lie detection. Attempts to detect lying by machine have been around since the early twentieth century, with varying levels of success, through polygraphs, voice analysis and even brain scans, though most have been rejected by courts for unreliability. Data science in the field is relatively new, but spreading widely.

How they work

EyeDetect, much like the lie detection machine in the movie Blade Runner, monitors eye movements to detect lying. AVATAR also uses eye movement, but adds voice analytics and facial scanning. Finding a data set to train these AI systems on presents a problem, because in most cases we do not have fully labelled data: if someone gets away with a lie, that example is by definition mislabelled, which corrupts the training set. Lie detection studies have gotten around this by secretly filming undergraduate students committing dishonest acts and then asking them about it later. The AVATAR deep learning system was trained on recordings of the faces of college students gathered in the same manner. AVATAR's developers do not disclose how they ensure that students lying in a study are comparable to people lying at the border. AVATAR claims 75% accuracy, but whether this was measured on the college students or at the border is unclear; if at the border, how false negatives are accurately counted is another question entirely. EyeDetect has at least acknowledged some peer-reviewed assessments showing accuracy tending toward 50%, but how it ensures this does not happen to customers and potential employees does not appear to be publicised.
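To see why a headline figure like “75% accuracy” can mislead in a border setting, here is a back-of-the-envelope Bayes calculation. The 75% figure is AVATAR's claim; everything else, including the assumption that actual liars make up one in a thousand travellers, is an invented illustration:

```python
# Back-of-the-envelope Bayes calculation: even a detector with the claimed
# 75% accuracy produces mostly false alarms when actual liars are rare.
# The base rate below is an assumption chosen purely for illustration.
sensitivity = 0.75  # P(flagged | lying), assuming "75% accurate" means this
specificity = 0.75  # P(cleared | truthful), same assumption
base_rate = 0.001   # assumed share of travellers actually lying

# Overall probability that a traveller gets flagged.
p_flagged = sensitivity * base_rate + (1 - specificity) * (1 - base_rate)

# Probability that a flagged traveller is actually lying.
p_lying_given_flag = sensitivity * base_rate / p_flagged

print(f"Flagged travellers who are actually lying: {p_lying_given_flag:.2%}")
# Prints roughly 0.30% -- under these assumptions, over 99% of the people
# the machine flags would be telling the truth.
```

Under those (hypothetical) numbers, the overwhelming majority of flagged travellers are innocent, which is why an accuracy claim without base rates and false positive counts tells us very little.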

Applicability and fairness

AVATAR, at least, is a black box model, and its owners readily acknowledge that they have no idea how the technology makes decisions. This is worrying, because it means we cannot know whether the algorithm's decisions are unbiased. Converus has argued that its lie detector is an improvement on the polygraph because no human can manipulate it. It is certainly true that both employment screening and immigration can be, and likely are, subject to human biases. However, algorithms can be biased too, and arguing otherwise is specious. Google, for example, did not intend to build a racist image classifier, yet its algorithm labelled people of African descent as non-human, a mistake no human reviewer would make; removing the human made the situation worse.

In addition, Professor Virginia Eubanks, author of Automating Inequality, has argued that algorithms can remove individual bias but “are less good at identifying and addressing bias that is structural and systemic”. The data these algorithms are trained on comes disproportionately from white and affluent Americans, and there is no guarantee that the resulting systems will treat other populations fairly. In other settings, such as welfare programs, oversampling of one group of people has led to other groups being flagged as outliers simply for being different. Employment and immigration are already difficult for marginalised groups, and we should tread very carefully with anything that could amplify existing problems.
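That outlier effect is easy to reproduce. The sketch below is a purely synthetic illustration, not any vendor's actual model: an off-the-shelf anomaly detector is fit on data dominated by one group, and members of a smaller group whose features are merely shifted get flagged as anomalous at a far higher rate.

```python
# Synthetic illustration: an anomaly detector fit on data dominated by one
# group flags members of an underrepresented group as "outliers" simply
# because they differ from the majority. All numbers here are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# 950 samples from a majority group, 50 from a minority group whose
# (entirely legitimate) feature distribution is merely shifted.
majority = rng.normal(loc=0.0, scale=1.0, size=(950, 2))
minority = rng.normal(loc=2.5, scale=1.0, size=(50, 2))

X = np.vstack([majority, minority])
detector = IsolationForest(contamination=0.05, random_state=0).fit(X)

flags = detector.predict(X)  # -1 means "flagged as an outlier"
print(f"majority flagged: {np.mean(flags[:950] == -1):.1%}")  # a few percent
print(f"minority flagged: {np.mean(flags[950:] == -1):.1%}")  # far higher
```

Nothing in the minority group's data indicates dishonesty; it is flagged only because it is rare in the training sample, which is exactly the structural problem Eubanks describes.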

Looking Forward

The United States, at least, has banned the use of lie detectors in employment screening and as admissible evidence in court. There is no such prohibition on use by border enforcement, which explains the attempted rise of AVATAR. European Dynamics' technology is being trialled by the Hungarian government, and Converus has confirmed that EyeDetect is operating in a Middle Eastern country it will not name; the human rights records in either case should not instill much confidence.

It seems likely that these systems will be used more widely; appropriate care must be taken.

Images taken from: Blade Runner (1982) and https://www.eyetechds.com/avatar-lie-detector.html
