Horizon CDT Research Highlights

Deceiving the Machine

  Matthew Yates (2018 cohort)

The last decade has seen remarkable progress in machine learning and computer vision, as more sophisticated algorithms are designed and the hardware to run them becomes more powerful and more accessible. Generative algorithms for creating images and video have become markedly more capable since the introduction of Generative Adversarial Networks (GANs) (Goodfellow et al., 2014).
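
As background, the sketch below illustrates the core adversarial idea behind GANs: a generator that maps random noise to synthetic images, and a discriminator trained to tell real images from generated ones, each improving against the other. It is a minimal sketch only, assuming PyTorch; the network sizes, learning rates and flattened 28x28 image shape are arbitrary illustrative choices and are not taken from this project.

    # Minimal GAN sketch (PyTorch assumed); sizes and hyperparameters are illustrative.
    import torch
    import torch.nn as nn

    latent_dim, img_dim = 64, 28 * 28

    # Generator: maps random noise z to a synthetic (flattened) image.
    G = nn.Sequential(
        nn.Linear(latent_dim, 256), nn.ReLU(),
        nn.Linear(256, img_dim), nn.Tanh(),
    )

    # Discriminator: estimates the probability that an image is real.
    D = nn.Sequential(
        nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
        nn.Linear(256, 1), nn.Sigmoid(),
    )

    opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCELoss()

    def train_step(real_images: torch.Tensor) -> None:
        """One adversarial update; real_images has shape (batch, 784)."""
        batch = real_images.size(0)
        real_labels = torch.ones(batch, 1)
        fake_labels = torch.zeros(batch, 1)

        # Discriminator update: learn to separate real images from generated ones.
        fake_images = G(torch.randn(batch, latent_dim)).detach()
        d_loss = bce(D(real_images), real_labels) + bce(D(fake_images), fake_labels)
        opt_D.zero_grad(); d_loss.backward(); opt_D.step()

        # Generator update: succeed by making D label generated output as real.
        g_loss = bce(D(G(torch.randn(batch, latent_dim))), real_labels)
        opt_G.zero_grad(); g_loss.backward(); opt_G.step()

Each discriminator update makes generated images easier to spot, and each generator update makes them harder to spot; detection tools must keep pace with the second half of that loop.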

This ability to distort reality by generating or altering visual data brings a new set of challenges and threats when the technology is used with malicious intent. One recent example is the rise of “Deep Fakes” (Chesney & Citron, 2018), which use machine learning techniques to splice one person’s face onto another in motion, making it possible to create video of real people saying and doing things that never actually occurred. This technology has serious ramifications at multiple levels, from individuals’ personal data being altered and misused without their consent, to further-reaching consequences for national security if the technology is used for digital propaganda to influence large populations.

As generative techniques become more accessible and easier to use, and as the generated content itself becomes more realistic and harder for the human eye to distinguish, it is paramount that a standard set of tools and metrics is created to detect the presence of such algorithms in digital media.
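
One widely used computer-vision baseline for such tools is simply a binary classifier trained on examples of real and generated images; the sketch below, again assuming PyTorch, shows the shape of that approach. The architecture, 64x64 RGB input size and decision threshold are arbitrary assumptions for illustration and do not describe the method developed in this project.

    # Illustrative baseline detector: binary classifier over real vs. generated images.
    import torch
    import torch.nn as nn

    detector = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(32 * 16 * 16, 1),  # assumes 64x64 RGB inputs
    )

    criterion = nn.BCEWithLogitsLoss()
    optimiser = torch.optim.Adam(detector.parameters(), lr=1e-4)

    def training_step(images: torch.Tensor, labels: torch.Tensor) -> float:
        """images: (batch, 3, 64, 64); labels: 1.0 for generated, 0.0 for real."""
        logits = detector(images).squeeze(1)
        loss = criterion(logits, labels)
        optimiser.zero_grad(); loss.backward(); optimiser.step()
        return loss.item()

    def is_generated(image: torch.Tensor, threshold: float = 0.5) -> bool:
        """Score a single (3, 64, 64) image; higher score = more likely generated."""
        with torch.no_grad():
            return torch.sigmoid(detector(image.unsqueeze(0))).item() > threshold

Detectors of this kind tend to latch onto the statistical artefacts of a particular generator and can struggle with content from newer models, which is part of the motivation for complementing purely algorithmic tools with evidence about how human observers perceive the same material.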

This PhD research project aims to create a set of tools and metrics for detecting forged or generated content by combining approaches from computer vision with research into human visual attention, assessing the strengths and weaknesses of both visual systems. A secondary aim is to gain new insight into the differences in mechanism and behaviour between the two visual systems by exploring cognitive visual responses to this specific type of digital stimulus. By drawing on theories from cognitive neuroscience and using a mixed-methods approach, the project aims to tackle the digital threat of fake information from a novel direction.

The author is supported by the Horizon Centre for Doctoral Training at the University of Nottingham (RCUK Grant No. EP/L015463/1) and DSTL.