Horizon CDT Research Highlights


Investigating public discourses around decision-making algorithms using a combined computational and discursive language analysis approach.

  Dan Heaton (2020 cohort)

This project will examine the scope for investigating the discourse surrounding public-facing Autonomous Systems – in this case defined as decision-making algorithms – through an interdisciplinary lens. Public-facing Autonomous Systems aim to increase productivity and enable more efficient and informed decision-making (Royal Academy of Engineering, 2017). Examples include Test and Trace, a Covid-19 contact-tracing application, and the Ofqual algorithm, used to automate Advanced Level results in 2020. Both systems operate without supervision and both affected United Kingdom citizens (Kretzschmar et al., 2020; Kelly, 2021). Investigating trust in Autonomous Systems is critical for the development of future artificial intelligence technologies as they become more relevant to our daily lives (Shahdar et al., 2018). The Trustworthy Autonomous Systems Hub adopts Devitt's (2018) view that Autonomous Systems must be trustworthy both by design and in perception.

The collection of public opinions will act as validation or signpost the need for alternative exploration (Bruch and Feinberg, 2017). To analyse the views expressed, a sentiment analysis – or opinion mining – tool may be deployed. This can be non-intrusive and cost-effective compared with interviews or experiments (Rout et al., 2018). A typical source of data for sentiment analysis is social media, notably Twitter, as many people offer opinions on this public-access site, providing a large data set that can be collected in real time through Twitter's API (Kumar et al., 2014).
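As an illustrative, non-authoritative sketch of what such a lexicon-based analysis might look like, the following Python snippet scores a handful of invented tweet texts with NLTK's VADER analyser; the example texts are assumptions, and the tweets are assumed to have already been collected (for example via Twitter's API).

    # A minimal sketch of lexicon-based sentiment analysis on collected tweet texts.
    # The example texts are invented for illustration; collection via the Twitter API
    # is assumed to have happened beforehand.
    import nltk
    from nltk.sentiment.vader import SentimentIntensityAnalyzer

    nltk.download("vader_lexicon", quiet=True)  # one-off download of the VADER lexicon

    tweets = [
        "The Test and Trace app actually works really well for me.",
        "The Ofqual algorithm downgraded my results. Absolutely furious.",
        "Not sure what to make of these contact-tracing notifications.",
    ]

    analyser = SentimentIntensityAnalyzer()
    for text in tweets:
        scores = analyser.polarity_scores(text)  # neg / neu / pos / compound scores
        if scores["compound"] > 0.05:
            label = "positive"
        elif scores["compound"] < -0.05:
            label = "negative"
        else:
            label = "neutral"
        print(f"{label:8s} {scores['compound']:+.2f}  {text}")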

Sentiment analysis alone, however, does not capture the discursive and conversational ways in which opinions on Autonomous Systems are expressed on social media, and these need to be examined to understand the discourse in more detail. Its shortcomings include a failure to account for how opinions shift over time or through interaction with others (Liu, 2010) and difficulty in detecting sarcasm and irony (Mohammad, 2017). Addressing these shortcomings may be crucial, as little existing work examines how trust in Autonomous Systems changes over time. A new approach may overcome them by combining computational linguistic methods with sociolinguistic analysis.
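One of the shortcomings noted above – the lack of attention to how opinions change over time – can be illustrated with a simple temporal aggregation. The sketch below is a minimal example under assumed field names and values, not part of the proposed method: it averages per-tweet sentiment scores by week using pandas.

    # A minimal sketch of tracking sentiment over time. Each collected tweet is
    # assumed to carry a timestamp and a compound sentiment score; the field names
    # and values are illustrative assumptions only.
    import pandas as pd

    scored_tweets = pd.DataFrame(
        {
            "created_at": pd.to_datetime(
                ["2020-08-13", "2020-08-14", "2020-08-20", "2020-08-21"]
            ),
            "compound": [-0.62, -0.48, 0.10, 0.35],  # e.g. VADER compound scores
        }
    )

    # Average sentiment per week: a crude proxy for how opinion shifts over time,
    # which scoring individual posts in isolation does not capture.
    weekly = (
        scored_tweets.set_index("created_at")["compound"]
        .resample("W")
        .mean()
    )
    print(weekly)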

This project will investigate the following questions:

  1. How can current forms of computational linguistic analysis, including sentiment analysis, support the understanding of public discourses around decision-making algorithms on social media? What are the benefits and shortcomings?
  2. How can the discourse surrounding public-facing decision-making algorithms on social media be understood in more detail through the inclusion of discourse analysis alongside computational linguistic analysis?
  3. In what ways can existing computational linguistic analysis be combined with qualitative linguistic analysis to mitigate the shortcomings of existing methods?

References

Devitt, S. K. (2018). Trustworthiness of autonomous systems. Springer, Cham.

Kelly, A. (2021). A tale of two algorithms: The appeal and repeal of calculated grades systems in England and Ireland in 2020. British Educational Research Journal.

Kretzschmar, M. E., Rozhnova, G., Bootsma, M. C., van Boven, M., van de Wijgert, J. H., & Bonten, M. J. (2020). Impact of delays on effectiveness of contact tracing strategies for COVID-19: a modelling study. The Lancet Public Health, 5, e452–e459.

Kumar, S., Morstatter, F., & Liu, H. (2014). Twitter data analytics. Springer.

Liu, B. (2010). Sentiment analysis and subjectivity. Handbook of natural language processing, 2, 627–666.

Mohammad, S. M. (2017). Challenges in sentiment analysis. Springer.

Royal Academy of Engineering. (2017). Algorithms in decision-making (pp. 1-6). Retrieved from https://www.raeng.org.uk/publications/responses/algorithms-in-decision-making

This author is supported by the Horizon Centre for Doctoral Training at the University of Nottingham (UKRI Grant No. EP/S023305/1).