Horizon CDT Research Highlights

Agency, trust and blame in decision-making algorithms: an analysis of Twitter discourses

  Dan Heaton (2020 cohort)

This project will examine the scope for investigating agency, blame and trust in public-facing Autonomous Systems, here defined as decision-making algorithms, using an interdisciplinary lens that combines language analysis approaches from Computer Science and Linguistics to deliver nuanced insights into views expressed on social media. Public-facing Autonomous Systems aim to increase productivity and enable more efficient and informed decision-making (Royal Academy of Engineering 2017). Examples include the NHS Covid-19 contact-tracing application, used to mitigate the spread of coronavirus; the Ofqual algorithm, used to automate Advanced Level results in the UK in 2020; and ChatGPT, a text-generative large language model. These systems operate with limited supervision, have had an impact on a national and global scale, and have generated conversation on social media (Kretzschmar et al. 2020, Kelly 2021).

Concerns regarding the agency of these decision-making algorithms have arisen recently, particularly when negative outcomes occur (Bryson 2020, Burrell 2016), and determining responsibility is challenging due to their complexity and opacity (Tsoukias 2021, Holford 2022, Selbst et al. 2019). The perceived social agency of a decision-making algorithm can affect whether it is trusted, mistrusted, celebrated or blamed. Investigating trust in decision-making algorithms is critical for the development of future artificial intelligence technologies (Shahrdar et al. 2019) as they become more relevant to our daily lives. The Trustworthy Autonomous Systems Hub adopts Devitt's (2018) view that decision-making algorithms must be trustworthy by design and by perception. Despite this, there has been little exploration of how trust and blame are affected by the perceived social agency, responsibility and accountability of these systems.

One way of examining the agency of an entity is to use the relationship between social agency and grammatical agency. While agency can be investigated in multiple ways, such as through interviews or observation (Ahearn 1999, Grillitsch et al. 2021), grammatical agency, or transitivity, can show whether an entity is presented as actively performing an action or passively having an action performed on it (Leslie 1993). Deconstructing the agency of decision-making algorithms in these discourses can shed light on the perceived power relations between entities and indicate whether an algorithm is perceived as a social actor (Clark 1998, Van Leeuwen 2008) and whether it is blamed or trusted.
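To illustrate the grammatical distinction, the sketch below labels clausal subjects as active or passive using dependency parsing. It is a minimal example rather than the project's method; it assumes spaCy and its en_core_web_sm English model are installed, and the sentences are invented.

    # A sketch of identifying grammatical agency via dependency parsing.
    # Assumes: pip install spacy && python -m spacy download en_core_web_sm
    import spacy

    nlp = spacy.load("en_core_web_sm")

    def subject_roles(text):
        """Label each clausal subject as active or passive by its dependency relation."""
        doc = nlp(text)
        roles = []
        for token in doc:
            if token.dep_ == "nsubj":        # subject actively performing the action
                roles.append((token.text, token.head.lemma_, "active"))
            elif token.dep_ == "nsubjpass":  # subject having the action done to it
                roles.append((token.text, token.head.lemma_, "passive"))
        return roles

    print(subject_roles("The algorithm downgraded thousands of results."))
    print(subject_roles("Thousands of results were downgraded by the algorithm."))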

As previously mentioned, many have offered views about public-facing decision-making algorithms on Twitter, providing a large data set that can be collected via an API and analysed in near real time (Kumar & Suresh 2012). To analyse the views expressed, popular NLP-based computational tools, such as sentiment analysis, topic modelling and emotion detection, are usually deployed. These tools are commonly used in social media research because of the vast amount of data available to analyse, and they are non-intrusive and cost-effective compared with interviews or experiments (Rout et al. 2018). However, these popular NLP-based computational linguistic tools struggle to account for the discursive and conversational ways in which opinions on decision-making algorithms are discussed on social media, which must be accounted for to understand how the presentation of grammatical and social agency impacts trust and blame (Kapidzic et al. 2019). Other shortcomings include, but are not limited to, difficulty in detecting negation, sarcasm and irony, and difficulty of interpretation (Maier et al. 2018, Jiang et al. 2017, Stine 2019). A new approach may overcome these shortcomings by combining popular NLP-based computational linguistic analyses with sociolinguistic analyses, underpinned by the principles of the epicycles of data science (Peng & Matsui 2016). Here, Corpus Linguistics (CL) and Critical Discourse Analysis (CDA) may be helpful, culminating in a corpus-driven approach to language exploration underpinned by Social Actor Representation (SAR).
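As a concrete illustration of the kind of off-the-shelf tool described above, the sketch below scores short texts with NLTK's VADER sentiment analyser, which is designed for social media language. The example tweets are invented placeholders, not data from the project.

    # A minimal sentiment-analysis sketch using NLTK's VADER analyser.
    # The example tweets are invented placeholders.
    import nltk
    from nltk.sentiment import SentimentIntensityAnalyzer

    nltk.download("vader_lexicon", quiet=True)
    sia = SentimentIntensityAnalyzer()

    tweets = [
        "The algorithm ruined my grades, absolute disgrace",
        "Impressed by how quickly the app flagged my exposure",
    ]
    for tweet in tweets:
        scores = sia.polarity_scores(tweet)
        print(f"{scores['compound']:+.3f}  {tweet}")  # compound score in [-1, 1]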

These approaches have shortcomings of their own: CL results may provide strong evidence but limited explanation (Rose 2017), a successful discourse analysis demands considerable time and effort, especially on a large data set (Wetherell & Potter 1988), and CDA is subjective in nature (Gill 2000). Nevertheless, they can complement the popular NLP-based computational linguistic tools to deliver insights into agency, blame and trust in this public discourse, specifically through the analysis of discursive nuances that popular NLP-based methods may not account for.

Methodologically, the examination of each case study will involve three parts. Firstly, popular NLP-based computational linguistic tools (topic modelling, sentiment analysis and emotion detection) will be used to gain an overview of the discourse in question, presenting the results as trajectories in which areas of interest, particularly fluctuations that seem unexpected, can be highlighted for further exploration. Secondly, CL tools will be used to examine active and passive grammatical constructions in the discourse, with greater attention paid to the moments highlighted by the initial analysis. Thirdly, CDA, underpinned by SAR, will be used to delve further into these active and passive presentations, identifying social actors and providing insights into whether Twitter users ultimately trust or blame decision-making algorithms.
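A sketch of the first step follows: it aggregates per-tweet sentiment scores into a daily trajectory and flags days that deviate sharply from the rolling trend, as candidates for the closer CL and CDA analysis. The timestamp and compound column names are hypothetical, standing in for output such as that of the VADER sketch above.

    # A sketch of presenting tool output as a trajectory and flagging
    # unexpected fluctuations. Column names are illustrative assumptions.
    import pandas as pd

    def flag_fluctuations(df: pd.DataFrame, window: int = 7, k: float = 2.0) -> pd.Series:
        """Return days whose mean sentiment deviates sharply from the rolling trend."""
        daily = (df.assign(timestamp=pd.to_datetime(df["timestamp"]))
                   .set_index("timestamp")["compound"]
                   .resample("D").mean())
        trend = daily.rolling(window, min_periods=1).mean()
        residual = daily - trend
        return daily[residual.abs() > k * residual.std()]  # days worth a closer look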

Therefore, the main research question for this PhD project is: What insight into agency, trust and blame in Twitter discourses surrounding decision-making algorithms can be achieved through combining language analysis approaches?

This research question will be explored in each of the three case studies, which are:

  1. The 2020 A Level Calculation Algorithm
  2. The NHS Covid-19 Contact-Tracing App
  3. ChatGPT

Four objectives apply to each case study:

  a. Demonstrate how sentiment analysis, topic modelling and emotion detection provide insight into public discourses surrounding decision-making algorithms.
  b. Demonstrate how corpus linguistics, particularly collocation (see the sketch after this list), provides insight into public discourses surrounding the agency of decision-making algorithms.
  c. Demonstrate how Critical Discourse Analysis provides insight into public discourses surrounding the agency, trust and blame of decision-making algorithms.
  d. Identify the strengths and limitations of using the three approaches to investigate public discourses surrounding decision-making algorithms.
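For objective (b), the sketch below shows collocation extraction in miniature, using NLTK's bigram association measures over an invented token list; a real analysis would run over the full tweet corpus, typically within a dedicated CL environment.

    # A toy collocation sketch using NLTK's bigram association measures.
    # The token list is an invented placeholder, not project data.
    from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder

    tokens = ("the algorithm decided my grade "
              "the algorithm decided wrongly").split()
    finder = BigramCollocationFinder.from_words(tokens)
    finder.apply_freq_filter(2)  # keep bigrams occurring at least twice
    print(finder.nbest(BigramAssocMeasures().likelihood_ratio, 5))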

Publications