Horizon CDT Research Highlights

Innovation and Regulation: Attaining and Maintaining Transparency and Accountability in Automated Decision-Making.

  Kathryn Baguley (2020 cohort)

The use of Automated Decision-Making (ADM), meaning decisions made with no human involvement, is increasing across society. The retail sector uses ADM to profile customers and personalise content through Recommender Systems (RS). An RS takes the information an organisation knows or perceives about an individual or group of people and applies an algorithm that predicts and recommends future purchases.
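The profile-and-predict loop described above can be sketched in miniature. The following is an illustrative toy only: the `recommend` function, the set-based purchase histories, and the co-occurrence scoring are all hypothetical assumptions for the sketch, not any retailer's actual algorithm.

```python
# Toy recommender sketch, assuming purchase histories are sets of item
# names. All names and the scoring rule are hypothetical examples,
# not any organisation's actual RS.

def recommend(user_history, all_histories, top_n=2):
    """Score items the user has not bought by how often they co-occur
    in the histories of similar users, and return the top_n items."""
    scores = {}
    for other in all_histories:
        overlap = len(user_history & other)  # crude similarity: shared items
        if overlap == 0:
            continue  # no shared purchases, so ignore this history
        for item in other - user_history:
            scores[item] = scores.get(item, 0) + overlap
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Hypothetical data the organisation "knows" about its customers.
histories = [{"kettle", "toaster"}, {"kettle", "mug"}, {"toaster", "mug"}]
print(recommend({"kettle"}, histories))  # items other kettle-buyers also bought
```

Even this toy shows why transparency is hard: the individual sees only the recommendation, not the profile or the scoring behind it.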

The perceived benefits of RS are that individuals see content that is more relevant to them, while organisations gain a cost-effective way to increase sales. As online retail grew by over 30% between 2015 and 2018, and e-commerce sales grew by 209% in April 2020, keeping abreast of consumer interests and habits is particularly important for organisations seeking to optimise their chances of survival. In contrast, RS present many challenges for individuals and society, primarily around privacy. As they work behind the scenes, many individuals may be unaware of RS, and studies continue to reveal more about their ethical challenges and real impact upon society.

This research concentrates on the policy aspects of RS under the General Data Protection Regulation (GDPR). We seek to understand how organisations interpret the use of this technology in line with the GDPR, particularly the application of, and distinction between, general profiling activity, decision-making based upon profiling, and automated individual decision-making, including profiling.

The GDPR is a key starting point, as it is designed to bring clearer rights for individuals and greater obligations and consequences for organisations. While the GDPR does not define transparency, by transparency we mean being ‘clear, open and honest’ with all, and by accountability we mean ‘taking responsibility’ and ‘demonstrating compliance’.

To understand the current position regarding elements of transparency and accountability, we will consider organisations’ publicly available information and make use of our individual rights. Semi-structured interviews with organisations, individuals and regulators will also provide wide stakeholder feedback on RS transparency and accountability, both on the present position and on what they feel may improve matters.

Overall, the project aims to understand the balance of interests between organisations, individuals and society, and to make appropriate recommendations to inform the future regulation of ADM in RS for societal good.

References

  • Information Commissioner’s Office (ICO), ‘What is automated decision-making and profiling?’ <https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/automated-decision-making-and-profiling/what-is-automated-individual-decision-making-and-profiling/#id2> accessed 22 February 2021.
  • Theo Araujo, Natali Helberger, Sanne Kruikemeier and Claes H. de Vreese, ‘In AI we trust? Perceptions about automated decision-making by artificial intelligence’ AI & SOCIETY (2020) 35:611–623 <https://doi.org/10.1007/s00146-019-00931-w> accessed 22 February 2021.
  • Article 29 Working Party, ‘Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679’ (WP251, Adopted 3 October 2017 as last revised 6 February 2018).
  • Stuart J Barnes, ‘Information Management research and practice in the post-COVID-19 world’, International Journal of Information Management, <https://doi.org/10.1016/j.ijinfomgt.2020.102175> accessed 22 February 2021; ACI Worldwide, ‘Global eCommerce Retail Sales Up 209 Percent in April, ACI Worldwide Research Reveals’ <https://investor.aciworldwide.com/node/21576/pdf> accessed 23 February 2021.
  • Ansgar Koene, Elvira Perez, Christopher James Carter, Ramona Statache, Svenja Adolphs, Claire O’Malley, Tom Rodden and Derek McAuley, ‘Ethics of Personalized Information Filtering in Internet Science’ (2015) 123-132. Springer Verlag. <https://doi.org/10.1007/978-3-319-18609-2_10> accessed 23 February 2021.
  • Silvia Milano, Maria Rosaria Taddeo and Luciano Floridi, ‘Recommender systems and their ethical challenges’ AI & SOCIETY (2020) 35:957–967 <https://doi.org/10.1007/s00146-020-00950-y> accessed 22 February 2021.
  • Engin Bozdag, ‘Bias in algorithmic filtering and personalization’ Ethics Inf Technol (2013) 15:209–227 <https://doi.org/10.1007/s10676-013-9321-6> accessed 23 February 2021.
  • Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC [2016] OJ L 119/1 (General Data Protection Regulation GDPR).
  • GDPR art 5(1)(a), rec 39, Information Commissioner’s Office (ICO), ‘Principle (a): Lawfulness, fairness and transparency’ <https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/principles/lawfulness-fairness-and-transparency/> accessed 27 January 2021.
  • GDPR art 5(2), Information Commissioner’s Office (ICO), ‘Accountability’ <https://ico.org.uk/for-organisations/data-sharing-a-code-of-practice/accountability/> accessed 27 January 2021.
  • Article 29 Working Party, ‘Guidelines on transparency under Regulation 2016/679’ (WP260, adopted on 29 November 2017 as last revised and adopted on 11 April 2018).