Author: Raper, Rebecca
Date issued: 2024-06-05
Date available: 2024-06-24
Title: Is there a need for robots with moral agency? A case study in social robotics
Type: Conference paper
Citation: Raper R. (2024) Is there a need for robots with moral agency? A case study in social robotics. In: 2024 IEEE International Conference on Industrial Technology (ICIT), 25-27 March 2024, Bristol, UK
ISBN: 979-8-3503-4027-3; 979-8-3503-4026-6
ISSN: 2641-0184; 2643-2978
DOI: https://doi.org/10.1109/ICIT58233.2024.10540919
URI: https://dspace.lib.cranfield.ac.uk/handle/1826/22554
Language: en-UK
License: Attribution-NonCommercial 4.0 International
Keywords: Laws of Robotics; Moral Machines; Machine Ethics; Moral Agency; Robot Ethics; Social Robotics

Abstract: There has been significant recent interest in the risks associated with Artificial Intelligence (AI), so much so that a Global AI Summit was recently hosted at Bletchley Park in the United Kingdom. One supposed risk is that an Artificial General Intelligence (AGI) might carry out acts detrimental to humanity. In the past, some researchers attempted to bestow machines with morals to mitigate such threats; in recent times, however, this approach has been largely dismissed, with claims that giving machines moral agency poses more of a threat in itself than it prevents. One critique of the claims about AGI risk is that they are unrealistic, and that there is no grounding for any threat to humanity. The aim of this paper is to present a case study in social robotics that illustrates two points: 1) what the real-life risks associated with AI might be, and 2) that the discussion of whether robots require moral agency should be reopened.