Browsing by Author "Raper, Rebecca"
Now showing 1 - 2 of 2
Item Embargo
A comment on the pursuit to align AI: we do not need value-aligned AI, we need AI that is risk-averse (Springer, 2024-01-31) Raper, Rebecca

AI Safety, AI Alignment, and the eventual demise of society at the hands of superintelligent beings have recently become topics of public and even political interest. So much so that there have been calls to halt all development of Artificial Intelligence until these issues are addressed (see BBC News, May 2023), Elon Musk has established a new organisation ('xAI') aimed at tackling them, and global leaders have convened international summits to prioritise research in the area. Needless to say, since the introduction of ChatGPT there has been growing interest in, and urgency around, solving these issues to prevent the dystopian AI scenarios we might be familiar with from science fiction.

Item Open Access
Is there a need for robots with moral agency? A case study in social robotics (IEEE, 2024-06-05) Raper, Rebecca

There has been significant recent interest in the risks associated with Artificial Intelligence (AI), so much so that a Global AI Summit was recently hosted at Bletchley Park in the United Kingdom. One supposed risk is that an Artificial General Intelligence (AGI) might carry out acts detrimental to humanity. In the past, some researchers attempted to bestow machines with morals to mitigate such threats; in recent times, however, this approach has largely been dismissed, on the grounds that giving machines moral agency poses more of a threat in and of itself than the threats it prevents. One critique of claims about the risks associated with AGI is that they are unrealistic and that there is no grounding for any threat to humanity.
The aim of this paper is to present a case study in social robotics with two goals: 1) to illustrate what real-life risks associated with AI might be, and 2) to reinstate the discussion of whether there is a requirement for robots with moral agency.