Joanna J Bryson: Academic Expert in Artificial and Natural Intelligence: on human ethics, building ethical AI, responsibility, transparency & interdisciplinary knowledge – The Human Show Podcast 67


Joanna J Bryson is a globally recognized leader in both artificial intelligence and AI ethics. She is currently a Reader (Associate Professor) in the Department of Computer Science at the University of Bath. She holds degrees in psychology and behavioral science from the University of Chicago and the University of Edinburgh, and degrees in Artificial Intelligence from the University of Edinburgh and MIT. At Bath, she founded the Artificial Intelligence research group and heads its Artificial Models of Natural Intelligence group. She has held fellowships in various fields at Harvard, the University of Nottingham, the Konrad Lorenz Institute for Evolution and Cognition in Austria, Oxford, Mannheim, and Princeton. Dr. Bryson has published widely, in venues ranging from the top academic journal Science to the website Reddit (where she hosted an outstanding Ask Me Anything thread). She has lent her expertise to national governments and government agencies as a consultant on AI policy and regulation, and to governmental and non-governmental organizations such as the OECD, the Red Cross, Chatham House, the EU, the WEF, and the UN. She first consulted on AI in industry for LEGO during the development of the Mindstorms series; more recently she has been engaging with a variety of corporations, including Facebook, Google, Microsoft, Salesforce, and Airbnb, again in the domain of AI policy.

In today’s episode we talk to Joanna about AI through a rather philosophical prism, touching on ethics and much more. Joanna questions the separation we draw between AI and other machines, and the special treatment AI receives. She calls for a reconsideration of our own understanding of ethics, of why we are obliged to each other, which would then make it easier to define and design any obligations towards a device. Joanna explains why she is against machines being responsible for themselves, and how building AI that reminds us of people might lead to less transparency in society. Finally, she gives examples of how interdisciplinary research can be organized in a way that allows us to understand more.


Listen & Subscribe to the Podcast here:


Mentioned in Podcast:

European Human Behaviour and Evolution Association: http://ehbea.com/
Anthropology + Technology Conference, Bristol, October 3rd 2019: https://www.anthtechconf.co.uk/

Social Media:

Twitter: @j2bryson
LinkedIn: https://www.linkedin.com/in/bryson/
Google Scholar: https://scholar.google.com/citations?user=QOU1RTUAAAAJ&hl=en
Blog: https://joanna-bryson.blogspot.com/
