Regulating AI: Explainable AI

AI applications may be built on algorithms such as neural networks and other machine learning mechanisms, which can enhance their robustness and predictive accuracy. However, how AI systems arrive at their decisions may appear opaque and incomprehensible to general users, non-technical managers, or even technical personnel. Algorithmic design may involve assumptions, priorities and principles that have not been openly explained to users and operations managers. The proposals for “explainable AI” and “trustworthy AI” are initiatives to create AI applications that are transparent, interpretable, and explainable to users and operations managers. These initiatives seek to foster public trust, informed consent and fair use of AI applications. They also seek to counter algorithmic bias that may work against the interests of underprivileged social groups.
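As a concrete illustration of what an “explanation” of a black-box model can look like in practice, the sketch below applies permutation feature importance, one widely used model-agnostic technique. The dataset, model, and scikit-learn tooling are illustrative assumptions chosen for this page, not methods discussed in the webinar.

```python
# A minimal sketch of one common explainable-AI technique:
# permutation feature importance, using scikit-learn.
# Dataset and model choices here are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An opaque "black box" model: accurate, but its internal logic
# is not directly readable by users or managers.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the model's
# test score drops: features whose shuffling hurts most matter most.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Report the most influential features, giving a simple,
# human-readable account of what drives the model's predictions.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f}")
```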

View the recording here!

Date & Time

Date: 25 May 2022 (Wed)

Time: 0930-1030 (CEST, GMT+2) / 1530-1630 (Singapore, Manila & Hong Kong, GMT+8) / 1730-1830 (Sydney, GMT+10)

More Information

Find the full series, Regulating AI: Debating Approaches and Perspectives from Asia and Europe, here.

Speakers
Prof Matthias C. Kettemann
Head of Research Programme, Hans-Bredow-Institute / HIIG

Matthias C. Kettemann is Professor of Innovation, Theory and Philosophy of Law and Head of the Department for Theory and Future of Law at the University of Innsbruck, Austria. He also holds research leadership positions at the Leibniz Institute for Media Research | Hans-Bredow-Institute, Hamburg, and the Humboldt Institute for Internet and Society, Berlin.

Prof Liz Sonenberg
Pro Vice-Chancellor, Systems Innovation, University of Melbourne

Liz Sonenberg is a Professor of Information Systems at the University of Melbourne and holds the Chancellery role of Pro Vice-Chancellor (Systems Innovation). Liz is a member of the Advisory Board of AI Magazine and of the Standing Committee of the One Hundred Year Study on Artificial Intelligence (AI100). Her currently active research projects include “Strategic Deception in AI” and “Explanation in AI”.

Dr Brian Lim
Assistant Professor of Computer Science, National University of Singapore

Brian Lim, PhD, is an Assistant Professor in the Department of Computer Science at the National University of Singapore (NUS). He leads the NUS Ubicomp Lab, which focuses on ubiquitous computing and explainable artificial intelligence for healthcare, wellness and smart cities. His research explores how to improve the usability of explainable AI by modeling human factors, and how to apply AI to improve clinical decision making and user engagement towards healthier lifestyles. He serves on the editorial board of PACM IMWUT and on program committees for CHI and AAAI. He received a B.S. in engineering physics from Cornell University and a Ph.D. in human-computer interaction from Carnegie Mellon University.

Kal Joffres (Moderator)
CEO and co-founder of Tandemic
Contact Us

Heinrich Böll Stiftung (website)

Ms Lucia Siu, Programme Manager, Heinrich Böll Stiftung, Hong Kong, Asia | Global Dialogue

Email: Lucia.Siu [at] hk.boell.org

 

APRU (website)

Ms Christina Schönleber, Senior Director, Policy and Research Programs, APRU

Email: policyprograms [at] apru.org
