Tag #Regulating AI
Regulating AI
Background

Artificial intelligence (AI) has reached a stage of maturity and extensive application across supply chains and manufacturing, automation, public governance, media and entertainment. While industries and societies have been quick to adopt AI to harness its benefits and opportunities, many governments are still catching up on developing responsible and appropriate regulatory frameworks to prevent the immense possible harms of mismanaged AI. During this active shaping process, the European Commission unveiled its draft AI Act (AIA) in April 2021; the ongoing discussion and law-making process, which seeks to establish key agendas and practices in the field of AI regulation, continued in the European Parliament in 2022. Over the same two to three years, a number of Asian countries have been actively rolling out policy papers, laws, and guidelines concerning AI regulation, embracing different emphases and approaches.

The Hong Kong office of the Heinrich Böll Stiftung (hbs) and the Association of Pacific Rim Universities (APRU) invited experts and interested audiences from the Asia-Pacific region and Europe to a series of three webinars to discuss current ideas and approaches around the regulation of AI, including: What kind of regulatory regime can put effective checks on misuse or socially dangerous developments without harming technological progress in the field? How can accountability of AI-supported decision-making be secured if the details of the process cannot be fully and transparently explained? How is it possible, in an environment of large-scale data usage, to safeguard privacy and data protection? The series sought to share best practices, developments and governance frameworks, and to deepen insights into how to address AI-related governance and policy challenges globally.
Activities

Together we held three joint online expert forums fostering Asia-Europe dialogue on AI regulation and governance, focusing on three critical themes of debate that stand at the frontier of current attempts to develop AI regulatory policy and are likely to shape how AI will be implemented in global industries and societies. Participants included governmental and non-governmental actors and experts from Asia and Europe involved in the wider process of tech regulation. Deliverables included three webinars, video recordings, and web articles, followed by the publication of a policy insight brief developed from the proceedings.
The Heinrich Böll Stiftung (hbs), headquartered in Germany with a global network of more than 30 offices, is involved in the discussion of regulatory and governance issues of digitalization, especially through its Brussels, Washington and Hong Kong offices and its head office in Berlin. hbs is networked with relevant actors especially in Europe, including civil society, members of parliament, policy-makers and other experts involved in the EU's AI Law initiative. Visit their website here.
Regulating AI: Protection of Data Rights for Citizens and Users
Event time: 09:30-10:30 CEST GMT+2 / 13:00-14:00 India GMT+5.5/ 16:30-17:30 South Korea GMT+9
June 15, 2022 - June 15, 2022
Regulating AI: Explainable AI
May 25, 2022 - May 25, 2022
Regulating AI: Risk-based Approach of AI Regulation
Event time: 08:30-09:30 CEST GMT+2 / 14:30-15:30 Hong Kong GMT+8
May 5, 2022 - May 5, 2022
New Joint Synthesis Report by APRU and hbs HK Shows Way Forward on Regulating AI
APRU is proud to announce the publication of the final synthesis report of the Regulating AI webinar series, organized jointly by the Hong Kong office of the Germany-based Heinrich Böll Stiftung (hbs HK) and APRU. “Regulating AI: Debating Approaches and Perspectives from Asia and Europe” addresses key questions surrounding the appropriate regulation of AI, including: What constitutes an unacceptable risk? How does AI become explainable? How can data rights be protected without throttling AI’s potential?

The joint synthesis report comes at a critical time, as AI has been leaving the labs and rapidly gaining footholds in our everyday lives. Millions of decisions – many of them invisible – are being driven by AI.

“The project facilitated a fruitful exchange of perspectives from Asia and Europe and allows us to better understand a wide range of emerging approaches to the regulation of AI in different parts of the world,” says Christina Schönleber, APRU’s Chief Strategy Officer and member of the Regulating AI webinar series working group.

Webinar 1, under the theme “Risk-based Approach of AI Regulation”, was moderated by Zora Siebert (hbs Brussels) and featured Toby Walsh (University of New South Wales), Alexandra Geese (Member of European Parliament), and Jiro Kokuryo (Keio University) as speakers. The event highlighted that the EU’s proposed AI Act takes a significant step in defining which types of AI pose unacceptable risks, and how these can be clearly delineated.

Webinar 2, under the theme “Explainable AI”, was moderated by Kal Joffres (Tandemic) and brought in the perspectives of Liz Sonenberg (University of Melbourne), Matthias Kettemann (Hans-Bredow-Institute / HIIG), and Brian Lim (National University of Singapore). Participants agreed that enabling humans to understand why a system makes a particular decision is key to fostering public trust.
Webinar 3, under the theme “Protection of Data Rights for Citizens and Users”, was moderated by Axel Harneit-Sievers (hbs HK), with Sarah Chander (European Digital Rights), M. Jae Moon (Yonsei University), and Sankha Som (Tata Consultancy Services) looking into various risks deriving from both under-regulation and over-regulation.

The synthesis report concludes that while governments are fully capable of banning or restricting entire categories of AI uses, the risks posed by AI are so context-sensitive that regulating them a priori, regardless of context, is a blunt instrument. The working group furthermore notes that policy discussions on AI have too often focused on individuals’ fundamental rights; it recommends that discussions be rebalanced to give greater consideration to the broader societal impacts of AI.

Finally, the synthesis report warns that policy discussions centred on the risks of AI can sometimes lose sight of the opportunities AI offers for creating a better future. “AI has the potential to help address human biases in decision-making and deliver a level of explainability that many of today’s institutions cannot, from banks to government agencies,” the working group writes. “The opportunities of AI must be monitored and acted upon as rigorously as the risks.”

Find out more information about Regulating AI here. Download the report here.
February 16, 2023
No Easy Answers on Protection of AI Data Rights, Webinar by hbs and APRU Shows
On June 15, a webinar held jointly by the Hong Kong office of the Heinrich Böll Stiftung (hbs) and the Association of Pacific Rim Universities (APRU), a consortium of leading research universities in 19 economies of the Pacific Rim, highlighted the complexity of data rights for citizens and users, with risks deriving from both under-regulation and over-regulation of AI applications.

The webinar, held under the theme Protection of Data Rights for Citizens and Users, completed a joint hbs-APRU series of three webinars on regulating AI. The series came against the backdrop of ever more AI-based systems leaving the laboratory stage and entering our everyday lives. While AI enables private-sector enterprises and governments to collect, store, access, and analyse data that influence crucial aspects of life, the challenge for regulators is to strike a balance between the data rights of users and the rights of enterprises and governments to use AI to improve their services.

The webinar’s three speakers, representing an NGO network, academia and the private sector, explained that the fair use of personal data should be protected while abusive manipulation and surveillance should be limited. Conversely, regulators should leave reasonable room for robust innovation and effective business strategies, and facilitate the effective operation of government bureaus in delivering public services.

“We not only talk about the use of personal data but also a broader range of fundamental rights, such as rights to social protection, non-discrimination and freedom of expression,” said Sarah Chander, Senior Policy Adviser at European Digital Rights (EDRi), a Brussels-based advocacy group leading work on AI policy and specifically the EU AI Act.
“Besides these rights in an individual sense, we have also been looking into AI systems’ impact on our society, impact on broader forms of marginalization, potential invasiveness, as well as economic and social justice, and the starting point of our talks with the different stakeholders is the question of how we can empower people in this context,” she added.

M. Jae Moon, Underwood Distinguished Professor and Director of the Institute for Future Government at Yonsei University, whose research focuses on digital government, explained that governments are increasingly driven to implement AI systems by their desire to improve evidence-based policy decision-making. “The availability of personal data is very important for making good decisions in the public interest, and, of course, privacy protection and data security should always be ensured,” Moon said. “Citizens, for their part, are increasingly demanding customized and targeted public services, and balancing these two sides’ demands requires good social consensus,” he added. Moon went on to emphasize that citizens, after consenting to the use of their private data by the government, should be able to track the data usage and also be able to withdraw their consent.

Sankha Som, Chief Innovation Evangelist of Tata Consultancy Services, explained that the terms Big Data and AI are often intertwined despite describing very different things. According to Som, Big Data is about managing the input side of AI and drawing insights from the data, whereas AI is about predictions and decision-making. “If you look at how AI systems are built today, there are several different Big Data approaches used on the input side, but there are also processing steps such as data labelling which are AI-specific; and many issues related to AI actually come from these processing steps,” Som said.
“Biases can, intentionally or unintentionally, cause long-term harm to individuals and groups, and they can creep into these processes, so it will take regulation not only of the use of input data but also of end use, while at the same time complying with enterprise-specific policies,” he added.

The webinar was moderated by Dr. Axel Harneit-Sievers, Director of the Heinrich Böll Stiftung Hong Kong Office. The series’ previous two webinars were held in May under the themes Risk-based Approach of AI Regulation and Explainable AI.

More information
Listen to the recording here. Find out more about the webinar series here.

Contact Us
Lucia Siu, Programme Manager, Heinrich Böll Stiftung, Hong Kong, Asia | Global Dialogue. Email: Lucia.Siu [at] hk.boell.org
Christina Schönleber, Senior Director, Policy and Research Programs, APRU. Email: policyprograms [at] apru.org
June 27, 2022
Webinar by Heinrich Böll Stiftung and APRU takes deep dive into Explainable AI
On May 25, a webinar held jointly by the Hong Kong office of the Heinrich Böll Stiftung (hbs) and the Association of Pacific Rim Universities (APRU) highlighted that many of the algorithms that run artificial intelligence (AI) are shrouded in opacity, with expert speakers identifying approaches to making AI much more explainable than it is today.

The webinar, held under the theme Explainable AI, was the second in a joint hbs-APRU series of three webinars on regulating AI. The series comes against the backdrop of ever more AI-based systems leaving the laboratory stage and entering our everyday lives. While AI algorithmic designs can enhance the robustness and predictive accuracy of applications, they may involve assumptions, priorities and principles that have not been openly explained to users and operation managers. The proposals of “explainable AI” and “trustworthy AI” are initiatives that seek to foster public trust, informed consent and fair use of AI applications. They also seek to counter algorithmic bias that may work against the interests of underprivileged social groups.

“There are many AI success stories, but algorithms are trained on datasets and proxies, and developers too often and unintentionally use datasets with poor representation of the relevant population,” said Liz Sonenberg, Professor of Information Systems at the University of Melbourne, who featured as one of the webinar’s three speakers. “Explainable AI enables humans to understand why a system decides in a certain way, which is the first step to questioning its fairness,” she added.

Sonenberg explained that the use of AI to advise a judicial decision-maker of a criminal defendant’s risk of recidivism, for instance, is a development that should be subject to careful scrutiny. Studies of one existing such AI system suggest that it offers racially biased advice, and while this proposition is contested by others, these concerns raise the important issue of how to ensure fairness. Matthias C.
Kettemann, head of the Department for Theory and Future of Law at the University of Innsbruck, pointed out that decisions on AI systems’ explanations should not be left solely to lawyers, technicians or program designers. Rather, he said, the explanations should be made with a holistic approach that investigates what sorts of information people really need.

“People do not need to know all the parameters that shape an AI system’s decision, but they need to know what aspects of the available data influenced those decisions and what can be done about it,” Kettemann said. “We all have the right of justification if a state or machine influences the way rights and goods are distributed between individuals and societies, and in the next few years, one of the key challenges will be to nurture explainable AI so that people do not feel powerless against AI-based decisions,” he added.

Brian Lim, Assistant Professor in the Department of Computer Science at the National University of Singapore (NUS), explores in his research how to improve the usability of explainable AI by modeling human factors, and applies AI to improve decision-making and user engagement towards healthier and safer lifestyles. Speaking at the webinar, Lim explained that one of the earliest uses of explainable AI is to identify problems in the available data. Then, he said, the user can investigate whether the AI reasons in a way that follows the standards and conventions of the domain concerned.

“Decisions in the medical domain, for instance, are important because they are a matter of life and death, and the AI should be like the doctors who understand the underlying biological processes and causal mechanisms,” Lim said. “Explainable AI can help people to interpret their data and situation to find reasonable, justifiable and defensible answers,” he added.

The final webinar will be held on June 15 under the theme Protection of Data Rights for Citizens and Users.
The event will address the challenges for regulators in striking a balance between the data rights of citizens and the rights of enterprises and states to make use of data in AI.

More information
Listen to the recording here. Find out more about the webinar series here. Register for the June 15th session here.

Contact Us
Lucia Siu, Programme Manager, Heinrich Böll Stiftung, Hong Kong, Asia | Global Dialogue. Email: Lucia.Siu [at] hk.boell.org
Christina Schönleber, Senior Director, Policy and Research Programs, APRU. Email: policyprograms [at] apru.org
June 1, 2022
Heinrich Böll Stiftung and APRU Discuss Risk-based Governance of AI in First Joint Webinar
May 12, 2022