Tag #AI For Everyone
AI For Everyone
Ensuring equitable outcomes requires active engagement with the ICT sector. In December 2018, APRU initiated a partnership with Google to explore artificial intelligence policy issues. The partnership will see the production of two policy research projects: the first focuses on the social implications of artificial intelligence and the future of work; the second seeks to understand how society can maximize artificial intelligence’s potential for an equitable future. Collaborators from the APRU network began working on the first project, AI for Everyone: Benefitting From and Building Trust in the Technology, holding the first workshop on artificial intelligence accessibility and governance on December 1, 2017, at Keio University, Tokyo, co-chaired by artificial intelligence experts Professors Jiro Kokuryo (Keio) and Toby Walsh (UNSW Sydney). The project will deliver a series of working papers culminating in policy recommendations to be published and widely disseminated to governments and civil society.
AI for Everyone: Benefitting From and Building Trust in the Technology
Increase access to the benefits of artificial intelligence.
Build awareness about the nature of the technology.
Disseminate key findings feeding into policy discourse and dialogue.
AI4Every1 Workshop
August 31, 2018
APRU on The Business Times: Safeguarding Our Future With AI Will Need More Regulations
Original post in The Business Times.

More has to be done to ensure that AI is used for social good.

A silver lining emerging from Covid-19’s social and economic fallout is the unprecedented application of artificial intelligence (AI) and Big Data technology to aid recovery and enable governments and companies to operate effectively. However, as AI and Big Data are rapidly adopted, their evolution is far outpacing regulatory processes for social equity, privacy, and political accountability, fuelling concern about their possible predatory use.

Whether contributing to essential R&D for coronavirus diagnostic tools or helping retailers and manufacturers transform their processes and the global supply chain, AI’s impressive achievements do not fully allay anxieties about its perceived dark side. Public concern about the threats of AI and Big Data ranges from privacy breaches to dystopian visions of a future technological singularity. Meanwhile, there is fairly strong sentiment that tech giants like Facebook, Amazon and Apple have too much unaccountable power. Amid rising antitrust actions in the US and legislative pushback in Europe, other firms like Microsoft, Alibaba and Tencent also risk facing similar accusations.

Despite their advancements, breakthrough technologies always engender turbulence. The pervasiveness of AI across all aspects of life, and its control by elites, raises the question of how to ensure its use for social good. For ordinary citizens, justifiable suspicion of corporate motives can also render them prey to misinformation. Multilateral organisations have played critical roles in countering false claims and building public trust, but there is more to be done.

AI FOR SOCIAL GOOD

Against this backdrop, APRU (the Association of Pacific Rim Universities), the United Nations ESCAP and Google came together in 2018 to launch an AI for Social Good partnership to bridge the gap between the growing AI research ecosystem and the limited study of AI’s potential to positively transform economies and societies. Led by Keio University in Japan, the project released its first flagship report in September 2020, with assessments of the current situation and the first-ever research-based policy recommendations on how governments, companies and universities can develop AI responsibly.

Together they concluded that countries effective in establishing enabling policy environments for AI, ones that both protect against possible risks and leverage the technology for social and environmental good, will be positioned to make considerable leaps towards the Sustainable Development Goals (SDGs). These include providing universal healthcare, ensuring a liveable planet, and creating decent work opportunities for all. However, countries that do not create this enabling environment risk forgoing the potential upsides of AI and may also bear the brunt of its destructive and destabilising effects: from weaponised misinformation, to escalating inequalities arising from unequal opportunities, to the rapid displacement of entire industries and job classes.

WAY FORWARD

Understanding the long-term implications of fast-moving technologies and effectively calibrating risks are critical to advancing AI development. Preventing bias and unfair outcomes produced by AI systems is a top priority, while government and private sector stakeholders should address the balance between data privacy, open data and AI growth. For governments, it will be tricky to navigate this mix.
The risk is that sluggish policy responses will make it impossible to catch up with AI’s increasingly rapid development. We recommend governments establish a lead public agency to guard against policy blind spots. These lead agencies will encourage “data loops” that provide feedback to users on how their data are being used, and thus facilitate agile regulation. This is necessary because of AI’s inherently fast-changing nature and the emergence of aspects that may not have been obvious even weeks or months earlier.

Another important ability that governments have to acquire is the ability to negotiate with interest groups and weigh ethical considerations. Otherwise, progress on promising socially and environmentally beneficial AI applications, ranging from innovative medical procedures to new transportation options, can be blocked by vested interests or by a poor understanding of the trade-offs between privacy and social impact.

Governments should also strengthen their ability to build and retain local technical know-how. This is essential, given that AI superpower countries are built on a critical mass of technical talent that has been trained, attracted to the country, and retained.

DIASPORA OF TALENT

Fortunately, many countries in Asia have a diaspora of talent who have trained in AI at leading universities and worked with leading AI firms. China has shown how to target and attract overseas Chinese talent to return home by showcasing economic opportunities and building confidence in the prospects of a successful career and livelihood.

Ultimately, for any emerging technology to be successful, gaining and maintaining public trust is crucial. Covid-19 contact tracing applications are a good case in point, as transparency is key to gaining and maintaining public trust in their deployment. With increased concerns about data privacy, governments can explain to the public the benefits and details of how the tracing application technology works, as well as the relevant privacy policies and laws that protect data.

To deal with the use and misuse of advanced technologies such as AI, we need renewed commitment to multilateralism and neutral platforms on which to address critical challenges. At the global level, the United Nations recently launched Verified, an initiative aimed at delivering trusted information, advice and stories focused on the best of humanity and opportunities to ‘build back better’, in line with the SDGs and the Paris agreement on climate change. It also invites the public to help counter the spread of Covid-19 misinformation by sharing factual advice with their communities.

The education sector is playing its part by facilitating the exchange of ideas among thought leaders, researchers, and policymakers to contribute to the international public policy process. I am hopeful that universities will be able to partner with government, the private sector and the community at large in constructing a technological ecosystem that serves the social good.

The writer is secretary general of APRU (the Association of Pacific Rim Universities)
March 18, 2021
APRU on South China Morning Post: Governments, business and academia must join hands to build trust in AI’s potential for good
By Christopher Tremewan
December 31, 2020
Original post in SCMP.

Concerns about the predatory use of technology, privacy intrusions and worsening social inequalities must be jointly addressed by all stakeholders in society, through sensible regulations, sound ethical norms and international collaboration.

In September, it was reported that Zhu Songchun, an expert in artificial intelligence at UCLA, had been recruited by Peking University. It was seen as part of the Chinese government’s strategy to become a global leader in AI, amid competition with the US for technological dominance.

In the West, a new US administration has been elected amid anxiety about cyber interference. Tech giants Apple, Facebook, Amazon and Google are facing antitrust accusations in the US, while the European Union has unveiled sweeping legislation to enable regulators to head off bad behaviour by big tech before it happens.

Meanwhile, Shoshana Zuboff’s bestselling book The Age of Surveillance Capitalism has alerted social media users to a new economic order that “claims human experience as free raw material for hidden commercial practices”. In addition, the public is regularly bombarded with dystopian scenarios (as in Black Mirror) about intelligent machines taking control of society, often in the service of ruling elites or criminals.

The dual character of AI, its promise for social good and its threat to human society through absolute control, has been a familiar theme for some time. AI systems are also evolving rapidly, outpacing regulatory processes for social equity and privacy. Especially during a pandemic, the urgent question facing governments, the private sector and universities is how to promote public trust in the beneficial side of AI technologies.

One way to build public trust is to deliver for the global common good, beyond national or corporate self-interest. With the world facing crises ranging from the current pandemic to worsening inequalities and the massive effects of climate change, it is obvious that no single country can solve any of them alone. The technological advances of AI already hold out promise in everything from medical diagnosis and drug development to creating smart cities and transitioning to a renewable-energy economy. MIT has reportedly developed an app that can immediately diagnose 98.5 per cent of Covid-19 infections from people simply coughing into their phones.

A recent report on “AI for Social Good”, co-authored by the UN, Google and the Association of Pacific Rim Universities, concluded that AI can help us “build back better” and improve the quality of life. But it also said “the realisation of social good by AI is effective only when the government adequately sets rules for appropriate use of data”. With respect to limiting intrusions on individual rights, it said that “the challenge is how to balance the reduction of human rights abuses while not suffocating the beneficial uses”.

These observations go to the core of the problem. Are governments accountable in real ways to their citizens, or are they more aligned with the interests of hi-tech monopolies? Who owns the new AI technologies? Are they used for concentrating power and wealth, or do they benefit those most in need of them?

The report recommends that governments develop abilities for agile regulation; for negotiation with interest groups to establish ethical norms; for leveraging the private sector for social and environmental good; and for building and retaining local know-how.
While these issues will be approached in different ways in each country, international collaboration will be essential. International organisations, globally connected social movements and enhanced political participation by informed citizens will all be critical in shaping the environment for regulation in the public interest. At the same time, geopolitical rivalry need not constrain our building of trust and cooperation for the common good.

The Covid-19 crisis has shown that it is possible for governments to move decisively towards the public interest and align new technologies to solutions that benefit everyone. We should not forget that, in January, a team of Chinese and Australian researchers published the first genome of the new virus, and the genetic map was made accessible to researchers worldwide. International organisations such as the World Health Organization and international collaborations by biomedical researchers also play critical roles in building public trust and countering false information.

Universities have played an important role in advancing research cooperation with the corporate sector and in bolstering public confidence that global access takes priority over the profit motive of Big Pharma. For example, the vaccine developed by Oxford University and AstraZeneca will be made available at cost to developing countries and can be distributed without the need for special freezers. Peking University and UCLA are cooperating with the National University of Singapore and the University of Sydney to exchange best practices on Covid-19 crisis management.

Competition for international dominance in AI applications also fades as we focus on applying its beneficial uses to common challenges. Global frameworks for cooperation such as the UN 2030 Agenda for Sustainable Development and the Paris Climate Agreement set out the tasks. Google, for example, has established partnerships with universities and government labs for advanced weather and climate prediction, with one project focusing on communities in India and Bangladesh vulnerable to flooding.

To deal with the use and misuse of advanced technologies like AI, we need a renewed commitment to multilateralism and to neutral platforms on which to address critical challenges. Universities that collectively exercise independent ethical leadership internationally can also, through external partnerships, help to shape national regulatory regimes for AI that are responsive to the public interest.

Find out more about the UN ESCAP-APRU-Google AI for Social Good project here.
December 31, 2020
APRU on Times Higher Education: ‘Oversight needed’ so AI can be used for good in Asia-Pacific
By Joyce Lau
Original post in THE.

Academics urge governments to set up frameworks for ethical use of technology and reaffirm the need for greater multidisciplinarity.

Asia-Pacific universities could use artificial intelligence to harness their strengths in combating epidemics and other global problems, but only if there were regulatory frameworks to ensure ethical use, experts said.

Artificial Intelligence for Social Good, a nearly 300-page report by academics in Australia, Hong Kong, India, Singapore, South Korea and Thailand, was launched the same day as an event held by the Association of Pacific Rim Universities (APRU), the United Nations’ Economic and Social Commission for Asia and the Pacific (ESCAP) and Google. The research, co-published by APRU and Keio University in Japan, laid out recommendations for using AI in the region to achieve the UN’s sustainable development goals (SDGs).

While the report outlined the great potential for AI in the region, it also said that risks must be managed, privacy concerns must be addressed and testing must be conducted before large-scale technology projects are implemented.

Christopher Tremewan, APRU’s secretary general and a former vice-president at the University of Auckland, said that Pacific Rim universities “have incredible research depth in the challenges facing this region, from extreme climate events and the global Covid-19 pandemic to complex cross-border problems. Their collective expertise and AI innovation makes a powerful contribution to our societies and our planet’s health.”

However, he also said there were potential problems with “rapid technological changes rolled out amid inequality and heightened international tensions”. “As educators, we know that technology is not neutral and that public accountability at all levels is vital,” he said.

APRU, which includes 56 research universities in Asia, Australasia and the west coast of the Americas, is based at the Hong Kong University of Science and Technology.

In answering questions, Dr Tremewan drew on his own observations in New Zealand and Hong Kong, two places where Covid responses have been lauded. “The feeling in Hong Kong is that there is tremendous experience from Sars,” he said, referring to the 2003 epidemic. “The universities here have capability in medical research, particularly on the structure of this type of disease, and also in public health strategy.” Meanwhile, in New Zealand, “confidence in science” and the prominence of researchers and experts speaking out aided the public response.

“Universities are playing key roles locally and internationally,” he said, adding that expertise was also needed in policy, communications and social behaviour. “The solutions are multidisciplinary, not only technological or medical.”

Soraj Hongladarom, director of the Center for Ethics of Science and Technology at Chulalongkorn University in Bangkok and one of the authors of the report, said their work had “broken new ground” in Asia. “We’re trying to focus on the cultural context of AI, which hasn’t been done very much in an academic context,” he said.

Professor Hongladarom, a philosopher, urged greater interdisciplinarity in tackling social problems. “Engineers and computer scientists must work with social scientists, anthropologists and philosophers to look beyond the purely technical side of AI, but also at its social, cultural and political aspects,” he said.
He added that policy and regulation were vital in keeping control over technology: “Every government must take action; it’s particularly important in South-east Asia.”

Dr Tremewan said that, aside from crossing disciplinary boundaries, AI also had to cross national borders. “Universities have huge social power in their local contexts. So how do we bring that influence internationally?” he asked.

Find out more about the UN ESCAP-APRU-Google AI for Social Good project here.
November 12, 2020
AI For Everyone: New Open Access Book
APRU is pleased to announce the release of the new book “AI for Everyone: Benefitting From and Building Trust in the Technology.” Published on January 28, 2020, the book was written by Jiro Kokuryo, Catharina Maracke, and Toby Walsh. The project was led by co-chairs and AI experts Professors Jiro Kokuryo (Keio) and Toby Walsh (UNSW). The open-access book features APRU’s project and introduces its findings. The project is the result of a discussion series organized by APRU and Google.

“Experts from APRU universities contributed greatly to this foundational project, which we built upon for projects such as the Transformation of Work and AI for Social Good,” said Christina Schönleber, APRU Senior Director (Policy and Programs). “It enabled us to actively pursue opportunities to interact with policymakers, businesses, and leaders in society to address major AI-related fears, such as ‘black box’ machines manipulating human society, unethical uses of AI, and the possibility that AI may widen the gap between the rich and the poor,” she added.

The project’s first meeting was held in late 2017, laying the groundwork for a series of working papers and their resulting policy recommendations. As many as twelve of these AI-related working papers were reviewed at the second meeting in September, reflecting eager participation by APRU members. An accompanying project workshop took on key questions, such as how to establish more trust in AI and how to amplify human intelligence through the use of AI toward beneficial ends. The project’s preliminary outcome was prominently featured in the Pacific Economic Cooperation Council’s State of the Region Report 2018-2019, which fed into the 30th APEC ministerial meeting held the following month in Port Moresby, Papua New Guinea.

“The title of our book reflects the belief that access to the benefits of AI should be transparent, open, and understood by and accessible to all people regardless of their geographic, generational, economic, cultural and other social background,” said Kokuryo. “We wrote it to strengthen awareness about the nature of the technology, governance of the technology, and its development process, with a focus on responsible development,” he added.

The book is available as a paperback edition at cost price. Please see the project overview and policy statement here. Keio and UNSW are the APRU member university leads of this project. Other involved APRU member institutions include: The Australian National University (Australia), Far Eastern Federal University (Russia), Peking University (China), The Chinese University of Hong Kong, National University of Singapore, Tecnológico de Monterrey (Mexico), Fudan University (China), University of California, Irvine (USA), Universidad de Chile (Chile), UNSW Sydney (Australia), and The Hong Kong University of Science and Technology.
February 1, 2020
AI Policy for the Future: Can we trust AI?
Date & Time: August 23, 2019, 9 am to 5 pm
Venue: Korea Press Center, 20th floor, International Conference Hall

The Seoul National University Initiative will host a one-day conference focusing on trust in AI for the future. The conference will bring together AI experts and scholars from academia, industry, and government to address current concerns about accountability and to enhance socially beneficial outcomes in AI governance through technology, policy, and law. Critical issues such as fairness and equity will be analyzed at both the macro and micro level to develop key recommendations on the responsible use of AI. Find the program here.
August 16, 2019