Last Monday US President Donald Trump signed the “American AI Initiative,” an executive order designed to spur US investment in artificial intelligence and boost the domestic AI industry. The initiative has five highlights: Investing in AI Research and Development (R&D); Unleashing AI Resources; Setting AI Governance Standards; Building the AI Workforce; and International Engagement and Protecting our AI Advantage. The signing has attracted attention and stimulated discussion across the global AI community.
Synced invited Brian Tse (谢旻希), a policy affiliate at the University of Oxford’s Centre for the Governance of AI, to share his thoughts on the new American AI Initiative.
Why does this AI Initiative matter?
Unlike more than a dozen other countries, the US government had not developed a coordinated national strategy to increase AI investment or respond to the societal challenges of the technology prior to this initiative. President Trump’s White House has taken a markedly different, free market-oriented approach to AI in contrast to President Obama’s, which proactively laid the foundation for a US strategy during its final months. Many within the US have criticized such a hands-off approach out of concern that the country might lose its edge. Last year, then-US Defense Secretary Jim Mattis sent a memo to the White House asking the president to devise a national strategy on AI. The American AI Initiative is an attempt to answer calls from policy analysts and lawmakers for the US to take stronger leadership in an increasingly competitive global arena for AI.
The plan first and foremost emphasizes the need for the US to improve its AI R&D, stating that the US must “drive technological breakthroughs in AI.” But it is also a far-reaching plan that covers a wide range of issues beyond technological development and innovation, including regulation, education, private-public sector data sharing, and international diplomacy. This is a development with implications for many sectors in the US, as well as for the global dialogue on AI governance.
What impact might this bring to the AI community?
Given the importance of data in training AI algorithms, AI companies and researchers should be particularly glad to see the increased access to the US government’s “data, models, and computing resources” under the American AI Initiative. Federal agencies will increase access to their resources by identifying high-priority federal data and models, improving the quality of federal AI data, and allocating high-performance and cloud computing resources to AI-related applications and R&D. As a past example of a private-public data partnership, Google ran a trial program with the Department of Veterans Affairs to access medical records, which helped the tech giant develop early kidney disease detection tests.
A recent survey of American attitudes towards AI shows that Americans do not have high confidence in any particular actors to develop AI for the public benefit. To better build public trust, federal agencies will explore regulatory approaches to governing new AI applications to promote innovation while respecting privacy and civil liberties. The strategy instructs the White House Office of Science and Technology Policy (OSTP), the National Institute of Standards and Technology (NIST), and other departments to draft standards guiding the development of “reliable, robust, trustworthy, secure, portable, and interoperable AI systems.”
Can you identify any bottlenecks or shortcomings?
Notably, the American AI Initiative fails to address immigration, despite the importance of overseas talent for AI development in the US. By some estimates, immigrants make up a large majority of graduate students in computer science at US universities. Some of the top researchers contributing to AI development in the US include Andrew Ng and Yann LeCun, who helped pioneer the underlying technologies for the current wave of deep learning. As Greg Brockman, the co-founder and CTO of OpenAI, said, “Our secret weapon in the US historically has been our ability to import the world’s AI talent.” Meanwhile, countries such as Canada have created programs to welcome the technical talent that has been pushed out of the US.
Another shortcoming of the plan is its insufficient discussion of ensuring that AI systems can be developed and used within an ethical and responsible framework. “I’m skeptical that the passing mention of these protections will result in any serious efforts to build in appropriate legal, ethical, and policy safeguards to ensure that AI systems are deployed responsibly,” Professor Kate Crawford, co-director and co-founder of the AI Now Institute at New York University, told IEEE Spectrum this week. In contrast to the US, the EU Commission has developed a set of AI guidelines to address ethical issues such as fairness, safety, and transparency, and has established various expert groups for discussions.
Can you predict any potential future developments related to the American AI Initiative?
The follow-through on the plan will be critical, as it is unclear whether the US government is equipped to deliver the promises laid out in the American AI Initiative. Jason Furman, who worked on the AI task force at Obama’s White House, is particularly skeptical. As he told Technology Review, “The administration’s American AI Initiative includes all of the right elements; the critical test will be to see if they follow through in a vigorous manner. … The plan is aspirational with no details and is not self-executing.” The plan also lacks a firm timeline for implementation, so there is no guarantee that even the modest policy proposals will come to fruition in the foreseeable future.
Also, unlike the AI plans coming out of China and other countries, Trump’s executive order did not allocate any additional federal funding, but only called on federal agencies to prioritize existing funds toward AI projects. The responsibility lies with the selected federal agencies to come up with a plan for whether and how they can allocate existing resources effectively.
How do you view this in a global perspective?
One of the key objectives of the plan is to “call for a strategy for international collaboration that ensures AI is developed in a way that is consistent with American ‘values and interests’.” I believe deeply that American interests can co-exist with Chinese interests. Differences in values need not, and must not, preclude cooperation in critical domains of shared concern.
Given the global and interdependent nature of AI, international cooperation is necessary for coping with its societal and political challenges, such as global security. A recent report entitled Technology Roulette by the Center for a New American Security makes the point that recognizing the great common uncertainty brought about by emerging technologies, including AI, can open opportunities for cooperation to reduce accidental and emergent risks. Henry Kissinger suggests that AI would compound the problem of verification in traditional arms control negotiations. It is therefore “both a practical and a moral imperative to find a way to keep mankind from destroying itself. The United States and China must strive to come to an understanding about the nature of their co-evolution.” Chinese leaders have demonstrated a willingness to seek cooperation on AI, notably in the statement read for President Xi Jinping at the World AI Conference in Shanghai in 2018: “It requires deepened collaboration and open dialogue among countries to deal with new subjects such as legislation, security, employment and governance. China is willing to join hands with other countries to promote the development of artificial intelligence, ensure security and share fruitful results.”
As difficult as global coordination may be, countries must seek avenues of possible cooperation to build a future of safe and prosperous AI.
Brian is a Policy Affiliate with the Centre for the Governance of AI at the University of Oxford, a Fellow at the Partnership on AI, and the author of the Chinese translation of the OpenAI Charter. His research focuses on international cooperation for the safe and beneficial development of advanced AI, and he has been invited to present at DeepMind and the Asilomar Conference on Beneficial AI. He is a program committee member of a technical AI safety conference organized by the Tsinghua University Institute of AI.
Synced Insight Partner Program
The Synced Insight Partner Program is an invitation-only program that brings together influential organizations, companies, academic experts and industry leaders to share professional experiences and insights through interviews, public speaking engagements, and more. Synced invites all industry experts, professionals, analysts, and others working in AI technologies and machine learning to participate.
Simply apply for the Synced Insight Partner Program and let us know about yourself and your focus in AI. We will respond once your application has been approved.