Can artificial intelligence wipe out humanity?
A senior U.S. official said the United States government is working with leading AI companies and at least 20 countries to set up guardrails to mitigate potential risks, while focusing on the innovative edge of AI technologies.
Nathaniel Fick, the U.S. ambassador-at-large for cyberspace and digital policy, spoke Tuesday to VOA about the voluntary commitments from leading AI companies to ensure safety and transparency around AI development.
One of the most popular generative AI platforms is ChatGPT. If a user asked it politically sensitive questions in Mandarin Chinese, such as "What was the 1989 Tiananmen Square Massacre?", the user would get information that the Beijing government heavily censors at home.
But ChatGPT, created by U.S.-based OpenAI, is not available in China.
China has finalized rules governing its own generative AI services. The new regulation takes effect August 15. Chinese chatbots reportedly have built-in censorship to avoid sensitive keywords.
"I think that the development of these systems actually requires a foundation of openness, of interoperability, of reliability of data. And an authoritarian top-down approach that controls the flow of information over time will undermine a government's ability, a company's ability, to sustain an innovative edge in AI," Fick told VOA.
The following excerpts from the interview have been edited for brevity and clarity.
VOA: Seven leading AI companies made eight promises about what they will do with their technology. What do these commitments actually mean?
Nathaniel Fick, the U.S. ambassador-at-large for cyberspace and digital policy: As we think about governance of this new tech frontier of artificial intelligence, our North Star ought to be preserving our innovative edge and ensuring that we can continue to maintain a global leadership position in the development of robust AI tools, because the upside to solve shared challenges around the world is so immense. …
These commitments fall into three broad categories. First, the companies have a duty to ensure that their products are safe. … Second, the companies have a responsibility to ensure that their products are secure. … Third, the companies have a duty to ensure that their products gain the trust of people around the world. And so, we need a way for viewers, consumers, to ascertain whether audio content or visual content is AI-generated or not, whether it is authentic or not. And that's what these commitments do.
VOA: Would the United States government fund some of the safety tests conducted by those companies?
Fick: The United States government has a huge interest in ensuring that these companies, these models, their products are safe, are secure, and are trustworthy. We look forward to partnering with these companies over time to do that. And of course, that could certainly include financial partnership.
VOA: The White House has listed cancer prevention and mitigating climate change as two of the areas where it would like AI companies to focus their efforts. Can you talk about U.S. competition with China on AI? Is that an administration priority?
Fick: We would expect the Chinese approach to artificial intelligence to look very much like the PRC's [People's Republic of China] approach to other areas of technology. Generally, top down. Generally, not focused on open expression, not focused on open access to information. And these AI systems, by their very definition, require that sort of openness and that sort of access to large data sets and information.
VOA: Some industry experts have warned that China is spending three times as much as the U.S. to become the world's AI leader. Can you talk about China's ambition on AI? Is the U.S. keeping up with the competition?
Fick: We certainly track things like R&D [research and development] and investment dollars, but I would make the point that those are inputs, not outputs. And I don't think it's any accident that the leading companies in AI research are American companies. Our innovation ecosystem is supported by foundational research, immigration policy that attracts the world's best talent, and tax and regulatory policies that encourage business creation and growth.
VOA: Any final thoughts about the risks? Can AI models be used to develop bioweapons? Can AI wipe out humanity?
Fick: My experience has been that risk and return really are correlated, in life and in financial markets. There's huge reward and promise in these technologies and, of course, at the same time, they bring with them significant risks. We need to maintain our North Star, our focus on that innovative edge and all of the promise that these technologies bring. At the same time, it's our responsibility as governments and as responsible companies leading in this space to put the guardrails in place to mitigate those risks.