Competition between the U.S. and China in artificial intelligence has expanded into a race to design and implement comprehensive AI regulations.
Efforts to craft rules ensuring AI's trustworthiness, safety and transparency come as governments around the world examine the technology's impact on national security and education.
ChatGPT, a chatbot that mimics human conversation, has received massive attention since its debut in November. Its ability to give sophisticated answers to complex questions with near-human fluency caught the world by surprise. Yet its many flaws, including ostensibly coherent responses laden with misleading information and apparent bias, have prompted tech leaders in the U.S. to sound the alarm.
"What happens when something vastly smarter than the smartest person comes along in silicon form? It's very difficult to predict what will happen in that circumstance," said Tesla Chief Executive Officer Elon Musk in an interview with Fox News. He warned that artificial intelligence could lead to "civilization destruction" without regulations in place.
Google CEO Sundar Pichai echoed that sentiment. "Over time there has to be regulation. There have to be consequences for creating deep fake videos which cause harm to society," Pichai said in an interview with CBS's "60 Minutes" program.
Jessica Brandt, policy director for the Artificial Intelligence and Emerging Technology Initiative at the Brookings Institution, told VOA Mandarin, "Business leaders understand that regulators will be watching this space closely, and they have an interest in shaping the approaches regulators will take."
U.S. grapples with regulations
AI regulation is still nascent in the U.S. Last year, the White House released voluntary guidance through a Blueprint for an AI Bill of Rights to help ensure users' rights are protected as technology companies design and develop AI systems.
At a meeting of the President's Council of Advisors on Science and Technology this month, President Joe Biden expressed concern about the potential dangers associated with AI and underscored that companies had a responsibility to ensure their products were safe before making them public.
On April 11, the National Telecommunications and Information Administration, a Commerce Department agency that advises the White House on telecommunications and information policy, began seeking public comment with the aim of crafting a report on AI accountability.
The U.S. government is trying to find the right balance to regulate the industry without stifling innovation "in part because the U.S. having innovative leadership globally is a selling point for the United States' hard and soft power," said Johanna Costigan, a junior fellow at the Asia Society Policy Institute's Center for China Analysis.
Brandt, with Brookings, said, "The challenge for liberal democracies is to ensure that AI is developed and deployed responsibly, while also supporting a vibrant innovation ecosystem that can attract talent and investment."
Meanwhile, other Western countries have started to work on regulating the emerging technology.
The U.K. government published its AI regulatory framework in March. Also last month, Italy temporarily blocked ChatGPT in the wake of a data breach, and the German commissioner for data protection said his country could follow suit.
The European Union says it is pursuing an AI strategy aimed at making Europe a world-class hub for human-centric, trustworthy AI, and it hopes to lead the world in AI standards.
Cyber regulations in China
In contrast to the U.S., the Chinese government has already implemented regulations aimed at tech sectors related to AI. In the past few years, Beijing has introduced several major data protection laws to limit the power of tech companies and to protect consumers.
The Cybersecurity Law, enacted in 2017, requires that data be stored within China and that operators submit to government security checks. The Data Security Law, enacted in 2021, established a framework for classifying and protecting data processed in China according to its significance for national security. The Personal Information Protection Law, passed the same year, gives Chinese consumers the right to access, correct and delete personal data gathered by businesses. Costigan, with the Asia Society, said these laws have laid the groundwork for future tech regulations.
In March 2022, China began enforcing a regulation governing how technology companies can use recommendation algorithms. The Cyberspace Administration of China (CAC) now supervises how companies use big data to analyze user preferences and push information to users.
On April 11, the CAC unveiled draft rules for managing generative artificial intelligence services similar to ChatGPT, in an effort to mitigate the dangers of the new technology.
Costigan said the goal of the proposed generative AI regulation could be seen in Article 4 of the draft, which states that content generated by future AI products must reflect the country's "core socialist values" and not encourage subversion of state power.
"Maintaining social stability is a key consideration," she said. "The new draft regulation does some good and is unambiguously in line with [President] Xi Jinping's desire to ensure that individuals, companies or organizations cannot use emerging AI applications to challenge his rule."
Michael Caster, the Asia digital program manager at Article 19, a London-based rights organization, told VOA, "The language, especially at Article 4, is clearly about maintaining the state's power of censorship and surveillance.
"All global policymakers should be clearly aware that while China may be attempting to set standards on emerging technology, their approach to legislation and regulation has always been to preserve the power of the party."
The future of cyber regulations
As strategies for cyber and AI regulation evolve, their development may depend largely on each country's mode of governance and its reasons for setting standards. Analysts say reaching consensus will pose intrinsic hurdles of its own.
"Ethical principles can be hard to implement consistently, since context matters and there are countless potential scenarios at play," Brandt told VOA. "They can be hard to enforce, too. Who would take on that role? How? And of course, before you can implement or enforce a set of principles, you need broad agreement on what they are."
Observers said the international community will face challenges as it works to create standards aimed at making AI technology ethical and safe.