As AI develops, regulators must learn from past mistakes with new technology
Artificial intelligence poses difficult questions about everything from antitrust regulations to fraud, cyber-terrorism and training
Yesterday, Gary Conkling wrote about the reasons Oregon should attempt to become an incubator for artificial intelligence development and what it might take to make that happen. He also acknowledged that artificial intelligence comes with risks and concerns that should be analyzed and planned for. Today, I will address government’s role in protecting the public from misuse of AI.
We don’t have to look far for lessons about what happens when government and private industry are too slow to respond to life-changing technological change. The last new technology with an impact comparable in scope to AI was the Internet, which drastically changed the news business and retailing, among other industries, and altered the way the world communicates. Some of the changes have been for the better, some for the worse. That’s inevitable. However, a few lessons are clear.
Government needs to rigorously enforce, if not stiffen, antitrust laws.
This probably is the most important lesson for elected leaders and appointed regulators to apply.
Several key aspects of the online world have a dominant player that overshadows the competition: Google in search, Meta in social media, Amazon in retail. Videos, music, online news and travel are more competitive but still dominated by a handful of companies. This concentration of power has had vast political and economic implications. And AI has the potential for much greater upheaval.
Adding to the concern, existing tech giants are at the front of the line in efforts to develop and profit from artificial intelligence. Google, Microsoft, Amazon and Meta all have major AI initiatives. Their already dominant market positions give them a significant advantage.
To their credit, these companies along with AI specialists such as Inflection, Anthropic and OpenAI are pledging to cooperate with regulators. Even if you assume they are sincere (for now, I give them the benefit of the doubt), the terms of cooperation matter. And a key term should be preservation of competition to prevent the accumulation of power.
Consider what might have happened if the Federal Trade Commission and the courts had prevented Facebook’s acquisition of Instagram. Instead of one company (Meta, formerly Facebook) holding a virtual monopoly on non-video social networking, with one platform dominant among younger users and the other among older users, there probably would be true competition, and therefore less power for Meta to exert over advertisers and online content in general.
The stakes are much higher with AI, which raises the risk of everything from fraud to cyber-terrorism. Therefore, the need for competition is paramount. (For a deeper discussion of the risks, read this article.)
Government, universities and private industry need to work together to implement AI.
In hindsight, it’s clear that government, higher education and private businesses all underestimated the impact of the Internet. That’s less likely to happen with AI because the risks are more apparent. But simply acknowledging that mishandling AI could cause harm is not enough. There must be a concerted, collaborative effort to predict how it will change life for everyone and to anticipate what guardrails will be needed. Obvious trouble spots such as enforcement of copyrights and the spread of disinformation must be addressed on the front end. Some type of public-private commission should be formed to watch for unintended consequences and pursue real-time corrections before problems multiply.
Admittedly, this will be difficult. Contrary to some perceptions, there were good-faith efforts to create a regulatory framework for the Internet.
In some corners of government and academia there are heated debates about Section 230 of the Communications Decency Act of 1996, but it continues to provide the basic framework for regulation of the Internet. A detailed analysis of this law is beyond my expertise, though it’s worth noting that Oregon has had a prominent role in its development. U.S. Sen. Ron Wyden, D-Ore., was one of its co-authors, and Jeff Kosseff, a former Oregonian reporter who is now a professor of cybersecurity law at the U.S. Naval Academy, wrote the often-cited book about Section 230, The Twenty-Six Words That Created the Internet. What does seem certain is that the law is better than nothing, and critics have struggled to articulate what would be better.
Government must prioritize cybersecurity.
Cybersecurity has become an essential part of national defense. Election manipulation through social media. Ransomware. Hacks of electrical grids and financial systems. The list of concerns is almost endless. And you don’t have to be a computer engineer or science fiction writer to imagine the damage hackers could inflict through artificial intelligence.
Part of the solution should be a public-private effort to develop more cybersecurity experts. The U.S. Labor Department estimates a shortage of 6 million engineers. Closing a gap that large while demand continues to surge will require both immigration of trained engineers and better training and recruitment of U.S. students. One part of that effort should be tuition relief for students who major in engineering and commit to using their skills in public-service jobs such as cybersecurity. Another part must be improved STEM education in public middle and high schools so more students are prepared to major in engineering fields.
The challenges of AI are too complex to cover in a book, much less one opinion article, but a good place to start is by applying the lessons we already learned from the evolution of the Internet and social media. Anything less would be irresponsible.
Mark Hester is a retired journalist who worked 20 years at The Oregonian in positions including business editor, sports editor and editorial writer.