
Coming AI Regulations Will Limit You to a Few AI Providers in the Future



Since the onset of the generative AI wave earlier this year, we have seen the likes of Elon Musk, Steve Wozniak, and various governments around the world call for strict regulations on AI. Why? The consensus is that generative AI could harm humans in ways ranging from misinformation to the extinction of biological life, but no one knows the full extent of the risks yet.

“I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that. We want to work with the government to prevent that from happening.” Sam Altman, CEO of OpenAI

Governments Preach AI Regulation, but None Has Delivered

While AI systems going rogue and becoming Skynet to terminate humanity remains fiction for now, some concerns have already crossed into reality. Misinformation, an almost accidental mischief of AI, and scams, often orchestrated behind the sinister veil of deepfakes, have become the immediate offspring of generative AI. A case in point was a digital puppetry act in May 2022, when a deepfake video appeared to show Elon Musk endorsing the crypto trading platform BitVex.


To combat these risks, President Biden issued an Executive Order that requires developers of “any foundation model that poses a serious risk to national security, national economic security or national public health and safety” to share safety test results with the government, establishes rigorous standards for AI safety and security, and initiates measures to protect against AI-enabled fraud and the misuse of AI in creating hazardous biological materials.


Across the sea, the EU is nurturing a legislative sapling known as the Artificial Intelligence Act. It's designed to categorize AI systems by risk level, with a keen eye on data quality, human oversight, and accountability. It's a broad brushstroke aimed at promoting trust, excellence, and responsible AI development. The roadmap also includes a ban on real-time biometric surveillance and transparency obligations for generative AI systems like ChatGPT.


Meanwhile, in the Far East, China is meticulously sketching its own regulatory framework. With the roll-out of the Interim Measures for the Management of Generative Artificial Intelligence Services, it's a step towards ensuring AI models are fed a balanced diet of diverse data and keeping tabs on the quality of their learning materials. The rule of thumb? Sample 4,000 data snippets from a source, and if over 5% is deemed “illegal and negative information”, that corpus gets a red card for future training.
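
That 5% cutoff is concrete enough to express in code. Below is a minimal sketch, in Python, of how such a screening rule might look in practice; the function name and the `is_flagged` classifier are illustrative placeholders, not anything defined in the published measures.

```python
import random

# Thresholds as described above: sample 4,000 snippets per source and
# reject the corpus if more than 5% of the sample is flagged.
SAMPLE_SIZE = 4_000
MAX_FLAGGED_RATIO = 0.05

def corpus_passes_screening(corpus: list[str], is_flagged) -> bool:
    """Return True if a random sample of `corpus` clears the 5% threshold.

    `is_flagged` is a caller-supplied classifier that returns True for
    content deemed "illegal and negative information"; building that
    classifier is, of course, the hard part.
    """
    if not corpus:
        return False  # nothing to vet; treat an empty source as failing
    sample = random.sample(corpus, min(SAMPLE_SIZE, len(corpus)))
    flagged = sum(1 for snippet in sample if is_flagged(snippet))
    return flagged / len(sample) <= MAX_FLAGGED_RATIO
```

The interesting part is not the arithmetic but the classifier: the rule leaves “illegal and negative information” open to interpretation, which is exactly the qualitative wiggle room discussed next.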


A common thread weaving through these regulations is their qualitative and somewhat subjective nature. That's likely by design, giving authorities room to navigate the enforcement maze without strict tethers. Yet, as they stand, these regulations feel more like a reassuring pat on the back for the world, a symbolic gesture to address the lurking risks of AI. There's a tacit game of wait-and-see among nations, each hesitant to clip the wings of its booming AI industry, lest it fall behind in this high-stakes race.


During the recent AI Safety Summit at Bletchley Park, Elon Musk had this to say:

"I think there's a lot of concern among people in the AI field that the government will sort of jump the gun on rules, before knowing what to do. I think that's unlikely to happen.”

The AI regulation race is one race no country wants to win.


Is Sam Altman's Pro-Regulation Stance a Façade to Keep OpenAI Ahead of the Competition?

Sam Altman, the frontman of OpenAI, has been making waves with his calls for AI regulation, a stance that came into sharp focus during a Senate subcommittee hearing in May 2023. Amidst the wood-paneled rooms of Washington, DC, Altman made an earnest plea: let's cradle the promise of AI with thoughtful regulation, lest it overshadow our human narrative. A surprising move, given that regulation could curb the lead his own company, OpenAI, has painstakingly built.


Yet, in a show of unyielding momentum, OpenAI hasn't taken its foot off the accelerator. On the contrary, it's gearing up to unveil an upgrade for GPT-4, a leap that will see ChatGPT inching closer to the realm of Artificial General Intelligence (AGI), the point at which AI matches human versatility across a broad spectrum of tasks and domains.


The present-day ChatGPT interface requires users to juggle between DALL-E, plug-ins, and a web browsing feature. The forthcoming update is set to bring these disparate elements under a single umbrella: users will be able to upload documents, browse the web, run advanced data analysis, and knock on DALL-E's door, all within a single chat session. It's akin to marrying the prowess of GPT-4 and DALL-E, forging a generative AI entity ready to tiptoe towards AGI.
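
To make the “single umbrella” concrete, here is a toy Python sketch of what unified tool routing looks like from a software point of view. Every name in it is a hypothetical placeholder; this is a mental model, not OpenAI's actual implementation.

```python
from typing import Callable

# Stand-ins for the real capabilities; each just echoes its input.
def browse(query: str) -> str:
    return f"[web results for: {query}]"

def analyze(data: str) -> str:
    return f"[analysis of: {data}]"

def generate_image(prompt: str) -> str:
    return f"[DALL-E-style image for: {prompt}]"

# One registry, one conversation: a tool is picked per turn instead of
# the user switching between separate modes or sessions.
TOOLS: dict[str, Callable[[str], str]] = {
    "browse": browse,
    "analyze": analyze,
    "image": generate_image,
}

def handle_turn(tool: str, payload: str) -> str:
    """Route a single conversational turn to the selected tool."""
    return TOOLS[tool](payload)

print(handle_turn("browse", "status of the EU AI Act"))
print(handle_turn("image", "a robot reading a rulebook"))
```

The design point is simply that tool selection moves from the user to the model, which is what makes the combined system feel like one assistant rather than four separate products.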


But why would Altman, the captain of the OpenAI ship, beckon the very regulatory storms that could dampen his company's voyage? Beyond the altruistic cloak of safeguarding humanity, Altman seems to believe that regulation, rather than hindering OpenAI's stride, would cement its position in the AI arena. Smaller players, lacking OpenAI's robust “guardrails”, might find themselves entangled in the regulatory vines, their pace of innovation slowed by the long wait for a nod of approval. OpenAI, with its safety measures already in place and a publicly declared willingness to be regulated, stands on steadier ground.


I’m not the only one who thinks this may be the case. Andrew Ng had this to say:

"There are definitely large tech companies that would rather not have to try to compete with open source, so they're creating fear of AI leading to human extinction. It's been a weapon for lobbyists to argue for legislation that would be very damaging to the open-source community." Andrew Ng, Cofounder Google Brain and Former Chief Scientist at Baidu

Government Regulations Will Cement the Position of AI Leaders

As we peer into the crystal ball of AI, two sketches of the future emerge. In the first, despite the dawn of AI regulations worldwide, the enigmatic nature of AI models and their corporate custodians proves a tough nut for the rule books to crack. Or certain countries decide to play it fast and loose with these newly minted regulations. The result? A global rulebook that quickly morphs into a paper tiger, as each nation, wary of falling behind its less regulation-friendly peers, subtly loosens its regulatory controls.


Now flip the coin and we land on the second scenario. Here, the world actually gets its act together and, under the oversight of independent international committees, enforces AI regulations with a fair degree of success. But this road is not without its bumps. With regulations firmly in place, AI firms find themselves entangled in the red tape of bureaucracy. Every leap of innovation, every upgrade to a foundational model, could be met with a 'hold-on-a-minute' from external audit committees. The pace of progress slows, and the landscape of AI providers shrinks. The giants of today (OpenAI, Anthropic, Google, Meta) may become the familiar faces of AI tomorrow, while the smaller players fade into the backdrop.

2 Comments


Walter Tong
Nov 06, 2023

Great read! Interesting perspective that Sam Altman's call for regulation was a tactical move to keep OpenAI ahead of the curve. I see an interesting similarity between the absent or qualitative regulation in the AI sector and the standards related to ESG: both lack concrete standards, invite varying interpretations by different parties, and put smaller players at a disadvantage.


It would be interesting to see if there will be an AI equivalent of "greenwashing", where AI companies exploit vague regulations to create the illusion of responsible AI practices without genuine transparency or accountability.


guanlong he
Nov 06, 2023

Insightful read! It does, however, lead one to wonder what regulations, if any, will truly enable humans to monitor the intelligence of AI, and whether we would indeed be able to stop a fully functioning, sentient AI from achieving its targets.

