May 31, 2025
16:00

AI’s Unchecked Ascent: How Big Tech is outpacing the regulatory rulebook

Artificial intelligence is experiencing a period of meteoric acceleration. Scarcely a week passes without fresh demonstrations of its expanding capabilities, as giants like OpenAI, Meta, Google, Anthropic and Microsoft unveil deeper integrations of their AI models, each flaunting ever more advanced features.

These firms’ fortunes were built on data, both content scraped from the internet and the personal details of their users. That digital information now serves as the lifeblood of the AI tools they deploy to the general public as tiered products.

Some of these tech titans have faced scrutiny over their data practices, resulting in fines in certain instances and changes in their behavior in others. They have been questioned by regulators, courts, and the general public in several major economies. 

To understand the kind of data these firms collect and the methods they use, consider a 2020 class action lawsuit brought against Google. In Brown et al vs Google LLC, users alleged that the tech giant tracked their data, including shopping habits and other online searches, even when they were browsing privately using Google’s “incognito” mode.

The search giant reached a settlement in April 2024, an accord the plaintiffs’ lawyers valued at as much as $7.8 billion. While users will have to individually file for damages, the company agreed to delete troves of data from its records as part of the settlement.

In a separate matter, Google agreed to settle a case brought against it by Texas Attorney General Ken Paxton over deceptive location tracking. The Silicon Valley company will pay $1.4 billion over allegations that it illegally tracked users’ locations and collected their biometric details without consent.

Google is not alone. Meta, owner of the Llama AI models, is another data guzzler. The social media giant was accused of using users’ biometric data illegally in Texas; it agreed to pay $1.4 billion while seeking to deepen its business in the state.

The settlement route

Both Google and Meta have denied any wrongdoing. This pattern of settling out of court while denying wrongdoing only emboldens the tech giants. By settling, these companies avoid creating legal precedents that could be used against them or the broader tech industry in future cases. A definitive court ruling against their data practices could open the floodgates for similar lawsuits.

If Google and Meta’s legal woes are largely concerned with user data, OpenAI, the standard-bearer of AI’s rapid advance, finds itself contesting lawsuits that probe the very foundations of its training methodologies. Multiple class-action suits accuse the company of illicitly scraping vast quantities of personal data from the internet without consent to train its large language models. 

High-profile authors and media organisations, including The New York Times, have joined this legal fray, alleging copyright infringement and claiming their intellectual property was unlawfully used to build OpenAI’s ChatGPT.

The copyright battles aren’t limited to the U.S. Indian book publishers and their international counterparts filed a copyright lawsuit against OpenAI earlier this year, while publisher Ziff Davis sued OpenAI for copyright infringement in April, adding to the web of high-stakes copyright cases.

These cases starkly illuminate the conflict between the AI industry’s perceived hunger for limitless data and established protections for personal information and intellectual property. Even as litigation mounts, OpenAI, Google and Meta’s AI development and deployment continue, seemingly undeterred.

Seemingly unfazed by these legal and regulatory threats, tech giants appear to operate in a realm where conventional constraints are less binding. They not only continue to enhance their AI models but deploy them with ever-greater velocity, even as legal frameworks struggle to catch up with, or even define the parameters of, a race that is already decisively underway.

The EU gold-standard tested

Perhaps an answer lies across the Atlantic, where Europe’s General Data Protection Regulation (GDPR) represents a robust attempt to tether data use to individual rights. Penalties under GDPR can be formidable, and the EU has been moving beyond GDPR violations to broader digital market competition issues.

Just this year, the EU fined Meta over the company’s user consent policy, which violated the bloc’s Digital Markets Act.

The EU’s scrutiny is not confined to American firms. Complaints have also targeted Chinese tech companies like TikTok and SHEIN, with allegations of unlawful data exports. While GDPR has undeniably compelled companies to adjust certain practices, the broader AI industry, particularly builders of foundational models, has continued its global expansion with little apparent deceleration. Moreover, the ultimate efficacy of Europe’s direct AI regulation remains an open question, with the EU’s AI Act not slated for full implementation until August 2025.

This dynamic is mirrored in other significant economies. India, with its Digital Personal Data Protection Act, 2023, is navigating this regulatory maze, formalising a data protection regime. The Act aims for a comprehensive framework, balancing consent requirements with provisions for future flexibility, thus attempting a delicate calibration between control and encouragement. India aims to be both a regulator and an important AI player.

China, too, has implemented stringent data privacy rules that make it difficult for foreign firms to transfer “significant data” out of the country. While China is strict about data leaving its soil, it has given AI development paramount strategic importance, supporting local firms in harnessing the latest advances in emerging technologies. And as in the U.S., the firms investing most heavily in AI are often those with the largest data troves.

Thus, while courtrooms bustle and regulators issue stern pronouncements, AI giants forge ahead, relentlessly refining models and deploying them at remarkable speeds. Legal challenges, however significant, often resemble the wake behind a rapidly advancing ship rather than a rudder steering its course. It is abundantly clear that privacy laws and regulatory frameworks are struggling to keep pace.

The fundamental truth is that Big Tech’s AI innovation cycle currently far outstrips the slower, more deliberative cadence of legal and ethical calibration. In this race, user privacy and broader societal guardrails risk becoming afterthoughts—issues to be managed or litigated post hoc, rather than foundational principles guiding AI’s unchecked and transformative ascent.
