AI is the hot topic dominating the annual startup conference in Austin, a gathering of founders, developers, investors and the general public in a city whose tech scene is arguably second only to Silicon Valley's. For two years running at Austin Startup Week, attendees from twenty-somethings to octogenarians have listened attentively and taken diligent notes, teaching themselves what is trending in the world of tech. The buzz is all about AI, which is as hot right now as PCs were in the late 80s and early 90s. This wave is here to stay, with rapid developments as agentic AI and generative AI built on large language models (LLMs) gain momentum. But how prepared are regulators to protect the public interest, in the absence of substantial enforcement mechanisms and meaningful scrutiny, while privacy and transparency concerns run rampant? Optimism about AI is matched by skepticism: Meta AI's chatbot has been operational since last year, training on the social media data of millions of users. This case study analyzes how social media becomes a social experiment whose goals and processes are not fully disclosed, yet which applies broadly to users of Facebook, WhatsApp and Instagram.

WHAT IS THE SOCIAL MEDIA SOCIAL EXPERIMENT, AND AT WHAT COST TO PRIVACY, SECURITY AND MORE?

Meta AI aims to be the newest virtual assistant on demand. With its rollout in 22 countries worldwide as of this writing in April 2025, it is seamlessly (and, for some consumers, annoyingly) integrated across all of Meta's social media platforms: Facebook, WhatsApp and Instagram, a handful of the biggest social media platforms in existence.

Meta AI’s multimodal model (producing text, videos and images) boasts multilingual assistance in answering questions, generating new ideas and teaching concepts. 24/7. One click away. Seamlessly integrated in the search bar. Meta AI offers personalized recommendations, customer support and social media growth. Whether in group chats or individual results, the ability to search, create and learn on the go is the functionality it promises. Is there, however, a shadowy bottom line where covert behavior modification and/or targeting of consumers is happening behind the scenes? Meta AI confidently claims that all of this is to enhance the user experience. As users browse chats, messages or groups, the Meta AI icon appears in every search feature. Meta claims it is not active unless used. Social media users counter that they are opted in by default, with no one-click opt-out, or in many locations no opt-out at all. Here is where we are with this.

Meta does concede that messages in public groups and public profile photos are used to train the algorithms on which this AI chatbot runs; in other words, the underlying LLM is trained on data available across Meta's networks. Two years after ChatGPT’s launch, Meta leveraged its social media networks to create one of the most powerful AI chatbots, largely because of its access to the data of millions of social media users. That brings it up to speed with Google’s Gemini and China’s DeepSeek, close contenders in the race to the top of the chatbot hierarchy by market size and use.

The key missing ingredient is an enforcement mechanism for regulatory oversight. The US takes a more laissez-faire approach (the ‘invisible hand’: market forces of demand and supply reach equilibrium with minimal government intervention in a freer market), versus the EU’s more top-down approach (stricter consumer protection laws and more comprehensive regulation shaping how market forces function and interact). So how do these overarching regulatory bodies monitor and control how data is stored, accessed and transferred, especially where LLMs and generative AI are mining and scraping data to train themselves around the clock? What if we add the fact that data is moving across borders, sourced from and transferred among millions of users?

Regulations such as the EU’s first-of-its-kind AI Act and Biden’s Executive Order attempt oversight of the AI and data landscape, where fast-moving laws are the need of the hour. To regulate effectively, the inner workings of these chatbots, tools, apps and algorithms must be clearly understood. With AI in its infancy and new changes arriving daily, it’s an uphill task to pin down how it works and where protections are most required.

Running in the background is a lack of transparency and accountability, with AI regulation in the US and beyond still in its infancy. Though tasked with staying ahead of the curve, regulation is playing catch-up as innovation marches ahead. Monitoring and ensuring checks and balances is the next major challenge.

Privacy and data security concerns loom large. British users have managed to obtain an opt-out form through which a social media user can choose not to use the Meta AI feature; the feature can be turned off in WhatsApp, for instance, in certain countries, but not all. Where data goes at each stage (mining, storage, retrieval, access, transfer) sits under a looming cloud of uncertainty and doubt for users. Recently, the European privacy advocacy group noyb (None of Your Business) legally challenged Meta's move to bind consumers through an opt-out policy rather than informed consent via an opt-in policy for all users BEFORE Meta AI is integrated. In other words, instead of being asked up front, users must undertake a process to inform Meta of their choice before Meta stops using their data, which noyb argues contravenes privacy law.

Finally, there is the fear of misuse through covert behavioral modification: listening in on conversations or tracking patterns of online use to serve targeted ads or encourage certain behaviors. The ethical and legal concerns here center on the chatbot constantly teaching itself users' preferences, patterns and desires. Reddit threads, for one, are among the relatively few places where these concerns are voiced.

All in all, Meta AI is performing (with the exact parameters of its AI training methodology relatively undetected) the largest ‘social experiment’ on its social media platforms to date. This ‘experiment’ will exponentially improve its AI’s accuracy, ostensibly for “improving user experience” alone, as per its stated function. A social experiment, by definition, is one in which real-time features of a population are collected, analyzed and processed, the population’s trends are recorded, and the results derived may lack a control and test group. However, the extent and method of monitoring, and exactly how the feedback loop works in using data about a particular user to interact with that user, remain largely unknown. How data moves, and the protocols at each stage, are unclear, professed good intentions aside. Meta claims private messages are not used. There is no way for a social media user to verify this short of a lawsuit, where sufficient evidence exists and harm has occurred to prove otherwise. On WhatsApp, all data is encrypted. But is it all read before being encrypted, ostensibly for public safety and national security?

A rising discomfort, mild outrage and pervasive unease creep into certain users when they first encounter the Meta AI “option.” Meta AI is an “option” because, as Meta states, it is not activated unless a user expressly desires this and taps on it to show results. A promise to use data responsibly, with safety as the “highest priority,” doesn’t really say much about the specific protocols in place as safeguards across social media apps and for all users. Not naming specific safeguards allows significant leeway, apart from where regulators come in. In a world where public policy is increasingly and obviously influenced by tech giants and entrepreneurs, Meta’s track record, complete with Zuckerberg’s wide-eyed, blinking, professed ignorance of data handling protocols before Congress (no less!), does not do a great deal to build public confidence either.

Dr. Sarah A. Farid, Attorney-at-Law (Top Ranked JD, LLB, BA)

Check out our newsletter here: Legal Binaries
