Jeremy Kranz

Disinformation in AI

TL;DR: Managing Partner Jeremy Kranz reflects on the warning signs from social media that foreshadowed AI disinformation, and on the unnerving evolution from click farms to bot farms to AI disinformation farms. He warns that “action buttons” have the potential to unleash AI-generated disinformation at unprecedented scale and speed.


Truth is so obscure in these times, and falsehood so established, that, unless we love the truth, we cannot know it. – Blaise Pascal

Not too long ago, I attended the Singapore Conference on AI (SCAI), hosted by Singapore’s Ministry of Communications and Information and the Smart Nation Group, in partnership with the Topos Institute. The conference focused on AI for good and brought together global thought leaders, scientists, policymakers, and entrepreneurs. It was a unique opportunity to gain insight into regional considerations on AI safety. The SCAI participants produced a paper outlining key questions for AI safety, which you can read about here.


During SCAI, I championed two specific topics: the role of AI in disinformation and how to credential/certify AI safety.  For this blog, I’d like to share more about my thoughts on the former.


Social Media as Social Nuclear Weapon

I rarely make bold statements about the future of technology, but in 2010, I made an exception. I argued that social media would be the nuclear weapon of the next generation. For the first time, there was a tool capable of motivating one million people to be angry at another million people within a 24-hour window, all based on an untruth. The proliferation of anger would be unprecedented and unmanageable. I delivered this message to the leadership of the Singapore government, who understood the gravity of the issue and have taken steps to prevent the worst abuses.


Now, in 2024, I must admit that I underestimated the problem. I have updated my warning for this year: the combination of social media and "action buttons" – the ability to collectively execute a response to AI-generated disinformation at widespread scale –  will be the nuclear weapon of this generation.  


AI Disinformation is Older and More Powerful Than You Might Think 

While AI holds great promise for the future, I believe its most impactful use today is as a weapon of disinformation. Many technologies of the information age initially seem to empower more bad actors than good ones (though this eventually reverses). Examples include the web (illegal pornography), online payments and cryptocurrencies (drugs, money laundering), and private encrypted messaging (terrorism).


The issue of tech-enabled disinformation is not new. Approximately fifteen years ago, I made the decision not to invest in Facebook at a $10B valuation due to a sophisticated disinformation campaign. Well-organized "click farms" based in Southeast Asia were being paid by Facebook and its supporters to create profiles and generate clicks, artificially inflating user activity. Upon discovering this, I became apprehensive and walked away. 


Since then, these same click farms have perfected their craft. With the aid of AI, they can now create sophisticated video content and conduct millions of tests to manipulate well-intentioned algorithms and the users they serve. As one example of many, in April 2023, a mother received a call from her daughter claiming to be kidnapped. The mother engaged in a conversation with her daughter, only to later learn that it was an AI bot impersonating her daughter’s voice.


An Ever-Evolving Disinformation Minefield

Let me introduce the term AI Disinformation, or AIDI. These are special AI bots capable of dynamically adjusting to situations in an effort to persuade real humans to take action.  Over the past two decades, we have witnessed a shift from click farms to bot farms, and now from bot farms to AIDI farms. These AIDI farms can create thousands of "people" who influence real individuals. 


Nowhere is this more prevalent than on social media. The first generation of Instagram accounts belonging to people who don’t actually exist is already here: we watch their compelling videos and follow their lives, including their aging process. They possess charm, desirability, and sometimes even sophisticated reasoning – and they earn money, too. The Spanish Instagram model Aitana Lopez makes over $10,000 a month and is completely AI-generated.


TikTok is an even more explosive sandbox for AIDI, given its audience of extremely online and impressionable users. You may recall a woman who posted on TikTok claiming to have read Osama Bin Laden’s “Letter to America.” Within days, the content went viral, garnering millions of views and additional postings. How many of those were real, and how many were AIDI bots? This was, with certainty, a campaign whose goal was political destabilization – making young Americans question their country’s security and defense policies, especially its support for Israel. It is no mere coincidence that this virality occurred just after the October 7th Hamas terror attack.


AI Collective Action Buttons

The successful AIDI bot campaigns on TikTok are experimental but quickly evolving, and the cost and manpower required to build them are rapidly declining. AIDI is being adopted by governments, such as Venezuela’s state media and Chinese operatives. Activist groups are actively engaging as well, as seen in campaigns around the Israel-Hamas conflict. What was once an “information war” has become a “disinformation war.”


What are the implications of this?  The GameStop trading spectacle in 2021 served as an early example of people organizing widespread, collective financial action through social forums.  In that incident, retail investors, organized through platforms like Reddit's WallStreetBets, collectively drove up the stock price of GameStop (GME), a struggling video game retailer. The sudden surge in GME's price caused significant financial losses for some hedge funds and led to widespread debates about market manipulation, the role of retail investors in financial markets, and the power dynamics between individual investors and institutional investors.


Now, let’s add AIDI bots into that dynamic.  


Imagine if the action button – say, for purchasing or selling a stock, as in the case of GameStop – were seamlessly integrated within Reddit. Instead of users navigating the friction inherent in our current system – sifting through Reddit, switching to their Schwab account, selecting stocks, confirming trades, and so forth – what if that friction were eliminated? By reducing obstacles, collective action becomes more immediate and can be timed to the moment of one’s emotional reaction.


AIDI Marketplaces

Introducing trading capabilities directly in a forum where participants are already gathered has always been highly efficient.  Not too long ago, people gathered in open outcry trading pits to do that very thing.  Those trades, while sometimes speculative, were always based on a foundation of some truth: a crop report, an employment number, an interest rate cut.


But with AIDI, that foundation of truth evaporates. Moreover, distributed protocols (e.g., blockchain) allow financial transactions to occur anonymously and outside of any centralized marketplace. As a result, trading opportunities have already emerged beyond traditional financial contexts, in areas such as politics and social justice. In 2018, people could place bets on whether public figures would be assassinated on Augur, a decentralized prediction market. At least on Augur, those bets settled based on a real-life outcome. But again, with AIDI, the lines blur awfully. And with an action button placed as close as possible to the point of (dis)information, the potential for mayhem increases exponentially.


The View at Sentinel

While we hope AI will prove a tremendous force for good, it currently serves as a potent weapon in the hands of adversaries engaged in financial crimes, digital identity theft, disinformation, and hacking. The very first thing every enterprise should purchase is what we call AI defense.


At Sentinel, we are eager to invest in companies specializing in this defense against AI-driven illicit activities. Our focus lies in technologies that fortify digital identities, enhance encryption, validate data integrity, and foster trust in decentralized systems.


If you resonate with this mission and share our passion for creating a safer, defended digital world, we would love to connect with you. Feel free to reach out to us at hello@sentinelglobal.xyz.
