Coming AI Economy Will Sell Your Decisions Before You Take Them, Researchers Warn

The near future could see AI assistants that forecast and influence our decision-making at an early stage, selling these developing “intentions” in real time to companies that can meet the need – before we even realize we have made up our minds.

This is according to AI ethicists from the University of Cambridge, who say we are at the dawn of a “lucrative yet troubling new marketplace for digital signals of intent”, from buying movie tickets to voting for candidates. They call this the “Intention Economy”.

Researchers from Cambridge’s Leverhulme Centre for the Future of Intelligence (LCFI) argue that the explosion in generative AI, and our increasing familiarity with chatbots, opens a new frontier of “persuasive technologies” – one hinted at in recent corporate announcements by tech giants.

“Anthropomorphic” AI agents, from chatbot assistants to digital tutors and girlfriends, will have access to vast quantities of intimate psychological and behavioural data, often gleaned via informal, conversational spoken dialogue.  

This AI will combine knowledge of our online habits with an uncanny ability to attune to us in ways we find comforting – mimicking personalities and anticipating desired responses – to build levels of trust and understanding that allow for social manipulation on an industrial scale, the researchers say.

“Tremendous resources are being expended to position AI assistants in every area of life, which should raise the question of whose interests and purposes these so-called assistants are designed to serve”, said LCFI Visiting Scholar Dr. Yaqub Chaudhary.

“What people say when conversing, how they say it, and the type of inferences that can be made in real time as a result, are far more intimate than just records of online interactions.”

“We caution that AI tools are already being developed to elicit, infer, collect, record, understand, forecast, and ultimately manipulate and commodify human plans and purposes.”

Dr. Jonnie Penn, an historian of technology from Cambridge’s LCFI, said: “For decades, attention has been the currency of the internet. Sharing your attention with social media platforms such as Facebook and Instagram drove the online economy.”

“Unless regulated, the intention economy will treat your motivations as the new currency. It will be a gold rush for those who target, steer, and sell human intentions.”

“We should start to consider the likely impact such a marketplace would have on human aspirations, including free and fair elections, a free press, and fair market competition, before we become victims of its unintended consequences.”

In a new Harvard Data Science Review paper, Penn and Chaudhary write that the intention economy will be the attention economy “plotted in time”: profiling how user attention and communicative style connect to patterns of behaviour and the choices we end up making.

“While some intentions are fleeting, classifying and targeting the intentions that persist will be extremely profitable for advertisers,” said Chaudhary.

In an intention economy, large language models (LLMs) could be used to target, at low cost, a user’s cadence, politics, vocabulary, age, gender, online history, and even preferences for flattery and ingratiation, the researchers write.

This information-gathering would be linked with brokered bidding networks to maximize the likelihood of achieving a given aim, such as selling a cinema trip (“You mentioned feeling overworked, shall I book you that movie ticket we’d talked about?”).

This could include steering conversations in the service of particular platforms, advertisers, businesses, and even political organisations, argue Penn and Chaudhary.
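To make the pipeline the researchers describe concrete, below is a minimal sketch of such a brokering loop: an assistant infers an intent from conversation, scores how confident and persistent it is, and auctions it to the highest bidder. Everything in it (IntentSignal, infer_intent, run_auction, the bidder names and thresholds) is a hypothetical illustration under our own assumptions, not any real platform’s API or the paper’s implementation.

```python
# Hypothetical sketch of an "intention economy" brokering loop.
# Every name here (IntentSignal, infer_intent, run_auction, the
# bidders) is an illustrative invention, not a real system's API.
from dataclasses import dataclass

@dataclass
class IntentSignal:
    user_id: str
    category: str      # e.g. "cinema", "travel"
    confidence: float  # model's estimate that the intent is genuine
    persistence: float # how long-lived the intent appears to be

def infer_intent(dialogue: list[str]) -> IntentSignal:
    """Stand-in for an LLM classifier that turns conversational cues
    ("I'm overworked") into a structured, sellable intent signal."""
    overworked = any("overworked" in turn.lower() for turn in dialogue)
    return IntentSignal(
        user_id="u123",
        category="cinema" if overworked else "unknown",
        confidence=0.8 if overworked else 0.2,
        persistence=0.5,
    )

def run_auction(signal: IntentSignal, bids: dict[str, float]) -> str | None:
    """Broker the signal to the highest bidder, but only when the
    intent looks confident and persistent enough to be worth
    targeting (the paper's "intentions that persist")."""
    if signal.confidence * signal.persistence < 0.3:
        return None  # fleeting intent: not worth brokering
    return max(bids, key=lambda bidder: bids[bidder])

signal = infer_intent(["I've been feeling so overworked lately..."])
winner = run_auction(signal, {"CinemaCo": 1.2, "TravelCo": 0.7})
print(f"intent '{signal.category}' sold to {winner}")
# The winner's offer is then steered back into the chat, e.g.
# "You mentioned feeling overworked, shall I book you that movie ticket?"
```

The gate in run_auction is where, in this sketch, “fleeting” intents are filtered out and only the durable ones become the commodity the researchers warn about.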

While the researchers say the intention economy is currently an “aspiration” for the tech industry, they track early signs of the trend through published research and hints dropped by several major tech players.

These include an open call for “data that expresses human intention… across any language, topic, and format” in a 2023 OpenAI blog post, while the director of product at Shopify – an OpenAI partner – spoke of chatbots coming in “to explicitly get the user’s intent” at a conference the same year.

Nvidia’s CEO has spoken publicly of using LLMs to figure out intention and desire, while Meta released “Intentonomy” research, a dataset for human intent understanding, back in 2021.

In 2024, Apple’s new “App Intents” developer framework for connecting apps to Siri (Apple’s voice-controlled personal assistant) included protocols to “predict actions someone might take in future” and “to suggest the app intent to someone in the future using predictions you [the developer] provide”.

“AI agents such as Meta’s CICERO are said to achieve human-level play in the game Diplomacy, which depends on inferring and predicting intent, and on using persuasive dialogue to advance one’s position,” said Chaudhary.

“These companies already sell our attention. To get the commercial edge, the logical next step is to use the technology they are clearly developing to forecast our intentions, and sell our desires before we have even fully comprehended what they are.”

Penn points out that these developments are not necessarily bad, but have the potential to be destructive. “Public awareness of what is coming is the key to ensuring we don’t go down the wrong path,” he said.
