As a noun phrase, “The intention economy” first appeared in a Linux Journal column by that title, written by me in March 2006.
A few months later, when I became a fellow at Harvard’s Berkman Klein Center, I started ProjectVRM for the purpose of making that economy happen.
Six years after that, I wrote this book, to report on progress toward that goal, and to lay out paths forward:
Excerpts from the first chapter:
This book stands with the customer. This is out of necessity, not sympathy. Over the coming years customers will be emancipated from systems built to control them. They will become free and independent actors in the marketplace, equipped to tell vendors what they want, how they want it, where and when—even how much they’d like to pay—outside of any vendor’s system of customer control…
Demand will no longer be expressed only in the forms of cash, collective appetites, or the inferences of crunched data over which the individual has little or no control. Demand will be personal. This means customers will be in charge of personal information they share with all parties, including vendors.
Customers will have their own means for storing and sharing their own data, and their own tools for engaging with vendors and other parties…
Thus relationship management will go both ways. Just as vendors today are able to manage relationships with customers and third parties, customers tomorrow will be able to manage relationships with vendors and fourth parties, which are companies that serve as agents of customer demand, from the customer’s side of the marketplace.
Relationships between customers and vendors will be voluntary and genuine, with loyalty anchored in mutual respect and concern, rather than coercion…
Likewise, rather than guessing what might get the attention of consumers—or what might “drive” them like cattle—vendors will respond to actual intentions of customers. Once customers’ expressions of intent become abundant and clear, the range of economic interplay between supply and demand will widen, and its sum will increase. The result we will call the Intention Economy…
The volume, variety and relevance of information coming from customers in the Intention Economy will strip the gears of systems built for controlling customer behavior, or for limiting customer input. The quality of that information will also obsolete or re-purpose the guesswork mills of marketing, fed by crumb-trails of data shed by customers’ mobile gear and Web browsers. “Mining” of customer data will still be useful to vendors, though less so than intention-based data provided directly by customers.
In economic terms, there will be high opportunity costs for vendors that ignore useful signaling coming from customers. There will also be high opportunity gains for companies that take advantage of growing customer independence and empowerment.
The Intention Economy inspired Sir Tim Berners-Lee's Solid Project, Consumer Reports' Permission Slip, and work on personal AI at Kwaai.ai (which I serve as Chief Intention Officer)—among the many other efforts listed by ProjectVRM, which is still active and which you can follow and join here.
I bring all this up because I learned this morning that the intention economy is now also a threat:

The PR for this other intention economy—the bad one—has been very good. It made The Guardian, The Times (of London), The Times of India, and other outlets via AFP (Agence France-Presse), one of the most far-reaching of the world's news agencies.
The apparent* source for all this PR is Cambridge University’s Research / News office, which published this earlier today: Conversational AI agents may become attuned to covertly influence your intentions, creating a new commercial frontier that researchers call the “intention economy”. It begins,
The near future could see AI assistants that forecast and influence our decision-making at an early stage, and sell these developing “intentions” in real-time to companies that can meet the need – even before we have made up our minds.
This is according to AI ethicists from the University of Cambridge, who say we are at the dawn of a “lucrative yet troubling new marketplace for digital signals of intent”, from buying movie tickets to voting for candidates. They call this the Intention Economy.
Researchers from Cambridge’s Leverhulme Centre for the Future of Intelligence (LCFI) argue that the explosion in generative AI, and our increasing familiarity with chatbots, opens a new frontier of “persuasive technologies” – one hinted at in recent corporate announcements by tech giants.
And the source for that is the Harvard Data Science Review (HDSR), which published this just 44 minutes ago, as I write at 2:15pm EST: Beware the Intention Economy: Collection and Commodification of Intent via Large Language Models, by Yaqub Chaudhary and Jonnie Penn.
Here’s the abstract:
The rapid proliferation of large language models (LLMs) invites the possibility of a new marketplace for behavioral and psychological data that signals intent. This brief article introduces some initial features of that emerging marketplace. We survey recent efforts by tech executives to position the capture, manipulation, and commodification of human intentionality as a lucrative parallel to—and viable extension of—the now-dominant attention economy, which has bent consumer, civic, and media norms around users’ finite attention spans since the 1990s. We call this follow-on the intention economy. We characterize it in two ways. First, as a competition, initially, between established tech players armed with the infrastructural and data capacities needed to vie for first-mover advantage on a new frontier of persuasive technologies. Second, as a commodification of hitherto unreachable levels of explicit and implicit data that signal intent, namely those signals borne of combining (a) hyper-personalized manipulation via LLM-based sycophancy, ingratiation, and emotional infiltration and (b) increasingly detailed categorization of online activity elicited through natural language.
This new dimension of automated persuasion draws on the unique capabilities of LLMs and generative AI more broadly, which intervene not only on what users want, but also, to cite Williams, “what they want to want.” We demonstrate through a close reading of recent technical and critical literature (including unpublished papers from ArXiv) that such tools are already being explored to elicit, infer, collect, record, understand, forecast, and ultimately manipulate, modulate, and commodify human plans and purposes, both mundane (e.g., selecting a hotel) and profound (e.g., selecting a political candidate).
I thank John Garfunkel for pointing out on the ProjectVRM list that the paper does cite me and The Intention Economy. (I missed it while running all over the Web to track down all the places this bad PR has spread.) He also notes this bit of shade in our direction:
While prior accounts of an intention economy have positioned this prospect as liberatory for consumers, we argue that its arrival will test democratic norms by subjecting users to clandestine modes of subverting, redirecting, and intervening on commodified signals of intent.
Our work toward making The Intention Economy happen is not an “account.” The “it” they’re talking about is also not what we’ve been working on—and not what “intention economy” has meant since 2006.
But I’d rather not argue about this.
Instead, I’d like to point out two important differences behind two radically different meanings:
- Our intention economy is an optimistic and potentially world-changing aspiration, while theirs is a pessimistic, world-worsening possibility.
- Our intention economy has eighteen years of work behind it while theirs has this single study.
So I’m asking Yaqub Chaudhary and Jonnie Penn to find a different label for the threat they describe. Simple as that. Because The Intention Economy is taken.
Bonus link: Personal Agentic AI.
*I invite corrections and improvements to everything here.
