Where Are We?

Extra points if you can say where this is.

What you’re reading is written and posted on the Web. While the Web isn’t a place, we know it as one. The language we use to describe it is concrete and structural: domains at locations, with sites that are built and have pages that are authored, and contents that are uploaded and downloaded.

For a generation or more, we mostly saw and navigated the Web through Google, whose original mission was “to organize the world’s information and make it universally accessible and useful.” When we searched Google, we relied on its PageRank system, which ranked pages largely by how many other pages linked to them, to bring us lists of pages matching our keywords and other search terms.
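For context, PageRank scores pages by the link structure of the Web itself: a page matters if pages that matter link to it. Here is a minimal sketch of the general algorithm, using power iteration over an invented three-page graph. This is my own toy illustration, not Google’s implementation:

```python
# Toy link graph: page "a" links to "b" and "c", and so on.
# (Invented pages for illustration only.)
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
}

def pagerank(links, damping=0.85, iters=50):
    """Iteratively distribute each page's rank across its outbound links."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}  # start with equal rank
    for _ in range(iters):
        # Every page keeps a small baseline; the rest flows along links.
        new = {p: (1 - damping) / len(pages) for p in pages}
        for p, outs in links.items():
            for q in outs:
                new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

r = pagerank(links)
# "c" collects links from both "a" and "b", so it ends up ranked highest.
```

The point of the sketch: rank comes from the Web’s own structure of citations, not from guessing what you might buy next.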

But Google Search has changed. Old pages are gone from many searches. This matters to me because I’ve been writing on the Web for thirty-one years, almost entirely on static sites. But Google no longer finds many of those. My friend Dave Askins tells me the same malady has befallen much of his own archival work, which is present on the Web, but absent from Google searches. Why?

I asked Google’s Gemini that question. Here’s what it said:

The “disappearing” of the old web is a result of a massive shift in Google’s priorities over the last few years.

While you might be looking for a specific historical document or an old forum thread, Google’s systems are now aggressively tuned to filter out what they consider “stale” or “unhelpful” content…

In the race to be a ‘helpful assistant,’ Google has stopped being a ‘universal library.’

PageRank is still there, it tells me, but subordinated. To what?

Of course: it’s advertising.

Taking my cues from Keywords are dead, by Shalom Gonzalez in Search Engine Land, I asked Gemini to tell me “how searches by the helpful assistant might constantly be looking for what the person might eventually want to buy, so they can be advertised at when the right time comes.”

Gemini replied,

This is the “secret sauce” of the 2026 search model. If the old search engine was a librarian, the new one is a personal shopper who is taking notes on everything you do to predict your next “need state.” The transition you’re sensing is the move from Reactive Search (answering what you asked) to Predictive Search (answering what you’re going to ask).

This treats the Web as nothing more than a heap of data in which patterns can be found, and answers given, based on algorithmic assumptions—not just about who and what a person is, but what advertisers are paying to have pitched. Constantly. In real time.

There are four giant problems to pull apart here:

  1. How Google is deprecating the Web.
  2. What to call the vast everything that Big AI now uses to give us answers and pitch us shit.
  3. How AI dependence is changing us from homo sapiens to homo something else.
  4. How we can stay human in the midst of it all.

So let’s explore those.

I. How Google is deprecating the Web.

We need to name the machine equivalent of hubris that Google and Gemini display in their deprecation of the Web and their presumptions about our intentions.

Dictionary.com says hubris is derived from hýbris, the Greek word for insolence. It means pride, arrogance, or an excess of ambition…ultimately causing the transgressor’s ruin.

When I asked ChatGPT to define AI hubris, it said, “AI hubris is the condition in which a system expresses or operationalizes confidence that exceeds its actual competence, epistemic grounding, or contextual awareness.”

Then, when I asked it to explain how that manifests, it gave me—

  • A model that acts as if it knows rather than predicts.
  • A system that is treated as an oracle rather than a probabilistic engine.
  • Designers who mistake fluency for wisdom.

When I asked it to be more specific, it confessed to—

  • Overconfident answers: The model presents uncertain, incomplete, or fabricated information as authoritative.
  • Boundary blindness: It fails to recognize where its training data ends, where knowledge is outdated, or where a question exceeds its capability.
  • Illusion of comprehension: It produces fluent language that implies understanding when it is pattern synthesis.
  • Prescriptive overreach: It gives decisive advice in domains requiring caution (medicine, law, policy) without signaling uncertainty.
  • System-level hubris: Not just the model—but the designers or deployers assume the system is more reliable, objective, or neutral than it truly is.

Then I asked it to give me some words we might use to label AI hubris. It generated a large pile of them, abridged here—

  1. Hybris mechanica: Latinized Greek; “mechanical hubris.” Evokes classical overreach, but makes clear it’s structural, not emotional. Good if you want rhetorical force.
  2. Pseudomnēsis (ψευδομνήσις): “False memory.” From pseudo- (false) + mnēsis (memory, recollection). Captures the illusion of knowing.
  3. Doctia ignorans: Modeled on docta ignorantia (Nicholas of Cusa: “learned ignorance”). This flips it: Doctia ignorans — “ignorant knowing.” Sharp, ironic, philosophical.
  4. Epistemic hybris: Blunt and precise. The sin is not pride, but overstepping epistemic bounds.
  5. Pseudognosis (ψευδογνῶσις): False knowledge. Simple, strong, clean.
  6. Automata superbia: Mechanical arrogance. Has a medieval-theological feel.
  7. Confidentia vacua: Empty confidence. Direct and Roman.
  8. Logohybris: λόγος (speech/reason) + ὕβρις (hubris). Hubris of speech. Arrogance of articulated reason. Sharp, rhetorically charged, has teeth.
  9. Pseudognosis: Names the condition without moralizing it.
  10. Pseudodoxia (ψευδοδοξία): Suggests false or illusory knowing, overstepping bounds, inflated confidence, illusion of authority, false belief, spurious opinion. Has historical precedent (e.g., Browne’s Pseudodoxia Epidemica). Feels learned and slightly ironic.
  11. Kenognosis (κενογνῶσις): “Empty knowledge.” From kenos (empty) + gnosis (knowledge). Strong and elegant.
  12. Agnomimesis (ἀγνωμιμησις): “Imitation of knowing.” From agnoia (ignorance) + mimesis (imitation). Captures the performance of understanding.
  13. Pseudonoia (ψευδονοία): False mind / counterfeit intellect. More psychological.
  14. Epihybris (ἐπιὕβρις): “Over-hubris” or escalated hubris. Suggests compounding overreach.
  15. Metahubris: Hubris at the level of system or design. Modern coinage, but philosophically clean.
  16. Paragnosis (παράγνωσις): Misrecognition; knowing beside the truth. Has a tragic flavor (like parabasis, paradox).
  17. Transgnosis: Going beyond what one can legitimately know.
  18. Megalodoxia (μεγαλοδοξία): Grand opinion; overblown self-assurance; inflated confidence.
  19. Superbia machinae: “The arrogance of the machine.” Latinate, ecclesiastical.
  20. Inflatio mentis: Inflation of mind. Roman and sharp.
  21. Praesumptio automatica: Automatic presumption. Feels juridical.
  22. Authodoxia: Self-authorizing belief; authority generated from itself; the illusion of authority.
  23. Autognosticism: Self-certifying knowledge. Also Simulacra scientiae: a simulacrum of knowledge. Very Baudrillard-adjacent.
  24. Oraculitis: Mock-Greek: “oracle-itis.” The disease of speaking as an oracle.
  25. Technohybris: Technological overreach; structural or systemic hubris. Blunt but serviceable.
  26. Logomimesis: Mimicry of reasoned speech. Less accusatory, more descriptive.
  27. Sophomimesis: Imitation of wisdom. Sophia (wisdom) + mimesis (imitation).
  28. Pseudoepisteme (ψευδοεπιστήμη): False systematic knowledge. More formal than pseudognosis.
  29. Kenodoxia: “Empty glory” (historically attested in theology). Could be repurposed.
  30. Vox vana: “Empty voice.”
  31. Certitudo vana: “Vain certainty.”
  32. Machina praesumens: The presuming machine.
  33. Sophomimesis: Imitation of wisdom.
  34. Kenognosis: Empty knowledge.
  35. Pseudodoxia mechanica: mechanical false belief.
  36. Logohybris: Hubris of speech (from earlier).
  37. Paragnosis: Mis-knowing.

Well, Big AI is all of those. But let’s pick one, because we need it. I lean toward logohybris, but I’m open.

II. What to call the vast everything that Big AI now uses to give us answers and pitch us shit

We are the only species that talks. A lot. About everything. We also make sense of the world as embodied creatures. We say good is up and bad is down because we are built for standing, walking, running, and squatting without falling over. This is why heaven is above, hell is below, and to fail is to fall. We say good is light and bad is dark because we are diurnal: optimized for daylight. We say a smart person is bright and a dumb person is dim. Good futures are bright and bad ones are dark. We frame life as travel, not a biological process. That’s why birth is arrival, death is departure, and we get stuck in a rut, lost in the woods, and fall off a wagon. We frame time as money, not as a progression of existence. That’s why we save, waste, spend, invest, and lose time.

We also lay nouns on everything (and non-thing) that we possibly can. We make taxonomies to organize the nouns. We make verbs for the actions that happen among and between all the nouns. We make prepositions to locate the nouns. We would be lost without all the ways we understand the world as a structured place.

To illustrate how important this is, look at how some drugs detach our minds from structures:

  • LSD—acid—detaches what we see and know from all the nouns we project on them.
  • MDMA—ecstasy—liberates other beings from what might in a normal state of mind make them separate from us and unlovable until proven otherwise.
  • THC—weed—disconnects percepts and thoughts, so they become “strings of pearls without the string.” (That’s one of the few smart things I ever said when I was high.*)

While all those drugs are good for recreation and therapy, and may even help civilize us in some ways, they distract us from the structures that make civilization work.

Nearly all of what we know about the natural world is also tacit, meaning we know it, but can’t explain it. For example, we know how gravity works, even though we can’t explain it as well as Einstein or Feynman. Or at all. Doesn’t matter. We know how it works, and that’s what matters.

We can still be explicit about what we know. About gravity, we might say, “It’s what gives things weight.” Or, “It’s what makes things in a vacuum fall at 9.8 meters per second squared.”

Here’s the key: What we know tacitly far exceeds what we can say explicitly.

The digital world, however, is entirely explicit. It is composed of data and code. There is no tacit there.

The digital world also has no distance and no gravity, and its costs lean toward zero. A video conference might have people from India, Germany, and Australia, with none sensing distance or cost. Everyone is present. Conversation moves along just fine. Gravity is also absent, because nobody manifests in material form, or appears upside-down or sideways. There might be light outside the window of one participant and dark outside the window of another, but only if the camera points toward a window and the person isn’t using a fake background.

But we still have structures in the digital world. For example, the World Wide Web uses a hypertext protocol (http/s) to connect sites at domain locations, between which packetized data moves up and down. Note that these are real estate and shipping metaphors.

Directory paths on the Web are inherited from UNIX. For example, https://science.what/geology/phanerozoic/paleozoic/devonian/fossils/ammonites is what’s called a path. It’s like one you might follow through the stacks of a library. That’s still how the Web works for us. It has structure.
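That library-like structure is easy to see in code. Python’s standard library can split the invented example URL above into its protocol, its domain location, and the shelf-by-shelf path segments:

```python
from urllib.parse import urlparse

# The post's invented example URL (not a real site).
url = "https://science.what/geology/phanerozoic/paleozoic/devonian/fossils/ammonites"

parts = urlparse(url)
print(parts.scheme)   # the hypertext protocol: "https"
print(parts.netloc)   # the domain location: "science.what"

# The path reads like a walk through library stacks,
# from the broadest shelf down to the most specific drawer.
segments = [s for s in parts.path.split("/") if s]
print(segments)
# ['geology', 'phanerozoic', 'paleozoic', 'devonian', 'fossils', 'ammonites']
```

Every crawler and search index of the old Web worked by following exactly these structures.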

When search engines indexed everything in those published directory structures, they made sense of the whole Web in a library-like way. We can see how the Internet is drifting away from that structure by reviewing the titles of David Weinberger‘s literary oeuvre, following The Cluetrain Manifesto in 1999:

  • Small Pieces Loosely Joined
  • Everything Is Miscellaneous
  • Too Big to Know
  • Everyday Chaos

See where this is going?

Yes, toward AI. By now, Big AI has absorbed all those miscellaneous small pieces in the vast chaos of the digital world. It makes sense of that totality with every question we ask of it. It’s the room that includes everything that’s too big for us to know.

But here’s the rub: Big AI also doesn’t know a damn thing. It recognizes patterns in a vast totality of data and uses programming and language to answer questions and perform tasks for the human inhabitants of its environments.

Think about what’s in the rooms built by OpenAI (ChatGPT), Google (Gemini), Microsoft (Copilot), Perplexity (Perplexity AI), Anthropic (Claude), DeepSeek (DeepSeek AI), and xAI (Grok).

The short answer is everything. All of the Web, all books, all utterings on social media, you name it. Everything explicit, that is. Nothing tacit. It doesn’t have tacit knowledge because it’s not human.

We need a word for that everything. Here are some candidates (with minimal AI help this time):

  • Akasha: (Sanskrit) The aether or mystical compendium of all human events, thoughts, words, emotions, and intent ever to have occurred. Perhaps a poetic synonym for a universal database.
  • Pleroma: (Greek) “fullness,” totality of divine powers, or the sum of all knowledge.
  • Pantology: (Greek root?) The systematic view of all human knowledge.
  • Wànwù: (Mandarin: 万物) Literally “the ten thousand things.” It’s used to describe the entirety of the universe and all its manifestations. It’s a beautiful way to describe the sheer variety of data I hold. (Says Gemini, when I asked for a word in another language to encompass what it “knows.”)
  • Sylloge: (Greek) Collection, compendium, summary, digest.

I vote for akasha.

The contents of all those companies’ akashas are not databases. Instead they are trillions of “tokens,” which (as NVIDIA explains here) “are units of data processed by AI models during training and inference, enabling prediction, generation, and reasoning.” These are not structured in ways familiar to us in the natural world. They are also not sources like those Google used to index and treat as directories. They are opaque to us. Every AI data center is a black box in material form.
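To make the contrast with directories concrete, here is a toy sketch of how text becomes tokens. It is my own illustration, nowhere near the learned subword vocabularies (such as byte-pair encodings) that real models use, but it shows the shape of the thing: opaque integers with no path, no shelf, and no source attached:

```python
# Toy tokenizer: map each word to an integer ID as it is first seen.
# Real models use learned subword vocabularies with ~100,000 entries;
# this tiny vocabulary only illustrates the shape of the data.
vocab = {}

def encode(text):
    """Turn whitespace-separated words into a list of integer token IDs."""
    ids = []
    for word in text.lower().split():
        if word not in vocab:
            vocab[word] = len(vocab)   # assign the next free ID
        ids.append(vocab[word])
    return ids

ids = encode("the web is a place the web is not")
print(ids)   # [0, 1, 2, 3, 4, 0, 1, 2, 5]
# A model never sees pages or paths, only streams of numbers like
# these, in the trillions.
```

Where a URL path tells you where a thing lives, a token stream tells you nothing about where anything came from.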

Marshall McLuhan taught that every new medium, every new technology, creates an environment that changes us. What we are as drivers in the rolling carapace we call a car is not what we are when we are walking around. We are changed. Extended. Enlarged.

Big AI does some of that for us, but we don’t have truly personal AI yet. That’s a whole ‘nuther rant (going on here). But after we get personal AI, we’ll still be radically changed. Which brings us to—

III. How AI dependence is changing us from homo sapiens to homo ________

When I asked ChatGPT to “tell me what to name the AI-dependent species that succeeds homo sapiens,” it gave me this (which I’ve abridged into a less wordy form):

  • Homo delegatus. The delegating human. Clean. Roman. Almost bureaucratic.
  • Homo auxiliatus. The assisted human. Gentler. Less tragic.
  • Homo symbioticus. The symbiotic human. Optimistic framing. Mutualism implied.
  • Homo prostheticus. The prosthetic human. From prosthesis — extension. Suggests cognition extended outside the skull.
  • Homo algorithmicus. The algorithmic human. Likely to stick. Slightly dystopian. The defining trait is AI reliance.
  • Homo automatus. The automated human. Bleaker.
  • Homo machinatus. The machinated human — shaped by machines.
  • Homo instructus. The instructed human — guided by systems. The defining trait is epistemic submission.
  • Homo credulus. The believing human. Short. Dangerous.
  • Homo confisus. The trusting human. Ambiguous — trust can be good or naive.
  • Homo oracularis. The oracle-dependent human. Strong mythic tone.
  • Homo synthetica. The synthetic human. Hybridized.
  • Homo copulatus. The coupled human.
  • Homo nexus. The networked human.
  • Homo cyberneticus. Classic mid-20th-century feel.
  • Homo exocerebralis. The external-brained human. Scientifically plausible.
  • Homo delegans sapiens. The wisdom-delegating human.
  • Homo post-sapiens. Simple, brutal.
  • Homo paragnostica. The mis-knowing human.
  • Homo servomechanicus. The servo-mechanical human.
  • Homo dependens. The dependent human.
  • Homo exocerebralis. It names the real shift: the brain extends beyond the skull into networks and models. Biologically grounded and conceptually sharp.
  • Homo delegatus. Has quiet menace.

I don’t like any of them. But we need one. Vote on one of these, or give us a better alternative.

IV. How we can stay human in the midst of it all.

First, stop the surveillance. That will at least begin to restore and protect our humanity.

We can’t do it all at once, but we can do it through relationships with organizations that don’t participate in the surveillance economy. There are a lot of those.

These can’t be relationships that organizations initiate. We got that kind when industry won the industrial revolution, and it has infected the digital world with endless “consent” (non-)agreements and “Our terms have changed” gauntlets requiring acceptance of terms we don’t read and that don’t matter.

We can do it with agreements that we proffer, as sovereign and independent human beings. And now we have a standard for that: MyTerms (7012-2025 – IEEE Standard for Machine Readable Personal Privacy Terms). These will allow us to visit sites, use services, and buy stuff without worrying if we’re being tracked like marked animals by parties unknown. They will also form a solid base for additional relationships based on mutual trust.

That’s the second step. Both are tabula rasa today. But if we want to keep from turning into any of the many species listed above, we need to start with ourselves and our relationships with willing second parties. MyTerms will do that.

__________________________________

*My minimal and pathetic drug history:

  • I was never more than a casual drinker, though I do like dark beers and good wine. I also stopped drinking anything a few years ago.
  • I’ve never taken LSD, shrooms, or any hallucinogen.
  • The first time I took cocaine was at a party in the North Carolina woods. When nothing happened, the guy who gave it to me said, “Well, maybe ya’ll’s personality masks the effects.” The second time was when a friend and I wanted to stay up late to watch the first round of March Madness. The third time was to stay awake on a very long drive. The fourth and last time was to stay focused through an all-night conversation with an old friend. Afterwards, I felt like I might die and had no perception of color. After that, I never touched cocaine again.
  • The only time I ever took ecstasy was with old friends. It was beautiful. Their dog said to us, “See? This is what it’s like to be a dog! You love everybody!”
  • I never smoked tobacco, and only smoked weed a few times. One of those yielded the one-liner I shared above. Another caused such intense pain behind my right eyeball that a bit of it recurs every time I smell weed.

