• A Short History of Now

    Cambridge Diner Clock
    So there must be something to it. Watch how often people interrupt each other with “So—.” Also watch for speeches starting with “So—.” The first speaker I heard do that was Larry Lessig, one of the world’s best, back in the aughts.

    Which is why the Web is still for writers. All writers. The Longing is a chapter by David Weinberger in The Cluetrain Manifesto that explains what the Web was for 25 years ago, and is still for today. As Dave (Winer, not Weinberger) says, the Web was meant for writers, even though the earliest of those were high-energy physicists looking to share documents between CERN, SLAC, and the Weizmann Institute. (Paul Kunz told me that.) Hell, scientists are writers too. I started writing for the Web in the early ’90s, before I could even get on it. Here is one piece I wrote in 1992, in ancient HTML, wishing I had a website to put it on. No publisher was hiring me back then. Or, come to think of it, now.

    Same goes for The Internet. And The Net. The Web is the Web, not “the web”. We—the writers of the networked world—gave something up when we allowed the bishops at the AP and the Chicago Manual of Style to demote the Web from proper noun to lower-case status, down there with television and radio. Nobody invented “the television,” or “the radio.” Not even “the newspaper.” But somebody—Sir Tim Berners-Lee—invented the World Wide Web, and named it with upper-case letters. The WWW was not the www. Is it too late to bring the Web back as a proper noun? I don’t know. I do know that I’m never going to demote it in my own writing.

    Here’s one. Want a reason not to buy a Dodge Charger?

    True for the whole band. Latest to go: WIRY/1340 in Plattsburgh, New York. Says its Facebook post, “The rise of digital platforms and social media has changed the way audiences consume content, leading to declining traditional radio listenership. Coupled with rising music royalty costs, a shrinking media sales force, and decreasing profit margins, we have reached a point where it is no longer viable to continue broadcasting.”

  • Tonight’s Lunar Eclipse

    They call it a blood moon, because it looks like this:

    But that came later.

    Right now, still in the evening of March 13th in the Eastern time zone, the Moon is as full as it can get without moving into the shadow of Earth. Which it will. Shortly. Here is where Earth’s shadow is right now, as I write this, at around 10 PM Eastern Time:

    The outer circle is Earth’s partial shadow. The inner circle is its complete shadow.

    Here is how the Moon will look when it moves between one and the other:

    And here is a close-up of how it will look when it is completely in the shadow, at about 3 AM, Eastern Time:

    And this is how Earth and the Sun will look from the Moon at that time:

    And if you zoom in on Earth, what you will see is a red ring, which is one round sunset circling the whole planet, and night across everything between:

    I grabbed all these images from a program called Starry Night Pro Plus 7. I’ve had versions of Starry Night since my 28-year-old son was a toddler. For years we would sit out on our deck every night using Starry Night as a small rectangular planetarium. So hats off to the makers for keeping it going so long.

    If you want to know more, Space.com has this.

    Update at 3:17am, March 14…

    A few minutes ago, at 3 AM, when the Moon had traveled into Earth’s shadow, here is what I saw:

    Or, more correctly, what my Sony a7 IV saw through its FE 70-200 mm F2.8 GM OSS II lens. Cropped, it looks like what I just put at the top of the post.

    Now back to bed. Cheers, all.

  • Thoughts

    An important extinction. Overheard (between veteran unemployed journalists): “I’m not a has-been. I’m a still-was.”

    Because you’re the only one who might employ you. “DIY journalism” just got uttered in a call I’m on.

    It’s the windows, mostly. I didn’t like Boeing’s 787 eight years ago, and still don’t like it now. That’s my response to this FlightRadar24 question.

    Hockey vs. Football. Suddenly this looks more likely.

  • Musical Moments

    I need to learn French.  This is lovely. Who is this girl? Here's one clue. Another. Look for more. Never heard of her before today.

    Hello, I still love you. The Doors are 60 this year. Or would have been. Good and legendary as they were, I think they are woefully underrated. Manzarek, Densmore, and Krieger were great musicians. Morrison wasn't a great poet, but he was a damn good one, and a helluva performer. What too few remember, or respect, is The Doors' melodies. Wintertime Love, for example, is a sweet waltz. Dig around the oeuvre. Lotta good stuff there.

    Apparently, he did. I didn't know Elvis played piano.

    Hard to beat. That song from The Man Who Would Be King. My second-favorite movie of all time.

  • A Conversation with ChatGPT About Personal AI

    What follows is a conversation I’m having with ChatGPT about personal AI. I guarantee it’s unlike any conversation about AI you’ll find anywhere else. If we want truly personal AI—the kind that is yours and not just a corporate service—this starts to point the way.

    Me:

    I am thinking about what personal data a personal AI (one a person owns and runs) could—or should—be trained on.

    ChatGPT:

    If a person owned and ran their own AI, it would ideally be trained on personal data that enhances its ability to assist them in meaningful, tailored ways. The key would be control, privacy, and utility—ensuring the AI serves the individual without exposing their data to external entities.

    Types of Personal Data a Personal AI Could Be Trained On

    1. Communication & Writing Style

    • Emails & Messages (from personal accounts, if opted in)
    • Journal Entries or Notes
    • Social Media Posts (if desired)
    • Past Reports, Articles, or Essays
      Purpose: To generate responses in your tone and style, suggest edits, and draft messages effectively.

    2. Work & Productivity Data

    • Calendars & Schedules (e.g., Google Calendar, Outlook)
    • To-Do Lists & Task Managers (e.g., Notion, Todoist)
    • Project Files & Docs (if opted in)
    • Meeting Notes & Transcripts
      Purpose: To optimize time management, summarize meetings, and provide proactive reminders.

    3. Knowledge & Interests

    • Articles & Books You’ve Read (e.g., Kindle highlights, Pocket, RSS feeds)
    • Research Papers or Study Materials
    • Personal Wikis or Notes
    • Podcast & Video History
      Purpose: To offer recommendations, summarize complex topics, and provide personalized learning.

    4. Health & Wellness

    • Wearable Device Data (e.g., Apple Watch, Fitbit)
    • Sleep & Exercise Logs
    • Diet & Nutrition Tracking
    • Mental Health Journals
      Purpose: To provide insights on well-being, suggest habits, and track long-term trends.

    5. Financial Data

    • Budgeting & Expense Tracking (e.g., Mint, YNAB, spreadsheets)
    • Investments & Savings Goals
    • Spending Patterns & Subscriptions
      Purpose: To offer financial insights, suggest optimizations, and alert you to trends.

    6. Home & Environment

    • Smart Home Preferences (e.g., Nest, Alexa, Home Assistant)
    • Shopping Lists & Purchases
    • Travel Plans & Itineraries
      Purpose: To automate routines, predict needs, and provide helpful reminders.

    7. Creative & Personal Projects

    • Music, Art, or Writing Projects
    • Coding Repositories or Scripts
    • Photography & Video Archives
      Purpose: To assist in brainstorming, organizing, and automating repetitive tasks.

    What Should a Personal AI Not Be Trained On?

    • Highly sensitive personal identifiers (e.g., passwords, legal documents, medical records unless encrypted)
    • Private conversations without explicit consent
    • Raw biometric data (unless securely stored locally)
    • Anything the user does not have full control over

    How Should a Personal AI Handle This Data?

    • Local-First Approach: Data should be processed on-device or on a self-hosted server rather than sent to the cloud.
    • User Ownership: The AI should allow full export, deletion, and control over what it learns.
    • Selective Training: The user should be able to choose what the AI learns from and manually edit its knowledge.

    Final Thought: A Digital Reflection of You

    A personal AI should be like a trusted assistant, trained on the aspects of life that enhance productivity, well-being, and creativity while ensuring absolute privacy and control.

    Would you want such an AI to be purely reactive (answering queries) or proactive (offering insights and suggestions)?

    Me:

    I think a personal AI should be like a personal computer: something a person has that belongs to them and can do lots of things. The default Q&A model, which we have at this moment in history, is too narrow. For example, I would like to see a table that lists all the streaming services to which I subscribe, when each subscription runs out, what I’m paying for it, and exactly what I’ve watched or listened to. While asking “How many seasons of ‘Reacher’ have I watched, and what service did I watch it on?” can be a way to get answers to questions, much of the time we just want a structured and general view of our data.
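    The kind of structured, queryable view described above can be sketched as a small table of records the owner controls. This is a toy illustration only; every service name, price, date, and title below is invented:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Subscription:
    service: str         # streaming service name
    monthly_cost: float  # what I'm paying per month
    renews_on: date      # when the current commitment runs out
    watched: list        # titles watched on this service

# A hypothetical personal table; all values are invented for illustration.
subs = [
    Subscription("Prime Video", 14.99, date(2025, 6, 1),
                 ["Reacher S1", "Reacher S2"]),
    Subscription("Hulu", 17.99, date(2025, 4, 15), []),
]

def services_showing(title, subs):
    """Answer 'what service did I watch it on?' from the structured view."""
    return [s.service for s in subs if any(title in t for t in s.watched)]
```

    The point is that the question-answering mode (`services_showing`) and the general tabular view (`subs` itself) come from the same data: once the records are structured and owned, both are cheap.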

    ChatGPT:

    I completely agree. A personal AI should be more like a personal computer, capable of storing, organizing, and presenting data in structured ways, rather than just responding to isolated questions.

    Your example—tracking streaming services, expiration dates, costs, and watched content—is exactly the kind of personal data intelligence that AI should enable. Instead of relying on fragmented app histories or memory, you should be able to:

    • See a structured dashboard summarizing your subscriptions.
    • Sort, filter, and edit data directly in a personal database.
    • Query it naturally when needed (“What did I last watch on Prime Video?”).
    • Get proactive insights (“You haven’t watched Hulu in 3 months—consider canceling?”).

    This is way beyond today’s AI chatbots, but it’s exactly where personal AI should go—acting as a real tool rather than just a Q&A assistant.

    Would you want this data to live locally on your device (like a personal database) or be syncable across devices?

    Me:

    Yes, I would like the data to be local and viewable (as well as syncable) across devices. I would also like to have access to data collected about me, to which I do not currently have access. For example, my Samsung and TCL TVs are possibly collecting lots of viewing data (presumably to sell to advertisers) that would be useful to me. Likewise, all new cars come with a cellular connection (with a phone number) that is under the control of the carmaker, rather than the driver. Both that system and the ones in my Apple and Google apps on my phone know where I’ve been and what routes I’ve taken. That information is potentially (or actually) very useful to me. Combined, for example, with schedule data, my personal AI might know who I had lunch with, when, and where, that time I was driving through Columbus on my way to New York.

    ChatGPT:

    You’re envisioning a true personal AI, one that doesn’t just respond to questions but actively manages and reveals insights from data that is rightfully yours. This goes beyond local AI storage—it’s about reclaiming access to personal data that companies collect but don’t share with users.

    Key Aspects of Your Ideal AI:

    1. Local-First Storage – All personal data is owned and controlled by you, not stored on third-party servers.
    2. Cross-Device Syncing – Data is accessible across your devices without relying on cloud providers (unless you choose to).
    3. Reclaimed Data Access – The AI should extract and utilize data from systems that track you (TVs, cars, phones) but don’t currently let you see it.
    4. Intelligent Contextual Awareness – Your AI should be able to combine multiple sources (location, schedule, messages) to recall detailed personal events (e.g., “Who did I meet for lunch last time I was in Columbus?”).

    Challenges & Opportunities

    • TV Viewing Data: Many smart TVs (Samsung, TCL, LG) track viewing habits for advertising. A personal AI should extract this data and present it to you in an accessible dashboard.
    • Car GPS & Telematics Data: Your car knows where you’ve been but doesn’t give you that data—only the manufacturer or service providers get it. Your AI should sync and store this information for personal use.
    • Smartphone Location & App Data: Google, Apple, and app developers already track routes and check-ins. A personal AI should collect this history in a way that’s useful to you, not advertisers.
    • Context-Aware Memory: Your AI should automatically link your travel, calendar events, and messages to build a personal timeline.

    Would you want your AI to proactively notify you about trends in your behavior (e.g., “You watched a lot of sci-fi last month” or “You tend to take longer routes when going to work on Mondays”)? Or should it remain passive until you query it?

    Me:

    I would like that kind of option. The default should be passive, with the ability to turn on reactive and proactive features. I also don’t think activities, uses, practices, habits, and behaviors are all the same, or should be lumped together under a heading such as “behavior” or “behavioral.” That’s the kind of angle advertisers take. What I want are ways to collect, store, view, and query data and metadata about areas in my life, such as the seven you listed above.

    ChatGPT:

    That makes perfect sense. Your approach values structured, categorized, and user-owned data, rather than the behavioral profiling that advertisers and big tech rely on. Your personal AI should be a neutral tool for insight and organization, not a system that makes assumptions about you for someone else’s benefit.

    Your AI’s Data Structure

    Instead of reducing everything to “behavior,” it would categorize and organize data meaningfully, such as:

    • Communication – Messages, emails, calls, writing history
    • Work & Productivity – Schedules, projects, task tracking
    • Knowledge & Learning – Articles, books, media history
    • Health & Wellness – Fitness, sleep, diet, well-being
    • Finances & Transactions – Budgets, subscriptions, spending
    • Home & Environment – Smart home, shopping, routines
    • Personal Projects – Creative work, hobbies, personal archives

    Each category would be independent yet cross-referencable, allowing queries like:

    • “What projects was I working on the last time I had lunch with [Person]?”
    • “What was my screen time trend over the past six months?”
    • “Which subscriptions are up for renewal in the next 30 days?”

    Control Over Notifications & AI Engagement

    • Default Passive Mode → AI only responds when asked.
    • Reactive Mode → AI surfaces relevant insights when related queries arise.
    • Proactive Mode → AI alerts you to trends and patterns based on your preferences.

    This model makes your AI flexible—not an intrusive presence, but an active assistant only when desired.

    Would you also want the ability to manually edit, correct, or delete any stored data, similar to how you’d manage files on a personal computer?

    Me:

    Yes, I would like as much editing and organizing optionality as possible. For example, I would like views that could be alphabetical, numerical, sequential, functional, categorical, contextual, geographical, causal, sorted by size, weight, or distance, and so on.

    ChatGPT:

    That level of customization and control would make your personal AI a true digital assistant, not just a passive data repository. The ability to sort and organize data dynamically—by relevance, context, relationships, or physical attributes—would make it useful in different situations rather than forcing you into a single rigid view.

    For example:

    • Geographical Sorting → “Show me all locations I visited in the past year, sorted by distance from my home.”
    • Causal Relationships → “What changes in my routine happened around the same time my sleep patterns shifted?”
    • Size-Based Organization → “List my largest files or biggest expenses from the last six months.”
    • Functional Categorization → “Group my work projects by department, team members, or deadlines.”

    Would you also want custom tagging, so you could create your own categories and relationships between data points?
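    One of those views, geographical sorting, is simple enough to sketch. The coordinates and place names here are invented for illustration:

```python
from math import hypot

HOME = (0.0, 0.0)  # hypothetical home coordinates (x, y)

# Places visited, as (name, (x, y)) pairs; all values invented.
visits = [
    ("Columbus stop", (3.0, 4.0)),
    ("Cambridge diner", (1.0, 1.0)),
    ("New York hotel", (6.0, 8.0)),
]

# "Sorted by distance from my home": nearest first.
by_distance = sorted(
    visits,
    key=lambda v: hypot(v[1][0] - HOME[0], v[1][1] - HOME[1]),
)
```

    The same one-line `sorted(..., key=...)` pattern covers most of the other views too: size, date, category, or any attribute the records carry.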

    Me:

    I would want anything that makes it easy to gather archives for everything useful to know in my life, and then make use of it. For that broad challenge, I think of personal AI more as a tool (such as a PC or a phone) than an assistant, though having it provide assistance of many kinds would be one collection of uses. I remember when PCs came along, people thought they would be good ways to store recipes, play games, write things, and draw. Then spreadsheets appeared, and suddenly they made workers more valuable to companies than the companies’ own mainframes. Fifty years later, we are still discovering new things for personal computers to do. I think the same should be true for personal AI.

    ChatGPT:

    That’s a visionary and practical approach—seeing personal AI as a foundational tool rather than just a glorified chatbot. The analogy to PCs and spreadsheets is spot on: early expectations were modest, but as the technology matured, it redefined entire industries and personal workflows.

    If personal AI is treated like a personal computer, rather than just a narrow Q&A assistant, then:

    • It should be multi-functional → Handling storage, organization, processing, retrieval, and automation of personal data.
    • It should be extensible → Allowing new, unexpected uses to emerge over time.
    • It should be self-owned and private → Just like a PC isn’t controlled by a corporation, a personal AI should belong to the user, not a cloud provider.

    Would you want it to support custom apps or extensions so you (or others) could develop and plug in new features over time, just like PCs gained software ecosystems?

    Me:

    That sounds like a good idea. The main thing for me is that the underlying OS is open source and general-purpose. We don’t quite have that for PCs (with Apple’s Mac OS and Microsoft’s Windows) or for phones (with Apple’s iOS and Google’s Android), but I think there is some hope, at this early stage, for an open-source operating system such as Kwaai’s pAI-OS.

    ChatGPT:

    That makes a lot of sense. Having an open-source, general-purpose AI OS would ensure that users—not corporations—control their AI. Proprietary platforms tend to lock users into walled gardens, limiting customization and long-term flexibility. An open AI OS like Kwaai’s pAI-OS (or something similar) could allow:

    • Full user ownership and transparency—no hidden data collection.
    • Custom apps and extensions—letting users shape their AI’s evolution.
    • Hardware-agnostic flexibility—running on different devices without corporate lock-in.

    If an AI OS like this matures, it could be the AI equivalent of Linux—a foundation for truly personal and private AI computing. Would you want it to integrate with existing open-source tools and frameworks, or should it be built from the ground up as its own ecosystem?

    Me:

    I like the Linux model, and I wouldn’t want to take any approach off the table. Though I have an easier time imagining an ecosystem growing on top of an OS, and perhaps a piece of specialized hardware, optimized for large storage and the kind of processing that personal AI requires.

    ChatGPT:

    That makes sense—an open-source OS as the foundation, with an ecosystem of tools and specialized hardware built on top. The idea of AI-optimized personal hardware is compelling, especially for:

    • Local AI processing → Reducing reliance on cloud services.
    • Massive personal storage → Keeping lifetime archives of useful data.
    • Privacy & security → Ensuring all AI computations stay in user control.

    Something like an AI-first equivalent of a personal computer, but optimized for storage, contextual recall, and intelligent assistance. Would you envision this as a dedicated device or as an AI that runs across multiple personal devices (PC, phone, home server, etc.)?

    Me:

    I imagine personal AI working primarily on a dedicated device that is optimized for AI work, but accessible from other personal devices.

    ChatGPT:

    That’s a solid approach—having a dedicated AI hub for processing and storage, while allowing secure access from other personal devices. This setup would:

    • Keep the AI powerful and private—avoiding reliance on cloud services.
    • Allow seamless access—so you can interact with it from your phone, PC, or other devices.
    • Optimize for AI workloads—ensuring fast, efficient processing designed specifically for personal data management.

    Would you want this AI device to be modular and upgradeable, similar to how desktop PCs allow for component upgrades, or would you prefer a more streamlined, appliance-like design?

    Me:

    I am prejudiced by my PC experience toward the former, but I also understand that, once the market gets large and active, the world may fill with appliances. Hard to say. Practical utility should rule.

    ChatGPT:

    That’s a pragmatic stance—favoring modularity and user control while recognizing that market forces may lead to more appliance-like AI devices. The key will be ensuring practical utility remains the driving factor, whether through:

    • Customizable, upgradeable hardware (like PCs today).
    • Streamlined, dedicated AI appliances (if they prove more effective).
    • Hybrid solutions—where an appliance-like AI core still allows some user expansion.

    It’ll be interesting to see which direction the market takes. Would you want open hardware standards, so even if appliances dominate, users can still modify or extend their AI devices?

    Me:

    Yes, open standards are essential.

    ChatGPT:

    That’s a crucial stance—open standards would ensure that even if the market trends toward closed appliances, users retain control over their AI devices. This would allow:

    • Hardware customization and repairability—preventing forced obsolescence.
    • Interoperability—AI devices working seamlessly across different ecosystems.
    • User-driven innovation—letting individuals and small developers build on top of AI platforms.

    Open standards have been critical in computing (e.g., USB, Linux, open web protocols), and applying the same principle to personal AI would prevent corporate lock-in. Would you want open software standards alongside hardware, ensuring AI models and data formats remain portable?

    Me:

    Sure. I want everything to be as open as possible, to foster all kinds of invention, uses, and market creation and growth.

    ChatGPT:

    That’s the best way to ensure long-term innovation and user control—letting an open ecosystem evolve organically, rather than being locked into proprietary silos. This approach would:

    • Encourage invention—allowing developers, individuals, and small companies to build on AI freely.
    • Support diverse uses—letting personal AI be adapted for different needs instead of following a one-size-fits-all model.
    • Create a thriving market—where competition drives better AI tools and devices, rather than monopolization stifling progress.

    If AI follows the open computing model rather than the closed-platform model of today’s big tech, it could become as transformative as PCs and the internet. Do you see any key obstacles to achieving this kind of open AI ecosystem?

    Me:

    The first obstacle is lack of imagination. Personal AI is not especially thinkable in an environment where nearly all AI is corporate. (Though we are thinking about it here, and that is a good thing—perhaps even pioneering.) The second obstacle is incumbency. All the bigs—OpenAI, Microsoft, Anthropic, DeepSeek, Meta, et al.—are centralized and proprietary (with the partial and provisional exception of Meta), and making most of their money selling to enterprises. Apple and Microsoft are both built for selling to individuals, but Apple will want to lock people into its proprietary gear and services, and Microsoft these days seems to care much more about enterprise services. But it’s very early, so maybe those two obstacles will fall.

    ChatGPT:

    That’s a sharp assessment. Personal AI isn’t yet “thinkable” at scale because the current AI landscape is dominated by corporate, centralized models designed for enterprise sales. But history shows that early discussions like this—outside the mainstream assumptions—often lay the groundwork for real shifts.

    Overcoming These Obstacles

    1. Expanding Imagination → The more people envision and discuss personal AI, the more likely it is to emerge as a real category. Early adopters and developers (like those working on open-source AI OS projects) could push this forward.
    2. Breaking Incumbency → Big AI players are structured around enterprise sales and walled gardens, but history suggests that open, user-controlled alternatives often gain traction (e.g., Linux vs. Windows, the early web vs. closed networks).

    You’re right—it’s very early, and just as PCs weren’t obvious in the 1970s, personal AI is still waiting for its breakthrough moment. Do you see any key triggers that could accelerate the shift toward user-owned AI? (E.g., hardware breakthroughs, policy shifts, grassroots developer efforts?)

    Me:

    Invention is the mother of necessity. We need inventions that mother demand for personal AI.

    ChatGPT:

    That’s exactly right—breakthrough inventions will create the demand, not the other way around. Just as:

    • The personal computer created demand for home computing.
    • The smartphone created demand for constant connectivity.
    • The spreadsheet made PCs indispensable for business.

    Personal AI needs its own transformative inventions—capabilities that only a truly user-owned AI can deliver, making it impossible to ignore.

    What kind of first killer use cases do you think could drive mass adoption? Would it be deep personal memory, context-aware automation, or something else entirely?

    Me:

    I think it will be apps that bring subscriptions and other recurring expenses under control, so people can see what they are paying for, how long they’ve been using it, for what, and how long their commitments are slated to last. Subscription TV, music streaming, satellite radio, and online newsletters come first to mind. Household utility rates vary with usage, but having a clear picture of usage history, rate changes, and other variables will be valuable.

    ChatGPT:

    That’s a compelling first wave of personal AI utility—helping people see and manage their financial commitments in a way that’s currently fragmented across different apps and platforms. A personal AI-powered expense tracker could:

    • Map out all active subscriptions across services.
    • Show real usage data (e.g., “You haven’t watched Hulu in 3 months”).
    • Flag upcoming renewals and suggest cancellations or downgrades.
    • Analyze utility bills for usage trends and rate fluctuations.

    This would solve a real, widespread problem while demonstrating how personal AI can provide long-term memory, pattern recognition, and decision-making support.

    Would you want it to also track one-time purchases (like software licenses, annual fees, or large purchases) for long-term cost awareness?

    Me:

    I think all purchases should flow into one’s personal database. I’d love to have a device through which I could feed paper receipts, have them read (informed by machine learning and pattern recognition, including OCR), and organized. Same goes for all the purchases made on Amazon, eBay, and other online retailers. I would like to have the data in one’s credit card statements, which one can access online, also flow into the same database, with itemized line items (which credit card statements don’t provide). For example, the way Amazon organizes its purchase records by shipment often doesn’t square with what shows up on a credit card statement, even though no over- or under-payment has occurred. Machine logic can work out the particulars and present coherent and complete accountings.

    ChatGPT:

    That’s a powerful vision—a fully integrated personal financial archive where every purchase, from paper receipts to digital transactions, is automatically captured, categorized, and reconciled.

    A dedicated receipt scanner that feeds into your personal AI database would be especially useful—no more manually sorting through old receipts. And with credit card and online purchase data, your AI could:

    • Automatically match transactions between different records.
    • Clarify confusing entries (e.g., Amazon shipments vs. credit card charges).
    • Detect errors, double charges, or missed refunds.
    • Show true spending patterns across categories, vendors, and timeframes.

    Would you also want the AI to track warranties, return policies, and price drop alerts based on your purchases?
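    The transaction-matching idea in that list—squaring retailer line items with card charges—can be sketched as brute-force subset matching, which is workable at personal-ledger scale. All identifiers and amounts below are invented:

```python
from itertools import combinations

def reconcile(charges, line_items, tol=0.01):
    """Match each card charge to the subset of retailer line items
    that sums to it. Brute force; fine for a personal ledger."""
    matches = {}
    unmatched = list(line_items)
    for charge_id, amount in charges:
        found = None
        for r in range(1, len(unmatched) + 1):
            for combo in combinations(unmatched, r):
                if abs(sum(cost for _, cost in combo) - amount) <= tol:
                    found = combo
                    break
            if found:
                break
        if found:
            matches[charge_id] = [item_id for item_id, _ in found]
            for item in found:
                unmatched.remove(item)
    return matches, unmatched
```

    For example, a single hypothetical $30.00 card charge gets matched to a $10.00 and a $20.00 shipment item, even though the retailer and the card company grouped them differently.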

    Me:

    Yes, but I would like personal AI to make clear to companies that they don’t need gimmicks and games to earn patronage and loyalty. Right now, customers have no good way to signal interest or loyalty, or to make their purchasing intentions or actual loyalty known to whole markets or relevant market segments. Instead, we have as many different CRM (customer relationship management) and CX (customer experience) systems as there are companies operating them, and all of these are deployed in different ways, always for the purpose of customer entrapment and control. What customers need is VRM (vendor relationship management) that provides one customer hand for many sellers to shake, plus ways to “intentcast”—signal an intention to buy—to whole markets, and to have standard ways to maintain purchase, warranty, and other relevant records, so markets really can become productive conversations between customers and companies. With cooperative intelligent agents on both sides, and ways for VRM to meet CRM, customers and companies will have much more useful intelligence about how products and services are actually used—and can be fixed as that need arises.

    ChatGPT:

    That’s a radical but necessary shift—turning markets from one-sided customer tracking and manipulation into genuine, mutually beneficial relationships.

    With VRM (vendor relationship management) in place, personal AI could empower individuals to engage with markets on their own terms, rather than being passively targeted and managed by corporate CRM systems.

    How This Could Work:

    • Personal AI as a Market Agent → Instead of passively receiving ads, individuals could broadcast intent to relevant vendors.
    • Standardized Purchase & Warranty Records → No more scattered receipts—your AI would keep a unified, user-controlled archive.
    • Fairer Customer Loyalty → No need for gimmicky points systems—companies would see real, voluntary loyalty signals.
    • Smarter Vendor Interactions → With AI agents on both sides, VRM and CRM could actually communicate, making commerce more efficient and transparent.

    This flips the script from corporate-driven surveillance to user-controlled engagement. Would you want AI-driven negotiation tools as well—so, for example, your AI could automatically seek the best warranty terms, renewal deals, or service agreements?

    Me:

    AI-driven negotiation tools will be good, but I think they will also be less necessary once the best knowledge and signaling mechanisms are in place on both the customer and company sides. The ideal close-to-end state is transparency of both demand and supply data and intentions, and simple and straightforward pricing and service offerings. Consider the case of Trader Joe’s. This is perhaps the most loved retailer in the U.S., and it earns that love without any loyalty program, without putting anything “on sale,” without any gimmicks of any kind, and without any marketing beyond a float in the Rose Parade and an opt-in flyer. The cognitive overhead on both sides is close to zero, and the operational overhead on Trader Joe’s side is low as well, simply because they aren’t busy trying to track and manipulate their customers, and bearing the costs involved. As for individuals being able to engage markets on their own terms, this is what IEEE P7012, a standard currently in development and known by the nickname MyTerms, aims to do. It will reverse the current “notice and consent” regime run entirely by sites and services by making individuals the first parties to contracts requiring forms of respect for personal privacy. This too can greatly simplify the way markets operate and obviate the need for more contentious privacy regulations.

    ChatGPT:

    That’s a clear and compelling vision—a marketplace where trust, transparency, and efficiency replace the current manipulation-based system. Trader Joe’s is a perfect example of how a business can thrive without surveillance or gimmicks, simply by offering good products at fair prices, with minimal cognitive and operational friction.

    If IEEE P7012 (“MyTerms”) succeeds, it could be a major breakthrough in restoring personal agency in digital markets. Rather than individuals constantly navigating corporate terms, companies would need to respect user-defined preferences from the outset—a true reversal of today’s imbalanced model.

    Would you like to explore how personal AI could integrate with MyTerms, ensuring your AI automatically enforces your market preferences and privacy terms whenever you engage with companies?

    Me:

    Sure. What do you have in mind? Start with the terms—a roster of possible agreements—listed at Customer Commons, much as personal copyright licenses have long been listed at Creative Commons.

    ChatGPT:

    Establishing a system akin to Creative Commons for personal data and privacy—such as the terms proposed by Customer Commons—would empower individuals to define and communicate their own privacy preferences and data-sharing terms. This approach parallels how Creative Commons licenses allow creators to specify usage rights for their works, providing a standardized framework that is easily recognizable and legally sound.

    Integrating Personal AI with User-Defined Privacy Terms:

    1. Centralized Management: Your personal AI could serve as the central repository for your privacy preferences, storing and managing your chosen terms from Customer Commons.

    2. Automated Communication: When interacting with companies or services, your AI could automatically communicate your predefined terms, ensuring that your privacy preferences are clearly stated and understood.

    3. Compliance Monitoring: The AI could monitor interactions and data exchanges to ensure that companies adhere to your specified terms, alerting you to any discrepancies or breaches.

    4. Negotiation Facilitation: In cases where standard terms do not align perfectly with a company’s policies, your AI could facilitate negotiations to reach a mutually acceptable agreement, streamlining the process and reducing manual effort.
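
    The first two of those four roles lend themselves to a small sketch. Here is one possible shape in Python, covering management and automated communication of terms. The term name (“NoStalking”), the registry string, and the class design are all my hypothetical illustrations, not anything specified by IEEE P7012 or Customer Commons:

    ```python
    # Hypothetical sketch of a MyTerms-style exchange, with the person
    # as first party. Names and shapes here are illustrative only.
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass(frozen=True)
    class Term:
        registry: str  # a Customer Commons-style public roster
        name: str      # e.g. "NoStalking" (hypothetical label)

    @dataclass
    class Agreement:
        term: Term
        accepted_by: Optional[str] = None  # company that agreed, if any

    class PersonalAgent:
        """Holds a person's chosen terms and proffers them first."""
        def __init__(self, terms: List[Term]):
            self.terms = list(terms)

        def proffer(self, company) -> List[Agreement]:
            # The person makes the first move; at this stage the
            # company's agent simply agrees or declines -- no negotiation.
            agreements = []
            for term in self.terms:
                a = Agreement(term=term)
                if company.agrees_to(term):
                    a.accepted_by = company.name
                agreements.append(a)
            return agreements

    class CompanyAgent:
        def __init__(self, name: str, acceptable: set):
            self.name = name
            self.acceptable = set(acceptable)

        def agrees_to(self, term: Term) -> bool:
            return term.name in self.acceptable

    me = PersonalAgent([Term("customercommons.org", "NoStalking")])
    shop = CompanyAgent("ExampleShop", {"NoStalking"})
    deals = me.proffer(shop)
    print(deals[0].accepted_by)  # -> ExampleShop
    ```

    Compliance monitoring and negotiation (the third and fourth roles) would layer on top, once agreements like these exist as durable records on both sides.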

    Benefits of This Integration:

    • Empowerment: Individuals gain control over their personal data, specifying exactly how it can be used and shared.

    • Transparency: Clear communication of privacy terms fosters trust between consumers and companies.

    • Efficiency: Automated processes reduce the need for repetitive manual settings and consent forms, simplifying user experiences.

    • Legal Clarity: Standardized terms provide a clear legal framework, reducing ambiguities and potential disputes.

    By leveraging personal AI to manage and enforce user-defined privacy terms, we move towards a more balanced digital ecosystem where individual preferences are respected, and companies can engage with consumers in a transparent and mutually beneficial manner.

    Me:

    That’s good. Two additional items. First, I would replace “user” with “person” or “individual,” because “user” implies that the individual is a subordinate actor in a system operated by a corporate entity, and because in MyTerms ceremonies the person is the first party making the first move by proffering his or her own terms. The person in this role is not a “user” in the usual sense. Second, while negotiation will surely play a role downstream and in many individual cases, it is essential for simple privacy requirements to be simply respected. Once that initial agreement is reached, negotiation on variables not covered by the initial contract can take place, and cooperative personal and corporate AIs can help with that. But for market adoption, we need simple, non-negotiated terms to succeed first.

    ChatGPT:

    That’s an important distinction—“person” or “individual” asserts agency and control, rather than the corporate-defined role of a “user”. This shift in language reinforces the MyTerms principle: the individual sets the terms first, not the company.

    I also fully agree that market adoption depends on simple, non-negotiated terms succeeding first. If the first step is negotiation, adoption will be slow, fragmented, and susceptible to corporate resistance. Instead:

    • Basic privacy requirements must be respected outright—no haggling.
    • Only after that can negotiations begin on additional variables.
    • Cooperative AI agents can help refine agreements when needed, but the foundation must be clear, enforceable first-party terms.

    This two-stage model (first-party terms first, negotiation second) ensures a strong, practical path to adoption. Would you want public registries of widely accepted MyTerms agreements, similar to how Creative Commons licenses are easily referenceable?

    Me:

    At least one public registry would be required. MyTerms (IEEE P7012) began when the IEEE approached Customer Commons, and Customer Commons is prepared to play that role. But there should be nothing to prevent others from creating their own. My only concern is that the Interactive Advertising Bureau (IAB) or some other body favoring tracking-based advertising might stand up an alternative registry that does not favor individuals. But I don’t see that happening, given the IAB’s historic opposition to maximizing personal agency around privacy.

    ChatGPT:

    That makes sense—a primary public registry (such as Customer Commons) would provide a trusted reference point, but allowing others to create their own registries ensures openness and decentralization.

    Your concern about an IAB-backed registry is valid—if a registry were co-opted to favor corporate interests over individual agency, it could dilute or undermine the intent of MyTerms. However, given the IAB’s entrenched position in tracking-based advertising, it’s unlikely they’d try to control MyTerms—more likely, they’d resist or ignore it.

    Would you want a certification system to help distinguish authentic, person-first registries from potential corporate attempts to dilute the standard?

    Me:

    That may be a good idea. Who or what would do the certification?

    ChatGPT:

    A certification system could help ensure that MyTerms registries remain aligned with the principle of individual agency, rather than being co-opted by corporate interests. The challenge is: who certifies? There are a few possible models:

    1. IEEE as the Certifying Authority

    • Since IEEE P7012 (MyTerms) is an IEEE standard, IEEE itself could establish a certification process to verify that registries comply with the standard’s intent.
    • Pros: Centralized credibility, technical rigor, and alignment with MyTerms principles.
    • Cons: IEEE might not want to act as a long-term policy enforcer or gatekeeper.

    2. Customer Commons as the Lead Certifier

    • Since Customer Commons initiated MyTerms with IEEE, it could take the lead in certifying registries that uphold individual-first principles.
    • Pros: Deep alignment with MyTerms, strong advocacy for personal agency.
    • Cons: Might be seen as self-governing rather than neutral.

    3. A Consortium of Ethical Digital Rights Groups

    • Certification could be handled by a coalition of respected privacy, consumer rights, and digital ethics organizations (e.g., EFF, Mozilla, Customer Commons, MyData, etc.).
    • Pros: Distributed authority, independence from corporate influence.
    • Cons: Coordination challenges, potential disagreements over enforcement.

    4. A Decentralized, Web-of-Trust Model

    • Instead of a single certifier, verification could be community-driven, similar to PGP key-signing or decentralized identity models.
    • Pros: Harder for any entity to take control or corrupt the system.
    • Cons: Slower adoption, possible fragmentation.

    5. Government or Regulatory Involvement

    • A consumer protection agency (e.g., FTC, EU regulators) could certify compliant registries.
    • Pros: Strong legal backing.
    • Cons: Heavy bureaucracy, risk of capture by corporate lobbying.

    Best Path?

    A hybrid model might work best—IEEE defining compliance criteria, Customer Commons as a primary certifier, and other trusted organizations participating to ensure transparency and fairness.

    Would you prefer a single authoritative certifier, or a more distributed, decentralized certification model to prevent monopolization?


    I’m leaving this here for now, because I’m out of time, and I don’t have adequate thoughts about certification—yet. I welcome input on that and everything else.

  • Riding on Rivers

    Just learned about this, here. Thank you, Andy Sylvester!

  • Items

    These items reposed on the face of a giant hunk of Manhattan Schist, in a narrow undeveloped lot between two high-rises on Bennett Avenue north of 181st Street in Manhattan. I shot them through a chain-link fence on 27 May 2019.

    Death sells. So far today, this blog post has had 13 visits. Meanwhile, Radio’s Death Knells has had 356. Since I need to go out, I’ll call it a day and put a picture on top. See what happens.

    What was Indiana thinking? These days sunrise in Bloomington is about four hours before noon, and sunset is about eight hours after noon. Without Daylight Shifting* Time, sunrise would be five hours before noon and sunset seven hours after noon. And yet most of Indiana decided, long before we got here, that they’d rather not be sensible and stay on Central time. (Not much) more data here. *Shift is a better verb than save for what moving clocks forward and back does.

    Because a little head is better than none. I’m playing with boldface one-liners, such as the sentence fragment preceding this one, to see if those work something like the subheads did on my ur-blog. I just looked, and I think they work. By the way, this stylistic convention is derived from the old Esquire Magazine’s yearly Dubious Achievement Awards, the best of which were published decades ago, and all of which appear to be paywalled.

    King Archives Court. This WaPo story about disappearing photo archives is close to home for me. I have about a hundred thousand photos online, mostly archived at Flickr. Most are at one account called Doc Searls, and a bunch of others are at one called Infrastructure. They tend to get more than a thousand views a day. The first one, which has been up since 2004, has had 17,603,689 views so far. The shots you don’t see are private ones for family and friends. (I use Smugmug for that now. And I thank Smugmug every day for saving Flickr back in 2018.)

    Loyalties. When I was a kid, my favorite sports teams were the Brooklyn Dodgers, the New York Knicks, and the New York (football) Giants, in roughly that order. But the Dodgers were on top, by a long way. I became a Mets fan (as did all New York non-Yankees fans) when they showed up in 1962. My favorite radio team—back when such loyalties were a Real Thing—was the WMCA Good Guys. I tell a story about that on my old blog, here.

    Breaking News. No time to look for today’s collision (between a tanker and a container ship) in the North Sea, but maybe you can find it at MarineTraffic: an amazing map and service.

    And which realm has the dragons? Are we being colonized? This paper makes the case.

    Doesn’t smell good. My most-visited blog post of the last couple of weeks is Radio’s Death Knells. I have a feeling those visits are more from people in the business than from people who listen to radio.

    Lost but not de-listed. Sad but interesting to see how many listings on my old blogroll (frozen in August 2007) are from people now gone: Rex Hammock, Craig Burton, Chris Locke, Ronni Bennett, Bernie DeKoven, Kim Cameron, John Perry Barlow, The Head Lemur. And those are just some I know. I’ll fill in the links later when I have time (which I won’t, but I’ll make some).

    Hey good lookin’. An argument for mixed-reality glasses. Note that the best smart glasses I know about are ones thought up more than eleven years ago.

    A unicorn yet unborn. The next portable personal device is one thought up almost twelve years ago. Its name was (and might yet still be) Omie. (I currently squat on omie.fun and omie.my. Just in case.)

    Back to real blogging. My current default writing method for this blog is to write all day in Wordland, then add an image to the top of the post after I’ve moved on to another post. (And maybe do some additional editing, as I do with most of my posts, using WordPress’ composing window—or whatever it’s called.)

  • Musictown

    Hank Ruff and the Hellbenders, at the Monroe County Fair last year.

    The final round of the 10th Indiana International Guitar Competition just happened, here, as well as in the natural world. We saw it in the latter. Amazing performances.

    Bloomington is a fabulous small city anyway, but the Jacobs School of Music—and the whole music scene here—puts Bloomington over the top for us. In addition to the guitar competition, we also took in the latest performance of the Bloomington Symphony Orchestra today at the Buskirk Chumley Theater.

    The first person we met when we came to rent a place in town, back in ’21, was the musician (and much else) Andy Ruff, who quickly became a great friend. Since then we’ve also come to dig his son Hank Ruff, a local country music star whose name shall become household sooner or later. (He’s that good.) I’ve also shot a lot of both. Here’s Andy. Here’s Hank.

  • Tweeting a Blog

    For eight years I blogged here in a style that was basically tweeting with titles. Now I’m doing it again here, with Wordland. David Weinberger explains why it’s awesome.

  • Loose Links

    The big and scary news about the deaths of Gene Hackman and his wife, Betsy Arakawa, is that she died first, and suddenly, of hantavirus, which kills up to half the people it infects—often quickly. It’s bad shit—or from bad shit: from rodents. Hackman, who had advanced Alzheimer’s, died later, absent his wife’s care. Their poor dog died in the crate he was in when Betsy brought him back from the vet. The deaths of both Hackman and the pooch bring to mind what apparently befell the young male lead in the movie Ex Machina.

    Sara Watson: “I believe more than ever we are approaching the Troy-is-falling moment.” Bonus Sara link: A People’s History of Tech.

    I never had Facebook Messenger on my computer, just on my phone. But we’re buying stuff on Facebook Marketplace (which seems to have moved past Craigslist to become the Web’s classifieds section), and it will be handy to correspond with sellers on my computer and not just my phone. Getting it going on the computer, however, was a fucking ordeal. Naturally, to start, Facebook required a password change, with pin codes sent to “your devices” or “your chat” that failed to appear, as far as I could tell, anywhere. Additionally confusing is that I made the huge mistake many years ago of adding a second Facebook account for David Searls, just for dealing with relatives and old friends who knew me before the world nicknamed me Doc. Confuses the shit out of both accounts. Also messing up the Doc Searls account was that I got it back in ’06, when Facebook was only for university folk, using a university address I haven’t had since 2010. I have another address associated with it, but that sometimes seems not to help. Anyway, that was just a half hour of my life, wasted. Okay, here’s a link, to Jamie Smith’s newsletter. Somewhere in there he says “Password reset is the new login.” Too true.

    We save daylight tonight by losing an hour. Here in Indiana, where we should be, and used to be, in the Central time zone, we are at the far western edge of Eastern, which means the Sun comes up in the afternoon. I’m only half kidding. It rises at 8:04 AM and sets at 7:48 PM. In other words, 3 hours and 56 minutes before noon, and 7 hours and 48 minutes after.
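
    The clock arithmetic is easy to check in code. A quick sketch, using the sunrise and sunset times given above (the date is my assumption: the 2025 spring-forward Sunday):

    ```python
    # How far sunrise and sunset fall from clock noon in Bloomington,
    # using the times from the post. Date assumed: March 9, 2025.
    from datetime import datetime, timedelta

    def offset_from_noon(t: datetime) -> timedelta:
        noon = t.replace(hour=12, minute=0, second=0, microsecond=0)
        return t - noon

    sunrise = datetime(2025, 3, 9, 8, 4)    # 8:04 AM
    sunset = datetime(2025, 3, 9, 19, 48)   # 7:48 PM

    print(-offset_from_noon(sunrise))  # 3:56:00 before noon
    print(offset_from_noon(sunset))    # 7:48:00 after noon
    ```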

  • Talking Right

    Great speech by David Brooks at a recent ARC conference in London. I read here that he was booed and heckled, but in the video one only sees smiles, warmth, and occasional laughs (e.g. to “At Chicago I had a double major in history and celibacy”). It’s a short, deep, and caring talk that comes from a conservative philosophy that seems lost in modern times.

  • Bookings

    Even in a small city such as Bloomington, one can make fun discoveries all the time. Yesterday, for example, I discovered Redbud Books, which had a table set up to sell books from Cory Doctorow's increasingly vast oeuvre while the man himself spoke to a packed classroom in the Media School here at Indiana University. He'll be here in person when we host him on April 4th (less than a month from now) at the Hamilton Lugar School as part of our Beyond the Web salon series. You can join on Zoom at that link. Cory is a brilliant and original speaker (in addition to being a great writer and activist). See you there.

  • It’s about time. And space.

    12775 Burbank Boulevard, home of KSPN/710 (at its peak the legendary KMPC) from 1935 to 2022. Now for sale. I think it should be a park.

    This informative video by @lainaminute (L.A. in a Minute) on Instagram expands on something I anticipated when I shot this photo album of the KSPN/710 transmitter site at 12775 Burbank Boulevard almost four years ago: that the land under the transmitter—19 acres of fenced-in grass surrounded by suburbs—would be put up for sale by Disney, which at the time owned the station.

    I’m surprised it took this long. I suppose Disney had to deal with zoning or something. This is perhaps the most prime piece of radio station transmitter real estate I’ve ever seen put up for sale. Dig:

    Because this Google Maps satellite view was shot in winter, when rains turn everything green, the site looks verdant. In fact it is brown and flammable for most of every year. But the aerial view at least makes clear that the space needs to be a park.

    Click on that map (or here), zoom out, and you’ll see there are no parks within a walk or a short drive of any of the houses in that part of Burbank or the San Fernando Valley (in which Burbank reposes), with the exception of some grassy grounds meant for other non-park purposes (football, baseball) at Los Angeles Valley College, and along the Tujunga Wash Greenway, a narrow strip of not-always-green flanking a flood mitigation channel that is charmless except for the community-painted graffiti mural called The Great Wall of Los Angeles.

    In 2021, Disney sold the licenses of KSPN, along with WEPN/1050 in New York (originally WHN and WMGM), WMVP/1000 (originally WCFL) in Chicago, and some other stuff, to Good Karma Brands, while keeping the real estate under the transmitters, which in the cases of KSPN and WMVP were valuably surrounded by suburbs. (WEPN is on a swamp in the New Jersey “Meadowlands” that I suspect it does not own.)

    In 2022, KSPN moved its transmitter to the five-tower Irwindale site of KWVE/1110, a Christian talker that was, decades ago, the much-loved top 40 KRLA. The signal from the new site is weaker (down from 50,000 watts by day and 10,000 watts by night to 35,000 and 2,500 watts respectively) but still adequate to cover the metro. WMVP likewise has a construction permit to move to a lesser facility where it can share the towers of another station—surely so Disney can sell that land as well.
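
    For what it’s worth, those power cuts are smaller than they sound when put in decibel terms, which is roughly how a receiver experiences them. A quick check, taking the wattages as given above (night figure read as 10,000 watts); coverage also depends on antenna pattern and ground conductivity, so this is only a rough gauge:

    ```python
    # Convert the KSPN power reductions to decibels. The wattages are
    # from the post; the dB figures are rounded.
    from math import log10

    def db_change(old_watts: float, new_watts: float) -> float:
        return 10 * log10(new_watts / old_watts)

    print(round(db_change(50_000, 35_000), 1))  # day: about -1.5 dB
    print(round(db_change(10_000, 2_500), 1))   # night: about -6.0 dB
    ```

    A 6 dB cut quarters the power but costs far less than that in listenable range, which is why the new facility still covers the metro.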

    According to various AIs I’ve asked, the land at 12775 Burbank Boulevard would sell for $35 million to $60 million. I think Los Angeles should come up with the money and make it a park.

    We have a model for how this might look and work: Columbia Park, in Torrance, a suburb on the southwest side of Los Angeles (amidst the same sprawl). Here is how it looks to Google’s satellite:

    The red dot is for KNX/1070, whose two towers have stood on the site since 1938. The site gets its name from the Columbia Broadcasting System (CBS), which owned KNX for most of its life. How Columbia Park came about should be a model for how 12775 Burbank Boulevard can become a site of equal civic grace and pride for its neighborhood.

    As for AM radio, the way it’s going now, what’s left of it will be a mess of religion and right-wing talk in just a few more years. (Though I think it would be smart for public broadcasters to hoover up devalued or abandoned AM stations while the licenses are still there for the taking.)

  • First Drafting

    Writing with Wordland is like Tweeting, but on my personal press (this blog) instead of Elon's. Or any other giant's. As a difference in kind, it's absolute.

  • Radio’s Death Knells

    The radio station known since 1935 as KSFO, “The Sound of the City,” was a landmark at 560 on the Bay Area radio dial for most of the last century. Other AM landmarks were KGO/810, KCBS/740, KFRC/610, and KNBR/680. KFRC went away in 2005, as religious programming moved to 610 AM from 106.9 FM, and KCBS added 106.9 as the FM service by which it now entirely identifies (even though KCBS-AM covers most of California and the FM mostly just covers the Bay). KNBR pushed KFOG off 104.5 and now identifies as KNBR/104.5. Never mind that KNBR/680 covers even more of California than KCBS/740 by day and the entire Western U.S. at night. As a kid in the ’60s I used to listen to it at night in New Jersey.

    But the worst pair of ignominies for the AM band was recently brought on by Cumulus Media, which owns both the 560 and 810 signals. First, it killed the KGO callsign, which had been in use on 810 for 101 years (during most of which it was the Bay Area’s top-rated station), and moved the KSFO callsign and programming—generic syndicated right-wing talk—to 810 from 560. Then it rebranded the 560 signal KZAC, did nothing with it, and took it off the air this past Monday, roughly on its 100th birthday. Thus, where once had stood two first-rank full-service radio stations, long on news and personalities, now stands one off-the-shelf talker and a blank silent space on the Bay Area AM band. Nice work.

    But also on-trend.

    For example, I just learned that WINZ/940, one of Miami’s landmark signals, will drop six of its seven towers, going from 50,000 watts by day and 10,000 watts by night down to just 16 watts at night: a signal that might cover a zip code or two. Meanwhile, in Chicago, WSCR/670 and WBBM/780 (two top-rank clear channel signals that each covered large parts of five states by day and much of North America by night) are moving to lesser facilities, with smaller signals, and losing their clear channel distinctions.

    All three stations were downgraded so the land underneath their towers and ground systems (which require a lot of land) could be sold off. Two years ago, Las Vegas lost KDWN/720 and KXST/1140 because the land under them was sold so yet another shipping distribution center could be built on it. And both stations had already been downgraded in prior transmitter moves. KVEN/1450, the only sports station on California’s South Coast, went away in 2021 when Cumulus turned in the station’s license. (Here is a photo album of KVEN’s two towers that I shot in 2022, before they were demolished. For the nothing it’s worth, I was one of KVEN’s listeners.)

    AM in Canada has thinned as well. CHML/900 in Hamilton, Ontario, which I once listened to nightly in New York City, was born in 1927 and died last year. Long gone by now are CBA/1070 in Moncton, New Brunswick (another clear channel landmark), CINW/940 (formerly CBM) and CINF/690 (formerly clear channel CBF) in Montreal, and CBJ/1580 in Chicoutimi, Quebec. All had monster (also “clear channel”) signals that covered large parts of North America at night. Here is Wikipedia’s list of just the CBC’s dead stations, nearly all on the AM band. (See the list of former clear channel stations on Wikipedia for more headstones.)

    Alas, most people are not missing these stations, AM radio, or radio at all. (FM is on death row as well, but executions are still a decade or two away.) But I think what Cumulus just did in San Francisco is worth a bugle call. And that call may end up being taps for Cumulus itself, which is killing off signals just to stay alive.

    All three of the top owners of radio stations—Cumulus, iHeart, and Audacy—are what the financial world calls distressed assets. Verging on financial worthlessness, Cumulus is about to be de-listed by NASDAQ. iHeart filed for Chapter 11 in March 2018, with more than $20 billion in debt, and since then has had lots of layoffs and restructurings. Audacy, which killed off all-news giant WCBS in New York last year, filed for bankruptcy in January 2024 and has been shedding people and assets constantly since then.

    Irony alert: H.R. 979, the AM Radio for Every Vehicle Act of 2025, would require an AM radio in every new car. This is like requiring a deck chair for every passenger on the Titanic.

    Meanwhile, no bill requires that new cars have Apple CarPlay or Android Auto. That’s because the market wants those. Younger demographics in the new car market barely know that AM radio exists. I know many young people. Only one I can name still listens to radio at all, and that’s mostly because he’s a musician and likes to know where his music—and music like it—gets played. He also says, “None of my friends listen to radio. They think I’m weird.”


  • Monday Monday

    Naming today's tab dump after one of The Mamas and The Papas' best songs. Here is a lipsync'd video on YouTube. Dig the old-skool stereo.

    Where I explained customer-to-company AI agent-to-AI agent interaction (you know, markets as conversations) in May of last year.

  • Giving Away Tabs

    If I share the link to one of my open tabs and close it, the reader gets a new tab when they click on the link, no? So, in that case I'm giving away tabs, seems to me on a Sunday afternoon.

    I don't have Hulu, and I don't have cable, but I do have an antenna outside, so we'll be watching the Academy Awards tonight on WRTV/6 from Indianapolis. RabbitEars.info tells me that WRTV is actually on channel 25 (and 6 is its "virtual" channel), 54 miles away, and with a signal it calls "poor" here. But I have a new outside antenna, memorably named the DAT BOSS MIX LR High-VHF/UHF (Repack Ready), vectored toward WRTV and the rest of the TV tower farm on the far side of Indianapolis from here, and it gets the station just fine. I've been to that farm, and here is a photo of WRTV's tower on the north side of it.

    We'll be watching the Academy Awards on our new Samsung QN42S90D TV, which Consumer Reports rates highly at that link. ~$850 on Amazon. We have it mounted on a wall frame that lets us aim it in different directions, depending on who's watching where in a smallish room. Since it is hard to use external speakers with it, we're forgoing those. The speakers in it sound pretty good, considering. And the picture is gorgeous.

  • Total News

    27th in the News Commons series

    Nearly everything I’ve been writing in the News Commons series has come out of breakfasts Joyce and I have enjoyed with Dave Askins at the Uptown Cafe in Bloomington, Indiana. (A tech perspective: The Uptown is to Bloomington what Bucks of Woodside is to Silicon Valley.)

    At the most recent of these, Dave said Total News—News for All Time was a good way to talk about the Future → Present → Past approach to news that he came up with a few months earlier (see that last link), and which has been framing up everything I’ve been thinking and writing about news since then, including all my posts (#17 to #24, so far) about the #LAfires.

    This got me thinking about how we frame the news we write, mostly unconsciously. To illustrate that frame, I arranged a table-top spreadsheet with rows titled by sugar packets and columns headed by sweetener packets. That’s the shot above. Here it is again, more legibly arranged on my desk at home:

    C, P, and M stand for

    1. Character,
    2. Problem, and
    3. Movement.

    These are the three requirements of every story. I explain why in What are Stories? (#21 in the News Commons series).

    Stories tend to go through four stages. I explain those in The Blame Game (#23 in the series):

    1. Live reporting,
    2. Human interest,
    3. Blame, and
    4. Coming to an understanding

    All of which we’ve seen, in abundance, in coverage of the #LAfires since they happened.

    The columns in the grid of sugars and sweeteners stand for

    • Framing
    • Person or People
    • Human interest
    • Blame or responsibility
    • Understanding
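
    One way to make the grid concrete is as a small data structure: the three story requirements as rows, the five column headings as fields. A sketch in Python, with a purely illustrative cell value of my own:

    ```python
    # The sugar-packet grid as a dict of dicts. Row and column labels
    # are from the post; the sample entry is hypothetical.
    ROWS = ["Character", "Problem", "Movement"]  # requirements of every story
    COLS = ["Framing", "Person or People", "Human interest",
            "Blame or responsibility", "Understanding"]

    grid = {row: {col: None for col in COLS} for row in ROWS}

    # e.g., noting how a fire story frames its central character:
    grid["Character"]["Framing"] = "a neighborhood, not just a fire"
    print(sorted(grid))  # ['Character', 'Movement', 'Problem']
    ```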

    F matters most. Because every story has a frame. In fact, everything we think, know, and tell stories about has a conceptual frame. Frames are where the vocabulary we think and talk in terms of comes from.

    For a quick grasp of that, read Framing the Net, which I wrote for the Berkman Klein Center’s Publius Project when I was a fellow there in 2008. It applies at least as well today.

    The other four columns (P, H, B, and U) are provisional. So are the four above that. The only parts of that ad hoc spreadsheet I’m sure about, so far, are the three requirement rows: Character, Problem, and Movement. If a visual helps, here’s one:

    Back to Dave.

    Persistent readers of the series might recall (in #16) that Dave shut down the B Square Bulletin on December 20, 2024. That was bad news at the time. The good news today is that he’s brought the B Square back up. He explains his new vision here. An excerpt:

    Total News—News for All Time: Future, Present, and Past

    A newsroom that is founded on a commitment to Future, Present and Past will deliver what today’s local decision-makers need—comprehensive, accurate information about topics in their full context.

    Local decision-makers range from elected and appointed officials to business owners, voters, parents of school-aged children, and rank-and-file residents. We’re all decision-makers.

    Future. The B Square will help make Bloomington a place where it’s easy for residents to find out what is going to happen. When does the North-South football game start? When is the public hearing on the new tax? When will the school board interview the superintendent candidates? When does the city’s 3rd Street repaving project start?

    Present. Bloomington will be a place where it’s easy for residents to find out what just happened. Who won the North-South football game? Did the county council approve the new tax? Who is the new superintendent of schools? Why was traffic backed up on 3rd Street late Tuesday afternoon?

    Past. Bloomington will be a place where it’s easy to find out what happened a long time ago. What is the history of the North-South rivalry? How much revenue has the old tax generated for the county over the last 20 years? How many people have had the job of superintendent of schools, and who were they? How many traffic crashes has that stretch of 3rd Street seen over the last 10 years?

    This is a working model for news production everywhere—from towns like Bloomington to cities like Los Angeles. I’d like to see us working out the prototype in both places—or anywhere that anyone else wants to run with it.

    To be clear, Total News is not about how we produce news now, but how we prepare for news in the future, and how we keep archives that inform future news.

    Now for some what-comes-nexts.

    First, from the 5th to the 10th of this month I’ll be* at SCaLE in Pasadena, learning and talking about Personal AI (another series on this blog, relevant to both the #LAfires and how news in the future will be produced and improved). I’ll also be giving a keynote for Kwaai.ai, at their annual summit, inside of SCaLE. Much of what we’ll be talking about will be learnings from fire resilience hackathons that have been happening in L.A. I would love to meet at SCaLE, or nearby, with anyone working on Total News following the #LAfires. (Looking at you, KPCC/LAist, LA Times, Hear In LA, and anyone else I’ve sourced in writing about the #LAfires.)

Second, I see the News Commons series feeding into a book. I don’t know yet if I’ll write that book alone, with Dave, or with some collection of other contributors. It’s too early to say because most of the work that needs to be done hasn’t happened yet. By work I mean putting together the best way to do calendars (the future), to produce news stories based on facts (the present), and to flow our calendars, collections of facts, and stories into archives that are maximally useful for future stories (the past).

    Third, I want to organize events around all the above. Stay tuned for more on that one.


    *Alas, I got sick just long enough to miss the trip. But I will have my talk posted soon.

  • Blogging like it’s 1999

    The header of my blog from 1999 to 2007

    Testing Wordland, about which Dave says more here.

    This is my second blog, and my first WordPress one. It launched in 2007. My first blog is this one, which (courtesy of Dave) started in the last year of the prior millennium. I had hair then, and wore glasses.

    Is this true? I want more sources.

Irony of wanting more sources: there is already too much information. Examples: this window has 30 tabs, in a browser with four windows, each with more tabs than I’ll bother to count. I also have four other open browsers. Trying to stay on top of all this stuff is like standing on a ball. So I save a lot of tabs: 10,716 with OneTab. So far.

The Knicks are better than they look to this guy. Even if the Celtics and the Cavs are better overall. The playoffs will be fun. Between now and then, the Celtics City series looks like it will probably be good too. (Never mind what teams you hate. I’ve always hated the Lakers and the Yankees, but have no trouble watching documentaries on them. Villains—even ones that aren’t really—are always interesting.)

    All new comments to old blog posts are suspicious.

    I just gave this post a category. It’s from my WordPress roster. Nice. Curious now about how to include images.

    Will our agents increase our agency?

    Stowe Boyd on economic uncertainty. About a year ago I had a dream that three things would happen, though I wasn’t sure in what order (or if they would at all, being a dream). One was that the Celtics would win the NBA championship. Another was that Trump would win the presidential election. And one was that the economy would crash. So far, so two.

    Some numbers. The U.S. has 4% of the world’s population and,
    —consumes ~17% of its energy, and around
    —20% of its oil (most in the world)
    —21% of its natural gas (also most)
    —8% of its coal (declining but still)
    —16% of its electricity
    —12% of its water
    —15% of its food
    —30% of its consumer goods (also tops in the world)
    —25% of its raw materials
    —and produces 14% of the world’s carbon emissions, second only to China’s.
    This is from ChatGPT, so redraw your own conclusions. My provisional one is that an adjustment is bound to happen.

Dave kindly notes that I’ve been an early user of his writing tools for the Web. I should add that I was an early wannabe user of ThinkTank (after I met Dave and his brother at their booth at Comdex in Atlanta in 1984… but I didn’t have a computer to run it on) and MORE, which was the best writing tool of any kind that I have ever used. Such a killer app. Wish it had been at the top of the food chain. (But its ghost lives on in the tools Dave has been making since then.)

  • Lifting the Lid on Government Meetings

    Twenty-sixth in the News Commons series.

On the left is Tom Evslin, former CTO for Vermont. On the right is the golden dome atop Vermont’s capitol building. Underneath that dome, and in countless spaces in government bodies everywhere, meetings are recorded on video. Reviewing or reporting on those meetings is a chore. Unless, that is, you use SmartTranscripts—an invention of Tom’s that shortens the work of reviewing a meeting by using its transcript to navigate its video. It’s open source, working in Vermont, and ready for anyone to use anywhere.
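To make the idea concrete, here is a minimal sketch (my own illustration, not Tom’s actual SmartTranscripts code) of the core trick: index a timestamped transcript so that a line of text maps back to a seek time in the meeting video.

```python
# Hypothetical sketch: turn a timestamped transcript into a lookup table
# so that finding a phrase in the text tells you where to seek the video.

def parse_timestamp(ts: str) -> int:
    """Convert 'HH:MM:SS' into seconds."""
    h, m, s = (int(part) for part in ts.split(":"))
    return h * 3600 + m * 60 + s

def index_transcript(lines):
    """Each line looks like '00:12:30 Budget discussion begins'.
    Returns a list of (seconds, text) pairs in transcript order."""
    entries = []
    for line in lines:
        ts, _, text = line.partition(" ")
        entries.append((parse_timestamp(ts), text))
    return entries

def seek_time_for(entries, phrase):
    """Return the seek time (in seconds) of the first transcript
    entry containing the phrase, or None if it never appears."""
    for seconds, text in entries:
        if phrase.lower() in text.lower():
            return seconds
    return None

# Example: jump straight to the budget discussion in a long meeting.
entries = index_transcript([
    "00:00:05 Call to order",
    "00:12:30 Budget discussion begins",
    "01:45:10 Public comment period",
])
print(seek_time_for(entries, "budget"))  # seconds into the video
```

In a web player, that returned number would simply be assigned to the video element’s current playback time, which is the whole reason a transcript turns hours of footage into something skimmable.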

    This Wednesday at Noon (Eastern time) Tom will explain where this invention came from and how it works in a talk titled Lifting the Lid on the Golden Dome: How SmartTranscripts open legislative sessions to everyone in Vermont—or anywhere. This is the latest in our Beyond the Web salon series for the Ostrom Workshop and the Hamilton Lugar School of Global and International Studies at Indiana University:

    If you care about journalism, how government works, and improving our civic institutions, this will be a highly leveraged hour for you.

    It’s free and on Zoom at this link: https://doc.searls.com/2025/02/23/lifting-the-lid/

    See you there.