Does personal AI require Big Compute?

I don’t think it does. Not for everything.

We already have personal AI for autocomplete. Do we need Big Compute for a personal AI to tell us which items in our Amazon orders landed in which line items on our Visa statements? (Different items in a shipment often appear inside different charges on a card.) Do we need Big Compute to tell us who we had lunch with, and where, three Fridays ago? Or to give us an itemized list of all the conferences we attended in the last five years? Or to list what tunes or podcasts we’ve played (or heard) in the last two months (for purposes such as this one)?
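
For the order-to-statement matching in the first of those questions, even a few lines of local code get surprisingly far. What follows is a minimal sketch, not a definitive recipe: the shipment and charge data are made up, everything is assumed to be machine-readable already, and matching is done only by summing item prices against each charge amount (real statements add tax, shipping, and split charges).

    from itertools import combinations

    def find_items_for_charge(charge, shipments, tol=0.01):
        """Return the first subset of a shipment's items whose prices sum to the charge."""
        for idx, shipment in enumerate(shipments):
            items = shipment["items"]
            for r in range(1, len(items) + 1):
                for combo in combinations(items, r):
                    if abs(sum(price for _, price in combo) - charge) <= tol:
                        return idx, [name for name, _ in combo]
        return None, []

    # Made-up local data: two shipments and two card charges.
    shipments = [
        {"items": [("USB cable", 9.99), ("Notebook", 4.50)]},
        {"items": [("Coffee beans", 15.25)]},
    ]
    charges = [14.49, 15.25]

    for charge in charges:
        idx, names = find_items_for_charge(charge, shipments)
        if names:
            print(f"${charge:.2f} covers shipment {idx}: {', '.join(names)}")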

Let’s say we want a list of all the books on our shelves, using something like OpenCV with the EAST text detector to read titles off the spines in a photo. Or we want to use the same kind of advanced pattern recognition to catalog everything we can point a phone camera at in our homes. Even if we need to hire models from elsewhere to help us out, onboard compute should be able to do a lot of it, and to keep our personal data private.
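
Here, too, a lot of the detection can run on the device itself. Below is a minimal sketch of running OpenCV’s EAST text detector over a photo of a shelf, under stated assumptions: the model path and image name are placeholders, the pretrained frozen_east_text_detection.pb has to be downloaded separately, the thresholds are arbitrary, and actually reading the detected spines would still need an OCR step on top of the boxes.

    import cv2
    import numpy as np

    def decode_predictions(scores, geometry, min_confidence=0.5):
        # Turn the EAST score and geometry maps into (x, y, w, h) boxes.
        rects, confidences = [], []
        num_rows, num_cols = scores.shape[2:4]
        for y in range(num_rows):
            for x in range(num_cols):
                score = scores[0, 0, y, x]
                if score < min_confidence:
                    continue
                # Each output cell corresponds to a 4x4-pixel region of the resized image.
                offset_x, offset_y = x * 4.0, y * 4.0
                angle = geometry[0, 4, y, x]
                cos, sin = np.cos(angle), np.sin(angle)
                h = geometry[0, 0, y, x] + geometry[0, 2, y, x]
                w = geometry[0, 1, y, x] + geometry[0, 3, y, x]
                end_x = offset_x + cos * geometry[0, 1, y, x] + sin * geometry[0, 2, y, x]
                end_y = offset_y - sin * geometry[0, 1, y, x] + cos * geometry[0, 2, y, x]
                rects.append((int(end_x - w), int(end_y - h), int(w), int(h)))
                confidences.append(float(score))
        return rects, confidences

    net = cv2.dnn.readNet("frozen_east_text_detection.pb")  # placeholder: pretrained EAST model
    image = cv2.imread("bookshelf.jpg")                     # placeholder: photo of the shelves
    orig_h, orig_w = image.shape[:2]
    new_w, new_h = 640, 640  # EAST wants dimensions that are multiples of 32
    blob = cv2.dnn.blobFromImage(cv2.resize(image, (new_w, new_h)), 1.0, (new_w, new_h),
                                 (123.68, 116.78, 103.94), swapRB=True, crop=False)
    net.setInput(blob)
    scores, geometry = net.forward(["feature_fusion/Conv_7/Sigmoid",
                                    "feature_fusion/concat_3"])
    rects, confidences = decode_predictions(scores, geometry)
    keep = cv2.dnn.NMSBoxes(rects, confidences, 0.5, 0.4)   # drop overlapping detections
    for i in np.array(keep).flatten():
        x, y, w, h = rects[i]
        # Scale each box back to the original image size; these crops would feed an OCR step.
        print(int(x * orig_w / new_w), int(y * orig_h / new_h),
              int(w * orig_w / new_w), int(h * orig_h / new_h))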

Right now your new TV is reporting what you watch back to parties unknown. Your new car is doing the same. Hell, so is your phone. What if you had all that data? Wouldn’t you be able to do more with it than the spies and their corporate customers can?

It might be handy to know all the movies you’ve seen and series you’ve binged on your TV and other devices—including, say, the ones you’ve watched on a plane. And to remember when it was you drove to that specialty store in some other city, what it was called, and the name of that good place where you stopped for lunch on the way.

This data should be yours first—and alone—and shared with others at your discretion. You should be able to do a lot more with information gathered about you than those other parties can—and personal AI should be able to help you do it without relying on Big Compute (beyond having the parties that collected that data give it back to you).

At this early stage in the evolution of AI, our conceptual frame for AI is almost entirely a Big Compute one. We need much more thinking out loud about what personal AI can do. I’m sure the sum of it will end up being a lot larger than what we’re getting now from Big AI.


8 responses to “Does personal AI require Big Compute?”

  1. Iain Henderson

    Spot on Doc. The same argument you apply to personal AI has also always applied to personal data management on the individual side; it’s debatable how many of those requirements actually need AI versus other forms of compute. Our problem is the perception that there is money to be made in the current surveillance model. I’d contend that if there is, it’s only for a tiny number of big tech companies. Everyone else is just kidding themselves.

    1. I agree. Look back further to the global financial system and credit that started with the transistor/mainframe. And look even further back to yellow and white pages where information about people was open and freely available. Draw on those comparisons to see what works and doesn’t work to develop a model of trust that is equitable and efficient in global digital networked ecosystems.

  2. Doc, a conversation last week with a guy whose business is siting data centers.

    Quick summary: demand is such that some utilities will have to double, in the next 5-7 years, capacity that was built up over the last century or more.

    Granted, today’s generation and transmission are much more efficient, but utilities are not prepared.

    1. Tech used to be anti-inflationary because of Moore’s and Metcalfe’s laws, but with the enormous waste of the cloud and of network access to it (not just AI, but ad-tech, video, crypto, etc.) and the move to the flat-rate, all-you-can-eat, “as a service” (subscription) model that is usage-independent, tech has been a prime if not THE major contributor to inflation over the past decade. This price-to-cost disparity is also the primary reason for growing information and wealth divides. The rather simple solution is to embrace an economic settlement system, married to the protocol stack, that serves not only as incentives and disincentives, but also as a mechanism for redistributing imbalances east-west and north-south in the informational stack.

  3. Personal AI can be a good goal, but a lot of the issues you mention require that our data be machine-readable rather than merely human-readable. Too many of the things mentioned are now available only as HTML pages, and it is not always easy to extract the data from those, although there’s probably a use case for AI to do that too, I guess. I would have liked to see the promise (to me) of XML make these things easier, with the data kept in XML and an XSL stylesheet to transform it into (X)HTML and CSS when it needed to be a visual presentation. Now we have the harder problem of extracting the data from the visual presentation.

    1. I think OCR will need to do a lot of it, until the keepers of the data realize that it will be better for them, as well as for us, if we get it in an easily readable form.

  4. Until there is a terminating settlement system, the answer is yes. But that’s only a tool; policy needs to be built on top of it to give the user agency. Then there are economics and policy in place such that data can be processed and stored locally, but also centrally, in a more efficient and equitable fashion. The reality is that the “open and settlement-free” model led to the big grift of Internets 1.0/2.0. Crypto is the second big grift and AI is the third. This is plain to see in the enormous wealth transfers, except to those who espouse open and free.

    1. Great stuff, as usual, Michael. Let’s visit this in depth soon.
