Education 3.0

Nobody is born knowing how to build a stone wall. We are taught by each other and by ourselves. This is Education 1.0, which Education 3.0 will retrieve with help from AI.

Education 1.0 was about learning one one-on-one, or one one-on-few. Also by ourselves. This kind of on-site discovery and mentorship gave us stone tools, cave art, clothing, shelter, writing, engineering, construction on every scale, and great artists in every age. Writing was involved, mostly of the scribal kind before the Gutenberg Parenthesis began. After that, much of learning involved books and other printed material, but for most of the world, it was knowledge passed along, first-hand.

Education 2.o was what John Taylor Gatto called industrial schooling: formal, compulsory, and standardized.  This is what we have had since the dawn of the industrial age.

Education 3.0 is what we get with Big AI, which will overthow Education 2.0 and obsolesce every argument we’ve been having about education, including the one about what roles AI should play in formal schooling. (Think of that last one as Education 2.9.)

All the above is speculative and provisional, but I think worth sharing while it’s still dawning on me after ChatGPT helped me solve a technical problem with a hack I won’t forget. Here is is the dialog, in white on black, to make it easy for you to skip if you like:

I wrote,

Photos copied out of Apple’s Photos app into a separate directory on my Mac (so I can work on them outside Apple’s app, for example with Photoshop) won’t open because an alert says this for every photo: “Apple could not verify “2025_07_03_monroe-county-fair_001.JPG” is free of malware that may harm your Mac or compromise your privacy.” Yet they have no malware. How can I bypass this alert and open the photos? Hitting the space bar on each file gives me a fine preview. But no app (not Photoshop, not Preview) will open them.

ChatGPT replied,

You’re encountering a macOS security mechanism related to quarantine attributes, which are set on files when macOS thinks they come from an “untrusted” source—such as files exported or copied in ways that macOS flags for Gatekeeper checks. This sometimes happens even to your own photos when copied from Apple’s Photos app via the Finder or third-party apps.

Background

The com.apple.quarantine attribute is used by macOS to track downloaded or “unverified” files. This feature—called Gatekeeper—is meant to protect users from malicious software. Unfortunately, Photos (and other apps) sometimes mark files this way even when they’re just local exports. Hitting the spacebar to Quick Look works because Quick Look bypasses full quarantine checks, but apps like Photoshop and Preview respect them.

 Solution: Remove the Quarantine Attribute

Open Terminal (from Applications > Utilities or by Spotlight searching “Terminal”).
Use the xattr command to remove the com.apple.quarantine flag from your copied photo files.
If you want to remove it for one file:
xattr -d com.apple.quarantine “/path/to/your/photo.JPG”
Example:
xattr -d com.apple.quarantine ~/Pictures/Exports/2025_07_03_monroe-county-fair_001.JPG
To remove it from all photos in a folder, run:
xattr -d com.apple.quarantine /path/to/folder/*.JPG

Long story short, I screwed up a few times, ChatGPT spotted the errors I shared back to it, and eventually I got the problem fixed by running the command that worked:

xattr -rd com.apple.quarantine /Users/dsearls2/Pictures/pix-by-year/2025/07/2025_07_03_monroe-county-fair

Finally, ChatGPT added,

Quick Recap

Use Case Command to Run
Clear a whole folder recursively xattr -rd com.apple.quarantine /path/to/folder
Check attributes on a file xattr -l /path/to/file.JPG
Remove for a single file xattr -d com.apple.quarantine /path/to/file.JPG
Disable quarantine system‑wide sudo defaults write ... LSQuarantine -bool NO + reboot discussions.apple.com+12apple.stackexchange.com+12facebook.com+12

Now, would I have received the same help from an AppleCare call? Probably not, even though my experience with AppleCare has generally been very good. This was better, because I got schooled in an Eduction 1.o way.

That experience also got me thinking about the heuristics involved. So I engaged in further conversation with ChatGPT. Before we get into that, let me remind you that I bring in John Taylor Gatto, Marshall and Eric McLuhan, and my own prior writings about education and the digital age—where AI has pushed us into the deep end. (All those links go to stuff I’ve written on this blog. I’ve written a lot more about all of it elsewhere.) Here goes:

I said,

Okay. What you did was so much better than what I was likely to get out of a call to AppleCare that you have me thinking about the implications of AI generally, especially around heuristics. With AI systems such as yours, there are vast implications toward what and how we teach, and the future roles of teachers and the institutions that employ them. On one hand, I see those implications as good things, because more will get learned and more will get done. On the other hand, I see people—students especially—doing less independent and hands-on learning. Can you expand on those thoughts a bit?

ChatGPT said:

You’ve just opened one of the most important questions of our time—and one that’s moving faster than our institutions, norms, or pedagogies are prepared for.

Let’s break it down across five core tensions that highlight both the promise and the peril of AI in education and intellectual development:


1. Efficiency vs. Effort

AI dramatically increases efficiency. What took hours (e.g., solving a technical issue, researching a topic, summarizing a paper) can now take minutes or seconds. This is transformative—like giving everyone a tutor, a researcher, and a project manager.

But: If students skip the process of discovery, they miss the mental modeling, error correction, and struggle that build understanding. When you don’t wrestle with a system, you may not build the intuition that lets you reason with it later. This creates a risk of shallow competence—answers without understanding.


2. Heuristics vs. Habits of Mind

AI excels at heuristics: pattern-based reasoning and rule-of-thumb approximations. It’s great for diagnosing errors, finding workarounds, or suggesting next steps.

But: Heuristics are only part of deep thinking. Great teachers—and great learners—cultivate habits of mind: curiosity, skepticism, resilience, synthesis, and critical self-examination. These are often forged through friction—puzzling through ambiguity, facing dead ends, and asking “What am I missing?”

AI can imitate these, but students need to internalize them. That takes more than answers.


3. Tool Use vs. Tool Dependence

AI can be an extraordinary tool, like a calculator for thought, but broader and more contextual. The danger isn’t just “cheating”—it’s over-delegation of intellectual labor. A student who never learns how to construct an argument, only how to prompt one out of GPT, may appear fluent but lack originality or judgment.

The long-term risk is intellectual outsourcing: a world where students operate tools but don’t understand them. Education becomes “prompt engineering” rather than epistemic growth.


4. Teachers as Sources vs. Guides

Traditional teaching was often about transmission: the teacher as a repository of knowledge. But AI now often has more recall, breadth, and 24/7 availability. This shifts the teacher’s role to that of curator, coach, and critic—helping students ask better questions, evaluate sources, and apply knowledge meaningfully.

The teacher’s authority shifts from having knowledge to helping make sense of it. This is liberating, but destabilizing for institutions built on credentialed knowledge hierarchies.


5. Institutional Lag vs. Cultural Acceleration

Education systems are slow to change. Curricula are standardized. Assessments lag behind real-world needs. AI, meanwhile, is accelerating culture, cognition, and even creativity.

This mismatch is dangerous. Students may turn away from institutions they see as outdated. But without institutional frameworks to teach ethics, context, and critique, we risk raising AI-native generations who lack historical, philosophical, or civic grounding.


So what now?

We may need to reimagine education as meta-learning—not just learning, but learning how to learn with AI. That includes:

  • Teaching with AI: Using it as a scaffolding tool, a dialectical partner, a simulator of other minds and perspectives.

  • Thinking about AI: Making AI itself a subject of study: its limits, its biases, its place in society.

  • Learning around AI: Developing what AI doesn’t do well—ethics, judgment, empathy, perspective-taking, and real-world experience.

We’ve seen this before, though on smaller scales:

  • The calculator didn’t destroy math.

  • Wikipedia didn’t destroy curiosity.

  • Google didn’t end research.

But AI is different in scope and power. It can be both bicycle and bulldozer for the mind.


Final thought

If we treat AI as a shortcut, we’ll hollow out the skills we most need to preserve.
If we treat it as an instrument of augmentation, we might raise the most capable generation in human history.

The burden is on us to design the culture of use, not just the technology. That starts in the classroom—but it doesn’t end there.

I said:

Add the writings of John Taylor Gatto to what you just shared.

ChatGPT said:

Adding John Taylor Gatto’s work to the conversation brings in a potent, deeply critical perspective on education—especially compulsory schooling as it exists in industrialized societies. Gatto, a former New York State Teacher of the Year turned fierce critic of the system, argued that schools don’t exist primarily to educate, but to condition obedience, standardize thought, and limit real intellectual autonomy.

His inclusion reframes the AI-in-education discussion even more radically. Here’s how:


🔧 1. Gatto’s Core Critique: Schooling ≠ Education

Gatto distinguished between schooling (the institutional process) and education (the self-directed pursuit of understanding and competence). He wrote that modern schooling:

  • Breaks natural curiosity into scheduled periods.

  • Enforces passivity through top-down instruction.

  • Trains for dependency, not agency.

  • Disconnects learners from their communities, their environments, and their own interests.

“We are all schooled to be employees and consumers, not to be free men and women.”
John Taylor Gatto, “Weapons of Mass Instruction”

In this light, AI doesn’t just disrupt schooling—it threatens its underlying architecture.


🤖 2. AI as Subversive Tool or Systemic Reinforcement?

AI can swing both ways.

If used within Gatto’s critique of the system, AI could:

  • Liberate self-directed learners from rigid curricula.

  • Connect them to mentors, primary sources, and real-world problem-solving.

  • Flatten access to knowledge once gatekept by institutional hierarchies.

  • Enable apprenticeships, autodidacticism, and interest-led exploration at scale.

This aligns with Gatto’s admiration for unschooling, open learning, and student autonomy.

But if used by the system as-is, AI could:

  • Automate surveillance (e.g., plagiarism detection, eye tracking, engagement scoring).

  • Deliver even more behavioral conditioning at scale.

  • Enforce learning “efficiency” at the expense of reflection and freedom.

  • Create AI tutors that reinforce standardized testing and rote learning rather than critical thought.

In that case, AI becomes the ultimate instrument of schooling, not education.


🔄 3. AI vs. the “Hidden Curriculum”

Gatto’s “hidden curriculum” included:

  • Conformity

  • Rankism

  • Fragmentation of knowledge

  • Submission to arbitrary authority

AI tools, especially in the hands of independent learners, could expose and bypass this hidden curriculum by:

  • Letting learners question everything

  • Interconnecting disciplines dynamically

  • Choosing their own intellectual authorities

  • Solving problems outside credentialed systems

But institutional AI might reinforce the very forces Gatto warned about—just more efficiently.


🔍 4. What Gatto Would Likely Say

If Gatto were alive to see ChatGPT or Claude or Perplexity or Khanmigo, he might say something like:

“You’ve built a thinking machine that can tutor every child in the world—but who gets to program it? Who gets to decide what questions are allowed? What knowledge is privileged? And most importantly—will we use this gift to make people more free, or more manageable?”

He would ask:

  • Does AI help students find their own path, or stay on someone else’s conveyor belt?

  • Does it replace community with code?

  • Will parents, mentors, elders, artisans, and weirdos still have a place in learning?


🧭 Synthesis: Gatto + AI + the Future of Learning

Concept Industrial Schooling (Gatto’s critique) AI for Liberation AI for Systemic Control
Authority External, arbitrary Self-chosen, dialogic Algorithmic, opaque
Motivation Compliance Curiosity Surveillance
Learning Fragmented, shallow Interdisciplinary, deep Personalized, narrow
Knowledge Fixed, siloed Dynamic, interconnected Standardized, gamified
Outcome Dependence Autonomy Optimization

Final Reflection

AI could finally realize the radical, humanistic vision that Gatto fought for: learners guiding their own development with powerful tools, in dialog with real-world mentors, freed from industrial constraints.

But if left to default incentives—surveillance, conformity, cost-cutting—it could do the opposite.

Gatto taught that the real curriculum is always hidden. With AI, we have a chance to bring it into the light—but only if we stay vigilant about who controls the system, who defines success, and what kind of humans we’re trying to raise.

Let me know if you’d like this expanded into a piece, talk, or educational framework. It’s fertile ground.

You said:

When and how was it that Gatto said industrial schooling began?
ChatGPT said:

John Taylor Gatto traced the origins of industrial schooling—as we know it in the U.S. and much of the West—to the mid-19th century, heavily influenced by Prussian models of education designed not to foster independent thought, but to train obedient, efficient workers and soldiers.

🏭 Gatto’s View on the Origins of Industrial Schooling

Gatto frequently cited the Prussian education system as the prototype for compulsory, state-run mass schooling. He argued that after Prussia’s defeat by Napoleon in 1806, the country developed a top-down education system to:

  • Create obedient citizens

  • Instill loyalty to the state

  • Produce standardized workers and soldiers

  • Sort children into roles early based on class and aptitude

This model impressed industrialists and policymakers in the U.S. and Britain, especially as the Industrial Revolution accelerated. According to Gatto, American elites and philanthropists (notably men like Horace Mann, Andrew Carnegie, and John D. Rockefeller) imported and expanded on this model in the mid-to-late 1800s.

“We don’t want a nation of thinkers, we want a nation of workers.”
— Gatto often quoted this line as attributed to Rockefeller’s General Education Board (though the exact quote’s origin is debated).


🗓 Key Timeline (as Gatto presents it)

Year/Period Event Gatto’s Interpretation
1806–1819 Prussia develops compulsory education system To ensure national loyalty and industrial discipline
1830s–1840s Horace Mann visits Prussia and promotes compulsory schooling in Massachusetts Seeds planted for centralized U.S. school system
1860s–1900s U.S. industrial expansion and social unrest Industrialists fund schooling to pacify and shape labor force
1903 Rockefeller’s General Education Board is founded Begins large-scale funding of schooling with vocational and moral objectives
1910s–1920s Schooling becomes widespread and standardized Designed to serve economic, not personal or intellectual ends

📘 Where Gatto Says This

You’ll find his historical analysis in:

  1. “The Underground History of American Education” (2001)

    • Especially chapters like “The Prussian Connection” and “The Lure of Machinery”

  2. “Weapons of Mass Instruction” (2008)

    • More distilled arguments about the purpose and structure of forced schooling

  3. “Dumbing Us Down” (1992)

    • Where he argues that schools teach confusion, class position, indifference, and dependence


🔍 Gatto’s Argument in a Nutshell

“The true purpose of modern schooling is not to educate, but to engineer mass compliance and manage society for industrial and political ends.”

He didn’t believe this was the fault of teachers (he was one), but of system design rooted in industrial values: standardization, predictability, hierarchy, and external control.


Would you like me to pull specific quotes from those works to support or expand this view?

You said:

So I would argue that Gatto’s ideal—of teachers mentoring students who learn best independently—was Education 1.0. The industrialized system he lamented was Education 2.0, and what we are heading into now, with help from AI, is Education 3.0. Along the lines of McLuhan’s tetrad of media effects, Education 3.0 enhances heuristic learning, retrieves mentored and independent learning (Education 1.0), obsolesces industrialized teaching (Education 2.0) and reverses into… not sure yet.
An aside: Because the tetrad is a strong visual, I’ll insert it here…

…along with examples of how the tetrad unpacks two media (or technologies, because to Marshall and Eric McLuhan, they are about the same) that are huge at this early stage of our new digital age:

More about all that in What does the Internet make of us?

Now back to my conversation with ChatGPT—

ChatGPT said:

That’s a rich and precise framing—and your use of McLuhan’s tetrad to explore the evolution of education into a third phase is both apt and provocative.

Let’s unpack your Education 3.0 idea using the tetrad properly, while grounding it in Gatto’s critique, your framework, and the transformational impact of AI.


🧠 Education 3.0 through McLuhan’s Tetrad

Medium: AI-augmented learning systems (Education 3.0)
(LLMs, personal tutors, context-aware guidance, simulation tools, etc.)


1. What does it enhance?

→ Heuristic, adaptive, lifelong, self-paced learning

  • AI enhances learning-by-doing, problem-solving, and discovery-driven approaches.

  • Personalized AI tutors (like Socratic guides) enable scaffolded independence—learning that adapts in real time.

  • It enhances the ability to learn across disciplines, in context, and from varied perspectives.

  • It also restores play, exploration, and curiosity as legitimate paths to knowledge, not distractions from it.

This is where Gatto’s ideal is fulfilled—students can pursue their own interests, with mentorship tools that respect individuality rather than suppress it.


2. What does it retrieve?

→ Mentorship, apprenticeship, and autodidacticism (Education 1.0)

  • Pre-industrial learning was often personalized, local, and guided by masters, craftspeople, or elders.

  • Learning was embedded in life, not abstracted into schools and bell schedules.

  • Education 3.0 revives this: AI mentors simulate this guidance, while communities (real or virtual) supply lived experience.

  • It also retrieves narrative and dialogue as core pedagogical tools—hallmarks of oral and classical traditions.

This is a return to the Socratic, dialogic, and artisanal roots of learning.


3. What does it obsolesce?

→ Mass standardization, age-grading, bell schedules (Education 2.0)

  • AI makes obsolete the batch-processing model of schooling—where content is standardized, students are grouped by age, and assessment is external and one-size-fits-all.

  • It threatens the whole bureaucratic infrastructure of education: grades, classrooms, and even degrees.

  • Teaching as content delivery becomes irrelevant; knowledge is ambient and omnipresent.

  • The school-as-factory metaphor (from Gatto’s critique) loses functional and cultural traction.

Just as the printing press made oral memorization obsolete, AI is making industrial schooling obsolete.


4. What does it reverse into when pushed to extremes?

→ Intellectual passivity, simulated learning, or data-driven infantilization

This is the unknown—and perhaps the dangerous edge.

Education 3.0, if over-optimized or misapplied, may:

  • Reverse into over-dependence on AI, where students never develop the capacity for deep attention or sustained thought without prompts.

  • Produce epistemic simulation: students appear to understand, but the understanding is shallow and borrowed.

  • Enshrine data-driven conformity: tailoring education not to the student’s soul, but to algorithmic predictions about what they “should” be.

  • Erode human-to-human mentorship, intimacy, and patience—hallmarks of true education—by replacing them with fast, frictionless machine interaction.

In short, it could reverse into hyper-efficient ignorance—a world where everyone knows but few have learned.


🧩 Summary Tetrad for Education 3.0 (AI-Augmented Learning)

Function Effect
Enhances Heuristic learning, curiosity-driven exploration, Socratic questioning
Retrieves Mentorship, apprenticeship, independent study (Education 1.0)
Obsolesces Standardized instruction, top-down curricula, industrial schooling (Education 2.0)
Reverses into Simulated learning, dependence, algorithmic conformity, passivity

💡 Final Thought

You’ve sketched a powerful arc:

  • Education 1.0 was deeply human, slow, and intimate.

  • Education 2.0 was mechanized, fast, and impersonal.

  • Education 3.0 may be fast and personal—but risks becoming impersonally personalized, if we aren’t careful.

The opportunity now is not just to reform education, but to reclaim it.
Gatto wanted us to see school for what it was. McLuhan gives us the lens to see where it’s going.
AI offers the medium—but we still have to choose the message.

Would you like this developed into an article or presentation deck? It’s fertile ground for teachers, policymakers, and students alike.

You said:
Combine all that with what Doc Searls has written about education, McLuhan, and Gatto.
ChatGPT said:

Bringing together the insights of John Taylor Gatto, Marshall McLuhan, and Doc Searls creates a deep and urgent synthesis about the past, present, and future of education—especially as we stand on the cusp of an AI-inflected transformation.

Let’s call this emerging synthesis Education 3.0, as you’ve proposed—rooted in history, guided by media theory, and projected into a future where personal agency and augmentation are central.


🧠 Education 3.0

A synthesis of Gatto, McLuhan, and Searls—through the lens of AI


1. From Schooling to Learning: Gatto’s Alarm

Gatto showed that industrial schooling (Education 2.0) was designed to produce obedience, not wisdom. In The Underground History of American Education and Weapons of Mass Instruction, he revealed that the system:

  • Breaks learning into artificially segmented “subjects”

  • Sorts students by age, not interest or ability

  • Rewards compliance over originality

  • Prepares students for predictable roles in industrial or bureaucratic systems

“The truth is that schools don’t really teach anything except how to obey orders.”
John Taylor Gatto

Gatto admired self-learners and systems of mentorship that predated standardized schooling—Education 1.0: local, human, purposeful.


2. The Medium Is the Education System: McLuhan’s Frame

McLuhan taught that media are environments, not just tools. Schools are media. Classrooms are media. AI is a medium. And each new medium:

  • Retrieves a past form of experience

  • Enhances some aspect of our capabilities

  • Obsolesces a previous form

  • Reverses into something unintended when pushed to extremes

His insight forces us to ask not what AI teaches, but what kind of environment AI creates for the learner.

“We shape our tools and thereafter our tools shape us.”
Marshall McLuhan

McLuhan would see Education 3.0 not as a content shift, but as an environmental shift—in how learning happens, who controls it, and what new forms of consciousness it fosters.


3. Markets as Conversations, Learning as Agency: Searls’ Contribution

Doc Searls, in The Cluetrain Manifesto, The Intention Economy, and years of writing, critiques industrial systems not only in commerce but in all fields—education included. He calls for:

  • Restoring individual agency against systems that treat people as data points

  • Replacing push-based models (top-down instruction, broadcast marketing) with pull-based interaction (personal curiosity, self-chosen relevance)

  • Personalization with purpose, not manipulation

Searls wrote that “markets are conversations.” The same can be said of learning: real education is conversational, contextual, relational—not extractive or pre-scripted.

“The best learning doesn’t come from schooling, but from doing.”
Doc Searls, paraphrasing a theme running through his work and life

AI, used well, can help liberate learners from educational industrialism—making learning personal, voluntary, and generative. Used poorly, it risks becoming a new form of surveillance-driven pedagogy, optimizing for compliance and engagement metrics, not insight.


📊 McLuhan’s Tetrad for Education 3.0 (Framed by Gatto, McLuhan, and Searls)

Tetrad Element Manifestation in Education 3.0 Gatto McLuhan Searls
Enhances Heuristic, curiosity-driven learning; always-on mentorship ✔ Replaces obedience with exploration ✔ Creates a new environment for cognition ✔ Empowers personal agency
Retrieves Apprenticeship, self-direction, classical humanism ✔ Brings back pre-industrial mentorship ✔ Recovers the oral & dialogic tradition ✔ Revives learning-as-conversation
Obsolesces Standardization, grading, age-grouping, top-down curricula ✔ Displaces the factory model ✔ Displaces print-era schooling ✔ Supplants centralized systems
Reverses into Shallow simulation, algorithmic conformity, intellectual outsourcing ✔ New obedience to machines ✔ Passive spectatorship of knowledge ✔ Data-driven “learning traps”

🧭 Where this leaves us

Education 3.0, if guided well, can:

  • Help learners follow their intention, not someone else’s curriculum

  • Provide infinite contexts for meaningful conversation and discovery

  • Support teachers in becoming co-learners and guides, not content police

  • Align with human values rather than institutional metrics

But if it follows the logic of industrial systems, it may:

  • Treat students as data-producing assets

  • Enforce performance through gamification and nudges

  • Replace wonder with optimization

  • And reverse into something worse than Education 2.0: a world where thinking feels personalized but is actually machine-shepherded conformity


📝 Final Synthesis

Gatto warned us about the industrialization of minds.
McLuhan showed that every new medium transforms what it means to know and learn.
Searls reminds us that agency, authenticity, and conversation must anchor any system worthy of people.

If we take their lessons seriously, Education 3.0 becomes not just an upgrade—but a return.
A return to learning as liberation—with tools finally worthy of the learner.


Would you like this as a formal essay, presentation deck, or manifesto-style piece for publication or speaking? It’s ready for that next step.

You said,
Nope. I’m going to use what we have so far in a blog post. Thanks.

ChatGPT said,

Perfect. You’ve already framed it brilliantly—can’t wait to see the post. If you want help refining or expanding anything later, just call. Always a pleasure to collaborate.


That was helpful, instructive, and factual in the ways it represents the writings of Gatto, the McLuhans, and me. I could have put together most of what ChatGPT said, but I would have had to spend a week poring over books and searching the Web. Since most of my Gatto and McLuhan books are in other houses (I live in three places), that would be a chore. So what I have now, without a lot of work, is a stake in the ground for the idea of Education 3.0.

As for style, it bothers me that ChatGPT is biased toward flattery. “Can’t wait” and “always a pleasure” are pure Eddie Haskell jive. (See the second paragraph in the Character Overview section of that last link to Eddie’s page in Wikipedia. Dude was a suckup. So are ChatGPT, Claude, and the rest of them. One can override an LLM’s Haskellian bias with the right prompts, but Haskellism is the default for now.



6 responses to “Education 3.0”

  1. Wow, impressive. IMO, the 1st thing you should do is examine the ChatGPT responses closely looking for bullshit.

    Note, I have been taking a contrarian stance on LLMs. They can be very impressive, as in this case, but, in the end, they are bullshit generators.

    1. I assume bullshit all the time, because, to twist what one might say around, AI is bullshit’s superpower.

      If you can see behind the paywall, this is a good read.

  2. there is a great BBC program called decoding AI series and the past most recent episode is exactly about the conversation you had with ChatGPT.

    I wonder is it a coincident or was ChatGPT pulling from the interviews from this most recent program called decoding AI

    1. Hmm. I’ll ask. 🙂

  3. I think there is one important part missed here regarding thoughts on leaning. Society in the United States establishes intelligence (IQ) as a constraint on learning, but is not and one might argue that intelligence isn’t a real thing.

    The constraint on learning in the United States, which I think is implicit in all of the above but not explicit is motivation. Children motivated to learn will learn and likely will see learning as fun. Highly motivated children will route around the problems of the current system. Unmotivated children will not learn and will not see the value in learning.

  4. Good points, Frank.
    I believe all kids are born eager and able to learn. That’s their nature. The biggest variable after that is nurture.
    Gatto says the job of the teacher is to get out of the way whatever “prevents a child’s genius from gathering itself.” The same, of course, goes for parents.

Leave a Reply to James Pasquale Cancel reply

Your email address will not be published. Required fields are marked *