34 Comments
Mar 14 · Liked by William M Briggs

So I gather, then, that since “AI” scrapes the web, presumably indiscriminately with respect to whether it is scraping other “AI” output (for it might be able to recognize its own output from the same training run, but not that of other models), and since more and more web material is “AI” generated, “AI” will eventually and inevitably degenerate into gibberish. Unless, of course, “AI” output is clearly labeled as such, which will then allow real people to ignore it. But then the label defeats one of the main purposes of “AI”, which is to fool people into thinking that they are reading words originating in real thought.
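
This degeneration worry can be sketched in a few lines. Below is a toy simulation (an illustration only, not anyone's actual training pipeline) in which each "generation" of a model is fit solely to samples drawn from the previous generation's output; the fitted distribution's spread tends to decay toward zero.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "real" human data, here a standard normal.
data = rng.normal(loc=0.0, scale=1.0, size=20)

for gen in range(1, 51):
    # Fit a toy "model": just the sample mean and standard deviation.
    mu, sigma = data.mean(), data.std()
    # The next generation trains only on the previous model's output.
    data = rng.normal(loc=mu, scale=sigma, size=20)
    if gen % 10 == 0:
        print(f"generation {gen:2d}: mu={mu:+.3f}  sigma={sigma:.3f}")

# On average sigma shrinks generation after generation: the tails
# (the rare, interesting content) vanish first, then variety altogether.
```

With small samples the decay is fast; with web-scale samples it is slower, but the direction is the same whenever a model feeds on its own output.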

Mar 14 · Liked by William M Briggs

We're living in a Turing test.


Kind of like cannibalism.

Mar 14 · Liked by William M Briggs

The danger of AI is not that it will take over the world, it's that those who control AI believe it can.

Mar 14 · Liked by William M Briggs

The genius behind the (yet another) scam is in the use of the word "intelligence" in the title. There is no "intelligence" contained within Artificial Intelligence. Asking (or expecting) Artificial Intelligence to give a reasoned or intuitive answer to any question is an exercise in futility, because nothing in the programming, artificial or otherwise, is capable of reason or intuition. AI is nothing more than faster processing speed, and the result is still GIGO: Garbage In, Garbage Out.

The only "intelligence" involved in AI is in the people who write the program, the code. The actual program itself is no more intelligent than any other tool, like a hammer or a spoon. This was amply demonstrated by Google's recent AI program when it spit out pictures of almost surreal oddity. It was clear to all that there was zero "intelligence" involved in these pictures and that they were simply a reflection of the programmer's woke biases. AI is a mimic, a regurgitator, and nothing more.

As a side note, every time I see "AI" in print, my brain, as usual, refuses to conform and instead of reading the letters "Ay" and "Eye", it reads it as "Ay Ell", the diminutive for the name Albert. So, rather than fight what my brain sees, I no longer read "Ay Eye", but, as Paul Simon once requested, I call it Al.

Mar 14 · Liked by William M Briggs

'The genius behind the (yet another) scam is in the use of the word "intelligence" in the title.'

Yes! Even more encouragement for the masses to view it as magic.

Mar 14 · Liked by William M Briggs

Now we know how the image of the first beast will be given breath so as to deceive the inhabitants of the earth who will worship it.

Mar 14 · Liked by William M Briggs

Sounds like AI may get stuck in a do loop; or a do do loop.

Mar 14 · Liked by William M Briggs

Yes, we will need new observations to feed the model, but these new observations will increasingly be made by individuals who navigate their lives through an online framework. The more online we are, the more we adapt our thinking to the binary requirements of this system. Consider Bryan Appleyard's concern in "The Brain Is Wider Than the Sky" that humans interacting with machines would dumb themselves down in order to please said machine. How will this all extinguish itself?

Mar 15 · Liked by William M Briggs

I remember the botched experiment of 2017, when two Facebook chatbots were tasked with making a trade with each other; not only did they fail to come to terms, but their language broke down into gibberish, which The Atlantic ("What an AI's Non-Human Language Actually Looks Like") referred to as ... Facebotlish.

More recently, the US National Science Foundation ("Verbal nonsense reveals limitations of AI chatbots") reported how AI can still be fooled by gibberish -- nonsense sentences which do not actually have any interpretable meaning.


This is so because large language model AIs are not intelligent; they are just good at recognizing patterns, quantifying them (statistically), and applying those patterns in response to prompts. Giant pattern repeaters.
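
As a toy illustration of "giant pattern repeater" (a deliberately tiny stand-in, not how production LLMs are built): a bigram model that memorizes which word follows which, then "generates" by replaying those counts.

```python
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# "Training": record which word follows which -- pattern recognition,
# quantified statistically, and nothing more.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

# "Generation": replay the recorded patterns in response to a prompt.
random.seed(1)
word, output = "the", ["the"]
for _ in range(8):
    if not follows[word]:
        break  # dead end: nothing recorded to repeat
    word = random.choice(follows[word])
    output.append(word)
print(" ".join(output))
```

Scale the lookup table up by billions of parameters and the fluency improves enormously, but the move is the same: replay of recorded patterns.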

Mar 14 · edited Mar 14 · Liked by William M Briggs

Hard AI proponents are still working to prove Roger Penrose wrong, even though these proponents cannot begin to describe what "consciousness" is, much less demonstrate it in any given AI's cluster of nested IF statements. Less a Turing test than a visit to the circus to talk to the mechanical fortune teller. The actual Turing test computer wouldn't need constant input to remain sensible. AI is like the fortune teller, where you have to keep replacing the paper output tapes (the 'fortunes') while telling the crowd members with questions, "Move along, boy. You bother me."

Author

Yes. I call this The Great Bluff.


"prove Roger Penrose wrong"

I am curious: would you mind elucidating this exact reference for me?


Roger Penrose wrote "The Emperor's New Mind", which was a thorough dismantling of the "Hard AI" proponents. Originally published in 1989, the book has since been updated by Penrose.

Penrose believes that studying human consciousness via behavior when under anesthesia may be a more fruitful path to discovering more about it.

Quantum mechanical states must be preserved from step to step (moment to moment) or you can't make an argument for consciousness. Computational processes do not preserve those quantum states; ergo, they cannot be a means of achieving what we know as consciousness.


Thanks! I've added "The Emperor's New Mind" to my "to-read" pile. Looks exactly like something I've been searching for. Much appreciated - honest!


You are very welcome.

Mar 14 · edited Mar 14 · Liked by William M Briggs

This doesn't describe "intelligence" but rather conflates AI with "language model". A predictive language model is in no way intelligent. It may be surprising and it may often be "correct", but those results are really only statistical. There is no insight and there is no intuition, body sense or spirit. AI in the sense usually applied - the large language model - may "know" things but it cannot understand them.


To go a step further, I'd say that AI only has organized information, which is not even knowledge on this scale:

1) disorganized information

2) organized information

3) knowledge

4) understanding (proto-intelligence)

5) wisdom (intelligence)

Actual intelligence requires virtue.


Knowing and understanding. You got it, not the same at all.


There's bound to be plenty of AI-generated crap on the Internet already. Even before AI got big, SEOs were filling up the Internet with spun articles in order to build artificial links.

One reason Google has moved Left is that it has gravitated toward authority sources, since it can no longer trust its original link-popularity algorithm.


I’ll go one further than the inbreeding of knowledge, as I’ve said elsewhere. Once AI becomes widely adopted, the date of the doom of our knowledge is set.

A thought experiment to show this: consider a system model comprising not only the various AI products (and releases thereof) and their respective training corpora (subsets of total knowledge), but also the state of total knowledge, people as generators of new knowledge, people's skill level at generating new knowledge, and the effects on that skill of interaction with, and dependence upon, AI products. I believe such a dynamic model, run over years and generations, will show not only a plateau of total knowledge but also people dumbing down with regard to their ability to generate new knowledge.
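
One possible toy parameterization of that dynamic model, with every coefficient an arbitrary assumption chosen only to make the feedback visible:

```python
# Toy dynamics: AI adoption grows, knowledge-generation skill atrophies
# with dependence, and new knowledge accrues in proportion to skill.
# All rates here are invented for illustration, not estimated from data.
skill = 1.0        # ability to generate new knowledge
knowledge = 1.0    # total stock of human-originated knowledge
dependence = 0.0   # reliance on AI products

for year in range(1, 51):
    dependence = min(1.0, dependence + 0.02)   # adoption ratchets up
    skill *= 1.0 - 0.04 * dependence           # skills atrophy with reliance
    knowledge += 0.05 * skill                  # new knowledge scales with skill
    if year % 10 == 0:
        print(f"year {year}: skill={skill:.2f}  knowledge={knowledge:.2f}")
```

Skill decays toward zero and knowledge plateaus: the "doom date" is whenever the yearly increments become negligible.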

Other thoughts, related:

Once we have become much lower in skill and much higher in dependence on AI products, the consequences of a pulled plug will be far more dire.

Creativity may be an inherent trait of humans, but it draws upon experience, observation, and knowledge (our own training corpus, so to speak) as a baseline for inspiration. The quality of creativity must decline as the (moving) baseline for new generations atrophies and becomes dominated by banality and gibberish. Hence, the date of the doom of creativity may also be set along with the doom date of our knowledge.

Mar 14 · Liked by William M Briggs

Data-driven models cannot record things like muscle memory, or the specific hybrid of mind and heart that is the crux of creativity?

Matthew Crawford's book 'The World Beyond Your Head: How to Flourish in an Age of Distraction' describes this sort of memory as involving not just muscles but hearing, sight, and other developed senses, which we can really only train using the master/apprentice paradigm, because the knowledge can only be passed through subtle explanatory nudges over time, with lots of "hands-on" experience.

Having practiced Wing Chun kung fu, I find the concept of uploading this knowledge, as in 'The Matrix', far-fetched.

So yes, I agree that A.I. needs a bit more substance than data from the internet to "progress".


When does climate change get the AI imprimatur? That should settle the argument. My great fear is that while old buggers like me laugh at AI's pretensions, younger people are being brainwashed into believing that it is the source of all wisdom.


I don't think the younger generations as a cohort are so stupid that they take AI as Lewis's "Head" from N.I.C.E. Don't believe the mainstream media with their hot takes on all Millennials and all of Gen Z. The MSM's job is to lie and be incompetent, after all.

Mar 14 · edited Mar 14 · Liked by William M Briggs

Good call. I hope that you are right. Then I look at Swifties... When I was teaching, many students had swallowed the climate change narrative, among others.

Mar 14 · Liked by William M Briggs

Another brilliant post. Thank you.

Mar 14 · Liked by William M Briggs

I think there will be attempts to deal with this problem. Someone will come up with an HTML tag or attribute that you're supposed to use on AI content to let the AI harvesters know not to ingest something into their models. New versions of AI models will have features for screening input that seems to have been generated by AI but not tagged as such. Maybe they'll even try storing all generated content somewhere so potential input can be checked against it. It's going to be a big problem, because there will be no way to prevent "input leakage". (And... I'm off to trademark that term, so maybe I can shake someone down for a few bucks.)
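
A sketch of what such a convention might look like, with the caveat that no such standard exists today; the `ai-generated` meta tag name and the whole scheme are invented here purely for illustration:

```python
from html.parser import HTMLParser

class AIGeneratedDetector(HTMLParser):
    """Looks for a hypothetical <meta name="ai-generated" content="true">
    tag -- an invented convention, not any real standard."""

    def __init__(self):
        super().__init__()
        self.ai_generated = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name") == "ai-generated":
            self.ai_generated = (attrs.get("content") or "").lower() == "true"

page = '<html><head><meta name="ai-generated" content="true"></head></html>'

detector = AIGeneratedDetector()
detector.feed(page)
if detector.ai_generated:
    print("skip: page self-labels as AI output; do not ingest")
```

The weakness is exactly the "input leakage" predicted above: the scheme is honor-system, and untagged AI output sails straight through.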

Mar 14 · Liked by William M Briggs

I remember that some of my early coding attempts (ca. 1985) errored out with the message "Too Much Recursion". That's how I think of this: a positive feedback loop of GIGO.

Or, if you prefer visuals, some remote control software tripped over its own shoelaces: https://substack.com/@tardigrade1/note/c-46233018
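
The recursion framing is apt; the degenerate case is easy to write down (Python here, purely for illustration):

```python
def model(text):
    # Each call "trains" on its own previous output; no new input ever.
    return model(text + " blah")

try:
    model("seed")
except RecursionError as err:
    print(err)  # maximum recursion depth exceeded
```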
