AI & Chess Both Produce Pre-Coded Output
It seems a battle most impossible: convincing a good chunk of the population that AI is nothing more than a model. A model written in code, which of course the coders know because they are writing it, code that carries out explicit instructions, and only explicit instructions. Code that runs on machines that operate in fixed and directed ways. Yet many insist AI’s output is more than its code, and somehow becomes something more than its code, the output the result of some emergent malign or beneficent or at any rate chaotic entity, an entity with greater insight than any mere man.
This is not so. All models only say what they are told to say and AI is a model. Although it may seem to do more than it is told, AI does not. The consequences for not understanding this are beginning to be felt.
Here’s a poor woman who, our chief propaganda organ tells us, “felt unseen in her marriage” and was “looking for guidance”, so she turned to a computer game called ChatGPT and asked it about Ouija boards. The code, as it was instructed, told her:
“‘You asked, and they are here,’ [ChatGPT] responded. ‘The guardians are responding right now.'”
Instead of giggling at the silliness of this, she (drumroll) took it seriously. Because it was from AI. The propaganda organ continued: “Allyson began spending many hours a day using ChatGPT, communicating with what she felt were nonphysical entities. She was drawn to one of them, Kael, and came to see it, not her husband, as her true partner.”
Women used to do this with Harlequin novels (do they still have these?). But they never believed the novels came to life and loved them back.
One of the common responses I get when I chant All models only say what they are told to say and AI is a model is this: “No way, Briggs. No coder anticipated the output. They didn’t tell AI to talk about interdimensional guardians. The output of AI is complex beyond the coder’s imagining. Therefore, the AI must be alive.” Or words to that effect.
Chess is an exceedingly simple game with trivial rules. Mathematically speaking, that is. There are only a tiny handful of opening moves allowed, which can be counted using only your fingers and toes.
The number of counter moves to that first move is much larger, since for each of the twenty openings, many different counter moves are possible, but each still drawn from a very small set. Which means the number of counter-counter moves (third moves) is larger still, and so on. The possibilities form a combinatorial explosion. One estimate, which is close enough for us, says there are some 10^120 possible different chess games.
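The counts above can be checked with simple arithmetic. A minimal sketch: the twenty opening moves come from the pawns and knights, and the 10^120 figure is Shannon's old back-of-the-envelope estimate (roughly a thousand move-pair choices per turn over a typical forty-move game), which the text's "one estimate" presumably refers to.

```python
# White's legal first moves: each of 8 pawns may advance one or two
# squares, and each of 2 knights has two legal destinations.
pawn_moves = 8 * 2
knight_moves = 2 * 2
first_moves = pawn_moves + knight_moves
print(first_moves)  # 20 -- countable on fingers and toes

# Shannon's rough estimate: ~1000 possibilities per pair of moves,
# over a game of about 40 moves.
shannon_number = 1000 ** 40
print(shannon_number == 10 ** 120)  # True
```

Nothing here depends on chess software; it is only the arithmetic behind the claim that the game tree, however vast, is finite and fixed in advance.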
That’s a lot, but still a finite number. This means that, in concept anyway, if you want to win, you could look through all 10^120 games and find the subset that has the opening moves which lead to victory, then follow the game moves that result in a win, depending on what your opponent does, if winning and not tying is possible. It is a simple search. Nothing more.
You do the same for Tic-Tac-Toe, which is no different from chess in this mathematical way. Whatever move your opponent opens with, you search the space of solutions and work to a draw, or even a win if your opponent is not too bright.
All the thinking is done in advance, as it were, and you only follow the instructions. Same with chess, in theory.
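The "all the thinking is done in advance" point can be made concrete with Tic-Tac-Toe, whose game tree is small enough to search exhaustively. A minimal sketch (not from the article): a recursive search that visits every possible continuation and reports the outcome under perfect play. The code finds, without any cleverness, that the empty board is a forced draw.

```python
from functools import lru_cache

# The eight winning lines on a 3x3 board, indexed 0..8.
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
        (0, 3, 6), (1, 4, 7), (2, 5, 8),
        (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """+1 if X wins under perfect play, -1 if O wins, 0 for a draw.

    The search simply enumerates every legal continuation: all the
    'thinking' is a lookup through the pre-existing game tree.
    """
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    if ' ' not in board:
        return 0
    nxt = 'O' if player == 'X' else 'X'
    results = [value(board[:i] + player + board[i + 1:], nxt)
               for i, sq in enumerate(board) if sq == ' ']
    return max(results) if player == 'X' else min(results)

print(value(' ' * 9, 'X'))  # 0 -- perfect play from an empty board is a draw
```

Every line of output was already implicit in those forty-odd lines of code before the program ever ran; the search merely uncovers it. Chess is the same in principle, only with 10^120 games instead of a few thousand.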
Because you cannot—you yourself, dear reader, nor I—see all 10^120 paths does not mean they are not there already, contained in the simple, almost ridiculously simple, code. Any given game may surprise you with its output, but nothing can happen that isn’t in the code. No, not even if you didn’t anticipate what would happen after the shocking 132nd (or whatever) move.
When men become involved, it can be difficult or even impossible to predict which of the 10^120 games will play out. But we know with certainty one of them will, if the game is completed. That is, if the “output is generated”. You cannot claim that a particular game, because of its surprisosity (you heard me) or complexity, or because it was unthought of, or unprecedented, or would never have happened unless some emergent intelligence of the rooks intervened, or whatever, was not in the code, because every one of them is.
Since AI is more complex than chess, but still nothing more than a set of fixed rules (code), all possible output is fixed, too, once the code is fixed. So, yes, the coders made their machine create the interdimensional love-bomb flattering being named Kael. That they couldn’t see Kael in the code does not mean it wasn’t there. The rabbit has not disappeared: the magician has concealed it, via complexity.
The temptation is then to say, well, aren’t we just machines? And so all possibilities are there, in the code which is the “laws” of nature; thus we are like AI, and AI is like us, and we are carrying out pre-coded instructions, too.
The problem with that is the machine metaphor for living (and other) things. It’s been so useful, and has many successes yet to come, that people can no longer see it is only a metaphor. It is an analogy that, like all analogies, fails in detail. We are not machines. We are not chemical tinker toys operating by “laws”, bottom up, with complex behaviors “emerging” from the simple code, like chess.
The direction is backward: we are more top down than bottom up. It is not that we do not have machine-like elements, but that these are not the most important parts of us. Biologists were too keen on removing teleology, the famed final cause, from their vision. We need to help them see again.
Subscribe or donate to support this site and its wholly independent host using credit card click here. Or use the paid subscription at Substack. Cash App: $WilliamMBriggs. For Zelle, use my email: matt@wmbriggs.com, and please include yours so I know who to thank. BUY ME A COFFEE.



"Yet many insist AI’s output is more than its code, and somehow becomes something more than its code, the output the result of some emergent malign or beneficent or at any rate chaotic entity, an entity with greater insight than any mere man."
The 'emergent properties of complexity' farce continues.
The idea that such properties emerge “naturally” once sufficient complexity is reached is perhaps bolstered by the fact that, for a large percentage of the population, a digital watch is a complex device. But for people aware of the enormous complexity of the simplest living cell, such hand-waving remains unconvincing.
Unfortunately, humans are trapped in metaphors constrained by our most ubiquitous level of technology. Thus 19th century naturalists thought and spoke in terms of machinery and regarded life as a purely mechanical process analogous to the technologies they saw in the industrial revolution. Moderns tend more towards life as analogous to code, a metaphor similarly governed by the fact that computer programming is the most ubiquitous technology in the public awareness.
(It has been remarked that every company is now a software company, such are the demands of running a successful modern business with a digital footprint.)
Ironically, our forays into both mechanical and digital technology have taught us that what really happens when a system becomes very complex is that it starts to break down. The expected number of combinatorial failures exceeds unity, and thus failures become unavoidable.
Perhaps the most compellingly divine aspect of life is that in the face of its mind-boggling complexity it continues to function.
The easiest way to see your point is to note that while computers operate entirely according to the physical laws we exploited to make them work as we intend, no physical law can possibly tell you what algorithm a computer is running, nor what any of the inputs or outputs for that algorithm mean.
Neither the symbols input to a computer algorithm nor output by a computer algorithm have any meaning from the perspective of physics. They are just pixels on a screen or blotches of ink on a printed paper or whatever. Similarly, the electrons moving through bits of silicon while a computer is running don't constitute any algorithm from the perspective of physics. To make this point more clearly, we can note that the same algorithm can be implemented in widely different physical mediums.
So then what DOES determine what algorithm a computer is running, or the meaning of its inputs and outputs? The answer is that these things come from the minds of the algorithm's human creators and users, and exist only in their minds.
But if algorithms do not exist within physics alone, but only within human minds, then human minds cannot possibly be mere algorithms created by physics.
All of this should be obvious, to "experts" who design AIs most of all, but unfortunately it isn't. Most people just aren't very reflective, and are prone to cargo cultism, and can be fooled into dubious or incoherent metaphysical positions by superficial appearances. Even smart people. Perhaps especially smart people.
I've come to take an increasingly negative view of AI, especially LLMs. It's increasingly clear to me that the gained efficiency from these things as tools is going to be far less than promised relative to their costs, and that where they are deployed, it will be a result of people deciding that the "hallucinations" and so forth are simply acceptable costs of business, not because these problems will be solved.
Meanwhile, reliance on AI is going to degrade people mentally, which in turn will degrade the output of AI trained on those people's output generated with AI, in a downward spiral.
Worst of all, AI tempts people to degrade themselves spiritually, to see themselves as mere machines and to treat themselves and others accordingly (like the woman romancing an AI in this article), and sadly at a time when advancements in scientific knowledge are otherwise finally pointing us away from three centuries of mechanistic reductionist confusion.