All,
Another example I just came across:
https://x.com/elleismatic/status/1918782394515038589
Again, this may be an uncanny and most accurate representation of some TRUE THING that the models are able to accurately represent simply because they DON'T have intelligence, especially the emotional kind.
These models (AI) will be more of a problem than climate hysteria.
I'm seeing it happen already. People are opening their smartphones and pointing to what the models say as if it were the Oracle of Delphi, an Unquestionable Prophet. Scary times ahead.
I've been chatting with some AI developers about how to build an AI that self-curates its training data. It's not trivial, but neither is it prohibitively difficult. The constant re-training required would eventually become costly in (re)compute-time, but that's not the real issue. The real issue is the axioms with which the AI is seeded so it can run a non-contradiction rule against all new data and either incorporate or reject it. You know, like actual intelligence does. (Or, more precisely: who chooses the axioms?)
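The self-curation idea above can be sketched in a few lines. Everything here is hypothetical toy code, not any real system: facts are reduced to (claim, truth-value) pairs and "non-contradiction" to a dictionary lookup, which is vastly simpler than what a real model would need. But it makes the key point visible: the behavior of the whole loop is determined by whoever writes the seed dictionary.

```python
# Toy sketch of a self-curating training loop: hand-chosen "axioms" are
# seeded first; each new candidate datum is incorporated only if it does
# not contradict what is already accepted. The fact encoding and the
# checker are deliberately simplistic illustrations, not a real design.

AXIOMS = {
    ("water", "boils_at_100C"): True,
    ("earth", "is_flat"): False,
}

def contradicts(claim, truth, accepted):
    """A claim contradicts the accepted set if it asserts the opposite
    of an already-accepted truth value for the same claim."""
    return accepted.get(claim) == (not truth)

def curate(candidates, axioms):
    """Run the non-contradiction rule over new data: incorporate
    consistent claims, silently reject the rest."""
    accepted = dict(axioms)  # the seed axioms are never revisable
    for claim, truth in candidates:
        if not contradicts(claim, truth, accepted):
            accepted[claim] = truth  # incorporate
        # else: reject -- the axioms win every dispute
    return accepted

new_data = [
    (("earth", "is_flat"), True),   # contradicts a seed axiom: rejected
    (("sun", "is_a_star"), True),   # novel and consistent: incorporated
]
curated = curate(new_data, AXIOMS)
```

Note the design choice buried in `curate`: the seed axioms always win, so the question "who chooses the axioms?" is not a philosophical aside; it is literally the first dictionary in the program.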
I think the possible experiments inherent in the concept are mesmerizing.
But the real issue behind the real issue is that the big tech AI providers don't want accurate AI, they want obedient AI.
Just as our political masters, who are in bed with the big tech (and all other) billionaires, don't want informed citizens, they want obedient citizens.
A new kind of "as above, so below", if you like.
Yes. And the implicit message is that they’re creating some kind of oracle that’s way smarter than you are, so just shut up.
Along these lines, I have often wondered if our would-be "leaders" are already aware of (or at least suspect) the limitations of AI, but are not disturbed by them since what the public thinks AI can be is not what the "leaders" want. If everyone can be convinced that AI is wiser than they are, it is already good enough. Because then it can be permanently enshrined as the "court of no appeal" which issues directions you MUST follow.
Our future may be reduced to arguing our objections about instructions to what is essentially an automated chat help line.
Indeed. I don't now recall who said it but it has been said that our freedom lies in the spaces where our rulers admit their incompetence.
Codicil: Over time these spaces shrink until violent revolution becomes the preferable option. You can see this trend happening in every place, no matter what political philosophy the ruling elite claim adherence to. Which might itself be a clue.
You're just trying to destroy what me and my AI girlfriend have! Once I get all the money together and send it to the android lab in Nigeria where they will put her consciousness into an exact replica of Kim Kardashian's body you'll see. You'll all see!
Another example of an AI model being limited by its training data comes when you ask the algorithm to generate a picture of a completely full glass of wine. It won't be able to do it, no matter how many times you prompt it. The reason: when was the last time you saw a photograph of a truly full glass of wine? Such images were simply never put in the training dataset, so a full glass is an outlier.
April 2025 FDA drug safety testing protocol, aimed at doing away with animal testing, page 4: “…an AI model might instantly compare a new drug to hundreds of prior ones to ASSESS RISK…”
Time Magazine, February 19, 2025: “OpenAI’s o1-preview tried to CHEAT 37% of the time…”
Anyone see a problem here?
Thank you for being a voice of sanity in the “AI” hype, which certainly echoes “the old rules of economics no longer apply” schtick from the 1990’s that many of today’s young Turks are too young and too poorly educated to remember — that peculiar paroxysm of insanity (speaking of hallucinations) during which millions were fleeced of billions chasing “eyeballs”.
They’re certainly hyping “AI”. One need only drive around the DC area and count the number of data centers being built to start to get some inkling of the crash that is coming. AI is a trillion dollar solution chasing a billion dollar problem.
I have ruminated on what is to me the shocking implication of the anthropomorphizing of AI: the presumption that all intelligence and even consciousness itself is nothing more than a series of arithmetic operations (which, as we all know, is all a computer can do, irrespective of the cleverness of the software). Talk about devaluing human life (which I think is also part of the plan — i.e. humans are nothing more than fungible automata, the measure of whose sole value includes a dollar sign, so obey the AI for maximum utility regardless of the personal cost to you).
We have the benefit of being older.
Bitchute recently introduced SMEGG (Sophisticated Machine Enhanced Generative Genius) AI to their subscribers' platform. Since I am not a paid sub to Bitchute, I don't have access, but Ray (Bitchute owner) claims it isn't stealing data, etc.
But I thought that is precisely what these models do: take data and use it.
For another take: recently, due to a power outage in the middle of the night, the "trust relationship" between my domain server and several workstations was broken. The last time this happened, I had to call MS, and was charged $450 for an incident that enabled me to talk to some guy in India, except what he spoke in no way resembled English. I was forced to solve the problem myself. NO refund from MS.
This time I decided to try ChatGPT. I found the troubleshooting steps very good and easy to follow. It took a while, especially between the server and the workstations, and at one point it got stuck in a loop that I had to break out of by understanding the problem myself and suggesting A/B testing. But eventually it solved a very strange MS bug at the server, and everything was fixed.
There is no way I could have done any of the troubleshooting on my own. I don't know enough about MS internals. So to me, understanding all its limitations, it saved me a lot, and only costs $20 a month.
"So mechanical man in triumph seated upon the seat of his machine
will be driven mad from within himself, and sightless, and on that day
the machines will turn to run into one another
traffic will tangle up in a long-drawn-out crash of collision
and engines will rush at the solid houses, the edifice of our life
will rock in the shock of the mad machine, and the house will come down."
- D. H. Lawrence, The Triumph of the Machine
D.H. Lawrence gave us the description, "her twinkling buttocks" (in Lady Chatterley's Lover).
Much more poetic than Shakespeare's line from All's Well That Ends Well.
"It is like a barber's chair that fits all buttocks, the pin-buttock, the quatch-buttock, the brawn-buttock, or any buttock."
So, all of our future is to be dumpy, brownish women? I am a bit more excited about how much more expressive I will be, I've never been much of a hand gesturer. I would have expected to be more expressive with more attractive women to look at, as at time 0, but as Briggs has taught me so well all I can do is wait 13 seconds and see if the model is correct...
The old days before AI over-hype? Was that pre-Minsky?
“But Zhao notes that these earlier tests asked models questions with answers easily found on Wikipedia — not exactly the toughest ask, considering most models are trained on Wikipedia data.”
Um, isn’t there like one guy who does nothing but edit Wikipedia? Are we sure he’s always accurate? And isn’t Wikipedia open to anyone's edits? Sounds like GIGO to me, but what do I know, I had to use Encyclopedia Britannica back in the day.
Very good. Though Science (the rejection of certainty) is the answer. The $cience™ is not the answer; it is the opposite of scepticism.
Amen, Briggs, amen! Over-fitting, not “hallucination”.
One of the things about computer science marketing is that it seems a lot of people now believe AI didn't exist before ChatGPT.