|
#Worldle #391 1/6 (100%)
🟩🟩🟩🟩🟩🎉
https://worldle.teuteuf.fr
easy
"A little time, a little trouble, your better day"
Badfinger
|
|
|
|
|
Now that ChatGPT can write essays better than schoolkids, and has answers to lots of questions, it seems to me that it could also answer the exam questions kids are getting in school or college.
Of course, the AI proponents are going to praise this as proof of how "intelligent" ChatGPT is - it's so good, it could pass a college exam!
But is it?
Isn't it rather a poor comment on the nonsense we are doing in schools? Is schooling really meant to be repeating random facts, regurgitating what you have been told so you can spit it out again on an exam paper? Is this "learning"?
If you think that's learning, THEN of course ChatGPT is "intelligent".
Even Einstein apparently said, "Most of my work came from imagination, not logical thinking. And if you have problems with mathematics, I assure you mine are still greater."
A school should prepare kids for life: give them some competence they can use, some knowledge they can apply, and make them curious to create and use their imagination.
Cramming data down their throats is, in my opinion, NOT what a school should do. It's just another example of how "automation" takes something away from humans. But is it really taking something away, or is it rather pointing out that this was, after all, not really a human thing to do in the first place?
Was it human to die as a slave carrying stones to the pyramids in Egypt, or rowing Roman galleys? Certainly it wasn't - and now it's replaced by machines. It certainly created some unemployment, I guess - the really stupid people were then unemployed. But what business does anyone have being stupid? That's where schools come in. But now they just turn kids into parrots, easily replaced by chatbots.
Maybe ChatGPT just points out that the "robotic" repetition really does not have a place in our schools.
Something needs to change here, doesn't it?
|
|
|
|
|
nepdev wrote: Something needs to change here, doesn't it?
Yes, and it's the belief that ChatGPT is somehow any good. You can blame the mainstream media (once more) for inciting mass hysteria.
Someone else has already summarized ChatGPT's fundamental problem succinctly, as being confidently wrong. I find it hard to disagree with that. It's very little more than the sort of parlor trick BS artists manage to pull off.
|
|
|
|
|
|
But that's my exact point - what kind of questions are they asking, if they can be answered by a mindless robot???
No matter how "reputable" that exam is.
|
|
|
|
|
|
Just what is this consciousness that makes you Human? Does this question assert an untruth?
|
|
|
|
|
I find it interesting that questions like this are being asked in the context of a glorified search engine with fancy language output.
|
|
|
|
|
I think you only have to look at how ChatGPT actually works - it tries to figure out the "best" next word in the sentence.
That is NOT how to reason or think.
Do you think that way?
Certainly not - I would guess you have a CONCEPT first, before you open your mouth.
ChatGPT has no concept. It is just word babble.
Thinking is not talking, no matter how many "scientists" may tell you that the way we think is through words. Einstein did not. And what about musicians? They don't think "now I need to put an F# semiquaver here in this position" (and if they do, their music is balderdash).
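For the curious, that "best next word" loop can be sketched in a few lines. The transition table below is invented purely for illustration - a real model has learned weights over a huge vocabulary - but the generation mechanism is the same: pick a likely next token, append it, repeat. There is no concept anywhere, just conditional probabilities:

```python
import random

# Toy "language model": for each current word, a probability distribution
# over the next word. Invented numbers, for illustration only.
NEXT_WORD = {
    "<s>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.5, "dog": 0.5},
    "a":   {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.3, "ran": 0.7},
    "sat": {"</s>": 1.0},
    "ran": {"</s>": 1.0},
}

def generate(seed: int = 0) -> str:
    """Build a sentence one 'best next word' at a time."""
    rng = random.Random(seed)
    word, out = "<s>", []
    while word != "</s>":
        dist = NEXT_WORD[word]
        # Sample the next word in proportion to its probability.
        word = rng.choices(list(dist), weights=list(dist.values()))[0]
        if word != "</s>":
            out.append(word)
    return " ".join(out)

print(generate())
```

Every sentence it emits is locally plausible, which is exactly why the output can sound fluent while meaning nothing.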
|
|
|
|
|
That is how it works now. You have to look past the now, and at the future. New versions, new updates, new branches. It will eventually become what we all fear and know to be true.
|
|
|
|
|
Well, it has the best words.
GCS/GE d--(d) s-/+ a C+++ U+++ P-- L+@ E-- W+++ N+ o+ K- w+++ O? M-- V? PS+ PE Y+ PGP t+ 5? X R+++ tv-- b+(+++) DI+++ D++ G e++ h--- r+++ y+++* Weapons extension: ma- k++ F+2 X
|
|
|
|
|
Slacker007 wrote: um not really, at all - could not be further from the truth.
Not true.
For example from the first link.
1. It was one test
2. It was one class
3. It did not score perfectly.
Then the second link
1. The 'passing' score was only barely reached, and that was 60%.
2. Hardly the only thing that goes into becoming certified.
3. The text suggests this is not something new. They have run this test before, and this is just the first time it got a score that high.
4. Why would it matter? There are studies suggesting medical errors are among the top 10 causes of death in the US - possibly as high as the top three. So are you worried that the software might make the wrong choice?
|
|
|
|
|
Don't you see that ChatGPT is only going to get "smarter" with time? Don't you see that?
It's passing all the tests - barely, but passing. It won't be long at all before it passes all the tests with 100% scores.
Humans make silly mistakes, like forgetting to remove all the gauze from a site before sewing up. AI bots will not forget.
I will be laughing at all of this, especially at you haters and doubters, every day till I die.
|
|
|
|
|
Slacker007 wrote: I will be laughing at all of this, especially at you haters and doubters, every day till I die.
Agreed. I'm amazed by the number of developers and computer scientists (supposedly smart people) who are burying their heads in the sand on this one.
Automation and robotics will be eliminating physical / manual jobs soon enough. AI will be eliminating MANY white collar jobs in roughly the same timespan.
The world needs to figure out what to do with 8.5 billion idle humans.
|
|
|
|
|
|
Slacker007 wrote: AI bot haters and doubters remind me of this historical event:
Hindsight successes do not prove predictions of the future.
|
|
|
|
|
fgs1963 wrote: The world needs to figure out what to do with 8.5 billion idle humans.
Maybe.... this?[^]
M.D.V.
If something has a solution... why do we have to worry about it? If it has no solution... for what reason do we have to worry about it?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.
|
|
|
|
|
Humans feel most comfortable at a temperature of ~20-22 degrees Centigrade (293-295 Kelvin) and have a body temperature of 37 degrees Centigrade (310 Kelvin). Given 8.5 billion people, each of whom produces ~100 W of heat, the total usable energy is:
8.5 × 10⁹ × 100 × (310 − 295) / 295 ≈ 43 GW.
This is the total power output of about ten large power stations.
From this, you need to subtract the energy required for growing and distributing food, and for waste elimination, for all those bodies.
'The Matrix' is not very efficient at power production.
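The arithmetic checks out; here is a quick sketch reproducing the formula as written above (note it divides by the room temperature, 295 K - a textbook Carnot efficiency would divide by the hot reservoir, 310 K, and give roughly 41 GW instead):

```python
# Back-of-the-envelope 'Matrix' power estimate, as in the post above.
PEOPLE = 8.5e9    # world population
HEAT_W = 100.0    # waste heat per person, watts
T_BODY = 310.0    # kelvin (37 degrees C)
T_ROOM = 295.0    # kelvin (~22 degrees C)

# Efficiency factor exactly as written in the post: (T_hot - T_cold) / T_cold.
usable_w = PEOPLE * HEAT_W * (T_BODY - T_ROOM) / T_ROOM
print(f"{usable_w / 1e9:.1f} GW")  # roughly 43 GW
```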
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
|
|
|
|
Bender from Futurama: "Wouldn't it be better to use a potato? Or a battery?"
|
|
|
|
|
Obligatory XKCD.com
Bond
Keep all things as simple as possible, but no simpler. -said someone, somewhere
|
|
|
|
|
The easiest way to see what these "AI" models are doing is to look at the "art" models. The systems are very good at finding source material to steal (and there are lawsuits filed), but the rest is just a merge/morph operation with no real understanding of the material.
Sticking with the art model, if you tell it to "draw" a woman with red hair wearing a black dress, you'll get several reasonable representations. However, if you tell it to draw a Christmas parade, you'll get something that looks ok from a distance, but the people all have warped faces, or too many arms or some such issue.
This is because the system doesn't actually understand the material. It's the same with the text models. Tell it you want a paper on the theology of bed bugs, and it will dutifully go find a bunch of source material on theology and on bed bugs and attempt to merge the concepts into something that "sounds right" - a final product as nonsensical as the original request. GIGO.
Now, if you take this initial technology and use that to train the next model on the concepts of "person", "dog", "car", "love", etc. you might get another step closer. However, there still isn't a reasoning engine in the mix. Until then, these toys won't be able to pass the Turing test. For all the fluff and thunder in the news, there are just as many stories of how easily these simple models can be tripped up, fooled and twisted.
The true danger of "AI" at this point in time is how much people believe that it exists.
|
|
|
|
|
We as humans steal too. Artists steal ALL THE TIME; it's called "inspiration".
Most of you guys are critiquing and criticizing AI's abilities now, but I can only hope that you are all intelligent enough to see past the now, and into what it can and will be doing in the near future.
AI - angel to some, demon to others.
|
|
|
|
|
Slacker007 wrote: Most of you guys are critiquing and criticizing AI's abilities now, but I can only hope that you are all intelligent enough to see past the now, and into what it can and will be doing in the near future.
I'm old enough to remember the critiques from old mainframe programmers when PCs were introduced in the late '70s / early '80s. Much the same dismissiveness.
|
|
|
|
|
Slacker007 wrote: but I can only hope that you are all intelligent enough to see past the now,
I am intelligent enough to know that these sorts of claims show up every 10 years or so. And none of them pan out.
What's more, there are hundreds, even thousands (or more), of claims every single year about something that will 'revolutionize' this, that, or the other thing.
However, there is no such thing as a 'revolutionary' development. Everything new is built on the achievements of the past.
This latest cycle of AI is not, in fact, new. The companies involved have been trying to make these systems better for years, if not decades. And yet the current level is all they have achieved.
|
|
|
|
|
Slacker007 wrote: Don't you see that ChatGPT is only going to get "smarter" with time?
I see claims about that.
And claims that self driving cars are just around the corner.
And flying cars are just around the corner.
And autonomous robots are just around the corner (got to love the marketing videos of the robot company that has them dancing and opening doors.)
Slacker007 wrote: Humans make silly mistakes,
And when they attempt to predict the future that is where they fail all the time. Even the near future.
In 1970, Marvin Minsky told Life magazine, "In from three to eight years we will have a machine with the general intelligence of an average human being."
|
|
|
|