|
Don't you see that ChatGPT is only going to get "smarter" with time? Don't you see that?
It's passing all the tests, barely, but passing. It won't be long at all before it passes all the tests with 100% scores.
Humans make silly mistakes, like forgetting to remove all the gauze from a site before sewing it up. AI bots will not forget.
I will be laughing at all of this, especially at you haters and doubters, every day till I die.
|
Slacker007 wrote: I will be laughing at all of this, especially at you haters and doubters, every day till I die. Agreed. I'm amazed by the number of developers and computer scientists (supposedly smart people) who are burying their heads in the sand on this one.
Automation and robotics will be eliminating physical / manual jobs soon enough. AI will be eliminating MANY white collar jobs in roughly the same timespan.
The world needs to figure out what to do with 8.5 billion idle humans.
|
Slacker007 wrote: AI bot haters and doubters remind me of this historical event:
Hindsight successes do not prove predictions of the future.
|
fgs1963 wrote: The world needs to figure out what to do with 8.5 billion idle humans. Maybe.... this?[^]
M.D.V.
If something has a solution... Why do we have to worry about it? If it has no solution... For what reason do we have to worry about it?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.
|
Humans feel most comfortable at a temperature of ~20-22 degrees Centigrade (293-295 Kelvin), and have a body temperature of 37 degrees Centigrade (310 Kelvin). Given 8.5 billion people, each of whom produces ~100 W of heat, the total usable power is:
8.5 × 10⁹ × 100 W × (310 − 295) / 295 ≈ 43 GW.
This is the total output of about ten large power stations.
From this, you need to subtract the energy required for growing & distribution of food and waste elimination for all those bodies.
'The Matrix' is not very efficient at power production.
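A quick sanity check of the arithmetic, for anyone who wants to poke at the numbers (all values are taken from the post above; note the post divides by the room temperature, whereas a strict Carnot limit would divide by the body temperature):

```python
# Back-of-envelope check of the 'humans as batteries' figure above.
people = 8.5e9           # world population
heat_per_person = 100.0  # watts of metabolic heat each
t_body = 310.0           # kelvin (37 C)
t_room = 295.0           # kelvin (~22 C)

# Efficiency as used in the post: (T_hot - T_cold) / T_cold.
efficiency = (t_body - t_room) / t_room
usable_power = people * heat_per_person * efficiency  # watts

print(f"{usable_power / 1e9:.0f} GW")  # ~43 GW
```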
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
Bender from Futurama: "Wouldn't it be better to use a potato? Or a battery?"
|
Obligatory XKCD.com
Bond
Keep all things as simple as possible, but no simpler. -said someone, somewhere
|
The easiest way to see what these "AI" models are doing is by looking at the "art" models. The systems are very good at finding source material to steal (and there are lawsuits filed), but the rest is just a merge/morph operation with no real understanding of the material.
Sticking with the art model, if you tell it to "draw" a woman with red hair wearing a black dress, you'll get several reasonable representations. However, if you tell it to draw a Christmas parade, you'll get something that looks OK from a distance, but the people all have warped faces, or too many arms, or some such issue.
This is because the system doesn't actually understand the material. It's the same thing with the text models. Tell it you want a paper on the theology of bed bugs, and it'll dutifully go out, find a bunch of source material on theology and on bed bugs, and attempt to merge those concepts into something that "sounds right". The final product will be as nonsensical as the original request. GIGO.
Now, if you take this initial technology and use that to train the next model on the concepts of "person", "dog", "car", "love", etc. you might get another step closer. However, there still isn't a reasoning engine in the mix. Until then, these toys won't be able to pass the Turing test. For all the fluff and thunder in the news, there are just as many stories of how easily these simple models can be tripped up, fooled and twisted.
The true danger of "AI" at this point in time is how much people believe that it exists.
|
We as humans steal too. Artists steal ALL THE TIME; it's called "inspiration".
Most of you guys are critiquing and criticizing AI's abilities now, but I can only hope that you are all intelligent enough to see past the now, and into what it can and will be doing in the near future.
AI - angel to some, demon to others.
|
Slacker007 wrote: Most of you guys are critiquing and criticizing AI's abilities now, but I can only hope that you are all intelligent enough to see past the now, and into what it can and will be doing in the near future. I'm old enough to remember the critiques from old mainframe programmers when PCs were introduced in the late '70s / early '80s. Much the same dismissiveness.
|
Slacker007 wrote: but I can only hope that you are all intelligent enough to see past the now,
I am intelligent enough to know that these sorts of claims show up every 10 years or so. And none of them pan out.
More so, there are hundreds and even thousands (or more) of claims every single year about something that will 'revolutionize' this, that, or the other thing.
However, there is no such thing as a 'revolutionary' development. Everything new is built on the achievements of the past.
This latest cycle of AI is not in fact new. The companies involved have been trying to make them better for years if not decades. And yet the current level is all that they have achieved.
|
Slacker007 wrote: Don't you see that ChatGPT is only going to get "smarter" with time?
I see claims about that.
And claims that self driving cars are just around the corner.
And flying cars are just around the corner.
And autonomous robots are just around the corner (got to love the marketing videos of the robot company that has them dancing and opening doors.)
Slacker007 wrote: Humans make silly mistakes,
And when they attempt to predict the future that is where they fail all the time. Even the near future.
In 1970, Marvin Minsky told Life magazine, "In from three to eight years we will have a machine with the general intelligence of an average human being."
|
Reinforcement Learning from Human Feedback (RLHF) is the technique that ChatGPT is based on.
With a huge dataset taken from the internet, plus feedback from everyone who uses it telling it "correct" or "incorrect", it will "learn", but it won't learn the same way humans do.
It won't have imagination. It won't have intuition. It won't have any of the human characteristics that make up what we humans call intelligence.
That's why it'll always be ARTIFICIAL INTELLIGENCE. An IMITATION of intelligence. Never the real thing. Let's get real: it's no more than a system running an algorithm. Treat it as such and it's no more than a fancy toy. Treat it like your new "GOD" and, well, "it'll become faster, smarter" and all of that lah-dee-dah. Time to choose, humans.
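The feedback loop described above can be caricatured in a few lines. This is a toy sketch, not the actual RLHF pipeline: the canned answers, the rater's preference, and the weight-nudging update rule are all made up for illustration.

```python
import random

random.seed(0)  # make the toy run reproducible

# A "model" that just holds a preference weight per canned answer.
candidates = ["answer A", "answer B", "answer C"]
weights = {c: 1.0 for c in candidates}

def human_feedback(answer):
    # Stand-in for a human rater who happens to prefer "answer B".
    return 1.0 if answer == "answer B" else -0.5

for _ in range(1000):
    # Sample an answer in proportion to current weights...
    choice = random.choices(candidates,
                            weights=[weights[c] for c in candidates])[0]
    # ...and nudge its weight up or down based on the rating.
    weights[choice] = max(0.01, weights[choice] + 0.1 * human_feedback(choice))

best = max(weights, key=weights.get)
print(best)  # "answer B" -- the rater's preference, learned, not understood
```

The point of the caricature: the loop optimizes for approval, not for understanding.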
|
If it's not linked, it's "unknowable". There will be a new movement to not record anything; we'll just exchange information with winks and nods so it can't be used.
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
|
So the ubiquitous cameras will be used by the AI to learn the "winks and nods" language.
Resistance is futile!
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
Not being an anthropologist or a sociologist or whatever, I have recently been thinking (because I can) about the nature of "intelligence" (and sapience).
Particularly when people assert that many of the other animals on our planet are also sapient and intelligent -- which I don't deny.
But that means that "something else" must separate humans (and probably our extinct proto-human ancestors) from "the lower animals" -- but not "intelligence".
Because certainly "something" separates us from even our closest opposable-thumbed relatives.
Either that or there is disagreement on what constitutes intelligence -- I'm probably too stupid to have it explained to me.
While we humans have amassed a large collection of knowledge and technical ability I don't think we have actually become more intelligent than our stone-age forebears. In particular, we have the technology to transfer knowledge to others.
Consider: One day a cave man -- who did not have a stone hammer -- made a stone hammer, the first stone hammer ever conceived in the history of mankind. He did not become more intelligent by making the stone hammer, he already had the intelligence required to do it. But he gained knowledge and technical skill. He could then teach others to do the same, and those others gained knowledge and technical skill, but not intelligence -- they also already had the required intelligence to grasp the concept and the benefits to their lifestyles.
Regardless of what the CIA thinks, I don't think knowledge and skill equate to intelligence.
In short, I don't think an AI has intelligence at all, it has only knowledge -- and logic programmed into it about how best to manipulate and present that knowledge.
The only way to win is not to play the game.
|
PIEBALDconsult wrote: But that means that "something else" must separate humans
Does it? Is it really a categorical difference? Could it be so simple as elephants not ending up with opposable thumbs, for example? Or rats didn't lose their fur so that they could develop oral communication, etc?
I'd argue that if there is a thing that separates us from animals, it's probably a combination of things. Opposable thumbs, highly articulated language, the inclination to alter our environment rather than adapt to it. All of these things led us to the top of the proverbial pecking order in nature's hierarchy of life.
I don't know if there's any one thing that's fundamentally different from other animals, at least in terms of category. Other animals have language (dolphins), but not as articulate. Other animals alter their environment (beavers); other primates have opposable digits, as do many birds.
With the disclaimer that I am not religious: I remember reading the fall of man story in the Old Testament and coming away thinking that (based on my exegesis) part of what it was saying is that what makes us different from the rest of the animals is the ability to define and articulate elaborate moral frameworks ("the tree of knowledge of good and evil"). I thought that was interesting.
To err is human. Fortune favors the monsters.
|
So I think we are basically in agreement.
By "something else" I don't necessarily mean it has to be one thing, it may be a subtle variety of smaller things combined.
Yes, there are "lower animals" which do some of the things we do, such as communicating, using simple tools, making shelters, altering the environment (e.g. beavers making ponds). But we're the only ones who do all of them and more and to a mind-boggling extent. Surely our ancestors learned from them.
I don't think we can make assumptions about the morals of other animals, especially cats.
|
PIEBALDconsult wrote: But that means that "something else" must separate humans (and probably our extinct proto-human ancestors) from "the lower animals" -- but not "intelligence". The simplest concept of what that "something else" is, is the ability to choose. Animals generally respond instinctively (they can be trained to not respond instinctively, but that's still not choice.) We humans are unique in that we can choose not to respond by instinct. Ooh, that cake looks delicious, I'm going to eat it. Or, nice cake, but I'm watching my calories. Or I'm lactose intolerant so eating that would not be a good idea. Conversely, my cat loves to chew on certain plant leaves regardless of how many times he barfs them up later.
Therefore, I would say that intelligence is making good choices based on knowledge and skill, and also making poor choices for reasons we are conscious of but choose to ignore.
|
I'll need to give that more thought. Personally, I'm unclear on what constitutes instinct anyway, so I may be a bit lost.
As to choice, I'd still be unsure where to draw the line. For instance: When a pack of predators attacks the weakest members of a herd of prey, is that instinct or choice? Wouldn't instinct demand they attack the largest/meatiest? Is attacking the weakest members a learned strategy?
This reminds me of "A Beautiful Mind".
I think humans have probably lost much of the instinct our ancestors must have had and replaced it with learned knowledge.
Maybe that's what makes the difference today, but there still must have been chooser-zero who had the ability and acted on it. Probably some bratty kid refusing to eat his mammoth.
|
PIEBALDconsult wrote: Is attacking the weakest members a learned strategy?
The most energy for the least work, plus some morale/long-term thinking? Human babies could grow up and eventually help us, so we let them be; some animals think very differently. (This sort of went dark awfully fast.) At any rate: AI could definitely do that, since it's basically just math:
Calculus of variations - Wikipedia[^]
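That "most energy for least work" choice really does reduce to arithmetic. A toy version (the prey names, calorie counts, and effort values are all made-up illustrative numbers):

```python
# Pick the prey with the best calories-per-effort ratio.
prey = [
    {"name": "large bull",  "calories": 90000, "effort": 100},
    {"name": "weak calf",   "calories": 30000, "effort": 10},
    {"name": "healthy doe", "calories": 50000, "effort": 40},
]

best = max(prey, key=lambda p: p["calories"] / p["effort"])
print(best["name"])  # "weak calf" -- 3000 calories per unit of effort
```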
|
PIEBALDconsult wrote: Is attacking the weakest members a learned strategy?
It certainly is learned for larger carnivorous mammals. This is easiest to see in packs, but even among non-pack animals, younger animals often have to survive on smaller prey because they keep picking the wrong prey animal to attack.
|
As your cat does with the leaves.
|
nepdev wrote: A school should prepare kids for life, give them some competence they can use, some knowledge they can apply, make them curious to create and use their imagination. This is why I sent my kid to a Waldorf School K-12; I was fortunate to have the ability to do so. I was stunned when, a couple of months ago, he called me simply to say how much he appreciated that I had done that. That was nice.
|