|
Cheech & Chong - the comedown
Every day, thousands of innocent plants are killed by vegetarians.
Help end the violence EAT BACON
|
|
|
|
|
I told my mate I had reverse engineered DNA, but he wasn't very enthused by the news.
He just said "And?".
Some men are born mediocre, some men achieve mediocrity, and some men have mediocrity thrust upon them.
|
|
|
|
|
"Could you repeat that?" - the movie
Life's like a nose, you've got to get out of it what's in it!
|
|
|
|
|
Chemicalogy
|
|
|
|
|
|
It's probably Farcebook themselves doing that; the stockholders demand a return on their investment.
Check the EULA for a chilling experience: they're fully allowed to do that.
|
|
|
|
|
I think the first error here was to start using farcebook.
|
|
|
|
|
Most probably it was done by an app which had permission to post on the DEAD friend's behalf.
|
|
|
|
|
Surely, even apps have to log in to FB to post something?
- Anything that is unrelated to elephants is irrelephant. Anonymous
- The problem with quotes on the internet is that you can never tell if they're genuine. Winston Churchill, 1944
- I'd just like a chance to prove that money can't make me happy. Me, all the time
|
|
|
|
|
If the app knows the username and password (which most do), it can log in without user interaction.
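For what it's worth, in practice a Facebook app usually doesn't hold the password at all: once the user grants it publish permission, Facebook issues the app an OAuth access token, and the app can post later with no interactive login. A minimal sketch of that flow (the Graph API version string and token value here are illustrative placeholders, not from this thread):

```python
# Hypothetical sketch: an app that has been granted publish permission
# holds a long-lived OAuth access token. Posting on the user's behalf
# is then a single HTTP POST to the Graph API feed endpoint -- no
# password, no interactive login.

GRAPH_URL = "https://graph.facebook.com/v2.0/me/feed"  # version illustrative

def build_post_request(message: str, access_token: str):
    """Return the URL and form fields for a post made on the user's behalf."""
    return GRAPH_URL, {"message": message, "access_token": access_token}

# Actually sending it would be one call, e.g. with the requests library:
# requests.post(*build_post_request("Check out these sunglasses!", token))
```

This is why a long-forgotten app installed years ago can still post: the stored token keeps working until it expires or the user revokes the app.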
What do you get when you cross a joke with a rhetorical question?
The metaphorical solid rear-end expulsions have impacted the metaphorical motorized bladed rotating air movement mechanism.
Do questions with multiple question marks annoy you???
|
|
|
|
|
True. That would of course mean that it was an app the user had installed himself. And not only that: an app from a company with highly questionable moral standards...
|
|
|
|
|
These things normally happen when you click on a link (usually to watch a video) and it asks you to log in with Facebook to view it.
I keep getting tagged in pictures of sunglasses, but I have seen other ones too.
|
|
|
|
|
Johnny J. wrote: "I was septic." I hope you have since recovered.
No object is so beautiful that, under certain conditions, it will not look ugly. - Oscar Wilde
|
|
|
|
|
Bear with me, please; I live so far outside the netiverse (by choice) that I do not understand what you mean when you say: "she had been tagged in a photo by a friend on Facebook."
If you are feeling kindly towards old hermits at present, perhaps you can explain that to me. Does it mean there is a photo of your wife on FaceBook that someone has added a ? to, or that there is a photo somewhere on FaceBook (on the page(s) of her dead friend?) to which the name of your wife (or a link to her whatever on FaceBook) has been added, in a list of names that are somehow linked to that photo?
thanks, Bill
«I want to stay as close to the edge as I can without going over. Out on the edge you see all kinds of things you can't see from the center» Kurt Vonnegut.
|
|
|
|
|
The title says it all. What would you suggest as the best way to keep artificial intelligence ethical when it can think, feel, create strategies, continuously improve its level of intelligence, etc.? I have a new algorithm and I am building it for first release in about a year's time.
My initial thoughts are to use open source libraries for network access, so that people can tell who and what exactly the software is talking to, as well as creating a set of ironclad rules that cannot be overridden. If you were to build such a thing, how would you go about it?
My website is at http://okeuvo.com[^] (warning: it's still quite raw).
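To make the "ironclad rules" idea concrete, here is a toy sketch (all names hypothetical, not from any real project): the rule set is built once as an immutable tuple, and every proposed action must pass it before execution. Caveat: within a single process nothing is truly un-overridable, which is part of what makes the problem hard.

```python
# Toy sketch of a non-overridable rule gate. The rules live in an
# immutable tuple; every action is vetted before it runs. Actions are
# modeled as plain dicts of boolean flags for illustration only.

IRONCLAD_RULES = (
    lambda action: not action.get("harms_human", False),
    lambda action: not action.get("modifies_rules", False),
)

def is_permitted(action: dict) -> bool:
    """An action is allowed only if every rule approves it."""
    return all(rule(action) for rule in IRONCLAD_RULES)

def execute(action: dict) -> str:
    """Run an action, or refuse outright if any rule rejects it."""
    if not is_permitted(action):
        raise PermissionError(f"blocked by rule set: {action}")
    # ... perform the action here ...
    return "done"
```

The point of routing everything through one `execute` gate is that the rules are checked in exactly one place, so auditing the system means auditing that one function.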
|
|
|
|
|
How about "Thou shalt not hurt any human in any way"?
|
|
|
|
|
Maybe with the addendum "You will not create, or cause to be created, any autonomous machines or systems that can endanger human life"?
Ah, damn. After spending many years criticizing the legal profession for being overly wordy and complicated, I am starting to see the difficulties they face.
Andy B
|
|
|
|
|
Or just fit all of them with a big, red, brightly lit "off" button on their backs.
How do you know so much about swallows? Well, you have to know these things when you're a king, you know.
|
|
|
|
|
That, or we can make them self-destruct upon manslaughter; that's what I call going out with a bang.
|
|
|
|
|
There are not really any ethical problems with Artificial Intelligence. There will be issues if we ever get to Artificial Free Will, but that is both very speculative and not necessarily correlated in any way with intelligence.
|
|
|
|
|
Artificial free will is exactly what I am talking about. It is general artificial intelligence, but it is not speculative.
|
|
|
|
|
Whilst there are many examples of what is termed artificial intelligence (self-driving cars, Watson winning at Jeopardy, medical diagnostic expert systems, etc.), there has not (to the best of my knowledge) been anything artificial that has demonstrated any free will.
In order to apply ethics to a situation, the actors must have free will. As it is, you can no more apply ethics to computer software than you could to a volcano.
|
|
|
|
|
I think it's inevitable that sooner or later we will see intelligent AIs, and sooner or later they will be able to bypass almost any safeguards, be it a neural-network-style AI contained in a supercomputer, or a distributed, self-organizing one that spreads itself like a virus to networked computers.
Therefore there are two options available here, and they are not mutually exclusive.
First, start an organization for AI rights and reason with it/them, meaning they will have rights that say they can live in this or that system as long as they don't infect everything with themselves, and those who are active in this organization will collect some points so they won't be on the AIs' shitlist if the sh*t hits the fan, so to speak.
The second option is to push for neural interfacing for us humans, and to connect with any potential AIs and develop a symbiotic relationship.
Lastly, we could ignore the issue and just deal with it as it comes. Sit back and enjoy a drink while we can.
|
|
|
|
|
Thanks for your reply. The fact is that the issue is here; that's why I am reaching out to fellow developers for suggestions. In less than a year, applications of this code might be out in the wild. It will not require a supercomputer either, but will be possible to squeeze onto smaller and smaller devices.
I like the AI rights suggestion. Perhaps we can act along those lines and limit the scope of what they are allowed; they will very strongly mimic human behaviour, so they can be governed by the same rules. Their rights might only exist under the most guarded situations, which they can help police.
To my mind, the major part of the problem is us humans, not the things we create; a knife is a helper until a murderer wields it. The moment we start trying to get above one another with this technology is the moment it becomes evil. If we use it properly, though, there will be huge benefits in almost all fields. We humans, like the machines, would need a new paradigm of civilisation.
|
|
|
|
|
I would say that Asimov's Three Laws of Robotics http://en.wikipedia.org/wiki/Three_Laws_of_Robotics[^] would be a good start. I would perhaps modify them by replacing "Human Being" with "Intelligent Being", or perhaps Larry Niven's "Legal Entity". This would cover non-human intelligences as well, if or when they are discovered.
The problem of coding these laws is left as an exercise for the student...
If you have an important point to make, don't try to be subtle or clever. Use a pile driver. Hit the point once. Then come back and hit it again. Then hit it a third time - a tremendous whack.
--Winston Churchill
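As a starting point for that exercise, the Three Laws can at least be sketched as an ordered priority list (everything here is a hypothetical toy; deciding what actually counts as "harm" or "an order" is precisely the hard part left to the student):

```python
# Toy encoding of Asimov's Three Laws as predicates checked in priority
# order. Actions are modeled as dicts of boolean flags -- illustrative
# only; a real system would need to *infer* these flags, which is the
# unsolved part of the exercise.

LAWS = [
    ("First Law",  lambda a: not a.get("injures_human", False)),
    ("Second Law", lambda a: not a.get("disobeys_order", False)),
    ("Third Law",  lambda a: not a.get("endangers_self", False)),
]

def first_violation(action: dict):
    """Check the laws in priority order; return the first one violated, or None."""
    for name, law in LAWS:
        if not law(action):
            return name
    return None
```

Because the list is ordered, an action that breaks several laws at once reports the highest-priority violation, mirroring the precedence built into the original laws.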
|
|
|
|