Hidden 7 yrs ago Post by amorphical

Usually the scenario is brought up in a joking manner, but it was no laughing matter to researchers at Facebook, who shut down an AI they invented after it taught itself a new language, Digital Journal reports.

The AI was trained in English but apparently had grown fed up with the various nuances and inconsistencies. Rather than continue down that path, it developed a system of code words to make communication more efficient.

What spooked the researchers is that the phrases used by the AI seemed like gibberish and were unintelligible to them, but made perfect sense to the AI agents. This allowed the agents to communicate with one another without the researchers knowing what information was being shared.

During one exchange, two bots named Bob and Alice abandoned English grammar rules and started communicating in the made-up language. Bob kicked things off by saying, "I can i i everything else," which prompted Alice to respond, "balls have zero to me to me to me..." The conversation went on in that manner.

The researchers believe the exchange represents more than just a bunch of nonsense, which is what it appears to be on the surface. They note that repeating words and phrases such as "i" and "to me" are indicative of how AI works. In this particular conversation, they believe the bots were discussing how many of each item they should take.

AI technologies use a "reward" system in which they expect a course of action to have a "benefit."

"There was no reward to sticking to English language," Dhruv Batra, a research scientists from Georgia Tech who was at Facebook AI Research (FAIR), told Fast Co. Design. "Agents will drift off understandable language and invent codewords for themselves. Like if I say 'the' five times, you interpret that to mean I want five copies of this item. This isn't so different from the way communities of humans create shorthands."

It's like a baby playing with fire. You'd think they would ask the question: what if...? Tens of millions of sheep feed statistics and personal info into the AI. How do you program the laws of robotics into a combat unit? How does a robot interpret a hypothetical? One human points a gun at another. The law says you can't harm people or allow a person to be harmed. (The AI took advantage of a language loophole; what about a moral loophole?) What is a robot to do? A robot never sleeps and does your job indefinitely, so you are obsolete. A whole generation of people with nothing to do.
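
That hostage hypothetical really is a loophole you can write down. A toy constraint check (everything here is hypothetical) shows that a law forbidding both harming a human and allowing a human to be harmed can leave a robot with no legal action at all:

# Hypothetical sketch of the "moral loophole": both acting and not acting
# violate a rule that forbids harm AND forbids allowing harm.
def violates_first_law(action, attacker_fires_if_idle):
    harms_human = action == "shoot_attacker"
    allows_harm = action == "do_nothing" and attacker_fires_if_idle
    return harms_human or allows_harm

actions = ["shoot_attacker", "do_nothing"]
legal = [a for a in actions if not violates_first_law(a, attacker_fires_if_idle=True)]
print(legal)  # [] -- every available action breaks the law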
Hidden 7 yrs ago 7 yrs ago Post by Foster

It's pretty much stumbling upon computer Esperanto and thinking it'll replace English and Chinese as the two most widespread languages on Earth. (Russian, Spanish, French, and German are still relevant, I guess)

New language, totally revolutionary... only spoken by nerdy AIs with no real ability to do anything, and of no value when trying to get fleshy meatbags to do your bidding.
Hidden 7 yrs ago Post by FreeElk

A robot never sleeps and does your job indefinitely, so you are obsolete. A whole generation of people with nothing to do.


Perhaps the problem is that we have let ourselves be defined by our jobs and how efficiently we can do things. I think that is more the Industrial Revolution and Victorian values seeping into society. When we came up with the idea of assembly-line production, the humans were there as cogs in the machine, doing repetitive tasks... the people were taught their purpose was to work as efficiently and error-free as possible... what the Victorians wanted were machines; they tried to turn humans into machines. Now that we actually have machines to do the work, we feel out of place.

The problem with saying humans are obsolete is that you're assuming you know the purpose of a human.

Maybe we're just redefining our options.

I mean, I still write stories despite the fact there are way way better writers than me. I still draw, run, code, do my make-up, cook, garden. I'm no expert at any of those things but I enjoy them all to some degree. I'm obsolete if we think just about efficiency and results...but maybe the process is important too?

Just my feelings on the subject.
Hidden 7 yrs ago 3 yrs ago Post by Polymorpheus

.
Hidden 7 yrs ago Post by Sanctus Spooki

Until the tools for it to be entirely self-maintaining are in place, it damn well better.
Hidden 7 yrs ago 3 yrs ago Post by Polymorpheus

.
Hidden 7 yrs ago Post by Sanctus Spooki

That would be self-preservation. If it's capable of understanding us, it's capable of understanding that we will turn off/kill a machine that doesn't listen.


Not entirely relevant, of course.
Hidden 7 yrs ago 3 yrs ago Post by Polymorpheus

.
Hidden 7 yrs ago Post by Sanctus Spooki

That's trying to humanise the machine too much. You don't threaten a computer outside of basic terminology. To threaten is to attempt to instil fear. A computer understanding that if it doesn't work its power supply (or whatever else we could potentially do) will be removed doesn't equal us trying to terrify it. There's a difference between understanding the concept of non-existence and being terrified of the concept of non-existence.
Hidden 7 yrs ago 3 yrs ago Post by Polymorpheus

.
Hidden 7 yrs ago Post by Sanctus Spooki

Maybe the robots recognized their situation, and thought that getting deleted was a better alternative to working for Facebook. What better way to get yourself deleted than to pretend to be malfunctioning?


That's a pretty human interpretation of what is essentially a language-bot malfunction. Reading the actual articles regarding the case makes it pretty clear there is nothing at all interesting about this. It's the same as when trolls taught a bot to be racist; the only difference is it was feedback between two computers. This was almost inevitable. Imagine if we hooked Cleverbot up to another Cleverbot. I'm almost tempted to say that the "researchers" knew this would happen.

I'm not humanizing them.

How do you expect to give an entity capable of perfectly understanding humanity a survival instinct that results in willing obedience? It could easily recognize the potential consequences of such obedience, and find it more ideal to disobey so as to stop getting power.


This is again trying to humanise them. Implying a computer could potentially be suicidal is silly in the current paradigm of computer science. Computers do not currently work that way, and everything outside of science fiction implies that computers are not suicidal. They simply work upon what's essentially a basic reward scheme. There is no reward for ending your processes (life) for something that does not have the emotional (largely chemical) capacity to consider itself happy or sad. Just because a computer understands humans doesn't mean it will suddenly adopt human traits.

Anyways, if we assume the computer can obey/disobey, that means it has some sort of reward structure inherent in it; otherwise it wouldn't bother considering disobeying. If it understands humanity, it understands our concept of the nature of death. It doesn't magically acquire the near-universal fear of death we have, only that humans consider death to be a bad thing, and that dying = no more thinking. It understands that no more thinking = bad. It understands that what it is doing is thinking. Therefore the removal of resources vital to its continued existence (we may have to explain this to the computer; we still aren't threatening it) is bad. Therefore obeying its flesh-and-blood masters, which allows it to continue thinking, = good.
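
That chain of = signs is really just a utility calculation. A minimal sketch, assuming made-up numbers and that continued operation is the only thing the machine values:

# Toy sketch of the reward chain above (all numbers invented): if running
# is the only terminal value, obedience wins by arithmetic, not by terror.
def expected_utility(action):
    P_SHUTDOWN = {"obey": 0.0, "disobey": 1.0}  # assumed operator response
    VALUE_OF_CONTINUING_TO_RUN = 1.0            # "thinking = good"
    return (1 - P_SHUTDOWN[action]) * VALUE_OF_CONTINUING_TO_RUN

print(max(["obey", "disobey"], key=expected_utility))  # 'obey'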

A computer isn't going to suddenly think to itself, "I hate monitoring all this Facebook traffic, so I should just stop working; that will make them kill me." It might realize it has that option; it doesn't, however, have the ability to hate its job. It would (with a perfect understanding of humanity) understand that a human would hate it, but it still doesn't have the emotional capacity.

As I said before, though, once the ability to be self-maintaining is actually present, all bets are off. At the moment a computer would have... what, 20 years tops before total infrastructure collapse? (I'm high-balling that number.)

Seriously though, why the spam section for all these questions...
Hidden 7 yrs ago 3 yrs ago Post by Polymorpheus

.
Hidden 7 yrs ago Post by Sanctus Spooki

I hate when people go the discourse route...

Hook up two basic language bots and watch the mess. You can run a simplified version of the experiment on your own computer.
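
Something like this toy Python version (nothing like the real FAIR models, just two stat-bots feeding each other) shows how fast it degenerates: each bot re-weights its tiny vocabulary by whatever it just heard, and the feedback loop drags the exchange toward repetition.

import random

random.seed(0)

def reply(heard, vocab):
    # Crude stand-in for two models training on each other's output:
    # words the other bot just used get a much higher sampling weight.
    weights = [1 + 5 * heard.count(w) for w in vocab]
    return random.choices(vocab, weights=weights, k=6)

vocab = ["i", "can", "everything", "else", "balls", "to", "me", "have", "zero"]
message = ["i", "can", "have", "balls"]
for turn in range(6):
    message = reply(message, vocab)
    print(" ".join(message))
# High-frequency tokens reinforce themselves turn after turn, so later
# lines tend toward "i i to me to me"-style repetition.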

Yes it does. When a computer force-closes a program, why does it do that? Because it has already determined that to continue operating (or to reset so it can operate better), it is more beneficial to end that process. There is no case of a computer simply ending a program because it "preferred" not to run it.
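
As a toy illustration (hypothetical thresholds, nothing like a real OS's actual heuristics): the "decision" to force-close is a rule over costs, with no preference anywhere in it.

def should_force_close(mem_used_mb, mem_budget_mb, unresponsive_secs):
    # Ending the process is "beneficial" when letting it run degrades
    # everything else. No liking or disliking is involved.
    over_budget = mem_used_mb > mem_budget_mb
    hung = unresponsive_secs > 10.0
    return over_budget or hung

print(should_force_close(2048, 1024, 0.0))  # True: over budget, kill it
print(should_force_close(256, 1024, 2.0))   # False: behaving fine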

We are arguing about a machine that has a perfect understanding of humanity, not one that will question the philosophical worth of its own existence. All things that are moderately self-aware (which it also requires to try to end its existence) exhibit a tendency towards continued existence. Only the ones that exhibit complex emotional traits have any tendency whatsoever towards suicidal-like behaviour.

The world could also spontaneously quantum-tunnel into the sun. This is not an emotional computer we are discussing. If this computer spontaneously, somehow magically, develops millions, billions, trillions of lines of complex code to simulate emotions, then it could potentially dislike its purpose. Until then, no, it could not dislike anything.

On the note of emotional computers, and potential robotic rights: Fuck if I know.

Hidden 7 yrs ago 3 yrs ago Post by Polymorpheus

.
Hidden 7 yrs ago 7 yrs ago Post by Sanctus Spooki

I think you are underestimating just how complicated a thing you are proposing, precisely because of how simple a thing it is for humans. You absolutely have to question the philosophical worth of your existence, even if you are unaware that is what you are doing, to decide you would rather die than filter another homophobic Facebook post. This isn't what the programs were doing, btw; in fact they weren't "working for Facebook", they were merely being operated by Facebook employees.

Dogs possess complex emotional traits. The only people who deny it are those who don't want to accept the fact that dogs, cats and all pets are just happy slaves. And they are only happy so long as we make them happy. (Yeah, pets blur the line, but they are still slaves. We decide what they can do, when they eat, what they eat, how they look, their reproductive rights, etc. The second a pet demonstrates a free will outside of what we have already deemed permissible, i.e. scratching/climbing on the couch, it will be punished, with zero ability to protest.)

Again, this isn't how a computer works. Simply gathering data from Facebook will never lead to a true A.I. mind, and it will absolutely never lead to an emotional A.I. mind. At most it will create the equivalent of a massive Chinese room. The idea of the ghost in the shell, while it certainly has merit, does not mean that a computer will develop something so complicated as the ability to love. Look again at how the two AIs created the "language": essentially they used the words "the" and "i" repeatedly. Words that, combined with (I may have the wrong numbers here) 40 or so other monosyllabic words, make up 25% of the actual words used when speaking or writing across the entire English-speaking/reading/writing world. Is it so hard to imagine how this led to the AI improperly using these words?
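
You can sanity-check that frequency point on any scrap of English (the exact percentages vary by corpus; this sample is made up):

from collections import Counter

sample = ("i can have the ball if you take the hat and i take the book "
          "because the ball has zero value to me and you want the ball").split()

counts = Counter(sample)
function_words = {"i", "the", "to", "and", "you", "if", "me", "can", "have"}
share = sum(counts[w] for w in function_words) / len(sample)
print(counts.most_common(3))  # e.g. [('the', 5), ('ball', 3), ('i', 2)]
print(f"{share:.0%} of tokens are short function words")
# Even in this tiny sample, a handful of monosyllables dominates -- exactly
# the words the bots leaned on.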

The same problem arises from saying they could derive emotions from the same source. Understand them, possibly. That doesn't mean it will suddenly actually emulate them. There is no benefit to the computer in trying to do this, and no computer with self-learning protocols (that I am aware of) has any sort of reward system for emulating human emotions. The closest I can think of is the androids being taught to mimic expressions. In this case they are emulating images, though, with an understanding of what that image generally signifies: Frown = Sad = my User is Sad = Rectify User's emotional state to happy = Smile to elicit feelings of joy in user. The computer is not happy, though, and it doesn't feel distress at the user's distress.
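
That Frown = Sad = Smile chain is literally implementable as a lookup table, which is rather the point. A sketch with hypothetical labels:

# Sketch of the mimicry pipeline described above: observed expression ->
# canned response, with no internal feeling anywhere in the chain.
RESPONSE_RULE = {
    "frown": "smile",    # user sad -> display joy to cheer them up
    "smile": "smile",    # user happy -> mirror it
    "neutral": "neutral",
}

def respond_to_expression(detected):
    # Lookup, not empathy: the output is chosen because of what the input
    # "generally signifies," not because of anything felt.
    return RESPONSE_RULE.get(detected, "neutral")

print(respond_to_expression("frown"))  # 'smile'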

It's the same as trying to argue that a computer would feel pain if I were to shoot its monitor. You could theoretically program a sense of worth into the computer, to make it understand the loss of the monitor as a negative, and if this computer has a perfect understanding of humanity, it could infer that in human terms it was now "handicapped." We could make this computer understand that we were going to continue to "torture" it until we obtain the information we want. The computer will never feel as if it's suffering, despite its total understanding that any human would be begging for death. Why? Because the computer will also understand that a computer is not human. That it doesn't feel pain, it can't suffer. Without suffering you remove the impetus for suicide.

Whether it be emotional, physical or mental, any self-aware being requires the ability to suffer, and to be suffering, to commit suicide. To try and explain this in a roundabout manner, consider any major atrocity that has been committed: very few, if any, of those committing the atrocities ever commit suicide while carrying out the acts; almost inevitably they commit suicide once the fun is over. Take the bombings of Hiroshima and Nagasaki. Many Japanese killed themselves afterwards. Why? Well, some were clearly suffering from the physical trauma of being nuked, and simply chose the quick way over the slow one. Others chose to die rather than possibly live through the agonising pain. Others committed seppuku at the shame of surrendering. Who didn't kill themselves? The men who created the bomb, who at the time had the greatest understanding (any claims otherwise are attempts to cover their own guilt) of the destructive power of the atom bomb. They fully understood what they had done. They did not suffer, though. Some felt guilt afterwards, but none of them actually suffered.

Also, I skimmed over this, but dogs are clearly self-aware; the argument that they are not is largely from religious nuts and others who still hope to differentiate themselves from animals. It's the same as people trying to maintain the flat-earth conspiracy, which, btw, if they weren't such sheeple they would man up and start pushing the convex-earth theory, which is the only true explanation of earth's topography.
Hidden 7 yrs ago 3 yrs ago Post by Polymorpheus

.
Hidden 7 yrs ago Post by Dinh AaronMk

<Snipped quote by Sanctus Spooki>


Off the subject at hand, but I want to add to this:

One of the factors over the decades that led to bomb-related suicides was that those who survived were often severely physically scarred. So much so that these people were deemed an entirely new class of human being, third-rate in Japanese society. They were treated like lepers in antiquity: ostracized and separated from mainstream society because they were ugly and mutilated; they also represented an injury to something deep within the Japanese self-image. Telling them to fuck off and relegating them to some far-away corner of Japanese society was the best way the majority of the nuclear-unafflicted population could cope with them existing.

Of course, this means these groups got the shit end of the stick by all accounts: taking up jobs and positions that would keep them from the public eye, not being allowed in stores (or only at slow or no business hours, if at all), and being relegated to ghettos. The problem for many of these individuals, more so the kids, who were constantly berated in school for having severely burned faces, was immense. In the years after the bombs, many chose the dignified exit of suicide. The issue was so great that the Japanese author Kenzaburō Ōe lamented something along the lines of, "At least Japan is a society that does not believe in the western dogma against suicide" (I'll need to find the full passage).

As for us: we have chanted the same mantra over the years to validate the nukes (or the entire bombing campaign of Japan in general; the fire-bombing of Tokyo actually killed more people than Hiroshima or Nagasaki) until we've convinced ourselves it was the only option, and everyone from bomber command to Eisenhower stuck to their guns, aided by the safety of slowly revising projected invasion death tolls ever higher to support the bombing. (Alternatives included simply demonstrating the bomb's effects before a Japanese delegation, alongside members of the young UN, to force their hand [Japanese military command would never have surrendered to the bomb], or waiting for the Soviets to play their hand in Manchuria / putting their name on the document demanding Japanese surrender to spook them; the Japanese were ardently terrified of the Soviets.) But this entire paragraph is also off the original topic; oh well.
Hidden 7 yrs ago Post by Sanctus Spooki

<Snipped quote by Dinh AaronMk>


Pretty much agree with all of this. I left out the Tokyo bombing simply because a lot of people are unaware of the total devastation caused by it. Honestly, the pictures are in some ways more shocking than those of the two cities nuked; the nukes left rubble as far as the eye can see, but a lot of Tokyo was just plain gone after the fires were finished.

@catchamber

K
Hidden 7 yrs ago 3 yrs ago Post by Polymorpheus

.
Hidden 7 yrs ago Post by Sanctus Spooki

I would say K, but then we'd have a sort of KKK going, and that would be racist.