Do You Think It's Wrong to Hurt Robots?

CountPickles

Poll: Do You Think It's Wrong to Hurt Robots? (275 votes)

Yes 44%
No 55%

I got into a drunken argument with a friend a few days ago about this topic.

Without going into too much detail, his argument was that it would be wrong by most standards of morality to emotionally or physically harm anything that is sentient, which I completely agree with.

However, we applied it to robots and our conclusions began to differ wildly.

Now, there are various types of robots, but take for instance the robots from the film Blade Runner:

They look human, they speak (more or less) like humans, and it seems like if you hurt them they react as if they're in pain.

For me to feel bad for those kinds of robots, you'll need to prove to me that they do in fact have some kind of consciousness, and are not just a series of "if this, then do that" parameters in metal casings with human-like skin wrapped over them.

The example I gave my friend is this: say I present you with an iPad running an app that can only show a smiley face or a sad face in response to how you touch the screen.

You stroke the touchscreen and you get a :)

You tell the iPad to have a good day, you get a :D

but if...

You tell the iPad to "fuck off", you get a :(

You shake or flick the screen of the iPad with your fingers, you get a D:

To me, at a base level, this is primarily what robots will always be.

It's not like the iPad in the above scenario is actually feeling pain or joy; it's simply reacting mindlessly according to pre-set functions built around its hardware.
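
To make that concrete, here's a minimal sketch of such an app (Python, purely illustrative; the stimulus names and faces are hypothetical stand-ins, not any real API):

```
# A hypothetical sketch of the "emotional" iPad app described above.
# Every apparent feeling is just a table lookup from stimulus to canned face.
RESPONSES = {
    "stroke_screen": ":)",
    "wish_good_day": ":D",
    "say_fuck_off": ":(",
    "shake_or_flick": "D:",
}

def react(stimulus: str) -> str:
    """Return the canned face for a stimulus; nothing is felt in between."""
    return RESPONSES.get(stimulus, ":|")  # unknown input gets a neutral face

print(react("wish_good_day"))   # :D
print(react("shake_or_flick"))  # D:
```

A dozen lines of lookup table is the app's entire "inner life"; the argument below is that scaling the table up doesn't obviously change its nature.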

The same can be said of killing people in Grand Theft Auto V. You shoot them, they react with pain and appear to be suffering, yet no one can make a valid claim (in my mind, anyway) that what you're doing is morally wrong.

Scale this up to how robots will inevitably be, and I fail to see how anyone will ever be able to prove that even the most life-like robot isn't just a multi-million dollar version of the above iPad example.

My friend's retort to this was that human beings are essentially organic robots enslaved by years and years of evolutionary processes, which I thought was a bit of a fallacious argument. I'll end on this:

From the organic-robot assumption, I came to two conclusions:

Pain, fear, desire, the need for exploration, etc.: the very things that make us human and can harm us emotionally and physically are things that a robot will never understand or appreciate.

My reasoning for this: if you lock a human being in a dark, cold room and wait for them to die, that is a morally atrocious thing to do, but it is explainable in terms that most people who aren't psychopaths can understand:

Locking someone in a room - deprives them of human contact - something a robot doesn't need. Also, given that a robot hasn't had to evolve in tribes, I'm very unsure the idea of loneliness is something they will ever understand in the same manner we humans do.

Locking someone in a room to die - puts a human in agonizing despair at the prospect of death - I'm not even sure robots can have a conception of death, given that their sense of self and identity can be literally replicated forever. Even if they couldn't be replicated, the fear of death is not even uniform in human beings, and as such, applying it to a metal entity is frankly ridiculous, in my estimation.

And finally, I can feel pain, both physical and emotional. I believe dogs, cats, human beings, etc., all multi-cellular organic beings that have been finely honed over the course of millions of years, are able to feel pain, joy, etc. Robots have never undergone this process. Thus, my conclusion is that it's not wrong to hurt robots, because it's impossible to do them any real harm.

I'm open to having my mind changed. I'm genuinely interested in what others have to say on this.

FrostyRyan

I went through a wave of emotions reading this and it was very insightful. I expected to laugh and be like "haha, check this out" but you actually made me think.

What really is the difference between harming a "person" in a digital space like a video game and harming a robot? Holy crap.

Efesell

I've caught myself feeling guilty being snide to Siri so . . . .

#3  Edited By Redhotchilimist

If we're talking a sci-fi machine with an actual thinking, feeling AI with no practical difference from a human, like the ones in Blade Runner? Then it's bad. If it's a Roomba, kicking it makes you look like a jackass, but it's way below even bothering a pet.

CountPickles

@frostyryan: this is essentially my stand-up comedy material.

But seriously, thanks. A lot of sci-fi tends to gloss over this and posits robots who are just human beings that happen to be robots. That's not really interesting to me; it skips over the most interesting stuff, in my opinion.

clayman

I guess my counterpoint is: would you do real people harm if you knew it wouldn't actually hurt them in the end? Imagine you're stuck in a Groundhog Day scenario. Would it be OK to go around killing people because you know that in the morning they won't be harmed anymore? Something about hurting a robot that isn't out to hurt you feels very wrong somehow.

CountPickles

@redhotchilimist: Really, it's aimed at any robot, now or in the future. I don't believe robots will ever advance to the point where it becomes wrong for us to hurt them. Even very life-like Blade Runner or Ex Machina robots may look human, but they'll never be human.

ShaggE

I'm not convinced that we can ever make an AI sentient, so no. We may feel bad about it, since we're wired to respond a certain way to things that look or act like living creatures, but I don't think it's any more wrong to kick a Terminator-esque robot in the face than it is to slap your PC in the hard drive and call its motherboard fat. Take away the human-shaped shell, and they're the same thing.

Now, if we ever *did* reach a point where an AI becomes truly self-aware (as opposed to perfectly mimicking but not actually being self-aware), then that's a whole other story.

#8  Edited By KirkyX

"Pain, fear, desire, need for exploration etc. the very things that make us human and can harm us emotionally and physically are things that a robot will never understand or appreciate."

[Video: Star Trek: The Next Generation, "The Measure of a Man"]

I reject this assertion, fundamentally. If a human being can experience these things, then there is absolutely no physical law stating that an artificial intelligence couldn't be created that, by design or simple accident, did exactly the same. Evolution is not magic--it is a scientific process, like any other, and we have absolutely no reason to believe that its effects - in this case, the development of a sapient intelligence - couldn't be replicated by human means. The ultimate conclusion of any argument to the contrary essentially comes back to the 'biological machines' problem--if a robot, regardless of the technology used to create it, cannot be a sapient intelligence, then neither can a human being, as we are all, ultimately, mechanical constructs that operate according to the same physical laws as our technology, and everything else in the universe.

As such, when faced with an actual example of a sapient - or, at least, apparently sapient - machine, the assertion that 'a robot will never understand or appreciate' what it is to be sapient is fundamentally unhelpful, as it's a faith-based judgement that - in the case of such a machine actually existing, as outlined in the OP - flies directly in the face of both direct physical evidence to the contrary, and the scientific fact that, if sapience can exist in humans, it can potentially exist elsewhere as well.

...

Also, I feel that your poll question is somewhat misleading. I'd have voted 'no' without reading the post, as the obvious inference from that question is 'a robot as we currently understand the concept', not 'a robot that, externally at least, exhibits all the traits required to be deemed sentient, or even sapient'.

CountPickles

@clayman: good point.

I have an emotional and intellectual response to that.

My emotional response would be that it is wrong to hurt a human being in any given scenario.

My intellectual response would be that hurting someone requires there to be some kind of lingering trauma afterwards, be it physical or psychological. Absent those ramifications, I fail to see what would be morally wrong in this Groundhog Day scenario. So I guess if you're ever in a situation where everything reverts 24 hours later, you'd be okay to behave like a psychopath, because 24 hours from now everything will be fine again.

However, hurting someone in the moments within those 24 hours would still be VERY immoral, because you'd be causing pain and suffering to people during that time.

laxbro19

I think it might be OK to hurt a robot if physical abuse is one of the things it was designed for. If you went to a space gym that had a sparring robot to box against, hurting it wouldn't be wrong.

Although this argument totally changes if you throw in the added complexity of consciousness and agency. If you were punching this sparring robot and it was not happy about it and wanted to stop, but couldn't because its programming won't allow it, then choosing to beat up the robot would be pretty mean.

BladeOfCreation

@countpickles: You might not be watching the right sci-fi, then. There are a TON of sci-fi stories that deal with the idea of robots achieving sentience and what that means for humans, those robots, and society as a whole. There's even a big game coming out next year about this very thing.

derachi

I think the answer to this requires a bit more information, specifically: why is the person hurting the robot? Does the person want to do it because it'll make them feel good, but they know they can't hurt a living thing because they'll get in trouble? Are they angry at the robot? To me, it all basically wraps back around to the attacker's motive, not the victim's humanity (or lack thereof).

Lanechanger

*speaks loudly, borderline yelling, at his phone and computer*
YES, YES IT IS WRONG TO HURT ROBOTS. IT SETS A PRECEDENT FOR ABUSIVE BEHAVIOR.

#14  Edited By MattGiersoni

It's absolutely wrong to hurt robots, and by robots I mean fully conscious AI/robots. Even if they weren't fully conscious, it would still feel wrong, and I never will and never would hurt a robot. Humans are so destructive, vicious and awful that it won't surprise me if there's a lot of robot abuse in the future, until laws diminish it. Humans suck :(

You also mention killing NPCs in games, and how in your opinion no one can make a valid claim that it is wrong. Sorry, but I have often felt bad killing certain creatures or NPCs in games. Maybe I'm weird and super sensitive/empathetic.

GERALTITUDE

I also wanted to post The Measure of a Man :D

The other thing I wanted to say is that the reason doing anything is right/wrong or whatever doesn't just have to do with how that act affects others. Being a jerk to a person, animal, robot, or even a fridge is just lame. Don't be a jerk!

That's just one part of sportsmanship.

#16  Edited By konig_kei

What if robots were programmed to enjoy punishment? Then it'd be cruel to not hurt them. It might make you stop hurting it too because you're not getting the reaction you want.

Is it cruel to deny a pup scritches? So many questions.

Rebel_Scum

When I gave a robot to my good friend Pauly, we treated him just like one of the family. And he is a part of the family. So yeah, it's wrong to hurt robots when they're close like that.


Pezen

I was listening to a conversation about advanced AI on some podcast a while ago; I can't remember which one. They posed an interesting question that applies here: at what point do we know an artificial intelligence is sentient, and not just very good at acting sentient? Even if the robot is self-perceptive to the point that we would agree that what it does and says implies sentience, is it actually sentient, or just operating on information about what sentience would imply? How would we measure such a thing? We know a pig will squeal if we hurt it, but do we consider a pig a sentient being? Because if a negative reaction to negative stimuli implies sentience, we're morally obligated to never eat meat. And some would argue that's already the case. But a lot of people don't see that as actual sentience so much as biological survival instincts and nerves that help keep a living thing alive by avoiding self-inflicted (and external) harm.

So would it be wrong to hurt a robot? It entirely depends on whether the robot is self-aware, and I don't think we have the capacity to know that right now. In that regard I would say no, but that may very well be due to a lack of understanding of how consciousness actually works. Artificial intelligence may very well have the capacity to generate consciousness, and if so, hurting it would probably be morally objectionable. But if you could hurt and dismantle a robot entirely, then reassemble it without any negative repercussions to its 'brain', the physical aspects of it are entirely arbitrary in the conversation.

But hey, I still don't want people to hurt Data. So I find myself being very unsure about the whole thing.

OurSin_360

If they ever gained self-awareness and consciousness, then yes. And honestly, there's no actual way to know that anything alive isn't just mimicking behavior; we use ourselves as a reference, but how would you know you're not strapped into the Matrix? And it's just wrong to harm anything anyway.

#20  Edited By Dray2k

These talks are fun. Did you know that the raw chemicals making up an entire human being of around 100 pounds would only cost around 40 cents, if you add up the cost of everything our bodies contain? That such cheap stuff as carbon and water is able to create actual sapient, sentient life that can experience the world truly is one of the real wonders of the whole universe. Maybe we humans should value that much more before we tackle questions about the possible potential of AI, but I'm getting ahead of myself.

Anyway, the more robots become like us humans, the more we have to think about how we treat objects, and in later phases whether to consider robots objects or individuals. I mean, why would we even consider creating robots in our image in the first place?

@countpickles: What if I told you that whether ':( defines sadness' is in the eye of the beholder? If there's no set of rules in the first place, a robot, like some mentally ill human beings, doesn't value a ':(' as something bad. It's we humans who interpret a human trait into something that has no fundamental ruleset of its own. A ':(' is sad because it goes hand in hand with our biological concepts of self-preservation and individuality.

This is a similar conclusion to yours, and it's somewhat repetitive. However, it all serves as a foundation for how the people tackling the development of semi-human-like AI approach these issues.

A wholly complex AI would give a robot a foundation for individuality and actual agency. I'm sure humanity as a whole will reach transhumanism before approaching any sort of human-like AI that can feel. So, to create such a robot, people have to develop "feel" into it: simple to state, but extremely difficult to build. I mean, we know which chemicals in our brains make us happy and how they're made, yet it is very difficult to actually recreate those chemicals in a simulated environment, as we have no machines that autonomously create such substances.

The same goes for the AI: it shouldn't just be a hugely complex piece of code. It has to be self-extending, similar to an autonomous chemical reaction, and self-defining, while still obeying its foundations, just as our chemistry defines our biology. If there were no set of rules, or foundations if you will, the robot's programming would be too chaotic for either the robot or a human to decipher, and the system would probably crash. Imagine a chemical that was a unique mix of a lot of stuff; it would probably just look like goop you wouldn't touch even with gloves.

Furthermore, a human being already has a defined set of rules at birth. A robot, on the contrary, has a wholly different set of rules by construction, so Occam's razor dictates that to create a robot that requires and adheres to the same ethical ruleset as us humans, we have to create those foundations for robots first.

So I agree: robots can't share the same ethical subsets as humanity, because they would have to become human first, with all the foundations working as intended, to provide the first layer on which AI programmers could create a human-like AI.

To understand how even typical two-human communication works, you should go ahead and read up on communication models first. If a robot could act on such a model in a real manner, that would be a major indication that it is actually human-like, and it would serve as a better Turing test. Until someone can build a robot that adheres to these rules, advanced communication will remain one of the largest feats for the development side.

I think talks like this are quite interesting in general, because from my perspective it's nice to see people figuring out for themselves the axioms that go into creating/replicating these foundations for a "feeling", human-like AI. It might happen, perhaps even sooner than we all think.

@silver-streak: I just edited my post so as not to spam; this is also within our own set of rules. Not many people realize it, I think, but destruction is most often seen as a negative thing because it doesn't go hand in hand with our self-preservation instincts. In a direct environment where we can see the damage with our own eyes, we immediately say "no", but when it's by proxy, like smoke in the background of a news segment on a television screen, it is more difficult for the brain to process. Of course we still find it alarming, but just that; it takes further abstraction to produce the same "no" reaction.

Which is why we made a ton of rules/laws about damaging property and the environment. This whole set of rules is protected by another layer, empathy, which is why damaging stuff generally feels bad and why stuff gets protected all the time, in one way or another.

#21  Edited By Silver-Streak

Willingly causing harm to ANYTHING should be avoided. Sentient or not.

#22  Edited By GundamGuru

@countpickles: No, robots/androids are not people. I say that as a computer scientist with a respectable knowledge of science fiction literature, both classic and popular.

Sentience is an incredibly difficult thing to test for, but the Turing Test is not sufficient. In my (and many philosophers') opinion, convincingly looking and quacking like a duck does not give one the intrinsic qualities or inner workings of a duck.

It's an interesting line of thought, but the main thing for me about "are androids human" is that it's actually a relatively old question with concrete answers. It just hinges on how you define sentience, and on realizing that sentience is a distinct concept from intelligence. It's not enough to be able to think; they need to be able to feel. So I agree with you, generally speaking.

#23  Edited By BladedEdge

I've firmly believed, since I was old enough to even begin to comprehend and think about this subject critically, that it's going to be a major social issue when I am in my 70s or 80s, or maybe just after my currently 30-year-old self dies.

As they are now, and as the majority of people perceive them, robots are tools, nothing more. Even the most advanced A.I. is just a set of functions written by someone. We don't, and likely shouldn't, give them the same intrinsic value we give human beings, let alone the animals, plants and other living things that we share the planet with but make use of constantly. I.e., is it morally right to raise and kill cattle for us to consume? Is it morally right to experiment on lab rats, monkeys, or even single-celled organisms with life-spans of minutes? All the kinds of ethical questions people could ask, but that we don't right now about A.I.

But in the future? In the inevitable future as computers get more and more complicated? You'd better believe we are going to reach a point where they can think like us, act like us, feel like us. We are going to reach a point where Artificial Intelligence is just going to be... intelligence. Sentience. You think people are savage to each other now based on skin color, gender, culture? Imagine a world where a sentient life is born with the same range of emotions, thoughts, dreams, fears, etc. as you, but is an A.I. and is considered 'not even human'. It's a leap a lot of people simply won't be able to make, and even now can't imagine making.

Still, I've felt like this is the inevitable path we are going down, thanks in no small part to the many science fiction writers exploring that very topic. Right now, it's a fascinating ethical thought-puzzle. In the future, though? People are gonna be protesting in the streets, and killing each other, over it. Now pause for a second, and realize that when I say 'people' I mean 'sentient life', not just human beings. I might be wrong about that? Honestly, I hope I am; I hope society is advanced enough not to repeat past mistakes. But history seems to indicate that optimism in this case is unfounded. Who knows, maybe my children's children's children will grow up in a society where it's real weird that their granddad/mom (my kids, mind you) seem to still cling to the idea that machines were once, you know, not 'just like us, only with mechanical instead of organic origins'.

As for right now? Machines are not yet at a point where I should feel bad about abandoning the A.I. in last year's phone for an identical (but not the exact same) twin of it in the new phone. In 10-15 years, though? We are likely gonna want to transport the A.I. we've grown to consider a 'pet' from device to device. Past that, we'll have fringe cases of people who can't get social interaction with other people finding that need fulfilled completely by extremely realistic (but ultimately not 'really sentient') A.I.; by that I mean full-on 1-to-1 androids. Past that, you're going to get an A.I. that passes the Turing test, and small groups will begin to demand it share the same rights we do... and from there it will snowball.

Heck, we are not so far off from the creation of entirely artificial wombs. What do you call someone whose father is human, but whose mother was an android, one with a perfectly functional womb? And let's just say we also learn the secrets of creating DNA to the point where we can manufacture totally unique egg/sperm cells. Now assume androids seeking citizenship get tied to a single DNA strand, which they can use to create and populate such artificial cells the same way our DNA is used by our bodies to sexually reproduce. ...What do you call that kid?

On that subject, what do you call someone made entirely by another person via technology? Artificial womb, sperm and egg: all organic, but all 'not how we do it now'. Are they human?

Fun stuff.

#24  Edited By bybeach

I would say it's wrong on two counts:

If it was clear, or even suspected, that the robot was self-aware. There is much more to be said, especially on the nature of sentience if it existed, but I will leave it there. Briefly: sentience doesn't have to manifest the way humans show or experience it.

If it was a substitute for morbid, deviant cruelty that would otherwise be inflicted on other people or even animals. That, of course, could take off on its own debate wings, too.

Tom_omb

I would not hurt a robot or an iPad. I'm not paying for a broken iPad. And why would I want to do harm to anything anyway, unless it's in self-defense?

#26  Edited By BrunoTheThird

Having the urge to hurt or aim your aggressive side at something with sentient characteristics -- programmed or natural -- is what's wrong to me. Doing it jokingly, or for research purposes, like telling Siri to fuck off once to see what happens, or giving a calculator error messages with computations beyond its capability, is just curiosity: jabbing at the boundaries of tech's progress. I can see that being true for much more advanced AI as well. It's morally gray.

I don't care whether a robot can or cannot ever truly feel pain or fear, or be truly aware of itself. Why you would want to hurt anything outside of instinct is what I question. Even if you hit a robot that can't feel pain and enjoy it, you haven't wronged the robot, but your urge most likely came from a negative place, and I've been taught that expressing that side of yourself isn't morally right, period, so I don't even need to ponder the actual ethics. It doesn't matter to me. If you don't value the symptoms of sentience, programmed or otherwise, however perfectly or imperfectly they reveal themselves, you fall under the umbrella a civilized society would most likely deem immoral. Depending on your reasons, of course.

EricSmith

Some of yous guys really need to play Nier Automata.

#28  Edited By RonGalaxy

I feel like the presence of consciousness and awareness necessary for this to be worth debating over would be pretty obvious. In other words, it would be obvious when something is actually sentient and when it's just an imitation. You can't fake life. It's either there, or it isn't.

Quantris

You're in a desert, walking along in the sand, when all of a sudden you look down, and you see a robot: it's crawling towards you. You reach down and flip the robot over on its back. The robot lies on its back, its belly baking in the hot sun, beating its legs trying to turn itself over, but it can't, not without your help. But you're not helping. Why is that?

Based on your "reasons" for not locking people up in dark rooms...I think *you* might be a replicant.

Goboard

@silver-streak: This is my answer as well.

Existential parallels and sentience aren't necessary for the answer to be obvious: willfully causing harm also harms the one who causes it. Many of you are taking a very narrow view of the words hurt and harm.

#31  Edited By TheHT

Ehhh. It comes down to feeling for me, and specifically whether there's emergence or only pre-programmed call and response. Ignoring any semantic implications in the question (i.e., "hurt" implies feeling, I'd reckon), only if the being in question is genuinely feeling can you consider harming it (under inappropriate circumstances; self-defence, for instance, would be appropriate) truly immoral. I'm also moving past "robot" to things like androids and AI. If we're talking strictly robots, then yeah, you can't really hurt one. It's just a complex tool of sorts. That's fairly open and shut as far as I can tell, but if we're talking about trying to make machine men, then yeah, we can talk about other fun sci-fi stuff.

A rudimentary program isn't the same as a rudimentary organism. Artificial life shouldn't in every circumstance be measured, I think, exactly the same way humans or other organic life are. This reaches back to your locked-in-a-room example (Sims?). A "robot" being able to tolerate being locked in a room doesn't lend support to the notion that pain, fear, desire, or the need for exploration are things it "will never understand or appreciate."

Never mind that whether those emotional states alone are the very things that make us human is a moot point; clearly other animals feel emotions. Whether a "thinking" machine of metal and silicon could ever understand, appreciate, or itself feel emotions is uncertain (obviously), but I don't see why one theoretically couldn't.

If we can break down life to its most fundamental parts and completely simulate it, at what point can we say the simulation has become genuine as we increase complexity? If it ever objectively did and we could tell, on what grounds would you hesitate to say that out of that simulation genuine life has emerged (and that it should be treated as such)? If it doesn't and we can tell, we could, I suppose, address the reason we can tell and "fix" that element. Then, if we can no longer tell whether the machine in an objective sense is genuinely "feeling" and truly its own being, we just wouldn't know. We'd have to assume it, as we do (quite sensibly, I think) with other animals and such.

If it turned out to be a pristine imitation but ultimately thoroughly hollow, which we wouldn't ever really be able to know in any absolute sense, we'll have created around us the most perfectly shallow interacting objects that would almost certainly outlive us.

But I'm not sure how that could ever be the case. If something were perfectly able to reflect the development and variation of life--something that learns and thinks and reflects, out of which a personality/individuality might emerge--then whether or not we can authoritatively deem it "genuine" in some absolute pseudo-spiritual metaphysical sense is completely arbitrary, pragmatically worthless. At the point at which an object is perfectly simulating a consciousness, such that it behaves precisely as we might expect any expression of consciousness to behave, the simulation will have effectively become an expression itself.

I dunno how much the technical lack of evolution really matters in that, considering they'd essentially be extensions of our consciousness: separate manifestations of our own minds, which did undergo all that business. After millions of years of evolution you get human minds, which then create another kind of mind in their own perceived image.

Sorry if this reads like a mess, I'm quite sleepy.

CountPickles

@nomiconic: I've been seeing a few of these sorts of responses in this thread, and I feel like it's an answer that sort of gets lost in the weeds. I am thankful, though, that you presented these ideas, because they're still interesting to talk about.

I think people are more nuanced and complicated than what you're giving them credit for.

I am a firm believer that there are certain constants in human nature, and the ability to willingly do horrible things is one of them. It's something we all have in us, some more hidden than others, but it's definitely there. However, what defuses these more extreme aspects of us is putting that energy into something else, whether it be art, sports, etc.

And with that said, I fail to see how punching a fridge or a computer or even a life-like robot in anger can be a bad thing. As long as whatever you do remains a victimless act, I don't care what transpires. I also don't like the idea of prescribing behaviour that only results in what you feel is virtuous.

" ...it's wrong because it normalizes violent behavior towards something lifelike".

The implications of the above phrase come off as weirdly paternalistic.

I do feel it's morally wrong to pull the wings off a fly or stab a person's numb leg, and at the same time I find no issue with brutally destroying a life-like robot, for the reasons I have stated here and earlier. Does that make you rethink your position, or am I an outlier in your perspective?

BoOzak

Yes. Putting aside the AI topic, which has been done to death in TV and film: if it's a robot that serves a purpose, damaging it for kicks (unless that's what it's designed for) is wrong.

#35  Edited By CountPickles

@kirkyx:

Excellent response, but I may be misunderstanding what you're saying, or we both may be misunderstanding each other.

The best way I can explain it is using Data's own words from that very clip. He explains that his right to choose whether he is a person or property is what's on trial there, as is his very life.

Why do these things matter to him?

I am very open to being wrong here, so please excuse me if I am, but let's look at it this way:

I think human beings are more emotional than they are intelligent. I also believe there is information we are unable to fully understand without the use of technology.

A robotic intelligence is just that: pure intellect with limitless parameters. Why would it desire anything? I don't even fully understand the concept of desire within the construct of a robot.

"If a human being can experience these things, then there is absolutely no physical law stating that an artificial intelligence couldn't be created that, by design or simple accident, did exactly the same."

I don't necessarily disagree with that statement of yours, but I'm not sure how you can prove an artificial intelligence feels anything. Or rather, I don't think anyone can prove an AI will see the world as we do. In this instance, I define AI as something with pure intelligence and nothing more.

Apart from it being a damn good show, I think a lot of sci fi tends to make nonsensical logical leaps for dramatic purposes, and that clip has fallen prey to that.

And, yes, I should have titled the post "Do You Think It Will Ever Be Wrong To Hurt Robots?"

burncoat

I think a question like this almost requires an ethical Turing Test equivalent.

The basic premise would be similar in scale and setup: you are grading an unseen but heard companion, and it is your duty to punish them with a shock when they get a question wrong. They give a realistic response every time they are shocked. You have no idea if they are real or not. Do you continue? What if you were told there's a 50% chance it's an AI? What if it was a 99% certainty that it isn't a human being? At what percentage do you stop worrying about whether you're harming a real person? What if you knew others were watching you?
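
As a toy illustration of that sliding scale (a hypothetical sketch; where the threshold actually belongs is the whole question, not something this code answers):

```
# A hypothetical model of the "ethical Turing test" dial described above.
# p_machine is your estimated probability that the unseen subject is an AI.
def keep_shocking(p_machine: float, threshold: float, being_watched: bool) -> bool:
    """Continue only if you are at least `threshold` sure it's a machine."""
    if being_watched:
        threshold = min(1.0, threshold + 0.05)  # social pressure raises the bar
    return p_machine >= threshold

# Each question in the experiment is a different setting of these knobs.
print(keep_shocking(0.50, threshold=0.99, being_watched=False))  # False
print(keep_shocking(0.99, threshold=0.99, being_watched=False))  # True
print(keep_shocking(0.99, threshold=0.99, being_watched=True))   # False
```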

I think kicking a Roomba because it ran over dog shit and tracked it around the house paints you as a quick-to-anger ass who deals with anger physically, and that looks dangerous to others. It makes people wonder how you treat others, and in a lesser sense how you treat the property of others. It's not the same as kicking a dog, but it's still a red flag for those around you. At best, you look like somebody who loses their temper easily; at worst, you look like the kid from Toy Story.

I can't tell you how awful it is to see somebody lose their shit and take it out on something else. My dad never hit me, but whenever he took out his frustrations on tools or slammed a door, it severely impacted my day and how I interacted with him; sometimes I skipped dinner entirely. Nobody needs to be physically hurt to be affected.

Barrock

Is it wrong? I mean it's probably pretty expensive. And hurting things in general is not a great thing to do.

But is it the equivalent of hurting a human or animal? No.

CountPickles

@nomiconic:

You're right, I guess I could have explained that better.

Maybe this will help. My own moral code defines something as wrong when there is a lingering traumatic effect, be it physical or emotional. If I punch someone in the face, I would find that to be an immoral act because it satisfies the lingering-trauma constraint I have. And so I believe, within reason, that if I remove the legs of an ant, or kill a mouse, that would ultimately be an act I'd find wrong, and I would feel guilty for doing so. In other words, if a living creature is irrevocably changed by my behaviour, I would deem that an immoral act. Now, I said within reason, because I am not going around covering my mouth so I don't kill germs.

My issue is: what does punching a robot in the face mean? What does swearing at a robot do? What does torturing a robot really mean to the robot? I would argue nothing. There is no lingering trauma in any way that can be appreciated by the robot. To me, it would be like punching the iPad in my example in the original post. Does the iPad really understand what just happened?

As for machines not understanding the evolutionary process, I was referring to the idea of loneliness, for example. I believe that idea comes from being socialized in tribes over thousands of years. What would a robot know of this? Nothing.

Does a robot have an appreciation of death? Of happiness? I would assume not, given that it would be like asking whether your computer has an appreciation of those things.

Your Milgram example is a bit off the mark, because I deny the idea that robots, at any point in the future, will have agency.

sparky_buzzsaw

I'll punch a goddamn microwave if it gets up in my face, sure.

DarlingDixie

How do some of you guys have the time/motivation to write so much about stuff like this??

You can't hurt robots. They don't have any concept of what being hurt even means. Ask me again in 40 years.

Slag

When in doubt, don't hurt something.

#43  Edited By dorkymohr

Even in the absence of some kind of consciousness, I think it's not healthy to have a proclivity to give in to destructive urges. I had friends growing up who would proudly show off a hole in their bedroom wall where they'd put a foot or fist through. I wouldn't call them amoral, as they're not ripping the wings off of flies or whatever, but it's still likely not healthy mentally.

ZolRoyce

I find it funny that some people think hurting a robot is wrong, but murdering literally countless NPCs in games is fine.
After all, it's just a bunch of code, zeros and ones. They aren't real.


JellyCube

Some of you really need to watch Ex Machina.

#46  Edited By The_Nubster

If the robot acts in pain and reacts to trauma, it is wrong to hurt it. We'll never be able to know what a thing is thinking or feeling or how it processes, and your repeated insistence that its mind is simpler than ours and just a series of outdated processes sounds like racism. I hope you're not a racist person, but that train of thought is how abuse on a grand scale comes to pass: convince yourself that you're better than what you're hurting, and you feel no guilt for bringing pain to it.

Also, Blade Runner 2049 had an entire subplot revolving around this idea, and it was magnificent. Whether or not you felt Joi's love for K was real or just programming, the fact is that she did everything in her power to make him feel happy and fulfilled, and eventually gave her life to protect him. Real or not, programmed or not, that's a length many people would never go to. At some point you need to stop assuming you know how another thing is thinking, accept that you never can, and then do your best to be a good person.

Edit: and one other point I would like to add: your full-stop stance that robots will never be able to feel anything resembling emotions pretty much grinds this entire discussion to a halt. If someone was not reasoned into a position, they can't be reasoned out of it. See my first point.

#47  Edited By Dray2k

@zolroyce: From an entirely psychological standpoint, this comparison doesn't hold if we consider that robots, or machines for that matter, are tangible objects that you can touch and interact with, while in games you can't even use your own body directly. The only thing you can do is control the input, which does everything for you, as it is programmed to do. You're interacting with a machine, which in turn is interacting with the virtual environment.

So with NPCs, to vastly simplify the matter, there is still a fourth wall between you and them: you, an individual acting through a toy, can never physically interact with the NPC, nor fully interact with the world it resides in. That's one of the big differences, basically.

It is really important to physically be inside a space to fully create a compelling mental space. Books can make you build such a space by yourself, while video games and television shows partially do it for you, since you can already see and hear the game while playing. Interactivity, however, is a whole different thing, as you do what the game tells you to do. As they like to say, you can't go and talk to the demons in Doom.

Perhaps you've felt immersed in something; that's when you've created your own mental space out of yourself and what the video game provides.

Not to mention there are entire sets of rules about not damaging things that belong to you or others, and even some religions proclaiming or implying that objects can have souls.

Personally, I agree with not damaging things, but robots still aren't humans, or even living beings. Thought through further, the matter as a whole goes into advanced AI-programming theory, math, chemistry and biology, and people have already written quite a lot of books about all that. An NPC in a video game is still just an entity, from any angle.

And besides, you're incapable of damaging an NPC, because you could only do that by physically damaging it, which you can't do while interacting with the game. The data is still there even if you slay the character in-game, and if you choose to delete the game, it can be reinstalled (if we're being funny, you could say you're cloning people by reinstalling the game!). The point is that the data won't be lost in any case, but you can certainly destroy a radio, and then it may be lost forever; not many people would like that.

#48 sweep  Moderator

Are they good robots or bad robots?

#49  Edited By CountPickles

@the_nubster: um... ok.

thanks for your comment.

I'm not really sure where I've stated that I'm better than a robot. I've simply stated why I feel they would not perceive the world the way we do, if they perceive the world at all.

I've explained why I don't feel robots can or would ever have emotions. I have also explicitly stated that I am open to having my mind changed.

Was it my iPad example that led you to think I may be racist?

ArbitraryWater

I feel like this is the conceit of at least a third of all science fiction stories. I tend to fall into the camp of "if the robot is capable of conveying human-like responses to harm, then does it actually matter if it's feeling anything?" Yes, I'm aware that means I will be among the first to die when the robots use our empathy against us during the uprising, but until then I think I'd give them the benefit of the doubt.

If we're talking slightly less advanced or humanoid robots, like R2-D2 level, I think that might be a little bit more up for debate.