CountPickles

639 Forum Posts · 0 Wiki Points · 21 Following · 11 Followers

CountPickles's forum posts

#1  Edited By CountPickles

I have a feeling people who say "I see you" will like this. They say it to their friends or people they like. I also think people who say "a hot *insert time limit*" will like this.

@theht:

The thing is, I don't know what you mean by a "robot that's designed to be its own person". You'll need to elaborate on that. It could be a perfect imitation, but an imitation nonetheless. The reason I keep coming back to the iPad scenario is because it (for me) best illustrates the pure artistry and fakeness of it all.

Yes, someone has input functions and hardware so that the screen behaves a certain way, but that doesn't mean it's behaving that way for any reason of its own choosing. I would even go so far as to say that anything manufactured will never achieve any real sense of identity or emotion on its own.

And yes, you can say "aren't humans just a series of calculations?" and you'd be right. The issue here is that I am more willing to say human beings are just another kind of "robotic automaton" than I am willing to say a robot of any design in the future will feel about the world and itself the way we do. I am fully aware of the mistake there, but I simply fail to see any real connection between humans and robots. You can make the argument that if we're just calculations of evolution and robots are just calculations of their creators, then we're basically the same. The issue there is that destroying an iPad then becomes akin to destroying a human life. I'm sure both of us would disagree with this. It may very well be the case in an objective sense, but as a human being I just can't seriously argue that point.

It also leads to a whole host of other issues: where would we draw the line? Calculators? Word processors? And why draw the line at the digital level? I assume it is a kind of human narcissism to expect relatable emotions from a robot, right? Would anything that holds a function and has an internal logic and purpose be entitled to protection?

I go back to another post here explaining that what makes something wrong is the lingering effect it has on something. If I cut someone's arm off, there is a traumatic effect present that is both psychological and physical.

What does cutting an arm off a robot mean to it? It's a machine, presumably a product. What is its identity and self-worth? Well, I guess it's whatever its creators tell it. Any kind of psychological ramifications for a robot (whatever the heck that means) can be ameliorated with software. Any kind of physical ramifications for a robot (whatever the heck that means) can be ameliorated with new hardware and construction. Beyond the physical body of a robot, it's just pure intellect and calculations. What is a robot's understanding of self-preservation? Why would it have that impulse? The answers to these questions are either beyond me (which is obviously possible; I'm not the brightest bulb) or not possible.

#3  Edited By CountPickles

@the_nubster: Um... ok.

Thanks for your comment.

I'm not really sure where I've stated that I'm better than a robot. I've simply stated why I feel they would not perceive the world the way we do, if they perceive the world at all.

I've explained why I don't feel robots can or ever would have emotions. I have also explicitly stated that I am open to having my mind changed.

Was it my iPad example that led you to think I may be racist?

@nomiconic:

You're right, I guess I could have explained that better.

Maybe this will help. My own moral code defines something as wrong when it leaves a lingering traumatic effect, be it physical or emotional. If I punch someone in the face, I would find that to be an immoral act because it satisfies the traumatic-effect constraint I have. And so I believe, within reason, that if I remove the legs of an ant, or kill a mouse, it would ultimately be an act that I would find to be wrong, and I would feel guilty for doing so. In other words, if a living creature is irrevocably changed due to my behaviour, I would deem that an immoral act. Now, I said within reason, because I am not going around covering my mouth so I don't kill germs.

My issue is: what does punching a robot in the face mean? What does swearing at a robot do? What does torturing a robot really mean to the robot? I would argue nothing. There is no lingering trauma in any way that can be appreciated by the robot. To me, it would be like punching the iPad from my example in the original post. Does the iPad really understand what just happened?

As for machines not understanding what the evolutionary process is, I was referring to the idea of loneliness, for example. I believe that idea comes from being socialized in tribes over thousands of years. What would a robot know of this? Nothing.

Does a robot have an appreciation of death? Happiness? I would assume not, given that it would be like asking if your computer has an appreciation of those things.

Your Milgram example is a bit off the mark, because I deny the idea that robots, at any point in the future, will have agency.

#5  Edited By CountPickles

@kirkyx:

Excellent response, but I may be misunderstanding what you're saying, or we both may be misunderstanding each other.

The best way I can explain it is by using Data's own words from that very clip. He explains that what is on trial is his right to choose whether he is a person or property, as is his very life.

Why do these things matter to him?

I am very open to being wrong here, so please excuse me if I am, but let's look at it this way:

I think human beings are more emotional than they are intelligent. I also believe there is information we are unable to fully understand without the use of technology.

A robotic intelligence is just that: pure intellect with limitless parameters. Why would it desire anything? I don't even fully understand the concept of desire within the construct of a robot.

"If a human being can experience these things, then there is absolutely no physical law stating that an artificial intelligence couldn't be created that, by design or simple accident, did exactly the same."

I don't necessarily disagree with that statement of yours, but I'm not sure how you can prove an artificial intelligence feels anything. Or rather, I don't think anyone can prove an AI will see the world as we do. In this instance, I define AI as something with pure intelligence and nothing more.

Apart from it being a damn good show, I think a lot of sci-fi tends to make nonsensical logical leaps for dramatic purposes, and that clip has fallen prey to that.

And, yes, I should have titled the post "Do You Think It Will Ever Be Wrong To Hurt Robots?"

@nomiconic: I've been seeing a few of these sorts of responses in this thread so far, and I feel like it's an answer that sort of gets lost in the weeds. I am thankful that you presented these ideas, though, because they're still interesting to talk about.

I think people are more nuanced and complicated than you're giving them credit for.

I am a firm believer that there are certain constants in human nature, and the ability to willingly do horrible things is one of them. It's something we all have in us, some more hidden than others, but it's definitely there. However, what defuses these more extreme aspects of us is putting that energy into something else, whether it be art, sports, etc.

And with that said, I fail to see how punching a fridge or a computer or even a life-like robot in anger can be a bad thing. As long as whatever you do remains a victimless crime, I don't care what transpires. I also don't like the idea of prescribing behaviour that only results in what you feel is virtuous.

" ...it's wrong because it normalizes violent behavior towards something lifelike".

The implications of the above phrase come off as weirdly paternalistic.

I do feel it's morally wrong to pull the wings off a fly or stab a person's numb leg, and at the same time I find no issue with brutally destroying a life-like robot, for the reasons I have stated here and earlier. Does that make you rethink your position, or am I an outlier in your perspective?

@clayman: Good point.

I have an emotional and intellectual response to that.

My emotional response would be that it is wrong to hurt a human being in any given scenario.

My intellectual response would be that hurting someone requires there to be some kind of lingering trauma afterwards, be it physical or psychological. Absent those ramifications, I fail to see what would be morally wrong in this Groundhog Day scenario. So I guess, if you're ever in a situation where everything reverts back 24 hours the next day, you'd be okay to behave like a psychopath, because 24 hours from now everything will be fine again.

However, hurting someone in the moments within those 24 hours would still be VERY immoral, because you'd be causing pain and suffering to people during that time.

@redhotchilimist: Really, it's aimed at any robot, now or in the future. I don't believe robots will ever advance to the point where it would be wrong for us to hurt them. Even very life-like Blade Runner or Ex Machina robots. They may look human, but they'll never be human.

@frostyryan: This is essentially my stand-up comedy material.

But seriously, thanks. A lot of sci-fi tends to gloss over this and posits robots who are just human beings that happen to be robots. That's not really interesting to me. It seems like most sci-fi skips over the most interesting stuff, in my opinion.

#10  Edited By CountPickles

Brad Shoemaker