I got into a drunken argument with a friend a few days ago about this topic.
Without going into too much detail, his argument was that it would be wrong by most standards of morality to emotionally or physically harm anything that is sentient.
However, when we applied it to robots, our conclusions began to differ wildly.
Now, there are various types of robots, but take for instance the robots from the film Blade Runner:
They look human, they speak (more or less) like humans, and if you hurt them they seem to react as if they're in pain.
For me to feel bad for those kinds of robots, you'll need to prove to me that they do in fact have some kind of consciousness and are not just a series of "if-then-do" parameters in metal casings with human-like skin wrapped over them.
The example I gave my friend is this: say I present you with an iPad running an app that can only display a smiley face or a sad face when you interact with the touchscreen.
You stroke the touchscreen and you get a :)
You tell the iPad to have a good day, you get a :D
You tell the iPad to "fuck off", you get a :(
You shake or flick the screen of the iPad with your fingers, you get a D:
It's not like the iPad in the above scenario is actually feeling pain or joy; it's simply reacting mindlessly to pre-set functions based around its hardware.
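Just to show how little would be going on under the hood, the entire "mind" of that app could be a handful of lines. Here's a rough sketch in Python (purely illustrative; the event names and the neutral-face default are mine, not from any real app):

```python
# The app's entire "emotional life": a fixed mapping from
# input events to faces. Nothing here feels anything.
RESPONSES = {
    "stroke": ":)",   # stroking the touchscreen -> happy face
    "kind":   ":D",   # "have a good day" -> very happy face
    "insult": ":(",   # "fuck off" -> sad face
    "shake":  "D:",   # shaking/flicking the screen -> distressed face
}

def react(event: str) -> str:
    """Return the face for a given input event; default to a blank stare."""
    return RESPONSES.get(event, ":|")

if __name__ == "__main__":
    for event in ["stroke", "kind", "insult", "shake"]:
        print(f"{event:>6} -> {react(event)}")
```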
The same can be said of killing people in Grand Theft Auto 5. You shoot them, they react with pain and appear to be suffering, yet no one can make a valid claim (in my mind, anyway) that what you're doing is morally wrong.
Scale this up to how robots will inevitably be, and I fail to see how anyone will ever be able to prove that even the most life-like robot isn't just a multi-million dollar version of the above iPad example.
My friend's retort to this was that human beings are essentially organic robots enslaved by years and years of evolutionary processes. I thought that was a bit of a fallacious argument, and I'll end on this:
From the organic-robot assumption, I came to two conclusions:
Pain, fear, desire, the need for exploration, and so on (the very things that make us human and can harm us emotionally and physically) are things that a robot will never understand or appreciate.
My reasoning for this: if you lock a human being in a dark, cold room and wait for them to die, that is a morally atrocious thing to do, but it's explainable in terms that most people who aren't psychopaths can understand:
Locking someone in a room deprives them of human contact, something a robot doesn't need. Also, given that robots haven't had to evolve in tribes, I'm very unsure whether the idea of loneliness is something they will ever understand in the same way we humans do.
Locking someone in a room to die plunges them into agonizing despair at the prospect of death. I'm not even sure robots can have a conception of death, given that their sense of self and identity can literally be replicated forever. Even if they couldn't be replicated, the fear of death isn't uniform even among human beings, so applying it to a metal entity is, in my estimation, frankly ridiculous.
And finally: I can feel pain, both physical and emotional. I believe dogs, cats, human beings, etc. (all multi-cellular organic beings finely honed over the course of millions of years) are able to feel pain, joy, and so on. Robots have never undergone this process. Thus, my conclusion is that it's not wrong to hurt robots, because it's impossible to do them any real harm.
I'm open to having my mind changed. I'm genuinely interested in what others have to say on this.