[Poll] In your opinion, will extremely advanced robots/androids ever be "alive?"
Who the hell knows or cares at this point. It's like asking whether you think we'll ever make faster-than-light travel possible.
Yes. Definitely! One day we will have robots that start as tiny building blocks with a very small string of code. This code will tell them how to grow and what they become. It will also build the basics of their mind, whereas robots today can only respond to things as they are programmed to. But this code will just be the groundwork: it will be able to learn and override its original code.
You know... the same way humans work.
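The idea of a tiny seed program that later overrides its own original rules can be sketched as a toy. This is a minimal illustration, not an implementation of anything real; the class, rules, and feedback scheme are all made up for the example:

```python
# Toy sketch: an agent starts with hard-coded "seed" rules,
# then learns overrides from experience that shadow the originals.

class SeedAgent:
    def __init__(self):
        # the tiny original "string of code": fixed stimulus -> response rules
        self.seed_rules = {"light": "approach", "heat": "retreat"}
        self.learned_rules = {}  # overrides written by experience

    def act(self, stimulus):
        # learned behaviour shadows the original programming
        if stimulus in self.learned_rules:
            return self.learned_rules[stimulus]
        return self.seed_rules.get(stimulus, "ignore")

    def feedback(self, stimulus, outcome):
        # a bad outcome overrides the current rule with its opposite
        if outcome == "bad":
            opposite = {"approach": "retreat", "retreat": "approach"}
            self.learned_rules[stimulus] = opposite.get(self.act(stimulus), "ignore")

agent = SeedAgent()
print(agent.act("heat"))   # retreat (seed rule)
agent.feedback("heat", "bad")
print(agent.act("heat"))   # approach (learned override)
```

The seed rules never change; the learned layer just takes priority over them, which is one simple reading of "override its original code".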
@sins_of_mosin said:
Who the hell knows or cares at this point. It's like asking whether you think we'll ever make faster-than-light travel possible.
I understand where you are coming from, but I'm not asking whether or not you think the robots will ever become advanced. I'm assuming they will and asking whether you will personally be able to consider them alive or just elaborate imitations.
Will a robot ever be sentient? No. Will a robot ever be considered legally "alive" in some sense? Probably. I imagine robots will eventually advance to the point where they will at least be considered on a similar level as non-human animals.
Aside from being made from cells, it's possible for a robot to display the other six characteristics of life. Even then, the systems in a robot could be considered cells of a kind. Being "alive" isn't that hard to accomplish and has nothing to do with emotion: an amoeba, for example, is alive but probably further from sentience than a modern AI.
As far as a robot becoming sentient I don't have a clue. I don't know enough about AI to know whether or not it is possible.
this is a good discussion after just finishing Persona 4 Arena
If you're asking if a robot will ever have the capacity for illogical human emotions or compassion, then no. I doubt it.
If you're asking if we'll ever have something like Data (yes, TNG), who really tries his best to understand humans, then... possibly. But not anytime soon.
If you're asking if a robot can become aware of itself - I'm not sure.
I think you may want to formulate that question in terms of Strong AI.
Personally I doubt strong AI will exist in my lifetime, but who knows what the future holds once we finish the silicon trail for Moore's law (15 years? then we move to carbon?), or where quantum computing, bioengineering, and growing processors will go. It could be interesting, but I wouldn't worry about it any time in the next 20 years; the Turing test is very safe for the moment.
It doesn't matter what something is made of. If you had your brain replaced by a mechanical one that functions exactly the same, are you no longer alive? Are you no longer you? There's no reason a machine can't one day be considered alive, sentient, and intelligent. We are nothing more than biological machines.
Is it possible? Yeah, I bet it is. But probably not anytime soon. The brain, while similar to a computer, is a really advanced and complex piece of art. It will take a while to get that right. And even if we do, how smart will it be? Can you make something smarter than yourself?
I'd like to think that at this point we've learned from movies like The Terminator and The Matrix that this would be a BAD IDEA!
So no.
@Shivoa said:
I think you may want to formulate that question in terms of Strong AI.
Personally I doubt strong AI will exist in my lifetime, but who knows what the future holds once we finish the silicon trail for Moore's law (15 years? then we move to carbon?), or where quantum computing, bioengineering, and growing processors will go. It could be interesting, but I wouldn't worry about it any time in the next 20 years; the Turing test is very safe for the moment.
This, and the key aspect that human emotion consists of psychological patterns and defense triggers. At the highest levels, an AI will most likely never develop a thought process around anything other than a programmed sense of self-preservation. Most of our social interactions are based on personal experiences and presumptions from the past, and on how to achieve the reaction we're looking for from our peers.
Simply put, your android bride of the future will probably not exhibit behavior that would be perceived as "daddy issues" or any other social conditioning unless it was programmed to.
But on the other hand, I perceive all of our emotions as being secondary to our own self-preservation, as well. How we act and the things we do are all traced back to the notion of survival. Even the blanket term of "love" is simply the amalgamation of various survival triggers that we quantify as one emotion. So over a long enough time span, I suppose an AI could be programmed for similar patterns of adaptation. From that perspective, being programmed for self-preservation by DNA and being programmed by a technician's algorithm aren't all that far removed from each other, just from an objective standpoint.
But who knows?
Personally, I think if we ever got to that point there wouldn't really be a need to incorporate it into some sort of manufactured vessel, as we would have the technology to simply augment the human brain with the linear problem solving capabilities of a Strong AI. But this still doesn't take into consideration the inherent chaos the human mind sometimes exhibits.
@Animasta said:
three laws motherfucker
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
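The three laws form a strict priority ordering, which can be sketched as a filter that vets a proposed action against each law in turn. The flags on each action are hypothetical stand-ins for judgments a real planner would somehow have to compute:

```python
# Toy sketch: Asimov's three laws as a strict priority filter.
# Each action is a dict of made-up boolean flags.

def permitted(action):
    # First Law: may not injure a human, or through inaction allow harm.
    if action["harms_human"] or action["inaction_harms_human"]:
        return False
    # Second Law: obey orders, unless obeying would conflict with Law 1.
    if action["disobeys_order"] and not action["order_conflicts_law1"]:
        return False
    # Third Law: protect own existence, unless that conflicts with Laws 1-2.
    if action["self_destructive"] and not action["needed_for_laws_1_2"]:
        return False
    return True

safe = {"harms_human": False, "inaction_harms_human": False,
        "disobeys_order": False, "order_conflicts_law1": False,
        "self_destructive": False, "needed_for_laws_1_2": False}
print(permitted(safe))  # True
```

Note the ordering does the work: a self-destructive action is still permitted when it is needed to satisfy the higher laws, which is exactly the "as long as such protection does not conflict" clause.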
@Grimhild said:
@Shivoa said:
I think you may want to formulate that question in terms of Strong AI.
Personally I doubt strong AI will exist in my lifetime, but who knows what the future holds once we finish the silicon trail for Moore's law (15 years? then we move to carbon?), or where quantum computing, bioengineering, and growing processors will go. It could be interesting, but I wouldn't worry about it any time in the next 20 years; the Turing test is very safe for the moment.
This, and the key aspect that human emotion consists of psychological patterns and defense triggers. At the highest levels, an AI will most likely never develop a thought process around anything other than a programmed sense of self-preservation. Most of our social interactions are based on personal experiences and presumptions from the past, and on how to achieve the reaction we're looking for from our peers.
Simply put, your android bride of the future will probably not exhibit behavior that would be perceived as "daddy issues" or any other social conditioning unless it was programmed to.
But on the other hand, I perceive all of our emotions as being secondary to our own self-preservation, as well. How we act and the things we do are all traced back to the notion of survival. Even the blanket term of "love" is simply the amalgamation of various survival triggers that we quantify as one emotion. So over a long enough time span, I suppose an AI could be programmed for similar patterns of adaptation. From that perspective, being programmed for self-preservation by DNA and being programmed by a technician's algorithm aren't all that far removed from each other, just from an objective standpoint.
But who knows?
Personally, I think if we ever got to that point there wouldn't really be a need to incorporate it into some sort of manufactured vessel, as we would have the technology to simply augment the human brain with the linear problem solving capabilities of a Strong AI. But this still doesn't take into consideration the inherent chaos the human mind sometimes exhibits.
Following this logic, I assume the robots would be compared to humans as long as we shared the same space. For the reasons outlined above, I have no doubt that humans would create a robot colony in such a state that the robots would be unaware of humanity, allowing them the freedom to create their own society unfettered by awareness of a superior species, under circumstances similar to those that led to our own mental and emotional development. This would likely be a multi-generational project that could give us insight into how natural evolutionary processes manifest.
Of course, we'd have to protect ourselves from the eventuality that they might create dangerous weapons that could harm not only themselves but us. For instance, if they were on a planet, we would have to implement some sort of... I don't know... radiation belt around the planet to prevent them from leaving the designated "living space".
EDIT: We should put them on a planet with high mineral content so we can have them extract the resources for our purposes.
@Dagbiker:
So you are saying it can happen? (As a machine can hypothetically do all of these things.)
Secondly though, this list was made purely in relation to the known life on Earth (life built around the abundance of water). These descriptions hold no weight once we go outside the restrictions of water-based planets and ecosystems free of human interference; that is, the list is only useful when talking about the tree of life we know of right now. It is already being questioned: current research into how life on Earth began hints that there may have been organisms here before there was water on Earth (one theory being that water came to Earth, along with some basic organisms, via a meteorite - possibly from Mars, whose surface structure implies it once held water and perhaps life). Such life could easily have worked very differently; it may not have required cells, for instance. So there may have been an overlapping period where both types of organism coexisted - completely different trees of life at the same time. And there may be a level above what we currently consider the top of the definition of life, with a different set of rules (much as there is a specific definition of mammals and a broader definition of organisms: all mammals are organisms, but because "organism" is the more inclusive definition, not all organisms are mammals).
I think the question that needs to be answered first is what it means to be "alive". One definition of life states that it's the condition that separates animals and plants from inorganic matter, so under that definition it would be literally impossible for a machine to be considered "alive". Another states that life can be identified through some series of criteria, although the exact criteria seems to be foggy or change between definitions. It includes growth, reproduction, digestion, response to stimulus, adaptation to environment, cellular composition, consumption of matter or energy, and more. Depending on how you define life the answer could change wildly.
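A criteria-based definition can be sketched as a checklist, where whether something counts as "alive" depends entirely on which criteria you choose to require. The criterion names below are just the ones listed above; the robot's trait values are invented for illustration:

```python
# Toy sketch: "alive" as a configurable checklist of criteria.
CRITERIA = ["growth", "reproduction", "digestion", "response_to_stimulus",
            "adaptation", "cellular_composition", "energy_consumption"]

def is_alive(traits, required=CRITERIA):
    # an entity counts as alive iff it exhibits every required criterion
    return all(traits.get(c, False) for c in required)

robot = {"growth": False, "reproduction": True, "digestion": False,
         "response_to_stimulus": True, "adaptation": True,
         "cellular_composition": False, "energy_consumption": True}

# Under the full checklist the robot fails...
print(is_alive(robot))  # False

# ...but drop the biochemistry-flavored criteria and the verdict flips.
relaxed = [c for c in CRITERIA
           if c not in ("cellular_composition", "digestion", "growth")]
print(is_alive(robot, relaxed))  # True
```

The point of the sketch is exactly the post's conclusion: the answer changes wildly with the definition, not with the robot.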
If robots ever become capable of making the same kinds of decisions most humans do, through reasoning that is both logical and illogical, then I would have to consider them some form of alive even if they are not organic. Sentience may be the correct term for it, but it doesn't do justice to how amazing yet scary it will be if or when it happens.
@MooseyMcMan said:
I'd like to think that at this point we've learned from movies like The Terminator and The Matrix that this would be a BAD IDEA!
So no.
Actually, the AI in the Terminator movies is just a freaking military AI that went out of control when it was supposed to help out the armed forces and stuff. The Matrix robots, meanwhile, were persecuted to the point where they kinda earned the right to kick our asses massively. (Seriously, watch The Animatrix. The robots kinda deserved to win.)
@Dixavd said:
Yes. Definitely! One day we will have robots that start as tiny building blocks with a very small string of code. This code will tell them how to grow and what they become. It will also build the basics of their mind, whereas robots today can only respond to things as they are programmed to. But this code will just be the groundwork: it will be able to learn and override its original code.
You know... the same way humans work.
That sounds like a dream I had once, except it had to do with architecture and self-perpetuating geometry instead of intelligence. Trippy!
I don't like this whole "the only way for life to exist is if it's exactly like us: it has to be organic, have a genetic code, grow, evolve, and depend on materials and other resources" attitude. Why? That's the only form of life we know, but that doesn't make it absolute; we know literally nothing about life in the universe other than our own and the other life on this planet.
No.
If you lack a soul, you are not alive. Don't give me any BS about how the brain is just a bunch of electric signals, because it's clearly a lot more than that. There's a certain flair we humans have that no other living thing has been able to replicate, so we certainly won't be able to build such a creation.