#1 Posted by Zithe (1045 posts) -


#2 Posted by falserelic (5334 posts) -

Probably in the far future where I will be long dead.

#3 Posted by LD50 (416 posts) -

Well if by alive you mean in the biological sense, I suppose they would need to be built out of biological material. That's likely.

If you mean with a soul or something...NEVA!

Is that racist?

#4 Posted by Incapability (201 posts) -

I believe the word you're looking for is sentient.

And no. Extremely advanced? Yep.

Sentient?

Nah.

#5 Posted by mlarrabee (2889 posts) -

No. A CPU running logic loops will never constitute life, no matter how well the programmer wrote it.

The second a robot displays illogical compassion or love is the second I reconsider my answer.

#6 Posted by sins_of_mosin (1556 posts) -

Who the hell knows or cares at this point. It's like asking if you think we'll ever make faster-than-light travel possible.

#7 Posted by Viking_Funeral (1741 posts) -

No, but people will act like they are.

#8 Posted by Dixavd (1305 posts) -

Yes. Definitely! One day we will have robots that start as tiny building blocks with a very tiny string of code. This code will tell the robot how to grow and what it becomes. It will also build the basics of its mind; robots today can only respond to things as they are programmed to. But this code will just be the groundwork: the robot will be able to learn and override its original code.

You know... the same way humans work.
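
The grow-from-a-seed idea above reads a lot like an agent whose learned rules can shadow its built-in ones. A toy Python sketch of that one mechanism (all names and the "experience" step are hypothetical, not any real robotics API):

```python
# Toy sketch: an agent with a tiny "seed" rule table that learned rules can override.
class SeedAgent:
    def __init__(self):
        # The "tiny string of code": a single built-in reflex.
        self.rules = {"obstacle": "stop"}

    def respond(self, stimulus):
        # Look up a rule; fall back to a default behaviour for the unknown.
        return self.rules.get(stimulus, "explore")

    def learn(self, stimulus, better_action):
        # Experience overwrites the original programming for this stimulus.
        self.rules[stimulus] = better_action

agent = SeedAgent()
print(agent.respond("obstacle"))   # seed behaviour: "stop"
agent.learn("obstacle", "go_around")
print(agent.respond("obstacle"))   # overridden: "go_around"
```

The point of the sketch is only the precedence: whatever was learned last wins over what was shipped in the seed.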

#9 Edited by mutha3 (4985 posts) -

Yeah, definitely. When you get right down to it, the brain really isn't that much different from a computer.
 
I'm way more worried about interstellar travel never panning out than I am about AI if we're going to talk about scifi tropes becoming reality :(

#10 Posted by AngelN7 (2970 posts) -

They will, and once true AI is achieved, be scared, you non-believers! Because I will be damn sure to prove my worth to the superior beings by ratting you out once the takeover starts!

#11 Posted by Zithe (1045 posts) -

@sins_of_mosin said:

Who the hell knows or cares at this point. It's like asking if you think we'll ever make faster-than-light travel possible.

I understand where you are coming from, but I'm not asking whether or not you think the robots will ever become advanced. I'm assuming they will and asking whether you will personally be able to consider them alive or just elaborate imitations.

#12 Posted by falserelic (5334 posts) -

They might be able to turn us into cyborgs. Hope I live long enough to become one.

#13 Posted by Video_Game_King (36110 posts) -
Never forget, buddy.
#14 Posted by Dtat (1623 posts) -

Is it theoretically possible? Absolutely yes. Unless you think the brain is anything more than complex network connections and signaling (which it isn't). Will we ever get there? Who knows.

#15 Posted by TruthTellah (8578 posts) -

Will a robot ever be sentient? No. Will a robot ever be considered legally "alive" in some sense? Probably. I imagine robots will eventually advance to the point where they will at least be considered on a similar level as non-human animals.

#16 Posted by Vestigial_Man (311 posts) -

Aside from being made of cells, it's possible for a robot to display the other six characteristics of life. Even then, the systems in a robot could be considered cells of a kind. Being "alive" isn't that hard to accomplish and has nothing to do with emotion. For example, an amoeba is alive but probably further from sentience than a modern AI.

As far as a robot becoming sentient I don't have a clue. I don't know enough about AI to know whether or not it is possible.

#17 Posted by deox (210 posts) -

Nah, a machine will always be a machine; it doesn't matter how smart it is. Although the thought of a robot becoming fully aware of itself is pretty scary.

Only a matter of time now.

#18 Posted by joshth (501 posts) -
#19 Posted by Animasta (14651 posts) -

three laws motherfucker

#20 Edited by believer258 (11683 posts) -

If you're asking if a robot will ever have the capacity for illogical human emotions or compassion, then no. I doubt it.

If you're asking if we'll ever have something like Data (yes, TNG), who really tries his best to understand humans, then... possibly. But not anytime soon.

If you're asking if a robot can become aware of itself - I'm not sure.

#21 Posted by Shivoa (613 posts) -

I think you may want to formulate that question in terms of Strong AI.

Personally I doubt strong AI will exist in my lifetime, but who knows what the future holds once we finish the silicon trail for Moore's law (15 years? then we move to carbon?) or where quantum computing or bioengineering and growing processors will go. It could be interesting but I wouldn't worry about it any time in the next 20 years, the Turing test is very safe for the moment.
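
For a rough sense of what those 15 remaining years of silicon would buy, the classic rule-of-thumb version of Moore's law doubles transistor counts every two years. A quick back-of-the-envelope (the two-year doubling period is the conventional assumption, not a physical law):

```python
# Back-of-the-envelope: growth factor after `years` of Moore's law,
# assuming the conventional doubling period of 2 years.
def moores_law_factor(years, doubling_period=2.0):
    return 2 ** (years / doubling_period)

# 15 more years of doubling every 2 years is 2**7.5, roughly a 181x
# transistor budget before the hypothetical move off silicon.
print(round(moores_law_factor(15)))
```

Even that much raw hardware says nothing about strong AI on its own; it just frames how much headroom the silicon roadmap still has.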

#22 Posted by John1912 (1833 posts) -

It doesn't matter what something is made of. If you had your brain replaced by a mechanical one that functions exactly the same, are you no longer alive? Are you no longer you? There's no reason a machine can't one day be considered alive, sentient, and intelligent. We are nothing more than biological machines.

#23 Posted by DivineShadow777 (106 posts) -

How do we know that it doesn't already exist on some far-off planet? To space we go!

#24 Posted by Beforet (2912 posts) -

Is it possible? Yeah, I bet it is. But probably not anytime soon. The brain, while similar to a computer, is a really advanced and complex piece of art. It will take a while to get that right. And even if we do, how smart will it be? Can you make something smarter than yourself?

#25 Posted by MooseyMcMan (10566 posts) -

I'd like to think that at this point we've learned from movies like The Terminator and The Matrix that this would be a BAD IDEA!

So no.

#26 Posted by Grimhild (721 posts) -

@Shivoa said:

I think you may want to formulate that question in terms of Strong AI.

Personally I doubt strong AI will exist in my lifetime, but who knows what the future holds once we finish the silicon trail for Moore's law (15 years? then we move to carbon?) or where quantum computing or bioengineering and growing processors will go. It could be interesting but I wouldn't worry about it any time in the next 20 years, the Turing test is very safe for the moment.

This, and the key aspect of human emotion being psychological patterns or defense triggers. At the highest levels, an AI will most likely never develop a thought process around anything other than a programmed sense of self-preservation. Most of our social interactions are based on personal experiences and presumptions from the past, and on how to achieve the reaction we're looking for from our peers.

Simply put, your android bride of the future will probably not exhibit behavior that would be perceived as "daddy issues" or any other social conditioning unless it was programmed to.

But on the other hand, I perceive all of our emotions as being secondary to our own self-preservation, as well. How we act and the things we do are all traced back to the notion of survival. Even the blanket term of "love" is simply the amalgamation of various survival triggers that we quantify as one emotion. So over a long enough time span, I suppose an AI could be programmed for similar patterns of adaptation. From that perspective, being programmed for self-preservation by DNA and being programmed by a technician's algorithm aren't all that far removed from each other, just from an objective standpoint.

But who knows?

Personally, I think if we ever got to that point there wouldn't really be a need to incorporate it into some sort of manufactured vessel, as we would have the technology to simply augment the human brain with the linear problem solving capabilities of a Strong AI. But this still doesn't take into consideration the inherent chaos the human mind sometimes exhibits.

#27 Posted by NlGHTCRAWLER (1215 posts) -

@Animasta said:

three laws motherfucker

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
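
The interesting property of the three laws is their strict priority: each law only applies where it doesn't conflict with the ones above it. A hypothetical sketch of that ordering as a ranked veto check (the action flags are illustrative, obviously not how any real robot represents decisions):

```python
# Hypothetical sketch: Asimov's Three Laws as a ranked veto list.
# An action is permitted only if no higher-priority law forbids it.
def permitted(action):
    # `action` is a dict of boolean flags describing the candidate action.
    if action.get("harms_human"):
        return False  # First Law: never harm a human.
    if action.get("disobeys_order") and not action.get("order_harms_human"):
        return False  # Second Law: obey orders, unless obeying breaks Law 1.
    if action.get("self_destructive") and not (
        action.get("protects_human") or action.get("obeys_order")
    ):
        return False  # Third Law: self-preservation, unless Laws 1-2 demand otherwise.
    return True

print(permitted({"harms_human": True}))                            # False
print(permitted({"disobeys_order": True}))                         # False
print(permitted({"self_destructive": True, "obeys_order": True}))  # True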

#28 Edited by LD50 (416 posts) -

@Grimhild said:

@Shivoa said:

I think you may want to formulate that question in terms of Strong AI.

Personally I doubt strong AI will exist in my lifetime, but who knows what the future holds once we finish the silicon trail for Moore's law (15 years? then we move to carbon?) or where quantum computing or bioengineering and growing processors will go. It could be interesting but I wouldn't worry about it any time in the next 20 years, the Turing test is very safe for the moment.

This, and the key aspect of human emotion being psychological patterns or defense triggers. At the highest levels, an AI will most likely never develop a thought process around anything other than a programmed sense of self-preservation. Most of our social interactions are based on personal experiences and presumptions from the past, and on how to achieve the reaction we're looking for from our peers.

Simply put, your android bride of the future will probably not exhibit behavior that would be perceived as "daddy issues" or any other social conditioning unless it was programmed to.

But on the other hand, I perceive all of our emotions as being secondary to our own self-preservation, as well. How we act and the things we do are all traced back to the notion of survival. Even the blanket term of "love" is simply the amalgamation of various survival triggers that we quantify as one emotion. So over a long enough time span, I suppose an AI could be programmed for similar patterns of adaptation. From that perspective, being programmed for self-preservation by DNA and being programmed by a technician's algorithm aren't all that far removed from each other, just from an objective standpoint.

But who knows?

Personally, I think if we ever got to that point there wouldn't really be a need to incorporate it into some sort of manufactured vessel, as we would have the technology to simply augment the human brain with the linear problem solving capabilities of a Strong AI. But this still doesn't take into consideration the inherent chaos the human mind sometimes exhibits.

Following this logic, I assume the robots would be compared to humans as long as we shared the same space. For the reasons outlined above, I have no doubt that humans would create a robot colony in such a state that the robots would be unaware of humanity. This would allow them the freedom to create their own society unfettered by awareness of a superior species, under circumstances similar to those that led to our own mental and emotional development. It would likely be a multi-generational project that could give us insight into how natural evolutionary processes manifest.

Of course, we'd have to protect ourselves from the eventuality that they might create dangerous weapons that could do harm not only to themselves, but to us. For instance, if they were on a planet, we would have to implement some sort of...I don't know...radiation belt around the planet to prevent them from leaving the designated "living space".

EDIT: We should put them on a planet with high mineral content so we can have them extract the resources for our purposes.

#29 Posted by TobbRobb (4581 posts) -

If it can fool me over a long stretch of repeated meetings, then I'll say it's alive.

I mean, if something feels like a human, then it's human, right?

#30 Posted by mikethekilla (328 posts) -

Robotics isn't even close to being advanced enough for this question to even be considered.

#31 Posted by _Zombie_ (1462 posts) -

Skynet! GAME OVER MAN.

#32 Edited by Dagbiker (6939 posts) -

7 Things All Living Things Have In Common.

  • Made up of cells
  • Reproduction
  • Based on genetic code
  • Growth and development
  • Need for materials and energy
  • Response to environment
  • Maintaining internal balance
#33 Posted by geirr (2483 posts) -

If they can eat and poop, yes.

#34 Posted by insane_shadowblade85 (1401 posts) -

I want a robot friend. Please, science? A Claptrap-like robot would be awesome.

#35 Posted by habster3 (3595 posts) -

As others have said, robots will most likely become very advanced, but I seriously doubt that they'll ever become sentient or even self-aware.

#36 Posted by Dixavd (1305 posts) -

@Dagbiker:

So you are saying it can happen? (All of these things a machine can hypothetically accomplish.)

Secondly, though, this list was made purely in relation to the known life on Earth (life built around the abundance of water). These criteria hold little weight once we step outside water-based ecosystems free of human interference; they are only useful when talking about the tree of life we know right now. The list is already being questioned, as research into how life on Earth began hints that there may have been organisms here before there was water on Earth (one theory being that water arrived via meteorite, along with some basic organisms, possibly from Mars, whose surface structure implies it once held water). Such life could have worked very differently; it may not have required cells, for instance. There may even have been a period when both types of organisms coexisted: completely different trees of life at the same time. And there may be a broader category above what we currently define as life, with a different set of rules, in the same way that there is a specific definition of a mammal and a more inclusive definition of an organism: all mammals are organisms, but not all organisms are mammals.

#37 Posted by Gamer_152 (14058 posts) -

I think the question that needs to be answered first is what it means to be "alive". One definition of life states that it's the condition that separates animals and plants from inorganic matter, so under that definition it would be literally impossible for a machine to be considered "alive". Another states that life can be identified through some series of criteria, although the exact criteria seem to be foggy or to change between definitions. They include growth, reproduction, digestion, response to stimulus, adaptation to environment, cellular composition, consumption of matter or energy, and more. Depending on how you define life, the answer could change wildly.

#38 Posted by ShiftyMagician (2129 posts) -

If robots ever become capable of making the same kinds of decisions most humans do, through reasoning that is both logical and illogical, then I would have to consider them some form of alive, even if they are not organic. Sentience may be the correct term for it, but it doesn't do justice to how amazing yet scary it will be if or when it happens.

#39 Posted by Jay444111 (2441 posts) -

@MooseyMcMan said:

I'd like to think that at this point we've learned from movies like The Terminator and The Matrix that this would be a BAD IDEA!

So no.

Actually, the AI in the Terminator movies is just a military AI that went out of control when it was supposed to help out the armed forces. The Matrix robots, meanwhile, were persecuted to the point where they kinda earned the right to kick our asses massively. (Seriously, watch The Animatrix. The robots kinda deserved to win.)

#40 Posted by Red (5994 posts) -

If robots ever get to the point where they can learn from what they see, then yes.

#41 Posted by SarjuTheRapper (279 posts) -

@Dixavd said:

Yes. Definitely! One day we will have robots that start by tiny building blocks with a very tiny string of code. This code will tell it how to grow and what it becomes. It will also build the basics of its mind that robots today have to simply respond to things as it is programmed to do. But this code will just be the ground work, it will be able to learn and override its original code.

You know... the same way humans work.

that sounds like a dream i had once. except it had to do with architecture and self-perpetuating geometry instead of intelligence. trippy!

#42 Posted by Oldirtybearon (4609 posts) -

No.

#44 Posted by ReyGitano (2467 posts) -

Only in the year 20XX... oh shit... that doesn't seem that far off anymore.

#45 Posted by Russcat (140 posts) -

In the future, when our robot overlords are browsing GiantBomb, I want them to see that I voted yes, so that they won't annihilate me. So yes.

#46 Posted by Swoxx (2988 posts) -

@Dtat said:

Is it theoretically possible? Absolutely yes. Unless you think the brain is anything more than complex network connections and signaling (which it isn't.) Will we ever get there? Who knows.

This, as the kids like to say.

#47 Posted by AngelN7 (2970 posts) -

I don't like this whole "the only way for life to exist is if it's exactly like us: it has to be organic, have a genetic code, grow, evolve, and depend on materials and other resources" attitude. Why? That's the only form of life that we know of, but that doesn't mean it's the absolute. We know literally nothing about life in the universe other than our own and the other life on this planet.

#48 Posted by ComradeKhan (687 posts) -

We are just biological machines.

#49 Posted by LikeaSsur (1497 posts) -

No.

If you lack a soul, you are not alive. Don't give me any BS about how the brain is just a bunch of electric signals, because it's clearly a lot more than that. There's a certain flair that we humans have that no other living thing has been able to replicate, so we certainly won't be able to make such a creation.

#50 Edited by ComradeKhan (687 posts) -

Plants created animals to carry out physical functions beyond the capabilities of the plants themselves - until the animals became self aware. So now we are creating machines to carry out logistical functions beyond our own capabilities - until the machines become self aware.