QUOTE (monsta666 @ Jan 06 2009, 06:57 PM) While it's true that humans and computers are not innately superior to one another each are superior in certain areas. Computers are better at handling large amounts of data and processing them quickly. They say the human mind is better at parallel processing and general recognition. Take that whole CAPTCHA code. It confuses the mighty computer but is a piece of cake for a human!
But show that same computer some complex mathematical formula and they'll beat the humans hands down.
Indeed, computers are much "smarter" than humans in certain areas, and vice versa, but you're talking about present-day computers. That gap is exactly what AI research is working on right now. The main trump card humans hold over machines is learned pattern recognition (visual, auditory, analytical, etc.). The methods being developed, on both the hardware and software sides, are designed to store and organize a vast array of information and then use it to interpret new information given to the system (somewhat like humans do). It's still getting off the ground, but there have been some serious advances. The goal is to get computers to do what humans do in a similar (and someday shorter) amount of time. Couple that with a computer's ability to crunch numbers seriously fast, and we've got some seriously powerful tools at our disposal.
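To make the "store examples, then interpret new input" idea concrete, here's a minimal sketch of a 1-nearest-neighbour classifier in Python. This isn't any particular research system, just a toy illustration with made-up data: the program keeps a "memory" of labelled feature vectors and labels a new input by finding the stored example it most resembles.

```python
# Toy pattern recognition: classify a new sample by its closest
# stored example (1-nearest-neighbour). All data is invented.

def distance(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(memory, sample):
    """Return the label of the stored example closest to `sample`."""
    _, label = min((distance(features, sample), label)
                   for features, label in memory)
    return label

# "Memory" of labelled examples: (feature vector, label) pairs.
memory = [
    ((0.9, 0.1), "cat"),
    ((0.8, 0.2), "cat"),
    ((0.1, 0.9), "dog"),
    ((0.2, 0.8), "dog"),
]

print(classify(memory, (0.85, 0.15)))  # nearest stored examples are cats
```

Real systems use far richer features and learned distance measures, but the core move is the same: interpret new input by comparing it against organized stored experience.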
QUOTE Excluding programming bugs I doubt robots will be dangerous unless the designers built them that way. For a robot to turn against man it needs to develop ambitions for survival or feelings of superiority which are distinctly human qualities. I guess if man could create a robot that could emulate the human mind exactly this could be a problem but it's not a problem I see in the foreseeable future.
I don't know if I completely agree with that. A major component of artificial intelligence is the ability to learn (although nowadays that's limited to things like learning what a bitmap of a dog looks like). As the technology gets faster and more advanced, it's reasonable to suspect that AI systems could learn things we don't expect and act in unpredictable ways, even with no bugs whatsoever. Code at this level gets incredibly complex, and that complexity breeds unpredictability.
AI at that level, however, is still a long way off. The stuff coming out nowadays is very limited in purpose and is by no means capable of deciding to do anything malevolent.