Artificial Intelligence


QUOTE (monsta666 @ Jan 06 2009, 06:57 PM) While it's true that humans and computers are not innately superior to one another, each is superior in certain areas. Computers are better at handling large amounts of data and processing them quickly. They say the human mind is better at parallel processing and general recognition. Take the whole CAPTCHA thing: it confuses the mighty computer but is a piece of cake for a human!
tongue.gif
But show that same computer some complex mathematical formula and they'll beat the humans hands down.
Indeed, computers are much "smarter" than humans in certain areas, and vice versa, but you're talking about present-day computers here. That, however, is exactly what is being worked on in AI right now. The main trump card that humans have over machines at the moment is learned pattern recognition (visual, auditory, analytical, etc.). The methods being worked on right now, on both the hardware and software fronts, are designed to store and organize a vast array of information and use it to interpret new information given to them (kind of like humans do). It's still getting off the ground, but there have been some serious advances. The goal is to get computers to do what humans do, in a similar (and someday shorter) amount of time. Couple that with the computer's ability to crunch numbers seriously fast, and we've got some seriously powerful tools at our disposal.
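To make "learned pattern recognition" concrete, here's a toy sketch (the names and data are my own invention, not from any real system): a nearest-neighbour classifier that "recognizes" a new point by comparing it against stored examples, which is roughly the store-and-interpret idea described above.

```python
# Toy pattern recognizer: store labelled examples, then classify new
# input by whichever stored example is closest (1-nearest-neighbour).
def nearest_neighbor(examples, point):
    """examples: list of ((x, y), label) pairs; point: an (x, y) tuple."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    _, label = min(examples, key=lambda e: dist2(e[0], point))
    return label

training = [((0, 0), "cat"), ((0, 1), "cat"), ((5, 5), "dog")]
print(nearest_neighbor(training, (4, 4)))  # closest stored example: "dog"
```

Real recognition systems are vastly more elaborate, but the principle is the same: new input is interpreted against organized past information.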


QUOTE Excluding programming bugs I doubt robots will be dangerous unless the designers built them that way. For a robot to turn against man it needs to develop ambitions for survival or feelings of superiority which are distinctly human qualities. I guess if man could create a robot that could emulate the human mind exactly this could be a problem but it's not a problem I see in the foreseeable future.
I don't know if I completely agree with that. A major component of artificial intelligence is the ability to learn (although nowadays that's limited to the ability to learn what a bitmap of a dog is). As the technology gets ever faster and more advanced, it's altogether reasonable to suspect that AI systems can learn things we wouldn't expect, causing them to act in unpredictable ways, even if there are no bugs whatsoever. Code at this level gets incredibly complex, and can be very unpredictable.

AI at that level, however, is a very long ways away. The stuff coming out nowadays will be very limited in purpose, and will by no means be capable of deciding to do anything malevolent.
 
QUOTE (EggBeast @ Jan 07 2009, 03:13 AM)Indeed, computers are much "smarter" than humans in certain areas, and vice versa, but you're talking about present-day computers here. That, however, is exactly what is being worked on in AI right now.
When that happens, I would argue computers would be superior. At the moment it is 50/50, because the human brain is stronger in certain areas than the computer. If the computer can overcome these weaknesses and become equal to the human brain in pattern recognition, then the computer will be superior. What weakness would the computer have then? Sure, the answer is not a romantic one, but it is the truth. Saying all that, I doubt computers will reach that stage within the next 10 years, and all we say is mere speculation. It's the reason I hesitate to say anything about future developments: it is notoriously difficult to predict what will happen in the far future, particularly in the electronics industry. You never know what will happen in 10 years' time! Who knows, we could even be playing chess together!
tongue.gif
 
Artificial intelligence is a myth. It cannot be done. All we can do is mimic it, not create it. You have to remember, computers are stupid, very very stupid, and know absolutely nothing; an ant has more of a brain than a computer. A computer's intelligence is determined by the programs run on it, so in the end, the programmer determines how smart a computer is.

Even if we had all the technology and data in the world, we could never create something that can think for itself or learn on its own unless it is aided by a program.

Example: If an AI robot saw some person eating an apple, and the robot also saw an apple on the floor, what would it do?

It would go into a thought process, weighing the pros and cons, depending on how good a program is running on it.

object = apple;
if (person_is_eating_apple)
{
    pick_up(apple);
}
else
{
    dont_pick_up(apple);
}

you get my drift...

So in conclusion, all we can do is mimic AI to a point where it may seem so real that they are thinking on their own, but in reality, when you get into the nuts and bolts, they're simply running a very complex program.
 
QUOTE (InuyashaX)Artificial intelligence is a myth. It cannot be done. All we can do is mimic it, not create it. You have to remember, computers are stupid, very very stupid, and know absolutely nothing; an ant has more of a brain than a computer. A computer's intelligence is determined by the programs run on it, so in the end, the programmer determines how smart a computer is. Even if we had all the technology and data in the world, we could never create something that can think for itself or learn on its own unless it is aided by a program...
QUOTE (InuyashaX)...So in conclusion, all we can do is mimic AI to a point where it may seem so real that they are thinking on their own, but in reality, when you get into the nuts and bolts, they're simply running a very complex program.
Practically:

Ants do indeed have a brain, but it is arguable whether ants "think" or if they merely act on instinct, which we may call "programming". There have been engineered robots that can easily complete all the tasks ants do naturally. The only difference is that ants have evolved instincts, while we program them in.

How can you prove that humans aren't simply robots running a program? How is a human different from an incredibly complex program running on an incredibly powerful processor?

'tired' and 'sleep_conditions' are boolean variables
'Sleep' is the sleeping process

if (tired){
    if (sleep_conditions){
        run Sleep;
    }
}

Don't we humans do that, only on a much more complicated level?

Philosophically:

If humans are completely physical, and the matter we are made of, down to the DNA, defines our actions and individuality, then creating an artificial, thinking, feeling being is physically possible. This is merely proof of concept, as practically we are a long way off; but if humans are purely physical matter, then it is indeed theoretically possible.

However, if there is some sort of soul, or higher being, then it is not nearly as clear cut... or is that what you were getting at?
 
Are you saying we're all robots?

Well, the question is, if we break a human down to their DNA, and we break DNA down further, will we eventually find something mechanical?

Are we assuming robots can only be made mechanically, or possibly biologically too?

Well, with that logic, how about this: here's the simplest way to create a being with AI, and it only takes 9 months!

Heh, catch my drift. I doubt we're robots, though we can't rule anything out; we could possibly be so real we can't even tell.
 
Actually, you can't get much more mechanical than DNA. It doesn't work by magic; it works by having other chemical chains physically connecting to it and disconnecting. It is as mechanical as a key turning in a lock, just at a smaller scale.
 
I did mention that it was only proof of concept, not proof of practicality...

Mechanical does not necessarily mean metal.

To elaborate on what snorky said, DNA is almost like a programming language (think firmware or a BIOS).

I never said we were robots. I said that if we are entirely physical, then it is possible to artificially create a being with intelligence and emotions.
 
Interesting thread
wink.gif


There are many things I want to talk about in order to present my own opinion on this.

First, it's not really about whether they can "think" (at least by definition; after all, it'd be an AI we'd be talking about, and I think it's possible), but rather whether they can really "feel". The movie Bicentennial Man really put that issue in perspective for me; it makes me think it's more about perception and self-awareness, and that if there's a soul, it consists of intelligence or self-awareness.

But there is no way to know for sure. There IS such a thing as facts, but what we humans call truth is in no way such; it's just what we perceive and believe/accept as fact, since there's no way to know something is a fact beyond reasonable doubt. It's like Socrates said: "I only know that I know nothing" (which means "I only know I don't know anything for sure").

Besides, no one knows what a soul really is, since philosophically and religiously it's been described differently for millennia. So there's no point arguing or discussing what souls are or whether they exist; there's only a point in accepting that we don't know yet.

Also, what would YOU define as AI: an AI with the IQ of a human being, of a fly, of a dog? Get my point? We won't really have a way to know when we actually do have AI in our hands, for various reasons, one of which is lack of comparison and another lack of understanding. We think of computer capabilities in RAM and GHz and of human intelligence in IQ, so unless we make a neural interface like in Bicentennial Man (or a virtual imaging program just as "complex") that either "works both ways" or at least should work exactly like a human brain, we wouldn't know for sure when we are looking at AI. And even so, there's still the soul dilemma, and whether we are truly correct on everything we define: intelligence, soul, etc. So yes, it's a pain in the "head" because of the many variables, but we still can't prove the opposite.

We all believe in some god, and almost all constitutions somewhere read "in the name of god"; even the US dollar says "In God We Trust". Even so, we still can't prove beyond a shadow of a doubt that there is a god (Catholic or otherwise), either in a court of law or in a scientific lab, and yet there are laws to protect people's right to believe, and even scientists believe.

I suggest everyone who hasn't should watch Bicentennial Man and A.I. While I don't like how they portray the kid in A.I. (he looks totally emotionless, only caring and behaving how he's supposed to, like he's just 0s and 1s), the ending, where the aliens treat him as if he had feelings, shows my point pretty clearly: we would never know for sure, and it's better to be on the safe side; not to prevent The Matrix or Terminator from happening, but to prevent a possible (innocent) sentient life form from experiencing hell...

PS: and yes, I do personally back up all the talk of organic life forms being organic machines; after all, it's true. But that and this are linked yet actually separate questions. Whether life forms are machines or not is just about each one's own definition; chickenwing and the people posting right before him are right that we are machines. However relevant that is, discussing it is drifting slightly off-thread; the thread is about AI, not artificial life forms.
laugh.gif
 
Not that this part is the most relevant to this topic, but I thought I would just clarify a couple things...
QUOTE (Klyern @ Apr 22 2009, 01:49 AM)We all believe in some god, and almost all constitutions somewhere read "in the name of god"; even the US dollar says "In God We Trust". Even so, we still can't prove beyond a shadow of a doubt that there is a god (Catholic or otherwise), either in a court of law or in a scientific lab, and yet there are laws to protect people's right to believe, and even scientists believe.
Actually, not all of us believe in some god (case in point: I personally don't believe in any god). Second, not all constitutions read "in the name of god" somewhere (case in point: the United States Constitution doesn't even contain the word "god"; the founders made a point of this). And lastly, US currency didn't say "In God We Trust", and the pledge didn't contain "under God", until the mid-1950s, when a Catholic lobbying group (the Knights of Columbus) convinced Congress to require it.

But on the whole, I have to say that I agree with you, Klyern. Just one other thing, though: I don't think it matters at all whether or not a machine is capable of "emotion" in order to be deemed artificially intelligent. Saying that just stems from the human tendency to project ourselves onto everything else. Emotions (if you "believe" in science) are simply a human learning mechanism very specific to our species (well, and to those animals most closely related to us, if you want to get picky). There's no reason we should force machines to exhibit emotions before we'll call them artificially intelligent, the same way we wouldn't expect an alien form of life to exhibit our same emotions.

BUT! I recently read an interesting article. Remember when IBM built a supercomputer that beat the world's best chess players? Well, right now there's talk of IBM making another supercomputer to challenge the top Jeopardy players, which would be a huge step forward in AI. Here's a link...

http://www.pcmag.com/article2/0,2817,2346066,00.asp

Cool stuff
 
It might be a little late for me to join in, but this is very relevant to my interest, so I'll try anyway.

One of the Computational Intelligence (a subfield of AI) lectures that I attended consisted of a philosophical discussion, which included a lot of the stuff that came up in this thread. (However, we had a more atheistic approach: no mention of souls or higher beings.)

The answer to what computers can achieve is very simple:
By definition, computers are mathematical devices. This means that anything that can't be modeled mathematically can't be achieved by computers. However, any goal that lies within the realm of mathematics either already has been or eventually will be achieved by computers.
(As a side note: Not all mathematics has to do with numbers. The concept of abstract algebra has made mathematics a very broad field.)

What is intelligence?
There's no clear definition yet, but luckily there is some agreement when it comes to some of the sub elements, e.g.:
The ability to reason. (This has been mathematically modeled and incorporated in computers through propositional logic, but only to a limited extent so far.)
The ability to learn, apply what you have learned, and learn from what you have applied. (This is the minimal requirement for computational intelligence.)
The ability to act rationally. (If it doesn't, I delete it.)
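The first of those, reasoning via propositional logic, can be illustrated with a brute-force entailment checker (a sketch of my own, not from any lecture): it tries every truth assignment and reports whether the premises force the conclusion.

```python
from itertools import product

# Propositional entailment by truth-table enumeration: do the premises
# force the conclusion under every assignment that satisfies them?
def entails(premises, conclusion, symbols):
    """Formulas are functions from an assignment dict to bool."""
    for values in product([True, False], repeat=len(symbols)):
        env = dict(zip(symbols, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # counterexample found
    return True

# Modus ponens: from P and (P implies Q), conclude Q.
premises = [lambda e: e["P"], lambda e: (not e["P"]) or e["Q"]]
print(entails(premises, lambda e: e["Q"], ["P", "Q"]))  # True
```

This enumeration blows up exponentially in the number of symbols, which is one reason machine reasoning is still "limited" as noted above.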

Some extra definitions used in AI:
General intelligence - The ability to exhibit intelligent behavior under any conditions.
Specialized intelligence - The ability to exhibit intelligent behavior under specific conditions.
Intelligent Agent - An entity created using AI techniques.

The Turing test:
Considered highly flawed. Here are only a few of the flaws we came up with.
1) Human intelligence is not constant; even two humans pitted against each other could confuse the judge. And who would make a good judge is also up for debate. (I heard a rumor that one AI was able to pass the test because a few computer scientists were mean enough to make a 6-year-old the judge.)
2) There are different kinds of intelligence. Just as some humans are better at maths and others better at art, an AI can show too much competence in one field, thus giving itself away.
3) The test doesn't just test for intelligence, but also stupidity. If an AI doesn't get things wrong like a human, it will fail the test.
4) My favorite and a very good argument: "Why would I want to create another human?"

QUOTE A major component of artificial intelligence is the ability to learn (although nowadays that's limited to the ability to learn what a bitmap of a dog is). As the technology gets ever faster and more advanced, it's altogether reasonable to suspect that AI systems can learn things we wouldn't expect, causing them to act in unpredictable ways, even if there are no bugs whatsoever. Code at this level gets incredibly complex, and can be very unpredictable.
An AI can only do and learn what the programmer allows it to do and learn. Even in the case of genetic programming (the self-reprogramming kind of AI), the AI is still limited to the syntax the programmer put forth.
However, I do agree that this kind of code is error-prone and can as such cause unexpected results.
Making my first neural net consisted of: 1 day of designing, 1 day of programming, 2 weeks of debugging, and 5 minutes of enjoying the fruits of my labor.
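For the curious, a neural net doesn't have to be a two-week project: a single perceptron (the smallest possible net) fits in a few lines. This is my own minimal sketch, learning the AND function with the classic error-correction rule, not the net described above:

```python
import random

# A single perceptron learning AND: adjust weights toward the target
# whenever the prediction is wrong (the classic perceptron rule).
random.seed(0)
w = [random.uniform(-1, 1) for _ in range(3)]  # two weights + a bias
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def predict(x):
    s = w[0] * x[0] + w[1] * x[1] + w[2]
    return 1 if s > 0 else 0

for _ in range(25):                  # a few training epochs suffice here
    for x, target in data:
        err = target - predict(x)    # -1, 0, or +1
        w[0] += 0.1 * err * x[0]
        w[1] += 0.1 * err * x[1]
        w[2] += 0.1 * err

print([predict(x) for x, _ in data])  # [0, 0, 0, 1]
```

AND is linearly separable, so this converges quickly; the debugging pain starts with multi-layer nets and backpropagation.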
 
Errors in modeling can lead to improvements as well as setbacks. An imperfect system may even learn, improve, and improvise after discovering an inconsistency. For instance, given a model of a room as having 4 walls, if a room turned up with 5 walls, finding a use for the fifth wall, or adjusting to the fact that a fifth wall could exist, may lead to another level of understanding. Also, AI might not be implemented as a purely digital model. When I was starting college, many aerospace applications still used analog computers; they were faster, though the answer was not always as precise.
 
Great discussion, sorry for the late arrival!
wink.gif


Regarding the Turing test, I believe they had some moderately believable computer generated scripts in the seventies, and I don't really see it as relevant today. All it proves is the progress in computing power and a better grasp of conversational psychology.

Some people have argued that consciousness itself is an aberration that has developed because of the complexity of the different areas of the human brain in order to "negotiate" and avoid internal conflict. I.e. we always (often without realising it) "rationalise" our choices, often thinking we make a conscious choice, but where brain scans have demonstrated that the choice has already been made. It's quite a disturbing field to look into - "we" have so little control over ourselves, and most processes are entirely "subconscious". I still sometimes suffer crises of self-confidence when I believe I'm making a neutral, balanced decision based on the evidence available....
rolleyes.gif


I don't know if anyone reads Iain M Banks novels (beardy genius!), but he suggested that when an AI was built with no cultural or instinctive bias in a super-sophisticated future, it instantly sublimed into a supreme being. My take on this is the way the motivation has been pretty much missed out of the discussion so far. Even if we could create a convincing simulation of an AI, if it had no basic needs or motivation it would just sit there - although I suppose it might not make a bad slave if you could make it fear death, and kept threatening it! Most of our motivation is hormonally driven, sex, death, friendships etc. - without this we're nothing. Of course, it's also a curse!
laugh.gif
 
I remember taking some psychology classes on cognitive dissonance where we discussed the subfunctions for making decisions. There is very little that isn't almost arithmetically modelable. Modern advertising methods heavily leverage this, and it is the reason you are bombarded with the same commercial over and over. It is also why some political agendists keep hitting you with the same lie until many people around you take it as fact.
 
OP: Can machines think?

Well, what is thinking? As far as I can tell it can be stripped down to gathering data and making a decision on it based on a set of goals. In this respect, machines can think, though most of the time they're constructed with a predefined set of thoughts (i.e. case 1, do x, case 2, do y, default to z, etc.). It can get more complicated with compound conditionals, but generally how a machine is going to think is predefined before it starts thinking and won't change unless a user gives it certain data (a new program, a patch, etc.).
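That "predefined thinking" (case 1 do x, case 2 do y, default to z) can be sketched as a dispatch table; the situation names are made up for illustration, but the point is that every input the machine will ever face has to fall into a case fixed by the designer:

```python
# "Predefined thinking": a fixed mapping from situations to actions.
def predefined_think(sensor_reading):
    cases = {
        "obstacle": "turn left",   # case 1, do x
        "cliff": "stop",           # case 2, do y
        "clear": "go forward",
    }
    return cases.get(sensor_reading, "idle")  # default to z

print(predefined_think("cliff"))  # stop
print(predefined_think("fog"))    # idle: a case the designer never anticipated
```

Nothing the machine experiences changes the table; only a user shipping a new table (a patch) does.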

So, does predefined thinking count as thinking? Technically, sure, but it's not what most of us think of when we think about thinking. We're talking about AI, of which thinking is but one small piece. So what are the differences between predefined thinking and intelligent thinking? One of the major things that stands out is that intelligent thinking allows changes in how it thinks from any and all data, not just a special portion. Or rather, it decides which data will not affect how it thinks.

This means that there are thoughts about thoughts and thoughts about thinking in an intelligent system. However, at what level do you stop calling them thoughts about thinking or thoughts about thoughts about thinking, etc., and start calling them self-aware?

All neglecting, of course, that tricky bit: goals. Goals could be considered a special type of thought, as they are created the same way. However, goals are the measuring stick that proposed plans of action produced by thoughts get filtered against; they affect how thought processes are changed. As humans, we have some goals biologically ingrained. For a machine to be considered intelligent, would it have to be built with a basic set of goals upon which it could elaborate?
 
It's going to seem like I'm picking away at what people are saying, but I'm merely adding my own opinion, offering a new way to think about the topic, or pointing out missing facts.

I'm bringing this quote to the top because it's most relevant to my argument.

[QUOTE:= Konohamaru-chan ]Hmm, I don't think mere emulation is a valid criterion for defining intelligence.
If I were to define intelligence, it would be related to the degree of skillfulness of a living creature in staying alive.

Thus the computer that just returns conversation could not be qualified as intelligent. It would prove intelligent if it started, in the course of dialogue, to lie, to seduce, to threaten, to empathize, or to try to persuade you to upgrade him (her? it?). Basically it would then be using language for its own selfish purposes, that is to say, intelligently.[/QUOTE]

I do agree that simulating conversation shouldn't define intelligence.

I have to argue the point that computers CAN'T think. Computers are based on math: they take the question you give them and return the correct answer based on the information given to them. You give them a command, they finish the command. A computer has no free thought process; it cannot do math in a new way, it cannot be influenced by emotion, and it does not have the ability to lie, or to know when to lie to you. These things divide what should be called thinking from mere processing of information. We have the ability to think about a subject that is not brought before us.

[QUOTE:= Gustav1976-sama]On a different note, it could also be argued that the majority of us are not as self-aware as we think we are. Before you dismiss this, consider: whilst you are aware of your existence and your environment, how aware are you really? Your mind automatically filters out all stimuli that it determines to be normal, i.e. a 100% self-aware person would be aware of the air pressure on their skin and every sound that reaches their ears, but our brains/minds have learned to subconsciously block the majority of these sensations, which we experience anyway, because if they didn't, the result would be an overload of stimulation. So, how self-aware are you REALLY?[/QUOTE]

I do agree with some of this statement, and I'd like to add that most of your senses are assumed. You don't see everything your eyes and mind process; you assume so much of what you see.

[QUOTE:= chickenwing71x-san] If humans truly evolved, and our brain is completely scientific and brought about by coincidence and evolution, a machine certainly has the potential to think as humans do. At the most basic level the brain works on positives and negatives, firing and not firing, just as modern circuits do.

We will never know until we can understand how our own brains work, which is still largely unknown. Just think about it, electrical pulses give us an identity and personality? Small cells exchanging signals can give us the ability to understand these cells? It's amazing, and beyond current technology. In the future, it might be possible, but if humans have a higher being, a spirit, a soul, then it is likely not possible.[/QUOTE]

We don't know this for sure yet. Too much of the brain is still a mystery to confirm this. We might be working on positives and negatives, but there is always a chance in our brain that something doesn't fire the right way, or the signal is lost or changed.

The choice of whether or not to sleep:
CODE loop {
    check [sleep conditions]
    if [sleep conditions met], then [sleep]
    else [standby]
}

I get what you're trying to say, but I have to say this is a bad example. You generalise the sleep conditions; there are too many times when a condition might be met but it's still not a good time to sleep. You can define all the conditions, but not all conditions need to be met, or there might be a new condition that needs to be added. As humans we can add that in; a computer, as it runs, doesn't know that it needs to be added.



[QUOTE:=Klyern-chan]We all believe in some god etc[/QUOTE]
No, not everyone.
North America

A 2004 BBC poll showed the number of people in the US who don't believe in a god to be about 10%.[7] A 2005 Gallup poll showed that a smaller 5% of the US population believed that a god didn't exist.[21] The 2001 ARIS report found that while 29.5 million U.S. Americans (14.1%) describe themselves as "without religion", only 902,000 (0.4%) positively claim to be atheist, with another 991,000 (0.5%) professing agnosticism.[22] The most recent ARIS report, released March 9, 2009, found that in 2008, 34.2 million Americans (15.0%) claimed no religion. Of those, 1.6% explicitly describe themselves as atheist or agnostic, double the previous 2001 ARIS survey figure. The highest occurrence of "nones", according to the 2008 ARIS report, resides in Vermont, with 34% surveyed.[23]

Atheism is more prevalent in Canada than in the United States, with 19-30% of the population holding an atheistic or agnostic viewpoint.[29] The 2001 Canadian Census states that 16.2% of the population holds no religious affiliation, though exact statistics on atheism are not recorded.[30] In urban centres this figure can be substantially higher; the 2001 census[31] indicated that 42.2% of residents in Vancouver hold "no religious affiliation." A recent survey in 2008 found that 23% of Canadians said they did not believe in a god. [32]
From Wikipedia; it won't let me post links.

After that my mind just trailed off. Sorry if I'm re-posting any facts.
 
Finally I got a chance to post again.

QUOTE I don't know if anyone reads Iain M Banks novels (beardy genius!), but he suggested that when an AI was built with no cultural or instinctive bias in a super-sophisticated future, it instantly sublimed into a supreme being. My take on this is the way the motivation has been pretty much missed out of the discussion so far. Even if we could create a convincing simulation of an AI, if it had no basic needs or motivation it would just sit there - although I suppose it might not make a bad slave if you could make it fear death, and kept threatening it! Most of our motivation is hormonally driven, sex, death, friendships etc. - without this we're nothing. Of course, it's also a curse!
Without a goal and the ambition/instinct to achieve that goal, it would literally do nothing. I'm not even sure if it would compile as a valid program; It would just be an empty program.
In the techniques that I'm familiar with, ambitions/instincts/main goals are still something preprogrammed. I'm not sure if this will change anytime soon.
In the case of the Turing Test, the goal would simply be to pass it. The IA would then have to identify any sub-goals.
Fear of death would in itself be an instinct. Although giving it the instinct to pursue points will probably be easier than teaching it the concept of death; the technical term is reinforcement learning.
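A minimal sketch of that "pursue points" idea (the numbers and setup are my own toy example, not a technique from the course): an agent facing two levers learns which one pays more purely from the rewards it receives, never from being told.

```python
import random

# Epsilon-greedy two-armed bandit: the simplest reinforcement learner.
random.seed(1)
value = [0.0, 0.0]      # the agent's estimated payoff of each lever
counts = [0, 0]
payoff = [0.2, 0.8]     # true win probabilities, hidden from the agent

for _ in range(2000):
    if random.random() < 0.1:                     # explore occasionally
        a = random.randrange(2)
    else:                                         # otherwise exploit the best estimate
        a = 0 if value[0] >= value[1] else 1
    reward = 1.0 if random.random() < payoff[a] else 0.0
    counts[a] += 1
    value[a] += (reward - value[a]) / counts[a]   # running-average update

print(value)  # the estimates approach the true payoffs
```

The "instinct" here is just the hard-coded drive to maximize points; everything else the agent figures out from experience.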


QUOTE but generally how a machine is going to think is predefined before it starts thinking and won't change unless a user gives it certain data (a new program, a patch, etc.)
Yes and no. The techniques that I'm familiar with are in fact "fixed I.Q." techniques. For instance, standard neural nets are user-defined, i.e. they don't gain or lose neurons unless the user says so. However, what this predefined NN can learn depends on its circumstances. For instance, if you make an NN to approximate functions, then the complexity of the functions it can learn is determined by the size of the NN (predefined), but the actual function it learns is not determined by the user. It will try to learn a function by producing answers and making adjustments based on the errors it produces. If circumstances change, i.e. a new function is introduced, then it will forget the old function and learn the new one using the same method.
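That learn-from-errors loop can be sketched with the simplest possible "net", a single linear unit fitted by gradient descent (my own toy example; real NNs add nonlinear neurons and layers). The target function is never written into the model; only its errors shape the weights.

```python
# One linear unit learning y = 3x + 2 purely from its prediction errors.
w, b = 0.0, 0.0
samples = [(x, 3.0 * x + 2.0) for x in range(-5, 6)]

for _ in range(500):                 # repeated passes over the samples
    for x, y in samples:
        err = (w * x + b) - y        # prediction error on this sample
        w -= 0.01 * err * x          # nudge the weights down the gradient
        b -= 0.01 * err

print(round(w, 2), round(b, 2))      # close to 3.0 and 2.0
```

Swap in samples from a different function and the same loop "forgets" the old one and fits the new one, which is exactly the behavior described above.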


QUOTE it cannot be influenced by emotion
I propose the following simplified mathematical model for "anger in humans":

CODE if in offended state
increase serotonin (not sure if this is the right one)
if serotonin levels high
act angry

Doing this in AI would be a bit complicated, but I would start with one NN that learns how to recognize the correct emotional states, and another that learns the appropriate behavior.


QUOTE But i have to say this is a bad example. you generalise the sleep conditions, there are to many times when a condition might be met that but it's still not a good time to sleep. you can define all the conditions, but not all conditions need to be met or there might be a new condition that needs to be met. as humans we can add that in, as a computer runs, it doesn't know if that needs to be added.
The sleep conditions by themselves form a mathematical function. For this an NN would work well, since it can learn the conditions over time. If a new condition comes into play, then the NN just needs to learn the new function. An NN would even be able to assign different degrees of importance to each condition.
 
ph34r.gif

There are two conditions that would allow AI on the way to becoming aware:
1) the ability to learn/accumulate information.
2) the ability to analyse data/information in order to generate a set of mathematical functions to model such a curve.

- Currently people are dying trying to mimic the human learning condition by generating bunches of functions with unique parameters and variation arrangements, in order to accumulate information like a human does, but there is still a long way to go.
- Granted, computers today can take in information much faster than an average human does. However, the ability to analyze information can still only be achieved by a person, largely due to the limitations of current mathematics. Even so, because of mathematical singularity, the possibility of AI does exist.
 
QUOTE I propose the following simplified mathematical model for "anger in humans":
CODE
if in offended state
increase serotonin (not sure if this is the right one)
if serotonin levels high
act angry

We're not 3. We're not going to make a computer that thinks like a 3-year-old.

What is an offended state? How do you define that?
How do you define what makes a machine angry?

I propose that you can't make a machine angry, because you have to define what makes it angry; therefore you are only telling the machine to get angry because something is said. And under the assumption you do define it, what if someone says the keyword that the computer must look for to become angry, but it's not used in a negative sense?

I understand we can force a machine to react as if it has emotion, but it's still just us telling it to do it. The problem with math is that it's defined.

1 + 1 = 2. There's no other answer.

An emotion would be a much longer set of code. You would need to calculate EVERYTHING, over and over and over and over and over and over again.


CODE Loop
    If person is NOT friendly
        release.serotonin(.5)
    End if
    If person does not like you
        release.serotonin(.5)
    End if
    If person is not joking
        release.serotonin(.5)
    End if
    If stomach > 110 or stomach < 90
        release.serotonin(.5)
    End if
    If battery < 40
        release.serotonin(.5)
    End if
    If serotonin >= .5
        Post: response
    End if
End loop



That's not even 1/1,000,000th of the things that affect how you react to someone saying anything. That's still leaving out how to define each statement, as well as which response to use based on how angry you are. I also left out a positive response, interaction with the environment, the time of day, what's going on outside at the time, whether the other person is male or female, and your current mood; there are too many things you don't realize you think about.

Emotions are too complicated to be defined that way, and after all, the machine is still only running through a process that you tell it to. There is no free will here; it just does as it's told.
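The objection can be made concrete with a toy model mirroring the pseudocode above (every input, the 0.5 increments, and the threshold are arbitrary choices invented for illustration): the "emotion" is just a counter compared against a threshold, so the response is entirely dictated by rules the programmer wrote.

```python
# Toy "anger" model mirroring the pseudocode above. All conditions,
# increments, and the threshold are arbitrary programmer choices.

def anger_response(friendly, likes_you, joking, stomach, battery):
    serotonin = 0.0
    if not friendly:
        serotonin += 0.5
    if not likes_you:
        serotonin += 0.5
    if not joking:
        serotonin += 0.5
    if stomach < 90 or stomach > 110:
        serotonin += 0.5
    if battery < 40:
        serotonin += 0.5
    # The "emotion" is nothing more than a threshold test on a counter.
    return "angry response" if serotonin >= 0.5 else "calm response"

# An unfriendly remark trips the threshold; a friendly one doesn't.
anger_response(friendly=False, likes_you=True, joking=True,
               stomach=100, battery=80)   # -> "angry response"
anger_response(friendly=True, likes_you=True, joking=True,
               stomach=100, battery=80)   # -> "calm response"
```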


That's all for now, I'll likely edit this later.
 
QUOTE (InuyashaX @ Jan 29 2009, 06:22 PM) Are you saying we're all robots?

Well, the question is: if we break a human down to their DNA, and we break DNA down further, will we eventually find something mechanical?

Are we assuming robots can only be made mechanically, or possibly biologically?

Well, with that logic, how about this: here's the simplest way to create a being with an AI, and it only takes 9 months!

Heh, catch my drift? I doubt we're robots, though we can't rule anything out; we could possibly be so dang real we can't even tell.
Although you are probably being insincere here, I think you do touch on the truth: it would be much easier to rebuild and reconstruct the mind (a la Dollhouse) than to build it completely from scratch. We already have the blueprint for human thought; tweak it to make it better, faster, smarter, more specialized, and we have a great biological machine! Of course there are ethical problems with this method, but oh well.

The other method would be to recreate the human brain (something we hardly know anything about to begin with) to mimic human thought. Not only is this outrageously difficult, it is probably impossible. No matter how closely you mimic a human, a computer will not be able to bypass the conditions in its coding. You can tell a computer when to be sad and how to act, but can it ever really act compulsively? Can it ever invent something great or start groundbreaking science? I find it hard to believe a computer can actually mimic the impulsive, innovative side of humans. You can discuss strange loops and ghosts in the machine all you want, but it is probably just science fiction. The fact remains that a computer run with the same conditions twice will always act the same way, while a human run under exactly the same conditions twice could, and probably will, act differently both times.
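The determinism claim here can be illustrated with a toy sketch (the function, inputs, and seed are all invented for illustration): even when a program uses "randomness", fixing its inputs and its random seed fixes its behaviour completely.

```python
import random

def respond(insult_level, seed):
    # The program's "mood" is fully determined by its inputs and seed:
    # rerunning with the same arguments reproduces the same behaviour.
    rng = random.Random(seed)
    anger = insult_level + rng.random()
    return "angry" if anger > 1.0 else "calm"

# Same conditions twice -> exactly the same response, every time.
first = respond(0.8, seed=42)
second = respond(0.8, seed=42)
```

This is the sense in which "a computer run with the same conditions twice will always act the same way": any apparent unpredictability traces back to some input (including the seed) that differed between runs.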

But we need to know more about the brain, and how it is able to do that, to be able to program computers to mimic it. And like I said earlier, at that point it would be easier to just rewrite brains instead of mimicking them in a robot.
 
You know, we have been pondering the psychological and philosophical side of this for too long, and I've been wondering if it's even possible (someday it might be) to recreate the huge storage capacity of the human brain with non-organic matter, using today's hardware technology? (I think not.) I don't mean, lol, an 8 GHz, 200-terabyte computer; I mean something like building a supercomputer.

edit: also, I wonder if we actually need that storage capacity for an AI to be considered such. I.e.: animals? I think they don't have as much memory, but they do have emotions etc.; could the first AI possibly become animal-like in nature?
 