Artificial Intelligence



anbuu

I want to pose a question to the people of this forum that is well known in philosophy, yet one that many people give very little thought to. The question was first critically posed by Alan Turing in his essay "Computing Machinery and Intelligence", which asks:

Can machines think?

A machine in this instance is just what you would generally regard as a machine in simple terms: anything that we use to perform basic actions, including computers. Thinking is quite a different story, as the definition of thinking and the relation between the mind and the brain are still debated.

To compensate for this, Turing modified the question into a scenario: we have a man A, a woman B, and an interrogator C, who may be of either gender. They play an Imitation Game, where C must work out which of A and B is the woman, with A trying to make C guess incorrectly and B trying to help C. Is it possible for a machine to replace A and successfully fool C into believing it is the woman?
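Turing's setup can be sketched as a toy simulation. The function and player names below are my own invention, and the "players" are just stand-ins for whatever strategy A, B, and C actually use:

```python
import random

def imitation_game(player_a, player_b, interrogator, questions):
    """One round: C questions A and B, then guesses which one is the woman."""
    transcript = {"A": [], "B": []}
    for q in questions:
        transcript["A"].append(player_a(q))  # A tries to pass as the woman
        transcript["B"].append(player_b(q))  # B, the woman, answers for herself
    guess = interrogator(transcript)         # C's verdict: "A" or "B"
    return guess == "B"                      # True if C picked the woman correctly

# Toy players; this interrogator just guesses at random.
woman = lambda q: "I am the woman."
impostor = lambda q: "No, listen to me, I am the woman!"
coin_flip = lambda transcript: random.choice(["A", "B"])

result = imitation_game(impostor, woman, coin_flip, ["Who is the woman?"])
```

Replacing `impostor` with a program that answers convincingly is exactly the substitution Turing proposed.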

So I pose the questions:

Can machines think?
What is the actual process of thinking?
Can a machine play the part of A in the Imitation Game?

Try to think this one out; one sentence answers will be quite insufficient.
 
Well, answering these questions depends very much on how we define "machine" and "think". When you get down to the bare basics of what the brain is, it's ultimately just a biological machine, and we would definitely say that brains think. Going just off of that, I've got to say that there are definitely *some* machines out there that can think.

For example, a few months ago researchers made a robot that teaches itself English. Even as we speak, a number of companies are developing computer structures made to mimic the brain. In the next few decades they expect to create cars that drive themselves, machines that invent new mathematics and physics theories, all kinds of crazy stuff.

...but then you've got the kind of machine one of my professors is trying to get me to build, which just adds numbers in binary over and over again. It's hard to think of something like that as a thinking machine, but as the machinery and the software get more and more complex, you can make machines even more intelligent than a human. It's definitely going to happen. I'm not just saying that out of personal opinion: with the new hardware they're making, and some of these insanely awesome, insanely complex learning algorithms these genius programmers are writing, it's happening right now. Computers are expected to become some 1000 times faster in the next two decades. Combined with the new architectures being made to mimic the mind of a living being, machines will definitely become, at some level, sentient.
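For what it's worth, the "just adds numbers in binary over and over" machine is only a few lines of logic. This is a sketch of my own, not the actual assignment:

```python
def add_binary(a, b):
    """Ripple-carry addition of two binary strings: add_binary('101', '11') -> '1000'."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)   # pad to equal length
    result, carry = [], 0
    for bit_a, bit_b in zip(reversed(a), reversed(b)):
        total = int(bit_a) + int(bit_b) + carry
        result.append(str(total % 2))       # sum bit for this column
        carry = total // 2                  # carry into the next column
    if carry:
        result.append("1")
    return "".join(reversed(result))
```

Each loop iteration is one column of a ripple-carry adder: a sum bit out, a carry into the next column, which is exactly what the hardware version does with logic gates.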

I think the real question is, would we be any different than these machines?
 
Yes, depending on the meaning of "think". You could program your average computer to give a certain answer depending on the question, and to remember incorrect answers and change its answer patterns. As this gets more complex, we get to a new question:

Can you make an artificial, sentient machine?

At the moment we cannot. We can only give it the equivalent of "instincts", and cannot give it self-awareness. But is it possible to achieve this in the future, with only ones and zeroes? This is a strong philosophical debate, as it will eventually lead into debate on the existence of a soul, and what exactly defines humanity. For now, I do not believe it is possible to create an artificial sentient being. But as for participating in your game, it is possible.
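The "remember incorrect answers and change its answer patterns" idea can be sketched as a toy. The class name and the example question here are my own:

```python
class AnsweringMachine:
    """Gives a stored answer per question and replaces it when told it was wrong."""
    def __init__(self):
        self.answers = {}    # question -> current best answer
        self.rejected = {}   # question -> set of answers known to be wrong

    def answer(self, question, options):
        # Pick the first option not yet marked wrong for this question.
        bad = self.rejected.setdefault(question, set())
        for option in options:
            if option not in bad:
                self.answers[question] = option
                return option
        return None          # every option has been rejected

    def mark_wrong(self, question):
        # Remember the failure so the same answer is never repeated.
        if question in self.answers:
            self.rejected.setdefault(question, set()).add(self.answers.pop(question))

m = AnsweringMachine()
first = m.answer("capital of Australia?", ["Sydney", "Canberra"])
m.mark_wrong("capital of Australia?")
second = m.answer("capital of Australia?", ["Sydney", "Canberra"])
```

This is the "instincts" level: it adapts, but only within the options it was handed.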

EDIT: some more thoughts:

By sentience, I mean awareness of oneself and others: it realizes its own existence as well as the existence of others, and it can empathize without losing its own consciousness. At the point it assumes an identity and becomes lucid, sentient, and self-aware, then it is true artificial intelligence.

Again, it goes down to the presence of something more, something higher than physical being. If we are just chunks of electrically-charged meat, bone, and water - yes it is possible. If we have a soul, and a higher being, then no, it is not possible.

"If we could comprehend the human mind, it would be too complex for us to understand." Think about that one.
 
In the future it is entirely possible to have a computing language that uses a more complex system than binary. In the late 1990s scientists started working on a bio-chip which would use four signals.
Binary uses a very simple logic gate which in essence gives the signal "yes" or "no"; the proposed bio-chip would be based on something analogous to DNA/RNA, which has four bases, so the equivalent responses would be "yes", "no", "maybe", or something similar.
If you combine this with increasing miniaturisation, i.e. the fact that computer components are getting smaller and smaller relative to their function, then we might eventually reach a point where we have created a machine that is as complex as a human brain/mind.
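The density gain from four signals is easy to check: each four-state symbol carries two bits, so you need half as many symbols as binary for the same information. A back-of-envelope sketch (not a model of any real bio-chip):

```python
from math import ceil, log2

def symbols_needed(states, base):
    """Symbols required in a base-`base` alphabet to distinguish `states` values."""
    return ceil(log2(states) / log2(base))

# 256 distinct states: 8 binary digits, but only 4 DNA-style quaternary symbols.
binary_len = symbols_needed(256, 2)
quaternary_len = symbols_needed(256, 4)
```

Note the gain is a constant factor of two, not a change in what can be computed; the real promise of a bio-chip would be in density and power, not in new kinds of logic.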
One thing to bear in mind, though, is that many of the skills we use are learned: when a human is first born we run on pure instinct, and it is only as we are taught and learn that we develop skills and a more sophisticated way of thinking.
Remember, we are not born able to talk or walk; newborns learn mainly by absorbing vast amounts of information from their surroundings.
 
A machine can think within the limits of what it's been programmed to think about. In your imitation game, if you program a robot/machine to deceive then that's what it will do; whether it succeeds or fails merely shows how good the programming was.

I think a better question would be: can a machine have autonomy and a will of its own? If I program a robot to stand still and nothing else, can it override its programming and make a "conscious" decision to move? That is a different matter.

QUOTE I am not so sure that people think. I see so little evidence of it. I am not sure what thinking even is and wonder if perception is only an echo. It is already getting to be difficult to tell people from machines.

Good point. If we take a Freudian stance and say children are born as blank canvases, then we are already stained by our parents and the society we are born into, and they shape our thought processes. Even if you completely rebel against your parents' ideology, they have still shaped your thought process. This leads to the question of whether there really is freedom of thought. I am not so sure. All I can say is that human thinking is fundamentally flawed, reflected in the fact that what I have just said is not conclusive and someone could easily criticise me for thinking in this fashion, and vice versa.

So to relate this back to your question: machines are blank canvases which we can program to do what we want, but we can only project our own thought process, which I believe is already flawed, onto a robot, which will then also have a flawed way of thinking.

Also, who is to say there is only one type of thought process? Human beings do not act by logic alone but also by emotion. So this question leaves lots of open-ended stuff to discuss, in which we learn less about machines and more about the human psyche.
 
QUOTE (anbuu @ Oct 30 2008, 11:55 PM)Can machines think?
Like others have already suggested, the answer to your question is largely dependent on what you define "thinking" as. Taking your example of a robot imitating a person, I believe this can be done by a machine; a lot will depend on the programming of the robot. One thing to bear in mind: regardless of how intelligently this machine works, it must also look convincing. An intelligent metal box will not convince anyone it is a human!

To answer this question fully, perhaps it is best to look at animals and humans. Many of the things animals do can be programmed into a robot; as such, those things would not be considered true intelligence (at least in the sense you are suggesting). Humans, on the other hand, have thought processes that are definitely associated with intelligence and are difficult to replicate in a machine. Things like self-awareness and learning ability are what separate man from beast, and this is probably the biggest difference between man and machine. At the moment machines do not demonstrate self-awareness and have limited learning abilities. If you use these criteria to measure intelligence then it is safe to say machines do NOT have intelligence. Then again, like I said earlier, a lot will depend on what you define "thinking" as.
 
QUOTE Things like self-awareness and learning abilities are things that separate man from beast.

Don't rush.
Great apes and dolphins are self-aware on some level (mirror self-recognition, which is quite a high-level ability; a child learns it between the ages of 3 and 4).
And learned abilities, like using tools or mirrors, are quite widespread.
However, I agree with you that the discussion is irrelevant without a much, much more precise definition of thinking.
 
(Just to clear things up, this idea of the Imitation Game is not mine, it was Turing's. I guess I didn't make that clear previously.)

I see that many of you have posed a very good question for the discussion: what exactly is thinking in this context?

Thinking, as best as I can explain it, would be the ability to reason through problems no matter the situation, even when the knowledge lies outside of our limits. Thinking would bring about novel behaviors, and would be cultivated by both environmental and genetic factors.

And I'm also pleased that this question, which I am well aware is quite a simple one to argue, has led to the discussion which I will try to support: the act of thinking and the relation between the mind and the brain. Two theories come to mind, the dualist claim and the mind/brain identity theory, and they deal with exactly this. (Although this may seem off topic, I assure you it is relevant.) Dualists believe that the mind and brain (body) are separate, and mind/brain identity theorists believe they are the same.

Many of you have sided with the latter theory, because you assume that both are the same and therefore that, with a simple advance in technology, one could artificially create a brain that causes a machine to think. However, these theories are still up for debate, so we must see what the answer to the question is from both sides. You have all given arguments for the mind/brain identity theory, and many are sound in their claims. However, I have a couple of questions.

I've seen a reference to sentience, and this is certainly a trait that a machine would need if it were to "think" or reason in the same manner that humans do. But how can we truly know that the machine, if created, really understands that it exists and that people exist other than itself? We could surely ask it, but if it automatically states that "I am aware of my existence as a machine and understand that I was created by other beings known as humans," then this could simply be reactionary programming and not sufficient proof of the claim that it is "self-aware". There is no way to open its head and see if it is actually thinking this statement, just as we cannot do the same with humans. But this is trivial.

Also, I believe one of you mentioned that a modern robot can teach itself English, which relates to my next point: the machine created to think must be able to teach itself new things which are not already programmed into its memory. This could be imagined as a child computer which, fitted with an appropriate program, could learn in the same manner everything that a human child can, and also develop its own unique way of learning and doing activities just as a child would. Now, the blank canvas discussion is along the lines of my belief; however, it was also mentioned that we would merely program the machine with our flawed way of thinking, which does not count against the question. We want the machine to at least think, flawed though it may be, because if we ourselves are flawed then we can make machines no better intellectually than ourselves (that is to say, we cannot create a perfect intellectual being).

Oh, and I believe one of you spoke as if we were debating the intelligence of current machinery, which is not really the case. We are questioning whether such a machine is possible in principle, not whether one exists today.

Hope this makes some sense, because I'm multitasking and it might not.
 
As this is an anime-based site, I thought I would inject a little something anime-related into the discussion. The original movie of Ghost in the Shell, and even its spin-off series and subsequent films, posed this question too when they consider whether a machine can have a "ghost". Whilst "ghost" probably means soul or spirit, in essence it is the same question. When does an AI reach the point where it becomes self-aware, and if such a thing were to happen and a machine could attain self-awareness, what would this mean for humans?

On a different note, it could also be argued that the majority of us are not as self-aware as we think we are. Before you dismiss this, consider: whilst you are aware of your existence and your environment, how aware are you really? Your mind automatically filters out all stimuli that it determines to be normal, i.e. a 100% self-aware person would be aware of the air pressure on their skin and every sound that reaches their ears, but our brains/minds have learned to subconsciously block the majority of these sensations, because if they didn't, the result would be an overload of stimulation. So, how self-aware are you REALLY?
 
I've taken a lot of psychology and engineering courses related to human learning and mimicking human learning. I really don't see a difference in how people learn and how machines can be programmed to learn.

Emotions are heavily, if not totally, influenced by context, experience, and even hormones. People's actions, even little movements of their bodies, are all just compensations for reflexes responding to the state of nerve transmissions, which can even be falsely triggered or impaired by chemical issues. Our thoughts may actually just be the results of the exceptions in the firings, not the actual transmissions. If you discount the low-level programming of a machine, then you must also dismiss the low-level programming innate in humans that allows us to learn.
 
In my eyes, the point still boils down to this: If humans truly evolved, and our brain is completely scientific and brought about by coincidence and evolution, a machine certainly has the potential to think as humans do. At the most basic level the brain works on positives and negatives, firing and not firing, just as modern circuits do.

But if there is a soul, a higher being, or a separate mind entity, everything changes. If a human can exist, think, and have awareness and identity without a soul, then it is possible. If the soul is tied into the brain, and the soul functions as an identity and allows awareness, then it is not possible.

We will never know until we can understand how our own brains work, which is still largely unknown. Just think about it, electrical pulses give us an identity and personality? Small cells exchanging signals can give us the ability to understand these cells? It's amazing, and beyond current technology. In the future, it might be possible, but if humans have a higher being, a spirit, a soul, then it is likely not possible.
 
QUOTE (Gustav1976 @ Nov 01 2008, 10:44 AM)As this is an anime-based site, I thought I would inject a little something anime-related into the discussion. The original movie of Ghost in the Shell, and even its spin-off series and subsequent films, posed this question too when they consider whether a machine can have a "ghost". Whilst "ghost" probably means soul or spirit, in essence it is the same question. When does an AI reach the point where it becomes self-aware, and if such a thing were to happen and a machine could attain self-awareness, what would this mean for humans?
Even if people devise a computer that can mimic a human brain exactly (very possible), the machine would not be considered a living life-form. MRS GREN (Movement, Respiration, Sensitivity, Growth, Reproduction, Excretion and Nutrition) will see to that! Taking the concept of a ghost, which I guess is something similar to a soul or spirit: the basic assumption is that only living creatures can possess souls. As machines are not alive, it is impossible for them to possess a soul.

Of course, if someone made a robot that fulfilled the MRS GREN criteria then things would be different. Furthermore, with the advancement of prosthetic limbs (and possibly organs) our definition of life is likely to be challenged, and could even change. Until that time arrives, it is safe to say no machine can ever be alive, regardless of how intelligent it is.
 
Monsta, you're forgetting that in the original film, at that point in time a lot of people were so mechanised that there was a very fine line between human and machine, and because of this the entity known as the Puppetmaster was able to fulfil all the MRS GREN functions to some extent apart from one, which it succeeded in doing (in a way) at the end of the film. Don't forget we already use machines that perform some biological functions when our bodies are unable to do so themselves: a respirator performs the function of a lung, and dialysis machines perform the function of a kidney. The film also made a very poignant comment when the Puppetmaster said, "How can you claim I am not alive when neither science nor philosophy is able to define what life is?"
So the question could be: when does a machine with AI become a sentient entity? But that is a discussion for a different thread. Artificial intelligence IS possible, but we are nowhere near a stage where it is a viable possibility just yet. An interesting thing to think about is: do we WANT A.I.s?
 
QUOTE (Gustav1976 @ Nov 02 2008, 10:59 PM)How can you claim I am not alive when neither science nor philosophy is able to define what life is?
Doesn't science already define what life is? Many biology textbooks give a range of conditions an object must fulfil to be considered alive (namely MRS GREN). If a machine is capable of fulfilling all these conditions then we can say it is alive. You will notice that none of these conditions makes intelligence a criterion for life; indeed, many living organisms have no intelligence at all (trees, plants etc.).

There are numerous devices that can aid life, but that does not mean the device is alive. A dialysis machine may help a person with kidney problems, or an oxygen tank may help someone with emphysema, but that doesn't mean the devices are alive. Similarly, if a person has a prosthetic limb/organ they will still be human, as they still meet the conditions of being alive (they can still reproduce, grow etc.).

That said, the issues covered in Ghost in the Shell are not completely irrelevant. What often defines humans is their intelligence; if a machine can replicate this, do we lose our humanity? Or, on another level, if a machine demonstrates human intelligence, does it become human? These questions can be answered if we use the currently accepted definition of life. It's not a romantic answer, and is a bit of a kill-joy, but it works!


That means a human with a robotic brain is alive, because it does all the things a living creature does (assuming the robotic brain mimics the functions of the brain), while a robot with a human brain is unlikely to be alive: it cannot reproduce (or have the potential to reproduce) and cannot grow. Admittedly these things are easy to say because we have never seen a robot with a human brain (or vice versa) to argue otherwise!
 
QUOTE (monsta666 @ Nov 02 2008, 08:33 PM) Doesn't science already define what life is?
Well, yes and no. We have a conventional definition, but no definition that is free from ambiguity. In physics we generally assume that any entity which exists as a continuous phenomenon, decreases its inner entropy at the expense of free energy, and subsequently rejects that energy in a degraded form is endowed with life. So to assume that we cannot argue about the concept of life because life is supposedly already 'defined' would be to commit a fallacy.

Although this discussion is very intriguing, I feel that we have deviated into something unrelated to artificial intelligence. We simply want an answer to the aforementioned questions about machines and thinking. Sure, we may say that giving the machine the ability to think will give it life, which would prompt a definition; however, we want it to be comparable to a human, not to just any living organism (bacteria cannot reason, after all, so there is no need to merely give the machine "life").

Oh yes, and one more thing that I want to mention: we all seem to refer to the machine in question as "the machine" or whatnot, even after we conceive that a thinking machine is possible. We want a machine, be it computer or some other device, to have the capacity to FOOL us into thinking it is a human. This relates back to the Imitation Game in that I do not want to doubt for a second that both A and B are humans. If a computer has the ability to grow (mentally), reason, gesticulate, etc. in manners that will fool us into thinking that it is a human, then that would suffice for a thinking machine, in the context of the original question.
 
hmm I wanted to say something witty and provocative to contribute but for now you'll just have to accept the following comment:
I think therefore I am but I do not think I am
 
I'm actually kind of happy this thread received such a discussion. Usually my threads wither away and die...
 
Wow, this section is like uncharted territory for me


QUOTE (anbuu @ Nov 01 2008, 01:49 AM) Thinking, as best as I can explain it, would be the ability to reason through problems no matter the situation, no matter if the knowledge lies outside of our limits.

So someone who has a mental retardation and can't think things through isn't thinking?

My next point is Mon Mon's thing with MRS GREN,


QUOTE Even if people devise a computer than can mimic a human brain exactly (very possible) the machine would not be considered a living life-form. MRS GREN (Movement, Respiration, Sensitivity, Growth, Reproduction, Excretion and Nutrition) will see to that

I'm sorry, but to me that doesn't show thinking, that shows sentience. Or have I misunderstood your post?

To me, thinking requires being aware of one's surroundings and maybe one's current place within society; if a machine knows this, then to me that's proof it has the ability to think.
 
QUOTE (Hiasubi @ Nov 04 2008, 08:27 AM) So someone who has a mental retardation and can't think things through isn't thinking?
Well, if someone made a violent advance toward someone who was mentally ill, I am pretty confident that they would flee in terror or confusion. That would be the crudest form of thinking, but I see your point. I worded that in a poor manner, and unfortunately I don't have a solution at the moment.
 
QUOTE (anbuu @ Nov 03 2008, 10:05 PM)Well, yes and no. We have a conventional definition but no definition that is free from ambiguity.
It may not be universally accepted, but it is still pretty widely accepted (I would wager more accepted than the theory of evolution). The MRS GREN definition is used for various things in biology. For example, viruses are not alive as they cannot reproduce outside a host. The same applies to proteins; they are not alive for the same reason (they can't reproduce). That means viruses don't die, they are destroyed, while proteins or enzymes denature. It's subtle, but it has important applications.

My main point is this: the definition of life is pretty well defined, but the definition of humanity is less so. As a result, the definition of humanity is more open to debate. How do we define humanity? Is it intelligence, or is it more related to our morals/ethics? After all, these are qualities often associated with man, and often used to separate man from beast, or in this case machine. Okay, so this is strictly not answering the original question (the mimic test). Nevertheless, these are typical things that separate man from machine. A machine cannot develop morals like a human...

As for your original question: could a machine mimic the sex of a human so that it was impossible to distinguish it from the real thing? I think to answer this it's best to take a step back. Could a machine hold a conversation with a human so that the human could not tell they were speaking to a machine? If that were possible, then I'm sure it would be able to fool us into believing it was the opposite sex.

In my opinion this is perfectly feasible; it just requires clever programming. Loads of people follow behavioural patterns, and I'm sure machines could replicate this. That said, there's a lot of superficial stuff to consider: how the machine looks, its body language, and how it sounds. These things will matter a lot! If it can do those things correctly, it is halfway to fooling you!
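A machine holding a conversation by following scripted behavioural patterns is exactly what the old ELIZA program did with pattern-matching rules. A minimal sketch (these particular rules and replies are invented for illustration):

```python
import re

# A tiny ELIZA-style responder: scripted patterns, no understanding.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bare you a (machine|robot|computer)\b", re.I),
     "Would it change our conversation if I were a {0}?"),
    (re.compile(r"\byou\b", re.I), "We were discussing you, not me."),
]

def respond(message):
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(*match.groups())
    return "Tell me more."   # fallback when nothing matches

reply = respond("I feel tired")
```

ELIZA reportedly convinced some users in the 1960s with nothing more sophisticated than this kind of matching, which supports the point that pattern-following goes a surprisingly long way.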


QUOTE (Hiasubi @ Nov 04 2008, 12:27 PM)My next point is Mon Mon's thing with MRS GREN,

I'm sorry, but to me that doesn't show thinking, that shows sentience. Or have I misunderstood your post?
MRS GREN is not supposed to define thinking; it is supposed to define life. Like I said, there are many living things that don't think (trees/plants etc.). So you're right, MRS GREN does not relate to thinking. It will just prevent a machine from ever being human, no matter how intelligent it is.

The problem comes when we make a machine that is highly intelligent and carries a lot of human traits (morals, compassion etc). Once this happens our definition of life is sure to be challenged.
 