Is artificial intelligence theoretically possible?
Asked by Ame_Evil (3051) – December 8th, 2009
I just had a philosophy-of-psychology lecture today that made me think: is it actually possible to create a machine that genuinely THINKS (not just pretends to) and has real consciousness? In other words, is it possible to create a Frankenstein that is capable of everything another human being is capable of?
My argument against this was that only something naturally produced – created through the evolutionary cycle – could actually possess consciousness. But now that I look at it, my view is that consciousness is just another structure in the human brain, sitting next to attention and memory retrieval. If that is the case, surely anything that can run self-sufficiently by balancing attention, memory, choices, etc. (all human behaviour) MUST display consciousness, as that is in essence how I have defined it.
What are your views? Is it possible to create an artificial intelligence? I am, of course, ignoring all the technical issues that make it extremely unlikely we will actually create AI anyway.
30 Answers
Yes. I think so. ‘Ventually.
Despite all the scientific advancements there have been in numerous biological fields, the brain is still one of the least understood structures there is. Until the brain is fully understood, I don’t think it is going to be possible to create something similar to it using electricity, metal, and plastic.
Sorry, I may have been slightly unclear. I am ignoring all of those issues and just focusing on the detail: will it actually be an artificial intelligence (that is, basically a real sentient being, no different from someone's child), or just something that very convincingly appears to think for itself?
I think it is inevitable, and I think that programmers had better build in some Asimov-like safeguards now, so that they will be firmly entrenched when the time comes, since intelligence is likely to arise suddenly and unexpectedly in a kind of sui generis happenstance.
An artificial intelligence can certainly produce a course of action or an answer based on trillions of pieces of data. But the capacity humans have to create “illogically” seems to defy strictly linear thought processes and appears to be out of the range of machines.
Perhaps when we actually recognize behavior that is in our collective best interest we can create machines to replicate this or recognize the machines that can already do this. But until we can actually give credence to the scientific minds of humans and recognize their intelligence, there is little hope for artificial intelligence except as something to be feared. I think someday soon. Hopefully before it’s too late.
@SeventhSense So…create a machine that only makes the best / right decisions, apart from emotion? Interesting thought! They’d all be considered female, though…
I also think it’s possible, and probably inevitable unless we all die first. The brain is, in essence, a machine – it just uses neurotransmitters and neurons instead of electrons and wires.
@Val123
Well, yes, the feminine objective would be good…
It would be great for the first 3 weeks of the month…
Heaven forbid! (That's not a literal heaven, by the way.)
@Ame_Evil I like the fact that you’ve noted how much the definition of “consciousness” affects the question and the answers. :) well done.
Definitions are everything. My belief is that a proper definition of “life” or “consciousness” would reveal that many things we’ve already accomplished have about the same qualities as something “conscious” – most notably a video game character.
Similarities that strike me between Video Game Characters and Humans. Both:
– React to input.
– React based on their “Brain’s” memory.
– Will only react within the limitations of their physical selves.
Animals are exactly the same. They do all they can based on memory, and they also act only within their physical limitations. Again, video game characters do exactly that. The only difference between a human’s function and a video game character’s function seems to be the complexity of the functionality itself. Which is the same difference between human brains and animal brains: they’re built differently and hence have different limitations, but they don’t really attempt to do anything significantly different.
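The analogy above can be made concrete with a toy sketch. Everything here (the `Character` class, `MAX_STEP`, the commands) is invented for illustration, not taken from any real game engine: a character that reacts to input, consults its “memory”, and acts only within a fixed physical limit.

```python
MAX_STEP = 1  # the "physical" limitation: the body moves one unit at a time

class Character:
    """A toy game character: reacts to input, uses memory, obeys its body."""

    def __init__(self):
        self.position = 0
        self.memory = set()  # positions already visited

    def react(self, command):
        if command == "advance":
            self.position += MAX_STEP        # bounded by the physical self
            self.memory.add(self.position)   # store the experience
        elif command == "retreat" and self.position in self.memory:
            self.position -= MAX_STEP        # a memory-informed reaction
        return self.position

c = Character()
c.react("advance")
c.react("advance")
print(c.position, sorted(c.memory))  # 2 [1, 2]
```

The point of the sketch is that all three listed similarities – reacting to input, reacting based on memory, and acting within physical limits – fit in a dozen lines; the argument in the thread is that the human version differs in complexity, not in kind.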
One thing that I believe holds back the development of a proper Artificially Intelligent computer is the idea of Free Will. Programmers and engineers the world over continue to believe in what I consider to be the Myth of free will. They are constantly trying to mimic this non-existent feature and thereby, attempting to create a product that isn’t really like the human brain. The concept of free will misleads their efforts.
Well, intelligence is just the ability to learn things and then apply them. That’s apart from any moral code. There are already ways to program low-level or domain-specific artificial intelligence into things: robots that learn not to go into certain rooms, or to pick up certain boxes based on their color.
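A hedged sketch of that kind of “low-level” learned avoidance: the room names and rewards below are invented for illustration, and this is a bare-bones tabular value-learning loop, not any particular robot’s algorithm.

```python
import random

random.seed(0)
ROOMS = ["hall", "kitchen", "lab"]
REWARD = {"hall": 0, "kitchen": 1, "lab": -1}  # the lab is "forbidden"

q = {room: 0.0 for room in ROOMS}  # learned value estimate per room
alpha = 0.5                        # learning rate

for _ in range(200):
    room = random.choice(ROOMS)                  # explore at random
    q[room] += alpha * (REWARD[room] - q[room])  # nudge estimate toward reward

best = max(q, key=q.get)
print(best)  # kitchen
```

After a couple of hundred random visits the estimates converge toward the rewards, so the agent “knows” to prefer the kitchen and avoid the lab – learning in exactly the narrow, amoral sense the answer describes.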
@Val123
but they would not be programmed for PMS!
OK now we’ve reached the border of intelligence. There is not a machine on earth that has been able to solve this enigma..
New to this. @Ame_Evil: Free will is indeed real. Whether you believe in God, a Supreme Being, or Xanaxx the Second, or whatever, a human being certainly has the capability of making their own choices. A murderer chose to kill. His mind gave him that option, and his free will led him to take it. I don’t see how it isn’t a real concept. Oh, and I’m not big on AI, but if you want to create anything artificial, you first have to know how the real thing works. Personally, I doubt that’s going to happen very soon, if at all.
@Maximillian said: “Free will is indeed real”
but s/he’s wrong about that, trust me! :)
A hard-SF writer who has a way of writing about AI with (what seems to me) a ring of verisimilitude is Greg Egan. Here is the first chapter of his book Diaspora, the best explanation I’ve read of how one might go about “growing” a human-like software AI.
I found this talk by Jeff Hawkins (inventor of the Palm Pilot) very interesting.
I think it is eventually possible, but unless they apply quantum mechanics to this one, that’s gonna be one helluva big machine!
@Ame_Evil, to adequately answer your question, you first need to define what you mean when you say thinking and consciousness. Because there’s no clear consensus as to what exactly these things are.
If we look at the evolution of animals, it’s pretty clear that the ability to think isn’t this black-and-white, either-or phenomenon. Consciousness is rooted in the brain. But brains have a long history of evolution, from us smartass apes to our reptile ancestors, to their fish ancestors, and down to our common ancestors with starfish, who have only rudimentary nervous systems.
At which point in the evolution of nervous systems did “thinking” and “consciousness” arise? The question doesn’t make sense to me, because I think consciousness evolved gradually as brains gradually increased in complexity.
We currently have robots and AI systems capable of more complex “thoughts” than, for example, insects. When it comes down to it, a brain is just a computer. There’s nothing “special” about the fact that brains are organic computers, as opposed to silicon-and-metal. As our technological computers continue to become more powerful at an exponential rate, I think it’s a given that we’ll achieve AI as complex, thoughtful, and “conscious” as human beings… eventually.
@hiphiphopflipflapflop, oh man, that book was so awesome. And trippy. And, um, hard to understand. Five spatial dimensions? WTF?
Yes, and there are at least two ways of doing this:
1) Continue with existing computer technology and refine the algorithms
2) Reverse engineer the brain and build copies of it
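For approach (2), the standard starting abstraction is the artificial neuron: a weighted sum of inputs pushed through a nonlinearity. This is a drastic simplification of a real biological neuron, and the weights and inputs below are arbitrary example values, not measurements.

```python
import math

def neuron(inputs, weights, bias):
    """Weighted sum of inputs passed through a sigmoid activation."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # firing "rate" between 0 and 1

# total = 1.0*0.6 + 0.0*(-0.4) + 1.0*0.8 - 1.0 = 0.4
out = neuron([1.0, 0.0, 1.0], [0.6, -0.4, 0.8], bias=-1.0)
print(round(out, 3))  # 0.599
```

Wiring billions of such units together, with connection weights copied from a scanned brain, is roughly what “build copies of it” would mean in software; whether that reproduces consciousness is exactly the question the thread is debating.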
@mattbrowne OK, whose brain we gonna use to reverse-engineer? Hmmmm?
I was thinking of the brain of one Flutherite, but then I decided it would pose a danger to humanity.
@pdworkin Yes, that would defeat the whole purpose!
Read the book “Gödel, Escher, Bach” by Douglas Hofstadter. It sheds some new light on the territory (and manages to be funny while doing it). Spoiler: Hofstadter thinks “yes.”
@Ame_Evil: wrong about what? You don’t have the capability of controlling what you eat in the morning? Or of not eating at all? Or of not going to work? Kinda sucks for you.
@hiphiphopflipflapflop
Fascinating discussion. I’m eager to see what his team comes up with next. I hope I’m not too annoyed by future anticipating machines, though. I’m annoyed enough at computers or voice prompts now that say, “Did you mean?” or “Are you sure?”...aggghhh!!
@nisse – It’s a great book. I read it around 1986.