These questions reflect the divergent interests of AI researchers, linguists, cognitive scientists and philosophers.
The scientific answers to these questions depend on the definitions of "intelligence" and "consciousness", and on exactly which "machines" are under discussion.
Early research into AI, called "good old-fashioned artificial intelligence" (GOFAI) by John Haugeland, focused on this kind of high-level symbol manipulation.
These arguments show that human thinking does not consist (solely) of high-level symbol manipulation.
Turing notes that no one (except philosophers) ever asks the question "can people think?" He writes, "instead of arguing continually over this point, it is usual to have a polite convention that everyone thinks". AI research defines intelligence in terms of intelligent agents. An "agent" is something which perceives and acts in an environment.
In 1931, Kurt Gödel proved with an incompleteness theorem that it is always possible to construct a "Gödel statement" that a given consistent formal system of logic (such as a high-level symbol manipulation program) could not prove. In practice, real machines (including humans) have finite resources and will have difficulty proving many theorems. Gödelian anti-mechanist arguments tend to rely on the innocuous-seeming claim that a system of human mathematicians (or some idealization of human mathematicians) is both consistent (completely free of error) and believes fully in its own consistency (and can make all logical inferences that follow from its own consistency, including belief in its Gödel statement). However, the modern consensus in the scientific and mathematical community is that actual human reasoning is inconsistent; that any consistent "idealized version" H of human reasoning would logically be forced to adopt a healthy but counter-intuitive open-minded skepticism about the consistency of H (otherwise H is provably inconsistent); and that Gödel's theorems do not lead to any valid argument that humans have mathematical reasoning capabilities beyond what a machine could ever duplicate. They do not show that artificial intelligence is impossible, only that more than symbol processing is required.
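The construction behind the Gödel statement can be stated compactly. The following is the standard textbook formulation for a consistent, effectively axiomatized theory $F$ strong enough to encode arithmetic, not a formula specific to this article:

```latex
% Goedel's first incompleteness theorem (standard formulation).
% Prov_F is F's provability predicate; \ulcorner G_F \urcorner denotes
% the numeral coding (Goedel number) of the sentence G_F.
F \vdash G_F \leftrightarrow \neg\,\mathrm{Prov}_F(\ulcorner G_F \urcorner)
\qquad \text{and, if } F \text{ is consistent,} \qquad
F \nvdash G_F
```

Informally, $G_F$ "says" that it is not provable in $F$. The anti-mechanist arguments discussed above turn on whether a human can recognize the truth of $G_F$ while the machine running $F$ cannot prove it, and the consensus position is that this recognition presupposes the very consistency claim that neither humans nor machines can establish from inside the system.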
This shows that Lucas himself is subject to the same limits that he describes for machines, as are all people, and so Lucas's argument is pointless.