The philosophy of mind distinguishes two functionalisms: one about mental states and one about mental contents. Mental-state functionalism characterizes mental states by their functional relations to sensory inputs, behavioral outputs, and other mental states. Mental-content functionalism, also referred to as conceptual or inferential role semantics, characterizes mental contents by their functional relations to mental inputs, outputs, and other mental contents. Philosophers who endorse mental-state functionalism tend, to some extent, also to endorse computational functionalism. Functionalism is the doctrine that any mental state, whether a thought or a pain, depends not on its internal constitution but solely on its function, that is, the role the state plays in the cognitive system. Drawing on the Turing Test and Searle's Chinese Room argument, it is arguable that computer functionalism fails to prove that a computer is capable of more than an advanced simulation of features of human intelligence, or that it is capable of understanding how it came to possess that intelligence.
The history of computational functionalism dates back to the origins of computability theory. Alan Turing's theory of computation was originally developed to investigate the foundations of mathematics: in order to prove that the decision problem for first-order logic admits no algorithmic solution, Turing formulated Turing machines (TMs) (Berrar & Schuster, 2014). On this view, a machine can carry out any computation that a human mind can. Turing argues that if a machine can hold a conversation so well that human beings cannot tell the difference between the computer and a person, then the computer can think. According to Turing, the rules followed by a human being are analogous to the instructions and commands programmed into a computer (Berrar & Schuster, 2014). The way human beings apply rules resembles the way a computer executes instructions; it is the function of the computer to ensure that the instructions are followed correctly and in order.
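To make the rule-following analogy concrete, consider the following minimal sketch, a hypothetical Python illustration not drawn from the cited sources. The transition table RULES and the tape contents are invented for the example; the point is only that the machine's behavior is fixed entirely by mechanical application of instructions.

    # A minimal Turing machine sketch (hypothetical illustration, not from
    # the cited sources): the machine mechanically follows a transition
    # table, mirroring Turing's analogy between human rule-following and
    # instruction execution.

    # Transition table: (state, symbol) -> (new_state, symbol_to_write, move)
    # This particular table flips every bit on the tape, then halts at the blank.
    RULES = {
        ("scan", "0"): ("scan", "1", +1),
        ("scan", "1"): ("scan", "0", +1),
        ("scan", "_"): ("halt", "_", 0),   # "_" marks a blank cell
    }

    def run(tape):
        """Execute the transition table on a tape until the machine halts."""
        tape = list(tape) + ["_"]
        state, head = "scan", 0
        while state != "halt":
            state, tape[head], move = RULES[(state, tape[head])]
            head += move
        return "".join(tape).rstrip("_")

    print(run("1011"))  # -> "0100": the output is fixed entirely by the rules

Whatever the tape contains, the machine does nothing but look up the rule matching its current state and symbol; this is the sense in which executing a program resembles following rules.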
According to Fodor (1975), for a machine to count as a computing mechanism, there must be a relationship between its physical states and an intentional description of what it does. Given this assumption, a computer programming language can be seen as a mapping from the machine's physical states onto sentences of English, such that English sentences given as commands to the machine are expressed as functions on those states. Fodor (1975) relates human natural language to a high-level programming language and the human language of thought to a machine's internal language. He observes that the language of thought can be identified with human cognition, that is, the capacity of humans to manipulate language and draw inferences from the semantic properties of thoughts. Fodor (1975) also describes computer languages in terms of semantics and idioms in his quest to present stored-program computers as intelligent. According to machine functionalists, if a machine passes the Turing Test, it has a mentality and can therefore think on its own. Many philosophers, John Searle among them with his Chinese Room argument, have disputed this claim.
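The mapping Fodor has in mind can be illustrated with a small sketch, a hypothetical Python example in which the command sentences and the counter state are invented for illustration. Each English sentence is paired with a function on the machine's state, so that issuing the sentence as a command just is triggering the corresponding state transition.

    # A hypothetical sketch of the mapping idea: English command sentences
    # are paired with functions over the machine's state, so each sentence
    # "means" whatever state transition it triggers.

    state = {"counter": 0}

    # Mapping from English commands to state-transition functions.
    COMMANDS = {
        "increase the counter": lambda s: s.update(counter=s["counter"] + 1),
        "reset the counter":    lambda s: s.update(counter=0),
    }

    def execute(sentence):
        """Interpret an English sentence as an operation on the machine state."""
        COMMANDS[sentence](state)

    execute("increase the counter")
    execute("increase the counter")
    print(state["counter"])  # -> 2: the sentence's "meaning" is its effect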
John Searle's Chinese Room argument, presented as a thought experiment, refutes the idea that a computer is a mind. In the experiment, a person with no understanding of Chinese is put in a room with a set of rules for manipulating Chinese expressions, and stays in the room until he is thoroughly familiar with those rules. Comparing the person in the Chinese room with a person outside the room who genuinely understands Chinese, Searle infers that the input-output behavior of the two is the same, and that what goes on in the Chinese room is what goes on in a programmed computer (Wakefield, 2003). Searle concludes that rule-governed symbol manipulation, computer programming, and the Turing Test are invalid tests of mentality. Additionally, Searle suggests that no amount of symbol manipulation can produce meaning, and mentality is impossible without meaning. If computers are simply symbol-pushing engines, then a mind cannot be a computer, no matter the computer's complexity, since a mind requires more than mere symbol-pushing abilities (Wakefield, 2003). Therefore, according to Searle, machine functionalism fails to prove its point.
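What Searle means by a symbol-pushing engine can be seen in a toy sketch, a hypothetical Python illustration with a deliberately tiny, invented rulebook. The program pairs each incoming string with a reply by lookup alone; its input-output behavior can appear fluent, yet no step in its execution involves understanding what any symbol means.

    # A toy Chinese Room (hypothetical rulebook): the "person in the room"
    # matches each incoming string against the rulebook and copies out the
    # paired reply. The behaviour can look competent, yet no step in the
    # program involves understanding what any symbol means.

    RULEBOOK = {
        "你好吗": "我很好",          # the rule pairs shapes, not meanings
        "你会说中文吗": "会一点",
    }

    def room(expression):
        """Return the rulebook's paired reply: syntax alone, no semantics."""
        return RULEBOOK.get(expression, "请再说一遍")  # default: 'say that again'

    print(room("你好吗"))  # a fluent-looking reply produced by pure lookup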
Searle considers three standard objections to his argument and rejects each. First, against the systems reply, he argues that even if the person in the Chinese room internalizes and memorizes all the rules, that does not mean he understands Chinese. Second, against the robot reply, the person in the room, standing in for the computer, would continue to execute the Chinese expressions without understanding Chinese even if the symbols were connected to the world. Finally, however intelligent the machine appears, a brain simulator cannot correct an error unless commanded to do so, as a human mind would (Wakefield, 2003).
Although Searle's argument was originally designed to refute the claims of artificial intelligence, it applies solely to digital computers and cannot be generalized to all machines. Nevertheless, the experiment supports several key conclusions: a computer program cannot be a mind or a constituent of a mind; a running computer program cannot be equated with the phenomena of a human brain; and no program can produce, or be equated to, the powers of the human mind. Therefore, given the observations Searle makes in his Chinese Room experiment, it suffices to conclude that computer functionalism fails to prove that a computer could understand what it does.
References
Berrar, D. P., & Schuster, A. (2014). Computing machinery and creativity: Lessons learned from the Turing test. Kybernetes: The International Journal of Systems & Cybernetics, 43(1), 82-91.
Fodor, J. A. (1975). The language of thought. Cambridge, MA: Harvard University Press.
Wakefield, J. C. (2003). The Chinese Room argument reconsidered: Essentialism, indeterminacy, and strong AI. Minds and Machines, 13(2), 285-319.