Functionalism

1. Synopsis

Functionalism is “the view that mental states are functional states, states defined by their causal role” [Block 1980, p6].

Previous theories of mind have troubled analytic philosophers by either allowing the existence of non-physical substances or by disallowing the possibility of non-human minds. While dualism has been largely shelved, mind-brain identity theories have been the subject of much recent debate. I examine the opposing views of Kripke and Feldman and show how Functionalism offers an alternative which avoids these perceived shortcomings.

Finally, some problems of Functionalism are discussed: Block’s homunculi problem; Lycan’s “Lilliputian Argument”; Block’s problem with physically defined input and output; Kalke’s comments on the need to set boundaries before functional equivalence is meaningful; and the issue of qualia.

2. Motivation for Functionalism

Prior to the 1960s, theories of the mind fell into two categories1. The oldest, dualism, claims that the mind is distinct from (or at least not wholly explainable by) physical substances. The more recent, materialism, proposes an identity between the mental and the physical.

There are, of course, a variety of theories in these two categories and a corresponding variety of arguments for their inadequacy. Plato, Aristotle and Descartes proposed quite different types of dualism and modern dualist views are equally diverse. Similarly, behaviourists and mind-brain identity theorists, while both materialistic, hold quite divergent views.

It is not the purpose of this essay to examine these theories or their problems in any detail except to the extent that they have motivated analytic philosophers to develop the alternative of Functionalism.

The chief question which dualism finds difficult to answer is “If the mind is non-physical, how can it be causally related to physical events without violating demonstrable physical laws?”2 If the weight of experience suggests that there are only physical things in the universe, then an alternative to dualism must be found.

The shortcoming of materialistic views is that they are too restrictive: they legislate against the possibility of minds except when identified with carbon-based brains. But whether only brains can realise minds ought to be an empirical question, not a matter of necessity. If we are to allow that machines and Martians may have minds, then an alternative to materialism must be found.

2.1 Kripke and Feldman

Saul A. Kripke [Kripke, 1971] argues that a mind-brain identity theory3 cannot be contingent and Fred Feldman’s [Feldman, 1974] response shows how grappling with this issue leads towards Functionalism.

It has been held (says Kripke) that “the mind is the brain” is a contingent identity statement. For instance “My being in pain at such and such a time is my being in such and such a brain state at such and such a time” is held to be a contingent claim since we can imagine being in pain without having that particular brain state and we can imagine having that brain state without feeling pain.

On such a picture there would be a brain state, and we pick it out by the contingent fact that it affects us as pain. Now that might be true of the brain state, but it cannot be true of the pain. The experience itself has to be this experience, and I cannot say that it is a contingent property of the pain I now have that it is a pain. [Kripke, p145]

Although he writes of “experience”, Kripke is not raising the issue of qualia here. He is simply pointing out that “being in pain” is a necessary part of the mental state of pain, but is a contingent part of the associated brain state. So how can the mental state be identical to the brain state?

Additionally, Kripke points out that describing “such and such a mental state” and “such and such a brain state” is to define two rigid designators4. If it turns out that two rigid designators designate the same entity, then they are necessarily identical. Hence, if mind is identical to brain then it is so necessarily: a mind-brain identity theory cannot be a contingent claim.

Feldman responds by describing several types of mind-brain identity theory and suggesting that there is one which does not fall to Kripke’s argument.

There are three classes of “contingent psychophysical identity theory”: ones which equate mental and physical substances; ones which equate mental and physical events; and ones which equate mental and physical types, phenomena and properties. [Feldman, p148]

Within the second class, he distinguishes “three main views about the nature of events”.

According to the “propositional view” [Feldman, p149], there are three characteristics of events: they can be the objects of propositional attitudes (eg Jones being amused is an event because it can be hoped for); they need not occur in order to exist (Jones’s being amused is an event, even if Jones is never actually amused); and they can recur (the event of Jones being amused may happen more than once). Feldman acknowledges that this view is implausible.

According to the “structural view” [Feldman, pp149-150], “events are complex, structured entities the main constituents of which are properties, individuals, and times”. (Note that including a time parameter implies that an event must occur in order to be an event.) This allows a neat test for identity: two events are identical if and only if they have the same property, individual and time. But this view turns out actually to be a theory in the “types, phenomena and properties” class rather than the “event” class.

According to the “concrete events view” [Feldman, pp150-155], the same event “can be described in a variety of nonequivalent ways”. This is the interesting view (the preceding discussion has merely cleared away the poorer options) and the one which stands up to Kripke.

How does the concrete events view elude Kripke’s argument? Feldman proposes that “being a case of someone’s being amused may not be an essential property of that event” [Feldman, p153]. He extends the standard analogy of Benjamin Franklin:

We might just as sensibly ask how Franklin, the inventor of bifocals can still exist in a world in which there are no inventors. Well, assuming he is only accidentally an inventor, he can do it easily; he just has to stick to politics and publishing and keep out of the laboratory. [Feldman, p153]

Though Feldman doesn’t express it this way, I think the implication is that if you take the concrete events view, then “such and such a brain event” and “such and such a mental event” can be denoted without specifying rigid designators. That being the case, then Kripke’s point (that rigidly designated entities cannot be contingently identical) “loses its foothold”.

But in formulating the concrete events view, Feldman has had to modify some other features of the psychophysical identity theory.

Firstly, he allows (though with severe doubts) that “if we adopt the concrete events view, we should take causal indiscernibility to be our criterion of event identity” [Feldman, p150]. Secondly, the concrete events view “leaves open the possibility that in some other possible world, some blithe spirit [has mental events] but has no body” [Feldman, p151].

Avoiding Kripke has required Feldman to accept that mental events should be seen in terms of their causal roles, and that mental events are multiply realisable. In summary, confronting the problems of contingent psychophysical identity theories has led him to Functionalism.

3. The Content of Functionalism

R.J.Nelson suggests that the main source of misunderstanding about Functionalism is a confusion between at least six possible meanings of the term “function” [Nelson, p366]. It takes some work to disentangle the Functionalist view of mind from other uses of the term “functionalism”5.

Block identifies three functionalisms: functional analysis; computation-representation functionalism; and metaphysical functionalism.

Functional analysis attempts to explain a system in terms of the relationships between component parts [Block 1980, p171].

Computation-representation functionalism attempts to map psychology to computer programming. Psychological states are represented in a way which allows psychological processes to be seen as computations involving those representations [Block 1980, p171].

Metaphysical functionalism (Functionalism) is a theory of the nature of mind which proposes that the important thing about mental states is that they are functional states [Block 1980, p172]. That is, mental states are characterised by their function, rather than by their form.

Functionalism is concerned “with mental state types, not tokens – with pain, for instance, not with particular pains” [Block 1980, p172]. Functional states are characterised by three sorts of causal relationship: those between external stimuli and internal states; those between different internal states; and those between internal states and external responses.

Functionalist models are usually phrased in terms of Turing Machine state tables. A Turing Machine is a simple mechanism, which (surprisingly for its simplicity) can compute any algorithmic problem6. “If a mental process can be functionally defined as an operation on symbols, there is a Turing machine capable of carrying out the computation.” [Fodor 1981, p120]

The “If” in the above quote minimises what is actually a rather large assumption: namely, that mental processes are algorithmic. At the moment there is little justification for this: we await future empirical findings to substantiate it7. It should be noted that Functionalism is still in the stage of theory development where the key issues involve establishing internal consistency. While Functionalism is motivated by empirical observations, it has yet to yield testable predictions or any program of empirical verification.
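
To make the notion of a machine state table concrete, here is a minimal Turing machine sketched in Python. The machine, its alphabet and its task (appending a 1 to a unary-encoded number) are my own illustrative inventions, not drawn from any of the cited texts.

```python
# A minimal Turing machine, to make the notion of a "machine state
# table" concrete. This toy machine appends a '1' to a unary-encoded
# number: the head scans right past the existing 1s, writes a 1 on the
# first blank, and halts.

# The state table: (state, scanned symbol) -> (symbol to write, head move, next state)
TABLE = {
    ("scan", "1"): ("1", +1, "scan"),   # keep moving right over 1s
    ("scan", "_"): ("1", 0, "halt"),    # first blank: write a 1 and halt
}

def run(tape):
    """Run the machine on a list of tape symbols; return the final tape."""
    state, head = "scan", 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else "_"
        if head >= len(tape):
            tape.append("_")            # extend the tape with blanks as needed
        write, move, state = TABLE[(state, symbol)]
        tape[head] = write
        head += move
    return tape

print(run(list("111_")))  # unary 3 -> unary 4: ['1', '1', '1', '1']
```

The point of the sketch is only that the machine’s “states” are exhausted by their place in the table: nothing about what the states are made of enters into the specification.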

3.1 Is it Reasonable to Postulate Mental States?

Behaviourists take mental states to be dispositions to act in particular ways. Such dispositions can be readily discussed in terms of stimulus and response. Functionalists, however, talk of mental states causing behaviour, and also of mental states affecting other mental states [Block 1980, pp175-176].8 For instance “being on a diet” will involve some mental state which will interact with the mental state of “being hungry” (among others) in order to determine whether to eat or not.

There is nothing unusual about postulating theoretical entities: it is done as part of normal scientific theory development (for instance valence in chemistry). The postulation of mental states is supported by introspection, but this on its own is not sufficient justification. A reasonable justification would be if it could be shown “that they provide the simplest systematic account of what one observes about other people’s behaviour” [Fodor 1968, p131].

Theoretical entities which have been inferred from observations may later be directly observed, at which point they cease to be theoretical. The line between observed and inferred entities is disputable [Fodor 1968, p132]. At least the following principle should be enforced: “the claim that T is an inferred entity can be true only where it is logically possible to observe (makes sense to speak of observing) that something is a T” [Fodor 1968, p133]. In other words, postulating mental states must be an analytic claim and some sense must be given “to the notion of direct observational verification” [Fodor 1968, p135].

3.2 Does Functionalism Require Physicalism?

Fodor writes that Functionalism is neither dualistic nor materialistic [Fodor 1981, p114]9. Functionalism is a broader view, which comments on mental states without specifying how these states are practically realised. A mind may be realised in a carbon-based brain, or in some non-physical substance, or in electronic circuitry, or indeed in any medium which allows states to interact in the required way. “The psychology of a system depends not on the stuff it is made of … but on how the stuff is put together.” [Fodor 1981, p114]

D.M.Armstrong and D.Lewis10 follow a view of Functionalism which holds that mental states are to be “functionally specified”. For instance, suppose we have “the ridiculously simple theory that pain is caused by pin pricks and causes worry and the emission of loud noises, and worry, in turn, causes brow wrinkling” [Block 1980, p174]. Then the mental state of pain is precisely the thing specified by those functions.
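
Block’s toy theory can be written out as just such a functional specification. The Python encoding below is my own illustrative sketch; only the state names and causal links come from the quoted theory.

```python
# Block's "ridiculously simple theory" of pain, rendered as a functional
# specification: states are identified purely by their causal roles.
# The encoding is an illustrative sketch, not a claim about how any
# actual mind is realised.

CAUSES = {"pin prick": "pain"}                 # external stimulus -> internal state
OUTPUTS = {"pain": ["loud noise"],             # internal state -> external responses
           "worry": ["brow wrinkling"]}
LEADS_TO = {"pain": ["worry"], "worry": []}    # internal state -> further states

def responses_to(stimulus):
    """Trace every response the specification says the stimulus causes."""
    pending, responses = [CAUSES[stimulus]], []
    while pending:
        state = pending.pop(0)
        responses.extend(OUTPUTS.get(state, []))
        pending.extend(LEADS_TO.get(state, []))
    return responses

print(responses_to("pin prick"))  # ['loud noise', 'brow wrinkling']
```

On this view, “pain” just is whatever occupies the pain slot in such a specification, whatever it happens to be made of.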

But that specification can equally be met by a physical state. If the specification of a mental and physical state is the same, then the mental and physical states must be identical. Armstrong and Lewis claim that if you look for the thing which fits the functional specification, you will find a physical state (though maybe several physical states will be found which fit equally well).

In response to this, Block stands up for the “Functional State Identity” rather than “Functional Specification” view: if a mental state may be realised in several physical states, then the determining factor is not any physical characteristic, but the causal role. It seems that Functional Specification only applies to token physicalism; when applied to type physicalism, it leads to Functional State Identity.

3.3 Eliminative Reduction

Many scientific domains are comprised of abstractions from a more basic domain. For instance the entities and theories of chemistry are abstract notions which could be fully described in terms of physics. We say that there is an eliminative reduction from chemistry to physics, or that chemistry reduces to physics.

We believe the reductive identity “heat is the movement of molecules” because of empirical evidence11. We no longer talk of heat theory as a functional system, because it has exactly one realisation. This doesn’t mean that it’s not useful to talk about heat, but we know that when we do so we are only using shorthand for talk about movement of molecules.

The situation would be different if we found empirically that heat was both the movement of molecules (in gases) and due to calorific fluids (in solids) [Kalke, p89]. If that were the case, it would make sense to talk about heat as a separate, higher, functional theory with two realisations. We would say that heat theory was irreducible.

From a Functionalist perspective, is it likely that discussion of mental states will reduce to neuroscience?

Fodor notes [Fodor 1968, pp143-147] that there is a difference between microanalysis and functional analysis. Physical sciences use microanalysis (aka reductive analysis) to establish identities of composition (eg water is [composed of] H2O). On the other hand functional analysis doesn’t ask “what does X consist of?”, but rather “what roles does X play?”, in order to establish identities of function. The claim that mental states can be reduced to neuroscience arises from an expectation that discussion of the mind should engage in microanalysis rather than functional analysis.

A camshaft is a valve opener, but this does not prevent us discussing valve openers completely independently of any theory of camshafts. A mousetrap does not reduce to any one physical mechanism: many objects (with no common mechanical property) may serve as mousetraps. If objects, properties and laws in a domain can be multiply realised, then no eliminative reduction is possible. And that does seem to be the situation with the mind:

It still remains quite conceivable that identical psychological functions could sometimes be ascribed to anatomically heterogeneous neural mechanisms. In that case, mental language will be required to state the conditions upon such ascriptions of functional equivalence. [Fodor 1968, p146]

Block claims that “Functionalism gives us reduction without elimination” [Block 1980, p177]. Any functionally defined machine can be reduced to (ie could equally well be defined as) input-output behaviour. But this fact does not deny the existence of intermediate states.

4. Problems with Functionalism

4.1 Block’s Homunculi Problem

Ned Block claims that all forms of Functionalism are either too liberal (meaning they allow too many systems to count as minds) or too chauvinistic (allowing too few systems to have minds) [Block 1978, pp291-293]. He presents a situation involving homunculi to illustrate the accusation of liberalism [Block 1978, pp275-278].

Suppose that, instead of a brain, a person had a large number of little men in their head. These men examine a large board of lights (representing external stimuli), and a blackboard on which is written the current state. Each man has an index card defining his job: “whenever you see such and such a state written on the blackboard and such and such a light is on, press such and such an output button and modify the blackboard to such and such a state”.

Such a head full of homunculi could be organised (by appropriate instructions on the index cards) to be functionally equivalent to a human. It could produce stimulus and response characteristics identical to a human and could be made to implement any intermediate states we care to specify. (If there’s any trouble finding enough men small enough to do the job, we could instead use the whole population of China communicating to a brainless human body via satellite.)
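
A single index card amounts to nothing more than a lookup table. The sketch below, with invented state and button names, shows just how mechanical each homunculus’s job is.

```python
# One homunculus's index card, as a sketch of Block's description:
# "whenever you see such and such a state on the blackboard and such and
# such a light is on, press such and such a button and modify the
# blackboard to such and such a state". All names are invented for
# illustration.

# (blackboard state, lit light) -> (button to press, new blackboard state)
INDEX_CARD = {
    ("S17", "light 3"): ("button B", "S42"),
    ("S42", "light 1"): ("button A", "S17"),
}

def homunculus_step(blackboard, light):
    """Follow the card: return the button pressed and the new blackboard state."""
    return INDEX_CARD[(blackboard, light)]

print(homunculus_step("S17", "light 3"))  # ('button B', 'S42')
```

Each man need understand nothing about the system as a whole; the functional organisation lives entirely in the collective set of cards.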

But it seems nonsensical to attribute mental states to such an homunculi-head. Where in such a system is there any intelligence? Where in such a system is there any qualitative experience? (See Section 4.5) We are not at all tempted to call this system a mind. Yet Functionalism does call it a mind.

Such an argument does not apply to “Psychofunctionalism” (the doctrine that mental states are psychological states) since we could sensibly say that an homunculi-head could have the same psychology as a human. But the problem with Psychofunctionalism is that it is too closely tied to a particular psychology (namely human psychology). It outlaws any type equivalence between human minds and (say) Martian minds, since Martians are likely to have quite different psychology from humans.

In brief, Functionalism falsely accepts an homunculi-head as a mind and is therefore too liberal; Psychofunctionalism denies non-human minds and is therefore too chauvinistic.

4.2 Lycan’s Lilliputian Argument

William G. Lycan [Lycan 1979, esp. pp284-285] proposes an extension to Block’s homunculi-head which he believes hammers an extra nail into the Functionalist coffin.

Consider one of the little men inside the homunculi-head. He will have his own mental life (thinking for instance that index cards should be painted phosphorescently to make them more readable in the dim lighting inside the head). If Functionalism is true, then it would be possible to specify a mechanism which duplicates this little man’s input, output and internal states. Let us then suppose that this mechanism is realised in the homunculi-head (ie we write a set of index cards for the little men to manipulate which instantiates the mind of one of the little men!).

Then, Lycan argues, the homunculi-head will be consciously thinking that index cards should be painted phosphorescently. Now on my summary of his scenario, this result is unsurprising and quite uninteresting. Of course the homunculi-head will be thinking about phosphorescent index cards, because we designed its inner states to mimic the little man who was himself thinking about phosphorescent index cards.

Were I to turn out to be a homunculi-head, I would (or could) have thousands of explicitly contradictory beliefs…; further, despite my overwhelming inclination to deny it, I would have conscious awareness of each of my homunculi’s conscious mental lives. But this seems outrageous and possibly impossible. [Lycan, p285]

But this conclusion is based on the construction of mechanisms to duplicate the mind of each little man, and the realisation of all of them simultaneously in the homunculi-head. It is not the conclusion which is outrageous, but the construction.

4.3 Block’s Difficulty with Physically Defined Input and Output

Functionalists tend to specify inputs and outputs in the manner of behaviourists: outputs in terms of movements of arms and legs, sound emitted and the like; inputs in terms of light and sound falling on the eyes and ears. [Block 1978, p294]

“Psychofunctionalists … [suppose] that inputs and outputs can be specified by neural impulse descriptions.” [Block 1978, p293]

Either approach requires that input and output be defined physically. But surely realisations of a functionally defined system may have totally different forms of input and output, all having some functional equivalence, yet without any common physical property. Indeed it is even possible that in a realisation of a functionally defined system the physical internal states themselves may be input and/or output devices. For instance, it may be possible for a human to communicate by altering their brain activity in a way which is observable only by an EEG.

“Functionalists pointed out that physicalism is false because a single mental state can be realised by an infinitely large variety of physical states that have no necessary and sufficient physical characterisation. But if this functionalist point against physicalism is right, the same point applies to inputs and outputs. … There will be no physical characterisation that applies to all and only mental systems’ inputs and outputs. … Hence, the kind of functionalism held by virtually all functionalists cannot avoid both chauvinism and liberalism.” [Block 1978, p295]

This seems to me to be a bogus argument, for there is no reason why inputs and outputs must be physically defined. It may be that no Functionalist has bothered to, but there’s no reason why inputs and outputs can’t be defined functionally in the same way that internal states are.

We don’t care if one human mind communicates in English while another communicates in Chinese. Similarly, if we were to design a computer program which generates natural language, we would not be concerned that a human produces spoken output while our computer only produces written output. We are only concerned about the functional content of the output.

We need not worry that the brain may be able to communicate via EEG, because a computer (or other proposed mind) may do a functionally identical job via an electronic (or other) equivalent to an EEG.

4.4 Kalke’s Point about Boundary Setting

Kalke calls into question the concept of functional equivalence [Kalke p91]. Functionalists claim that two states (or systems) are identical if they fulfil the same causal roles (ie they serve the same function). But such a test depends too much on what boundaries you set.

For example, at one level a mousetrap may be functionally realised as a cat or as a mechanism of wood, wire spring and cheese. These are functionally equivalent if the boundary you set on the function being compared is “being something which catches mice”. But if you specify a more detailed functional boundary, the equivalence may no longer hold.

Nearly any two physical systems could be considered functionally isomorphic under some description which fixes the behaviour of each. But if this is so arbitrary then what’s the purpose of calling them functionally equivalent?

4.5 Qualia12

There is a prima facie case that at least some mental states have some qualitative character which is not captured by the functional equivalence to a machine state. For humans, pain is not just some neurological state, it really hurts.

According to Block and Fodor, it could be that every person experiences pain differently (i.e. has different qualia when in a state of pain). Such possibilities are called cases of “inverted qualia” (due to the similarity to the inverted colour hypothesis). It may even be possible “for two psychological states to be functionally equivalent … even if only one of the states has qualitative content” (a case of “absent qualia”). [Block and Fodor, p245]

Can Functionalism account for qualia?

One is tempted to disregard the issue as an illusion, but Richard Boyd advises:

We are unable to imagine exactly how an arrangement of physical parts could interact so as to manifest a feeling of pain or so as to make a decision… Such strong intuitions should be taken seriously because what we misleadingly call “intuitions” are, quite often, instances of scientifically reasonable judgements, based on observation, informed by theoretical considerations, and amenable to revision in the light of new evidence. They are, indeed, perfectly typical examples of the “theory-mediated” inductive judgements of the sort that are commonplace and essential in the proper conduct of scientific inquiry. [Boyd, pp94-95]

Boyd mentions three factors which undermine our intuition about qualia [Boyd, pp95-96]:

  1. In the area of biochemistry, similar intuitions (about what sort of processes could be physically realised) have been shown to be too limiting;
  2. Artificial intelligence research is showing how problem solving can be mechanised, and hence other features of the mind may be too; and
  3. Information processing models of cognition help us to imagine how intelligence, pain etc could be physically realised.

Block provides a fourth factor: we have no understanding of how the brain produces qualia. “I do not see how [human] psychology in anything like its present form could explain qualia” [Block 1978, p289]. However, he still maintains that there are good reasons to accept our intuitions that while humans do have qualia, machines (or homunculi-heads) cannot.

5. Conclusion

I have shown how dissatisfaction with dualism and materialism has led to the development of Functionalism. By proposing that minds contain states which can be identified by their causal roles, Functionalism avoids having to deal with the details of how minds are realised.

It is as reasonable to postulate mental states as it is to postulate theoretical entities in other sciences. Functionalism avoids physicalism and provides a level of analysis which is not reducible to neuroscience.

There are, however, several powerful arguments against Functionalism (and some not so powerful ones). Qualia seem to be the main issue which needs to be addressed before Functionalism gains theoretical adequacy. And then the key question will become “But is it empirically verifiable?”.

Bibliography

Block,N.; “What is Functionalism” in “Readings in Philosophy of Psychology”, vol 1; pub. Harvard University Press, 1980

Block,N.; “Troubles with Functionalism” in “Perception and Cognition: Issues in the Foundations of Psychology”, Minnesota Studies in the Philosophy of Science, vol 9; pub. University of Minnesota Press, 1978; reprinted in [Block 1980]

Block,N., Fodor,J.A.; “What Psychological States are Not” in Philosophical Review, vol 81 no 2, April 1972; reprinted in [Block 1980]

Boyd,R.; “Materialism without Reductionism: What Physicalism Does Not Entail” in [Block 1980]

Dennett,D.C.; “Current Issues in the Philosophy of Mind” in American Philosophical Quarterly, vol 15 no 4, October 1978

Feldman,F.; “Identity, Necessity, and Events”; reprinted in [Block 1980]

Fodor,J.A.; “Materialism” in Psychological Explanation; pub. Random House, 1968. Reprinted in ??.

Fodor,J.A.; “The Mind-Body Problem” in Scientific American, January 1981

Kalke,W.; “What is Wrong with Fodor and Putnam’s Functionalism” in Noûs, vol 3, 1969

Kripke,S.A.; “Identity and Necessity” in Identity and Individuation; ed. M. Munitz; pub. New York University Press, 1971; excerpted in [Block 1980]

Lycan,W.G.; “A New Lilliputian Argument against Machine Functionalism” in Philosophical Studies, vol 35, 1979

Nelson,R.J.; “Mechanism, Functionalism, and the Identity Theory” in the Journal of Philosophy, vol LXXIII no 13, July 1976


Footnotes

1 The categorisation in these few paragraphs is based on Fodor 1981, p114.

2 I personally do not believe that this question is unanswerable, and I hope to examine the issue in a future essay.

3 I will leave aside the basic issues clarifying what is meant by the claim that the mind is the brain. A proper coverage of materialistic identity theories would take another essay.

4 A “rigid designator” is one which denotes the same entity in all possible worlds.

5 In this essay, “Functionalism” with a capital “F” always refers to Block’s “Metaphysical Functionalism”.

6 Turing Machines are a familiar enough subject that a complete discussion of their capabilities is unnecessary here.

7 This is my observation, though Block echoes it (op.cit. p172).

8 Nelson (1976, p367) says that the difference between a behaviourist’s disposition and a Functionalist’s state is that you can have a disposition (eg to be angry) without being in a state (actually being angry). Only the latter is an event.

9 “Materialism” has at least two senses. In the general sense it is the view that there are only material things in the universe. The other is that minds only occur as human-like brains (aka Physicalism). Fodor means the second; he would not discard materialism in the first sense simply because of Functionalism.

10 This analysis of Lewis and Armstrong is taken from Block 1980, pp180-181.

11 Though Kripke denies that this can be a contingent claim [Kripke, p144].

12 This is a fairly meagre introduction to a large topic which really requires its own essay.