Monday, December 24, 2007
Meaning and Mental Representation Chapter 5
Two problems with this theory (the covariance theory), one stemming from the "only" clause [the disjunction problem] and one stemming from the "all" clause.
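As a rough regimentation of my own (not Cummins' notation), the covariance idea under attack is:

\[
|c| \text{ expresses the property } C \;\iff\; \underbrace{\forall x\,(Cx \rightarrow x \text{ would cause a } |c|)}_{\text{the ``all'' clause}} \;\wedge\; \underbrace{\forall x\,(x \text{ causes a } |c| \rightarrow Cx)}_{\text{the ``only'' clause}}
\]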
(1) The problem with saying that it's necessary that only instances of the property cause tokenings of the symbol is that some noncats cause |cat|s. Once again, this is the problem of misrepresentation.
Fodor proposes a solution based on the following asymmetrical dependence:
(i) If mice didn't cause |mouse|s, shrews wouldn't cause |mouse|s.
(ii) If shrews didn't cause |mouse|s, mice wouldn't cause |mouse|s.
The asymmetrical dependence of shrews on mice lies in the fact that (i) is true, but (ii) is false.
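In counterfactual notation (my gloss, writing $\Box\!\!\rightarrow$ for the counterfactual conditional, $M$ for 'mice cause $|mouse|$s' and $S$ for 'shrews cause $|mouse|$s'):

\[
\text{(i)}\;\; \neg M \mathrel{\Box\!\!\rightarrow} \neg S \qquad\qquad \text{(ii)}\;\; \neg S \mathrel{\Box\!\!\rightarrow} \neg M
\]

The shrew-to-$|mouse|$ connection depends asymmetrically on the mouse-to-$|mouse|$ connection just in case (i) holds and (ii) fails.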
However, is (ii) false? After all, if shrews didn't cause |mouse|s, it might be because mousey looks didn't cause |mouse|s, in which case mice wouldn't cause |mouse|s either. This leads us to the objection that there is no single interpretation that makes (i) true and (ii) false. For there are two ways to break the shrew-to-|mouse| connection, and two ways to break the mouse-to-|mouse| connection. If mousey looks don't cause |mouse|s, then both (i) and (ii) are true. If mousey looks do cause |mouse|s, but mice don't cause mousey looks, this won't affect the shrew-to-|mouse| connection, so both (i) and (ii) are false.
(2) The problem with saying that it is necessary that all instances of the property cause tokenings of the symbol is that not all cats cause |cat|s. Why do we want to say that it is all instances? Because if some cats don't cause |s|s, then the extension of |s| should be the subset of cats that do cause |s|s. So we need genuine covariation- all cats cause |s|s (or, any cat would cause an |s| if given a fair chance). But what is it to be given a fair chance? This leads us to a problem of circularity similar to the one we saw in chapter 4 [where Cummins concluded that Lockeans must specify ideal conditions in a way that does not presuppose content assignments to states of the cognitive system]: If covariance is grounded in a mechanism that, under the right conditions, will produce a |cat| from a cat, and, according to the CTC, the mechanism in question can be understood only by appeal to inner representations, then in order to understand the mechanism that CTC invokes to explain covariance between cats and |cat|s we must already understand representation and the explanatory role it plays in mental mechanisms.
Wednesday, December 12, 2007
'Symbols', 'Representations', and their contents
But it’s worth noting, I think, that something hinges on there being a viable analogy between symbols like the lanterns and the mental symbols of CTC; and just calling the two very different sorts of things symbols is not enough. So we’re assuming that mental symbols are sufficiently like non-mental symbols. But it’s far from immediately clear to me that this is appropriate. In fact, there seem to be good reasons to think that they aren’t too similar, for instance, the considerations that have to do with what Cummins called original meaning.
If someone wants to defend, for instance, what Cummins calls a symmetrical theory of meaning(fulness?) (in order to avoid the circularity in saying that mental representations get their meaning from original mental meaning), where mental and non-mental meaning are essentially the same sorts of thing, there’s a really tall task ahead of them. For one thing, content would be unrestricted: there would be content everywhere, in cognitive and non-cognitive systems. It becomes less clear that the notion of representation is an interesting one (and possibly one that can ground mental causation) if tree-rings represent/symbolize the tree’s age in the same way that I (mentally) represent triangles and categories for natural kinds.
I don’t think it’d be right to say, for instance, that the lanterns represented, had their content, or meant anything without their being assigned the representational role they had been assigned for the purposes of interpretation by cognitive agents. In effect, the kinds of symbols that lack what Cummins called original meaning seem to be representations, but they lack original content. I think ‘original content’ is a better term than ‘original meaning,’ just because of the terminological adjustments I’m hinting at. I guess I have naïve intuitions that there is something like original meaning/content from which some kinds of symbols (like the lanterns) get their meaning. Although I do want to say that original meaning/content is not in play in a really strong sense, i.e. that there’s no representational content anywhere that doesn’t derive from mental meaning. But then maybe I want to split the notions of representational content and meaning in a substantial way. I’ll try to say more about what I mean below.
I mean, I think Cummins is right to point out that the Gricean theories come up short with respect to mental representation, if they would want to go there. As Haugeland notes, on threat of regress [Gricean] mental meaning can’t account for the content of representations. But I’m also inclined to think that there may be, and if there isn’t maybe there should be, different senses of representation and meaning in play. Meanings apply, maybe, to symbols that derive their semantic properties from original meaning. Mental representations, maybe, don’t ‘mean’ anything; they’re just what we use because they’re what we have.
If so, maybe we should reserve ‘meaning’ for non-mental representations, like the lanterns. I really don’t want to bite on the line Cummins says Fodor takes either, but it’s partly because I think that the sense of ‘intentionality’ in play is very underdeveloped, by Cummins and many others. The term ‘intentionality’, Dretske has noted, is ‘much abused’. I don’t have an argument (yet), but I’d like to ground mental representation in intentionality without being committed to the view that representational contents are just propositional attitude contents. I think things like concepts and spatial maps, but maybe not things like production rules and phonemes, have intentionality. (And with respect to concepts, I don’t think (as Davidson apparently did) that having beliefs is necessary for having concepts).
While I’m at splitting stuff, and back to the top, we could say that there are (at least) two different kinds of SYMBOL, maybe corresponding to two different kinds of representation. There are those deriving content from original meaning (content), i.e. the lanterns, and those that have their content by some other natural means and are capable of playing the data structure role CTC wants. The interesting symbols for our purposes, of course, would be the latter ones. After all, even if CTC is right, if our thoughts are language-like symbols, they aren’t the kind we interpret (I don’t think) or use to communicate anything. This doesn’t mean it’s a good idea to jump immediately to some symmetrical theory of symbols, meaning, representation, content, etc. (just to avoid the problem of intentionality and ultimately the ‘problem of consciousness’ which I think is a strong motivation here). It also makes me wonder if we should expect, from the outset, a theory of mental representation to contribute in a substantial way to a theory of representation in general; mental representation and the representational roles of symbols that lack original content seem like they might be very different things.
Tuesday, December 11, 2007
The Cummins Dictionary - Words from Chpt. 1
Representation = A ‘whatnot’ (state or object) with a (particular?) content.
Mental Representation = A mental whatnot with a (particular?) content.
Symbol = A representation that does not resemble what it represents; a content-bearing whatnot that does not derive its content from a relation of similarity to what it represents.
Mental Symbol = A mental representation that does not resemble what it represents.
Content = A generic term for whatever it is that underwrites semantic and/or intentional properties.
Inexplicit Content = A generic term for whatever it is that underwrites the semantic and/or intentional properties that a system bears in virtue of its structure (and not its representational and/or intentional properties).
Representational Theory of Intentionality = Intentional states inherit their content from the representations that constitute them. The identification of intentionality with representation. The content of an intentional state is a representation; the total state is an attitude toward the representation.
Cummins on Intentionality = Intentional states are just the propositional attitudes; philosophers have tended to assume that the problem of mental representation is the problem of what attaches beliefs and desires to their contents. BUT a theory of mental representation need not give us intentional contents. The data structures underwriting the representational states of CTC are not equivalent to intentional states or their contents.
Theory of Meaning = A theory of what it is in virtue of which some particular whatnot has the semantic content that it has.
Theory of Meaningfulness = A theory of what it is in virtue of which some kind or class of whatnots have any meaning at all.
Orthodox (Classical) Computationalism = Cognition is ‘disciplined symbol manipulation.’ Mental representations are language-like, symbolic data structures fit to be the inputs/outputs of computations; mental representations are contentful mental symbols; the content of a mental symbol is the data structure the symbol represents; the objects of computation are identical with the objects of semantic interpretation.
Connectionist Computationalism = Orthodox Computationalism + mental representations are not (necessarily?) language-like symbols. Also, it is not the case that the objects of computation are identical with the objects of semantic interpretation.
The Problem of (Mental) Representation (PMR) =
I like Cummins’ quick & dirty formulation of the question at the heart of the problem, which occurs at the end of chapter 1:
‘What is it for a mental whatnot to be a representation?’ Equivalently – What is it for a mental whatnot to have a content?
CTC takes the notion of a contentful mental whatnot as an ‘explanatory primitive.’ I suppose this is to say that ontological questions are deferred – assume that there are such things as mental representations, what explanatory work do/should we expect of them in a defeasible theory of cognition? In effect, CTC is a solution to PMRS, not PMR.
The Problem of (Mental) RepresentationS (PMRS) =
What is it for a particular mental representation to have some particular content? What is it for a contentful mental whatnot to have the particular content that it has? How are the particular contents of particular mental representations individuated?
I thought it was interesting that this problem is of no concern to Cummins. He somewhat off-handedly lets us know that proposed answers to PMRS don’t ‘really matter much [to his project in the book]’ and that his ‘topic is the nature of representation, not what sorts of things do the representational work of the mind.’(2)
At this point I don’t really understand why he would dismiss PMRS as irrelevant. Presumably, and as he admits, a solution to PMR that takes mental representations as explanatory primitives but then fails to account for its own notion of ‘mental representation’ is not satisfying. But won’t an ‘account of the nature of the ‘mental’ representation relation’ include an answer to PMRS? If not, why not? It’s unclear to me at this point, since cognitive scientists refer to multiple kinds of mental representations - phonemes, spatial maps, concepts, etc. Cummins acknowledges this multiplicity of representations.
Notables from Chapter 1
The Central Question(s): What is it for a mental state or a mental object to bear a semantic property? What makes a mental state or object a representation?
Reaction: Does this entail that the problem of mental representation reduces to or is equivalent to the problem of meaningfulness?
Cummins’ Three Varieties of Content (the generic stuff that underwrites whatever semantic properties are present):
Content of a cognitive system might be characterized in the following ways:
According to its intentional states (if it has them)
According to its representational states (if it has them)
According to the inexplicit content yielded by its structure
Also, intentional content ≠ propositional content, cf. Revere’s Lanterns. They bore the propositional content that the British were coming (a) by land if one lantern was lit, (b) by sea if two lanterns were lit. But we shouldn’t attribute any contentful intentional states to the lanterns.
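A toy sketch of my own (not anything in Cummins) may help fix the point: the lanterns’ propositional content is exhausted by a conventional lookup table that interpreters agreed on, which is exactly why attributing contentful intentional states to the lanterns themselves would be a mistake.

```python
# A toy rendering of the lantern convention (my illustration, not Cummins').
# All the "content" lives in a mapping fixed by the signalers' agreement;
# nothing in the lanterns themselves believes or intends anything.
CONVENTION = {
    1: "The British are coming by land.",
    2: "The British are coming by sea.",
}

def propositional_content(lanterns_lit: int) -> str:
    """Return the proposition the signal bears under the convention."""
    return CONVENTION.get(lanterns_lit, "no conventional content assigned")

print(propositional_content(2))  # The British are coming by sea.
```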
Reaction: Where did the lanterns derive their propositional content from, and why does it matter here? The general problem of representation (equivalent to the problem of meaningfulness in general? Is there such a problem?), I presume, does not necessarily reduce to the problem of mental representation. Non-mental things can represent. The lanterns, for example, are not a mind or even a cognitive system. Why does it even matter whether non-intentional systems can bear propositional content or any content at all? We're after mental representation, not representation in general.
The Meaningfulness – Meaning Distinction and the Representation – Representations Distinction
The theory of meaningfulness/theory of meaning distinction is analogous to the distinction between the theories of mental representation and theories of mental representationS. The problems behind the theories of representation have to do with what it is for a mental representation to have a content and with the nature and content of particular representations respectively.
We might ask, similarly, what is it for a whatnot to have meaning and we might ask what it is for a particular whatnot to have a particular meaning. As with the former distinction, Cummins suggests that an answer to the general problem needn’t provide an answer to the particular problem. Is this right? What good is a theory of mental meaning that goes unapplied to instances, what might it tell us? What good is a theory of mental representation that goes unapplied to instances of (at least) kinds of mental representations? I don’t mean these to be rhetorical questions. Like I said the other day, questions about the ‘nature’ of things, especially vexing relations like meanings, confuse me.
Also, it is worth noting that, while Cummins insists that his question regards the nature of representation, he also insists that the bulk of the content of the book is concerned with theories of meaning. The strategy, then, is to look at theories of what it is in virtue of which particular whatnots have the particular meanings that they do (it should become clear from there, he says, what general theory of meaningfulness is entailed).
The asymmetry between this and the approach to the theory of mental representation struck me. As I pointed out above, Cummins says it doesn’t really matter which approach one takes to PMRS because the concern is with PMR. Why is the particular-to-general approach appropriate in the case of meaning/meaningfulness but not in the case of mental representation/mental representations? Why, especially, if there is some strong relationship between the question of mental meaning and mental representation, as there appears to be?
Suggestions: Pay attention to the broader initial problems.
Characterizing MEANINGFULNESS might be a broader project than characterizing mental meaningfulness.
Characterizing representation is a broader project than characterizing mental representation.
But by Cummins’ definitions, a theory of meaningfulness applies only to kinds or classes of things, presumably things like sentences, propositions, signs, and especially mental representations. By Cummins’ own admission, different fields (within cognitive science) use MENTAL REPRESENTATION to refer to different explanatory primitives. The result is that the theory of mental meaningfulness - i.e. the theory of what it is in virtue of which mental representations have any semantic content at all - is not a single project.
Also, since it’s not clear where and if the theory of mental meaningfulness and the theory of mental representation come apart (both are concerned with what it is in virtue of which mental representations have semantic content), the ‘theory of mental representation’ we are engaged in will depend on which theoretical framework we find ourselves in. I assume that we should take ourselves to be within a broad theoretical framework, i.e. the CTC, but even within CTC there are adherents from the different fields that make up cognitive science. It leaves me wondering, when we say that it’s widely accepted that mental representations are language-like symbols, whether we’re saying that this is what the computer scientists, philosophers, linguists, and maybe the cognitive psychologists think, but not the neuroscientists (and what about the behavioral neuroscientists?). And we’re certainly speaking only of the classical computationalists, not the connectionists.
Also, is ‘semantic content’ just ‘meaning’? And then is ‘mental content’ just ‘mental meaning’?
Representational Theory of Mind Chapter 2
RTM- while mental states differ, one from another, mental states are representational states, and mental activity is the acquisition, transformation, and use of information and misinformation.
Contrast between human mental life and non-human mental life:
(1) We are flexible in our behavioral capacities.
(2) We are sensitive to the info in a perceptual stimulus rather than to the physical format of the stimulus.
The underlying idea here is that adaptive flexibility, especially learning, requires an ability to represent the world, for it is the info in the stimulus, not its physical form, that our behavior is sensitive to.
The big question: In virtue of which of their properties do the propositional attitudes (such as beliefs and desires) play the role they do in the causation of behavior? We need to show how physical systems like ourselves could have mental states that (a) represent and misrepresent the world, and (b) participate in the causation of behavior.
The No Magic Powers Constraint on answers to the big question: The functions allegedly essential to mental states must be functions actually performable by physical stuff.
One attempt at answering the big question- the language of thought hypothesis.
Three arguments for LOT [see Fodor's Language of Thought book for more detail]: (1) semantic parallels between thoughts and sentences; (2) syntactic parallels between thoughts and sentences; (3) the processing argument - processing has characteristics that make commitment to a language of thought inescapable.
If Fodor is right about LOT, we can naturalize the representational theory of mind. LOT also supports belief-desire (intentional) psychology, and enables us to formulate three theses about the occupants of intentional roles:
Thesis 1: Propositional attitudes are realized by relations to sentences in the agent's language of thought. [this is intentional realism- humans' behavior and mental states are often the product of their beliefs and desires]
Thesis 2: The psychologically relevant causal properties of propositional attitudes are inherited from the syntactic properties of the sentence tokens that realize the attitudes.
Thesis 3: The semantic content of propositional attitudes is explained by the semantic properties of mentalese. The semantic properties of a token of mentalese are explained by its syntactic structure and the semantic properties of the concepts that compose it.
Potential worry: Representational theories of mind that are unsupported by computational models risk turning into magical/circular theories of the mental, by positing an undischarged homunculus.
Computational models of cognitive processes help psychological theories avoid this regress (of the undischarged homunculus) in 3 different ways:
(1) Individualism: The processes that operate on mental representations are sensitive to the individualist or narrow properties of these representations. So, cognition is the processing of mental representations. But the cognitive mechanisms must be tuned to the structural features that code meaning, for they have no direct access to the extracranial causes of those features. Kinda like elementary formal logic (see the sketch after this list).
(2) Mechanizing reason- it makes precise and manageable the idea of decomposing an ability into subabilities.
(3) 'Hard-wired' Reasoning Processes: In order to explain how the mind recognizes the structural features, we must posit a set of basic operations that the brain carries out, not in virtue of representing to itself how to carry them out, but in virtue of its physical constitution. So, (a) The properties of most immediate causal relevance to the cognitive mechanisms mediating the interaction of the sentence tokens in LOT are mind internal properties of some kind; (b) important cognitive processes are computational processes.
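Here is a minimal sketch of my own (not Sterelny's) of what sensitivity to narrow, syntactic properties looks like: an inference rule that operates purely on the shape of its inputs, with no access to what the symbols are about.

```python
# A toy, syntax-driven inference rule (my sketch): it matches the string
# form "if P then Q" together with the string P, and derives Q. Meaning
# plays no role in the mechanics; only the shapes of the tokens matter.
def modus_ponens(premises: set[str]) -> set[str]:
    derived = set()
    for sentence in premises:
        if sentence.startswith("if ") and " then " in sentence:
            antecedent, consequent = sentence[3:].split(" then ", 1)
            if antecedent in premises:  # matched by string identity alone
                derived.add(consequent)
    return derived

premises = {"if it rains then the ground is wet", "it rains"}
print(modus_ponens(premises))  # {'the ground is wet'}
```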
So, according to RTM, thoughts are inner representations with a double aspect- they represent in virtue of causal relations of some kind with what they represent, but their role within the mind depends on their individualist, perhaps syntactic, properties. So RTM is linked to the computational theory of the mind.
Meaning and Mental Representation Chapter 4
(L1) x represents y in LOCKE = x is a punch pattern that occurs in a percept when, only when, and because LOCKE is confronted by y (whiteness, a cat, whatever)
Positive aspect of this theory: proposes that the things that mediate cat recognition in the system must be the cat representations.
Another positive aspect of this theory: Does away with resemblance as the ground of representation, and solves the problem of abstraction (nothing can resemble all and only the blue things, but something can be the regular and natural effect of blue on the system, and hence occur in the system's percepts when and only when blue is present to it).
The fundamental difficulty facing Lockean theories is to explain how misrepresentation is possible. For suppose LOCKE is confronted by a cat but generates a dog percept D: then it is not true that D occurs in a percept when, only when, and because a dog is present, since no dog is present and the current percept has feature D. By (L1), then, D does not represent dogs after all, so LOCKE has not misrepresented anything; it has merely failed to represent.
The covariance theory strategy for dealing with the problem of misrepresentation is via idealization- either idealizing away from malfunctions, or idealizing away from suboptimal conditions of perceptual recognition.
General problem for idealization solutions: The idea that one can idealize away from cognitive error is incompatible with a fundamental finding of CTC- error is essential to a well-designed cognitive system with finite resources, because in order to succeed it must take short cuts. [I like this quote- "Epistemology for God and epistemology for us are two different things. God never had to worry about recognizing tigers in time to evade them."]
Specific problems for idealization solutions:
(L2) [idealizing away from malfunction] x represents y in LOCKE = were LOCKE functioning properly, punch pattern x would occur in a percept when, only when, and because LOCKE is confronted by y.
Problem: The most obvious/everyday cases of perceptual misrepresentation- illusions- are not cases of malfunctions, but cases of proper functioning in abnormal circumstances.
(L3) [idealizing away from suboptimal conditions of perceptual recognition] x represents y in LOCKE = were LOCKE functioning properly and circumstances ideal, x would occur in a percept when, only when, and because LOCKE is confronted by y. [the basic idea here is that something is a representation of a cat in virtue of having some feature that is, in percepts, an effect of cat presence and not of anything else]
Problem: Any specification of ideal circumstances will lead us in a circle. For according to this theory, we're going to have covariance only when the epistemological conditions (e.g. ideal circumstances) are right. And specifying those conditions will already presuppose content assignments to states of the cognitive system, because for the system to "get it right" just is for it to have representations with the right content. So, to avoid being circular, Lockeans must specify ideal conditions in a way that does not presuppose content assignments to states of the cognitive system.
[At this point in the chapter, Cummins starts getting into possible strategies for the Lockean, involving inexplicit content, and I don't really understand it. I think further reading would be required to really get the inexplicit content stuff. However, I think that the main gist of the chapter can be captured without getting into that stuff, because Cummins concludes the chapter by reiterating what I have called the general problem for idealization solutions, and the specific problem for idealizing away from suboptimal conditions of perceptual recognition.]
Monday, December 10, 2007
Meaning and Mental Representation Chapter 3
(1) The similarity theory makes truly radical misrepresentation impossible; it allows for misrepresentation only when the dissimilarity is relatively small.
(2) The problem of the brain as medium: Similarity theory seems incompatible with physicalism. If mental representations are physical things, and if representation is grounded in similarity, then there must be physical things in the brain that are similar to the things they represent. But this could only work if the mind-stuff is nonphysical. And "restricted" similarity (like pictures, cartoons) won't work because it is only "perceived" similarity.
(3) Similarity theories cannot deal with abstraction. How can a representation represent a whole class of things that differ widely from one another on many dimensions? How do we rule out resemblance in irrelevant aspects?
For Locke, the problem of abstraction and the problem posed by secondary qualities lead to the covariance theory solution.
Meaning and Mental Representation Chapter 2
According to the neo-Gricean theory of meaning, semantic properties of representations are derived from the intentionality of their users- either directly, or indirectly via convention. So, meaning depends on the communicative intentions of communicating agents.
This theory is a species of theory that reduces meaning generally to intentionality. So, it provides an asymmetric treatment of meaning in that it accords priority to mental meaning. [But, it is possible to hold that mental and nonmental representation are basically the same- see Block 1986 "Advertisements for a Semantics for Psychology", and Millikan 1984 Language, Thought, and Other Biological Categories. A symmetrical treatment of representation must ground intentionality in mental representation. Two basic strategies- localism and globalism.]
The problem with using this theory to explain mental representation is that people don't use mental representations with the intention to communicate anything to anyone. One strategy, perhaps, for solving this would be to reduce nonmental meaning to intentionality, and then use RTI to reduce intentionality to mental representation.
Now for the main questions: What is it for a mental representation to have a content, and what determines the content it has? In the context of CTC: what makes a data structure a representation, and what determines what it represents?
The Representational Theory of Mind Chapter 1
Possible answer [to what makes a mental state, e.g. anger, the state that it is]: Its introspective, experiential quality.
Problems with this answer: (a) Absence of introspectible qualities- a person can be angry without being able to tell that fact about themselves. (b) Not distinct- it is not obvious that the experiential sensations of anger are different from those of other emotional states of great arousal, such as fear or excitement. (c) Anger seems to have a cognitive component, involving special types of belief and desire. But cognitive states (i) need not be conscious; (ii) are not distinguished from one another by their experiential quality.
Alternative answer: Functionalism- mental kinds/properties are identified by what they do, or what they are for, not what they are made of. So there is the following role/occupant distinction that provides us with two different, but complementary, ways of describing human mental life:
(1) It is a mental life in virtue of its functional description- specifies the causal roles of the full range of human psychological states.
(2) Description which specifies the physical nature of the occupiers/realizers of those causal roles.
Two features of functionalism that Sterelny points out:
(1) Availability of double descriptions (role/occupant) is not unique to psychology- e.g. computer science, hardware description/information flow description. And the discovery of the gene illustrates how a theory of function can be developed independently of a theory of physical realization.
(2) Multiple realization (here, one mental state having wildly varied physical realizations) is not restricted to psychology.
Machine functionalists- cognitive processing is a special case of running a program; cognitive states are states of the machine on which the mind-program is running. It was thought that anything whose behavior fits a machine table (a la Turing machine) is a functional system. But this turned out to be a bad idea [For more exposition on what was wrong with early functionalism, see Block (1978) “Troubles with Functionalism” in Block ed. Readings in Philosophy of Psychology Volume One]: 1) It makes functional descriptions too cheap/weak, because too many things (like the Brazilian economy, a pail of water in the sun, and the solar system) would qualify as functional systems. So the existence of entirely accidental correlations between physical states and symbols on a table isn’t enough for something to be a functional system. 2) Mysterious realization- in general, natural kinds are realized by more physically fundamental natural kinds. But in machine functionalism, the relation is mysterious- it is a relation between a mathematical object (the mathematical function the machine table specifies) and a physical device.
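For concreteness, here is a toy machine table of my own (a turnstile, nothing from Sterelny). The early machine functionalist idea was that having a mind is realizing the right such table; the objection above is that any physical system whose states happen to correlate with the entries, however accidentally, would count as realizing it.

```python
# A toy machine table (my illustration): (state, input) -> (output, next state).
# Anything whose transitions happen to fit this mapping "realizes" it,
# which is why machine-table realization is too weak a criterion for mentality.
MACHINE_TABLE = {
    ("locked",   "coin"): ("unlock", "unlocked"),
    ("unlocked", "push"): ("lock",   "locked"),
}

def step(state: str, inp: str) -> tuple[str, str]:
    """Apply one transition of the table."""
    return MACHINE_TABLE[(state, inp)]

output, state = step("locked", "coin")
print(output, state)  # unlock unlocked
```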
So machine functionalism doesn’t capture what is distinctive about a functional system. Functional systems are systems whose existence and structure have a teleological explanation. Teleological account of the mind- the mind has “an internal organization designed to carry out various perceptual, cognitive and action-guiding tasks. It has that organization and those purposes in virtue of its evolutionary history.” [For more on the teleological response to early functionalism, see Lycan 1981 “Form, Function, and Feel”. Journal of Philosophy 78, pp. 24-50; Millikan 1986 “Thoughts Without Laws; Cognitive Science With Content”. Philosophical Review 95, pp. 47-80]
What kinds of creatures are intentional systems? An intentional system must (a) have perceptual systems, so there is a flow of info from the world into the system; (b) have a reasonably rich system of internal representation (thermostats aren’t intentional systems in part because they represent only temperature); (c) have cognitive mechanisms that enable it to use perceptual info to update and modify the internal representations of the world, and (d) have mechanisms that translate its internal representations into behavior that is adaptive if those representations fit the world.
So intentional systems can be psychologically very different from each other. So, actually, there are not two theories of the mind, a functional theory and a physical theory. For psychological states vary in the degree to which they are independent of their physical realization, and in the extent to which they are tied to particular psychological organization. This leads us to homuncular functionalism, where intentional systems have a multiplicity of psychological structures [exactly why, I don’t really get. For more on homuncular functionalism, see Lycan “Form, Function, and Feel” and Lycan (1981) “Towards a homuncular theory of believing” Cognition and Brain Theory 4 pp 139-59.]
Homuncular functionalism: (1) Functionalism- essence of a mental state is what it does, not what it is. (2) Mind is modular. (3) Each homunculus is in turn made up of more specialized simpler homunculi, until we reach a level where the tasks the homunculi must carry out are so simple that they are psychologically primitive.
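As a cartoon of my own (the decomposition and task names are made up for illustration, not from the literature), tenet (3) can be pictured as recursive delegation that bottoms out in primitives:

```python
# A cartoon of homuncular decomposition (my illustration): each task is
# discharged by delegating to dumber subtasks until we reach operations
# simple enough to count as psychologically primitive.
FACE_RECOGNITION = {
    "recognize face": ["detect edges", "match stored template"],
    "match stored template": ["compare features", "retrieve from memory"],
}

def discharge(task: str, decomposition: dict[str, list[str]]) -> list[str]:
    """Recursively expand a task into the primitives that realize it."""
    subtasks = decomposition.get(task)
    if subtasks is None:          # no dumber homunculus needed:
        return [task]             # a psychologically primitive operation
    primitives: list[str] = []
    for sub in subtasks:
        primitives.extend(discharge(sub, decomposition))
    return primitives

print(discharge("recognize face", FACE_RECOGNITION))
# ['detect edges', 'compare features', 'retrieve from memory']
```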
Two big defenders of homuncular functionalism- Dennett and Lycan (maybe also Simon). They like the example of our specialized cognitive mechanism for face recognition.
Meaning and Mental Representation Chapter 1
There have been 4 answers to the problem of representations, concerning the sorts of things that can be mental representations: (1) Mind-stuff inFORMed- the same stuff that makes a red ball makes us perceive a red ball. Similarity is the big thing here- what we have in our head is capable of representing the world because it is made of the same stuff. (2) Images- same as Mind-stuff inFORMed view, minus Aristotelian jargon. (3) Symbols: (a) in contrast to preceding views, symbols don’t resemble the things they represent; (b) they can be inputs and outputs of computations. (4) Neurophysiological states- mental representation is a biological phenomenon essentially.
The second problem concerning mental representations is the problem of representation- understanding what it is for a cognitive state to have a content.
There have been 4 answers to the problem of representation, concerning the nature of representation:
(1) Similarity- in order to be able to think about things in the world, need to have something resembling the thing in the world in your head.
(2) Covariance- certain characteristic activity in (neural) structure covaries with something out there in the world.
(3) Adaptational role- this, not covariance, accounts for the representation.
(4) Functional or computational role- functionalism applied to mental representation.
[I’m not sure I understand exactly what’s involved in solving the problem of representation. I would love it if we could think of an analogy in some other area of philosophy. Maybe this Cummins quote on methodology will be helpful: “We must pick a theoretical framework and ask what explanatory role mental representation plays in that framework and what the representation relation must be if that explanatory role is to be well grounded.”]
Most of the book will be assuming an orthodox computationalism background (CTC- computational theory of cognition) that provides an answer to the problem of representations (mental representations are symbolic data structures) but is agnostic about the problem of representation (concerning what it is for a data structure to have semantic properties).
Cummins urges that at the outset, in order to help distinguish between the various issues involved and solutions proposed, we should not be assuming either (a) Representational theory of intentionality (RTI)- intentional states inherit their contents from representations that are their constituents; or (b) The language of thought hypothesis, according to which, cognitive states involve “quasi-linguistic formulas type identified by their states in an internal code with a recursive syntax.”
The reason why it is important not to assume RTI at the outset: Represented content isn’t all the content there is. There is also inexplicit content of various kinds (e.g. content implicit in the state of control, content implicit in the domain, content implicit in the form of representation, content implicit in the medium of representation), and if nothing like the RTI is true there is also intentional content. [I don’t think I fully understand this point. It might be helpful to read Cummins 1986 “Inexplicit Information” in The Representation of Knowledge and Belief, ed. Brand and Harnish]
The reason why it is important not to assume the language of thought hypothesis at the outset: “A symbol can have a propositional content even though it has no syntax and is not part of a language-like system of symbols.” E.g. Paul Revere’s lanterns.
Monday, October 15, 2007
Memory and Language Workshop
Monday, August 27, 2007
sorry, here's the link i think
Kripke lightning
Also, thinking about the lightning example from Place, here are some cool stats from the 4-year-olds: 58% think that getting struck by lightning is improbable but not impossible, and 42% think that eating lightning for dinner [yes, a picture is included] is improbable but still possible.
Oh, the conclusion in the article (well, the abstract, haven't read the article) is that children generally are stricter with impossibility attributions than adults are. "Children initially mistake their inability to imagine circumstances that would allow an event to occur for evidence that no such circumstances exist."
I'm guessing that this will ultimately have more philosophical significance for concepts, and conceptual development, than stuff pertaining to Kripke's modal intuitions, but who knows. Susan Carey is really at the top of the game in terms of philosophically-relevant psychology.
I'm also guessing that they didn't ask the subjects to imagine pain without the sensation of pain, but probably some experimental philosophers have jumped all over that one.
Tuesday, August 21, 2007
Mind-Brain Identity (continued)
Neo-Dualism (ND): Pains are individuated/identified by the way they feel, so the causal role of neurons in the instantiation of pain is not essential to what pain is (we can have Martian pain and Ghost pain, etc.?). What counts as a pain is any particular occurrence of a particular kind of phenomenal feel. Material constitution is not essential; so, pain cannot be identified with neural states.
Rorty's Counter-Consideration: ND's Hypostatization of Pain
Hypostatization: Taking a quality/property and transforming it into a subject of predication.
Pain Hypostatized: PAIN is taken from being considered a property of persons and conceived of as a particular, eligible for predication in its own right. But...
- ND PAIN is PAINFULNESS, the feeling of pain, or the universal or concept of what it's like to be in pain, abstracted from particular instances of pain.
- Confusion: ND PAIN is a universal construed as a particular. A particular pain state is just an acknowledged instance of a universal: PAINFULNESS.
- What appears to be an ontological distinction is merely a confused instance of the particular/universal distinction; paradoxically, mental particulars (pains) are like universals.
The upshot, according to Rorty, is that early Identity Theorists and Neo-Dualists are talking past each other:
- Smart/Place are talking about what is essential to someone's being in pain.
- Neo-Dualists are talking about what is essential to something's being a pain.
Rorty suggests that this sort of hypostatization is exactly the brand of error made historically by, for instance, Locke and Plato. The error is this: '...we simply lift off a single property from something...and then treat it as if it itself were a subject of predication.' Consider, following Rorty, the properties of being red or being good as the Platonic Forms REDNESS and GOODNESS or the Lockean ideas of RED and GOOD. ND PAIN, in other words, is like a Platonic form and a Lockean idea in that it is held to exist independently of its worldly instantiations, while maintaining causal intercourse with the world.
Sunday, May 13, 2007
More Armstrong, Conceptual Analysis
(1) I agree that conceptual analysis appears to be a degenerating research program, which is part of the reason why I think it might be interesting to think about conceptual analysis in Lakatos' terms. That is, let's put all of our cards on the table: What is it exactly that we are trying to explain (for example, in my post on Place I wondered what the parallel would be for explanation involved in identifying lightning with electric charges), what is the "hard core" that the different approaches are committed to, and how do the different approaches compare with one another in terms of the standards for successful and degenerating research programs.
(2) Assuming we're skeptical of traditional forms of conceptual analysis: Is there still any sense in which understanding/discovering "consciousness is a brain process" is different from understanding/discovering "lightning is electric charges", in the sense that it seems easier to understand the meaning of "lightning" than "consciousness" or any other mental concept?
(3) Again, assuming we're skeptical of traditional forms of conceptual analysis: Perhaps this needn't preclude us from giving some sort of functional account of mental states (as opposed to mental concepts), but maybe we'd need to provide different types of arguments in support of the functionalist account? That is, to what extent are functionalist accounts dependent upon conceptual analysis? And to the extent that a functionalist account is not dependent upon conceptual analysis, how does it compare with Place's identity theory?
(4) Poison vs. Lightning: Is it more instructive to think of mental concepts as analogous to lightning (with the proviso that Place is not including cognition/volition in this analogy), or as analogous to poison? I suppose the poison analogy lends itself more to the conceptual analysis project, but there are problems with the lightning analogy as well: I'm repeating myself here, but if the meaning of "lightning" were on a par with the meaning of mental concepts, philosophers would not be so interested in mental concepts. Of course, it could be that the interest has been systematically mistaken (this is the conclusion of eliminative materialism, yes?), but there is still work to be done in explaining away the apparent difference.
(5) Finally, I just want to point out that the conceptual analysis of poison here seems weird, although maybe this is just a reflection of my skepticism of conceptual analysis: Is the game here to try and find necessary and sufficient conditions for poison? If so, a simple causal analysis clearly doesn't accomplish this. Do we just look up poison in the dictionary, as Armstrong seems to have done?
Wednesday, May 2, 2007
Kripke on the Brain
- We discover the identity of (epistemically) disparate phenomena, like lightning and electric discharge, by identifying contingent properties of the relata (lightning might've been colored red and been identical to the motion of sugar molecules, for example).
- But if x truly is y, x is necessarily y.
- Epistemically disparate verification conditions do not entail contingent identity.
- What appear to be 'contingent' identities are metaphysically necessary (the contingency is merely epistemic and so illusory).
For instance:
- If pain is a brain state/process (say, c-fiber firing), it is necessarily c-fiber firing.
- But we can imagine c-fiber firing with no correlated pain.
- So, pain is not necessarily c-fiber firing.
- So, pain is not a brain process (c-fiber firing).
Alternatively, or in addition:
- If the identity relation holds at all, it necessarily holds.
- The a posteriori discovery of identities does not make them contingent.
- If a = b, then a and b share all essential properties in common.
- If pain is c-fiber firing, then pain and c-fiber firing share all essential properties in common.
- We cannot imagine pain in a world with no creatures to experience pain.
- We can imagine a world in which a creature x's c-fibers fire but x experiences no pain.
- The feeling of pain is essential to the concept PAIN.
- The feeling of pain is not essential to the concept C-FIBER FIRING.
- So, pain and c-fiber firing fail to share all essential properties in common.
- So, pain is not c-fiber firing.
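Schematically (my regimentation, not Kripke's own notation), the argument seems to be:

\[
\begin{aligned}
&(1)\;\; \forall x \forall y\,(x = y \rightarrow \Box\, x = y) && \text{(necessity of identity)}\\
&(2)\;\; \Diamond(\text{c-fibers fire} \wedge \neg\,\text{pain}) && \text{(conceivability premise)}\\
&(3)\;\; \neg\Box(\text{pain} = \text{c-fiber firing}) && \text{(from 2)}\\
&(4)\;\; \text{pain} \neq \text{c-fiber firing} && \text{(from 1 and 3)}
\end{aligned}
\]

On this rendering everything turns on (2), i.e. on whether conceivability tracks possibility, which is the worry raised just below.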
If this is close to right, the argument is a very Cartesian one, in which conceivability is awarded the status of truth-maker (or breaker). Please amend. I'd like to have an adequate synopsis of what Kripke is up to before moving on.
Monday, April 30, 2007
conceptual analysis, here we come
I love Armstrong's honesty in admitting that he's not quite sure what's going on in conceptual analysis. This topic is bound to come up again and again during our readings. Unfortunately, and unsurprisingly, Armstrong does not solve the problem of conceptual analysis here. However, I want to point out two interesting things going on in his discussion of conceptual analysis:
(1) The idea of conceptual analysis as a research program (a la Lakatos) seems like a refreshing way to think about the nature of, and possible progress in, philosophical analysis. I'm pretty sure that Lakatos' approach to scientific theories has only been applied to the study of theory development/change in the natural sciences, and not to the social sciences. So it's a pretty big jump to think of conceptual analysis in philosophy along these lines. Still...
(2) Is the conceptual analysis of poison actually similar to the conceptual analysis of mental concepts? I don't know. What would a conceptual analysis of lightning look like? What does Armstrong mean when he says "it is surely not an empirical fact, to be learnt by experience, that poisons kill."
Armstrong and the Problem of the Secondary Qualities: Part of this I think has to do with qualia issues that we'll get to in future readings. His first response looks similar to Place's discussion of the phenomenological fallacy. Then he moves on to the nature of secondary qualities such as color. I'm content to leave this issue alone for the time being.
PCRG Reading Schedule Update
- Armstrong's Materialism (one paper in Mind & Cognition and one photocopied in the office): Introducing the Causal Theory of Mind (Early Functionalism)...
- Prinz & Goldman/Pust. (photocopies in office) We detour (only slightly) into methodology: a look at conceptual analysis. What kind of philosophical and scientific work might/should we expect from the modal intuitions generated by the method of cases? Do they provide a privileged sort of data? What should we expect from the philosophy of mind, if anything, over and above what the 'sciences' of the mind might discover?
- Comparing and contrasting metaphysical essentialism with psychological essentialism. (photocopies in office). Are there any consequences (for instance) for Kripke's account of identity and necessity if metaphysical essentialism is ill-founded? Kripke relies heavily on the method of cases and semantic intuitions.
- Philosophical and Psychological Behaviorism. (Ryle, Skinner, Chomsky, Place, Quine, etc.). The shift from Behaviorism to Cognitivism in psychology is one of the most marked in recent scientific history. In the wake of that shift, Behaviorism has often come to be ridiculed as an embarrassingly untenable position. This is certainly unfair; it took the development of radically new technology and Chomsky, possibly the most celebrated intellectual in recent history, to usher in a new paradigm. We should look closer to see what was motivating behaviorists, and where and why it was perceived to go wrong. What's the difference between Philosophical and Psychological Behaviorism? How are they related? I suspect we'll be on this for a while, once we get here.
Passing Thought on Kripke
Crucially, the question is can we imagine pain in a world without creatures to sense pain? I'm not sure how much it matters how we answer this question. If we take pain to be a property of nervous systems, which are physical things in the world, isn't asking if we can imagine pain where there're no creatures to sense it just like asking whether or not we can imagine heat in a world with no molecules? If so, Kripke's intuitions might be misleading, or maybe even incoherent. More later, but I'd like to hear thoughts about this...
Saturday, April 28, 2007
This Pain and the Concept, PAIN
To try to clear up some suggestions (mostly for myself) I made last time, let's consider some identity statements, coupled with counterfactual statements:
(1) This table is made of wood.
(1*) This table might have been made of ice.
(2) This table is a packing case.
(2*) This table might have been a coffin.
(3) Cicero is Tully.
(3*) Cicero might not have been Tully.
(4) Heat is the motion of molecules.
(4*) Heat might not have been the motion of molecules
(5) Pain is a brain process.
(5*) Pain might not have been a brain process.
What distinctions might we draw here, and do they matter? The first thing that sticks out is that examples 1-3 include either indexical reference or reference to particulars; 4-5 include general terms of reference. Also, the verification conditions for the first three are ordinary, by which I mean they can be verified by the experience of the linguistically competent man on the street, given access to the particular referents. The latter two require sophisticated equipment, and maybe a lot of education, to verify. I won't explore whether or not this matters here.
1-2 make reference to the composition of particulars, 3 includes only two names denoting a single object, and 4-5 assert identities between properties. I'll try to unpack this below a little, because I think these distinctions are significant; they may affect which among these, if any, should be understood as necessary identities, such that its (*) counterpart is (metaphysically/logically?) impossible. I suggest that the case of the particular identities is importantly different from that of the general ones.
In 1-2 there is pull to the intuition that, once verified, the identities could not have been otherwise. When we go to the (*) counterpart sentences, it is not at all clear that were the table made of ice, or were the table a coffin put to another use, that we’d be talking about the same table in either (*) case. Maybe it is necessary that this table is made of wood, etc.; there seems to be a straightforward sense in which indexicals rigidly designate.
With regard to 3, similar considerations apply. We may individuate reference for ‘Cicero’ and ‘Tully’ by different contingent facts functioning as descriptions about one and the same individual, but that we associate contingent facts with different names doesn’t imply that the identity of the referents is contingent. Regardless of what descriptions apply, talk of Cicero and talk of Tully is talk of the same man. The (*) counterparts look impossible here, too; maybe proper names are good candidates for rigid designators as well.
The situation is less clear when we get to 4-5. We should look at them in turn. In 4, as was indicated, we refer to a property (not an object) and endorse its identity with a physical phenomenon. On the one side we have a property of a kind of subjective, apparently private, experience, the sensation of ‘heat’. On the other, the motion of molecules taken to be the same thing. As Kripke notes, we apparently use the contingent fact that ‘heat’ causes the sensation of ‘heat’ to identify ‘heat’. Crucially, there is supposed to be an analogy between contingent reference conditions for properties like ‘heat’ and contingent reference conditions for names like ‘Cicero’. We pick out Cicero by the contingent fact that he was a statesman; so, says Kripke, we pick out ‘heat’ by the contingent fact that it produces a particular kind of experience/sensation.
It is not clear at all to me that there is not some question begging going on here. If not, Kripke’s language is extremely confusing; he can’t mean to say we pick out ‘heat’ in the world by the contingent fact that it causes such and such a sensation. He must mean that we pick out the motion of molecules in the world by the fact that it produces such and such a sensation. But if that is right, and the sensation is only contingently related to the motion of molecules, then the identity is contingent. Kripke hasn’t shown that HEAT and MOTION OF MOLECULES are conceptually connected in such a way as to make the identity necessary. He would’ve had to show that there is something to the concept HEAT over and above the sensation. So, the (*) sentence does not express an impossible state of affairs in this case (4*), due to an important, if only implicit, disanalogy with cases of particular-object reference: we can imagine Cicero not being a statesman, but we cannot imagine him not being Tully; but, unless HEAT is more than just a sensation correlated with a lot of molecular agitation, we can imagine that it is not identical with molecular motion.
What about the case of pain? In 5, we refer to a (less controversially, at least on the face of it) apparently private kind of state of consciousness, and endorse its identity with a neurological process. Kripke thinks this case is importantly different from (4): that is, we do not pick out states of pain by the contingent fact that we sense pain in a particular way. But here Kripke ignores the fact that we are no longer expressing an identity statement about a particular (and it may be that this owes to his target, strict identity, but his arguments against materialistic identity as a program appear to be defeasible); his crucial move seems to be this:
[it] 'might be true of the brain state [that we pick it out by the contingent fact that it affects us in a particular way], but it cannot be true of the pain. The experience itself has to be this experience, and I cannot say that it is a contingent property of the pain I now have that it is a pain.’
Fair enough, but that’s not what (5) says, and the material identity theorist doesn't need to be so strict. Even Place, in particular, might have been able to argue that Kripke conflates (5) with (7):
(7) This pain is a brain process.
(7*) This pain might not have been pain.
Now, consider:
(8) This heat is the motion of molecules.
(8*) This heat might not have been the motion of molecules.
Switching from general claims to specific claims does change the climate. But how so?
Presumably, PAIN should be read here as I indicated HEAT might be read above: there may be nothing more to the experience of pain than the sensation or experience. Grant Kripke that this pain or that pain is necessarily pain; (7*) is impossible. But this does not imply that (5*), ‘Pain might not have been a brain process,’ is impossible. In other words, it may be necessary that 'this pain is this brain process,' but it need not be necessary that 'pain is a brain process': contra Kripke, ‘Pain is a brain process’ might really be a contingent identity statement, even though cases of particular pain might express necessary identity relations between particular experiences and whatever underwrites them.
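Semi-formally (my notation, not Kripke's or Place's): let $p_i$ be a particular pain and $b_i$ the particular brain process underwriting it. The suggestion is that

\[
\forall i\;\, \Box(p_i = b_i) \quad \text{does not entail} \quad \Box(\mathrm{PAIN} = \text{C-FIBER FIRING}),
\]

so every token identity may hold necessarily while the type identity remains contingent, since the kind PAIN might have been realized otherwise.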
Since we can imagine PAIN in creatures very unlike us (like Lewis’ Martian) we don’t need to identify PAIN as a kind with 'pain' as referring to this or that sensation/experience. I think the same goes for HEAT. This or that sensation of heat rigidly designates the experience as the reference, but it doesn’t rigidly designate HEAT as the motion of molecules. Since the identity is plausibly a contingent one, the materialist is not required to answer Kripke’s challenge that ‘he has to show that [pains without brain states] are impossible.’
Tuesday, April 24, 2007
Kripke in ‘Heat’
To pit the relevant points against each other: Kripke’s attack is on the claim that ‘consciousness is a brain process’ is a contingent identity statement. For the ID theorist, I guess, this kind of identity is desirable because it allows for the conceptual independence of relata that are ontologically identical. This way the layman can talk about lightning without knowing anything about electric discharge, heat without knowing anything about molecules, and pain without knowing anything about neuroscience. This seems to be Place's antidote to dualist arguments resting on an insistence on the ontological independence of reference for distinct concepts.
Now embark on an experiment in thought with Kripke guiding. We should entertain the question along the way as to whether Kripke is begging the question against ID by assuming that conceptual distinctions reflect ontological ones. I find the experiment unconvincing, and I’d like to hear some input about whether or not I’ve got it right and why it has had so much pull for folks.
Imagine a population of Martians who feel ‘heat’ when they touch ice, or any other solid with exceptionally slow molecular motion and feel ‘cold’ when they touch fire, and other objects where there’s lots of molecular agitation. I think we can imagine this situation. Given the conceptual distinction here, are we entitled to say that ‘heat is the motion of molecules’ is a true identity statement of sorts? The ID guys say yes, but it’s a contingent identity; Kripke says yes, but it’s a necessary identity: the conceptual distinction we think we see isn't really there.
On Kripke's account, we first say: yes, it’s just that Martian ‘heat’ is cold and Martian ‘cold’ is heat. But, of course, we aren’t talking about terms here, not about what Martians would say in Martianese, or what they would say in English if they knew it, when having a particular kind of experience. We’re talking about sensations, and Kripke invites (implores?) us to join him in the intuition that, even were there no creatures to feel ‘heat’, heat would still be identical with the motion of molecules. But this is altogether less obvious. If anything, if we are willing to award some pull to Kripke’s imagination, he seems to be readable as giving an argument for the non-identity of reference for our concepts of HEAT and MOTION OF MOLECULES.
For instance, where there are no creatures to sense ‘heat’, one might also be pulled to say that, sure, there might exist the motion of molecules, but that needn’t be identical with ‘heat’ as we know it. This is plausible if we allow imaginary things like Martians who feel ‘cold’ where we feel ‘heat’. The sensations themselves, as Kripke points out, are contingent. But there is no argument here that the sensation of ‘heat’ is essential to the motion of molecules; in fact, just the opposite seems to be supported. If we disallow contingent identity, we might wonder whether we have an identity statement here at all. All things being equal, functionalism is looking more and more attractive. If we allow that the motion of molecules might produce a different sensation in an exotic kind of ‘nervous system’, why assume that ‘heat’ is identical to the motion of molecules just because we sense it that way?
But it is worth noting that Kripke completely side-steps what Place and Smart have in mind: that sensations of ‘heat’ might be brain processes. Leaving this as an empirical hypothesis, there is room to say that other kinds of brains (maybe Martian brains) might sense the motion of molecules as ‘cold’. But this is either just a defense of the contingent identity holding between the referents of ‘heat’ and ‘motion of molecules’, or it’s an argument for the non-identity of these relata. But Kripke advocates neither.
Nevertheless, Kripke continues along this questionable line, taking it as a given that heat and the motion of molecules are the same thing (and necessarily so). What if there were no creatures initially, and then some evolve, like the Martians, that sense what we call ‘heat’ as cold? Again, here, Kripke explicitly assumes the objectivity of the property of being hot and its identification with the motion of molecules. ‘Would we say,’ he asks, ‘that heat has suddenly turned to cold?’ No, he says; instead we’d say that Martians don’t feel ‘heat’ when we do. But this seems to beg the question and to fail to address the issue of the phenomenological fallacy. We might say, to be more careful, that the Martians feel the motion of molecules differently. This is, to some extent, incompatible with Kripke's intuition that HEAT is the MOTION OF MOLECULES.
The problem, I suspect, is an equivocation between the sensation of heat and a theoretical conception of HEAT. At all events, the situation is markedly less clear when we substitute the supposedly identical predicates ‘agitation of molecules’ and ‘slowing down of molecular motion’ for ‘heat’ and ‘cold’ respectively. We should say, then, that the new creatures sense heat when molecules slow down and cold when they’re agitated, and then contingent identity is confirmed, unless we presuppose that heat just is the motion of molecules. But the support for this presupposition was not strong.
In effect, there is an important sense in which Kripke and the classical ID theorist might be talking past each other: heat is not, and should not be, identified with the motion of ‘external’ molecules. Rather, if I can pretend I’m an ID theorist, ‘heat’ is just a sensation, and so a property of the nervous system; do we have any independent evidence otherwise? As the thought experiment shows, maybe the sensation of ‘heat’ should just be identified with contingent brain processes. So it might be the motion of molecules, but nowhere else than in an appropriately composed nervous system.
All this may just point to a categorical difference between ‘simple’ sensations like heat, pain, taste, etc., on the one hand, and those which are annexed to concepts like LIGHTNING, or LIGHT, and even HEAT, in their theoretical senses, on the other. If we avoid the phenomenological fallacy by saying that heat is no property of an experience (and no property of the world outside nervous systems; which is not to say that it is a property of an object in a phenomenal field), and that the visual experience of lightning is the same, etc., the question becomes whether we are misidentifying ‘heat’ with the motion of molecules in objects outside the nervous system. But I don't think this is a real worry; we're after contingent identity, after all, and if we can tell some story about how processes in the nervous system are causally correlated with the motion of molecules in objects, we get contingent identity of sorts. Maybe this is a little daunting, and I’m not sure where it leads; without a broader metaphysical realism it might appear problematic, but this wouldn’t concern Place, who explicitly advances a scientific hypothesis.
I’ll wind it up by saying that the analogy with mind-body identity is correspondingly weak here. With regard to Kripke’s argument, I don’t see an obvious reason why we can’t treat PAIN like HEAT, at least tentatively. Kripke says we pick out heat by the contingent fact that it affects us in a particular way, but isn’t this the case with pain as well? With 'heat' we just have the further consideration of the correlation between neural states and molecular states in objects; with 'pain' we're concerned with neural states only; but neither need involve an irreducible phenomenological story. This is why I suggest that Kripke equivocates on two senses of ‘heat’. The sensation of heat could be contingently identical to brain processes, and external heat to the motion of molecules in objects; the important thing is that this would not entail that our sensation of heat is not identical to molecular motion. We have ‘heat’, our sensation, which is contingently identical to the motion of molecules in nervous systems, and 'heat' independently understood as the external motion of molecules. Both are independently verifiable, but one is a property of molecules in external objects and one (according to Place) would be a property of a brain process. But we needn't defer immediately to the phenomenological fallacy. If the ID theorist is right, then the 'external' vs. 'introspective' distinction is misguided (at least in some cases) to begin with. If so, contingent identity by composition is alive.
Monday, April 23, 2007
What do you mean we, Kripke?
According to identity theorists (IT), pain is identical with a neural state of type x. Those who oppose IT argue that we can imagine pain existing without that neural state, and therefore it can't be the case that pain is identical with a neural state. The standard IT reply is that this possibility of pain existing without that neural state doesn't justify a conclusion that pain is not identical with a neural state. Rather, this possibility shows that the identity statement is contingent, rather than necessary, and this shouldn't worry us because this is just another contingent scientific identification, similar to the identification of heat with molecular motion.
According to Kripke, there is an important difference between the identification of pain with a neural state, and the identification of heat with molecular motion. True identity statements of the form 'x is y' have the following setup: We observe something by way of some sort of description. We then give a name to the object that matches that description, and that name 'x' rigidly designates the object. Now, here's the important thing: If the description that we use to pick out the object rigidly designated by 'x' is an essential property of the object, then, if it is true that 'x is y', 'y' must have that essential property as well.
Often what happens in the case of (what Kripke thinks are mistakenly labeled) contingent identity statements is that we pick out 'x' with a contingent property, pick out 'y' with a different contingent property, and then discover that 'x' and 'y' rigidly designate the same object. This is what happens in the case of the identification of heat with molecular motion: we pick out 'heat' with the contingent property of producing a certain sensation (Kripke argues that this is contingent because we can imagine the existence of heat without it being felt, so therefore the sensation of feeling heat must be a contingent, rather than essential, property of heat), and pick out 'molecular motion' with a different contingent property, and then discover that 'heat' and 'molecular motion' designate the same object.
However, according to Kripke, pain is different: "Although we can say that we pick out heat contingently by the contingent property that it affects us in such and such a way, we cannot similarly say that we pick out pain contingently by the fact that it affects us in such and such a way." That is, Kripke has the following intuition (to be fair, derived from a thought experiment): There is a crucial distinction between pain and heat. We are able to imagine the existence of heat without it being felt, so therefore the sensation of feeling heat must be a contingent, rather than essential, property of heat. We are unable to imagine the existence of pain without it being felt, so therefore the sensation of feeling pain must be an essential, rather than contingent, property of pain.
If Kripke's argument works, it seems to undermine the IT reply to the objection that points out that we can imagine pain existing without that neural state. Remember, the IT reply is that this is just another contingent scientific identification. But Kripke has argued that contingent scientific identifications are true identity statements insofar as it isn't the case that [we pick out the object designated by 'x' with an essential property of that object, yet we can imagine the object designated by 'y' lacking that property]. And in the case of pain, Kripke has argued, we pick out 'pain' with the essential property of feeling pain, yet we can imagine the neural state (say, C-fiber stimulation, although that's not what neuroscientists think anymore, right?) lacking that property.
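A rough way to schematize the constraint Kripke is relying on, as I read it (the notation and predicate letters are mine, just for bookkeeping, so this is a reconstruction rather than anything in Kripke's text):

\begin{align*}
&\text{Schema: if `x' is introduced via description } D\text{, and } D \text{ expresses an essential property of } x\text{, then}\\
&\qquad x = y \;\rightarrow\; \Box D(y)\\
&\text{Heat: } D = \text{`affects us with heat-sensations', a contingent property, so no modal demand falls on `molecular motion'.}\\
&\text{Pain: } D = \text{`is felt as painful', an essential property, so } \Box(\text{C-fiber stimulation is felt as painful})\\
&\text{would be required, which the imaginability of unfelt C-fiber stimulation tells against.}
\end{align*}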
When I first read this article, I was extremely frustrated by it. To some extent, I still am. But I think I'm understanding the argument a little better, and it's a pretty cool way to block the IT reply. My problem with the argument is that it relies on notions such as "essential" that I can't make sense of, and on weird modal intuitions. How do we determine whether a certain property is an essential property of a particular object, or whether it is a contingent property? Is it dependent on what we would call that object if it lacked that property? Who is the "we"?
Sunday, April 22, 2007
Some Considerations Regarding Identity
To frame what the identity theory is up against, and to prime for the Kripke thing, I’ll look in a little more detail here at what it would be for any one thing to be identical to another. Here is a plausible and more or less typical account (let’s call it L-identity, because I like nicknames and I think it’s Leibniz’s) of what it is for two things to be identical (tip your 40 to Leibniz):
(x)(y)(x = y → (F)(Fx iff Fy))
That is, take anything x and anything y: x is y only if, for any property F, x is F just in case y is F. In other words, two things are identical only if they share all properties in common. (The formula as given states the indiscernibility of identicals; the full biconditional would add the identity of indiscernibles, but the stated direction is all we need here.) For our purposes, L-identity dictates that:
Consciousness is a brain process only if, for any property F, if consciousness is/has property F then some brain process is/has property F (and vice versa).
This seems to be enough to show that strict identity fails on this widely accepted criterion for identity. The non-identity theorist need only point out that conscious states like pain, pleasure, joy, and sorrow apparently have properties that whatever brain process(es) are correlated with them do not. Presumably these would be the so-called phenomenal properties. Neuro-cellular events are not, in and of themselves, joyous or painful unless a subject is consciously experiencing the states.
Of course, whether or not types or tokens of such states always underwrite the qualitative states is not especially important, because correlation is no criterion for identity. Smoking is correlated with lung-cancer, but they aren’t the same thing. Even perfect correlation doesn’t necessarily implicate an identity relation or even a causal relation. The point generalizes to cases of perceptual states like visual representations; for instance, there are no black or hot pink neural states, though particular neural states presumably underwrite the experience of color. By all counts, strict material identity fails if L-identity is our criterion.
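The logical form of this objection is just the contrapositive of L-identity; a minimal rendering (the notation is mine, not the non-identity theorist's):

\begin{align*}
&\text{L-identity:} && x = y \;\rightarrow\; \forall F\,(Fx \leftrightarrow Fy)\\
&\text{Contrapositive:} && \exists F\,(Fx \land \neg Fy) \;\rightarrow\; x \neq y\\
&\text{Instance:} && \text{pain has phenomenal properties that no neuro-cellular event has}\\
& && \text{in and of itself, so pain} \neq \text{any such event.}
\end{align*}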
But L-identity is not what Place appears to be talking about, and the ‘is’ of composition seems like it has the potential to do a lot of work here: ‘lightning is the motion of electric charges’ is an identity statement, but one with merely contingently identical relata, which cannot be simultaneously apprehended by a subject. We might ask whether the two relata share all properties in common, as is necessary if we accept L-identity, but this would miss the point that some phenomena are apparently identical in a different sense, though they may have radically different descriptions and radically disparate verification conditions.
So, Place leans on the following: (1) whether the application of LIGHTNING or of MOTION OF ELECTRIC CHARGES is appropriate in a particular case is an empirical question, depending on independently verifiable objects, and (2) to doubt that identical objects can have different verification conditions is to fall victim to the Phenomenological Fallacy (PF), which I’ll repeat here:
When a subject s describes his ‘introspective’ observations of so-called qualitative states (i.e. his experience of looks, smells, sounds, feels, and other seemings), s describes literal properties of objects and events experienced in the ‘phenomenal field’.
He explains PF in the context of after-images in a phenomenal field, but we can generalize to the experience of lightning. The lightning case, too, seems to fail L-identity, but this only seems important if we place too much stock in the phenomenological ‘data’. It may be that the point here is just the relatively simple one that one and the same event/object can have different descriptions. Do the motion of electric charges and lightning share all properties in common? Not phenomenologically, but if it is a true identity statement by composition, then yes, they do; we just can't, from the perspective of a single subject, verify the identity simultaneously. All this is of course consistent with the fact that a hallucination of a bolt of lightning, or a dreamt experience of lightning, is not identical with some instance of electrical charges in motion, since it does not pick out something so composed. It’s just that we’re in a brain state similar to the one we are in when we experience real lightning. But to make the inferential leap from the conceptual independence of LIGHTNING and MOTION OF ELECTRIC CHARGES, together with their phenomenological independence, to non-identity is a species of PF, and is not clearly warranted at all. Moreover, it appears to be a contingent fact that the relata are identical, and that Place was right to insist that the claim ‘Consciousness is a Brain Process’ is a scientific hypothesis on a par with ‘Lightning is the Motion of Electric Charges’.
What we might try to do is to qualify L-identity with a supervenience clause or something, so as to accommodate the ‘is’ of composition. This'll be messy, but, take anything x and anything y:
x is y only if, for any property F, if x is F then y is F (and vice versa), and bona fide occurrences of x are perfectly correlated with y such that if F is a directly observable phenomenal property, then F supervenes on G (a predicate describing the composition of x).
It’s sloppy, and I won’t try to make it rigorous, though I’ll hazard a rough sketch below. But you can see what I’m thinking: F might have multiple descriptions, each depending on the level of material composition one is addressing. As Place notes, an accurate description of composition should not be ruled out as a candidate for an identity statement just because different properties are involved. Of course, this is not super-satisfactory with regard to other intuitions about identity. We might still be inclined to say, motivated by L-identity, that we need more than descriptions that appear to be related; we need something like L-identity to guide us, otherwise we’re just begging the question as to whether two relata pick out the same phenomena.
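For what it’s worth, one rough regimentation might read the clause as exempting directly observable phenomenal properties from strict sharing, provided they supervene on a compositional property. Phen, Comp, and Sup below are placeholder predicates of my own, not Place’s:

% Phen(F): F is a directly observable phenomenal property
% Comp(G, x): G describes the material composition of x
% Sup(F, G): F supervenes on G
\forall x\,\forall y\,\Big(x \simeq y \;\rightarrow\; \forall F\,\big[(Fx \leftrightarrow Fy)\;\lor\;\big(\mathrm{Phen}(F)\,\land\,\exists G\,(\mathrm{Comp}(G,x)\,\land\,\mathrm{Sup}(F,G))\big)\big]\Big)

Here ‘≃’ stands for the compositional ‘is’ rather than strict L-identity. Whether this avoids, or merely relabels, the question-begging worry just raised is exactly the open issue.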
'Brains' Post: 'Explaining What It's Like'
Wednesday, April 18, 2007
Identity Theory, Behaviorism, etc.
Place's initial worry is that accepting inner processes entails dualism. Historically, is this because at the time inner processes were seen as mysterious, especially perhaps in the heyday of behaviorism?
Logic, ontology, and meaning:
Place points out two ways in which people have mistakenly dismissed the identity theory on logical grounds alone. I think I understand Place's argument for the mistake in the first way: (I don't think I'm setting this up any differently than Mike did, but I'll set it up again in case I made a mistake here that contributes to why I don't understand Place's argument for the mistake in the second way)
The mistake is in failing to distinguish between the 'is' of definition and the 'is' of composition. If we fail to make that distinction, then we have 2 (rather than 3) cases involving 'is':
1) 'x is y and nothing else' makes sense.
2) 'x is y and nothing else' is nonsense.
The mistake occurs in thinking that if x and y are logically independent, it can't be the case that 'x is y and nothing else' makes sense. Furthermore (here's where ontology kicks in), if 'x is y and nothing else' is nonsense, then x and y are ontologically distinct.
Once we make the distinction, we see that there are two ways for 'x is y and nothing else' to make sense. So even if we show that "consciousness" and "brain process" are logically independent, it can still be the case that "consciousness is a brain process and nothing else" makes sense.
But I am puzzled by the second way in which Place claims that people have mistakenly dismissed the identity theory on logical grounds alone. The second way involves the argument from the logical independence of two expressions to the ontological independence of the entities to which they refer. Place maintains that the argument works when Rule A (as Mike calls it) works, yet Rule A doesn't always work.
But I don't understand the role of Rule A in Place's argument. Place begins his article by stressing that "consciousness is a brain process" is definitely not a thesis about meaning. If so, we shouldn't be worried about cases where the expression used to refer to x doesn't entail the expression used to refer to y, right?
Maybe he's trying here to demonstrate why he is adamant about his article not being a thesis about meaning, but if so, what does this have to do with operations of verification? That is, I thought he already explained that "a cloud is a mass of tiny particles" contains an "is" of composition rather than definition, and this (rather than non-simultaneous verification) is why "there is nothing self-contradictory in talking about a cloud which is not composed of tiny particles in suspension." So, when it comes to clouds, lightning, and consciousness, it seems to me that these cases involving identity aren't breaking Rule A. Rather, these are simply cases involving the "is" of composition rather than definition.
So: What Place really seems to be worried about is how to maintain an identity statement in light of verification conditions that not only differ (as in the case of "cloud" and "mass of tiny particles") but differ without continuity of observation. Am I wrong to think that this would have appeared to behaviorists as an insoluble problem, and would have been a motivation for them to focus entirely on behavior? And am I wrong to think that Place's solution (by ingenious analogy to lightning) of establishing identity by showing that "introspective observations reported by the subject can be accounted for in terms of processes which are known to have occurred in his brain" lays a crucial foundation stone for post-behaviorist cognitive science?
If so, I conclude that Place's article is both messier (in terms of argument structure) and more important than I originally thought.
Tuesday, April 3, 2007
The Phenomenological Fallacy
Place has a straightforward criticism of arguments from qualia that is interesting insofar as it doesn’t appeal to the functional role of any particular qualitative state, though it seems to hint in that direction. The Argument from Irreducibility, as I’ll call it, rests crucially on the following error in reasoning:
The Phenomenological Fallacy (PF)
When a subject s describes his ‘introspective’ observations of so-called qualitative states (i.e. his experience of looks, smells, sounds, feels, and other seemings), s describes literal properties of objects and events experienced in the ‘phenomenal field’.
The idea is that s reports on experiences that he is having in a kind of phenomenological theatre; reminiscent of the very Cartesian idea that the mind is necessarily better known than anything ‘external’ to the mind. What is the problem with this line of reasoning?
To use Place’s example, allow that s experiences a green after-image. s reports something like, ‘When I close my eyes I see a green object in front of me.’ The phenomenological fallacy is to claim that s literally experiences or actually sees a green object. Of course, such an object clearly does not correspond to any object in s’s environment, nor is it likely to be manifest in any particular brain process were we to open up s’s head and look inside. Echoing the original argument from the conceptual independence of CONSCIOUSNESS and BRAIN PROCESS, the conclusion is just that COLOR, for instance, is not the sort of concept applicable to brain processes at all.
According to Place, here is the error: PF depends on the assumptions, more or less equivalent, that:
(1) Our ability to describe things in our environment depends on our consciousness of them.
(2) Our descriptions of things are primarily descriptions of our conscious experience and only inferentially descriptions of objects in the environment.
In effect, we infer real properties from phenomenal ones, but Place claims that just the opposite is the case. It is true that “We begin by learning to recognize the real properties of things in our environment.” But the fact that we learn to recognize real properties by the way they look, smell, etc., does not entail that we have to learn how to describe those looks, smells, tastes, etc. before we can describe the real objects themselves. It is simply a myth that we describe experience with reference to phenomenal properties; rather, we have from the beginning access to actual physical properties/objects. It is these real properties and objects that in turn ‘give rise’ to the conscious experience we then try to describe.
So, to return to the subject s: when he says, ‘When I close my eyes I see a green object in front of me’, we should analyze this claim into, ‘When I close my eyes I have the sort of experience I normally have when I look at a green patch of light.’ The distinction is that the second claim makes it clear that there isn’t anything there, really: no green ‘object’. It is interesting to think about how this line of reasoning might apply to other phenomenal ‘objects’ and states. Place might be read as implying, for example, that pain would be somehow likewise ‘unreal’, or at least as providing an interesting argument to the effect that pain is not an intentional object, as others have argued.
Now, Place’s argument seems to rest on some pretty substantial metaphysical/epistemological assumptions about the nature of objects and our access to those objects. But I’m sure there are some strategies to draw on there. What’s interesting to consider is the possibility that if we lose phenomenal objects (as they’re construed by the folks Place is concerned with), we may be free to posit the kind of identity Place wants. At least, there is nothing inconsistent or contradictory in saying, for instance, that a particular experience of pain, or a green after-image, is nothing over and above a brain process. Again, if we can have contingent identity of the kind Place argues for early on, then the ‘irreducibility’ of the conscious states to physical states becomes a matter of a subject’s just not being able to verify the former introspectively. But we can’t learn the general identity of LIGHTNING and ELECTRIC CHARGES IN MOTION by introspection either.
Sunday, April 1, 2007
Place's Analogy
By adding this technical description, in light of certain discoveries, we presumably gain predictive powers. We now know the conditions under which flashes of lightning will be seen by people on the street, and this in turn provides a better explanation of why those flashes of lightning took place. I'm assuming that something like this is what we're looking for in the case of consciousness: We want to be able to know the conditions under which, I guess, people will report feelings of consciousness (or something else)? And we want a better explanation than people on the street can provide for why those feelings of consciousness take place. But is there a more concrete description of what it is we're trying to explain?
A final comment on the lightning analogy. Back in the day, before anyone knew with certainty that lightning is just the motion of electric charges, what were thought to be the possibilities for what lightning actually is? And, to push the analogy, what have people thought are the possibilities for what consciousness actually is? Assuming that the word 'consciousness' actually has a reference (sorry, I'm probably butchering the real philosophy of language involved here), could its reference be anything other than some sort of brain process?
Friday, March 30, 2007
Expressions and Entities
Assume that a kind of object/state of affairs has two properties (or two sets of properties), x (x = the property of being red) and y (y = the property of being colored). Where x is unique to the kind of object/state of affairs in question (only the class of red things are red), the expression used to refer to x, namely ‘red’, will always entail the expression used to refer to y, namely ‘colored’. Call this rule (A).
If (A) were an exception-less rule for language, any expression logically independent of another expression which uniquely characterizes a kind of object (as in 'table' and 'packing case') would necessarily refer to a characteristic which is not associated with the entity in question. But since (A) applies almost universally, we are usually warranted in arguing from the logical independence of concepts and expressions to the ontological independence of referents. Hence the intuition that consciousness is not a brain process; again CONSCIOUSNESS and BRAIN PROCESS appear to be logically independent concepts.
Crucially, Place argues that there are cases in which (A) fails: (A) fails where the verification conditions for the instantiation of two different properties (or sets of properties) cannot be satisfied simultaneously. Note that, for instance, CLOUD and MASS OF TINY PARTICLES IN SUSPENSION appear to be conceptually/logically independent. However, this doesn’t provide us with grounds for asserting that a particular cloud and the mass of tiny particles constituting it are ontologically independent entities. There is one thing, a cloud, picked out by logically independent concepts and expressions. This is just because we can’t satisfy the verification conditions for claims like ‘That cloud x is a huge, fluffy, white, fleecy-looking mass’ and ‘That cloud x is a mass of tiny particles in suspension’ simultaneously. We observe the cloud as a fluffy white mass from afar, and as a mass of tiny particles from within it. In fact, as Place notes, we have different words/concepts to capture the objects to which we aim to refer: being inside a cloud, we call it a ‘fog’; being outside a fog, we call it a ‘cloud’; but a fog just is a kind of cloud and vice versa.
For this reason, we might say that it is synthetically (by which I mean contingently) true that a particular cloud is a mass of tiny particles (and nothing else). But it is analytically true that, for example, a particular bachelor is unmarried; the former expresses an ‘is’ of composition, and the latter an ‘is’ of definition. Also, to repeat another example, it is synthetically true that ‘His table (by composition) is an old packing case’, whereas it is analytically true that ‘A square (by definition) is an equilateral rectangle’.
What else can we say about the identity expressed by the ‘is’ of composition? It’s worth noting, as Place does, that in the case of the cloud, raw visual observation will suffice to verify that some cloud or other is a mass of particles. So there is the obvious continuity between one’s cloud-observations and mass-o'-particles-observations, where the one virtually seamlessly becomes the other as one approaches. But our case is more difficult; the verification conditions for something’s being a conscious state and those for something’s being a brain process appear to be mutually exclusive and totally disparate. That is, the verification conditions for one never verify the other.
Consider, with Place, the example of lightning. A bolt of lightning is an instance of electrical charges in motion. We observe it visually, but we don’t observe the charges. No matter how hard we try, we couldn't; verifying the existence of the charges requires special methods. What warrants our assertion that lightning is electricity in motion? In what sense are the disparate observations requisite for verification of each side of the relation observations of the same phenomenon?
Correlation will not suffice. Two things can be correlated perfectly without being identical. For instance, in a coin-tossing contest, the correlation between my tossing heads and my opponent doing so can be 1 if every time I throw heads he does, too. But our tosses are not identical. Instead, we should say that two different observations are observations of the same thing when one explains the other. More specifically, we verify that we are observing the same phenomenon via different methods when our technical observations gel with theory so as to provide an adequate explanation of some phenomenon. So,
Lightning is the motion of electric charges (and nothing else).
might be a true statement if it becomes apparent that the motion of electric charges through the atmosphere causes what the untutored eye perceives to be lightning; and, of course, even the tutored eye will never see the electrical charges, in and of themselves, moving.
Wednesday, March 28, 2007
Three ‘is’s’: definition, composition, and the ‘is’ of predication.
(i) Conscious states are describable independently of our awareness of brain processes.
(ii) The verification conditions for statements about consciousness and brain processes are entirely independent.
(iii) There is nothing self-contradictory in the thought that some subject has a conscious experience without any correlated brain process.
There may be problems among these, but, as Place notes, even granting (i), (ii), and (iii), it doesn’t follow that the claim ‘consciousness is a brain process’ is necessarily false. Rather, ‘consciousness is a brain process’ is neither self-contradictory nor self-evident.
Consider the following sentences, which have the feature that the subject expression and the object expression appear to be adequate characterizations of the same things/states of affairs:
(D1) A square is an equilateral rectangle.
(D2) Red is a color.
(D3) To understand an instruction is to be able to act appropriately under the appropriate circumstances.
(C1) His table is an old packing case.
(C2) Her hat is a bundle of straw.
(C3) A cloud is a mass of particles in suspension.
Contrast these with the following examples:
(P1) Toby is eighty years old.
(P2) Her hat is red.
(P3) Giraffes are tall.
Place claims that one reason for holding that D-sentences and C-sentences have the feature that the subject term and object term are adequate characterizations of the same thing is that in both cases we can reasonably add the clause ‘and nothing else’ to them. Compare:
(D1*) A square is an equilateral rectangle and nothing else.
(C3*) A cloud is a mass of particles in suspension and nothing else.
Compare with a P-sentence:
(P3*) Giraffes are tall and nothing else.
P-sentences don't seem to have the property in question. This is plausibly just because the relation does not seem to be between independently adequate descriptions of the same things.
Aside from the similarity pointed out, D-sentences differ from C-sentences in that the former appear to be necessarily true by definition and the latter only contingently true (on condition of verification). For D-sentences, there is a logical/conceptual/analytic relationship such that the subject term picks out something contained in the predicate term. Presumably, this is what Place means by the ‘‘is’ of definition.’
By contrast, for the ‘‘is’ of composition’, there is no such (apparently) analytic relationship between the terms. The two terms involved seem to adequately describe each other (for a given context), but it is only contingently true that they do so. This goes even for (C3), which stands out because it lacks an indexical component.
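The modal contrast can be put compactly, at least on Place's contingent-identity reading (notation mine; \Box read as ‘necessarily’):

\begin{align*}
&\text{D-sentences:} && \Box\,\forall x\,\big(\mathrm{Square}(x) \leftrightarrow \mathrm{EquilateralRectangle}(x)\big)\\
&\text{C-sentences:} && (\text{his table} = \text{that old packing case}) \;\land\; \neg\Box\,(\text{his table} = \text{that old packing case})
\end{align*}

(Kripke, of course, denies the second conjunct for rigidly designated relata; that is the battleground of the April posts above.)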
Why assume that we can deny the claim that 'consciousness is a brain process' on strictly logical grounds in the first place? The argument that Place is criticizing, initially, is this:
(1) If the meanings of two expressions differ, they can’t both provide an adequate characterization of the same object.
(2) There is no contradiction in conceiving of a conscious state which is not underwritten by a brain process.
(3) The meanings of ‘consciousness’ and ‘brain process’ differ (are conceptually independent).
(4) Therefore, ‘consciousness’ and ‘brain process’ cannot be adequate characterizations of the same thing.
(5) Therefore, consciousness is not a brain process.
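Before turning to Place's response, premise (1) can be put schematically (the notation is mine):

\forall a\,\forall b\,\big(\mathrm{mean}(a) \neq \mathrm{mean}(b) \;\rightarrow\; \neg\,\mathrm{CharacterizeSame}(a,b)\big)

Place's strategy is to produce instances where the antecedent holds but the consequent fails.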
Place digs in right at the beginning, at premise (1), offering a kind of error theory that might help to motivate the identity theory. It is a mistake to assume that where the meanings of expressions differ, both cannot adequately describe the same thing; in effect, there are true, contingent identity statements. After all, if we were to subscribe to the faulty assumption (says Place), then we might wind up in the weird position of having to assert that it is logically impossible for any table to be an old packing case, since there is no contradiction in asserting that someone has a table but not a packing case. The same would presumably go for the impossibility of any hat being a bundle of straw, or any cloud being a conglomeration of particles. [t-b-cont’d]