Representation = A ‘whatnot’ (state or object) with a (particular?) content.
Mental Representation = A mental whatnot with a (particular?) content.
Symbol = A representation that does not resemble what it represents; a content-bearing whatnot that does not derive its content from a relation of similarity to what it represents.
Mental Symbol = A mental representation that does not resemble what it represents.
Content = A generic term for whatever it is that underwrites semantic and/or intentional properties.
Inexplicit Content = A generic term for whatever it is that underwrites the semantic and/or
intentional properties that a system bears in virtue of its structure (and not its representational and/or intentional properties).
Representational Theory of Intentionality = Intentional states inherit their content from
the representations that constitute them. The identification of intentionality with representation. The content of an intentional state is a representation, the total state is an
attitude toward the representation.
Cummins on Intentionality = Intentional states are just the propositional attitudes;
philosophers have tended to assume that the problem of mental representation is the problem of what attaches beliefs and desires to their contents. BUT a theory of mental representation need not give us intentional contents. The data structures underwriting the representational states of CTC are not equivalent to intentional states or their contents.
Theory of Meaning = A theory of what it is in virtue of which some particular whatnot
has the semantic content that it has.
Theory of Meaningfulness = A theory of what it is in virtue of which some kind or class of
whatnots have any meaning at all.
Orthodox (Classical) Computationalism = Cognition is ‘disciplined symbol manipulation.’
Mental representations are language-like, symbolic data structures fit to be the inputs/outputs of computations; mental representations are contentful mental symbols; the content of a mental symbol is whatever the data structure represents; the objects of computation are identical with the objects of semantic interpretation.
Connectionist Computationalism = Orthodox Computationalism + mental representations are
not (necessarily?) language-like symbols. Also, it is not the case that the objects of computation are identical with the objects of semantic interpretation.
The Problem of (Mental) Representation (PMR) =
I like Cummins' quick-and-dirty formulation of the question at the heart of the problem, which occurs at the end of chapter 1:
‘What is it for a mental whatnot to be a representation?’ Equivalently – What is it for a mental whatnot to have a content?
CTC takes the notion of a contentful mental whatnot as an ‘explanatory primitive.’ I suppose this is to say that ontological questions are deferred – assume that there are such things as mental representations, what explanatory work do/should we expect of them in a defeasible theory of cognition? In effect, CTC is a solution to PMRS, not PMR.
The Problem of (Mental) RepresentationS (PMRS) =
What is it for a particular mental representation to have some particular content? What is it for a contentful mental whatnot to have the particular content that it has? How are the particular contents of particular mental representations individuated?
I thought it was interesting that this problem is of no concern to Cummins. He somewhat off-handedly lets us know that proposed answers to PMRS don’t ‘really matter much [to his project in the book]’ and that his ‘topic is the nature of representation, not what sorts of things do the representational work of the mind.’(2)
At this point I don’t really understand why he would dismiss PMRS as irrelevant. Presumably, and as he admits, a solution to PMR that takes mental representations as explanatory primitives but then fails to account for its own notion of ‘mental representation’ is not satisfying. But won’t an ‘account of the nature of the “mental” representation relation’ include an answer to PMRS? If not, why not? It’s unclear to me at this point, since cognitive scientists refer to multiple kinds of mental representations - phonemes, spatial maps, concepts, etc. Cummins acknowledges this multiplicity of representations.
Notables from Chapter 1
The Central Question(s): What is it for a mental state or a mental object to bear a semantic property? What makes a mental state or object a representation?
Reaction: Does this entail that the problem of mental representation reduces to or is
equivalent to the problem of meaningfulness?
Cummins’ Three Varieties of Content (the generic stuff that underwrites whatever semantic properties are present):
Content of a cognitive system might be characterized in the following ways:
According to its intentional states (if it has them)
According to its representational states (if it has them)
According to the inexplicit content yielded by its structure
Also, intentional content ≠ propositional content, cf. Revere’s Lanterns. They bore the propositional content that the British were coming (a) by land if one lantern was lit, (b) by sea if two lanterns were lit. But we shouldn’t attribute any contentful intentional states to the lanterns.
Reaction: Where did the lanterns derive their propositional content from, and why does it matter here? The problem of representation in general (is it equivalent to the problem of meaningfulness in general? is there such a problem?) does not, I presume, necessarily reduce to the problem of mental representation. Non-mental things can represent. The lanterns, for example, are not a mind or even a cognitive system. Why does it even matter whether non-intentional systems can bear propositional content or any content at all? We're after mental representation, not representation in general.
The Meaningfulness – Meaning Distinction and the Representation – Representations Distinction
The theory of meaningfulness/theory of meaning distinction is analogous to the distinction between the theories of mental representation and theories of mental representationS. The problems behind the theories of representation have to do with what it is for a mental representation to have a content and with the nature and content of particular representations respectively.
We might ask, similarly, what is it for a whatnot to have meaning and we might ask what it is for a particular whatnot to have a particular meaning. As with the former distinction, Cummins suggests that an answer to the general problem needn’t provide an answer to the particular problem. Is this right? What good is a theory of mental meaning that goes unapplied to instances, what might it tell us? What good is a theory of mental representation that goes unapplied to instances of (at least) kinds of mental representations? I don’t mean these to be rhetorical questions. Like I said the other day, questions about the ‘nature’ of things, especially vexing relations like meanings, confuse me.
Also, it is worth noting that, while Cummins insists that his question regards the nature of representation, he also insists that the bulk of the content of the book is concerned with theories of meaning. The strategy, then, is to look at theories of what it is in virtue of which particular whatnots have the particular meanings that they do (it should become clear from there, he says, what general theory of meaningfulness is entailed).
The asymmetry between this and the approach to the theory of mental representation struck me. As I pointed to above, Cummins says it doesn’t really matter which approach one takes to PMRS because the concern is with PMR. Why is the particular to general approach appropriate in the case of meaning/meaningfulness but not in the case of mental representation/mental representations? Why, especially, if there is some strong relationship between the question of mental meaning and mental representation, as there appears to be?
Suggestions: Pay attention to the broader initial problems.
Characterizing MEANINGFULNESS might be a broader project than characterizing mental meaningfulness.
Characterizing representation is a broader project than characterizing mental representation.
But by Cummins’ definitions, a theory of meaningfulness applies only to kinds or classes of things, presumably things like sentences, propositions, signs, and especially mental representations. By Cummins’ own admission, different fields (within cognitive science) use MENTAL REPRESENTATION to refer to different explanatory primitives. The result is that the theory of mental meaningfulness, i.e. the theory of what it is in virtue of which mental representations have any semantic content at all, is not a single project.
Also, since it’s not clear where and whether the theory of mental meaningfulness and the theory of mental representation come apart (both are concerned with what it is in virtue of which mental representations have semantic content), the ‘theory of mental representation’ we are engaged in will depend on which theoretical framework we find ourselves in. I assume that we should take ourselves to be within a broad theoretical framework, i.e. the CTC, but even within CTC there are adherents from the different fields that make up cognitive science. It leaves me wondering, when we say that it’s widely accepted that mental representations are language-like symbols, whether we’re saying that this is what the computer scientists, philosophers, linguists, and maybe the cognitive psychologists think, but not the neuroscientists - and what about the behavioral neuroscientists? And we're certainly speaking only for the classical computationalists, not the connectionists.
Also, is ‘semantic content’ just ‘meaning’? And then is ‘mental content’ just ‘mental meaning’?
3 comments:
I don't think he's really dismissing PMRS as irrelevant, it's just that he thinks we can do some productive work on a solution to PMR while ignoring, for the time being, PMRS. Here's the analogy to biology that Sterelny raises: There was lots of good work in population genetics being done prior to the discovery of the structure of DNA in 1953. So, a theory of function can be developed independently of the theory of physical realization.
In a different comment I wrote about why I thought the Revere example is relevant here. In addition, I think that a really good theory of mental representation should have something to say about non-mental representation as well. Just as a really good theory of human behavior should, I think, have something to say about non-human behavior as well.
I like your comment on the asymmetry between the approach to theory of meaning/fulness, and approach to PMR/PMRS, I definitely hadn't picked up on that. I'm also puzzled by what to think about it. I'm content to leave it at that, for the time being.