Cummins distinguishes two different problems concerning mental representations. The first is the problem of representations: understanding the physical instantiation of mental representations and their roles in mental processes. Orthodox computationalists take mental representations to be symbolic data structures; connectionists take them to be activation levels of ensembles of simple processors and/or the strengths of the connections among such processors.
There have been four answers to the problem of representations, concerning the sorts of things that can be mental representations:
(1) Mind-stuff inFORMed: the same stuff that makes a red ball makes us perceive a red ball. Similarity is the big thing here; what we have in our head is capable of representing the world because it is made of the same stuff.
(2) Images: the same as the mind-stuff-inFORMed view, minus the Aristotelian jargon.
(3) Symbols: (a) in contrast to the preceding views, symbols don't resemble the things they represent; (b) they can be inputs and outputs of computations.
(4) Neurophysiological states: mental representation is essentially a biological phenomenon.
The second problem concerning mental representations is the problem of representation: understanding what it is for a cognitive state to have a content.
There have been four answers to the problem of representation, concerning the nature of representation:
(1) Similarity: in order to be able to think about things in the world, you need to have something in your head resembling those things.
(2) Covariance: certain characteristic activity in a (neural) structure covaries with something out there in the world.
(3) Adaptational role: a structure's adaptational role, not its covariance with the world, accounts for what it represents.
(4) Functional or computational role: functionalism applied to mental representation.
[I’m not sure I understand exactly what’s involved in solving the problem of representation. I would love it if we could think of an analogy in some other area of philosophy. Maybe this Cummins quote on methodology will be helpful: “We must pick a theoretical framework and ask what explanatory role mental representation plays in that framework and what the representation relation must be if that explanatory role is to be well grounded.”]
Most of the book assumes an orthodox computationalist background (the CTC, or computational theory of cognition) that provides an answer to the problem of representations (mental representations are symbolic data structures) but is agnostic about the problem of representation (what it is for a data structure to have semantic properties).
Cummins urges that at the outset, in order to help distinguish the various issues involved and the solutions proposed, we should not assume either (a) the representational theory of intentionality (RTI), according to which intentional states inherit their contents from representations that are their constituents; or (b) the language of thought hypothesis, according to which cognitive states involve “quasi-linguistic formulas type identified by their states in an internal code with a recursive syntax.”
The reason it is important not to assume RTI at the outset: represented content isn't all the content there is. There is also inexplicit content of various kinds (e.g., content implicit in the state of control, content implicit in the domain, content implicit in the form of representation, content implicit in the medium of representation), and if nothing like RTI is true, there is also intentional content. [I don't think I fully understand this point. It might be helpful to read Cummins 1986, “Inexplicit Information,” in The Representation of Knowledge and Belief, ed. Brand and Harnish.]
The reason it is important not to assume the language of thought hypothesis at the outset: “A symbol can have a propositional content even though it has no syntax and is not part of a language-like system of symbols.” E.g., Paul Revere's lanterns.
4 comments:
I sort of alluded to this, but I'm not too sure about the way Cummins lays out the terrain here. The problem of representation is the problem of mental representation. Presumably, not all representation is mental representation, and the problem of mental representations, he says, he's not interested in. So he's interested in, like you said, what it is for a cognitive state to have a content (and not what it is for anything to have a content). That's why I thought the Revere lantern example was weird: if the lanterns were symbols, they weren't mental symbols, so who cares if they bear propositional content. There's a lot to say here, I think, with regard to the status of inexplicit content. Where do the lanterns get their content from? Presumably, only cognitive systems are eligible for inexplicit content; otherwise we're gonna have magnetotactic bacteria, diving bells, and tree leaves bearing inexplicit content.
Now, I have to be careful here not to assume something like RTI: representational states are not necessarily intentional states. We need more from Cummins on the relationship between meaning, representation, mental meaning, mental representation, and intentionality. It is not something you can get for free, for instance, that intentional content is borne by all and only propositional attitudes. Intentionality is aboutness, and it's not obvious to me that aboutness has to be the aboutness of attitudes toward propositions.
I'll take a stab. Solving the problem of (mental) representation would amount to giving an account of what it is for a cognitive system to be in a state, or to instantiate an object, that bears a content (that means something?). At least on the face of it, then, success will amount to an analysis of mental representation, providing conditions, necessary, sufficient, or both, for a cognitive system's being in a contentful state or instantiating an object with content.
Here's an analogy we could explore. Solving the problem of mental representation would be like solving the problem of justification: we'd have an analysis of justification, or justified belief, or something like that.
Then the problem of representations (with an s) is like the structural problem of justification Pollock alludes to: which structures are the justified ones, what do they look like, what are they constituted of? The problem of representations can be spun as a structural problem like that.
Quine had an interesting suggestion that's relevant: he likened the normative part of epistemology to a project in engineering, the 'technology of truth-seeking,' which he saw as a project in prediction.
Anyway, I guess my stab is to say, by analogy, maybe the contrast between the two problems of mental representation(s) is analogous to two problems in epistemology. There is the general, analytical problem (the 'nature of' problem) and there is some kind of structural-engineering problem (what it is for particular instances to be instances of a kind of thing, or describable as a kind of thing because they are appropriately composed).
By the way, the thing I said I'd 'take a stab at' was the analogy you mentioned you thought it'd be helpful to find.
Thanks for taking a stab. I think it might be difficult to make the analogy with justification, because justification is clearly normative, whereas content is less clearly normative. [One of the things I like about phil mind: we can mostly forget about the complications of normativity.]
I was thinking maybe it might be more like something in phil science? In any case, I want the problems that we're trying to solve to be more clearly laid out at the beginning. Actually, I think Sterelny chapter 2 does a pretty good job of it. Here's the basic problem we're trying to solve: how is it that physical systems like ourselves could have mental states that (a) represent and misrepresent the world, and (b) participate in the causation of behavior?
I thought the point of the Revere example was to show that just because a symbol (not necessarily a mental symbol) has propositional content, this doesn't necessarily mean it is part of a language-like system of symbols. So, likewise, just because mental symbols have propositional content, this doesn't necessarily mean that they are part of a language-like system of symbols such as the language of thought. Nor does it mean that they are not part of a language-like system of symbols. Cummins is just trying to tease apart the different hypotheses and their implications.