Monday, December 24, 2007

Meaning and Mental Representation Chapter 5

Fodor's causal theory: "A symbol expresses a property if it's nomologically necessary that all and only instances of the property cause tokenings of the symbol."
2 problems with this theory, one stemming from the "only" clause [the disjunction problem] and one stemming from the "all" clause.
(1) The problem with saying that it's necessary that only instances of the property cause tokenings of the symbol is that some noncats cause |cat|s. Once again, this is the problem of misrepresentation.
Fodor proposes a solution based on the following asymmetrical dependence:
(i) If mice didn't cause |mouse|s, shrews wouldn't cause |mouse|s.
(ii) If shrews didn't cause |mouse|s, mice wouldn't cause |mouse|s.
The asymmetrical dependence of shrews on mice lies in the fact that (i) is true, but (ii) is false.
However, is (ii) false? After all, if shrews didn't cause |mouse|s, it might be because mouse-looks didn't cause |mouse|s, in which case mice wouldn't cause |mouse|s either. This leads us to the objection that there is no single interpretation that makes (i) true and (ii) false. For there are two ways to break the shrew-to-|mouse| connection, and two ways to break the mouse-to-|mouse| connection. If mousey looks don't cause |mouse|s, then both (i) and (ii) are true. If mousey looks do cause |mouse|s, but mice don't cause mousey looks, this won't affect the shrew-to-|mouse| connection, so both (i) and (ii) are false.
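The two-ways-to-break point can be made concrete with a toy causal graph (my own sketch; the node names and the mediating "look" node are invented for illustration, not anything in Fodor or Cummins):

```python
# Toy formalization of the objection: model the causal routes as edges in
# a tiny graph and check which interventions break which source-to-|mouse|
# connections.

def causes_token(source, links):
    """True if `source` can reach the symbol |mouse| through the links."""
    frontier, seen = {source}, set()
    while frontier:
        node = frontier.pop()
        if node == "|mouse|":
            return True
        seen.add(node)
        frontier |= {b for (a, b) in links if a == node and b not in seen}
    return False

# Both mice and shrews token |mouse| via the mediating mousey look.
links = {("mouse", "look"), ("shrew", "look"), ("look", "|mouse|")}

# Intervention 1: mousey looks no longer cause |mouse|s.
# This severs BOTH connections, so (i) and (ii) both come out true.
no_look_link = links - {("look", "|mouse|")}
assert not causes_token("shrew", no_look_link)
assert not causes_token("mouse", no_look_link)

# Intervention 2: shrews no longer cause mousey looks.
# The mouse route is untouched, so breaking the shrew connection this way
# leaves mice tokening |mouse|, and (i) and (ii) both come out false.
no_shrew_look = links - {("shrew", "look")}
assert not causes_token("shrew", no_shrew_look)
assert causes_token("mouse", no_shrew_look)
```

The sketch just makes vivid that no single way of evaluating the counterfactuals delivers (i) true and (ii) false.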

(2) The problem with saying that it is necessary that all instances of the property cause tokenings of the symbol is that not all cats cause |cat|s. Why do we want to say that it is all instances? Because if some cats don't cause |cat|s, then the extension of |cat| should be the subset of cats that do cause |cat|s. So we need genuine covariation- all cats cause |cat|s (or, any cat would cause a |cat| if given a fair chance). But what is it to be given a fair chance? This leads us to a problem of circularity similar to the one we saw in chapter 4 [where Cummins concluded that Lockeans must specify ideal conditions in a way that does not presuppose content assignments to states of the cognitive system]: if covariance is grounded in a mechanism that, under the right conditions, will produce a |cat| from a cat, and, according to the CTC, the mechanism in question can be understood only by appeal to inner representations, then in order to understand the mechanism that CTC invokes to explain covariance between cats and |cat|s we must already understand representation and the explanatory role it plays in mental mechanisms.

Wednesday, December 12, 2007

'Symbols', 'Representations', and their contents

I suppose the lanterns do some work to serve Cummins’ purpose that you mention. Maybe a symbol or group of symbols has propositional content; still, the symbols don’t need to be language-like. Score for the connectionists (not that I’m taking sides here yet).

But it’s worth noting, I think, that something hinges on there being a viable analogy between symbols like the lanterns and the mental symbols of CTC; and just calling the two very different sorts of things symbols is not enough. So we’re assuming that mental symbols are sufficiently like non-mental symbols. But it’s far from immediately clear to me that this is appropriate. In fact, there seem to be good reasons to think that they aren’t too similar- for instance, the considerations that have to do with what Cummins called original meaning.

If someone wants to defend, for instance, what Cummins calls a symmetrical theory of meaning(fullness?), (in order to avoid the circularity in saying that mental representations get their meaning from original mental meaning), where mental and non-mental meaning are essentially the same sorts of thing, there’s a really tall task ahead of them. For one thing, content would be unrestricted, there would be content everywhere, in cognitive and non-cognitive systems. It becomes less clear that the notion of representation is an interesting one (and possibly one that can ground mental causation) if tree-rings represent/symbolize the tree’s age in the same way that I (mentally) represent triangles and categories for natural kinds.

I don’t think it’d be right to say, for instance, that the lanterns represented, had their content, or meant anything without their being assigned the representational role they had been assigned for the purposes of interpretation by cognitive agents. In effect, symbols of the kind that lack what Cummins called original meaning seem to be representations, but they lack original content. I think ‘original content’ is a better term than ‘original meaning,’ just because of the terminological adjustments I’m hinting at. I guess I have naïve intuitions that there is something like original meaning/content from which some kinds of symbols (like the lanterns) get their meaning. Although I do want to say that original meaning/content is not in play in a really strong sense, i.e. I don’t want to say that there’s no representational content anywhere that doesn’t derive from mental meaning. But then maybe I want to split the notions of representational content and meaning in a substantial way. I’ll try to say more about what I mean below.

I mean, I think Cummins is right to point out that the Gricean theories come up short with respect to mental representation, if they would want to go there. As Haugeland notes, on threat of regress [Gricean] mental meaning can’t account for the content of representations. But I’m also inclined to think that there may be, and if there isn’t maybe there should be, different senses of representation and meaning in play. Meanings apply, maybe, to symbols that derive their semantic properties from original meaning. Mental representations, maybe, don’t ‘mean’ anything; they’re just what we use because they’re what we have.

If so, maybe we should reserve ‘meaning’ for non-mental representations, like the lanterns. I really don’t want to bite on the line Cummins says Fodor takes either, but it’s partly because I think that the sense of ‘intentionality’ in play is very underdeveloped, by Cummins and many others. The term ‘intentionality’, Dretske has noted, is ‘much abused’. I don’t have an argument (yet), but I’d like to ground mental representation in intentionality without being committed to the view that representational contents are just propositional attitude contents. I think things like concepts and spatial maps, but maybe not things like production rules and phonemes, have intentionality. (And with respect to concepts, I don’t think (as Davidson apparently did) that having beliefs is necessary for having concepts).

While I’m at splitting stuff, and back to the top, we could say that there are (at least) two different kinds of SYMBOL, maybe corresponding to two different kinds of representation. There are those deriving content from original meaning (content), i.e. the lanterns, and those that have their content by some other natural means and are capable of playing the data structure role CTC wants. The interesting symbols for our purposes, of course, would be the latter ones. After all, even if CTC is right, if our thoughts are language-like symbols, they aren’t the kind we interpret (I don’t think) or use to communicate anything. This doesn’t mean it’s a good idea to jump immediately to some symmetrical theory of symbols, meaning, representation, content, etc. (just to avoid the problem of intentionality and ultimately the ‘problem of consciousness’ which I think is a strong motivation here). It also makes me wonder if we should expect, from the outset, a theory of mental representation to contribute in a substantial way to a theory of representation in general; mental representation and the representational roles of symbols that lack original content seem like they might be very different things.

Tuesday, December 11, 2007

The Cummins Dictionary - Words from Chpt. 1

I read back over chapters 1 & 2 of Cummins last night. Since I thought the rapid fire introduction of terminology, with many ambiguous terms (not ambiguous in Cummins, but in general), was a little baffling, I’m gonna’ give myself a closer look at the vocabulary.

Representation = A ‘whatnot’ (state or object) with a (particular?) content.

Mental Representation = A mental whatnot with a (particular?) content.

Symbol = A representation that does not resemble what it represents; a content-bearing whatnot that does not derive its content from a relation of similarity to what it represents.

Mental Symbol = A mental representation that does not resemble what it represents.

Content = A generic term for whatever it is that underwrites semantic and/or intentional properties.

Inexplicit Content = A generic term for whatever it is that underwrites the semantic and/or
intentional properties that a system bears in virtue of its structure (and not its representational and/or intentional properties).

Representational Theory of Intentionality = Intentional states inherit their content from
the representations that constitute them. The identification of intentionality with representation. The content of an intentional state is a representation, the total state is an
attitude toward the representation.

Cummins on Intentionality = Intentional states are just the propositional attitudes;
philosophers have tended to assume that the problem of mental representation is the problem of what attaches beliefs and desires to their contents. BUT a theory of mental representation need not give us intentional contents. The data structures underwriting the representational states of CTC are not equivalent to intentional states or their contents.

Theory of Meaning = A theory of what it is in virtue of which some particular whatnot
has the semantic content that it has.

Theory of Meaningfulness = A theory of what it is in virtue of which some kind or class of
whatnots have any meaning at all.

Orthodox (Classical) Computationalism = Cognition is ‘disciplined symbol manipulation.’
Mental representations are language-like, symbolic data structures fit to be the inputs/outputs of computations; mental representations are contentful mental symbols; the content of a mental symbol is whatever the data structure represents; the objects of computation are identical with the objects of semantic interpretation.

Connectionist Computationalism = Orthodox Computationalism + mental representations are
not (necessarily?) language-like symbols. Also, it is not the case that the objects of computation are identical with the objects of semantic interpretation.


The Problem of (Mental) Representation (PMR) =

I like Cummins’ quick & dirty formulation of the question at the heart of the problem, which occurs at the end of chapter 1:

‘What is it for a mental whatnot to be a representation?’ Equivalently – What is it for a mental whatnot to have a content?

CTC takes the notion of a contentful mental whatnot as an ‘explanatory primitive.’ I suppose this is to say that ontological questions are deferred – assume that there are such things as mental representations, what explanatory work do/should we expect of them in a defeasible theory of cognition? In effect, CTC is a solution to PMRS, not PMR.

The Problem of (Mental) RepresentationS (PMRS) =

What is it for a particular mental representation to have some particular content? What is it for a contentful mental whatnot to have the particular content that it has? How are the particular contents of particular mental representations individuated?

I thought it was interesting that this problem is of no concern to Cummins. He somewhat off-handedly lets us know that proposed answers to PMRS don’t ‘really matter much [to his project in the book]’ and that his ‘topic is the nature of representation, not what sorts of things do the representational work of the mind.’(2)

At this point I don’t really understand why he would dismiss PMRS as irrelevant. Presumably, and as he admits, a solution to PMR that takes mental representations as explanatory primitives but then fails to account for its own notion of ‘mental representation’ is not satisfying. But won’t an ‘account of the nature of the ‘mental’ representation relation’ include an answer to PMRS? If not, why not? It’s unclear to me at this point, since cognitive scientists refer to multiple kinds of mental representations - phonemes, spatial maps, concepts, etc. Cummins acknowledges this multiplicity of representations.


Notables from Chapter 1

The Central Question(s): What is it for a mental state or a mental object to bear a semantic property? What makes a mental state or object a representation?

Reaction: Does this entail that the problem of mental representation reduces to or is equivalent to the problem of meaningfulness?

Cummins’ Three Varieties of Content (the generic stuff that underwrites whatever semantic properties are present):

Content of a cognitive system might be characterized in the following ways:

According to its intentional states (if it has them)
According to its representational states (if it has them)
According to the inexplicit content yielded by its structure

Also, intentional content ≠ propositional content, cf. Revere’s Lanterns. They bore the propositional content that the British were coming (a) by land if one lantern was lit, (b) by sea if two lanterns were lit. But we shouldn’t attribute any contentful intentional states to the lanterns.
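The lanterns' derived content can be pictured as nothing more than a lookup table maintained by interpreters (a toy sketch of my own, not anything in Cummins): the mapping from lantern count to proposition exists only because cognitive agents assigned it.

```python
# Toy model of the lanterns' conventional (derived) content: nothing
# intrinsic to the lanterns carries the content; the convention does.
LANTERN_CONVENTION = {
    1: "The British are coming by land",
    2: "The British are coming by sea",
}

def interpret(lanterns_lit):
    # A signal outside the convention (0 lanterns, 3 lanterns) simply
    # has no assigned content.
    return LANTERN_CONVENTION.get(lanterns_lit)

assert interpret(2) == "The British are coming by sea"
assert interpret(3) is None  # no content outside the convention
```

The point of the sketch: the lanterns bear propositional content, but only derivatively, via the interpreters' assignment; there is no intentional state in the lanterns themselves.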

Reaction: Where did the lanterns derive their propositional content from, and why does it matter here? The problem of representation in general (equivalent to the problem of meaningfulness in general? Is there such a problem?), I presume, does not necessarily reduce to the problem of mental representation. Non-mental things can represent. The lanterns, for example, are not a mind or even a cognitive system. Why does it even matter whether non-intentional systems can bear propositional content or any content at all? We're after mental representation, not representation in general.

The Meaningfulness – Meaning Distinction and the Representation – Representations Distinction

The theory of meaningfulness/theory of meaning distinction is analogous to the distinction between the theories of mental representation and theories of mental representationS. The problems behind the theories of representation have to do with what it is for a mental representation to have a content and with the nature and content of particular representations respectively.

We might ask, similarly, what is it for a whatnot to have meaning and we might ask what it is for a particular whatnot to have a particular meaning. As with the former distinction, Cummins suggests that an answer to the general problem needn’t provide an answer to the particular problem. Is this right? What good is a theory of mental meaning that goes unapplied to instances, what might it tell us? What good is a theory of mental representation that goes unapplied to instances of (at least) kinds of mental representations? I don’t mean these to be rhetorical questions. Like I said the other day, questions about the ‘nature’ of things, especially vexing relations like meanings, confuse me.

Also, it is worth noting that, while Cummins insists that his question regards the nature of representation, he also insists that the bulk of the content of the book is concerned with theories of meaning. The strategy, then, is to look at theories of what it is in virtue of which particular whatnots have the particular meanings that they do (it should become clear from there, he says, what general theory of meaningfulness is entailed).

The asymmetry between this and the approach to the theory of mental representation struck me. As I pointed out above, Cummins says it doesn’t really matter which approach one takes to PMRS because the concern is with PMR. Why is the particular-to-general approach appropriate in the case of meaning/meaningfulness but not in the case of mental representation/mental representations? Why, especially, if there is some strong relationship between the question of mental meaning and mental representation, as there appears to be?

Suggestions: Pay attention to the broader initial problems.

Characterizing MEANINGFULNESS might be a broader project than characterizing mental meaningfulness.

Characterizing representation is a broader project than characterizing mental representation.

But by Cummins’ definitions, a theory of meaningfulness applies only to kinds or classes of things, presumably things like sentences, propositions, signs, and especially mental representations. By Cummins’ own admission, different fields (within cognitive science) use MENTAL REPRESENTATION to refer to different explanatory primitives. The result is that the theory of mental meaningfulness, i.e. the theory of what it is in virtue of which mental representations have any semantic content at all is not a single project.

Also, since it’s not clear where and if the theory of mental meaningfulness and the theory of mental representation come apart (both are concerned with what it is in virtue of which mental representations have semantic content), the ‘theory of mental representation’ we are engaged in will depend on which theoretical framework we find ourselves in. I assume that we should take ourselves to be within a broad theoretical framework, i.e. the CTC, but even within CTC there are adherents from the different fields that make up cognitive science. It leaves me wondering, when we say that it’s widely accepted that mental representations are language-like symbols, whether we’re saying that this is what the computer scientists, philosophers, linguists, and maybe the cognitive psychologists think, but not the neuroscientists (and what about the behavioral neuroscientists?). And we're certainly speaking only of the classical computationalists, not the connectionists.

Also, is ‘semantic content’ just ‘meaning’? And then is ‘mental content’ just ‘mental meaning’?

Representational Theory of Mind Chapter 2

What is the function of our mental states?
RTM- while mental states differ from one another, all mental states are representational states, and mental activity is the acquisition, transformation, and use of information and misinformation.
Contrast between human mental life and non-human mental life:
(1) We are flexible in our behavioral capacities.
(2) We are sensitive to the info in a perceptual stimulus rather than to the physical format of the stimulus.
The underlying idea here is that adaptive flexibility, especially learning, requires an ability to represent the world, for it is the info in the stimulus, not its physical form, that our behavior is sensitive to.
The big question: In virtue of which of their properties do the propositional attitudes (such as beliefs and desires) play the role they do in the causation of behavior? We need to show how physical systems like ourselves could have mental states that (a) represent and misrepresent the world, and (b) participate in the causation of behavior.
The No Magic Powers Constraint on answers to the big question: The functions allegedly essential to mental states must be functions actually performable by physical stuff.
One attempt at answering the big question- the language of thought hypothesis.
3 arguments for LOT [see Fodor Language of Thought book for more detail]: (1) semantic parallels between thoughts and sentences. (2) Syntactic parallels between thoughts and sentences. (3) Processing argument- processing has characteristics that make commitment to a language of thought inescapable.
If Fodor is right about LOT, we can naturalize the representational theory of mind. It also supports belief-desire (intentional) psychology and enables us to formulate three theses about the occupants of intentional roles:
Thesis 1: Propositional attitudes are realized by relations to sentences in the agent's language of thought. [this is intentional realism- humans' behavior and mental states are often the product of their beliefs and desires]
Thesis 2: The psychologically relevant causal properties of propositional attitudes are inherited from the syntactic properties of the sentence tokens that realize the attitudes.
Thesis 3: The semantic content of propositional attitudes are explained by the semantic properties of mentalese. The semantic properties of a token of mentalese are explained by its syntactic structure, and the semantic properties of the concepts that compose it.
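Thesis 3's compositionality claim can be sketched with a toy (the 'mentalese' encoding and lexicon here are invented for illustration, not Fodor's): the content of a complex token is computed from its syntactic structure plus the contents of its constituent concepts.

```python
# Hedged sketch of compositional semantics for a toy mentalese.
# Contents of primitive concepts (invented lexicon).
LEXICON = {"FIDO": "Fido", "CAT": "is a cat",
           "NOT": "it is not the case that"}

def content(token):
    """Compose the content of a mentalese token from its structure."""
    if isinstance(token, str):                # primitive concept
        return LEXICON[token]
    head, *args = token                       # complex token: (HEAD, args...)
    if head == "PRED":                        # (PRED, subject, predicate)
        subj, pred = args
        return f"{content(subj)} {content(pred)}"
    if head == "NEG":                         # (NEG, sub-token)
        return f"{content('NOT')} {content(args[0])}"
    raise ValueError(head)

assert content(("PRED", "FIDO", "CAT")) == "Fido is a cat"
assert content(("NEG", ("PRED", "FIDO", "CAT"))) == \
    "it is not the case that Fido is a cat"
```

The toy just makes the shape of the thesis vivid: fix the contents of the primitives and the syntax, and the content of every complex token is determined.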
Potential worry: Representational theories of mind that are unsupported by computational models risk turning into magical/circular theories of the mental, by positing an undischarged homunculus.
Computational models of cognitive processes help psychological theories avoid this regress (of the undischarged homunculus) in 3 different ways:
(1) Individualism: The processes that operate on mental representations are sensitive to the individualist or narrow properties of these representations. So, cognition is the processing of mental representations. But the cognitive mechanisms must be tuned to the structural features that code meaning, for they have no direct access to the extracranial causes of those features. Kinda like elementary formal logic.
(2) Mechanizing reason- it makes precise and manageable the idea of decomposing an ability into subabilities.
(3) 'Hard-wired' Reasoning Processes: In order to explain how the mind recognizes the structural features, we must posit a set of basic operations that the brain carries out, not in virtue of representing to itself how to carry them out, but in virtue of its physical constitution. So, (a) The properties of most immediate causal relevance to the cognitive mechanisms mediating the interaction of the sentence tokens in LOT are mind internal properties of some kind; (b) important cognitive processes are computational processes.
So, according to RTM, thoughts are inner representations with a double aspect- they represent in virtue of causal relations of some kind with what they represent, but their role within the mind depends on their individualist, perhaps syntactic, properties. So RTM is linked to the computational theory of the mind.

Meaning and Mental Representation Chapter 4

General intro to problems confronting covariance theories:
(L1) x represents y in LOCKE = x is a punch pattern that occurs in a percept when, only when, and because LOCKE is confronted by y (whiteness, a cat, whatever)
Positive aspect of this theory: proposes that the things that mediate cat recognition in the system must be the cat representations.
Another positive aspect of this theory: Does away with resemblance as the ground of representation, and solves the problem of abstraction (nothing can resemble all and only the blue things, but something can be the regular and natural effect of blue on the system, and hence occur in the system's percepts when and only when blue is present to it).
The fundamental difficulty facing Lockean theories is to explain how misrepresentation is possible; for suppose LOCKE is confronted by a cat but generates a dog percept D- then it is not true that D occurs in a percept when, only when, and because a dog is present, since no dog is present and the current percept has feature D.
The covariance theory strategy for dealing with the problem of misrepresentation is via idealization- either idealizing away from malfunctions, or idealizing away from suboptimal conditions of perceptual recognition.
General problem for idealization solutions: The idea that one can idealize away from cognitive error is incompatible with a fundamental finding of CTC- error is essential to a well-designed cognitive system with finite resources, because in order to succeed it must take short cuts. [I like this quote- "Epistemology for God and epistemology for us are two different things. God never had to worry about recognizing tigers in time to evade them."]
Specific problems for idealization solutions:
(L2) [idealizing away from malfunction] x represents y in LOCKE = were LOCKE functioning properly, punch pattern x would occur in a percept when, only when, and because LOCKE is confronted by y.
Problem: The most obvious/everyday cases of perceptual misrepresentation- illusions- are not cases of malfunctions, but cases of proper functioning in abnormal circumstances.
(L3) [idealizing away from suboptimal conditions of perceptual recognition] x represents y in LOCKE = were LOCKE functioning properly and circumstances ideal, x would occur in a percept when, only when, and because LOCKE is confronted by y. [the basic idea here is that something is a representation of a cat in virtue of having some feature that is, in percepts, an effect of cat presence and not of anything else]
Problem: Any specification of ideal circumstances will lead us in a circle. For according to this theory, we're going to have covariance only when the epistemological conditions (e.g. ideal circumstances) are right. And specifying those conditions will already presuppose content assignments to states of the cognitive system, because in order for the system to "get it right", it means that it has representations with the right content. So, to avoid being circular, Lockeans must specify ideal conditions in a way that does not presuppose content assignments to states of the cognitive system.
[At this point in the chapter, Cummins starts getting into possible strategies for the Lockean, involving inexplicit content, and I don't really understand it. I think further reading would be required to really get the inexplicit content stuff. However, I think that the main gist of the chapter can be captured without getting into that stuff, because Cummins concludes the chapter by reiterating what I have called the general problem for idealization solutions, and the specific problem for idealizing away from suboptimal conditions of perceptual recognition.]

Monday, December 10, 2007

Meaning and Mental Representation Chapter 3

Problems for the idea that representation is founded on similarity:
(1) Makes truly radical misrepresentation impossible, allows for misrepresentation only when the dissimilarity is relatively small.
(2) The problem of the brain as medium: Similarity theory seems incompatible with physicalism. If mental representations are physical things, and if representation is grounded in similarity, then there must be physical things in the brain that are similar to the things they represent. But this could only work if the mind-stuff is nonphysical. And "restricted" similarity (like pictures, cartoons) won't work because it is only "perceived" similarity.
(3) Similarity theories cannot deal with abstraction. How can a representation represent a whole class of things that differ widely from one another on many dimensions? How do we rule out resemblance in irrelevant aspects?
For Locke, the problem of abstraction and the problem posed by secondary qualities lead to the covariance theory solution.

Meaning and Mental Representation Chapter 2

How an account of mental meaning might fit into an account of meaning:
According to the neo-Gricean theory of meaning, semantic properties of representations are derived from the intentionality of their users- either directly, or indirectly via convention. So, meaning depends on the communicative intentions of communicating agents.
This theory is a species of theory that reduces meaning generally to intentionality. So, it provides an asymmetric treatment of meaning in that it accords priority to mental meaning. [But, it is possible to hold that mental and nonmental representation are basically the same- see Block 1986 "Advertisements for a Semantics for Psychology", and Millikan 1984 Language, Thought, and Other Biological Categories. A symmetrical treatment of representation must ground intentionality in mental representation. Two basic strategies- localism and globalism.]
The problem with using this theory to explain mental representation is that people don't use mental representations with the intention to communicate anything to anyone. One strategy, perhaps, for solving this would be to reduce nonmental meaning to intentionality, and then use RTI to reduce intentionality to mental representation.
Now for the main questions: What is it for a mental representation to have a content, and what determines what content it has? In the context of CTC- what makes a data structure a representation, and what determines what it represents?

The Representational Theory of Mind Chapter 1

Question: What makes a mental state the distinctive mental state it is, e.g. anger?
Possible answer: Its introspective, experiential quality.
Problems with this answer: (a) absence of introspectible qualities- person can be angry without being able to tell that fact about themselves. (b) Not distinct- not obvious that experiential sensations of anger are different from other emotional states of great arousal, such as fear, excitement. (c) Anger seems to have cognitive component, involving special types of belief and desire. But cognitive states (i) need not be conscious; (ii) are not distinguished from one another by their experiential quality.
Alternative answer: Functionalism- mental kinds/properties are identified by what they do, or what they are for, not what they are made of. So there is the following role/occupant distinction that provides us with two different, but complementary, ways of describing human mental life:
(1) It is a mental life in virtue of its functional description- specifies the causal roles of the full range of human psychological states.
(2) Description which specifies the physical nature of the occupiers/realizers of those causal roles.
Two features of functionalism that Sterelny points out:
(1) Availability of double descriptions (role/occupant) is not unique to psychology- e.g. computer science, hardware description/information flow description. And the discovery of the gene illustrates how a theory of function can be developed independently of a theory of physical realization.
(2) Multiple realization (here, one mental state having wildly varied physical realizations) is not restricted to psychology.
Machine functionalists- cognitive processing is a special case of running a program; cognitive states are states of the machine on which the mind-program is running. It was thought that anything whose behavior fits a machine table (a la Turing machine) is a functional system. But this turned out to be a bad idea [For more exposition on what was wrong with early functionalism, see Block (1978) “Troubles with Functionalism” in Block ed. Readings in Philosophy of Psychology Volume One]: 1) This makes functional descriptions too cheap/weak, because too many things (like the Brazilian economy, a pail of water in the sun, and the solar system) would qualify as functional systems. So the existence of entirely accidental correlations between physical states and symbols on a table isn’t enough for something to be a functional system. 2) Mysterious realization- in general, natural kinds are realized by more physically fundamental natural kinds. But in machine functionalism, the relation is mysterious- it is a relation between a mathematical object (the mathematical function the machine table specifies) and a physical device.
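The "too cheap" worry is easier to see with an explicit machine table (a toy parity machine of my own, not from Sterelny): the table is just a mapping from (state, input) to (next state, output), so any physical system whose states can be accidentally mapped onto q0/q1 trivially "fits" it.

```python
# A minimal machine table: nothing more than a finite mapping.
TABLE = {
    ("q0", 0): ("q0", "even"),
    ("q0", 1): ("q1", "odd"),
    ("q1", 0): ("q1", "odd"),
    ("q1", 1): ("q0", "even"),
}

def run(inputs, state="q0"):
    """Drive the table over a sequence of inputs; return the last output."""
    output = "even"
    for i in inputs:
        state, output = TABLE[(state, i)]
    return output

assert run([1, 1, 1]) == "odd"   # parity of the 1s seen so far
assert run([1, 0, 1]) == "even"
```

Since fitting the table is just a matter of some state-to-symbol correlation existing, the pail of water in the sun can "fit" it as well as a brain can; that is the cheapness objection.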
So machine functionalism doesn’t capture what is distinctive about a functional system. Functional systems are systems whose existence and structure have a teleological explanation. Teleological account of the mind- the mind has “an internal organization designed to carry out various perceptual, cognitive and action-guiding tasks. It has that organization and those purposes in virtue of its evolutionary history.” [For more on the teleological response to early functionalism, see Lycan 1981 “Form, Function, and Feel”. Journal of Philosophy 78, pp. 24-50; Millikan 1986 “Thoughts Without Laws; Cognitive Science With Content”. Philosophical Review 95, pp. 47-80]
What kinds of creatures are intentional systems? An intentional system must (a) have perceptual systems, so there is a flow of information from the world into the system; (b) have a reasonably rich system of internal representation (thermostats aren’t intentional systems in part because they represent only temperature); (c) have cognitive mechanisms that enable it to use perceptual information to update and modify its internal representations of the world; and (d) have mechanisms that translate its internal representations into behavior that is adaptive if those representations fit the world.
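[Conditions (a)-(d) read to me like an interface specification, so here is how I would sketch them as one- an invented Python sketch, with all names my own:]

```python
# Invented interface sketch of the four conditions on intentional systems:
# perception, rich representation, updating, and action. Names are mine,
# not from the text.

from abc import ABC, abstractmethod

class IntentionalSystem(ABC):
    @abstractmethod
    def perceive(self, world):
        """(a) Take in a flow of information from the world."""

    @abstractmethod
    def representations(self):
        """(b) Maintain a reasonably rich system of internal representations."""

    @abstractmethod
    def update(self, percept):
        """(c) Use perceptual information to revise the representations."""

    @abstractmethod
    def act(self):
        """(d) Translate representations into (hopefully adaptive) behavior."""
```

[On this way of putting it, a thermostat could implement only a degenerate version of representations(), which is why it fails condition (b).]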
Intentional systems can thus be psychologically very different from each other. So, actually, there are not two theories of the mind, a functional theory and a physical theory. For psychological states vary in the degree to which they are independent of their physical realization, and in the extent to which they are tied to a particular psychological organization. This leads us to homuncular functionalism, where intentional systems have a multiplicity of psychological structures [exactly why, I don’t really get. For more on homuncular functionalism, see Lycan “Form, Function, and Feel” and Lycan (1981) “Towards a homuncular theory of believing” Cognition and Brain Theory 4, pp. 139-59.]
Homuncular functionalism: (1) Functionalism- essence of a mental state is what it does, not what it is. (2) Mind is modular. (3) Each homunculus is in turn made up of more specialized simpler homunculi, until we reach a level where the tasks the homunculi must carry out are so simple that they are psychologically primitive.
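[As an analogy of my own invention, the discharge of homunculi into simpler homunculi looks like a call tree of successively simpler subroutines- here a made-up "face recognizer" that bottoms out in a primitive operation:]

```python
# Invented illustration of homuncular decomposition: a "face recognizer"
# homunculus delegates to simpler homunculi, bottoming out in an operation
# simple enough to need no further psychological explanation.

def recognize_face(image):
    """Top-level homunculus: delegates the whole task."""
    features = extract_features(image)
    return match_against_memory(features)

def extract_features(image):
    """Mid-level homunculus: delegates to still simpler homunculi."""
    return {"eyes": detect_edges(image, "eyes"),
            "mouth": detect_edges(image, "mouth")}

def match_against_memory(features):
    """Mid-level homunculus: compares features to stored templates."""
    known = {"eyes": "edge:eyes", "mouth": "edge:mouth"}
    return all(features[part] == known[part] for part in known)

def detect_edges(image, region):
    """Psychologically primitive operation: no further decomposition needed."""
    return f"edge:{region}"

result = recognize_face("some-image")  # True
```

[The regress of homunculi stops where the bottom-level tasks are trivial, just as detect_edges here gets no further decomposition.]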
2 big defenders of homuncular functionalism- Dennett and Lycan (maybe also Simon). They like the example of our specialized cognitive mechanism for face recognition.

Meaning and Mental Representation Chapter 1

Cummins distinguishes two different problems concerning mental representations. The first problem concerning mental representations is the problem of representations- understanding the physical instantiations of mental representations (orthodox computationalism- symbolic data structures; connectionists- activation levels of ensembles of simple processors, and/or the strengths of the connections among such processors), and their roles in mental processes.
There have been 4 answers to the problem of representations, concerning the sorts of things that can be mental representations: (1) Mind-stuff inFORMed- the same stuff that makes a red ball makes us perceive a red ball. Similarity is the big thing here- what we have in our head is capable of representing the world because it is made of the same stuff. (2) Images- same as the Mind-stuff inFORMed view, minus the Aristotelian jargon. (3) Symbols: (a) in contrast to the preceding views, symbols don’t resemble the things they represent; (b) they can be inputs and outputs of computations. (4) Neurophysiological states- mental representation is essentially a biological phenomenon.
The second problem concerning mental representations is the problem of representation- understanding what it is for a cognitive state to have a content.
There have been 4 answers to the problem of representation, concerning the nature of representation:
(1) Similarity- in order to be able to think about things in the world, need to have something resembling the thing in the world in your head.
(2) Covariance- certain characteristic activity in (neural) structure covaries with something out there in the world.
(3) Adaptational role- this, not covariance, accounts for the representation.
(4) Functional or computational role- functionalism applied to mental representation.
[I’m not sure I understand exactly what’s involved in solving the problem of representation. I would love it if we could think of an analogy in some other area of philosophy. Maybe this Cummins quote on methodology will be helpful: “We must pick a theoretical framework and ask what explanatory role mental representation plays in that framework and what the representation relation must be if that explanatory role is to be well grounded.”]

Most of the book will be assuming an orthodox computationalism background (CTC- computational theory of cognition) that provides an answer to the problem of representations (mental representations are symbolic data structures) but is agnostic about the problem of representation (concerning what it is for a data structure to have semantic properties).
Cummins urges that at the outset, in order to help distinguish between the various issues involved and solutions proposed, we should not be assuming either (a) Representational theory of intentionality (RTI)- intentional states inherit their contents from representations that are their constituents; or (b) The language of thought hypothesis, according to which, cognitive states involve “quasi-linguistic formulas type identified by their states in an internal code with a recursive syntax.”
The reason why it is important not to assume RTI at the outset: Represented content isn’t all the content there is. There is also inexplicit content of various kinds (e.g. content implicit in the state of control, content implicit in the domain, content implicit in the form of representation, content implicit in the medium of representation), and if nothing like the RTI is true there is also intentional content. [I don’t think I fully understand this point. It might be helpful to read Cummins 1986 “Inexplicit Information” in The Representation of Knowledge and Belief, ed. Brand and Harnish]
The reason why it is important not to assume the language of thought hypothesis at the outset: “A symbol can have a propositional content even though it has no syntax and is not part of a language-like system of symbols.” E.g. Paul Revere’s lanterns.