I suppose the lanterns do some work to serve the Cummins purpose you mention. Maybe a symbol or group of symbols has propositional content; still, the symbols don’t need to be language-like. Score one for the connectionists (not that I’m taking sides here yet).
But it’s worth noting, I think, that something hinges on there being a viable analogy between symbols like the lanterns and the mental symbols of CTC; and just calling two very different sorts of things ‘symbols’ is not enough. So we’re assuming that mental symbols are sufficiently like non-mental symbols. But it’s far from immediately clear to me that this is appropriate. In fact, there seem to be good reasons to think that they aren’t too similar, for instance, the considerations that have to do with what Cummins called original meaning.
If someone wants to defend, for instance, what Cummins calls a symmetrical theory of meaning(fullness?), where mental and non-mental meaning are essentially the same sorts of thing (in order to avoid the circularity in saying that mental representations get their meaning from original mental meaning), there’s a really tall task ahead of them. For one thing, content would be unrestricted; there would be content everywhere, in cognitive and non-cognitive systems alike. It becomes less clear that the notion of representation is an interesting one (and possibly one that can ground mental causation) if tree-rings represent/symbolize the tree’s age in the same way that I (mentally) represent triangles and categories for natural kinds.
I don’t think it’d be right to say, for instance, that the lanterns represented, had their content, or meant anything without their being assigned the representational role they had for the purposes of interpretation by cognitive agents. In effect, symbols of the kind that lack what Cummins called original meaning still seem to be representations; they just lack original content. I think ‘original content’ is a better term than ‘original meaning,’ just because of the terminological adjustments I’m hinting at. I guess I have naïve intuitions that there is something like original meaning/content from which some kinds of symbols (like the lanterns) get their meaning. Although I do want to say that original meaning/content is not in play in a really strong sense, i.e. the sense in which there’s no representational content anywhere that doesn’t derive from mental meaning. But then maybe I want to split the notions of representational content and meaning in a substantial way. I’ll try to say more about what I mean below.
I mean, I think Cummins is right to point out that the Gricean theories come up short with respect to mental representation, if they would want to go there. As Haugeland notes, on threat of regress [Gricean] mental meaning can’t account for the content of representations. But I’m also inclined to think that there may be, and if there isn’t maybe there should be, different senses of representation and meaning in play. Meanings apply, maybe, to symbols that derive their semantic properties from original meaning. Mental representations, maybe, don’t ‘mean’ anything; they’re just what we use because they’re what we have.
If so, maybe we should reserve ‘meaning’ for non-mental representations, like the lanterns. I really don’t want to bite on the line Cummins says Fodor takes either, but that’s partly because I think the sense of ‘intentionality’ in play is very underdeveloped, by Cummins and many others. The term ‘intentionality’, Dretske has noted, is ‘much abused’. I don’t have an argument (yet), but I’d like to ground mental representation in intentionality without being committed to the view that representational contents are just propositional attitude contents. I think things like concepts and spatial maps, but maybe not things like production rules and phonemes, have intentionality. (And with respect to concepts, I don’t think (as Davidson apparently did) that having beliefs is necessary for having concepts.)
While I’m splitting things, and going back to the top, we could say that there are (at least) two different kinds of SYMBOL, maybe corresponding to two different kinds of representation. There are those deriving content from original meaning (content), i.e. the lanterns, and those that have their content by some other natural means and are capable of playing the data structure role CTC wants. The interesting symbols for our purposes, of course, would be the latter ones. After all, even if CTC is right, if our thoughts are language-like symbols, they aren’t the kind we interpret (I don’t think) or use to communicate anything. This doesn’t mean it’s a good idea to jump immediately to some symmetrical theory of symbols, meaning, representation, content, etc. (just to avoid the problem of intentionality and ultimately the ‘problem of consciousness’, which I think is a strong motivation here). It also makes me wonder if we should expect, from the outset, a theory of mental representation to contribute in a substantial way to a theory of representation in general; mental representation and the representational roles of symbols that lack original content seem like they might be very different things.
5 comments:
Oh man, we're getting into so many terms of art, it's kinda confusing. I'm finding it helpful to think about this on a daily basis, to start getting more fluent with the terminology (although this is very unhelpful for motivating me to work on the crappy metaphysics paper I have to write).
Let's see, first off it'd obviously help us to have more phil language background if we're gonna do theory of meaning stuff.
For better or worse, I'm not as skeptical of the symmetrical theory as you are. It might just be because my first exposure to this stuff was in reading Millikan, so maybe I'm more used to the symmetrical way of thinking than the asymmetrical. And I also have an unhealthy amount of worship for Millikan's wisdom. Anyways, Cummins spends very little time on this subject; we'll have more to go on when we read the Block (1986), which might be our first reading next quarter?
You say that you'd like to ground mental representation in intentionality. Could you spell out a little more the motivations behind this? As Cummins presented it, both the asymmetrical theory and the symmetrical theory should be committed to the implausibility/impossibility of grounding mental representation in intentionality.
You maintain that "mental representation and the representational roles of symbols that lack original content seem like they might be very different things." But the good reason that you cite to think that they aren't too similar, the original meaning considerations, is a reason in support of the meaningfulness of non-mental states being derived from the meaningfulness of mental states, and I don't think this shows that they are necessarily very different things. Could you spell out a little more your reasons for thinking they are very different things, and what might be at stake in this assertion?
No shit, man, I’m swimming in terminology over here, and (I think) ambiguities. Now, I was curious about your comment that it would help to have more philosophy of language background. Of course, I think that’s right, but did you mean to insinuate that the ‘theory of meaning’ is in the domain of the phil of language? Or just that so much work on meaning has been done in there that the more we studied in there the better? I only ask because a big issue I’m having right now is that I can’t tell (in Cummins; and probably in general) whether the general theory of meaningfulness comes apart from the general theory of representation. Do they? Does it matter? If they don’t then we can simplify our language by stipulating that we’ll use representation-talk instead of meaning-talk or vice versa. The crucial notion seems to be CONTENT in either case anyway; the meaning of any representation is the content that it bears.
Yeah, I mean, I can’t take a stand on symmetry yet; to me it seems that there are strong considerations in favor of both symmetry and asymmetry. By the way, I’m extremely interested in Millikan’s account of intentionality, and I’m not really moved by the objection that the CTC is committed to the history-less study of cognition. What might be interesting to consider also is Davidson’s swampman, although I never really felt the force of that either. We should look at (well, I should look at) Millikan’s ‘Biosemantics’ in the Mind & Cognition anthology.
Cummins thinks that grounding MR in intentionality is implausible because he thinks (I think as many do) that all (and only?) propositional attitudes are intentional states. I think this is an artifact of Fodor’s theory, which has as an aim the vindication of folk-psychology. A lot of the work I want to do in the future involves dissociating intentionality from the attitudes, without trivializing intentionality and making all representational properties intentional properties. My interest in non-human animal cognition involves the attribution of intentional states to non-human animals, but not beliefs and desires in anything resembling the folk-sense.
That’s a nice point. Let’s see, what might be at stake. I try to avoid splitting terminological hairs here, but look at it this way. Right, if we’re talking about meaning, it doesn’t seem like there’s really a substantial way to say that objects and states mean things in different ways. Meaning, whatever it is, I think I’m happy to say, is meaning.
But maybe then we really do need to distinguish between meaning and content. Mental representations, we’re taking it, are contentful. But this is not to say that they mean anything, and maybe this is where the theory of meaning, representation, and mental representation come apart. One sense in which MRs are meaningless is the sense in which they are never the subjects of interpretation; we just have them.
The point I’m pushing here, very tentatively, is that not all symbols/representations bear content in the same way. Some, like the lanterns, are imbued with content/meaning by stipulation. Others, like MRs, bear their content essentially, or something like that. That is, however an MR is structured, it bears the content that it does essentially, not by stipulation, and not derivatively. The difference between mental representation and representation in general is that while the former are always un-interpreted, some representations are interpreted, and are only representations insofar as they are interpreted. The substantial difference I point to is in this vicinity. Some representations (the mental ones) are inherently contentful; what it is to be an MR is to bear content. Some representations are not inherently contentful; it is not a necessary condition on something’s being a lantern that it bear content.
I think there’s lots more to be said on both sides here. I’m looking forward to it!
hey, i'm a moron, my replies to your comments will seem really disjoint because i was looking at your comments as i wrote. i forgot to include transitions and/or text i was responding to. annoyingly, i can't figure out how to edit comments.
by the way, i changed your blog status to administrator, are you getting emails now?
"Cummins thinks that grounding MR in intentionality is implausible because he thinks that all propositional attitudes are intentional states." I'm not really seeing this in the Cummins, but maybe I'm a step or three behind. The big problem is that this chapter is only 5 pages, so there are probably a million assumptions behind every paragraph. Anyways, here's how I thought the argument was working on pgs. 22-23: If one adopts a neo-Gricean theory of meaning [and this is not the only game in town], then meaning in general depends on intentionality. So if we want to apply a neo-Gricean theory to the meaning of mental representations, then the meaning of mental representations must depend on intentionality. It seems to me that, whatever one's view on intentionality, this will be implausible, given that the Gricean theory of meaning and communication would have to posit a heck of a lot of intentional stuff going on ("sub-personal agents") inside the heads of humans, animals, whatever, and that seems implausible.
"The meaning of any representation is the content that it bears." That sounds like it's in the right ballpark, but certainly different theories will have different implications. My impression is that there is theoretical work to be done by each of "meaning", "representation", and "content", and one of our goals for next quarter should be getting clearer on what work is being done by which term.
Yeah, I didn't mean that the theory of meaning is strictly in the domain of phil language, but rather that we can't do serious theory of meaning stuff without being aware of what's been done in phil language.
I propose moving on to the next couple of chapters. whaddya say?
Yes I think it’s wise to move on; I’m a walking quagmire. Seriously, though, I agree about aiming at least partially at getting a clearer picture of the theoretical roles of ‘meaning’, ‘representation’, and ‘content’. I’d feel like I gained a lot if I gained that.
I’ll just say a couple things. Here is the Cummins quote that prompted me to say what I said (about the picture of intentionality I think is inadequate and misleading):
‘As I use the term, intentionality is just a system with propositional attitudes (belief, desire, and so on). Thus construed, intentionality is a commonplace, and hence so is intentional content…the assumption I want to scout out is the assumption that the problem of mental representation is just the problem of whatever attaches beliefs and desires to their contents.’(14)
This is sort of aimed at the RTI theorists: they think that mental representations are the constituents of propositional attitudes, and that’s why they think the problem of MR is just the problem of intentionality. I don’t know if this is fair to RTI theorists or not.
Here is the footnote attached to this piece of text:
“The general practice in philosophy these days is to use the term ‘mental representation’ or ‘cognitive representation’ as a generic term for any state of a cognitive system that is individuated by semantic content. This simply blurs the distinction between representational states (e.g., data structures) and intentional states (e.g., beliefs).”
I know we can’t really get into this now, but I wanted to clarify that it is this distinction that I question. I’m OK with the blurring, because I don’t associate intentionality with only the propositional attitudes. That may not be kosher if I wanted to hold onto Cummins’ variety of CTC, or maybe any at all, but I’ll have to pick my position later on. We move on!