contra aftab... again!

Awais Aftab and I have been having a discussion, via our respective blogs, about the intelligibility of certain notions in cognitive science. This stemmed from our opposing valuations of Anil Seth's book 'Being You'. Here's his latest post; below: my response.

Orbits and Explanations

What's an orbit, and what in a celestial system is properly said to orbit what? Well, take your pick:

i) What's properly said to orbit what (the sun orbits the earth, or the earth orbits the sun) depends purely on a decision as to what we set as our reference frame. (This was the 'geometric' conception I was working with before.) Pin the frame to the earth, and the sun will be doing the orbiting. Pin the sun, or go with Newton and pin the 'fixed stars' (imagine they exist), and we'll now have the earth doing the orbiting. (Note that, on this definition, talk of orbits just represents the distances and orientations of bodies: perfectly circular orbits are not here a different matter from two bodies simply rotating on their axes.)

ii) What's orbited is always the centre of mass of a system of bodies, not any object in it. Thus the centre of mass of the sun-earth system is - because the sun is so massive compared to the earth - very near the sun's centre. The sun wibbles its way around that point, which never strays outside the sun's surface (rough figures below). The earth orbits this centre of mass from a greater distance; so it always 'goes round', if not 'orbits', the sun.

iii) What orbits what is given by a grammatical rule which says: when two bodies move around one another, the more massive body is that which is orbited.

iv) If the centre of mass in a system remains inside one of the objects, we say the object in question is what is orbited. (This is another grammatical rule.)
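(To put rough numbers on ii) and iv) - textbook values of my own choosing, not anything Aftab supplies - take the sun and earth as a two-body system. Their common centre of mass lies a distance

\[
r \;=\; d\,\frac{m_{\oplus}}{M_{\odot}+m_{\oplus}} \;\approx\; (1.5\times 10^{11}\,\mathrm{m})\times\frac{6\times 10^{24}\,\mathrm{kg}}{2\times 10^{30}\,\mathrm{kg}} \;\approx\; 4.5\times 10^{5}\,\mathrm{m} \;\approx\; 450\ \mathrm{km}
\]

from the sun's centre - deep inside a star whose radius is roughly 700,000 km. So by rule iv) the sun is what's orbited.)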

Now if I read him correctly, Aftab would reject i)-iv) and opt instead for: 

v) X is properly said to orbit Y if we have an explanation for X going round Y but not an explanation for Y going round X.

For Aftab, 'because Sun is extraordinarily more massive than Earth, it has a much larger gravitational pull'. And because it's the sun which has the much larger gravitational pull, we have 'a perfectly good explanation as to why Earth would move around the Sun' but not vice versa.

Now, Newton's third law tells us that forces between bodies are equal and opposite. So earth's gravitational pull on the sun is, one might think, as great as that of the sun on the earth. And so I'm not entirely clear what Aftab means by saying that the pull of the earth on the sun is less than that of the sun on the earth. But perhaps we should distinguish between pull and force. Thus we might now say 'Because it's so much more massive, that same amount of force from the earth moves the sun but a little, whereas because it's so much less massive, the same amount of force moves the earth a lot'. And 'pull' we define in terms of the force's effects. (A big man readily pulls a little child along the ground and not vice versa, despite the fact that the forces in play between them are equal and opposite.) But the difficulty with this is that it simply begs the question we're trying to address. For why should we describe the effects in one way (the earth moves) rather than the other (the sun moves)? You can't here appeal to their movements in establishing which pulls which - well, not without a crippling circularity. (The example of the big man and the little child doesn't contradict this, for here we've involved a third body - the earth - which we've already established as our frame of reference.)

The dilemma I see for Aftab here is simply: a) if you appeal to grammar to do the work of establishing which orbits which, then the appeal to explanation is redundant. b) if, however, you leave room for explanation to do its alleged orbit-determining work, then you'll just end up begging the question as to what shall be counted as moving (and so as to what we shall count as orbited). My own proposal is that we instead just make clear which of senses i)-iv) we're using and leave matters there.
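For concreteness - and this is just the schoolbook physics, not a formulation Aftab himself offers - there is one gravitational force and two very different resulting accelerations:

\[
F=\frac{G\,M_{\odot}\,m_{\oplus}}{d^{2}},\qquad a_{\oplus}=\frac{F}{m_{\oplus}},\quad a_{\odot}=\frac{F}{M_{\odot}},\qquad \frac{a_{\oplus}}{a_{\odot}}=\frac{M_{\odot}}{m_{\oplus}}\approx 3\times 10^{5}.
\]

The force on each body is the same; the earth's acceleration is some three hundred thousand times the sun's. But even that statement only becomes 'the earth moves and the sun (near enough) doesn't' relative to an already-chosen frame - one pinned, say, to the centre of mass - which is, I take it, just the circularity worried about above.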

Perception

Aftab tells us that 'Gipps seems to think' that 'how do we perceive?' 'is not a meaningful question'. I'm a bit puzzled by that. After all, in both of my previous posts I said that it surely made good sense to enquire into the neurobiology and physiology of smell, hearing, sight, etc. Why can't that 'how?' question be used to prompt such enquiries?

Now I do happen to think - don't you too? - that we'd need to find out rather a lot more about what was puzzling the utterer of such words ('how do we perceive?') before we could be sure that anything we said would be meeting their need. (After all, word strings enjoy such meaning as they have only in particular contexts. 'How do we perceive?' does not, in and of itself, invite any particular enquiry; its sense is radically underdetermined.) But it's surely not hard to imagine contexts - I already adverted to some neurobiological contexts in my previous posts - into which a word string like 'how do we smell?' or 'how do we touch?' could be inserted and in which it could constitute a meaningful question.

But perhaps here's the focus of our disagreement. There's a use of the 'how do we x?' question which asks which component actions we need to perform in order to succeed at action/task x. How did you plough the field? Well, I got the tractor out, filled it up with gas, attached the plough, lowered the plough into the earth, drove it over the field, etc. My thought is that, when you ask 'how do you x?' in that spirit, we're typically already at the end of the action line when we get to hearing, smelling, moving your finger, etc. At this point, other questions and other answers may find their place - for example, 'when you move your finger / smell the rose, what happens in your nose/brain/arm to make this possible?' Our interests will now typically be framed in physiological terms. Cognitive scientists, however, typically take there to be one or more intermediary levels of explanation here - levels that in some sense are still worth calling 'psychological' even if we're no longer talking about the actions of whole persons. It is about the viability of such levels that, I believe, Aftab and I are in disagreement. But, to be 100% clear about this: I'm not trying to rule out a priori that enquiries and explanations framed in cognitive scientific terms are possible. My method is different: it's to urge that those who posit such a level a) aren't clear about what they mean, and b) rather look as if they've got into an unwitting muddle. (The difference between 'you're talking nonsense!' and 'might you say what you mean, because so far as I can tell you're not using words in the normal way here?' should, I hope, be obvious by now.)

Prediction

The terms in which Aftab articulates this intermediary level are 'information', 'inference', and 'prediction'. It's not, as he puts it, that the brain makes (Bayesian) inferences or predictions or processes information in the ordinary sense of those terms. Instead it does something analogous. So, what are these analogous senses? Seth didn't tell us in his book, and I've not yet found ready elucidations in the cognitive science literature. Now, Aftab doesn't tell us what it is for a brain to make something like an inference, but he does offer a suggestion as to what it might be for it to make something analogous to a prediction. His example is predictive text on a phone.

If I understand Aftab right, then the idea is that the brain may be said to make predictions in the same sense that the phone makes predictions when we're texting. It's not that the brain predicts in the normal sense of 'predict', since otherwise we'd be in the peculiar business of trying to explain our ability to, say, make predictions in terms of our brain's ability to, er, make predictions - which would kinda be a non-starter. (It'd be like positing representations to explain how we see things - when the notion of a representation, if it's being used in anything like the ordinary sense, is clearly of something which itself needs to be seen. Or like explaining procedural knowledge in terms of the possession of theoretical knowledge which we'd have to know what to do with... etc. etc.) Instead, the brain 'predicts' in the sense of 'predict' that's in play when we talk of the phone predicting. Well, what is this sense, and is this a realistic suggestion?

Consider an online or paper dictionary: type in / look up 'arbo' and it will (let's imagine) show an alphabetical list like 'arboreal', 'arboriculture', 'arborization' ... etc. When we use predictive text, though, the order of words appearing on the screen isn't alphabetical, but instead depends on how often we've personally used them before (and how often we've used them after the word we've just written, etc. etc.). It's this difference - from a pre-programmed static order to a dynamically updated order - that gives our talk of the phone 'predicting' its sense.
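To make that difference concrete, here's a toy sketch of the sort of frequency-based ranking just described. (It's mine, not Aftab's, and it's certainly not how any actual phone keyboard is implemented; it just illustrates what a 'dynamically updated order' might amount to.)

```python
from collections import defaultdict

class ToyPredictiveText:
    """Toy 'predictive text': candidate words are ranked by how often the
    user has previously typed them after the current word - a dynamically
    updated order, not a fixed alphabetical one."""

    def __init__(self):
        # counts[previous_word][next_word] = times the user typed next_word
        # immediately after previous_word
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, previous_word, next_word):
        """Record what the user actually typed, updating the ranking."""
        self.counts[previous_word][next_word] += 1

    def suggest(self, previous_word, prefix="", n=3):
        """Return up to n words starting with `prefix`, most-used-first."""
        candidates = self.counts[previous_word]
        matches = {w: c for w, c in candidates.items() if w.startswith(prefix)}
        return sorted(matches, key=matches.get, reverse=True)[:n]


predictor = ToyPredictiveText()
for word in ["tree", "tree", "trip", "table"]:
    predictor.observe("the", word)

print(predictor.suggest("the", prefix="tr"))  # ['tree', 'trip']
```

Nothing in such a routine goes beyond counting and sorting; the question is what, if anything, licenses calling its output a 'prediction'.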

I don't know that any particularly clear intuitions exist regarding what to say if this 'prediction' no longer displays. I mean: imagine that the problem is just with the output to the screen: might we say the phone is still predicting text? And at what point of failure in matching displayed word with intended word do we say that the phone is no longer predicting? Is it making bad predictions then, or just not predicting? And of course it's not that the phone knows what a word is, knows that you're typing on it, has any kind of orientation towards the future, can read or speak or write, has any genuine competencies, knows a language, can try or not try to do anything, etc. Speaking and writing - and ordinary predicting - are activities that go on for beings with a social form of life, and not only does the phone not enjoy sociality - it's not even alive. The phone has no praxis: it's not oriented to the truth; it's not engaged in intentional actions, since it has no ends other than those set by the programmer or those for which it's employed by the user; it doesn't actually follow or fail to follow rules - though we of course can describe its activity by using a rule (i.e. it behaves in accord with, rather than actually follows, rules); it only gets things 'wrong' or 'right' in an utterly derivative sense - i.e. in relation to our intentions to write this or that word; it can't think thoughts, and so the 'predictions' it makes aren't instances of thought; it understands (and misunderstands) nothing. But that's all fine of course. We don't mean that the phone is really making predictions in the normal sense. Predictions, after all, are actions, whereas what goes on in the phone (and, for that matter, in the brain) is instead (as Aftab himself alludes to) a matter of mere happenings.

Now, Aftab says that what we have, when talking of predictive text, is nevertheless an analogical rather than metaphorical sense of 'predict', and that it's 'similar enough' to what we do when we think about what will happen and issue an actual prognostication. I confess I'm not quite sure what to make of this given both the myriad dissimilarities and the utterly derivative, artefactual, sense in which a phone 'predicts' anything. But perhaps the clue is in what Aftab also says: models of celestial bodies only make predictions of the planets' positions in a metaphorical sense, and to say of a pancreas which releases insulin (or whatever it does) in proportion to what's consumed, rather than 'waiting' to detect blood sugar levels (I've no idea how it works; just imagine, ok!), that it 'predicts' is to indulge a 'pure metaphor'. In these situations there's 'nothing like prediction actually happening (as far as we know)'. But why is it that we say that the phone is doing something like predicting but that (my imagined) pancreas is not? Well, the only disanalogy I can see between them is that what the phone is involved with, even though of course it knows nothing of it (since it's not a knower), is semantic information or meaning. The marks on the phone's screen count as information because of how we relate to them, because of the place this artefact enjoys in our rich communicative, social, lives.

The question still standing, now, is whether the brain could make predictions in something like this sense in which the phone predicts. And the issue I see with this suggestion is that there's an important sense in which we don't use our brains to think or smell. Now, sure, and of course, you'd have a hard job thinking or perceiving without a brain! And I don't mean to turn my face against idioms like 'use your brain for goodness' sake!' That's not my point. My point, rather, is that the significance of the phone display really is a function of the phone having a role as an artefact within our discursive form of life. The significance derives from that use. The brain, however, has no such role. We can't see or hear or smell it or what's going on in it; we can't handle it; it's not a tool. It's part of us, an organ inside us, rather than something to which we, the 'whole us', stand in a meaning-conferring relation. Meaning is not conferred by us on our own brain activations: the activations are not used; they have merely a causal, rather than a meaningful, role in our normative practices.

In short, the relationship between brain activations and human psychological activity is quite unlike that between phone displays and human psychological activity. (This, in effect, is precisely why the functionalist notion of 'brain as computer' failed all those years ago.) Whilst artefacts enjoy a derivative form of intentionality, organs don't. However we ought to articulate the relationship between events in my noggin and the thoughts I have, analogising with artefacts won't do it.

Information

During his discussion Aftab suggests that 'maybe, just maybe' the brain makes something analogous to inferences about what is in the world around it. He doesn't delineate this analogous concept directly, instead choosing to focus on something he calls a 'physical' as opposed (presumably) to an ordinary, 'semantic', sense of 'information'. What is this 'information'? Information, in the sense in which, say, the brain can be said to process information, is present if the

state of a system at one point in time has a discernible relationship with the state of a system at any other time (e.g. you can use an equation to calculate the state of a system at one point given the state of the system at another point). In the case of perception, let's say I see a tree in front of me, and then I copy the shape of the tree on a piece of paper. We can think of this in terms of flow of 'information' -- there is a relationship between the physical state of the tree, the physical state of my brain, and the physical state of the piece of paper.

The 'system' here is presumably the tree-brain-paper system. To offer another example: the longer you leave a pizza in the oven, the less 'information' it eventually contains as to what toppings (vegetables, cheeses, microbes, etc.) were on it when you put it in. Information in this sense is, note, relative to what's discernible by some or other observer. (It may also be something like Shannon-information, another non-semantic kind of 'information' which cognitive scientists have said is relevant to the study of brain processes.) 
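If something Shannon-like is indeed what's in view - and this is my gloss, not a formulation Aftab gives - the non-semantic notion can be put like this: the state of one system (the tree, say) carries 'information' about the state of another (the brain) just to the extent that the two are statistically dependent, as measured for instance by their mutual information

\[
I(X;Y)\;=\;\sum_{x,y} p(x,y)\,\log\frac{p(x,y)}{p(x)\,p(y)},
\]

which is zero when the one state tells you nothing about the other and grows as the correlation tightens. Note that nothing in that quantity mentions meaning, truth, or what anything is about - which is just the point pressed below.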

Now it seems very likely to me - despite Wittgenstein's Zettel §610 - that there'll be physical-information 'about' (i.e. reliably correlated traces of) the environment 'in' the brain. There's presumably no little mouse neurone that lights up whenever you see or smell a mouse, but there will be brain activation patterns which in some way or other map both onto the objects around one which are causally impacting on the senses and onto what perceivers take themselves to perceive. As Aftab rightly says, it's the task of neuroscience to work out these relations between sensory stimulations and perceptual reports / perceptually-informed activity.

What I can't yet see, however, is that this notion of physical information is going to get us anywhere when it comes to making sense of what it is for a brain to (in some or other similar-to-our-normal-use-of-the-terms sense) make inferences or predictions. After all, what inferences in the ordinary sense have to do with is precisely information in the semantic sense. Yet here we're all agreeing that there's no ordinary, semantic, information in the brain.

This, then, is the difficulty I see for the cognitive scientific project as it's typically spelled out. On the one hand it's urged that the brain is making predictions, inferences, etc., not in a metaphorical sense but in something like the literal sense. To support this it's pointed out that artefacts like computers and phones do after all make something like predictions, process information, etc. When, however, it's pointed out that these artefacts are only said to engage in meaning-related activity in a derivative, concessionary, sense, because of the place we confer on them within our normative practices, and that the brain enjoys no such role - its role being instead its causal contribution to our capacity to engage in such practices - then notions of information etc. which don't have to do with ordinary meaning are instead invoked. But the difficulty now is that causal operations on meaningless physical information look simply nothing like predictions and inferences in anything like their ordinary forms.
