Tuesday, August 16, 2016

Recent Books on Causation III: Carolina Sartorio, Causation and Free Will, Oxford, 2016

 Carolina Sartorio, Causation and Free Will, Oxford, 2016

The styles of philosophy change. Spinoza gave us axioms, from which it was patent his “theorems” did not follow. Hobbes, Locke, and Hume gave us long essays; Berkeley and Hume, dialogues.  Nowadays, philosophical style is more often like a video game with unspoken rules: the reader is told the author has a goal, followed by example, counterexample, perplex after perplex, which the author dispatches one after another, like so many arcade mopes, with occasional reverses to revive the dead and kill them again. Double tap. And then, finally, the reader reaches The Theory. Or not.  Ellery Eells’ endlessly annoying Probabilistic Causality is like that, and so, less endlessly (hers is a short, dense book), is Carolina Sartorio’s Causation and Free Will. You can’t say Eells didn’t think hard about his topic, he did, and so evidently has Sartorio, but you can say that both of them, and a lot of other philosophers, could have made reading and understanding a lot easier by laying cards on the table to begin with. At least her syntax is not contrived to hide banality beneath bafflement.

Shelled and peeled, the story is this: an action is done freely by a person if (and I suppose only if) the person caused the action via a sequence of events that included, as actual causes, rational (given the person’s desires and beliefs) reasons for the act and absences of reasons not to do it, absences, again, as actual causes in “a normal, non-deviant way.” (p. 135).

How can absences of reasons be causes, you ask. Easy: you ate ice cream because you did not have a reason not to of the kind “I am allergic to ice cream,” because you are not allergic to ice cream and you know it. So the absence of that reason was a cause of your eating ice cream. In the vernacular, we allow absences as causes all the time: my tomato plants died because I didn’t water them.  Of course, if metaphysicians take the vernacular literally and allow absences as causes, then they will have an infinity of them in every case: my plants died because Barack Obama did not water them, and so on.  Sartorio is content with that, and presumably content with an infinity of such ghost causes accompanying every cause that actually happens. Essentially, every ceteris paribus clause becomes an infinity of actual but non-actual (because absent) causes.

Absences as causes might seem gratuitous in her story. They are there because she wants to distinguish, on the one hand, between courses of action in which the agent would be sensitive to reasons against the action were the reasons real (the absent causes) and, on the other hand, courses of reasoning in which the agent would not be sensitive to similar reasons were they real (the absent non-causes).  Philosophy is in some places Humpty-Dumptyish, and metaphysicians are legally free to talk as they want, including saying that if in deciding to do something you would be sensitive to a reason, were you to have it, a reason that you do not in fact have, then the absence of that reason is a cause of what you do.  I don’t think such talk helps anything, and in science, where absences are ceteris paribus clauses or shorthands for unknown (or boring) positive details, it’s silly. Ask a physicist to predict the position of Mars from a position and momentum it does not have now.

Absences as causes necessitate recourse to “a normal and non-deviant way,” she argues, because the absence of a reason could be a cause of an effect in a deviant way: were the reason present, it would cause some external process (Sartorio likes examples with miraculous neuroscientists standing ready to intervene) to prevent the effect, and so the agent would be “sensitive” to the absence of the reason.

Ever since it became abundantly clear that we are biological and physical machines, not just our bodies, as Descartes allowed, but the whole of us, as Helmholtz allowed, philosophers doing “moral psychology” have tried to reconcile us to the loss of the Thomistic/Cartesian fancy.  The plain fact seems to be that we do not have anything of the kind that Aquinas and Descartes claimed for us. So live with it.  Daniel Dennett (Elbow Room) assures us that we should be content, even happy with our state; it gives us everything we could want. He is wrong. We could want not to be like that, and most of us do. The that is a machine whose workings are determined—or at least caused—by forces that antedated us. The that is a person who has as a zygote or neonate been implanted with a device that determines her subsequent responses to her environment. We do not want to be like that even if nature did the implanting. To be in human bondage, and know it, is one of the metaphysical agonies.

One compatibilist response to the metaphysical agony is that it pines for an incoherence, that there could not thinkably be a system of the kind Descartes and Aquinas claimed us to be. But of course there could. We have perfectly clear mathematical theories of non-deterministic automata, whose transitions between states (Hilary Putnam once thought of them as mental states) are neither determined nor probabilistic.  The other compatibilist response is Orwellian: change the language. I think Sartorio’s response is of the Orwellian kind, but tempered. She says she has the intuition that if the human machine is formed by nature, well, its actions can be free. She doesn’t offer a survey of others’ opinions. Bless her, she elaborates only on the condition that her intuition is correct.

There remains the serious scientific project of how consciousness, and deliberation happen, and how they came about, and the sociological, anthropological project of understanding the conditions under which various communities claim free agency and when they do not, and how those conditions (which have evidently changed) come about as a social process, and perhaps the moral project of consoling those who agonize for the loss of free will, but there doesn’t remain anything metaphysical to do about freedom of the will.  Nothing, at least, of value.

Monday, August 15, 2016

Recent Books on Causation II, Douglas Kutach, Causation

Douglas Kutach, Causation, Polity Press, 2014

This, too, is an introductory book, but a good one.  The author mixes in historical sources with a wide-ranging, and generally accurate and informative, exposition of contemporary (i.e., since 1946) accounts of the metaphysics of causation. It has some sensible questions for readers. I would use it as a textbook, with some apologies to the students. What apologies?

1.     Like most other discussions of the metaphysics of causality, Kutach appeals to what we think we know for motivation, examples and counterexamples, but there is not the least hint of how causes can be, and are, discovered.
2.     While the book is less mathophobic than most philosophy texts, it is not always mathematically competent, doesn’t use what it does develop well, and presents mathematical examples that will be unenlightening or worse to most students.
a.     Early on “linearity” is discussed apropos of causal relations, but the author clearly doesn’t mean linearity. It is not clear what he means. Monotonicity perhaps, or non-interaction.
b.     After introducing conditioning, independence, and the common cause principle, the author gives a rather opaque discussion of Reichenbach’s attempt to define the direction of time by open versus closed “conjunctive forks,” but fails to note that closed forks become open when common causes are conditioned on.  One question asks students to describe a graphical causal model with a specific probability feature, which would have been straightforward had the reader been shown how graphical causal models are parameterized to yield probability relations, but that did not happen.
c.      As an example of uncertain extensions of familiar cases, students are referred to transfinite arithmetic.  Some help.
3.     Some of the exposition could be more attractive, notably the explanations of token versus type, singular versus general. Distinctions (never mind notation) from formal logic are suppressed everywhere, even when they would help. The presentation of determinism is unclear and inadequate.
4.     Metaphysical discussions of causality inevitably make claims about what people would say without any consideration of what people do say. The extensive psychological literature on causal judgement, some of which has interesting theories, is entirely ignored.
5.     And sometimes the author says exactly the opposite of what he means—slip of the keyboard?

Ok, nothing is perfect, there could be better textbooks, but this one is usable, which is to say, given the alternatives, outstanding.

Saturday, August 13, 2016

Recent Books on Causation, from the Really, Horribly Bad to the So-So to the Pretty Good

There are a bunch of books on causation recently. I expect to review them all here in due time. At least one is so bad that it does not deserve reviewing, let alone having been published, but at least there should be a warning somewhere. So here.


I. The Worst: Stephen Mumford and Rani Lill Anjum, Causation, A Very Short Introduction, Oxford, 2013

Causation is meant to be a quick introductory text surveying contemporary and historical views of causation. For an astute reader, it would be very quick, stopping at, say, page 12. Should in misplaced charity that reader venture on, she would find chapters badly organized, missing their targets (e.g., "finding causes" is reduced to an uninformative mention of randomized, controlled trials), historically uninformed, and terribly referenced. But, as I say, any reader on cortical alert would throw the book away around page 12. There, the authors address Russell's early argument that causes cannot be fundamental because causes are asymmetrical and the fundamental laws of physics are symmetrical equations.

Russell is wrong they say, because "equations have at least some directionality." Here is their argument:

"We say that 2 + 2 = 4, for instance, which is to say that each side is of equal sum. But it is less obvious that 4 = 2 + 2 insofar as 4 can also be the sum of 1 + 3. The point is that 2 + 2 can equal only one sum 4, whereas 4 can be the sum of several combinations (2 and 2, 1 and 3, 10 minus 6, and so on). And in this respect there is at least some asymmetry." (pp. 12-13)

Somewhere, in Norway or Nottingham, the transitivity of equality, and Russell's point, was missed. 

Then, in nice condescension, the authors write that 

"Second, Russell's account was based on his understanding of the physics of 1913. There have been a number of attempts by physicists to put asymmetry back into physical theory. One such notion is entropy, which is an irreversible thermodynamic property."

The last clause of the last sentence is a bit of nonsense: it's not the property that is irreversible, it's changes in it. But more importantly, the idea of entropy, and the word, had been in physics for about fifty years when Russell wrote.  In 1913, Russell didn't understand the physics of 1913, and neither, apparently, did the authors in 2013.


Wednesday, January 13, 2016

The Nonsense of "The Stone"

The New York Times occasional philosophy column, The Stone, has built a reputation for unilluminating heat, slovenly inference and wanton accusations.  Almost any column would do as an example. I will take a recent reflexive example, “When Philosophy Lost its Way” in the January 11, 2016 Times.

First, what way did philosophy lose?  The high moral ground, for one thing, say the Texan authors. Philosophers of yesteryear (before the 19th century) showed integrity and selflessness. Our contemporaries by and large do not.  The study of philosophy, in yesteryear, elevated those who pursued it.  Of old, philosophers were concerned with human functions and purposes. Now they are not. Philosophy was a quasi-priesthood, a vocation. Now it’s just a job. Philosophy of old was spread among the professions, the idle rich, etc. Now it’s confined to philosophy professors.

Second, how did philosophy lose its way?  It became part of the university.  That removed philosophers from “modern life.” (I wonder where the philosophy professors live who don’t: pay taxes, have illnesses, worry for their children, hold political views, fall in and out of love, get divorced, give to charities, etc. Maybe it’s North Texas.)  In the good old days, lots of people with different interests were philosophers, but after the 19th century they all became academics, lost their virtue and their connection with human concerns.  That’s the story.

Unlike the Texas philosophers, I am loath to defame the integrity or selflessness of contemporary philosophers. I have met a few really vile ones, but mostly they have seemed pretty ordinary folk on moral dimensions.  But I am not so sure that philosophers of old were selfless and notably different in integrity from their contemporaries. It reads to me as if the Texans have been taking The Apology as the common standard of philosophers before philosophers became professors.  Was Aristotle, who left a contentious democracy to educate the mad son of a monarch, selfless?  Was Plato, the Athenian aristocrat, selfless?  Moving up, what was selfless about Leibniz: did he sacrifice himself in some way for others?  Few characters in intellectual history seem less selfless or charitable than Hobbes and Newton, who saw personally to the mutilation of coin clippers. Integrity (and courage)? You won’t find it uncompromised in Locke, who contributed (albeit on tolerance) to the Fundamental Constitutions of Carolina, an oligarchy ruling over indentured servants that violated both letter and spirit of Locke’s Second Treatise (which treatise Locke made sure not to publish while he lived).

There are lots of examples of 20th century philosophers who acted with selflessness and integrity.  Bertrand Russell, who went to prison over his opposition to World War I; David Malament, who did the same over his opposition to the Vietnam War; Paul Oppenheim and Carl Hempel, who helped Jews out of Germany during the Third Reich; Albert Camus, who was part of the French underground. Philosophers not engaged with modern life? Read Philip Kitcher, read Daniel Dennett’s more recent works, read just about anything by Peter Singer. Are there no 20th century philosophers who were not professors? Alan Turing was one of the most influential philosophical writers of the 20th century—among other things of course. He held an academic position only in the last years of his life.  Camus was a journalist. Paul Oppenheim was a businessman. John von Neumann, who stimulated both the philosophy of quantum theory and computation, was a mathematician.  Russell spent most of his career outside of the academy. Lawrence Krauss, a physicist, is a metaphysician as well. 

What is true is that as universities spread and secularized, a lot more people became “philosophers” and a lot of them are very ordinary people with ordinary minds. The same is true of lots of disciplines I expect, say physics.

What is the authors’ remedy? Simple: philosophers should get out of universities. The authors teach at the University of North Texas.

Causal Decision Theory and Conditioning: a Primer

Standard Savage decision theory, as well as Richard Jeffrey’s alternative, addresses a normative problem for an odd doxastic condition.  An agent fully believes there to be:

·      a set of all of the available, mutually exclusive actions;
·      a set of exhaustive and mutually exclusive possible states of the world;
·      a set of consequences—outcomes—of each possible state of the world/action pair.

and the agent:

·      Has coherent degrees of belief in the possible states of the world;
·      Has utilities (or, in Jeffrey’s version, desirabilities) for the outcomes.

The normative question is which action the agent ought to take. The answer offered is the action, or one of them, that maximizes the expected utility, where the expectation is with respect to the degrees of belief in the states of the world.

From an ideal Bayesian perspective, what is essential is the distinction between actions and outcomes and their costs or values.  The ideal Bayesian knows which actions have the maximal expected utility. The states of the world are gratuitous.  Followers of Savage or Jeffrey in effect assume the agent can only obtain the expected utilities by calculating them using the specified states of the world and the probabilities of outcomes, given the various possible state of the world/action pairs.

What is odd is that no epistemological problem is considered about how an agent knows, or could know, or rationally assess, the possible states of the world and their probabilities, the possible actions, or the probabilities of outcomes effected by alternative actions in the several possible states of the world.  A thorough subjectivist such as Jeffrey would answer these questions: all that is relevant are the agent’s degrees of belief about actions, states of the world, and outcomes, and their desirabilities.  Epistemology reduces to observing, Bayesian updating, and rather trivial computation. Be that as it may, or may not, causal decision theory considers two kinds of complications.

1.     The agent believes that the action chosen will influence the state of the world.
2.     The agent believes that the state of the world will influence the action chosen.

This is already a conceptual expansion for the agent, to include causal relations and probabilities of actions.
In case 1, how should the agent take account of the belief that the choice of action will influence the state of the world?  For simplicity, first assume the outcome is a deterministic function of the action, a, and the state, s, of the world, and the utility is U(o(a, s)), where o is some function of actions and states.

Proposal 1:  Calculate the expected utility for each action as the sum over states of the world of the utility of each action in that state of the world multiplied by the probability of that state of the world given the action:

(1) Exp(U(a))  = Σs U(o(a,s)) Prob(s | a)

In Savage’s theory the last factor on the right hand side of (1) is just Prob(s).
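A minimal sketch of the calculation, with all numbers invented for illustration: formula (1) weights utilities by Prob(s | a), whereas Savage would use Prob(s), the same for every action.

```python
# Illustration of formula (1): expected utility of an action as a
# probability-weighted sum over states of the world.
# All numbers are hypothetical.
states = ["s1", "s2"]
actions = ["a1", "a2"]

# U(o(a, s)): utility of the outcome of action a in state s
utility = {("a1", "s1"): 10, ("a1", "s2"): 0,
           ("a2", "s1"): 5,  ("a2", "s2"): 5}

# Prob(s | a): degree of belief in each state, given the action.
# In Savage's theory this would just be Prob(s), independent of a.
prob_s_given_a = {("s1", "a1"): 0.3, ("s2", "a1"): 0.7,
                  ("s1", "a2"): 0.5, ("s2", "a2"): 0.5}

def expected_utility(a):
    return sum(utility[(a, s)] * prob_s_given_a[(s, a)] for s in states)

# Maximize: here a2 (EU = 5) beats a1 (EU = 3)
best = max(actions, key=expected_utility)
```

With these made-up numbers the action-dependence of the state probabilities matters: under Savage-style action-independent Prob(s) the ranking could come out differently.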

One “partition question” concerns whether the action that maximizes utility is the same depending on how the set of states is “partitioned.” Let S be a variable that ranges over some finite set of values, s1,…,sn.  A coarsening of S is a set S1 = {(s1 v … v sk), (sk+1 v … v sm), …, (sm+1 v … v sn)}, etc. A refinement is the inverse.

Coarsening can change the probability of an outcome on an action. Let S = {s1, s2, s3} and suppose S’ is a coarsening of S to {(s1 v s2), s3}. For all outcomes o and actions a, let o and a be independent conditional on s1, and likewise on s2 and s3, but let S not be independent of A.  Then for any outcome O:

P(O | a, (s1 v s2)) = P(O, a, (s1 v s2)) / P(a, (s1 v s2)) =

[P(O, a, s1) + P(O, a, s2)] / [P(a, s1) + P(a, s2)] =

[P(O | a, s1) P(a, s1) + P(O | a, s2) P(a, s2)] / [P(a, s1) + P(a, s2)] =

[P(O | s1) P(a, s1) + P(O | s2) P(a, s2)] / [P(a, s1) + P(a, s2)] =

P(a) [P(O | s1) P(s1 | a) + P(O | s2) P(s2 | a)] / (P(a) [P(s1 | a) + P(s2 | a)]) =

[P(O | s1) P(s1 | a) + P(O | s2) P(s2 | a)] / [P(s1 | a) + P(s2 | a)]

The probability distribution of O given the state (s1 v s2) in S’ varies as the conditional probabilities of s1 and, respectively, of s2 vary with the value of A they are conditioned on; so O and A are not independent conditional on the states of S’, though they are, by assumption, conditional on the states of S.
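A quick numerical check of the point, with invented probabilities: O is independent of the action conditional on each of s1, s2, s3, but S depends on A, and after coarsening s1 and s2 into one state the conditional independence fails.

```python
# Numerical check of the derivation above; all probabilities are made up.
# O is independent of A conditional on each fine-grained state si,
# but S is not independent of A.
p_s_given_a = {"a1": {"s1": 0.8, "s2": 0.1, "s3": 0.1},
               "a2": {"s1": 0.1, "s2": 0.8, "s3": 0.1}}
p_o_given_s = {"s1": 0.9, "s2": 0.1, "s3": 0.5}  # P(O | s), same for each action

def p_o_given_a_coarse(a):
    # P(O | a, s1 v s2), per the last line of the derivation
    num = sum(p_o_given_s[s] * p_s_given_a[a][s] for s in ("s1", "s2"))
    den = sum(p_s_given_a[a][s] for s in ("s1", "s2"))
    return num / den

# The coarsened state (s1 v s2) no longer screens off A from O:
# 0.73/0.9 under a1, but 0.17/0.9 under a2.
```

So conditioning on the coarsened state leaves O dependent on the action, exactly as the derivation says.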

For case 2, the results and the argument are similar.  The general point is an old one, Yule’s (on the mixture of records).

The partitioning problem does not apply to Savage’s theory: it makes no difference how the range of possible state values is cut up into new, coarsened variables.

So decision theory when the actions influence the states, or the states influence the actions, is up in the air: the right decision depends on the right way to characterize the states.  Various writers, Lewis, Skyrms, Woodruff and others, have proposed vague or ad hoc or infeasible solutions. Lewis proposed to choose the most specific “causally relevant” partition, which I take to mean the finest partition for which there is a difference, between elements of the partition, in the probabilities of outcomes conditional on actions. Skyrms objects that this is often unknowable, and proposes an intricate set of alternative conditions, which Woodruff generalizes. The general strategy is to embed the problem in a logical theory of conditionals, and entwine it with accounts of “chance” and relations of chance and degrees of belief, e.g., the principal principle. The general point is hard to extract.

When states influence actions, Meek and Glymour propose that there are two theories. One simply calculates the expected values of the outcomes on the various actions, as in Jeffrey’s decision theory; the other assumes that a decisive act is done with freedom of the will, represented as an exogenous variable that breaks the influence of the state on the act.

Appealing as the second story may be to our convictions about our own acts as we do them, or deliberate on what to do, it is of no avail when the actions influence the states, not vice-versa. For that case, one either knows the total effect of an action on the outcome, or one doesn’t, and if one doesn't, there is nothing for it except to know what the states are that make a difference.  One would think serious philosophy would have focused then on means to acquire such knowledge. One would be wrong.

Monday, August 25, 2014

Review of Philosophy of Science, 81, July, 2014

This issue of Philosophy of Science contains some good, some bad, some odd. It gives evidence that methodology in philosophy of science is pretty much in the doldrums or worse, while good work is being done producing economic models for various ends.


This is a very brief rehash of some history of probability, coupled with some remarks on ergodic probabilities, remarks that go nowhere. The piece seems oddly  trivial and unworthy of its distinguished author.  One has to wonder why it was published—or submitted.   Hypothesis: The author is eminent and a colleague of the editors. That sort of thing has happened before in Philosophy of Science, although not that I can think of under the current editors.  But one of the things colleagues should do for one another is discourage the publication of stuff that is trivial or bad in other ways. 

Ben Jantzen,


Likelihood has an apparent problem. Suppose you are weighing hypotheses h1 and h2. You know b. You learn e. Should you compare h1 and h2 by

p(e | h1, b) / p(e | h2, b)  or by p(e, b | h1) / p(e, b | h2)?

Which hypothesis is preferred may not always be the same on the two comparisons. Jantzen makes the sensible suggestion that which to use depends on whether you are asking about the extra support e gives to h1 versus h2 in a context in which b is known, or whether you are asking about the total support.  Jantzen’s point is not subtle, but the paper is well done and the examples (especially about fishing with nets with holes too large) are illuminating.
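A toy numerical case (all likelihoods invented) shows how the two comparisons can disagree: e alone favors h1, while e and b together favor h2.

```python
# Invented likelihoods showing the two ratios can rank h1 and h2 differently.
p_e_h1_b = 0.9   # p(e | h1, b)
p_e_h2_b = 0.6   # p(e | h2, b)
p_b_h1 = 0.1     # p(b | h1)
p_b_h2 = 0.8     # p(b | h2)

# Extra support from e, with b already known: 0.9 / 0.6 = 1.5, favors h1
incremental_ratio = p_e_h1_b / p_e_h2_b

# Total support: p(e, b | h) = p(e | h, b) * p(b | h)
# (0.9 * 0.1) / (0.6 * 0.8) = 0.1875, favors h2
total_ratio = (p_e_h1_b * p_b_h1) / (p_e_h2_b * p_b_h2)
```

The disagreement is driven by b: h2 makes the background far more likely, so the total-support comparison reverses the incremental one.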

Which reminds me of a deeper problem with likelihood ideas that seems not to be much explored: likelihood doctrine seems to imply instrumentalism.

Likelihood arguments are used not just to compare hypotheses but to endorse hypotheses, e.g., via maximum likelihood inference.  Consider two principles:

1.      Hypotheses addressing a body of data should be preferred according to the likelihood they give to that data.
2.      A hypothesis should not be endorsed if it is known that there are other hypotheses that are preferred or indifferent to it by criterion 1 above, especially not if there is a method to find such alternatives.

If the data is finite, the hypothesis just stating the evidence has maximum likelihood.  So some additional principle is required if likelihood methodology is to yield anything more than data reports.  The hypothesis space must somehow be restricted.

Try this:

3.      Only hypotheses that make predictions beyond the data are to be considered.

So suppose there are data e1…en, and consider some new experiment or observation e not in the data, but for which “serious” hypotheses explaining e1…en give some probability to the outcomes. Let the outcomes be binary for simplicity, so that a hypothesis h gives e the probability P(e | h).  Consider the hypothesis:

e1 & … & en & e, if max_h P(e | h) > max_h P(~e | h), and e1 & … & en & ~e otherwise

This hypothesis meets condition 3 and gives e (or ~e) a likelihood at least as great as that given by any alternative hypothesis.
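The underlying trick can be seen in a sketch (hypothetical data, a Bernoulli model standing in for the “serious” hypotheses): a hypothesis that simply conjoins the data assigns it probability 1, which no statistical model can beat.

```python
from math import prod

data = [1, 0, 1, 1, 0, 1]  # hypothetical binary observations e1..en

def bernoulli_likelihood(theta, xs):
    # probability the model assigns to exactly this data sequence
    return prod(theta if x else 1 - theta for x in xs)

# Best Bernoulli hypothesis: theta = sample mean; likelihood still < 1
theta_hat = sum(data) / len(data)
model_lik = bernoulli_likelihood(theta_hat, data)

# The "hypothesis" that just restates the data assigns it probability 1,
# so it weakly dominates every model on the likelihood criterion.
restatement_lik = 1.0
```

Hence the need for some restriction on the hypothesis space, which conditions 3 through 5 try and fail to supply.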

Ok, try this:

4. Only hypotheses that make an infinity of predictions are to be considered.

But the stupid pet trick above can be done infinitely many times. So try this:

5. The hypotheses must be finitely axiomatizable.

 Still won’t do, as (I think an easy adaptation of) the proof in http://www.jstor.org/stable/41427286 shows.

Lina Jansson


Both the thesis and the argument of this paper are either opaque or weird; it is difficult to see the warrant for publishing.  Her stalking horses are “causal accounts of explanation.”  On Strevens’s account, causal asymmetry is why X explains Y rather than the other way round (Dan Hausman had that idea earlier); on Woodward’s account, if X causes Y but Y does not cause X, then a manipulation of X changes Y but a manipulation of Y does not change X.  So far as I know, neither of them claims that all explanations are causal explanations. But a lot of them are.

Jansson’s argument seems to be as follows:

Leibniz held that Newton’s gravitational theory was not a causal explanation, because causal explanations require mechanisms and no mechanism was given for gravitational attraction. She reads Newton as “causally agnostic” about his laws, which seems to me a very long reach. He was agnostic (publicly) about the mechanisms that produce the laws, but not about whether the laws imply causal regularities: drop a ball and that will, ceteris paribus, cause it to take up a sequence of positions at times in accordance with the law of gravity.  But suppose, for argument, she is right; then what is the argument?

She writes: “Put simply, the problem of understanding this debate from a causal explanatory perspective stems from the reluctance, on both sides, to take there to be a straightforward causal explanation given by the theory.”  And, a sine qua non of a correct account of explanation is that it be able to “understand the debate.”

There is this oddity about universal gravitation and causation. If I drop a ball, that causes the ball to fall; the ball’s falling influences the motion of Mars (instantaneously, on Newton’s theory), and the change in the motion of Mars influences the course of the ball, also instantaneously. Immediate feedback loop. But Mars’s influence doesn’t determine the position of the ball after I drop it, and the position of the ball after I drop it doesn’t cause my dropping it.

Anyway, her point is different. Here is the form of the argument. 

Accounts S and W say Newtonian gravitational theory is causal.
Neither the creator of the theory nor its most prominent critic unequivocally said it was causal.

Therefore accounts S and W are false (or inadequate, or something).


Compare:

A: Chemical changes involve the combination or release of substances made up of elements.

Lavoisier said combustion involves combination with oxygen.
Priestley said combustion involves the release of phlogiston.

Therefore A is false.

The theory of probability specifies measures satisfying Kolmogoroff’s axioms.

Bayesians say probability is opinion.
Frequentists say probability is frequency.

Therefore the theory of probability is false.

Jansson’s “methodology” assumes that concepts of causation and explanation never change; that historical figures are always articulate, and never make errors of judgement in the application of a concept; and that if some historical figure would apply a concept only under restrictive circumstances (e.g., no action at a distance), an account of the concept must agree with that judgement or posit a new concept.  Individuation of concepts is a vague and arbitrary matter: are there the concept of causality, Leibniz’s concept of causality, Newton’s concept of causality, etc.?  On her view, so far as I can see, for every sentence about causal relations, general or specific, about which some scientists sometime have disagreed, two new concepts will be needed.  Not much to be learned from that.

Robert Batterman and Colin Rice
Revise and resubmit
Another essay on explanation (will philosophers of science ever let up on this?) whose exact point is difficult to identify. The authors write:

“We have argued that there is a class of explanatory models that are explanatory for reasons that have largely been ignored in the literature. These reasons involve telling a story that is focused on demonstrating why details do not matter. Unlike mechanist, causal, or difference-making accounts, this story does not require minimally accurate mirroring of model and target system. We call these explanations minimal model explanations and have given a detailed account of two examples from physics and biology. Indeed, minimal model explanations are likely common in many scientific disciplines, given that we are often interested in explaining macroscale patterns that range over extremely diverse systems. In such instances, a minimal model explanation will often provide the deeper understanding we are after. Furthermore, the account provided here shows us why scientists are able to use models that are only caricatures to explain the behavior of real systems.”
The idea seems to be that there are theories that find features, and relations among them, that entail phenomenological regularities, no matter the rest of the features of a system, and no matter whether the features in question are exactly exemplified in a system.  There are two examples: one from fluid dynamics, the other Fisher’s opaque explanation of the 1:1 sex ratio in many species, based on the equal effort required to raise male or female offspring but the differential average reproductive return to raising males if females are in excess, or females if males are in excess.  I don’t understand the fluid dynamics model, and Fisher’s requires a lot of extra assumptions and ceteris paribus clauses to go through (grant the equal cost of rearing male and female offspring, but imagine that one male can fertilize many females and there is a predator that prefers males exclusively), but never mind.

What I don’t understand about this paper is why most theories in the physical sciences don’t satisfy B and C’s criteria for a minimal model. Thermodynamics? The details of the molecular constitution of a system are largely ignored. Relativity? It doesn’t matter whether the system is made of wood or iron, the Lorentz transformations still hold; it doesn’t matter how the light is generated, its velocity is still the same. Newtonian celestial mechanics? Doesn’t matter that Jupiter is made of gas, Mercury of rock, and Pluto of ice, still the same planetary motions. Even theories that probe into the internal structure of a system are minimal with respect to some other theories. Dalton appealed only to the masses of elemental particles; that, and a few assumptions, yields the law of definite proportions. Berzelius added electrical forces between atoms, which were gratuitous for deriving definite proportions.

What is not clear in this paper is how B & C intend to distinguish minimal models from almost every theory that shows that a set of features, individual or aggregate, or approximations to such features, and related laws, of a kind of system suffice for phenomenological relations. That is what physical theories generally do. Their fluid flow example almost suggests that all that is required is an algorithm that generates the phenomena from (perhaps) measurable features of a system.  So, considering that example, the authors might have asked: when is an algorithm for generating the phenomena an explanation of the phenomena? They did not.

Dean Peters

Revise and resubmit

Peters' essay is useful in two respects. First, it treats the question in the title as turning on this: what parts of the data confirm what parts of a theory? That adds a little structure to the philosophical discussions of realism. And, second, it provides a succinct critical review of bad proposals to answer the question. Peters has his own answer, which is not obviously useful. Here it is:

“So, to pick out the essential elements of the theory under the ESSA, start with a subtheory consisting of statements of its most basic confirmed empirical consequences or perhaps its confirmed phenomenological laws. These, after all, are the parts of a theory that even empiricists agree we should be “realists” about. Further propositions are added to this subtheory by a recursive procedure. Consider any theoretical posit not in the subtheory. If it entails more propositions in the subtheory than are required to construct it, tag it as confirmed under the unification criterion, and so add it to the subtheory. Otherwise, leave it out. When there are no more theoretical posits to consider in this way, the subtheory contains the essential elements of the original theory.”

 The proposal as developed is insubstantial: “Consider any theoretical posit not in the subtheory. If it entails more propositions in the subtheory than are required to construct it” – what does “required to construct it” mean? 
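One way to make the complaint concrete: even a literal-minded sketch of the quoted procedure has to invent a meaning for that phrase. In the toy Python below (my reading of the passage, not Peters' text), "required to construct" is reduced to a bare count supplied from outside; everything turns on how that count is defined, and the procedure itself says nothing about it.

```python
def essential_subtheory(basic, posits, entailed_by, construction_cost):
    """Sketch of the ESSA recursion as quoted above (one possible reading).

    basic: confirmed empirical consequences (the seed subtheory)
    posits: candidate theoretical posits
    entailed_by(p, sub): members of sub entailed by posit p
    construction_cost(p): how many propositions are 'required to construct'
        p -- the undefined predicate the procedure leans on
    """
    subtheory = set(basic)
    remaining = set(posits)
    changed = True
    while changed:                      # recurse until no posit qualifies
        changed = False
        for p in sorted(remaining):
            if len(entailed_by(p, subtheory)) > construction_cost(p):
                subtheory.add(p)        # 'confirmed under the unification criterion'
                remaining.discard(p)
                changed = True
    return subtheory

# Hypothetical toy theory: posit P entails two data statements at cost 1,
# posit Q entails one data statement at cost 2, so only P counts as essential.
result = essential_subtheory(
    basic={"e1", "e2", "e3"},
    posits={"P", "Q"},
    entailed_by=lambda p, sub: {"P": {"e1", "e2"}, "Q": {"e3"}}[p] & sub,
    construction_cost=lambda p: {"P": 1, "Q": 2}[p],
)
print(sorted(result))   # ['P', 'e1', 'e2', 'e3']
```

The loop is trivial; all the philosophical weight sits in `construction_cost`, which is exactly the review's complaint.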

In criticizing other proposals, Peters appeals to logical consequences, and proceeds with a distinguished set of “posits”—i.e., axioms.  Hold him to the same standard. Theories can be axiomatized in an infinity of ways. We need an account of the invariance of the result of the procedure—whatever it is—over different axiomatizations, or an account of “natural axiomatizations” and warrant for using them exclusively. The work of Ken Gemes and Gerhard Schurz is relevant here.  So it seems to me that Peters has an idea—conceivably ultimately a good idea—that he did not do the work to make good on.
Roger DeLanghe


This is a very nice essay providing a simple economic model in which there are balancing incentives for scientists either to adopt and contribute to an existing theory or to propose a new one. There is lots that might be done to expand the picture toward greater realism, and it would be nice if those pursuing Kitcher's original idea assembled some relevant data.

Marius Stan

Unity for Kant’s Natural Philosophy

I have no opinion about this essay, which is on how Kant might have sought, although he did not, synthetic a priori grounds for Euler’s torque law. Nor do I see why anyone should care. Clearly, some do.

Carlos Santana


This well-argued and lucid essay shows that there is a model in which agents with ambiguous signaling (under replicator dynamics) can invade a population of unambiguous signalers, but not vice versa. Despite the considerable empirical evidence the author (a graduate student at Penn) gives for the insufficiency of other explanations of the frequency of ambiguity in human and animal communication, I am worried by the following thought. We expect the evolution of language, or at least of signaling, to have gone from the very ambiguous to the more precise; that is what syntactic structure and an expanded lexicon afford. So if signaling by ambiguous strategies cannot be invaded by "standard" (i.e., perfectly precise) strategies, how did more precise, if still somewhat ambiguous, signaling systems evolve? It strikes me that the author may have proved the wrong result.
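The invasion asymmetry at issue is easy to exhibit in general form. The discrete replicator sketch below uses an entirely hypothetical 2x2 payoff matrix, not Santana's model: the numbers are chosen only so that ambiguous signalers do strictly better in any population than precise signalers do, so rare ambiguous types spread while rare precise types die out.

```python
# Discrete-time replicator dynamics for two signaling strategies.
# Payoffs are hypothetical, chosen only to produce one-way invadability:
# ambiguous (A) earns 3 against A and 2.5 against precise (P);
# P earns 2 against either. So A invades P, but P cannot invade A.
payoff = {("A", "A"): 3.0, ("A", "P"): 2.5,
          ("P", "A"): 2.0, ("P", "P"): 2.0}

def run(x, steps=200):
    """x = frequency of ambiguous signalers; returns the final frequency."""
    for _ in range(steps):
        wA = x * payoff[("A", "A")] + (1 - x) * payoff[("A", "P")]
        wP = x * payoff[("P", "A")] + (1 - x) * payoff[("P", "P")]
        x = x * wA / (x * wA + (1 - x) * wP)   # replicator update
    return x

rare_ambiguous = run(0.01)    # A starts rare and takes over
rare_precise = 1 - run(0.99)  # P starts rare and vanishes
print(round(rare_ambiguous, 3), round(rare_precise, 3))
```

On these numbers the review's worry is also visible: once ambiguity fixes, nothing in the dynamics lets a more precise variant get a foothold, so some further mechanism is needed for precision to evolve.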