Sunday, October 26, 2014

Biased science

Most people think of bias in personal terms, but in science the most pernicious forms of bias are institutional, not personal. They are not, in other words, the result of rogue scientists fudging their findings to support their pet theories. Rather, they are the result of biased processes for publishing scientific findings that are, in fact, perfectly legitimate.

The key point to understand is this: For any scrupulously conducted scientific study or experiment, there is always some chance that its findings are wrong. Reporting bias and publication bias are, in effect, institutional preferences for publishing positive or striking results, while thousands of studies that find no such results never see the light of day. Both forms of bias are rampant in science, and their causes are many.

The social sciences suffer from major reporting bias because most negative results go unreported. Franco et al. (2014) conclude that, of 221 survey-based experiments funded by the National Science Foundation from 2002 to 2012, two-thirds of those with results that did not support the tested hypothesis were never even submitted for publication. Strong results were 60% more likely to be submitted and 40% more likely to be published than null results. Only 20% of studies with null results ever appeared in print. (See graphic here.)
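The arithmetic of such filtering is easy to simulate. Here is a toy sketch (my own illustrative numbers, not Franco et al.'s data): every study below tests a true null effect, yet the "published" subset looks strong.

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.0  # every hypothesis below is actually false
SE = 1.0           # standard error of each study's estimate
N_STUDIES = 1000

# Each study produces a noisy estimate of the (null) effect.
estimates = [random.gauss(TRUE_EFFECT, SE) for _ in range(N_STUDIES)]

# The filter: only "statistically significant" results (|z| > 1.96) get published.
published = [e for e in estimates if abs(e / SE) > 1.96]

print(f"published {len(published)} of {N_STUDIES} studies")
print(f"mean |effect|, all studies:    {statistics.mean([abs(e) for e in estimates]):.2f}")
print(f"mean |effect|, published only: {statistics.mean([abs(e) for e in published]):.2f}")
```

The published record reports only the extreme tails, so the average published "effect" is far larger than the true effect of zero.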

It is not much better in clinical studies. Reporting bias leads to overestimating or underestimating the effect of a drug intervention, and it reduces the confidence with which evaluators can accept a test result or judge its significance. For any medicine or medical device regulated by the FDA, posting at least basic results to ClinicalTrials.gov within one year of study completion is mandatory, but compliance is low. A 2013 estimate puts the failure to publish or post basic results at over 50%, and of 171 registered but unpublished studies completed before 2009, 78% reported no results at all.

Reporting bias infects the evidence evaluation process for randomized controlled trials (RCTs), the basic experimental design for testing scientific hypotheses. That RCTs have limits is well known. Each requires a large number of diverse participants to achieve statistical significance. Often the random assignment of participants or sufficient blinding of subjects and investigators is not feasible, and many hypotheses cannot be tested at all for ethical reasons. For instance, giving sham or ineffective treatments to seriously suffering patients harms those who might otherwise benefit, and we shouldn’t run an antisocial-behavior RCT in a simulated prison environment just to get accurate data. Even when RCTs are ethical and well-designed, the critical scrutiny of experts remains crucial, since some risk of bias is always present.

Peer review assesses the value of RCTs, but the effectiveness of this process is compromised when relevant data are missing. Without effective peer review, we consumers of science and its applications have no coherent reason to believe what scientists tell us about the value of medical interventions or the danger of environmental hazards.

Failing to share, publish, or make accessible negative results has numerous bad consequences. Judgments based on incomplete and unreliable evidence harm us, and we probably accept many inaccurate scientific conclusions. Ioannidis (2005), for example, contends that reporting bias results in most published research findings being false.

Reporting bias harms participants in studies who are exposed to unnecessary risks. Society fails to benefit when relevant RCTs with negative results are excluded from peer-review evaluations. Researchers waste time and money testing hypotheses that have already been shown to be false or dubious; retesting drug treatments already observed to be ineffective, or no more effective than a placebo, squanders resources. And our scientific knowledge base lacks the defeaters that would otherwise undercut flawed evidence and false beliefs about the value of a drug. RCTs and the peer-review process are designed to detect these, but selective reporting makes them fail.

RCT designs are based on prior research findings. When publishers, corporate sponsors, and scientists are unaware of previous negative results and prefer positive results to negative ones, many hypotheses with questionable results worthy of further testing are overlooked. Since not all trials have an equal chance of being reported, datasets skew (erroneously) positive, and this affects which hypotheses scientists choose to examine, accept, or reject.

Mostly positive results in the public record make a drug with small or false-positive effects appear stronger than it actually is, which in turn misleads stakeholders (patients, physicians, researchers, regulators, sponsors) who must make decisions about resources and treatments on the basis of evidence that is neither complete nor the best available. Studies of studies (meta-analyses) reveal this phenomenon with popular, widely prescribed antiviral and antidepressant medications. Ben Goldacre tells a disturbing story about the CDC, pharmaceutical companies, and antivirals.


A meta-analysis uses statistical methods to summarize the results of multiple independent studies. Each RCT must be statistically powerful enough to reject the null hypothesis, i.e., the one researchers try to disprove before accepting an alternative hypothesis. Combining RCTs into a meta-analysis increases the power of the statistical test and can resolve controversies arising from conflicting claims about drug effects. In separate 2008 meta-analyses of antidepressant medications (ADMs), Kirsch et al. and Turner et al. find only marginal benefits over placebo treatments. When unpublished trial data are added back to the dataset, the great benefit previously reported in the literature becomes clinically insignificant. This is disturbing news: for all but the most severely depressed patients, ADMs don’t work, and they may appear to work in the severely depressed only because the placebo stops working, which magnifies the apparent effect of the ADM compared to placebo controls.
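The pooling itself is simple arithmetic. Here is a sketch of a standard fixed-effect (inverse-variance) meta-analysis with invented effect sizes (these are not Kirsch's or Turner's data), showing both how pooling shrinks uncertainty and how adding unpublished null trials deflates the apparent benefit:

```python
def pool(trials):
    """Fixed-effect, inverse-variance meta-analysis of (effect, se) pairs.
    Each trial is weighted by 1/se^2; returns (pooled effect, pooled SE)."""
    weights = [1.0 / se ** 2 for _, se in trials]
    pooled_effect = sum(w * e for (e, _), w in zip(trials, weights)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5
    return pooled_effect, pooled_se

# Invented numbers: three published trials reporting a benefit over placebo...
published = [(0.40, 0.15), (0.35, 0.20), (0.50, 0.25)]
# ...plus two unpublished trials with essentially null results.
all_trials = published + [(0.02, 0.18), (-0.05, 0.22)]

eff_pub, se_pub = pool(published)
eff_all, se_all = pool(all_trials)
print(f"published only: effect {eff_pub:.2f} (SE {se_pub:.2f})")
print(f"all trials:     effect {eff_all:.2f} (SE {se_all:.2f})")
```

The pooled standard error is smaller than any single trial's (that is the gain in power), and restoring the missing null trials cuts the pooled effect roughly in half.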

Even when individual scientists behave well, the scientific establishment is guilty of misconduct when it fails to make all findings public. In order for science to be the self-correcting, truth-seeking process it claims to be, we need access to all the data.

Scott Merlino
Department of Philosophy
Sacramento State

Sunday, October 19, 2014

The happiness of philosophers

Last week I promised to reveal some data to help answer three questions:
1. Are philosophers happy?
2. Are philosophers happier than non-philosophers?
3. Does practicing philosophy make people happier?
Here is a bit of information on the three studies that I am drawing from.

Study 1: Professionals (philosophers vs others).

From 2009-2013, thousands of people from around the world participated in the International Wellbeing Study, a multilingual online survey led by Aaron Jarden and his colleagues. I used international English-language philosophy email lists to encourage philosophers to take the survey. 96 philosophers took the survey in English. For this study, I compared these 96 philosophers with 96 random English-speaking non-philosophers. The philosophers were a broad mix of graduate students and all levels of professor. Essentially, this study compares very experienced philosophers with roughly equivalent non-philosophers.

Study 2: Upper Level Classes (philosophy majors in a philosophy class vs non-philosophy majors in a history class).

In 2013, I conducted a short paper survey on happiness in two very similar undergraduate summer classes: an upper-level history class and an upper-level philosophy class. There were 29 philosophy majors in the philosophy course and 63 non-philosophy majors in the history course. All of the philosophy majors would have completed 2-6 philosophy courses more than the history students. Matt McDonald input the data and helped with the analysis for this study. Essentially, this study compares somewhat experienced philosophers with roughly equivalent non-philosophers.

Study 3: Introductory Ethics Class (philosophy majors vs others).

Earlier in 2013, I also conducted a very similar short paper survey at the very beginning of a large introductory ethics course. 33 of the responding students declared themselves to be philosophy majors, while the remaining 130 reported not being philosophy majors. It is very unlikely that any of these students would have taken more than 1 or 2 philosophy classes prior to this one, although the philosophy majors are likely to have taken 1 or 2 already. The survey consisted of several questions about happiness chosen from the International Wellbeing Study. Essentially, this study compares inexperienced philosophers with roughly equivalent non-philosophers.

Hopefully it is clear that I have collected data that compare philosophers with non-philosophers at three (very rough) stages of the philosophical life-cycle: novice, apprentice, and professional.

Notes

Please note that all the questions were self-report questions with multi-option response scales. For example: “…please select the point on the scale that you feel is most appropriate in describing you.” “In general, I consider myself:” [scale of 1-6, where 1 is labelled “Not a very happy person” and 6 is labelled “A very happy person”]. For this blog entry, all response scales (and responses) were converted to 0-10 scales to make comparisons easier. Finally, ‘FoF’ on the figure means ‘frequency of feeling’. Analyses are preliminary. Not all differences are statistically significant.
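That conversion is just a linear rescaling. A minimal sketch (the function and its name are mine, not taken from the survey materials):

```python
def rescale(x, lo, hi, new_lo=0, new_hi=10):
    """Linearly map a response x from the scale [lo, hi] onto [new_lo, new_hi]."""
    return (x - lo) / (hi - lo) * (new_hi - new_lo) + new_lo

# A "6" on the 1-6 happiness item becomes 10; a "1" becomes 0.
print(rescale(6, 1, 6))  # → 10.0
print(rescale(1, 1, 6))  # → 0.0
print(rescale(4, 1, 6))  # → 6.0
```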



1. Are philosophers happy?

Yes. They scored above average on all of the relevant scales. The three groups of philosophers also claimed to be happy 38-44% of the time (on average; when also given corresponding questions about feeling unhappy and neutral). But above average results like this are true of the vast majority of English-speaking Westerners (see here for a detailed PDF report). So, nothing too surprising or interesting so far.

2. Are philosophers happier than non-philosophers?

No. The bottom half of the figure above shows the differences between philosophers and non-philosophers in each group on several measures of happiness. The figure shows that the philosophers in each of the three studies reported being less happy than the non-philosophers. As your eyes travel up the figure, you’ll also notice that the philosophers in each group reported being less satisfied with life, usually having worse self-esteem, and being less optimistic about their future. In fact, the only question that philosophers scored higher on is their reported belief that happiness is something that we “cannot change very much”. So, philosophers are less happy than non-philosophers, but we don’t yet know if philosophy is the cause of the difference.

3. Does practicing philosophy make people happier?

Because each of the three pairings of philosophers with non-philosophers roughly represents a different life stage of the “homo philosophicus” we can compare the differences at each life stage to see if more experience with philosophy exacerbates the problem. Sure, it would be better to track individual philosophers from cradle to grave, but that study would take a lot of money and time (my whole life or more!). Look back at the figure. In nearly every case, the differences between philosophers and non-philosophers increase as the amount of experience in philosophy increases.

Yikes! Compared to our relevant non-philosophy cohorts, we fall further and further behind in the happiness stakes! Note that novice philosophers are slightly less happy than their non-philosopher counterparts. So, philosophy seems to attract less happy people. But, the longer those novices practice philosophy, the more they fall behind those who do non-philosophical things with their time.

It’s not all bad, though. The ~10% difference in reported satisfaction with life actually decreases as philosophical experience increases. So philosophy might make us less happy, but more satisfied (but only just enough to catch up with farmers, postal workers, and school teachers).

The silver lining

But look again. As philosophical experience increases, the reported importance of happiness decreases. Contra hedonism, we learn that happiness isn’t all that important. Which is lucky, because philosophers also believe less and less (comparatively) that our happiness is the kind of thing we can change. Philosophers are also 10% less optimistic than non-philosophers. Given the ~10% optimism bias most people have (see this meta-analysis), that makes philosophers realistic. Philosophers track truths that are relevant to their lives better than non-philosophers. Indeed, it may be a tacit understanding of this more accurate epistemic position that affords philosophers the smugness to offset the hit to our life satisfaction caused by being less happy! You might say that philosophers have exchanged some happiness for some truth.

Socrates would approve.

Dan Weijers
Department of Philosophy
Sacramento State

Sunday, October 12, 2014

Does philosophy make us happy?

In Ancient Greece, well-born young men could decide what to do with their lives. Some chose to pursue pleasure via wealth. Others chose to chase power and glory through politics and war. A minority, including the likes of Aristotle, chose to pursue the good life through philosophy. Aristotle joined Plato’s academy. Other young men sought out philosophical training wherever they could find it. Women did not have as much choice. They could mostly be found in the garden of Epicurus.

In the same way that philosophers were the first mathematicians and scientists, they were also the first self-help gurus. Many Ancient Greek philosophers were and are still famous for their explicit application of philosophical insights to human life. Correspondingly, many young men sought out philosophy specifically because they desired guidance on how to live well.

What about these days? Self-help gurus are regarded with considerable suspicion by all but those whom they’ve saved from sweating the small stuff or from following 6 easy steps instead of 5. Fortunately, philosophers are no longer viewed as self-help gurus. Unfortunately, we have the opposite reputation—very honest and earnest, but as uplifting as a deflated balloon. Indeed, the amount of fun enjoyed at dinner parties seems to decrease markedly as soon as philosophers speak. Our general skepticism, liberal but unnecessary use of Latin, and penchant for pointing out the many other potential views on any topic tend to kill the joy. So, in our contemporary era of extreme specialization, philosophers are rarely sought out for their advice on living well.

But philosophers still do discuss the good life. And we are very smart (right?!). So perhaps people should come to us for advice on how to live well. These speculations, and my love of philosophy, have led me to a few questions:
1. Are philosophers happy?
2. Are philosophers happier than non-philosophers?
3. Does practicing philosophy make people happier?
What is happiness exactly? We could call it: a preponderance of positive over negative feelings and a sense of satisfaction with our lives. This is the preferred definition in happiness studies. Along quantitative hedonistic lines, I prefer to call it simply: a preponderance of good feelings over bad. (I reduced the satisfaction with life to good and bad feelings—getting what I want without any felt reward doesn’t seem to make my life better). Regardless, both of these definitions lead to a further question.
4. What is the role of happiness in the good life?
While the first three questions are relative newcomers for me, this last question has shaped much of my research. Answers to 4 abound in ancient and contemporary philosophy. Interested readers can pursue this question further here. Importantly, most theories of the good life afford a prominent position for happiness in their hierarchies of value. Happiness might share first place with truth and friendship, or perhaps with beauty and virtue. Happiness might not be the ultimate good, but it’s certainly worth inviting to the party.

In next week’s Dance of Reason entry, I will report on 3 studies that involved asking philosophers and non-philosophers about their happiness. The studies cover participants from many nations and philosophers at all levels—from first year student to full professor. I’ll use data from the studies to answer questions 1, 2, and 3 above. But, I know that data is not always as convincing as a good anecdote. So, I’ll share my experience and opinion now.

I love philosophy, but it has not made me any happier. It may have even forcibly popped some comfortable bubbles… bubbles that can’t be re-inflated because the pointy arguments responsible remain stuck in my brain. The only exception to this is meat-eating. Moral philosophy forced me to do away with it. Luckily my personal greed finally overcame the argument responsible. That argument is still sharp and pointy, and it’s still stuck in my head. I’ve learned to ignore it most of the time. But, I’d be happier without that occasional guilt.

To the extent that I am happy, it’s because of my optimistic disposition, my ability to selectively focus my attention, and the fact that nothing very bad has ever happened to me.

So, philosophy hasn’t made me happy. But I do appreciate that I have such a meaningful job. Philosophers can effect real change in the minds of people who will go on to impact the lives of thousands or millions of others. But, I don’t suppose that is unique to philosophy. Any kind of teacher could do something similar. And many professions can bring about positive change in the world. Consider also being a parent. Parents have a smaller audience (possible exception of Prof. DiSilvestro), but they have a deeper impact.

But, one anecdote does not make a complete proof. So, I call on all philosophy students, faculty, and philosophically-inclined bystanders to help me out. Please share your stories and views in the comments section. Some readers have dedicated all of their working lives to philosophy. Others are planning on doing so. Still others are planning on not doing so! How do you think your decisions have and will affect your happiness and/or your wellbeing? I’d particularly like to hear from students because most of us faculty have committed to philosophy in a way that would create extreme cognitive dissonance for us if we thought philosophy made us into the miserable grouches we are today!

Dan Weijers
Department of Philosophy
Sacramento State

Sunday, October 5, 2014

What crazy idea in philosophy do you think should be taken more seriously?


Kyle Swan: Open borders

People widely regard open borders as, not just crazy, but an insidious, masochistic attempt to bring down America.

But economists estimate that international barriers to labor mobility waste roughly an entire Earth’s GDP, about $75 trillion, every year. This is mostly due to labor’s place premium: grab, say, a Cambodian construction worker, plop him down in the US, and his earning potential increases roughly eightfold. No one thinks that implementing an open borders regime would be entirely smooth and seamless, but it seems like $75 trillion could offset a bunch of potential problems and disruptions.

Notice also that permitting labor mobility isn’t charity. It isn’t charity to stop preventing people from trying to improve their lives. Not being cruel to others is the least we expect from each other. It isn’t our fault that Cambodians are very, very poor. But it is our fault that we keep them poor by preventing people from offering them jobs that pay 8 times better than the ones they have.

This is the point of Michael Huemer’s (Colorado) Starvin’ Marvin thought experiment. Many think that it’s morally worse to harm than to fail to confer a benefit. Even if that’s right, it’s obvious that closing borders is an instance of the former rather than the latter.

It was really, really bad when US law permitted employers to discriminate against women and minorities. But these days US law requires employers to discriminate against people with the bad luck to have been born in poor countries.


Patrick Smith:  There are no ordinary objects

Here’s a properly insane idea: there are no ordinary objects. No rocks, no trees, no books, no Buicks. How on earth could we think this? The notion that there are none of the ordinary objects that constitute much of our perceptual experience seems to be the result of extreme delusion. But how insane an idea is it?
Let’s consider three propositions:
(1) There is at least one book here. 
(2) If something is an object (like a book), then it is composed of a finite number of atoms (but many, many of them). 
(3) If something is an object like a book (composed of a finite number of atoms, but many, many of them), then removing one or two atoms will not change the fact that there is at least one book here.
This is an example of an inconsistent triad: three propositions that cannot all be true together. If (3) is true, and you apply it over and over, one atom at a time, you end up having to say that there are no atoms left in the situation and yet there is still a book. Since this is absurd, something has to give.
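The mechanical character of the reasoning is easy to see in a toy sketch (my own crude encoding of the three propositions, nothing more):

```python
atoms = 1_000_000   # premise (2): the book is composed of finitely many atoms
is_book = True      # premise (1): there is at least one book here

# Premise (3): removing one atom never changes whether a book is here,
# so is_book stays True at every step of the loop.
while atoms > 0:
    atoms -= 1

# We are now committed to a "book" composed of zero atoms.
print(atoms, is_book)  # → 0 True
```

Iterating premise (3) grinds the atom count down to zero while (1) stands untouched, which is exactly the contradiction with (2).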

Now, (2) looks like science and (3) looks obvious, right? So, to preserve consistency, (1) has to go. And books are the same sort of thing as rocks, trees, and Buicks. So, there are no ordinary objects.


Brad Dowden:  You can be of two minds

Not figuratively. Literally. And be in two places at once. A crazy idea.

This crazy idea needs to be taken more seriously because it is implied by rejecting the idea that you are an Ego Object thinking inside your brain. That classical “you” is an illusion. The new you is not some brain add-on, as if otherwise you would be a philosophical zombie.

To appreciate the implication, imagine that Derek Parfit invents a teleportation machine in his ontology laboratory. You step into his machine in Oxford, where the machine creates an atom by atom blueprint of your body, then sends this information by radio to Paris. A physical copy is created ten minutes later in the Paris machine using Paris atoms. Meanwhile only a pile of dust leaves the Oxford machine. You now exist in Paris.

But suppose the same information that is sent to Paris is also sent to a St. Petersburg machine. Two yous are created in these two machines simultaneously. The two yous start having different thoughts, and soon are no longer identical, but they’d both be you. At the moment of creation, you begin being of two minds and being in two places at once. This is double survival after annihilation, not no survival. From the first-person perspective, they’d both know they are you in just the way that you can know that you are you after a night’s sleep, without yet opening your eyes and looking into the mirror.


Emrys Westacott: The goal of philosophy is not truth

The pursuit of truth is the form that philosophical activity takes, but truth is not its end or purpose.

We assume that the goal of philosophy is truth because Plato said so, because science pursues truth and we’d like to be respected the way scientists are, and because the Truth is generally assumed to be a good thing–it hangs out with the Good and the Beautiful, “sets you free”, etc.

But the real purpose and value of philosophy is to be a medium through which a culture reflects upon itself. It shares this function with literature, film, and the other arts, as well as the social sciences. Methods vary between and within disciplines, but each discipline contributes to an endless, ongoing conversation about matters concerning humanity: how we conceive of ourselves and our activities, how we relate to nature, how we relate to each other, as well as normative variations on these questions. The great conversation may sometimes produce beneficial practical consequences, but its primary value is that it is enjoyable in itself and deepens the reflective dimension of human existence.

This view of philosophy should be taken more seriously because it might help assuage the anxiety philosophers feel over the fact that they aren’t scientists making well-defined contributions to the store of human knowledge. It’s crazy because even in the act of advancing it one seems to be announcing the discovery of an important truth.


Randy Mayes: False knowledge

The traditional analysis of knowledge as justified, true belief had a great run, but it lost its mojo in the late 20th century. No widely accepted analysis has emerged to take its place. Today there is much skepticism concerning the justification condition, and some concerning the belief condition. Few philosophers, however, dispute the truth condition. "Jane knows X, and X is false" is widely condemned as crazy talk.

There is a familiar sense of the term knowledge, regarding which such statements are incoherent. However, the familiar sense of a term is not of singular interest in philosophical inquiry. Human understanding of the world grows because we permit familiar meanings to change. One important mechanism of conceptual change is naturalization, which typically occurs in the service of scientific inquiry. Roughly, we appropriate a term whose ordinary meaning is informed by our sense of how the world ought to be, and tweak it to be useful in understanding how the world is.

Today, in disciplines like information science and artificial intelligence, researchers study knowledge as a natural phenomenon, not a normative concept. They seek to explain how knowledge is acquired, preserved, transmitted and consumed. From this perspective, it is unwise to stipulate that all knowledge is true because it is an open empirical question whether the presence or absence of truth has explanatory value. Knowledge, is, roughly, usable information. Some of it may be true, much of it clearly is not. Philosophers are crazy not to take this concept of knowledge seriously.


Scott Merlino: Supernaturals are superfluous

Descriptions and explanations do not need actual supernaturals to make sense out of what we observe, feel, think, do, or say. This is a rational reason for not believing that angels, demons, faeries, ghosts, gods, vampires, werewolves, witches, or zombies exist. Doing philosophy (logic, epistemology, metaphysics, ethics) demonstrates this. Have you got a proof that shows that God is necessary? Well, Norman Malcolm (1960) has an ontological argument that shows the opposite.

What makes this a crazy idea is that most people believe in supernaturals (Harris Poll, 2013). Supernaturals are non-physical, non-mental, non-sensible agents that are unconstrained by spacetime or natural laws. Angels visit, ghosts haunt, devils make us sin, gods make and destroy worlds.

We should take this view more seriously because dismissing it blinds us to fundamental incompatibilities between religion and science. They disagree about the necessity of a divine creator: Either an intelligent creator must exist, given evidence, or it is not the case that one must exist, given the same evidence. They disagree about how best to explain, say, amazing diversity and complexity in nature. Theists think one cannot explain it without the god of Abraham, Isaac, and Jacob, but cosmology and evolutionary theory show how one can. Each worldview cultivates contrary attitudes about common sense, traditional beliefs, and novel assertions: accept claims on faith, regardless of evidence, or believe only those claims grounded in testable evidence. Supernaturals, not being amenable to investigation, could be imaginary and we would not notice any difference.


Tom Pyne: Formal causes are real

It’s crazy to think that after 400 years we should see the revival of formal causes. Nonetheless, I think we will – and should.

Aristotelian natural philosophy appealed to the substantial form of the object and the capacities its substantial form bestows. Thus, a medieval Aristotelian explanation of why a material object continues in motion after it leaves the thrower’s hand involves the reception of an impressed form (impetus) which continued it in motion. In short, an acceleration, whose effects were describable by the Galilean formula ‘f = ma.’

Galileo was consciously anti-Aristotelian. He produced algebraic formulas which tell how much mass or acceleration is required to produce a given force, but offer no account of why.

Philosophers of science drew the wrong lesson: since the great increase in scientific knowledge occurred after the breakdown of the Aristotelian synthesis, that synthesis prevented it. Post hoc, ergo propter hoc. On the contrary, medieval natural philosophers had the resources to produce the scientific revolution – and came close to doing so.

Instead of substantial forms we have ‘laws of nature,’ and centuries of debate over their force. Since they’re inductive, what confidence do we have about the future? Since they’re idealizations, they’re (strictly speaking) false. What’s with that? Do metaphysical cops enforce the law of gravity?

The capacities and powers substantial forms bestow on their objects are dispositional. In suitable circumstances the disposition is manifested, otherwise not. And if not no ‘law’ is broken. There is no natural necessity to cause problems.


Russell DiSilvestro: First over third

Your first-person perspective sometimes has more legitimate epistemic authority for you than any combination of third-person perspectives—even in limit cases where it’s you versus your time’s unanimous scientific opinion.

Crazy, ain’t it? After all, some lunatics say as much: “don’t bother me with scientific or other “facts”; I just know I’m right, period.” Perhaps each of us can be gripped by a dogma that we think we know—perhaps via introspection—to be the truth about some matter. This happens in most areas of philosophy, including ethics—think of your strongest (a-)moral intuition—and philosophy of religion—think of your most vivid (anti-)religious experience. Silly subjectivism!

More seriously, sometimes your first-person perspective may rightly trump all third-person ones—combined. Forget what the crowd on the street all says; you—and only you—saw the whole traffic accident unfold from the window above the intersection. Even an equatorial tribesman who’s never seen or heard of frozen water should believe in it when his own lyin’ eyes are staring at an ice cube. (‘Course, same goes for when he hears a third-person report about ice by a trusty Scotsman—even one named David Hume.)

Also, isn’t the third-person edifice of empirical natural science built entirely of…first-person stones? Of scientist observations from individual experiments? Isn’t trying to do such science without first-person perspectives like trying to make bricks without straw (while building a pyramid)? The authority of first- over third- may be that of vine over branches.


Dan Weijers: Quantitative prudential hedonism is our best theory of prudential value

Quantitative prudential hedonism—all and only pleasure is intrinsically good for us (and the opposite for pain), and the value of pleasure or pain felt at any moment is dictated solely by the intensity of the pleasure/pain. 

Some implications: 
a) the good life is a life with many pleasures and few pains; 
b) non-pleasures that seem good for us are only good to the extent that they lead to pleasure for us;  
c) events and experiences that do not increase our pleasure or decrease our pain cannot be good for us; 
d) the source or “quality” of the pleasure does not affect its value, only the amount of pleasure affects value.

Quantitative prudential hedonism is thought to be crazy because of another of its implications: connecting to a flawless machine that ensures a constant and intense feeling of pleasure BUT NO OTHER EXPERIENCES AT ALL is the best thing that anyone could do to further their own welfare. To most people, this sounds like crazy talk!

But how are we to know what being attached to such a machine would be like? J.S. Mill would have us believe that we’d be in the best position to judge the comparative value of our life and this machine life if we had experienced both. But no one has experienced a life of constant pleasure. So how do we know it won’t be super great?! And remember, any reason you give, I will try to make irrelevant by reducing it to pleasure and pain.

Sunday, September 28, 2014

In defense of causes

Students in Early Modern Philosophy seem shocked to learn that the scientific revolution beginning with Galileo was in effect an attack on the notion of causality. They assume that scientific explanations are causal explanations. Appeals to causality, however, did not figure in early modern science. Nor (while common in ordinary affairs) do they figure in our contemporary explanatory practices, whether inside science or out.

Scientific laws make no mention of causes, for instance. Galileo’s law for a freely-falling body

d = ½gt²

tells us how far the object has fallen in a given number of seconds. The value for ‘g’ – 32.2 ft/sec² – cannot be said to ‘cause’ the object to fall. (I propose a way to say that it does.)
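The point is easy to see if you run the law yourself. Here is a minimal sketch (my own illustration, not from any physics text; the two-second example is my assumption):

```python
# Galileo's law of free fall: d = (1/2) * g * t^2.
# The law describes *how far* a body falls; it names no cause.

G_FT_PER_S2 = 32.2  # acceleration near Earth's surface, in ft/sec^2

def fall_distance(t_seconds: float) -> float:
    """Distance fallen (in feet) after t seconds, ignoring air resistance."""
    return 0.5 * G_FT_PER_S2 * t_seconds ** 2

# After 2 seconds: 0.5 * 32.2 * 2^2 = 64.4 ft.
print(fall_distance(2.0))  # 64.4
```

Nothing in the computation picks out anything as the cause of the fall; g is simply a constant in a functional relation.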

In the ‘Deductive-Nomological’ model, logical deduction takes the place of causality. You derive the phenomenon in question as a logical conclusion from general laws plus a statement of initial conditions. Indeed, its advocates counted it a virtue that the model avoided issues of causality.

Increasingly, scientific explanations take the form of statistical correlations, leaving the question of causality entirely aside. The belief seems to be that, once you grasp the patterns of statistical variation, you have access to everything that it is possible – or necessary – to know.

So, are causal relations real features of the world or are they not? If they are, then explanations omitting them are spurious.

On the other hand, if they are not real, some difficult philosophical questions arise. A formulation of a law of nature is logically contingent. So if we take it to express a natural necessity then (barring the postulation of a Lawgiver) there will be no explanation why the particular law in question is the law. On the other hand if we accept a statistical correlation as the explaining formula, then the fact of the correlation itself cries out for explanation.

To be fair, three important considerations made denying a place for causality in explanation seem a reasonable thing to do – one historical, one epistemic, the third conceptual.

First, causes formed a central feature in Aristotelian natural philosophy. It is easier now to see that the apparent incompatibility between Aristotelian and Early Modern forms of explanation arose from features of a particular historical situation; it isn’t logical or metaphysical. 16th century Aristotelian natural philosophy was routinized and degenerate, but in the 14th century it was still very much alive and fruitful in results. Natural philosophers were mathematizing Aristotle’s principles of moving bodies. William of Heytesbury, a member of the ‘Mertonian Calculators,’ derived the mean speed theorem usually attributed to Galileo. Jean Buridan improved on Aristotle by postulating that a moving body possessed an impetus. This impetus was proportional to the object’s weight, not identical to it as in Aristotle. It was an enduring property, so it did not require continued action to maintain it. More importantly, in Buridan’s application of impetus to freely-falling bodies it causes a change in the momentum of the body: that is, like Galileo’s factor g, it was an acceleration. To be sure, on Buridan’s account objects with more mass should fall faster. But this was also true in Galileo’s earlier Pisan dynamical theory (1589), which was not a significant improvement on Buridan. It’s now reasonable to claim that the resources necessary to produce the Scientific Revolution were available to thinkers within the Aristotelian synthesis. Thus it is merely a contingent historical fact that the New Science makes no appeal to causes.

Second, it is reasonable to suppose that, even if there are causal relations, we have no independent cognitive access to them. All we can hope for are empirically discoverable natural laws or statistical correlations. However, it is also reasonable to suppose that we do have such access. The view that we don’t was of course codified in philosophy by Hume and has become one of the deep prejudices in philosophy. Pace Hume, we observe causes quite frequently. As John Searle points out, when a car backfiring makes you jump, you experience the causal relation: you don’t need to experience two backfires to get the connection. Wittgenstein’s advice to philosophers is particularly helpful here: “Don’t think, but look!”

Third, the prevailing debate on causality concerns whether it is a relation between events or states of affairs. However, this is a symptom rather than a cause of the modern avoidance of appeal to the relation. It is a logicizing of the relation, reducing it to a species of entailment. It leaves us unsure about such fundamental issues as whether it is even a temporal relation at all.

Are these considerations a sufficient excuse for continuing to avoid causality? I think not. If there is no fundamental conflict between the two styles of explanation, then causal explanations and the whole panoply of contemporary science can work together. Indeed they should.

The concept of causality I favor would make it not just a relation between events or states of affairs, but between individuals in a number of categories – including events and states of affairs. Abstractly, the properties of an individual N give it causal powers to affect, and to be affected by, other individuals. Those powers would be described dispositionally and functionally. Sometimes N’s causal powers result in effects on individual J and sometimes they don’t, depending on J’s own powers as well as features of the environment. 

So Galileo’s g does ‘cause’ an object to fall. It is a measure of an object’s disposition to accelerate. Accordingly, causes don’t ‘necessitate’. Air resistance could affect the distance the object falls in a given time. Unlike Hume, we could still say that N, though being affected, still has the disposition even when it is not manifesting it. Dispositions and functions can be said to be ‘realized’ by the micro-entities of standard science.

The advantage to this explanatory move is that the temptations toward instrumentalism and eliminativism so common in our present explanatory practices would be much diminished, if they do not vanish entirely.

Who’s with me?

Thomas Pyne
Department of Philosophy 
Sacramento State

Sunday, September 21, 2014

Explanation and illusion


This is a sequel to a post I wrote in June called "The explanatory reductio." The point there was this: Although an explanation is an attempt to understand an accepted fact, sometimes our inability to provide an explanation becomes a reason for rejecting the 'fact' instead. If you can't explain that beautiful creature in your bed, maybe it's not your bed. Here I offer the following observation, and provide you with some examples:

Many of the greatest intellectual insights in human history resulted from someone explaining a widely accepted fact as a grand illusion.

1. The moon illusion

As everyone knows, the moon is larger when it appears on the horizon than when it is high in the night sky. Sometime back in human prehistory ancient people wondered: Why does the moon grow and shrink like that? One evening, while sitting at his fire after a fruitless day of hunting, the famous cave philosopher Og had an epiphany.

Og say moon not get small. Moon only look small. Moon like prey, get small when run away.

Og was the first to articulate the appearance/reality distinction, an incredible leap in human understanding. Later astronomers accepted Og's view that the changing size of the moon is illusory, but they rejected his explanation. They believed all celestial bodies move in perfect circles, with the earth at the exact center of their orbits. Aristotle proposed a new explanation: the atmosphere acts as a lens, magnifying the moon's appearance. Wrong again, as can be easily shown. Cover the moon with a quarter held at arm's length while it is on the horizon, and cover it again when it is high in the sky. The same coin at the same distance from your eye covers it both times, so the moon's image has not changed size at all. There is still no accepted explanation of this phenomenon, and there probably will not be one until we understand which of several different illusions relating to relative size and distance our brain circuitry is falling for.

2. Planetary retrograde

The powerful idea that heavenly bodies move in perfect circles around the earth was dogged for centuries by the strange phenomenon of planetary retrograde. Long before Aristotle, astronomers from different cultures had observed that planets would sometimes start moving backwards. Apollonius and, later, Ptolemy explained the retrograde as a real phenomenon. Their theory of epicycles held that the planets did not move in simple circles, but in epicycles: each planet rides a smaller circle whose center travels along the larger orbit.


The epicycle is the smaller orbit, and at the bottom of the epicyclic orbit, the planet itself really does move backwards. Genius. And wrong. One of the great insights embedded in the heliocentric system proposed by Nicolaus Copernicus is that retrograde motion is an illusion that occurs when the earth either overtakes, or is overtaken by, another planet in its orbit.
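The illusion is easy to reproduce. Here is a toy sketch (mine, not Copernicus's) that assumes circular, coplanar orbits and rough textbook values for the orbital radii and periods of Earth and Mars:

```python
import math

# Toy heliocentric model: Earth at 1 AU with a 1-year period,
# Mars at about 1.524 AU with about a 1.881-year period.
# Watching Mars's apparent direction from the moving Earth, the
# apparent longitude sometimes *decreases* -- retrograde motion --
# even though neither planet ever reverses course.

def position(radius_au, period_yr, t_yr):
    angle = 2 * math.pi * t_yr / period_yr
    return radius_au * math.cos(angle), radius_au * math.sin(angle)

def apparent_longitude(t_yr):
    ex, ey = position(1.0, 1.0, t_yr)
    mx, my = position(1.524, 1.881, t_yr)
    return math.atan2(my - ey, mx - ex)  # direction of Mars as seen from Earth

# Sample the rate of change of apparent longitude over two years.
dt = 0.001
rates = []
for i in range(2000):
    t = i * dt
    d = apparent_longitude(t + dt) - apparent_longitude(t)
    d = (d + math.pi) % (2 * math.pi) - math.pi  # unwrap across +/- pi
    rates.append(d)

# Both prograde (positive) and retrograde (negative) intervals occur.
print(any(r < 0 for r in rates), any(r > 0 for r in rates))  # True True
```

The sign change comes entirely from the moving vantage point: each simulated planet circles the sun in one direction the whole time.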

3: Galilean relativity

Aristotle's physics accepted as real a variety of phenomena that turn out to be illusions. One of the most important - the basis of his categorical distinction between the heavens and the earth - is that rest and uniform motion are physically distinct states of a body. For Aristotle the natural state of an object is rest; it requires no explanation. For an object to move, and to continue moving, a cause is required. This illusion was finally shattered by Galileo (the most famous early adopter of the Copernican view) who realized that motion is not a property of an object at all; it is just an artifact of the chosen reference frame. Any object that can be described as moving in one reference frame can be described as motionless in another. Galileo's insight was ultimately codified in Newton's First Law of Motion.

4: Darwin's theory of evolution

What accounts for the design of the universe? As most philosophy students know, William Paley argued that, just as the evident design of a watch suggests the existence of a watchmaker, so the evident design of the universe suggests the existence of a universe maker. Even David Hume, who rejected the designer hypothesis as childish anthropomorphism, admitted that the design of the universe required an explanation - and that he didn't have one. Enter Charles Darwin. In the Origin of Species Darwin provided a theory of the emergence, not of design, but of the illusion of design. The illusion of design could, he showed, result from a process involving competition, reproduction and blind, natural 'selection'. Darwin's insight effectively completed the Copernican revolution, destroying the nearly universal belief that human origins and human capacities are beyond human understanding.

5: Continental drift

What Darwin did to the eternality of species, Alfred Wegener did to the immobility of continents. Until well into the 20th century every schoolkid was taught that these gigantic land masses were necessarily immobile. That uncanny fit between the west coast of Africa and the east coast of South America? Coincidence. The similar fossil distributions on adjacent coastlines? Strategically placed land bridges that have, sadly, vanished without a trace. Wegener's theory of continental drift, which hypothesized an ancient supercontinent called Pangaea, represented the immobility of the continents as an illusion. They simply moved too slowly for us to notice. Wegener's theory was widely and understandably ridiculed, for he could offer no plausible mechanism. But long after Wegener's death his view was mostly vindicated by the now universally accepted theory of plate tectonics.


These episodes are exemplary, not just because the thinkers were brave or creative enough to challenge orthodoxy, but because they succeeded in developing a model that explains both the illusion and the real underlying phenomenon.  Even most who achieve this are not ultimately successful. Plato agreed with Parmenides that the reality of the physical world is an illusion.  He explained it poetically as a shadow cast by the real world of the Forms. But the shadow was a metaphor drawn from the physical world, and the perfect world of Forms turns out to exist only in our imaginations. There are many other examples of the kind I have given above, historical and contemporary, successes and failures.  Perhaps our dancers will identify some of them.

G. Randolph Mayes
Department of Philosophy
Sacramento State

Sunday, September 14, 2014

Summer of discontent: Ruminations on a hundred years of the same-old, same-old

This past August 1st marked one hundred years since the beginning of World War I. What might have remained a squabble between an empire (Austria-Hungary) and a minor neighboring state (Serbia) set in motion the chain of mutual defense treaties that led to a world war – the first world war – the first of two world wars.

I’ve been hearing a lot this summer about airplanes, surface-to-air missiles, bombing raids, hostile crossings of borders into neighboring territory. This summer began with the tragic downing of Malaysian Airlines flight MH-17 over eastern Ukraine by, presumptively, Russian-backed pro-Russian Ukrainian separatists. It also saw the brutal and destructive Israeli response to the Palestinian response to the Israeli settlers’ response to the suspected Palestinian killing of three Israeli teenagers in June (which was itself a likely response to yet something else egregious done by Israel or Israelis, etc., etc., etc.). As this summer draws to a close, President Obama declared the US and its usual allies[1] will respond to the rise of a presumptive new Caliphate (ISIL) in what we call the Middle East with bombing raids wherever necessary. What links these events to World War I?

Irredentism and revanchism. These are 20th Century concepts many of us thought could safely be relegated to the surely-we’ve-outgrown-these-horrid-practices dustbin. Clearly, such relegation, and the optimism which supported it, was premature.

Irredentism refers to the practice of justifying the unilateral reshaping of borders according to the ethnic or nationalist connections between disparate populations contained across those borders. Russia’s annexation of Crimea is an excellent example. Ethnic Russians live there. Russia should extend to wherever there are ethnic Russians. Therefore, Russia should extend to include Crimea. But nationalist Ukrainians also reside there, so one nation’s solution to inadequate borders in irredentist terms recreates the very problem for those who are thereby severed from their own national community. The dissolution of the Austro-Hungarian Empire at the end of WWI and the expansion of the German nation ahead of WWII were both expressions of irredentist claims to national unity, liberation or reunification. Borders, ever again a problem.

Revanchism (best pronounced with a hint of French inflection) refers to the practice of justifying the reclamation of lost territory, in a kind of national avenging. The challenge with revanchism is that it relies on the fiction that there is an identifiable prior settled state of affairs which has been upset in the seizure of territory by a neighbour. It also relies on the fiction that such action can heal, compensate for, or unproblematically reunify wrongly severed peoples and places. Here, the Israel-Palestine conflict serves as a perfect example of revanchism in action and of the tragedy of its pursuit. On both sides, the motivation is to regain territory taken by others. For the Israelis, it is the ancient loss of Judea, Samaria and all the rest of Greater Israel. For the Palestinians, it is the more recent run of generations of mutual and reciprocal territorial incursion, occupation, annexation, and settlement that is the legacy of the UN’s effort to compensate a people for one tragedy (the Shoah inflicted upon the Jews in Europe by Germany) by imposing a parallel tragedy on another people (the Nakba inflicted by the UN carving up British Mandate Palestine to create Israel).  Not surprisingly both terms, Shoah and Nakba, refer to the same existential catastrophe of a people being destroyed by the aspirations of another.

What this exposes is, I think, the inherent fragility of national borders and the feeling of charade we play whenever we demand that borders should be here or there or none at all.  I think it also has to do with the appeal of appealing to democratic self-determination by peoples who cannot stand each other, never have, and don’t want to continue pretending that they do. There will always be some excluded “us” trapped unjustly behind the borders with “them”.

It’s not like this problem is uniquely a Russian and Ukrainian or an Israeli and Palestinian problem…. Pick a border from any map and there you will find that border contested by some political community. Neither is it a problem of states that have recently undergone dissolution, like the USSR or Yugoslavia, or resolution, like Germany. Well, there might be one border not in dispute… the US-Canada border (though some Americans still claim manifest destiny, and some Canadians still rue the 54-40 Agreement). Ok, perhaps two such borders… add the Czech-Slovak border, which was an internal regional border made into a national border creating two new states from the old.

Clearly borders are a problem, mere lines on maps, a collective fiction. But they are also lines in the collective imagination, in the definition of a people, and in the very real actions which either affirm belonging or sting with exclusion. In the nation state, we find the curious and increasingly unstable blending of a geo-political entity with an ethno-cultural entity and a healthy dose of nationalist ideology. A people in a place. A unified people in a unified place. The appeal is understandable, but the practice is troubling.

What makes a border worth respecting? As US Ambassador to the UN, Samantha Power recently scolded her Russian counterpart, “borders are not suggestions.”[2] But what are they, then, if not suggestions of limits, to some, and opportunity, to others? How to ensure they are not mere suggestions against an ill-tempered neighbour who believes there are compatriot nationals or fellow folk on the other side of the border clamouring for liberty or reunion and just a little bit of help?

As Cara Nine astutely asks, "If we don’t understand why territorial rights are justified in a general, principled form, then how do we know that they can be justified in any particular solution to a dispute?"[3]

This may just be the defining political question of the 21st Century. We’d better get better at answering it, lest this 21st Century become too much more a repetition of the 20th.

Christina Bellon
Department of Philosophy
Sacramento State




[1] Including the UK, Canada, France, and Germany among increasing numbers of others. Interestingly, Iran may join the coalition.
[2] US Mission to the UN, Briefing Room Statements, transcript of remarks delivered to the UN Security Council, April 13, 2014. Available at http://usun.state.gov/briefing/statements/224764.htm. Last Accessed 14 Sept, 2014.
[3] Cara Nine, Global Justice and Territory (Oxford University Press, 2012).