Wednesday, February 26, 2014

Science, technology, and the meaning of life


by Dan Weijers

From each of our perspectives, our own lives are extremely important. We compulsively check our phone or email accounts for that life-changing message (“did I get accepted or not?”), and we celebrate accumulating 1000 “friends” on Facebook (“I’m probably the most connected person I know!”).

Yet, from an objective, third-person perspective, e.g., the point of view of the universe, our lives seem to be meaningless. No matter what we accomplish in our lives, it seems like our actions will have no discernible impact on the universe as a whole. You may have already given up hope of being a famous singer or sports star. Maybe you are striving to be a scientist, so that you can cure cancer or discover a new renewable energy source. But, from the universe’s point of view, even these esteemed careers and achievements will not earn you a meaningful life.

Thomas Nagel and others have discussed this absurd contrast between how meaningful and significant our lives seem from the inside and how insignificant they appear from an objective viewpoint (a contrast usually referred to as the absurd). Nagel would have us embrace the irony of the situation and enjoy a cosmic giggle at our own expense. Of course, many religions provide explanations for our Earthly existence that resolve this absurdity. For example, eternal reunification with the creator of the universe might be offered as a reward for an Earthly life well lived.

The absurd, then, is only a potential cause for concern for non-religious people who cannot find humour in the meaninglessness their lives appear to have from an objective perspective. Leo Tolstoy, as recounted in his My Confession, was in this situation. Tolstoy was a beloved family man, famous author, and wealthy landowner. Despite these advantages, he became paralyzed by the objective meaninglessness of his life. He questioned the ultimate significance of his actions, but could not find any satisfying answers.

Based on a firm belief in science as the method for learning about the universe, Tolstoy was convinced that the universe would eventually die and that all humans and their legacies would be completely annihilated. As a result, he believed that it was impossible for him to leave his mark on the universe. In his words, and from his scientific outlook, he thought it impossible to connect our finite lives with something infinite or permanent.

Just when Tolstoy was about to abandon all hope of breaking out of his paralyzing depression, he realized that the vast majority of people did not share his dismal view of life. Recognizing that it was religious faith that allowed others to feel connected to something infinite, Tolstoy buried his earlier opinion of religion as “monstrous” and became a Christian of sorts. Fortunately for Tolstoy, this enabled him to break free from the grip of the absurd.

What I’d like to do here is propose a naturalist account of the meaning of life that could have provided Tolstoy with another option (and provides another option for anyone currently in the situation he was in). As such, this account is only intended to appeal to people who don’t believe in gods or souls and who find the absurd distressing.

I call the account Optimistic Naturalism, and it entails belief in these two principles:

Infinite Consequence: performing an action that has infinite consequences for life is sufficient to confer True Meaning on the life of the actor, if the actor finds those particular infinite consequences to be subjectively meaningful (in part) because they are infinite. 

Scientific Optimism: continual scientific and technological advancement might allow our actions to have infinite consequences for life in a purely physical universe.

Following Susan Wolf’s view of the important kind of meaning, I take True Meaning to be the meaning that arises from the right kind of connection between the subjective and objective points of view. For example, if I develop a technology that enables humans to avoid the death of our sun, and I find this meaningful partly because I believe it will help enable life to continue for infinity, then, if life does persist continuously, I will have lived a truly meaningful life.

Here is a brief defence of the two main principles.

Is having an infinite consequence really objectively meaningful? First, realize that most objective viewpoints are multi-subjective standards, which are unavoidably tainted by the socio-cultural values of the individuals involved. By taking our standard as the point of view of the universe, we can step back until all residue of subjective value has disappeared. Our finite lives and legacies become so small from this vantage point that they pale into insignificance. But, infinite consequences are not quite like this. No matter how far we step back, and no matter how distant the objective viewpoint is, infinite consequences will never vanish into insignificance. When all the values and finite consequences have disappeared into the distance, actions with infinite consequences remain, ineluctably influencing future events.

Is Scientific Optimism too optimistic? How can we avoid the big chill (when the universe effectively becomes inert)? One live theory in cosmology, Eternal Inflation, predicts that new parts of the universe will always bubble out from our existing one. If this theory is correct, then the right kinds of advanced technology might enable some form of life to escape into new parts of the universe whenever the existing parts are becoming uninhabitable and thereby persist for infinity. Of course, great advances in science and technology would be required to enable us to take advantage of these ‘bubbles’ in this way. But, until recently, humans couldn’t even fly, and now we can fly to space and back!

Dan Weijers
Philosophy Program
Victoria University of Wellington

Monday, February 24, 2014

Instrumental technology and the responsibility problem

by Arthur Ward

Some people view technology as simply a means to an end. The user has a set of goals and a technological artifact can help the user achieve those goals, for better or for worse, but does not alter or introduce new goals. Call this the instrumental view of technology. A difficulty with this view is that it contradicts our own experiences: working in a beautiful library can help one focus, holding a weapon can make one bolder, driving a powerful car can lead one to be reckless, and so on. Interfacing with technology, it seems, does sometimes have the power to change us in various ways.

An alternative view, call it the non-instrumental view, is that some technology is more than a means to an end, and can actually alter some of the ends that a user wants to pursue. Technology can change us. A difficulty with this latter view is that it appears to lessen the moral responsibility of a person who uses a piece of technology for an evil end. “It wasn’t entirely my fault,” they might exclaim, “I was influenced/lured/tempted/seduced by technology X!” Call this the Responsibility Problem.

Here I will argue that the Responsibility Problem can be dealt with, and the non-instrumental view adopted. I will look at two technologies whose controversies might gain clarity from this analytic scheme: guns and social networks. Both, I argue, are clear examples of non-instrumental technologies that can affect some users in a negative way, influencing them to act wrongly when they would not have erred in the absence of the technology. Recognizing this fact, I think, will not lead to letting evil-doers off the hook, but will facilitate much-needed precautions and regulations surrounding these technologies.

Guns

The stakes of the debate over guns are nicely laid out by Evan Selinger in his Atlantic article on the subject. As he describes, the instrumentalist view of gun technology is summed up by the familiar slogan “guns don’t kill people; people kill people.” This proclaims the ethical neutrality of guns, placing any blame for evil acts committed with a gun solely onto the person firing the gun and not the technology itself. Selinger argues that this position is untenable if we observe that people act more boldly, recklessly, or aggressively when holding a gun (at least some kinds of gun: if not a hunting rifle, perhaps a handgun or an assault rifle). I think Selinger is clearly right about this. This isn’t to say that merely holding a gun is enough to turn any decent person into a maniac; the claim is more modest than that: sometimes some guns in some cases impact some people in such a way as to affect their personality and alter their goals. I daresay it’s unlikely that George Zimmerman would have stalked and picked a fight with Trayvon Martin had he not been armed.

Social Networks

Looking at social networks brings forth a more pervasive example that many undergraduates have experienced: cyberbullying. There will always be bullies, but the technological advances of social networks like Facebook and Twitter have allowed for the proliferation of anonymous, venomous bullying that can occur 24/7 online instead of being limited to the “schoolyard.” There are good reasons to think that cyberbullying is such a problem specifically because of features of the technology, namely the ability to instantly communicate in an asynchronous way from a distance without getting visual or other sensory feedback from one’s actions. In other words: it’s easy to trash-talk someone when you don’t have to look them in the face. There is a mountain of research demonstrating that our sympathetic response as humans is highly sensitive to immediate feedback such as a smile, frown, or grimace. When this interpersonal connection is severed, through distance or anonymity, we become less sympathetic towards one another. The non-instrumental view of technology helps us see the import of this finding: Facebook is making some of us meaner! Some people who are normally kind and thoughtful can become colder and ruder online. This is an empirical claim, and research on it is in its infancy. Still, if we’re honest, many of us have caught this tendency in our own online behavior, and we surely recognize it in others.

The Responsibility Problem

Does the non-instrumentality of technology lead down a slippery slope away from personal responsibility and towards something like the Twinkie defense? I don’t think so. For one thing, I’m convinced it’s the correct view, and wherever it leads, we’ll just have to deal with that reality. But that aside, I don’t think we should worry about people dodging responsibility and foisting it on technology instead. While the external effects of technology can be powerful, the internal effects on our own personality and goals are usually very subtle, so much so as to often go unnoticed. “I didn’t realize I was speeding, I just felt kind of excited!” he exclaimed. In the vast majority of cases, the threat of technology to our free will is negligible. And note that if a technology did have a noticeably powerful effect on our will (consider a strong psychotropic drug), people would be very comfortable with lessened moral responsibility.

So, if we shouldn’t worry about the Responsibility Problem, what are the stakes of being an instrumentalist versus a non-instrumentalist in the first place? I think the answer is that non-instrumentalism should lead to cautious oversight, regulation, and education surrounding technologies such as guns and social networks. Their very minor effects on us can lead to enormous impacts further downstream, and for that reason it would be folly to take a laissez-faire attitude towards them. What exactly those regulations look like is obviously where all the action is, and I don’t touch that here. But to protect each other, especially those more vulnerable to the lure of some technologies, taking a “guns don’t kill people; people kill people” attitude is unwise and unsound.

Arthur Ward
Lyman Briggs College
Michigan State University

Friday, February 21, 2014

Knowing without caring: unmotivated psychopaths


by Lily Frank

When we think of psychopaths, what often comes to mind are Hollywood depictions of despicable murderers totally unmoved by the suffering of others. Indeed, the clinical diagnosis of psychopathy includes a long list of traits that coincide with “amoral” behavior that would fit a Hollywood script. Psychopaths are callous, lack empathy and remorse, lie frequently, are cunning and manipulative, abandon relationships, and are at high risk for criminal activity and violence. They are otherwise just like everyone else.

Psychopaths have been used to support philosophical theories about morality. One such view, expressed by David Hume and many others since, is that moral judgments involve emotions in some way. Roughly, the argument is that the moral failures of psychopaths are in an important way connected to their emotional deficits. Psychopaths do not empathize and have flat affect, and this, it is supposed, is why they also fail to act morally. This theory is often paired with the view that when we make a moral judgment we are always motivated to some extent to act on it. If someone is completely unmoved by their own moral judgments, on this view, they never truly made the moral judgment to begin with.

But this is all too quick. Despite their abnormal emotional profile and bad behavior, there is evidence that psychopaths make the same moral judgments that we do. When presented with hypothetical moral situations (e.g., the trolley problem, or deciding whether or not to help a bleeding victim even though their blood would ruin your car seats), the answers of psychopaths were not significantly different from the answers of healthy controls or non-psychopathic delinquents (Cima et al. 2010).

There seems to be a contradiction between what psychopaths say about right and wrong and how they behave. One straightforward way to understand this relationship is that psychopaths make judgments and have beliefs about what the right thing to do is, but are unmotivated to do it. In other words, the “psychopaths know right from wrong, but don’t care” (Cima, Tonnaer, and Hauser 2010). If this is the case, it bears on a central debate in moral psychology about whether and how our moral judgments motivate us to act on them.

An alternative way to understand the psychopaths is that they are motivated by their moral judgments; they are just motivated very little. Their motivation to do the right thing is easily overshadowed by a motivation to do the wrong thing. This explains their bad behavior. This seems to be a common part of our everyday lives. For example, I know I should give up my seat to the elderly person on the bus, and I am motivated to give it up, but I have a much stronger motivation not to: I am tired and want to rest.

Another way to understand the moral judgments of psychopaths is that they aren’t really making moral judgments at all. Instead, they are mimicking other people’s moral judgments. When a psychopath says that he thinks he should stop to help an injured person on the side of the road, what he really means is that other people think someone in that situation should stop and help. This interpretation leaves intact the theory that moral judgments involve emotions and that they always involve some motivation to act on them.

How can we choose between competing explanations of the psychopath’s moral psychology? Looking at a related condition, sometimes called “acquired sociopathy,” suggests that the “know but don’t care” hypothesis is more persuasive. This related condition results from an injury to the ventromedial prefrontal cortex (VMPC) region of the brain (Damasio et al. 1990; Damasio 1994).

Injuries to the VMPC do not impair patients’ ability to reason or what they know, but they cause dramatic changes in personality and severe impairments in emotional function, making once-normal people behave much more like psychopaths. A famous example is Phineas Gage, a 19th-century railroad foreman. Before an explosion propelled a tamping iron through his prefrontal cortex, he was well-liked, successful, personable, and restrained. Amazingly, his memory, intelligence, and language abilities remained intact, but he became very rude and impulsive, cursed excessively, could not be trusted to keep his commitments, and could not keep a job. EVR, a contemporary patient with injury to the same region, was successful and happily married before surgery to remove tumors. Afterwards, he retained a high IQ and normal or superior performance on a wide range of tests, including the moral judgment interview. Yet within a short time he divorced twice, went bankrupt, and was unable to hold down a job.

Besides the case reports of anti-social behavior, there is reason to think that these patients lack moral motivation. When presented with emotionally or morally charged images (pictures of natural disasters, mutilated bodies, or nudes), they don’t have a normal arousal response. The arousal response can be seen as correlated with motivation in this context (Damasio, Tranel & Damasio 1990; Roskies 2003, 2006, 2008). This makes the suggestion that these patients are motivated by their moral judgments, just very little, less plausible.

Is it possible that these patients are parroting the moral judgments of others rather than expressing their own moral views? It would be puzzling if this were the case, since before their accidents or injuries these patients were able to make and act on their moral judgments just like the rest of us. To say that they are merely parroting other people’s judgments after their injury is to suggest that the injury caused them to lose some kind of knowledge or facility with concepts, which none of the cognitive tests indicate they have lost (Roskies 2003, 2006, 2008).

Lily Frank
Department of Philosophy
City University of New York

Tuesday, February 18, 2014

Are clinical trials outsourced to the developing world exploitative?

by Roger Stanev

The number of clinical trials outsourced by pharmaceutical companies to developing countries has surged since the mid-1990s. Current estimates suggest that 40% of the total number of trials are now conducted in less developed regions of the world, e.g., India, Brazil, Mexico, South Africa, and Eastern Europe. The situation has raised important questions about the ethics of clinical research, including the question of whether or not such trials are exploitative.

On one side of the debate, some argue that outsourcing is not exploitative because it brings global benefits by allowing pharmaceutical companies to test new drugs more quickly, cheaply, and effectively than they can in the U.S. (or other developed countries). It is noted, for instance, that the average cost of running a phase-III clinical trial in developing countries is approximately 70% lower than in the U.S., and that this is beneficial because cheaper research leads to cheaper drugs, which would mean more affordable drugs for impoverished populations. Proponents of outsourcing also argue that it transfers resources and expertise to less developed regions of the world while also providing researchers with access to a larger, more genetically diverse population, as well as ‘clean’ patients, i.e., participants who are less likely to have been treated with other drugs or enrolled in other clinical trials.

On the other side of the debate, some argue that outsourced trials are exploitative because these populations are poor, have low levels of literacy, and are often powerless to defend their own interests; the trials are exploitative because they exploit the vulnerability of people in the developing world. There is also a concern that, because the legal and ethical institutions necessary to safeguard the interests of trial participants are not in place in many of these countries—e.g., less stringent ethical reviews, the under-reporting of side effects, and lower risks of litigation—there is an increased likelihood of ethical misconduct in research.

If we suppose, as Emanuel et al. do, that “the fundamental ethical challenge of all research with humans is to avoid exploitation”, then it is reasonable to say that our first order of business should be determining whether any outsourced clinical trial is exploitative. (In this case, we should focus on the moral sense of ‘exploitation’ as wrongful or impermissible exploitation.) What makes a clinical trial wrongfully exploitative? Consider accounts of exploitation that appear to give us truth conditions for wrongful exploitation claims, such as: a clinical trial is exploitative if the distribution of risks and potential benefits to trial participants is unfair. For example, if potential trial participants bear most of the risks (e.g., possible serious side effects, undue burdens) but receive little expected benefit (e.g., they cannot afford the drug if it proves effective), whereas GlaxoSmithKline (GSK) takes on little risk but stands to gain high potential benefits (e.g., high profit margins under low risk), then the trial is exploitative. So, if the drug being tested is not likely to be affordable in the host country, or if the health care infrastructure cannot support its proper distribution and use, then the trial is exploitative. It would be exploitative to ask individuals in a developing country to participate in research, since they will not enjoy any of its potential benefits. But is this right?

Let’s look at a specific case. In the 1990s, GSK conducted clinical trials of a short regimen of AZT in Uganda. These placebo-controlled trials assessed whether lower doses of AZT than those used in rich countries (such as the U.S.) could reduce the rate of mother-to-child transmission of HIV. The existing standard regimen of AZT, previously evaluated and approved in the U.S. (protocol 076), was deemed too costly for needy African populations. The average cost of the full regimen was over $1,000 per woman per year, whereas the annual health budget of many African countries was under $10 per person—Uganda’s health budget at the time was estimated to be under $3 per person per year.

Even though GSK was taking advantage of the unfair situation in which trial participants found themselves in Uganda (i.e., poor and without routine access to healthcare), GSK could argue that it was not taking unfair advantage of the trial participants per se. It could further argue that, if the pharmaceutical market in which GSK and Ugandans operate is competitive, GSK does not exploit Ugandans if it pays them the ‘market’ price to participate in the trials, and that Ugandan participants should complain to their government representatives, not to GSK, if their country cannot afford to pay for the healthcare infrastructure required for its citizens to afford and access the drug’s potential benefits. Moreover, what if GSK wanted to pay for the health care infrastructure required for Ugandans, but claimed it could not because of the allegedly highly competitive market it finds itself in?

But, in my opinion, this trial was exploitative because the Ugandans were unable to afford and access the drugs they tested. Even if competitive market pressures made it unfeasible for GSK to pay for more, the trial was still exploitative. A clinical trial outsourced to the developing world is exploitative unless the trial’s sponsors provide actual benefits to the vulnerable population, including raising the host country’s health infrastructure baseline—raising it in ways that make the potential drug affordable and accessible to the needy.


Roger Stanev
Department of Philosophy
University of South Florida

Sunday, February 9, 2014

The real is-ought problem

I think most people who have heard of the is-ought problem would say that it is about the fact that you cannot derive an 'ought' from an 'is': nothing follows from facts about the way the world is, concerning the way the world ought to be; nothing follows from the way people do behave, about the way we ought to behave. Of course, this is not a problem unless people try to produce such derivations, and it may seem like we do. For example, people often say silly things like "That would be cruel, therefore you shouldn't do it."

But that is not a derivation; it is ordinary enthymematic reasoning. If it is to be a derivation, then we have to add a principle, say, "If the purpose of an act is to derive enjoyment from the suffering of another being, then it ought not to be done." This was Hume's point when he filed his original complaint.

In truth, deriving an ought from an is is no more problematic than deriving an is from an is or an ought from an ought. No statement, by itself, implies any other statement. P does not even imply P except in light of the principle: If A, then A (where A is any proposition, e.g., P).

Now I told you that so I can tell you this: there is an is-ought problem; it's just not that one. The real is-ought problem is this:

We often fail to distinguish the ought from the is.

In the simplest and least interesting case, this is just because we are liars. We are asked what is the case, and we say what ought to be the case instead. When James says No when Molly asks whether he is having an affair, he is affirming the way the world ought to be, not saying how it is.

Things get more interesting when the questions challenge us epistemically. We can have a hard time telling the difference between is and ought, and sometimes we just can't tell at all.

Why?  Let's start at the bottom.

Perhaps there is a way the world fundamentally is, but that world is not simply Given to us in perception. Kant was the first to fully penetrate this illusion, teaching that our shared empirical reality is something our minds construct using a shared set of rules, what he called "a priori intuitions." Quine later naturalized the Kantian insight as the scientific project of discovering the rules by which the "meager input" of sensory irritation is amplified into the "torrential output" of our theoretical vocabulary. And cognitive neuroscience now aims at exactly this: producing a materialistically adequate account of this process, the mathematical rules by which the brain sustains and deploys an internal model of the world on the basis of meager electrical impulses generated by specialized nerve cells in our sense organs.

Whether you think of them as mental categories or as regulated patterns of neural activity, the rules of cognition and perception are fundamentally tools for norming our sensory input. We expect the world to behave as it ought to, and that is how we always initially try to represent it. Now, of course, we survive because our default expectations can be swiftly overridden when things go obviously awry. But usually the world appears just slightly off. And in such cases we naturally and unconsciously resolve ambiguity in the direction of the normal. We have to. If we didn't, we'd go nuts.

Fortunately, the requirements of sanity are usually compatible with the way the world works. Weirdness is almost always just random fluctuation from the norm. Strange sounds in the night, strange feelings in the body, strange looks from other people are usually meaningless. (We just remember the ones that aren't.)

But even when it's not just noise, denial can be useful.  Honey, is something the matter? No dear, why do you ask? But that's not true, is it? The truth, if you could attend to it, is that your partner coming home from work with a nasty cold has made you upset, and in fact upset with her. Now there will be no movie; now there will be no sex. But these are feelings you are ashamed of and your morals swiftly denihilate them.

The use of fundamentally moral rules to resolve epistemic ambiguity is a very common, but largely hidden, phenomenon. Some of my favorite examples are from sports. Umpires and referees are paid to make the call, even when it is just too close to call. In these cases, officials must see the play as it ought to have happened. Was he really tagged out at second? He deserved to be; he had no business stealing. Was that ball really wide of the line? It ought to have been; she shanked it.

This is an example of what Daniel Kahneman calls substitution. We answer a different question than the one being asked, because it is more available, and because it is typically a satisficing substitute.

But I called this the real is-ought problem. What makes it a problem? The answer is just that our instinct to norm the world can go too far and we recognize only too late that we have a major problem on our hands.  It is too easy to be glib here: The signs of a failing marriage or a weakening economy or a climate disaster or an imminent terrorist attack were all around us. How could we have missed them? Someone must have been asleep at the wheel. Well, that's called hindsight bias, and it is typically a gross exaggeration.  Sure, the signs were all around us, right there with all the signs that everything was just dandy.

The real is-ought problem, then, is not an error, it is a task.  It is the task of getting better, both as individuals and as a group, at distinguishing signal from noise, at discerning when the world is not behaving as it ought.

G. Randolph Mayes
Professor, Department of Philosophy
Sacramento State University