1. 9. Someone might say that my large numbers are obfuscatory; that this problem can be reduced to a much simpler one: Imagine that there is a one in two chance that a treatment will reduce a single patient’s nausea a little bit. Imagine that the treatment also has a one in 150 chance of reducing the patient’s lifespan by four months. Question: Is it rational for the patient to opt for the treatment or not? Here some simple division has been done. A one in 10,000 chance of helping 5,000 people has been reduced to a one in two chance of helping one person; and a one in 15,000 chance of hurting 100 subjects has been reduced to a one in 150 chance of hurting that same, single person.

This reduction is suspect. One reason is that the original problem involves different groups of people. That fact may (or may not) be morally relevant, so it cannot simply be eliminated. Another reason is that the reduction ignores the possibility that morality requires us to be (the impersonal equivalent of) risk-averse, i.e., to discount the value of outcomes based upon their likelihoods. Let me explain. There is an intuitive difference between (a) a one in two chance of getting two units of benefit, and (b) a one in 10,000 chance of getting 10,000 units of benefit. Choosing the first can seem rational even when choosing the latter does not. The reason is that a person might be risk-averse: he might count the value of 10,000 units of benefit, in a risky situation, as less than five thousand times the value of two units of benefit, when the latter might be gained in a less risky situation. In light of this, there seem to be some situations in which it’s illegitimate to reduce problems of decision-making in this way, at least without the further assumption that the decision-maker isn’t, or shouldn’t be, risk-averse.

Of course, that is a point about rationality, not morality. And one might think this difference is important. After all, I am asking an explicitly utilitarian question: whether doing something has a greater expected value than something else. In other words, I have asked a question of maximization. And one might think that this sort of “reduction” is legitimate if we only want to know how to maximize goodness. Yes and no, I think. On the one hand, it’s natural to interpret the question “does doing (a) instead of (b) maximize expected value?” as one in which there is no risk-aversion; that is, it’s natural to interpret it as one in which reduction is legitimate. But on the other hand, that is not the only interpretation of the question. One need not assume that a moral person should be risk-neutral with respect to benefits, just as one need not assume that a rational person will be risk-neutral. Morality might require us to play it safe with the interests and lives of others.

Of course, someone might think that a risk-averse decision-maker could not claim to be “maximizing potential benefit.” As I’ve admitted, in a way that’s right: the most natural interpretation of “maximize potential benefit” is one in which there is no risk-aversion. But it seems to me there is another sense in which such a person can still be said to be maximizing potential benefit, because after the benefits are discounted for risk-aversion, he then goes on to maximize benefit.
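To make the discounting point concrete, here is a minimal worked comparison of the two gambles in (a) and (b); the square-root utility function is my own illustrative assumption, since the note itself commits to no particular discounting rule.

\[
\begin{aligned}
EV(a) &= \tfrac{1}{2}\cdot 2 = 1, & EV(b) &= \tfrac{1}{10{,}000}\cdot 10{,}000 = 1,\\
EU(a) &= \tfrac{1}{2}\sqrt{2} \approx 0.71, & EU(b) &= \tfrac{1}{10{,}000}\sqrt{10{,}000} = 0.01,
\end{aligned}
\]

where EV is raw expected value and EU is expected utility under the assumed u(x) = √x. On raw expected value the two gambles are tied; once benefits are discounted for risk, option (a) comes out ahead, which is the sense in which a risk-averse agent can still be said to be maximizing (discounted) benefit.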
2. 5. Improving Informed Consent for Research Radiation Studies, National Institutes of Health Radiation Safety Committee (October 17, 2001).
3. 17. This example brings out a complexity in liberalism that was not discussed earlier. I have said that if an offer wouldn’t be accepted by an informed and competent decision-maker, then we should ban it. But another commonly accepted liberal principle is that the government should not bother banning things that no one is going to offer, or that no one would do anyway. On those grounds we might not bother banning the hand-feeding of the lions, even though such a ban would be justified by liberal principles.
4. What Makes Clinical Research Ethical?
5. 45 CFR 46: Federal Regulations and Institutional Review Boards