Monday, November 16, 2009

The Ambiguity of "Utility"

The term plays an important role in both philosophy and economics. In philosophy, it is associated with Jeremy Bentham and utilitarianism; in that context utility means, roughly, happiness. In Bentham's view, one ought to act so as to maximize the total of human utility, misleadingly described as "the greatest good for the greatest number."

To an economist, on the other hand, your utility function describes not how you should act but how you will act. "The utility to me of consuming an apple is greater than that of consuming an orange" means that, given the choice, I will choose the former over the latter.

We expect people to choose what makes them happy (cynics and psychologists are welcome to leave the conversation at this point, if they feel left out). Hence we would expect at least a close correlation between utility in the economist's sense and utility in the philosopher's sense. That matters, because one of the things economists do, when they are not making a point of being objective, value-free scientists, is to draw conclusions about what people ought to do—for instance, that they ought to abolish tariffs and price controls. Those conclusions frequently depend on the assumption, stated or unstated, that maximizing utility in the economist's sense will also maximize it in the philosopher's sense. That point was probably clearer a little over a century ago when the economic arguments were being made by an economist, Alfred Marshall, who was a utilitarian and not afraid to make explicit the utilitarian foundations of his economic conclusions.

The concept of utility is, however, ambiguous in other and subtler ways. Imagine, for instance, that you are going to die six months from now. Is your utility greater if you have several months' advance warning, as cancer patients often do, or if your death comes as a complete surprise?

Spending several months knowing that you are about to die would be, for most of us, a very unpleasant experience. If utility is another word for happiness, imagined as a characteristic of what is going on inside your head, the second alternative is almost certainly preferable to the first.

But happiness, in that sense, is not all that matters to people. If one could somehow choose in advance whether, if and when you were in the situation described, it would be the first alternative or the second, many of us would choose the first. Many of us, after all, have things we would like to get done before dying—things to be said to children, wife, friends, perhaps enemies as well. Projects to be completed whose completion matters, if only to our sense of having lived a life worth living. Arrangements to be made for the future of those dear to us. A close friend, not all that long ago, spent a good deal of his last few months reducing to something more like order his crowded and cluttered house for the benefit of his wife and daughters.

For a different slant on the same problem, consider the experience machine, as hypothesized by Robert Nozick—or the real world equivalent in which I spend a good many hours a week, now that virtual reality is really here. Nozick's version provides you with the illusion of a life—the entire rest of your life, stretched out over the length of time it would actually occupy. The proprietor has somehow determined the life you are going to live and believably guarantees you a modest improvement, an illusory life in which things turn out marginally better, in a variety of dimensions, than they would have in the real thing. Assuming you believe him, do you accept the offer?

If the economist's utility and the philosopher's are the same, if choice is entirely about happiness, and if happiness is really a state of mind, then the answer is obviously "yes." For me and, I suspect, many other people, it is just as obviously "no." I don't merely want the illusion of accomplishing things, I want the reality.

Which is one reason why I don't spend all of my life in World of Warcraft.

33 comments:

Michael F. Martin said...

There is some evidence that humans are unique at least in the degree to which individuals are willing to cooperate with non-kin to achieve social goals. In reflecting on Nozick in the past, I have often wondered if what's missing from the experience machine is not authentic interpersonal interaction. It's not experience that counts for us so much as knowledge that our experience has an impact on others'?

This touches on another point you've raised here. To the extent our utility is dependent on others', we have a nice mess of nonlinearity. Some of us like it that way.

pjsw said...

There's good reason to believe that economists and philosophers do not use "utility" in the same way, and that its meaning has shifted significantly over time. However, I would surely botch the explanation were I to attempt it, so I'll just direct you to the fine book, "Ethics out of Economics" by John Broome (Cambridge University Press, 1999), Introduction and Part I, "Preference and value."

RKN said...

I think the experience machine wager described by Nozick stipulated that the life-long illusory experience would be indistinguishable from reality, in which case a rational hedonist may well select the "modestly improved" reality.

William H Stoddard said...

On the matter of "we expect people to do what makes them happy," do the people you're close to include anyone who's clinically depressed? I've lived with a clinically depressed woman for a quarter century. One of the most counterintuitive parts of this condition is that a depressed person will respond to "Why don't you do X? Doing X makes you happy" with "I know, but I still don't want to do it."

This seems to have several lines of thought behind it:

"I know I'll be happy after I've done it, but I haven't done it and I can't bear to think of making the effort."

"Happiness isn't worth attaining because it won't last."

"I don't deserve to be happy; I'm a worthless person and I ought to be miserable."

If economics had been developed by clinically depressed people, it might look very different. . . .

William H Stoddard said...

On a more philosophical note, I find it impossible to believe in utility, except as a convenient theoretical fiction.

The idea seems to be that there is a sort of N-dimensional hyperspace that maps all states of the world that I might experience, and a family of (N-1)-dimensional hypersurfaces that partition that hyperspace into regions at equal potential. Presumably, if you wanted a completely descriptive/predictive model of human behavior, you would say that each person's set of hypersurfaces was constantly changing to reflect their internal state; for example, as the time since I last ate increases, the utility of most things I might eat increases also. Since economists are mostly interested in predicting purchases, they often write as if each person's utility function were constant, which is at least a workable approximation, though it might fail for biological appetite . . . for example, the practical wisdom is that you should not buy groceries when you're hungry!

But the complexity of the situations we might encounter is so high that I find it unbelievable that anyone actually knows the utility of all those situations. Especially since we're talking about a world where new goods can be created. What is the utility to me of a DVD of Serenity? Five years ago, I couldn't know, because Serenity hadn't been filmed. Twenty years ago, I couldn't know, because DVDs weren't on the market. When that DVD became available, did I then have to update my utility function to map its equipotential relationships relative to every other good and service I might buy, or spend time on making? That seems like a huge computational task.

Insofar as we do compute something like "utility," I don't think there's any one internal measure that we use to do so. Rather, when we have to decide between A and B, we need to find some common measure that approximates what "benefit" we gain from each of them. That measure need not be the same for all pairs of goods, and for some specific pairs of goods, it may be hard to find a single measure that works for both of them . . . and thus, hard to choose between them. I think making choices is not simply a computational process, but a creative process.

I imagine some economists know this, and take it into account. But a lot of writing about economics that I've seen sounds as if economists thought that the utility function was already defined, and we just had to do a table lookup to determine which of any two things had more utility.

Alex Perrone said...

I don't think the thought experiment really contrasts utility and happiness so easily. Yes, the illusory life would make you happier in that life, but the thought experiment is not just comparing the lives, but the choice of lives. Given the choice, it seems like answering "no" is equivalent to saying "I am not happy with the thought that the life would be illusory."

Alex Perrone said...

I assume that happiness should be maximized, and I really can't see a knockdown weakness to that. One could imagine happy slaves and so on to show that happiness and freedom can come apart, but one must then also run the counterfactual to see whether those people would be happier free. The fact is, we don't know what kind of state would make people most happy.

But given the goal of maximizing happiness, some interesting things follow. Since a utility function is a function that describes how one would choose from alternatives, and since how one chooses from alternatives does not always make one most happy (evidence coming from psychology), utilitarianism is not the best political philosophy.

As a corollary, capitalism, which rests on maximizing utility (otherwise put: always permitting one to choose what one would choose if given all alternatives), is not the ideal economic system under the standard of happiness.

Alex Perrone said...

Consider a real-world twist on the experience machine. You have a choice of two kinds of states: one that maximizes utility, or one that maximizes happiness.

Of course, you must note that you would be happier in the happy state. The utility state simply allows you to prefer what you would in fact choose, perhaps to your or others' detriment.

Anonymous said...

There are some problems with defining a utility function purely by observing what choices people actually make. First and most obviously, those choices only give you an *ordering* of utilities, not the *magnitude* of any of those utilities; the "less-than" operator may be well-defined, but not the "plus" and "minus" operators.

Second, describing choice as maximizing a (more or less numeric-valued) utility function presupposes that choices have the properties of an ordered set, such as transitivity, which may not always hold of real people in real scenarios. In other words, even the "less than" operator may not behave the way it "obviously" must.
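The ordering-without-magnitude point can be put in a toy sketch (the goods and numbers below are invented for illustration): any strictly increasing transformation of a utility function generates exactly the same pairwise choices, so observed choices alone cannot pin down magnitudes.

```python
# Ordinal utility: a strictly increasing transformation of a utility
# function yields identical pairwise choices, so choice data alone
# cannot distinguish the two functions' "magnitudes".
# Bundle quantities here are hypothetical.

bundles = [1, 2, 5, 10]

u1 = {x: x for x in bundles}        # linear utility
u2 = {x: x ** 3 for x in bundles}   # a monotone transformation of u1

# The choice each function predicts for every ordered pair of bundles:
choices1 = {(a, b): a if u1[a] > u1[b] else b
            for a in bundles for b in bundles if a != b}
choices2 = {(a, b): a if u2[a] > u2[b] else b
            for a in bundles for b in bundles if a != b}

assert choices1 == choices2  # same behavior, very different magnitudes
```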

David Friedman said...

Hudebnik writes that observing choices only gives you an ordering of utility, not magnitudes. That is not true if we follow Von Neumann's approach to defining utility. VN utility is cardinal, not ordinal, and one can deduce it by observing choices made under conditions of uncertainty.

I agree, of course, that the usual economic models of choice don't perfectly describe real world human behavior. In this case as in many others, one has to trade off the advantages of a more realistic theory against the disadvantages.
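A rough sketch of how the Von Neumann approach extracts cardinal utility from choices under uncertainty (the outcomes and indifference probabilities below are invented for the example): fix the utilities of a best and a worst outcome at 1 and 0; the probability at which a subject is indifferent between an intermediate outcome for certain and a best/worst lottery then serves as that outcome's cardinal utility.

```python
# Sketch of Von Neumann-Morgenstern utility elicitation.
# Scale fixed by U(best) = 1, U(worst) = 0.  For an intermediate
# outcome X, the probability p at which the subject is indifferent
# between X for certain and a lottery paying best with probability p
# (worst otherwise) is U(X) on this scale.
# The elicited indifference probabilities below are hypothetical.

utilities = {"apple": 0.9, "orange": 0.6, "banana": 0.2}

def expected_utility(lottery):
    """lottery: list of (outcome, probability) pairs."""
    return sum(prob * utilities[outcome] for outcome, prob in lottery)

# Cardinal utility supports comparisons between lotteries, not just
# between certain outcomes:
coin_flip = [("apple", 0.5), ("banana", 0.5)]   # expected utility 0.55
assert expected_utility(coin_flip) < utilities["orange"]  # 0.55 < 0.6
```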

Anonymous said...

Josiah Neeley said:

Sorry for the off topic comment. Prof. Friedman, I am currently reading your The Machinery of Freedom and am quite enjoying it. Do you have a suggestion as to which of your books I ought to read next?

David Friedman said...

"Do you have a suggestion as to which of your books I ought to read next?"

That depends what you want to learn. If you want to learn more economics, read _Hidden Order_. If you want to learn about the application of economics to law, _Law's Order_. If you want to learn about possible futures driven by possible technological revolutions, read _Future Imperfect_.

And if you want to read a story, with some economics and other such stuff implicit in it, read my novel _Harald_.

All of those, except for _Hidden Order_, are available to be read for free online; _Harald_ is part of the Baen free library, the others on my web page. So you can sample them that way, before deciding if you want to obtain the hardcopy.

Anonymous said...

I just looked up the von Neumann-Morgenstern definition; thanks for the suggestion. In brief, it says my preference for option A over option B is "twice as strong" as my preference for option A over option C iff I would be indifferent between option C and a 50-50 random choice between options A and B. (Have I got that right?)

There are still questions about its applicability to real people, of course. Von Neumann and Morgenstern start with the explicit assumptions that preference is transitive, antisymmetric, and total (as well as a couple of other interesting properties), all of which are questionable in reality.

And it's not clear that there's a linear relationship between the probabilistic notion of expected value and people's own notions of value: risk-preferrers weigh positive outcomes more heavily, and risk-avoiders weigh negative outcomes more heavily, than a strict EV calculation would do. So, all else being equal, the utility function inferred from choices will assign option C a higher utility (relative to A and B) for a risk-avoider and a lower utility for a risk-preferrer.

Which doesn't mean the utility function isn't useful in predicting an individual's choices, but we have to remember its limitations: we can't meaningfully "add" or "subtract" utilities for different people, nor "add" utilities for different outcomes for the same person.

matt said...

To me the idea has never made much sense: Literally any behavior whatsoever can be described as the maximization of some utility function. At that level it's a tautology.

To make the concept useful you have to assume happiness or money or consumption or whatever is a reasonable proxy for utility. While such an assumption might be extremely powerful and useful within the domain of a particular theoretical model, in other domains I think it is an extremely poor model.

David Friedman said...

Hudebnik correctly describes one implication of VN utility.

"Risk-avoider" is a misleading term. What it actually means, in economics, is someone for whom money has declining (VN) marginal utility. So given the choice between a lottery with expected value $X and a certain payment of $X he will prefer the latter. That is a statement about his utility function for money, not for risk.

For details, see:

http://www.daviddfriedman.com/Academic/Price_Theory/PThy_Chapter_13/PThy_Chapter_13.html

and go down to the subhead "Choice in an Uncertain World."
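A toy numerical version of the point (the utility function is chosen arbitrarily for illustration): with declining marginal utility of money, a certain $50 beats a 50-50 lottery over $0 and $100 even though the two have equal expected value.

```python
import math

# A concave utility-of-money function: u(m) = sqrt(m).
# Declining marginal utility makes the certain payment preferred to a
# lottery with the same expected value -- "risk aversion" in money,
# without any attitude toward risk as such.

def u(money):
    return math.sqrt(money)

# Lottery: $0 or $100 with equal probability; expected value $50.
lottery_eu = 0.5 * u(0) + 0.5 * u(100)   # = 5.0
certain_eu = u(50)                        # ~ 7.07

assert certain_eu > lottery_eu  # the agent takes the certain $50
```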

Paul Birch said...

The objection that, for real people, utility may sometimes be intransitive is one I have seen before. It is in error. It is logically impossible for the utility function to be intransitive, however irrational the player.

It is however possible for a person's utility function to be rapidly variable or unstable. The simplest case is the old saw, "the grass is always greener on the other side of the fence". When you're on side A, B>A; when you're on side B, A>B. But unless you're Schrödinger's Cat, you can't be on both sides at once; the act of crossing the fence is what triggers the change of utility function, but at any instant that function remains well-behaved.

If utilities were often radically unstable this way, economic theory would have major problems; but so would living! We need reasonably stable utilities in order to function (but completely stable utilities would be as bad, else once we started eating we'd never stop - we need variable appetites and diminishing marginal utilities).

Anonymous said...

The objection that, for real people, utility may sometimes be intransitive is one I have seen before. It is in error. It is logically impossible for the utility function to be intransitive, however irrational the player.

It is however possible for a person's utility function to be rapidly variable or unstable.


Is there any meaningful distinction between a utility function which is actually intransitive and one that's so unstable that, in the time it takes to ask three questions, it looks intransitive?

Let's try a different thought experiment that removes time from the equation. Take a large sample of people with (you have reason to believe) fairly similar utility functions. Divide them randomly into three groups: ask one group to choose between options A and B, another between B and C, and the third between A and C, all simultaneously. The results could well be A>B, B>C, and C>A -- it may be unlikely, but not "logically impossible". Could we interpret that as evidence for truly intransitive preferences? What if we repeated the experiment with different random groupings and got the same result?
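For what it's worth, one well-known mechanism that could produce exactly this group-level pattern, even if every individual in the sample is perfectly transitive, is heterogeneity of preferences across the sample — the Condorcet paradox — which is one reason the "fairly similar utility functions" stipulation matters. A small simulation (the population mix is invented for illustration):

```python
from collections import Counter

# Condorcet-style illustration: a population in which every individual
# holds a transitive preference ordering can still yield the cyclic
# group result A>B, B>C, C>A when subgroups answer pairwise questions
# by majority.  The mix of orderings below is hypothetical.

orderings = ([("A", "B", "C")] * 34 +   # 34 people rank A > B > C
             [("B", "C", "A")] * 33 +   # 33 people rank B > C > A
             [("C", "A", "B")] * 33)    # 33 people rank C > A > B

def majority(pair):
    """Winner of a pairwise vote across the whole population."""
    x, y = pair
    votes = Counter()
    for order in orderings:
        votes[x if order.index(x) < order.index(y) else y] += 1
    return votes.most_common(1)[0][0]

assert majority(("A", "B")) == "A"  # A beats B
assert majority(("B", "C")) == "B"  # B beats C
assert majority(("C", "A")) == "C"  # C beats A -- a cycle
```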

Paul Birch said...

There are indeed important differences. An intransitive utility function, if it were possible, would be mathematically pathological. Unusable. By contrast, a variable utility function is mathematically well-behaved. We can work with it. Variable utility functions are telling us useful things about how appetites and preferences shift.

There is no way any experiment can ever indicate or measure an intransitive utility. If your thought experiment gave such a result (unlikely, but, as you say, not impossible) it would merely mean that the three groups had different instantaneous preferences because you subjected them to different conditions (e.g., asked them to choose between different pairs). This would be a useful datum. One could then explore why these particular marginal utilities were so finely balanced and easily perturbed.

Anonymous said...

"Risk-avoider" is a misleading term. What it actually means, in economics, is someone for whom money has declining (VN) marginal utility. So given the choice between a lottery with expected value $X and a certain payment of $X he will prefer the latter. That is a statement about his utility function for money, not for risk.

OK, I've read the chapter you cite, and I'm not sure what it has to do with money. In fact, you give an example yourself of being "risk-averse" with respect to ice cream cones rather than money. It seems to me that one can similarly define "risk-averse" and "risk-preferring" (in an absolute sense) with respect to anything with a well-defined numeric measure.

Furthermore, one could also usefully define "risk-averse" and "risk-preferring" as relative terms ("person X is more risk-averse than person Y") even for things that don't have a well-defined numeric measure (only an ordering), by observing that, when offered a choice of two lotteries, person X chooses the one with "compressed" outcomes (more-likely but less-bad worst case, and/or more-likely but less-good best case) while person Y chooses the one with "stretched" outcomes (the opposite).

For example, consider two people who both enjoy walking in the park. They're confronted with the question of whether to go for a walk late at night. The (relative) risk-avoider says no, choosing the certain cost of getting to go for fewer walks in the park over the unlikely but very costly possibility of getting mugged; the (relative) risk-preferrer does the opposite. Without a numeric way to equate the pleasure of walking with the disadvantages of getting mugged, we can't say absolutely what "risk-neutral" would be, but we can unambiguously say one of them is more risk-averse than the other.

Anonymous said...

Paul writes:

If your thought experiment gave such a result (unlikely, but, as you say, not impossible) it would merely mean that the three groups had different instantaneous preferences because you subjected them to different conditions (e.g., asked them to choose between different pairs).

Again, I think this is a distinction without a difference. Suppose (for simplicity) that we got the exact same result regardless of how the random groups were chosen. Then one could describe the utility function of each of the people in the sample conditionally as "If you ask me to choose between A and B, I prefer A; if you ask me to choose between B and C, B; if between A and C, C." Does this differ in any measurable way from having an actually intransitive utility function?

An intransitive utility function, if it were possible, would be mathematically pathological. Unusable.

Quite true, but does that prevent it from happening in the real world?

David Friedman said...

Hudebnik points out, correctly, that one could be risk averse in something other than money. But putting it in terms of "He is risk averse" without specifying in what makes it sound as though risk aversion is a feature of his attitude towards risk, and so should apply whatever the risk is in. My point was that that is not the case. The fact that someone is risk averse in money tells us essentially nothing about his attitude to risk in length of life.

Paul Birch said...

Hudebnik writes:
"I think this is a distinction without a difference."

It's the difference between a three-legged stool and a three-legged biped. The latter is nonsense. Intransitive utility functions are nonsense - literally. Variable utility functions are not. They are the rule. In the real world, utility functions, however complex, variable, irrational or unstable they might be, cannot be mathematically pathological. No utility measurement whatsoever can ever give an intransitive result. To claim that it does would be to close your eyes to what was actually going on - that people's preferences were being changed by what you were doing, or perhaps that their declared or revealed preferences included some other component you hadn't expected (such as a desire for the approval of the experimenter).

Anonymous said...

Paul writes:
No utility measurement whatsoever can ever give an intransitive result.

But I just described a scenario, not violating any known laws of physics, that looks like an intransitive utility function in every measurable way. There may be a number of different possible psychological reasons for the person to behave that way, but if we limit ourselves to observations of people's actual economic behavior, we have no way to distinguish among those reasons.

If f is a real-valued function, then yes, it is logically impossible for f(A) > f(B) > f(C) > f(A). So what do you do if you observe the experimental results that I described? You can hypothesize that the function is conditional, changes in response to what question is asked, changes from millisecond to millisecond, and redefine the thing into uselessness. Or you can hypothesize, more simply, that there is no such function having the properties we expect of a utility function.

Von Neumann and Morgenstern prove that there must be such a numeric function, under the assumptions that people's actual choices among lotteries are transitive, antisymmetric, convex, etc. Without those assumptions, the simplest explanation is that the numeric-valued function doesn't exist.

Anonymous said...

David writes:
putting it in terms of "He is risk averse" without specifying in what makes it sound as though risk aversion is a feature of his attitude towards risk, and so should apply whatever the risk is in... The fact that someone is risk averse in money tells us essentially nothing about his attitude to risk in length of life.

Sure. It's plausible that a person's risk-aversion in regard to different goods would be positively correlated, and that's something a psychologist could study experimentally. But there's no a priori reason to believe it, or to say in this general sense that "Person X is risk-averse".

One could, however, say "Person X is more risk-averse than Person Y" in regard to a particular class of goods, even if the goods in question don't have neat numeric measures like money or ice cream cones so you can't talk about convex or concave utility curves.

Paul Birch said...

Hudebnik:
There is no measurement or observation that "looks like" an intransitive utility function, because there is no such thing and never could be. It is a mathematical nonsense.

By contrast, variable utility functions are just what we would expect to see, and what we all know quite well from personal experience. Saying that utility changes in response to external conditions is not "redefining it into uselessness", it is a crucial, obviously true, and highly illuminating part of the whole theory of economics.

Your hypothetical experiments would be very easy to understand in terms of variable utility functions. They would pose no particular difficulty for standard economic theory.

Anonymous said...

There is no measurement or observation that "looks like" an intransitive utility function, because there is no such thing and never could be. It is a mathematical nonsense.

I shouldn't have used the phrase "utility function", because that presupposes something real-valued. Yes, a real-valued function on which < is intransitive would be a mathematical nonsense.

I should perhaps have used the phrase "choice function" instead -- a two-parameter function on outcomes which tells which (if either) the person prefers.

I recognize that there are realistic situations in which the most natural interpretation is that the choice function (and hence the inferred utility function) has changed over time, either in response to the way choices are presented or to something else. But if the choice function doesn't change (as in my thought experiment), it seems unnecessarily complicated to conclude that "the utility function" is changing and changing back rapidly and consistently, as opposed to the (simpler, IMHO) interpretation that there is no real-valued utility function corresponding to this particular choice function -- that the person really does prefer A to B, B to C, and C to A. Or the person really does prefer both A and B to any nontrivial random mixture of them. Telling the person that it's mathematically impossible to have those preferences is unlikely to change them :-)

Paul Birch said...

Hudebnik:
The utility or choice function we are discussing is the one that describes how people actually choose. Because people actually do choose, it must exist, in some form, and it must be real. Faced with a choice of (only) three options one cannot do other than choose one of them - there is no such thing as an intransitive choice. In your thought experiment you did not offer a choice of three options; you provided three different two-way choices, under necessarily different conditions. The precise utility or choice function is necessarily different in the three cases; in some circumstances it could be sufficiently and consistently different to get the result you want. But the interpretation you would want to place upon such a result is neither simpler nor reasonable; it is sheer nonsense - as logically impossible and meaningless as a three-legged biped.

Anonymous said...

Paul writes:
The utility or choice function we are discussing is the one that describes how people actually choose. Because people actually do choose, it must exist, in some form, and it must be real.

What is "it"? You use "it" as though it referred to both the choice function and the utility function. They're not the same thing: the choice function takes two arguments, returns a boolean, and comes directly from empirical observation, while the utility function takes one argument, returns a real, and is inferred from the choice function.

I'm not disputing that the choice function exists: it merely states people's actual choices. What I'm disputing is that every choice function can naturally be viewed as maximizing some real-valued utility function. The obvious counterexample is a choice function which doesn't obey transitivity, since comparison on a real-valued utility function must obey transitivity.
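The dispute can be made concrete: deciding whether a given choice function admits a real-valued utility representation amounts to checking the revealed pairwise preferences for cycles. A sketch of that check (the function name and interface are illustrative, not from any of the sources discussed):

```python
# A (boolean) choice function can be represented by a real-valued
# utility function exactly when the revealed pairwise preferences are
# acyclic: an acyclic relation can be topologically sorted into a
# utility ordering; a cycle like A>B>C>A cannot.

def utility_representation(preferences):
    """preferences: list of (x, y) pairs meaning x is chosen over y.
    Returns a dict assigning higher numbers to more-preferred items,
    or None if the revealed preferences contain a cycle."""
    items = {x for pair in preferences for x in pair}
    beats = {x: set() for x in items}
    for x, y in preferences:
        beats[x].add(y)

    ordering, state = [], {}        # state: 1 = visiting, 2 = done
    def visit(x):
        if state.get(x) == 1:
            return False            # back edge: cycle detected
        if state.get(x) == 2:
            return True
        state[x] = 1
        for y in beats[x]:
            if not visit(y):
                return False
        state[x] = 2
        ordering.append(x)          # post-order: least preferred first
        return True

    for x in items:
        if not visit(x):
            return None
    return {x: i for i, x in enumerate(ordering)}

assert utility_representation([("A", "B"), ("B", "C")]) is not None
assert utility_representation([("A", "B"), ("B", "C"), ("C", "A")]) is None
```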

In your thought experiment you did not offer a choice of three options; you provided three different two-way choices...

If I understand this, you're suggesting that we can demonstrate the impossibility of an intransitive choice function by asking the same people yet a fourth question: "which of A, B, and C do you like best?" If they answer "A", then they cannot possibly prefer C to A. Right?

But if, as you say, changing from one two-way question to another can change the choice function, then surely changing to a three-way question can also change the choice function; in particular, the answer to a three-way question doesn't tell you the answers to two-way questions. So it shouldn't surprise you when, given a two-way choice again, the subjects stubbornly continue preferring C to A.

(This is reminiscent of one of the assumptions in Arrow's impossibility theorem: a person's (or group's) preference for C over A is unchanged by the presence or absence of option B. It was problematic for Arrow, and it's still problematic here.)

Von Neumann and Morgenstern (as I understand it -- I haven't read the original paper, only a few Webbed summaries of it) had no problem saying "if the choice function has certain properties, then we can infer a real-valued utility function that is being optimized." Paul seems to be working in the other direction: he takes the existence of the real-valued utility function on faith, regardless of whether the choice function has the properties that lead naturally to it.

In a trivial sense, Paul is right: if we allow "the utility function" to vary arbitrarily with what question is asked, we can always define a real-valued function that does the job. For example, in my thought experiment,
if the question is A vs. B, then U(A) = 10, U(B) = 0, and U(C) = 6
if the question is B vs. C, then U(A) = -638, U(B) = 9876.5, and U(C) = 6789
if the question is A vs. C, then U(A) = 5280, U(B) = 17, and U(C) = 62003782949786120
if the question is A vs. B vs. C, then U(A) = 2, U(B) = -2, and U(C) = 0.
That's a real-valued function consistent with the choices I described. But it's utterly arbitrary; it carries no more information than the boolean-valued choice function did, because it has no pattern. If we allow that kind of arbitrariness in utility functions, then as Matt said we're in the realm of tautology.

I can imagine observing, usefully (for marketers or political pollsters), that a person's choice between two alternatives changes with how the question is asked, or with what other alternatives are available. But if that's actually happening, then it becomes much harder to infer a utility function from the choices, and if you could, it wouldn't tell you anything useful.

If there's anything more to be said on this, maybe we should take it to private e-mail.

Paul Birch said...

Hudebnik:

There is no such thing as an intransitive choice. There never can be, no matter what experimental scheme you think up. So it is unscientific - indeed, plain irrational - to try to "explain" choices by means of a non-existent and unmeasurable intransitive function when we know that respectable variable preferences can do the job. I don't care whether you call it a choice function or a utility function. It works just the same either way.

Of course, the full utility function has an enormously high number of dimensions, covering every degree of freedom and describing every condition under which a choice may take place. So in one part of behaviour space we may have A>B, in another B>A; in one part we may have A>B, in another B>C, in a third C>A. So what? That's not intransitivity. It's just that the hills go down and across as well as up. Not arbitrarily, but with predictable regularities as well as more random or less regular elements.

Anonymous said...

Let me see if I can summarize where we agree and where we disagree.

We agree, I think, that the numerical value of a utility function is never observed directly; it is only inferred from the choices people make. As a corollary, there is no conceivable way to experimentally distinguish between two utility functions that differ by a positive constant factor or a constant additive term.

We agree also (with von Neumann and Morgenstern) that if the choices people make have certain properties (antisymmetry, transitivity, probabilistic convexity, etc.) which seem to hold in the great majority of real-world situations, then one can infer a real-valued utility function, uniquely (up to positive constant factors and additive terms, i.e. as "unique" as it could possibly be).
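That uniqueness claim can be illustrated with a small sketch (the outcome utilities and lotteries below are invented for illustration): a utility function and a positive affine transformation of it rank every pair of lotteries the same way under expected-utility maximization, so no choice experiment can tell them apart.

```python
# Hypothetical utilities for three outcomes (illustrative numbers only).
u = {"A": 10.0, "B": 4.0, "C": 0.0}
# A positive affine transformation: v = 2*u + 3.
v = {k: 2 * x + 3 for k, x in u.items()}

def expected(util, lottery):
    """Expected utility of a lottery given as {outcome: probability}."""
    return sum(p * util[o] for o, p in lottery.items())

lottery1 = {"A": 0.5, "C": 0.5}  # coin flip between A and C
lottery2 = {"B": 1.0}            # B for certain

# Both utility assignments rank the two lotteries the same way.
same_choice = (expected(u, lottery1) > expected(u, lottery2)) == \
              (expected(v, lottery1) > expected(v, lottery2))
print(same_choice)  # True
```

The affine map rescales every expected value by the same positive factor and offset, so every comparison is preserved; that is exactly why the inferred function is unique only up to such transformations.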

The difference comes in those unusual circumstances when the "choices people make," as taken literally, don't follow those rules, such as in my thought experiment where people consistently choose A over B, B over C, and C over A in pairwise comparisons, or where people choose both A and B over any probabilistic mix of the two. In such a situation (I think we agree) one can't generally infer a static, unique utility function; if there is to be a utility function, it has to be more complex and conditional, which adds enough degrees of freedom that it's no longer unique.

In interpreting such a situation, we have to decide between two notions of what a utility function is supposed to do:

* Utility functions are useful because they allow you to describe whatever people actually do;

* Utility functions are useful because they're simple and regular enough to extrapolate from observed choices to predicted choices.

These are both nice properties, but they're incompatible. A notion of "utility function" that makes predictions is falsifiable, i.e. not compatible with every conceivable observation. OTOH, a notion of "utility function" that can handle whatever the universe gives us can't rule out any possible universe and therefore can't make predictions.

We can view much of science's history in terms of the conflict between these two goals. One can always achieve either one at the expense of the other: one can achieve universal applicability at the cost of tautology (and therefore uselessness), or one can achieve simplicity and regularity at the cost of inapplicability (and therefore uselessness). In practice, any science (physics, biology, economics, etc.) usually starts with a simple, regular model, then makes it progressively more complex to accommodate the observed data, but without jumping all the way to an unfalsifiable model.

I said earlier that if a "utility function" can be conditional on what choices are offered, then it can accommodate any conceivable behavior and therefore cannot predict any behavior. To me, that makes it useless. I presume that to Paul, the advantage of being able to use utility functions for anything outweighs the disadvantage of losing predictive power. De gustibus non disputandum est.

We still have the question of uniqueness. Once a utility function is allowed to be conditional on the question, we can no longer combine information from different questions to jointly infer a single utility function, uniquely (up to a linear function). So any real values you assign to the utility function are arbitrary -- not only up to any linear function, but up to any order-preserving transformation. To me, calling this a "real-valued" function is putting lipstick on a pig; it's really just a choice function, about which no numerical reasoning can be done. I suppose there's no harm in giving it real values, but the numbers don't actually carry any information except in their ordering. Using real numbers for this strikes me as false advertising, like measuring a distance by lengths of your arm and reporting it to an accuracy of microns.
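The ordinal-only point can also be made concrete. In the sketch below (with made-up numbers of my own), an order-preserving but non-affine rescaling leaves every sure-thing choice unchanged while reversing a lottery comparison, showing that the particular real values carry no information beyond their ordering and cannot safely be used in arithmetic:

```python
# Two utility assignments related by a monotone (order-preserving)
# but non-affine transformation: w = sqrt(u). Numbers are illustrative.
u = {"A": 0.0, "B": 1.0, "C": 3.0}
w = {k: x ** 0.5 for k, x in u.items()}

# Every sure-thing (pairwise) choice comes out the same under u and w...
pairwise_same = all((u[x] > u[y]) == (w[x] > w[y])
                    for x in u for y in u if x != y)

def expected(util, lottery):
    """Expected utility of a lottery given as {outcome: probability}."""
    return sum(p * util[o] for o, p in lottery.items())

# ...but a lottery comparison flips between the two assignments.
coin_flip_AC = {"A": 0.5, "C": 0.5}
sure_B = {"B": 1.0}

print(pairwise_same)                                    # True
print(expected(u, coin_flip_AC) > expected(u, sure_B))  # True:  1.5  > 1.0
print(expected(w, coin_flip_AC) > expected(w, sure_B))  # False: 0.87 < 1.0
```

If only the ordering is pinned down, both u and w are equally legitimate, yet they disagree about the lottery; so treating either one's numbers as meaningful is exactly the false advertising described above.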

jdgalt said...

I would not want to move into Nozick's experience machine, but not because one's accomplishments there wouldn't be "real" -- that's a question-begging term in context. I wouldn't go there because it amounts to moving to a new universe, which is bound to be (in practical terms) different in a lot of ways, thus wasting a lot of my useful knowledge.

The issue of my utility being dependent on others', or vice versa, I see as their problem.

But there are other issues with knowing exactly when one will die. An amoral person who had that knowledge would, at the very least, borrow and spend all the money he could get, if he could be sure that the collectors won't come around until he's dead. An even more amoral person would steal, rape, and rob, doing whatever he would enjoy in his last few days.

Conversely, if others (or just the government) knew when you will die, they could do unto you as they like with no consequences just by getting *their* timing right.

The real reason this thought experiment breaks down, of course, is that anyone who knows when you will die almost certainly has the power to change that date or even prevent it (whether or not that knowledge came from the fact that your death is going to be by their hand in the first place, as in Harlan Ellison's story about the Ticktockman).

gcallah said...

"We expect people to choose what makes them happy..."

Wow, that's really optimistic. I only expect them to choose what they think will make them happy, something that they quite obviously often fail miserably at... just look at suicide, divorce, and depression stats -- unless, of course, they chose suicide because it made them happy!