So decision theory is basically the theory of what decisions rational agents make, for different definitions of rational (think game theory, except you usually talk about one player, or completely cooperative games). One proposal, the expected utility hypothesis, says that rational agents maximize the expected value of a utility function. That is, there is some function from outcomes to real numbers representing how much the agent “likes” each outcome. When making a decision, the agent figures out the probability of each outcome for each option they could choose, and then picks the option that maximizes the expected value of the utility of the outcome. This theory can explain why people avoid risky financial bets even when the bets are net positive on average: their utility function “cares” more about not losing money than about gaining it. The intuition behind why someone would have a utility function like that is that quality of life scales sublinearly with wealth (if you gain $1,000, you’ll probably be rather happy; if you lose $1,000, you might end up homeless).
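As a toy illustration (the starting wealth, the gamble, and the log-shaped utility are all assumptions I made up for the example, not anything canonical), here is how a sublinear utility of wealth can make a net-positive gamble unattractive:

```python
import math

# Assumed setup: $2,000 of wealth, and a coin-flip gamble that wins $1,500
# or loses $1,000. On average the gamble earns +$250.
wealth = 2_000

def utility(w):
    # Log utility: a simple sublinear ("diminishing returns") utility of wealth.
    return math.log(w)

ev_gamble = 0.5 * 1_500 + 0.5 * (-1_000)  # expected dollar gain: +250
eu_take = 0.5 * utility(wealth + 1_500) + 0.5 * utility(wealth - 1_000)
eu_decline = utility(wealth)

# Despite the positive expected dollar value, expected utility favors declining.
print(ev_gamble > 0, eu_take < eu_decline)  # prints "True True"
```

With more starting wealth the same gamble can become worth taking, which matches the intuition: losing $1,000 hurts much more when it is half of what you have.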
Anyways, John von Neumann and Oskar Morgenstern proved a theorem about this hypothesis, called the Von Neumann–Morgenstern utility theorem. They introduced a new concept called VNM-rationality: a VNM-rational agent satisfies 4 axioms, stated in the article. The theorem then shows that if an agent is VNM-rational, there exists some utility function (commonly called the VNM utility function) such that the agent’s decisions coincide with the decisions that maximize the expected value of that utility function, even if the agent is not aware of it. This is surprising, since the axioms for VNM-rationality seem much weaker than the expected utility hypothesis.
Anyways, enough background. Why am I writing this post? Well, the cool thing is that the VNM theorem is constructive. In fact, for a small number of outcomes, you can efficiently find the VNM utility function of an agent by having them answer a series of “would you rather” questions. The activity below tells you how to calculate it.
Note that this activity will only give a completely accurate result if you are VNM-rational. (This is not a given, since some people think that being VNM-rational is not required to be rational in the regular sense of the word.) Also, this activity is meant for fun; although the results may be interesting, they will not turn you into a hyperrational robot person or anything like that. Feel free to give silly answers to the questions.

1. List some possible outcomes your life could have. Ideally you’d list all of them, but because that could make the list too long, only list a couple. For example, “creating a robot dinosaur army, taking over southern France, and then dying peacefully” could be an outcome.

2. Determine which outcome you like the best, and which you like the least. If there is a tie, choose one arbitrarily. However, if you are indifferent between all the outcomes, assign each outcome a utility of 0 and jump to step 4. Otherwise, call the best outcome A and the worst outcome B. Assign A a utility of 100, and B a utility of 0.

3. Now, for every other outcome C, determine C’s utility with a series of “would you rather” questions: would you rather have C for certain, or a gamble that gives A with probability p and B with probability 1 − p? Start with p = 50% and adjust it up or down (like a binary search) until you are indifferent between the two choices. Once you find that indifference probability p, C’s utility is 100p. (Shortcut: skip the search and directly estimate the probability p at which you would be indifferent. This is quicker, but more complicated conceptually.)

4. (OPTIONAL) Pick any positive real number q and any real number r (it does not matter which ones you choose). For each outcome, replace its utility x with qx + r.
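The whole activity can be sketched in a few lines of code. The outcomes and the indifference probabilities below are made-up example answers, not anything you should expect from yourself:

```python
# Step 2: the best and worst outcomes anchor the scale at 100 and 0.
utility = {
    "robot dinosaur army, then a peaceful death": 100.0,  # best outcome, A
    "eaten by the robot dinosaur army": 0.0,              # worst outcome, B
}

# Step 3: for each remaining outcome C, p is the probability at which you are
# indifferent between C for certain and a gamble giving A with probability p
# and B with probability 1 - p. This p value is made up for the example.
indifference = {"quiet retirement in southern France": 0.8}

for outcome, p in indifference.items():
    utility[outcome] = 100.0 * p

# Step 4 (optional): any positive q and any r give an equally valid function.
q, r = 2.0, -50.0
utility = {outcome: q * u + r for outcome, u in utility.items()}

print(utility["quiet retirement in southern France"])  # prints 110.0
```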
And you are done! Now, for any decision, figure out the probability of each outcome given each option. The option that maximizes the expected value of your utility function is the VNM-rational choice, assuming you did the above steps correctly.
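As a sketch, that decision rule looks like this (the utilities, options, and probabilities are invented for the example):

```python
# Assumed VNM utilities for three outcomes, as produced by the activity.
utility = {"A": 100.0, "B": 0.0, "C": 80.0}

# Each option maps outcomes to the probability of that outcome if chosen.
options = {
    "cautious plan": {"A": 0.2, "B": 0.1, "C": 0.7},
    "bold plan": {"A": 0.6, "B": 0.4, "C": 0.0},
}

def expected_utility(distribution):
    return sum(p * utility[o] for o, p in distribution.items())

# The VNM-rational choice maximizes expected utility.
best = max(options, key=lambda name: expected_utility(options[name]))
print(best)  # prints "cautious plan" (expected utility 76 beats 60)
```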
Now, some more observations. You probably noticed that in step 4, you could have chosen many different values for q and r. This is because the VNM utility function is not unique: any positive q and any r will give you a valid one. Conversely, you can get from any VNM utility function to any other using step 4. In other words, it is unique only up to the addition of constants and multiplication by positive constants. This seemingly small caveat actually has a lot of implications. Utilitarians, for example, propose that we should take actions that maximize the average or total utility across all individuals (or some other aggregate of utilities). The fact that VNM utility is not unique means that you cannot simply calculate everyone’s VNM utilities and plug them into that calculation; if you want to use VNM utility functions, you have to specify which one to use for each person. (If you are using average utility, the choices actually only need to be unique up to an additive constant, since additive constants do not change the ordering of decisions under the average. Multiplicative constants will still change the result, though.)
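Here is a quick sanity check of that non-uniqueness claim, with made-up utilities and lotteries. Because expectation is linear, replacing u with qu + r turns every expected utility into q·EU + r, which never flips which option comes out ahead as long as q is positive:

```python
import random

u = {"A": 100.0, "B": 0.0, "C": 40.0}  # made-up utilities
lottery1 = {"A": 0.5, "B": 0.5, "C": 0.0}
lottery2 = {"A": 0.0, "B": 0.2, "C": 0.8}

def eu(util, lottery):
    return sum(p * util[o] for o, p in lottery.items())

random.seed(0)
for _ in range(1_000):
    q = random.uniform(0.01, 10.0)     # any positive scale
    r = random.uniform(-100.0, 100.0)  # any shift
    v = {o: q * x + r for o, x in u.items()}
    # The preferred lottery is the same under u and under the rescaled v.
    assert (eu(u, lottery1) > eu(u, lottery2)) == (eu(v, lottery1) > eu(v, lottery2))
```

Note that the shift r cancels out only because each lottery’s probabilities sum to 1, which is also why averaging utilities across people tolerates per-person additive constants but not per-person scales.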
Another thing you probably noticed is that this is not a super practical way of making decisions. First off, your life has an extremely large number of possible outcomes, and second, calculating how small decisions (like which shoe to tie first) affect their probabilities is basically impossible. Although this activity is mostly for entertainment value, there are some ways to turn it into a more practical decision-making tool, if you are so inclined. One thing you can do is only use it for one big decision at a time, and list the outcomes of the scenario it affects, instead of looking all the way to the end of your life. For example, you could try using it to decide which major to choose based on how it will affect your income (calculating utils of income is much faster than for a general set of outcomes, since utility is usually monotonic (but not linear) with respect to wealth). For more complicated decisions, you can use what is known as an influence diagram, which breaks a situation down into chunks and then depicts how they are related. These are apparently actually used by large businesses, although I am having trouble finding any specific examples (which would happen either way, since businesses are usually pretty secretive about their decision making). They have been widely studied by decision theorists, though.
So, what do you all think? Anyone want to share their results?