Standard decision theory has some striking consequences. First, if you have any irrational preferences, it does not make sense to ascribe credences to you---decision theory treats you as having no opinions at all. Second, you should always prefer to look at free information before making a decision, even if you think that information is likely biased.
These consequences seem wrong. Your preferences are sometimes irrational, but you clearly still have opinions about things. If we could only ascribe credences to agents with perfectly rational preferences, decision theory couldn't give any useful advice to imperfect agents like us. Furthermore, people sometimes prefer not to look at more information before making a decision. For example, when grading a paper, I prefer not to know the student's name. This behavior seems perfectly rational, even laudable.
Does this mean that decision theory is broken? No. I argue that we can fix its problems. The key to ascribing credences to non-ideal agents is to start from comparative probability judgments, like the judgment that sunshine and rain are equally likely tomorrow. Comparative probability is tied to preferences in a straightforward way---you think that sunshine and rain are equally likely if you are indifferent between betting on them. If sunshine and rain are the only two possibilities for the weather tomorrow and you think they're equally likely, you assign probability 0.5 to both. I show that if your preferences satisfy some minimal constraints, this procedure extends to determine your entire credence function, even while many of your preferences remain irrational (Chapter 2).
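To make the procedure concrete, here is the weather case worked through (the symbolic rendering is my own sketch). Suppose sunshine $S$ and rain $R$ are mutually exclusive and jointly exhaustive, and that I am indifferent between a bet paying \$1 if $S$ and a bet paying \$1 if $R$, which reveals the comparative judgment that $S$ and $R$ are equally likely. Any credence function $P$ respecting that judgment must satisfy
\[
P(S) = P(R) \qquad \text{and} \qquad P(S) + P(R) = 1,
\]
which forces $P(S) = P(R) = 0.5$. Under the minimal constraints of Chapter 2, enough comparisons of this kind pin down the whole credence function in the same way.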
The key to explaining why it can be rational to reject free information is to recognize that we can be uncertain about how we will react to evidence. If I were certain that I always respond to evidence in a perfectly rational way (by 'conditionalization'), then perhaps it would be rational to look at my students' names before grading papers. Indeed, certainty that one will react to evidence in a perfectly rational way is an assumption built into Good's well-known theorem about the value of information. But this assumption might fail. I know that I'm not always rational---for example, I might give too much weight to the fact that George got an `A' on the first paper, treating it as stronger evidence than it really is that his current paper deserves a high mark. Once I take this possibility into account, I might be better off ignorant (Chapter 3).
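A toy calculation shows how this can happen (the numbers are mine, purely for illustration, not from Chapter 3). Suppose that, reading George's paper with the name hidden, my credence that it merits a high mark is $0.3$, and suppose for simplicity that the name itself carries no real evidence about this particular paper. Grading in ignorance, I withhold the high mark, and I am right with probability $0.7$. If I were certain to conditionalize, Good's theorem would apply and looking at the name could not hurt in expectation. But suppose there is a $0.2$ chance that, on seeing the name, I overreact to George's past `A' and award the high mark regardless, a choice that is right only with probability $0.3$. The expected accuracy of looking is then
\[
0.8 \times 0.7 + 0.2 \times 0.3 = 0.62 < 0.7,
\]
so I do better to keep the name hidden.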
Uncertainty about how we will react to evidence has other consequences as well. For example, it can lead us to make sequences of choices that, taken together, yield a sure loss, the predicament known as accepting a `Dutch book'. Many decision theorists have claimed that such choices always indicate irrationality. I argue that this bit of conventional wisdom needs revision (Chapter 4).
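To see how such a sequence can arise, consider a standard diachronic betting construction, with numbers of my own choosing rather than an example from Chapter 4, and simplified so that the deviation is foreseen for certain. My current credences are $P(E) = 0.5$ and $P(H \mid E) = 0.5$, but I know that on learning $E$ I will overreact and drop my credence in $H$ to $0.3$. A bookie first sells me two bets I regard as fair: one paying \$1 if $H \wedge E$, for \$0.25, and one paying \$0.20 if $\neg E$, for \$0.10. If $\neg E$ comes true, I collect \$0.20 against the \$0.35 I paid, losing \$0.15. If $E$ comes true, my credence in $H$ is now $0.3$, so I regard \$0.30 as a fair price at which to sell the first bet back, and I lose \$0.05. Either way I end up poorer, although every transaction looked fair to me when I made it.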
One upshot is that by widening the scope of decision theory to include non-ideal agents, we enable the decision theorist to give vindicating explanations of common phenomena, like having opinions without being fully rational and avoiding information before making a decision. Another upshot is that a more sophisticated decision theory matters for designing beneficial AI systems, since existing proposals assume an implausibly strong conception of rationality (Chapter 5).