Decision theory can be used to test the logic of decision making---one may ask whether a given set of decisions can be justified by a decision-theoretic model. Indeed, in principal-agent settings, such justifications may be required---a manager of an investment fund may be asked what beliefs she used when valuing assets and a government may be asked whether a portfolio of rules and regulations is coherent. In this paper we ask which collections of uncertain-act evaluations can be simultaneously justified under the maxmin expected utility criterion by a single set of probabilities. We draw connections to the Fundamental Theorem of Finance (for the special case of a Bayesian agent) and revealed-preference results.
We suggest that one way in which economic analysis is useful is by offering a critique of reasoning. According to this view, economic theory may be useful not only by providing predictions, but also by pointing out weaknesses of arguments. It is argued that, when a theory requires a non-trivial act of interpretation, its roles in producing predictions and offering critiques vary in a substantial way. We offer a formal model in which these different roles can be captured.
We present a model of inductive inference that includes, as special cases, Bayesian reasoning, case-based reasoning, and rule-based reasoning. This unified framework allows us to examine, positively or normatively, how the various modes of inductive inference can be combined and how their relative weights change endogenously. We establish conditions under which an agent who does not know the structure of the data generating process will decrease, over the course of her reasoning, the weight of credence put on Bayesian vs. non-Bayesian reasoning. We show that even random data can make certain theories seem plausible and hence increase the weight of rule-based vs. case-based reasoning, leading the agent in some cases to cycle between being rule-based and case-based. We identify conditions under which minmax regret criteria will not be effective.
This paper examines circumstances under which subjectivity enhances the effectiveness of inductive reasoning. We consider a game in which Fate chooses a data generating process and agents are characterized by inference rules that may be purely objective (or data-based) or may incorporate subjective considerations. The basic intuition is that agents who invoke no subjective considerations are doomed to "overfit" the data and therefore engage in ineffective learning. The analysis places no computational or memory limitations on the agents---the role for subjectivity emerges in the presence of unlimited reasoning powers.
An agent is asked to assess a real-valued variable Y^p based on certain characteristics X^p = (X^p_1, …, X^p_m), and on a database consisting of (X^i_1, …, X^i_m, Y^i) for i = 1, …, n. A possible approach to combining past observations of X and Y with the current values of X to generate an assessment of Y is similarity-weighted averaging. It suggests that the predicted value of Y, denoted Y^p_s, be the weighted average of all previously observed values Y^i, where the weight of Y^i, for every i = 1, …, n, is the similarity between the vector X^p_1, …, X^p_m, associated with Y^p, and the previously observed vector X^i_1, …, X^i_m. We axiomatize this rule. We assume that, given every database, a predictor has a ranking over possible values, and we show that certain reasonable conditions on these rankings imply that they are determined by proximity to a similarity-weighted average for a certain similarity function. The axiomatization does not suggest a particular similarity function, or even a particular functional form of this function. We therefore proceed to suggest that the similarity function be estimated from past observations. We develop tools of statistical inference for parametric estimation of the similarity function, for the case of a continuous as well as a discrete variable. Finally, we discuss the relationship of the proposed method to other methods of estimation and prediction.
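The similarity-weighted averaging rule described in this abstract can be sketched in a few lines. The Gaussian similarity function below is purely illustrative---the axiomatization deliberately does not single out any particular functional form---and the function names are ours, not the paper's:

```python
import math

def similarity(x, x_new, w=1.0):
    # Illustrative Gaussian similarity; any positive similarity
    # function could be substituted here.
    return math.exp(-w * sum((a - b) ** 2 for a, b in zip(x, x_new)))

def similarity_weighted_average(database, x_new, w=1.0):
    # database: list of (x_i, y_i) pairs of past observations.
    # Predicts y for x_new as the average of observed y_i values,
    # each weighted by the similarity of x_i to x_new.
    weights = [similarity(x_i, x_new, w) for x_i, _ in database]
    total = sum(weights)
    return sum(s * y for s, (_, y) in zip(weights, database)) / total

db = [((0.0,), 1.0), ((1.0,), 3.0)]
# x_new equidistant from both observations -> simple mean
print(similarity_weighted_average(db, (0.5,)))  # 2.0
```

In the parametric estimation the paper goes on to discuss, the weight parameter (here `w`) would be fitted to the database rather than fixed.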
Economic theory reduces the concept of rationality to internal consistency. The practice of economics, however, distinguishes between rational and irrational beliefs. There is therefore an interest in a theory of rational beliefs, and of the process by which beliefs are generated and justified. We argue that the Bayesian approach is unsatisfactory for this purpose, for several reasons. First, the Bayesian approach begins with a prior, and models only a very limited form of learning, namely, Bayesian updating. Thus, it is inherently incapable of describing the formation of prior beliefs. Second, there are many situations in which there is not sufficient information for an individual to generate a Bayesian prior. Third, this lack of information is even more acute when we address the beliefs that can be attributed to a society. We hold that one needs to explore other approaches to the representation of information and of beliefs, which may be helpful in describing the formation of Bayesian as well as non-Bayesian beliefs.
Prediction is based on past cases. We assume that a predictor can rank eventualities according to their plausibility given any memory that consists of repetitions of past cases. In a companion paper, we show that under mild consistency requirements, these rankings can be represented by numerical functions, such that the function corresponding to each eventuality is linear in the number of case repetitions. In this paper we extend the analysis to rankings of events. Our main result is that a cancellation condition à la de Finetti implies that these functions are additive with respect to union of disjoint sets. If the set of past cases coincides with the set of possible eventualities, natural conditions are equivalent to ranking events by their empirical frequencies. More generally, our results may describe how individuals form probabilistic beliefs given cases that are only partially pertinent to the prediction problem at hand, and how this subjective measure of pertinence can be derived from likelihood rankings.
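The special case highlighted in this abstract---past cases coinciding with possible eventualities, so that additivity over disjoint eventualities reduces event plausibility to empirical frequency---admits a minimal sketch (the function name and toy data are ours):

```python
from collections import Counter

def event_plausibility(memory, event):
    # memory: list of past cases, with repetitions; here cases
    # coincide with eventualities, so plausibility is additive
    # across the disjoint eventualities composing an event and
    # reduces to the event's empirical frequency in memory.
    counts = Counter(memory)
    return sum(counts[e] for e in event) / len(memory)

memory = ["rain", "rain", "sun", "snow"]
print(event_plausibility(memory, {"rain", "snow"}))  # 0.75
```

Note the additivity: the plausibility of {rain, snow} equals the plausibility of {rain} plus that of {snow}, which is the content of the de Finetti-style cancellation result in this special case.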
A decision maker has to choose one of several random variables, whose distributions are not known. As a Bayesian she behaves as if she knew the distributions. In this paper we suggest an axiomatic derivation of these (subjective) distributions, which is much more economical than the derivations of de Finetti or Savage, who derive the whole joint distribution of all the available random variables.
A predictor is asked to rank eventualities according to their plausibility, based on past cases. We assume that she can form a ranking given any memory that consists of finitely many past cases. Mild consistency requirements on these rankings imply that they have a numerical representation via a matrix assigning numbers to eventuality-case pairs, as follows. Given a memory, each eventuality is ranked according to the sum of the numbers in its row, over cases in memory. The number attached to an eventuality-case pair can be interpreted as the degree of support that the past case lends to the plausibility of the eventuality. Special instances of this result may be viewed as axiomatizing kernel methods for estimation of densities and for classification problems. Interpreting the same result for rankings of theories or hypotheses, rather than of specific eventualities, it is shown that one may ascribe to the predictor subjective conditional probabilities of cases given theories, such that her rankings of theories agree with rankings by the likelihood functions.
Keywords: Inductive inference, case-based reasoning, case-based decision theory, maximum likelihood
A decision maker faces a decision problem, or a game against nature. For each probability distribution over the states of the world (nature's strategies), she has a weak order over her acts (pure strategies). We formulate conditions on these weak orders guaranteeing that they can be jointly represented by expected utility maximization with respect to an almost-unique state-dependent utility, that is, a matrix assigning real numbers to act-state pairs. As opposed to a utility function that is derived in another context, the utility matrix derived in the game will incorporate all psychological or sociological determinants of well-being that result from the very fact that the outcomes are obtained in a given game.
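The representation in this abstract---expected utility with respect to a state-dependent utility matrix assigning real numbers to act-state pairs---can be illustrated with a toy example (the acts, states, and utility values below are ours):

```python
def best_act(utility, prob):
    # utility: dict mapping (act, state) -> real number, i.e. the
    #          state-dependent utility matrix over act-state pairs.
    # prob:    dict mapping state -> probability, one of nature's
    #          mixed strategies.
    # Returns the act maximizing expected utility under prob.
    acts = {a for (a, _) in utility}
    eu = {a: sum(prob[s] * utility[(a, s)] for s in prob) for a in acts}
    return max(eu, key=lambda a: eu[a])

u = {("safe", "s1"): 1.0, ("safe", "s2"): 1.0,
     ("risky", "s1"): 3.0, ("risky", "s2"): 0.0}
# EU(risky) = 0.5*3 + 0.5*0 = 1.5 > EU(safe) = 1.0
print(best_act(u, {"s1": 0.5, "s2": 0.5}))  # risky
```

The paper's result runs in the other direction: given the decision maker's weak orders over acts for every such `prob`, the conditions it formulates guarantee that some almost-unique matrix `u` rationalizes all of them at once.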