We assume that people have a need to make statements, and construct a model in which this need is the sole determinant of voting behavior. In this model, an individual selects a ballot that makes as close a statement as possible to her ideal point, where abstaining from voting is a possible (null) statement. We show that in such a model, a political system that adopts approval voting may be expected to enjoy a significantly higher rate of participation in elections than a comparable system with plurality rule.
People reason about real-estate prices both in terms of general rules and in terms of analogies to similar cases. We propose to empirically test which mode of reasoning fits the data better. To this end, we develop the statistical techniques required for the estimation of the case-based model. It is hypothesized that case-based reasoning will have relatively more explanatory power in databases of rental apartments, whereas rule-based reasoning will have a relative advantage in sales data. We motivate this hypothesis on theoretical grounds, and find empirical support for it by comparing the two statistical techniques (rule-based and case-based) on two databases (rentals and sales).
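The contrast between the two modes of reasoning can be sketched in code. The following is a minimal, hypothetical illustration only: rule-based reasoning is represented here by an ordinary least-squares line, and case-based reasoning by a similarity-weighted average of past cases with an exponential similarity in the distance between cases. The data, function names, and the choice of similarity scale are all invented for the example and are not taken from the paper's estimation procedure.

```python
import math

# Toy data: one characteristic x (say, apartment size) and a price y.
data = [(30.0, 300.0), (50.0, 480.0), (70.0, 700.0), (100.0, 1050.0)]

def rule_based(x_new):
    """Rule-based prediction: fit a least-squares line y = a + b*x
    to the whole database, then evaluate it at x_new."""
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    b = (sum((x - mx) * (y - my) for x, y in data)
         / sum((x - mx) ** 2 for x, _ in data))
    a = my - b * mx
    return a + b * x_new

def case_based(x_new, scale=20.0):
    """Case-based prediction: a similarity-weighted average of past
    prices, where similarity decays exponentially with the distance
    between the new case and each past case."""
    weights = [math.exp(-abs(x - x_new) / scale) for x, _ in data]
    return sum(w * y for w, (_, y) in zip(weights, data)) / sum(weights)
```

The rule-based predictor extrapolates a global pattern; the case-based predictor never leaves the convex hull of the observed prices, which is one way the two models can be told apart on data.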
People may be surprised by noticing certain regularities that hold in existing knowledge they have had for some time. That is, they may learn without getting new factual information. We argue that this can be partly explained by computational complexity. We show that, given a database, finding a small set of variables that obtain a certain value of R² is computationally hard, in the sense that this term is used in computer science. We discuss some of the implications of this result and of fact-free learning in general.
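The source of the difficulty is easy to see even without the formal reduction: with m candidate variables there are 2^m subsets, so an exhaustive search that scores every subset (e.g., by the R² of the regression on that subset) is infeasible already for moderate m. A small illustrative count, with names chosen here for the example:

```python
from itertools import combinations

def num_subsets(m):
    """Number of variable subsets an exhaustive search over m candidate
    regressors would have to score: sum over k of C(m, k) = 2**m."""
    return sum(1 for k in range(m + 1) for _ in combinations(range(m), k))
```

At m = 40 the count already exceeds a trillion, which is why noticing that a small subset of familiar variables explains the data well can be a genuine discovery.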
Keywords: Computational complexity, Linear regression, Rule-based reasoning
A decision maker is asked to express her beliefs by assigning probabilities to certain possible states. We focus on the relationship between her database and her beliefs. We show that, if beliefs given a union of two databases are a convex combination of beliefs given each of the databases, the belief formation process follows a simple formula: beliefs are a similarity-weighted average of the beliefs induced by each past case.
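The formula in question can be written as follows; the notation here is ours, not the paper's: D is the database of past cases, s(c) > 0 is the similarity weight attached to case c, and p_c is the belief induced by case c alone.

```latex
p(\omega \mid D) \;=\; \frac{\sum_{c \in D} s(c)\, p_c(\omega)}{\sum_{c' \in D} s(c')}
```

The concatenation axiom in the abstract (beliefs given a union of databases are a convex combination of the beliefs given each) is what forces this additive, similarity-weighted form.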
An agent is asked to assess a real-valued variable y based on certain characteristics x = (x_1,…,x_m), and on a database consisting of n observations of (x_1,…,x_m,y). A possible approach to combining past observations of x and y with the current values of x to generate an assessment of y is similarity-weighted averaging. It suggests that the predicted value of y, y_{n+1}, be the weighted average of all previously observed values y_i, where the weight of y_i is the similarity between the vector (x_1^{n+1},…,x_m^{n+1}), associated with y_{n+1}, and the previously observed vector (x_1^i,…,x_m^i). This paper axiomatizes, in terms of the prediction y_{n+1}, a similarity function that is a (decreasing) exponential in a norm of the difference between the two vectors compared.
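A similarity-weighted averaging predictor of this kind can be sketched in a few lines. This is an illustration under assumptions of ours, not the paper's construction: we fix the norm to be Euclidean and pick an arbitrary scale parameter, whereas the axiomatization characterizes the whole family of exponential-in-a-norm similarity functions.

```python
import math

def predict(x_new, cases, scale=1.0):
    """Similarity-weighted average prediction of y for a new case x_new.

    cases: list of (x, y) pairs, where x is a tuple of characteristics.
    The similarity weight of each past case is a decreasing exponential
    in the Euclidean norm of the difference between its characteristic
    vector and x_new."""
    def sim(xa, xb):
        dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(xa, xb)))
        return math.exp(-dist / scale)

    weights = [sim(x, x_new) for x, _ in cases]
    return sum(w * y for w, (_, y) in zip(weights, cases)) / sum(weights)
```

Because the weights are positive and sum to one after normalization, the prediction always lies between the smallest and largest previously observed y, and cases closer to x_new pull it harder.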