arXiv:1103.4601v2 [cs.LG] 6 May 2011

Doubly Robust Policy Evaluation and Learning

Miroslav Dudík (MDUDIK@YAHOO-INC.COM) and John Langford (JL@YAHOO-INC.COM)
Yahoo! Research, New York, NY, USA 10018
Lihong Li (LIHONG@YAHOO-INC.COM)
Yahoo! Research, Santa Clara, CA, USA 95054

Appearing in Proceedings of the 28th International Conference on Machine Learning, Bellevue, WA, USA, 2011. Copyright 2011 by the author(s)/owner(s).

Abstract

We study decision making in environments where the reward is only partially observed, but can be modeled as a function of an action and an observed context. This setting, known as contextual bandits, encompasses a wide variety of applications including health-care policy and Internet advertising. A central task is evaluation of a new policy given historic data consisting of contexts, actions and received rewards. The key challenge is that the past data typically does not faithfully represent the proportions of actions taken by a new policy. Previous approaches rely either on models of rewards or models of the past policy. The former are plagued by a large bias whereas the latter have a large variance.

In this work, we leverage the strengths and overcome the weaknesses of the two approaches by applying the doubly robust technique to the problems of policy evaluation and optimization. We prove that this approach yields accurate value estimates when we have either a good (but not necessarily consistent) model of rewards or a good (but not necessarily consistent) model of past policy. Extensive empirical comparison demonstrates that the doubly robust approach uniformly improves over existing techniques, achieving both lower variance in value estimation and better policies. As such, we expect the doubly robust approach to become common practice.

1. Introduction

We study decision making in environments where we receive feedback only for chosen actions. For example, in Internet advertising, we find out only whether a user clicked on some of the presented ads, but receive no information about the ads that were not presented. In health care, we only find out success rates for patients who received the treatments, but not for the alternatives. Both of these problems are instances of contextual bandits (Auer et al., 2002; Langford & Zhang, 2008). The context refers to additional information about the user or patient. Here, we focus on the offline version: we assume access to historic data, but no ability to gather new data (Langford et al., 2008; Strehl et al., 2011).

Two kinds of approaches address offline learning in contextual bandits. The first, which we call the direct method (DM), estimates the reward function from given data and uses this estimate in place of the actual reward to evaluate the policy value on a set of contexts. The second kind, called inverse propensity score (IPS) (Horvitz & Thompson, 1952), uses importance weighting to correct for the incorrect proportions of actions in the historic data. The first approach requires an accurate model of rewards, whereas the second approach requires an accurate model of the past policy. In general, it might be difficult to accurately model rewards, so the first assumption can be too restrictive. On the other hand, it is usually possible to model the past policy quite well. However, the second kind of approach often suffers from large variance, especially when the past policy differs significantly from the policy being evaluated.
In this paper, we propose to use the technique of doubly robust (DR) estimation to overcome the problems with the two existing approaches. Doubly robust (or doubly protected) estimation (Cassel et al., 1976; Robins et al., 1994; Robins & Rotnitzky, 1995; Lunceford & Davidian, 2004; Kang & Schafer, 2007) is a statistical approach for estimation from incomplete data with an important property: if either one of the two estimators (in DM and IPS) is correct, then the estimation is unbiased. This method thus increases the chances of drawing reliable inference.

For example, when conducting a survey, seemingly ancillary questions such as age, sex, and family income may be asked. Since not everyone contacted responds to the survey, these values along with census statistics may be used to form an estimator of the probability of a response conditioned on age, sex, and family income. Using importance weighting inverse to these estimated probabilities, one estimator of overall opinions can be formed. An alternative estimator can be formed by directly regressing to predict the survey outcome given any available sources of information. Doubly robust estimation unifies these two techniques, so that unbiasedness is guaranteed if either the probability estimate is accurate or the regressed predictor is accurate.

We apply the doubly robust technique to policy value estimation in a contextual bandit setting. The core technique is analyzed in terms of bias in Section 3 and variance in Section 4. Unlike previous theoretical analyses, we do not assume that either the reward model or the past policy model is correct. Instead, we show how the deviations of the two models from the truth impact the bias and variance of the doubly robust estimator. To our knowledge, this style of analysis is novel and may provide insights into doubly robust estimation beyond the specific setting studied here. In Section 5, we apply this method to both policy evaluation and optimization, finding that this approach substantially sharpens existing techniques.

1.1. Prior Work

Doubly robust estimation is widely used in statistical inference (see, e.g., Kang & Schafer (2007) and the references therein). More recently, it has been used in Internet advertising to estimate the effects of new features for online advertisers (Lambert & Pregibon, 2007; Chan et al., 2010). Previous work focuses on parameter estimation rather than policy evaluation/optimization, as addressed here. Furthermore, most previous analyses of doubly robust estimation study asymptotic behavior or rely on various modeling assumptions (e.g., Robins et al. (1994), Lunceford & Davidian (2004), and Kang & Schafer (2007)). Our analysis is non-asymptotic and makes no such assumptions.

Several other papers in machine learning have used ideas related to the basic technique discussed here, although not with the same language. For benign bandits, Hazan & Kale (2009) construct algorithms which use reward estimators in order to achieve a worst-case regret that depends on the variance of the bandit rather than time. Similarly, the Offset Tree algorithm (Beygelzimer & Langford, 2009) can be thought of as using a crude reward estimate for the "offset". In both cases, the algorithms and estimators described here are substantially more sophisticated.

2. Problem Definition and Approach
Let $X$ be an input space and $A = \{1, \ldots, k\}$ a finite action space. A contextual bandit problem is specified by a distribution $D$ over pairs $(x, \vec{r})$ where $x \in X$ is the context and $\vec{r} \in [0,1]^A$ is a vector of rewards. The input data has been generated using some unknown policy (possibly adaptive and randomized) as follows:

• The world draws a new example $(x, \vec{r}) \sim D$. Only $x$ is revealed.
• The policy chooses an action $a \sim p(a \mid x, h)$, where $h$ is the history of previous observations (that is, the concatenation of all preceding contexts, actions and observed rewards).
• The reward $r_a$ is revealed. It should be emphasized that the other rewards $r_{a'}$ with $a' \neq a$ are not observed.

Note that neither the distribution $D$ nor the policy $p$ is known. Given a data set $S = \{(x, h, a, r_a)\}$ collected as above, we are interested in two tasks: policy evaluation and policy optimization. In policy evaluation, we are interested in estimating the value of a stationary policy $\pi$, defined as

\[ V^{\pi} = \mathbb{E}_{(x,\vec{r}) \sim D}\bigl[ r_{\pi(x)} \bigr] . \]

On the other hand, the goal of policy optimization is to find an optimal policy with maximum value: $\pi^* = \arg\max_{\pi} V^{\pi}$. In the theoretical sections of the paper, we treat the problem of policy evaluation. It is expected that better evaluation generally leads to better optimization (Strehl et al., 2011). In the experimental section, we study how our policy evaluation approach can be used for policy optimization in a classification setting.

2.1. Existing Approaches

The key challenge in estimating policy value, given the data as described in the previous section, is the fact that we only have partial information about the reward, hence we cannot directly simulate our proposed policy on the data set $S$. There are two common solutions for overcoming this limitation. The first, called the direct method (DM), forms an estimate $\hat{\varrho}_a(x)$ of the expected reward conditioned on the context and action. The policy value is then estimated by

\[ \hat{V}^{\pi}_{\mathrm{DM}} = \frac{1}{|S|} \sum_{x \in S} \hat{\varrho}_{\pi(x)}(x) . \]

Clearly, if $\hat{\varrho}_a(x)$ is a good approximation of the true expected reward, defined as $\varrho_a(x) = \mathbb{E}_{(x,\vec{r}) \sim D}[r_a \mid x]$, then the DM estimate is close to $V^{\pi}$. Also, if $\hat{\varrho}$ is unbiased, $\hat{V}^{\pi}_{\mathrm{DM}}$ is an unbiased estimate of $V^{\pi}$. A problem with this method is that the estimate $\hat{\varrho}$ is formed without the knowledge of $\pi$ and hence might focus on approximating $\varrho$ mainly in the areas that are irrelevant for $V^{\pi}$ and not sufficiently in the areas that are important for $V^{\pi}$; see Beygelzimer & Langford (2009) for a more refined analysis.

The second approach, called inverse propensity score (IPS), is typically less prone to problems with bias. Instead of approximating the reward, IPS forms an approximation $\hat{p}(a \mid x, h)$ of $p(a \mid x, h)$, and uses this estimate to correct for the shift in action proportions between the old, data-collection policy and the new policy:

\[ \hat{V}^{\pi}_{\mathrm{IPS}} = \frac{1}{|S|} \sum_{(x,h,a,r_a) \in S} \frac{r_a \, I(\pi(x) = a)}{\hat{p}(a \mid x, h)} , \]

where $I(\cdot)$ is an indicator function evaluating to one if its argument is true and zero otherwise. If $\hat{p}(a \mid x, h) \approx p(a \mid x, h)$, then the IPS estimate above will be, approximately, an unbiased estimate of $V^{\pi}$. Since we typically have a good (or even exact) understanding of the data-collection policy, it is often easier to obtain a good estimate $\hat{p}$, and thus the IPS estimator is in practice less susceptible to problems with bias than the direct method. However, IPS typically has a much larger variance, due to the increased range of the random variable. The issue becomes more severe when $p(a \mid x, h)$ gets smaller.
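To make the two baselines concrete, here is a minimal Python sketch of the DM and IPS estimates. The array layout, the callable policy, and the reward-model interface `rho_hat` are illustrative assumptions of this sketch rather than anything prescribed by the setup above.

```python
import numpy as np

def value_dm(contexts, policy, rho_hat):
    """Direct method: average the estimated reward of the policy's chosen action.

    rho_hat(x, a) is an assumed reward-model interface returning an estimate
    of E[r_a | x]; it plays the role of rho-hat_a(x) in the text.
    """
    return float(np.mean([rho_hat(x, policy(x)) for x in contexts]))

def value_ips(contexts, actions, rewards, prop_hat, policy):
    """Inverse propensity score: importance-weight the logged rewards.

    prop_hat[i] is the estimated probability p-hat(a_i | x_i, h_i) with which
    the logging policy chose the logged action a_i.
    """
    match = np.array([policy(x) == a for x, a in zip(contexts, actions)])
    return float(np.mean(rewards * match / prop_hat))

# Toy usage: k = 3 actions, uniform logging, target policy always picks action 0.
rng = np.random.default_rng(0)
n, k = 10000, 3
contexts = rng.normal(size=(n, 4))
actions = rng.integers(k, size=n)
rewards = rng.binomial(1, 0.5 + 0.1 * (actions == 0))  # action 0 is slightly better
prop_hat = np.full(n, 1.0 / k)
policy = lambda x: 0
print(value_ips(contexts, actions, rewards, prop_hat, policy))  # close to 0.6
```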
Our approach alleviates the large variance problem of IPS by taking advantage of the estimate $\hat{\varrho}$ used by the direct method.

2.2. Doubly Robust Estimator

Doubly robust estimators take advantage of both the estimate of the expected reward $\hat{\varrho}_a(x)$ and the estimate of action probabilities $\hat{p}(a \mid x, h)$. Here, we use a DR estimator of the form first suggested by Cassel et al. (1976) for regression, but previously not studied for policy learning:

\[ \hat{V}^{\pi}_{\mathrm{DR}} = \frac{1}{|S|} \sum_{(x,h,a,r_a) \in S} \left[ \frac{\bigl(r_a - \hat{\varrho}_a(x)\bigr)\, I(\pi(x) = a)}{\hat{p}(a \mid x, h)} + \hat{\varrho}_{\pi(x)}(x) \right] . \tag{1} \]

Informally, the estimator uses $\hat{\varrho}$ as a baseline and, if there is data available, a correction is applied. We will see that our estimator is accurate if at least one of the estimators, $\hat{\varrho}$ and $\hat{p}$, is accurate, hence the name doubly robust.

In practice, it is rare to have an accurate estimate of either $\varrho$ or $p$. Thus, a basic question is: How does this estimator perform as the estimates $\hat{\varrho}$ and $\hat{p}$ deviate from the truth? The following two sections are dedicated to bias and variance analysis, respectively, of the DR estimator.

3. Bias Analysis

Let $\Delta$ denote the additive deviation of $\hat{\varrho}$ from $\varrho$, and $\delta$ the multiplicative deviation of $\hat{p}$ from $p$:

\[ \Delta(a, x) = \hat{\varrho}_a(x) - \varrho_a(x), \qquad \delta(a, x, h) = 1 - p(a \mid x, h)/\hat{p}(a \mid x, h) . \]

We express the expected value of $\hat{V}^{\pi}_{\mathrm{DR}}$ using $\delta(\cdot,\cdot,\cdot)$ and $\Delta(\cdot,\cdot)$. To remove clutter, we introduce the shorthands $\varrho_a$ for $\varrho_a(x)$, $\hat{\varrho}_a$ for $\hat{\varrho}_a(x)$, $I$ for $I(\pi(x) = a)$, $p$ for $p(\pi(x) \mid x, h)$, $\hat{p}$ for $\hat{p}(\pi(x) \mid x, h)$, $\Delta$ for $\Delta(\pi(x), x)$, and $\delta$ for $\delta(\pi(x), x, h)$.

In our analysis, we assume that the estimates $\hat{p}$ and $\hat{\varrho}$ are fixed independently of $S$ (e.g., by splitting the original data set into $S$ and a separate portion for estimating $\hat{p}$ and $\hat{\varrho}$). To evaluate $\mathbb{E}[\hat{V}^{\pi}_{\mathrm{DR}}]$, it suffices to focus on a single term in Eq. (1), conditioning on $h$:

\[
\begin{aligned}
\mathbb{E}_{(x,\vec{r}) \sim D,\, a \sim p(\cdot \mid x,h)} & \left[ \frac{(r_a - \hat{\varrho}_a)\, I}{\hat{p}} + \hat{\varrho}_{\pi(x)} \right] \\
&= \mathbb{E}_{x,\vec{r},a \mid h} \left[ \frac{(r_a - \varrho_a - \Delta)\, I}{\hat{p}} + \varrho_{\pi(x)} + \Delta \right] \\
&= \mathbb{E}_{x,a \mid h} \left[ \frac{(\varrho_a - \varrho_a)\, I}{\hat{p}} + \Delta \left( 1 - \frac{I}{\hat{p}} \right) \right] + \mathbb{E}_x\bigl[ \varrho_{\pi(x)} \bigr] \\
&= \mathbb{E}_{x \mid h} \left[ \Delta \left( 1 - \frac{p}{\hat{p}} \right) \right] + V^{\pi} \\
&= \mathbb{E}_{x \mid h} [\Delta\delta] + V^{\pi} .
\end{aligned} \tag{2}
\]

Even though $x$ is independent of $h$, the conditioning on $h$ remains in the last line, because $\delta$, $p$ and $\hat{p}$ are functions of $h$. Summing across all terms in Eq. (1), we obtain the following theorem:

Theorem 1. Let $\Delta$ and $\delta$ be defined as above. Then the bias of the doubly robust estimator is

\[ \bigl| \mathbb{E}_S[\hat{V}^{\pi}_{\mathrm{DR}}] - V^{\pi} \bigr| = \frac{1}{|S|} \, \Bigl| \mathbb{E}_S \Bigl[ \textstyle\sum_{(x,h) \in S} \Delta\delta \Bigr] \Bigr| . \]

If the past policy and the past policy estimate are stationary (i.e., independent of $h$), the expression simplifies to

\[ \bigl| \mathbb{E}[\hat{V}^{\pi}_{\mathrm{DR}}] - V^{\pi} \bigr| = \bigl| \mathbb{E}_x[\Delta\delta] \bigr| . \]

In contrast (for simplicity we assume stationarity):

\[ \bigl| \mathbb{E}[\hat{V}^{\pi}_{\mathrm{DM}}] - V^{\pi} \bigr| = \bigl| \mathbb{E}_x[\Delta] \bigr|, \qquad \bigl| \mathbb{E}[\hat{V}^{\pi}_{\mathrm{IPS}}] - V^{\pi} \bigr| = \bigl| \mathbb{E}_x[\varrho_{\pi(x)}\,\delta] \bigr| , \]

where the second equality is based on the observation that IPS is a special case of DR for $\hat{\varrho}_a(x) \equiv 0$.

In general, none of the estimators dominates the others. However, if either $\Delta \approx 0$ or $\delta \approx 0$, the expected value of the doubly robust estimator will be close to the true value, whereas DM requires $\Delta \approx 0$ and IPS requires $\delta \approx 0$. Also, if $\Delta$ is not close to zero but $|\delta| \ll 1$, DR will still outperform DM, and similarly for IPS with the roles of $\Delta$ and $\delta$ reversed. Thus, DR can effectively take advantage of both sources of information for better estimation.
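For concreteness, a minimal sketch of the estimator in Eq. (1) is given below, reusing the illustrative array layout and reward-model interface from the DM/IPS sketch above; it is an illustration of the formula, not a reference implementation.

```python
import numpy as np

def value_dr(contexts, actions, rewards, prop_hat, policy, rho_hat):
    """Doubly robust estimate of the policy value, Eq. (1): use the reward
    model rho_hat as a baseline and apply an importance-weighted correction
    on the examples where the logged action matches the target policy."""
    total = 0.0
    for x, a, r, p_hat in zip(contexts, actions, rewards, prop_hat):
        a_pi = policy(x)                            # action the target policy would take
        correction = (r - rho_hat(x, a)) / p_hat if a == a_pi else 0.0
        total += correction + rho_hat(x, a_pi)      # baseline term rho-hat_{pi(x)}(x)
    return total / len(rewards)
```

With `rho_hat` identically zero the function returns the IPS estimate, and dropping the correction term leaves the DM estimate, mirroring the special cases noted after Theorem 1.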
4. Variance Analysis

In the previous section, we argued that the expected value of $\hat{V}^{\pi}_{\mathrm{DR}}$ compares favorably with IPS and DM. In this section, we look at the variance of DR. Since large deviation bounds have a primary dependence on variance, a lower variance implies a faster convergence rate. We treat only the case with a stationary past policy, and hence drop the dependence on $h$ throughout.

As in the previous section, it suffices to analyze the second moment (and then the variance) of a single term of Eq. (1). We use a decomposition similar to that in Eq. (2). To simplify the derivation we use the notation $\varepsilon = (r_a - \varrho_a) I / \hat{p}$. Note that, conditioned on $x$ and $a$, the expectation of $\varepsilon$ is zero. Hence, we can write the second moment as

\[
\begin{aligned}
\mathbb{E}_{x,\vec{r},a} & \left[ \left( \frac{(r_a - \hat{\varrho}_a)\, I}{\hat{p}} + \hat{\varrho}_{\pi(x)} \right)^{\!2} \right] \\
&= \mathbb{E}_{x,\vec{r},a}[\varepsilon^2] + \mathbb{E}_x\bigl[\varrho_{\pi(x)}^2\bigr] + 2\,\mathbb{E}_{x,a}\!\left[ \varrho_{\pi(x)} \Delta \left(1 - \frac{I}{\hat{p}}\right) \right] + \mathbb{E}_{x,a}\!\left[ \Delta^2 \left(1 - \frac{I}{\hat{p}}\right)^{\!2} \right] \\
&= \mathbb{E}_{x,\vec{r},a}[\varepsilon^2] + \mathbb{E}_x\bigl[\varrho_{\pi(x)}^2\bigr] + 2\,\mathbb{E}_x\bigl[ \varrho_{\pi(x)} \Delta\delta \bigr] + \mathbb{E}_x\!\left[ \Delta^2 \left(1 - \frac{2p}{\hat{p}} + \frac{p}{\hat{p}^2}\right) \right] \\
&= \mathbb{E}_{x,\vec{r},a}[\varepsilon^2] + \mathbb{E}_x\bigl[\varrho_{\pi(x)}^2\bigr] + 2\,\mathbb{E}_x\bigl[ \varrho_{\pi(x)} \Delta\delta \bigr] + \mathbb{E}_x\!\left[ \Delta^2 \left(1 - \frac{2p}{\hat{p}} + \frac{p^2}{\hat{p}^2} + \frac{p(1-p)}{\hat{p}^2}\right) \right] \\
&= \mathbb{E}_{x,\vec{r},a}[\varepsilon^2] + \mathbb{E}_x\Bigl[ \bigl(\varrho_{\pi(x)} + \Delta\delta\bigr)^2 \Bigr] + \mathbb{E}_x\!\left[ \Delta^2 \cdot \frac{p(1-p)}{\hat{p}^2} \right] \\
&= \mathbb{E}_{x,\vec{r},a}[\varepsilon^2] + \mathbb{E}_x\Bigl[ \bigl(\varrho_{\pi(x)} + \Delta\delta\bigr)^2 \Bigr] + \mathbb{E}_x\!\left[ \frac{1-p}{p} \cdot \Delta^2 (1-\delta)^2 \right] .
\end{aligned}
\]

Summing across all terms in Eq. (1) and combining with Theorem 1, we obtain the variance:

Theorem 2. Let $\Delta$, $\delta$ and $\varepsilon$ be defined as above. If the past policy and the policy estimate are stationary, then the variance of the doubly robust estimator is

\[ \mathrm{Var}\bigl(\hat{V}^{\pi}_{\mathrm{DR}}\bigr) = \frac{1}{|S|} \left( \mathbb{E}_{x,\vec{r},a}[\varepsilon^2] + \mathrm{Var}_x\bigl(\varrho_{\pi(x)} + \Delta\delta\bigr) + \mathbb{E}_x\!\left[ \frac{1-p}{p} \cdot \Delta^2 (1-\delta)^2 \right] \right) .\]

Thus, the variance can be decomposed into three terms. The first accounts for randomness in the rewards. The second term is the variance of the estimator due to the randomness in $x$. And the last term can be viewed as the importance weighting penalty. A similar expression can be derived for the IPS estimator:

\[ \mathrm{Var}\bigl(\hat{V}^{\pi}_{\mathrm{IPS}}\bigr) = \frac{1}{|S|} \left( \mathbb{E}_{x,\vec{r},a}[\varepsilon^2] + \mathrm{Var}_x\bigl(\varrho_{\pi(x)} - \varrho_{\pi(x)}\delta\bigr) + \mathbb{E}_x\!\left[ \frac{1-p}{p} \cdot \varrho_{\pi(x)}^2 (1-\delta)^2 \right] \right) .\]

The first term is identical, and the second term will be of similar magnitude to the corresponding term of the DR estimator, provided that $\delta \approx 0$. However, the third term can be much larger for IPS if $p(\pi(x) \mid x) \ll 1$ and $|\Delta|$ is smaller than $\varrho_{\pi(x)}$. In contrast, for the direct method, we obtain

\[ \mathrm{Var}\bigl(\hat{V}^{\pi}_{\mathrm{DM}}\bigr) = \frac{1}{|S|} \mathrm{Var}_x\bigl(\varrho_{\pi(x)} + \Delta\bigr) . \]

Thus, the variance of the direct method does not have terms depending either on the past policy or on the randomness in the rewards. This fact usually suffices to ensure that it is significantly lower than the variance of DR or IPS. However, as we mentioned in the previous section, the bias of the direct method is typically much larger, leading to larger errors in estimating the policy value.

5. Experiments

This section provides empirical evidence for the effectiveness of the DR estimator compared to IPS and DM. We consider two classes of problems: multiclass classification with bandit feedback on public benchmark datasets, and estimation of average user visits to an Internet portal.

5.1. Multiclass Classification with Bandit Feedback

We begin with a description of how to turn a $k$-class classification task into a $k$-armed contextual bandit problem. This transformation allows us to compare IPS and DR using public datasets for both policy evaluation and learning.

5.1.1. DATA SETUP

In a classification task, we assume data are drawn IID from a fixed distribution: $(x, c) \sim D$, where $x \in X$ is the feature vector and $c \in \{1, 2, \ldots, k\}$ is the class label. A typical goal is to find a classifier $\pi : X \to \{1, 2, \ldots, k\}$ minimizing the classification error

\[ e(\pi) = \mathbb{E}_{(x,c) \sim D}\bigl[ I(\pi(x) \neq c) \bigr] . \]

Alternatively, we may turn the data point $(x, c)$ into a cost-sensitive classification example $(x, l_1, l_2, \ldots, l_k)$, where $l_a = I(a \neq c)$ is the loss for predicting $a$. Then a classifier $\pi$ may be interpreted as an action-selection policy, and its classification error is exactly the policy's expected loss. (When considering classification problems, it is more natural to talk about minimizing classification error; this loss minimization problem is symmetric to the reward maximization problem defined in Section 2.)

To construct a partially labeled dataset, exactly one loss component for each example is observed, following the approach of Beygelzimer & Langford (2009). Specifically, given any $(x, l_1, l_2, \ldots, l_k)$, we randomly select a label $a \sim \mathrm{UNIF}(1, 2, \ldots, k)$, and then reveal only the component $l_a$. The final data are thus of the form $(x, a, l_a)$, which is the form of data defined in Section 2. Furthermore, $p(a \mid x) \equiv 1/k$ and is assumed to be known.
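A minimal sketch of this transformation, with hypothetical array names and 0-based labels, is given below; it simply draws a uniform action, reveals the corresponding loss component, and records the known propensity $1/k$.

```python
import numpy as np

def to_bandit_feedback(features, labels, k, seed=0):
    """Turn k-class classification data into partially labeled bandit data.

    For each example, draw a ~ UNIF over the k actions (0-based here), reveal
    only the loss l_a = 1{a != true label}, and record the propensity 1/k.
    """
    rng = np.random.default_rng(seed)
    actions = rng.integers(k, size=len(labels))
    losses = (actions != labels).astype(float)      # revealed component l_a
    propensities = np.full(len(labels), 1.0 / k)    # p(a | x) = 1/k, known
    return actions, losses, propensities
```

The resulting triples $(x, a, l_a)$, together with the constant propensity, are exactly the logged data format of Section 2 (with losses in place of rewards).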
Table 1 summarizes the benchmark problems adopted from the UCI repository (Asuncion & Newman, 2007).

Table 1. Characteristics of benchmark datasets used in Section 5.1.

Dataset      | ecoli | glass | letter | optdigits | page-blocks | pendigits | satimage | vehicle | yeast
Classes (k)  | 8     | 6     | 26     | 10        | 5           | 10        | 6        | 4       | 10
Dataset size | 336   | 214   | 20000  | 5620      | 5473        | 10992     | 6435     | 846     | 1484

5.1.2. POLICY EVALUATION

Here, we investigate whether the DR technique indeed gives more accurate estimates of the policy value (or classification error in our context). For each dataset:

1. We randomly split data into training and test sets of (roughly) the same size;
2. On the training set with fully revealed losses, we run a direct loss minimization (DLM) algorithm of McAllester et al. (2011) to obtain a classifier (see Appendix A for details). This classifier constitutes the policy $\pi$ which we evaluate on test data;
3. We compute the classification error on fully observed test data. This error is treated as the ground truth for comparing various estimates;
4. Finally, we apply the transformation in Section 5.1.1 to the test data to obtain a partially labeled set, from which DM, IPS, and DR estimates are computed.

Both DM and DR require estimating the expected conditional loss, denoted $l(x, a)$, for a given $(x, a)$. We use a linear loss model $\hat{l}(x, a) = w_a \cdot x$, parameterized by $k$ weight vectors $\{w_a\}_{a \in \{1,\ldots,k\}}$, and use least-squares ridge regression to fit $w_a$ based on the training set.

Step 4 is repeated 500 times, and the resulting bias and rmse (root mean squared error) are reported in Fig. 1.

[Figure 1. Bias (upper) and rmse (lower) of the three estimators for classification error; bars for IPS, DR, and DM on each benchmark dataset. See Table 2 for precise numbers.]

Table 2. Comparison of results in Figure 1.

Dataset    | ecoli | glass | letter | optdigits | page-blocks | pendigits | satimage | vehicle | yeast
bias (IPS) | 0.004 | 0.003 | 0      | 0         | 0           | 0         | 0        | 0       | 0.006
bias (DR)  | 0.002 | 0.001 | 0.001  | 0         | 0           | 0         | 0        | 0.001   | 0.007
bias (DM)  | 0.129 | 0.147 | 0.213  | 0.175     | 0.063       | 0.208     | 0.174    | 0.281   | 0.193
rmse (IPS) | 0.137 | 0.194 | 0.049  | 0.023     | 0.012       | 0.015     | 0.021    | 0.062   | 0.099
rmse (DR)  | 0.101 | 0.142 | 0.03   | 0.023     | 0.011       | 0.016     | 0.019    | 0.058   | 0.076
rmse (DM)  | 0.129 | 0.147 | 0.213  | 0.175     | 0.063       | 0.208     | 0.174    | 0.281   | 0.193

As predicted by analysis, both IPS and DR are unbiased, since the probability estimate $1/k$ is accurate. In contrast, the linear loss model fails to capture the classification error accurately, and as a result, DM suffers a much larger bias. While IPS and DR estimators are unbiased, it is apparent from the rmse plot that the DR estimator enjoys a lower variance. As we shall see next, such an effect is substantial when it comes to policy optimization.
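The linear loss model described above has a closed-form fit. The sketch below is an illustration using the standard ridge-regression solution, with the regularization strength `lam` treated as an input (the value used in the experiments is not assumed here); the resulting $\hat{l}(x,a)$ can be plugged into the DM and DR estimators sketched earlier.

```python
import numpy as np

def fit_linear_loss_model(features, losses, lam=1.0):
    """Fit l_hat(x, a) = w_a . x by least-squares ridge regression.

    features: (n, d) array; losses: (n, k) array of fully revealed losses.
    (In Section 5.1.3 each w_a is instead fit only on the examples whose
    logged action equals a.)  lam is the ridge penalty, an assumed input.
    """
    n, d = features.shape
    gram = features.T @ features + lam * np.eye(d)        # (d, d), shared across actions
    weights = np.linalg.solve(gram, features.T @ losses)  # (d, k); column a is w_a
    return lambda x, a: float(x @ weights[:, a])
```

For the evaluation protocol of this subsection, this model is trained on the fully labeled training half and then used inside the DM and DR estimates computed on the transformed test half.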
5.1.3. POLICY OPTIMIZATION

We now consider policy optimization (classifier learning). Since DM is significantly worse on all datasets, as indicated in Fig. 1, we focus on the comparison between IPS and DR. Here, we apply the data transformation in Section 5.1.1 to the training data, and then learn a classifier based on the loss estimated by IPS and DR, respectively. Specifically, for each dataset, we repeat the following steps 30 times:

1. We randomly split the data into training (70%) and test (30%) sets;
2. We apply the transformation in Section 5.1.1 to the training data to obtain a partially labeled set;
3. We then use the IPS and DR estimators to impute unrevealed losses in the training data;
4. Two cost-sensitive multiclass classification algorithms are used to learn a classifier from the losses completed by either IPS or DR: the first is DLM (McAllester et al., 2011), the other is the Filter Tree reduction of Beygelzimer et al. (2008) applied to a decision tree (see Appendix B for more details);
5. Finally, we evaluate the learned classifiers on the test data to obtain the classification error.

Again, we use least-squares ridge regression to build a linear loss estimator: $\hat{l}(x, a) = w_a \cdot x$. However, since the training data is partially labeled, $w_a$ is fitted using only the training data $(x, a', l_{a'})$ for which $a = a'$.

Average classification errors (obtained in Step 5 above) of the 30 runs are plotted in Fig. 2. Clearly, for policy optimization, the advantage of DR is even greater than for policy evaluation. On all datasets, DR provides substantially more reliable loss estimates than IPS, and results in significantly improved classifiers.

[Figure 2. Classification error of DLM (upper) and Filter Tree (lower); bars for the IPS and DR versions of each learner and for the Offset Tree on each benchmark dataset. Note that the representations used by DLM and the trees differ radically, conflating any comparison between the approaches. However, the Offset and Filter Tree approaches share a similar representation, so differences in performance are purely a matter of superior optimization. See Table 3 for precise numbers.]

Table 3. Comparison of results in Figure 2.

Dataset     | ecoli   | glass   | letter  | optdigits | page-blocks | pendigits | satimage | vehicle | yeast
IPS (DLM)   | 0.52933 | 0.6738  | 0.93015 | 0.64403   | 0.08913     | 0.5358    | 0.40223  | 0.39507 | 0.72973
DR (DLM)    | 0.28853 | 0.50157 | 0.60704 | 0.09033   | 0.0831      | 0.12663   | 0.17133  | 0.31603 | 0.5292
IPS (FT)    | 0.46563 | 0.90783 | 0.9393  | 0.84017   | 0.3701      | 0.73123   | 0.69313  | 0.63517 | 0.81147
DR (FT)     | 0.32583 | 0.45807 | 0.47197 | 0.17793   | 0.05283     | 0.0956    | 0.18647  | 0.38753 | 0.59053
Offset Tree | 0.34007 | 0.52843 | 0.5837  | 0.3251    | 0.04483     | 0.15003   | 0.20957  | 0.37847 | 0.5895

Fig. 2 also includes the classification error of the Offset Tree reduction, which is designed specifically for policy optimization with partially labeled data. (We used decision trees as the base learner in Offset Trees. The numbers reported here are not identical to those of Beygelzimer & Langford (2009), probably because the filter-tree structures in our implementation were different.) While the IPS versions of DLM and Filter Tree are rather weak, the DR versions are competitive with the Offset Tree on all datasets, and in some cases significantly outperform it.

Finally, we note that DR provided similar improvements to two very different algorithms, one based on gradient descent, the other based on tree induction. This suggests the generality of DR when combined with different algorithmic choices.
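Step 3 above amounts to completing each partially labeled example into a full cost vector. The sketch below illustrates the doubly robust imputation (the IPS imputation is the special case of a zero loss model); the function and argument names are illustrative, not taken from the original implementation.

```python
import numpy as np

def impute_costs_dr(features, actions, losses, propensities, loss_model, k):
    """Complete the partially observed losses with doubly robust estimates.

    For example i with logged action a_i, observed loss l_i and propensity p_i,
    the imputed cost of action a is
        c[i, a] = l_hat(x_i, a) + 1{a == a_i} * (l_i - l_hat(x_i, a_i)) / p_i,
    i.e., Eq. (1) applied per example and per candidate action.  With
    loss_model(x, a) == 0 everywhere this reduces to the IPS imputation.
    """
    n = len(actions)
    costs = np.empty((n, k))
    for i, (x, a, l, p) in enumerate(zip(features, actions, losses, propensities)):
        for b in range(k):
            costs[i, b] = loss_model(x, b)
        costs[i, a] += (l - loss_model(x, a)) / p
    return costs   # feed to any cost-sensitive learner (DLM, Filter Tree, ...)
```

Averaging `costs[i, pi(x_i)]` over the examples recovers exactly the DR value estimate of a policy $\pi$, so a cost-sensitive learner trained on these completed losses is optimizing the DR estimate of its own loss.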
5.2. Estimating Average User Visits

The next problem we consider is estimating the average number of user visits to a popular Internet portal. Real user visits to the website were recorded for about 4 million bcookies randomly selected from all bcookies during March 2010. (A bcookie is a unique string that identifies a user. Strictly speaking, one user may correspond to multiple bcookies, but it suffices to equate a bcookie with a user for our purposes here.) Each bcookie is associated with a sparse binary feature vector of size around 5000. These features describe browsing behavior as well as other information (such as age, gender, and geographical location) of the bcookie. We chose a fixed time window in March 2010 and calculated the number of visits by each selected bcookie during this window.

To summarize, the dataset contains $N = 3854689$ data points: $D = \{(b_i, x_i, v_i)\}_{i=1,\ldots,N}$, where $b_i$ is the $i$-th (unique) bcookie, $x_i$ is the corresponding binary feature vector, and $v_i$ is the number of visits.

If we can sample from $D$ uniformly at random, the sample mean of $v_i$ will be an unbiased estimate of the true average number of user visits, which is 23.8 in this problem. However, in various situations it may be difficult or impossible to ensure a uniform sampling scheme due to practical constraints, and thus the sample mean may not reflect the true quantity of interest. This is known as covariate shift, a special case of the problem formulated in Section 2 with $k = 2$ arms. Formally, the partially labeled data consist of tuples $(x_i, a_i, r_i)$, where $a_i \in \{0, 1\}$ indicates whether bcookie $b_i$ is sampled, $r_i = a_i v_i$ is the observed number of visits, and $p_i$ is the probability that $a_i = 1$. The goal here is to evaluate the value of the constant policy $\pi(x) \equiv 1$.

To define the sampling probabilities $p_i$, we adopted an approach similar to that of Gretton et al. (2008). In particular, we obtained the first principal component (denoted $\bar{x}$) of all features $\{x_i\}$, and projected all data onto $\bar{x}$. Let $\mathcal{N}$ be a univariate normal distribution with mean $m + (\bar{m} - m)/3$ and standard deviation $(\bar{m} - m)/4$, where $m$ and $\bar{m}$ were the minimum and the mean of the projected values. Then $p_i = \min\{\mathcal{N}(x_i \cdot \bar{x}), 1\}$ was the sampling probability of the $i$-th bcookie, $b_i$.

To control the data size, we randomly subsampled a fraction $f \in \{0.0001, 0.0005, 0.001, 0.005, 0.01, 0.05\}$ of the entire dataset $D$. For each bcookie $b_i$ in this subsample, we set $a_i = 1$ with probability $p_i$, and $a_i = 0$ otherwise. We then calculated the IPS and DR estimates on this subsample. The whole process was repeated 100 times.

The DR estimator required building a reward model $\hat{\varrho}(x)$ which, given feature vector $x$, predicts the average number of visits. Again, least-squares ridge regression was used to fit a linear model $\hat{\varrho}(x) = w \cdot x$ from the sampled data.

Fig. 3 summarizes the estimation error of the two methods with increasing data size. For both IPS and DR, the estimation error goes down with more data. In terms of rmse, the DR estimator is consistently better than IPS, especially when the dataset size is smaller. The DR estimator often reduces the rmse by between 10% and 20%, and on average by 13.6%. Comparing the bias and std metrics makes it clear that DR's gain in accuracy came from a lower variance, which accelerated convergence of the estimator to the true value. These results confirm our analysis that DR tends to reduce variance provided that a reasonable reward estimator is available.

[Figure 3. Comparison of IPS and DR as a function of the subsample rate: rmse (top), bias (bottom); a companion panel shows the std of the two estimators. The ground truth value is 23.8.]
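In this two-arm setting, the DR estimate of the constant policy $\pi(x) \equiv 1$ takes a particularly simple form. The sketch below is an illustration on synthetic stand-in data (the portal data are proprietary), using ordinary least squares in place of ridge regression for brevity; all names and numbers are hypothetical.

```python
import numpy as np

def estimate_average_visits(x, visits, sampled, p, w):
    """IPS and DR estimates of the mean number of visits under the constant
    policy pi(x) = 1, when example i is observed only if sampled[i] == 1
    (which happens with known probability p[i]).  w parameterizes the linear
    reward model rho_hat(x) = w . x fit on the sampled data."""
    rho_hat = x @ w
    ips = float(np.mean(sampled * visits / p))
    dr = float(np.mean(rho_hat + sampled * (visits - rho_hat) / p))
    return ips, dr

# Synthetic stand-in: visits correlate with the features, and the sampling
# probabilities depend on the same features (covariate shift).
rng = np.random.default_rng(1)
n, d = 50000, 5
x = rng.binomial(1, 0.3, size=(n, d)).astype(float)
visits = rng.poisson(1.0 + 3.0 * x[:, 0])
p = np.clip(0.05 + 0.5 * x[:, 0], None, 1.0)     # non-uniform, feature-dependent sampling
sampled = rng.binomial(1, p)
w = np.linalg.lstsq(x[sampled == 1], visits[sampled == 1], rcond=None)[0]
print(estimate_average_visits(x, visits, sampled, p, w), visits.mean())
```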
6. Conclusions

Doubly robust policy estimation is an effective technique which virtually always improves on the widely used inverse propensity score method. Our analysis shows that doubly robust methods tend to give more reliable and accurate estimates. The theory is corroborated by experiments on both benchmark data and a large-scale, real-world problem.

In the future, we expect the DR technique to become common practice in improving contextual bandit algorithms. As an example, it would be interesting to develop a variant of the Offset Tree that can take advantage of better reward models, rather than a crude, constant reward estimate (Beygelzimer & Langford, 2009).

Acknowledgements

We thank Deepak Agarwal for first bringing the doubly robust technique to our attention.

A. Direct Loss Minimization

Given cost-sensitive multiclass classification data $\{(x, l_1, \ldots, l_k)\}$, we perform approximate gradient descent on the policy loss (or classification error). In the experiments of Section 5.1, the policy $\pi$ is specified by $k$ weight vectors $\theta_1, \ldots, \theta_k$. Given $x \in X$, the policy predicts as follows:

\[ \pi(x) = \arg\max_{a \in \{1,\ldots,k\}} \{ x \cdot \theta_a \} . \]

To optimize $\theta_a$, we adapt the "towards-better" version of the direct loss minimization method of McAllester et al. (2011) as follows: given any data point $(x, l_1, \ldots, l_k)$ and the current weights $\theta_a$, the weights are adjusted by

\[ \theta_{a_1} \leftarrow \theta_{a_1} + \eta x, \qquad \theta_{a_2} \leftarrow \theta_{a_2} - \eta x , \]

where $a_1 = \arg\max_a \{ x \cdot \theta_a - \epsilon l_a \}$, $a_2 = \arg\max_a \{ x \cdot \theta_a \}$, $\eta \in (0, 1)$ is a decaying learning rate, and $\epsilon > 0$ is an input parameter.

For computational reasons, we actually performed batched updates rather than incremental updates. We found that the learning rate $\eta = t^{-0.3}/2$, where $t$ is the batched iteration, worked well across all datasets. The parameter $\epsilon$ was fixed to 0.1 for all datasets. Updates continued until the weights converged.

Furthermore, since the policy loss is not convex in the weight vectors, we repeated the algorithm 20 times with randomly perturbed starting weights and then returned the best run's weights according to the learned policy's loss on the training data. We also tried using a holdout validation set for choosing the best weights out of the 20 candidates, but did not observe benefits from doing so.
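A sketch of the incremental form of this update is given below; the batching, convergence check, and random restarts used in the experiments are omitted, and the array layout is an assumption of the illustration.

```python
import numpy as np

def dlm_epoch(theta, features, losses, eta, eps=0.1):
    """One pass of the 'towards-better' direct loss minimization update.

    theta: (k, d) weight matrix, one row per class.  For each example,
      a1 = argmax_a (x . theta_a - eps * l_a)   (the 'towards-better' direction)
      a2 = argmax_a (x . theta_a)               (the current prediction);
    theta_{a1} is moved towards x and theta_{a2} away from it.
    """
    for x, l in zip(features, losses):
        scores = theta @ x
        a1 = int(np.argmax(scores - eps * l))
        a2 = int(np.argmax(scores))
        theta[a1] += eta * x
        theta[a2] -= eta * x
    return theta
```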
B. Filter Tree

The Filter Tree (Beygelzimer et al., 2008) is a reduction from cost-sensitive classification to binary classification. Its input is of the same form as for Direct Loss Minimization, but its output is a binary-tree based predictor in which each node of the Filter Tree uses a binary classifier—in this case the J48 decision tree implemented in Weka 3.6.4 (Hall et al., 2009). Thus, there are 2-class decision trees in the nodes, with the nodes arranged as per a Filter Tree. Training in a Filter Tree proceeds bottom-up, with each trained node filtering the examples observed by its parent until the entire tree is trained. Testing proceeds root-to-leaf, implying that the test-time computation is logarithmic in the number of classes. We did not test the all-pairs Filter Tree, which has test-time computation linear in the number of classes, similar to DLM.

References

Asuncion, A. and Newman, D. J. UCI machine learning repository, 2007. http://www.ics.uci.edu/~mlearn/MLRepository.html.

Auer, P., Cesa-Bianchi, N., Freund, Y., and Schapire, R. E. The nonstochastic multiarmed bandit problem. SIAM Journal on Computing, 32(1):48–77, 2002.

Beygelzimer, A. and Langford, J. The offset tree for learning with partial labels. In KDD, pp. 129–138, 2009.

Beygelzimer, A., Langford, J., and Ravikumar, P. Multiclass classification with filter-trees. Unpublished technical report: http://www.stat.berkeley.edu/~pradeepr/paperz/filter-tree.pdf, 2008.

Cassel, C. M., Särndal, C. E., and Wretman, J. H. Some results on generalized difference estimation and generalized regression estimation for finite populations. Biometrika, 63:615–620, 1976.

Chan, D., Ge, R., Gershony, O., Hesterberg, T., and Lambert, D. Evaluating online ad campaigns in a pipeline: causal models at scale. In KDD, 2010.

Gretton, A., Smola, A. J., Huang, J., Schmittfull, M., Borgwardt, K., and Schölkopf, B. Covariate shift and local learning by distribution matching. In Dataset Shift in Machine Learning, pp. 131–160. MIT Press, 2008.

Hall, M., Frank, E., Holmes, G., Pfahringer, B., Reutemann, P., and Witten, I. H. The WEKA data mining software: An update. SIGKDD Explorations, 11(1):10–18, 2009.

Hazan, E. and Kale, S. Better algorithms for benign bandits. In SODA, pp. 38–47, 2009.

Horvitz, D. G. and Thompson, D. J. A generalization of sampling without replacement from a finite universe. J. Amer. Statist. Assoc., 47:663–685, 1952.

Kang, J. D. Y. and Schafer, J. L. Demystifying double robustness: a comparison of alternative strategies for estimating a population mean from incomplete data. Statist. Sci., 22(4):523–539, 2007. With discussions.

Lambert, D. and Pregibon, D. More bang for their bucks: assessing new features for online advertisers. In ADKDD, 2007.

Langford, J. and Zhang, T. The epoch-greedy algorithm for contextual multi-armed bandits. In NIPS, pp. 1096–1103, 2008.

Langford, J., Strehl, A. L., and Wortman, J. Exploration scavenging. In ICML, pp. 528–535, 2008.

Lunceford, J. K. and Davidian, M. Stratification and weighting via the propensity score in estimation of causal treatment effects: A comparative study. Statistics in Medicine, 23(19):2937–2960, 2004.

McAllester, D., Hazan, T., and Keshet, J. Direct loss minimization for structured prediction. In NIPS, pp. 1594–1602, 2011.

Robins, J. and Rotnitzky, A. Semiparametric efficiency in multivariate regression models with missing data. J. Amer. Statist. Assoc., 90:122–129, 1995.

Robins, J. M., Rotnitzky, A., and Zhao, L. P. Estimation of regression coefficients when some regressors are not always observed. J. Amer. Statist. Assoc., 89(427):846–866, 1994.

Strehl, A., Langford, J., Li, L., and Kakade, S. Learning from logged implicit exploration data. In NIPS, pp. 2217–2225, 2011.