PhD Thesis Defense

Jingyan Wang
Robotics Institute, Carnegie Mellon University
Wednesday, June 9
12:00 pm to 1:00 pm
Understanding and Mitigating Biases in Evaluation

Abstract:
Many real-life problems involve collecting and aggregating evaluations from people, such as hiring, peer grading, and conference peer review. In this thesis, we focus on three sources of bias that arise in such problems and propose methods to mitigate them.

First, we study human bias, that is, bias in the evaluations reported by the evaluators. We consider miscalibration, where different people use different calibration scales. We propose randomized algorithms that provably extract useful information under arbitrary miscalibration, and subsequently propose a heuristic to correct the scores computationally. We also consider the bias induced by the outcomes people experience, and propose an adaptive algorithm that debiases people's ratings under mild assumptions on the biases.

Second, we study estimation bias, where algorithms yield different performance on different subgroups of the population. We analyze the statistical bias (defined as the expected value of the estimate minus the true value) of the maximum-likelihood estimator on pairwise-comparison data, and then propose a simple modification of the estimator that reduces this bias.

Third, we study policy bias, where the design of the evaluation procedure may induce undesirable outcomes. We compare two schemes for distributing a large-scale, multi-faceted evaluation task among many evaluators, in terms of accuracy and fairness.

Finally, we briefly describe our outreach efforts to reduce the bias caused by alphabetical-ordering authorship in scientific publications, and to analyze the gender distribution of conference paper awards.
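To make the notion of statistical bias concrete, here is a small illustrative sketch (not code from the thesis, whose model and estimator may differ): in a two-item Bradley-Terry-style setting, where item 1 beats item 2 with probability sigmoid(delta), the plug-in MLE logit(k/n) of the score gap delta is biased at finite sample size n, even though k/n is an unbiased estimate of the win probability. The clipping of k/n away from 0 and 1 is an assumption added here to keep the estimate finite.

```python
import math

def logit(p):
    """Log-odds of a probability p in (0, 1)."""
    return math.log(p / (1 - p))

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def mle_bias(delta, n):
    """Exact statistical bias E[delta_hat] - delta of the plug-in
    estimator delta_hat = logit(k/n), where k ~ Binomial(n, sigmoid(delta))
    counts wins of item 1 over item 2, and k/n is clipped away from
    0 and 1 so the estimate stays finite (an assumption of this sketch)."""
    p = sigmoid(delta)
    eps = 1 / (2 * n)
    expected = 0.0
    for k in range(n + 1):
        prob = math.comb(n, k) * p**k * (1 - p) ** (n - k)
        phat = min(max(k / n, eps), 1 - eps)
        expected += prob * logit(phat)
    return expected - delta

# With a true gap of delta = 1 and only n = 10 comparisons, the
# expected estimate overshoots the true value; at delta = 0 the
# bias vanishes by symmetry.
print(mle_bias(delta=1.0, n=10))
print(mle_bias(delta=0.0, n=10))
```

Because logit is nonlinear, averaging it over the binomial distribution of k/n does not recover logit of the average; this gap is exactly the statistical bias the abstract refers to, and it shrinks as n grows.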

Thesis Committee Members:
Nihar Shah, Chair
Artur Dubrawski
Jeff Schneider
Ariel Procaccia, Harvard University
Avrim Blum, Toyota Technological Institute at Chicago