Theory of Disagreement-Based Active Learning

Research Interests: My general research interest is in systems that can improve their performance through experience, a subject known as machine learning. I focus on the statistical analysis of machine learning. The essential questions I would like to answer are: "What can we learn from empirical observation and experimentation?" and "How much observation or experimentation is necessary and sufficient to learn it?" This overarching theme overlaps with several academic disciplines, including statistical learning theory, artificial intelligence, statistical inference, algorithmic and statistical information theories, probability theory, philosophy of science, and cognitive science.

Yang, L., Hanneke, S., and Carbonell, J. (2010). Bayesian active learning using arbitrary binary-valued queries. In Proceedings of the 21st International Conference on Algorithmic Learning Theory (ALT). [pdf] [ps] Also available in the language of information theory. [pdf] [ps] (A toy sketch of this query model appears below, after the teaching list.)

Teaching:
Spring 2018: ORF 525, Statistical Learning and Nonparametric Estimation.
Spring 2012: 36-752, Advanced Probability Overview.
Fall 2011: 36-755, Advanced Statistical Theory I.
Spring 2011: 36-752, Advanced Probability Overview.
Fall 2010, Mini 1: 36-781, Advanced Statistical Methods I: Active Learning.
Fall 2010, Mini 2: 36-782, Advanced Statistical Methods II: Advanced Topics in Machine Learning Theory.
Spring 2010: 36-754, Advanced Probability II: Stochastic Processes.
Fall 2009: 36-752, Advanced Probability Overview.
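
As a toy illustration of the binary-query setting in the paper above (not the paper's construction): when any yes/no question about the target is allowed, a learner can repeatedly split the remaining prior mass roughly in half, so the expected number of queries scales with the entropy of the prior rather than with 1/epsilon. This is a minimal sketch under that assumption; the finite hypothesis class, the prior, and the function names are all illustrative.

```python
import math

# Minimal sketch: identify a target hypothesis using arbitrary binary-valued
# queries, by greedily bisecting the posterior mass (a generalized binary
# search). Illustrative assumption, not code from the cited paper.

def identify(prior, answer):
    """prior: dict hypothesis -> probability.
    answer(S): arbitrary binary query; True iff the target lies in subset S."""
    alive = dict(prior)
    queries = 0
    while len(alive) > 1:
        total = sum(alive.values())
        # Greedily build a subset S holding roughly half the remaining mass.
        S, mass = set(), 0.0
        for h, p in sorted(alive.items(), key=lambda kv: -kv[1]):
            if mass < total / 2:
                S.add(h)
                mass += p
        queries += 1
        if answer(S):
            alive = {h: p for h, p in alive.items() if h in S}
        else:
            alive = {h: p for h, p in alive.items() if h not in S}
    (h_hat,) = alive
    return h_hat, queries

if __name__ == "__main__":
    prior = {f"h{i}": 1 / 16 for i in range(16)}  # uniform prior: entropy = 4 bits
    target = "h11"
    h_hat, q = identify(prior, lambda S: target in S)
    print(h_hat, q, "queries; prior entropy =", math.log2(16), "bits")
```

With a uniform prior over 16 hypotheses, the sketch identifies the target in exactly 4 queries, matching the 4 bits of prior entropy.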

At ALT 2010 and the Machine Learning Summer School 2010 in Canberra, Australia, I gave a tutorial on the theory of active learning. [Slides]

Theory of Disagreement-Based Active Learning. [pdf] [ps] This is a survey of some recent advances in the theory of active learning, with particular emphasis on label complexity guarantees for disagreement-based methods. The current version (v1.1) was updated on September 22, 2014. Some significant recent advances in active learning not yet covered by the survey: [ZC14], [WHE-Y15], [HY15]. A version of this survey was published in the Foundations and Trends in Machine Learning series, Volume 7, Issues 2-3, 2014.
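
To make "disagreement-based" concrete, here is a minimal sketch in the spirit of the CAL algorithm (Cohn, Atlas, and Ladner) for one-dimensional threshold classifiers: the learner tracks the version space of thresholds consistent with the labels seen so far and requests a label only when a point falls in the region of disagreement. The data-generating setup and names are illustrative assumptions, not code from the survey.

```python
import numpy as np

# Sketch of disagreement-based active learning for thresholds h_t(x) = 1[x >= t].
# The version space is the interval (lo, hi) of thresholds consistent with all
# observed labels; points outside it are labeled identically by every consistent
# hypothesis, so their labels are inferred for free.

def cal_threshold(stream, oracle, n_rounds):
    lo, hi = 0.0, 1.0          # version space: consistent thresholds in [lo, hi]
    labels_requested = 0
    for _ in range(n_rounds):
        x = next(stream)
        if not (lo < x < hi):  # outside the region of disagreement: skip query
            continue
        y = oracle(x)          # request a label only inside the disagreement region
        labels_requested += 1
        if y == 1:
            hi = min(hi, x)    # true threshold is at most x
        else:
            lo = max(lo, x)    # true threshold is greater than x
    return (lo + hi) / 2, labels_requested

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_t = 0.37
    stream = iter(rng.uniform(0.0, 1.0, size=10_000))
    oracle = lambda x: int(x >= true_t)
    t_hat, n_labels = cal_threshold(stream, oracle, n_rounds=10_000)
    print(f"estimate={t_hat:.4f}, labels requested={n_labels}")
```

On a uniform unlabeled stream of n points, this requests only on the order of log n labels in expectation, the kind of exponential label savings that label complexity guarantees for disagreement-based methods quantify for much broader hypothesis classes.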

Hanneke, S., Kanade, V., and Yang, L. (2015). Learning with a drifting target concept. In Proceedings of the 26th International Conference on Algorithmic Learning Theory (ALT). [pdf] [ps] [arXiv] See also this note on a result on the sample complexity of efficient agnostic learning, implicit in the concept drift paper above: [pdf]

Short biography: Before arriving at TTIC, I was an independent scientist based in Princeton from 2012 to 2018, aside from a brief one-semester stint as a visiting professor at Princeton University in 2018. Prior to that, from 2009 to 2012, I was a visiting assistant professor in the Department of Statistics at Carnegie Mellon University, also affiliated with the Machine Learning Department. I received my Ph.D. in 2009 from the Machine Learning Department at Carnegie Mellon University, where I was co-advised by Eric Xing and Larry Wasserman. My doctoral thesis focused on the theoretical foundations of active learning.

From 2002 to 2005, I studied computer science at the University of Illinois at Urbana-Champaign (UIUC), where I worked with Prof. Dan Roth and the students of the Cognitive Computation Group on semi-supervised learning. Before that, I studied computer science at Webster University in St. Louis.
