               Robert D. Nowak

Video: 2-Minute Explanation of Machine Learning Research

Video: Machine Learning for Data Science

Sparsity and Active Learning
Sparsity is a form of hidden low-dimensional structure that is commonly exploited in machine learning and signal processing.  Lasso Regression and Compressed Sensing are two powerful tools that employ nonlinear methods to take advantage of sparsity. 
Here is our demo that aired on NPR illustrating these tools using audio signals and images. We are currently researching adaptive data collection processes that use information gleaned from previously collected data to focus and optimize the gathering of new data. Adaptivity can significantly improve the tradeoff between error rate and signal-to-noise ratio, as shown in our recent papers on adaptive compressed sensing and multi-armed bandits. This work is motivated in part by applications in systems biology, where the available experimental techniques dictate how cellular systems can be measured or probed.
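To give a flavor of how sparsity is exploited, here is a small illustrative sketch (not our code, and with arbitrary problem sizes and penalty): recovering a sparse signal from far fewer random measurements than unknowns by solving the lasso with iterative soft-thresholding.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of the l1 norm: shrink each entry toward zero by t.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista_lasso(A, b, lam, n_iters=3000):
    # Iterative soft-thresholding for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
    for _ in range(n_iters):
        x = soft_threshold(x - step * A.T @ (A @ x - b), step * lam)
    return x

rng = np.random.default_rng(0)
n, m, k = 64, 256, 5                           # measurements, signal length, sparsity
A = rng.standard_normal((n, m)) / np.sqrt(n)   # random sensing matrix
x_true = np.zeros(m)
x_true[rng.choice(m, size=k, replace=False)] = 3.0 * rng.standard_normal(k)
b = A @ x_true                                 # noiseless compressive measurements
x_hat = ista_lasso(A, b, lam=0.01)
```

Even though the system has four times as many unknowns as equations, the nonlinear l1-regularized solver recovers the sparse signal almost exactly; a least-squares solve would not.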

Learning Graphs and Networks
Graphical models are used to represent high dimensional systems of all sorts, including communication, social, and biological networks.  Recently, researchers have shown that it is possible to identify the structure of sparse graphical models from very few measurements of the underlying process.  However, this requires simultaneously measuring all the variables involved, an impossibility in many real-world systems.  We are exploring new theory and methods for graph and network inference from incomplete or missing data. For example, our Sigcomm paper shows that the topology of the Internet can be identified from highly incomplete distance measurements, and we also study the problem of estimating the evolutionary history of a set of species (phylogenetic tree estimation) from genomic data.
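The basic idea behind graphical model structure learning can be sketched in a few lines (an illustration only, with a toy chain graph and a simple thresholding rule standing in for the l1-penalized estimators used in practice): for Gaussian data, edges of the graph correspond to nonzero entries of the precision (inverse covariance) matrix.

```python
import numpy as np

# Build a chain-structured Gaussian graphical model: the precision
# (inverse covariance) matrix is nonzero only on the diagonal and
# between adjacent variables, so the graph is a path 0-1-2-3-4.
rng = np.random.default_rng(0)
p = 5
prec_true = np.eye(p)
for i in range(p - 1):
    prec_true[i, i + 1] = prec_true[i + 1, i] = 0.4
cov_true = np.linalg.inv(prec_true)

# Draw samples and estimate the precision matrix from data.
X = rng.multivariate_normal(np.zeros(p), cov_true, size=5000)
prec_hat = np.linalg.inv(np.cov(X, rowvar=False))

# Recover the graph: declare an edge wherever the estimated precision
# entry is clearly nonzero (thresholding stands in for an l1 penalty).
edges = np.abs(prec_hat) > 0.2
np.fill_diagonal(edges, False)
```

With enough fully observed samples the chain structure is recovered exactly; the hard questions we study arise when some variables are never observed or only partial distance information is available.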

Interactive Machine Learning with Humans
Human-computer interaction is an increasingly important area of research.  From recommendation systems to human-subject studies in cognitive science, the need to formally study human reasoning and judgment through the lens of statistical learning theory has never been greater.  Asking humans to judge options on a numeric scale is notoriously problematic: such data are plagued by calibration issues, errors, and inconsistencies.  For these reasons, pairwise comparisons are often preferred in practice. In a paired comparison, a person is asked to compare just two objects and decide which is preferred. We are investigating the basic theory of statistical learning from pairwise comparisons, with applications to ranking, preference learning, and derivative-free optimization. We've put some of our theory into practice with an iPad app that learns your beer preferences.
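A classic way to turn pairwise comparisons into a ranking is the Bradley-Terry model, sketched below (an illustration, not the app's algorithm; the latent scores and sample sizes are made up): each item gets a score, the probability that one item beats another depends on the score difference, and the scores are fit by maximum likelihood.

```python
import numpy as np

def fit_bradley_terry(comparisons, n_items, n_iters=500, lr=1.0):
    # Bradley-Terry model: P(i beats j) = sigmoid(s_i - s_j).
    # Fit the latent scores s by gradient ascent on the log-likelihood.
    s = np.zeros(n_items)
    for _ in range(n_iters):
        grad = np.zeros(n_items)
        for w, l in comparisons:
            p_miss = 1.0 / (1.0 + np.exp(s[w] - s[l]))  # 1 - P(w beats l)
            grad[w] += p_miss
            grad[l] -= p_miss
        s += lr * grad / len(comparisons)
        s -= s.mean()  # scores are only identified up to a constant shift
    return s

# Simulate noisy pairwise comparisons from hypothetical latent preferences.
rng = np.random.default_rng(0)
true_scores = np.array([2.0, 1.0, 0.0, -1.0])
pairs = []
for _ in range(500):
    i, j = rng.choice(4, size=2, replace=False)
    p_i_wins = 1.0 / (1.0 + np.exp(-(true_scores[i] - true_scores[j])))
    pairs.append((i, j) if rng.random() < p_i_wins else (j, i))
scores = fit_bradley_terry(pairs, n_items=4)
```

Even though every individual comparison is noisy, the fitted scores recover the correct ranking; the statistical questions we study concern how many comparisons are needed and which pairs to ask about.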