
Learning under complex structure

About

Some of the most significant advances in the recent history of statistics and data science have relied on our ability to express and exploit structure in data. This structure may be simple, as in the case of parametric models such as linear regression, low-rank matrix estimation, or principal component analysis, where the data are assumed to be the superposition of a linear-algebraic structure and some well-behaved (e.g., Gaussian) noise. In other cases, the structure is simple only once the correct data representation is known, as in wavelet thresholding for natural image denoising, where the signal becomes sparse in a wavelet basis. The recent explosion of routinely collected data has led scientists to contemplate increasingly sophisticated structural assumptions. In some cases, such as models with latent variables, new models aim to capture heterogeneity in the data; in others, complex structures arise naturally as algebraic structures governed by the rigid laws of physics. Understanding how to harness and exploit such structure is key to improving the prediction accuracy of learning procedures. The ultimate goal is to develop a set of tools that leverage underlying complex structures to pool information across observations and thereby improve both the statistical accuracy and the computational efficiency of the deployed methods. Bringing together computer scientists, mathematicians, and statisticians will have a transformative impact on this fast-developing avenue of research.
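
As a concrete illustration of the point above that structure can become simple in the right representation, the sketch below performs wavelet soft-thresholding on a noisy 1-D signal. It is illustrative only and not part of the workshop material; the PyWavelets package (pywt), the db4 wavelet, and the universal threshold are assumptions made here for the example.

    import numpy as np
    import pywt  # PyWavelets, assumed available

    # Simulate a smooth signal observed with well-behaved Gaussian noise.
    rng = np.random.default_rng(0)
    signal = np.cumsum(rng.standard_normal(256))
    sigma = 0.5
    noisy = signal + sigma * rng.standard_normal(256)

    # Move to a wavelet basis, where natural signals are approximately sparse.
    coeffs = pywt.wavedec(noisy, "db4", level=4)

    # Soft-threshold the detail coefficients (universal threshold; sigma is known here).
    thr = sigma * np.sqrt(2 * np.log(noisy.size))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]

    # Back to the original domain: a denoised estimate of the signal.
    denoised = pywt.waverec(coeffs, "db4")

Shrinking small wavelet coefficients removes most of the noise precisely because the clean signal concentrates on a few large coefficients in that basis.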

Program

The workshop will take place at MIT (room 1-190) on January 27-29, 2020.

Confirmed speakers

  • Nima Anari (Stanford)
  • Alex Dimakis (UT Austin)
  • Aude Genevay (MIT)
  • Tommi Jaakkola (MIT)
  • Sham Kakade (U. of Washington)
  • Tengyuan Liang (Booth School of Business)
  • Tyler Maunu (MIT)
  • Jonathan Niles-Weed (NYU/IAS)
  • Ryan O'Donnell (CMU)
  • Ioannis Panageas (SUTD)
  • Miki Racz (Princeton)
  • Andrej Risteski (CMU)
  • Elina Robeva (U. of British Columbia)
  • Sebastien Roch (U. of Wisconsin, Madison)
  • Yaron Singer (Harvard)
  • Justin Solomon (MIT)
  • Vasilis Syrgkanis (Microsoft Research)
  • Caroline Uhler (MIT/ETH Zurich)
  • Gregory Valiant (Stanford)
  • Cynthia Vinzant (NC State)
  • Alex Wein (NYU)
  • Yihong Wu (Yale)

Organizers

  • Costis Daskalakis (MIT)
  • Stefanie Jegelka (MIT)
  • Jonathan Kelner (MIT)
  • Ankur Moitra (MIT)
  • Philippe Rigollet (MIT) -- Lead organizer

Schedule

Day 1: Monday, January 27, 2020

Time Speaker Title
8:50 - 9:00 Opening remarks
9:00 - 9:45 Ioannis Panageas Depth-width trade-offs for ReLU networks via Sharkovsky's theorem.
9:45 - 10:30 Ryan O'Donnell Learning quantum states.
10:30 - 11:00 Coffee break
11:00 - 11:45 Gregory Valiant How bad is worst-case data if you understand where it comes from?
11:45 - 12:30 Yaron Singer From predictions to decisions.
12:30 - 2:00 Lunch Break
2:00 - 2:45 Miki Racz Trace reconstruction problems with applications to DNA data storage.
2:45 - 3:30 Sebastien Roch Some statistical questions in evolutionary genomics.
3:30 - 4:00 Break
4:00 - 4:45 Yihong Wu Randomly initialized EM algorithm for two-component Gaussian mixture achieves near optimality in O(√n) iterations.
4:45 - 5:30 Vasilis Syrgkanis Statistical learning for causal inference.

Day 2: Tuesday, January 28, 2020

Time Speaker Title
9:00 - 9:45 Elina Robeva Learning totally positive distributions.
9:45 - 10:30 Caroline Uhler Causal inference through permutation-based algorithms.
10:30 - 11:00 Coffee Break
11:00 - 11:45 Cynthia Vinzant Log-concave polynomials, matroids, and expanders.
11:45 - 12:30 Nima Anari Limited correlations, fractional log-concavity, and fast mixing random walks.
12:30 - 2:00 Lunch Break
2:00 - 2:45 Sham Kakade The provable effectiveness of policy gradient methods in reinforcement learning.
2:45 - 3:30 Andrej Risteski Fast convergence for Langevin diffusion with matrix manifold structure.
3:30 - 4:00 Break
4:00 - 6:00 Reception and Poster Session in 2-290

Posters

  • Rajat Talak -- A Theory of Uncertainty Variables for Learning Complex Structures
  • Kaizheng Wang -- An $\ell_p$ analysis of eigenvectors with applications to spectral clustering
  • Yoni Shtiebel -- Computational Linguistics
  • Anirudh Sridhar -- Correlated Randomly Growing Graphs
  • Igor Gilitschenski -- Deep Orientation Uncertainty Learning based on a Bingham Loss
  • Aleksandrina Goeva -- Discovering Spatially Coherent Gene Expression Patterns
  • Niloy Biswas -- Estimating Convergence of Markov Chains with Couplings
  • Lorenzo Masoero -- Predicting and maximizing the number of new genomic variants in a future experiment
  • Feng Liu -- Hierarchical Graphical Structure Learning for EEG Source Imaging
  • Marwa El Halabi -- Minimizing approximately submodular functions
  • Jean-Baptiste Seby -- Multi-Trek Separation in Linear Structural Equation Models
  • Qiuyi Wu -- Music Mining In Topic Modeling Approach For Improvisational Learning
  • Maryam Aliakbarpour -- New directions in testing properties of distributions
  • Sai Ganesh Nagarajan -- On the Analysis of EM for truncated mixtures of two Gaussians
  • Alireza Fallah -- On Theory of Model-Agnostic Meta-Learning Algorithms
  • Nir Rosenfeld -- Predicting Choice with Set-Dependent Aggregation
  • Julia Gaudio -- Sparse High-Dimensional Isotonic Regression
  • Eren Can Kizildag -- Stationary Points of Shallow Neural Networks with Quadratic Activation Function
  • Raj Agrawal -- The Kernel Interaction Trick: Fast Bayesian Discovery of Pairwise Interactions in High Dimensions
  • Eshaan Nichani -- Understanding Alignment and the Role of Depth in Linear Neural Networks
  • Miri Adler -- Understanding variations in single-cell data using evolutionary tradeoff theory

Day 3: Wednesday, January 29, 2020

Time Speaker Title
9:00 - 9:45 Tengyuan Liang On restricted lower isometry of kernels, risk of minimum-norm interpolants, and multiple descent phenomenon.
9:45 - 10:30 Alex Wein Understanding statistical-vs-computational tradeoffs via the low-degree likelihood ratio.
10:30 - 11:00 Coffee Break
11:00 - 11:45 Alex Dimakis Deep generative models and inverse problems.
11:45 - 12:30 Tommi Jaakkola Learning to represent and generate molecular graphs.
12:30 - 2:00 Lunch Break
2:00 - 2:45 Jonathan Niles-Weed Estimation of the Wasserstein distance in the spiked transport model.
2:45 - 3:30 Aude Genevay Learning with Sinkhorn divergences: from optimal transport to MMD.
3:30 - 4:00 Break
4:00 - 4:45 Justin Solomon Approximating and manipulating probability distributions with optimal transport.
4:45 - 5:30 Tyler Maunu Gradient descent algorithms for Bures-Wasserstein barycenters.

Practical information