RESEARCH

My research interests lie broadly in statistics and applied probability with a current focus on statistical machine learning algorithms, such as message passing. These algorithms can be used for inference and optimization in many applications including communications systems and compressed sensing. In recent work I have obtained sharp theoretical guarantees on the performance of message passing algorithms in these settings. In addition to algorithmic development, much of my work has used concentration of measure tools and applied probability ideas to extend performance guarantees of such algorithms to the non-asymptotic regime.

A key problem in statistics is recovering a signal or an unknown function from noisy measurements. A common approach is to seek the estimate, among all candidates in some constraint set (e.g., the set of sparse signals or of smooth functions), that minimizes the empirical ℓ2 distance to the measurements. Problems of this form arise in many of the topics I study, including channel coding, non-parametric function estimation, and compressed sensing. In these areas, iterative algorithms have proven very successful at efficiently finding the constrained least-squares solution; a generic statement of the problem is given below. Here I discuss, in detail, my work on approximate message passing (AMP) and other iterative algorithms applied to channel coding and to more general constrained least-squares problems.
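In generic notation (the symbols y, X, and the constraint set below are introduced purely for orientation and are not tied to any single paper), this estimate can be written as

    % y: noisy measurements, X: design/measurement matrix,
    % \mathcal{C}: constraint set encoding prior structure (e.g., sparse signals, smooth functions)
    \hat{\beta} \;=\; \operatorname*{arg\,min}_{\beta \in \mathcal{C}} \; \| y - X\beta \|_2^2 ,

i.e., the estimate is the element of the constraint set closest, in empirical ℓ2 distance, to the observed data.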

Asymptotics in Channel Coding

The use of smart devices and wireless networks is ubiquitous, creating a pressing need for low-complexity communications schemes that reliably deliver high data rates. I study the additive white Gaussian noise (AWGN) channel which is a commonly-used model of both wired and wireless communication. My research focuses on building efficient decoders that are provably capacity-achieving with low error probability and good performance at practical block lengths.
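For reference, the AWGN channel is the standard model in which the received word is the transmitted codeword corrupted by independent Gaussian noise. In generic notation (the signal power P and noise variance σ² are illustrative symbols):

    % Channel model: received vector y, codeword x with \|x\|_2^2 \le nP, i.i.d. Gaussian noise z
    y = x + z, \qquad z \sim \mathcal{N}(0, \sigma^2 I_n),
    % Shannon capacity of this channel, in bits per channel use
    C = \tfrac{1}{2} \log_2\!\left( 1 + \frac{P}{\sigma^2} \right).

A capacity-achieving scheme is one whose rate can be taken arbitrarily close to C while the probability of decoding error vanishes as the block length grows.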

  • C. Rush, A. Greig, and R. Venkataramanan, “Capacity-achieving Sparse Superposition Codes with Approximate Message Passing Decoding,” (submitted to) IEEE Transactions on Information Theory. [preprint]
  • C. Rush, A. Greig, and R. Venkataramanan, “Capacity-achieving Sparse Regression Codes with Approximate Message Passing Decoding”, Proc. IEEE Int. Symp. on Information Theory (ISIT), 2015.
  • C. Rush and A. Barron, “The Method of Nearby Measures”, Proc. Workshop on Information Theoretic Methods in Science and Engineering (WITMSE), 2013. [pdf]

Approximate Message Passing

Approximate message passing (AMP) has been studied extensively as a tractable approximation to loopy belief propagation (for example, the sum-product or min-sum algorithms) on dense factor graphs, where exact message passing is computationally infeasible. My work extends AMP to the high-dimensional regime of the sparse superposition decoding problem; a generic form of the AMP iteration is sketched after the reference below:

  • C. Rush, A. Greig, and R. Venkataramanan, “Capacity-achieving Sparse Superposition Codes with Approximate Message Passing Decoding,” (submitted to) IEEE Transactions on Information Theory. [preprint]
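For readers new to AMP, the following is a minimal sketch of the generic AMP iteration for a compressed sensing model y = A x + w with an i.i.d. design and a soft-thresholding denoiser. It is a textbook-style illustration under those assumptions, not the section-wise SPARC decoder analyzed in the paper above; the function names and the rule for choosing the threshold tau are illustrative choices.

    import numpy as np

    def soft_threshold(v, tau):
        # Entrywise soft-thresholding denoiser (one common choice; others are possible).
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    def amp(y, A, n_iter=30, alpha=1.5):
        # Generic AMP for y = A x + w with an i.i.d., roughly Gaussian-like design A.
        # Illustrative sketch only; not the SPARC decoder from the papers above.
        n, N = A.shape
        x = np.zeros(N)             # current estimate of the signal
        z = y.copy()                # corrected residual
        for _ in range(n_iter):
            tau = alpha * np.sqrt(np.sum(z ** 2) / n)   # estimate of the effective noise level
            pseudo_data = x + A.T @ z                   # "observation" handed to the denoiser
            x_new = soft_threshold(pseudo_data, tau)
            # Onsager correction: adjust the residual by the average derivative of the
            # denoiser, which keeps the effective observations approximately Gaussian.
            onsager = (np.count_nonzero(x_new) / n) * z
            z = y - A @ x_new + onsager
            x = x_new
        return x

The Onsager correction term is the structural feature that distinguishes AMP from plain iterative thresholding and is what makes a simple state evolution characterization of its performance possible; in the SPARC setting, the soft-thresholding step is replaced by a denoiser tailored to the section-wise structure of the message vector.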

Current and Future Work

Many problems, including compressed sensing and channel coding, are often studied asymptotically as n, the signal length or the code length, tends to infinity. This allows the usual limit theorems of probability theory (the law of large numbers and the central limit theorem, for example) to be used in the analysis. However, practical implementations demand study of such problems in the non-asymptotic regime, since what matters in practice are performance guarantees at realistic values of n.

My current work develops a non-asymptotic analysis of AMP in two settings: as a decoder for sparse regression codes (SPARCs), where the undersampling ratio approaches 0, and in compressed sensing with an i.i.d. signal, where the undersampling ratio approaches a constant. In both cases the deterministic benchmark is the state evolution recursion, sketched after the reference below.

  • C. Rush and R. Venkataramanan, “Finite Sample Analysis of Approximate Message Passing”, (in preparation for) IEEE Transactions on Information Theory.
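In the generic compressed sensing setting of the earlier sketch (denoiser η_t, noise variance σ², sampling ratio δ = n/N, and signal entries distributed as a random variable X; all symbols generic), state evolution tracks the effective noise variance across iterations:

    % State evolution recursion for the effective noise variance \tau_t^2
    \tau_{t+1}^2 \;=\; \sigma^2 \;+\; \frac{1}{\delta}\, \mathbb{E}\Big[ \big( \eta_t(X + \tau_t Z) - X \big)^2 \Big],
    \qquad Z \sim \mathcal{N}(0,1) \ \text{independent of } X.

Asymptotic results characterize AMP's per-iteration performance through this recursion as n grows; roughly speaking, the finite-sample analysis above uses concentration inequalities to quantify how closely the behavior at finite n follows these predictions.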

Rigorous performance analysis of AMP is currently available only for Gaussian design matrices. My work, and that of others, provides empirical evidence that AMP works for a larger class of designs, including, but likely not limited to, spatially coupled Hadamard and Rademacher matrices. In a future project I wish to explore whether the analysis of AMP can be extended to a more general class of matrices. I believe the success of AMP relies not on the Gaussianity of the design matrix but on some more general property, such as the restricted isometry property from compressed sensing, which Gaussian matrices satisfy with high probability; the standard form of that property is recalled below.
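For concreteness, the restricted isometry property (RIP), stated here only as an example of the kind of condition I have in mind, says that the design matrix A acts as a near-isometry on all sufficiently sparse vectors:

    % A satisfies RIP of order k with constant \delta_k \in (0,1) if, for every k-sparse x,
    (1 - \delta_k)\, \|x\|_2^2 \;\le\; \| A x \|_2^2 \;\le\; (1 + \delta_k)\, \|x\|_2^2 .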

I am also currently exploring problems in approximation theory and non-parametric function estimation using high-dimensional libraries of functions. I hope to continue studying greedy algorithms as a means of constructing function estimates in this setting; one standard example of such a scheme is sketched below.
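As one concrete example of a greedy scheme in this spirit, orthogonal matching pursuit repeatedly adds the library element most correlated with the current residual and refits by least squares. The sketch below is illustrative only; the library matrix Phi, the fixed number of terms, and the selection rule are assumptions made for the example rather than a description of any specific project.

    import numpy as np

    def orthogonal_matching_pursuit(Phi, y, n_terms):
        # Greedy function estimation over a library of candidate functions (illustrative sketch).
        # Phi: (n_samples, n_library) matrix whose columns are library functions evaluated
        #      at the sample points; y: noisy observations of the target function.
        residual = y.copy()
        selected = []
        coef = np.zeros(Phi.shape[1])
        for _ in range(n_terms):
            # Choose the library element most correlated with the current residual.
            correlations = np.abs(Phi.T @ residual)
            correlations[selected] = 0.0      # do not reselect previously chosen elements
            j = int(np.argmax(correlations))
            selected.append(j)
            # Refit the coefficients of all selected elements by least squares.
            c, *_ = np.linalg.lstsq(Phi[:, selected], y, rcond=None)
            coef[:] = 0.0
            coef[selected] = c
            residual = y - Phi[:, selected] @ c
        return coef, selected

After m steps the estimate uses at most m library elements, which keeps both the computation and the fitted function manageable even when the library itself is very large.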

Other Work

I have also collaborated with geneticists to use information-theoretic ideas to predict genetic interactions via the maximum entropy arrangement:

  • G. Carter, C. Rush, F. Uygun, N. Sakhanenko, D. Galas, and T. Galitski,  “A systems-biology approach to modular genetic complexity”, Chaos 20.2 (2010): 026102. [pdf]

I am also generally interested in the use of technology, and specifically data visualization in R, as a means of introducing students to statistics and getting them excited about the subject.

  • S. Wang and C. Rush, “Visualization as the Gateway Drug to Statistics in Week One”, United States Conference on Teaching Statistics (USCOTS), 2015. [poster]

I have worked closely with researchers at Yale’s School of Medicine studying the effects of physical and computer-presented brain-training exercises on children’s thinking abilities and cognitive capacities.

  • B. Wexler, M. Iselli, S. Leon, W. Zaggle, C. Rush, A. Goodman, and E. Ahmet, “Brain Exercises: Neuro-priming and Far Transfer Academic Benefit in Children”, (submitted to) Nature.
  • B. Wexler, L. Vitulano, C. Moore, L. Katsovich, S. Smith, C. Rush, H. Grantz, V. Eicher, J. Dziura, J. Dong, J. Leckman, “Response to Integrated Computer-Presented and Physical Brain-Training Exercises for Children with Attention-Deficit/Hyperactivity Disorder: Exploratory Analysis”, (submitted to) Developmental Psychology.
  • E. Ahmet, C. Rush, and B. Wexler, “Carry-over of Brain-training Effects from Kindergarten to First Grade”, (in preparation).
