Our paper, Laplacian Matrix Sampling for Communication-efficient Decentralized Learning, presents the first systematic optimization of a key hyperparameter in decentralized learning, the mixing matrix, based on a general cost model that can represent a number of important cost metrics (e.g., energy, load, time). The work was conducted in collaboration with IBM and ARL, and the topic was chosen by my PhD student Daniel. Congratulations and good job, Daniel!