Paper on “Laplacian Matrix Sampling” accepted to the JSAC Special Issue on Communication-Efficient Distributed Learning over Networks

Our paper, Laplacian Matrix Sampling for Communication-Efficient Decentralized Learning, presents the first systematic optimization of a key hyperparameter in decentralized learning, the mixing matrix, under a general cost model that can represent a number of important cost metrics (e.g., energy, load, time). The work was conducted in collaboration with IBM and ARL, and the topic was chosen by my PhD student Daniel. Congratulations and good job, Daniel!
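For readers unfamiliar with the role of the mixing matrix: in decentralized learning, each node averages its model with its neighbors' models using weights from a mixing matrix, often built from the graph Laplacian of the communication topology. Below is a minimal sketch of one such gossip-averaging step; it is not the paper's algorithm, and the ring topology and step size `alpha` are illustrative assumptions.

```python
import numpy as np

# Adjacency matrix of a hypothetical 4-node ring topology.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

D = np.diag(A.sum(axis=1))  # degree matrix
L = D - A                   # graph Laplacian

# A common Laplacian-based mixing matrix: W = I - alpha * L.
# Choosing 0 < alpha < 1/max_degree keeps W symmetric, doubly
# stochastic, and nonnegative.
alpha = 0.3
W = np.eye(4) - alpha * L

# Each row of x holds one node's local model parameters.
x = np.random.randn(4, 2)

# One gossip step: every node averages with its neighbors using the
# weights in W. Repeated steps drive all nodes toward consensus.
x_next = W @ x
print(x_next.mean(axis=0) - x.mean(axis=0))  # ~0: global average preserved
```

The choice of W governs both how fast nodes reach consensus and how much communication each step costs, which is the trade-off the paper's cost model makes explicit.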
