1. Nima Anari, Callum Burgess, Kevin Tian, and Thuy-Duong Vuong. 2023. Quadratic speedups in parallel sampling from determinantal distributions. In Proceedings of the 35th ACM Symposium on Parallelism in Algorithms and Architectures, 367-377.
2. Nima Anari, Nathan Hu, Amin Saberi, and Aaron Schild. 2020. Sampling arborescences in parallel. arXiv preprint arXiv:2012.09502.
3. Nima Anari, Yizhi Huang, Tianyu Liu, Thuy-Duong Vuong, Brian Xu, and Katherine Yu. 2023. Parallel discrete sampling via continuous walks. In Proceedings of the 55th Annual ACM Symposium on Theory of Computing, 103-116.
4. An Exponential Speedup in Parallel Running Time for Submodular Maximization without Loss in Approximation.
5. A lower bound for parallel submodular minimization.
6. Alexander Barvinok. 2016. Combinatorics and Complexity of Partition Functions. Vol. 30. Springer.
7. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33, 1877-1901.
8. Deeparnab Chakrabarty, Yu Chen, and Sanjeev Khanna. 2021. A polynomial lower bound on the number of rounds for parallel submodular function minimization. In 62nd IEEE Annual Symposium on Foundations of Computer Science, FOCS 2021, Denver, CO, USA, February 7-10, 2022. IEEE, 37-48.