
| Title | Venue, Year |
|---|---|
| Memory-Query Tradeoffs for Randomized Convex Optimization | IEEE Annual Symposium on Foundations of Computer Science (FOCS), 2023 |
| Krylov Methods are (Nearly) Optimal for Low-Rank Approximation | IEEE Annual Symposium on Foundations of Computer Science (FOCS), 2023 |
| Query Lower Bounds for Log-Concave Sampling | IEEE Annual Symposium on Foundations of Computer Science (FOCS), 2023 |
| Optimal Query Complexities for Dynamic Trace Estimation | Neural Information Processing Systems (NeurIPS), 2022 |
| Efficient Convex Optimization Requires Superlinear Memory | Annual Conference on Computational Learning Theory (COLT), 2022 |
| Improved Analysis of Randomized SVD for Top-Eigenvector Approximation | International Conference on Artificial Intelligence and Statistics (AISTATS), 2022 |
| Low-Rank Approximation with Matrix-Vector Products | Symposium on the Theory of Computing (STOC), 2022 |
| Streaming k-PCA: Efficient Guarantees for Oja's Algorithm, Beyond Rank-One Updates | Annual Conference on Computational Learning Theory (COLT), 2021 |
| Hutch++: Optimal Stochastic Trace Estimation | SIAM Symposium on Simplicity in Algorithms (SOSA), 2020 |
| Vector-Matrix-Vector Queries for Solving Linear Algebra, Statistics, and Graph Problems | International Workshop on Approximation, Randomization, and Combinatorial Optimization: Algorithms and Techniques (APPROX/RANDOM), 2020 |
| The Gradient Complexity of Linear Regression | Annual Conference on Computational Learning Theory (COLT), 2019 |