About Me
Greetings, fellow denizen of the interwebs! My name is Antonio (I usually go by Tony).
I am currently at Salesforce AI Research working on generative AI.
Before that, I worked on building reliable and useful LLM agents at Dialect AI. We were in Y Combinator's S22 batch.
In the summer of 2022, I finished my PhD at Stanford University. I was fortunate to be advised by Prof. James Zou in the Stanford Laboratory for Machine Learning, Genomics, and Health, and generously supported by a Stanford Bio-X Graduate Fellowship. Get in touch via Twitter (@tginart).
When I'm not writing code, cleaning datasets, or (failing at) proving theorems, you can probably find me shredding (read: tumbling down) the ski slopes or hero-calling at the poker table.
I am a full-stack ML researcher who is passionate about building the next generation of machine intelligence. I've led a variety of state-of-the-art projects in machine learning, resulting in papers published in top AI venues (for example, NeurIPS), contributions to open-source projects with thousands of users (such as facebookresearch/dlrm and huggingface/accelerate), and contributions to production systems serving billions of users.
Research Interests
I am broadly interested in artificial intelligence, cybernetics, and information science & engineering. My doctoral research was on theory and algorithms for large-scale machine learning. I work on making ML systems more efficient, scalable, secure, and easier to deploy.
Experience
Co-Founder, Dialect (Y Combinator) (2022 - 2023)
Built generally reliable and capable LLM agents for complex process automation.
Graduate Researcher, Stanford University (2017 - 2022)
Invented deletion-efficient data management algorithms for unsupervised learning (100× speedup)
Invented minimax rate optimal policy for ML deployment monitoring based on expert supervision (>25% increase in label efficiency)
Part-time Student Researcher, Facebook AI (Fall & Winter 2021)
State-of-the-art privacy-preserving protocols for large-scale transformer models (>1000× more privacy per user for given model quality)
Research Intern, Facebook AI (Summer 2019, Summer 2021)
Designed embedding architecture for deep recommendation models that uses 16× fewer parameters and trains 3× faster without accuracy loss
Open-source contributor to facebookresearch/dlrm (over 3K stars on GitHub)
Technical Staff Intern, Johns Hopkins University Applied Physics Laboratory (Summer 2017)
R&D for RL-based control algorithms with application to air & missile defense
Undergraduate Researcher, UC Berkeley Laboratory of Information and Systems Science (Summer 2016)
Invented state-of-the-art compression algorithm for genomic data (6× better than gzip)
Undergraduate Researcher & Teaching Assistant, Washington University in St. Louis (2015 - 2016)
Developed a real-time NLP-based ML classifier for social media streams with application to public health monitoring
Software Development Intern, Answers.com (Summer 2014)
Full-stack web development: LAMP stack backend & JS frontend
Fixed dozens of bugs on production site and added several features
Publications and Manuscripts
A.A. Ginart, L. van der Maaten, J. Zou, C. Guo. SubMix: Practical Private Prediction for Large-Scale Language Models. Preprint, 2022.
A.A. Ginart, M. Zhang, J. Zou. MLDemon: Deployment monitoring for machine learning systems. International Conference on Artificial Intelligence and Statistics (AISTATS), 2022. Challenges in Deploying and Monitoring Machine Learning Systems @ ICML, 2021.
A.A. Ginart, M. Naumov, D. Mudigere, J. Yang, J. Zou. Mixed Dimension Embeddings with Application to Memory-Efficient Recommendation Systems. International Symposium on Information Theory (ISIT), 2021. PeRSonAl @ ISCA, 2020. Github.
A.A. Ginart, E. Zhang, Y. Kwon, J. Zou. Competing AI: How does competition feedback affect machine learning? International Conference on Artificial Intelligence and Statistics (AISTATS), 2021. CoopAI @ NeurIPS, 2020. Press: Stanford HAI.
S. Feizi, F. Farnia, T. Ginart, D. Tse. Understanding GANs in the LQG Setting: Formulation, Generalization and Stability. IEEE Journal on Selected Areas in Information Theory, 2020.
A.A. Ginart, M. Y. Guan, G. Valiant, J. Zou. Making AI forget you: Data deletion in machine learning. Advances in Neural Information Processing Systems (NeurIPS), 2019. Spotlight. Github. Press: The Register, IEEE Spectrum.
A.A. Ginart*, J. Hui*, K. Zhu*, I. Numanagic, T.A. Courtade, S.C. Sahinalp, D.N. Tse. Optimal compressed representation of high throughput sequence data via light assembly. Nature Communications, 2018. Github.
A.A. Ginart, S. Das, J.K. Harris, R. Wong, H. Yan, M. Krauss, P.A. Cavazos-Rehg. Drugs or Dancing? Using Real-Time Machine Learning to Classify Streamed "Dabbing" Homograph Tweets. IEEE International Conference on Healthcare Informatics (ICHI), 2016.
Education
PhD in Electrical Engineering, Stanford University (2022)
MS in Electrical Engineering, Stanford University (2020)
BS in Computer Engineering, summa cum laude, Washington University in St. Louis (2017)