Tony A. Ginart

GitHub | LinkedIn | Scholar  

About Me

Greetings, fellow denizen of the interwebs! My name is Antonio (I usually go by Tony). 

I am working on building reliable and useful LLM agents at Dialect AI. We were part of Y Combinator's S22 batch.

In the summer of 2022, I finished my PhD at Stanford University. I was fortunate to be advised by Prof. James Zou in the Stanford Laboratory for Machine Learning, Genomics, and Health, and grateful to be supported by a Stanford Bio-X Graduate Fellowship. Get in touch at tginart at stanford dot edu.

When I'm not writing code, cleaning datasets, or (failing at) proving theorems, you can probably find me shredding (read: tumbling down) the ski slopes or hero calling at the poker table.

I am a full-stack ML researcher passionate about building the next generation of machine intelligence. I've led state-of-the-art projects in machine learning, data science, and data processing, blending theoretical, algorithmic, and systems work. Many of these projects have resulted in papers at top AI venues (for example, NeurIPS), contributions to open-source projects with thousands of users, and work on production systems serving billions.

Research Interests

I am broadly interested in artificial intelligence, cybernetics, and information science & engineering. My doctoral research was on theory and algorithms for large-scale machine learning. I work on making ML systems more efficient, scalable, secure, and easier to deploy.

Experience

Publications and Manuscripts

A.A. Ginart, L. van der Maaten, J. Zou, C. Guo. SubMix: Practical Private Prediction for Large-Scale Language Models. Preprint, 2022.

A.A. Ginart, M. Zhang, J. Zou. MLDemon: Deployment Monitoring for Machine Learning Systems. International Conference on Artificial Intelligence and Statistics (AISTATS), 2022. Challenges in Deploying and Monitoring Machine Learning Systems @ ICML, 2021.

A.A. Ginart, M. Naumov, D. Mudigere, J. Yang, J. Zou. Mixed Dimension Embeddings with Application to Memory-Efficient Recommendation Systems. International Symposium on Information Theory (ISIT), 2021. PeRSonAl @ ISCA, 2020. GitHub.

A.A. Ginart, E. Zhang, Y. Kwon, J. Zou. Competing AI: How does competition feedback affect machine learning? International Conference on Artificial Intelligence and Statistics (AISTATS), 2021. CoopAI @ NeurIPS, 2020. Press: Stanford HAI.

S. Feizi, F. Farnia, T. Ginart, D. Tse. Understanding GANs in the LQG Setting: Formulation, Generalization and Stability. IEEE Journal on Selected Areas in Information Theory, 2020.

A.A. Ginart, M. Y. Guan, G. Valiant, J. Zou. Making AI Forget You: Data Deletion in Machine Learning. Advances in Neural Information Processing Systems (NeurIPS), 2019. Spotlight. GitHub. Press: The Register, IEEE Spectrum.

A.A. Ginart*, J. Hui*, K. Zhu*, I. Numanagic, T.A. Courtade, S.C. Sahinalp, D.N. Tse. Optimal compressed representation of high throughput sequence data via light assembly. Nature Communications, 2018. GitHub.

A.A. Ginart, S. Das, J.K. Harris, R. Wong, H. Yan, M. Krauss, P.A. Cavazos-Rehg. Drugs or Dancing? Using Real-Time Machine Learning to Classify Streamed "Dabbing" Homograph Tweets. IEEE International Conference on Healthcare Informatics (ICHI), 2016.

Education