About
Frontier AI, engineered for the real world.
I’m an AI/ML engineer at Google working on Gemini. I work at the systems layer between model capability and deployed value: evaluation, reliability, edge constraints, and production infrastructure.
My current thesis is simple: the next generation of AI work will be won by people who can make frontier models useful under real constraints. Not just stronger benchmarks, but reliable systems, clear interfaces, trustworthy deployment paths, and fast feedback from users.
Current focus
- Frontier AI infrastructure: model systems, deployment constraints, evaluation, and reliability.
- High-trust deployment: practical engineering for environments where correctness, trust, and operational judgment matter.
- Research-to-production: applying a machine learning research foundation to systems people can actually use.
Proof points
Before Google, I built large-scale ML and cloud platforms at Capital One, including internal model-training infrastructure used by thousands of data scientists, machine learning engineers, and analysts. Earlier work included banking ML workflows for fraud detection, customer segmentation, risk, and credit application systems.
My research background is in deep learning, computer vision, and medical image analysis. At Vanderbilt University, I developed new ML algorithms and published peer-reviewed research. That background still shapes my engineering taste: models matter, but deployment, observability, data quality, and human trust determine whether they create value.
Trajectory
I’m optimizing for the overlap between elite engineering and founder-grade market taste: high-talent-density teams, hard AI systems problems, fast research-to-production loops, and real deployment pain that can become new infrastructure.
Explore: Projects · Publications · Writing · GitHub · LinkedIn
