I am a second-year Ph.D. student and CGRS Fellow at the University of Wisconsin-Madison, advised by Aws Albarghouthi and Fred Sala.
News
- Co-organized the third Deep Learning for Code Workshop at ICLR 2025.
- I will be working on agents in summer 2024 at Replit.
- I will be interning at Magic AI from May to August 2023.
- I will be interning at X, The Moonshot Factory from May to December 2022.
- Co-organizing the Deep Learning for Code Workshop at ICLR 2022.
Featured
- Reward Models Enable Scalable Code Verification by Trading Accuracy for Throughput
Outcome reward models for code verification allow one to trade accuracy for speed in the generate-then-rank paradigm. This can be improved further with a generate-prune-then-rank approach, in which a weaker verifier prunes solutions before ranking, saving work on incorrect tokens. We show that this hybrid approach can be 11.65x faster than running the whole test suite while being only 8.33% less accurate.
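As an illustration only (not the paper's implementation), the generate-prune-then-rank pipeline can be sketched as follows; `weak_verifier` and `reward_model` are hypothetical stand-ins for a cheap filter and an outcome reward model.

```python
# Sketch of generate-prune-then-rank: a cheap, weaker verifier first
# discards likely-incorrect candidate solutions, so the more expensive
# reward model only scores (and ranks) the survivors.

def generate_prune_then_rank(candidates, weak_verifier, reward_model, keep=4):
    # Prune: keep only candidates the weak verifier accepts.
    survivors = [c for c in candidates if weak_verifier(c)]
    # Rank: score survivors with the reward model, highest first.
    ranked = sorted(survivors, key=reward_model, reverse=True)
    return ranked[:keep]

# Toy usage with stand-in scorers.
cands = ["sol_a", "sol_b", "sol_c"]
weak = lambda c: c != "sol_b"   # pretend the weak verifier rejects sol_b
reward = lambda c: len(c)       # pretend reward-model score
top = generate_prune_then_rank(cands, weak, reward, keep=2)
```

The speed/accuracy trade-off comes from the pruning step: the reward model never spends compute on candidates the weak verifier rejects, at the cost of occasionally pruning a correct solution.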
Publications
- Reward Models Enable Scalable Code Verification by Trading Accuracy for Throughput. G. Orlanski, N. Roberts, A. Albarghouthi, and F. Sala.
- Measuring the Impact of Programming Language Distribution. G. Orlanski, K. Xiao, X. Garcia, J. Hui, J. Howland, J. Malmaud, J. Austin, R. Singh, and M. Catasta.
- Reading StackOverflow Encourages Cheating: Adding Question Text Improves Extractive Code Generation. G. Orlanski and A. Gittens.