Speaker: Petros Maniatis
Location: Soda 510
Date: February 2, 2024
Time: 12-1pm PST
Large Sequence Models for Software Development Activities
Large language models for software are increasingly moving beyond code completion to the full range of software development activities that engineers engage in day-to-day. I will present an overview of Google's effort, dubbed DIDACT, to build large foundation models trained on Google's own internal software-engineering logs. These logs, and the resulting DIDACT models, capture not just code and code editing but also code review, bug fixing, code-change vetting and approvals, and other developer activities, enabling diverse and powerful software-engineering AI assistants. I will then dive more deeply into one such internal assistant, built on top of DIDACT and deployed to all Google engineers, which suggests code edits that resolve code-review comments. I will detail not just the assistant's design but also the journey of adapting it for effective use in production. This comment-resolution assistant now addresses more than 7.5% of code-review comments at Google, providing significant positive impact on real-world engineer productivity. This talk covers work by teams from Google's DeepMind, Research, and Core Systems & Experiences divisions.
Petros Maniatis is a Senior Staff Research Scientist at Google DeepMind, on the Learning for Code team. Prior to that, he was a Senior Research Scientist at Intel Labs, working first in Intel's Berkeley Research Lab and then at the Intel Science and Technology Center on Secure Computing at UC Berkeley. He received his M.Sc. and Ph.D. from the Computer Science Department at Stanford University. Before Stanford, he obtained his B.Sc. with honors from the Department of Informatics of the University of Athens in Greece. His current research interests lie primarily at the confluence of machine learning and software engineering.