I was invited to serve on the artifact evaluation committee of the POPL 2020 conference.
See the SIGPLAN Empirical Evaluation Checklist for the “why” and “how” of conducting rigorous empirical evaluations,
and consider submitting the supporting artifacts for your research papers.
Deadlines: 10th July (papers) · 21st October (artifacts)
We define overfitting in the context of CEGIS-based SyGuS, and show that there is a tradeoff between the expressiveness of the grammar and the performance of the solver. We present two mitigation strategies: (1) a black-box approach that any existing tool can use, and (2) a white-box technique called hybrid enumeration.
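For readers unfamiliar with CEGIS, here is a toy Python sketch of a counterexample-guided synthesis loop; the problem, grammar, and helper names below are purely illustrative and are not from the paper.

```python
# A toy CEGIS loop (illustrative only): synthesize an integer constant `c`
# such that the candidate program f(x) = x + c satisfies the specification
# spec(x, y) := (y > x) and (y <= x + 3) for every x in a bounded domain.

DOMAIN = range(-50, 51)    # bounded input domain, so the "verifier" can just enumerate
GRAMMAR = range(-10, 11)   # candidate constants the synthesizer may try

def spec(x, y):
    return y > x and y <= x + 3

def synthesize(examples):
    """Return a constant consistent with all counterexamples seen so far."""
    for c in GRAMMAR:
        if all(spec(x, x + c) for x in examples):
            return c
    return None

def verify(c):
    """Return a counterexample input, or None if the candidate is correct."""
    for x in DOMAIN:
        if not spec(x, x + c):
            return x
    return None

def cegis():
    examples = []
    while True:
        c = synthesize(examples)
        if c is None:
            return None            # grammar exhausted: no consistent candidate
        cex = verify(c)
        if cex is None:
            return c               # candidate verified on the whole domain
        examples.append(cex)       # refine with the new counterexample

print(cegis())                     # prints a constant in {1, 2, 3}
```

Roughly, overfitting arises when the synthesizer keeps producing candidates that fit the accumulated counterexamples but fail verification; more expressive grammars give it more room to do so.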
I am visiting the PL group at Princeton University for the spring quarter. I will be collaborating with Prof. David Walker’s group and with my advisor, Prof. Todd Millstein, who is currently visiting there as well, on invariant synthesis and related problems.
I was invited to serve on the artifact evaluation committees of SAS 2019 and SPLASH-OOPSLA 2019 conferences.
Artifact evaluation ensures that the results claimed in research papers are easily and accurately reproducible.
Unlike OOPSLA, SAS requests authors of all papers (not just accepted papers) to submit their artifacts, which are due immediately after the paper submission deadline. The SAS program committee will also have access to the artifact reviews by the artifact evaluation committee.
SAS Deadlines: 25th April (papers) · 25th April (artifacts)
OOPSLA Deadlines: 5th April (papers) · 8th July (artifacts)
I was invited to join the organizing committee of the annual SyGuS competition.
Preparations for the 6th SyGuS-Comp,
to be held with SYNT@CAV 2019, have already begun!
Please take a look at the SyGuS language standard v2.0,
and submit your benchmarks and/or solvers.
Deadlines: 1st May (benchmarks) · 14th June (solvers)
I was invited to serve on the program committee of the DebugML workshop at ICLR 2019.
The goal of this workshop is to discuss a variety of topics,
such as interpretability, verification, and human-in-the-loop debugging, that help developers build robust ML models.
Please consider submitting your work.
Deadline: 1st March
We propose a static analysis for slicing functional programs, which precisely captures structure-transmitted dependencies and provides a weak form of context sensitivity (weakened to guarantee decidability). We also present an incremental version of this technique that is efficient in practice.
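As a rough intuition for structure-transmitted dependencies, here is my own toy Python illustration (not the analysis from the paper): every value carries the set of inputs it was computed from, and a pair keeps one such set per component.

```python
# A toy model of "structure-transmitted" dependency tracking (illustrative
# only). Every value carries the set of program inputs it was computed from;
# a pair keeps one dependency set per component, so a projection transmits
# only the dependencies of the component it actually reads.

def inp(name, value):
    return value, {name}              # an input value and its own dependency

def make_pair(a, b):
    return ("pair", a, b), set()      # per-component dependencies are kept inside

def first(p):
    (_, a, _b), _deps = p
    return a                          # the second component's dependencies are dropped

x = inp("x", 1)
y = inp("y", 2)
p = make_pair(x, y)
print(first(p))   # (1, {'x'}) -- a coarser analysis would report {'x', 'y'} here
```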
I am super excited to attend this two-day workshop at the Microsoft Research Redmond campus and meet other Microsoft Research PhD fellows & award winners.
For the second time, our loop invariant inference tool LoopInvGen
(based on PIE) won the Inv track of SyGuS-Comp 2018.
We received a FLoC Olympic Games medal, which is awarded every 4 years.
LoopInvGen solved $91\%$ of the benchmarks, was the fastest solver on $89\%$ of them ($16\%$ more than the runner-up), and produced the shortest invariants for $74\%$ of them ($20\%$ more than the runner-up).
In UCLA news: Computer Science
We present a technique, called FlashProfile, for generating hierarchical data profiles. Existing tools, including commercial ones, generate a single flat profile, which is often overly general or incomplete. Furthermore, we show that data profiles can improve the accuracy and efficiency of PBE techniques.
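To give a flavor of the idea, here is my own toy sketch (not FlashProfile’s actual algorithm): abstract each string to a coarse pattern of character classes and group the column by pattern, so each cluster gets its own precise description instead of a single over-general one.

```python
# A toy sketch of pattern-based profiling (illustrative only, not
# FlashProfile's algorithm): abstract each string to a coarse pattern of
# character classes, then group the column by pattern so that each cluster
# gets its own precise description.

from itertools import groupby

def char_class(c):
    if c.isdigit():
        return "Digit"
    if c.isalpha():
        return "Letter"
    return repr(c)                 # literal punctuation, e.g. '-' or '/'

def pattern(s):
    # collapse consecutive characters of the same class into a single token
    return " ".join(cls for cls, _ in groupby(s, key=char_class))

column = ["2024-01-15", "1999-12-31", "N/A", "unknown"]

clusters = {}
for s in column:
    clusters.setdefault(pattern(s), []).append(s)

for pat, members in clusters.items():
    print(f"{pat:30} covers {members}")
```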
This two-week-long DeepSpec Summer School discussed the state of the art in building systems with zero bugs, i.e., specification and verification of full functional correctness of everything from low-level hardware instructions to user-level software. The lectures covered several verification systems built on top of the Coq proof assistant, such as the Verified Software Toolchain (VST), CertiKOS, VeLLVM, and more.
I passed the OQE, and have now advanced to candidacy!
I passed the WQE — one step closer to getting my PhD!
This one-week-long Summer School on Formal Techniques discussed state-of-the-art tools for formal verification, including practical exercises with CVC4, PVS, Why3, and others. I would definitely recommend it to anyone interested in the field.
I am a teaching assistant for the upper-division Programming Languages (CS 131) course, taught by my advisor Prof. Todd Millstein, for the second time (after Fall 2014). For those interested, my notes are publicly available in the S16_TA_CS131 repo.
We present a technique, called the precondition inference engine (PIE), which uses on-demand feature learning to automatically infer a precondition that is sufficient to guarantee a given postcondition. We also use PIE to construct a novel automatic verification system called LoopInvGen.
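Here is a tiny, purely illustrative sketch of the data-driven loop behind this style of inference: sample inputs, label them by whether the postcondition holds on the program’s output, and search for a small boolean combination of predicate “features” separating the good inputs from the bad ones. The program, feature pool, and predicate names below are made up for illustration; PIE itself grows the feature set on demand.

```python
# A toy data-driven precondition-inference loop (illustrative only): label
# sampled inputs by whether the postcondition holds on the program's output,
# then search for a small disjunction of predicate "features" that accepts
# every good input and rejects every bad one.

from itertools import combinations

def program(x):                 # toy program under analysis
    return x * x

def post(x, result):            # postcondition we want to guarantee
    return result >= 4

FEATURES = {                    # a fixed feature pool (PIE grows this on demand)
    "x >= 2":  lambda x: x >= 2,
    "x <= -2": lambda x: x <= -2,
    "x != 0":  lambda x: x != 0,
    "x >= 0":  lambda x: x >= 0,
}

samples = range(-5, 6)
good = [x for x in samples if post(x, program(x))]
bad  = [x for x in samples if not post(x, program(x))]

def holds(names, x):
    return any(FEATURES[n](x) for n in names)

def learn_precondition():
    # smallest disjunction of features consistent with the labeled samples
    for k in range(1, len(FEATURES) + 1):
        for names in combinations(FEATURES, k):
            if all(holds(names, x) for x in good) and not any(holds(names, x) for x in bad):
                return " or ".join(names)
    return None

print(learn_precondition())     # "x >= 2 or x <= -2"
```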