These abstracts are for talks at this event.

NEPLS is a venue for ongoing research, so the abstracts and supplemental material associated with each talk are necessarily snapshots in time. The work presented here may be in a state of flux. In all cases, please consult the authors' Web pages for up-to-date information, and please do not treat these pages as a definitive source.


Riker: Always-Correct and Fast Builds from Simple Specifications

Daniel Barowy

Build systems are responsible for building software correctly and
quickly. Unfortunately, traditional build tools like make are correct
and fast only when developers precisely enumerate dependencies for
every incremental build step. Forward build systems improve
correctness over traditional build tools by discovering dependencies
automatically, but existing forward build tools have two fundamental
flaws. First, they are incorrect; existing forward build tools miss
dependencies because their models of system state are incomplete.
Second, they rely on users to manually specify incremental build
steps, increasing the programmer burden for fast builds.

This work introduces Riker, a forward build system that guarantees
fast, correct builds. Riker builds are easy to specify; in many cases
a single command such as gcc *.c suffices. From these simple
specifications, Riker automatically discovers fast incremental rebuild
opportunities. Riker models the entire POSIX filesystem—not just
files, but directories, pipes, and so on. This model guarantees that
every dependency is checked on every build, so every output is correct.

We use Riker to build 14 open source packages including LLVM and
memcached. Riker incurs a median overhead of 8.8% on the initial full
build. On average, Riker's incremental builds realize 94% of make's
incremental speedup with no manual effort and no risk of errors.


Extensibility for Fun(ctional Languages) via Nested Family Polymorphism

Anastasiya Kravchuk-Kirilyuk

Modular, extensible code is highly desirable, but difficult to
achieve. Due to implementation challenges such as parameter clutter
and inherited code that is not guaranteed to be type-safe, modularity
often gives way to code duplication. As a solution to this problem, we
present FamFun: a functional, family-polymorphic system with
extensible ADTs, extensible pattern matching, and nested inheritance.
In our language, all constructs except functions serve as
extensibility hooks, cutting down on parameter clutter. We support
extensible pattern matching, where new match cases can be specified
separately for each new extension to the ADT. We also support
composable extensions by representing mixins through our extensible
families.
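
To make the duplication problem concrete, here is a small sketch in
Python (my own illustration, not FamFun syntax): with an ordinary ADT
and pattern match, adding a constructor forces edits to every existing
match.

    from dataclasses import dataclass

    @dataclass
    class Lit:
        value: int

    @dataclass
    class Add:
        left: object
        right: object

    def eval_expr(e):
        match e:
            case Lit(v):
                return v
            case Add(l, r):
                return eval_expr(l) + eval_expr(r)

    # Adding a new constructor (say, Mul) means editing eval_expr itself
    # or duplicating it wholesale; FamFun instead lets an extension add
    # the constructor and its match case separately, with inherited code
    # guaranteed to remain type-safe.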


Diversity-Driven Automated Formal Verification

Emily First

Formally verified correctness is one of the most desirable properties
of software systems. But despite great progress made via interactive
theorem provers, such as Coq, writing proof scripts for verification
remains one of the most effort-intensive (and often prohibitively
difficult) software development activities. Recent work has created
tools that automatically synthesize proofs or proof scripts. For
example, CoqHammer can prove 26.6% of theorems completely
automatically by reasoning using precomputed facts, while TacTok and
ASTactic, which use machine learning to model proof scripts and then
perform biased search through the proof-script space, can prove 12.9%
and 12.3% of the theorems, respectively. Further, these three tools
are highly complementary; together, they can prove 30.4% of the
theorems fully automatically. Our key insight is that control over the
learning process can produce a diverse set of models, and that, due to
the unique nature of proof synthesis (the existence of the theorem
prover, an oracle that infallibly judges a proof’s correctness), this
diversity can significantly improve these tools’ proving
power. Accordingly, we develop Diva, which uses a diverse set of
models with TacTok’s and ASTactic’s search mechanism to prove 21.7% of
the theorems. That is, Diva proves 68% more theorems than TacTok and
77% more than ASTactic. Complementary to CoqHammer, Diva proves 781
theorems (27% added value) that CoqHammer does not, and 364 theorems
no existing tool has proved automatically. Together with CoqHammer,
Diva proves 33.8% of the theorems, the largest fraction to date. We
explore nine dimensions for learning diverse models, and identify
which dimensions lead to the most useful diversity. Further, we
develop an optimization to speed up Diva’s execution by 40X. Our study
introduces a completely new idea for using diversity in machine
learning to improve the power of state-of-the-art proof-script
synthesis techniques, and empirically demonstrates that the
improvement is significant on a dataset of 68K theorems from 122
open-source software projects.
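
The diversity-plus-oracle recipe can be summarized in a few lines of
Python (a minimal sketch of the idea, not Diva's implementation; all
function names are hypothetical):

    def prove_with_diverse_models(theorem, models, search, coq_checks):
        # `models` are proof-script predictors trained under different
        # configurations (the diversity dimensions explored in the study).
        for model in models:
            # Biased search through the proof-script space, in the style
            # of TacTok and ASTactic.
            for script in search(model, theorem):
                # The theorem prover is an infallible oracle: any script
                # it accepts is a correct proof, so adding diverse (even
                # weaker) models can only increase proving power.
                if coq_checks(theorem, script):
                    return script
        return None  # no model in the ensemble found a proof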


Categorical Semantics for Modeling and Inference in Probabilistic Programs

Eli Sennesh

Probabilistic programming languages enable users to compose joint
densities over high-dimensional sample spaces as programs. In this
paper we show how to assign a denotational semantics to such programs
in terms of Markov categories with a weighting monad. Within the same
framework, we then show how allowing random weights in the weighting
monad yields a denotational semantics for inference programming. Since
our semantics are categorical, we can take advantage of both model
structure and nested inference programming to divide and conquer
inference problems. We finally demonstrate a program representation
based on our semantics and show how to train model and inference
programs by variational methods.
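
As a rough operational reading of the weighting monad (a sketch of my
own in Python; the paper's semantics is categorical, not operational),
a probabilistic program can be seen as a sampler that also accumulates
an importance weight, with observations multiplying that weight by a
density:

    import math
    import random

    def normal_density(x, mu, sigma):
        z = (x - mu) / sigma
        return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

    def model():
        # sample a latent variable
        x = random.gauss(0.0, 1.0)
        # "observe" y = 2.5 from Normal(x, 0.5): rather than sampling,
        # multiply the running importance weight by the density
        weight = normal_density(2.5, x, 0.5)
        return x, weight

    # Weighted samples of this form feed importance-sampling or
    # variational inference programs.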


Argument for Complex Languages and Compilers

Steven Meyer

This paper explains why the CVC Verilog hardware description language
(HDL) optimized-flow-graph compiled simulator is fast.  CVC is
arguably the fastest full IEEE 1364-2005 standard, commercial-quality
compiled Verilog simulator available, yet it consists of only 95,000
lines of C code and was developed by only two people.  The paper
explains how CVC validates the pre-formalism computer science
methodology best expressed by Peter Naur's anti-formalism datalogy.
CVC's development history from a slow interpreter into a fast
flow-graph-based, machine-code compiled simulator is described.  The
failure of initial efforts to convert CVC into interpreted execution
of possibly auto-generated virtual machines is discussed.  Next, the
Verilog HDL is discussed from the compiler writer's perspective.  The
main body of the paper first explains how concrete, problem-specific
datalogy principles can be used to create a simple, fast Verilog
compiler.  The second part of the main body criticizes the competing
Allen and Kennedy theory, from their "Optimizing Compilers" book,
which argues that fast Verilog simulation requires detail-removing
high-level abstraction.  The paper concludes with a discussion of the
importance of the low-level parallelism of modern pipelined CPUs for
fast Verilog simulation, and it suggests why special-purpose hardware
Verilog simulators and parallel Verilog simulation distributed over
many processors are not fast.  CVC is available as open-source
software.


PaSh: Practically Correct, Just-in-Time Shell Script Parallelization

Konstantinos Kallas

Recent shell-script parallelization systems enjoy mostly automated
parallel speedups by compiling scripts ahead-of-time. Unfortunately,
such static parallelization is hampered by the dynamic behaviors
pervasive in shell scripts---e.g., variable expansion and command
substitution---which often require reasoning about the current state
of the shell and filesystem.

We present a just-in-time (JIT) shell-script compiler, PaSh-JIT, that
intermixes evaluation and parallelization during a script's run-time
execution. JIT parallelization collects run-time information about the
system’s state, but must not alter the behavior of the original script
and must maintain minimal overhead. PaSh-JIT addresses these
challenges by (1) using a dynamic interposition framework, guided by
a static preprocessing pass; (2) developing runtime support for
transparently pausing and resuming shell execution; and (3) operating
as a stateful server, communicating with the current shell by passing
messages---all without requiring modifications to the system's
underlying shell interpreter.

When run on a wide variety of benchmarks, including the POSIX shell
test suite, PaSh-JIT (1) does not break scripts, even in cases that
are likely to break shells in widespread use; and (2) offers
significant speedups, whenever parallelization is possible. These
results show that PaSh-JIT can be used as a drop-in replacement for
any non-interactive shell use, providing significant speedups without
any risk of breakage.


Programming Zero Knowledge Proofs

Alex Ozdemir

Zero Knowledge Proofs (ZKPs) are a powerful cryptographic tool for
eliminating privacy/integrity trade-offs. They're also
user-programmable!

In this talk, I'll introduce the programming model for ZKPs. Its
primary feature of interest is the ability to exploit
non-deterministic data provided by the prover.
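
A standard illustration of this feature (a minimal sketch of my own,
not tied to any particular ZKP framework): to show that a field
element x is nonzero, the prover supplies its inverse w as
non-deterministic advice, and the circuit only checks a single
multiplication constraint.

    P = 2**61 - 1  # prime modulus, chosen arbitrarily for the example

    def prover_advice_nonzero(x):
        # Non-deterministic data: the prover computes w = x^(-1) mod P.
        return pow(x, P - 2, P)

    def circuit_nonzero(x, w):
        # The circuit never divides; it only checks x * w == 1 (mod P),
        # which is satisfiable exactly when x != 0.
        return (x * w) % P == 1

    x = 42
    assert circuit_nonzero(x, prover_advice_nonzero(x))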

At the end, I'll mention some connections between this programming
model and others, and I'll reference some work we've done exploring
those connections.


New foundations for probabilistic separation logic

John Li

Probabilistic reasoning frequently requires decomposing a situation
into probabilistically independent pieces. We present a separation
logic that supports this decomposition, based on a notion of
independence for probability spaces that we call "spatial
independence". This notion is inspired by an analogy with mutable
state where random sampling corresponds to dynamic allocation, random
variables to heap locations, and independence to disjointness of
heaps. Unlike prior work, our logic naturally accommodates continuous
random variables and observation (with the help of some equational
reasoning), and enjoys derived rules identical to those of standard
separation logic.
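
To illustrate the analogy with a two-coin example (in notation of my
own choosing, not necessarily the paper's), sampling two coins acts
like allocating two disjoint heap cells, so their independence is
expressed by a separating conjunction:

    \[
      \{\, \mathsf{emp} \,\}\;\;
      x \leftarrow \mathsf{flip};\ y \leftarrow \mathsf{flip}\;\;
      \{\, (x \sim \mathrm{Ber}(\tfrac{1}{2}))
           \,*\, (y \sim \mathrm{Ber}(\tfrac{1}{2})) \,\}
    \]

Here the separating conjunction in the postcondition asserts that x
and y depend on "disjoint" (hence independent) parts of the underlying
probability space.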


Omnisemantics: Smooth Handling of Nondeterminism

Samuel Gruetter

I would like to present a style of operational semantics that we call
omnisemantics, whose judgments relate starting states to sets of
outcomes rather than to individual outcomes.

A single derivation of these semantics for a particular starting state
and program describes all possible nondeterministic executions (hence
the name omni), whereas in traditional small-step and big-step
semantics, each derivation only talks about one single execution.
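
For instance, writing t/s ⇓ Q for "every execution of t from state s
terminates safely in an outcome satisfying Q", representative rules
might look as follows (a sketch in my own notation, not the authors'
exact rules):

    \[
      \frac{Q\,s}{\texttt{skip}/s \Downarrow Q}
      \qquad
      \frac{t_1/s \Downarrow Q_1 \qquad
            \forall s'.\; Q_1\,s' \Rightarrow t_2/s' \Downarrow Q}
           {(t_1;\, t_2)/s \Downarrow Q}
      \qquad
      \frac{\forall v \in \{0,1\}.\; Q\,(s[x \mapsto v])}
           {(x := \mathtt{flip})/s \Downarrow Q}
    \]

The rule for flip quantifies over every possible result, which is how
a single derivation covers all nondeterministic executions at once.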

This restructuring allows for straightforward modeling of both
nondeterminism and undefined behavior as commonly encountered in
sequential functional and imperative programs.  Specifically,
omnisemantics inherently assert safety, i.e. they guarantee that none
of the execution branches gets stuck, while traditional semantics need
either a separate judgment or additional error markers to specify
safety in the presence of nondeterminism.

Omnisemantics can be understood as an inductively defined
weakest-precondition semantics (or more generally,
predicate-transformer semantics) that does not involve invariants for
loops and recursion but instead uses unrolling rules like in
traditional small-step and big-step semantics.

Omnisemantics were previously described in association with several
projects, but we believe the technique has been underappreciated and
deserves a well-motivated and pedagogical presentation of its
benefits.

We also explore several novel aspects associated with these semantics,
in particular their use in type-safety proofs for lambda calculi,
partial-correctness reasoning, and forward proofs of compiler
correctness for terminating but potentially nondeterministic programs
being compiled to nondeterministic target languages.

All results have been formalized in Coq, and omnisemantics serve as
the semantic backbone throughout our verified software/hardware stack
that we're developing in the PLV group at MIT.


The WebAssembly Component Model

Lucy Menon

WebAssembly (Wasm) is a widely-used low-level virtual machine which
allows for the safe, portable, and efficient execution of (relatively)
low-level code. Wasm has largely displaced other technologies for
near-native code execution on the web, such as asm.js and Native
Client, and is rapidly expanding into non-Web use cases. However, a
Wasm module can only easily interact with other Wasm modules in a
shared address space; intra-module isolation and host interactions are
largely unspecified. (The latter is easy to see, for example, in the
difficulty of interacting with the DOM via WebAssembly on the Web:
DOM interactions must currently be proxied through JavaScript stubs
that apply the appropriate (de)serialization.)

The WebAssembly component model seeks to address this gap by providing
a means of specifying safe, no-shared-memory interfaces between
reusable _components_. Interfaces between components are specified via
a language-agnostic type algebra, which provides structured abstract
types at a higher level than Core Wasm. A set of components may be
linked together ahead of time to produce a set of Wasm modules whose
host interface is well specified by the signatures of the source
components.

We will first discuss the design and formal foundations of the
component model, and continue to an overview of how it can be used to
address issues of isolation, concurrency, and embedding for Core
WebAssembly. We will also present our current high-level plans for
further formal guarantees to explore in the context of the component
model, and explore its potential uses as a real-world target for the
research community.

Joint work with Amal Ahmed, Andreas Rossberg, and Luke Wagner.


A Spectrum of Type Vigilance in Gradual Typing

Olek Gierczak

Gradually typed languages allow more and less precisely typed code to
coexist in a single language, but different gradual languages use
different runtime type enforcement strategies. Hence, two gradually
typed languages can have identical static type systems and yet enforce
different type-based guarantees. For example, the natural strategy
wraps terms in contracts, while the transient strategy checks only
tags at destructors and boundaries (see the sketch below). Syntactic
properties like type soundness or complete monitoring offer a way to
distinguish the runtime guarantees provided by the two strategies:
transient guarantees tag soundness, while natural guarantees full
type soundness and guards all interactions between parts of the
program with different type precision. However, these syntactic
approaches do not directly explain the semantic consequences of
choosing one strategy over the other. In this talk we present type
vigilance, a semantic property ensuring that terms behave according
to their runtime typing history, that is, all the type annotations on
the boundaries a term crosses at runtime. We confirm that natural is
type vigilant while transient is tag vigilant, strengthening the
conclusions of the syntactic investigation of the two strategies.
However, using our semantic approach, we were also able to develop a
new type system for transient that is more faithful to its
enforcement behavior and for which transient satisfies type
vigilance.
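
The following sketch in Python (my own illustration, not the talk's
formal model) contrasts the two enforcement strategies for a function
value crossing a boundary at type Int -> Int:

    def natural_boundary(f):
        # Natural: wrap f in a contract that checks the full type on
        # every call, at both the argument and the result.
        def wrapped(x):
            assert isinstance(x, int), "argument is not an Int"
            y = f(x)
            assert isinstance(y, int), "result is not an Int"
            return y
        return wrapped

    def transient_boundary(f):
        # Transient: check only the top-level tag ("is it callable?")
        # when the value crosses the boundary; later tag checks happen
        # at destructors, e.g., call sites and other elimination forms.
        assert callable(f), "value is not a function"
        return f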

This is joint work with Lucy Menon, Amal Ahmed, and Christos Dimoulas.


Towards Automated Migration of Imperative Deep Learning Programs to Graph Execution

Raffi Khatchadourian

Efficiency is essential to support responsiveness w.r.t. ever-growing
datasets, especially for Deep Learning (DL) systems. DL frameworks
have traditionally embraced deferred execution-style DL code that
supports symbolic, graph-based Deep Neural Network (DNN)
computation. While scalable, such development produces DL code that is
error-prone, non-intuitive, and difficult to debug. Consequently, more
natural, less error-prone imperative DL frameworks encouraging eager
execution have emerged at the expense of run-time performance. While
hybrid approaches aim for the "best of both worlds," using them
effectively requires subtle considerations to make code amenable to
safe, accurate, and efficient graph execution—avoiding performance
bottlenecks and semantically inequivalent results.
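
As a concrete example of the hybrid style (a minimal sketch; deciding
whether such a migration is safe and profitable is exactly what the
refactoring approach below aims to automate), TensorFlow's tf.function
traces otherwise eager code into a graph:

    import tensorflow as tf

    # Imperative, eagerly executed DL code: easy to write and debug.
    def step(w, x):
        return tf.nn.relu(tf.matmul(x, w))

    # Hybrid: the same code, traced and run as a graph at run-time.
    # A careless migration can hit the pitfalls mentioned above, e.g.,
    # Python side effects that do not behave the same under tracing.
    @tf.function
    def step_as_graph(w, x):
        return tf.nn.relu(tf.matmul(x, w))

    x = tf.random.normal([8, 4])
    w = tf.random.normal([4, 16])
    print(step_as_graph(w, x).shape)  # (8, 16)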

This talk will discuss our current work on an automated refactoring
approach that assists developers in specifying whether and how their
otherwise eagerly-executed imperative DL code could be reliably and
efficiently executed as graphs at run-time in a semantics-preserving
fashion. Based on speculative and hybrid Python program analyses, the
approach will consist of refactoring preconditions for automatically
determining when it is safe and potentially advantageous to migrate
imperative DL code to graph execution. Conversely, existing decorator
parameters may be modified, or the approach may switch code already
running as graphs to eager execution. A research prototype as a
plug-in to the PyDev Eclipse IDE—along with a command-line
interface—is now in development. We envision our approach being useful
in optimizing imperative DL code to its full potential.


A computational model of confusion in code comprehension

Shashank Srikant

What cognitive mechanism underlies the act of feeling confused when
comprehending a seemingly inscrutable snippet of a program?

Can that cognitive mechanism be encoded using a computational model?

If successfully encoded, can we use such a computational model to
discover "confusing" snippets of code in public repositories?

We investigate these questions in our work.

We propose a probabilistic framework to model confusion when reading
code.

We employ large language models of code as candidate models of the
cognitive systems involved in code comprehension, and we propose
experiments that use them to evaluate our framework.
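
One way to make this concrete (an illustrative assumption of mine, not
necessarily the authors' framework) is to use a language model's
per-token surprisal on a snippet as a confusion signal:

    import math
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # "gpt2" is only a stand-in here; a code-specific language model
    # would be used in practice.
    tok = AutoTokenizer.from_pretrained("gpt2")
    lm = AutoModelForCausalLM.from_pretrained("gpt2")

    def mean_surprisal(snippet):
        ids = tok(snippet, return_tensors="pt").input_ids
        with torch.no_grad():
            out = lm(ids, labels=ids)
        # out.loss is the mean negative log-likelihood per token;
        # higher values mean the model found the code more surprising.
        return out.loss.item() / math.log(2)  # bits per token

    print(mean_surprisal("x = x if x else not x"))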

If the proposed framework is experimentally validated, we will have
established a computational mechanism that explains confusion,
whereas previous work merely posits the existence of vaguely defined
code properties that correlate with the 'confusability' of a code
snippet. Additionally, we will have established large language models
as faithful representations of the cognitive machinery involved in
code comprehension.


Object-Based Semantics of Layering

Arthur Oliveira Vale

Large-scale software verification relies critically on the use of
compositional languages, semantic models, specifications, and
verification techniques. Recent work on certified abstraction layers
synthesizes game semantics, the refinement calculus, and algebraic
effects to enable the composition of heterogeneous components into
larger certified systems. However, in existing models of certified
abstraction layers, compositionality is restricted by the lack of
encapsulation of state.

We present a novel game model for certified abstraction layers where
the semantics of layer interfaces and implementations are defined
solely based on their observable behaviors. Our key idea is to
leverage Reddy's pioneering work on modeling the semantics of imperative
languages not as functions on global states but as objects with their
observable behaviors. We show that a layer interface can be modeled as
an object type (i.e., a layer signature) plus an object strategy. A
layer implementation is then essentially a regular map, in the sense
of Reddy, from an object with the underlay signature to that with the
overlay signature. A layer implementation is certified when its
composition with the underlay object strategy implements the overlay
object strategy. We also describe an extension that allows for
non-determinism in layer interfaces.


Last modified Sunday, September 18th, 2022 12:25:09pm