These abstracts are for talks at this event.

NEPLS is a venue for ongoing research, so the abstract and supplemental material associated with each talk are necessarily provisional. The work presented here may be in a state of flux. In all cases, please consult the authors' Web pages for up-to-date information, and please do not treat these pages as a definitive source.


Finding Relevant Code with Causal Inference

James Koppel (MIT)

Given an effect of a program -- say, a window drawn on the screen, or
part of its output -- how do you determine what code caused that
effect? Previous work is based on variants of slicing, which tends to
find large amounts of irrelevant code along with the necessary pieces
and requires access to library code. Our preliminary work promises
much better results by taking data from program traces and applying
techniques that can distinguish causation from correlation. We use a
quantity called the causal influence, which shows when the correlation
between two events is strong enough to prove a causal relation rather
than a merely spurious one.
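
A toy Python sketch of the general idea follows; the names and the
scoring rule here are invented for illustration, and the talk's actual
causal-influence quantity is not specified in this abstract. The point
it makes is standard: if code units are enabled at random across
traces (an intervention), the difference in effect frequency with and
without a unit estimates a causal effect rather than a mere
correlation.

    from collections import defaultdict

    def causal_scores(traces):
        """Rank code units by how strongly running them changes the
        frequency of the effect across a collection of traces.

        Each trace is a pair (ran, effect): the set of code units
        that executed, and whether the effect was observed.  If unit
        execution is randomized independently across traces, the
        difference of conditional frequencies below estimates a
        causal effect rather than a mere correlation."""
        with_unit = defaultdict(lambda: [0, 0])     # unit -> [hits, total]
        without_unit = defaultdict(lambda: [0, 0])
        units = set()
        for ran, _ in traces:
            units |= ran
        for ran, effect in traces:
            for u in units:
                bucket = with_unit[u] if u in ran else without_unit[u]
                bucket[0] += int(effect)
                bucket[1] += 1
        freq = lambda b: b[0] / b[1] if b[1] else 0.0
        return {u: freq(with_unit[u]) - freq(without_unit[u])
                for u in units}

    # Hypothetical traces: 'draw' causes the window; 'log' merely
    # co-occurs with it.
    traces = [
        ({"draw", "log"}, True),
        ({"draw"}, True),
        ({"log"}, False),
        (set(), False),
    ]
    print(sorted(causal_scores(traces).items(), key=lambda kv: -kv[1]))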


Linear Models of Computation and Parametrization of Large Classes of Programs by Matrices

Michael Bukatin (Nokia Corporation, Burlington, MA)

We look at two classes of computations with streams admitting the
notion of linear combination of streams: probabilistic sampling and
generalized animations. Focusing on these classes of computations does
not seem to restrict the computational power of our models, because
one can sample from probability distributions over spaces of programs,
and because there is enough room for a variety of higher-order
programming methods in the architectures we consider in this work. In
the context of genetic programming, this architecture can be used to
express the notion of regulation of gene expression.

We consider dataflow programming with these types of streams and note
that adopting a discipline of structuring dataflow programs as
bipartite graphs, connecting the vertices computed by built-in
transformations and the vertices computed by potentially infinitary
linear transformations, allows one to parametrize large classes of
dataflow graphs by matrices of real numbers, which makes it possible
to modify programs in a continuous fashion.  Prototype implementations
of the architectures in question are in the
https://github.com/anhinga/fluid repository.
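
A minimal Python sketch of this bipartite discipline, with invented
names: built-in (generally nonlinear) transformations alternate with a
linear layer given by a matrix of real numbers, so the whole dataflow
graph is parametrized by that matrix and can be varied continuously.

    import math

    # Built-in (generally nonlinear) stream transformations,
    # one per built-in vertex.
    BUILTINS = [math.sin, math.tanh, abs]

    def step(state, matrix):
        """One tick of a bipartite dataflow graph.

        `state` holds the current value on each built-in vertex.
        The linear vertices compute matrix-vector products over those
        values, and the built-in vertices then apply their fixed
        transformations.  Varying `matrix` continuously varies the
        program continuously."""
        mixed = [sum(w * x for w, x in zip(row, state)) for row in matrix]
        return [f(x) for f, x in zip(BUILTINS, mixed)]

    state = [0.1, 0.2, 0.3]
    matrix = [[0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0],
              [0.3, 0.3, 0.3]]
    for _ in range(3):
        state = step(state, matrix)
    print(state)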

There is a mathematical landscape adjacent to this topic which is
based on the fact that in the presence of partial contradictions
domains for denotational semantics tend to become embedded into
vector spaces. This landscape brings together vector semantics of
programming languages, various expressions of partial inconsistency
including negative values of measure and probability and negative
degrees of set membership, non-monotonic reasoning, bilattices,
bitopologies, and more.

This is joint work with Steve Matthews (University of Warwick).


Phantom Monitors: A Simple Foundation for Modular Proofs of Fine-Grained Concurrent Programs

Christian J. Bell (MIT)

Formal verification of fine-grained concurrent programs poses many
thorny challenges, including the tension between support for optimized
implementation techniques and support for modular proofs of libraries
that use those techniques. Surveying just the last few years, one
finds many program logics proposed to address different subsets of
these challenges, each introducing its own vocabulary of foundational
concepts and proof rules. In this work, we take some
first steps along a different path to the same goal. We support
modular proofs of fine-grained concurrent code within the Coq proof
assistant. Instead of defining a program logic, we push almost all
work onto the very expressive higher-order logic built into Coq. We
define our foundation, phantom monitors, and explore its consequences
in the verification of two classic data structures (the Treiber lock-free
stack and the Harris-Michael lock-free set). We then show how to
verify client programs of these data structures, almost completely
automatically, leading to highly trustworthy theorems whose statements
depend only on the chosen specs and on the operational semantics of
our object language.
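
For readers unfamiliar with the Treiber stack mentioned above, the
following Python sketch shows its push/pop retry loops. Python has no
hardware compare-and-swap, so the CAS below is simulated with a lock;
this illustrates only the algorithm's shape, not the talk's Coq
development.

    import threading

    class Node:
        def __init__(self, value, next_):
            self.value = value
            self.next = next_

    class TreiberStack:
        """Sketch of the Treiber lock-free stack's retry loops.

        Real implementations use an atomic compare-and-swap; Python
        lacks one, so `_cas` is simulated with a lock here purely to
        show the algorithm's structure."""
        def __init__(self):
            self.head = None
            self._lock = threading.Lock()  # stands in for atomic CAS

        def _cas(self, expected, new):
            with self._lock:
                if self.head is expected:
                    self.head = new
                    return True
                return False

        def push(self, value):
            while True:
                old = self.head
                if self._cas(old, Node(value, old)):
                    return

        def pop(self):
            while True:
                old = self.head
                if old is None:
                    return None
                if self._cas(old, old.next):
                    return old.value

    s = TreiberStack()
    s.push(1); s.push(2)
    print(s.pop(), s.pop())  # 2 1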


Slimming Languages by Reducing Sugar: A Case for Semantics-Altering Transformations

Justin Pombrio (Brown University)

Splitting a language into a core language and a desugaring function
makes it possible to produce tractable semantics for real-world
languages. It does so by pushing much of the language's complexity
into desugaring. This, however, produces large and unwieldy core
programs, which has proven to be a significant obstacle to actual use
of these semantics.
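
As a schematic example of the core/desugaring split (not the paper's
actual JavaScript semantics), the Python sketch below desugars a
surface 'or' expression into a core language with only 'let' and 'if',
and hints at why core programs grow: each surface node can expand into
several core nodes.

    import itertools

    # Surface AST: ("or", e1, e2).
    # Core AST: ("let", x, e, body), ("if", c, t, e), ("var", x).
    fresh = (f"_t{i}" for i in itertools.count())

    def desugar(e):
        """Expand surface constructs into a small core language.

        ('or', a, b) becomes: let t = a in (if t then t else b),
        so the core language needs no 'or'.  Note the size growth."""
        if isinstance(e, tuple) and e[0] == "or":
            t = next(fresh)
            return ("let", t, desugar(e[1]),
                    ("if", ("var", t), ("var", t), desugar(e[2])))
        if isinstance(e, tuple):
            return (e[0],) + tuple(desugar(x) for x in e[1:])
        return e

    print(desugar(("or", ("var", "a"), ("or", ("var", "b"), ("var", "c")))))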

In this paper we analyze this problem for a semantics of
JavaScript. We show that much of the bloat is semantic bloat: a
consequence of the language's rich semantics. We demonstrate how
assumptions about language use can confine this bloat, and codify
these through several transformations that, in general, do not
preserve the language's semantics. We experimentally demonstrate the
effectiveness of these transformations. Finally, we discuss the
implications of this work for language design and structure.

Joint work with Junsong Li, Joe Gibbs Politz, and Shriram Krishnamurthi.


What's Wrong with Git? A Conceptual Design Analysis

Santiago Perez De Rosso (MIT)

Git is a widely used version control system that is powerful but
complex. Some have claimed that the complexity of Git is not a
necessary consequence of its power and flexibility but instead arises
from its design. To explore this hypothesis, we analyzed the design of
Git using a theory of design that identifies concepts and their
distinct purposes. A selection of known complications of Git is
presented and explained using this structure. Based on this analysis,
we designed a reworking of Git (called Gitless) that attempts to
remedy the conceptual difficulties that lead to these
complications. We then evaluated Gitless in a small study to determine
whether users experienced fewer complications using Gitless in place
of Git.

This is joint work with Daniel Jackson (MIT CSAIL).


Nested Queries in Nondeterministic and Probabilistic Programming Languages

Jeffrey Mark Siskind (Purdue University)

Early Prolog lacked a construct to aggregate the alternative results
of nondeterministic queries, though the setof and bagof extralogical
predicates were soon added. Early nondeterministic functional
languages, e.g. Schemer, also lacked a construct to aggregate
alternative possible values of a nondeterministic expression, though
the all-values construct was added to Screamer. Early probabilistic
languages, e.g. Church, Anglican, and Probabilistic C, also lacked a
construct to aggregate the samples produced by a stochastic process
into a distribution, though a construct to do so was included in
R6RS-AD. We investigate the formal semantics of such constructs and
find that they raise thorny issues. One can freely nest
nondeterministic choice and aggregation constructs in nondeterministic
languages with a well-defined semantics. Similarly, one can also
freely nest stochastic sampling and aggregation constructs in
probabilistic languages with a well-defined semantics. But
surprisingly, arbitrary nesting of such constructs in a language that
provides both nondeterminism and stochasticity becomes incoherent
unless certain constraints are upheld. Even more surprisingly, this
thorny issue has little to do with the mixing of nondeterminism and
stochasticity per se. One can extend both nondeterministic and
stochastic languages independently with stratified (i.e., tagged)
aggregation constructs and cause the same thorny issues to arise. Yet
mixed nondeterministic and stochastic languages, like stratified
aggregation added to nondeterministic and/or stochastic languages, are
useful modeling tools. Thus, to achieve such utility, we must live
with these issues.
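
To make the aggregation constructs concrete, here is a small Python
sketch, invented for this summary rather than taken from the talk: a
nondeterministic choose with an all-values aggregator in the spirit of
Prolog's bagof or Screamer's all-values. Aggregations nest freely here
because each aggregation collapses its computation into a
deterministic list.

    def choose(*options):
        """Nondeterministic choice: yields each alternative in turn."""
        yield from options

    def all_values(gen_fn):
        """Aggregate every possible value of a nondeterministic
        computation, in the spirit of bagof or all-values."""
        return list(gen_fn())

    # Nondeterministic computations are generator functions over choose.
    def inner():
        for y in choose(10, 20):
            yield y

    def outer():
        for x in choose(1, 2):
            # Nested aggregation: inner's alternatives are collected
            # into a single deterministic list before outer branches on.
            yield (x, all_values(inner))

    print(all_values(outer))   # [(1, [10, 20]), (2, [10, 20])]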


Composable Modular Atomicity

Mohsen Lesani (MIT)

To preserve the consistency of data, operations that concurrently
access objects should execute atomically. Due to the intricacy of
synchronization, encapsulating atomicity in modules can improve both
the programmability and verifiability of user programs. Therefore,
mainstream programming languages offer libraries of concurrent data
types. An important question is how composable these concurrent data
types are. They usually guarantee linearizability, which supports
horizontal composition: an execution may involve method calls on
multiple concurrent objects, and each method call appears to take
effect atomically. However, they usually do not support vertical
composition: the execution of multiple method calls on a single
concurrent object is not necessarily atomic. I present a sufficient
condition, called condensability, for the atomicity of vertical
compositions, together with a tool, called Snowflake, that generates
proof obligations for the condensability of composing methods in Java
and discharges them using an off-the-shelf SMT solver. As presented in
CAV'14, Snowflake could
successfully verify a large fraction of composing methods in several
open-source applications. Then, I present universally composable
concurrent objects and how synthesis from high-level relational
specifications can directly target them. Finally, I briefly present
our current effort to build a verification framework for composable
concurrent objects in Coq.
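
The vertical-composition problem is easy to reproduce. In the Python
sketch below (unrelated to Snowflake's Java targets), each method call
is atomic on its own, yet their composition admits interleavings that
lose updates.

    import threading

    counts = {}
    lock = threading.Lock()

    def get(key):             # atomic on its own
        with lock:
            return counts.get(key, 0)

    def put(key, value):      # atomic on its own
        with lock:
            counts[key] = value

    def increment(key):
        """Vertical composition of two atomic calls: NOT atomic.
        Another thread may run between the two calls below, so both
        threads can read the same value and one update is lost."""
        v = get(key)
        put(key, v + 1)

    threads = [threading.Thread(
                   target=lambda: [increment("x") for _ in range(10000)])
               for _ in range(2)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(counts["x"], "(20000 only if no update was lost)")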


Using a Portfolio of SMT Solvers in Software Development

Will Blair (Boston University)

SMT solvers are powerful tools that determine the satisfiability
of formulas from different first order theories. There are many
theories that could be useful for software development and yet are
outside the capability of automated solvers. Formulas that require
induction on algebraic data types, for example, cannot be handled by
most current solvers. Yet, earlier this year, the CVC4
team demonstrated a new technique for proving formulas that require
inductive reasoning. As different solvers develop new capabilities
that complement one another, we would like ways to combine them in
order to improve the software development process.

In this talk, we will describe how dependent types in ATS can be
used as an interface for introducing different SMT solvers with
their own unique capabilities into software development.  The
correspondence between the ATS statics that support dependent types
and the well-supported SMT-LIB language allows us to easily export
constraints to a variety of SMT solvers. This allows a programmer
to use the decision power of different SMT solvers without needing
to interact with them directly. We will illustrate this by discussing
a small example of using CVC4's new strategy for inductive proofs
to reason about list operations in ATS.
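
To illustrate the export path, here is a schematic Python sketch (not
ATS's actual constraint compiler) that renders a small constraint as
SMT-LIB 2 text, which any compliant solver such as CVC4 or Z3 can
consume.

    def smtlib_query(decls, assertions):
        """Render declarations and assertions as an SMT-LIB 2 script
        that any compliant solver (CVC4, Z3, ...) can consume."""
        lines = ["(set-logic QF_LIA)"]
        lines += [f"(declare-const {n} {sort})" for n, sort in decls]
        lines += [f"(assert {a})" for a in assertions]
        lines.append("(check-sat)")
        return "\n".join(lines)

    # The kind of constraint a dependently typed program might emit:
    # an array index stays in bounds.
    query = smtlib_query(
        decls=[("i", "Int"), ("n", "Int")],
        assertions=["(<= 0 i)", "(< i n)", "(= n 10)"],
    )
    print(query)   # pipe this text to `cvc4 --lang smt` or `z3 -in`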


Session Types in Applied Type System

Hanwen Wu (Boston University)

Pi-calculus [Milner91, Milner99] is a powerful tool for describing
concurrent computations. Session types [Honda98] later became a type
discipline for pi-calculus that successfully captures the invariants
of communication channels. Since then, session types have been
actively studied in a wide body of literature. Although practical
implementations remain limited, their applications for enforcing
communication protocols, guaranteeing global progress, and so on are
promising.

In this talk, we will show that session types can be easily modeled in
ATS [Hongwei04], a statically typed programming language, with the
help of its dependent types and linear types. To the best of the
author's knowledge, this is the first practical session-type language
that is ready for real-world use. The linearity of session types has
been studied in several lines of work by Wadler, Pfenning, and others,
and it shows up clearly when we assign types to channels in ATS.
Dependent session types are also supported in ATS. The underlying
implementation, after type erasure, is done via Erlang, benefiting
from its industrial-strength runtime. Examples will also be given to
demonstrate the usage and benefits of session types.
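
ATS enforces the channel discipline statically with linear and
dependent types. As a rough flavor of what a session type specifies,
the Python sketch below (with invented names) checks a send/receive
protocol dynamically instead, consuming the protocol step by step the
way a linear channel is consumed.

    import queue

    class SessionChannel:
        """A channel tagged with a session protocol such as
        [('send', int), ('recv', str)].  ATS checks this discipline
        statically via linear types; this sketch merely checks it at
        run time, consuming the protocol one step at a time."""
        def __init__(self, protocol, inbox, outbox):
            self.protocol = list(protocol)
            self.inbox, self.outbox = inbox, outbox

        def send(self, value):
            op, ty = self.protocol.pop(0)
            assert op == "send" and isinstance(value, ty), "protocol violation"
            self.outbox.put(value)

        def recv(self):
            op, ty = self.protocol.pop(0)
            assert op == "recv", "protocol violation"
            value = self.inbox.get()
            assert isinstance(value, ty), "protocol violation"
            return value

    a_to_b, b_to_a = queue.Queue(), queue.Queue()
    client = SessionChannel([("send", int), ("recv", str)], b_to_a, a_to_b)
    server = SessionChannel([("recv", int), ("send", str)], a_to_b, b_to_a)
    client.send(42)
    server.send(str(server.recv() + 1))
    print(client.recv())   # "43"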


Tierless Programming and Differential Reasoning for Software-Defined Networks

Tim Nelson (Brown University)

Flowlog is a tierless language for programming SDN controllers. We
intentionally limit Flowlog's expressive power to enable powerful
reasoning support. To showcase this, we present Chimp, a tool for
differential analysis of Flowlog programs. Given two versions of a
program, Chimp produces a set of concrete scenarios where the programs
differ in their behavior. By using tools like Chimp, network
developers can exploit the power of formal methods without having to
be trained in formal logic.
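
As a toy version of what differential analysis delivers (not Chimp
itself, which analyzes Flowlog programs symbolically rather than by
enumeration), the Python sketch below searches a small packet space
for concrete scenarios where two versions of a policy disagree.

    from itertools import product

    def policy_v1(src, dst):
        """Old policy: forward everything except traffic to port 9."""
        return dst != 9

    def policy_v2(src, dst):
        """New policy: also drop traffic from source port 7."""
        return dst != 9 and src != 7

    def differences(p1, p2, ports=range(16)):
        """Return concrete scenarios where the two policies disagree,
        by brute-force enumeration of a tiny packet space."""
        return [(src, dst) for src, dst in product(ports, ports)
                if p1(src, dst) != p2(src, dst)]

    print(differences(policy_v1, policy_v2))
    # e.g. [(7, 0), (7, 1), ...]: packets from port 7 are now dropped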

