These abstracts are for talks at this event.

NEPLS is a venue for ongoing research, so the abstracts and supplemental material associated with each talk are necessarily provisional. The work presented here may be in a state of flux. In all cases, please consult the authors' Web pages for up-to-date information; please don't treat these pages as a definitive source.


Implementing Data Race Detection

Benjamin Wood (Wellesley College)

Researchers have proposed data-race exceptions for clear semantics and
improved debugging support in shared-memory multithreaded programming
languages.  Implementing accurate and fast support for data-race
exceptions remains a challenge.  This talk considers a small family of
problems and opportunities in language implementation and dynamic
program analysis arising from experience designing a fully accurate
hardware-supported dynamic data-race detector.  For example: How can a
low-level (e.g., ISA-level) dynamic program analysis effectively
analyze programs in higher-level programming languages?  Can hardware
support for accurate data-race detection be generalized to support
other analyses?  Can software implementations of dynamic analyses
emulate performance optimizations that are successful in hardware
implementations?  This talk describes a mix of completed and ongoing
work undertaken with colleagues at the University of Washington,
Microsoft Research, and The Ohio State University.
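
To make "accurate" detection concrete, here is a minimal sketch (mine,
not the detector discussed in the talk) of the happens-before check
that fully accurate dynamic race detectors typically implement using
vector clocks: two accesses to the same location race if at least one
is a write and neither is ordered before the other by synchronization.

    -- Minimal happens-before sketch using vector clocks (illustration only).
    module RaceSketch where

    import qualified Data.Map.Strict as Map

    type ThreadId    = Int
    type VectorClock = Map.Map ThreadId Int   -- per-thread logical time

    -- | c1 happens before c2 iff every component of c1 is <= that of c2.
    happensBefore :: VectorClock -> VectorClock -> Bool
    happensBefore c1 c2 =
      and [ t <= Map.findWithDefault 0 tid c2 | (tid, t) <- Map.toList c1 ]

    -- | Two accesses to one location (clock at access, is-write flag) race
    -- iff at least one is a write and neither happens before the other.
    races :: (VectorClock, Bool) -> (VectorClock, Bool) -> Bool
    races (c1, w1) (c2, w2) =
      (w1 || w2) && not (happensBefore c1 c2) && not (happensBefore c2 c1)

An implementation, in hardware or software, must maintain metadata of
roughly this shape per thread and per memory location at full program
speed, which is where the performance questions above arise.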


A Core Quantitative Coeffect Calculus

Marco Gaboardi (University of Dundee)

Monadic notions of computation are well-established mechanisms to
express effects in pure functional languages. Less well-established is
the notion of comonadic computation. However, recent work has shown
the usefulness of comonads for structuring context-dependent
computations. In this talk, I will present a language inspired by a
generalized interpretation of linear type systems. In this language,
linear types are indexed by a label (an element of a semiring) that
provides additional information about how a program uses its
context. This additional structure is used to express comonadic type
analysis. I will also discuss ongoing work on a quantitative
calculus combining comonadic coeffects with monadic effects. I will
show how a dependency between coeffects and effects corresponds to a
distributivity law of (labeled) monads over (labeled) comonads. This
dependency can be used to describe context-dependent effect systems.
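
As a rough illustration of the semiring-indexed idea (a generic sketch
of mine, not the calculus from the talk), one can index a type by a
usage grade drawn from the semiring of natural numbers, with addition
of grades accounting for sharing between subcomputations:

    {-# LANGUAGE DataKinds, KindSignatures, TypeOperators #-}

    -- Illustration only: grades drawn from the natural-number semiring,
    -- recording how many times the context may use a value.
    module GradedSketch where

    import GHC.TypeLits

    -- | A value of type 'a' that the context may use exactly 'n' times.
    newtype Graded (n :: Nat) a = Graded a

    -- | Using a value exactly once consumes a grade of 1.
    extract :: Graded 1 a -> a
    extract (Graded x) = x

    -- | Sharing a value between two subcomputations adds their grades;
    -- multiplication of grades (not shown) accounts for nested use.
    split :: Graded (m + n) a -> (Graded m a, Graded n a)
    split (Graded x) = (Graded x, Graded x)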


R7RS Scheme

Will Clinger (Northeastern University)

The R7RS (small) standard for Scheme was approved in 2013.
There are now at least eight implementations of R7RS, with
more in progress.  At least two of the eight implement the
previous R6RS standard as well, demonstrating near-perfect
backward compatibility with R6RS libraries and programs.
An R7RS (large) standard is being developed and is expected
to specify libraries that go well beyond those specified by
the R6RS.


Design of a Debugger for Memory Managers

Karl Cronburg (Tufts University)

The design and implementation of debuggers targeted at memory managers
is crucial given the growing popularity of memory-safe systems. Year
after year, new managed systems are created and existing ones continue
to evolve, resulting in an ever-growing need for debugging tools for
these systems.

In this talk we will discuss the design of a memory management
debugger capable of verifying memory layout semantics. Our design uses
a system of memory permissions to model the memory management stack:
we assign read/write permissions to regions of memory, based on where
in the memory management stack a region sits and what code is
currently running. The key idea is to declaratively specify the layout
and ownership of memory, and to check that the associated invariants
are maintained.
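
As a rough sketch of the permission idea (the names and the two phases
below are illustrative, not taken from the talk), one can model each
declared region as an address range that grants a permission per
memory-manager phase, and check every access against the declaration:

    -- Illustration only: regions grant a permission per collector phase,
    -- and each load/store is checked against the declared layout.
    module PermSketch where

    data Phase  = Mutator | Collector             deriving (Eq, Show)
    data Access = Load | Store                    deriving (Eq, Show)
    data Perm   = NoAccess | ReadOnly | ReadWrite deriving (Eq, Show)

    -- | A declared region [lo, hi) and the permission it grants each phase.
    data Region = Region { lo :: Int, hi :: Int, permFor :: Phase -> Perm }

    allowed :: Perm -> Access -> Bool
    allowed ReadWrite _    = True
    allowed ReadOnly  Load = True
    allowed _         _    = False

    -- | Check one machine-level access against the declared layout.
    checkAccess :: [Region] -> Phase -> Access -> Int -> Either String ()
    checkAccess regions phase acc addr =
      case [ r | r <- regions, lo r <= addr, addr < hi r ] of
        []      -> Left ("access to undeclared address " ++ show addr)
        (r : _)
          | allowed (permFor r phase) acc -> Right ()
          | otherwise -> Left ("illegal " ++ show acc ++ " at " ++ show addr)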

Our discussion will touch on the implementation of such a debugger for
the Jikes RVM Java Virtual Machine. This implementation leverages the
Valgrind dynamic binary instrumentation framework and Jikes RVM-specific
knowledge to verify the read/write permissions of every load, store,
and modify machine instruction in the JVM's execution.


Datatype Derivatives are the Defunctionalization of Continuation-Passing Style

Jan-Willem Maessen

Conor McBride observed in a series of papers that the derivative of a
data type describes the single-hole contexts of that type -- that is,
the context half of Huet's zippers.  This raises an intriguing
question: Is there a connection between the zipper for a data type
and arbitrary functions that traverse that type?  It turns out there
is: If we perform a local transformation to continuation-passing style
and defunctionalize the result, we obtain a family of context types
that describe the way the function traverses the data type.  These
context types are generalized nicely by McBride's "Clowns to the Left
of Me, Jokers to the Right".  Defunctionalization of CPS reifies the
stack of the running program and as a result systematically transforms
recursion into iteration (tail recursion).  It also provides insight
into what we mean when we say that "A Zipper is an iterator for a pure
data structure".
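
A small worked example (mine, not from the talk) of the correspondence,
for summing a binary tree: CPS-transforming the traversal and then
defunctionalizing its continuations yields a context type whose frames
are exactly the layers of the tree's one-hole contexts, and the
recursion becomes iteration over that reified stack of frames.

    -- Illustration only: defunctionalized CPS for a tree traversal.
    module ZipperSketch where

    data Tree = Leaf Int | Node Tree Tree

    -- Direct-style traversal.
    sumTree :: Tree -> Int
    sumTree (Leaf n)   = n
    sumTree (Node l r) = sumTree l + sumTree r

    -- Defunctionalized continuations: each constructor is one stack frame
    -- of the CPS version, i.e. one layer of the one-hole context.
    data Kont
      = Done
      | SumRightThen Tree Kont  -- still to traverse this right subtree
      | AddLeftThen  Int  Kont  -- add this already-computed left sum

    -- The CPS'd, defunctionalized traversal: recursion over the tree has
    -- become iteration driven by the reified stack of contexts.
    sumTree' :: Tree -> Int
    sumTree' t0 = go t0 Done
      where
        go (Leaf n)   k = apply k n
        go (Node l r) k = go l (SumRightThen r k)

        apply Done               acc = acc
        apply (SumRightThen r k) acc = go r (AddLeftThen acc k)
        apply (AddLeftThen  m k) acc = apply k (m + acc)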


Deep Specifications and Certified Abstraction Layers

Ronghui Gu (Yale University)

Modern computer systems consist of a multitude of abstraction layers
(e.g., OS kernels, hypervisors, device drivers, network protocols),
each of which defines an interface that hides the implementation
details of a particular set of functionality. Client programs built on
top of each layer can be understood solely based on the interface,
independent of the layer implementation. Despite their obvious
importance, abstraction layers have mostly been treated as a system
concept; they have almost never been formally specified or
verified. This makes it difficult to establish strong correctness
properties, and to scale program verification across multiple layers.

In this talk, I will present a novel language-based account of
abstraction layers and show that they correspond to a strong form of
abstraction over a particularly rich class of specifications which we
call deep specifications. Just as data abstraction in typed functional
languages leads to the important representation independence property,
abstraction over deep specification is characterized by an important
implementation independence property: any two implementations of the
same deep specification must have contextually equivalent
behaviors. We present a new layer calculus showing how to formally
specify, program, verify, and compose abstraction layers. We show how
to instantiate the layer calculus in realistic programming languages
such as C and assembly, and how to adapt the CompCert verified
compiler to compile certified C layers such that they can be linked
with assembly layers. Using these new languages and tools, we have
successfully developed multiple certified OS kernels in the Coq proof
assistant, the most realistic of which consists of 37 abstraction
layers, took less than one person-year to develop, and can boot a
version of Linux as a guest.
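
Stated a little more formally (notation mine), implementation
independence says that for a deep specification S, any two
implementations M1 and M2 of S are indistinguishable in every program
context C:

    \forall M_1\, M_2.\ (M_1 \models S \;\wedge\; M_2 \models S)
      \;\Longrightarrow\; \forall C.\ C[M_1] \approx_{ctx} C[M_2]

where \approx_{ctx} denotes contextual equivalence of behaviors.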


Prioritized GC: Using the Garbage Collector to Support Caching

Diogenes Nunez (Tufts University)

Programmers routinely trade space for time to increase performance,
especially in the form of caching and memoization.  In managed
languages like Java or JavaScript, however, this trade-off is
complex. More space used by the application translates to higher
garbage collection overhead due to more frequent collections. The
amount of space used is hard to quantify as it depends on the
connectivity graph, and its effect on garbage collection overhead will
vary as memory demands change over time.  Existing runtime systems
provide no good support for implementing space-sensitive data
structures, forcing programmers into difficult and often brittle
choices about provisioning.

In this talk, we will look at how programmers have coped with this
issue and present our approach, the prioritized garbage collector. The
collector measures the sizes of objects in an order specified by the
application.  We then present the sache, a data structure that
utilizes this collector to
(1) figure out how much space it should use for caching,
(2) figure out how much space its values are using, and
(3) evict values to enforce that limit.
We will look at how each of these is done and conclude with results
comparing against the cache in Google's Java library, Guava.
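
For orientation, here is a deliberately naive sketch (mine, not the
sache itself) of the bookkeeping such a cache must do: track an
estimate of the space its values occupy, keep that estimate under a
budget, and evict entries to enforce the limit. In the sache, the
space accounting is done by the prioritized collector rather than by a
user-supplied sizeOf function as below.

    -- Illustration only: a size-budgeted cache with naive eviction.
    module SacheSketch where

    import Data.List (minimumBy)
    import Data.Ord (comparing)
    import qualified Data.Map.Strict as M

    data Cache k v = Cache
      { budget  :: Int               -- space limit, in (estimated) bytes
      , sizeOf  :: v -> Int          -- stand-in for collector-reported sizes
      , entries :: M.Map k (Int, v)  -- each value plus its insertion index
      , nextIdx :: Int
      }

    -- | Estimated space currently used by the cached values.
    used :: Cache k v -> Int
    used c = sum [ sizeOf c v | (_, v) <- M.elems (entries c) ]

    -- | Insert a value, then evict the oldest entries until back under budget.
    insert :: Ord k => k -> v -> Cache k v -> Cache k v
    insert key val c0 =
      shrink c0 { entries = M.insert key (nextIdx c0, val) (entries c0)
                , nextIdx = nextIdx c0 + 1 }
      where
        shrink c
          | used c <= budget c || M.null (entries c) = c
          | otherwise = shrink c { entries = M.delete (oldestKey c) (entries c) }
        oldestKey c =
          fst (minimumBy (comparing (fst . snd)) (M.toList (entries c)))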


Integrating SMT into Software Development

William Blair (Boston University)

SMT solvers are powerful tools that determine the satisfiability of
formulas from different first-order theories. These theories are relevant
to software verification, yet SMT solvers are considered low-level tools
that programmers are unlikely to encounter. While languages may support
verification features by using SMT solvers, they are unlikely to expose an
interface to the solver for the programmer to use.

In this talk, we will describe how dependent types in ATS, a
statically typed functional programming language, can serve as a
platform for integrating SMT solvers into the software development
process. In this process, program types are refined to include static
indices (classified by sorts) that model properties of types.
Typechecking a program that uses such types automatically generates
constraints that must be checked for validity in order for the program
to be well typed. We describe how exporting these constraints to SMT
solvers allows a programmer to draw on the domains understood by SMT
solvers when stating program invariants. We will demonstrate a current
prototype of this system that uses Z3 to verify low-level programs
that manipulate fixed-width bit vectors, and discuss a more generic
design that could accommodate more powerful theorem provers in the
future.
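
As a toy illustration of the kind of constraint involved (example
mine, not from the talk): if integer division is given an indexed type
that requires a nonzero divisor, then an application whose divisor is
y + 1, where the index y is already known to satisfy y >= 0, obliges
the typechecker to discharge

    \forall y.\ y \ge 0 \implies y + 1 \neq 0

which is exactly the sort of formula that can be exported to an SMT
solver such as Z3 and checked for validity. The bit-vector case works
the same way, with the constraints drawn from the theory of
fixed-width bit vectors.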


Implementing a Stepper using Delimited Continuations

Youyou Cong (Ochanomizu University)

In this talk, I present an implementation of a stepper for a subset of
OCaml by manipulating delimited continuations. Our stepper receives a
program and returns a list of all the elements in the reduction
sequence. To construct each intermediate program, we capture the
context surrounding the redex currently in focus. In our
implementation, the context is obtained using the control operators
shift and reset.

Our stepper is implemented as an interpreter that is close to a
typical big-step interpreter. The difference is that it reconstructs
the current evaluation context in the cases where the interpreter
returns a non-value expression. When a redex is found, the context is
captured by the shift operator and is used to reconstruct the current
program.
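
The following toy Haskell rendering (mine; the talk's implementation
is in OCaml) shows the central trick on addition expressions, using
shiftT and resetT from the transformers library: when the interpreter
reaches a redex, it captures the surrounding context, plugs the
un-reduced redex back in to reconstruct the current whole program,
records that program, and then plugs in the reduct to continue.

    -- Illustration only: a stepper for addition expressions.
    module StepperSketch where

    import Control.Monad.Trans.Class (lift)
    import Control.Monad.Trans.Cont (ContT, evalContT, resetT, shiftT)
    import Control.Monad.Trans.Writer (Writer, execWriter, tell)

    data Expr = Lit Int | Add Expr Expr deriving Show

    type Step a = ContT Expr (Writer [Expr]) a

    eval :: Expr -> Step Expr
    eval e@(Lit _) = return e
    eval (Add a b) = do
      a' <- eval a
      case a' of
        Lit x -> do
          b' <- eval b
          case b' of
            Lit y ->
              -- Redex found: capture the context up to the enclosing reset.
              shiftT $ \k -> do
                whole <- lift (k (Add a' b'))  -- plug the redex back in: this
                lift (tell [whole])            --   rebuilds the current program
                lift (k (Lit (x + y)))         -- plug in the reduct and go on
            _ -> return (Add a' b')  -- right argument is not a value: rebuild
        _ -> return (Add a' b)       -- left argument is not a value: rebuild

    -- | Every program in the reduction sequence, ending with the final value.
    stepper :: Expr -> [Expr]
    stepper e = execWriter $ do
      v <- evalContT (resetT (eval e))
      tell [v]

For instance, stepper (Add (Add (Lit 1) (Lit 2)) (Lit 3)) yields the
programs corresponding to (1+2)+3, 3+3, and 6, in that order.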

This is joint work with Kenichi Asai (Ochanomizu University).


Tracking the Flow of Ideas in the Programming Languages Literature

Michael Greenberg (Pomona College)

How have conferences like ICFP, OOPSLA, PLDI, and POPL evolved over
the last 20 years? Did generalizing the Call for Papers for OOPSLA in
2007 or changing the name of the umbrella conference to SPLASH in 2010
have any effect on the kinds of papers published there? How do POPL
and PLDI papers compare, topic-wise? Is there related work that I am
missing? Have the ideas in O'Hearn's classic paper on separation logic
shifted the kinds of papers that appear in POPL? Does a proposed
program committee cover the range of submissions expected for the
conference? If we had better tools for analyzing the programming
language literature, we might be able to answer these questions and
others like them in a data-driven way. In this talk, we present some
early results showing how topic modeling, a branch of machine
learning, might help the programming language community better
understand our literature.


Last modified Thursday, June 4th, 2015 9:37:04am