These abstracts are for talks at this event.

NEPLS is a venue for ongoing research, so the abstract and supplemental material associated with each talk are necessarily a snapshot in time. The work presented here may be in a state of flux. In all cases, please consult the authors' Web pages for up-to-date information rather than treating these pages as a definitive source.


Symmetric Multimodal Intelligent IDEs, Framework Adoption

Greg Sullivan

Under two DARPA-funded seedlings, we are studying (1) Symmetric
Multimodal Interactive Intelligent Development Environments, and (2)
Community-Based Framework Adoption Support.  Together, these two
research efforts aim to produce software design and development
environments where the computer-based agent is a collaborator rather
than just a glorified text editor.

These efforts bring together research in multimodal (specifically
sketch + speech) user interfaces, mixed-initiative (a.k.a. symmetric)
HCI, software repository mining, and domain-specific modeling.

In this brief talk, I will give an overview of how the various
research fields combine, and present a few storyboards demonstrating the
envisioned (class of) systems.


The OO/XML mismatch: Is there a logic-based solution?

Suad Alagic

The open problem of the mismatch between object-oriented languages and
XML has not been resolved. The problem is commonly attributed to a
mismatch between the underlying type systems. In fact, however,
virtually all object-oriented interfaces to XML, as well as typed
XML-oriented languages, suffer from an inability to express constraints
such as those available in XML Schema. These constraints include ranges
on the number of occurrences, keys, and referential integrity. The
rules for type derivation by restriction in XML Schema are based on
constraints and hence cannot be expressed in any of the well-known
object-oriented interfaces to XML. The semantics of XML Schema groups
also poses a major issue for these interfaces. Specification of those
features is beyond the expressiveness of the type systems of mainstream
programming languages. Rather than relying on those type systems, we
make a major shift by using constraint (assertion) languages to specify
features of XML Schema. This makes it possible to use prover technology to
verify constraints associated with XML Schema. More importantly,
object-oriented programs processing XML data are now subject to these
constraints expressed in an assertion language such as JML or
Spec#. The underlying support of this object-oriented/XML technology
is an extended virtual platform that manages object-oriented
specifications of XML types and structures via reflection and allows a
variety of static and dynamic verification techniques. The prover
technology is based on PVS.

Joint work with Mark Royer and David Briggs.
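
The paper's setting is object-oriented: the constraints are written in
an assertion language such as JML or Spec# and discharged with PVS. As
a language-neutral illustration only, the following Haskell sketch (all
names hypothetical) shows the kinds of XML Schema constraints in
question, an occurrence range, a key, and a keyref-style referential
integrity check, expressed as an executable assertion rather than in
the type system.

    import Data.List (nub)

    -- Hypothetical object-style representation of a purchase-order element.
    data Item = Item { itemId :: Int, refCustomer :: Int } deriving Show

    data PurchaseOrder = PurchaseOrder
      { customerIds :: [Int]
      , items       :: [Item]
      } deriving Show

    -- XML Schema-style constraints written as an assertion:
    -- an occurrence range on items, uniqueness of a key, and
    -- referential integrity of a keyref into customerIds.
    validOrder :: PurchaseOrder -> Bool
    validOrder po =
         length (items po) >= 1 && length (items po) <= 10           -- minOccurs/maxOccurs
      && nub keys == keys                                            -- key: itemId is unique
      && all (`elem` customerIds po) (map refCustomer (items po))    -- keyref
      where
        keys = map itemId (items po)

    main :: IO ()
    main = print (validOrder (PurchaseOrder [7] [Item 1 7, Item 2 7]))  -- True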


Resolving and Exploiting the k-CFA Paradox

David Van Horn

Low-level program analysis is a fundamental problem, taking the shape
of ``flow analysis'' in functional languages and ``points-to''
analysis in imperative and object-oriented (OO) languages. Despite the
similarities, the vocabulary and results in the two communities remain
largely distinct, with limited cross-understanding. One of the few
links is Shivers's kCFA work, which has advanced the concept of
``context-sensitive analysis'' and is widely known in both
communities. Recent results, however, indicate that the relationship
between the different incarnations of the analysis is not
understood. Van Horn and Mairson proved kCFA for k >= 1 to be
EXPTIME-complete, hence no polynomial algorithm exists. Yet there have
been multiple polynomial formulations of context-sensitive points-to
analyses in OO languages. Is functional kCFA a profoundly different
analysis from OO kCFA? We resolve this paradox by showing that OO
features conspire to make the exact same specification of kCFA run in
polynomial time: objects and closures are subtly different, in a way
that interacts crucially with context-sensitivity. This leads to a
significant practical result: by emulating the OO approximation, we
derive a polynomial hierarchy of context-sensitive CFAs for functional
programs, simultaneously achieving high precision and efficiency.

Joint work with Matt Might and Yannis Smaragdakis.
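
As a rough, hedged illustration (not from the paper) of the
closure/object difference the result turns on: the free variables of a
functional closure may each have been bound in a different calling
context, so an analysis tracks an environment per variable, whereas an
object copies the values it needs into fields at a single allocation
site. The Haskell sketch below (names illustrative) contrasts the two
shapes; emulating the copy-at-creation style is what yields the
polynomial analyses.

    -- A curried function: the inner closure's free variable a was bound
    -- in an enclosing call, so its meaning chains through that earlier
    -- binding context.
    nested :: Int -> Int -> Int
    nested a = \b -> a + b

    -- Object-like "flat" capture: the needed value is copied into a
    -- record at one creation site, the way an object's fields are all
    -- initialized when it is allocated.
    data Adder = Adder { capturedA :: Int }

    mkAdder :: Int -> Adder            -- plays the role of an allocation site
    mkAdder a = Adder { capturedA = a }

    applyAdder :: Adder -> Int -> Int
    applyAdder obj b = capturedA obj + b

    main :: IO ()
    main = do
      print (nested 1 2)                 -- 3, via a chained environment
      print (applyAdder (mkAdder 1) 2)   -- 3, via a value copied at creation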


Coarse-Grained Transactions

Maurice Herlihy

Traditional transactional memory systems suffer from overly
conservative conflict detection, yielding so-called false conflicts,
because they are based on fine-grained, low-level read/write
conflicts. In response, the recent trend has been toward integrating
various abstract data-type libraries using ad-hoc methods of
high-level conflict detection. These proposals have led to improved
performance, but the lack of a unified theory has led to confusion in the
literature. We clarify these recent proposals by defining a
generalization of transactional memory in which a transaction consists
of coarse-grained (abstract data-type) operations rather than simple
memory read/write operations. We provide semantics for both
pessimistic (e.g. transactional boosting) and optimistic
(e.g. traditional TMs and recent alternatives) execution. We show that
both are included in the standard atomic semantics, yet find that the
choice imposes different requirements on the coarse-grained
operations: pessimistic requires operations be left-movers, optimistic
requires right-movers. Finally, we discuss how the semantics applies
to numerous TM implementation details discussed widely in the
literature.

Joint work with Eric Koskinen and Matthew Parkinson.
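
A minimal sketch (not from the paper; the names are hypothetical) of
what coarse-grained conflict detection looks like for a set ADT:
conflicts are defined by whether abstract operations commute, rather
than by the reads and writes they would perform on memory.

    -- Abstract set operations.
    data SetOp k = Add k | Remove k | Contains k
      deriving (Eq, Show)

    -- Conservative commutativity test at the ADT level: operations on
    -- distinct keys always commute; operations on the same key commute
    -- only if both are read-only.
    commute :: Eq k => SetOp k -> SetOp k -> Bool
    commute op1 op2
      | key op1 /= key op2 = True
      | otherwise          = isRead op1 && isRead op2
      where
        key (Add k)      = k
        key (Remove k)   = k
        key (Contains k) = k
        isRead (Contains _) = True
        isRead _            = False

    -- Coarse-grained conflict is simply non-commutativity, even though
    -- at the memory level almost any two inserts would touch
    -- overlapping words and appear to conflict.
    conflicts :: Eq k => SetOp k -> SetOp k -> Bool
    conflicts op1 op2 = not (commute op1 op2)

    main :: IO ()
    main = do
      print (conflicts (Add 1) (Add (2 :: Int)))       -- False: distinct keys
      print (conflicts (Add 1) (Contains (1 :: Int)))  -- True: same key, one write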


Co-action

Jonathan Edwards

Co-action is a new form of synchronous reactive programming. From the
programmer's point of view, all actions take place simultaneously, and
all interdependencies between actions are resolved
instantaneously. Eliminating execution order avoids many of the
complexities and pitfalls of imperative programming. What is novel is
that co-action also supports an imperative model of state, with a heap
of dynamically allocated objects pointing to one another, and
assignment through pointer variables. Other forms of declarative
programming either eliminate pointers or fall back to explicitly
ordering execution. Co-action attempts to combine the best of both
declarative and imperative programming.

Declarative languages avoid pointers for good reason: to determine a
correct execution order, data flow dependencies must be known, but
pointer aliasing makes that undecidable at compile time. Co-action
instead finds a correct execution order at run time by monitoring
program execution and doing rollbacks to backtrack. The undecidable
problem at compile time is converted into a search problem at run
time. Co-actions are in a sense the dual of transactions, combining
synchronous actions instead of isolating asynchronous actions, and
parallelizing instead of serializing.

To better study and communicate these ideas, I have defined a small
formal calculus. The correctness criterion of an execution order is
Coherence: every field is written at most once, and prior to any
reads.  The completeness theorem states that if a coherent execution
order exists, then the co-action execution algorithm will find it. As
of this writing I have been unable to prove completeness, and am
seeking advice.

For more information on this project, see coherence-lang.org.
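
To make the Coherence criterion concrete, here is a small Haskell
sketch (not the co-action calculus or its execution algorithm; the
actions and fields are hypothetical). Each action is summarized by the
fields it reads and writes, and a brute-force search stands in for the
run-time monitoring-and-rollback strategy: it looks for an order in
which every field is written at most once and each write precedes all
reads of that field.

    import Data.List (permutations, nub)

    data Action = Action
      { name     :: String
      , readSet  :: [String]
      , writeSet :: [String]
      } deriving Show

    -- Check coherence of one candidate order by scanning it left to
    -- right, remembering which fields have been read or written so far.
    coherent :: [Action] -> Bool
    coherent = go [] []
      where
        go _ _ [] = True
        go seenReads seenWrites (a:rest) =
             all (`notElem` seenWrites) (writeSet a)  -- every field written at most once
          && all (`notElem` seenReads)  (writeSet a)  -- a write precedes all reads of its field
          && go (nub (seenReads ++ readSet a))
                (nub (seenWrites ++ writeSet a))
                rest

    -- Brute-force stand-in for the run-time search: try orders until a
    -- coherent one is found (the real algorithm monitors execution and
    -- rolls back instead of enumerating permutations).
    findCoherentOrder :: [Action] -> Maybe [Action]
    findCoherentOrder actions =
      case filter coherent (permutations actions) of
        (order:_) -> Just order
        []        -> Nothing

    main :: IO ()
    main = do
      let a = Action "a" ["x"] ["y"]   -- reads x, writes y
          b = Action "b" []    ["x"]   -- writes x
      print (fmap (map name) (findCoherentOrder [a, b]))  -- Just ["b","a"]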


Does This Guarantee of Termination Make My Program Look Fat?

Yannis Smaragdakis

For the past couple of years I have been working with
sub-Turing-complete programming languages, and especially
Datalog. Datalog (with a minor extra assumption) captures the PTIME
complexity class, meaning that every Datalog program executes in
polynomial time, and every polynomial-time program is expressible in
Datalog. This would seem like a great property for most practical
purposes. Yet people's reaction is often to question its
desirability. One line of attack includes an appeal to the
"succinctness theorem", which people remember from grad school
(especially if their grad school was in upstate NY). This well-known
result in complexity theory states that one can always find programs
that are unboundedly shorter when expressed on a Turing machine than
in a language with guaranteed termination.

The purpose of this talk is to dispel the myth that the succinctness
theorem tells us anything about programming languages and real
programs. First, the theorem itself is based on the busy beaver
function and includes constants that are unimaginably large. Second,
the succinctness theorem can only apply to programs that not only
cannot themselves be expressed concisely in a terminating language,
but whose running time bound cannot even be expressed as an
acceptably succinct terminating function. In fact, the directly
opposite result is practically relevant: For programs whose running
time can be bounded by a succinct terminating function, a language
with guaranteed termination can always express the program with at
most a constant additive overhead, relative to the Turing machine
representation. For instance, in the case of Datalog, consider a
practically "large" polynomial bound, such as n^100. Any program with
running time below n^100 can be expressed in Datalog just as
succinctly as on a Turing machine except for a small constant. (Such
constants matter little in practice because they can be pre-processed
away.)

This is joint work with Neil Immerman.
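
For a feel of why termination (and a polynomial bound) comes for free,
here is the classic transitive-closure Datalog program evaluated
bottom-up, transposed into a small Haskell sketch: the set of derivable
facts over a finite database is bounded by the square of the number of
nodes, so iteration to a fixpoint takes polynomially many steps.

    import qualified Data.Set as Set

    type Edge = (Int, Int)

    -- Naive bottom-up evaluation of the Datalog program
    --   path(x, y) :- edge(x, y).
    --   path(x, z) :- path(x, y), edge(y, z).
    transitiveClosure :: Set.Set Edge -> Set.Set Edge
    transitiveClosure edges = go edges
      where
        go paths
          | paths' == paths = paths       -- fixpoint reached
          | otherwise       = go paths'
          where
            derived = Set.fromList
              [ (x, z) | (x, y)  <- Set.toList paths
                       , (y', z) <- Set.toList edges
                       , y == y' ]
            paths' = Set.union paths derived

    main :: IO ()
    main = print (transitiveClosure (Set.fromList [(1,2), (2,3), (3,4)]))
    -- fromList [(1,2),(1,3),(1,4),(2,3),(2,4),(3,4)]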


Can FPGAs provide significant performance boosts to garbage collection?

Perry Cheng

With the growing prevalence of multi-core and non-standard processing
units, we ask whether FPGAs can be used to greatly accelerate garbage
collection in a standard desktop or server environment. FPGAs have
enjoyed success in traditional streaming applications like signal
processing, but the non-locality of memory traversal in memory
management makes them seem like an uncomfortable fit. In this talk, I
will argue why FPGAs (particularly with front-side-bus connectivity)
may be an attractive option for GC in terms of both performance and
power. I will present two preliminary algorithms that show great
potential (> 100% speedup).



Contracts Made Manifest

Michael Greenberg

Since Findler and Felleisen introduced higher-order contracts, many
variants have been proposed. Broadly, these fall into two groups: some
follow Findler and Felleisen in using latent contracts, purely dynamic
checks that are transparent to the type system; others use manifest
contracts, where refinement types record the most recent check that
has been applied to each value. These two approaches are commonly
assumed to be equivalent---different ways of implementing the same
idea, one retaining a simple type system, and the other providing more
static information. Our goal is to formalize and clarify this folklore
understanding.

Our work extends that of Gronski and Flanagan, who defined a latent
calculus lambda_C and a manifest calculus lambda_H, gave a
translation phi from lambda_C to lambda_H, and proved that, if a
lambda_C term reduces to a constant, then so does its phi-image. We
enrich their account with a translation psi from lambda_H to lambda_C
and prove an analogous theorem.

We then generalize the whole framework to dependent contracts, whose
predicates can mention free variables. This extension is both
pragmatically crucial, supporting a much more interesting range of
contracts, and theoretically challenging. We define dependent versions
of lambda_H and two dialects ("lax" and "picky") of lambda_C,
establish type soundness---a substantial result in itself, for
lambda_H---and extend phi and psi accordingly. Surprisingly, the
intuition that the latent and manifest systems are equivalent now
breaks down: the extended translations preserve behavior in one
direction but, in the other, sometimes yield terms that blame more.

Joint work with Benjamin C. Pierce and Stephanie Weirich.
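
As a rough illustration of the latent/manifest split (in Haskell rather
than the paper's calculi lambda_C and lambda_H; the predicates and
blame labels are made up): a latent contract is a purely dynamic check
that leaves the value's type unchanged, while a manifest contract is a
cast whose result type records that a check occurred. Haskell's newtype
can only approximate a refinement type, which in lambda_H would carry
the predicate itself.

    -- Latent style: a dynamic check, invisible to the type system.
    latentCheck :: (a -> Bool) -> String -> a -> a
    latentCheck p blameLabel x
      | p x       = x
      | otherwise = error ("contract violated, blaming " ++ blameLabel)

    -- Manifest style: the check is a cast into a wrapper type that
    -- stands in for a refinement type recording the most recent check.
    newtype Checked a = Checked { unChecked :: a }

    castTo :: (a -> Bool) -> String -> a -> Checked a
    castTo p blameLabel x
      | p x       = Checked x
      | otherwise = error ("cast failed, blaming " ++ blameLabel)

    main :: IO ()
    main = do
      print (latentCheck (> 0) "client" (5 :: Int))        -- still just an Int
      print (unChecked (castTo even "client" (4 :: Int)))  -- the type recorded a check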


An Ode to Arrows

Hai Liu

We study a number of embedded DSLs for autonomous ordinary
differential equations (autonomous ODEs) in Haskell.  A naive
implementation based on the lazy tower of derivatives is
straightforward but has serious time and space leaks due to the loss
of sharing when handling cyclic and infinite data structures. In
seeking a solution to fix this problem, we explore a number of DSLs
ranging from shallow to deep embeddings, and middle-grounds in
between. We advocate a solution based on arrows, an abstract notion of
computation that offers both a succinct representation and an
effective implementation.  Arrows lend themselves to a combinator
style that happens to capture both sharing and recursion elegantly.
We further relate our arrow-based DSL to a more constrained form of
arrows called causal commutative arrows, the normalization of which
leads to a staged compilation technique improving ODE performance by
orders of magnitude.

Joint work with Paul Hudak.
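
For readers unfamiliar with the "lazy tower of derivatives", here is a
minimal Haskell sketch of the naive starting point (not the arrow-based
DSL the talk advocates; the names are illustrative): a signal is
represented by itself together with all of its derivatives, integration
prepends an initial condition, and an autonomous ODE such as y' = y
becomes a cyclic definition. It is exactly this kind of cyclic,
infinite structure in which the loss of sharing causes the time and
space leaks mentioned above.

    -- A value together with all of its derivatives: [y, y', y'', ...].
    type Tower = [Double]

    -- Integration with an initial condition prepends it to the tower.
    integ :: Double -> Tower -> Tower
    integ y0 y' = y0 : y'

    -- The autonomous ODE y' = y with y(0) = 1 (the exponential) is a
    -- cyclic definition over towers.
    expTower :: Tower
    expTower = integ 1 expTower

    -- Evaluate at x by summing the Taylor series at 0, truncated to n terms.
    evalTower :: Int -> Double -> Tower -> Double
    evalTower n x tower =
      sum [ d * x ^ k / fromIntegral (product [1 .. k])
          | (d, k) <- zip (take n tower) [0 :: Integer ..] ]

    main :: IO ()
    main = print (evalTower 20 1.0 expTower)   -- approximately e = 2.71828...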


Last modified Monday, November 30th, 2009 9:56:17pm