NEPLS

These abstracts are for talks at this event.

NEPLS is a venue for ongoing research, so the abstract and supplemental material associated with each talk are necessarily snapshots in time. The work presented here may be in a state of flux. In all cases, please consult the authors' Web pages for up-to-date information. Please don't refer to these pages as a definitive source.


Benjamin Pierce (University of Pennsylvania)

Combinators for Bi-Directional Tree Transformations: A Linguistic Approach to the View Update Problem

We propose a novel approach to the well-known VIEW UPDATE PROBLEM for
the case of tree-structured data: a domain-specific programming language
in which all expressions denote bi-directional transformations on
trees.  In one direction, these transformations -- dubbed LENSES -- map
a "concrete" tree into a simplified "abstract view"; in the other, they
map a modified abstract view, together with the original concrete tree,
to a correspondingly modified concrete tree.  Our design emphasizes both
robustness and ease of use, guaranteeing strong well-behavedness and
totality properties for well-typed lenses.

We identify a natural mathematical space of well-behaved bi-directional
transformations (over arbitrary structures), study definedness and
continuity in this setting, and state a precise connection with the
classical theory of "update translation under a constant complement"
from databases.  We then instantiate this semantic framework in the form
of a collection of LENS COMBINATORS that can be assembled to describe
transformations on trees.  These combinators include familiar constructs
from functional programming (composition, mapping, projection, conditionals,
recursion) together with some novel primitives for manipulating trees
(splitting, pruning, copying, merging, etc.).
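
For readers who want a concrete picture of the get/put formulation, the
following OCaml sketch shows the shape of a lens and of one combinator,
composition.  It is only an illustration of the general idea; the
paper's combinators operate on trees and come with precise
well-behavedness and totality guarantees that this sketch does not
capture.

  (* A lens from a concrete type 'c to an abstract type 'a is a pair of
     functions: [get] maps the concrete structure to its view, and [put]
     pushes a modified view back, using the original concrete structure
     for anything the view omits. *)
  type ('c, 'a) lens = {
    get : 'c -> 'a;
    put : 'a -> 'c -> 'c;
  }

  (* Round-trip laws of a well-behaved lens, informally:
       put (get c) c = c    (putting back an unmodified view changes nothing)
       get (put a c) = a    (the view reflects what was put back)        *)

  (* Composition, one of the "familiar constructs from functional
     programming" mentioned above. *)
  let compose (l1 : ('c, 'b) lens) (l2 : ('b, 'a) lens) : ('c, 'a) lens = {
    get = (fun c -> l2.get (l1.get c));
    put = (fun a c -> l1.put (l2.put a (l1.get c)) c);
  }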

We illustrate the expressiveness of these combinators by developing a number
of bi-directional list-processing transformations as derived forms and
sketch the real-world application of lenses in the Harmony generic data
synchronizer.

[Joint work with Nate Foster, Michael Greenwald, Jon Moore, Alan Schmitt.]


David Detlefs (Sun Microsystems)

Garbage-First Garbage Collection

Garbage-First is a server-style garbage collector, targeted for
multi-processors with large memories, that meets a soft real-time goal
with high probability, while achieving high throughput.  Whole-heap
operations, such as global marking, are performed concurrently with
mutation, to prevent interruptions proportional to heap or live-data
size.  Concurrent marking both provides collection "completeness" and
identifies regions ripe for reclamation via compacting evacuation.
This evacuation is performed in parallel on multiprocessors, to
increase throughput.
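
As a rough illustration of the region-based idea (not Garbage-First's
actual data structures or cost model), the following OCaml sketch picks
a set of regions to evacuate: regions with the most reclaimable space
are chosen greedily as long as their estimated evacuation cost fits a
soft pause-time budget.

  (* Illustrative only: region bookkeeping and a made-up cost model. *)
  type region = {
    size_bytes : int;   (* total size of the region *)
    live_bytes : int;   (* live data, as estimated by concurrent marking *)
  }

  let garbage r = r.size_bytes - r.live_bytes

  (* Assumed cost model: evacuation time grows with the amount of live
     data that must be copied. *)
  let evacuation_cost_ms r = float_of_int r.live_bytes *. 1e-6

  (* Greedily choose the regions with the most garbage whose combined
     estimated cost stays within the pause budget (in milliseconds). *)
  let choose_collection_set ~pause_budget_ms regions =
    let by_garbage =
      List.sort (fun a b -> compare (garbage b) (garbage a)) regions
    in
    let rec go budget acc = function
      | [] -> List.rev acc
      | r :: rest ->
          let c = evacuation_cost_ms r in
          if c <= budget then go (budget -. c) (r :: acc) rest
          else go budget acc rest
    in
    go pause_budget_ms [] by_garbage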


Eijiro Sumii (University of Pennsylvania)

A Bisimulation for Type Abstraction and Recursion

We present a sound, complete, and elementary proof method---based on
bisimulation---for contextual equivalence in lambda-calculus with full
universal, existential, and recursive types.  Unlike logical relations
(either semantic or syntactic), our development is elementary, using
only sets and relations and avoiding advanced machinery such as domain
theory, admissibility, and ⊤⊤-closure.  Unlike other bisimulations,
ours is complete even for existential types.  The key idea is to
consider _sets_ of relations---instead of just relations---as
bisimulations.
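
As a very small illustration of the data involved (and nothing more),
the shape suggested by the abstract is that a candidate bisimulation is
a set of relations on terms rather than a single relation; the
conditions such a set must satisfy are the substance of the paper and
are not reproduced here.

  (* Shape only; [term] is a placeholder syntax. *)
  type term = Var of string | Lam of string * term | App of term * term

  (* One relation on terms ... *)
  type relation = (term * term) list

  (* ... and the key idea: a bisimulation is a set of such relations. *)
  type bisimulation = relation list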


Suad Alagic (University of Southern Maine)

Parametric Polymorphism for Java: Is There Any Hope in Sight?

In spite of years of research toward a solution to the problem of
extending Java with parametric polymorphism (genericity), the
officially accepted solution, just about to be released, allows
violations of the Java type system and turns a type-safe language
into an unsafe one.  The run-time type information in this release is
incorrect, which leads to major problems for programmers relying on
Java's reflective capabilities.  We show that there are two basic
reasons for these problems.  The first is that the idiom underlying
this solution is provably incorrect.  The second is that the problem
of extending Java with parametric polymorphism has no correct
solution unless the Java Virtual Machine is extended to handle it
properly.  We explain the subtleties of a correct implementation
technique, including the representation of parametric classes in the
standard Java class file format, the representation of instantiated
parametric class objects, extensions of Java Core Reflection to
report type information about (instantiated) parametric classes, and
the class loading techniques this solution for extending Java with
generics requires.  Previous solutions to this problem are analyzed
as well.

This is joint work with Brian Cabana and Jeff Faulkner.


Jue Wang (Boston University)

Generating Random Lambda Calculus Terms

We explore the problem of generating lambda calculus terms of a given
size uniformly at random. This work has several motivations. First,
through performing statistical sampling experiments with randomly
generated terms, we can study interesting properties of lambda
calculus terms such as the ratio of application nodes to lambda nodes
and the percentage of well-typed terms in a given type system. Second,
random lambda calculus terms can serve as inputs to program analysis
algorithms such as type inference algorithms to evaluate both their
performance and correctness. To evaluate the performance of a given
algorithm, we can perform average case analysis of the algorithm
through empirical evaluation. To evaluate the correctness of a new
program manipulation algorithm that accomplishes the same task as
another well-known and established algorithm, we can run both
algorithms on random terms and compare the results to gain a measure
of confidence in the correctness of this algorithm.

In this talk, I will present an algorithm (implemented in OCaml) that
generates a random lambda calculus term of a given size, assuming a
uniform distribution over all terms of that size. To improve the
efficiency of generation, the current algorithm makes use of
memoization and employs a number representation that stores numbers
approximately. I will also discuss some of the possible applications
for such a tool and present some preliminary results.
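
As a hedged sketch of the counting step that such a generator typically
relies on (the size convention, data structures, and exact recurrences
of the actual OCaml implementation may differ), the following code
counts lambda terms by size with memoization; uniform sampling then
chooses each production with probability proportional to these counts.
The counts grow extremely quickly, which is why an approximate number
representation is attractive in practice.

  (* count n m = number of terms of size n with m free de Bruijn
     variables, under the illustrative convention that every variable,
     lambda, and application node has size 1. *)
  let table : (int * int, int) Hashtbl.t = Hashtbl.create 97

  let rec count n m =
    match Hashtbl.find_opt table (n, m) with
    | Some c -> c
    | None ->
        let c =
          if n <= 0 then 0
          else if n = 1 then m               (* one of the m variables *)
          else begin
            (* a lambda: its body has size n-1 and one more variable *)
            let lams = count (n - 1) (m + 1) in
            (* an application: split the remaining n-1 between the parts *)
            let apps = ref 0 in
            for i = 1 to n - 2 do
              apps := !apps + (count i m * count (n - 1 - i) m)
            done;
            lams + !apps
          end
        in
        Hashtbl.add table (n, m) c;
        c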

This talk presents work done in collaboration with Franklyn Turbak
(Wellesley College).


Kenichi Asai (Northeastern University)

Partial Evaluation + Backtracking = Stepper

In this talk, I describe a preliminary experiment in writing a stepper
for the lambda calculus using the concepts of partial evaluation and
backtracking.  Given a program, a stepper produces a list of all the
intermediate programs that appear during the execution.  The basic
idea is to first search for the current redex and then perform one-step
reduction via partial evaluation, where only the current redex is
regarded as static and all others as dynamic.  By repeating this
process, a list of all the intermediate programs is obtained.  Since
the next redex is found close to the current redex, this repeating
process can be implemented as a form of backtracking.

The use of backtracking makes the structure of the stepper very
simple.  In fact, it turns out to be almost identical to an online
partial evaluator written in monadic style.  By instantiating the
monad to the backtracking monad and with a simple twist, a partial
evaluator becomes a stepper.
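
For readers who have not seen a stepper's output before, the following
naive OCaml sketch produces the list of intermediate programs by
repeatedly reducing the leftmost-outermost redex.  It deliberately does
not use the partial evaluation + backtracking structure described in
the talk, and its substitution is not capture-avoiding (it assumes
variable names are distinct).

  type term = Var of string | Lam of string * term | App of term * term

  (* Naive substitution: replace free occurrences of x by v. *)
  let rec subst x v = function
    | Var y -> if x = y then v else Var y
    | Lam (y, b) -> if x = y then Lam (y, b) else Lam (y, subst x v b)
    | App (f, a) -> App (subst x v f, subst x v a)

  (* Reduce the leftmost-outermost redex, if there is one. *)
  let rec step = function
    | App (Lam (x, b), a) -> Some (subst x a b)
    | App (f, a) ->
        (match step f with
         | Some f' -> Some (App (f', a))
         | None ->
             (match step a with
              | Some a' -> Some (App (f, a'))
              | None -> None))
    | Lam (x, b) ->
        (match step b with Some b' -> Some (Lam (x, b')) | None -> None)
    | Var _ -> None

  (* The stepper's output: the original program followed by every
     intermediate program (this diverges on non-terminating terms). *)
  let rec trace t =
    match step t with None -> [ t ] | Some t' -> t :: trace t'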


Michael Hind (IBM Watson Research Center)

Dynamic Compilation and Adaptive Optimization in Virtual Machines

The past decade has witnessed the widespread adoption of programming
languages designed to execute on virtual machines, such as the Java
and C# programming languages. However, such languages face significant
performance challenges beyond those confronted by traditional
languages. First, portable program representations and dynamic
language features force the deferral of most optimizations until
runtime, inducing runtime optimization overhead. Second, modular
program representations preclude many forms of whole-program
interprocedural optimization. Third, virtual machines incur additional
costs for runtime services such as security guarantees and automatic
memory management. To address these challenges, mainstream virtual
machine implementations include substantial infrastructure for online
profiling, dynamic compilation, and feedback-directed optimization. As
a result, adaptive optimization has begun to mature as a widespread
production-level technology.

This tutorial will survey the state of the art in the areas of dynamic
compilation and adaptive optimization in virtual machines. Dynamic
compilation is the process of dynamically optimizing a portable
representation, such as Java bytecodes, into native code. Adaptive
optimization is the online decision process that determines the
overall strategy for profiling and employing dynamic compilation. In
addition to surveying the state of the art, this tutorial will also
debunk several misconceptions about these two topics.
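
As a toy illustration of the flavor of that online decision process
(the thresholds, tiers, and data structures below are assumptions, not
any particular virtual machine's policy), the following OCaml sketch
counts profiling samples per method and promotes a method to a higher
optimization tier once its count crosses a threshold.

  type opt_level = Interpreted | Baseline | Optimized

  let samples : (string, int) Hashtbl.t = Hashtbl.create 64
  let levels : (string, opt_level) Hashtbl.t = Hashtbl.create 64

  let level m =
    Option.value (Hashtbl.find_opt levels m) ~default:Interpreted

  (* Called whenever a profiling sample lands in method [m]. *)
  let on_sample m =
    let n = 1 + Option.value (Hashtbl.find_opt samples m) ~default:0 in
    Hashtbl.replace samples m n;
    match level m with
    | Interpreted when n >= 100 ->
        Hashtbl.replace levels m Baseline     (* cheap dynamic compile *)
    | Baseline when n >= 10_000 ->
        Hashtbl.replace levels m Optimized    (* feedback-directed recompile *)
    | _ -> ()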

This presentation is a shortened version of a tutorial presented at PLDI'04 by Stephen Fink, David Grove, and Michael Hind. You may also be interested in an early version of a survey paper that will appear in a future issue of the Proceedings of the IEEE in 2005.

