
NEPLS

These abstracts are for talks at this event.

NEPLS is a venue for ongoing research, so the abstract and supplemental material associated with each talk necessarily capture a moment in time. The work presented here may be in a state of flux. In all cases, please consult the authors' Web pages for up-to-date information, and please do not treat these pages as a definitive source.


I Didn't Want My Java DECAF!

Eliot Moss and Emery Berger (UMass Amherst)

In 1997, Eliot Moss gave an invited talk at OOPSLA in which he argued
that we SHOULD be able to make Java run as fast as FORTRAN. More than
five years later, this vision has not been realized. What has
happened, or not happened, and why? Was he simply wrong? In this
provocative talk, Moss and Berger point out what they believe the
barriers and the research opportunities to be, and say a bit about
their Cooperative Robust Automatic Memory Management project, which
tackles some of these problems from a broader systems perspective.


Dynamic Native Optimization of Interpreters

Gregory T. Sullivan (MIT)

The DynamoRIO framework allows us to inspect and manipulate native
x86 programs as they execute.  We apply DynamoRIO to the task of
optimizing interpreters by semi-automatically removing interpretive
overhead.  In essence, we perform partial evaluation at run time,
folding references to immutable data, especially VM instructions,
into constants, and then applying aggressive constant propagation.
We rely on annotations to the interpreter to recognize immutable data
and "trace constants".

A paper on this research, to be presented at IVME '03 in June, is available in PDF and PostScript from the author's Web page.


Perk and Filter For Better Java

John Cavazos and Eliot Moss (UMass Amherst)

Instruction scheduling is a compiler optimization that can improve
program speed, sometimes by 10% or more, but it can also be
expensive. Further, time spent optimizing matters more in a Java
just-in-time (JIT) compiler than in a traditional one, because a JIT
compiles code at run time, adding to the running time of the program.
We found that, on any given block of code, instruction scheduling
often produces no significant benefit and sometimes degrades speed.
We therefore hoped to focus scheduling effort on those blocks that
benefit from it.

Using supervised learning, we induced heuristics to predict which
blocks benefit from scheduling.  The induced function chooses, for
each block, among list scheduling (the traditional approach), a new
scheduling algorithm that is faster though sometimes less effective,
and not scheduling the block at all. Using the induced function, we
reduced scheduling effort by a factor of 2.5 to 3 and obtained 80-90%
of the improvement of scheduling every block.  Deciding when to
optimize, and which optimization(s) to apply, is an important open
problem in compiler research.  We show that supervised learning
solves one instance of this problem well.
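
To illustrate the shape of such a filter, here is a hedged Java
sketch; the features and thresholds are invented for exposition and
are not the heuristic induced in the paper:

    // Illustrative per-block scheduling filter. The real predictor is
    // induced from labeled training data; these features and cutoffs
    // are hypothetical.
    enum Decision { LIST_SCHEDULE, FAST_SCHEDULE, DONT_SCHEDULE }

    final class SchedulingFilter {
        static Decision decide(int numInstructions, int numMemoryOps,
                               int criticalPathLength) {
            if (numInstructions < 4) {
                return Decision.DONT_SCHEDULE; // tiny blocks rarely gain
            }
            if (criticalPathLength < numInstructions / 2
                    && numMemoryOps < 3) {
                return Decision.FAST_SCHEDULE; // cheap algorithm suffices
            }
            return Decision.LIST_SCHEDULE;     // full list scheduling
        }

        public static void main(String[] args) {
            System.out.println(decide(3, 0, 2));    // DONT_SCHEDULE
            System.out.println(decide(20, 1, 5));   // FAST_SCHEDULE
            System.out.println(decide(20, 8, 18));  // LIST_SCHEDULE
        }
    }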

Our paper describing this work can be found at ftp://ftp.cs.umass.edu/pub/osl/papers/nips03.ps.gz.


Predicting Problems Caused By Component Upgrades

Stephen McCamant and Michael D. Ernst (MIT)

We present a new, automatic technique to assess whether replacing
a component of a software system by a purportedly compatible
component may change the behavior of the system.  The technique
operates before integrating the new component into the system or
running system tests, permitting quicker and cheaper
identification of problems.  It takes into account the system's
use of the component, because a particular component upgrade may
be desirable in one context but undesirable in another.  No
formal specifications are required, permitting detection of
problems due either to errors in the component or to errors in
the system.  Both external and internal behaviors can be
compared, enabling detection of problems that are not immediately
reflected in the output.

The technique generates an operational abstraction for the old
component in the context of the system and generates an
operational abstraction for the new component in the context of
its test suite; an operational abstraction is a set of program
properties that generalizes over observed run-time behavior.  If
automated logical comparison indicates that the new component
does not make all the guarantees that the old one did, then the
upgrade may affect system behavior and should not be performed
without further scrutiny.  In case studies, the technique
identified several incompatibilities among software components.
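
As a toy illustration of the comparison step (not the authors'
implementation), one can model each operational abstraction as a set
of property strings and approximate "the new component makes all the
guarantees the old one did" by set containment; the real technique
compares properties by logical implication rather than syntactic
equality:

    import java.util.Set;

    // Toy upgrade check: flag the upgrade if some guarantee observed
    // of the old component in this system is not guaranteed by the
    // new component under its test suite.
    final class UpgradeCheck {
        static boolean upgradeIsSuspicious(Set<String> oldGuarantees,
                                           Set<String> newGuarantees) {
            return !newGuarantees.containsAll(oldGuarantees);
        }

        public static void main(String[] args) {
            Set<String> oldAbs = Set.of("result >= 0", "arg != null");
            Set<String> newAbs = Set.of("arg != null");
            // true: the guarantee "result >= 0" was lost
            System.out.println(upgradeIsSuspicious(oldAbs, newAbs));
        }
    }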

For more information, see http://pag.lcs.mit.edu/~smcc/projects/upgrades.html.


Principled Interoperation of Programming Language Systems

John Ridgway and Jack Wileden (UMass Amherst)

Programmers have been interested in building systems from components
written in different programming languages for as long as there have
been different programming languages.  Unfortunately, such efforts
have always been fraught with peril due to semantic and implementation
gaps between the different languages.  These gaps are often understood
(well or poorly) by the programmers creating the multilingual systems,
but usually in an ad hoc manner.

The purpose of our work is to close these gaps, to do so in a
principled manner, and to work with real, widely used programming
languages and programming language systems (PLSs).  Proceeding from
the assumption that we cannot change the underlying PLSs, we are
developing a foundation for describing which forms of interoperation
are and are not possible, and under what conditions.  Specifically,
we are adapting and extending the resource-bounded effects formalism,
developed by Trifonov and Shao for modeling single-PLS
interoperation, to support reasoning about the multiple-PLS
interoperation more typical in our setting.

In this talk we illustrate our approach by applying it to some
interoperation examples involving exceptions in Java and C++. If time
permits, we will mention other aspects of our work such as how to
handle explicit continuations and how to support dispatching types,
both single and multiple.
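
To give the flavor of the exception example, here is a hedged
Java-side sketch of such a language boundary (all names are
hypothetical): a C++ exception cannot simply propagate through the
JVM, so a principled interoperation scheme must specify how it is
translated at the boundary.

    // Hypothetical boundary: parse() is implemented in C++ and may
    // fail internally with a C++ exception. A well-defined scheme
    // maps that failure onto a declared Java exception rather than
    // letting it cross the boundary raw.
    final class NativeParser {
        static { System.loadLibrary("parser"); } // hypothetical library

        native void parse(String input) throws ParseFailedException;
    }

    // Java-side image of the C++ failure (hypothetical).
    class ParseFailedException extends Exception {
        ParseFailedException(String message) { super(message); }
    }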


A Computational Interpretation of Classical S4 Modal Logic

Chung-chieh Shan (Harvard University)

"Can you both make it on Tuesday at noon?", said Alice to Bob and
Carol, trying to schedule a joint meeting among the three of them.
Her question expresses a shared plan, which can be formalized as a
constructive proof of the following proposition: if Bob and Carol each
know a boolean value, then so can Alice.  Plan execution can be modeled
by proof reduction.
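
One plausible formalization, using an indexed modality where \Box_X A
is read "X can know A" (this notation is mine, not necessarily the
paper's):

    \[
      \Box_{\mathrm{Bob}}\,\mathrm{Bool} \;\wedge\;
      \Box_{\mathrm{Carol}}\,\mathrm{Bool} \;\rightarrow\;
      \Box_{\mathrm{Alice}}\,\mathrm{Bool}
    \]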

In general, plans are programs, and programs are proofs.  In particular,
multiagent plans are distributed programs, and distributed programs
are modal proofs.  Inspired by these slogans, I present a new proof
system for classical S4 modal logic, based most directly on Wadler's
dual calculus for classical propositional logic and Ghani, de Paiva,
and Ritter's dual-zone calculus for intuitionistic modal logic.  The
system generalizes to multiple S4-modalities and implications among
them, thus modeling multiple agents that share references to proof terms
and perform distributed computations by confluent reductions.

An early paper describing this work can be found at http://www.eecs.harvard.edu/~ccshan/cs288/paper.pdf.


Can Continuous Testing Speed Software Development?

David Saff and Michael D. Ernst (MIT)

Many modern IDEs provide continuous compilation, which may
speed software development by providing rapid feedback about
compilation errors as source code is edited.  Continuous testing
extends this idea to provide rapid feedback about test failures as
source code is edited.  To support the intuitive appeal of this idea,
this paper evaluates the potential benefits of continuous testing,
using data collected from real developers.  The paper reports both
the theoretical limit of the productivity gains such a tool could
generate and the benefits that could be gained from tools built
around particular continuous testing strategies.

We show experimental evidence that reducing the time between the
introduction of an error and its discovery by a developer can lead to
improvements in overall development time.  This evidence is collected
by high-resolution background monitoring of developer behavior, and
analyzed using a model that infers the developer's beliefs and intent
from their recorded actions.  This model is then used to drive
simulations of developer behavior and productivity in response to
different environments, analyze the impact of changing the frequency
of testing and prioritizing tests within a suite, and show that
continuous testing promises even greater improvements.  Continuous
testing proves to be a strategy worthy of further research and
implementation. 
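
For intuition only, here is a minimal Java sketch of a continuous
test runner (this is not the authors' tool, and runTests() is a
hypothetical hook onto whatever test framework the project uses): it
watches a source directory and reruns the suite on every edit.

    import java.nio.file.*;

    // Minimal continuous-testing loop: watch a source directory and
    // rerun the test suite whenever a file changes.
    final class ContinuousTester {
        public static void main(String[] args) throws Exception {
            Path src = Paths.get(args.length > 0 ? args[0] : "src");
            WatchService watcher =
                FileSystems.getDefault().newWatchService();
            src.register(watcher,
                         StandardWatchEventKinds.ENTRY_CREATE,
                         StandardWatchEventKinds.ENTRY_MODIFY,
                         StandardWatchEventKinds.ENTRY_DELETE);
            runTests();                        // initial run
            while (true) {
                WatchKey key = watcher.take(); // block until an edit
                key.pollEvents();              // drain this change's events
                runTests();                    // rapid failure feedback
                if (!key.reset()) break;       // directory went away
            }
        }

        static void runTests() {
            // Hypothetical: invoke the project's test suite and report
            // failures to the developer immediately.
            System.out.println("running tests...");
        }
    }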


Last modified Sunday, June 1st, 2003, 11:42:34pm.