

These abstracts are for talks at this event.

NEPLS is a venue for ongoing research, so the abstract and supplemental material associated with each talk are necessarily snapshots of work in progress. The work presented here may be in a state of flux. In all cases, please consult the authors' Web pages for up-to-date information, and please don't treat these pages as a definitive source.

New Models for Numerical Computing in the Java Programming Language

Guy L. Steele, Jr. (Sun Microsystems)

No abstract available at this time.

Conservation and Uniform Normalization in Lambda Calculi with Erasing Reductions

Peter Møller Neergaard (Boston University)

It has been noted that many typed lambda calculi are both weakly
normalizing and strongly normalizing. This observation led to the
Barendregt-Geuvers-Klop conjecture, which concerns the so-called pure
type systems, a family that includes well-known typed lambda calculi
such as the simply-typed lambda calculus, polymorphism (System F), and
dependent types. The conjecture states that if all terms in a pure
type system are weakly normalizing, then all terms are strongly
normalizing; a system with this property is said to be uniformly
normalizing. The literature contains many proofs of instances of this
conjecture. Most of these proofs depend on the so-called Conservation
Theorem for Lambda I (Lambda I is the lambda calculus with the
restriction that an abstraction variable always appears in the body of
the abstraction).

The Conservation Theorem can be stated in two equivalent forms: one
form is that Lambda I has conservation, i.e., that single-step
reductions of a term preserve infinite reduction paths from the term;
the other form is that Lambda I is uniformly normalizing. However,
in more general settings, conservation for an arbitrary set of terms
only implies uniform normalization if the set of terms is closed under
reduction. In particular, when reductions that erase subterms are
introduced, the two notions are not necessarily the same.
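The standard counterexample from the literature makes the gap concrete (this illustrates the general phenomenon, not a result specific to this talk):

```latex
% A term that is weakly but not strongly normalizing:
\[
  (\lambda x.\, y)\,\Omega
  \qquad\text{where}\qquad
  \Omega = (\lambda x.\, x\,x)\,(\lambda x.\, x\,x)
\]
% Contracting the outer redex erases \Omega and yields the normal form
% y, yet reducing inside \Omega loops forever, so some reduction path
% is infinite. Note that \lambda x.\, y is not a Lambda I term, since
% x does not occur in the body: erasing steps are exactly what
% separates conservation from uniform normalization.
```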

This work studies how to get uniform normalization for subsets of
terms in the presence of erasing reductions. This is first done for a
subset of the pure lambda calculus. Though a new result in its own
right, this serves mainly as a warm-up for the main result: that the
simply-typed lambda
calculus extended with pairs and projections is uniformly
normalizing. This is achieved using a CPS-translation to map lambda
terms into a uniformly normalizing subset. Effectively, the proof uses
a normalizing strategy to semi-decide strong normalization of a term.

Speaker's Supplement

This talk will present only the main ideas. Readers interested in the gory details can consult the following sources:

Analyzing Trace and Program Data

Steve Reiss (Brown University)

Raw trace data, or even raw structural or semantic data for a
realistic system, is too large to be used directly for software
understanding or visualization.  Instead, one must analyze this data
to summarize it, to find patterns, or to highlight potential problems
or areas of interest.  To address this, we have developed two general
purpose analysis facilities.  The first uses a program database and
allows arbitrary OQL or procedural queries to retrieve and organize
the appropriate information.  The second is a framework for doing
arbitrary analyses of program trace data.  This facility has let us
implement a wide variety of different analyses including:

* Interval analysis where the overall call graph is summarized by
  breaking the execution into intervals and looking at either what
  classes were being executed or what types of memory were being
  allocated or freed during each of those intervals.

* Path analysis where the dynamic call graph is modeled as a directed
  acyclic graph and the behavior of each node of the graph is encoded
  directly, as a finite state machine, or using a context-free
  grammar.

* Lock analysis where we generate all sequences of nested locks that
  occur in the program to allow these to be checked for potential
  deadlock situations.

* Performance analysis where we collect performance data for sets of
  k-level calls.  When k=1 this yields a simple profile; when k=2 it
  gives information about who calls whom; when k>2 it provides more
  detailed information.

* Class analysis where we gather and encode information about how each
  class is being used by looking at the sequence of method calls for
  each object of the class.  The analysis can either include or ignore
  nested calls and the information can be encoded with path analysis.

* Thread path analysis where we encode the sequence of calls and returns
  performed by each thread, grouping similar threads together, to get
  an understanding of what the different threads are doing.

* Sampled traces where we generate what looks like a full call trace but
  only includes those calls that are in effect at some sampling point.

* Allocation analysis where we look at the number and size of objects
  that are allocated, either individually or grouped by class, along
  with information from the garbage collector.

* Race analysis where we look for either individual objects or classes
  that are used by multiple threads at the same time.

* Trace analysis where we can generate full information about all or a
  portion of the trace.

The facility also provides a visual front end and an XML-oriented back
end to let these analyses be combined in interesting ways.
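The lock analysis sketched above can be made concrete with a toy lock-order checker: record, on each acquire, which locks the thread already holds, and flag a potential deadlock whenever two locks are ever taken in both orders. This is only an illustration of the idea, under invented names; it is not the actual Brown facility.

```java
import java.util.*;

// Toy lock-order analysis: collect "outer held before inner acquired"
// pairs from a trace of acquire/release events, then look for a pair
// taken in both orders (a length-two cycle in the lock-order relation).
public class LockOrder {
    private final Map<Long, Deque<String>> held = new HashMap<>();
    private final Set<List<String>> edges = new HashSet<>();

    public void acquire(long thread, String lock) {
        Deque<String> stack = held.computeIfAbsent(thread, t -> new ArrayDeque<>());
        for (String outer : stack) edges.add(List.of(outer, lock));
        stack.push(lock);
    }

    public void release(long thread, String lock) {
        held.get(thread).remove(lock);
    }

    // Two locks ever taken in both orders signal a potential deadlock.
    public boolean potentialDeadlock() {
        for (List<String> e : edges)
            if (edges.contains(List.of(e.get(1), e.get(0)))) return true;
        return false;
    }

    public static void main(String[] args) {
        LockOrder lo = new LockOrder();
        lo.acquire(1L, "A"); lo.acquire(1L, "B");   // thread 1: A then B
        lo.release(1L, "B"); lo.release(1L, "A");
        lo.acquire(2L, "B"); lo.acquire(2L, "A");   // thread 2: B then A
        System.out.println(lo.potentialDeadlock()); // prints true
    }
}
```

A real tool would run this over recorded trace events rather than live calls, and would report the witnessing lock pair instead of a boolean.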

Real-Time FRP

Paul Hudak (Yale University)

Functional reactive programming (FRP) is a declarative
programming language that combines a notion of continuous,
time-varying behaviors with a notion of discrete, event-based
reactivity.  In previous work we investigated key theoretical
properties of FRP but did not formally address the issue of cost.
But one of our goals is to use FRP in real-time applications, for
which it is crucial that the cost of running programs be known
beforehand.  Ideally, this cost should also be fixed.

To address these concerns, we present Real-Time FRP (RT-FRP), a
statically-typed language where the cost of each execution step is
bounded by a constant, the space needed for running a program is
also constant, and well-formed programs are provably deadlock free.
These three properties guarantee that RT-FRP can be executed in
real-time and in constant space, making it a suitable model for
programming real-time applications and embedded systems.  The syntax
of RT-FRP is developed as a restricted subset of that of FRP.  The
exact restrictions are described using concepts inspired by the
notion of tail recursion.  Most but not all of our existing programs
fall in this restricted subset.  We handle the remainder through a
mechanism for integrating RT-FRP with an external language.
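For orientation, the two core FRP notions can be summarized by their usual idealized semantic model from the FRP literature (this is the intuition behind behaviors and events, not the RT-FRP operational semantics presented in the talk):

```latex
\[
  \mathit{Behavior}\;\alpha \;\approx\; \mathit{Time} \to \alpha
  \qquad\qquad
  \mathit{Event}\;\alpha \;\approx\; [(\mathit{Time},\, \alpha)]
\]
% A behavior denotes a value varying continuously over time; an event
% denotes a time-ordered stream of discrete occurrences. RT-FRP
% restricts how such signals may be defined (roughly, to
% tail-recursion-like forms) so that each execution step takes bounded
% time and the whole program runs in constant space.
```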

This is joint work with Zhanyong Wan and Walid Taha.

Rupiah: Towards an Expressive Static Type System for Java

Nate Foster (Williams College)

Despite Java's popularity, several practical limitations imposed by
the language's type system have become increasingly apparent in recent
years. A particularly glaring omission is the lack of a generic
mechanism. As a result, many recent projects have extended Java to
support polymorphism in the style of C++ templates or Ada
generics. One project, GJ, adds F-Bounded parametric polymorphism to
Java via a homogeneous translation (such that only one compiled class
is produced for each source class), and produces bytecode that is
compatible with the standard Java Virtual Machine. However, while GJ's
simple translation based on erasure allows for maximum interaction
with existing Java code, the new parameterized types that it supports
do not operate consistently with Java's semantics for lightweight
reflection (i.e., checked type-casts and instanceof operations).
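The erasure problem is easy to demonstrate in modern Java, whose generics descend from GJ's translation: after erasure there is only one class per parameterized type, so lightweight reflection cannot see the type parameter.

```java
import java.util.*;

// After GJ-style erasure, List<String> and List<Integer> are the same
// class at run time, so instanceof and checked casts cannot consult
// the type parameter.
public class ErasureDemo {
    public static void main(String[] args) {
        List<String>  a = new ArrayList<>();
        List<Integer> b = new ArrayList<>();
        System.out.println(a.getClass() == b.getClass()); // true: one erased class

        Object o = a;
        // if (o instanceof List<String>) ...  // rejected by the compiler:
        //                                     // the parameter has been erased
        System.out.println(o instanceof List); // only the raw type is checkable
    }
}
```

Rupiah's translation, described below, instead stores the parameter information via reflection so that such checks can succeed at run time.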

We present Rupiah, a language based on features adapted from LOOM (a
provably type-safe language) and implemented by a translation based on
GJ's. Rupiah's translation differs from GJ's in that it harnesses
Java's built-in reflection to store information about
parameterized types. The resulting bytecode correctly executes checked
cast, instanceof, and array creation expressions because it can access
the necessary type information at run-time. We also add a ThisType
construct, which solves many of the problems that arise when binary
methods are mixed with inheritance, and we replace subtyping with a
different relation, matching. Finally, we add exact types; an
inheritable virtual constructor mechanism, ThisClass; and compiler
features to allow separate compilation of Rupiah source files. These
features are implemented in a modified javac compiler. Bytecode
emitted by our compiler runs on any Java 1.2 VM. Thus, our project
contributes a complete implementation of an extension of Java with a
more expressive type system that maintains a close fit with existing
Java semantics and philosophies.
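The binary-method problem that ThisType addresses can be seen in a few lines of plain Java, where a subclass cannot narrow the argument type of a binary method to its own type; it can only overload, so dispatch under a supertype view picks the wrong method. This is the classic textbook example, not Rupiah syntax.

```java
// Binary methods vs. inheritance in plain Java.
class Point {
    int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }
    boolean eq(Point p) { return x == p.x && y == p.y; }
}

class ColorPoint extends Point {
    int color;
    ColorPoint(int x, int y, int c) { super(x, y); color = c; }
    // An overload, not an override: the parameter type had to change.
    boolean eq(ColorPoint p) { return super.eq(p) && color == p.color; }
}

public class BinaryMethods {
    public static void main(String[] args) {
        Point p = new ColorPoint(1, 2, 0xff0000);
        Point q = new ColorPoint(1, 2, 0x0000ff);
        // Overload resolution uses the static types (Point, Point), so
        // Point.eq runs and the differing colors are silently ignored:
        System.out.println(p.eq(q)); // prints true
    }
}
```

With a ThisType construct, eq could be declared once with argument type "the type of the receiver," and matching replaces subtyping so that such declarations remain sound under inheritance.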

Modular Verification of Layered Software Systems

Kathi Fisler (Worcester Polytechnic Institute), Shriram Krishnamurthi (Brown University) and Don Batory (University of Texas at Austin)

Advances in automated verification (model checking) of hardware
systems have inspired a wealth of research into using these techniques
to analyze software.  Unfortunately, most existing modular model
checking techniques betray their hardware roots: they assume that
modules compose in parallel.  In contrast, layered software systems,
which have proven successful in many domains, are really
quasi-sequential compositions of parallel compositions.  Most such
systems demand and inspire new modular verification techniques.  This
talk presents algorithms that exploit a layered (or feature-based)
architecture to drive compositional verification.  Our technique can
verify large classes of properties locally within a layer; we also
characterize when a global state space construction is unavoidable.
This work is motivated by our efforts to verify a military fire
support simulation system called FSATS.

Speaker's Supplement

This talk is based on a paper by Fisler and Krishnamurthi to be presented at the European Software Engineering Conference / Foundations of Software Engineering (ESEC/FSE), 2001.

Functional Polytypic Programming

Patrik Jansson (visiting Northeastern University from Chalmers University)

Polytypic programming is a paradigm for expressing algorithms
parametrized by types. Examples of such algorithms are equality tests,
traversals and pretty printers. Each polytypic instance can be
obtained by instantiating a template algorithm with (the structure of)
a datatype.

I will present an introduction to functional polytypic programming
explaining why it is useful and what it can express.
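The flavor of a structure-directed definition can be suggested even in Java: rather than writing equality once per datatype, one function recurses over the datatype's structure (here approximated by its reflected fields). This is only a run-time analogue for illustration; polytypic systems such as PolyP instead instantiate a template at compile time from the structure of the datatype.

```java
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;

// A rough Java analogue of a polytypic equality test: one definition
// that works for many datatypes by recursing over their structure.
public class GenericEq {
    static boolean genEq(Object a, Object b) {
        if (a == b) return true;
        if (a == null || b == null || a.getClass() != b.getClass()) return false;
        // Base cases: values that already carry a sensible equality.
        if (a instanceof String || a instanceof Number || a instanceof Boolean)
            return a.equals(b);
        try {
            for (Field f : a.getClass().getDeclaredFields()) {
                if (Modifier.isStatic(f.getModifiers())) continue;
                f.setAccessible(true);
                if (!genEq(f.get(a), f.get(b))) return false; // recurse structurally
            }
        } catch (IllegalAccessException e) {
            throw new RuntimeException(e);
        }
        return true;
    }

    static class Pair {
        int fst; String snd;
        Pair(int f, String s) { fst = f; snd = s; }
    }

    public static void main(String[] args) {
        System.out.println(genEq(new Pair(1, "a"), new Pair(1, "a"))); // true
        System.out.println(genEq(new Pair(1, "a"), new Pair(2, "a"))); // false
    }
}
```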

It Pays to be Persistent

Dina Goldin (University of Massachusetts, Boston), Scott Smolka (SUNY Stony Brook), Peter Wegner (Brown University)

Persistent models of computation include a notion of "state", or
"memory", whose contents persist from one interaction of the
computing device with its environment to the next. In objects, the
values of the attributes are persistent; in processes, it is the
process state.

We consider models of computation for object-oriented and concurrent
computation, showing the equivalence of approaches based on transition
systems and on Turing Machines extended with persistence.  We also show
the greater expressiveness of "persistent" vs. "amnesic" computation for
these models.
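The persistent/amnesic distinction can be shown with a toy contrast (an illustration of the vocabulary only, not the authors' persistent-Turing-machine formalism): an amnesic device maps each input to an output independently, while a persistent device keeps state across interactions, so identical inputs can receive different answers depending on history.

```java
// "Amnesic" vs. "persistent" computation in miniature.
public class Persistence {
    // Amnesic: the answer depends only on the current input.
    static int amnesic(int x) { return x * 2; }

    // Persistent: an object whose attribute survives between calls,
    // so its behavior depends on the whole interaction history.
    static class Accumulator {
        private int state = 0;                  // the persistent "memory"
        int interact(int x) { state += x; return state; }
    }

    public static void main(String[] args) {
        Accumulator acc = new Accumulator();
        System.out.println(amnesic(3) + " " + amnesic(3));           // 6 6
        System.out.println(acc.interact(3) + " " + acc.interact(3)); // 3 6
    }
}
```

No function from single inputs to outputs can reproduce the accumulator's input/output behavior, which is the intuition behind the expressiveness gap the abstract describes.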

Last modified Tuesday, May 8th, 2001 8:54:15pm