

These abstracts are for talks at this event.

NEPLS is a venue for ongoing research, so the abstract and supplemental material associated with each talk are necessarily snapshots in time. The work presented here may be in a state of flux. In all cases, please consult the authors' Web pages for up-to-date information, and please do not treat these pages as a definitive source.

The Comet Programming Language and System

Pascal Van Hentenryck (Brown University)

This talk gives an overview of the Comet system, a programming
language designed to solve complex, large-scale combinatorial
optimization problems using local search. Comet integrates a small
number of concepts from declarative and object-oriented programming
that work together to significantly reduce the development time and
maintenance cost of these applications. Of particular interest are the
clean separation of concerns between modeling and search and the high
reusability promoted by Comet, which contrast with the ad-hoc nature
of most such applications.

This is joint work with L. Michel. An early paper on Comet appeared in OOPSLA'02. See http://www.cs.brown.edu/people/pvh/oopsla02.ps.
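To make the modeling/search separation concrete, here is a minimal local-search sketch in Python. This is not Comet (see the linked paper for Comet's actual syntax); it only illustrates the idea the abstract describes: the model is an objective function over configurations, the neighborhood is a move function, and the search loop is generic and knows nothing about either.

```python
import random

def n_queens_conflicts(board):
    # Model: board[i] is the row of the queen in column i;
    # count the pairs of queens that attack each other.
    n = len(board)
    return sum(
        1
        for i in range(n)
        for j in range(i + 1, n)
        if board[i] == board[j] or abs(board[i] - board[j]) == j - i
    )

def queen_moves(board):
    # Neighborhood: move a single queen to a different row in its column.
    n = len(board)
    return [
        board[:i] + [row] + board[i + 1:]
        for i in range(n)
        for row in range(n)
        if row != board[i]
    ]

def local_search(objective, state, moves, max_iters=2000, seed=0):
    # Search: generic hill climbing with random escapes from plateaus.
    # The loop is reusable for any model/neighborhood pair.
    rng = random.Random(seed)
    best = state
    for _ in range(max_iters):
        if objective(state) == 0:
            return state
        neighbours = moves(state)
        candidate = min(neighbours, key=objective)
        state = candidate if objective(candidate) <= objective(state) \
            else rng.choice(neighbours)
        if objective(state) < objective(best):
            best = state
    return best

solution = local_search(n_queens_conflicts, list(range(6)), queen_moves)
```

Swapping in a different objective and move function retargets the same search loop to another problem, which is the reusability point the abstract makes.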

Control Abstractions for Local Search Algorithms

Laurent Michel (University of Connecticut)

Combinatorial optimization problems are ubiquitous in many practical
applications. Yet most of them are challenging, from both a
computational-complexity and a programming standpoint. Local search is
one of the main approaches to these problems. However, it
often requires sophisticated incremental algorithms and data
structures, and considerable experimentation. Comet is a
constraint-based, object-oriented language and architecture that
significantly reduces the development time of local search algorithms.
The focus of the talk is a set of control abstractions that are
essential to capture many complex search procedures concisely. Control
primitives and abstractions like selectors, events, neighbors,
continuations, states, and checkpoints make it possible to separate
orthogonal aspects of a search procedure; they will be illustrated
with code fragments drawn from applications such as scheduling and
routing.

Joint work with Pascal Van Hentenryck, Brown University.
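Two of the abstractions named above, selectors and checkpoints, can be given a rough Python rendering. These are illustrative stand-ins loosely modeled on the concepts the abstract lists, not Comet's actual primitives:

```python
import copy
import random

def select_min(items, key, rng):
    # Selector: pick uniformly at random among the items minimising
    # `key`, so tie-breaking is randomised rather than positional.
    best = min(key(x) for x in items)
    return rng.choice([x for x in items if key(x) == best])

class Checkpoint:
    # Checkpoint: snapshot a mutable search state so a tentative
    # move can be explored and then rolled back.
    def __init__(self, state):
        self._saved = copy.deepcopy(state)

    def restore(self):
        return copy.deepcopy(self._saved)

def explore(state, move, cost):
    # Neighbor exploration: evaluate a move on a copy of the state,
    # leaving the original untouched; return the cost and trial state.
    checkpoint = Checkpoint(state)
    trial = move(checkpoint.restore())
    return cost(trial), trial

rng = random.Random(1)
choice = select_min([3, 1, 2, 1], key=lambda x: x, rng=rng)
swap_cost, swapped = explore([3, 1], move=lambda s: sorted(s), cost=sum)
```

Separating selection, state management, and move evaluation this way is one (naive, copy-based) reading of how such control abstractions keep orthogonal concerns apart.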

Automatic Detection and Repair of Errors in Data Structures

Brian Demsky (MIT)

We present a system that accepts a specification of key data structure
constraints, then dynamically detects and repairs violations of these
constraints. Our approach involves two data structure views: a
concrete view at the level of the bits in memory and an abstract view
at the level of relations between abstract objects. The abstract view
facilitates both the specification of higher level data structure
constraints (especially constraints of linked data structures) and the
reasoning required to repair any inconsistencies. Our experience using
our system indicates that the specifications are relatively easy to
develop once one understands the data structures.

We have used our tool to repair inconsistencies in three applications:
a simplified Linux file system, an interactive game, and Microsoft
Word files. For this set of benchmark applications, our system can
effectively repair errors to deliver consistent data structures that
allow the program to continue to operate successfully within its
designed operating envelope. In the absence of this repair, the
programs usually failed. Our results therefore indicate that our
technique may significantly enhance the ability of applications to
recover from data structure errors.

Joint work with Martin Rinard, MIT.

Our technical report describing this system may be found at http://www.lcs.mit.edu/publications/specpub.php?id=1656.
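A toy version of the detect-and-repair idea (not the authors' system, whose specification language the technical report describes) can be sketched for a linked list. The concrete view is the pointer structure in memory; the abstract constraint is that the list is acyclic and its cached length matches the number of reachable nodes. On a violation we repair toward consistency rather than abort:

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

class LinkedList:
    def __init__(self):
        self.head = None
        self.size = 0  # constraint: size == number of reachable nodes

def check_and_repair(lst):
    # Walk the concrete structure, collecting reachable nodes and
    # detecting a cycle; then enforce the abstract constraints:
    # (1) acyclicity -- cut the back edge, (2) size consistency --
    # recompute the stale length field.
    seen = set()
    node, prev = lst.head, None
    while node is not None and id(node) not in seen:
        seen.add(id(node))
        prev, node = node, node.next
    repaired = False
    if node is not None:          # revisited a node: cut the cycle
        prev.next = None
        repaired = True
    if lst.size != len(seen):     # stale length field: recompute it
        lst.size = len(seen)
        repaired = True
    return repaired
```

After repair the program sees a consistent (if not necessarily identical-to-intended) structure and can keep running, which is the failure-recovery behavior the abstract reports.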

FLAVERS: A Finite State Verification Technique for Software Systems

Jamieson M. Cobleigh (U. Mass Amherst)

Software systems are increasing in size and complexity and,
consequently, are becoming ever more difficult to validate.  Finite
State Verification (FSV) has been gaining credibility and attention as
an alternative to testing and to formal verification approaches based on
theorem proving.  There has recently been a great deal of excitement
about the potential for FSV approaches to prove properties about
hardware descriptions but, for the most part, these approaches do not
scale adequately to handle the complexity usually found in software.  We
have developed FLAVERS, an FSV approach that creates a compact and
conservative, but imprecise, model of the system being analyzed, and
then assists the analyst in adding additional details as guided by
previous analysis results.  We will describe the approach and discuss
some experimental results demonstrating scalability.

Contributors: Lori A. Clarke, Jamieson M. Cobleigh, Heather Conboy,
Matthew B. Dwyer, Gleb Naumovich, Leon J. Osterweil, U. Mass Amherst.
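The core of finite state verification can be sketched as product reachability: explore every reachable pair of (program point, property-automaton state) and report whether any pair drives the property into a violation. This is a drastic simplification (FLAVERS itself uses a data-flow formulation over an annotated graph model, and the node/event encoding below is invented for illustration):

```python
from collections import deque

def verify(cfg, start, violation, prop_step, prop_start):
    # Breadth-first search over (program point, property state) pairs.
    # The property holds iff no reachable pair steps the property
    # automaton into the `violation` state.
    seen = {(start, prop_start)}
    work = deque(seen)
    while work:
        node, p = work.popleft()
        for event, succ in cfg.get(node, ()):
            q = prop_step(p, event)
            if q == violation:
                return False, (node, event)   # counterexample edge
            if (succ, q) not in seen:
                seen.add((succ, q))
                work.append((succ, q))
    return True, None

def file_protocol(state, event):
    # Property automaton: a file must be opened before it is read.
    if state == "closed" and event == "read":
        return "err"
    if event == "open":
        return "open"
    if event == "close":
        return "closed"
    return state

# Control-flow graphs: node -> [(event on edge, successor node), ...]
good_cfg = {0: [("open", 1)], 1: [("read", 2)], 2: [("close", 3)]}
bad_cfg = {0: [("read", 1)]}
ok, _ = verify(good_cfg, 0, "err", file_protocol, "closed")
bad, witness = verify(bad_cfg, 0, "err", file_protocol, "closed")
```

The conservatism the abstract mentions enters when the graph over-approximates real executions: the check may then flag infeasible paths, and the analyst adds detail to rule them out.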

A Typed Representation for XML Documents

Dengping Zhu (Boston University)

When constructing programs for manipulating XML documents, we
immediately face the question as to what internal representation
should be chosen for XML documents so as to facilitate program
construction. Currently, most representations used in practice are
untyped in the sense that the type (DTD) of an XML document is not
reflected in the type of its representation (if the representation is
typed at all). In general, an untyped representation often involves a
great number of tags, which not only consume storage space but can
also incur tag checks at run-time.

We propose a typed representation for XML documents that consists of a
data part and a type part; the data part stores the data (but no tags)
in a document while the type part stores the type (DTD) of the
document.  With this representation, we can not only save significant
space when storing an XML document but also avoid run-time tag checks
that would otherwise be needed when processing the document. More
importantly, we can reap various software engineering benefits from
typed programming in the first place.

This talk presents work done under the direction of Hongwei Xi. The details of this work can be found at http://cs-people.bu.edu/zhudp/pubs/XML-rep.html.
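The data-part/type-part split can be illustrated with a hypothetical mini-encoding (this is our own illustrative scheme, not the representation of the paper): the type part is a small DTD-like descriptor, the data part is a flat tuple of strings with no tags, and tags are recovered by walking the type. Repetition counts stand in for the tags a `*`-typed region would otherwise need:

```python
# Type descriptors (illustrative): ("text",) for PCDATA,
# ("elem", name, child), ("seq", t1, t2, ...), and ("star", t)
# for zero or more repetitions of t.

def serialize(ty, data, pos=0):
    """Rebuild tagged XML from (type part, data part); returns (xml, next pos)."""
    kind = ty[0]
    if kind == "text":
        return data[pos], pos + 1
    if kind == "elem":
        _, name, child = ty
        inner, pos = serialize(child, data, pos)
        return f"<{name}>{inner}</{name}>", pos
    if kind == "seq":
        parts = []
        for t in ty[1:]:
            s, pos = serialize(t, data, pos)
            parts.append(s)
        return "".join(parts), pos
    if kind == "star":
        # the data part stores a repetition count instead of tags
        count, pos = int(data[pos]), pos + 1
        parts = []
        for _ in range(count):
            s, pos = serialize(ty[1], data, pos)
            parts.append(s)
        return "".join(parts), pos
    raise ValueError(kind)

person = ("elem", "person", ("seq", ("elem", "name", ("text",)),
                                    ("elem", "phone", ("text",))))
book = ("elem", "addrbook", ("star", person))
data = ("2", "Alice", "555-0100", "Bob", "555-0101")
xml, _ = serialize(book, data)
```

Note how the data part carries no tags at all: the type drives their reconstruction, which is the source of both the space savings and the absence of run-time tag checks during processing.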

Hybrid Modelling with Automatic Differentiation and Impulses

Henrik Nilsson (Yale University)

Functional Reactive Programming (FRP) is a framework for reactive
programming in a functional setting. FRP, in various incarnations, has
been applied to a number of domains, such as graphical animation,
graphical user interfaces, robotics, and computer vision. The latest
member of the Yale FRP family of languages is called Yampa and is
realized as an embedding in Haskell.  Recently, we in the functional
programming group at Yale have been interested in applying FRP-like
ideas to hybrid modeling and simulation of physical systems, where our
ultimate goal is a system where models can be expressed through
non-directed equations, so-called non-causal modeling. As a small step
in that direction, we have adapted the technique of automatic
differentiation to the present causal setting of Yampa, and extended
it with a notion of Dirac impulses in order to deal correctly with
certain classes of discontinuities in the models. This talk reviews
the basic ideas behind automatic differentiation, in particular, Jerzy
Karczmarczuk's elegant version for a lazy functional language with
overloading, and then considers the integration with Yampa and with
Dirac impulses.
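Karczmarczuk's construction builds lazy infinite towers of derivatives in Haskell; a first-order Python analogue using dual numbers conveys the basic idea behind automatic differentiation (this sketch is ours, covers only first derivatives, and says nothing about the impulse extension):

```python
import math

class Dual:
    # A dual number carries a value and its derivative; each arithmetic
    # operation propagates both via the chain rule, giving derivatives
    # exact up to rounding, with no finite differencing.
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def _lift(self, other):
        return other if isinstance(other, Dual) else Dual(other)

    def __add__(self, other):
        other = self._lift(other)
        return Dual(self.val + other.val, self.dot + other.dot)

    __radd__ = __add__

    def __mul__(self, other):
        other = self._lift(other)
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)

    __rmul__ = __mul__

    def sin(self):
        return Dual(math.sin(self.val), math.cos(self.val) * self.dot)

def deriv(f, x):
    # Seed the input with derivative 1; read the derivative off the output.
    return f(Dual(x, 1.0)).dot
```

The overloading trick is what makes the technique fit an embedded language like Yampa: signal functions written against ordinary arithmetic automatically carry derivative information.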

Linguistic Side Effects

Chung-chieh Shan (Harvard University)

As a natural language semanticist, I strive to scientifically explain
why "every student passed" entails "every diligent student passed",
why "a man is mugged every 11 seconds" is ambiguous, and why "nobody
asked any question" sounds better than "everybody asked any question".
Making a linguistic theory is like specifying a programming language:
one typically devises a type system to characterize what utterances
are acceptable, and a denotational semantics to explain which
statements entail which other ones.  Along this connection,
programming language research can inform linguistic theory and vice
versa; in particular, computational side effects are intimately
related to referential opacity in natural languages.  In this talk, I
will illustrate this link by using continuations and composable
contexts to analyze quantification (as in "every student passed").  No
prior knowledge of linguistics will be assumed.

(This talk describes joint work with Chris Barker and Stuart Shieber.)
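The continuation-based analysis of quantification can be caricatured in a few lines of Python (an illustrative model with an invented toy domain, not the talk's formal system): a quantified noun phrase denotes a function from its continuation, the rest of the sentence, to a truth value.

```python
# Toy domain (hypothetical individuals, chosen for illustration).
students = {"alice", "bob", "carol"}
passed = {"alice", "bob", "carol"}
diligent = {"alice", "bob"}

def every(restriction):
    # "every R" consumes its scope k -- its continuation -- and
    # checks that k holds of each individual in R.
    return lambda k: all(k(x) for x in restriction)

def some(restriction):
    return lambda k: any(k(x) for x in restriction)

# "every student passed": the NP's continuation is the predicate "passed".
every_student_passed = every(students)(lambda x: x in passed)
# Shrinking the restriction preserves truth, modelling the entailment to
# "every diligent student passed".
every_diligent_passed = every(students & diligent)(lambda x: x in passed)

# Scope ambiguity as a choice of which quantifier captures the other's
# continuation (an artificial two-quantifier sentence):
surface_scope = every(students)(lambda x: some(passed)(lambda y: x == y))
inverse_scope = some(passed)(lambda y: every(students)(lambda x: x == y))
```

The two evaluation orders in the last pair yield different truth conditions from the same parts, a computational rendering of why sentences like "a man is mugged every 11 seconds" are ambiguous.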

Last modified Thursday, February 20th, 2003 7:49:47am. Powered by PLT.