These abstracts are for talks at this event.

NEPLS is a venue for ongoing research, so the abstract and supplemental material associated with each talk are necessarily snapshots in time. The work presented here may be in a state of flux. In all cases, please consult the authors' Web pages for up-to-date information, and please do not treat these pages as a definitive source.


Abstracting Abstract Machines: Static Analysis of Evaluation Order, Return-flow, Laziness and the Stack

David Van Horn (Northeastern)

Abstract interpretations of canonical abstract machines yield novel
and useful static analyses for higher-order languages.  We support
this claim by transforming the CEK machine of Felleisen and Friedman,
a lazy variant of Krivine's machine, and the stack-inspecting CM
machine of Clements and Felleisen into abstract interpretations of
themselves.  The resulting analyses bound temporal ordering of program
events; predict return-flow and stack-inspection behavior; and
approximate the flow and evaluation of by-need parameters.  For all of
these machines, we find that a series of well-known concrete machine
refactorings, plus a technique we call store-allocated continuations,
leads to a machine that abstracts into a static analysis simply by
bounding the machine's store.  We demonstrate that the technique
scales up uniformly to allow static analysis of realistic language
features, including tail calls, conditionals, side effects,
exceptions, first-class continuations, and even garbage collection.

(Joint work with Matthew Might, University of Utah)
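
The store-allocation idea is easy to make concrete. Below is a
minimal sketch in Python (not the authors' formulation): a CESK-style
machine for the lambda calculus whose continuation frames are
allocated in the store and addressed like ordinary values. The
concrete machine's alloc returns fresh addresses; abstracting the
machine amounts to drawing addresses from a finite pool instead and
letting store entries hold sets, i.e., bounding the store.

    from collections import namedtuple

    # Syntax
    Var = namedtuple('Var', 'name')
    Lam = namedtuple('Lam', 'param body')
    App = namedtuple('App', 'fun arg')

    # Runtime values and continuation frames; frames live in the store.
    Clo = namedtuple('Clo', 'lam env')       # closure
    Ar  = namedtuple('Ar', 'arg env kaddr')  # "evaluate the argument next"
    Fn  = namedtuple('Fn', 'val kaddr')      # "apply this closure next"
    HALT = 'halt'

    _counter = iter(range(10**9))
    def alloc(store):
        # Concrete machine: fresh addresses forever.  The abstraction
        # returns addresses from a finite set and merges collisions.
        return next(_counter)

    def step(state):
        control, env, store, kaddr = state
        if isinstance(control, App):         # push an Ar frame
            a = alloc(store)
            return (control.fun, env,
                    {**store, a: Ar(control.arg, env, kaddr)}, a)
        # Otherwise `control` denotes a value: resolve it, pop a frame.
        val = (store[env[control.name]] if isinstance(control, Var)
               else Clo(control, env) if isinstance(control, Lam)
               else control)
        if kaddr == HALT:
            return val                       # final answer
        kont = store[kaddr]
        a = alloc(store)
        if isinstance(kont, Ar):             # operator done; eval argument
            return (kont.arg, kont.env,
                    {**store, a: Fn(val, kont.kaddr)}, a)
        clo = kont.val                       # Fn: bind parameter, run body
        return (clo.lam.body, {**clo.env, clo.lam.param: a},
                {**store, a: val}, kont.kaddr)

    def run(e):
        state = (e, {}, {}, HALT)
        while not isinstance(state, Clo):
            state = step(state)
        return state

    # ((lambda (x) x) (lambda (y) y)) yields the identity closure.
    print(run(App(Lam('x', Var('x')), Lam('y', Var('y')))))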


Alias Analysis for Optimization of Dynamic Languages

Michael Gorbovitski, Yanhong A. Liu, Scott D. Stoller, K. Tuncay Tekle, and Tom Rothamel (SUNY Stony Brook)

Dynamic languages such as Python allow programs to be written more
easily using high-level constructs such as comprehensions for queries
and using generic code. Efficient execution of programs then requires
powerful optimizations: incrementalization of expensive queries and
specialization of generic code. Effective incrementalization and
specialization of dynamic languages require precise and scalable alias
analysis.

This paper describes the development and experimental evaluation of a
may-alias analysis for a full dynamic object-oriented language, for
program optimization by incrementalization and specialization. The
analysis is flow-sensitive; we show that this is necessary for
effective optimization of dynamic languages. It uses precise type
analysis and a powerful form of context sensitivity, called trace
sensitivity, to further improve analysis precision. It uses a
compressed representation to significantly reduce the memory used by
flow-sensitive analyses. We evaluate the effectiveness of this
analysis and several variants of it for incrementalization and
specialization of Python programs, and we evaluate the precision,
memory usage, and running time of these analyses on programs of
diverse sizes.  The results show that our analysis has acceptable
precision and efficiency and that, among the variants evaluated, it
offers the best trade-off between the two.
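
As a toy illustration of why the flow-sensitivity claim matters,
consider the Python sketch below. It is nothing like the analysis in
the paper (no types, no trace sensitivity, no compressed
representation); it only shows that keeping one points-to map per
program point lets an analysis see that a reassignment kills an alias
pair, which a flow-insensitive analysis must conservatively keep.

    def may_alias_states(stmts):
        """Points-to map after each statement of a straight-line
        program with two forms: (x, 'new') allocates, (x, y) copies.
        Flow-sensitivity = one map per program point."""
        pts, states, site = {}, [], 0
        for lhs, rhs in stmts:
            if rhs == 'new':
                site += 1
                pts = {**pts, lhs: {'obj%d' % site}}   # strong update
            else:
                pts = {**pts, lhs: set(pts.get(rhs, set()))}
            states.append(dict(pts))
        return states

    def may_alias(states, i, x, y):
        """May x and y point to the same object after statement i?"""
        return bool(states[i].get(x, set()) & states[i].get(y, set()))

    prog = [('a', 'new'),   # a -> obj1
            ('b', 'a'),     # b -> obj1; a and b may alias here
            ('a', 'new')]   # a -> obj2; the alias pair is killed
    states = may_alias_states(prog)
    print(may_alias(states, 1, 'a', 'b'))   # True
    print(may_alias(states, 2, 'a', 'b'))   # False: needs flow-sensitivity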


On The Design of Error Messages Aimed at Novice Programmers

Guillaume Marceau and Kathi Fisler (WPI), and Shriram Krishnamurthi (Brown)

How well do DrScheme's error messages support learning, and when the
messages fail to teach, in what ways do they fail?

To answer this question, we recorded the edits students applied to
their programs in response to error messages over a span of eight
weeks. From these responses, we developed a hand-coded, quantitative
metric for the effectiveness of an error message. To supplement the
metric, we interviewed four students about their experience with the
error messages.

From these observations, we draw broad recommendations for the
creators of pedagogical IDEs, compilers, and languages on the design
of error messages aimed at novice programmers.


Debugging and Profiling Transactional Memory Programs

Yossi Lev (Brown)

Transactional memory (TM) has become increasingly popular in recent
years as a promising programming paradigm for writing correct and
scalable concurrent programs.  Despite its popularity, there has been
very little work on how to debug and profile transactional programs.
Our work addresses this situation by exploring the debugging and
profiling needs of transactional programs, explaining how the tools
should change to support these needs, and implementing preliminary
infrastructure to support this change.

This talk presents work done as part of my thesis. For more information, see http://www.cs.brown.edu/people/levyossi/Thesis/Thesis.html.


Probabilistic Programming Without Interpretive Overhead

Chung-chieh Shan (Rutgers) and Oleg Kiselyov (FNMOC)

Probabilistic inference is popular in machine learning for writing
programs that classify music and locate planes.  It is also useful in
cognitive science for modeling how people interpret words and
recognize faces.  An expressive way to implement it is to build
probabilistic models and inference procedures as separate reusable
modules.  We do so by coding models and inference in the same
general-purpose language.  This way, inference procedures can reason
about themselves without interpretive overhead, so we can model
bounded-rational theories of mind, as in natural-language pragmatics.

We use existing facilities of the language, such as rich libraries,
optimizing compilers, and types, to develop realistic models whose
inference performance is competitive with the state of the art.  In
particular, we use first-class control operators to combine sharing
and nondeterminism (ICFP 2009) while deterministic parts of models run
at full speed.  A wide range of models can thus be expressed using
memoization, and we introduce a new, general algorithm for importance
sampling with look-ahead.
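
To give a flavor of models-as-ordinary-code, here is a small Python
sketch. It is far from the authors' system: they embed models in a
general-purpose language using first-class control operators rather
than the rerun-and-reweight trick below, and their importance sampler
adds look-ahead. The names flip, observe, and infer are invented for
this sketch.

    import random

    def infer(model, n=10000):
        """Likelihood weighting, a basic importance sampler: run the
        model many times, weight each run by the probability of its
        observations, and normalize."""
        totals = {}
        for _ in range(n):
            weight = [1.0]
            def flip(p):            # a random choice inside the model
                return random.random() < p
            def observe(p):         # condition: scale this run's weight
                weight[0] *= p
            result = model(flip, observe)
            totals[result] = totals.get(result, 0.0) + weight[0]
        z = sum(totals.values())
        return {v: w / z for v, w in totals.items()}

    def grass_model(flip, observe):
        # The classic "wet grass" model, written as plain code.
        rain = flip(0.3)
        sprinkler = flip(0.5)
        wet = rain or sprinkler
        observe(1.0 if wet else 0.0)   # we saw that the grass is wet
        return rain

    print(infer(grass_model))   # P(rain | wet) is about 0.46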


Mint: Java Multi-stage Programming Using Weak Separability

Walid Taha (Rice University)


The Essence of JavaScript

Arjun Guha, Claudiu Saftoiu, and Shriram Krishnamurthi (Brown)

We reduce JavaScript to a core calculus structured as a small-step
operational semantics. We present several peculiarities of the
language and show that our calculus models them. We explicate the
desugaring process that turns JavaScript programs into ones in the
core. We demonstrate faithfulness to JavaScript using real-world test
suites. Finally, we illustrate utility by defining a security
property, implementing it as a type system on the core, and extending
it to the full language.

For details, see the ECOOP 2010 paper and its supplemental material.
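
To suggest what desugaring looks like, here is a toy fragment in
Python, in the spirit of the paper's approach but not its actual
rules: a surface method call o.m(args) becomes core code that looks
the method up as a field and passes the receiver explicitly, making
JavaScript's implicit `this` explicit in the core.

    from collections import namedtuple

    # Surface syntax
    MethodCall = namedtuple('MethodCall', 'obj name args')
    # Core syntax
    Let  = namedtuple('Let', 'var bound body')
    GetF = namedtuple('GetF', 'obj field')   # field lookup
    AppC = namedtuple('AppC', 'fun args')    # core application
    VarC = namedtuple('VarC', 'name')

    def desugar(e):
        if isinstance(e, MethodCall):
            # o.m(a1, ...)  ~~>  let t = o in (t["m"])(t, a1, ...)
            t = VarC('%this')                # fresh in a real desugarer
            return Let('%this', desugar(e.obj),
                       AppC(GetF(t, e.name),
                            [t] + [desugar(a) for a in e.args]))
        return e                             # other forms omitted

    print(desugar(MethodCall(VarC('o'), 'm', [VarC('x')])))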


Work in Progress: Declassification Policy Inference

Jeff Vaughan and Steve Chong (Harvard)

Security-type systems can provide strong information security
guarantees but often require enormous programmer effort to be used in
practice. In this work, we describe inference of fine-grained,
human-readable declassification policies as a step towards providing
security guarantees that are proportional to a programmer's effort:
the programmer should receive weak (but sound) security guarantees for
little effort, and stronger guarantees for more effort.

We will present an information-flow type system in which policies
may be inferred from existing program structure.  The inference
algorithm can find precise and intuitive descriptions of potentially
dangerous information flows in a program, and policies specify what
information is released under what conditions.  A semantic security
condition specifies what it means for a program to satisfy a policy.
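
A toy illustration of the inference idea (nowhere near the talk's
type system or its Java analysis): track, for each variable, a
symbolic expression over the secret inputs; whatever reaches a public
output is the inferred policy, a human-readable description of
exactly what is released. The statement forms and the password
example are invented for this sketch.

    def infer_policies(stmts, secrets):
        env = {s: s for s in secrets}        # secrets describe themselves
        policies = []
        for stmt in stmts:
            if stmt[0] == 'assign':          # ('assign', x, op, y, z)
                _, x, op, y, z = stmt
                env[x] = '(%s %s %s)' % (env.get(y, y), op, env.get(z, z))
            elif stmt[0] == 'output':        # ('output', x): public output
                _, x = stmt
                policies.append(env.get(x, x))
        return policies

    prog = [('assign', 'ok', '==', 'guess', 'password'),
            ('output', 'ok')]
    print(infer_policies(prog, {'password'}))
    # ['(guess == password)']: only the comparison's result is released.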

Our work demonstrates the soundness of an analysis for programs in a
simple imperative language with exceptions.  Furthermore, we have
extended the analysis to an object-sensitive interprocedural analysis
for single-threaded Java 1.4 programs and developed a prototype
implementation.

We will also discuss the scalability and usability challenges that
must be overcome before such analyses become practical for everyday
Java programmers.


Logical Types for Scheme

Sam Tobin-Hochstadt and Matthias Felleisen (Northeastern)

Scheme programmers incorporate both control-flow logic and
propositional logic in their reasoning about programs. Typed Scheme,
our type system for PLT Scheme, accommodates this form of reasoning by
assigning variable occurrences a subtype of their declared type based
on the predicates prior to the occurrence, a discipline we have dubbed
occurrence typing. It thus enables programmers to enrich Scheme code
with types on a modular basis, while requiring few changes to the code
itself.

Three years of practical experience with Typed Scheme has revealed
serious shortcomings in the original design of the type system.  That
design relied on a set of ad hoc rules to relate combinations of
predicates; it could not reason about subcomponents of data
structures, and it could not follow sophisticated reasoning about the
relationships among predicate tests.

In this paper, we reformulate occurrence typing to eliminate these
shortcomings.  For each expression, we derive propositional logic
formulas that hold when the expression evaluates to true (or
false). We then apply a simple proof system to derive all the
necessary facts about variable occurrences. Finally, we empirically
demonstrate the greater utility of our system.
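
The derivation step can be suggested in miniature. The Python sketch
below (the paper's propositions and proof system are far richer; for
instance, they cover subcomponents of data structures) computes, for
a test expression, the facts that hold when it evaluates to true and
when it evaluates to false; the `and` and `or` cases are deliberately
coarse but sound.

    def props(test):
        """Return (facts-if-true, facts-if-false); a fact is a
        (variable, predicate) or (variable, 'not-' + predicate) pair."""
        kind = test[0]
        if kind == 'pred':                   # ('pred', 'number?', 'x')
            _, p, x = test
            return {(x, p)}, {(x, 'not-' + p)}
        if kind == 'not':
            t, f = props(test[1])
            return f, t
        if kind == 'and':
            t1, _ = props(test[1])
            t2, _ = props(test[2])
            return t1 | t2, set()            # a false `and` tells us little
        if kind == 'or':
            _, f1 = props(test[1])
            _, f2 = props(test[2])
            return set(), f1 | f2            # a false `or` falsifies both
        return set(), set()

    # In (if (and (number? x) (string? y)) e1 e2), both facts hold in e1.
    print(props(('and', ('pred', 'number?', 'x'),
                        ('pred', 'string?', 'y'))))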


Last modified Wednesday, April 28th, 2010 7:59:59pm