These abstracts are for talks at this event.

NEPLS is a venue for ongoing research, so the abstract and supplemental material associated with each talk are necessarily provisional. The work presented here may be in a state of flux. In all cases, please consult the authors' Web pages for up-to-date information, and please don't treat these pages as a definitive source.

**Growing a Syntax**
*Eric Allen, Ryan Culpepper, Janus Dam Nielsen, Jon Rafkind, and Sukyoung Ryu (Sun Microsystems)*

In this paper we present a macro system for the Fortress programming language. Fortress is a new programming language designed for scientific and high-performance computing. Its features include implicit parallelism, transactions, and concrete syntax that emulates mathematical notation. Fortress is intended to grow over time to accommodate the changing needs of its users. Our goal is to design and implement a macro system that allows for such growth. The main challenges are (1) to support extensions to a core syntax rich enough to emulate mathematical notation, (2) to support combinations of extensions from separately compiled macros, and (3) to allow new syntax that is indistinguishable from core language constructs.

To emulate mathematical notation, Fortress syntax is specified as a parsing expression grammar (PEG), supporting unlimited lookahead. Macro definitions must be checked for well-formedness before they are expanded, and macro uses must be well encapsulated (hygienic, composable, and respecting referential transparency). Use sites must be parsed along with the rest of the program and expanded directly into abstract syntax trees. Syntax errors at use sites of a macro must refer to the unexpanded program at those use sites, never to definition sites. Moreover, to allow for many common and important uses of macros, mutually recursive definitions should be supported.

Our design meets these challenges. The result is a flexible system that allows us not only to support new language extensions, but also to move many constructs of the core language into libraries. New grammar productions are tightly integrated with the Fortress parser, and use sites expand into core abstract syntax trees. Our implementation is integrated into the open-source Fortress reference interpreter. To our knowledge, ours is the first implementation of a modular hygienic macro system based on parsing expression grammars.
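As an illustration of the parsing-expression-grammar style the abstract relies on (a hypothetical sketch, not Fortress's actual grammar framework), the following shows PEG ordered choice, backtracking-based lookahead, and how a new production can be registered at an open choice point, much as a macro extends the grammar:

```python
# A miniature PEG combinator sketch (illustrative only). A parser maps
# (text, pos) to (value, new_pos) on success or None on failure.

def lit(s):
    def p(text, pos):
        if text.startswith(s, pos):
            return s, pos + len(s)
        return None
    return p

def seq(*parsers):
    def p(text, pos):
        values = []
        for q in parsers:
            r = q(text, pos)
            if r is None:
                return None            # the whole sequence fails together
            v, pos = r
            values.append(v)
        return values, pos
    return p

def choice(*parsers):
    def p(text, pos):
        for q in parsers:              # ordered choice: first match wins;
            r = q(text, pos)           # trying whole alternatives is the
            if r is not None:          # "unlimited lookahead" PEGs provide
                return r
        return None
    return p

# An open choice point that macro-style extensions can grow.
expr_alternatives = [seq(lit("1"), lit("+"), lit("1"))]

def expr(text, pos=0):
    return choice(*expr_alternatives)(text, pos)

# "Growing" the syntax: register a new alternative, as a macro might.
expr_alternatives.append(lit("sum"))
```

After registration, `expr` accepts both the original arithmetic form and the new `sum` form, while unrecognized input still fails cleanly.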

**Statically-Checked Metaprogramming for Web Applications**
*Adam Chlipala (Harvard)*

Web application programming is perhaps PL researchers' favorite example of a development domain high in grunt work and low in tricky algorithmic problems. The Ur/Web programming language recognizes this and proposes an approach to reifying commonalities across web applications, as modules and values in an ML-influenced language that imports more avant-garde ideas from type theory. Ur/Web is a domain-specific language whose "well-typed programs can't go wrong" theorem covers the lifetime of a web application, rather than just single page generations. In particular, the type system rules out malformed HTML, dangling links, illegal generation of SQL queries, illegal marshaling and unmarshaling to and from SQL databases, and mismatches between the fields defined in an HTML form and the fields read by its handler code.

All that is just the foundation on which the platform for avoiding grunt work is built. Ur/Web's type system is inspired by dependent type theory, though dependency is avoided via stratification. The key benefit that remains is type-level computation. Aspects of XML and SQL are described with the uniform mechanism of row types, and Ur/Web's type language includes recursive function definitions over rows. This makes it possible to write richly-typed functors and functions that have different run-time behavior based on row type arguments. For instance, we have implemented a functor that generates a standard "admin interface" for an arbitrary SQL table, as a function of a row type enumerating the table's fields. An additional value-level record, whose type is built by recursion on the incoming row type, provides custom code on a per-field basis. The signature of this functor gives complete information on what clients must provide and what they receive in return, and the Ur/Web type system guarantees that the functor can never build web applications with any of the problems that we enumerated at the end of the last paragraph.
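Ur/Web performs this kind of computation over rows at the *type* level; a rough value-level analogue in Python (hypothetical names throughout, not Ur/Web's API) conveys the flavor: one row description drives both the table schema and the matching form, so the two cannot drift apart:

```python
# Value-level analogue (illustrative only) of Ur/Web's type-level rows.
# A "row" is a field-name -> SQL-type mapping; code recurses over it to
# generate both the SQL schema and a matching HTML admin form.
# Assumes Python 3.7+ so dict insertion order is preserved.

def create_table_sql(table, row):
    cols = ", ".join(f"{name} {ty}" for name, ty in row.items())
    return f"CREATE TABLE {table} ({cols})"

def admin_form(table, row):
    inputs = "".join(f'<input name="{name}">' for name in row)
    return f'<form action="/{table}/insert">{inputs}</form>'

person = {"name": "TEXT", "age": "INTEGER"}
```

The crucial difference is that Ur/Web checks this agreement statically, at compile time, rather than by construction at run time as here.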

Project Web Site (with demo).

**Choreography & Session Types**
*Marco Carbone (Queen Mary College, Univ. of London)*

Choreography has recently emerged as a pragmatic and concise way of describing communication-based systems such as financial and security protocols and web services. This discipline focuses on global message flows and offers a vantage viewpoint of the system being designed. In this talk I will introduce a model for choreography and show how global message flows can be mapped into executable code in a session-based setting. In particular, I will discuss how three principles of well-structured description and type structures play a fundamental role in the theory. I will also briefly introduce different extensions of choreography such as interactional exceptions and multiparty session types.
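The mapping from a global description to per-participant code is often called endpoint projection; a minimal sketch (illustrative, far simpler than the session-typed theory in the talk) treats each global interaction `A -> B : msg` as a send at A and a receive at B:

```python
# Minimal endpoint projection (illustrative only). A choreography is a list
# of global interactions (sender, receiver, label); a participant's local
# behaviour is the subsequence of actions it takes part in.

def project(choreography, participant):
    local = []
    for sender, receiver, label in choreography:
        if participant == sender:
            local.append(("send", receiver, label))
        elif participant == receiver:
            local.append(("recv", sender, label))
    return local

# A toy purchase protocol, described globally.
purchase = [
    ("Buyer",  "Seller", "quote_request"),
    ("Seller", "Buyer",  "quote"),
    ("Buyer",  "Seller", "accept"),
]
```

Projecting `purchase` for each participant yields the local send/receive sequences that executable session-based code would follow; the well-structuredness principles mentioned above are what guarantee such projections make sense.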

**Causal Commutative Arrows**
*Hai (Paul) Liu and Paul Hudak (Yale)*

Arrows are a popular form of abstract computation. Being more general than monads, they are more broadly applicable, and in particular are a good abstraction for signal processing and dataflow computations. Most notably, arrows form the basis for a domain-specific language called *Yampa*, which has been used in a variety of concrete applications, including animation, robotics, sound synthesis, control systems, and graphical user interfaces. Our primary interest is in better understanding the class of abstract computations captured by Yampa. Unfortunately, arrows are not concrete enough to do this with precision. To remedy this situation we introduce the concept of *commutative arrows* that capture a kind of non-interference property of concurrent computations. We also add an *init* operator, and identify a crucial law that captures the causal nature of arrow effects. We call the resulting computational model *causal commutative arrows*. To study this class of computations in more detail, we define an extension to the simply typed lambda calculus called *CCA*, and give both its denotational and operational semantics. Our key contribution is the identification of a normal form for CCA called *causal commutative normal form* (CCNF). Furthermore, by defining a terminating normalization procedure we have developed an optimization strategy that yields substantial improvements in performance over conventional implementations of arrows. We have implemented this technique in Haskell, and conducted benchmarks that validate the effectiveness of our approach.
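The *init* operator mentioned above is essentially a unit delay on streams. A small Python sketch (illustrative only; the authors' setting is Haskell arrows) models causal stream transformers, where each output depends only on past and present inputs:

```python
# Causal stream transformers (illustrative sketch, not the authors' Haskell).
# Each transformer consumes one input sample per step and may keep state,
# so outputs depend only on past and present inputs: causality.

class Arr:
    """Lift a pure function to a stateless stream transformer (like arr f)."""
    def __init__(self, f):
        self.f = f
    def step(self, x):
        return self.f(x)

class Init:
    """Unit delay: emit the initial value, then each previous input (init i)."""
    def __init__(self, i):
        self.state = i
    def step(self, x):
        out, self.state = self.state, x
        return out

class Compose:
    """Serial composition, like f >>> g."""
    def __init__(self, f, g):
        self.f, self.g = f, g
    def step(self, x):
        return self.g.step(self.f.step(x))

def run(sf, xs):
    """Drive a stream transformer over a finite input stream."""
    return [sf.step(x) for x in xs]
```

For example, `Compose(Arr(lambda x: x + 1), Init(0))` increments each sample and delays the result by one step. The CCNF result says, roughly, that every such network of pure functions and delays normalizes to a single pure function plus a single piece of looped state.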

**Alchemy: Transmuting Base Alloy Specifications into Implementations**
*Shriram Krishnamurthi (Brown), Dan Dougherty, Kathi Fisler, and Daniel Yoo (WPI)*

Alloy specifications are used to define lightweight models of systems. We present Alchemy, which compiles Alloy specifications into implementations that execute against persistent databases. Alchemy translates a subset of Alloy predicates into imperative update operations, and it converts facts into database integrity constraints that it maintains automatically in the face of these imperative actions. In addition to presenting the semantics and an algorithm for this compilation, we present the tool and outline its application to a non-trivial specification. We also discuss lessons learned about the relationship between Alloy specifications and imperative implementations.
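The compilation idea can be illustrated with a toy model (hypothetical, much simpler than Alchemy's actual translation): a relation stored as a set of tuples, an Alloy-style fact rendered as an integrity check, and a predicate-derived update that is rejected if it would violate the fact:

```python
# Toy model (illustrative only) of compiling a spec into guarded updates.
# The "database" is a set of (worker, project) tuples. The Alloy-style fact
# "every worker is on at most one project" becomes an integrity constraint
# that every imperative update must preserve.

def fact_holds(assigned):
    workers = [w for w, _ in assigned]
    return len(workers) == len(set(workers))

def assign(assigned, worker, project):
    """Predicate-derived update: apply the change, then enforce the fact."""
    updated = assigned | {(worker, project)}
    if not fact_holds(updated):
        raise ValueError("update would violate integrity constraint")
    return updated
```

Assigning `alice` to a second project raises an error, mirroring how Alchemy-maintained database constraints reject imperative actions that would falsify a fact.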

More information is in our paper.

**Verifying Garbage Collectors with Boogie and Z3**
*Chris Hawblitzel (Microsoft) and Erez Petrank (Technion)*

Most safe programming languages rely on garbage collection to remove unreachable objects from memory. A bug in the garbage collector could undermine the language's safety, so it's useful to verify that the garbage collector works correctly. We describe how we used the Boogie verification condition generator and the Z3 automated theorem prover to prove the partial correctness of two practical garbage collectors (one mark-sweep collector and one copying collector). In particular, we discuss how Z3's support for arithmetic, arrays, and bit vectors allowed us to reason about the low-level details of memory layout. This low level of detail allowed us to verify not just the GC algorithm, but also the GC interface to real x86 code compiled from off-the-shelf C# programs.
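The shape of the property being verified can be sketched with a toy mark-sweep collector over an object graph (illustrative only; the verified collectors work over real x86 memory, not Python dicts), together with the key safety assertion that nothing reachable is ever freed:

```python
# Toy mark-sweep collector (illustrative sketch). heap maps object ids to
# the lists of ids they reference; roots are the ids the mutator can reach
# directly.

def mark(heap, roots):
    marked, stack = set(), list(roots)
    while stack:
        obj = stack.pop()
        if obj not in marked:
            marked.add(obj)
            stack.extend(heap[obj])   # follow outgoing references
    return marked

def sweep(heap, marked):
    """Free every object that was not marked."""
    return {obj: refs for obj, refs in heap.items() if obj in marked}

def collect(heap, roots):
    swept = sweep(heap, mark(heap, roots))
    # The key safety property proved for the real collectors: every object
    # reachable from the roots survives collection.
    assert mark(swept, roots) == mark(heap, roots)
    return swept
```

The verified proofs establish this kind of invariant mechanically, and down to the bit-vector level of actual memory layout rather than over an abstract graph.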

**Automatic Differentiation of Functional Programs, or Lambda the Ultimate Calculus**
*Jeffrey Mark Siskind (Purdue)*

It is extremely useful to be able to take derivatives of functions expressed as computer programs to support function optimization and approximation, parameter estimation, machine learning, and ultimately computational science and engineering design. The established discipline of Automatic Differentiation (AD) has largely focused on imperative languages, where it is most efficiently implemented as a source-to-source transformation performed by a preprocessor. This talk will present a novel formulation of AD for functional programs expressed in the lambda calculus. A key novel aspect of our formulation is that AD is performed by higher-order functions that reflectively transform closure bodies. Our methods exhibit an important closure property that prior AD formulations lack: the ability to transform the entire space of input programs, including those produced by the AD transformations themselves. This is crucial for solving the kinds of nested optimizations that arise in domains such as game theory and automatic control. Furthermore, since the input and output of our transformations are the lambda calculus, efficient implementation is facilitated by novel extensions of standard compilation techniques. We exhibit a novel "almost" union-free polyvariant flow-analysis algorithm, formulated as abstract interpretation, that partially evaluates calls to the AD operators, migrating reflective source-to-source transformation to compile time. This yields code with run-time performance that exceeds the best existing AD implementations for imperative languages by a factor of two and outperforms all AD implementations for functional languages by two to three orders of magnitude.
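For readers unfamiliar with AD, forward mode can be sketched with dual numbers (a standard construction; the talk's contribution is a reflective source transformation, which this does not show): each value carries its derivative, and arithmetic propagates both:

```python
# Forward-mode AD via dual numbers (standard textbook construction,
# illustrative only). A Dual carries a value and a tangent (derivative),
# and overloaded arithmetic propagates both by the chain rule.

class Dual:
    def __init__(self, val, tan):
        self.val, self.tan = val, tan
    def __add__(self, other):
        return Dual(self.val + other.val, self.tan + other.tan)
    def __mul__(self, other):
        # product rule: (fg)' = f'g + fg'
        return Dual(self.val * other.val,
                    self.val * other.tan + self.tan * other.val)

def derivative(f, x):
    """Evaluate f on x with unit tangent and read off the derivative."""
    return f(Dual(x, 1.0)).tan
```

For example, differentiating `x*x + x` at 3 yields 7. The nested-optimization problems mentioned above require applying such operators to code that is itself the output of AD, which is exactly the closure property the talk's formulation provides.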

Joint work with Barak A. Pearlmutter.

**Complexity of Flow Analysis**
*David Van Horn (Northeastern) and Harry Mairson (Brandeis)*

Flow analysis aims to predict properties of programs before they are run, but how hard is this to do? We answer the question in the case of the kCFA hierarchy, a ubiquitous family of flow analyses for higher-order languages, by deriving tight lower bounds on the computational complexity of the hierarchy. For 0CFA, and many of its known approximations, a natural decision problem answered by analysis is complete for polynomial time. This theorem relies on the insight that linearity of programs subverts approximation in analysis, rendering analysis equivalent to evaluation. For any k > 0, we prove the decision problem is complete for exponential time. This theorem validates empirical observations that such control flow analysis is intractable. It also provides more general insight into the complexity of abstract interpretation.
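To make the object of study concrete, here is a tiny 0CFA over labelled lambda terms (an illustrative sketch, not the authors' formulation): the analysis computes, for every program point and variable, the set of lambdas that may flow there, and its monovariance (one abstract value per variable) is what keeps it polynomial:

```python
# Tiny 0CFA sketch (illustrative only). Terms are tuples:
#   ('var', label, name), ('lam', label, name, body), ('app', label, fun, arg)
# flows maps each label or variable name to the set of lambdas reaching it.

def analyze(term):
    flows = {}

    def get(key):
        return flows.setdefault(key, set())

    def visit(t):
        kind = t[0]
        if kind == 'var':
            _, label, name = t
            before = len(get(label))
            get(label).update(get(name))       # a var flows its binding's values
            return len(get(label)) != before
        if kind == 'lam':
            _, label, name, body = t
            changed = t not in get(label)
            get(label).add(t)                  # a lambda flows to itself
            body_changed = visit(body)
            return changed or body_changed
        _, label, fun, arg = t                 # application
        changed = visit(fun)
        changed = visit(arg) or changed
        for lam in list(get(fun[1])):          # each lambda that may be called
            _, _, param, body = lam
            before = len(get(param))
            get(param).update(get(arg[1]))     # argument flows to parameter
            changed = changed or len(get(param)) != before
            before = len(get(label))
            get(label).update(get(body[1]))    # body result flows to call site
            changed = changed or len(get(label)) != before
        return changed

    while visit(term):                         # iterate to a fixpoint
        pass
    return flows
```

For `(lam x. x)(lam y. y)`, the analysis reports that `lam y. y` flows to `x` and to the application's result. The linearity insight above says that on linear programs this abstract fixpoint computation coincides with actual evaluation, which is the engine behind the lower bounds.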

Last modified Friday, November 14th, 2008, 4:35:58pm