These abstracts are for talks at this event.
NEPLS is a venue for ongoing research, so the abstract and supplemental material associated with each talk are necessarily snapshots in time. The work presented here may be in a state of flux. In all cases, please consult the authors' Web pages for up-to-date information, and please don't treat these pages as a definitive source.
Explicit Regions in Cyclone
Cyclone is a type-safe dialect of C that incorporates a number of advanced typing features while trying to remain faithful to C data representations, calling conventions, and semantics. One of the more novel aspects of Cyclone is its support for memory management. Like other safe languages, programmers can rely upon a (conservative) garbage collector to reclaim objects. However, when a programmer wants to stack-allocate objects, or wants to have explicit control over the lifetimes of heap-allocated objects, or cannot afford the overheads of the collector (e.g., due to real-time constraints), she can exploit Cyclone's support for type-safe, region-based memory management. Like the ML-Kit compiler, Cyclone's typing discipline for regions is based on the Tofte and Talpin calculus, but unlike the ML-Kit, programmers must put in some explicit region annotations. To minimize these annotations, we utilize a combination of default region annotations, local inference, and a novel treatment of region effects.
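The idea behind region-based memory management — that an object's lifetime is tied to a lexically scoped region, and that all of a region's objects are deallocated together when the region's block exits — can be illustrated with a small Python analogy. This is a sketch, not Cyclone syntax: the `Region`, `Handle`, and `alloc` names are hypothetical, and where Cyclone's type-and-effect system rejects dangling accesses at compile time, the sketch can only detect them at run time.

```python
class DanglingAccess(Exception):
    """Raised when a handle is used after its region has been deallocated."""

class Handle:
    """A pointer into a region; valid only while the region is live."""
    def __init__(self, region, value):
        self._region = region
        self._value = value

    def get(self):
        if not self._region.live:
            raise DanglingAccess("region already deallocated")
        return self._value

class Region:
    """A lexically scoped allocation arena, in the spirit of a Cyclone
    region block: everything allocated here dies when the block exits."""
    def __init__(self):
        self.live = False
        self._objects = []

    def __enter__(self):
        self.live = True
        return self

    def __exit__(self, *exc):
        # Deallocate every object in the region at once.
        self._objects.clear()
        self.live = False

    def alloc(self, value):
        handle = Handle(self, value)
        self._objects.append(handle)
        return handle

with Region() as r:
    p = r.alloc([1, 2, 3])
    inside = p.get()          # fine: the region is still live

# Outside the block the region is gone; Cyclone's region typing would
# reject this access statically, the sketch only catches it dynamically.
try:
    p.get()
    escaped = True
except DanglingAccess:
    escaped = False
```

The run-time check stands in for what Cyclone's region annotations and effect inference guarantee statically.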
VAULT: Enforcing High-Level Protocols in Low-Level Software
This talk introduces VAULT, a new programming language being designed at Microsoft Research. There are two reasons for creating a new language. First, to combine programmer control over data layout and lifetime, as in C or C++, with the memory safety of ML or Java. Second, to capture usage rules in interface descriptions in order to check both clients and the implementation itself. Usage rules make it possible to state, for example, the valid order of function calls and the transfer of data ownership. Both memory safety and usage rules are expressed as part of VAULT's type system. Our initial experience shows that the VAULT type system can enforce many important rules between device drivers and the W2K kernel. Examples of such rules include: mutual exclusion of memory accesses between kernel threads, allocation/deallocation rules for dynamically allocated data, and interrupt level restrictions.
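A usage rule of the kind described — a valid order of function calls on a resource — can be sketched in Python as a state machine attached to the resource. The `TrackedFile` and `ProtocolError` names are hypothetical; the essential difference is that VAULT encodes such protocols in the type system and rejects violations at compile time, whereas this sketch only detects them when the program runs.

```python
class ProtocolError(Exception):
    """Raised when a call violates the resource's usage rule."""

class TrackedFile:
    """A resource with a usage rule: open() before read(), and
    close() only while open. The current protocol state is tracked
    explicitly and every operation checks it."""
    def __init__(self):
        self.state = "closed"

    def open(self):
        if self.state != "closed":
            raise ProtocolError("open() only valid on a closed file")
        self.state = "open"

    def read(self):
        if self.state != "open":
            raise ProtocolError("read() requires an open file")
        return b"data"

    def close(self):
        if self.state != "open":
            raise ProtocolError("close() requires an open file")
        self.state = "closed"

f = TrackedFile()
f.open()
payload = f.read()
f.close()

try:
    f.read()              # violates the rule: read after close
    violation_caught = False
except ProtocolError:
    violation_caught = True
```

In VAULT the "state" would live in the type of the file value rather than in a run-time field, so the offending call simply would not typecheck.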
Semantics for Dynamic Aspect-Oriented Programming
A characteristic of aspect-oriented programming, as embodied in AspectJ, is the use of advice to incrementally modify the behavior of a program. An advice declaration specifies an action to be taken whenever some condition arises during the execution of the program. The condition is specified by a formula called a pointcut designator or pcd. The events during execution at which advice may be triggered are called join points. In this model of aspect-oriented programming, join points are dynamic in that they refer to events during the execution of the program. We give a denotational semantics for a minilanguage that embodies the key features of dynamic join points, pointcut designators, and advice.
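The three ingredients named above — join points as events during execution, pointcut designators as conditions over those events, and advice as actions triggered when a pcd matches — can be made concrete with a tiny Python interpreter. All names here (`around`, `call`, `advice_table`) are hypothetical and much simpler than AspectJ; the sketch treats only function calls as join points.

```python
advice_table = []    # (pcd, advice) pairs; a pcd is a predicate on join points

def around(pcd):
    """Register advice to run at every join point matched by `pcd`."""
    def register(advice):
        advice_table.append((pcd, advice))
        return advice
    return register

def call(fn, *args):
    """A function call is a dynamic join point: each piece of advice
    whose pcd matches runs before the underlying call proceeds."""
    joinpoint = {"kind": "call", "name": fn.__name__, "args": args}
    log = []
    for pcd, advice in advice_table:
        if pcd(joinpoint):
            advice(joinpoint, log)
    return fn(*args), log

# pcd: match any call whose name starts with "get"
@around(lambda jp: jp["kind"] == "call" and jp["name"].startswith("get"))
def trace(jp, log):
    log.append(f"before {jp['name']}{jp['args']}")

def get_balance(account):
    return 100

result, log = call(get_balance, "alice")
```

The dynamism is visible in `call`: which advice fires is decided per event at run time, by evaluating each pcd against the current join point.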
A Type System for Certified Binaries
A certified binary is a value together with a proof that the value satisfies a given specification. Existing compilers that generate certified code have focused on simple memory and control-flow safety rather than more advanced properties. In this talk, we present a general framework for explicitly representing complex propositions and proofs in typed intermediate and assembly languages. The new framework allows us to reason about certified programs that involve effects while still maintaining decidable typechecking. We show how to integrate an entire proof system (the calculus of inductive constructions) into a compiler intermediate language and how the intermediate language can undergo complex transformations (CPS and closure conversion) while preserving proofs represented in the type system. Our work provides a foundation for the process of automatically generating certified binaries in a type-theoretic framework. This is joint work with Bratin Saha, Valery Trifonov, and Nikolaos Papaspyrou.
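The opening definition — a value together with a proof that the value satisfies a given specification, checked by a decidable procedure — can be sketched at a toy scale in Python. This is only an analogy: in the talk's framework the evidence is a proof term in the type system (the calculus of inductive constructions) and checking happens during typechecking, whereas here `Certified` and `check_sorted` are hypothetical names and the evidence is plain data validated at run time.

```python
from dataclasses import dataclass

@dataclass
class Certified:
    """A value packaged with evidence that it meets a specification
    (here: the list is sorted). Anyone can run the decidable checker
    below instead of trusting the producer of the value."""
    value: list
    evidence: list   # claimed adjacent-pair comparisons witnessing sortedness

def check_sorted(cert):
    """Decidable check: the evidence must cover exactly the adjacent
    pairs of the value, and every comparison must actually hold."""
    xs = cert.value
    expected = [(xs[i], xs[i + 1]) for i in range(len(xs) - 1)]
    return cert.evidence == expected and all(a <= b for a, b in expected)

good = Certified([1, 3, 5], [(1, 3), (3, 5)])
bad = Certified([3, 1], [(3, 1)])       # evidence present, but the claim is false
```

The point carried over from the abstract is the shape of the package (value plus checkable proof) and the decidability of the check, not the expressive power of the proof system.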
Cycle Therapy: A Prescription for Fold and Unfold on Regular Trees
Cyclic data structures can be tricky to create and manipulate in declarative programming languages. These structures can be viewed as representing regular trees -- those trees which may be infinite but have only a finite number of distinct subtrees. In this talk, I will present abstractions that simplify the creation and manipulation of regular trees. I will show how the tree-generating UNFOLD (anamorphism) operator can be implemented in both eager and lazy languages so as to create cyclic structures (as opposed to merely infinite lazy structures) when the result is a regular tree. The tree-accumulating FOLD (catamorphism) operator yields an undefined result on any infinite tree when used with a strict combining function. As an alternative, I will define and show how to implement a CYCFOLD operator that has a more useful semantics than FOLD when used with a strict combining function on cyclic structures representing regular trees. I will also introduce CYCAMORES, an abstract data type that simplifies the manipulation of regular trees, and discuss ML and Haskell implementations of this data type.
This talk presents work done in collaboration with J.B. Wells (Heriot-Watt University). Our paper describing this work can be found at http://cs.wellesley.edu/~fturbak/pubs/ppdp01.html.
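The two operators described above can be sketched in Python, where cyclic structures are easy to build. The `unfold` below ties the knot by memoizing seeds, so a regular tree comes back as a finite cyclic graph rather than an infinite lazy structure; `cycfold` stands in for CYCFOLD, supplying a caller-chosen value at back edges where an ordinary FOLD with a strict combiner would diverge. Names and the back-edge treatment are this sketch's simplifications, not the paper's exact semantics.

```python
class Node:
    """A graph node; a regular tree is a finite graph of these."""
    def __init__(self, label):
        self.label = label
        self.children = []

def unfold(seed, step, memo=None):
    """Anamorphism that ties the knot: equal seeds yield the *same*
    node, so unfolding a regular tree terminates with a cyclic graph."""
    if memo is None:
        memo = {}
    if seed in memo:
        return memo[seed]
    label, child_seeds = step(seed)
    node = Node(label)
    memo[seed] = node            # record before recursing: permits cycles
    node.children = [unfold(s, step, memo) for s in child_seeds]
    return node

def cycfold(node, combine, on_cycle, visiting=None):
    """Catamorphism for cyclic structures: at a back edge (a node we
    are already folding) return `on_cycle` instead of looping forever."""
    if visiting is None:
        visiting = set()
    if id(node) in visiting:
        return on_cycle
    visiting.add(id(node))
    result = combine(node.label,
                     [cycfold(c, combine, on_cycle, visiting) for c in node.children])
    visiting.discard(id(node))
    return result

# The regular tree "t = 1 -> t": a single node with a self-loop.
t = unfold("s", lambda s: (1, ["s"]))
total = cycfold(t, lambda lbl, kids: lbl + sum(kids), 0)
```

An ordinary strict fold on `t` would never terminate; the cycle-aware version returns a defined answer by construction.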
Automatic Generation and Checking of Program Specifications
Producing specifications by dynamic (runtime) analysis of program executions is potentially unsound, because the analyzed executions may not fully characterize all possible executions of the program. In practice, how accurate are the results of a dynamic analysis? We describe the results of an investigation into this question, comparing specifications generalized from program runs with specifications verified by a static checker. The surprising result is that for a collection of modest programs, small test suites captured all or nearly all program behavior necessary for a specific type of static checking, permitting the inference and verification of useful specifications. For ten programs of 100--800 lines, the average precision, a measure of correctness, was .95 and the average recall, a measure of completeness, was .94. This is a positive result for testing, because it suggests that dynamic analyses can capture all semantic information of interest for certain applications. The experimental results demonstrate that a specific technique, dynamic invariant detection, is effective at generating consistent, sufficient specifications. Finally, the research shows that combining static and dynamic analyses over program specifications has benefits for users of each technique.
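The precision and recall figures quoted above compare the set of invariants inferred from program runs against the set a static checker verifies. A minimal sketch of that comparison, with hypothetical invariant sets standing in for real tool output:

```python
def precision_recall(reported, correct):
    """Precision: fraction of reported invariants that are verifiable.
    Recall: fraction of the invariants needed for verification that
    the dynamic analysis actually reported."""
    reported, correct = set(reported), set(correct)
    tp = len(reported & correct)
    precision = tp / len(reported) if reported else 1.0
    recall = tp / len(correct) if correct else 1.0
    return precision, recall

# Hypothetical example: invariants inferred from test runs vs. those
# the static checker verifies for the same program.
inferred = {"x >= 0", "y == 2*x", "i < len(a)", "p != null"}
verified = {"x >= 0", "y == 2*x", "i < len(a)", "a != null"}
p, r = precision_recall(inferred, verified)
```

With three invariants in common out of four on each side, both measures come out at 0.75; the paper's .95/.94 averages say that, for the programs studied, small test suites got much closer to full agreement than this toy example.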
Radioactive Decay Models for Generational Garbage Collection
Alternative algorithms for garbage collection are often compared by running or simulating them on a set of benchmarks. The usual result is that the best-performing algorithm depends on the benchmark. To make sense of these empirical results, we need a quantitative theory of garbage collection. To obtain this theory, we must develop models for object lifetimes that are both realistic and tractable. The radioactive decay model of object lifetimes is not very realistic, but it showed the way to a new family of algorithms for older-first generational garbage collection. We obtain a more realistic class of models by taking linear combinations of radioactive decay models. The parameters of these models serve as convenient summary statistics for the lifetimes of objects in real programs, and can predict which algorithms are likely to perform best for a given program.
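The lifetime models described above are easy to write down. Under pure radioactive decay, the fraction of objects still live at age t is a single exponential; a linear combination of decay models is a weighted sum of exponentials, whose weights and rates serve as the summary statistics mentioned in the abstract. A small sketch (the function name and the particular parameter values are illustrative, not from the talk):

```python
import math

def survival(t, components):
    """Fraction of allocated objects still live at age t under a
    linear combination of radioactive-decay (exponential) models.
    `components` is a list of (weight, decay_rate) pairs whose
    weights sum to 1."""
    return sum(w * math.exp(-rate * t) for w, rate in components)

# Pure radioactive decay: one exponential, memoryless by construction.
pure = [(1.0, 0.5)]

# A more realistic mixture: most objects die young (high decay rate),
# while a small fraction is long-lived (low decay rate).
mixture = [(0.9, 5.0), (0.1, 0.01)]
```

The mixture captures the qualitative behavior of real programs that the single exponential cannot: a steep early die-off plus a persistent tail of survivors, which is exactly the regime where generational collectors' design choices matter.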
Last modified Tuesday, September 18th, 2001, 5:12:40pm.