These abstracts are for talks at this event.
NEPLS is a venue for ongoing research, so the abstract and supplemental material associated with each talk are necessarily snapshots in time. The work presented here may be in a state of flux. In all cases, please consult the authors' Web pages for up-to-date information, and please don't treat these pages as a definitive source.
Objects as Software Services: Toward a World without Releases
Software services seek to combine the advantages of traditional client applications and web services. Software services go beyond web services in supporting rich interaction even when networks are slow, unreliable, or nonexistent. At the same time, software services retain the advantages associated with web services: they are always available and always up to date. Software services have the potential to radically transform software development, as they enable a world without versions and releases. We discuss the advantages of dynamic object-oriented language principles in facilitating the design of an object-based platform for software services.
Combining Static Analysis and Run-Time Analysis for Verification and Testing of Multithreaded Programs
Concurrent programs are notorious for containing errors that are difficult to reproduce and diagnose. Some common concurrency errors are data races, atomicity violations (informally, atomicity means that executing methods concurrently is equivalent to executing them serially), and deadlocks. This talk focuses on using the synergy between static and run-time analyses to efficiently detect these errors. We developed expressive type systems for proving the absence of data races, atomicity violations, and deadlocks. To save the programmer from writing type annotations, we developed type inference algorithms for these type systems. Our inference algorithm for race types, which we call type discovery, is a novel combination of run-time analysis and static analysis. We also explored the use of static analysis to significantly decrease the overhead of run-time checking for these errors. Warnings from the typechecker are used to identify parts of the program from which run-time checking can safely be omitted. When the types are obtained by type discovery or type inference, the approach is completely automatic, scales to very large programs, and significantly reduces the overhead of run-time checking for data races, atomicity violations, and deadlocks.
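As a minimal illustration (not the authors' tool), the two most common bug classes the talk targets can be seen on an ordinary shared counter: an unguarded read-modify-write is both a data race and an atomicity violation, while the lock-guarded update is what a race-free typing would certify, so run-time checking of it could safely be omitted.

```java
// Sketch of a data race / atomicity violation vs. a properly guarded update.
public class CounterDemo {
    static int racy = 0;                  // unprotected: data race
    static int safe = 0;                  // guarded by lock: race-free
    static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100000; i++) {
                racy++;                   // read-modify-write, not atomic
                synchronized (lock) {     // atomic: updates cannot interleave
                    safe++;
                }
            }
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("racy = " + racy + " (may be less than 200000)");
        System.out.println("safe = " + safe);   // always 200000
    }
}
```

Here a race-aware type system would reject the `racy++` line (no lock protects the field) and accept the `safe++` line, which is exactly the distinction the typechecker's warnings exploit to focus run-time checking.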
This is joint work with Prof. Scott Stoller, Liqiang Wang, and Amit Sasturkar. It includes work published earlier at International Conference on Verification, Model Checking and Abstract Interpretation (VMCAI) 2004, ACM SIGPLAN 2005 Symposium on Principles and Practice of Parallel Programming (PPoPP), IEEE/ACM International Conference on Automated Software Engineering (ASE) 2005, and Parallel and Distributed Systems: Testing and Debugging (PADTAD) Track of the 2005 IBM Verification Conference.
Statically Verified Typeful Compilation
The use of typed intermediate representations can significantly increase the reliability of a compiler. By type-checking the code produced at each transformation stage, one can identify bugs in the compiler that would otherwise be much harder to find. We propose to take the use of types in compilation a step further by verifying that the transformation itself is type-correct, in the sense that it cannot produce an ill-typed term when given a well-typed term as input. We base our approach on higher-order abstract syntax (HOAS), a representation of programs where variables in the object language are represented by meta-variables. We use a representation that accounts for the object language's type system using generalized algebraic data types (GADTs). In this way, the full binding and type structure of the object language is exposed to the host language's type system. In this setting we encode a type preservation property of a CPS conversion in Haskell's type system, using witnesses of a type-correctness proof encoded as values in a GADT.
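The talk's encoding uses Haskell GADTs; as a rough cross-language sketch of the underlying idea, the object language's types can appear as a phantom parameter on the term representation, so that the host type system rejects ill-typed object terms at compile time (this sketch omits HOAS binding; the class names are illustrative, not the authors'):

```java
// Object-language types tracked as the phantom parameter T on Exp<T>.
interface Exp<T> { T eval(); }

final class Lit implements Exp<Integer> {
    final int n; Lit(int n) { this.n = n; }
    public Integer eval() { return n; }
}
final class Add implements Exp<Integer> {
    final Exp<Integer> l, r;
    Add(Exp<Integer> l, Exp<Integer> r) { this.l = l; this.r = r; }
    public Integer eval() { return l.eval() + r.eval(); }
}
final class IsZero implements Exp<Boolean> {
    final Exp<Integer> e; IsZero(Exp<Integer> e) { this.e = e; }
    public Boolean eval() { return e.eval() == 0; }
}
final class If<T> implements Exp<T> {
    final Exp<Boolean> c; final Exp<T> t, f;
    If(Exp<Boolean> c, Exp<T> t, Exp<T> f) { this.c = c; this.t = t; this.f = f; }
    public T eval() { return c.eval() ? t.eval() : f.eval(); }
}

public class Typeful {
    public static void main(String[] args) {
        Exp<Integer> e = new If<>(new IsZero(new Lit(0)),
                                  new Add(new Lit(1), new Lit(2)),
                                  new Lit(0));
        System.out.println(e.eval());   // prints 3
        // An ill-typed object term such as If applied to Lit(1) as the
        // condition is rejected by javac: Lit is not an Exp<Boolean>.
    }
}
```

A transformation written against such a representation can only produce well-typed output terms, which is the type-correctness property the talk verifies for a CPS conversion.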
Modular Verification of Assembly Code with Stack-Based Control Abstractions
Runtime stacks are critical components of any modern software---they are used to implement powerful control structures such as function call/return, stack cutting and unwinding, coroutines, and thread context switch. Stack operations, however, are very hard to reason about: there are no known formal specifications for certifying C-style setjmp/longjmp, stack cutting and unwinding, or weak continuations (in C--). In many proof-carrying code (PCC) systems, return code pointers and exception handlers are treated as general first-class functions (as in continuation-passing style) even though both should have more limited scopes. In this paper we show that stack-based control abstractions follow a much simpler pattern than general first-class code pointers. We present a simple but flexible Hoare-style framework for modular verification of assembly code with all kinds of stack-based control abstractions, including function call/return, tail call, setjmp/longjmp, weak continuation, stack cutting, stack unwinding, multi-return function call, coroutines, and thread context switch. Instead of presenting a specific logic for each control structure, we develop all reasoning systems as instances of a generic framework. This allows program modules and their proofs developed in different PCC systems to be linked together. Our system is fully mechanized. We give the complete soundness proof and a full verification of several examples in the Coq proof assistant.
This is joint work with Zhong Shao, Alexander Vaynberg, and Zhaozhong Ni at Yale University, and Sen Xiang at the University of Science and Technology of China.
Automatic Test Factoring for Java
Test factoring creates fast, focused unit tests from slow system-wide tests; each new unit test exercises only a subset of the functionality exercised by the system test. Augmenting a test suite with factored unit tests should catch errors earlier in a test run. One way to factor a test is to introduce mock objects. If a test exercises a component T, which interacts with another component E (the "environment"), the implementation of E can be replaced by a mock. The mock checks that T's calls to E are as expected, and it simulates E's behavior in response. We introduce an automatic technique for test factoring. Given a system test for T and E, and a record of T's and E's behavior when the system test is run, test factoring generates unit tests for T in which E is mocked. The factored tests can isolate bugs in T from bugs in E and, if E is slow or expensive, improve test performance or cost. Our implementation of automatic dynamic test factoring for the Java language reduces the running time of a system test suite by up to an order of magnitude.
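A hand-written sketch of the kind of mock that test factoring would generate automatically may help. Here T is a hypothetical Billing component and E a slow TaxService environment (both names are illustrative, not from the talk); the mock checks T's calls against a recorded trace and replays E's recorded replies, so the real E never runs.

```java
import java.util.List;

interface TaxService {                       // the environment E
    double rateFor(String state);
}

class Billing {                              // the component under test T
    private final TaxService taxes;
    Billing(TaxService taxes) { this.taxes = taxes; }
    double total(double subtotal, String state) {
        return subtotal * (1.0 + taxes.rateFor(state));
    }
}

// The mock: verifies T's calls match the recorded trace and replays
// E's recorded responses in order.
class MockTaxService implements TaxService {
    private final List<String> expectedCalls;
    private final List<Double> recordedReplies;
    private int next = 0;
    MockTaxService(List<String> calls, List<Double> replies) {
        this.expectedCalls = calls;
        this.recordedReplies = replies;
    }
    public double rateFor(String state) {
        if (!state.equals(expectedCalls.get(next)))
            throw new AssertionError("unexpected call: rateFor(" + state + ")");
        return recordedReplies.get(next++);
    }
}

public class FactoredTest {
    public static void main(String[] args) {
        // Trace captured while the original system test ran.
        MockTaxService mock = new MockTaxService(List.of("MA"), List.of(0.0625));
        Billing billing = new Billing(mock);
        System.out.println(billing.total(100.0, "MA"));   // prints 106.25
    }
}
```

If this factored test fails, the bug is in Billing (T), not in TaxService (E); a change in T's interaction with E shows up as an "unexpected call" error, signaling that the trace must be re-recorded.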
Programming with Exceptions in JCilk
JCilk extends the Java language to provide call-return semantics for multithreading, much as Cilk does for C. Java's built-in thread model does not support the passing of exceptions or return values from one thread back to the "parent" thread that created it. JCilk imports Cilk's fork-join primitives spawn and sync into Java to provide procedure-call semantics for concurrent subcomputations. The talk presents how JCilk integrates exception handling with multithreading by defining semantics consistent with the existing semantics of Java's try and catch constructs, but which handle concurrency in spawned methods. JCilk's strategy of integrating multithreading with Java's exception semantics yields some surprising semantic synergies. In particular, JCilk extends Java's exception semantics to allow exceptions to be passed from a spawned method to its parent in a natural way that obviates the need for Cilk's inlet and abort constructs. This extension is "faithful" in that it obeys Java's ordinary serial semantics when executed on a single processor. When executed in parallel, however, an exception thrown by a JCilk computation signals its sibling computations to abort, which yields a clean semantics in which only a single exception from the enclosing try block is handled. The decision to implicitly abort side computations opens a Pandora's box of subsidiary linguistic problems to be resolved, however. For instance, aborting might cause a computation to be interrupted asynchronously, causing havoc in programmer understanding of code behavior. To minimize the complexity of reasoning about aborts, JCilk signals them "semisynchronously" so that abort signals do not interrupt ordinary serial code. In addition, JCilk propagates an abort signal throughout a subcomputation naturally with a built-in CilkAbort exception, thereby allowing programmers to handle clean-up by simply catching the CilkAbort exception. 
The semantics of JCilk allow programs with speculative computations to be programmed easily. Speculation is essential for parallelizing programs such as branch-and-bound or heuristic search. We show how JCilk's linguistic mechanisms can be used to program a solution to the "queens" problem.
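JCilk's spawn, sync, and CilkAbort are language extensions, so plain Java cannot express them directly; as a loose plain-Java analogue of the speculation pattern described above, `ExecutorService.invokeAny` races several subcomputations, returns the first result, and cancels the losers, much as a JCilk exception aborts its sibling computations.

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SpeculativeSearch {
    // Stand-in for one speculative branch of a search (e.g. one queens
    // placement); the delay models differing amounts of work.
    static Callable<Integer> branch(int answer, long delayMs) {
        return () -> {
            Thread.sleep(delayMs);   // InterruptedException ends aborted branches
            return answer;
        };
    }

    static int search() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(3);
        try {
            // The first branch to finish wins; invokeAny cancels the rest,
            // analogous to JCilk signalling siblings to abort.
            return pool.invokeAny(List.of(
                branch(42, 10), branch(7, 1000), branch(9, 1000)));
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("first solution: " + search());
    }
}
```

Unlike JCilk's semisynchronous aborts, cancellation here interrupts the losing threads asynchronously at blocking points, which is exactly the kind of complexity JCilk's linguistic design aims to tame.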
Custom Object Layout for Garbage-Collected Languages
Modern architectures require data locality to achieve performance. However, garbage-collected languages like Java limit the ability of programmers to influence object locality, and so impose a significant performance penalty. We present custom object layout, an approach that allows programmers to control object layout in garbage-collected languages. Custom object layout cooperates with copying garbage collection. At collection time, the garbage collector invokes programmer-supplied methods that direct object placement. Custom object layout is particularly effective at improving the locality of classes with well-known traversal patterns, such as dictionary data structures. We show that using custom object layout can reduce cache misses by 50%-77% and thus improves the query performance of dictionary data structures by 20%.
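A hedged sketch of the programmer-supplied placement hook may clarify the mechanism. The interface and method names below are hypothetical; in the real system the copying collector invokes such a method at collection time and copies objects in the order it yields, so a dictionary structure with a known root-to-leaf probe pattern gets depth-first placement and better cache locality.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical hook: a class tells the collector where its children
// should be placed relative to it.
interface CustomLayout {
    List<Object> layoutOrder();
}

class TreeNode implements CustomLayout {
    final int key;
    TreeNode left, right;
    TreeNode(int key) { this.key = key; }
    // A search tree is probed root-to-leaf, so request depth-first placement.
    public List<Object> layoutOrder() {
        List<Object> order = new ArrayList<>();
        if (left != null) order.add(left);
        if (right != null) order.add(right);
        return order;
    }
}

public class LayoutDemo {
    // Simulates the copying collector: place each object, then its
    // children in the order the object requested.
    static void place(Object o, List<Object> heap) {
        heap.add(o);
        if (o instanceof CustomLayout)
            for (Object child : ((CustomLayout) o).layoutOrder())
                place(child, heap);
    }

    public static void main(String[] args) {
        TreeNode root = new TreeNode(5);
        root.left = new TreeNode(3);
        root.right = new TreeNode(8);
        List<Object> heap = new ArrayList<>();
        place(root, heap);
        System.out.println("placed " + heap.size() + " objects depth-first");
    }
}
```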
XVP: An Extended Virtual Platform
We argue that a major limitation of current object-oriented languages and their virtual platforms is the lack of proper support for assertions (logic-based constraints). This talk presents the main components of the required architecture. This architecture includes representation of logic-based constraints in class-file and class object structures, a loader that performs appropriate actions on type parameters and constraints, and reflective capabilities that report information on type signatures and their associated constraints. A particularly distinctive component of this architecture, not usually considered part of a virtual platform, is an interface to a program verification system that checks constraints and verifies behavioral subtyping requirements. This is accomplished by interfacing the program verification system with the reflective capabilities and performing verification by accessing type signatures and their associated constraints in (dynamically) loaded class objects. The specifics are implemented by extending the Java Virtual Machine with the proposed functionality.
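In unextended Java, a faint echo of this architecture can be sketched with runtime annotations: a constraint attached to a class is recoverable through reflection, which is how the proposed verifier accesses constraints on dynamically loaded class objects (the `@Constraint` annotation and `Account` class below are illustrative stand-ins, not the XVP formats):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Stand-in for the logic-based constraints stored in the extended
// class-file format.
@Retention(RetentionPolicy.RUNTIME)
@interface Constraint { String value(); }

@Constraint("balance >= 0")
class Account {
    int balance;
}

public class ConstraintReflection {
    public static void main(String[] args) {
        // The verifier would retrieve the constraint from the loaded class
        // object and check it, along with supertype constraints, for
        // behavioral subtyping.
        Constraint c = Account.class.getAnnotation(Constraint.class);
        System.out.println("constraint on Account: " + c.value());
    }
}
```

The XVP proposal goes further than annotations allow: constraints participate in loading, are tied to type signatures (including type parameters), and are checked by an external verification system rather than merely reported.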
Joint work with Suad Alagic and Dan Dillon.
Last modified Monday, February 20th, 2006 3:37:03pm