PPLV Research Seminars
The PPLV group now hosts fortnightly invited research seminars.
If you are interested in giving a research seminar, please contact Fredrik Dahlqvist in the first instance.
Upcoming seminars:
Time / date:  Tuesday 21 February, 3-5pm

Speaker:  Benjamin Kaminski, RWTH Aachen University
Location:  Room 6.12, MPEB
Title:  A Weakest Pre-Expectation Semantics for Mixed-Sign Expectations
Abstract:  We start with a short tutorial on classical weakest precondition reasoning à la Dijkstra and how it can be extended to weakest preexpectations for reasoning about expected values in probabilistic programs à la Kozen / McIver & Morgan. We then present a weakest-precondition-style calculus for reasoning about expected values of mixed-sign unbounded random variables after execution of a probabilistic program. The semantics of a while loop is well-defined as the limit of iteratively applying a functional to a zero element, just as in the traditional weakest preexpectation calculus, even though a standard least fixed point argument is not applicable in this context. A striking feature of our semantics is that it is always well-defined, even if the expected values do not exist. We show that the calculus is sound and allows for compositional reasoning, and we present an invariant-based approach for reasoning about preexpectations of loops. 
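For readers unfamiliar with the calculus the tutorial builds on, the standard weakest-preexpectation rules (à la McIver & Morgan, written from memory rather than taken from the talk) can be sketched as follows; the talk's contribution concerns how the last rule can remain well-defined for mixed-sign, unbounded f:

```latex
\begin{align*}
\mathrm{wp}[\![\mathtt{skip}]\!](f) &= f\\
\mathrm{wp}[\![x := e]\!](f) &= f[e/x]\\
\mathrm{wp}[\![c_1;\, c_2]\!](f) &= \mathrm{wp}[\![c_1]\!]\bigl(\mathrm{wp}[\![c_2]\!](f)\bigr)\\
\mathrm{wp}[\![\{c_1\}\,[p]\,\{c_2\}]\!](f) &= p\cdot\mathrm{wp}[\![c_1]\!](f) + (1-p)\cdot\mathrm{wp}[\![c_2]\!](f)\\
\mathrm{wp}[\![\mathtt{while}\,(b)\,\{c\}]\!](f) &= \mathrm{lfp}\;X.\ [\neg b]\cdot f + [b]\cdot\mathrm{wp}[\![c]\!](X)
\end{align*}
```

Here [b] denotes the indicator function of the guard b; for non-negative bounded f the least fixed point exists by standard arguments, and it is exactly this argument that fails in the mixed-sign setting the talk addresses.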
Past seminars:
Time / date:  Tuesday 29th November 2016, 3-4pm 

Speaker:  Jonathan Spring 
Location:  6.12 Malet Place Engineering Building 
Title:  Why Separation Logic Works 
Abstract: 
Separation Logic works because it merges the software engineer's conceptual model of a program's manipulation of computer memory with the logical model that interprets which sentences in the logic are true. Separation Logic is an interesting case because of its widespread success in verification tools used daily by Facebook and others. For these two senses of model — the engineering/conceptual and the logical — to merge in a genuine sense, each must maintain its norms of use from its home discipline. When this occurs, the development of both the logic and the engineering benefits greatly. Seeking this intersection of two different senses of the concept of model provides a strategy by which computer scientists and logicians may be successful. In this talk, we will explore the extent to which merging models leads to success in computer science, and thereby the extent to which it is a strategy for future development. 
Time / date:  Thursday 17th November 2016, 4-5pm 

Speaker:  Tyler Sorensen, Imperial College London 
Location:  1.02 Malet Place Engineering Building 
Title:  Exposing Errors Related to Weak Memory in GPU Applications 
Abstract:  In this presentation, I will discuss the systematic design of a testing environment that uses stressing and fuzzing to reveal errors in GPU applications that arise due to weak memory effects. This approach is evaluated across several CUDA applications that use fine-grained concurrency, on seven GPUs spanning three Nvidia architectures. The results show that applications that rarely, or never, exhibit errors related to weak memory when executed natively can readily exhibit these errors when executed in the testing environment. The testing environment also provides a means to identify the root causes of erroneous weak effects, and automatically suggests how to insert fences that experimentally eliminate these errors. This empirical fence insertion method carries significantly lower overhead, in terms of execution time and energy consumption, than a more conservative, guaranteed-sound approach. Joint work with Alastair Donaldson. 
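To illustrate the kind of weak-memory error the talk targets, here is a toy interleaving explorer for the classic "message passing" litmus test (an illustrative sketch of the phenomenon, not the speaker's tool): thread P0 writes data then sets a flag; thread P1 reads the flag then the data. Under sequential consistency the outcome r1 = 1, r2 = 0 is impossible, but if P0's two stores may reorder (a weak-memory effect) it becomes observable.

```python
from itertools import permutations

def outcomes(p0_ops, p1_ops):
    """Enumerate (r1, r2) results over all interleavings respecting program order."""
    results = set()
    labels = [("P0", i) for i in range(len(p0_ops))] + \
             [("P1", i) for i in range(len(p1_ops))]
    for order in permutations(labels):
        idx = {"P0": 0, "P1": 0}
        mem = {"X": 0, "FLAG": 0}
        regs = {}
        ok = True
        for tid, i in order:
            if i != idx[tid]:        # this permutation violates program order
                ok = False
                break
            idx[tid] += 1
            (p0_ops if tid == "P0" else p1_ops)[i](mem, regs)
        if ok:
            results.add((regs["r1"], regs["r2"]))
    return results

def store(var, val):
    return lambda mem, regs: mem.__setitem__(var, val)

def load(reg, var):
    return lambda mem, regs: regs.__setitem__(reg, mem[var])

p1 = [load("r1", "FLAG"), load("r2", "X")]
sc = outcomes([store("X", 1), store("FLAG", 1)], p1)    # stores in program order
weak = outcomes([store("FLAG", 1), store("X", 1)], p1)  # stores reordered

assert (1, 0) not in sc   # forbidden under sequential consistency
assert (1, 0) in weak     # the weak behaviour such a testing environment hunts for
```

On real hardware the reordering happens rarely, which is why the stressing and fuzzing described above are needed to provoke it.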
Time / date:  Wednesday 13th July 2016, 4-5pm 

Speaker:  Letterio Galletta, University of Pisa 
Location:  Roberts 110 
Title:  Adaptivity: Linguistic Mechanism and Static Analysis 
Abstract: 
Adaptive systems modify their behaviour in order to run always and everywhere. Their structure is therefore subject to continuous changes, which however could compromise the correct behaviour of applications and break the guarantees on their non-functional requirements. Recently, Context Oriented Programming (COP) was proposed as a viable paradigm to develop adaptive and context-aware software. It advocates languages with peculiar constructs that express context-dependent behaviour in a modular manner. In this way context-awareness is built-in, and the provided linguistic abstractions impose good practice on programmers. This talk introduces COP, focusing in particular on ML_CoDa, a functional context-oriented language, and its two-step static analysis, which detects whether adaptation errors may occur at run time. 
Time / date:  Monday 27th June 2016, 4-5pm 

Speaker:  Carsten Fuhs, Birkbeck, University of London 
Location:  Room 405, 66-72 Gower St 
Title:  SMT Techniques and Solvers in Automated Termination Analysis 
Abstract: 
Termination is the property of a program that, regardless of the input, execution of the program will always come to a halt eventually. Although this property is undecidable, since the early 2000s fully automated techniques and tools for termination analysis have flourished in several communities: term rewriting, imperative programs, functional programs, logic programs, ... A common theme behind most of these tools is the use of constraint-based techniques to advance the proof of (non)termination. Recently, SAT and SMT solvers in particular have been used as back-ends to automate these techniques. In my talk, I will give an overview of automated termination analysis from an SMT solving perspective. 
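A core ingredient of the constraint-based techniques mentioned above is the ranking function: a value that is bounded below at the loop guard and strictly decreases on every iteration. The following toy checker (example and names are mine, not the speaker's tools) tests these two conditions on sampled states; an SMT solver would discharge them symbolically over all states instead.

```python
# Loop under analysis:  while x > 0: x = x - y   (with the invariant y >= 1)
# Candidate ranking function: f(x, y) = x  -- bounded below and decreasing.

def is_ranking_function(f, guard, update, samples):
    """Check boundedness and strict decrease of f on sampled states."""
    for state in samples:
        if guard(state):
            nxt = update(state)
            if not (f(state) >= 0 and f(nxt) <= f(state) - 1):
                return False
    return True

guard = lambda s: s["x"] > 0
update = lambda s: {"x": s["x"] - s["y"], "y": s["y"]}
f = lambda s: s["x"]

samples = [{"x": x, "y": y} for x in range(-5, 50) for y in range(1, 10)]
assert is_ranking_function(f, guard, update, samples)

# g(x, y) = y never decreases, so it is rejected as a ranking function:
assert not is_ranking_function(lambda s: s["y"], guard, update, samples)
```

In the SMT-based tools the talk surveys, the coefficients of a linear candidate f are left symbolic and the solver searches for values making both conditions valid.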
Time / date:  Friday 24th June 2016, 4:30-5:30pm 

Speaker:  James R. Wilcox, University of Washington 
Location:  Room 405, 66-72 Gower St 
Title:  Verdi: A Framework for Implementing and Verifying Distributed Systems 
Abstract: 
Distributed systems are difficult to implement correctly because they must handle both concurrency and failures. Further, their behavior is often too complex to permit exhaustive testing. Bugs in these systems have led to the loss of critical data and unacceptable service outages. We present Verdi, a framework for implementing and formally verifying distributed systems in Coq. Verdi formalizes various network semantics with different faults, and the developer chooses the most appropriate fault model when verifying their implementation. Furthermore, Verdi's verified system transformers (VSTs) ease the verification burden by enabling the developer to first verify their system under an idealized fault model, then transfer the resulting correctness guarantees to a more realistic fault model without any additional proof burden. We have used Verdi to implement and verify the Raft consensus protocol. We proved that our implementation of Raft correctly provides linearizable state machine replication. This proof required iteratively discovering and proving 90 system invariants, consisting of over 50,000 lines of code. At such a large scale, the primary challenge we faced during the verification process was proof maintenance. We will discuss proof engineering practices that were critical to our success. 
Time / date:  Thursday 16th June, 4-5pm 

Speaker:  Laurence Tratt, King's College London 
Location:  Room 405, 66-72 Gower St 
Title:  Fine-grained Language Composition 
Abstract: 
Programming languages are islands, each disconnected from the rest. We choose a language for a task and, for better or worse, stick with it. Communicating between programs written in different languages is such a slow, arduous task that we avoid doing it whenever possible. In this talk I will first show how language composition can lower, and even remove, the barriers between languages. We have pioneered new approaches to the two major challenges in language composition: editing and running composed programs. Using our novel editor 'Eco', users can write source files that contain fragments of multiple languages. We then run multi-language programs using composed meta-tracing VMs. Our preliminary results suggest that the performance of composed programs is often competitive with traditional mono-language VMs. Joint work with Edd Barrett, Carl Friedrich Bolz, Lukas Diekmann, Geoff French, Sarah Mount, and Jasper Schulz. More at http://soft-dev.org/ 
Time / date:  Thursday 9th June 2016, 4-5pm 

Speaker:  Siddharth Krishna, New York University 
Location:  6.12 Malet Place Engineering Building 
Title:  Learning to Verify the Heap 
Abstract: 
We present a data-driven verification framework to automatically prove memory safety and functional correctness of heap programs. For this, we introduce a novel statistical machine learning technique that maps observed program states to (possibly disjunctive) separation logic formulas describing the invariant shape of data structures at relevant program locations. We then attempt to verify these predictions using a theorem prover, where counterexamples to a predicted invariant are used as additional input to the shape predictor in a refinement loop. After obtaining valid shape invariants, we use a second learning algorithm to strengthen them with data invariants, again employing a refinement loop using the underlying theorem prover. We have implemented our techniques in Cricket, an extension of the GRASShopper verification tool. Cricket is able to automatically prove memory safety and correctness of implementations of a variety of classical list-manipulating algorithms such as insertion sort. Joint work with Marc Brockschmidt, Yuxin Chen, Byron Cook, Pushmeet Kohli, Daniel Tarlow, and He Zhu. 
Biography:  Siddharth Krishna is a PhD student in the Computer Science Department of New York University, where he works with Thomas Wies. He is interested in using machine learning for program verification and synthesis, and in using SMT and local theories to verify heap-manipulating programs. 
Time / date:  Wednesday 1st June 2016, 4-5pm 

Speaker:  Nuno Lopes, Microsoft Research Cambridge 
Location:  1.03 Malet Place Engineering Building 
Title:  Network Verification at Microsoft 
Abstract:  In this talk I’ll show why network verification is so important today and why it will continue that way in the future. I’ll then survey the techniques currently used at scale by Microsoft, as well as new techniques being developed. 
Time / date:  Friday 27th May, 11am-12pm 

Speaker:  Dana Scott, Hillman University Professor of Computer Science, Philosophy, and Mathematical Logic (Emeritus), Carnegie Mellon University 
Location:  Room 405, 66-72 Gower St 
Title:  Types and Type-free Lambda Calculus 
Abstract: 
Denotational semantics started in Oxford in late 1969. It was hoped that domain theory would provide a basis both for recursive definitions in programs and recursive definitions of semantical structures. Early troubles were encountered in using tops and bottoms, and soon researchers turned to operational semantics. Others wanted to expand work to communicating and parallel processes. Axiomatic and synthetic theories did not solve the problems, and as a result much research effort in domain theory faded. Late work by Reynolds and collaborators, however, has opened up new and promising approaches for programming-language semantics. Perhaps the much simplified modeling using enumeration operators can spark some new investigations, especially since it is so easy to add a rich type structure to the calculus. 
Time / date:  Thursday 12th May 2016, 4-5pm 

Speaker:  Paul Blain Levy, University of Birmingham 
Location:  1.02 Malet Place Engineering Building 
Title:  Transition Systems Over Games 
Abstract: 
This talk combines two areas of computer science: game semantics and transition systems.
We want to represent game semantics using transition systems, but traditional systems are too rigid because they have a fixed set of actions. So instead we develop a version of transition system in which each state sits over a position of our game. The actions are the currently available moves. To make this into a compositional semantics, we give a “transfer”, a kind of program that converts moves between two games, giving an operation on strategies. The agreement between the transition systems and the transfer is given by a relation called a “stepped bisimulation”. (Joint work with Sam Staton) 
Time / date:  Wednesday 27th April 2016, 4-5pm 

Speaker:  Radu Grigore, University of Kent 
Location:  Executive Suite 103, Engineering Front Building 
Title:  Tree Buffers 
Abstract:  In runtime verification, the central problem is to decide if a given program execution violates a given property. In online runtime verification, a monitor observes a program's execution as it happens. If the program being observed has hard real-time constraints, then the monitor inherits them. In the presence of hard real-time constraints it becomes a challenge to maintain enough information to produce error traces, should a property violation be observed. In this work we introduce a data structure, called a tree buffer, that solves this problem in the context of automata-based monitors: if the monitor itself respects hard real-time constraints, then enriching it with tree buffers makes it possible to provide error traces, which are essential for diagnosing defects. We show that tree buffers are also useful in other application domains. For example, they can be used to implement the functionality of capturing groups in regular expressions. We prove optimal asymptotic bounds for our data structure, and validate them using empirical data from two sources: regular expression searching through Wikipedia, and runtime verification of execution traces obtained from the DaCapo test suite. 
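The underlying idea can be sketched in a few lines (a naive illustration of history-sharing via parent pointers, not the paper's tree buffer, which additionally bounds the work done per step): monitor states share their histories as a tree, so common prefixes are stored once and an error trace is recovered by walking ancestors.

```python
class Node:
    """One event in a shared history tree."""
    def __init__(self, event, parent=None):
        self.event = event
        self.parent = parent

def trace(node):
    """Reconstruct the event history leading to `node`."""
    events = []
    while node is not None:
        events.append(node.event)
        node = node.parent
    return list(reversed(events))

root = Node("init")
a = Node("open", root)
b1 = Node("read", a)    # two executions share the prefix init/open ...
b2 = Node("close", a)   # ... which is stored only once

assert trace(b1) == ["init", "open", "read"]
assert trace(b2) == ["init", "open", "close"]
```

The hard part, which the naive version above ignores, is reclaiming nodes that no live state can reach while keeping every operation constant-time, so that the monitor's hard real-time guarantees survive.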
Time / date:  Wed 13th April 2016, 4-5pm 

Speaker:  Nikos Tzevelekos, Queen Mary University of London 
Location:  Executive Suite 103, Engineering Front Building 
Title:  Model-Checking Program Equivalence in Interface Middleweight Java 
Abstract:  Using game semantics, we investigate the problem of verifying contextual equivalences in Interface Middleweight Java (IMJ), an imperative object calculus in which program phrases are typed using interfaces. In particular, we show how to determine the decidability status of problem instances (over a fixed type signature) by examining the position of methods inside the term type and the types of its free identifiers. Our results build upon the recent fully abstract game semantics of IMJ. Decidability is proved by translation into visibly pushdown register automata over infinite alphabets with fresh-input recognition, and the procedure has been implemented in a new tool called Coneqct. 
Time / date:  Thu 24th March 2016, 4-5pm 

Speaker:  Jens Groth, Information Security, UCL 
Location:  Roberts 1.10 
Title:  Cryptography for Eagles 
Abstract:  The talk will be a high-level discussion of what cryptography is and what I see as important research directions. I aim to make it non-technical and accessible to everyone, and will try to leave time for discussion. Topics I will touch upon include multi-party computation: any efficiently computable function can be computed securely, i.e., the result is guaranteed to be correct and private inputs remain confidential; and zero-knowledge proofs: a theorem can be proved to be true without revealing anything but the fact that it is true. Recently we have developed highly efficient zero-knowledge proofs; even large, complex statements have small, compact proofs. Challenges: computer-aided construction of cryptographic protocols and automated verification of security proofs. 
Time / date:  Wed 9th March 2016, 4-5pm 

Speaker:  Pawel Sobocinski, University of Southampton 
Location:  1.20 MPEB 
Title:  Graphical Linear Algebra and Applications 
Abstract:  I will introduce the symmetric monoidal theory of interacting Hopf monoids, a string-diagrammatic language capable of capturing most standard concepts of finite-dimensional linear algebra. I will briefly outline recent work on applications to signal flow graphs, a classical formalism in control theory and signal processing. Various results in the talk are joint work with Filippo Bonchi and Fabio Zanasi, and with Brendan Fong and Paolo Rapisarda. I have been blogging about graphical linear algebra at http://graphicallinearalgebra.net 
Time / date:  Fri 26th February 2016, 4-5pm 

Speaker:  Anupam Das, ENS de Lyon 
Location:  Roberts 110 
Title:  An introduction to linear rewriting in propositional logic 
Abstract:  Linear term rewriting can be an effective way to control space complexity in the computation of normal forms, deciding reachability, etc. Linear rules are also fundamental in proof theory, controlling the logical behaviour of a system when formulated with multiplicative rules. This research line begins with the observation that entailment between linear terms of Boolean algebras is already coNP-complete, i.e. all propositional tautologies can be written as a 'linear inference'. I will present some characterisations of various subsystems as well as certain negative results, culminating in a recent result that there is no complete basis of linear rules. The proofs in this area are often simple and elegant once appropriate invariants are identified; this itself relies on a combination of tools from term rewriting, graph theory and Boolean function theory. Finally, I present an abstraction of linear terms in a graph-theoretic setting and discuss some ongoing work where reduction behaviour is better behaved, but where we are forced to leave the world of Boolean functions altogether. This will be a low-level, first-principles talk and should be accessible to all! 
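A concrete example may help: the "switch" rule x∧(y∨z) → (x∧y)∨z is the textbook linear inference, since each variable occurs exactly once on each side and the implication is a tautology. The toy checker below (my own illustration, not the speaker's material) verifies validity by truth table; by contrast, distribution x∧(y∨z) → (x∧y)∨(x∧z) is valid but not linear, because x occurs twice on the right.

```python
from itertools import product

def valid_implication(lhs, rhs, n_vars):
    """Check lhs -> rhs over all 0/1 assignments to n_vars variables."""
    return all(not lhs(*v) or rhs(*v)
               for v in product((False, True), repeat=n_vars))

switch_lhs = lambda x, y, z: x and (y or z)
switch_rhs = lambda x, y, z: (x and y) or z

assert valid_implication(switch_lhs, switch_rhs, 3)       # switch is valid
assert not valid_implication(switch_rhs, switch_lhs, 3)   # and not reversible
```

The negative result quoted above says no finite (or even recursively enumerable, in the strong form) set of such rules derives all valid linear inferences.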
Time / date:  Wed 24th Feb 2016, 4-5pm 

Speaker:  Matthew Hague, Royal Holloway, University of London 
Location:  Roberts 422 
Title:  Logic, Automata, and Types at Higher Orders 
Abstract:  The equivalence of alternating parity tree automata, the modal mu-calculus, and parity games as definitional devices for tree languages is a keystone result underpinning the model checking of reactive systems. In this talk, we lift this equi-expressivity result to languages of higher-type Böhm trees, which may be viewed as higher-order functions over trees. We equip Stirling's alternating dependency tree automata (ADTA) with an omega-regular winning condition, and show that they are equi-expressive with Kobayashi-Ong intersection types for defining languages of infinite Böhm trees. We then introduce a higher-type mu-calculus, which extends the mu-calculus with predicates for detecting variables, and corresponding constructs for detecting lambda-abstractions. We show that the higher-type mu-calculus is equi-expressive with a subclass of intersection types (equivalently, ADTA) called parity permissive. We conjecture that parity permissive restricts the expressive power of intersection types and ADTA. 
Time / date:  Wed 10th February 2016, 4-5pm 

Speaker:  Dan Ghica, University of Birmingham 
Location:  1.02 MPEB 
Title:  Alpha equivalence for interleaved scopes 
Abstract:  I will describe the key syntactic and semantic aspects of a nominal framework allowing scopes of name bindings to be arbitrarily interleaved. Name binding (e.g. delta x.M) is handled by explicit name-creation and name-destruction brackets which admit interleaving. We will see an appropriate notion of alpha-equivalence for such a language and study the syntactic structure required for alpha-equivalence to be a congruence. I will sketch some potential applications to the semantics of programming languages with explicit resource allocation and release. This is joint work with James Gabbay and Daniela Petrisan. 
Time / date:  Thu 28th January 2016, 4-5pm 

Speaker:  Jonathan Spring, PPLV/Information Security, UCL 
Location:  Roberts 110 
Title:  Avoiding Pseudoscience: Prudence, Logic, and Verification in Studying Information Security 
Abstract:  Pseudoscience is a (pejorative) term for a pursuit that is judged not to be science but whose participants lay claim to the respectability of science regardless. In this talk I'll discuss how we might judge whether some pursuit is (pseudo)science and how we might verify whether some sample work in information security passes this judgement. The result is a rough framework for improving work in computer science, such as program verification and information security. 
Time / date:  Fri 11th December 2015, 4-5pm 

Speaker:  Carsten Fuhs, Birkbeck 
Location:  6.12 MPEB (Malet Place Engineering Building) 
Title:  Automatic Quantitative Resource Analysis for Programs 
Abstract:  We present a modular approach to automatic resource complexity analysis of integer programs. Based on a novel alternation between finding symbolic time bounds for program parts and using these to infer bounds on the absolute values of program variables, we can restrict each analysis step to a small part of the program while maintaining a high level of precision. The resource bounds computed by our method are polynomial or exponential expressions that depend on the absolute values of input parameters. Our contributions are implemented in the open-source tool KoAT, and extensive experiments show the performance and power of our implementation in comparison with other tools. 
Time / date:  Wed 25th November 2015, 4-5pm 

Speaker:  Andrew Ruef, University of Maryland 
Location:  6.12 MPEB (Malet Place Engineering Building) 
Title:  Creating Industrial Software Analysis Systems or How I Learned to Stop Worrying and Just Smush Fuzzers and Symbolic Executors Together 
Abstract:  Software fails us every day and we want it to stop. Absent a magical wand to rid the world of legacy code, something should be done to find bugs in C programs so that someone can fix them. We have existing tools and techniques but how can we apply them to real code at scale and get results? In this talk, I’ll explore our efforts at creating systems that combine fuzzing, symbolic execution, and abstract interpretation, and discuss our results in finding memory corruption bugs in a somewhat controlled experiment. I’ll also talk about benefits and challenges of “industrial grade” system development and some of the theoretical ideas and future directions in combining information generated by analyzers using counterexample guided abstraction refinement. 
Time / date:  Fri 20th November 2015, 4-5pm 

Speaker:  Stefan Milius, FAU Erlangen-Nürnberg 
Location:  1.19 MPEB (Malet Place Engineering Building) 
Title:  Formal Verification of Systems of Synchronous Software Components 
Abstract: 
Software for distributed, safety-critical embedded systems, as found e.g. in the automotive, railway or avionics domains, has to satisfy very high requirements concerning correctness and fault tolerance. Therefore, standards for the development of such systems highly recommend the use of formal methods. For many complex control tasks the preferred architecture is a system composed of components that operate synchronously but communicate asynchronously with other components – a paradigm called GALS (globally asynchronous – locally synchronous) architecture. In this talk we present an approach to close the "methodological" gap between the formal verification of synchronous components and asynchronous systems of components. We mainly focus on systems where the synchronous components are implemented in SCADE (a language and modelling tool for synchronous controller software). The idea is then to abstract such concrete components with contracts, which are a mixture of temporal logic formulas and nondeterministic state machines. Formal verification of global system properties is then done on a network of contracts, which has lower complexity than the original network of concrete components. We will first explain the verification approach in general. Then we introduce a modelling language for the specification of contracts, and explain the transformations used for model checking GALS systems. Our model checking method is based on bounded model checking using SAT and SMT solvers. Finally, we demonstrate the results of our approach and tool on an industrial case study. 
Time / date:  Wed 21st October 2015, 4-5pm 

Speaker:  Jon Williamson, University of Kent 
Location:  1.04 MPEB (Malet Place Engineering Building) 
Title:  Inductive Logic for Automated Decision Making 
Abstract:  According to Bayesian decision theory, one's acts should maximise expected utility. To calculate expected utility one needs not only the utility of each act in each possible scenario but also the probabilities of the various scenarios. It is the job of an inductive logic to determine these probabilities, given the evidence to hand. The most natural inductive logic, classical inductive logic, attributable to Wittgenstein, was dismissed by Carnap due to its apparent inability to capture the phenomenon of learning from experience. I argue that Carnap was too hasty to dismiss this logic: classical inductive logic can be rehabilitated, and the problem of learning from experience overcome, by appealing to the principles of objective Bayesianism. I then discuss the practical question of how to calculate the required probabilities and show that the machinery of probabilistic networks can be fruitfully applied here. This culminates in an objective Bayesian decision theory that has a realistic prospect of automation. 
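The decision rule the abstract starts from is simple to state: choose the act maximising expected utility, the probability-weighted sum of utilities across scenarios. A minimal sketch (the scenario, acts, and numbers are my own illustration, not the speaker's):

```python
def expected_utility(utilities, probs):
    """Sum of utility * probability over the possible scenarios."""
    return sum(u * p for u, p in zip(utilities, probs))

# Probabilities over two scenarios (rain, no rain), as produced by an
# inductive logic from the evidence to hand:
probs = [0.3, 0.7]

# Utility of each act in each scenario:
utility = {
    "take umbrella": [1.0, 0.6],
    "leave it":      [0.0, 1.0],
}

best = max(utility, key=lambda act: expected_utility(utility[act], probs))
assert best == "take umbrella"   # 0.3*1.0 + 0.7*0.6 = 0.72 beats 0.70
```

The hard part, and the subject of the talk, is the first line's input: where the scenario probabilities come from, which is exactly the job the abstract assigns to an inductive logic.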
Time / date:  Fri 9th October 2015, 4-5pm 

Speaker:  Sukriti Bhattacharya, PPLV, UCL  
Location:  6.12 MPEB (Malet Place Engineering Building)  
Title:  MaDSLaM: A Statistical Language Model For Malware Detection  
Abstract:  Malware continues to be an ongoing threat to modern computing. The detection of malware is an area of major concern not only to the research community but also to the general public. We introduce MaDSLaM, a framework for enabling statistical analysis of binary executables by filtering out malicious executables at an early stage of the malware analysis phase with a sufficiently low false positive rate. MaDSLaM exploits the naturalness of software via natural language processing techniques, building language models that distinguish between malware and benign files at the executable binary level. We developed a proof-of-concept version of MaDSLaM and evaluated it over a fairly large set of malware samples covering a wide range of malware families and malware authors. 
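To give a flavour of the "naturalness" idea, here is a minimal byte-bigram language model (a toy sketch of the general technique, not MaDSLaM itself; the byte patterns are invented): sequences whose bigrams were common in the training data score a low average negative log-likelihood, unfamiliar ones a high one.

```python
import math
from collections import Counter

def train(sequences):
    """Count byte bigrams and their left contexts."""
    counts, context = Counter(), Counter()
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[(a, b)] += 1
            context[a] += 1
    return counts, context

def avg_neg_log_likelihood(seq, model, alpha=1.0, vocab=256):
    """Average surprisal of a sequence under the model, with add-one smoothing."""
    counts, context = model
    nll = 0.0
    for a, b in zip(seq, seq[1:]):
        p = (counts[(a, b)] + alpha) / (context[a] + alpha * vocab)
        nll -= math.log(p)
    return nll / max(1, len(seq) - 1)

benign_training = [b"\x55\x89\xe5\x83\xec" * 20]   # stand-in "benign" byte patterns
model = train(benign_training)

familiar = avg_neg_log_likelihood(b"\x55\x89\xe5\x83\xec", model)
unfamiliar = avg_neg_log_likelihood(b"\x90\x90\x90\x90\x90", model)
assert familiar < unfamiliar   # unseen bigrams register as more surprising
```

A classifier in this spirit would train one model per class (benign, malware) and assign a file to whichever model finds it less surprising.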
Time / date:  Fri 5th June 2015, 4-5pm 

Speaker:  Johannes Kinder, Royal Holloway, University of London 
Location:  6.12 MPEB 
Title:  High System-Code Security with Low Overhead 
Abstract: 
Security vulnerabilities plague modern systems because writing secure systems code is hard. Promising approaches can retrofit security automatically via runtime checks that implement the desired security policy; these checks guard critical operations, like memory accesses. Alas, the induced slowdown usually exceeds by a wide margin what system users are willing to tolerate in production, so these tools are hardly ever used. As a result, the insecurity of real-world systems persists. We present an approach in which developers/operators can specify what level of overhead they find acceptable for a given workload (e.g., 5%); our proposed tool ASAP then automatically instruments the program to maximize its security while staying within the specified "overhead budget." Two insights make this approach effective: most overhead in existing tools is due to only a few "hot" checks, whereas the checks most useful to security are typically "cold" and cheap. We evaluate ASAP on programs from the Phoronix and SPEC benchmark suites. It can precisely select the best points in the security-performance spectrum. Moreover, we analyzed existing bugs and security vulnerabilities in RIPE, OpenSSL, and the Python interpreter, and found that the protection level offered by the ASAP approach is sufficient to protect against all of them. Joint work with Jonas Wagner, Volodymyr Kuznetsov and George Candea (EPFL), presented at IEEE S&P (Oakland) 2015. 
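The "overhead budget" idea can be modelled in a few lines: keep the cheap (usually cold, security-critical) checks and elide the most expensive hot ones until the estimated slowdown fits the budget. This greedy toy (check names, costs, and the function are my own illustration, not ASAP's actual algorithm or API) keeps as many checks as the budget allows, cheapest first.

```python
def select_checks(check_costs, budget):
    """check_costs: {check_id: estimated overhead fraction}.
    Return the ids kept, preferring cheap checks, within `budget`."""
    kept, total = [], 0.0
    for check, cost in sorted(check_costs.items(), key=lambda kv: kv[1]):
        if total + cost <= budget:
            kept.append(check)
            total += cost
    return kept

# Hypothetical per-check overhead estimates from profiling:
costs = {
    "null@init":      0.0005,  # cold: almost free
    "bounds@parse":   0.001,   # cold: almost free
    "overflow@hash":  0.04,    # warm
    "bounds@hotloop": 0.12,    # hot: blows the budget on its own
}
kept = select_checks(costs, budget=0.05)
assert "bounds@hotloop" not in kept        # the one hot check is sacrificed
assert set(kept) == {"null@init", "bounds@parse", "overflow@hash"}
```

The insight quoted above is what makes this profitable: because most of the total overhead concentrates in a few hot checks, dropping them buys back nearly all the performance while most checks, and most of the security, remain.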
Time / date:  Fri 8th May 2015, 4-5pm 

Speaker:  Max Kanovich, PPLV, UCL 
Location:  1.04 MPEB 
Title:  Discrete vs. Dense Times in the Analysis of Cyber-Physical Security Protocols 
Abstract:  Many security protocols rely on assumptions about the physical environment in which their protocol sessions are carried out. For instance, distance bounding protocols take into account the round trip time of messages and the transmission velocity to infer an upper bound on the distance between two agents. We classify such security protocols as cyber-physical. Time plays a key role in the design and analysis of many of these protocols. This work investigates the foundational differences and the impacts on the analysis when using models with discrete time and models with dense time. We show that there are attacks that can be found by models using dense time, but not when using discrete time. We illustrate this with a novel attack that can be carried out on most distance bounding protocols. In this attack, one exploits the execution delay of instructions during one clock cycle to convince a verifier that he is in a location different from his actual position. We propose a Multiset Rewriting model with dense time suitable for specifying cyber-physical security protocols. We introduce Circle-Configurations and show that they can be used to symbolically solve the reachability problem for our model. Finally, we show that for the important class of balanced theories the reachability problem is PSPACE-complete. 
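The basic calculation behind distance bounding is worth making explicit: from the round-trip time of a challenge-response exchange, minus the prover's claimed processing time, the verifier infers d <= c * (t_round - t_processing) / 2. The attack sketched above exploits slack in t_processing: a prover who replies a fraction of a clock cycle earlier than claimed appears closer than it is. (The numbers below are my own illustration.)

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def distance_upper_bound(t_round, t_processing):
    """Verifier's inferred bound on the prover's distance, in metres."""
    return C * (t_round - t_processing) / 2.0

# 100 ns round trip, of which the prover claims 80 ns of processing time:
bound = distance_upper_bound(100e-9, 80e-9)
assert abs(bound - 2.998) < 0.01   # roughly 3 metres

# If the prover actually needs only 75 ns, the extra 5 ns of signal flight
# lets a more distant prover pass the same check:
assert distance_upper_bound(100e-9, 75e-9) > bound
```

With a discrete clock, sub-cycle timing differences like the 5 ns above are invisible, which is one intuition for why the dense-time model finds attacks the discrete-time model misses.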
Time / date:  Fri 24th April 2015, 4-5pm 

Speaker:  Mike Dodds, University of York 
Location:  6.12 MPEB 
Title:  A Scalable, Correct Time-Stamped Stack 
Abstract:  Concurrent data structures, such as stacks, queues, and deques, often implicitly enforce a total order over elements in their underlying memory layout. However, much of this order is unnecessary: linearizability only requires that elements are ordered if the insert methods ran in sequence. We propose a new approach which uses timestamping to avoid unnecessary ordering. Pairs of elements can be left unordered if their associated insert operations ran concurrently, and order is imposed as necessary at the eventual removal. We realise our approach in a new non-blocking data structure, the TS (timestamped) stack. Using the same approach, we can define corresponding queue and deque data structures. In experiments on x86, the TS stack outperforms and outscales all its competitors – for example, it outperforms the elimination-backoff stack by a factor of two. In our approach, more concurrency translates into less ordering, giving less-contended removal and thus higher performance and scalability. Despite this, the TS stack is linearizable with respect to stack semantics. The weak internal ordering in the TS stack presents a challenge when establishing linearizability: standard techniques such as linearization points work well when there exists a total internal order. We present a new stack theorem, mechanised in Isabelle, which characterises the orderings sufficient to establish stack semantics. By applying our stack theorem, we show that the TS stack is indeed linearizable. Our theorem constitutes a new, generic proof technique for concurrent stacks, and it paves the way for future weakly ordered data structure designs. Associated paper: http://www-users.cs.york.ac.uk/~miked/publications/POPL.15.ts_stack.pdf 
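A sequential toy can convey the timestamping idea (this is my own illustration of the partial order, not the paper's concurrent algorithm): each push records an interval [start, end]; two elements are ordered only if one interval ends before the other begins, and pop may return any element that is maximal in this partial order. Elements whose push intervals overlap, modelling concurrent pushes, are unordered, so either may be popped first.

```python
import itertools

_clock = itertools.count()  # a global logical clock

class TSStack:
    def __init__(self):
        self.items = []  # list of (value, start, end)

    def push(self, value, delay=0):
        start = next(_clock)
        for _ in range(delay):    # a slow push gets a wide interval
            next(_clock)
        self.items.append((value, start, next(_clock)))

    def pop(self):
        # An element is maximal if no other element started after it ended.
        def later(a, b):
            return a[1] > b[2]    # a started strictly after b ended
        for i in range(len(self.items) - 1, -1, -1):
            cand = self.items[i]
            if not any(later(other, cand)
                       for other in self.items if other is not cand):
                return self.items.pop(i)[0]
        raise IndexError("pop from empty TSStack")

s = TSStack()
s.push("a")
s.push("b")
assert s.pop() == "b"   # sequential pushes are ordered: LIFO behaviour
assert s.pop() == "a"
```

The real TS stack does this with per-thread pools and hardware timestamps, and the subtlety lies exactly where this toy is silent: making concurrent pops of unordered elements both fast and linearizable.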
Time / date:  Fri 10th April 2015, 4-5pm 

Speaker:  Azalea Raad, Imperial College London 
Location:  6.12 MPEB 
Title:  CoLoSL: Concurrent Local Subjective Logic 
Abstract:  A key difficulty in verifying shared-memory concurrent programs is reasoning compositionally about each thread in isolation. Existing verification techniques for fine-grained concurrency require reasoning about a static global shared resource, impeding compositionality. This work introduces the program logic CoLoSL, where each thread is verified with respect to its subjective view of the global shared state. This subjective view describes only that part of the global shared resource accessed by the thread. Subjective views may arbitrarily overlap with each other, and expand and contract depending on the resource required by the thread. This flexibility provides truly compositional proofs for shared-memory concurrency, which we demonstrate on a range of examples including a concurrent computation of a spanning tree of a graph. 
Time / date:  Thu 9th April 2015, 4–5pm 

Speaker:  Zhé Hóu, Australian National University, Canberra 
Location:  6.12 MPEB 
Title:  Labelled Sequent Calculi and Automated Reasoning for Assertions in Separation Logic 
Abstract:  Separation Logic (SL) is an extension of Hoare Logic for reasoning about programs with mutable data structures. We start by looking at the abstract core of the assertion language of SL, namely Boolean BI (BBI). We give a sound, complete, and cut-eliminable labelled sequent calculus for BBI, then consider proof search using our calculus. Next we extend our method to capture BBI with various extra properties (a.k.a. separation theories) that are useful in memory/heap models. We show that our labelled calculi can handle BBI with any subset of separation theories. This results in a modular theorem prover for BBI with separation theories. Finally, we move to SL with the concrete heap model, à la Reynolds, O'Hearn, Calcagno, Yang, etc. We give extensions of our calculi with rules for heaps and data structures. Our method handles all the logical connectives in separation logic, and also resolves some problems in the literature. 
Time / date:  Fri 20th March 2015, 4–5pm 

Speaker:  Felix Winterstein, Imperial College London 
Location:  1.20 Malet Place Engineering Building (MPEB) 
Title:  Separation Logic-Assisted Code Transformations for Efficient High-Level Synthesis 
Abstract:  The capabilities of modern FPGAs permit the mapping of increasingly complex applications into reconfigurable hardware. High-level synthesis (HLS) promises a significant shortening of the FPGA design cycle by raising the abstraction level of the design entry to high-level languages such as C/C++. Applications using dynamic, pointer-based data structures and dynamic memory allocation, however, remain difficult to implement well, yet such constructs are widely used in software. Automated optimizations that aim to leverage the increased memory bandwidth of FPGAs by distributing the application data over separate banks of on-chip memory are often ineffective in the presence of dynamic data structures, due to the lack of an automated analysis of pointer-based memory accesses. In this work, we take a step towards closing this gap. We present a static analysis for pointer-manipulating programs which automatically splits heap-allocated data structures into disjoint, independent regions. The analysis leverages recent advances in separation logic, a theoretical framework for reasoning about heap-allocated data which has been successfully applied in recent software verification tools. Our algorithm focuses on dynamic data structures accessed in loops and is accompanied by automated source-to-source transformations which enable automatic loop parallelization and memory partitioning by off-the-shelf HLS tools. We demonstrate successful loop parallelization and memory partitioning by our tool flow using three real-life applications which build, traverse, update, and dispose of dynamically allocated data structures. Our case studies, comparing the automatically parallelized to the direct HLS implementations, show an average latency reduction by a factor of 2.5 across our benchmarks. 
Time / date:  Fri 6th March 2015, 4–5pm 

Speaker:  Gabrielle Anderson, PPLV, UCL 
Location:  309 Roberts Building 
Title:  Defeasible Reasoning in a Sequent Calculus 
Abstract:  When abstract models of argumentation are combined with accounts of the structure of arguments, two lines of research have emerged. In one, there is an attempt to reduce defeasible reasoning to inconsistency handling in classical logic; in the other, there are several approaches to modelling common-sense reasoning using a combination of strict and defeasible rules, notably ASPIC+. In this work, we show how simple rules of knowledge representation, and of a sequent calculus for a logic handling levels of defeasible reasoning, can overcome the divisions between these lines by proving that they (1) encompass the ASPIC+ formalisation of arguments and (2) respect the rationality postulates for argumentation reasoning. 
Time / date:  Fri 6th February 2015, 4–5pm 

Speaker:  Tyler Sorensen, PPLV, University College London 
Location:  1.20 Malet Place Engineering Building (MPEB) 
Title:  GPU concurrency: Weak behaviours and programming assumptions 
Abstract: 
Concurrency is pervasive and perplexing, particularly on graphics processing units (GPUs). Current specifications of languages and hardware are inconclusive; thus programmers often rely on folklore assumptions when writing software. To remedy this state of affairs, we conducted a large empirical study of the concurrent behaviour of deployed GPUs. Armed with litmus tests (i.e. short concurrent programs), we questioned the assumptions in programming guides and vendor documentation about the guarantees provided by hardware. We developed a tool to generate thousands of litmus tests and run them under stressful workloads. We observed a litany of previously elusive weak behaviours, and exposed folklore beliefs about GPU programming—often supported by official tutorials—as false. As a way forward, we propose a model of Nvidia GPU hardware, which correctly models every behaviour witnessed in our experiments. The model is a variant of SPARC Relaxed Memory Order (RMO), structured following the GPU concurrency hierarchy. 
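For readers unfamiliar with litmus tests, the classic message-passing (MP) shape can be sketched as follows. This Python sketch (hypothetical names) only illustrates the structure of such a test: CPython's interpreter lock prevents the weak outcome that the study could observe on GPU hardware.

```python
import threading

def mp_litmus():
    """One run of the classic message-passing (MP) litmus test.
    T0 writes data then a flag; T1 reads the flag then the data.
    On a weakly ordered architecture the outcome (flag=1, data=0)
    may be observable; CPython's interpreter lock makes it
    unobservable here, so this only shows the shape of the test."""
    shared = {"data": 0, "flag": 0}
    result = {}

    def t0():
        shared["data"] = 1   # publish the payload
        shared["flag"] = 1   # then raise the flag

    def t1():
        result["r0"] = shared["flag"]  # observe the flag
        result["r1"] = shared["data"]  # then read the payload

    a, b = threading.Thread(target=t0), threading.Thread(target=t1)
    a.start(); b.start(); a.join(); b.join()
    return result["r0"], result["r1"]

# The outcome (1, 0) is the "weak" behaviour forbidden under
# sequential consistency.
```

The tool described in the talk generates thousands of such tests automatically and runs them under memory-stressing workloads to provoke the rare outcomes.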
Time / date:  Fri 23rd January 2015, 4–5pm 

Speaker:  Nathan Chong, PPLV, University College London 
Location:  508 Roberts Building 
Title:  Scalable Verification Techniques for DataParallel Programs 
Abstract: 
Verification techniques that can scale to reasoning about real, rather than synthetic or toy, programs are essential for practical program verifiers. GPUVerify is a verification tool, built at MSR and Imperial, for reasoning about data-parallel programs, in particular massively parallel Graphics Processing Unit (GPU) programs. In this domain we have explored different characteristics of scalability: verifier precision, performance, and automation. After covering the core verification method that enables GPUVerify to scale to realistic problem sizes, I will talk about:

Time / date:  Fri 12th December 2014, 4–5pm 

Speaker:  Daniel Lustig, Princeton University 
Location:  6.12 Malet Place Engineering Building (MPEB) 
Title:  Checking microarchitectural implementations of weak memory 
Abstract: 
In parallel programs, threads communicate according to the memory consistency model: the set of memory ordering rules enforced by a given architecture. A great deal of research effort has gone into rigorous formalization of memory models. However, comparatively little attention has been paid to the microarchitectural implementation of these models. Standard testing-based techniques are insufficient: unobserved behaviors are not guarantees of non-observability, and failed tests offer little insight into the microarchitectural reason behind a failure. I will present PipeCheck, a methodology and automated tool for verifying that a given microarchitecture correctly implements the memory consistency model specified by its architectural specification. PipeCheck adapts the formal notion of a "happens before" graph to the microarchitecture space. Architectural specifications such as preserved program order are then treated as propositions to be verified, rather than simply as assumptions. We also specify and analyze the behavior of common microarchitectural optimizations such as speculative load reordering. Using PipeCheck, we were able to validate the OpenSPARC T2 in just minutes, and we found and fixed a bug in an architectural simulator widely used in academia. PipeCheck has been nominated for the Best Paper award at MICRO 2014, to be held in Cambridge, UK this December. At the end, I will also briefly introduce ArMOR, a framework for defining dynamic binary translation layers that seamlessly translate between memory consistency models such as those used by x86, ARM, and Power. Bio: Daniel Lustig is a PhD candidate in Electrical Engineering at Princeton University. His research interests lie in the field of computer architecture, with a particular emphasis on designing memory systems for heterogeneous architectures. His recent work focuses on the verification of microarchitectural implementations of memory consistency models. 
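Consistency of a "happens before" graph comes down to acyclicity: a cycle means the proposed orderings contradict each other. A minimal, generic sketch of that check (not PipeCheck's actual implementation, which works in Coq over microarchitecture-level graphs):

```python
def has_cycle(nodes, edges):
    """Detect a cycle in a directed happens-before graph via DFS.
    A consistent execution corresponds to an acyclic graph; a cycle
    means the proposed orderings are contradictory, so the candidate
    execution is impossible."""
    succ = {n: [] for n in nodes}
    for a, b in edges:
        succ[a].append(b)
    WHITE, GREY, BLACK = 0, 1, 2    # unvisited / on stack / done
    colour = {n: WHITE for n in nodes}

    def dfs(n):
        colour[n] = GREY
        for m in succ[n]:
            if colour[m] == GREY:   # back edge: cycle found
                return True
            if colour[m] == WHITE and dfs(m):
                return True
        colour[n] = BLACK
        return False

    return any(colour[n] == WHITE and dfs(n) for n in nodes)
```

For example, edges st1→ld2 and ld2→st1 between two memory events would form a two-node cycle, ruling out the execution that generated them.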
Daniel earned his BSE from the University of Pennsylvania in 2009 and his MA from Princeton University in 2011. He also received an Intel PhD Fellowship in 2013. 
Time / date:  Wed 26th November 2014, 4–5pm 

Speaker:  Hristina Palikareva, Imperial College London 
Location:  1.20 Malet Place Engineering Building (MPEB) 
Title:  Constraint Solving in Symbolic Execution 
Abstract: 
Dynamic symbolic execution is an automated program analysis technique that employs an SMT solver to systematically explore paths through a program. It has been acknowledged in recent years as a highly effective technique for generating high-coverage test suites as well as for uncovering deep corner-case bugs in complex real-world software, and a key factor responsible for that success is the tremendous advance in SMT-solving technology. Nevertheless, constraint solving remains one of the fundamental challenges of symbolic execution, and for many programs it is the main performance bottleneck. In this talk, I will first give a very general introduction to symbolic execution, and then I will present the results reported in our CAV 2013 paper on integrating support for multiple SMT solvers in the dynamic symbolic execution engine KLEE. In particular, I will outline the key characteristics of the SMT queries generated during symbolic execution, describe several high-level optimisations that KLEE employs to exploit those characteristics, introduce an extension of KLEE that uses a number of state-of-the-art SMT solvers (Boolector, STP and Z3) through the metaSMT solver framework, and compare the solvers' performance when run under KLEE. In addition, I will discuss several options for designing a parallel portfolio solver for symbolic execution tools. This is joint work with Cristian Cadar. 
Time / date:  Wed 12th November 2014, 4–5pm 

Speaker:  Reuben Rowe, PPLV, University College London 
Location:  1.03 Malet Place Engineering Building (MPEB) 
Title:  Towards Type Inference for Nakano's Modality 
Abstract: 
Around fifteen years ago, Nakano introduced an extension to a system of recursive types for the Lambda Calculus consisting of a unary type constructor, or modality, guarding the occurrences of recursive references. This modest extension afforded a powerful result: the guarantee of a particular kind of termination (specifically, head normalisation). That is, programs typeable in this system are guaranteed to produce output. Since then, much research has been done to understand the semantics of this modality, and utilise it in more sophisticated type systems. Notable contributions to this effort include: the work of Birkedal et al. and Benton and Krishnaswami, who study semantic models induced by the modality; research by Appel et al. and Pottier, who employ the modality in typed intermediate representations of programs; and Atkey and McBride's work on coprogramming for infinite data. While some of this work explicitly addresses the possibility of type checking (e.g., that of Pottier), to the best of our knowledge type inference is still an unsolved problem. The lack of a type inference procedure presents a significant barrier to the wider adoption of Nakano's technique in practice. In this talk, I will describe a (conservative) extension to Nakano's system which allows for the development of a (sound but incomplete) unification-based algorithm for deciding whether a term is typeable by constructing a suitably general type. 
Time / date:  Wed 15th October 2014, 4–5pm  

Speaker:  Carsten Fuhs, PPLV, UCL  
Location:  1.02 Malet Place Engineering Building (MPEB)  
Title:  Disproving termination with overapproximation  
Abstract:  When disproving termination using known techniques (e.g. recurrence sets), abstractions that overapproximate the program's transition relation are unsound. We introduce live abstractions, a natural class of abstractions that can be combined with the recent concept of closed recurrence sets to soundly disprove termination. To demonstrate the practical usefulness of this new approach we show how programs with nonlinear, nondeterministic, and heap-based commands can be shown nonterminating using linear overapproximations. This talk is about joint work with Byron Cook, Kaustubh Nimkar, and Peter O'Hearn.  
Time / date:  Wed 1st October 2014, 4–5pm  

Speaker:  Ana Armas, University of Oxford  
Location:  1.03 Malet Place Engineering Building (MPEB)  
Title:  Ontology Module Extraction via Datalog Reasoning  
Abstract:  Module extraction – the task of computing a (preferably small) fragment M of an ontology T that preserves entailments over a signature S – has found many applications in recent years. Extracting modules of minimal size is, however, computationally hard, and often algorithmically infeasible. Thus, practical techniques are based on approximations, where M provably captures the relevant entailments, but is not guaranteed to be minimal. Existing approximations, however, ensure that M preserves all second-order entailments of T w.r.t. S, which is stronger than is required in many applications, and may lead to large modules in practice. In this paper we propose a novel approach in which module extraction is reduced to a reasoning problem in datalog. Our approach not only generalises existing approximations in an elegant way, but it can also be tailored to preserve only specific kinds of entailments, which allows us to extract significantly smaller modules. An evaluation on widely used ontologies has shown very encouraging results.  
Time / date:  Wed 2nd July 2014, 4–5pm  

Speaker:  Vincent Nimal, University of Oxford  
Location:  6.12 Malet Place Engineering  
Title:  Don’t sit on the fence: A static analysis approach to automatic fence insertion  
Abstract:  Modern architectures rely on memory fences to prevent undesired weakenings of memory consistency. As the fences' semantics may be subtle, the automation of their placement is highly desirable. But precise methods for restoring consistency do not scale to deployed systems code. We choose to trade some precision for genuine scalability: our technique is suitable for large code bases. We implement it in our new musketeer tool, and detail experiments on more than 350 executables of packages found in Debian Linux 7.1, e.g. memcached (about 10000 LoC). This talk will be presented at CAV and should last about 15 minutes. This is joint work with Jade Alglave, Daniel Kroening and Daniel Poetzl. Please do not hesitate to comment on the slides and the structure of the talk – we will really appreciate it!  
Time / date:  Wed 18th June 2014, 4–5pm  

Speaker:  Caterina Urban, ENS Paris  
Location:  Roberts 309  
Title:  Automatic Inference of Ranking Functions by Abstract Interpretation  
Abstract:  We present a family of parameterized abstract domains for proving termination of imperative programs by abstract interpretation. The domains automatically synthesize piecewise-defined lexicographic ranking functions and infer sufficient preconditions for program termination. The abstract domains are parameterized by a numerical abstract domain for state partitioning and a numerical abstract domain for ranking functions. This parameterization makes it easy to tune the trade-off between the precision and cost of the analysis. We describe instantiations of these domains with intervals, octagons, polyhedra and affine functions. We have implemented a prototype static analyzer for proving conditional termination of programs written in (a subset of) C and, using experimental evidence, we show that it is competitive with the state of the art and performs well on a wide variety of benchmarks.  
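To make the object of the analysis concrete: a ranking function maps states to values that are bounded below and strictly decrease on every transition. The toy Python sketch below (hypothetical names) merely *checks* a candidate on sampled states; inferring one automatically, including piecewise and lexicographic forms, is the seminar's contribution.

```python
def check_ranking_function(f, step, states):
    """Check, on sampled states, that f is a valid ranking function for
    the transition function `step`: f is bounded below by 0 on non-final
    states and strictly decreases across every transition.
    `step(s)` returns the successor state, or None if s is final."""
    for s in states:
        t = step(s)
        if t is None:                 # s is a final state; nothing to check
            continue
        if not (f(s) >= 0 and f(t) < f(s)):
            return False
    return True

# Loop `while x > 0: x -= 2`, with candidate ranking function f(x) = x.
step = lambda x: x - 2 if x > 0 else None
```

Here `f(x) = x` is accepted, while a constant function is rejected because it never decreases.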
Time / date:  Wed 4th June 2014, 4–5pm  

Speaker:  Eric Koskinen, New York University  
Location:  6.12 Malet Place Engineering  
Title:  Local Temporal Reasoning  
Abstract:  I will discuss our recent development of the first method for proving temporal logic properties of programs written in higher-order languages such as C#, F#, Haskell, OCaml, Perl, Ruby, Python, etc. By distinguishing between the finite traces and infinite traces in the specification, we obtain rules that permit us to reason about the temporal behavior of program parts via a type-and-effect system, which is then able to compose these facts together to prove the overall target property of the program. The type system alone is strong enough to derive many temporal safety properties, for example when using dependent (refinement) types and temporal effects. We also show how existing techniques can be used as oracles to provide liveness information (e.g. termination) about program parts, and that the type-and-effect system can combine this information with temporal safety information to derive nontrivial temporal properties. Our work has application toward verification of higher-order software, as well as modular strategies for interprocedural programs. Joint work with Tachio Terauchi. To appear in LICS 2014.  
Time / date:  Wed 21st May 2014, 4–5pm  

Speaker:  Ross Duncan, Univ. Strathclyde  
Location:  Roberts 309  
Title:  Verifying Quantum Computations with Graphical Calculi  
Abstract:  Quantum computation takes place in a mathematical environment rather different to the familiar world of classical computing. Phenomena such as complementarity and entanglement don't have any counterpart in the systems usually studied by computer scientists, and the usual logical structures we rely on to analyse computation don't exist. However, studying the unique features of the quantum world reveals that quantum computation can be formalised using the theory of monoidal categories, where various internal algebras do the heavy lifting. This theory can be expressed in purely diagrammatic terms, and equational reasoning can be carried out by rewriting in a graphical calculus. I'll introduce the relevant structures and show how to verify the correctness of some quantum programs using these techniques.  
Time / date:  Wed 23rd April 2014, 4–5pm  

Speaker:  Will Sonnex, University of Cambridge  
Location:  Roberts 309  
Title:  Proving terms equivalent using fixpoint fusion  
Abstract:  When working with functional programming languages the traditional approach to proving two terms equivalent is to use proof by induction. We can use this to prove boolean properties by checking if such a term is equal to "True". Automated proof by induction is therefore a powerful tool for verifying functional programs. My work is focused on proving terms equivalent using term rewriting techniques, particularly fixpoint fusion, to simplify terms within a property until the property is trivially true. For equations this would be simplifying both sides to alpha-equal terms. For properties this would be rewriting the term to "True". I will demo my automated term rewriting program "Elea", which is able to perform as well as the previous state of the art (up until 2 years ago, damn you HipSpec) in automated induction.  
Time / date:  Wed 9th April 2014, 4–5pm  

Speaker:  Gabrielle Anderson, UCL  
Location:  Roberts 309  
Title:  Bunched Resource Process Calculus  
Abstract:  The Synchronous Calculus of Resources and Processes by Pym and others aims to provide a methodology for systems modelling that follows intuition about how complex systems consist of objects as well as dynamics, and that provides compositionality in terms of behaviour and of logical properties. While this work achieved most of its aims, it had one significant drawback: it lacked one direction of the Hennessy-Milner property, namely that logically equivalent systems are also behaviourally equivalent. In this work we present a variant on SCRP that we call BRPC, and show that it has both directions of the HM property.  
Time / date:  Wed 26th March 2014, 4–5pm  

Speaker:  Mehrnoosh Sadrzadeh, Queen Mary University of London  
Location:  Malet Engineering 1.20  
Title:  Compositional Vector Space Models of Natural Language  
Abstract:  Vector space models of natural language originated in the work of Chomsky's supervisor: Z. Harris. They represent meanings of words by vectors whose coefficients are the frequencies of co-occurrence of the word in different contexts. They have proved successful in various natural language tasks (e.g. automatic thesauri construction), but have been unable to represent meanings of sentences. I will present a model that solves this problem by combining the vector space model with the grammatical structures of sentences. The latter are type algebras that vary over monoidal structures originating in the work of J. Lambek. The combination is made possible via a homomorphic passage from the monoidal structures to vector spaces. This is joint work with a bunch of people such as Coecke (Oxford) and Clark (Cambridge). The inspiration behind the work came from categorical models of quantum mechanics (which also have a vector space model) developed by Abramsky and Coecke.  
Time / date:  Wed 12th March 2014, 4–5pm  

Speaker:  Gareth Smith, Imperial College London  
Location:  MPEB 1.03  
Title:  A Trusted Mechanised JavaScript Specification  
Abstract:  JavaScript is the most widely used web language for client-side applications. Whilst the development of JavaScript was initially just led by implementation, there is now increasing momentum behind the ECMA standardisation process. The time is ripe for a formal, mechanised specification of JavaScript, to clarify ambiguities in the ECMA standards, to serve as a trusted reference for high-level language compilation and JavaScript implementations, and to provide a platform for high-assurance proofs of language properties. This talk presents JSCert, a formalisation of the current ECMA standard in the Coq proof assistant, and JSRef, a reference interpreter for JavaScript extracted from Coq to OCaml. We have a Coq proof that JSRef is correct with respect to JSCert and have assessed JSRef using test262, the ECMA conformance test suite. Our methodology ensures that JSCert is a comparatively accurate formulation of the English standard, which will only improve as time goes on. We have demonstrated that modern techniques of mechanised specification can handle the complexity of JavaScript.  
Time / date:  Wed 13th November 2013, 2–3pm  

Speaker:  Pavle Subotic  
Location:  Roberts 421  
Title:  Exploring Interpolants  
Abstract:  Craig Interpolation is a standard method to construct and refine abstractions in model checking. To obtain abstractions that are suitable for the verification of software programs or hardware designs, model checkers rely on theorem provers to find the right interpolants, or interpolants containing the right predicates, in a generally infinite lattice of interpolants for any given interpolation problem. The problem of guiding solvers to the right interpolants has so far been investigated mainly on a syntactic level. In this talk I present a semantic and solver-independent framework for systematically exploring interpolant lattices, based on the notion of interpolation abstraction. I will discuss how interpolation abstractions can be constructed for a variety of logics, and how they can be exploited in the context of software model checking.  
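As background, the definition of a Craig interpolant can be made concrete by brute force in the propositional case: given A and B with A ∧ B unsatisfiable, an interpolant I mentions only their shared variables, is entailed by A, and is inconsistent with B. The following toy Python sketch (hypothetical names, exponential in the number of variables) is only meant to illustrate the definition; the seminar's framework operates at the level of theory solvers.

```python
from itertools import product

def interpolant(A, B, vars_A, vars_B):
    """Compute a Craig interpolant for propositional formulas A, B
    (assumed jointly unsatisfiable) by brute force. Formulas are
    Python predicates over dicts mapping variable names to booleans.
    The returned predicate I mentions only shared variables."""
    shared = sorted(set(vars_A) & set(vars_B))
    only_A = sorted(set(vars_A) - set(shared))

    def I(env):
        # I holds on a shared assignment iff some extension satisfies A;
        # then A entails I, and I ∧ B is unsatisfiable whenever A ∧ B is.
        return any(A({**{v: env[v] for v in shared},
                      **dict(zip(only_A, ext))})
                   for ext in product([False, True], repeat=len(only_A)))
    return I

# Toy example: A = a ∧ s and B = ¬s ∧ b share only s; the interpolant is s.
A = lambda e: e["a"] and e["s"]
B = lambda e: (not e["s"]) and e["b"]
I = interpolant(A, B, ["a", "s"], ["s", "b"])
```

This particular construction always yields the *strongest* interpolant; the point of the talk is precisely that many other interpolants exist in the lattice, and that choosing well matters for refinement.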
Time / date:  Friday 8th November 2013, 4–5pm  

Speaker:  Thomas Stroeder  
Location:  Roberts 309  
Title:  Proving Deadlock and Livelock Freedom  
Abstract:  In this talk we develop a general method for proving deadlock and livelock freedom of concurrent programs with shared memory. Our goal in this work is to support programs which use locks stored in mutable data structures. The key to our technique is the observation that dependencies between locks can be abstracted using recursion and nondeterminism in a sequential program, such that the termination of the abstraction implies deadlock and livelock freedom. We discuss applications to examples including lock-coupling lists, the optimistic skiplist of Heller et al., the lock-based queue of Michael and Scott, and the balanced binary trees of Kung and Lehman, Bronson et al., and Schlatter Ellis. 
Time / date:  Fri 11th Oct 2013, 4–5pm  

Speaker:  Aquinas Hobor  
Location:  Roberts 508  
Title:  Decision Procedures over Sophisticated Fractional Permissions  
Abstract:  Fractional permissions enable sophisticated management of resource accesses in both sequential and concurrent programs. Entailment checkers for formulae that contain fractional permissions must be able to reason about said permissions to verify the entailments. We show how entailment checkers for separation logic with fractional permissions can extract equation systems over fractional shares. We develop a set of decision procedures over equations drawn from the sophisticated boolean binary tree fractional permission model developed by Dockins et al. We prove that our procedures are sound and complete and discuss their computational complexity. We explain our implementation and provide benchmarks to help understand its performance in practice. We detail how our implementation has been integrated into the HIP/SLEEK verification toolset. We have machine-checked proofs in Coq.  
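For intuition, the simplest fractional permission model uses rationals in (0, 1]. The Python sketch below (hypothetical names) illustrates split and join in that simple model; it is explicitly *not* the boolean binary tree share model of Dockins et al. that the decision procedures target, which was designed to fix known shortcomings of the rational model (e.g. disjointness of splits).

```python
from fractions import Fraction

class Perm:
    """A fractional permission as a rational in (0, 1]: 1 grants full
    (read/write) access, any positive fraction grants read access.
    This is the simple rational model, not the boolean-binary-tree
    shares of Dockins et al."""
    def __init__(self, f):
        f = Fraction(f)
        assert 0 < f <= 1, "permissions lie in (0, 1]"
        self.f = f

    def split(self):
        # Divide a permission into two halves, e.g. to share a location
        # between two threads for reading.
        return Perm(self.f / 2), Perm(self.f / 2)

    def join(self, other):
        # Recombine two permissions; the sum must stay within (0, 1].
        return Perm(self.f + other.f)

    def writable(self):
        return self.f == 1
```

A thread holding the full permission can split it, hand half to another thread, and regain write access only after joining the halves back together.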
Time / date:  Fri 27th Sept 2013, 4–5pm  

Speaker:  Viktor Vafeiadis  
Location:  Roberts 309  
Title:  Relaxed separation logic: A program logic for C11 concurrency  
Abstract:  The talk will introduce relaxed separation logic (RSL), a program logic for reasoning about concurrent programs running under the C11 relaxed memory model. From a user's perspective, RSL is an extension of concurrent separation logic (CSL) with proof rules for the various kinds of C11 atomic accesses. As in CSL, individual threads are allowed to access non-atomically only the memory that they own, thus preventing data races. Ownership can, however, be transferred via certain atomic accesses. For SC-atomic accesses, arbitrary ownership transfer can occur; for acquire/release atomic accesses, ownership transfer can occur only in one direction; while for relaxed atomic accesses, no ownership transfer is possible.  
Time / date:  Fri 21st June 2013, 4–5pm  

Speaker:  Joel Ouaknine  
Location:  Malet Place Eng 1.02  
Title:  Decision Problems for Linear Recurrence Sequences  
Abstract:  Linear recurrence sequences (such as the Fibonacci numbers) permeate a vast number of areas of mathematics and computer science (in particular: program termination and probabilistic verification), and also have many applications in other fields such as economics, theoretical biology, and statistical physics. In this talk, I will focus on three fundamental decision problems for linear recurrence sequences, namely the Skolem Problem (does the sequence have a zero?), the Positivity Problem (are all terms of the sequence positive?), and the Ultimate Positivity Problem (are all but finitely many terms of the sequence positive?). This is joint work with James Worrell and Matt Daws.  
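The three decision problems are easy to state concretely. The short Python sketch below (hypothetical names) enumerates terms of a linear recurrence sequence and searches a finite prefix for a zero. This is only a semi-decision procedure: if no zero appears within the bound, nothing is concluded, and the decidability of the full Skolem Problem remains a long-standing open question.

```python
def lrs_terms(coeffs, init, n):
    """Generate the first n terms of the linear recurrence
    a(k) = coeffs[0]*a(k-1) + ... + coeffs[d-1]*a(k-d),
    with initial terms given by `init` (length d)."""
    a = list(init)
    while len(a) < n:
        a.append(sum(c * x
                     for c, x in zip(coeffs, reversed(a[-len(coeffs):]))))
    return a[:n]

def first_zero(coeffs, init, bound):
    """Search the first `bound` terms for a zero: a semi-decision
    procedure for the Skolem Problem. Returns the index of the first
    zero, or None if none occurs within the bound."""
    for i, t in enumerate(lrs_terms(coeffs, init, bound)):
        if t == 0:
            return i
    return None
```

For example, a(k) = a(k-1) - a(k-2) with initial terms 1, 1 runs 1, 1, 0, -1, -1, 0, ... and has its first zero at index 2, whereas the Fibonacci sequence has none.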
Time / date:  Fri 14th June 2013, 4–5pm 

Speaker:  Abi See 
Location:  Roberts 309 
Title:  Ramsey vs. lexicographic termination proving 
Abstract:  Termination proving has traditionally been based on the search for (possibly lexicographic) ranking functions. In recent years, however, the discovery of termination proof techniques based on Ramsey's theorem has led to new automation strategies, e.g. size-change, or iterative reductions from termination to safety. We revisit the decision to use Ramsey-based termination arguments in the iterative approach, and describe a new iterative termination proving procedure that instead searches for lexicographic termination arguments. Using experimental evidence we show that this new method leads to dramatic speedups. 
Time / date:  Fri 15th March 2013, 12pm 

Speaker:  Zachary Kincaid 
Location:  Malet Place Eng 1.20 
Title:  Proving Correctness of Concurrent Programs with Data Flow Graphs 
Abstract:  We propose inductive data flow graphs, data flow graphs with incorporated inductive assertions, as the basis of an approach to verifying concurrent programs. An inductive data flow graph accounts for a set of dependencies between program events, and therefore stands as a representation for the set of executions which give rise to these dependencies. By representing information about dependencies rather than control flow, inductive data flow graphs can yield very succinct proofs. Our strategy for verifying concurrent programs defers reasoning about control to the proof checking step, a purely combinatorial problem, thus avoiding the need to reason about data and control simultaneously. 
Time / date:  Fri 8th March 2013, 11am–12pm 

Speaker:  Alexey Gotsman 
Location:  6.12 MPEB 
Title:  Abstraction for Weakly Consistent Systems 
Abstract:  When constructing complex concurrent and distributed systems, abstraction is vital: programmers should be able to reason about system components in terms of abstract specifications that hide the implementation details. Nowadays such components often provide only weak consistency guarantees about the data they manage: in shared-memory systems because of the effects of relaxed memory models, and in distributed systems because of the effects of replication. This makes existing notions of component abstraction inapplicable. In this talk I will describe our ongoing effort to specify consistency guarantees provided by modern shared-memory and distributed systems in a uniform framework and to propose notions of abstraction for components of such systems. I will illustrate our results using the examples of the C/C++ memory model and eventually consistent distributed systems. This is joint work with Mark Batty (University of Cambridge), Sebastian Burckhardt (Microsoft Research), Mike Dodds (University of York) and Hongseok Yang (University of Oxford). 
Time / date:  Fri 1st March 2013, 4–5pm 

Speaker:  Shin Yoo 
Location:  MPEB 6.12 
Title:  Fault Localization Prioritization: Comparing Information Theoretic and Coverage Based Approaches 
Abstract:  Test case prioritization techniques seek to maximise early fault detection. Fault localization seeks to use test cases already executed to help find the fault location. There is a natural interplay between the two techniques; once a fault is detected, we often switch focus to fault fixing, for which localization may be a first step. In this paper we introduce the Fault Localization Prioritization (FLP) problem, which combines prioritization and localization. We evaluate three techniques: a novel FLP technique based on information theory, FLINT (Fault Localization using Information Theory), that we introduce in this paper, a standard Test Case Prioritization (TCP) technique and a ‘test similarity technique’ used in previous work. Our evaluation uses five different releases of four software systems. The results indicate that FLP and TCP can statistically significantly reduce fault localization costs for 73% and 76% of cases respectively and that FLINT significantly outperforms similaritybased localization techniques in 52% of the cases considered in the study. 
Time / date:  Fri 8th February 2013, 4–5pm 

Speaker:  Thomas Dillig 
Location:  MPEB 6.12 
Title:  Automated Error Diagnosis Using Abductive Inference 
Abstract:  When program verification tools fail to verify a program, either the program is buggy or the report is a false alarm. In this situation, the burden is on the user to manually classify the report, but this task is time-consuming, error-prone, and does not utilize facts already proven by the analysis. We present a new technique for assisting users in classifying error reports. Our technique computes small, relevant queries, presented to the user, that capture exactly the information the analysis is missing to either discharge or validate the error. Our insight is that identifying these missing facts is an instance of the abductive inference problem in logic, and we present a new algorithm for computing the smallest and most general abductions in this setting. We perform the first user study to rigorously evaluate the accuracy of, and effort involved in, manual classification of error reports. Our study demonstrates that our new technique is very useful for improving both the speed and accuracy of error report classification. 
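The abductive queries can be illustrated with a toy propositional version. The brute-force Python sketch below (hypothetical names, not the paper's algorithm) searches for a smallest set of abducible atoms whose conjunction is consistent with the known facts and suffices to entail the goal; such a conjunction is exactly the kind of missing fact a user would be asked to confirm or refute.

```python
from itertools import combinations, product

def envs(vars_):
    """All truth assignments over the given variables."""
    for bits in product([False, True], repeat=len(vars_)):
        yield dict(zip(vars_, bits))

def entails(premise, goal, vars_):
    """premise entails goal: every assignment satisfying premise
    also satisfies goal."""
    return all(goal(e) for e in envs(vars_) if premise(e))

def abduce(gamma, phi, abducibles, vars_):
    """Find a smallest conjunction psi of abducible atoms such that
    gamma ∧ psi is consistent and gamma ∧ psi entails phi."""
    for k in range(len(abducibles) + 1):
        for lits in combinations(abducibles, k):
            psi = lambda e, ls=lits: all(e[v] for v in ls)
            both = lambda e: gamma(e) and psi(e)
            if (any(both(e) for e in envs(vars_))       # consistency
                    and entails(both, phi, vars_)):     # sufficiency
                return lits
    return None

# Toy facts gamma = (p implies q); goal phi = q; abducible atoms {p, r}.
gamma = lambda e: (not e["p"]) or e["q"]
phi = lambda e: e["q"]
```

On this example the smallest abduction is the single atom p: assuming p, together with p implies q, discharges the goal q.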
Time / date:  Monday 4th February 2013 

Speaker:  Mooly Sagiv 
Location:  MPEB 6.12 
Title:  Concurrent Data Representation Synthesis 
Abstract:  We describe an approach for synthesizing data representations for concurrent programs. Our compiler takes as input a program written using concurrent relations and synthesizes a representation of the relations as sets of cooperating data structures, as well as the placement and acquisition of locks to synchronize concurrent access to those data structures. The resulting code is correct by construction: individual relational operations are implemented correctly and the aggregate set of operations is serializable and deadlock-free. The relational specification also permits a high-level optimizer to choose the best performing of many possible legal data representations and locking strategies, which we demonstrate with an experiment autotuning a graph benchmark. This is joint work with Alex Aiken and Peter Hawkins (Stanford), Kathleen Fisher (DARPA), and Martin Rinard (MIT). This work is part of Peter Hawkins' thesis: http://theory.stanford.edu/~hawkinsp/ Please also see the December CACM article. 
Time / date:  Friday 1st February 2013, 4–5pm 

Speaker:  Mooly Sagiv 
Location:  103 & 104 Executive Suite 
Title:  Effectively-Propositional Reasoning About Reachability 
Abstract:  This paper proposes a novel method of harnessing existing SAT solvers to verify reachability properties of programs that manipulate linked-list data structures. Such properties are essential for proving program termination, correctness of data structure invariants, and other safety properties. Our solution is complete, i.e., a SAT solver produces a counterexample whenever a program does not satisfy its specification. This result is surprising since even first-order theorem provers usually cannot deal with reachability in a complete way, because doing so requires reasoning about transitive closure. Our result is based on the following ideas: 1. Programmers must write assertions in a restricted logic without quantifier alternation or function symbols. 2. The correctness of many programs can be expressed in such restricted logics, although we explain the tradeoffs. 3. Recent results in descriptive complexity can be utilized to show that every program that manipulates potentially cyclic, singly- and doubly-linked lists, and that is annotated with assertions written in this restricted logic, can be verified with a SAT solver. We implemented a tool atop Z3 and used it to show the correctness of several linked-list programs. 
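The reachability predicate the assertions above talk about — transitive closure of the `next` pointer, on possibly cyclic lists — can be illustrated on explicit heaps. The paper's contribution is encoding this predicate in effectively-propositional logic so a SAT solver can reason about it symbolically; the sketch below merely evaluates it on a concrete heap model (a dict from node to successor), which is what the solver's counterexamples would describe:

```python
def reachable(heap, src, dst):
    """Is dst reachable from src by following `next` pointers?

    `heap` maps each node to its successor; a missing entry or None is
    a null pointer.  Cycle-safe: stops when a node repeats.
    """
    seen, node = set(), src
    while node is not None and node not in seen:
        if node == dst:
            return True
        seen.add(node)
        node = heap.get(node)
    return node == dst  # catches dst being the null pointer itself
```

On the acyclic list a → b → c, node c is reachable from a but not vice versa; on the cycle a → b → a, every node reaches every other, but null is reachable from none of them.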
Time / date:  Friday 30th November 2012 

Speaker:  Dan Ghica 
Location:  MPEB 6.12 
Title:  Seamless Computing 
Abstract:  Computer architectures are evolving and diversifying: CPUbased, GPUbased, distributed, reconfigurable, cloudbased and so on. Along with them we are faced with a proliferation of domainspecific architecturedependent languages, leading to an environment in which programming is laborious and errorprone. In this talk I will outline an ongoing research project dedicated to the study of architectureindependent programming, from theoretical foundations (semantics and types) to compiler support. 
Time / date:  Fri 14th Sept 2012, 4–5pm 

Speaker:  Fabrizio Montesi 
Location:  Cruciform Foyer 101 Seminar Room 1 
Title:  Deadlock-freedom-by-design: Multiparty Asynchronous Global Programming 
Abstract:  Chor (http://www.chorlang.org/) is a new programming language for communicating systems, where both behavioural protocol specifications and implementations (choreographies) are given at a global level. The code for each concrete endpoint entity of a system can then be automatically generated, with guarantees of soundness and completeness with respect to the originating descriptions. Interestingly, by exploiting global descriptions, Chor defines a new class of asynchronous deadlock-free systems which eludes standard practices in object-oriented and session-based type systems. In this talk, I will give an overview of the formal model behind Chor and demonstrate its prototype implementation, an IDE and compiler that allow one to experiment with the global programming paradigm for the first time. The target implementation language of the Chor compiler is Jolie (http://www.jolielang.org/), enabling the generated code to interoperate with a variety of environments. 
Time / date:  Wed 12th Sept 2012, 12–1pm 

Speaker:  Thomas Stroeder 
Location:  Malet Place Eng 1.20 
Title:  Automated Termination Analysis of Imperative Programs with Pointer Arithmetic and Heap Manipulations 
Abstract:  Automated termination analysis of imperative programs has been studied very intensively during the last decade and powerful tools performing this task have been developed. While some of these tools can already successfully analyze programs involving heap manipulations, the combination of pointer arithmetic and heap manipulations is still an open problem. Such combinations are especially used in C functions for string processing, where a string is traversed using pointer arithmetic until a zero value is found on the heap. We will show a novel method for automated termination analysis of imperative programs based on the Low-Level Virtual Machine (LLVM) framework, symbolic evaluation graphs, and term rewriting. We will demonstrate its capabilities on the example of a standard C implementation of the strlen function called on zero-terminated strings, and give an overview of preliminary experimental results with a prototype implementation on a small benchmark set of programs. 
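The strlen example above terminates because a ranking argument exists: under the precondition that a zero byte lies at or after the scan pointer, the distance from the pointer to the first terminator is a natural number that strictly decreases on every iteration. The sketch below (a Python model of the C loop, with memory as a byte list and pointers as indices — an illustration of the shape of argument such an analysis constructs, not the paper's method) makes that measure explicit:

```python
def strlen(mem, p):
    """C-style strlen over the byte list `mem`, starting at index p.

    Assumed precondition for termination: some mem[i] == 0 with i >= p.
    The ranking function r = (index of first zero >= p) - p strictly
    decreases with each pointer-arithmetic step, witnessing termination.
    """
    zero = mem.index(0, p)              # first terminator at or after p
    q = p
    while mem[q] != 0:
        old_rank = zero - q
        q += 1                          # the loop's pointer-arithmetic step
        assert 0 <= zero - q < old_rank  # the measure strictly decreases
    return q - p
```

Without the precondition (no terminator reachable), no such measure exists and the C loop indeed runs off the end of the buffer.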
Time / date:  Fri 7 Sept 2012, 11am–12pm 

Speaker:  Marc Brockschmidt 
Location:  Cruciform Foyer 201 Seminar Room 3 
Title:  Proving Java Termination: Reducing the Ugly to the Bad 
Abstract:  We present symbolic evaluation graphs for Java programs, obtained by symbolic evaluation and abstraction on a simple abstract domain. As over-approximations of all possible program runs, these graphs then serve as the basis for further analyses. We present one possible use, termination analysis, for which the graph is transformed into Term Rewrite Systems (TRSs) whose termination implies termination of the original program. For the analysis of the resulting TRSs, we can then use existing standard tools. 
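For readers unfamiliar with the TRS back-end mentioned above, here is a minimal illustration (not from the talk) of why a term rewrite system terminates when every rule strictly decreases some measure of the term. The toy system below computes truncated subtraction via the rules minus(x, 0) → x and minus(s(x), s(y)) → minus(x, y); terms are nested tuples, and term size shrinks at every step:

```python
# Terms as nested tuples: ('0',), ('s', t), ('minus', t1, t2).
def rewrite(term):
    """Apply one rewrite step at the root, or return None if no rule matches.
    (Root rewriting suffices for this particular system.)"""
    if term[0] == 'minus':
        a, b = term[1], term[2]
        if b == ('0',):
            return a                      # minus(x, 0) -> x
        if a[0] == 's' and b[0] == 's':
            return ('minus', a[1], b[1])  # minus(s(x), s(y)) -> minus(x, y)
    return None

def size(term):
    """Number of function symbols in a term."""
    return 1 + sum(size(t) for t in term[1:])

def normalize(term):
    # Each rule strictly decreases term size, so this loop terminates:
    # the same style of measure argument that standard TRS tools discover.
    while (nxt := rewrite(term)) is not None:
        assert size(nxt) < size(term)
        term = nxt
    return term
```

For example, minus(s(s(0)), s(0)) rewrites to minus(s(0), 0) and then to s(0), with term sizes 6 → 4 → 2.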
Time / date:  Fri 31st Aug 2012, 4–5pm 

Speaker:  Delphine Demange 
Location:  Cruciform Foyer 101 Seminar Room 1 
Title:  Plan B: A Buffered Memory Model for Java 
Abstract:  The Java Memory Model (JMM) is an ambitious attempt to provide a semantics for concurrent and possibly racy Java programs. It aims at providing a precise semantics that is portable across all architectures and enables a variety of compiler optimizations. Unfortunately, the JMM has proven to be challenging for users to understand and for compiler writers to use. In fact, the formal statement of the model is flawed and existing Java compilers do not comply with it. We propose to investigate an alternative proposal that has a tractable definition and an intuitive semantics, relates easily to existing architectures, and still enables useful optimizations. To this end, we introduce a Buffered Memory Model for Java. Beyond its simplicity advantages for the programmer, the model is also amenable to formal reasoning and verification. We also demonstrate that it can be implemented efficiently on x86-TSO architectures. (Joint work with V. Laporte, L. Zhao, D. Pichardie, S. Jagannathan and J. Vitek.) 
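To give a flavour of the store buffering that both the x86-TSO target and buffered models in general expose (this is a generic textbook TSO sketch, not the talk's Plan B model): each thread's writes sit in a private FIFO buffer before draining to shared memory, a thread's reads see its own buffered writes first, and the classic "store buffering" litmus test lets both threads read stale zeros — an outcome impossible under sequential consistency.

```python
from collections import deque

class TSOMachine:
    """Toy x86-TSO-style machine: per-thread FIFO store buffers over
    shared memory, with store-to-load forwarding within a thread."""
    def __init__(self, memory):
        self.memory = dict(memory)
        self.buffers = {}

    def write(self, tid, addr, val):
        # Writes go into the thread's buffer, not straight to memory.
        self.buffers.setdefault(tid, deque()).append((addr, val))

    def read(self, tid, addr):
        # A thread sees its own most recent buffered write first.
        for a, v in reversed(self.buffers.get(tid, ())):
            if a == addr:
                return v
        return self.memory[addr]

    def flush(self, tid):
        # Drain the buffer to memory in FIFO order (like an mfence).
        buf = self.buffers.get(tid)
        while buf:
            addr, val = buf.popleft()
            self.memory[addr] = val

# Store-buffering litmus test: thread 1 writes x then reads y,
# thread 2 writes y then reads x; both loads can return the stale 0.
m = TSOMachine({'x': 0, 'y': 0})
m.write(1, 'x', 1)
m.write(2, 'y', 1)
r1, r2 = m.read(1, 'y'), m.read(2, 'x')
m.flush(1)
m.flush(2)
```

After the flushes, memory holds x = y = 1, yet both loads observed 0 — the relaxed behaviour a portable memory model for Java must either permit or rule out at a cost.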