Research Seminars

PPLV Research Seminars

The PPLV group now hosts fortnightly invited research seminars.

If you are interested in giving a research seminar, please contact Fredrik Dahlqvist in the first instance.

Upcoming seminars:

Past seminars:

Time / date: Thursday 13th July 2017 at 12pm
Speaker: Mark Briers (Alan Turing Institute)
Location: Roberts 309
Title: Defence and Security Strategic Programme at Turing
Abstract: The Defence and Security sector has always been a data rich environment, resulting in many scientific advances being published in the data science (and related) literature over the last 30 years. This talk will provide an overview of data science within the defence and security sector, detailing the key statistical and computing methodologies employed, and the research challenges currently facing the domain. The talk will also provide details on how to get involved with the Defence and Security Programme at Turing, and initial details of a forthcoming open call for proposals in the area of cyber data science.
Bio: Mark Briers is the Strategic Programme Director at the Alan Turing Institute. He has worked in the defence and security industry for over 16 years, directing research programmes in the area of statistical data analysis. He completed his PhD in 2007 at Cambridge University where he developed Sequential Monte Carlo based techniques for state-space filtering and smoothing. He is an Honorary Senior Lecturer at Imperial College London, and a member of several committees at the Royal Statistical Society. His current research interests include scalable Bayesian inference, sequential inference, and anomaly detection.

Time / date: Tuesday 4 July, 4-5pm
Speaker: Luca Vigano (King's College London)
Location: 6.12 MPEB
Title: Alpha-Beta Privacy
Abstract: Formally specifying privacy goals is not trivial. The most widely used approach in formal methods is based on the static equivalence of frames in the applied pi-calculus, basically asking whether or not the intruder is able to distinguish two given worlds. A subtle question is how we can be sure that we have specified all pairs of worlds to properly reflect our intuitive privacy goal. To address this problem, we introduce a novel and declarative way to specify privacy goals, called "alpha-beta privacy", and relate it to static equivalence. This new approach is based on specifying two formulae alpha and beta in first-order logic with Herbrand universes, where alpha reflects the intentionally released information and beta includes the actual cryptographic ("technical") messages the intruder can see. Then alpha-beta privacy means that the intruder cannot derive any "non-technical" statement from beta that he cannot derive from alpha already. We describe by a variety of examples how this notion can be used in practice. Even though alpha-beta privacy does not directly contain a notion of distinguishing between worlds, there is a close relationship to static equivalence of frames that we investigate formally. This allows us to justify (and criticize) the specifications that are currently used in verification tools, and obtain partial tool support for alpha-beta privacy.
Joint work with Sebastian Moedersheim and Thomas Gross.

Time / date: Thursday 8 June, 3-5pm
Speaker: Simon Docherty (UCL)
Location: MPEB 6.12.
Title: A Stone-Type Duality Theorem For Separation Logic via Its Underlying Bunched Logics.
Abstract: Stone-type duality theorems, which relate algebraic and relational/topological models of logic, are important because - in addition to elegant abstraction - they strengthen soundness and completeness to a categorical equivalence, yielding a framework through which both algebraic and topological methods can be brought to bear on a logic. In this talk we will show how duality can be extended to the structures that interpret Separation Logic. This approach synthesises a variety of techniques from modal, substructural and categorical logic and contextualises the 'resource semantics' interpretation underpinning Separation Logic amongst them.
The first half of the talk is a tutorial about the seminal duality theorem for Boolean algebras and the numerous ways it applies to logic and computer science. In the second half we introduce Separation Logic and extend the duality theorem to cover algebraic and topological models of it. The talk presupposes no prior knowledge of topology, category theory or separation logic.

Time / date: Tuesday 23 May, 3-5pm
Speaker: Gerco van Heerdt (UCL)
Location: Marconi room, Roberts Building, 11th floor.
Title: CALF: Categorical Automata Learning Framework.
Abstract: Automata learning is a technique that has successfully been applied in verification, with the automaton type varying depending on the application domain. Adaptations of automata learning algorithms for increasingly complex types of automata have to be developed from scratch because there is no abstract theory offering guidelines. This makes it hard to devise such algorithms, and it obscures their correctness proofs. In the first part of this talk, I recall the details of active learning algorithms for DFAs. The second part then introduces our category-theoretic formalism that provides an appropriately abstract foundation for studying automata learning. I will also discuss formal relations between algorithms for learning, testing, and minimisation within our framework.

Time / date: Tuesday 25 April, 2-5pm
Speakers: Robin Hirsch (UCL), Szabolcs Mikulas (Birkbeck), Paul Brunet (UCL)
Location: Roberts Building, 11th floor, the Marconi room.
Title: Special extended PPLV seminar on relational and algebraic methods.
1st Talk: Binary Relations and Games (Robin Hirsch).
2nd Talk: Axiomatising Domain Algebras (Szabolcs Mikulas).
3rd Talk: The Algebra of Languages with Intersection and Converse (Paul Brunet).
Abstracts: 1st Talk: I introduce a type of algebra of binary relations called Relation Algebra. I may mention some applications in temporal and spatial reasoning. Although it is undecidable whether a finite relation algebra can be represented as binary relations, we introduce a two-player, infinitely long game that tests representability. We use these games to establish a number of results.
This talk will be introductory. Some very basic knowledge about boolean algebras will help, as will a general fluency with mathematics, but I'll keep the talk self-contained.
For those who like to read ahead, the main material is covered here in sections 1 to 5.
2nd Talk: Domain algebras provide an elegant, one-sorted formalism for automated reasoning about program and system verification. Unfortunately, in the case of the basic signature of composition and domain, the class of relational domain algebras is not finitely axiomatizable (Hirsch-M). The proof exploits the fact that the natural ordering is unavailable in these algebras. One remedy is to expand the signature with an ordering - in fact we get a finitely axiomatized representation class for the signature consisting of composition, domain, range and converse (Bredikhin, Hirsch-M). But converse does not seem to be a natural operation from the verification point of view. In this talk we show that for the signature consisting of composition, domain, range and join, the equational theory of relational domain algebras is finitely axiomatizable.
3rd Talk: We consider the equational theory generated by all algebras of languages with the regular operations (empty language, singleton empty word, union, composition and Kleene star), together with the intersection and mirror image. We start by building a free representation of this algebra. We do this by associating with every term some sort of language of graphs, in such a way that two terms are universally equivalent if and only if they are mapped to the same language. This allows us to obtain a decision procedure to test the validity of equations in this algebra. This part of the proof relies on the construction and comparison of Petri net-based automata. The analysis of this step yields a proof that the equational theory we consider here is complete for the complexity class ExpSpace. Using the free representation we described earlier, and leaving the Kleene star aside, we will conclude this presentation by showing a complete finite axiomatisation of this theory, that is a finite set of identities from which every universal law of this algebra can be deduced.
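To make the setting of the third talk concrete, here is a hedged sketch (all names are invented for illustration) of a finite, truncated instance of an algebra of languages with the regular operations plus intersection and mirror image; truncating words at a bound keeps every operation finite:

```python
# A concrete finite instance of the algebras in the talk: languages
# over a small alphabet with the regular operations plus intersection
# and mirror image. Words are truncated at a bound so every operation
# stays finite; all names here are illustrative, not from the talk.

MAXLEN = 6

def trim(L): return {w for w in L if len(w) <= MAXLEN}

EMPTY = set()    # empty language
EPS = {""}       # singleton empty word

def union(L, M): return L | M

def comp(L, M):  # composition (concatenation)
    return trim({u + v for u in L for v in M})

def star(L):     # Kleene star, up to MAXLEN
    result, frontier = set(EPS), set(EPS)
    while frontier:
        frontier = comp(frontier, L) - result
        result |= frontier
    return result

def mirror(L): return {w[::-1] for w in L}

A, B = {"a"}, {"b", "ab"}
```

On such instances one can check sample laws of the equational theory directly, e.g. mirror(L·M) = mirror(M)·mirror(L) and L ∩ M ⊆ L (intersection is just Python's `&` on sets here).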

Time / date: Tuesday 11 April, 3-5pm
Speaker: Fabio Zanasi (UCL)
Location: 1.20 MPEB
Title: Formal semantics for Bayesian reasoning
Abstract: We establish a link between Bayesian inference (learning) and predicate and state transformer operations from programming semantics and logic. This setting is used to propose a formal definition of influence, based on the total variation distance for probability distributions. We distinguish between direct and `crossover' influence: the latter pinpoints non-locality phenomena in a probabilistic setting. As a proof of concept, we use these definitions to translate the d-separation criteria in a Bayesian network into formal, provable statements.
This is joint work with Bart Jacobs.

Time / date: Tuesday 28 March, 4-5pm
Speaker: Makoto Tatsuta (National Institute of Informatics)
Location: 6.12 MPEB
Title: Equivalence of Inductive Definitions and Cyclic Proofs under Arithmetic
Abstract: A cyclic proof system, called CLKID-omega, gives us another way of representing inductive definitions and efficient proof search. A 2011 paper by Brotherston and Simpson showed that the provability of CLKID-omega includes the provability of the classical system of Martin-Lof's inductive definitions, called LKID, and conjectured the equivalence. Until this year, the equivalence had remained an open question. In general, the conjecture was proved to be false in a FoSSaCS 2017 paper by Berardi and Tatsuta. However, if we restrict both systems to only the natural number inductive predicate and add Peano arithmetic to both systems, the conjecture is proved to be true in a FoSSaCS 2017 paper by Simpson. This talk shows that if we add arithmetic to both systems, they become equivalent, namely, the conjecture holds. The result of this talk includes that of the paper by Simpson as a special case. In order to construct a proof in LKID for a given cyclic proof, this talk shows that every bud in the cyclic proof is provable in LKID, by cutting the cyclic proof into subproofs such that in each subproof the conclusion is a companion and the assumptions are buds. The global trace condition gives an induction principle, via an extension of the Podelski-Rybalchenko termination theorem from well-foundedness to induction schemata. In order to show this extension, this talk also shows that the infinite Ramsey theorem is formalizable in Peano arithmetic.

Time / date: Tuesday 21 March, 3-5pm
Speaker: Michael Tautschnig, Queen Mary University of London
Location: 6.12 MPEB
Title: The C Bounded Model Checker (and how to succeed at SV-COMP)
Abstract: CBMC implements bit-precise bounded model checking for C programs and has been developed and maintained for more than ten years.

The talk will start with a hands-on tutorial on how CBMC verifies the absence of violated assertions under a given loop unwinding bound by reducing the problem to a Boolean formula. The formula is passed to a SAT solver, which returns a model if and only if the property is violated.
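The unwinding idea can be sketched in a few lines (illustrative only: the example program and unwinding bound are invented, and exhaustive search over a small input range stands in for the SAT back-end, which explores the same space symbolically):

```python
# Toy illustration of bounded model checking, not CBMC's actual API:
# unwind the loop a bounded number of times, then search for an input
# that violates the assertion.

UNWIND_BOUND = 12

def program(x):
    """Model of:  while (x < 10) x += 3;  assert(x != 11);"""
    for _ in range(UNWIND_BOUND):   # the loop, unwound
        if not (x < 10):
            break
        x += 3
    return x != 11                  # True iff the assertion holds

def find_counterexample(inputs):
    """Return an input on which the assertion fails, if any; this
    exhaustive search plays the role of the SAT solver's model."""
    for x in inputs:
        if not program(x):
            return x
    return None

cex = find_counterexample(range(-20, 20))   # e.g. x = -19 reaches 11
```

Note that the bound matters: if `UNWIND_BOUND` is too small, inputs needing many iterations never reach the failing state, which is exactly why CBMC's results are relative to the given unwinding bound.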

CBMC's support ranges from basic integer programs to IEEE 754 floating point and concurrent programs. The tool chain around CBMC enables application to full software systems: it has been used to analyse a SAT solver and, more recently, has been applied across the entire Debian GNU/Linux distribution.

More recently, support for efficiently checking concurrent programs, including support for weak memory models, has been added to CBMC. At the 2017 TACAS Software Verification Competition (SV-COMP'17), CBMC won the "Falsification" category, making CBMC the most efficient and precise bug-hunting tool.

Background Reading: various publications on and around CBMC.

Time / date: Tuesday 7 March, 3-5pm
Speaker: Bernardo Toninho and Raymond Hu, Imperial College London
Location: G.01, 66-72 Gower Street
Title: Session Types and Linear Logic; Lightweight Applications of Session Types in Java
Abstract: Session types are a well-established typing discipline for message passing concurrency, originally developed in the pi-calculus and later studied in functional and object-oriented settings. The first part of this talk is a tutorial on session types and their connections to linear logic. A recent interpretation of propositions as session types for intuitionistic linear logic has been given by Caires and Pfenning, viewing proofs as processes and proof reduction as inter-process communication. We discuss how several generalisations and extensions of this interpretation arguably form a basis for a logical foundation that captures several interesting features of message-passing concurrent computation. Specifically, we will detail how the basic interpretation can be extended to richer typed settings such as polymorphism and dependent type theories, and how to account for a notion of typed process equivalence that gives meaning to both proof conversions and type isomorphisms. We will then briefly introduce the key concepts and intuitions behind multiparty session types (MPST).

The second part of this talk will demonstrate Scribble, a toolchain based on MPST, and applications in Java by using MPST to generate protocol-specific APIs for implementing distributed endpoints. This practical approach builds on the linear understanding of session types as a hybrid form of session safety: the interaction structure of an endpoint is captured via static Java typing, complemented by lightweight run-time checks on linear usage of channels. Examples include Scribble specifications and Java implementations of interoperable clients/servers for standard protocols such as HTTP and SMTP, and Web service choreographies. Finally, we demonstrate a recent extension of MPST with explicit connection actions, that allow sessions with dynamic and optional participants.

Time / date: Tuesday 21 February, 3-5pm
Speaker: Benjamin Kaminski, RWTH Aachen University
Location: 6.12 MPEB
Title: A Weakest Pre-Expectation Semantics for Mixed-Sign Expectations
Abstract: We start with a short tutorial on classical weakest precondition reasoning à la Dijkstra and how it can be extended to weakest pre-expectations for reasoning about expected values in probabilistic programs à la Kozen / McIver & Morgan. We then present a weakest-precondition-style calculus for reasoning about expected values of mixed-sign unbounded random variables after execution of a probabilistic program. The semantics of a while-loop is well-defined as the limit of iteratively applying a functional to a zero-element just as in the traditional weakest pre-expectation calculus, even though a standard least fixed point argument is not applicable in this context. A striking feature of our semantics is that it is always well-defined, even if the expected values do not exist. We show that the calculus is sound, allows for compositional reasoning, and present an invariant-based approach for reasoning about pre-expectations of loops.
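As a rough numerical illustration of the limit construction mentioned above (a sketch only: the loop, the post-expectation and all names are invented for simplicity, and exact rationals are used to avoid rounding noise):

```python
# Numerical sketch of defining a weakest pre-expectation as the limit
# of iterating a characteristic functional from the zero expectation.
# Example loop:  while coin(P): x := x + 1
# Post-expectation: the final value of x. The functional is
#   Phi(f)(x) = (1-P)*x + P*f(x+1)
# (exit with probability 1-P and yield x, else step and recurse).

from fractions import Fraction

P = Fraction(1, 2)   # probability that the coin says "continue"
N = 60               # number of iterations of Phi

def wp_loop(x0):
    # f[k] approximates wp(loop, X) at the state x0 + k
    f = [Fraction(0)] * (N + 2)
    for _ in range(N):
        g = [Fraction(0)] * (N + 2)
        for k in range(N + 1):
            g[k] = (1 - P) * (x0 + k) + P * f[k + 1]
        f = g
    return float(f[0])

# the exact fixed point is x0 + P/(1-P), i.e. x0 + 1 for P = 1/2
```

The iterates climb towards the fixed point from below, which mirrors the zero-element starting point of the calculus; after 60 iterations the truncation error is on the order of 2^-60.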

Time / date: Tuesday 29th November 2016, 3-4pm
Speaker: Jonathan Spring
Location: 6.12 Malet Place Engineering Building
Title: Why Separation Logic Works

Separation Logic works because it merges the software engineer's conceptual model of a program's manipulation of computer memory with the logical model that interprets what sentences in the logic are true. Separation Logic is an interesting case because of its widespread success in verification tools used by Facebook and others daily.

For these two senses of model — the engineering/conceptual and the logical — to merge in a genuine sense, each must maintain its norms of use from its home discipline. When this occurs, the development of both the logic and the engineering benefits greatly. Seeking this intersection of two different senses of the concept of model provides a strategy for how computer scientists and logicians may be successful. In this talk, we will focus on an exploration of the extent to which the merging of models leads to success in computer science, and thereby the extent to which it is a strategy for future development.

Time / date: Thursday 17th November 2016, 4-5pm
Speaker: Tyler Sorensen, Imperial College London
Location: 1.02 Malet Place Engineering Building
Title: Exposing Errors Related to Weak Memory in GPU Applications
Abstract: In this presentation, I will discuss the systematic design of a testing environment that uses stressing and fuzzing to reveal errors in GPU applications that arise due to weak memory effects. This approach is evaluated across several CUDA applications that use fine-grained concurrency, on seven GPUs spanning three Nvidia architectures. The results show that applications that rarely, or never, exhibit errors related to weak memory when executed natively can readily exhibit these errors when executed in the testing environment. The testing environment also provides a means to identify the root causes of erroneous weak effects, and automatically suggests how to insert fences that experimentally eliminate these errors. This empirical fence insertion method carries significantly lower overhead, in terms of execution time and energy consumption, than a more conservative, guaranteed-sound approach. Joint work with Alastair Donaldson.

Time / date: Wednesday 13th July 2016, 4-5pm
Speaker: Letterio Galletta, University of Pisa
Location: Roberts 110
Title: Adaptivity: Linguistic Mechanism and Static Analysis

Adaptive systems modify their behaviour in order to run always and everywhere. Their structure is therefore subject to continuous changes, which however could compromise the correct behaviour of applications and break the guarantees on their non-functional requirements.

Recently, Context Oriented Programming (COP) was proposed as a viable paradigm to develop adaptive and context-aware software. It advocates languages with peculiar constructs that express context-dependent behaviour in a modular manner. In this way context-awareness is built-in, and the provided linguistic abstractions impose a good practice to programmers.

This talk introduces COP, focusing in particular on ML_CoDa, a functional context-oriented language, and its two-step static analysis, which detects whether adaptation errors may occur at run time.

Time / date: Monday 27th June 2016, 4-5pm
Speaker: Carsten Fuhs, Birkbeck, University of London
Location: Room 405, 66-72 Gower St
Title: SMT Techniques and Solvers in Automated Termination Analysis

Termination is the property that, regardless of the input, execution of a program always comes to a halt eventually. Although this property is undecidable, since the early 2000s fully automated techniques and tools for termination analysis have flourished in several communities: term rewriting, imperative programs, functional programs, logic programs, ...

A common theme behind most of these tools is the use of constraint-based techniques to advance the proof of (non-)termination. Recently, SAT and SMT solvers in particular have been used as back-ends to automate these techniques. In my talk, I will give an overview of automated termination analysis from an SMT solving perspective.
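A minimal sketch of the constraint-based idea (illustrative only: the loop, the sampled state space and all names are invented, and brute-force enumeration stands in for the SMT solver that a real tool would use to discharge the constraints):

```python
# Toy constraint-based termination proof: search for a linear ranking
# function f(x, y) = a*x + b*y for the nondeterministic loop
#   while x > 0 and y > 0:  (x, y) := (x-1, y)  or  (x, y) := (x, y-1)
# f must be nonnegative on guard states and decrease by >= 1 on every
# branch. Enumeration over small coefficients replaces the SMT solver.

from itertools import product

GUARD = lambda x, y: x > 0 and y > 0
BRANCHES = [lambda x, y: (x - 1, y),    # either decrement x ...
            lambda x, y: (x, y - 1)]    # ... or decrement y

def find_ranking(coeff_range=range(-3, 4), states=range(0, 20)):
    """Return (a, b) such that f(x,y) = a*x + b*y ranks the loop on
    all sampled guard states, or None if no such pair is found."""
    for a, b in product(coeff_range, repeat=2):
        f = lambda x, y: a * x + b * y
        ok = all(
            f(x, y) >= 0 and
            all(f(*br(x, y)) <= f(x, y) - 1 for br in BRANCHES)
            for x, y in product(states, repeat=2) if GUARD(x, y)
        )
        if ok:
            return (a, b)
    return None
```

Here the search finds f(x, y) = x + y, which witnesses termination: it is bounded below on the guard and strictly decreases on both branches.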

Time / date: Friday 24th June 2016, 4:30-5:30pm
Speaker: James R. Wilcox, University of Washington
Location: Room 405, 66-72 Gower St
Title: Verdi: A Framework for Implementing and Verifying Distributed Systems

Distributed systems are difficult to implement correctly because they must handle both concurrency and failures. Further, their behavior is often too complex to permit exhaustive testing. Bugs in these systems have led to the loss of critical data and unacceptable service outages.

We present Verdi, a framework for implementing and formally verifying distributed systems in Coq. Verdi formalizes various network semantics with different faults, and the developer chooses the most appropriate fault model when verifying their implementation. Furthermore, Verdi's verified system transformers (VSTs) ease the verification burden by enabling the developer to first verify their system under an idealized fault model, then transfer the resulting correctness guarantees to a more realistic fault model without any additional proof burden.

We have used Verdi to implement and verify the Raft consensus protocol. We proved that our implementation of Raft correctly provides linearizable state machine replication. This proof required iteratively discovering and proving 90 system invariants, consisting of over 50000 lines of code. At such a large scale, the primary challenge we faced during the verification process was proof maintenance. We will discuss proof engineering practices that were critical to our success.

Time / date: Thursday 16th June, 4-5pm
Speaker: Laurence Tratt, King's College London
Location: Room 405, 66-72 Gower St
Title: Fine-grained Language Composition

Programming languages are islands, each disconnected from the rest. We choose a language for a task and, for better or worse, stick with it. Communicating between programs written in different languages is such a slow, arduous, task that we avoid doing it whenever possible.

In this talk I will first show how language composition can lower, and even remove, the barriers between languages. We have pioneered new approaches to the two major challenges in language composition: editing and running composed programs. Using our novel editor 'Eco', users can write source files that contain fragments of multiple languages. We then run multi-language programs using composed meta-tracing VMs. Our preliminary results suggest that performance of composed programs is often competitive with traditional mono-language VMs.

Joint work with Edd Barrett, Carl Friedrich Bolz, Lukas Diekmann, Geoff French, Sarah Mount, and Jasper Schulz.

Time / date: Thursday 9th June 2016, 4-5pm
Speaker: Siddharth Krishna, New York University
Location: 6.12 Malet Place Engineering Building
Title: Learning to Verify the Heap

We present a data-driven verification framework to automatically prove memory safety and functional correctness of heap programs. For this, we introduce a novel statistical machine learning technique that maps observed program states to (possibly disjunctive) separation logic formulas describing the invariant shape of data structures at relevant program locations. We then attempt to verify these predictions using a theorem prover, where counterexamples to a predicted invariant are used as additional input to the shape predictor in a refinement loop. After obtaining valid shape invariants, we use a second learning algorithm to strengthen them with data invariants, again employing a refinement loop using the underlying theorem prover.

We have implemented our techniques in Cricket, an extension of the GRASShopper verification tool. Cricket is able to automatically prove memory safety and correctness of implementations of a variety of classical list-manipulating algorithms such as insertion sort.

Joint work with Marc Brockschmidt, Yuxin Chen, Byron Cook, Pushmeet Kohli, Daniel Tarlow, and He Zhu.

Biography: Siddharth Krishna is a PhD student in the Computer Science Department of New York University, who works with Thomas Wies. He is interested in using Machine Learning for program verification and synthesis, and in using SMT and local theories to verify heap-manipulating programs.

Time / date: Wednesday 1st June 2016, 4-5pm
Speaker: Nuno Lopes, Microsoft Research Cambridge
Location: 1.03 Malet Place Engineering Building
Title: Network Verification at Microsoft
Abstract: In this talk I’ll show why network verification is so important today and why it will continue that way in the future. I’ll then survey the techniques currently used at scale by Microsoft, as well as new techniques being developed.

Time / date: Friday 27th May, 11am-12pm
Speaker: Dana Scott, Hillman University Professor of Computer Science, Philosophy, and Mathematical Logic (Emeritus), Carnegie Mellon University
Location: Room 405, 66-72 Gower St
Title: Types and Type-free Lambda Calculus

Denotational semantics started in Oxford in late 1969. It was hoped that domain theory would provide a basis both for recursive definitions in programs and recursive definitions of semantical structures. Early troubles were encountered in using tops and bottoms, and soon researchers turned to operational semantics. Others wanted to expand work to communicating and parallel processes. Axiomatic and synthetic theories did not solve the problems, and as a result much research effort in domain theory faded. Late work by Reynolds and collaborators, however, has opened up new and promising approaches for programming-language semantics. Perhaps the much simplified modeling using enumeration operators can spark some new investigations, especially since it is so easy to add a rich type structure to the calculus.

Video recording: part 1, part 2.

Time / date: Thursday 12th May 2016, 4-5pm
Speaker: Paul Blain Levy, University of Birmingham
Location: 1.02 Malet Place Engineering Building
Title: Transition Systems Over Games
Abstract: This talk combines two areas of computer science.
  • The notions of labelled transition system (a set of states, starting from which an action may be performed, leading to another state), trace (a possible sequence of actions from a starting state) and bisimulation (a relation that guarantees two states have matching behaviour) have provided a useful way of reasoning about computational systems.
  • Game semantics is a compositional way of describing the behaviour of higher-order programs employing private state, providing a high-level description in the sense that state is not mentioned.
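As a toy illustration of the bisimulation notion above (a sketch: the states, actions and names are invented, and this is classical partition refinement on a finite system, not the stepped bisimulation of the talk):

```python
# Partition-refinement computation of strong bisimulation classes for
# a finite labelled transition system given as (source, action,
# target) triples. Illustrative only.

def bisim_classes(states, transitions):
    """Return a map from each state to its equivalence-class id."""
    succ = {s: frozenset((a, t) for (src, a, t) in transitions if src == s)
            for s in states}
    block = {s: 0 for s in states}       # start with a single block
    while True:
        # signature: current block plus blocks reachable per action
        sig = {s: (block[s],
                   frozenset((a, block[t]) for (a, t) in succ[s]))
               for s in states}
        ids, new = {}, {}
        for s in sorted(states):
            new[s] = ids.setdefault(sig[s], len(ids))
        if len(ids) == len(set(block.values())):   # no block split: done
            return new
        block = new

# p does 'a' then chooses b or c; q makes the choice already at the
# 'a' step: the two are trace equivalent but not bisimilar.
states = {"p0", "p1", "p2", "p3", "q0", "q1", "q2", "q3", "q4"}
transitions = {("p0", "a", "p1"), ("p1", "b", "p2"), ("p1", "c", "p3"),
               ("q0", "a", "q1"), ("q0", "a", "q2"),
               ("q1", "b", "q3"), ("q2", "c", "q4")}
classes = bisim_classes(states, transitions)
```

The refinement loop terminates because each signature includes the state's current block, so the partition only ever gets finer; when the number of blocks stops growing, it has stabilised.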

We want to represent game semantics using transition systems, but traditional systems are too rigid because they have a fixed set of actions. So instead we develop a version of transition system in which each state sits over a position of our game. The actions are the currently available moves.

To make this into a compositional semantics, we give a “transfer”, a kind of program that converts moves between two games, giving an operation on strategies. The agreement between the transition systems and the transfer is given by a relation called a “stepped bisimulation”.

(Joint work with Sam Staton)

Time / date: Wednesday 27th April 2016, 4-5pm
Speaker: Radu Grigore, University of Kent
Location: Executive Suite 103, Engineering Front Building
Title: Tree Buffers
Abstract: In runtime verification, the central problem is to decide if a given program execution violates a given property. In online runtime verification, a monitor observes a program's execution as it happens. If the program being observed has hard real-time constraints, then the monitor inherits them. In the presence of hard real-time constraints it becomes a challenge to maintain enough information to produce error traces, should a property violation be observed. In this paper we introduce a data structure, called tree buffer, that solves this problem in the context of automata-based monitors: If the monitor itself respects hard real-time constraints, then enriching it by tree buffers makes it possible to provide error traces, which are essential for diagnosing defects. We show that tree buffers are also useful in other application domains. For example, they can be used to implement functionality of capturing groups in regular expressions. We prove optimal asymptotic bounds for our data structure, and validate them using empirical data from two sources: regular expression searching through Wikipedia, and runtime verification of execution traces obtained from the DaCapo test suite.
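For intuition, a naive baseline for the problem tree buffers address might look as follows (a sketch only: `Active`, `child` and the event names are invented, and this per-step copying is precisely the cost the tree-buffer data structure of the talk avoids):

```python
# Naive error-trace windowing for an online monitor: keep, with each
# active node, the labels of its last H ancestors so a bounded error
# trace can be reported on a violation. Each step copies O(H) data;
# tree buffers achieve the same capability in amortised O(1).

H = 3

class Active:
    def __init__(self, history):
        self.history = history        # last <= H labels, oldest first

def root():
    return Active([])

def child(node, label):
    # copy-and-trim on every step: the cost tree buffers eliminate
    return Active((node.history + [label])[-H:])

# simulate a monitor following one branch of an execution
node = root()
for event in ["open", "read", "read", "close", "write"]:
    node = child(node, event)
# node.history now holds the trace window available for reporting
```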

Time / date: Wed 13th April 2016, 4-5pm
Speaker: Nikos Tzevelekos, Queen Mary University of London
Location: Executive Suite 103, Engineering Front Building
Title: Model-Checking Program Equivalence in Interface Middleweight Java
Abstract: Using game semantics, we investigate the problem of verifying contextual equivalences in Interface Middleweight Java (IMJ), an imperative object calculus in which program phrases are typed using interfaces. In particular, we show how to determine the decidability status of problem instances (over a fixed type signature) by examining the position of methods inside the term type and the types of its free identifiers. Our results build upon the recent fully abstract game semantics of IMJ. Decidability is proved by translation into visibly pushdown register automata over infinite alphabets with fresh-input recognition, and the procedure has been implemented into a new tool called Coneqct.

Time / date: Thu 24th March 2016, 4-5pm
Speaker: Jens Groth, Information Security, UCL
Location: Roberts 1.10
Title: Cryptography for Eagles
Abstract: The talk will be a high level discussion about what cryptography is and what I see as important research directions. I aim to make it non-technical and accessible for everyone and will try to leave time for discussion. Topics I will touch upon include multi-party computation: any efficiently computable function can be computed securely, i.e., the result is guaranteed to be correct and private inputs remain confidential, and zero-knowledge proofs: a theorem can be proved to be true without revealing anything but the fact it is true. Recently we have developed highly efficient zero-knowledge proofs; even large complex statements have small compact proofs. Challenges: Computer-aided construction of cryptographic protocols and automated verification of security proofs.

Time / date: Wed 9th March 2016, 4-5pm
Speaker: Pawel Sobocinski, University of Southampton
Location: 1.20 MPEB
Title: Graphical Linear Algebra and Applications
Abstract: I will introduce the symmetric monoidal theory of interacting Hopf monoids, which is a string diagrammatic language capable of capturing most standard concepts of finite-dimensional linear algebra. I will briefly outline recent work on applications to signal flow graphs, a classical formalism in control theory and signal processing. Various results in the talk are the result of joint work with Filippo Bonchi and Fabio Zanasi, and with Brendan Fong and Paolo Rapisarda. I have been blogging about graphical linear algebra.

Time / date: Fri 26th February 2016, 4-5pm
Speaker: Anupam Das, ENS de Lyon
Location: Roberts 110
Title: An introduction to linear rewriting in propositional logic
Abstract: Linear term rewriting can be an effective way to control space complexity in the computation of normal forms, deciding reachability, etc. It is also fundamental in proof theory, controlling the logical behaviour of a system when formulated with multiplicative rules. This research line begins with the observation that entailment between linear terms of Boolean algebras is already coNP-complete, i.e. all propositional tautologies can be written as linear inferences. I will present characterisations of various subsystems as well as certain negative results, culminating in a recent result that there is no complete basis of linear rules. The proofs in this area are often simple and elegant once appropriate invariants are identified; this itself relies on a combination of tools from term rewriting, graph theory and Boolean function theory. Finally I present an abstraction of linear terms in a graph-theoretic setting and discuss some ongoing work where reduction behaviour is better behaved, but where we are forced to leave the world of Boolean functions altogether. This will be a low-level, first-principles talk and should be accessible to all!

Time / date: Wed 24th Feb 2016, 4-5pm
Speaker: Matthew Hague, Royal Holloway, University of London
Location: Roberts 422
Title: Logic, Automata, and Types at Higher Orders
Abstract: The equivalence of alternating parity tree automata, the modal mu-calculus, and parity games as definitional devices for tree languages is a keystone result underpinning the model checking of reactive systems. In this talk, we lift this equi-expressivity result to languages of higher-type Bohm trees, which may be viewed as higher-order functions over trees.

We equip Stirling's alternating dependency tree automata (ADTA) with an omega-regular winning condition, and show that they are equi-expressive with Kobayashi-Ong intersection types for defining languages of infinite Bohm trees. We then introduce higher-type mu-calculus, which extends mu-calculus with predicates for detecting variables, and corresponding constructs for detecting lambda-abstractions. We show that higher-type mu-calculus is equi-expressive with a subclass of intersection types (equivalently, ADTA) called parity permissive. We conjecture that parity permissive restricts the expressive power of intersection types and ADTA.

Time / date: Wed 10th February 2016, 4-5pm
Speaker: Dan Ghica, University of Birmingham
Location: 1.02 MPEB
Title: Alpha equivalence for interleaved scopes
Abstract: I will describe the key syntactic and semantic aspects of a nominal framework allowing scopes of name bindings to be arbitrarily interleaved. Name binding (e.g. delta x.M) is handled by explicit name-creation and name-destruction brackets which admit interleaving. We will see an appropriate notion of alpha-equivalence for such a language and study the syntactic structure required for alpha-equivalence to be a congruence. I will sketch some potential applications to semantics of programming languages with explicit resource allocation and release. This is joint work with James Gabbay and Daniela Petrisan.

Time / date: Thu 28th January 2016, 4-5pm
Speaker: Jonathan Spring, PPLV/Information Security, UCL
Location: Roberts 110
Title: Avoiding Pseudoscience: Prudence, Logic, and Verification in Studying Information Security
Abstract: Pseudoscience is a (pejorative) term for a pursuit that is judged not to be science but whose participants lay claim to the respectability of science regardless. In this talk I'll discuss how we might judge whether some pursuit is (pseudo)science and how we might verify whether some sample work in information security passes this judgement. The result is a rough framework for improving work in computer science, such as program verification and information security.

Time / date: Fri 11th December 2015, 4-5pm
Speaker: Carsten Fuhs, Birkbeck
Location: 6.12 MPEB (Malet Place Engineering Building)
Title: Automatic Quantitative Resource Analysis for Programs
Abstract: We present a modular approach to automatic resource complexity analysis of integer programs. Based on a novel alternation between finding symbolic time bounds for program parts and using these to infer bounds on the absolute values of program variables, we can restrict each analysis step to a small part of the program while maintaining a high level of precision. The resource bounds computed by our method are polynomial or exponential expressions that depend on the absolute values of input parameters. Our contributions are implemented in the open-source tool KoAT, and extensive experiments show the performance and power of our implementation in comparison with other tools.

Time / date: Wed 25th November 2015, 4-5pm
Speaker: Andrew Ruef, University of Maryland
Location: 6.12 MPEB (Malet Place Engineering Building)
Title: Creating Industrial Software Analysis Systems or How I Learned to Stop Worrying and Just Smush Fuzzers and Symbolic Executors Together
Abstract: Software fails us every day and we want it to stop. Absent a magical wand to rid the world of legacy code, something should be done to find bugs in C programs so that someone can fix them. We have existing tools and techniques but how can we apply them to real code at scale and get results? In this talk, I’ll explore our efforts at creating systems that combine fuzzing, symbolic execution, and abstract interpretation, and discuss our results in finding memory corruption bugs in a somewhat controlled experiment. I’ll also talk about benefits and challenges of “industrial grade” system development and some of the theoretical ideas and future directions in combining information generated by analyzers using counterexample guided abstraction refinement.

Time / date: Fri 20th November 2015, 4-5pm
Speaker: Stefan Milius, FAU Erlangen-Nürnberg
Location: 1.19 MPEB (Malet Place Engineering Building)
Title: Formal Verification of Systems of Synchronous Software Components

Software for distributed, safety-critical embedded systems, as found, for example, in the automotive, railway or avionics domains, has to satisfy very high requirements concerning correctness and fault tolerance. Therefore standards for the development of such systems strongly recommend the use of formal methods. For many complex control tasks the preferred architecture is a system composed of components that operate synchronously but communicate asynchronously with other components – a paradigm called the GALS (globally asynchronous – locally synchronous) architecture.

In this talk we present an approach to close the "methodological" gap between the formal verification of synchronous components and of asynchronous systems of components. We mainly focus on systems whose synchronous components are implemented in SCADE (a language and modelling tool for synchronous controller software). The idea is to abstract such concrete components with contracts, which are a mixture of temporal logic formulas and non-deterministic state machines. Formal verification of global system properties is then done on a network of contracts that has lower complexity than the original network of concrete components. We first explain the verification approach in general, then introduce a modelling language for the specification of contracts, and then explain the transformations used for model checking GALS systems. Our model checking method is based on bounded model checking using SAT and SMT solvers. Finally, we demonstrate the results of our approach and tool on an industrial case study.

Time / date: Wed 21st October 2015, 4-5pm
Speaker: Jon Williamson, University of Kent
Location: 1.04 MPEB (Malet Place Engineering Building)
Title: Inductive Logic for Automated Decision Making
Abstract: According to Bayesian decision theory, one's acts should maximise expected utility. To calculate expected utility one needs not only the utility of each act in each possible scenario but also the probabilities of the various scenarios. It is the job of an inductive logic to determine these probabilities, given the evidence to hand. The most natural inductive logic, classical inductive logic, attributable to Wittgenstein, was dismissed by Carnap due to its apparent inability to capture the phenomenon of learning from experience. I argue that Carnap was too hasty to dismiss this logic: classical inductive logic can be rehabilitated, and the problem of learning from experience overcome, by appealing to the principles of objective Bayesianism. I then discuss the practical question of how to calculate the required probabilities and show that the machinery of probabilistic networks can be fruitfully applied here. This culminates in an objective Bayesian decision theory that has a realistic prospect of automation.
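As a minimal illustration of the decision-theoretic setup described above (the acts, scenarios, utilities and probabilities below are invented; the inductive logic's job is to supply the `probs` table from evidence):

```python
# utilities[act][scenario]: the utility of each act in each scenario
utilities = {'take_umbrella': {'rain': 5,   'dry': 2},
             'no_umbrella':   {'rain': -10, 'dry': 4}}
# scenario probabilities, as determined by the inductive logic
probs = {'rain': 0.3, 'dry': 0.7}

def expected_utility(act):
    return sum(probs[s] * u for s, u in utilities[act].items())

# Bayesian decision theory: choose the act maximising expected utility
best = max(utilities, key=expected_utility)
print(best, expected_utility(best))  # take_umbrella 2.9
```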

Time / date: Fri 9th October 2015, 4-5pm
Speaker: Sukriti Bhattacharya, PPLV, UCL
Location: 6.12 MPEB (Malet Place Engineering Building)
Title: MaDSLaM: A Statistical Language Model For Malware Detection
Abstract: Malware continues to be an ongoing threat to modern computing. The detection of malware is an area of major concern not only to the research community but also to the general public. We introduce MaDSLaM, a framework that enables statistical analysis of binary executables, filtering out malicious executables at an early stage of the malware analysis phase with a sufficiently low false positive rate. MaDSLaM exploits the naturalness of software via natural language processing techniques, building language models that distinguish between malware and benign files at the executable binary level. We developed a proof-of-concept version of MaDSLaM and evaluated it over a fairly large set of malware samples covering a wide range of malware families and malware authors.
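The abstract does not give implementation details; the following is a generic sketch of the underlying idea of byte-level n-gram language models, with made-up training bytes. Two smoothed bigram models are trained, and a sample is classified by which model assigns it the higher likelihood.

```python
import math
from collections import Counter

def train_bigram(samples):
    # count byte bigrams and unigram contexts across the samples
    counts, totals = Counter(), Counter()
    for seq in samples:
        for a, b in zip(seq, seq[1:]):
            counts[(a, b)] += 1
            totals[a] += 1
    return counts, totals

def log_likelihood(seq, model, vocab=256):
    counts, totals = model
    # Laplace-smoothed conditional probabilities over a byte vocabulary
    return sum(math.log((counts[(a, b)] + 1) / (totals[a] + vocab))
               for a, b in zip(seq, seq[1:]))

# invented training data: a "benign" prologue pattern vs a NOP-sled-like pattern
benign  = train_bigram([b'\x55\x89\xe5\x83\xec' * 20])
malware = train_bigram([b'\x90\x90\x90\xeb\xfe' * 20])

sample = b'\x90\x90\xeb\xfe\x90\x90'
is_mal = log_likelihood(sample, malware) > log_likelihood(sample, benign)
print('malware' if is_mal else 'benign')  # malware
```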

Time / date: Fri 5th June 2015, 4-5pm
Speaker: Johannes Kinder, Royal Holloway, University of London
Location: 6.12 MPEB
Title: High System-Code Security with Low Overhead

Security vulnerabilities plague modern systems because writing secure systems code is hard. Promising approaches can retrofit security automatically via runtime checks that implement the desired security policy; these checks guard critical operations, like memory accesses. Alas, the induced slowdown usually exceeds by a wide margin what system users are willing to tolerate in production, so these tools are hardly ever used. As a result, the insecurity of real-world systems persists. We present an approach in which developers/operators can specify what level of overhead they find acceptable for a given workload (e.g., 5%); our proposed tool ASAP then automatically instruments the program to maximize its security while staying within the specified "overhead budget." Two insights make this approach effective: most overhead in existing tools is due to only a few "hot" checks, whereas the checks most useful to security are typically "cold" and cheap. We evaluate ASAP on programs from the Phoronix and SPEC benchmark suites. It can precisely select the best points in the security-performance spectrum. Moreover, we analyzed existing bugs and security vulnerabilities in RIPE, OpenSSL, and the Python interpreter, and found that the protection level offered by the ASAP approach is sufficient to protect against all of them.

Joint work with Jonas Wagner, Volodymyr Kuznetsov and George Candea (EPFL), presented at IEEE S&P (Oakland) 2015.
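A hedged sketch of the "overhead budget" idea (check names and costs are invented; the real ASAP tool instruments compiled programs, it does not work on tables like this). Keeping the cheap "cold" checks and dropping the few expensive "hot" ones retains most of the protection:

```python
# Each sanity check has a measured runtime cost (fraction of total time).
checks = {'bounds_hot_loop': 0.12, 'bounds_parser': 0.001,
          'null_deref_init': 0.0005, 'overflow_hash': 0.06}
budget = 0.05  # the operator accepts at most 5% overhead

# Greedy sketch: keep the cheapest checks first, stopping when the
# next check would exceed the overhead budget.
kept, spent = [], 0.0
for name, cost in sorted(checks.items(), key=lambda kv: kv[1]):
    if spent + cost <= budget:
        kept.append(name)
        spent += cost
print(kept, round(spent, 4))  # the two "cold" checks fit easily
```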

Time / date: Fri 8th May 2015, 4-5pm
Speaker: Max Kanovich, PPLV, UCL
Location: 1.04 MPEB
Title: Discrete vs. Dense Times in the Analysis of Cyber-Physical Security Protocols
Abstract: Many security protocols rely on assumptions about the physical environment in which their sessions are carried out. For instance, Distance Bounding Protocols take into account the round-trip time of messages and the transmission velocity to infer an upper bound on the distance between two agents. We classify such security protocols as Cyber-Physical. Time plays a key role in the design and analysis of many of these protocols. This work investigates the foundational differences, and the impact on the analysis, between models with discrete time and models with dense time. We show that there are attacks that can be found by models using dense time but not by models using discrete time. We illustrate this with a novel attack that can be carried out on most distance bounding protocols, in which one exploits the execution delay of instructions during one clock cycle to convince a verifier that one is in a location different from one's actual position. We propose a Multiset Rewriting model with dense time suitable for specifying cyber-physical security protocols. We introduce Circle-Configurations and show that they can be used to symbolically solve the reachability problem for our model. Finally, we show that for the important class of balanced theories the reachability problem is PSPACE-complete.
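A small back-of-the-envelope illustration (numbers invented, not from the talk) of why clock granularity matters in distance bounding: the distance bound is derived from the round-trip time, and a coarse discrete clock can make two physically different distances indistinguishable.

```python
C = 299_792_458.0  # speed of light, m/s

def distance_bound(rtt):
    # the prover is at most this far away, given round-trip time rtt
    return C * rtt / 2

# With dense time, a verifier distinguishes a 20 ns round trip
# (~3.0 m bound) from a 26 ns round trip (~3.9 m bound):
print(distance_bound(20e-9), distance_bound(26e-9))

# With a discrete 10 ns clock, both round trips measure as 2 elapsed
# ticks, so the more distant prover passes the same distance check.
tick = 10e-9
print(int(20e-9 / tick), int(26e-9 / tick))  # 2 2
```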

Time / date: Fri 24th April 2015, 4-5pm
Speaker: Mike Dodds, University of York
Location: 6.12 MPEB
Title: A Scalable, Correct Time-Stamped Stack

Concurrent data-structures, such as stacks, queues, and deques, often implicitly enforce a total order over elements in their underlying memory layout. However, much of this order is unnecessary: linearizability only requires that elements are ordered if the insert methods ran in sequence. We propose a new approach which uses timestamping to avoid unnecessary ordering. Pairs of elements can be left unordered if their associated insert operations ran concurrently, and order imposed as necessary at the eventual removal.

We realise our approach in a new non-blocking data structure, the TS (timestamped) stack. Using the same approach, we can define corresponding queue and deque datastructures. In experiments on x86, the TS stack outperforms and outscales all its competitors – for example, it outperforms the elimination-backoff stack by a factor of two. In our approach, more concurrency translates into less ordering, giving less-contended removal and thus higher performance and scalability. Despite this, the TS stack is linearizable with respect to stack semantics.

The weak internal ordering in the TS stack presents a challenge when establishing linearizability: standard techniques such as linearization points work well when there exists a total internal order. We present a new stack theorem, mechanised in Isabelle, which characterises the orderings sufficient to establish stack semantics. By applying our stack theorem, we show that the TS stack is indeed linearizable. Our theorem constitutes a new, generic proof technique for concurrent stacks, and it paves the way for future weakly ordered data-structure designs.
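A sequential, much-simplified sketch of the timestamping idea (our own illustration, not the lock-free algorithm from the paper): interval timestamps stand in for the real timestamps, and two elements are ordered only if their intervals are disjoint, so elements pushed concurrently remain unordered until removal.

```python
class TSStack:
    """Sequential sketch: elements carry interval timestamps; overlapping
    intervals model pushes that ran concurrently."""

    def __init__(self):
        self.items = []  # list of (interval, value)

    def push(self, value, interval):
        self.items.append((interval, value))

    def pop(self):
        # remove a "youngest" element: one whose interval is not
        # strictly before any other element's interval
        def before(a, b):          # interval a ended before b began
            return a[1] < b[0]
        for i, (iv, v) in enumerate(self.items):
            if not any(before(iv, jv) for jv, _ in self.items):
                del self.items[i]
                return v

s = TSStack()
s.push('a', (0, 1))   # pushed first, strictly before the others
s.push('b', (2, 3))   # these two pushes ran concurrently:
s.push('c', (2, 4))   # their intervals overlap, so they are unordered
popped = s.pop()
print(popped in {'b', 'c'})  # True: either is a legal LIFO removal
```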

Associated paper:

Time / date: Fri 10th April 2015, 4-5pm
Speaker: Azalea Raad, Imperial College London
Location: 6.12 MPEB
Title: CoLoSL: Concurrent Local Subjective Logic
Abstract: A key difficulty in verifying shared-memory concurrent programs is reasoning compositionally about each thread in isolation. Existing verification techniques for fine-grained concurrency require reasoning about a static global shared resource, impeding compositionality. This work introduces the program logic of CoLoSL, where each thread is verified with respect to its subjective view of the global shared state. This subjective view describes only that part of the global shared resource accessed by the thread. Subjective views may arbitrarily overlap with each other, and expand and contract depending on the resource required by the thread. This flexibility provides truly compositional proofs for shared-memory concurrency, which we demonstrate on a range of examples including a concurrent computation of a spanning tree of a graph.

Time / date: Thu 9th April 2015, 4-5pm
Speaker: Zhé Hóu, Australian National University, Canberra
Location: 6.12 MPEB
Title: Labelled Sequent Calculi and Automated Reasoning for Assertions in Separation Logic
Abstract: Separation Logic (SL) is an extension of Hoare Logic to reason about programs with mutable data structures. We start by looking at the abstract core of the assertion language of SL, namely Boolean BI (BBI). We give a sound, complete, and cut-eliminable labelled sequent calculus for BBI, then consider proof search using our calculus. Next we extend our method to capture BBI with various extra properties (a.k.a. separation theories) that are useful in memory/heap models. We show that our labelled calculi can handle BBI with any subset of separation theories. This results in a modular theorem prover for BBI with separation theories. Finally, we move to SL with the concrete heap model, à la Reynolds, O'Hearn, Calcagno, Yang, etc. We give extensions of our calculi with rules for heaps and data structures. Our method handles all the logical connectives in separation logic, and resolves some problems in the literature, too.

Time / date: Fri 20th March 2015, 4-5pm
Speaker: Felix Winterstein, Imperial College London
Location: 1.20 Malet Place Engineering Building (MPEB)
Title: Separation Logic-Assisted Code Transformations for Efficient High-Level Synthesis
Abstract: The capabilities of modern FPGAs permit the mapping of increasingly complex applications into reconfigurable hardware. High-level synthesis (HLS) promises a significant shortening of the FPGA design cycle by raising the abstraction level of the design entry to high-level languages such as C/C++. Applications using dynamic, pointer-based data structures and dynamic memory allocation, however, remain difficult to implement well, yet such constructs are widely used in software. Automated optimizations that aim to leverage the increased memory bandwidth of FPGAs by distributing the application data over separate banks of on-chip memory are often ineffective in the presence of dynamic data structures, due to the lack of an automated analysis of pointer-based memory accesses. In this work, we take a step towards closing this gap. We present a static analysis for pointer-manipulating programs which automatically splits heap-allocated data structures into disjoint, independent regions. The analysis leverages recent advances in separation logic, a theoretical framework for reasoning about heap-allocated data which has been successfully applied in recent software verification tools. Our algorithm focuses on dynamic data structures accessed in loops and is accompanied by automated source-to-source transformations which enable automatic loop parallelization and memory partitioning by off-the-shelf HLS tools. We demonstrate the successful loop parallelization and memory partitioning by our tool flow using three real-life applications which build, traverse, update and dispose dynamically allocated data structures. Our case studies, comparing the automatically parallelized to the direct HLS implementations, show an average latency reduction by a factor of 2.5 across our benchmarks.

Time / date: Fri 6th March 2015, 4-5pm
Speaker: Gabrielle Anderson, PPLV, UCL
Location: 309 Roberts Building
Title: Defeasible Reasoning in a Sequent Calculus
Abstract: When abstract models of argumentation are combined with accounts of the structure of arguments, two lines of research have emerged. In one, there is an attempt to reduce defeasible reasoning to inconsistency handling in classical logic; in the other, there are several approaches to modelling common-sense reasoning using a combination of strict and defeasible rules, notably ASPIC+. In this work, we show how simple rules of knowledge representation, and of a sequent calculus for a logic handling levels of defeasible reasoning, can overcome the divisions among these lines by proving that they (1) encompass the ASPIC+ formalisation of arguments and (2) respect the rationality postulates for argumentation reasoning.

Time / date: Fri 6th February 2015, 4-5pm
Speaker: Tyler Sorensen, PPLV, University College London
Location: 1.20 Malet Place Engineering Building (MPEB)
Title: GPU concurrency: Weak behaviours and programming assumptions

Concurrency is pervasive and perplexing, particularly on graphics processing units (GPUs). Current specifications of languages and hardware are inconclusive; thus programmers often rely on folklore assumptions when writing software. To remedy this state of affairs, we conducted a large empirical study of the concurrent behaviour of deployed GPUs. Armed with litmus tests (i.e. short concurrent programs), we questioned the assumptions in programming guides and vendor documentation about the guarantees provided by hardware. We developed a tool to generate thousands of litmus tests and run them under stressful workloads. We observed a litany of previously elusive weak behaviours, and exposed folklore beliefs about GPU programming—often supported by official tutorials—as false.

As a way forward, we propose a model of Nvidia GPU hardware, which correctly models every behaviour witnessed in our experiments. The model is a variant of SPARC Relaxed Memory Order (RMO), structured following the GPU concurrency hierarchy.
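To make the notion of a litmus test concrete, here is a small sequentially consistent simulator (our own sketch, unrelated to the authors' testing tool) for the classic store-buffering test. It enumerates every interleaving allowed under sequential consistency; observing the outcome it rules out on real hardware is exactly the kind of "weak behaviour" the study looked for.

```python
# Store-buffering litmus test:
#   T0: x = 1; r0 = y        T1: y = 1; r1 = x
T0 = [('st', 'x', 1), ('ld', 'y', 'r0')]
T1 = [('st', 'y', 1), ('ld', 'x', 'r1')]

def interleavings(a, b):
    # all merges of a and b preserving each thread's program order
    if not a: yield b; return
    if not b: yield a; return
    for rest in interleavings(a[1:], b): yield [a[0]] + list(rest)
    for rest in interleavings(a, b[1:]): yield [b[0]] + list(rest)

outcomes = set()
for schedule in interleavings(T0, T1):
    mem, regs = {'x': 0, 'y': 0}, {}
    for op, var, arg in schedule:
        if op == 'st': mem[var] = arg
        else: regs[arg] = mem[var]
    outcomes.add((regs['r0'], regs['r1']))

print(sorted(outcomes))     # SC allows (0,1), (1,0) and (1,1)...
print((0, 0) in outcomes)   # ...but never (0,0) -> False
```

Observing `r0 = r1 = 0` on hardware therefore demonstrates that stores were buffered past loads, a relaxation sequential consistency forbids.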

Time / date: Fri 23rd January 2015, 4-5pm
Speaker: Nathan Chong, PPLV, University College London
Location: 508 Roberts Building
Title: Scalable Verification Techniques for Data-Parallel Programs

Verification techniques that can scale to reasoning about real, rather than synthetic or toy, programs are essential for practical program verifiers. GPUVerify is a verification tool, built at MSR and Imperial, for reasoning about data-parallel programs, in particular massively parallel Graphics Processing Unit (GPU) programs. In this domain we have explored different characteristics of scalability: verifier precision, performance and automation.

After covering the core verification method that enables GPUVerify to scale to realistic problem sizes, I will talk about:

  1. An empirical study of candidate-based invariant generation exploring the trade-offs between precision and performance.
  2. Barrier invariants: an abstraction for precise and scalable reasoning about data-dependent GPU programs.

Time / date: Fri 12th December 2014, 4-5pm
Speaker: Daniel Lustig, Princeton University
Location: 6.12 Malet Place Engineering Building (MPEB)
Title: Checking microarchitectural implementations of weak memory

In parallel programs, threads communicate according to the memory consistency model: the set of memory ordering rules enforced by a given architecture. A great deal of research effort has gone into rigorous formalization of memory models. However, comparatively little attention has been paid to the microarchitectural implementation of these models. Standard testing-based techniques are insufficient: unobserved behaviors are not guarantees of non-observability, and failed tests offer little insight into the microarchitectural reason behind failure.

I will present PipeCheck, a methodology and automated tool for verifying that a given microarchitecture correctly implements the memory consistency model specified by its architectural specification. PipeCheck adapts the formal notion of a "happens before" graph to the microarchitecture space. Architectural specifications such as preserved program order are then treated as propositions to be verified, rather than simply as assumptions. We also specify and analyze the behavior of common microarchitectural optimizations such as speculative load reordering. Using PipeCheck, we were able to validate the OpenSPARC T2 in just minutes, and we found and fixed a bug in an architectural simulator widely used in academia. PipeCheck has been nominated for the Best Paper award at MICRO 2014, to be held in Cambridge, UK this December. At the end, I will also briefly introduce ArMOR, a framework for defining dynamic binary translation layers that seamlessly translate between memory consistency models such as those used by x86, ARM, and Power.
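A toy version of the acyclicity check at the heart of a happens-before methodology (our own sketch; PipeCheck itself reasons over detailed microarchitectural graphs): a proposed execution is observable only if its ordering obligations form an acyclic graph.

```python
def has_cycle(edges):
    # simple DFS cycle detection over a directed graph given as edge pairs
    graph = {}
    for a, b in edges:
        graph.setdefault(a, []).append(b)
        graph.setdefault(b, [])
    state = {n: 'new' for n in graph}
    def dfs(n):
        state[n] = 'active'
        for m in graph[n]:
            if state[m] == 'active' or (state[m] == 'new' and dfs(m)):
                return True
        state[n] = 'done'
        return False
    return any(state[n] == 'new' and dfs(n) for n in graph)

# The forbidden store-buffering outcome forces a cycle through the
# program-order and reads-from/from-read obligations (labels invented):
sb_forbidden = [('st_x', 'ld_y'), ('ld_y', 'st_y'),
                ('st_y', 'ld_x'), ('ld_x', 'st_x')]
print(has_cycle(sb_forbidden))  # True: the outcome is not observable
```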

Bio: Daniel Lustig is a PhD candidate in Electrical Engineering at Princeton University. His research interests lie in the field of computer architecture, with a particular emphasis on designing memory systems for heterogeneous architectures. His recent work focuses on the verification of microarchitectural implementations of memory consistency models. Daniel earned his BSE from the University of Pennsylvania in 2009 and his MA from Princeton University in 2011. He also received an Intel PhD Fellowship in 2013.

Time / date: Wed 26th November 2014, 4-5pm
Speaker: Hristina Palikareva, Imperial College London
Location: 1.20 Malet Place Engineering Building (MPEB)
Title: Constraint Solving in Symbolic Execution

Dynamic symbolic execution is an automated program analysis technique that employs an SMT solver to systematically explore paths through a program. It has been acknowledged in recent years as a highly effective technique for generating high-coverage test suites as well as for uncovering deep corner-case bugs in complex real-world software, and one of the key factors responsible for that success is the tremendous advance in SMT-solving technology. Nevertheless, constraint solving remains one of the fundamental challenges of symbolic execution, and for many programs it is the main performance bottleneck.

In this talk, I will first give a very general introduction to symbolic execution, and then I will present the results reported in our CAV 2013 paper on integrating support for multiple SMT solvers in the dynamic symbolic execution engine KLEE. In particular, I will outline the key characteristics of the SMT queries generated during symbolic execution, describe several high-level optimisations that KLEE employs to exploit those characteristics, introduce an extension of KLEE that uses a number of state-of-the-art SMT solvers (Boolector, STP and Z3) through the metaSMT solver framework, and compare the solvers' performance when run under KLEE. In addition, I will discuss several options for designing a parallel portfolio solver for symbolic execution tools. This is joint work with Cristian Cadar.

Time / date: Wed 12th November 2014, 4-5pm
Speaker: Reuben Rowe, PPLV, University College London
Location: 1.03 Malet Place Engineering Building (MPEB)
Title: Towards Type Inference for Nakano's Modality

Around fifteen years ago, Nakano introduced an extension to a system of recursive types for the Lambda Calculus consisting of a unary type constructor, or modality, guarding the occurrences of recursive references. This modest extension afforded a powerful result: the guarantee of a particular kind of termination (specifically, head normalisation). That is, programs typeable in this system are guaranteed to produce output.

Since then, much research has been done to understand the semantics of this modality and to utilise it in more sophisticated type systems. Notable contributions to this effort include: the work of Birkedal et al. and of Benton and Krishnaswami, who study semantic models induced by the modality; research by Appel et al. and Pottier, who employ the modality in typed intermediate representations of programs; and Atkey and McBride's work on co-programming for infinite data.

While some of this work explicitly addresses the possibility of type checking (e.g., that of Pottier), to the best of our knowledge type inference is still an unsolved problem. The lack of a type inference procedure presents a significant barrier to the wider adoption of Nakano’s technique in practice. In this talk, I will describe a (conservative) extension to Nakano’s system which allows for the development of a (sound but incomplete) unification-based algorithm for deciding whether a term is typeable by constructing a suitably general type.

Time / date: Wed 15th October 2014, 4-5pm
Speaker: Carsten Fuhs, PPLV, UCL
Location: 1.02 Malet Place Engineering Building (MPEB)
Title: Disproving termination with overapproximation
Abstract: When disproving termination using known techniques (e.g. recurrence sets), abstractions that overapproximate the program’s transition relation are unsound. We introduce live abstractions, a natural class of abstractions that can be combined with the recent concept of closed recurrence sets to soundly disprove termination. To demonstrate the practical usefulness of this new approach we show how programs with nonlinear, nondeterministic, and heap-based commands can be shown nonterminating using linear overapproximations. This talk is about joint work with Byron Cook, Kaustubh Nimkar, and Peter O'Hearn.
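A finite spot-check of the closed-recurrence-set idea (our own sketch; the techniques discussed discharge such conditions symbolically, for all states, e.g. with SMT solvers): a closed recurrence set G demands that every state in G has a successor and that every successor of a G-state stays in G, which remains sound even when the transition relation is an overapproximation.

```python
def successors(x):
    # overapproximated transitions of "while x > 0: x = x + [1, 2]"
    # (the nondeterministic increment models the overapproximation)
    return [x + 1, x + 2] if x > 0 else []

G = lambda x: x > 0  # candidate closed recurrence set

def is_closed_recurrence_set(samples):
    # every sampled G-state has a successor, and all successors stay in G
    return all(G(x) and successors(x) and all(G(y) for y in successors(x))
               for x in samples)

# spot-check on sample states; prints True, witnessing nontermination
print(is_closed_recurrence_set(range(1, 100)))
```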

Time / date: Wed 1st October 2014, 4-5pm
Speaker: Ana Armas, University of Oxford
Location: 1.03 Malet Place Engineering Building (MPEB)
Title: Ontology Module Extraction via Datalog Reasoning
Abstract: Module extraction---the task of computing a (preferably small) fragment M of an ontology T that preserves entailments over a signature S---has found many applications in recent years. Extracting modules of minimal size is, however, computationally hard, and often algorithmically infeasible. Thus, practical techniques are based on approximations, where M provably captures the relevant entailments, but is not guaranteed to be minimal. Existing approximations, however, ensure that M preserves all second-order entailments of T w.r.t. S, which is stronger than is required in many applications, and may lead to large modules in practice. In this paper we propose a novel approach in which module extraction is reduced to a reasoning problem in datalog. Our approach not only generalises existing approximations in an elegant way, but it can also be tailored to preserve only specific kinds of entailments, which allows us to extract significantly smaller modules. An evaluation on widely-used ontologies has shown very encouraging results.
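A naive syntactic sketch of the saturation flavour of module extraction (our own illustration; the paper's reduction builds an actual datalog program and reasons about entailments, not mere symbol co-occurrence): starting from the signature S, repeatedly pull in every axiom that mentions a relevant symbol until a fixpoint is reached.

```python
def extract_module(axioms, signature):
    # axioms: list of (axiom_id, set_of_symbols_used); saturate to fixpoint
    relevant, module = set(signature), set()
    changed = True
    while changed:
        changed = False
        for ax_id, symbols in axioms:
            if ax_id not in module and symbols & relevant:
                module.add(ax_id)
                relevant |= symbols
                changed = True
    return module

# invented toy ontology
ontology = [('ax1', {'Cat', 'Animal'}),
            ('ax2', {'Animal', 'LivingThing'}),
            ('ax3', {'Car', 'Vehicle'})]
print(sorted(extract_module(ontology, {'Cat'})))  # ['ax1', 'ax2']
```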

Time / date: Wed 2nd July 2014, 4-5pm
Speaker: Vincent Nimal, University of Oxford
Location: 6.12 Malet Place Engineering
Title: Don’t sit on the fence: A static analysis approach to automatic fence insertion
Abstract: Modern architectures rely on memory fences to prevent undesired weakenings of memory consistency. As the fences' semantics may be subtle, the automation of their placement is highly desirable. But precise methods for restoring consistency do not scale to deployed systems code. We choose to trade some precision for genuine scalability: our technique is suitable for large code bases. We implement it in our new musketeer tool, and detail experiments on more than 350 executables of packages found in Debian Linux 7.1, e.g. memcached (about 10000 LoC). This talk will be presented at CAV and should last about 15 minutes. This is joint work with Jade Alglave, Daniel Kroening and Daniel Poetzl. Please do not hesitate to comment on the slides and the structure of the talk -- we will really appreciate it!

Time / date: Wed 18th June 2014, 4-5pm
Speaker: Caterina Urban, ENS Paris
Location: Roberts 309
Title: Automatic Inference of Ranking Functions by Abstract Interpretation
Abstract: We present a family of parameterized abstract domains for proving termination of imperative programs by abstract interpretation. The domains automatically synthesize piecewise-defined lexicographic ranking functions and infer sufficient preconditions for program termination. The abstract domains are parameterized by a numerical abstract domain for state partitioning and a numerical abstract domain for ranking functions. This parameterization makes it easy to tune the trade-off between precision and cost of the analysis. We describe instantiations of these domains with intervals, octagons, polyhedra and affine functions. We have implemented a prototype static analyzer for proving conditional termination of programs written in (a subset of) C and, using experimental evidence, we show that it is competitive with the state of the art and performs well on a wide variety of benchmarks.
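A spot-check sketch of what a ranking-function certificate asserts (our own example; the analyzer synthesizes such functions automatically and proves the conditions for all states): the function must be bounded below and strictly decrease on every loop transition.

```python
def step(x, y):
    # loop body of: while x > 0: x, y = x - 1, y + x
    return (x - 1, y + x)

def r(x, y):
    # piecewise-defined ranking function candidate: x inside the loop,
    # 0 once the loop has exited
    return x if x > 0 else 0

def check_ranking(samples):
    # on every sampled state that can take a step, r is non-negative
    # and strictly decreases across the transition
    return all(r(*step(x, y)) < r(x, y) and r(x, y) >= 0
               for x, y in samples if x > 0)

print(check_ranking([(x, y) for x in range(1, 50) for y in (0, 5)]))
```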

Time / date: Wed 4th June 2014, 4-5pm
Speaker: Eric Koskinen, New York University
Location: 6.12 Malet Place Engineering
Title: Local Temporal Reasoning
Abstract: I will discuss our recent development of the first method for proving temporal logic properties of programs written in higher-order languages such as C#, F#, Haskell, Ocaml, Perl, Ruby, Python, etc. By distinguishing between the finite traces and infinite traces in the specification, we obtain rules that permit us to reason about the temporal behavior of program parts via a type-and-effect system, which is then able to compose these facts together to prove the overall target property of the program. The type system alone is strong enough to derive many temporal safety properties, for example when using dependent (refinement) types and temporal effects. We also show how existing techniques can be used as oracles to provide liveness information (e.g. termination) about program parts and that the type-and-effect system can combine this information with temporal safety information to derive nontrivial temporal properties. Our work has application toward verification of higher-order software, as well as modular strategies for interprocedural programs. Joint work with Tachio Terauchi. To appear in LICS 2014.

Time / date: Wed 21st May 2014, 4-5pm
Speaker: Ross Duncan, Univ. Strathclyde
Location: Roberts 309
Title: Verifying Quantum Computations with Graphical Calculi
Abstract: Quantum computation takes place in a mathematical environment rather different to the familiar world of classical computing. Phenomena such as complementarity and entanglement don't have any counterpart in the systems usually studied by computer scientists, and the usual logical structures we rely on to analyse computation don't exist. However, studying the unique features of the quantum world reveals that quantum computation can be formalised using the theory of monoidal categories, where various internal algebras do the heavy lifting. This theory can be expressed in purely diagrammatic terms, and equational reasoning can be carried out by rewriting in a graphical calculus. I'll introduce the relevant structures and show how to verify the correctness of some quantum programs using these techniques.

Time / date: Wed 23rd April 2014, 4-5pm
Speaker: Will Sonnex, University of Cambridge
Location: Roberts 309
Title: Proving terms equivalent using fixpoint fusion
Abstract: When working with functional programming languages the traditional approach to proving two terms equivalent is to use proof by induction. We can use this to prove boolean properties by checking if such a term is equal to "True". Automated proof by induction is therefore a powerful tool for verifying functional programs. My work is focused on proving terms equivalent using term rewriting techniques, particularly fixpoint fusion, to simplify terms within a property until the property is trivially true. For equations this means simplifying both sides to alpha-equal terms; for properties it means rewriting the term to "True". I will demo my automated term rewriting program "Elea", which is able to perform as well as the previous state of the art (up until 2 years ago, damn you HipSpec) in automated induction.

Time / date: Wed 9th April 2014, 4-5pm
Speaker: Gabrielle Anderson, UCL
Location: Roberts 309
Title: Bunched Resource Process Calculus
Abstract: The Synchronous Calculus of Resources and Processes by Pym and others aims to provide a methodology for systems modelling that follows intuition about how complex systems consist of objects as well as dynamics, and that provides compositionality in terms of behaviour and of logical properties. While this work achieved most of its aims, it had one significant drawback: it lacked one direction of the Hennessy-Milner property, namely that logically equivalent systems are also behaviourally equivalent. In this work we present a variant on SCRP that we call BRPC, and show that it has both directions of the HM property.

Time / date: Wed 26th March 2014, 4-5pm
Speaker: Mehrnoosh Sadrzadeh, Queen Mary University of London
Location: Malet Engineering 1.20
Title: Compositional Vector Space Models of Natural Language
Abstract: Vector space models of natural language originated in the work of Chomsky's supervisor: Z. Harris. They represent meanings of words by vectors whose coefficients are the frequency of co-occurrence of the word in different contexts. They have proved successful in various natural language tasks (e.g. automatic thesauri construction), but have been unable to represent meanings of sentences. I will present a model that solves this problem by combining the vector space model with the grammatical structures of sentences. The latter are type-algebras that vary over monoidal structures originated in the work of J. Lambek. The combination is made possible via a homomorphic passage from the monoidal structures to vector spaces. This is joint work with several people, including Coecke (Oxford) and Clark (Cambridge). The inspiration behind the work came from categorical models of quantum mechanics (which also has a vector space model) developed by Abramsky and Coecke.

Time / date: Wed 12th March 2014 4-5pm
Speaker: Gareth Smith, Imperial College London
Location: MPEB 1.03
Title: A Trusted Mechanised JavaScript Specification
Abstract: JavaScript is the most widely used web language for client-side applications. Whilst the development of JavaScript was initially just led by implementation, there is now increasing momentum behind the ECMA standardisation process. The time is ripe for a formal, mechanised specification of JavaScript, to clarify ambiguities in the ECMA standards, to serve as a trusted reference for high-level language compilation and JavaScript implementations, and to provide a platform for high-assurance proofs of language properties. This talk presents JSCert, a formalisation of the current ECMA standard in the Coq proof assistant, and JSRef, a reference interpreter for JavaScript extracted from Coq to OCaml. We have a Coq proof that JSRef is correct with respect to JSCert and have assessed JSRef using test262, the ECMA conformance test suite. Our methodology ensures that JSCert is a comparatively accurate formulation of the English standard, which will only improve as time goes on. We have demonstrated that modern techniques of mechanised specification can handle the complexity of JavaScript.

Time / date: Wed 13th November 2013, 2-3pm
Speaker: Pavle Subotic
Location: Roberts 421
Title: Exploring Interpolants
Abstract: Craig interpolation is a standard method to construct and refine abstractions in model checking. To obtain abstractions that are suitable for the verification of software programs or hardware designs, model checkers rely on theorem provers to find the right interpolants, or interpolants containing the right predicates, in a generally infinite lattice of interpolants for any given interpolation problem. The problem of guiding solvers to the right interpolants has so far been investigated mainly on a syntactic level. In this talk I present a semantic and solver-independent framework for systematically exploring interpolant lattices, based on the notion of interpolation abstraction. I will discuss how interpolation abstractions can be constructed for a variety of logics, and how they can be exploited in the context of software model checking.

Time / date: Friday 8th November 2013, 4-5pm
Speaker: Thomas Stroeder
Location: Roberts 309
Title: Proving Deadlock and Livelock Freedom
Abstract: In this talk we develop a general method for proving deadlock and livelock freedom of concurrent programs with shared memory. Our goal in this work is to support programs which use locks stored in mutable data structures. The key to our technique is the observation that dependencies between locks can be abstracted using recursion and non-determinism in a sequential program such that the termination of the abstraction implies deadlock and livelock freedom. We discuss applications to examples including lock-coupling lists, the optimistic skiplist of Heller et al., the lock-based queue of Michael and Scott, and the balanced binary trees of Kung and Lehman, Bronson et al., and Schlatter Ellis.

Time / date: Fri 11th Oct 2013, 4-5pm
Speaker: Aquinas Hobor
Location: Roberts 508
Title: Decision Procedures over Sophisticated Fractional Permissions
Abstract: Fractional permissions enable sophisticated management of resource accesses in both sequential and concurrent programs. Entailment checkers for formulae that contain fractional permissions must be able to reason about said permissions to verify the entailments. We show how entailment checkers for separation logic with fractional permissions can extract equation systems over fractional shares. We develop a set of decision procedures over equations drawn from the sophisticated boolean binary tree fractional permission model developed by Dockins et al. [4]. We prove that our procedures are sound and complete and discuss their computational complexity. We explain our implementation and provide benchmarks to help understand its performance in practice. We detail how our implementation has been integrated into the HIP/SLEEK verification toolset. We have machine-checked proofs in Coq.

Time / date: Fri 27th Sept 2013, 4-5pm
Speaker: Viktor Vafeiadis
Location: Roberts 309
Title: Relaxed separation logic: A program logic for C11 concurrency
Abstract: The talk will introduce relaxed separation logic (RSL), a program logic for reasoning about concurrent programs running under the C11 relaxed memory model. From a user's perspective, RSL is an extension of concurrent separation logic (CSL) with proof rules for the various kinds of C11 atomic accesses. As in CSL, individual threads are allowed to access non-atomically only the memory that they own, thus preventing data races. Ownership can, however, be transferred via certain atomic accesses. For SC-atomic accesses, arbitrary ownership transfer can occur; for acquire/release atomic accesses, ownership transfer can occur only in one direction; while for relaxed atomic accesses, no ownership transfer is possible.

Time / date: Fri 21st June 2013, 4-5pm
Speaker: Joel Ouaknine
Location: Malet Place Eng 1.02
Title: Decision Problems for Linear Recurrence Sequences
Abstract: Linear recurrence sequences (such as the Fibonacci numbers) permeate a vast number of areas of mathematics and computer science (in particular: program termination and probabilistic verification), and also have many applications in other fields such as economics, theoretical biology, and statistical physics. In this talk, I will focus on three fundamental decision problems for linear recurrence sequences, namely the Skolem Problem (does the sequence have a zero?), the Positivity Problem (are all terms of the sequence positive?), and the Ultimate Positivity Problem (are all but finitely many terms of the sequence positive?). This is joint work with James Worrell and Matt Daws.
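To make the objects of the talk concrete, the following sketch generates terms of a linear recurrence sequence and scans a finite prefix for the properties in question (the helper names are illustrative). Note the hedge: scanning a prefix can refute positivity or exhibit a zero, but can never settle the Skolem or Positivity problems, whose general decidability is exactly what makes them hard.

```python
# Bounded illustration of the Skolem and Positivity problems for a
# linear recurrence sequence (LRS). Scanning finitely many terms can
# only refute, never prove, the "for all terms" questions.

def lrs_terms(coeffs, initial, n):
    """First n terms of u_k = c_1*u_{k-1} + ... + c_d*u_{k-d}."""
    terms = list(initial)
    d = len(coeffs)
    while len(terms) < n:
        terms.append(sum(c * t for c, t in zip(coeffs, reversed(terms[-d:]))))
    return terms[:n]

# Fibonacci: u_k = u_{k-1} + u_{k-2}, with u_0 = 0, u_1 = 1.
fib = lrs_terms([1, 1], [0, 1], 20)
print(fib[:8])                       # [0, 1, 1, 2, 3, 5, 8, 13]
print(0 in fib)                      # True: a zero exists (u_0), so Skolem answers "yes"
print(all(t > 0 for t in fib[1:]))   # True on this prefix only
```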

Time / date: Fri 14th June 2013, 4-5pm
Speaker: Abi See
Location: Roberts 309
Title: Ramsey vs. lexicographic termination proving
Abstract: Termination proving has traditionally been based on the search for (possibly lexicographic) ranking functions. In recent years, however, the discovery of termination proof techniques based on Ramsey's theorem has led to new automation strategies, e.g. size-change, or iterative reductions from termination to safety. We revisit the decision to use Ramsey-based termination arguments in the iterative approach, and describe a new iterative termination proving procedure that instead searches for lexicographic termination arguments. Using experimental evidence we show that this new method leads to dramatic speedups.
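For readers unfamiliar with lexicographic termination arguments, the idea can be illustrated with a small sketch (the loop and the checking code below are illustrative, not taken from the talk): a pair of loop variables strictly decreases in the lexicographic order on pairs of naturals at every iteration, which witnesses termination. Here we merely validate the argument on a concrete run rather than prove it for all inputs.

```python
# Hedged sketch: (x, y) is a lexicographic ranking function for this loop.
# Either y drops with x unchanged, or x drops (and y may be reset higher);
# both cases decrease (x, y) in the lexicographic order on N^2.

def run(x, y):
    trace = []
    while x > 0:
        trace.append((x, y))
        if y > 0:
            y -= 1        # second component drops, first unchanged
        else:
            x -= 1        # first component drops; resetting y is allowed
            y = 5
    trace.append((x, y))
    return trace

def lex_decreasing(trace):
    # Python tuple comparison is exactly the lexicographic order.
    return all(b < a for a, b in zip(trace, trace[1:]))

print(lex_decreasing(run(3, 2)))  # True: every step decreases (x, y)
```

Note that a single linear ranking function on x or y alone would fail here, since y is reset when x decreases; this is precisely the kind of loop where lexicographic arguments shine.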

Time / date: Fri 15th March 2013, 1-2pm
Speaker: Zachary Kincaid
Location: Malet Place Eng 1.20
Title: Proving Correctness of Concurrent Programs with Data Flow Graphs
Abstract: We propose inductive data flow graphs, data flow graphs with incorporated inductive assertions, as the basis of an approach to verifying concurrent programs. An inductive data flow graph accounts for a set of dependencies between program events, and therefore stands as a representation for the set of executions which give rise to these dependencies. By representing information about dependencies rather than control flow, inductive data flow graphs can yield very succinct proofs. Our strategy for verifying concurrent programs defers reasoning about control to the proof checking step, a purely combinatorial problem, thus avoiding the need to reason about data and control simultaneously.

Time / date: Fri 8th March 2013, 11am-12pm
Speaker: Alexey Gotsman
Location: 6.12 MPEB
Title: Abstraction for Weakly Consistent Systems
Abstract: When constructing complex concurrent and distributed systems, abstraction is vital: programmers should be able to reason about system components in terms of abstract specifications that hide the implementation details. Nowadays such components often provide only weak consistency guarantees about the data they manage: in shared-memory systems because of the effects of relaxed memory models, and in distributed systems because of the effects of replication. This makes existing notions of component abstraction inapplicable. In this talk I will describe our ongoing effort to specify consistency guarantees provided by modern shared-memory and distributed systems in a uniform framework and to propose notions of abstraction for components of such systems. I will illustrate our results using the examples of the C/C++ memory model and eventually consistent distributed systems. This is joint work with Mark Batty (University of Cambridge), Sebastian Burckhardt (Microsoft Research), Mike Dodds (University of York) and Hongseok Yang (University of Oxford).

Time / date: Fri 1st March 2013, 4-5pm
Speaker: Shin Yoo
Location: MPEB 6.12
Title: Fault Localization Prioritization: Comparing Information Theoretic and Coverage Based Approaches
Abstract: Test case prioritization techniques seek to maximise early fault detection. Fault localization seeks to use test cases already executed to help find the fault location. There is a natural interplay between the two techniques; once a fault is detected, we often switch focus to fault fixing, for which localization may be a first step. In this paper we introduce the Fault Localization Prioritization (FLP) problem, which combines prioritization and localization. We evaluate three techniques: a novel FLP technique based on information theory, FLINT (Fault Localization using Information Theory), that we introduce in this paper, a standard Test Case Prioritization (TCP) technique and a ‘test similarity technique’ used in previous work. Our evaluation uses five different releases of four software systems. The results indicate that FLP and TCP can statistically significantly reduce fault localization costs for 73% and 76% of cases respectively and that FLINT significantly outperforms similarity-based localization techniques in 52% of the cases considered in the study.
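The information-theoretic intuition behind FLINT can be sketched in a few lines (the distribution, update rule, and numbers below are illustrative assumptions, not the paper's formulation): maintain a probability distribution over candidate fault locations, and favour tests expected to reduce its Shannon entropy.

```python
# Toy sketch of entropy-guided fault localization. A test outcome that
# reshapes belief over candidate fault locations reduces uncertainty,
# measured as Shannon entropy in bits.
import math

def entropy(p):
    return -sum(x * math.log2(x) for x in p if x > 0)

# Uniform belief over 4 candidate statements: maximal uncertainty.
prior = [0.25, 0.25, 0.25, 0.25]

# A passing test covering statements 0 and 1 makes them less suspicious
# (weights are made up for illustration); renormalise to get a posterior.
weights = [0.1, 0.1, 1.0, 1.0]
post = [p * w for p, w in zip(prior, weights)]
post = [x / sum(post) for x in [p * w for p, w in zip(prior, weights)]]

print(round(entropy(prior), 3))        # 2.0 bits
print(entropy(post) < entropy(prior))  # True: the test was informative
```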

Time / date: Fri 8th February 2013, 4-5pm
Speaker: Thomas Dillig
Location: MPEB 6.12
Title: Automated Error Diagnosis Using Abductive Inference
Abstract: When program verification tools fail to verify a program, either the program is buggy or the report is a false alarm. In this situation, the burden is on the user to manually classify the report, but this task is time-consuming, error-prone, and does not utilize facts already proven by the analysis. We present a new technique for assisting users in classifying error reports. Our technique computes small, relevant queries presented to a user that capture exactly the information the analysis is missing to either discharge or validate the error. Our insight is that identifying these missing facts is an instance of the abductive inference problem in logic, and we present a new algorithm for computing the smallest and most general abductions in this setting. We perform the first user study to rigorously evaluate the accuracy and effort involved in manual classification of error reports. Our study demonstrates that our new technique is very useful for improving both the speed and accuracy of error report classification.

Time / date: Monday 4th February 2013
Speaker: Mooly Sagiv
Location: MPEB 6.12
Title: Concurrent Data Representation Synthesis
Abstract: We describe an approach for synthesizing data representations for concurrent programs. Our compiler takes as input a program written using concurrent relations and synthesizes a representation of the relations as sets of cooperating data structures as well as the placement and acquisition of locks to synchronize concurrent access to those data structures. The resulting code is correct by construction: individual relational operations are implemented correctly and the aggregate set of operations is serializable and deadlock free. The relational specification also permits a high-level optimizer to choose the best performing of many possible legal data representations and locking strategies, which we demonstrate with an experiment autotuning a graph benchmark. This is joint work with Alex Aiken and Peter Hawkins (Stanford), Kathleen Fisher (DARPA), and Martin Rinard (MIT). This work is part of Peter Hawkins' thesis; please also see the December CACM article.

Time / date: Friday 1st February 2013, 4pm-5pm
Speaker: Mooly Sagiv
Location: 103 & 104 Executive Suite
Title: Effectively-Propositional Reasoning About Reachability
Abstract: This talk proposes a novel method of harnessing existing SAT solvers to verify reachability properties of programs that manipulate linked-list data structures. Such properties are essential for proving program termination, correctness of data structure invariants, and other safety properties. Our solution is complete, i.e., a SAT solver produces a counterexample whenever a program does not satisfy its specification. This result is surprising since even first-order theorem provers usually cannot deal with reachability in a complete way, because doing so requires reasoning about transitive closure. Our result is based on the following ideas: 1. Programmers must write assertions in a restricted logic without quantifier alternation or function symbols. 2. The correctness of many programs can be expressed in such restricted logics, although we explain the tradeoffs. 3. Recent results in descriptive complexity can be utilized to show that every program that manipulates potentially cyclic, singly- and doubly-linked lists and that is annotated with assertions written in this restricted logic can be verified with a SAT solver. We implemented a tool atop Z3 and used it to show the correctness of several linked list programs.

Time / date: Friday 30th November 2012
Speaker: Dan Ghica
Location: MPEB 6.12
Title: Seamless Computing
Abstract: Computer architectures are evolving and diversifying: CPU-based, GPU-based, distributed, reconfigurable, cloud-based and so on. Along with them we are faced with a proliferation of domain-specific architecture-dependent languages, leading to an environment in which programming is laborious and error-prone. In this talk I will outline an ongoing research project dedicated to the study of architecture-independent programming, from theoretical foundations (semantics and types) to compiler support.

Time / date: Fri 14th Sept 2012, 4-5pm
Speaker: Fabrizio Montesi
Location: Cruciform Foyer 101 Seminar Room 1
Title: Deadlock-freedom-by-design: Multiparty Asynchronous Global Programming
Abstract: Chor is a new programming language for communicating systems, where both behavioural protocol specifications and implementations (choreographies) are given at a global level. Then, the code for each concrete endpoint entity of a system can be automatically generated, with the guarantees of soundness and completeness wrt the originating descriptions. Interestingly, by exploiting global descriptions, Chor defines a new class of asynchronous deadlock-free systems which eludes standard practices in object-oriented and session-based type systems. In this talk, I will give an overview of the formal model behind Chor and I will demonstrate its prototype implementation, an IDE and compiler that allow for the first time to experiment with the global programming paradigm. The target implementation language of the Chor compiler is Jolie, enabling the generated code to interoperate with a variety of environments.

Time / date: Wed 12th Sept 2012, 12-1pm
Speaker: Thomas Stroeder
Location: Malet Place Eng 1.20
Title: Automated Termination Analysis of Imperative Programs with Pointer Arithmetic and Heap Manipulations
Abstract: Automated termination analysis of imperative programs has been studied very intensively during the last decade and powerful tools performing this task have been developed. While some of these tools can already successfully analyze programs involving heap manipulations, the combination of pointer arithmetic and heap manipulations is still an open problem. Such combinations are especially used in C functions for string processing where a string is traversed using pointer arithmetic until a zero value is found on the heap. We will show a novel method for automated termination analysis of imperative programs based on the Low-Level Virtual Machine (LLVM) framework, symbolic evaluation graphs, and term rewriting. We will demonstrate its capabilities on the example of a standard C implementation of the strlen function called on zero-terminated strings and give an overview of preliminary experimental results with a prototype implementation on a small benchmark set of programs.

Time / date: Fri 7 Sept 2012, 11am-12pm
Speaker: Marc Brockschmidt
Location: Cruciform Foyer 201 Seminar Room 3
Title: Proving Java Termination: Reducing the Ugly to the Bad
Abstract: We present symbolic evaluation graphs for Java programs, obtained by symbolic evaluation and abstraction on a simple abstract domain. As over-approximations of all possible program runs, these graphs then serve as basis for further analyses. We present one possible use, termination analysis, for which the graph is transformed into Term Rewrite Systems (TRSs) whose termination implies termination of the original program. For analysis of the resulting TRSs, we can then use existing standard tools.

Time / date: Fri 31st Aug 2012, 4-5pm
Speaker: Delphine Demange
Location: Cruciform Foyer 101 Seminar Room 1
Title: Plan B - A Buffered Memory Model for Java
Abstract: The Java Memory Model (JMM) is an ambitious attempt to provide a semantics for concurrent and possibly racy Java programs. It aims at providing a precise semantics that is portable across all architectures and enables a variety of compiler optimizations. Unfortunately, the JMM has proven to be challenging for users to understand and for compiler writers to use. In fact, the formal statement of the model is flawed and existing Java compilers do not comply with it. We propose to investigate an alternative proposal that has a tractable definition and intuitive semantics, relates easily to existing architectures, while still enabling useful optimizations. To this end, we introduce a Buffered Memory Model for Java. Beyond its simplicity advantages for the programmer, the model is also amenable to formal reasoning and verification. We also demonstrate that it can be implemented efficiently on x86-TSO architectures. (Joint work with V. Laporte, L. Zhao, D. Pichardie, S. Jagannathan and J. Vitek.)
This page was last modified on 18 Jul 2017.