Events

CTCA research seminars

2013

April 24 Alex Shafarenko AstraKahn: a Pressure Cooker for Distributed Computing
The talk will outline the principles behind AstraKahn, a new coordination language
loosely associated with S-Net. I will focus on the bottom layer of AstraKahn, essentially
a virtual machine for stream processing, called the Topology and Progress Layer (TPL).
The most remarkable feature of the TPL is its flexible evaluation order, which can self-adapt
between the two classical extremes, lazy and eager evaluation, in order to support
resource sharing under progress control whilst maximising the utilisation of the
platform's concurrency.

Time permitting, I will touch upon the two higher layers of AstraKahn, the Constraint Layer
and the Data and Instrumentation Layer, and set out an agenda of potential PhD projects
in this area.

April 11 Pavel Zaichenkov Cholesky Decomposition: lessons learnt for S-Net
Cholesky decomposition is a linear algebra problem with a variety of applications (for instance, it is used to solve systems of linear equations). We present a recursive tiled algorithm for the problem. The number of parallel workers varies over time, and so does the amount of serial computation inside each worker. This makes the example a useful demonstration of the computational power of the language and the compiler for mathematical problems in general. We also treat it as an industrial use case, since we compare its performance with an implementation for Intel Concurrent Collections (CnC).

In the talk we give the details of the algorithm, provide implementation details for S-Net, compare the performance with CnC and give a preliminary analysis of the results.
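
As a rough illustration of the kind of tiled algorithm described above, here is a minimal sketch of a tiled Cholesky factorisation in Python/NumPy. It is an iterative variant rather than the recursive formulation from the talk, and the function name and tile-size handling are invented for the example, not taken from the S-Net or CnC code. The three tile kernels (factor, triangular solve, trailing update) are the units of work that a coordination layer could schedule in parallel; here they simply run in loop order.

    import numpy as np

    def tiled_cholesky(A, b):
        """Right-looking tiled Cholesky of a symmetric positive-definite matrix A.

        Illustrative sketch only: the tile kernels below (potrf/trsm/syrk-style
        updates) are the tasks a stream coordinator could run concurrently.
        """
        A = A.copy()
        n = A.shape[0]
        assert n % b == 0, "for brevity, the matrix size must be a multiple of the tile size"
        nt = n // b
        tile = lambda i, j: (slice(i * b, (i + 1) * b), slice(j * b, (j + 1) * b))

        for k in range(nt):
            # factor the diagonal tile (potrf)
            A[tile(k, k)] = np.linalg.cholesky(A[tile(k, k)])
            for i in range(k + 1, nt):
                # triangular solve for the tiles below it (trsm): L_ik = A_ik * L_kk^-T
                A[tile(i, k)] = np.linalg.solve(A[tile(k, k)], A[tile(i, k)].T).T
            for i in range(k + 1, nt):
                for j in range(k + 1, i + 1):
                    # update the trailing submatrix (syrk/gemm): A_ij -= L_ik * L_jk^T
                    A[tile(i, j)] -= A[tile(i, k)] @ A[tile(j, k)].T

        return np.tril(A)  # lower-triangular factor L with L @ L.T == A

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        M = rng.standard_normal((8, 8))
        A = M @ M.T + 8 * np.eye(8)     # symmetric positive definite by construction
        L = tiled_cholesky(A, b=4)
        print(np.allclose(L @ L.T, A))  # True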

March 14 Olga Tveretina A Conditional Superpolynomial Lower Bound for Extended Resolution
Extended resolution is a propositional proof system that polynomially simulates very powerful proof systems such as Frege systems and extended Frege systems. Feasible interpolation has been one of the most promising approaches to proving lower bounds for propositional proof systems and for bounded arithmetic. In this talk I will show that an extended resolution refutation of an unsatisfiable CNF representing the clique-graph colouring principle admits feasible interpolation. This yields a conditional result: if there is a superpolynomial lower bound on the non-monotone circuit size for this class of formulas, then extended resolution has a superpolynomial lower bound.
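
For orientation, the general feasible-interpolation template that this argument instantiates can be sketched as follows; the notation A, B, C and the size bound follow the textbook formulation and are paraphrased here rather than quoted from the talk.

    % Clique-graph colouring principle: for a graph G encoded by the shared
    % variables x, let
    %   A(x,y) := "y encodes a k-clique of G",
    %   B(x,z) := "z encodes a proper (k-1)-colouring of G".
    % Then A(x,y) \land B(x,z) is unsatisfiable.
    %
    % Feasible interpolation: from a refutation of A \land B of size s one can
    % extract a Boolean circuit C(x) with |C| \le s^{O(1)} such that
    \[
      A(x,y) \models C(x), \qquad B(x,z) \models \lnot C(x),
    \]
    % i.e. C accepts every graph that contains a k-clique and rejects every
    % (k-1)-colourable graph. Consequently, if every such separating
    % (non-monotone) circuit requires superpolynomial size, then s must be
    % superpolynomial, which gives the conditional lower bound for extended
    % resolution.
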
January 17 Vicent Sanz-Marco Fault-tolerant Coordination of S-Net Stream-processing Networks
Fault tolerance and robustness are system properties of increasing importance, both in the domain of embedded computing and in the domain of high-performance computing.

We study the applicability of fault-tolerance mechanisms in the context of stream-processing networks, in particular based on the coordination language S-Net. We identify three basic fault-tolerance mechanisms and discuss the technical solutions for them within S-Net. The applicability of these mechanisms depends on the requirements of the concrete application domain. The contribution of this paper is a feasibility study of tool-supported fault-tolerance mechanisms in a flexible coordination language allowing for asynchronous execution. As part of this feasibility study we discuss potential extensions of the S-Net language and runtime system in order to implement the identified solutions.

2012

December 6 Nguyen Vu Thien Nga What is Stream Processing?
The term stream processing is used widely in the literature, and a large number of research projects have used it to describe their own systems.

This presentation provides an overview of stream processing systems and of the different approaches to constructing them.

These approaches include data flow models with functional and logic programming, reactive and signal processing systems, and stream processing in hardware design and verification.

The talk also introduces new and recent stream processing systems such as S-Net, StreamIt, Concurrent Collections, MapReduce, etc.

November 22 Olga Tveretina A Semidecidable Procedure for Reachability Analysis of Piecewise Constant Derivative Systems
This talk will focus on a class of hybrid systems called piecewise constant-derivative systems, for which the reachability problem has been proven decidable in two dimensions and undecidable in three or more dimensions. The decidability results for two dimensions rely on the existence of a periodic trajectory after a finite number of steps.
The periodic trajectory does not generally exist in higher dimensions. Nevertheless, higher-dimensional systems also feature a degree of regularity. I will introduce the notion of a quasicycle, which represents this kind of acyclic behaviour. I will show that certain types of quasicycles permit the computation of exact reachable sets and will illustrate the effectiveness of my approach on some examples.
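
To illustrate the model class only (not the reachability procedure presented in the talk), here is a toy forward simulation of a planar PCD system in Python/NumPy. The quadrant-shaped regions, the derivative vectors and the function name are invented for the example, and the naive time-stepping stands in for the exact boundary-crossing analysis a real procedure would use.

    import numpy as np

    def simulate_pcd(x0, slopes, t_end, dt=1e-3):
        """Toy simulation of a planar piecewise-constant-derivative system.

        The plane is split into four quadrant-shaped regions; inside each
        region the derivative is the fixed vector slopes[region]. Reachability
        analysis would compute exact boundary-crossing times instead of the
        naive Euler stepping used here for illustration.
        """
        def region(p):
            x, y = p
            return (0 if x >= 0 else 1) + (0 if y >= 0 else 2)  # quadrant index 0..3

        p = np.asarray(x0, dtype=float)
        trace = [p.copy()]
        for _ in range(int(t_end / dt)):
            p = p + dt * np.asarray(slopes[region(p)], dtype=float)
            trace.append(p.copy())
        return np.array(trace)

    if __name__ == "__main__":
        # Derivatives chosen so that the trajectory circulates around the origin.
        slopes = {0: (-1.0, 1.0), 1: (-1.0, -1.0), 3: (1.0, -1.0), 2: (1.0, 1.0)}
        trace = simulate_pcd((1.0, 0.0), slopes, t_end=5.0)
        print(trace[-1])  # roughly (0, 1): the orbit is a diamond around the origin
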
November 15 Michael Zolda Dependable Stream Processing
In today’s world we have become highly dependent on various kinds of computer-controlled systems. Many of us have jobs that classify us as what has come to be known as “information workers”. As such, we depend on the availability of a reliable high-speed communication infrastructure that allows us to obtain and share the information we require on a day-to-day basis to do our work. At the same time, we entrust our lives to safety-critical embedded systems whenever we use our car. On what basis can we decide how much trust we can reasonably put into a particular system?

The key term for such considerations is “dependability”, an umbrella term that encompasses such concepts as: availability, reliability, safety, integrity, and maintainability. Given our high dependence on critical systems of various kinds, engineers in the high-dependability domain are spending considerable effort to create trustworthy systems.

While our dependence on such systems is ever-increasing, we are also facing the prospect of highly unreliable hardware. Given the current trend in the semiconductor industry, the only viable way to serve the ever-increasing demand for processing power and energy efficiency seems to be to build highly efficient, but inherently unreliable, hardware.

Last but not least, systems-on-chip in nano-scale process technology are beginning to resemble communication networks, as was acknowledged by the International Technology Roadmap for Semiconductors in 2001.

By putting these individual trends together, we see a clear need for Dependable Stream Processing: while stream processing seems to be ideally suited to the network-based processor architectures we can expect in the medium term, applying methods from the field of dependable computing will ensure that future systems can provide trustworthy services, even on unreliable hardware.

November 8 Pavel Zaichenkov Programming for HPC in equations
In this seminar we discuss a project that aims to address performance issues in programs formulated as sets of equations. Equations are a very natural way to formalize a program in terms of mathematical objects, but a transformation into a low-level representation is required before the program can be executed on a target architecture, and there are no universal techniques for doing this. In this project we translate a high-level “mathematical” program into an intermediate language intended for high-performance computing.

October 11 Santanu Dash An adaptive approach to type-based flow analysis
Program analysis has moved beyond the traditional domain of program optimisation. Today, it finds application in multiple areas such as program verification, security, quality assurance and even high-level system design. The nature of program analysis is such that we need to infer program properties at different program points. Type inference in modern compilers does something similar, inferring properties of expressions as part of the normal compilation process. Due to the evident similarities between program analysis and type inference, there has been recent interest in integrating the former with the latter. However, the computational complexity of type inference stands in the way of uncovering any synergy between the two. Type inference typically requires computing the transitive closure of atomic flow constraints, which can be prohibitively expensive for large programs. In this research seminar, we discuss alternative approaches to computing the transitive closure of atomic flow constraints, which can prove beneficial to type inference and, consequently, to program analysis as well.
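
As a small, self-contained illustration of the baseline cost in question (not the adaptive technique proposed in the seminar), the sketch below computes the transitive closure of atomic flow constraints of the form a ⊑ b (a flows to b) with a naive worklist algorithm; the function name and constraint representation are chosen for the example.

    from collections import defaultdict

    def flow_closure(constraints):
        """Worklist transitive closure of atomic flow constraints (a, b), read as a <= b.

        Whenever a <= b and b <= c are both derived, a <= c is added. The worst
        case is cubic in the number of flow variables, which is the cost that
        makes naive closure prohibitively expensive for large programs.
        """
        succ = defaultdict(set)   # b -> {c : b <= c already derived}
        pred = defaultdict(set)   # b -> {a : a <= b already derived}
        closed = set()
        work = list(constraints)

        while work:
            a, b = work.pop()
            if a == b or (a, b) in closed:
                continue
            closed.add((a, b))
            succ[a].add(b)
            pred[b].add(a)
            # compose the new constraint with previously derived ones on both sides
            for c in succ[b]:
                work.append((a, c))
            for z in pred[a]:
                work.append((z, b))
        return closed

    if __name__ == "__main__":
        # x flows to y, y flows to z, z flows to w  ==>  x flows to z and to w, etc.
        atomic = {("x", "y"), ("y", "z"), ("z", "w")}
        print(sorted(flow_closure(atomic)))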