Applied and Computational Category Theory

Category Theory is a well-known and powerful mathematical modeling language with a wide range of applications in mathematics and computer science, including in particular the semantic foundations of topics in software science and development. For about 30 years there have been workshops covering these topics. More recently, the ACCAT group established by Jochen Pfalzgraf at Linz and Salzburg has begun to study interesting applications of category theory in geometry, neurobiology, cognitive science, and artificial intelligence. It is the intention of this ACCAT workshop to bring together leading researchers in these areas with those in software science and development, in order to transfer categorical concepts and theories in both directions. The workshop will consist of 12 invited lectures, for which extended abstracts will be presented.

Contact: Jochen Pfalzgraf, Hartmut Ehrig



Algebraic Aspects of Stochastic Systems

This satellite event has been canceled.

As stochastic systems become more important in such diverse areas as model checking, concurrency theory, testing, and control theory, it becomes increasingly important and interesting to study the (co-)algebraic properties of these systems, which center around notions like bisimulation, congruence, and simplicity; the recent incorporation of continuous-time systems is particularly exciting. The purpose of this workshop is to study stochastic and related systems from an algebraic point of view, and to relate them to applications. Papers are requested in the following areas (this list is not exhaustive):

Contact: Ernst-Erich Doberkat



Fifth International Workshop on Automated Verification of Infinite-State Systems

The fifth in a series of workshops, AVIS 2006 is a forum for theoreticians, tool builders, and practitioners interested in methods and tools for the automatic verification of large practical systems. Model checking is the focus of current formal methods research. However, it is limited in scope due to the state explosion problem. For large practical systems, theorem proving, a process that requires manual effort and mathematical sophistication, is the only viable alternative. Hybrid techniques have recently emerged that afford users full automation by combining the ease of use of model checking with the power of theorem proving. AVIS is a forum for exchanging ideas and experiences in this emerging area of research.

Contact: Ramesh Bharadwaj



8th International Workshop on Coalgebraic Methods in Computer Science

During the last few years, it has become increasingly clear that a great variety of state-based dynamical systems, like transition systems, automata, process calculi, and class-based systems, can be captured uniformly as coalgebras. Coalgebra is developing into a field in its own right, with deep mathematical foundations, a growing range of applications, and interactions with various other fields such as reactive and interactive system theory, object-oriented and concurrent programming, formal system specification, modal logic, dynamical systems, control systems, category theory, algebra, and analysis. The aim of the workshop is to bring together researchers with a common interest in the theory of coalgebras and its applications.
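To make the uniform coalgebraic view concrete, the following sketch (not from the workshop materials; names and the example automaton are illustrative) shows a deterministic automaton as a coalgebra of the functor X → Bool × X^A, i.e. a single map sending each state to its observable output and its transition function:

```python
# A deterministic automaton over the alphabet {'a', 'b'} viewed as a coalgebra:
# each state maps to (is_accepting, next-state map). Behaviour (the accepted
# language) is obtained by unfolding this structure map.

def structure(state):
    """Coalgebra map: state -> (output, transitions)."""
    table = {
        'even': (True,  {'a': 'odd',  'b': 'even'}),
        'odd':  (False, {'a': 'even', 'b': 'odd'}),
    }
    return table[state]

def accepts(state, word):
    """Unfold the coalgebra along a word and read off the final output."""
    for letter in word:
        _, delta = structure(state)
        state = delta[letter]
    accepting, _ = structure(state)
    return accepting

# 'even' accepts exactly the words with an even number of 'a's
assert accepts('even', 'aab') is True
assert accepts('even', 'ab') is False
```

Two states are bisimilar precisely when no sequence of observations through `structure` can distinguish them, which is the coalgebraic notion of behavioural equivalence.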

Contact: John Power



Fifth Workshop on Compiler Optimization Meets Compiler Verification

COCV provides a forum for researchers and practitioners working on optimizing and verifying compilation, and on related fields such as translation validation, certifying and credible compilation, programming language design, and programming language semantics, for exchanging their latest findings and for plumbing the mutual impact of these fields on each other. By encouraging discussions and co-operation across different, yet related, fields, the workshop strives to bridge the gap between the communities and to stimulate synergies and cross-fertilization among them.

Contact: Wolf Zimmermann



Designing Correct Circuits

The 2006 DCC workshop aims to bring together academic and industrial researchers in formal methods for hardware design and verification. It will allow participants to learn about the current state of the art and provide a venue for debate about how more effective methods can be developed.

Much research in formally-based hardware design and verification now takes place in industry, rather than in academia. For the long term survival of our field, we must ensure that academics and industrial researchers continue to work together on the real problems facing microprocessor designers and those developing System on a Chip solutions. A major aim of the workshop is to open the necessary communication channels.

Contact: Mary Sheeran, Tom Melham



First International Workshop on Emerging Applications of Abstract Interpretation

Abstract interpretation is almost 30 years old. These three decades have witnessed great success for the methodology, in particular in the analysis and verification of programming languages and systems: static program analysis, compilers, program verification, program transformation, and program semantics. This workshop focuses on emerging applications of abstract interpretation in nontraditional or even innovative areas, such as security, model checking, embedded and real-time systems, systems biology, software watermarking and obfuscation, and hardware verification. The workshop's aim is to spread the methods of abstract interpretation to nontraditional areas and to share common experiences in using abstract interpretation as an approximation technique.
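The flavour of abstract interpretation as an approximation technique can be conveyed by the textbook sign domain (a sketch independent of the workshop itself; the domain and operator names are illustrative):

```python
# Minimal abstract interpretation example: the sign domain {NEG, ZERO, POS, TOP}
# approximates integers, and abs_mul approximates multiplication without
# running the concrete computation. TOP means "sign unknown".

NEG, ZERO, POS, TOP = 'neg', 'zero', 'pos', 'top'

def alpha(n):
    """Abstraction function: map a concrete integer to its sign."""
    return ZERO if n == 0 else (POS if n > 0 else NEG)

def abs_mul(a, b):
    """Abstract multiplication on signs."""
    if ZERO in (a, b):
        return ZERO
    if TOP in (a, b):
        return TOP
    return POS if a == b else NEG

# Soundness check on a few samples: the abstraction of the concrete result
# equals the abstract operation applied to the abstractions.
for x in (-3, 0, 5):
    for y in (-2, 0, 7):
        assert alpha(x * y) == abs_mul(alpha(x), alpha(y))
```

A static analyzer iterates such abstract operations over a program's statements until a fixpoint, obtaining sound (if approximate) facts about all executions at once.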

Contact: Francesco Ranzato



Formal Foundations of Embedded Software and Component-Based Software Architectures

The aim of this workshop is to bring together researchers from academia and industry interested in formal modelling approaches as well as associated analysis and reasoning techniques with practical benefits for embedded software and component-based software engineering.

Recent years have seen the emergence of formal and informal techniques and technologies for the specification and implementation of component-based software architectures. With the growing need for safety-critical embedded software and the increased relevance of reliability and scalability for enterprise software, this trend has been amplified.

FESCA is therefore interested in formal methods known from the areas of embedded software development and software engineering, and tries to cross-fertilize their research and application. One strength of FESCA is the link it establishes between the embedded software design community and the formal software engineering community, exploring how formal approaches developed within one community affect, or can be exploited by, the other.

Contact: Ralf Reussner (Ralf.Reussner@Informatik.Uni-Oldenburg.DE)



Workshop on Future Research Challenges for Software and Services

With the Framework VII programme on the horizon, the aim of this workshop is twofold:

The workshop will require registration and will be open to all interested parties, including ETAPS 2006 participants. There will be a specific call for papers. EU projects on software technologies will be encouraged to present their views, as will industrial initiatives and available road-mapping support actions.

Contact: Tiziana Margaria



Fifth International Workshop on Graph Transformation and Visual Modeling Techniques

GT-VMT 2006 is the fifth workshop of a series that serves as a forum for all researchers and practitioners interested in the use of graph-based notations, techniques, and tools for the specification, modeling, validation, manipulation, and verification of complex systems. The aim is to promote engineering approaches for the design and implementation of visual modeling techniques that are based on robust formalizations. Contributions are welcome from communities working on popular visual modeling notations like UML, Petri nets, graph transformation, and business process/workflow models.

This year's workshop will have an additional focus on Models for Mobile Systems and Services (including Service-Oriented and GRID computing architectures) where huge and highly dynamic graph-like structures offer a challenging ground for graph transformation techniques and tools.

Contact: Roberto Bruni



Sixth Workshop on Language Descriptions, Tools and Applications

The LDTA workshops bring together researchers from academia and industry interested in the field of formal language definitions and language technologies, with an emphasis on tools developed for or with these language definitions. This active area of research involves the following basic approaches:

Although various specification formalisms like attribute grammars, action semantics, operational semantics, and algebraic approaches have been developed, they are not widely exploited in current practice. A goal of LDTA is to increase the use of these formalisms through demonstrations of their practical utility in, among others, the following application domains:

Contact: Eric Van Wyk



Second Workshop on Model Based Testing

This workshop is devoted to model-based testing of software and hardware. Models guide such efforts as test creation and test verification.

Model-based testing is gaining attention with the advent of models in software/hardware design and development. Of particular importance are formal models with precise semantics. Testing with such models allows one to measure the degree of the product's conformance with the model. Techniques to support model-based testing are drawn from areas like formal verification, model checking, control and data flow analysis, grammar analysis, Markov processes, game theory, and various other areas.

The intent of this workshop is to discuss the state of the art in the theory, application, tools, and industrialization of model-based testing.

Contact: Bernd Finkbeiner



Fourth Workshop on Quantitative Aspects of Programming Languages

Quantitative aspects of computation are important, and sometimes essential, in characterising the behaviour and determining the properties of systems. They are related to the use of physical quantities (storage space, time, bandwidth, etc.) as well as mathematical quantities (e.g. probability and measures for reliability, risk, and trust). Such quantities play a central role in defining both the model of a system (architecture, language design, semantics) and the methodologies and tools for the analysis and verification of system properties. The aim of this workshop is to discuss the explicit use of quantitative information such as time and probabilities, both at the specification level and as a tool for analysis and verification. In particular, the workshop focuses on:
  1. the design of probabilistic and real-time languages and the definition of semantic models for such languages;
  2. methodologies for the analysis of probabilistic and timing properties (e.g. security, safety, schedulability) and of other quantifiable properties such as reliability (for hardware components), trustworthiness (in information security), and resource usage (e.g. worst-case memory/stack/cache requirements);
  3. the probabilistic analysis of systems which do not explicitly incorporate quantitative aspects (e.g. performance, reliability, and risk analysis);
  4. applications to safety-critical systems, communication protocols, control systems, asynchronous hardware, and any other domain involving quantitative issues.

Contact: Alessandra Di Pierro



5th International Symposium on Software Composition

The goal of SC 2006 is to advance research in component-based software development. Challenges include the composition of components, their development, and their verification. Another challenge is the scalability of any component-based software development approach to computing devices with different capabilities, ranging from cell phones to server farms. Relevant topics of interest comprise all issues related to component-based software development and its application. SC 2006 will be the fifth workshop on software composition. The series aims at bringing together the research and industrial communities in order to develop a better understanding of how software components may be used to build and maintain large software systems. Papers relating theory and practice are therefore particularly welcome.

Contact: Welf Löwe



Synchronous Languages, Applications, and Programming

SLAP is a workshop dedicated to synchronous languages. Such languages emerged in the 1980s as a new method for designing real-time embedded critical systems. There is now strong industrial interest in them: Lustre, Esterel, and Signal are used with success to program real-time and safety-critical applications, from nuclear power plant management layers to Airbus flight control systems. The purpose of the SLAP workshop is to bring together researchers and practitioners who work in the field of reactive systems. The workshop topics cover all of these issues: synchronous models of computation, synchronous languages and programming formalisms, compiling techniques, formal verification, test and validation of programs, case studies, education, etc.

Contact: Florence Maraninchi



13th International SPIN Workshop on Model Checking of Software

The SPIN Workshop is a forum for practitioners and researchers interested in model-checking based techniques for the validation and analysis of communication protocols and software systems. The workshop will focus on topics including theoretical and algorithmic foundations and tools for software model checking, model derivation from code and code derivation from models, techniques for dealing with large and infinite state spaces, and applications. The workshop aims to foster interactions and exchanges of ideas with all related areas in software engineering.

Contact: Antti Valmari



Third International Workshop on Term Graph Rewriting

Term graph rewriting is concerned with the representation of expressions as graphs and their evaluation by rule-based graph transformation. The advantage of using graphs rather than strings or trees is that common subexpressions can be shared, which improves the efficiency of computations in space and time. Sharing is ubiquitous in implementations of programming languages: many implementations of functional, logic, object-oriented and concurrent calculi are based on term graphs. Term graphs are also used in symbolic computation systems and automated theorem proving. Research in term graph rewriting ranges from theoretical questions to practical implementation issues. Many different research areas are included, for instance: the modelling of first- and higher-order term rewriting by (acyclic or cyclic) graph rewriting, the use of graphical frameworks such as interaction nets and sharing graphs to model strategies of evaluation (for instance, optimal reduction in the lambda calculus), rewrite calculi on cyclic higher-order term graphs for the semantics and analysis of functional programs, graph reduction implementations of programming languages, and automated reasoning and symbolic computation systems working on shared structures.
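The space and time savings from sharing can be illustrated with a small sketch (independent of the workshop; the `Node` representation is illustrative): the term (x+y)*(x+y) is stored as a graph in which both arguments of `*` point to the same `+` node, and a cache keyed by node identity ensures the shared subterm is evaluated only once.

```python
# A term *graph* for (x+y)*(x+y): the subterm x+y exists once and is shared.

class Node:
    def __init__(self, op, *children, value=None):
        self.op, self.children, self.value = op, children, value

def evaluate(node, cache=None):
    """Evaluate a term graph; the identity-keyed cache realises sharing."""
    if cache is None:
        cache = {}
    if id(node) in cache:
        return cache[id(node)]
    if node.op == 'const':
        result = node.value
    else:
        args = [evaluate(c, cache) for c in node.children]
        result = args[0] + args[1] if node.op == '+' else args[0] * args[1]
    cache[id(node)] = result
    return result

x, y = Node('const', value=2), Node('const', value=3)
s = Node('+', x, y)       # the shared subterm x+y
expr = Node('*', s, s)    # both children are the SAME node, not copies
assert evaluate(expr) == 25
```

In a tree representation the `+` node would be duplicated and evaluated twice; with the graph it is computed once, which is exactly the efficiency gain that graph reduction implementations of functional languages exploit.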

Contact: Ian Mackie



Sixth Workshop on Issues in the Theory of Security

WITS is the official workshop organised by IFIP WG 1.7 on "Theoretical Foundations of Security Analysis and Design", established to promote the investigation of the theoretical foundations of security, to discover and promote new areas of application of theoretical techniques in computer security, and to support the systematic use of formal techniques in the development of security-related applications. The members of the WG hold their annual workshop as an open event to which all researchers working on the theory of computer security are invited. This is the sixth meeting of the series, organized in cooperation with ACM SIGPLAN and the working group FoMSESS of the German Computer Society (GI). Proceedings will be published as "Issues in the Theory of Security" (publisher pending).

Contact: Jan Jürjens



6th International Workshop on Rewriting Logic and its Applications

Rewriting logic (RL) is a natural model of computation and an expressive semantic framework for concurrency, parallelism, communication and interaction. It can be used for specifying a wide range of systems and languages in various application fields. It also has good properties as a metalogical framework for representing logics. In recent years, several languages based on RL (ASF+SDF, CafeOBJ, ELAN, Maude) have been designed and implemented. The aim of the workshop is to bring together researchers with a common interest in RL and its applications, and to give them the opportunity to present their recent works, discuss future research directions, and exchange ideas.

Contact: Carolyn Talcott


TUTORIAL - The Computer Science Perspective on Quantum Information Processing and Communication

Saturday, March 25 by Philippe Jorrand

8:30 to 13:00, location: EI 3A

About Quantum Information Processing and Communication

Research in quantum information processing and communication was born some twenty years ago, as a child of two major scientific achievements of the 20th century, namely quantum physics and information sciences. The driving force of this interdisciplinary research is that of looking for the consequences of having information encoding, computation and communication based upon the laws of quantum physics, i.e. the ultimate knowledge that we have, today, of the foreign world of elementary particles, as described by quantum mechanics. Breakthroughs in cryptography, communications, information theory, algorithmics and, more recently, in abstract computational models, programming languages and semantics frameworks, have shown that this transplantation of information from classical to quantum has far reaching consequences, both quantitative and qualitative, and opens new avenues for research within the foundations of computer science.

About the Tutorial

The aim of this tutorial is to give a survey of the motivations, principles, main results, and long-term perspectives of the currently very active research in quantum information processing and communication, from a computer science point of view. No prior knowledge of quantum mechanics is required to follow this tutorial. There will be four lectures of 45 minutes each:
  1. Basic principles for quantum computation and communication: quantum resources and operations, classically controlled quantum computation, state of the art for the physical implementation of quantum computers;
  2. Communication, the quantum way: quantum key distribution, quantum state teleportation;
  3. Computing with quantum resources: Grover's and Shor's quantum algorithms, generalizations and other principles in quantum algorithmics;
  4. Foundational structures for quantum computation: measurement-based quantum computation, abstract quantum machines, quantum lambda calculi, quantum process calculi, quantum type systems, issues in semantics, open problems.

Who Should Attend

Computer scientists from universities and research centers, from academia and industry, with an interest in long-term orientations, perspectives, challenging issues, and open problems in information processing and communication, especially in areas like programming, semantics, algorithms, and other foundational issues in computer science. A reasonable background in computer science will be assumed on the part of the audience, but no strong expertise is required in any specific area. It must be stressed that absolutely no prior knowledge of quantum mechanics will be assumed of the audience. Basic notions of linear algebra at the undergraduate level (vectors and matrix products) will be helpful at some points in the lectures.


Philippe Jorrand
Quantum Computation Group
Leibniz Laboratory
46 avenue Felix Viallet
38000 Grenoble - France

Phone: +33 4 76 57 46 47
Fax: +33 4 76 57 46 02

Dr. Philippe Jorrand is currently Director of Research at CNRS (Centre National de la Recherche Scientifique).

Research experience:


08:30 - 10:30: Lectures 1 and 2, and discussion
10:30 - 11:00: Coffee break
11:00 - 13:00: Lectures 3 and 4, and discussion

Detailed Presentation

Information is physical: the laws which govern its encoding, processing, and communication are bound by those of its unavoidably physical incarnation. In today's informatics, information obeys the laws of Newton's and Maxwell's classical physics: this assertion holds all the way from commercial computers down to (up to?) their most abstracted models, e.g. Turing machines and the lambda calculus. Today's computation is classical.
Lecture 1: Basic Principles for Quantum Computation and Communication
The basic ingredients and principles of quantum information and quantum computation will be introduced in the first lecture, and their fundamental differences with their classical counterparts will be stressed: quantum bits (qubits), deterministic unitary operations on one and two qubits, probabilistic measurement on one and two qubits, entangled quantum states, no-cloning theorem, combinations of these operations and properties into computations on quantum registers of arbitrary sizes, quantum massive parallelism. The simplest architecture for a quantum computer will be briefly explained, which implements the principles of classically controlled quantum computation, and a state of the art will be sketched about the current status, perspectives and difficulties of physically implementing the quantum side of such a computer.
Lecture 2: Communication, the Quantum Way
The second lecture will be devoted to quantum breakthroughs in the domain of communication, which show evidence that relying upon quantum resources allows qualitative performances beyond the reach of classical information. Two major results will be presented and explained: quantum key distribution, also called quantum cryptography, and quantum state teleportation. Rather than relying upon the difficulty of breaking a code, the quantum cryptographic protocol of Bennett and Brassard (1984) takes advantage of the physical property that quantum measurement causes a probabilistic evolution of the observed quantum state. Undesirable observations thus become detectable, and the secrecy of cryptographic key distribution can be guaranteed. With the quantum state teleportation protocol of Bennett et al. (1993), a piece of quantum data initially located at point A can be relocated at a distant point B, without any quantum object carrying that data being moved along a trajectory from A to B. Both of these theoretical results have been experimentally implemented since their initial publications, and quantum cryptography is starting to become available in commercial products.
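The key-sifting step of the Bennett-Brassard protocol can be sketched in a few lines of Python (a toy simulation, not part of the tutorial materials: qubits are not modelled, there is no eavesdropper, and all names are illustrative; a mismatched measurement basis simply yields a random bit):

```python
import random

def bb84_sift(n, rng=random.Random(0)):
    """Toy BB84 sifting: keep only positions where sender and receiver
    happened to choose the same basis."""
    bits    = [rng.randint(0, 1) for _ in range(n)]   # sender's raw key bits
    a_bases = [rng.randint(0, 1) for _ in range(n)]   # 0 = rectilinear, 1 = diagonal
    b_bases = [rng.randint(0, 1) for _ in range(n)]   # receiver's bases
    # same basis -> the sent bit is read faithfully; otherwise a random outcome
    measured = [bits[i] if a_bases[i] == b_bases[i] else rng.randint(0, 1)
                for i in range(n)]
    # the bases (not the bits) are compared publicly; matching positions survive
    key_a = [bits[i]     for i in range(n) if a_bases[i] == b_bases[i]]
    key_b = [measured[i] for i in range(n) if a_bases[i] == b_bases[i]]
    return key_a, key_b

ka, kb = bb84_sift(64)
assert ka == kb          # with no eavesdropper, the sifted keys agree
```

An eavesdropper measuring in a randomly guessed basis would disturb roughly a quarter of the surviving positions, which the parties detect by sacrificing and comparing a sample of the sifted key.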
Lecture 3: Computing with Quantum Resources
The third lecture will be mostly devoted to two major results in quantum algorithmics, which show evidence that using quantum resources for computing allows quantitative performances infeasible by classical means. The quantum algorithm of Grover (1996), which achieves a quadratic speedup on a broad class of problems, will be presented first. Using the quantum algorithmic technique of amplitude amplification, this algorithm needs on the order of sqrt(n) queries to a boolean oracle f to retrieve an element satisfying f from an unordered database of size n, whereas classical computing requires on the order of n queries to f in a similar situation. Then, the quantum algorithm of Shor (1994) will be presented in some detail. It solves a class of problems (e.g. finding factors of very large integers, which is an instance of the hidden subgroup problem) exponentially faster than any known classical algorithm. It will be explained by developing the example of integer factorization, for which no polynomial classical algorithm is known. With an appropriate reduction of factorization to order finding, Shor used a polynomial quantum Fourier transform to find the prime factors of an integer with bounded probability in polynomial time. In addition to amplitude amplification and order finding, other, more recent, quantum algorithmic techniques will also be sketched.
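The classical part of Shor's reduction is simple enough to run by hand; in this sketch (not from the tutorial materials) the brute-force `order` function stands in for the step that the quantum Fourier transform performs in polynomial time:

```python
from math import gcd

def order(a, n):
    """Smallest r > 0 with a^r = 1 (mod n); brute force here, quantum in Shor."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n, a):
    """Factor n from the order of a modulo n, when the choice of a is lucky."""
    assert gcd(a, n) == 1
    r = order(a, n)
    if r % 2 == 1:
        return None                   # odd order: retry with another a
    half = pow(a, r // 2, n)
    if half == n - 1:
        return None                   # trivial square root of 1: retry
    return gcd(half - 1, n), gcd(half + 1, n)

# The textbook instance: 7 has order 4 modulo 15, and 7^2 = 4 (mod 15)
# yields the factors gcd(3, 15) = 3 and gcd(5, 15) = 5.
assert order(7, 15) == 4
assert shor_classical(15, 7) == (3, 5)
```

Everything except `order` runs in polynomial time classically; replacing the loop with quantum period finding is what makes the whole factorization polynomial.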
Lecture 4: Foundational Structures for Quantum Computation
Much of the quantum informatics research in the last ten to fifteen years has focused on a quest for new quantum algorithms and new kinds of quantum protocols, and great advances have been made, as described in the first three lectures. Only in the last five years or so has the exploration started of other questions fundamental to the whole quantum informatics endeavor. They address foundational structures which have long been recognized as essential to the understanding of classical computing but, strangely enough, were left aside until recently in the quantum setting. The current state of the art on these issues will be presented in the fourth lecture. Among them are the minimal resources required for universality, with an emphasis on measurement-based quantum computing, which is considered one of the most promising ways of organizing quantum computations and of designing quantum computers. Then, several families of abstract quantum computation models will be presented: pure quantum Turing machines, measurement-based and classically controlled Turing machines, quantum cellular automata, quantum lambda calculi with quantum type systems, and quantum process calculi. It will be shown how these abstract models of quantum computation give rise to the design of quantum programming languages, and a number of open problems for the design of adequate semantic frameworks will be explained. Concluding remarks will close this tutorial with an overview of the current hot topics in quantum information processing and communication.


TUTORIAL - Phoenix: A Framework for Code Generation and Program Analysis

Saturday, April 1 by Chuck Mitchell and Mark Lewin

9 am to 1 pm, location: EI 8

About Phoenix

Phoenix is an extensible infrastructure for code generation, optimization, and analysis. A collaboration between Microsoft Research and Microsoft's Developer Division, Phoenix technology will be used in future versions of Microsoft compilers, code generators, and development tools. Many academic researchers are incorporating Phoenix into their research infrastructure. To date, many universities have downloaded the Phoenix Research Development Kit (RDK), which is poised for broad distribution in December 2005. Further information on the Phoenix RDK can be found at

About the Tutorial

This tutorial will discuss the design and implementation of Phoenix and provide an introduction to using the Phoenix framework for building code generation plug-ins, as well as tools that perform static analysis and program transformation for inserting instrumentation. During the tutorial we will provide an in-depth discussion of the Phoenix IR, the facilities for analyzing and transforming the IR, Phoenix events, and the various extensibility mechanisms, and walk through a few Phoenix-based programs. Finally, information on using Phoenix in academic research and teaching contexts will be presented.

Who Should Attend

The target audience for this workshop includes anyone interested in compiler construction, compiler frameworks, and code generation and optimization. While the primary intended audience is academic, all attendees are welcome.


Chuck Mitchell, Microsoft
Mark Lewin, External Research and Programs, Microsoft Research


ETAPS 2006 | Last Update: 2006-02-08