|| Invited Speaker: James C. Browne, The University of Texas at Austin
Date: Tues, 17 April 2007
Title: "Unification of Verification and Validation Methods for Software"
Abstract: This lecture will introduce a unified and comprehensive approach to the verification and validation of software systems. It will be a progress report rather than a tour de force of completed work, since unification is a decades-long task. A unified approach to V&V should span non-functional properties such as performance and security as well as functional properties. The methods and tools available for verification and validation of software include design methods, static analysis of program code, program testing, model checking of temporal properties, runtime monitoring, and formal proofs of correctness. Unification begins with the perspective that the goal of verification and validation is to establish the correctness of a specified set of properties. The second step is to recognize that effective verification and validation requires that a system be designed and implemented with verification and validation as a requirement. The lecture will introduce design for verification, define a template for a unified property specification language, discuss a systematic, uniform approach to abstraction and modeling, and give examples of the relationships and synergies among verification and validation methods.
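To make one of the listed V&V methods concrete, here is an illustrative sketch (not from the talk; the property and names are assumptions) of runtime monitoring: a monitor observing an event trace and checking the temporal property "every request is eventually followed by a response".

```python
# Illustrative sketch (not from the talk): a minimal runtime monitor for the
# temporal property "every 'request' is eventually followed by a 'response'".

class ResponseMonitor:
    """Tracks outstanding requests; at the end of the trace, none may remain."""

    def __init__(self):
        self.pending = 0

    def observe(self, event):
        if event == "request":
            self.pending += 1
        elif event == "response" and self.pending > 0:
            self.pending -= 1

    def verdict(self):
        # True iff the finite trace observed so far satisfies the property.
        return self.pending == 0

m = ResponseMonitor()
for e in ["request", "response", "request", "response"]:
    m.observe(e)
print(m.verdict())  # True: every request was answered
```

The same property could equally be stated in a temporal logic and checked by a model checker; the monitor illustrates the runtime end of the spectrum of methods the abstract enumerates.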
|| Invited Speaker: Masahiro Fujita, The University of Tokyo
Date: Tues, 17 April 2007
Title: "Hardware-software co-design for SoC with separated verification between computation and communication"
Abstract: We describe a hardware-software co-design methodology for System-on-Chips (SoCs), which constitute entire electronics systems, based on completely separating their computation and communication parts. The basic idea is to formally verify the communication parts independently of the computation parts, so that no change to interfaces or communications affects the correctness of the computation parts. We define interface protocols in terms of automata, which are formally verified with model-checking tools. The actual hardware and software can then be generated from them automatically.
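As a flavor of what "interface protocols in terms of automata" can look like, here is an illustrative sketch (the protocol and names are assumptions, not the talk's tool): a toy handshake protocol as a finite automaton, with an exhaustive reachability check standing in for the model-checking step.

```python
# Illustrative sketch: an interface protocol as a finite automaton, checked
# exhaustively for a simple safety property. Transitions map
# (state, signal) -> next state; the protocol here is a toy handshake.
PROTOCOL = {
    ("IDLE", "valid"): "WAIT",
    ("WAIT", "ready"): "XFER",
    ("XFER", "done"):  "IDLE",
}

def reachable(start="IDLE"):
    """Enumerate all states reachable under the protocol automaton."""
    seen, frontier = {start}, [start]
    while frontier:
        s = frontier.pop()
        for (src, _sig), dst in PROTOCOL.items():
            if src == s and dst not in seen:
                seen.add(dst)
                frontier.append(dst)
    return seen

# Safety check: no reachable state lies outside the declared protocol states.
DECLARED = {"IDLE", "WAIT", "XFER"}
print(reachable() <= DECLARED)  # True: the protocol never leaves its states
```

A real flow would check richer temporal properties (e.g., absence of deadlock, eventual transfer) with a model checker, but the separation principle is the same: the computation parts never see these states.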
Software may be automatically compiled for the target processors. As for hardware, high-level synthesis followed by logic synthesis is applied to generate gate-level implementations. What high-level synthesis does is basically the same as what compilers for parallel or distributed processing systems do, except that in high-level synthesis the target instructions are also generated automatically, as datapaths with custom functional units. High-level synthesis for hardware consists of target-independent optimization; scheduling, allocation, and binding of functional units; and target-dependent optimization. During the scheduling phase, the input-output (I/O) timing of the target design is fixed. This is true for both hardware and software implementations. That is, when a communication interface changes, the timing of such I/O must be rescheduled to satisfy the new interface definition.
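The scheduling step described above can be sketched as follows (an illustrative assumption, not the talk's tool): an as-soon-as-possible (ASAP) schedule assigns every operation, including I/O, to a clock step, which is exactly where interface timing gets fixed.

```python
# Illustrative sketch: ASAP scheduling over an operation dependence graph.
# Each operation is placed one clock step after its latest dependency, so
# the steps of 'in_*' and 'out' operations fix the interface's I/O timing.

def asap_schedule(deps):
    """deps: op -> list of ops it depends on. Returns op -> clock step."""
    step = {}
    def visit(op):
        if op not in step:
            step[op] = 1 + max((visit(d) for d in deps[op]), default=0)
        return step[op]
    for op in deps:
        visit(op)
    return step

# A tiny dataflow: read two inputs, multiply them, write the output.
deps = {"in_a": [], "in_b": [], "mul": ["in_a", "in_b"], "out": ["mul"]}
print(asap_schedule(deps))
# {'in_a': 1, 'in_b': 1, 'mul': 2, 'out': 3}
```

Under a new interface definition (say, inputs arriving in different cycles), only this scheduling step needs to be redone; the dataflow itself, i.e. the computation part, is untouched.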
We define common interface mechanisms between computation and communication parts which make their independent development possible. Using that common interface, computation parts are seamlessly unified with their automatically generated communication parts. In the case of hardware, rescheduling of computation for new interface definitions can be performed inside high-level synthesis. We also discuss the relationship between our automatic synthesis of hardware for computation parts and the automatic synthesis methods widely used in the hardware design community. For software synthesis, we follow the same processing flow as for hardware; the only difference is the final code-generation phase, with appropriate concurrent statements inserted if necessary. We give preliminary experimental results, which demonstrate that the proposed methodology works in practice for both hardware and software.
|| Invited Speaker: Davide Sangiorgi, University of Bologna
Date: Wed, 18 April 2007
Title: "Bisimulation in higher-order languages"
Abstract: The talk is about the bisimulation proof method -- an instance of the co-induction proof method -- which is at the heart of the success of bisimulation.
I will review the method, and discuss some aspects of it that are not yet well understood. I will focus on one such aspect: the use of the method in higher-order languages (that is, languages where substitutions can involve the replacement of variables with arbitrary terms of the language, such as lambda-calculi or Higher-Order pi-calculi). In these languages, for instance, it can be hard: to have definitions and results that remain valid under modifications or additions to the language; to prove that bisimilarity is a congruence; and to obtain enhancements of the bisimulation proof method.
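For contrast with the higher-order case the talk addresses, here is an illustrative sketch (not from the talk) of the setting where the method is well understood: computing strong bisimilarity on a small first-order labelled transition system by iterated signature refinement.

```python
# Illustrative sketch: the coarsest strong bisimulation on a finite LTS,
# computed by refining a partition until each state's "signature" -- the set
# of (label, class-of-target) pairs -- is stable.

def bisimilar_classes(states, trans):
    """trans: set of (src, label, dst). Returns state -> class id."""
    cls = {s: 0 for s in states}                 # start with one class
    while True:
        sig = {s: frozenset((a, cls[t]) for (p, a, t) in trans if p == s)
               for s in states}
        ids = {v: i for i, v in enumerate(sorted(set(sig.values()), key=str))}
        new = {s: ids[sig[s]] for s in states}
        if new == cls:
            return cls
        cls = new

# Three states that each loop forever on 'a' (directly or via each other).
states = {"p", "q", "r"}
trans = {("p", "a", "p"), ("q", "a", "r"), ("r", "a", "q")}
c = bisimilar_classes(states, trans)
print(c["p"] == c["q"] == c["r"])  # True: all three are bisimilar
```

In a higher-order language the transitions themselves carry arbitrary terms, so no such finite, syntax-free refinement is available -- which is one source of the difficulties the abstract lists.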
|| Invited Speaker: Peter D. Mosses, Swansea University
Date: Thu, 19 April 2007
Title: "Fundamentals of Semantics Engineering"
Abstract: Reuse and adaptability are recognised as important factors in software engineering: reuse of previously developed components can significantly reduce development effort, and adaptability in software development supports rapid evolution, as well as responsiveness to changes in requirements.
It has also been recognised that reuse and adaptability are crucial for efficient development of formal semantic definitions of programming languages. Unfortunately, the main conventional frameworks for formal semantics lack support for reuse and adaptability, and a huge effort is required to produce and maintain a semantic definition of a full-scale programming language. Because of this, language designers generally resort to informal text for describing (static and behavioural) semantics.
In this talk, we examine the deficiencies of conventional semantic descriptions regarding reuse and adaptability, and show how they have been addressed in some more recent semantic frameworks. We also discuss the possibility of defining the semantics of individual constructs completely independently of each other, to establish the basis for a truly incremental approach to semantic definition of programming languages.
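The idea of defining each construct's semantics independently can be sketched as follows (an illustrative assumption, not the talk's framework): an interpreter in which every construct registers its own rule, so adding a construct requires no change to existing definitions.

```python
# Illustrative sketch: per-construct semantic rules registered independently,
# giving an incremental definition -- new constructs are added without
# touching the rules already defined.

SEMANTICS = {}  # construct name -> its independently defined rule

def construct(name):
    def register(rule):
        SEMANTICS[name] = rule
        return rule
    return register

def evaluate(term, env):
    op, *args = term
    return SEMANTICS[op](env, *args)

# Each definition below concerns only its own construct.
@construct("lit")
def _lit(env, n):
    return n

@construct("var")
def _var(env, x):
    return env[x]

@construct("add")
def _add(env, a, b):
    return evaluate(a, env) + evaluate(b, env)

print(evaluate(("add", ("lit", 2), ("var", "x")), {"x": 40}))  # 42
```

A conventional monolithic semantic definition would instead thread every construct through one global set of rules, so any extension (say, adding exceptions) could force revisions everywhere -- the lack of adaptability the abstract criticises.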