%T A Fibration Semantics for Pi-Calculus Modules via Abstract MONSTR Rule Systems
%A Richard Banach
%D July 1996
%X A translation of the pi calculus into MONSTR rulesets is described. The
syntax-directed aspects of the translation enable a categorical
semantics of modules to be constructed. The relationship between
name-sensitive and name-insensitive categories is explored, and by
moving to abstract rulesets to avoid difficulties with soundness arising
from potential symbol clashes, a categorical semantics of modules which
is both a fibration and an opfibration ensues. The fibration/opfibration
relationship between syntax and rulesets is extended to a semantics in
terms of systems (to which it is related by further
fibrations/opfibrations), and then finally to a semantics in terms of
traces (the latter relationship being only an opfibration over the
former).
%X UMCS-96-7-3.ps.Z
%T The Semantics of Temporal Prepositions and Related Adverbials in English
%A Ian Pratt
%D July 1996
%X In this paper, we present an account of the semantics of the English
temporal prepositions and some related temporal adverbials. This
account aims to describe how these expressions function in English
sentences, and in particular how they contribute to the
truth-conditions of the sentences in which they occur. Our account
will be couched at a sufficiently formal level to enable us to
define a procedure for assessing the validity of a range of
arguments couched in everyday temporal English.
_
We develop a temporal representation language, TL, into which English
sentences involving temporal prepositions can be naturally translated.
Our
goal is to tailor the representation language so as to fit as
closely as
possible the expressive resources provided by temporal constructions
in English. In particular, we show how many restrictions on the use
of multiple prepositional phrases are explained naturally by the
semantics we give for these phrases, and by the structure of the formal
language we translate them into. This close match, we claim,
justifies a program of work in which TL will be extended so as to
permit a more refined and comprehensive account of the semantics of
English temporal expressions than has been undertaken so far.
%X UMCS-96-7-2.ps.Z
%T Shape Representation using Fourier Coefficients of the Sinusoidal Transform
%A Ian Pratt
%D July 1996
%X In this paper, we investigate a method of representing convex
regions of the plane based on the {\em sinusoidal transform}. The
sinusoidal transform is a member of the general class of geometrical
representation schemes based on a `boundary function'---a periodic
function of one variable which picks out a bounded region of the
plane in terms of the contour that encloses it. Various types of
boundary function have been explored in the literature, but one
feature they all share is that, being periodic, they can be
approximated by a sequence of Fourier coefficients, which therefore
serve to represent the original region.
_
The aim of the paper is to obtain a thorough understanding of the
sinusoidal transform and of its associated Fourier descriptors. We
investigate the derivatives of the sinusoidal transform and
establish an inverse transform. We obtain a characterization of the
set of periodic functions of one variable which are the sinusoidal
transforms of `well-behaved' convex regions of the plane. We
derive negative results concerning the prospects for extending the
sinusoidal representation scheme to non-convex regions.
Finally---and of more practical importance---we show how various
geometrical quantities and relations---including overlap, inclusion,
separation, distance, radius of curvature, perimeter length, area
and congruence---can be extracted from the sinusoidal transforms of
regions, and hence from their sinusoidal Fourier descriptors.
%X UMCS-96-7-1.ps.Z
%T Language on the Internet: Fantasy and Terminology
%A Mary McGee Wood
%D March 1996
%X Language on the Internet allows us to communicate in ways which -
in popular fears - threaten to undermine or even replace the material
world. I argue that a distinction between language grounded in
reality ("terminology") and language which goes beyond it ("fantasy")
is fundamental to the nature of human language. Examples show that the
use of language on the Internet simply accentuates this distinction,
and that, as well as facilitating fantasy, it can take us back to a
firmer grounding in reality. The nature of society and of perception
also come into play.
%X UMCS-96-3-1.ps.Z
%T Remote Control of Mobile Robot via Internet
%A Ulrich Nehmzow, Andreas Buehlmeier, Holger Duerer and Manfred Noelte
%D February 1996
%X The paper describes experiments in telerobotics via low-bandwidth
communication channels. A mobile robot with sonar and infrared
range-finding sensors as well as tactile
and proprioceptive sensors at Manchester University in the United
Kingdom was controlled by three of the authors from Bremen University
in Germany in real time, using low bandwidth internet communication
facilities.
_
The experiments comprised remote surveying, fine motion control and
object manipulation tasks. The purpose of the investigations was to
determine whether remote access to robotics facilities can be achieved
in this way (training and sharing of resources), and whether the limited
bandwidth of the internet is sufficient for more complex tasks than just
crude motion control.
%X UMCS-96-2-3.ps.Z
%T Some Formal Musings on the Performance of Asynchronous Hardware
%A C. Tofts
%D February 1996
%X We present formal models of the performance of various asynchronous
hardware systems. In particular, we examine the behaviour of the systems
under deterministic computation cost, and non-deterministic or
probabilistic cost. We present proofs of the limiting behaviour
of pipelines with variable cost units; studies of the effect
of buffering variable units within constant cost pipelines;
the effect of various flushing strategies on the performance
of asynchronous pipelines under variable thread
length; and preliminary models of the effects
of data dependency of processing time on pipelines.
%X UMCS-96-2-2.ps.Z
%T A Theory of Classes: Proofs and Models
%A Barnaby P. Hilken and David E. Rydeheard
%D February 1996
%X We investigate the proof structure and models of theories of
classes, where classes are `collections' of entities. The theories
are weaker than set theories and arise from a study of type classes
in programming languages, as well as comprehension schemata in
categories. We introduce two languages of proofs, one a simple type
theory and the other involving proof environments for storing and
retrieving proofs. The relationship between these languages is
defined in terms of a normalisation result for proofs. We use this
result to define a categorical semantics for classes and establish
its coherence. Finally, we show how the formal systems relate to
type classes in programming languages.
%X UMCS-96-2-1.ps.Z
%T Fractal Encoding by Classified Domain Trees
%A Behnam Bani-Eqbal
%D January 1996
%X Fractal coding of digital images offers many promising qualities.
The encoding process, however, suffers from the long search time of
the domain block pool. A standard technique for speeding up
the encoding is the feature classification of blocks. In this
paper we show that the classes can be arranged
in a tree, and searched by a pruning algorithm developed and
explained in a previous paper by the author. We show that this hybrid
method enhances the search speed by up to 75% over the classification
based method.
%X UMCS-96-1-3.ps.Z
%T Learning by Tuition: Experiments with the Manchester ``FortyTwo''
%A A. Buehlmeier, H. Duerer, J. Monnerjahn, M. Noelte and U. Nehmzow
%D January 1996
%X This report describes the results of a short-term research stay of
a group of PhD students from Bremen University, Germany, at the
University of Manchester robotics lab. The goal of the research was to
investigate the feasibility of teaching direct sensor-to-motor
relations to evolve a complex task-solving behaviour without
explicitly programming it. Results and consequences for directions
of further research are reported.
%X UMCS-96-1-2.ps.Z
%T Modal Observation Equivalence of Processes
%A W.P.R. Mitchell and D.P. Carlisle
%D January 1996
%X This paper shows that many simulation equivalences over processes can
be characterised by a single parameterised modal logic formula.
_
Strong bisimulation equivalence was shown by Milner to be
characterised in this way. This paper shows that simulation, complete
simulation, ready simulation, 2-nested simulation, weak bisimulation
and congruence can also be characterised in this way.
_
The paper constructs a template which can be used to generate a
parameterised modal formula which is guaranteed to characterise a
`sensible' simulation equivalence. Two infinite families of simulation
equivalences are characterised in this way, both of which converge to
strong bisimulation. The paper defines a similar template for weak
simulation equivalences. This template is used to construct an
infinite family of equivalences which converges to weak bisimulation
equivalence, and another infinite family which converges to
congruence.
%X UMCS-96-1-1.ps.Z
%T The Prospective Student's Introduction to the Robot Learning Problem
%A Ulrich Nehmzow and Tom Mitchell
%D December 1995
%X The paper presents an introduction to the area of learning in robot
control. Fundamental principles are outlined, learning mechanisms such
as reinforcement learning are discussed, and open questions and research
areas are identified.
%X UMCS-95-12-6.ps.Z
%T Notes on P-Algebra (4): Algebra over Process Structure, Part I
%A Kohei Honda
%D December 1995
%X We develop a theory of algebra over processes, based on an abstract
treatment of process structure. The abstract framework of process
structures is developed (Part I), and, on that basis, a theory of
algebra is developed in which basic results, including a Birkhoff-like
Variety Theorem, are proved (Part II). We also discuss properties of
significant concrete examples of algebras, taken from extant theories
of concurrency. Part I develops the abstract theory of process
structure, encompassing a wide range of structures for process theory,
including the concrete structures discussed in the preceding
notes. The theory is used as the foundation of the algebraic
development in Part II, and is interesting in its own right.
%X UMCS-95-12-5.ps.Z
%T Notes on P-Algebra (3): Cartesian Product of Process Structures
%A Kohei Honda
%D December 1995
%X We establish a cartesian presentation of a product of an indexed
family of process structures, giving an analogue of a similar
presentation of set-theoretic product. Such a presentation is
non-trivial due to the specific nature of the process relation in
process structures, and forms a basis for the algebraic theory of
processes to be developed in subsequent reports.
%X UMCS-95-12-4.ps.Z
%T Notes on P-Algebra (2): Group Presentation of Process Structure
%A Kohei Honda
%D December 1995
%X We give yet another presentation of the process structures introduced
in the previous report and prove its equivalence to the original
one. The presentation enjoys several convenient features not found in
the original presentation, though at the cost of some
simplicity. Among others, it dispenses with the use of a choice
function for quotient construction. The new presentation offers a
different viewpoint on the theory of process structure, and may
provide a useful tool for some purposes.
%X UMCS-95-12-3.ps.Z
%T Notes on P-Algebra (1): Process Structures
%A Kohei Honda
%D December 1995
%X We develop an elementary theory of process structure, a
mathematical object which is intended to capture a basic aspect of
concurrent processes in a simple way. It has two kinds of
presentations: one underlies many extant process theories and another
has a more geometric flavour. We prove these are essentially
equivalent. The theory forms a basis of P-algebra, a
general semantic structure for concurrent processes.
%X UMCS-95-12-2.ps.Z
%T Strategies for Temporal Resolution
%A Clare Dixon
%D December 1995
%X Verifying that a temporal logic specification satisfies a temporal
property
requires some form of theorem proving. However, although proof
procedures
exist for such logics, many are either unsuitable for automatic
implementation or only deal with small fragments of the logic. In this
thesis the algorithms for, and strategies to guide, a fully automated
temporal resolution theorem prover are given, proved correct and
evaluated.
_
An approach to applying resolution, a proof method for classical
logics
suited to mechanisation, to temporal logics has been developed by
Fisher. The method involves translation to a normal form, classical
style
resolution within states and temporal resolution over states. It has
only
one temporal resolution rule and is therefore particularly suitable as
the
basis of an automated temporal resolution theorem prover.
_
As the application of the temporal resolution rule is the most costly
part of the method, involving search amongst graphs, different
algorithms
on which to base an implementation are developed. The correctness of
each
algorithm is given and a comparison of their performance is made.
_
Strategies have been useful in classical resolution to guide the
application of the resolution rule, making proofs more efficient and
generating less redundant information. Similarly, a range of strategies
related to each step of the temporal resolution theorem prover are
suggested to improve the efficiency of the basic system. Heuristics to
reduce the amount of information input to the temporal resolution step
are
described. Suggestions are given on how a classical resolution theorem
prover might be incorporated to perform resolution within states. For
each
of these heuristics and strategies, correctness is considered and
results
given for a set of examples.
_
The suitability of Fisher's temporal resolution method for
mechanisation, the algorithms developed for temporal resolution, and
the strategies and heuristics developed for each stage of the
resolution method together mean that temporal resolution provides a
viable option for automated temporal theorem proving.
%X UMCS-95-12-1.ps.Z
%T A Highly Restricted Temporal Logic with a Tractable Decision Procedure
%A Ian Pratt
%D November 1995
%X This paper demonstrates the tractability of a highly restricted
temporal logic, into which English sentences containing a variety of
temporal prepositional constructions can be translated. This paper is
intended as a preliminary study for a larger project of identifying
tractable temporal logics corresponding to temporal constructions in
natural language. This larger project also includes current work
exploring the automatic translation of English sentences involving
temporal prepositions (and related adverbials) into temporal logics.
%X UMCS-95-11-1.ps.Z
%T Enhancing the Tractability of Rely/Guarantee Specifications in the Development of Interfering Operations
%A P. Collette
%A C. B. Jones
%D October 1995
%X Various forms of assumption/commitment specifications have been
used to specify and reason about the interference that comes from
concurrent execution; in particular, consistent and complete proof
rules relating to shared state operation specifications with rely
and guarantee conditions have been published elsewhere. This
report investigates methodological issues in the formulation of
such specifications, and the way they record design decisions.
This work aims to make the use of rely/guarantee
conditions more tractable, both at the specification level and
in the development towards code.
%X UMCS-95-10-3.ps.Z
%T Designing C-elements for Testability
%A O. A. Petlin
%A S. B. Furber
%D October 1995
%K Asynchronous circuit CMOS VLSI circuit Symmetric C-element Asymmetric C-element Stuck-at fault stuck-open fault Logic testing
%X C-elements are used widely in asynchronous VLSI circuits. Fabrication
faults in some CMOS C-elements can be undetectable by logic testing.
Testable designs of static symmetric and asymmetric C-elements are
given in this report which provide for the detection of single line
stuck-at and stuck-open faults. We show that driving feedback
transistors in the proposed testable static C-elements transforms their
sequential functions into combinational AND and OR functions or into
repeaters of one of their inputs depending on the driving logic value.
This simplifies the testing of asynchronous circuits which incorporate
a large number of state holding elements. The scan testable C-element
described can be used in scan testing of the asynchronous circuit,
making the states of its memory elements controllable and observable.
%X UMCS-95-10-2.ps.Z
%T Intra-Modular Structuring in Model-Oriented Specification: Expressing Non-Interference with Read and Write Frames
%A Juan C. Bicarregui
%D October 1995
%X Compositionality provides the key to managing complexity in software
systems and thus should be sought at all levels in the design process. In
the development of code from model-oriented specifications,
compositionality can not only be achieved by the decomposition of system
specifications into modules but may also arise within the development of
individual modules.
_
This thesis considers the compositional development of operations that
share state within modules. A key mechanism in the structuring of such
developments lies in the definition of non-interference between
sub-operations. The read and write frames in VDM implicit operations
definitions can be understood as constraints on the read and write accesses
which can be made by valid implementations. Their use as such permits
reasoning about non-interference between operations and can thus be a
source of compositionality within modules. However, existing semantic
models of model-oriented specification and refinement do not provide a
basis for such an interpretation of read and write frames.
_
Investigations are made into alternative denotational semantic models for
operation specifications which do support read and write frames and a proof
theory for reasoning about non-interference between sub-operations is
founded upon them. The proposed semantic models provide progressively
finer distinctions between implementations over and above the standard
model. Thus new possibilities arise for discrimination between
specifications that would otherwise be considered equivalent, whilst
existing results remain valid.
%X UMCS-95-10-1.ps.Z
%T Research Interactions Between University and Industry in Computer Science in the United States and United Kingdom
%A T. Haigh
%D August 1995
%X The academic discipline of computer science owes its existence to the
industrial development of computer technology; the computer industry
owes its existence to work undertaken by academic researchers. Close
links have always existed between the two, and ideas, equipment and
personnel have flowed between them.
_
The funding of academic research by industry, and the exploitation of
research results by universities, are now the subject of more activity
and interest than ever before. This report examines the current state
of relations between universities and industry in computer science,
with particular reference to the characteristics of different kinds of
formal and informal relationship, such as research sponsorship,
consultancy agreements and technology licensing, and the advantages
and disadvantages which each potentially offers to participants.
Considerable attention is paid to the special characteristics of
computer technology, computer science research and the computer
industry which make certain kinds of interaction more productive and
widespread than others.
_
Policies regarding ownership of intellectual property, the
establishment of start-up companies and other controversial topics are
examined. The role of personal relationships and tacit knowledge is
scrutinised, as are the different roles given to formal university
bodies in regulating what are essentially relationships between
individuals. Where significant, differences between observed
activities in the UK and US are analysed.
_
Primary source material is a series of 27 in-depth interviews
undertaken with a range of professionals working in university
departments and research teams, central university offices and
corporate managerial capacities in both the US and the UK. These are
supplemented by references to a range of research literature and by a
concise account of the historical and political background to current
patterns of research support.
%X UMCS-95-8-1.ps.Z
%T The Algebraic Theory of Interaction Nets
%A R. Banach
%D July 1995
%K Interaction net Graph grammar DPO rewriting
%X The theory of interaction nets, invented by Lafont, is re-examined
from the algebraic hypergraph rewriting perspective. Supersimple nets
are defined and discussed, and some related classes of nets, the
polysimple and monosimple classes, are defined and investigated.
Their static properties are established, and the invariants that need
to be preserved by rewriting are investigated in detail. It is shown
that in the general case, context-specific information may be used to
ensure that rules actually preserve the characteristics of the
rewritten net. Subordinate agents, which, like logical constants for
falsity, may be introduced only in non-void contexts, are presented,
and the ramifications of the theory in their presence are
investigated, relating it to the simple and semisimple classes of
Lafont. Under suitable conditions, describable in purely
combinatorial terms, net rewriting systems possess Church-Rosser and
Strong Normalisation properties usually associated with rewriting
systems derived from logic. This leads to the reformulation of the
proof nets of multiplicative linear logic including constants in an
essentially logic-free way.
%X UMCS-95-7-2.ps.Z
%T The State Evolution Method for Verifying Hardware Systems
%A Howard Barringer
%A Graham Gough
%A Brian Monahan
%A Alan Williams
%D July 1995
%K Symbolic verification Automatic hardware verification Theorem-proving Hardware design aids.
%X We present a novel state evolution method for establishing standard
(strong) bisimulation, which gives a tractable verification approach
for deterministic machines, possibly with infinite state-spaces, and
operates at an abstract level. The problem of establishing
equivalence is reduced to one of proving the validity of a set of
simpler (first-order) logical verification conditions, generated
from the state evolution expressions. The approach maintains a high
degree of automation, a feature of state-based methods, whilst
offering the potential of containing the usual growth in complexity
of verification, one advantage of using theorem-proving techniques.
%X UMCS-95-7-1.ps.Z
%T The Evolution of Polygenic Sex Determination with Potential for Environmental Manipulation
%A M. J. Hatcher
%A C. Tofts
%D April 1995
%K Process algebra Probability Theory of computing Simulation Evolution Sex determination ESD Polygenics
%X We present a formal account of a genetic mechanism for Environmental
Sex Determination (ESD) using a probabilistic process algebra (WSCCS).
We exploit the process algebra to demonstrate that a single chromosome
pair based polyfactorial system is intrinsically unstable. The system
evolves, with a deterministic relationship between an animal's sex and
its polyfactor count, to three possible forms of genetic mechanism:
pure ESD; pure genetic sex determination (GSD) (both XY and ZW being
possible outcomes); or a mixed solution with a 50/50 division between
ESD and GSD. With a probabilistic dependence between gene count and
sex we observe the first and last of the above solutions (no pure
GSD), but with population gene frequencies that can be far from
Hardy-Weinberg equilibrium. A model of the systemic behaviour in the
presence of recombination is presented, and the effect of the relative
recombination and evolution rates is discussed. Our process algebraic
model is extended to account for the polyfactors being distributed
over many possible chromosome pairs and demonstrates that only one
essentially new stable solution is reachable. We demonstrate that
within a multichromosome system there will almost always be a
`leading' chromosome pair which determines sex; other chromosome pairs
will have identical sex-determining information on both members of
the pair, but not necessarily none.
%X UMCS-95-4-2.ps.Z
%T Formal reuse of hardware design
%A A. Melo
%R PhD thesis
%D April 1995
%K Reusability System-level synthesis Formal methods Process algebras Interface equation
%X The complexity of hardware components has grown rapidly over the past
two decades, with the advances in technologies for manufacturing
integrated circuits. Also, the widely spread application of hardware
components has required them to be competitive to the market. These
factors have led to a requirement for new methods and techniques to
enhance controllability, quality and productivity of hardware
components. Reusability has been recognized as a basic principle for
enhancing productivity and quality of engineering
products. Additionally, formal development of hardware has emerged as
a method to ensure quality and help handle the complexity of
description of hardware components. Based on these premises, this
thesis introduces a technique for formal reuse of hardware design.
_
The main idea of this project is to create a `context' in which an
existing hardware component must be embedded in order to implement a
new desired component. A solution for effective reuse of hardware
components depends on mechanisms to compare the behaviour of these
elements, where their similarities and dissimilarities are
recognized. So, ways to relate components from a reuse point of view
are introduced and, based on those, the necessary `context' is
formally constructed.
_
A process algebra is used for the behavioural representation of
hardware, and as a foundation for formally reasoning about the reuse
of hardware design. Algebraic manipulation of processes allows for
managing the representation of complex components. In order to permit
reuse of complex components, the `context' is constructed via a
`decomposition operator' defined for the particular process algebra.
To obtain minimal `contexts', mechanisms for minimization of
concurrent processes, in terms of the number of states and
communications, are finally presented.
_
The contribution of this thesis is a basis for algebraic manipulation
of reusable components, and a mechanism to create minimal `contexts'
for formal reuse of hardware components.
%X UMCS-95-4-1.ps.Z
%T Analytic and locally approximate solutions to properties of
probabilistic processes
%A C. Tofts
%D March 1995
%K Process algebra Probability Theory of computing Simulation
%X Recent extensions to process algebra can be used to describe
performance or error rate properties of systems. We examine how
properties of systems expressed in these algebras can be
elicited. Particular attention is given to the ability to describe the
behaviour of system components parametrically. We show how
analytic formulae for performance properties can be derived from
probabilistic process algebraic descriptions, and demonstrate how local
approximate solutions can be derived for these properties when their
exact solutions would be too computationally expensive to evaluate.
As an example we derive the performance of an Alternating Bit protocol
with respect to its error and retry rates.
%X UMCS-95-3-1.ps.Z
%T A Proof System for a Sequential Object-Based Language
%A C. C. de Figueiredo
%D January 1995
%K Object-oriented languages Program correctness Program verification Proof systems Formal semantics Denotational semantics Sequent calculus
%X This thesis presents a proof system to support the formal verification
of the correctness of sequential object-based programs written in a
simple programming language named A1. The proof system uses a
specification language, named L1, that is based on Hoare-style
assertions.
_
The syntax and formal semantics of the programming and specification
languages are given. A formal semantics of correctness formulas (for
partial correctness) is then defined. Axiom schemes and proof rules
associated with commands of A1, which compose a proof system named P1,
are presented. Examples are shown where these axiom schemes and proof
rules are used to derive program correctness. The proof system P1 is
shown to be sound according to the given semantics of A1 and of the
correctness formulas.
_
Finally, conclusions are drawn and suggestions for further work are
made.
%X UMCS-95-1-1.ps.Z
%T Variable Delay Timing Analysis of Logic Circuits
%A B. Bani-Eqbal
%D December 1994
%K Timing verification Hazard detection Asynchronous circuits
%X We propose a new simulation algorithm for combinational circuits which
accurately models delay variations in the gates. The algorithm can
also be used for the timing verification of asynchronous circuits, e.g.
hazard detection, race/oscillation of sequential circuits, etc. The
basic idea is to assign a time variable to each signal transition
(edge). The gate delays translate to algebraic relations between the
variables. As the arrival order of the edges to a gate can be
uncertain, so the simulation result of the gate can diverge. Branching
signal paths are introduced to deal with this problem. However the
simulation process later often does not depend on earlier
branching. The algorithm takes advantage of such cases. We believe
that this leads to considerable efficiency improvements.
%X UMCS-94-12-3.ps.Z
%T The ELLA Verification Environment: A Tutorial Introduction
%A Howard Barringer
%A Graham Gough
%A Brian Monahan
%A Alan Williams
%D December 1994
%K Hardware description languages ELLA Design aids Verification Semantics Process algebras
%X The ELLA Verification Environment (EVE) offers both familiar CAD tool
support for the ELLA hardware description language, and formal
analysis tools, integrated within the one system, accessed by a common
window-based graphical user interface. The aim of the system is to
enable hardware design engineers to benefit from the use of formal
techniques in the design cycle by embedding them into existing
technology, and providing interaction with them at the designer level.
The formal analysis tools are also designed to operate mainly
automatically, significantly reducing the need for formal methods
expertise during design and analysis. The EVE is implemented in
Common Lisp and CLOS using the LispWorks environment (Harlequin
Ltd., Cambridge).
_
This tutorial provides a brief overview of the EVE, and then describes
the use of each component tool in detail. Several small design
examples are used as illustration, and also act as reader
exercises. It also includes background information on suggested tool
usage, and a brief introduction to the novel symbolic verification
approach used by the analysis tools.
_
This report was produced by Manchester Informatics Ltd
under contract for DRA (Malvern).
%X UMCS-94-12-2.ps.Z
%T A Process Algebra Foundation for Reasoning about Core ELLA
%A Howard Barringer
%A Graham Gough
%A Brian Monahan
%A Alan Williams
%D December 1994
%K Hardware description languages Semantics Verification Process algebras ELLA
%X A process-algebraic foundation is developed for formal analysis of
synchronous hardware designs using the commercially available
hardware design language, ELLA. An underlying semantic foundation,
based on input/output trace sets, is presented first through
the use of state machines. Such a representation enables direct
application of standard, fully automated, trace equivalence checking
tools. However, to overcome the computational limitations imposed by such
analysis methods, the input/output trace semantics is re-presented
through a synchronous process algebra, EPA. Primitive processes in
EPA denote the behaviour of primitive hardware components, such as
delays or multiplexers, with composition operators corresponding to
the different ways in which behaviours may be built. Of particular
significance is the parallel composition operator which captures the
machinery for building networks from other components/networks.
Actions in EPA are structured and signify the state of input and
output signals. This structure, however, is abstracted by developing
an algebra for the actions. In particular, parallel composition
on processes neatly lifts to a special (synchronous) product operation
on actions.
_
The EPA representation forms a good basis for semi-automated
high-level symbolic manipulation and reasoning tools. Firstly, the
original design structure can be maintained, thus easing the problems
of user level feedback from tools. Secondly, the application of EPA to ELLA
enables a normal form for EPA terms. This provides a route to
tractable symbolic verification and simulation, using a state
evolution method to establish strong bisimulation properties. The
method has been successfully applied to classes of unbounded state
space systems.
%X UMCS-94-12-1.ps.Z
%T Autonomous acquisition of sensor-motor couplings in robots
%A U. Nehmzow
%D November 1994
%K Robotics Artificial neural networks Competence acquisition in robots
%X Fixed robot controllers are suitable for tasks whose main
characteristics are known a priori. However, for tasks such as
exploration of unknown territory, fixed controllers tend to be too
brittle as they rely on predefined knowledge, supplied by the
designer. For such tasks (i.e. tasks in which a-priori knowledge is
limited) we propose to use self-organising controllers which allow the
robot to acquire the necessary competences autonomously.
_
The paper describes experiments with mobile robots, in which the
robots autonomously determine effective connections between their
sensory input and their motor response to it.
_
In these experiments, artificial neural networks are used to store
associations between sensory input and motor responses; the networks
are trained in a self-supervised way, making use of so-called
instinct-rules which govern the robots' behaviour. The fast learning
achieved enables the robots to adapt to changing circumstances such as
changes in their environment or their morphology (e.g. sensor
failure).
%X UMCS-94-11-1.ps.Z
%T Constraint Logic Planning
%A Y. Zhang
%A H. Barringer
%D September 1994
%K Reasoning about action Nonlinear planning Deductive planning Constraint logic programming
%X Deductive planning provides a rigorous approach to planning, such that
the capabilities and limitations of planning systems can be clearly
defined. However, most deductive planners suffer from the frame
problem and are restricted to linear planning. This report presents a novel
nonlinear deductive planning framework: constraint logic planning.
Procedurally speaking, nonlinear planning is an activity of constraint
solving. Constraint logic planning is a combination of the constraint
solving for nonlinear planning with a temporal logic paradigm. In
comparison with other state based deductive planning methods, constraint
logic planning deals with the frame problem efficiently in a way similar to
that in classical nonlinear planning; moreover, as a
nonlinear deductive planning approach, it employs the least-commitment
strategy for both temporal relation decisions and object selection. In
comparison with classical nonlinear planning systems, constraint logic
planning is a deductive planning approach and it achieves completeness,
soundness and conflict freeness. Constraint logic planning is believed to
be even more efficient than classical nonlinear planning.
%X UMCS-94-9-2.ps.Z
%T Action Constraint Maintenance
%A Y. Zhang
%A H. Barringer
%D September 1994
%K Reasoning about action Nonlinear planning Domain constraints
%X This report presents a novel approach to detect conflicts in nonlinear
plans. Conventional methods detect conflicts in a nonlinear plan by
checking whether preconditions of actions in the plan are possibly deleted
by some other actions. Our novel method, action constraint maintenance,
detects conflicts in a nonlinear plan by checking whether the nonlinear
plan satisfies a set of specific action constraints. We prove that, under
certain conditions, action constraint maintenance is sound and complete,
and we show that action constraint maintenance is much more efficient than
conventional methods. The most significant advantage of this method is
that, if a nonlinear plan is not consistent with an action constraint, then
none of its possible expansions will be consistent with the action constraint.
Therefore, action constraint maintenance can be employed to detect
conflicts in a nonlinear plan during the plan construction, i.e. "on the
fly", and backtracking from inconsistencies will not lose possible
solutions. By adopting action constraint maintenance "on the fly" in our
deductive nonlinear planning framework, we avoid computation for modal
truth criteria in nonlinear planning, and we can achieve and maintain the
necessary truth for all the goals involved in a planning process. Also,
action constraint maintenance itself is a domain independent search
heuristic in nonlinear planning.
%X UMCS-94-9-2.ps.Z
%T Domain Constraint Maintenance
%A Y. Zhang
%A H. Barringer
%A D. Carlisle
%D August 1994
%K Reasoning about action Domain constraints Nonlinear planning
%X In this report, we propose a set of domain independent and human
independent restrictions on actions and we show that the restrictions can
prevent actions from being improperly defined. Then, based on the set of
restrictions, we propose a new conflict detecting method for nonlinear
plans and we show that this new method is sound and complete.
%X UMCS-94-8-1.ps.Z
%T A Reified Temporal Logic for Nonlinear Planning
%A Y. Zhang
%A H. Barringer
%D July 1994
%K Temporal logics in AI AI planning Deductive planning Nonlinear planning
%X Deductive planning treats planning as a reasoning process within a
formal logic. Several formal logics have been applied in various deductive
planning systems, but none is completely satisfactory and widely accepted
by planning researchers. Almost all the formal logics applied in deductive
planning are proposed for more general purpose than planning. Usually, the
more general and the more expressive a language is, the less efficient that
language is for dealing with problems in a specific domain. In this
report, we propose a temporal logic specifically for representing and
reasoning about nonlinear planning. Our temporal logic naturally captures
the persistence assumption and, when applied in deductive planning, can
deal with the frame problem efficiently. Also, in this temporal logic,
planning problems can be concisely and clearly specified. Directly upon
this temporal logic, we build a deductive nonlinear planning framework,
constraint logic planning, which is proved to be sound and complete and is
shown to be efficient. In this report, we also briefly describe how the
temporal logic provides the basis for constraint logic planning.
%X UMCS-94-7-1.ps.Z
%T MathPIP: A Mathematica Interface for PIP
%A Zbigniew Chamski
%D June 1994
%K Parametric integer programming PIP Arbitrary-precision arithmetic Mathematica software system
%X The Parametric Integer Programming (PIP) algorithm due to Feautrier is
highly suitable for solving linear programming problems arising in
program parallelization and synthesis. In this report, we describe an
interface that allows the arbitrary-precision arithmetic
implementation of PIP to be used as an external problem solver in MATHEMATICA,
thus providing a convenient means for fast experimentation with
algorithms which use PIP as a building block. The first part of the
report offers a short overview of PIP, followed by an example-based
user's guide to MathPIP. The second part is a reference manual
describing in detail each user-visible component of MathPIP.
%X UMCS-94-6-1.ps.Z
%T The Uflow Computational Model and Intermediate Format
%A J. Sargeant
%A C. Kirkham
%A S. Anderson
%D May 1994
%K Implicit parallelism Computational models Dataflow Actors
%X This report motivates and defines a general-purpose, architecture
independent, parallel computational model, which captures the intuitions
which underlie the design of the United Functions and Objects programming
language. The model has two aspects, which turn out to be a traditional
dataflow model and an actor-like model, with a very simple interface
between the two. Certain aspects of the model, particularly strictness,
maximum parallelism, lack of suspension, and the implications of
introducing stateful objects, are stressed. The model is embodied in a
textual intermediate format, and in a set of UFO data structures. This
report also serves as a definition of the intermediate format, and gives a
brief overview of the data structures.
%X UMCS-94-5-1.ps.Z
%T A Comparative Study of Data-Flow Architectures
%A David F. Snelling
%A Gregory K. Egan
%D April 1994
%K Hardware Computing Methodologies
%X Partly as a result of the diversity of the architectures and system
software, Data-Flow systems have not been compared directly. In this
paper, three widely different Data-Flow systems are compared using a
relatively uniform metric which is representative of the actual amount of
"work" performed by these systems to execute a small collection of common
benchmarks. One feature common to all Data-Flow systems, the creation of
tokens, forms the basis for a metric of "work". By counting the creation
of tokens in all parts of a Data-Flow system, an accurate measure of the
work performed can be obtained. The three systems compared are the
Manchester Data-Flow Machine, the Stateless Data-Flow Architecture (also
from Manchester), and the CSIRAC II machine. All machines perform
approximately the same amount of work when solving a collection of
benchmark problems, but two systems (CSIRAC II and SDFA) exhibit successful
exploitation of a memory hierarchy.
%X UMCS-94-4-3.ps.Z
%T Heuristic reasoning for an automatic commonsense understanding of logic electronic design specifications
%A Salvador Mir
%R PhD thesis
%D April 1994
%X An automatic heuristic understanding of digital electronic design
specifications is discussed in this thesis. The term understanding is used
in the sense that knowledge about the functionality and purpose of the
cells and signals of a design is abstracted away from the specification.
A heuristic analysis which exploits implicit design semantics is carried
out. The analysis bypasses the examination of detailed logical and
electrical data since it is aimed at the machine simulation of a heuristic
way of understanding electronic design specifications exhibited by human
experts. By overlooking implicit knowledge, current automatic systems are
clearly at a disadvantage with respect to human experts for the analysis,
design and management of electronic data. The possibility of getting the
machine to heuristically understand a specification is seen in this thesis
as one way of improving this situation. The thesis defines and classifies
expert knowledge about digital electronic designs, explores ways to generate
expert knowledge about a design from the heuristic analysis of its
specification and discusses examples of exploiting this knowledge to plan
and control automatic tasks. The experimental result of this research is a
knowledge-based system aimed at providing an empirical demonstration of the
convenience and viability of an automatic heuristic understanding of design
specifications. The system's method of reasoning unquestionably has
limitations, but these are also faced by human experts when they attempt
this task. The current prototype of the system already indicates that
valid knowledge can be generated and it implements methods for avoiding the
critical computational complexity of the problem.
%X UMCS-94-4-2.ps.Z
%T The Global-yet-Personal Information System
%A J. R. Gurd
%A C. B. Jones
%D April 1994
%X The editors of this volume set out to compile a set of personal views about
the long-term direction of computer science research. In responding to
this goal, we have chosen to identify what we perceive as a long-term
challenge to the capabilities of computing technology in serving the
broader needs of people and society, and to discuss how this `Grand
Challenge' might be met by future research. We also present our personal
view of the research methodology. Much of present-day computer technology
is concerned with the processing, storage and communication of digital
data. The view taken in this contribution is that a far more important use
of computers and computing is to manage and manipulate human-related
information. Currently, this has been tackled at the level of single
organisations (company, institution, government department, etc.) by the
use of databases which are often limited to single functions within the
organisation. Databases are closed, in the sense that the information
itself can be viewed in a limited number of ways, and the ways in which it
can evolve are carefully controlled. Interaction between databases
containing related data is prohibited, except through the mediation of
human experts. This is an unnecessarily restricted concept of information
processing, and one which fails to recognise its real social and economic
potential. We foresee a huge market for personal information services
based on open access to a continually evolving global network of stored
information. Although there are significant technical difficulties
associated with creating and controlling such networks and services, we
predict that the economic incentives will ensure that the necessary
development occurs and that this information market will - within a period
of decades - dwarf the market in computing machinery and software. With
regard to the research methodology, we speculate that computer science is
evolving into the more general discipline of informatics and consider the
nature of informatics as an emerging science.
%X UMCS-94-4-1.ps.Z
%T Accelerated Ray Tracing on the KSR1
%A M. J. Keates
%A R. J. Hubbold
%D February 1994
%X This report describes the results of experiments with accelerated ray
tracing on a virtual shared-memory parallel computer. The machine used,
the Kendall Square Research KSR1, is described in sufficient detail for
its novel features to be appreciated. The report describes a new ray
tracer written specifically to capitalise on the KSR1's features. Models
used in the experiments include well-known test cases from the Haines
database, permitting comparison with previously reported results. The
results show a speed-up which is 76.9 percent of linear scaling on 256
processors. The report analyses the performance of the software and
draws conclusions about the suitability of this type of machine for
image synthesis.
%X UMCS-94-2-2.ps.Z
%T A Memory Model for Case Retrieval by Activation Passing
%A Michael Brown
%R PhD thesis
%D February 1994
%X This thesis is concerned with the development of an underlying model of
memory to support selective case retrieval for case-based reasoning. The
major requirements are that retrieval should be highly flexible yet
efficient. The traditional approach of ``indexing'' is rejected as being
too restrictive while more flexible approaches in analogical reasoning are
generally too computationally expensive. Several important organisational
principles are developed in the memory model. A network representation is
advocated with a number of required extensions; such as multi-granular
representation, context-based segregation and a statistically-based grading
of paths. The organisation of memory offers the potential for the serial
performance of a number of retrieval tasks that have previously only been
addressed by assuming a massively parallel implementation. An implementation
of the memory model supports the feasibility of the technique. A meaningful grading of
stored cases is consistently achieved without a reliance on an exhaustive
search of memory. Evidence that the memory model will scale to use on
large-scale casebases is also presented.
%X UMCS-94-2-1.ps.Z
%T From types to dataflow: code analysis for an OO language
%A Andrew J. Barnard
%R PhD thesis
%D October 1993
%X Of all the object-oriented programming languages and systems to appear over
the past decade, Smalltalk-80 has remained one of the more popular `pure'
object-oriented languages. However, Smalltalk-80 has suffered criticism due
to its relatively poor run-time efficiency, together with the lack of
compile-time feedback for the programmer. In the past, type systems have
been nominated as potential solutions to both these criticisms. The aim of
this thesis is to compare and contrast what a type inference system may
offer a language like Smalltalk-80, with what is possible from an
alternative form of code analysis, namely dataflow analysis. To this end,
a type inference system and a dataflow analysis system are both formally
defined for a Smalltalk-like language, and their relative properties
examined. While both systems are revealed to be capable of detecting
potential `does-not-understand' errors, the type inference system is shown
to conflict with the semantics of the original untyped language. In
contrast, the dataflow analysis system is shown to have no such problems.
In addition, the dataflow analysis system is shown to be capable of
extending the possibilities of analysing control structures beyond what is
feasible with a type inference system.
%X UMCS-93-10-2.ps.Z
%T Process-Algebraic Foundations for an Object-Based Design Notation
%A Cliff B. Jones
%D October 1993
%X Earlier papers give examples of the development of concurrent programs
using *POBL* which is a design notation employing concepts from
object-oriented programming languages. Use is made of constraints on the
object graphs to limit interference and assertions over such graphs to
reason about interference. This report merges (and corrects minor
inconsistencies between) two papers which document the semantics of
*POBL* by providing a mapping to the pi-calculus and
indicate how arguments about the design notation might be based on that
semantics. It also reflects some recent work not cited in the earlier
papers.
%X UMCS-93-10-1.ps.Z
%T A Graph-Based Approach to Resolution in Temporal Logic
%A Clare Dixon
%D September 1993
%X An approach to the mechanisation of temporal logics, based on a form of
clausal resolution, has been developed. Temporal formulae
incorporating both past-time and future-time temporal operators are
converted into Separated Normal Form, and then both non-temporal and
temporal resolution rules are applied. The temporal resolution rule
attempts to match conditions which must be eventually satisfied with sets
of formulae which together imply that the condition will never be
satisfied. In this dissertation the sets of formulae which are used in the
application of the temporal resolution rule are identified using graph
theoretic techniques. This is achieved by first extracting formulae or
combinations of formulae that could possibly stop a condition being
satisfied and then using them to build a graph. Secondly, any loops
identified during the construction of the graph mean that the
condition can never be satisfied; these loops are used in the application of the
temporal resolution rule, thus deriving further formulae. This temporal
resolution method has been combined with sub-programs performing
translation to normal form and non-temporal resolution to produce an
integrated resolution based temporal theorem prover. The main algorithm
cycles through these three procedures until false is derived (showing
that the original formula was unsatisfiable).
%X UMCS-93-9-2.ps.Z
%T Trends in Operating Systems Towards Dynamic User-level Policy Provision
%A K. R. Mayes
%D September 1993
%X It is possible to distinguish between policy and mechanism in operating
system design. There is a trend to move policy out of the operating system
kernel and into the user-level. This trend is described with respect to
example operating system types. A system is proposed which takes this
policy/mechanism split to the extreme of having the operating system kernel
reduced to a hardware object which provides a low-level but abstract view
of the actual hardware. Such a system would be flexible enough to allow
investigation of dynamic, user-level, provision of policy.
%X UMCS-93-9-1.ps.Z
%T Cut-free Sequent and Tableau Systems for Propositional Diodorean Modal Logics
%A R. Gore
%D August 1993
%X We present sound, (weakly) complete and cut-free tableau systems for the
propositional normal modal logics **S4.3**,
**S4.3.1** and **S4.14**. When the modality
``box'' is given a temporal interpretation, these logics respectively
model time as a linear dense sequence of points; as a linear discrete
sequence of points; and as a branching tree where each branch is a
linear discrete sequence of points.
_
Although cut-free, the last two systems do not possess the subformula
property. But for any given finite set of formulae *X* the
``superformulae'' involved are always bounded by a finite set of formulae
depending only on *X* and the logic **L**. Thus each
system gives a nondeterministic decision procedure for the logic in
question. The completeness proofs yield deterministic decision
procedures for each logic because each proof is constructive.
_
Each tableau system has a cut-free sequent analogue proving that Gentzen's
cut-elimination theorem holds for these latter systems. The techniques are
due to Hintikka and Rautenberg.
%X UMCS-93-8-3.ps.Z
%T Understanding the differences between VDM and Z
%A I. J. Hayes
%A C. B. Jones
%A J. E. Nicholls
%D August 1993
%X This paper attempts to provide an understanding of the interesting
differences between two well-known specification languages. The main ideas
are presented in the form of a discussion. This was partly prompted by
Lakatos' book `Proofs and Refutations' but since this paper is less
profound, characters from the children's television series `The Magic
Roundabout' are the speakers: Zebedee speaks for Z, Dougal puts the VDM
position, and Florence acts as the user. The specifications which are
presented have been made similar so as to afford comparison - in neither
the VDM nor the Z case would they be considered to be ideal presentations.
Some technical details are relegated to footnotes.
%X UMCS-93-8-1.ps.Z
%T The Design and Analysis of a Stateless Data-Flow Architecture
%A David F. Snelling
%R PhD thesis
%D July 1993
%X Data-Flow computing is approximately 20 years old, and progress in that time
has been steady but not dramatic. As new technologies have emerged they
have been incorporated into new Data-Flow systems; however, Data-Flow has
held onto many of its original tenets. One of these tenets, contrary to
the Data-Flow ideal, is the use of explicit state for storing data
structures, arrays in particular. By imposing a shared memory model this
explicit notion of state creates a fundamental barrier preventing
traditional Data-Flow systems from being scalable. In this thesis, a
dynamic, tagged token Data-Flow architecture, SDFA (Stateless Data-Flow
Architecture), is presented which diverges from other systems in that it
does not support an explicit notion of state. In the absence of explicit
state, it is possible to exploit the advantages of Data-Flow, such as
uniform load balancing, latency hiding, and good performance on small
problems, within a distributed memory architecture. With the shared
memory barrier removed, the SDFA system is scalable. The thesis sets out
the case for a stateless model, develops an architecture based on this
approach, presents a detailed, simulation-based analysis of the system, and
proposes further research into stateless Data-Flow computing.
%X UMCS-93-7-2.ps.Z
%T A Data Partitioning Algorithm for Distributed Memory Compilation
%A Michael O'Boyle
%D July 1993
%X This paper proposes a compiler strategy for mapping FORTRAN programs onto
distributed memory computers. Once the available parallelism has been
identified, it has to be scheduled so as to minimise overhead. The
minimisation of different costs will suggest different data and computation
partitions. This is further complicated, as the effectiveness of the
partition will depend on later compiler optimisations. For this reason,
partitioning is at the crux point of compilation and has led several
researchers to propose that it be left to the user. It is our contention
that it is possible to automatically determine a good data partition and
furthermore to make use of the analysis in later optimisation stages. This
paper describes an automatic data partition algorithm which is based on
four different analysis techniques. By determining the relative merit of
each form of analysis, a data partitioning decision is made. By
integrating this algorithm into an overall compilation strategy, effective
use can be made of any data partitioning decision. To test the strategy,
it has been applied to a real non-trivial program, TRED2. The resulting
program was run on a 32-cell KSR-1, where this strategy was compared with the
native parallelising compiler and a hand-coded version of the program. It
is shown that the performance of the approach described in this paper is
comparable to that of hand-coded techniques.
%X UMCS-93-7-1.ps.Z
%T Generalisation for Induction
%A Sunil Vadera
%D June 1993
%X Proof by induction plays a central role in showing that recursive programs
satisfy their specification. Sometimes a key step is to generalise a lemma
so that its inductive proof is easier. This report examines existing
heuristics for generalisation for induction. We first examine the
applicability of heuristics for generalisation and then develop a new
approach for a class of problems for which the existing methods fail.
Unlike the existing methods, we give the conditions under which our method
guarantees a valid generalisation.
%X UMCS-93-6-8.ps.Z
%T Proof by Analogy in Mural
%A Sunil Vadera
%D June 1993
%X Conducting proofs by hand can be time consuming, to the extent
that designers have to judge whether a proof of a particular obligation is
worth conducting. Even if hand proofs are worth conducting, how do we know
that they are correct? One approach to overcoming this problem is to use
an automatic theorem proving system to develop and check our proofs.
However, in order to enable present day theorem provers to check proofs,
one has to conduct them in much more detail than hand proofs. Carrying out
more detailed proofs is of course more time consuming. This report
describes the use of proof by analogy in an attempt to reduce the time
spent on proofs. We develop and implement a proof follower based on
analogy. We present several examples to illustrate the characteristics of
our follower. The examples we give include situations in which the
follower is unable to complete a proof, as well as when it completes a
proof. The incomplete examples illustrate that the follower's failure can
provide a hint that enables the user to complete the proof. We conclude
our study by comparing our approach with other work.
%T Recursive Models of General Inductive Types
%A Yuxi Fu
%D June 1993
%X We give an interpretation of Martin-Lof's type theory (with universes)
extended with generalized inductive types. The model is an extension of
the recursive model given by Beeson. By restricting our attention to the PER
model, we show that the strictness of the positivity condition in the
definition of generalized inductive types can be dropped. It therefore
gives an interpretation of general inductive types in Martin-Lof's type
theory.
%X UMCS-93-6-6.ps.Z
%T Understanding Inductive Types in Constructions
%A Yuxi Fu
%D June 1993
%X In this paper we extend the Calculus of Constructions with generalized
inductive types. The extension is justified by showing that the usual set
theoretical model can be effectivized. It is also pointed out that the
model given in a published paper for a collection of inductive types in a
different style is wrong.
%X UMCS-93-6-5.ps.Z
%T Encodings in Polymorphism, Revisited
%A Yuxi Fu
%D June 1993
%X We consider encodings in polymorphism with finite product types. These
encodings are given in terms of I-algebras. They have the property that
all canonical terms (ground terms) are normal terms. We transplant the
proof of a well-known result to our setting and show why weak recursion is
admissible. We also show how to carry out the dual encodings using the
existential quantifier.
%X UMCS-93-6-4.ps.Z
%T Categorical Properties of Logical Frameworks
%A Yuxi Fu
%D June 1993
%X In this paper we give a new presentation of ELF which is well-suited for
semantic analysis. We introduce the notions of internal codability,
internal definability, internal typed calculi and frame languages. These
notions are central to our perspective of logical frameworks. We will
argue that a logical framework is a typed calculus which formalizes the
relationship between internal typed languages and frame languages. In the
second half of the paper, we demonstrate the advantage of our logical
framework by showing some categorical properties of it and of encodings in
it. By doing so we hope to indicate a sensible model theory of encodings.
%X UMCS-93-6-3.ps.Z
%T The Glueing Construction and Lax Limits (with applications to categories of structured posets)
%A Harold Simmons
%D June 1993
%X Starting life as a way of reconstructing a topological space from a pair of
complementary subspaces, the glueing construction has found employment in a
wide range of different roles from the construction of free distributive
lattices to a supporting part in the 2-categorical analysis of type
theories. In this latter role the construction appears to be a fundamental
factor in the behaviour of higher order proof theory. What is going on
here? Before that can be answered we need at least a less ad hoc
description of the construction. In this paper I set down what is, I
believe, the beginnings of a coherent account of the algebraic version of
glueing. As well as the abstract theory I give a good selection of
different examples to illustrate the diverse nature of the uses of the
construction.
%X UMCS-93-6-2.ps.Z
%T A Formal Description of the ISPW-6 Software Process Example
%A Jin Sa
%A Brian C. Warboys
%D June 1993
%X Recently, software process modelling has attracted an increasing research
interest. For the purpose of comparing and contrasting different
approaches to software process modelling, a common problem, referred to as
the ISPW-6 software process example, was defined. Many solutions to this
example have been provided by different people using different approaches.
In this report, a solution to the ISPW-6 software process example by an
approach, called BM, is presented. The suitability of BM for software
process modelling is assessed against the criteria proposed at the sixth
International Software Process Workshop.
%X UMCS-93-6-1.ps.Z
%T Semi-analytic Tableaux for Propositional Modal Logics of Nonmonotonicity
%A Rajeev Gore
%D March 1993
%X The propositional monotonic modal logics **K45**,
**K45D**, **S4.2**, **S4R** and
**S4F** elegantly capture the semantics of many current
*nonmonotonic* formalisms as long as (strong) deducibility of
*A* from a theory *G* allows the use of necessitation on
the members of *G*. This is usually forbidden in modal logic
where *G* is required to be empty, resulting in a weaker notion
of deducibility.
_
Recently, Marek, Schwarz and Truszczinski have given algorithms to
compute the stable expansions of a *finite* theory *G* in
various such nonmonotonic formalisms. Their algorithms assume the
existence of procedures for deciding (strong) deducibility in these
monotonic modal logics and consequently such decision procedures are
important for automating nonmonotonic deduction.
_
We first give a sound, (weakly) complete and cut-free, semi-analytic
tableau calculus for monotonic **S4R**, thus extending the
cut elimination results of Schwarz for monotonic **K45** and
**K45D**. We then give sound and complete semi-analytic
tableau calculi for monotonic
**K45**,
**K45D**, **S4.2** and **S4F**
by adding an (analytic) cut rule. The proofs of tableau completeness
yield a deterministic satisfiability test to determine theoremhood (weak
deducibility), because all proofs are constructive. The techniques are
due to Hintikka and Rautenberg. We then show that the tableau calculi
extend trivially to handle (strong) deducibility for *finite*
*G*.
_
Using a general theorem due to Rautenberg we also obtain the (weak)
interpolation theorem for **K45**, **K45D**,
**S4.2** and **S4R**.
%X UMCS-93-3-2.ps.Z
%T Implementing State Machines in Smalltalk
%A Trevor P. Hopkins
%D March 1993
%X Implementing software solutions using finite state machines (FSMs) is a
useful technique in many application areas, including compiler
implementation and network protocols. This report examines possible
implementation techniques for FSMs in the Smalltalk object-oriented system,
and identifies techniques which best support the goals of generality,
flexibility and software reuse.
%X UMCS-93-3-1.ps.Z
%T A Framework for Experimental Analysis of Parallel Computing
%A J. R. Gurd
%A M. D. Cooper
%A G. A. Hedayat
%A A. Nisbet
%A M. F. P. O'Boyle
%A D. F. Snelling
%A A. P. W. Bohm
%D February 1993
%X A framework is presented for experimental study of the relationships
between different approaches to large-scale computational problem-solving
on parallel computers. An initial series of experiments is introduced, and
preliminary results are presented.
_
The major areas of concern are introduced by defining five major levels of
abstraction, and by describing the application development cycle by means
of which useful parallel programs are constructed. At present, attempts to
use parallel computers are based on a limited number of hardware-oriented
computational paradigms, in which some form of parallel hardware is chosen
first, and this determines, by default, the programming and algorithmic
tools that can be applied. The framework for experimental analysis of
parallel computing is based on a more general, ``top-down'' view of the
process of ``mapping'' applications onto (parallel) hardware. In this
framework, the selection of ``moves'' in the ``mapping space'' is informed
by an understanding of the costs and benefits associated with each option.
_
A taxonomy of parallel hardware systems is outlined and a model of parallel
hardware that unifies treatment of the various groups in the taxonomy is
introduced. Four state-of-the-art parallel hardware systems are described
in the terms of this model. A general performance analysis experiment is
introduced, in which a number of programs, targeted at various parallel
computers, are developed to solve a pair of computational problems. The
problems, one an arbitrary test and the other an artificial ``control''
case, are defined, and specific algorithms for the solution of each are
presented. Parallel programs which implement these algorithms are
described briefly: These are tailored to the four hardware systems and use
appropriate high-level programming languages. Finally, measurement
techniques are discussed, and specific hardware-level performance
characteristics are introduced, based on the unifying hardware model.
Preliminary results for one performance analysis experiment are presented.
%T A Multimedia Information System with Automatic Content Retrieval
%A Michael Hugh O'Docherty
%R MSc thesis
%D February 1993
%X Traditional databases use simple data types - chiefly numbers and strings -
to represent information such as payroll records or scientific data.
Queries are commonly expressed in relational calculus or algebra and the
results require little more than a textual terminal for presentation. The
natural successor to the traditional database is the Multimedia Information
System which also uses richer data types - images, text, sound and so on -
to describe some application domain. This calls for more advanced
technologies such as graphical workstations for data creation and
presentation, high-bandwidth networks for fast access to distributed data
and large repositories for the storage of objects. These technologies are
now widely available. Ideally, queries put to a Multimedia Information
System should refer to the content of stored objects - content retrieval -
and results should contain complex as well as simple data types. Manual
entry of content descriptions is time-consuming, error-prone and
subjective; a better approach is to draw on the emerging fields of image
interpretation, text interpretation, speech interpretation and knowledge
representation to provide automatic content retrieval. This thesis serves
as an introduction to Multimedia Information Systems and related fields and
describes in detail a pilot system with automatic content retrieval built
by the Multimedia Group at Manchester University; the author has been
instrumental in the design and implementation of this system, especially
those components that cater for the generation and querying of content
descriptions.
%X UMCS-93-2-2.ps.Z
%T A Process Algebraic Semantics for Core ELLA
%A H. Barringer
%A G. D. Gough
%A B. Q. Monahan
%A A. Williams
%D February 1993
%K Hardware description languages, Semantics, Verification, Process algebras, ELLA
%X We give a presentation of a trace-based semantics of Core ELLA, a
covering subset of the commercial hardware description language ELLA,
using a synchronous process algebra called ELLA PROCESS ALGEBRA (EPA).
This report is a semantics reference manual for ELLA, giving a
detailed definition of the behaviour of ELLA expressions on a
construct by construct basis.
_
EPA is an instantiation of a generic Process Algebra Framework
(PAF). It supports parameterised process calls, and has highly
structured actions representing pairs of input/output maps labelling
its transitions. Value expressions belonging to a rich ELLA action
algebra can appear as value-passing arguments to process calls, or in
the input/output maps. The EPA process terms representing
ELLA are constructed compositionally following the structure of ELLA
expressions. Primitive terms, corresponding to atomic operations in
ELLA, are assembled using a small set of process operators. In
particular the parallel composition operator models synchronous
communication between components and is effected by a special product
operation on actions. The well-formedness of ELLA expressions is also
defined, and acts as a pre-condition for the dynamic semantics.
_
The EPA terms resulting from the compilation of ELLA expressions can
be translated into a special normal form, which is essentially a
symbolic state machine representation. This forms the basis for
semi-automated high-level symbolic manipulation and reasoning
tools.
_
The work has been conducted as part of a collaborative project
sponsored by the DTI and SERC, involving Manchester, DRA (Malvern) and
Harlequin Ltd. (Cambridge), with GEC Plessey Semiconductors (Swindon)
as evaluators. The aim of the project has been to provide formal
verification support tools which are amenable to the industrial
hardware designer. Using the formal semantics as a basis, software
tools for ELLA compilation, normalisation, (ground-term/symbolic)
simulation, (enumerated/symbolic) verification and ELLA-to-ELLA
rewriting have been implemented by the project partners, and embedded
into an integrated ELLA Design and Verification Environment.
%X UMCS-93-2-1.ps.Z
%T The Expressive Power of the English Temporal Preposition System
%A Ian Pratt
%A David Bree
%D January 1993
%X In this report, we present an account of the semantics of the English
temporal prepositions. The approach taken here stresses the limited range
of expressive possibilities afforded by these words, and pays particular
attention to the irregular interactions they exhibit with verb tense and
aspect. A framework is developed within which the semantics of English
temporal expressions can be couched, allowing the temporal prepositions to
be grouped according to their `temporal function'. A key finding of the
report is that the eighteen temporal prepositions considered here express,
between them, at most six distinct temporal functions.
%X UMCS-93-1-7.ps.Z
%T Program and Data Transformations for Efficient Execution on Distributed Memory Architectures
%A Michael F. P. O'Boyle
%R PhD thesis
%D January 1993
%X This report is concerned with the efficient execution of array computation
on Distributed Memory Architectures by applying compiler-directed program
and data transformations. By translating a subset of a single-assignment
language, Sisal, into a linear algebraic framework it is possible to
transform a program so as to reduce load imbalance and non-local memory
access. A new test is presented which allows the construction of
transformations to reduce load imbalance. By a new expression of data
alignment, transformations to reduce non-local access are derived. A new
pre-fetching procedure, which prevents redundant non-local accesses, is
presented and forms the basis of a new data partitioning methodology. By
applying these transformations in a straightforward manner to some well
known scientific programs, it is shown that this approach is competitive
with hand-crafted methods.
%X UMCS-93-1-6.ps.Z
%T Numerical Linear Algebra and Computer Architecture: An Evolving Interaction
%A G. A. Hedayat
%D January 1993
%X The aim of this report is to highlight the evolving interaction between
computer architecture on the one hand and the design and analysis of
algorithms for numerical linear algebra on the other. This report can best
be described as a collection of pointers to aspects of the development
processes of numerical linear algebra, and to various existing surveys and
software. The rich variety of activity in design, analysis, comparison and
evaluation of algorithms, new and old, is examined in the context of rapid
technological and architectural advances. The characteristics of
algorithms for linear algebra are discussed without going into mathematical
details: There are references to important surveys that provide such
details. Reflections on the practices of numerical linear algebra will, it
is hoped, point to possibilities for and limitations of automating the
``mapping'' of algorithms onto high performance computers.
%X UMCS-93-1-5.ps.Z
%T United Functions and Objects: an Overview
%A John Sargeant
%D January 1993
%X United Functions and Objects (UFO) is a general-purpose, implicitly
parallel language designed to allow a wide range of applications to be
efficiently implemented on a wide range of parallel machines while
minimising the conceptual difficulties for the programmer. To achieve
this, it draws on the experience gained in the functional and
object-oriented ``worlds'' and attempts to bring these worlds together in a
harmonious fashion. This report concentrates on examples which illustrate
various aspects of UFO, including the various encapsulation and abstraction
mechanisms it provides, and the various forms of parallelism which can be
exploited.
%X UMCS-93-1-4.ps.Z
%T How to eliminate Predicate Variables
%A H. Simmons
%D January 1993
%X I describe an algorithm which when given a sentence *G* involving
universally quantified predicate variables, first reduces this sentence to
normal form *G** and then, if *G** satisfies certain extra
restrictions, eliminates the predicate variables of *G** to produce a
predicate-free equivalent of *G*. This process may involve the
introduction of skolem functions but in many cases does not (and so
produces a more elementary version of *G*). The algorithm eliminates
the predicate variables by witnessing them with certain definable
predicates. The choice of these predicates is guided by the shape of the
normal form *G**. The algorithm was initially designed to deal with
modal correspondence problems. In this respect it is an extension of
Sahlqvist's algorithm which produces first order equivalents of a wide
class of modal formulas. The algorithm is illustrated by several examples
from modal logic.
%X UMCS-93-1-2.ps.Z
%T A Density Property of the Primitive Recursive Degrees
%A Harold Simmons
%D January 1993
%X The standard Ackermann function is produced by diagonalization to lie
outside the class of primitive recursive functions. By slowing down the
diagonalization I produce a whole continuum of functions which, in
complexity, lie above the primitive recursive functions and below the
Ackermann function. There is a dense set of these functions which have a
rudimentary graph.
%X UMCS-93-1-1.ps.Z
%T A Guide to Reading VDM Specifications
%A Bob Fields
%D December 1992
%X The intention of this document is to give an overview of the syntax of the
specification language of the Vienna Development Method (VDM) as an aid to
those who are to some extent familiar with formal notations and have to
read or review documents containing VDM. This document is not a tutorial
introduction to or motivation for formal methods in general or VDM in
particular; nor is it a formal description of what VDM is or means. Both
these types of description can be found in the references.
%X UMCS-92-12-4.ps.Z
%T idC: A subset of standard C for initial teaching
%A Ian Cottam
%D December 1992
%X A subset of the programming language ISO Standard C is defined informally.
It is claimed that the resulting language (called idC) is suitable for
introductory programming classes for university-level students. The claim
comes with a caveat that the language requires a certain level of software
support not normally mandated (e.g. by the C standard). After justifying
each restriction, a sketch of the first prototype implementation is
presented. idC may well be a suitable "workhorse" language to introduce
concurrently with, or shortly after, a functional language such as Haskell
or SML.
%X UMCS-92-12-3.ps.Z
%T Formal methods - selected historical references
%A C. B. Jones
%A A. M. McCauley
%D December 1992
%X This report contains citations to papers which are of historical interest
in the area of formal approaches to software development.
%X UMCS-92-12-2.ps.Z
%T An object-based design method for concurrent programs
%A C. B. Jones
%D December 1992
%X The property of a (formal) development method which gives the development
process the potential for productivity is compositionality. Interference
is what makes it difficult to find compositional development methods for
concurrent programs. This paper is intended to contribute to tractable
development methods for concurrent programs. In particular it explores
ways in which object-based language concepts can be used to provide a
compositional development method for concurrent programs. This text
summarizes results from three draft papers. It firstly shows how
object-based concepts can be used to provide a designer with control over
interference and proposes a transformational style of development (for
systems with limited interference) in which concurrency is introduced only
in the final stages of design. The essential idea here is to show that
certain object graphs limit interference. Secondly, the paper shows how a
suitable logic can be used to reason about those systems where interference
plays an essential role. Here again, concepts are used in the design
notation which are taken from object-oriented languages since they offer
control of granularity and ways of pinpointing interference. Thirdly, the
paper outlines the semantics of the design notation mapping its constructs
to Milner's pi-calculus.
%X UMCS-92-12-1.ps.Z
%T Towards a Formal Framework for Deductive Synthesis of Logic Programs
%A Kung-Kiu Lau
%A Mario Ornaghi
%D November 1992
%X In this report, we set up a formal framework for studying deductive
synthesis of logic programs. We describe a model for logic program
synthesis, where synthesis is a process that extracts a logic program from
a first-order specification. A first-order specification is formalised in
the framework of a first-order theory, and the atomic relations computed by
the synthesised program are new symbols introduced by definition axioms.
Synthesis methods of this kind have been proposed, for example by Hogger
and Lau and Prestwich. We also present uniformity results for SLD and
SLDNF as another important element of our framework. Finally, using our
framework, we give some incompleteness results for deductive synthesis in
Peano Arithmetic.
%T Declarative Integration of Object-Oriented Programming and Knowledge Representation
%A Steven K. Wallis
%R PhD thesis
%D June 1992
%X Knowledge representation is concerned with the representation of real-world
knowledge in a computer. Object-oriented programming provides facilities
for encapsulation and code reuse. It is argued that their integration on a
declarative basis yields significant benefits, including more modular
knowledge bases and more reliable programs. This thesis describes a search
for declarative object-oriented knowledge. The search begins with RUN, an
object-oriented approach to semantic networks. It continues with FOOD,
which integrates declarative and object-oriented techniques. The search
ends with ROOK, which combines the advantages of RUN and FOOD. ROOK
represents object-oriented code and knowledge with the same formalism. An
item of declarative object-oriented knowledge is both an item of knowledge
and a piece of object-oriented code, and is completely declarative. This
integration facilitates the execution of knowledge and reasoning about
programs. ROOK supports four knowledge representation paradigms (logic,
rules, semantic networks and frames), plus other programming paradigms such
as functional programming and constraints.
%T A semantics and verification framework for ELLA
%A H. Barringer
%A G. Gough
%A T. Longshaw
%A B. Monahan
%A M. Peim
%A A. Williams
%D April 1992
%X We present an automata-based compositional semantics for an abstraction of
the hardware design and description language ELLA. The semantics forms the
basis for the automata-theoretic verification of ELLA descriptions, or
programs. We further outline the principles of the verification strategy
and use a small example, a twisted-ring counter, to demonstrate both the
semantics and the verification techniques.
%X UMCS-92-4-6.ps.Z
%T The Search for Tractable Ways of Reasoning about Programs
%A C. B. Jones
%D April 1992
%X This paper traces the most important steps in the history of research
on reasoning about programs. The main focus is on ways of verifying
that sequential imperative programs satisfy their specifications. Over
time it has become clear that post facto verification is only practical
for small programs; verification methods which support the
development of larger programs exploit a notion of compositionality.
Coping with concurrent algorithms is much more challenging -- this and
other extensions are considered briefly. The main thesis is that the
idea of reasoning about programs has been around for a long time; the
research has been to find tractable methods.
%T UFO -- United Functions and Objects: Draft Language Description
%A John Sargeant
%D April 1992
%X UFO is a general-purpose, implicitly parallel language designed to
allow a wide range of applications to be efficiently implemented on a
wide range of parallel machines while minimising the conceptual
difficulties for the programmer. To achieve this, it draws on the
experience gained in the functional and object-oriented ``worlds'' and
attempts to bring these worlds together in a harmonious fashion.
%X UMCS-92-4-3.ps.Z
%T Exclusive normal form of Boolean circuits
%A B. Bani-Eqbal
%D April 1992
%X Recently, Binary Decision Diagrams (BDDs) have been proposed as an
efficient representation of Boolean functions. Although most practical
functions have a compact BDD, some useful ones do not. A prime example of
these is the integer multiplier function, which was proved to have at least
exponential complexity, and to be much worse in practice. We propose a
modification of BDDs using the exclusive OR form, and show that it leads to
a more compact representation for the multipliers.
%X UMCS-92-4-1.ps.Z
%T Knowledge-Based Applications = Knowledge Base + Mappings + Applications
%A Paul Kwai-Chung Pun
%R PhD thesis
%D March 1992
%X The proliferation of research work carried out in knowledge-based systems
sheds light upon issues of knowledge acquisition, manipulation and
representation. However, the issue of how this knowledge, once captured
and stored in a knowledge base, can be incorporated into and reused in
different application programs has not received enough attention. The main
difficulty of this issue is the inherent differences between modelling
concepts manifested by the objects described in the knowledge base and
those manifested by the data structures manipulated within the applications
programs. The aim of this thesis is to investigate this issue in a more
formal way and to try to propose a systematic approach in which mappings
between these two different modelling manifestations can be deduced. In
order to study the problem systematically, a formal framework for modelling
relationships between objects inside a knowledge base and objects in
application data structures is presented. An entity functionalization
process is then defined. This provides an abstraction in which different
modelling manifestations can be expressed both in terms of sets of entities
and sets of entity functions. Two types of mappings, the entity mappings
and the entity function mappings, are then deduced based on tree reduction
and algebraic isomorphism. Given these mappings, objects and inter-object
relationships in a knowledge base can be associated with the appropriate data
structures in application programs.
%T Towards a Strictness Analyser for Haskell: Putting Theory into Practice
%A Julian R. Seward
%R MSc thesis
%D February 1992
%X This thesis examines the design and implementation of a strictness
analyser for a small functional programming language. I look at
implementation issues neglected by theoreticians and show how a
practical analyser can be implemented and incorporated within the
larger structure of a compiler.
_
Previously published implementations have handled non-flat domains
and polymorphism badly. This thesis presents some new insights
which largely overcome both problems, and which lead to a
surprisingly fast implementation. Three key elements of this
implementation are the Frontiers representation, Abstraction and
Concretisation maps, and appropriate representation of abstract
domains. This thesis discusses their roles. The main deficiency of the
current analyser is a lack of higher-order analysis. To some extent,
previous work shows how to handle this. In time, such analysis
may be incorporated.
_
In the last three or four years, there have been several claims to the
development of fast analysers. Upon examination, these often rely
either on using insufficiently detailed abstract domains, or work with
some ad-hoc algorithm built on dubious mathematical foundations.
This analyser, by contrast, is based solidly on the theoretical work
presented in Geoffrey Burn's Ph.D. thesis ``Abstract Interpretation and
the Parallel Evaluation of Functional Languages''.
%T Transformation and Synthesis in METATEM, Part 1: Propositional METATEM
%A Michael Fisher
%A Philippe Noel
%D February 1992
%X In this report, the syntactic manipulation of temporal logic programs
is considered. Transformation rules are provided for a temporal
programming language that forms part of the METATEM framework for
executable temporal logics. Soundness of the various
transformations is shown and several applications are given, such as
the production of a normal form for programs, which is the basis of
both compilation and resolution techniques. The use of transformation
techniques in METATEM program synthesis is also considered. Here, a
high-level specification, represented as a non-deterministic
METATEM program is translated into an `implementation', which is
again represented as a METATEM program.
%T Load Balancing of Parallel Affine Loops by Unimodular Transformations
%A M. O'Boyle
%A G. A. Hedayat
%D January 1992
%X This paper is concerned with the automatic mapping of array
computation to processors efficiently. One of the major overheads
associated with the mapping of computation is load imbalance. An
optimising compiler should find a mapping such that this overhead is
minimised.
_
This paper describes formally the mapping of loop iterations to
processors so as to minimise load imbalance. The class of perfectly
load balanced affine loops is defined; using unimodular
transformations, it is shown that a large class of loops are equivalent.
An algorithm is detailed which can determine both whether a loop
structure may be load balanced, and the necessary transformations to
do so. This algorithm has a worst case complexity of O(m^3) where m is
the dimensionality of the iteration space. The analysis is extended to
the case when many loops are to be partitioned where it is shown that a
transformation may be constructed which simultaneously balances all
appropriate loops. Finally it is shown that if a loop is perfectly load
balanced, then there exists a transformation such that it may be placed
outermost so as to aid partitioning.
%X UMCS-92-1-1.ps.Z
%T An Introduction to Categorial Grammars
%A Mary M. Wood
%D December 1991
%X Categorial grammars come from a long tradition of linguistic
description rooted in philosophy of language, logic, and
algebra. Like dependency grammars, they look for the
constructive patterns of semantic linkage that hold a sentence
together, rather than for the analytic patterns by which it can
be segmented. They are distinctive for their direct
representation of semantics in syntax - which endears them
to philosophers of language, and to some linguists - and for
their arithmetical transparency - which endears them to
logicians, and especially to formal and computational linguists.
It is no coincidence that they have finally come to prominence
with the dramatic rise of computational linguistics during the
last ten years or less.
_
This introduction outlines first the most important principles
and practice of the categorial enterprise. Chapter 2 offers a
brief historical introduction. Chapter 3 describes one specific
``benchmark'' categorial grammar, the ``Lambek calculus'', and
Chapter 4 discusses a range of different proposed extensions to
that grammar.
%T A Set Theoretic Semantics for First Order Temporal Logic: definition and Application using Isabelle
%A P. A. J. Noel
%D December 1991
%X The paper contains a brief description of an exercise in using the
generic theorem prover Isabelle to perform the following:
- define the semantics of a temporal logic within Zermelo-Fraenkel (ZF)
set theory;
- define a number of inference rules concerning the temporal
logic;
- define a tactic which uses some of the derived inference rules
in backward proofs to generate automatically a model for
propositional temporal formulae.
_
One advantage of using a theorem prover to define the semantics of a
logic within another logic is that (sound) proof systems for the defined
logic may be derived from its semantics. Another advantage of this
approach is that it allows the derivation of prototype `programs' (in the
form of inference rules under the control of an automatic tactic) correct
with respect to the semantics. An example of such a derivation is
discussed in this paper.
%T German Temporal Prepositions from an English Perspective
%A Martin Durrell
%A David S. Bree
%D December 1991
%X The following is an attempt to analyse the procedures for
selecting temporal prepositions (or conjunctions) in German
according to the framework developed in Bree et al. (1990).
Systematic contrast is made with the English as analysed there
(and, where interesting or relevant, with Dutch). The German
time words follow selection trees similar to those in English
and Dutch, but there are some quite interesting differences
(and not a few problems), and some amendments had to be made
to the analysis in Bree et al. (1990). Despite these differences,
there would seem to be no reason in principle why the
analytical framework in terms of decision trees adopted by Bree
et al. (1990) should not be as revealing for German as for other
languages.
%T A Discrete Systematic Model for racing in an SR Latch
%A W. P. R. Mitchell
%A A. S. Hilditch
%D December 1991
%X This paper develops a model of propagational delay for digital logic
which assumes all voltages to be boolean valued. This allows for
calculations on racing which cannot normally be done with only
boolean valued models. To illustrate this point the paper goes through
a particular instance of how a latch performs meta-stably when its set
and reset inputs are made 1 simultaneously, and then simultaneously
made 0.
_
The extra ingredient which generates the propagational delay is the
incorporation of a value for the MOSFET SiO2/Si interface charge. That
charge is then modelled as varying in a linear way over time with
changes in the gate value.
%T Learning a Grammar
%A W. Daniel Solomon
%D December 1991
%X A prototype system (`L') for the induction of a lexical grammar
from a sample of natural language has been produced. L takes
as its input a selection of children's reading books, and
induces, or learns, a Categorial Grammar, which takes the form
of a lexicon, together with further corpora, to augment its
grammar. L also demonstrates the validity of the grammar
produced by generating further sentences from that grammar.
%T Modularity in Model-Oriented Formal Specifications and its Interaction with Formal Reasoning
%A John S Fitzgerald
%R PhD thesis
%D November 1991
%X This thesis concerns the provision of facilities for the modular
structuring of model-oriented formal specifications of computing
systems.
_
The basic concepts of model-oriented specification and the need for
modular structuring of specifications are reviewed. It is argued that a
structuring mechanism should permit formal reasoning to proceed
separately for each unit of structure. It is also argued that the
provision of structuring facilities should be governed by practical
experience.
_
The body of the thesis applies these principles to the development of a
collection of structuring facilities and their application to a model-
oriented specification language. A naive initial case study helps
determine a basic repertoire of structuring facilities. Structured
specifications will consist of definitions of modules. Each module
contains an unstructured specification encapsulated by an
interface which defines dependencies on, and services supplied to,
other modules. Reuse of modules within and between structured
specifications is supported. Abstract syntax, context conditions and
denotational semantics for this system are proposed. The semantics is
presented in terms of theories associated with modules and does not
make reference to any particular language for the module contents.
_
A case study using the structuring facilities developed thus far leads
to a consideration of their use with a particular model-oriented
language. The generation of a theory from a module's contents, the
management of the name space due to the other modules in the
structure, and the use of state-based specifications within modules
are considered.
_
A final case study applies the techniques developed in earlier
chapters. A concluding chapter assesses the extent to which the
definition of structuring facilities described in the thesis has achieved
its stated aims. Subjects for further research are suggested.
%T System software on the Flagship parallel graph reduction machine
%A K. R. Mayes
%A J. A. Keane
%A S. Holdsworth
%D November 1991
%X The Flagship system was a graph reduction machine having a
distributed physical architecture. Although Flagship was firmly in
the declarative world, explicit state was supported in the graph
reduction model to express the behaviour of the operating system. This
state not only had to be isolated from the declarative aspects of the
Flagship machine, but also had to be supported with respect to
distribution. The mechanisms provided for maintaining the consistency
of state are discussed with respect to atomic actions at levels in the
Flagship system. This approach is used in order to demonstrate how
the software environment was supported by the basic execution
mechanism of the machine. The nature, construction and creation of
system software is discussed with particular reference to optimising
access to distributed resources.
%T Specifying Concurrent Object-based Systems using Combined Specification Notations
%A J. Sa
%A B. C. Warboys
%D July 1991
%X Although many specification notations for sequential systems have
been widely used in practice, most specification notations for
concurrent systems, such as temporal logics, are still far too
complicated to be used in practice. This report describes a
specification method for concurrent object-based systems by utilising
simpler notations, such as VDM, and restricted temporal logic
notations.
In this specification framework, specifications of systems are defined
in a temporal logic. However, the specifier of a system does not need
to use the complicated temporal logic directly to specify the behaviour
of the system; instead, different, simpler specification notations are
used to specify its different aspects, which are then combined into a
formula defined in the temporal logic.
%T An Attempt to Reason about Shared-State Concurrency in the Style of VDM
%A Ketil Stolen
%D July 1991
%X The paper presents an attempt to develop a totally correct shared-state
parallel program in the style of VDM. Programs are specified by tuples
of five assertions (P,R,W,G,E). The pre-condition P, the rely-condition
R and the wait-condition W describe assumptions about the
environment, while the guar-condition G and the eff-condition E
characterise commitments to the implementation.
The pre-, rely- and guar-conditions are closely related to the similarly
named conditions in Jones' rely/guarantee method, while the eff-
condition corresponds to what Jones calls the post-condition. The
wait-condition is supposed to characterise the set of states in which it
is safe for the implementation to be blocked; in other words, the set of
states in which the implementation, when it becomes blocked,
eventually will be released by the environment. The implementation is
not allowed to be blocked during the execution of an atomic statement.
Auxiliary variables are introduced to increase the expressiveness.
They are used both as a specification tool, to characterise a program
that has not yet been implemented, and as a verification tool, to show
that a given algorithm satisfies a specific property. However, although
it is possible to define history-variables in this approach, the
auxiliary variables may be of any type, and it is up to the user to
define the auxiliary structure he prefers. Moreover, the auxiliary
structure is only a part of the logic. This means that auxiliary
variables do not have to be implemented as if they were ordinary
programming variables.
%T Proposal for an Information Model for EDIF
%A Rachel Lau
%R PhD thesis
%D June 1991
%X This report describes a partial information model of EDIF 2 0 0. First
of all, an outline of the purpose of an information model for EDIF is
presented. Then, a general overview of the model is given, followed by
a more detailed description of the modelling work. The complete
dictionaries of terms are given in the following chapters and an index
of terms is included at the end of the report. The model itself, which
consists of a total of 17 schemas, can be found in appendices A-Q. The
four appendices R-U identify, respectively: the point of definition of
an EXPRESS object within the schemas; whether or not an EDIF
construct has been modelled; the available attributes for each entity;
and the relationships in which an object may occur.
%T A Method for the Development of Totally Correct Shared-State Parallel Programs
%A Ketil Stolen
%D June 1991
%X A syntax-directed formal system for the development of totally correct
programs with respect to an (unfair) shared-state parallel programming
language is proposed. The programming language is basically a while-
language extended with parallel- and await-constructs. The system is
called LSP (Logic of Specified Programs) and can be seen as an
extension of Jones' rely/guarantee method. His method is strengthened
in two respects:
- Specifications are extended with a wait-condition to
allow for the development of programs whose correctness
depends upon synchronisation.
- Auxiliary variables are introduced to increase
expressiveness. They are either used as a specification
tool to eliminate undesirable implementations or as a
verification tool to prove that a certain program
satisfies a particular specification.
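Purely as an illustration of the five-assertion shape described above (the record and all names below are ours, not LSP's notation), such a specification might be rendered as predicates over states and state pairs:

```python
# Hypothetical sketch of a five-assertion (P,R,W,G,E) specification.
# P and W are predicates of one state; R, G and E relate two states.
from dataclasses import dataclass
from typing import Callable, Dict

State = Dict[str, int]
Pred1 = Callable[[State], bool]
Pred2 = Callable[[State, State], bool]

@dataclass
class Spec:
    pre: Pred1    # P: assumption about the initial state
    rely: Pred2   # R: assumed of every environment transition
    wait: Pred1   # W: states in which blocking is permitted
    guar: Pred2   # G: committed for every component transition
    eff: Pred2    # E: overall effect, initial versus final state

# Example: increment a shared counter x at least once; the environment
# may only increase x (rely), and the component never decreases it (guar).
inc_spec = Spec(
    pre=lambda s: s["x"] >= 0,
    rely=lambda s, t: t["x"] >= s["x"],
    wait=lambda s: False,  # this component is never allowed to block
    guar=lambda s, t: t["x"] >= s["x"],
    eff=lambda s, t: t["x"] >= s["x"] + 1,
)
```

The point of the record is only to show which conditions constrain single states and which constrain transitions; the development method itself is as described in the paper.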

%T Interference Resumed
%A C. B. Jones
%D May 1991
%X Compositional development methods for concurrent (interfering)
programs are harder to find than for sequential (isolated) programs.
Rely and guarantee conditions extended operation decomposition
methods for sequential programs to cover concurrent shared-variable
systems: interference was specified in order to achieve
compositionality. Ketil Stolen's thesis addresses shortcomings of the
rely/guarantee idea: synchronization is specified by a wait-condition
and expressiveness is increased by judicious use of auxiliary
variables. This paper describes a new step in the quest for
compositional development methods for concurrent programs. A
collection of temporal operators aimed at natural reasoning about
programs is indicated; these operators apply to predicates of two
states and a style of specification is proposed which clearly separates
transitions of a component from those of its environment. Furthermore
a case is made for `resumptions' as a semantic model for the logic.
Since this paper reports on work-in-progress, a list of known problems
is given.
%T The HDG-Machine: A Highly Distributed Graph-Reducer for a Transputer Network
%A Hugh Kingdon
%A David R. Lester
%A Geoffrey L. Burn
%D April 1991
%X Distributed implementations of programming languages with implicit
parallelism hold out the prospect that the programs are immediately
scalable. This paper presents some of the results of our part of Esprit
415, in which we considered the implementation of lazy functional
programming languages on distributed architectures. A compiler and
abstract machine were designed to achieve this goal. The abstract
parallel machine was formally specified, using Miranda (trade mark of
Research Software). Each instruction of the abstract machine was then
implemented as a macro in the Transputer Assembler. Although macro
expansion of the code results in non-optimal code generation, use of the
Miranda specification makes it possible to validate the compiler before
the Transputer code is generated. The hardware currently available
consists of five T800-25's, each board having 16M bytes of memory.
Benchmark timings using this hardware are given. In spite of the
straightforward code-generation, the resulting system compares
favourably with more sophisticated sequential implementations, such
as that of LML.
%T A Performance Assessment of the GSM Radio Link Protocol (RLP)
%A Brian W. Marsden
%D April 1991
%X This paper presents the results of a simulation evaluation of the Radio
Link Protocol (RLP), which is an adaptation of HDLC for use on radio
links in cellular networks. The RLP protocol is first described, in
particular the ways in which it differs from standard HDLC, in the
context of the special characteristics of radio links. The simulation
model is then discussed, with particular emphasis on the choice of
operational parameters and error recovery policies, which are crucial
issues given a radio link which is inherently unreliable and
contributes a propagation delay considerably in excess of the time
taken to transmit a frame. The results of the study are then presented,
with recommendations of suitable operational parameter values and
some suggested changes to the RLP specification. The central aim of
this work is to determine the combination of operational parameters
which will maximise throughput and minimise transit time for a useful
range of link error rates.
%T An Algorithm for Planning `Sensible' Routes
%A Ian Pratt
%D April 1991
%X The problem of finding an optimal route between two points in a space
strewn with obstacles has been extensively studied in robotics. The
standard approach to such problems is first to reduce the original
geometrical specification to a finite graph of possible routes, and then
to use graph-searching techniques together with the metrical
information available in the original geometrical specification to select
the best of these routes. The algorithm presented here, by contrast,
employs a `first-past-the-post' approach to route planning.
Specifically, we show that the algorithm is guaranteed to produce a
route satisfying a certain criterion of `sensibleness', because routes
failing this criterion are always slower to construct. The
approach taken here differs in three respects from most current work
on route planning:
- it finds sensible, but not necessarily optimal, routes;
- it is uninfluenced by perspective (though it is limited by
occlusion problems);
- it produces its output in a qualitative form,
which is nevertheless well-suited to guide the appropriate route-
following behaviour.

%T Branching Time and Partial Order in Temporal Logics
%A Wojciech Penczek
%D March 1991
%X The aim of this report is to present existing formal languages of
propositional temporal logic with frames based on branching time
structures or, more generally, partial orders. Branching time and
partial order logics differ not only in their underlying frames, but
also in the way the logics are linked to the behaviour of concurrent
systems.
%T Distributed Algorithms for Detecting Distributed Deadlock
%A Steve Hilditch
%A Tom Thomson
%D March 1991
%X We present two simple algorithms for the detection of deadlock on a
distributed database system. They are designed to be scalable and
minimise network message traffic. An object-oriented approach is used
and semi-formal proofs of the algorithms' correctness are given.
%T A Distributed Termination Detection Scheme
%A Xinfeng Ye
%A John A. Keane
%D March 1991
%X A fully distributed scheme for detecting the termination of distributed
computations is proposed. The scheme does not require a pre-defined
detector, and takes into account problems such as network delay and
the non-order-preserving arrival of messages. The scheme can be
applied to any kind of connection topology. The correctness of the
scheme is presented in terms of showing that the global stable
condition holds when the scheme declares the termination of the
computation. The upper bound of the number of the messages which are
used to detect termination is also discussed.
%T Operational Semantics for Hardware Design Languages
%A Howard Barringer
%A Graham Gough
%A Brian Monahan
%D February 1991
%X An operational semantics for two hardware design languages, abstracted
from Abel and ELLA, is given using Structural Operational Semantics.
This can be used to calculate a state machine representation suitable
for verification purposes. We show how, despite differences in
semantic structure, the languages discussed possess various common
features, such as the use of iterative fixpoints to calculate stable
states under value propagation. Although hardware designers expect
their designs to represent finite state machines, an explicit semantics
is needed to calculate these state machines in general. This enables
construction of a reference simulator against which output of other,
more efficient simulators can be compared.
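The iterative-fixpoint idea mentioned above can be sketched in a few lines (this sketch and its names are ours; the `step` function stands in for one round of value propagation through a net):

```python
def stable_state(step, state, limit=1000):
    """Iterate a value-propagation step function until a fixpoint
    (a stable state) is reached, or give up after `limit` rounds."""
    for _ in range(limit):
        nxt = step(state)
        if nxt == state:
            return state
        state = nxt
    raise RuntimeError("no stable state reached within limit")

# Example net: wire a follows the input, wire b is the negation of a.
def step(s):
    return {"in": s["in"], "a": s["in"], "b": not s["a"]}

settled = stable_state(step, {"in": True, "a": False, "b": False})
# settled is {"in": True, "a": True, "b": False}
```

A net with a combinational cycle would never satisfy `nxt == state`, which is why calculating stable states needs an explicit semantics rather than the designer's finite-state intuition alone.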
%T Distributed Simulation Using ``Relaxed Timing''
%A Alvaro Garcia Neto
%R PhD thesis
%D February 1991
%X Research in computer architecture relies increasingly on simulation as
opposed to prototyping. However, as the size and complexity of the
simulated systems increases, ``conventional'' sequential simulation
fails to deliver the necessary performance.
We identify the requirement to model time accurately as a main cause
for poor simulation performance. We propose a new simulation
philosophy, Relaxed Timing, in which the emphasis is on speed of
execution rather than correct modelling of time. In this philosophy,
modules are entirely event-driven, and consume data as soon as they
become available -- no synchronisation is done to ensure timing
accuracy. Because performance and accurate time modelling are
conflicting requirements, relaxation of temporal precision yields
higher performance. This approach clearly leads to preemption errors.
We show that, for ``self-scheduling'' and ``well balanced'' systems, such
as dataflow, these errors are acceptable. We show that a relaxed timing
dataflow simulator performs better than a sequential simulator for a
large number of algorithms and significant range of input data.
Having shown the adequacy of the model, we investigate possible
improvements, and find that the insertion of time-priority queues
minimizes preemption. By blocking these queues at strategic times, a
spectrum of timing precision is possible. We implement and discuss
this enhancement.
The development language for the simulator is occam. We assess the
suitability of occam as a specification language for the generic
simulation of computer architectures, and find it adequate. We also
discuss the need for tools of higher levels of abstraction to ease the
task of simulating distributed parallel systems, and place this need in
the context of current research in programming environments, software
engineering and program visualization. We describe the available tools
in the hierarchy of abstraction levels, and argue that occam is a good
candidate to fill an existing gap in this hierarchy.
%T Development of Parallel Programs on Shared Data-Structures
%A Ketil Stolen
%R PhD thesis
%D January 1991
%X A syntax-directed formal system for the development of totally correct
programs with respect to an unfair shared-state parallel while-
language is proposed. The system can be understood as a compositional
reformulation of the Owicki/Gries method for verification of parallel
programs.
Auxiliary variables may be of any sort, and it is up to the user to
define the auxiliary structure he prefers. Moreover, the auxiliary
structure is only a part of the logic. This means that auxiliary
variables do not have to be implemented as if they were ordinary
programming variables.
The system is proved sound and relatively complete with respect to an
operational semantics and employed to develop three nontrivial
algorithms: the Dining-Philosophers, the Bubble-Lattice-Sort and the
Set-Partition algorithms.
Finally, a related method for the development of (possibly
nonterminating) programs with respect to four properties is described.
This approach is then used to develop Dekker's algorithm.
%T Indexed Categories for Program Development
%A B. Hilken
%A D. E. Rydeheard
%D October 1990
%X We consider indexed categories as an algebraic framework and meta-
language for program logics. The logics incorporate programs in a
programming language, conversions between programs, specifications of
program behaviour in a specification language, and explicit proofs of
program correctness. Program development methods are interpreted as
constructions of indexed categories. Two examples are given: (1) type
invariants and (2) specifications of operations and their
implementation.
%T The Random Matrix Hashing Algorithm
%A Behnam Banieqbal
%A Steve Hilditch
%D September 1990
%X The Random Matrix hashing algorithm is a new hashing algorithm which
has the following properties:
- It can be sized to any bit length input and output.
- It has scattering properties which are indistinguishable from the
theoretical best. It will outperform the industry-standard hashing
functions, e.g. dividing by a prime number, or dividing by the Golden
Ratio.
- It is a source of an infinite number of different hashing functions.
- It can be implemented in hardware to hash most inputs in one clock
cycle. The number of gate delays required is proportional to the
logarithm of the number of input bits. It can also be pipelined.
- It can be chosen to be invertible if required.

The algorithm consists of multiplying the input, thought of as a
binary row vector, by a fixed random binary matrix; the result is a
binary vector, the hashed value.
We also derive formulae for the theoretical performance of ideal,
completely random hashing functions. We include graphs to show that
our algorithm performs as well as the theoretical best on substantial
test data.
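The multiplication described above is over GF(2) (AND for the bit products, XOR for the sums), so a software version is only a few lines; the function names below are ours, not the report's:

```python
import random

def make_random_matrix(in_bits, out_bits, seed=0):
    """One random out_bits-wide row per input bit: the fixed random matrix,
    stored as a list of integers."""
    rng = random.Random(seed)
    return [rng.getrandbits(out_bits) for _ in range(in_bits)]

def rm_hash(x, matrix, out_bits):
    """GF(2) vector-matrix product: XOR together the matrix rows selected
    by the set bits of the input."""
    h = 0
    for row in matrix:
        if x & 1:
            h ^= row
        x >>= 1
    return h & ((1 << out_bits) - 1)
```

Because the map is linear over GF(2), `rm_hash(a ^ b)` equals `rm_hash(a) ^ rm_hash(b)`; choosing a square, non-singular matrix makes the function invertible, as the report notes. In hardware, each output bit is an XOR tree over the selected input bits, which is where the logarithmic gate-delay claim comes from.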
%T Distributed Binding Mechanisms in the Flagship System
%A J. A. Keane
%D August 1990
%X The Flagship machine is a parallel, graph reduction machine designed
to execute functional languages. The machine supports a global address
space across its distributed physical store. A program to run on the
machine is compiled into a graph of packets and the graph is then
reduced. The load balancing of the machine is entirely dynamic and
consequently packets must be moved and copied at run-time. This
paper considers the dynamic mapping of the graph of packets to the
distributed stores that make up the Flagship machine. In particular,
the dynamic mapping of packets across the machine is viewed in a
binding framework. For high performance in a machine such as
Flagship a large proportion of memory accesses should be localised to
reduce communication overheads. There are two binding mechanisms in
the machine that provide global-to-local address packet mapping to
achieve high performance: a conventional cache mechanism and a
static copying mechanism. The paper primarily deals with the
motivation for the static copying mechanism, and considers the use and
management of the mechanism by the system software of the machine.
%T Distributed Relational Queries: Structures for Locking and Access
%A Steve Hilditch
%A Tom Thomson
%D August 1990
%X We consider the scheduling of relational database queries on
loosely-coupled multiprocessors, and the implementation of
both two-phase locking and index structures, especially
secondary indexes. We consider resources which can be
arranged in the form of a directed graph and we assume
that resources which are close in the hierarchy are close in
the processor site network. A strategy for efficient
distributed locking and releasing of hierarchically
arranged resources is given together with a discussion of its
costs. We also present a novel implementation of
secondary indexes on loosely-coupled multiprocessors,
making use of the natural location of resources. Apart
from the usual constraint that concurrency be maximised,
we consider that careful note should be taken of both:
- the number of locks needed to be taken.
- the number of network messages that need to be sent.

Our idea is to use possible hierarchical structures and
reasonable resource loading to reduce the number of locks
taken and the number of network messages needed to
implement two-phase locking. Examples of our algorithms'
usefulness to manage relational database queries are given.
%T On the usability of logics which handle partial functions
%A J. H. Cheng
%A C. B. Jones
%D March 1990
%X Partial functions are common in computing -- they arise from recursive
definitions and from the basic operators of data types like sequences.
The need to prove results about logical formulae which include terms
using partial functions must cast doubt on the applicability of
classical logic. Approaches to handling partial functions which
attempt to salvage classical logic include avoiding function application
and the use of variant equality notions. The alternative is to accept the
need for a non-classical logic. The most frequently used non-classical
logic adopts non-strict (but also non-commutative) conditional
versions of the propositional operators. An alternative -- which is more
fully explored below -- uses commutative and/or operators as defined in
the truth-tables which Kleene attributes to Lukasiewicz. A consistent
and complete proof system for such a logic has been used in the
program development method known as VDM. An important feature of
this approach is the way the definitions of partial functions are
brought into proofs. This paper analyzes different approaches to
reasoning about partial functions. In particular, it compares the
usability of different logics in handling the sort of partial functions
which arise in the specification and design of computer software.
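As an illustration of the commutative operators discussed above, here are the strong three-valued truth-tables in executable form (this sketch is ours, with `None` standing for an undefined operand):

```python
def and3(p, q):
    """Three-valued conjunction: False dominates, otherwise an
    undefined operand (None) makes the result undefined."""
    if p is False or q is False:
        return False
    if p is True and q is True:
        return True
    return None

def or3(p, q):
    """Three-valued disjunction: True dominates, otherwise an
    undefined operand (None) makes the result undefined."""
    if p is True or q is True:
        return True
    if p is False and q is False:
        return False
    return None
```

Unlike a non-strict conditional `and`, which evaluates its left operand first, `and3(None, False)` is `False`, the same as `and3(False, None)`: the operators are commutative even when one argument is undefined.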
%T The Flagship Declarative System
%A J. A. Keane
%D February 1990
%X The Flagship Project was a research collaboration between the
University of Manchester, Imperial College, London and International
Computers Limited. Its aim was to produce a complete computing
system based on a declarative programming style. This survey report
discusses the approaches taken in the three major areas addressed by
the project: (1) programming languages and programming
environments, (2) the machine architecture and computational models
and (3) the software environment, and, where appropriate, presents the
approaches with regard to general classifications. The influences on
the approaches taken by Flagship are considered and other similar
approaches are briefly discussed. The intention is to present the
different areas of the project both as a coherent whole and in
comparison to similar activities. The work in areas (2) and (3), which
involved the groups at the University of Manchester, is concentrated
upon. Some discussion of the work in area (1), and thus the
contribution of the Imperial College part of the project, is also
included.
%T Modularizing the Formal Description of a Database System
%A John S. Fitzgerald
%A Cliff B Jones
%D January 1990
%X Specification languages have been proposed, and are being developed, which
offer ways of splitting specifications into separate components or
modules. It is important that such languages are able to cope with
modularizations which are required by realistic specification tasks.
This paper offers a challenge problem in modularization based on a
description of an existing database system. The chosen modularization
is motivated by the need to separate coherent units about which useful
properties can be deduced.
%T Specifications are not (necessarily) executable
%A Ian J. Hayes
%A Cliff B. Jones
%D December 1989
%X Specifications can be written in languages which have formal semantics.
Their very formality, and the similarities with some aspects of
implementation languages, invites the idea that specifications might be
executed. This paper presents a number of arguments against that idea.
The aim is to warn of the dangers of limiting specification languages to
the point where all of their constructs can be executed. While
conceding the difficulties of relating specifications to an
understanding of the ``requirements'' for a system, it is argued that
other solutions should be sought than ``executable specification
languages''.
%X UMCS-89-12-1.ps.Z
%T Concurrency Control in the Multi-Ring Manchester Dataflow Machine
%A Yong Meng Teo
%R PhD thesis
%D November 1989
%X Extensive research effort has been spent on developing parallelising
compilers that explicitly extract parallelism in programs coded in
conventional programming languages. Recent research has focused on the
design of new languages, such as functional and single-assignment languages,
which have inherent program parallelism. These research efforts have a
common objective of exposing maximum program parallelism for exploitation
in parallel computer systems so as to attain higher computational
performance. However, over-abundant program parallelism, when left
uncontrolled, runs the risk of the machine getting swamped in
parallelism, resulting in excessive (wasteful) usage of memory
resources.
_
The aim of this thesis is to demonstrate that program parallelism can be
effectively controlled so as to fully exploit the available machine
parallelism. Using the Multi-Ring Manchester Dataflow Machine as an
example, an architectural approach to program concurrency control in this
machine is proposed. This approach tackles parallelism control at two
levels. Inter-process parallelism is controlled by a process-based hardware
throttle which imposes depth-first process execution. The throttle is a self-
adaptive feedback control mechanism that matches program and machine
parallelism by scheduling processes for execution depending on the machine
activity level. Intra-process parallelism is restrained by controlling the
execution of iterative instructions. Two approaches to controlling iterative
instructions, the self-scheduling method and the process-tree method, are
studied. Apart from reducing memory usage, controlling iterative
instructions has the important effect of reducing the occurrence of pipeline
holes caused by lengthy iterative instructions.
_
Simulation of the extended dataflow system with parallelism control has
produced encouraging results, which are analysed in some detail. Major
problems of parallelism control are exposed and some solutions to these
problems are discussed. In conclusion, the optimal multi-ring configuration
that can be supported by a hardware throttle is established. This research
has shown that the proposed parallelism control strategy is highly
effective in reducing memory resource usage, refuting the criticism
that programs
executed in parallel systems require substantially more memory resources.
%T Manchester Dataflow Machine: Benchmark Test Evaluation Report
%A John Foley
%D November 1989
%X ABSTRACT TO GO IN HERE -- JRG to provide
%T Characterising Temporal Logic
%A Michael Fisher
%D October 1989
%X This report extends some of the previous work on the characterisation of
modal and tense logics and develops a framework suitable for the
characterisation of simple properties of temporal model structures. In particular,
characteristic temporal formulas are given for linear, dense, and infinite
temporal model structures, using only the U+ temporal operator.
%T Colour System for Smalltalk-80
%A Trevor P. Hopkins
%D October 1989
%X This document describes an experimental enhancement to the Smalltalk-80
system which supports colour displays. In summary, the improved colour
system allows all output (including text) to be made using colour. It
provides several new classes, including ColorForm and ColorDisplayScreen.
These provide the same functionality as the standard classes Form and
DisplayScreen, except that additional colour operations are supported. A
new global variable ColorDisplay (a reference to an instance of
ColorDisplayScreen) replaces Display for colour output. A class
ColorDevice is provided to represent the colour display hardware.
_
It should be stressed that the system described only works with ParcPlace
Smalltalk V12.3, on Sun 3/60 workstations. It may be possible to get this
system to work using other versions of ParcPlace Smalltalk, or other
hardware platforms, but this has not yet been tried. Unless otherwise stated,
remarks herein refer to release 1.01 of the colour system.
%T MetateM: A Framework for Programming in Temporal Logic
%A Howard Barringer
%A Michael Fisher
%A Dov Gabbay
%A Graham Gough
%A Richard Owens
%D October 1989
%X In this paper we further develop the methodology of temporal logic as an
executable imperative language, presented by Moszkowski and
Gabbay, and present a concrete framework, called METATEM,
for executing (modal and) temporal logics. Our approach is illustrated by
the development of an execution mechanism for a propositional temporal
logic and for a restricted first order temporal logic.
%T Operation Decomposition Proof Obligations for Blocks and Procedures
%A Jean Alain Ah-Kee
%R PhD thesis
%D October 1989
%X In the context of a compositional development method, operation
decomposition proof obligations are given for a language with blocks, arrays,
recursive procedures, unrestricted procedure calls with value, value-result,
reference parameters and global variables. Post-conditions of two states
relating the initial and final states of the computation are used; existing
proof rules for languages with the above features are for post-conditions
which are predicates of the final state only.
_
Aliasing, as arising from reference parameters, and static scoping are dealt
with in the Hoare-like proof system by the use of a ``syntactic environment''
(in a similar way as is usually done in denotational semantics).
_
A denotational semantics is given to the language. The proof rules are proven
to be consistent with the semantics and are also proven to be (relatively)
complete. The denotational semantic model uses an environment which
includes a set giving all the locations in scope, and procedure denotations
carry with them the locations in their ``support''. The use of such an
environment, as opposed to a more standard one in which information about
the locations in scope is not included, makes easier the proofs of consistency
and completeness for the proof rules given. The semantic model using the
extended environment also has some interesting properties. It is shown to be
congruent with a semantic model using a more standard environment.
_
In the semantic model, locations are allocated in a non-deterministic way to
variable declarations. This is shown to add no real non-determinism to the
language.
%T Fair SMG and Linear Time Model Checking
%A Howard Barringer
%A Michael D. Fisher
%A Graham D. Gough
%D October 1989
%X SMG is a system designed to generate a finite state model of a program
from the program itself and an operational semantics for the programming
language. Model-checking can then be used to verify that the program
satisfies a set of desired temporal properties.
_
In this paper we first show how we have incorporated notions of fairness into
SMG; in particular, a user is now able to define semantics with ``fair''
constructs, for example, parallel composition, repetitive choice, etc. A
program may contain several fair constructs, each with its own fairness type.
Secondly we describe a practical approach to model checking of linear
temporal formulae over the fair structures generated by the new SMGF. Our
approach is a refinement and extension of the fair-satisfiability algorithms,
presented earlier by Lichtenstein and Pnueli, together with
techniques developed in our practical implementations of decision
procedures for linear temporal logic.
%T Decision Procedures for Temporal Logic
%A Graham D. Gough
%R MSc thesis
%D October 1989
%X Temporal Logic has been shown to be a
powerful tool for reasoning about concurrent programs. A decision
procedure is an algorithm for determining whether a given formula of
the logic is valid (i.e. it is true in all interpretations of the
logic). This dissertation is a study of decision procedures for
various forms of temporal logic; in particular an improved decision
procedure is given for a future time Temporal Logic, and a new
decision procedure is given for a form of Temporal Logic including
past time operators.
%T Some Special Cases of Linear Approximation of Symmetric Matrices
%A K.K. Lau
%A K. Zietak
%D September 1989
%X We consider the linear problem of best approximating a matrix by elements
of a subspace in the normed linear space of symmetric matrices. First we
identify some special cases for the spectral norm by considering the dual
problem. Then we generalise the results and consider arbitrary unitarily
invariant norms, and identify a further special case.
%T Multicast Facilities for Multimedia Group Communication Environments
%A Lek H. Ngoh
%R PhD thesis
%D September 1989
%X Complex distributed messaging systems are being developed rapidly to take
advantage of computing environments which consist of high-performance
workstations and high-speed local area networks. The Distributed
Multimedia Information Systems (DMISs) proposed here support the exchange
of multimedia information between a group of users. The architectural
design of a DMIS is presented in this thesis, together with an investigation,
design, implementation and evaluation of a set of multicast facilities which
support the operations of systems such as a DMIS.
_
This thesis shows that multicast communication facilities are essential in
providing an effective group communication environment. A multicast model
which satisfies the various requirements and characteristics of systems such
as a DMIS is proposed. To obtain a better understanding of multicast, a
prototype implementation based on the proposed model is carried out. This
is followed by an initial performance evaluation using a set of experiments.
The results reveal areas of the prototype implementation which can be
refined to increase performance. The improved performance of the refined
version is shown experimentally. Some future directions of this research
are also discussed.
%T Categorical ML -- Category-Theoretic Modular Programming
%A Esther Dennis-Jones
%A David E. Rydeheard
%D August 1989
%X We consider an extension of the functional programming language Standard
ML with a modular structure based upon concepts in category theory such as
categories, functors, natural transformations and adjunctions. In essence,
what we are doing is following the categorical imperative of considering
arrows as well as objects. This is intended to enforce a certain mathematical
rigour on the programmer, so that the only programs that can be expressed
are those with a categorical significance. Because of the algebraic nature of
category theory, we may generate equational correctness conditions for the
modular structure of programs, thus separating the correctness of individual
functions from that of modules. We describe this programming language, give
examples of its use, and explain how it is implemented in a type system.
%T Spatial Reasoning and Route Planning using Sinusoidal Transforms
%A Ian Pratt
%D August 1989
%X This paper investigates a method of spatial representation whereby convex
regions of the plane are transformed into periodic functions of one variable.
The resulting representation scheme -- called the sinusoidal representation
scheme -- is shown to facilitate a variety of commonsense spatial reasoning
tasks. A guiding principle behind the system presented here is that it
should perform well over a limited repertoire of spatial reasoning problems,
whilst making the least possible use of explicit mathematical machinery.
Thus, although it does not aspire to model human commonsense spatial
reasoning, the system is conceived of as potentially playing the same role in
a robot as that played by commonsense spatial reasoning in humans. The
final section of the paper applies the sinusoidal representation scheme to the
problem of route planning for a point moving in a plane strewn with convex
obstacles. A method is developed to identify not the precise set of optimum
paths, but rather a set of sensible paths guaranteed to contain all such
optima.
%T An Investigation into Architectures for a Parallel Packet Reduction Machine
%A Mark Irvine Greenberg
%R PhD thesis
%D August 1989
%X Declarative programming languages show promise in solving two fundamental
issues in Computer Science, that of providing concise reliable software, and a
natural methodology for programming parallel computers.
_
The Flagship project is a collaborative academic and industrial venture,
which aims to produce a complete parallel computer system capable of
supporting declarative languages efficiently. A packet-based reduction
model of computation has been specifically developed for parallel execution
and to support the features of declarative languages. The research described
in this thesis examines the machine structures and the organisation needed
for efficient packet reduction. Two main areas are covered, the architecture
of the reduction processor, and the network to interconnect the processors.
_
A high performance delta network is proposed for the reduction machine.
The individual switching elements employ a hybrid circuit/message
switching technique in conjunction with message buffers to achieve
performance. The switching elements also have the capability to distribute
work dynamically to processors of the machine, in accordance with workload
information. The design of the hybrid switch is described and simulation
results are presented to substantiate the claims for efficiency. A number of
experiments to investigate performance under varying network conditions
are also described.
_
The processor investigation examines three candidate architectures to
support packet reduction. These architectures range from a sophisticated
quad processor architecture where each processing action is carried out by a
separate unit, to a single processor architecture where all processing
functions are implemented by one unit. The architectures are simulated at a
detailed level to model the effects of pipelining and contention, and to
monitor the utilisation of the units in the processor. The results from
simulating a suite of benchmark programs indicate that a dual processor
architecture is the most effective in terms of performance and hardware
costs. The simulated execution times are compared to the ambitious
performance goals of Flagship, and the conclusion reached is that the target
performance can be met through the use of the emerging generation of
reduced instruction set microprocessor technologies.
%T Symbolic Execution as a Tool for Validation of Specifications
%A Ralf Kneuper
%R PhD thesis
%D July 1989
%X This thesis investigates symbolic execution, a method originally developed
for the verification and validation of programs, and extends it to deal with
(state-based) specification languages as well.
_
The main problems encountered are implicit specifications and the intended
genericity with respect to the specification language used. Implicit
specifications are handled by introducing `description values': identifiers
take as values predicates which describe their `actual value', the value one
would get by actual execution.
_
Language genericity is achieved by expressing the definition of symbolic
execution in terms of the semantics of the specification language used. Both
denotational and operational semantics of symbolic execution are discussed.
The denotational semantics of symbolic execution, expressed in terms of the
denotational semantics of the specification language, provide a correctness
notion for symbolic execution. A description of the operational semantics of
the language is used as a parameter to tailor symbolic execution to that
language.
_
Based on these ideas, a symbolic execution system called SYMBEX is
described and specified. SYMBEX is to form part of the project support
environment IPSE 2.5 and help the user to validate a specification by
symbolically executing it.
_
Finally, the achievements and limitations of this approach are discussed, and
suggestions made for future work.
%T Distributed Deadlock Detection: Algorithms and Proofs
%A Steve Hilditch
%A Tom Thomson
%D June 1989
%X We consider loosely-coupled multiprocessors on which memory is
distributed.
We discuss transaction schedulers which both serialise transactions and
minimise network messages.
_
Two distributed deadlock detection algorithms are given which are `truly
distributed' in the sense that they require no synchronisation procedures or
central control. These algorithms can be used in the more general
distributed case. Formal proofs of their correctness are given, together with
a discussion of their efficiency and a comparison with published algorithms.
_
We believe that the formal proof notation and method should be useful for
verifying other distributed algorithms where communication is by message
passing.
%T The Heterogeneous Multi-Ring Dataflow Machine Simulator
%A A.P.W. Bohm
%A Y.M. Teo
%D March 1989
%X Increased complexity in parallel and multiprocessor system architectures
has accentuated the need for tools that assist in understanding the behaviour
and evaluating the performance of these systems. This report describes a
simulation model of the proposed Heterogeneous Multi-Ring Dataflow
Machine (MDM), consisting of a number of processing element rings,
structure store rings and one global allocator ring. The structure of
the multi-ring dataflow machine simulator, MR, is presented. Distributed
storage management is introduced. A proposed dynamic hardware throttle
mechanism which matches program parallelism to machine parallelism is
discussed. A scheme that controls the execution of macro dataflow
instructions is proposed for integration into the throttle mechanism. This
report serves as the defining document for MR, which is one of the tools used
for programming and architectural experimentation with the proposed
multi-ring Manchester Dataflow Machine.
%T A Categorial Syntax for Coordinate Constructions
%A Mary McGee Wood
%R PhD thesis
%D February 1989
%X A generalised categorial grammar is proposed for a wide range of coordinate
constructions, principally but not exclusively English, which attempts
systematically to exploit the use of Lambek's `product' connective. A
historical outline of categorial grammars is followed by consideration of the
formal space of possibilities in terms of the set of operations permitted
within a particular grammar: the rules of the Lambek calculus and some later
proposals are individually discussed and their use in syntactic description
illustrated.
_
The accounts of coordination offered within a number of alternative
linguistic theories are discussed (Generalized Phrase Structure Grammar,
Lexical-Functional Grammar, Word Grammar), as are previous categorial
accounts (Geach, Steedman, Dowty, Oehrle, and others).
_
An original categorial syntax for coordinate constructions is then
proposed, based on the use of the Lambek `product', a connective
forming ordered pairs, which has been neglected or avoided in previous
accounts. The category assignment for a coordinating conjunction in
English is XI(X+*), an infix operator returning a single entity of any
category from a string of entities of that category. On this basis a
detailed grammar is offered for the coordination of canonical
``constituents'', of ``non-constituents'', and for gapping, where I
propose recovery by ``reconstitution'' of the category of the gap.
The use of products is shown to eliminate the need for the undesirably
powerful rules of generalized and disharmonic composition and of
unrestricted type-raising which characterize otherwise similar current
work. Multiple conjunctions, coordination of unlike categories, and
subordinating conjunction are also discussed.
_
In conclusion I discuss more generally the nature, problems, and prospects
of a categorial grammar incorporating products, and the significance of
formal ``equivalence''.
%T SMACK: The Smalltalk Actor Kernel
%A Trevor P. Hopkins
%D December 1988
%X There has been increasing interest in recent years in concurrent
object-oriented systems: systems in which at least some of the
objects can be `active' concurrently, and (typically) can continue
processing after a message send. The aim is to be able to express
algorithms and applications in a concurrent manner which can be
exploited by a parallel computer system.
_
This report considers `actors' systems, and describes an implementation of
an actor computational model. It further discusses various implementation
techniques for actor programming languages on realistic large-scale parallel
computers.
%T Review of Concurrent Object Systems
%A Trevor P. Hopkins
%D December 1988
%X There has been increasing interest in using the `object' paradigm to
exploit concurrent algorithms on parallel computers. This report reviews
some of the published work in this area. This report goes on to consider
in more detail the `actors' model for concurrent objects, and investigates the
problems of implementing such systems.
%T Synthesis of Logic Programs for Recursive Sorting Algorithms
%A K.K. Lau
%A S.D. Prestwich
%D October 1988
%X We describe the application of folding strategies described in an earlier
report to the synthesis of procedure sets for selected well-known recursive
sorting algorithms. Complete logic programs which define these algorithms
are presented.
%T The Use of Knowledge Based Techniques For Electronic Computer Aided Design
%A N. P. Filer
%R PhD thesis
%D August 1988
%X Electronic Computer Aided Design (ECAD) is described as a process where
the use of computers is required because of the complexity of the objects
being designed. It is known that computer methods used in ECAD problems
do not always produce acceptable results. In the past however, it was
frequently possible to complete a design manually, after a computer had done
all it could.
_
With the increasing use of Very Large Scale Integrated (VLSI) designs it is no
longer possible to rely on the designer to complete the design process.
Therefore, design techniques must be adopted which improve the behaviour
of computerised design tools.
_
The use of expert human knowledge, encoded as rules and used directly by a
computer whilst carrying out design tasks, is gaining popularity. Similar
systems have been used successfully in other domains, such as medicine.
_
A methodology for using expert knowledge alongside applicative design
processes and an appropriate support environment, PESL, have been
developed by the author. The rationale behind the design of PESL and the
main features of the PESL language and environment are described. The use
of PESL has been demonstrated by a set of prototype applications developed
both by the author and by colleagues. These applications are described and
critically evaluated.
_
The development of more example systems is suggested, in order to show the
methodology being used for a number of other ECAD tasks.
%T Recursion Transformations for Run-Time Control of Parallel Computations
%A V.J. Bush
%R PhD thesis
%D August 1988
%X There is much current research interest in parallel computer architectures
for general purpose, high-speed computation. Part of this research effort has
been devoted to the study of dataflow computer systems in which programs
are represented by directed graphs, with nodes representing operators and
arcs representing data dependencies between nodes. Several such computer
systems have been constructed and are now operational. These include the
prototype dataflow machine which has been running at the University of
Manchester for the past six years.
_
Experience of programming this machine has been gained, and guidelines for
algorithm design are becoming apparent. In particular, development of
suitable parallel algorithms is difficult for the programmer because the
conditions prevailing when the program runs may vary between systems, and
even during a single run. In considering the nature of an appropriate
development methodology for such programs it seems that the fitting of
algorithm to machine structure should be an automated process, rather than
the responsibility of the programmer. It has become clear that
transformation of algorithms will be of paramount importance in achieving an
automatic match with the machine configuration.
_
The aim of the research reported in this thesis is to build an algebraic
reasoning system for dataflow. A functional language is used as a high level
notation for dataflow graphs. This facilitates reasoning about graphs and, in
particular, it enables the development of transformations between different
graphs. An operational semantics for a (dynamic) dataflow model is
described and related to an algebraic semantics for the functional language.
A taxonomy of recursive forms of algorithms is presented and the
relationship between the form of an algorithm and its operational behaviour
is discussed. Transformations between different recursive forms are studied
and new transformations between sequential and parallel recursive forms,
which are applicable at run-time as well as at compile-time, are developed.
Examples of using these transformations are discussed.
%T On Computing Best Symmetric Matrix Approximations
%A K.K. Lau
%D July 1988
%X We consider the problem of computing best approximations in the linear
space of symmetric matrices, endowed with the spectral norm, by elements of
linear subspaces. We show how solutions to the dual problem can be used for
computing such best approximations in general, and identify some special
cases of the problem and discuss the details of computing the corresponding
best approximations by making use of the dual wherever appropriate.
%T The Joint Academic Network JANET
%A Manjula Patel
%D June 1988
%X The Joint Academic Network (JANET) came into being on the 1st April 1984.
It represents the rationalisation of previously diverse communications in the
research and academic communities.
This report examines the evolution of JANET, and the communications
protocols it uses; in particular X.25 and the Coloured Books, and their
relationship to the seven layered OSI Model.
Taking the University of Manchester Regional Computer Centre (UMRCC) to be
a typical Network Operations Centre, the internal workings of JANET are also
explored.
%T Semantics of Object-Oriented Languages
%A Mario I. Wolczko
%R PhD thesis
%D June 1988
%X Object-oriented programming is becoming an important technique in the
construction of large software systems. Compelling arguments, like reduced
maintenance costs, are advanced to encourage its use. To maximise the
advantages of such methods, object-oriented programming languages need to
be well-designed. When selecting the main features of a programming
language, or choosing between alternative designs, formal methods or
semantic analysis are invaluable. To date little attention has been given to
the formal description of object-oriented languages. This thesis introduces a
framework for describing the semantics of object-oriented languages.
_
To characterise the important features of object-oriented languages, an
idealised object-oriented language is described and its semantics specified
formally, using the denotational style of VDM. Design alternatives are
explored in the same way. Several general principles of object-oriented
language design are introduced, and the alternatives reviewed in the light of
these principles. By choosing apposite semantic domains, the fundamental
concepts of object-oriented systems are exposed: two message-passing
schemes, based on dynamically-bound procedure call and delegation, are
presented; class-based and prototype-based systems are described; and special
emphasis is given to the different approaches to class-based multiple
inheritance. The encapsulation of behaviour within classes is discussed, and
suggestions are made as to how this might be best achieved. A variety of
object-oriented control mechanisms are also surveyed.
%T Temporal Term Rewriting
%A William P.R. Mitchell
%D April 1988
%X In this paper we explore an extension of term rewriting, temporal term
rewriting, which uses temporal logic formulae to restrain the order in which
rewrite rules occur in a reduction sequence. We extend the usual ideas (local
confluence etc.) to temporal rewrite systems and prove that `forgetful'
terminating temporal systems are confluent if they are locally confluent.
Then we prove that if a system is `derived', it is decidable whether it is
locally confluent.
%T Synthesis of Recursive Logic Procedures by Top-down Folding
%A K.K. Lau
%A S.D. Prestwich
%D February 1988
%X In a logic programming framework, an algorithm can be represented by a
logic program; in particular, a recursive algorithm will correspond to a
program containing recursive procedures. In this report, we describe some
strategies for synthesising recursive logic procedures from their logic
specifications, which use the standard technique of folding for recursion
introduction in a top-down manner.
%T Teaching Notes for Systematic Software Development Using VDM
%A Cliff B. Jones
%D December 1987
%X This is the second version of the ``Teaching Notes'' for the 1986
Prentice-Hall International publication: ``Systematic Software
Development using VDM''. It is still the case that not all of the
planned material has been included (but the notes are somewhat more
complete than in UMCS-86-4-2). This version is also laid out with
LaTeX thus making it easier to relate to the original book. The
author would be grateful for feedback both on errors and proposals for
extensions.
%T An Experimental User Interface for a Theorem Proving Assistant
%A Cliff B. Jones
%A R. Moore
%D December 1987
%X A Theorem Proving Assistant is taken to be a computer program which can be
used interactively to keep track of, and automate some steps of, attempts to
prove theorems. This paper describes an experiment in the design of user
interfaces to such programs. The approach is introduced using a formal
description from which the program has been implemented in Smalltalk-80.
%T Logical Frames for Interactive Theorem Proving
%A Peter A. Lindsay
%D December 1987
%X This report addresses theoretical issues to do with providing
mechanical support for formal reasoning independent of particular
logics. More precisely, the aim is to provide support for a class of
logics, identified for their utility in Formal Methods of software
engineering. The main problem is to formalize -- and indicate what
support is necessary for -- the logical frame, by defining the notions
``well-formed formula'', ``inference rule'', ``proof'', ``theory'',
``logic'' and so on. The emphasis is on systems suitable for
mechanization as interactive computer programs. We survey various
approaches to the problem, including different type theories which
might serve as logical frames. Our proposal for the FRIPSE logical
frame is presented in the working paper.
%T Symbolic Execution of Specifications: User Interface and Scenarios
%A Ralf Kneuper
%D December 1987
%X This report discusses the basic design of the user interface for a
system that symbolically executes specifications in order to check them
against informal requirements, as a means of increasing confidence in the
correctness of the specification. Leading up to that, we give some
scenarios which show how symbolic execution can be used for this purpose.
This helps to identify the requirements which we have to put on such a system.
_
The symbolic execution system we describe is intended to be part of the IPSE
2.5; it is to be used as a tool to validate a (formal) requirements specification, and
thus to support the first step in formal software development. It should
help the user to analyse and understand a specification before it is
implemented.
%T A semantics driven temporal verification system
%A Graham Gough
%A Howard Barringer
%D December 1987
%X We present an overview of SMG, a generic state machine generator, which
interfaces to various temporal logic model checkers and provides a practical
generic temporal verification system. SMG transforms programs written in
user-definable languages to suitable finite state models, thus enabling fast
verification of temporal properties of the input program. It can be applied,
in particular, to the verification of temporal properties of concurrent and
reactive systems.
%T Temporal Logics for Abstract Semantics
%A Michael D. Fisher
%R PhD thesis
%D December 1987
%X The aim of this thesis is to study the use of temporal logics in the
description of abstract semantics for simple programming languages.
Following a discussion of the evolution of temporal logics, a coherent
framework is described for temporal logics in general. The concept of
abstractness and its relationship to compositionality is formalised before
considering the use of several different forms of temporal logic in the
description of abstract temporal semantics. Though some temporal logics do
not easily support the production of such semantics, methods will be
described which attempt to enhance these logics sufficiently to enable
abstract semantics to be given. It is also shown that semantics given using
simple discrete temporal logics do not capture the particular form of
abstractness studied here.
_
This study resulted in the formulation of a new class of temporal logics,
which, though being a natural extension of other basic temporal logics, also
provides a radically different view of the modelling of temporal events.
%T Monitoring Tools for Parallel Systems
%A Heidi W.H. Tang
%R MSc thesis
%D December 1987
%X Monitoring is essential to assess the performance of a computer system.
Parallel machines are fundamentally different from conventional machines,
and may contain a large number of concurrent activities. Due to their
inherently complex nature, a significant amount of experimentation is
necessary before they can be built in a cost-effective and viable manner.
Therefore new tools and techniques are needed to evaluate their performance,
and to aid in their development and analysis.
_
This thesis describes monitoring computer systems in general, and presents
three experimental parallel machine projects currently under research in
the University of Manchester: the ParSiFal, Flagship and Dataflow projects.
_
Two monitoring tools have been designed and implemented in the context of
the Manchester Dataflow system, both tools present monitoring data as a
graphical, animated display. A program level tool displays a dataflow
program executing on a parallel machine. Although the tools have only been
used on the Manchester Dataflow system, they have been designed to be as
source-machine independent as possible, and they should have wider
applicability.
%T Performance Evaluation of a Heterogeneous Multi-Ring Dataflow Machine
%A Y.M. Teo
%R MSc thesis
%D December 1987
%X Advances in semiconductor device technology have made possible the
construction of parallel computers, using multiple cheap and powerful
processors. Most parallel computers achieve higher performance while
retaining the von Neumann model of computation plus associated
programming techniques and computer architectures.
_
Further increases in processor speed, using more exotic device technologies,
are beginning to be limited by physical constraints, such as the finite speed
of light. The so-called ``von Neumann bottleneck'' also makes exploitation of
inherent program parallelism difficult. To overcome this, several new
models of computation and associated machine architectures have been
proposed. In particular, the dataflow model of computation, which has no
centralised control, has emerged. In this model, instructions are executed
when their data is available, and the order of execution is determined solely
by the data dependencies among instructions.
_
A heterogeneous dataflow machine, consisting of a processing element ring
and two structure store rings, has been built at the University of
Manchester.
_
This thesis describes a performance evaluation study of the proposed
Heterogeneous Multi-Ring Manchester Dataflow Machine, consisting of
multiple processing element rings, multiple structure store rings and a
global allocator ring.
_
Firstly, a bottleneck analysis identifies the module constituting the most
critical bottleneck in the processing element ring. Architectural changes to
remove the bottleneck are proposed. Secondly, a message-based scheme,
incorporating a mechanism for measuring system activity, is proposed for
implementing a dynamic, process-based, hardware throttle. The scheme
reduces memory requirements in the PE rings by matching program
parallelism with available machine parallelism dynamically at run-time. In
conclusion, an assessment of the project is made, and some issues that have
not been addressed and which require further investigation are discussed.
%T Decidability in Temporal Presburger Arithmetic
%A Wolodymyr Hussak
%R MSc thesis
%D November 1987
%X Decision methods for propositional temporal logics abound. The same cannot
be said for first order temporal logics, because most are inherently
undecidable. Various reasons account for this, not least of which is the
undecidability of the underlying theory to which the temporal operators are
added. For this reason, if decision methods are sought, it might appear wise
to choose a decidable first order theory at the outset, and only then append
temporal operators to the language. However, this still does not guarantee the
existence of a decision method, and it may be necessary to subset the full
theory.
_
This dissertation contains the identification of a non-trivial subtheory of a
temporal theory of Presburger Arithmetic, possessing a decision method. The
decision method offered comprises two procedures, the second of which is to
be executed if the first proves to be inconclusive. The first is a
satisfiability procedure based on the algorithm for (non-temporal)
Presburger Arithmetic and the second is a tree construction which performs
the task required of it, as a result of König's Lemma. The subtheory
itself is quite extensive, and an example is given of how additional
formulae, with a greater complexity of temporal description, can be encoded
into it.
%T An Approach to General Silicon Compilation
%A Martyn A. Spink
%R PhD thesis
%D November 1987
%X In the present climate, technological advances continue to provide the
designers of Computer Aided Computer Design (CACD) systems with a
number of problems. More specifically, a CACD system developed to tackle a
specific problem is soon outdated by an advance in technology. Such
problems particularly apply to systems developed to aid the design of silicon
chips, where technological progress is currently especially rapid.
Consequently, there is a need to develop a CACD system capable of being
adapted to meet the needs of new technological developments. Within such a
CACD system, there is a requirement for a silicon compiler general enough to
handle not only current design technologies, but also those developed in the
future.
_
The research reported in this thesis has involved an investigation into the
requirements for a silicon compiler general enough to handle current and
future technological problems. Five criteria, which were used as the basis
for the research, are initially defined. These criteria are used as the basis
for a discussion of the approaches adopted by other systems to tackle the
problems associated with each criterion. Subsequently, the problems that
require solution are identified and discussed. In order to solve some of
these problems, the developed silicon compiler requires the use of a
specifically developed environment. The use of this environment to tackle
some of the problems is described, in order to provide an insight into the
relationship between the compiler and the environment. Finally, the
implementation aspects of two systems are discussed. These systems are
used to generate the necessary information required to enable compilation to
be performed.
%T Manchester Dataflow Machine: Preliminary Benchmark Test Evaluation
%A John Foley
%D November 1987
%X The Manchester Dataflow Hardware is supported by a Software compiler for
the SISAL language and a number of programs have been written to act as
Benchmark tests for the hardware. The Benchmark set used contains a
wide range of programs including numerical algorithms, sorting, graph
colouring and nqueens algorithms plus others. All programs are compiled
using a range of optimisations, including function inlining and
vectorisation. The resulting statistics, obtained both by simulation and from
the hardware, are presented.
%T Formal Specification and Verification of Hardware: A Comparative Case Study
%A Victoria Stavridou
%A Howard Barringer
%A Doug Edwards
%D November 1987
%X Formal methods are applied with increased frequency in the specification
and verification of digital systems as an alternative to traditional methods
of establishing correctness, such as simulation and testing. This article
briefly outlines the goals and the philosophy of the HArdware VErification
project at the Department of Computer Science within the University of
Manchester. Our objective here is to report on the results of a first
controlled experiment comparing formalisms and systems that are currently
used for formally specifying and verifying both hardware and software
systems. Our strategy consists of working with incrementally ``harder'' test
cases, which are used to investigate the characteristics and thus the pros and
cons of each formalism. The example used is a purely combinational device.
%T Review of Existing Theorem Provers
%A Peter A. Lindsay
%A Richard C. Moore
%A Brian Ritchie
%D August 1987
%X The report is the result of an extensive study of mechanical support
for Formal Reasoning, carried out as part of the IPSE 2.5 project.
(It is thus primarily concerned with possible software engineering
applications.) An introduction outlines some of the broader issues of
FR support: choice of an appropriate logic; grouping of results into
``theories''; automation of (certain) deductions; user interaction;
heuristics and tactics; proof management in interactive systems. The
systems are described individually -- including their intended domain
of support, their overall structure and important features -- with
emphasis on how (if at all) they tackle the above issues. Reviewers'
impressions are given, together with pointers to relevant literature.
The main systems reviewed are LCF, NuPRL, Veritas, Isabelle, IPE,
Affirm, Boyer-Moore and Gypsy; there are also brief reviews of other
systems. (Resolution theorem provers are not considered.)
%T Throttle Mechanisms for the Manchester Dataflow Machine
%A Carlos A. Ruggiero
%R PhD thesis
%D August 1987
%X Dataflow architectures have been studied for several years as an
alternative to the von Neumann model of computation. They have proved to
be effective in exposing and exploiting different forms of parallelism in
programs. However, it has been noted that the parallelism exposed for some
programs is much more than the parallelism that can be exploited by
specific hardware configurations. This leads to both excessive and
unnecessary use of resources (particularly storage).
_
In this thesis, different software methods for reducing store usage in the
Manchester Dataflow Machine by controlling parallelism are presented.
Assessment of these methods is achieved by means of simulation. A process-
based method is found to be the most suitable and its advantages and
problems are discussed in some detail. Among the problems, the necessity of
detecting termination of processes is crucial.
_
Three methods for detecting process termination are analysed and an
instruction reference counting method is described in detail. The way this
method detects termination for various programming constructs is explained,
and the overhead introduced by the method is assessed.
_
The thesis concludes that there are no fundamental reasons for store usage in
dataflow machines to be orders of magnitude higher than in von Neumann
machines.
%T Specification and Control of Execution of Nondeterministic Dataflow Programs
%A Pedro M. C. C. Barahona
%R PhD thesis
%D June 1987
%X Dataflow machines are general purpose parallel machines that can exploit
various sources of parallelism to speed up program execution.
Nondeterministic problem-solving applications can be speeded up if several
paths of their state space are searched in parallel. However, the breadth-
first search, favoured by the pure dataflow datadriven control of execution,
leads to an ``explosion'' of parallelism with correspondingly high demand on
machine resources. Non-trivial problems require a best-first search, guided
by a heuristic through the most promising states.
_
This thesis proposes some extensions to the existing Manchester Dataflow
System in order to combine heuristics and exploitation of parallelism for
efficient execution of nondeterministic applications, and investigates the
suitability of such an extended system to support logic programming.
_
A nondeterministic set data type and a priority construct, introduced in the
dataflow language SISAL, enable a flexible control of execution. The set data
type prevents ``spurious'' data dependencies from being introduced in the
logical component of the program specification. The priority construct
allows the problem-dependent heuristic to be specified separately. The
heuristic is taken into account by a process-based control of execution,
superimposed on the purely data-driven control.
_
Simulation of the extended dataflow system has produced encouraging
results, which are analysed in some detail. Finally, the main conclusions of
the project are summarised together with related topics requiring further
investigation.
%T Behavioural Implementation Concepts for Nondeterministic Data Types
%A Tobias Nipkow
%R PhD thesis
%D May 1987
%X This thesis investigates model-theoretic characterizations of behavioural
implementation concepts for data types with nondeterministic operations.
The main idea of what constitutes a correct implementation is reduced from
the level of data types to the level of programs. This provides the basic
correctness notion.
_
The main part of the thesis is devoted to the study of model-theoretic
relationships between data types which are necessary and sufficient
conditions for correct implementations in the above sense. These model-
theoretic characterizations are the basis for correctness proofs of data type
implementations.
_
The two implementation notions studied are loose and robust correctness.
The corresponding model-theoretic relationships between data types are
simulations and weak simulations. Their soundness and completeness is
investigated with respect to two simple stream-based applicative
programming languages of different expressiveness.
%T How to Choose the Weights in the Knuth Bendix Ordering
%A Ursula Martin
%D May 1987
%X Knuth and Bendix proposed a very versatile technique for ordering terms,
based upon assigning weights to operators and then to terms by adding up the
weights of the operators they contain. Our purpose in this paper is twofold.
First we give some examples to indicate the flexibility of the method. Then
we give a simple and practical algorithm, based on the simplex algorithm, for
determining whether or not a set of rules can be ordered by a Knuth Bendix
ordering.
%T Some Aspects of Induction
%A William P. R. Mitchell
%D May 1987
%X Let *F* be a set of function symbols and *R* be a rewrite
system over *t(F U n)*, the universal algebra constructed from
*F*
and a countably infinite set of variables *n*. How can we decide
whether an equation *M = N* is an inductive theorem of *R* or
not? Techniques investigated here come under the general notion of
`inductionless induction'. These methods work by completing
*R U {M = N}* in such a way that whilst completing they
simultaneously check to see if *M = N* is an
inductive theorem of *R*.
%T Centrenet Terminal Multiplexer
%A Evan J. D. Price
%A Trevor P. Hopkins
%D April 1987
%X This report contains a description of the Centrenet Terminal Multiplexer.
This is a microprocessor controlled device that enables terminal devices to
be attached to a Centrenet Local Area Network. Currently, a variety of
terminals and computers are used in this way, enabling the computer lines to
be shared among the terminals. The design of the hardware and a description
of the software which controls the Multiplexer is presented.
%T A Hardware Simulator for a Multi-Ring Dataflow Machine
%A John F. Foley
%R PhD thesis
%D February 1987
%X The dataflow machine at Manchester has demonstrated a near-linear speedup in
program execution when multiple processors are used, provided only that
sufficient parallelism is available within the problem. However, the machine
has a limited memory bandwidth and this places a ceiling on computation
rates; the matching and node stores are particular bottlenecks. One solution
is a multi-ring machine consisting of a number of dataflow machines (each
with several processors) connected by a switching network and operating in
parallel. In order to test such a system, a hardware simulator has been built.
This thesis gives details of the machine and the results of initial tests.
%T The Parallel Reduction of Lambda Calculus Expressions
%A Paul Watson
%R PhD thesis
%D February 1987
%X Models of computation for the evaluation of Functional Programs are
based on the rules for reducing Lambda Calculus expressions. Reduction is
achieved by the application of the beta-conversion rule to suitable redexes,
but few models of computation provide a full implementation of this rule
because of the complexity of avoiding variable name clashes. Consequently,
evaluation orders are restricted to those in which name clashes cannot occur.
_
This thesis develops a model of computation for the parallel reduction of
Lambda Calculus expressions, represented in De Bruijn's name-free notation,
which does provide a full implementation of beta-conversion, allowing
expressions to be reduced by any evaluation order. The model is designed to
allow reduction to be performed on a parallel machine comprised of a set of
processor/store pairs connected by a communications network. A data-
driven, graph reduction execution mechanism is used to exploit the parallel
hardware efficiently.
_
A language for specifying graph reduction models of computation is
proposed, and is used to give full specification of the Lambda Calculus
reduction model. Specifications in the language can be compiled to the
instruction set of a virtual machine. The code produced can then be executed
by a virtual machine emulator, or could be recompiled to the order code of a
physical processor to allow the high performance simulation of models of
computation. The virtual machine is used as the foundation for the design of
a physical machine which would support the parallel reduction of lambda
calculus expressions.
_
One of the major issues in the design of graph reduction models of
computation is the removal of redundant parts of the expression graph.
Neither of the two standard Garbage Collection schemes, Reference Count and
Mark-Scan, is entirely suitable for a distributed machine, mainly because of
the synchronisation they require. A modified Reference Count Garbage Collection
scheme is described which removes the need for synchronisation, and
enhances some of the attractive properties of Reference Count Garbage
Collection.
%T A Formal Description Method for User Interfaces
%A Lynn S. Marshall
%R PhD thesis
%D January 1987
%X Formal description methods and user interface design are both important
research topics in computer science. This dissertation examines the key
issues within these two fields. A formal description method suitable for
describing user interfaces is sought. Flow of control is of major importance in
an interactive system. A method suitable for the description of the interface
of such a system must clearly illustrate this temporal ordering; it should
separate the system into levels of varying detail, and be presented in a
modular fashion.
The description method should also be concise and understandable while
remaining both abstract and mathematically sound. With these criteria in
mind, existing formal description techniques are examined. No single
existing method is found to be entirely suitable. The finite-state machine, or
transition-state diagram (TSD), approach is best for illustrating the flow of
control, while a set-theoretic approach, such as the Vienna Development
Method (VDM), meets the remaining criteria. Thus a hybrid method is
proposed for formally describing interactive systems with emphasis on the
user interface. This method uses statecharts, an extended form of TSD, to
show the flow of control between operations of the system, along with the
pre- and post- conditions of VDM to describe the operations themselves.
Examples using the suggested method are presented along with formal
semantics and sample proofs.
%T An Investigation of the Argonne Model of Or-Parallel Prolog
%A Kish Shen
%R MSc thesis
%D January 1987
%X The Argonne Or-Parallel Prolog is an all-solution and annotated form of or-
parallel Prolog. It may be regarded as a simple extension of sequential Prolog
into a parallel environment. Or-parallelism is only exploited where it is
annotated to occur, and so there are none of the synchronisation and deadlock problems
associated with many other approaches to parallelism.
_
The aim of this project is to study the Argonne model to see if there exists
sufficient or-parallelism in normal Prolog programs, and also where the or-
parallel annotations should be placed in a program. A simulator has been
written in Prolog for this purpose. The simulator allows programs to be run
with different numbers of processors, and records the course of execution and
the number and types of variable references that occur. The distribution of
reference types directly affects the performance of the model and so is an
important statistic.
_
This report gives an abstract description of the simulator and presents and
discusses some results. It also gives an overview of the current state of
research on parallel logic programming languages and machines.
%T Load Balancing, Locality and Parallelism Control in Fine-Grain Parallel Machines
%A John Sargeant
%D November 1986
%X A number of important operational issues have been identified by
researchers working on many different machine architectures. Load
balancing is necessary to distribute work among the processors. Locality is
desirable to limit the communication bandwidth which a parallel machine
requires. In addition it is necessary to control parallelism to limit
resource usage. This document surveys various approaches which have been
taken in these areas and proposes solutions in the context of the Flagship
rewrite-rule machine project.
%T Design Rule Representation within a Hardware Design System
%A Julio S. Aude
%D November 1986
%X This thesis examines the problem of achieving technology adaptability in CAGD
tools.
_
Design rules currently supported in the data base are concerned with
different aspects of design such as timing, loading, signal transmission,
layout and power supply distribution and decoupling. These rules are
described using a special purpose language, DRDL. Both design and the
implementation of DRDL are discussed in this thesis.
_
In the design rule data base, technologies are organized hierarchically
and, in principle, the design rules assigned to a particular technology
may be inherited by all its descendants. Selective access to rules by
application programs is achieved using attributes to qualify rules and hence
control their focus of attention. The internal organization of the design
rule data base and the facilities provided to access and update design rule
information by the data base management system are also described in this
thesis.
%T Unification in Boolean Rings and Unquantified Formulae of the First Order Predicate Calculus
%A Ursula Martin
%D November 1986
%X Unification, or the solution of equations, in particular algebraic
theories has attracted a great deal of attention among computer scientists
in recent years, as it is a basic inference mechanism in algebraic
manipulation of formulae, automated reasoning and some programming languages.
This paper is concerned with unifying Boolean terms, such as those arising
in set theory, and the propositional and predicate calculus.
%T Program Logics -- A Short Survey
%A Michael Fisher
%A Howard Barringer
%D November 1986
%X During the past decade, logics of programs have come to the forefront of
computing science. In particular, various programming logics have been
shown most useful tools for use in specification, development and
verification of complex systems. Although there exist several excellent
survey papers covering certain subclasses of program logics, we felt there
was a requirement for a broader-based survey that detailed, concisely, the
differences between the most common program logics. This report is our
first attempt at such a survey.
%T Program Specification and Verification in VDM
%A Cliff B. Jones
%D October 1986
%X Formal methods employ mathematical notation in writing specifications and
use mathematical reasoning in justifying designs with respect to such
specifications. One avenue of formal methods research is known as the
Vienna Development Method. VDM has been used on programming language
and non-language applications. In this paper, programming languages and
their compilers are ignored; the focus is on the specification and
verification of programs.
_
VDM emphasizes the model-oriented approach to the specification of data.
The reification of abstract objects to representations gives rise to proof
obligations; one such set which has wide applicability assumes an increase
in implementation bias during the design process. The incompleteness of
this approach and an alternative set of rules are discussed.
_
The rules originally chosen to show the input/output relation by post-conditions
which support decomposition were poorly worked out; those presented below are as
convenient to use as the original ``Hoare-logic''. Recent work on a logic
(which is tailored to partial functions) is also described.
%T A Study of an Extended Temporal Logic and a Temporal Fixed Point Calculus
%A Behnam Banieqbal
%A Howard Barringer
%D October 1986
%X Propositional temporal logic has previously been extended with right linear
grammars to form a family of languages, known as ETL (extended temporal
logic), expressively equivalent to omega-regular languages. In this report we
first show that a set of axioms and inference rules thought to be complete for
ETL is in fact incomplete and then provide a complete axiomatisation.
Secondly, we introduce an alternative and more natural fixed point extension
to propositional temporal logic, for which we provide a decision procedure
and establish equivalence with ETL.
%T Using Temporal Logic in the Compositional Specification of Concurrent Systems
%A Howard Barringer
%D October 1986
%X This article considers the application of temporal logics in the formal
specification and development of complex computing systems. In particular,
the relevance of compositional proof theories, modularity and abstractness
are motivated. A basic technique for obtaining compositional program proof
theories, using a temporal language as the assertion language, is
demonstrated. Several specialisations for various real parallel programming
language frameworks are indicated. Finally, a problem in obtaining abstract
semantic descriptions of programming languages in the temporal framework
is discussed, together with one particular solution suggesting the use of a
logic based on a linear dense time model.
%T FFT on a New Parallel Vector Processor
%A Kung-Kiu Lau
%A Xiang-Zhen Qiao
%D August 1986
%X A new parallel processing system has been proposed, and a prototype model
of the system has been constructed. It is designed to perform parallel
vector operations at maximum efficiency. In addition, it can also handle
communicating vector operations, and hence exploit irregular parallelism
present in many apparently sequential algorithms. The system is therefore
suitable for a wide range of algorithms with varying degrees of
parallelism. In this paper, we give a brief description of the system,
and discuss the implementation of the Cooley-Tukey FFT on this system. We
show that the system's versatility allows it to achieve a near maximum
degree of parallelism for this algorithm in the asymptotic case.
%T A Logic for Partial Functions
%A Jen Huan Cheng
%R PhD thesis
%D July 1986
%X In the formal specification and verification of computer software, a logic
plays a vital role. The classical logic is inadequate because it only handles
total functions, whilst partial functions are indispensable in programming.
_
Criteria for a specification logic that covers partial functions are
discussed, and against these criteria existing methods and related logics are
surveyed.
_
A first-order logic for partial functions (LPF) is formalized. This is
achieved in several stages; propositional calculus, predicate calculus and
predicate calculus with equality. In each stage, various meta-theorems such
as the consistency and the completeness are established. Different methods
are adopted for different stages: Hilbert-style for the propositional calculus,
the sequent calculus for the predicate calculus and an algorithm of
congruence closure for equality axioms. Other established properties are:
the expressive completeness of the propositional connectives and the cut-
elimination theorem. An extension of LPF to the second-order is proposed as
well.
_
To make LPF easily usable by humans, a different proof style, namely natural
deduction, is considered, and its relation to the sequent calculus is
set up. In order to achieve a practical comparison with other logics and an
automated LPF supporting formal methods of software development, a proof
checker that allows users to define their own logics is proposed and
specified.
%T Multiset Orderings
%A Ursula Martin
%D May 1986
%X A multiset or bag on a set S is an unordered sequence of elements of S.
This paper is concerned with the construction of orderings on M(S), the set
of multisets
on S. Such orderings have been studied for many years in invariant theory
and the theory of partitions, and also more recently have proved
particularly useful in proofs of program termination and in constructing
orderings which prove the termination of term rewriting systems.
_
In this paper we shall describe a geometrical approach to the study of
orderings on multisets. We shall show that any ordering on M(S) satisfying
certain natural restrictions (a "tame" ordering) arises from a particular
subset of R^n called a cone, and conversely any cone gives rise to a tame
ordering on M(S). To demonstrate the power of this geometric approach we
shall classify all tame orderings on R^n, and describe an interesting new
multiset ordering which inherits a partial order on S. We shall show how
the dominance ordering, which has been much studied when S is totally
ordered, can be extended to the case when S is only partially ordered, and
describe how the standard multiset ordering can be interpreted
geometrically. Finally we shall show that any tame total ordering on R^n is
related to the lexicographic order on R^n by the action of an invertible
matrix.
%T Approaches to Raster Graphics and Image Manipulation
%A Mark van Harmelen
%R PhD thesis
%D April 1986
%X Several approaches to raster graphics and image manipulation are explored
in this thesis, with the end aim of creating an expandable and flexible
generic architecture with different capabilities and performances.
The task is approached using two routes which are: (i) An examination of
the nature of innovation in computer architecture, with particular
emphasis on factors which enhance the performance and capabilities of the
systems. In the computer graphics and image manipulation domain this
indicates that attention should be paid to parallelism, and to VLSI and
videodisc technology. (ii) An examination of abstract process-data models
for computer graphics and image manipulation, to see how parallelism can be
exploited in that area.
_
A generic architecture is then produced for a satellite computer graphics
and image manipulation system. The generic architecture may be
instantiated in different ways, producing machine architectures with a
variety of different capabilities and performances. The generic
architecture contains a bus structure that enables the exploitation of
non-deterministic parallelism for graphics tasks, and separates out the
high level control information from the low-level pixel data. Thus multiple
processors may be used for performance enhancements in certain tasks, and
different kinds of processors may be incorporated into a particular
instantiation to create machines with different capabilities.
_
In particular, the architectural framework uses a Rotating Bus as the
primary means of communication between multiple processors. A dedicated
subsidiary bus is used for transfer of pixel information between pixel
manipulation processors, frame buffers and video disc interfaces.
_
The thesis contains sufficient low-level hardware detail to form an
informal proof as to the feasibility of the approach outlined above,
concentrating on the more critical areas of the hardware: bus and frame
buffer design.
%T Stored Data Structures on the Manchester Dataflow Machine
%A John Sargeant
%A Chris C. Kirkham
%D April 1986
%X Experience with the Manchester Dataflow Machine has highlighted the
importance of efficient handling of stored data structures in a practical
parallel machine. It has proved necessary to add a special-purpose structure
store to the machine, and this paper describes the role of this structure
store and the software which uses it. Some key issues in data structure
handling for parallel machines are discussed.
%T An Implementation of the GKS-3D/PHIGS Viewing Pipeline
%A Karen Singleton
%D April 1986
%X Following the establishment of GKS as the first international standard for
computer graphics, two new standards, GKS-3D and PHIGS, are emerging
which cater for three dimensional graphics. In order to display a 3-D object,
a description of how it is to be viewed must be given. GKS-3D and PHIGS
provide a viewing pipeline to generate the parallel or perspective view
required. This paper describes in detail the functionality and an
implementation of the GKS-3D/PHIGS viewing pipeline.
%T Centrenet Network Monitor
%A Trevor P. Hopkins
%D March 1986
%X This report contains a description of the Centrenet Network Monitor
system. This device monitors traffic on a selected node in the network,
and permits the user to display some (or all) of the packets routed through
that node in a variety of formats. The system is transparent to network
users, and requires no modification to hardware or software. The Network
Monitor is primarily intended as a test and debugging tool for use with
protocol development.
%T Processor Allocation in a Multi-ring Dataflow Machine
%A Pedro Barahona
%A John R. Gurd
%D October 1985
%X Partitioning a program into processes and allocating these processes to
different processors determines the efficiency of a multiprocessor
architecture. In the dataflow model, where each process consists of a
single instruction (e.g. an arithmetic or logic operation), an automatic
decomposition of a program into fine-grain processes can be achieved.
A prototype dataflow machine is in operation at the University of
Manchester. The machine comprises a single processing element (PE), which
contains several units connected together in a pipelined ring. A
Multi-ring Dataflow Machine (MDM), containing several such processing
elements connected together via a switching network, is currently under
investigation.
_
This paper describes a method of allocating the dataflow instructions to
the processing elements of the Multi-ring Dataflow Machine and examines the
influence of the method on the selection of a switching network. Results
obtained from simulation of the machine are presented, and it is shown that
programs are executed efficiently when their parallelism matches the
parallelism of the machine hardware.
%T Efficient Dataflow Code Generation for SISAL
%A Anton P. W. Bohm
%A John Sargeant
%D October 1985
%X The efficiency of dataflow code generated from a high level language can be
improved dramatically by both conventional and dataflow-specific
optimisations. Such techniques are used in implementing the functional
language SISAL on the Manchester Dataflow Machine.
%T Non-deterministic Data Types: Models and Implementations
%A Tobias Nipkow
%D October 1985
%X The model theoretic basis for (abstract) data types is generalized from
algebras to multi-algebras in order to cope with non-deterministic
operations. A programming oriented definition and a model theoretic
criterion (called simulation) for implementation of data types are given.
To justify the criterion with respect to the definition, an abstract
framework linking denotational semantics of programming languages and model
theory of data types is set up. A set of constraints on a programming
language semantics are derived which guarantee that simulation implies
implementation. It is argued that any language supporting data
abstraction does fulfill these constraints. As an example a simple but
expressive language L is defined and it is formally proved that L does
conform to these restrictions.
%T Up and down the Temporal Way
%A Howard Barringer
%D September 1985
%X A formal specification of a multiple lift system is constructed. The
example illustrates and justifies one of many possible system specification
styles based on temporal techniques.
%T Image Transfer by Packet-switched Network
%A Trevor P. Hopkins
%D September 1985
%X The advantages and disadvantages of using packet-switching technology for
the transfer of image information in real time are considered. An
experimental implementation of parts of a system based on a high-speed
Local Area Network is described; these include a simple screen output
device and a real-time camera input device. The generation of images using
a number of microprocessors is also described. A number of applications
for such a system are investigated and the extension of this approach to
implement an Integrated Information Presentation system is considered.
%T Extending Pascal with One-entry/Multi-exit Procedures
%A Ian D. Cottam
%D September 1985
%X To make the verification of Pascal programs more manageable, their so-called
standard and exceptional behaviour is separated syntactically with a simple
exception handling mechanism (due to Flaviu Cristian). The programming
language Pascal (BS 6192 / ISO 7185 level 1) is augmented with the guarded
statement sequence that may signal a (parameterised) exception, and
exception handlers.
_
Two different implementations have been undertaken: a Pascal compiler has
been modified to accept and compile the extensions, and a stand-alone
preprocessor from the extended language to standard Pascal has been
developed. The merits, and otherwise, of both strategies are reported.
The constructs are presented informally; a companion paper will give the
formal semantics.
%T Efficient Stored Data Structures for Dataflow Computing
%A John Sargeant
%R PhD thesis
%D August 1985
%X Dataflow computers can exploit many different forms of parallelism in
programs. Dataflow languages are high-level functional languages which
allow many sorts of parallel computation to be expressed naturally.
Dataflow machines executing programs written in such languages potentially
offer very high computing speeds for a wide range of applications. One of
the main problems in dataflow computing to date has been the handling of
data structures. The concept of stored data structures must be integrated
into the dataflow model of computation at reasonable cost.
_
This thesis describes how this can be done, particularly for the Manchester
Dataflow Machine. It is necessary to tackle the problems on several
levels, including the underlying hardware. An extension to the
architecture, a structure store, is described. The code generated for the
dataflow language SISAL is discussed in detail. Particular attention is
paid to the issue of garbage collection. A number of optimisations which
can dramatically improve the quality of dataflow code are described.
_
As a result of this work, it is now possible to generate from SISAL
dataflow code which is efficient in terms of the total number of
instructions executed, while retaining the ability to exploit large amounts
of parallelism. Simulation results which demonstrate this are presented.
%T The Unification of Terms: A Category-Theoretic Algorithm
%A D. E. Rydeheard
%A R. M. Burstall
%D August 1985
%X As an illustration of the role of abstract mathematics in program design,
an algorithm for the unification of terms is derived from constructions of
colimits in category theory.
%T Performance Evaluation of a Multi-ring Dataflow Machine
%A Pedro Barahona
%D October 1984
%X The integration attained by VLSI technolgoy has made it possible to build
parallel computers composed of several cheap and powerful processors.
Early parallel architectures offered performance improvements whilst
retaining the underlying von Neumann model of computation.
The von Neumann model makes the exploitation of inherent parallelism of
programs problematical. To overcome these problems, other models of
computation, and machines to implement them have been proposed. In
particular, the dataflow model has no centralised control and no spurious
sequencing of instructions. Instructions are executed as soon as their
input data is available.
_
A prototype dataflow machine has been built at the University of
Manchester, composed of several units pipelined together in a ring. This
thesis is a preliminary attempt to assess the performance of a Multi-ring
Dataflow Machine built with several processing elements, each similar to
the implemented prototype ring.
_
The Switch required by such a Multi-ring Dataflow Machine and the
allocation of the dataflow instructions to the processing elements are
discussed in detail. Simulation has been used to predict the machine
performance. Minor changes to the ring architecture are proposed in order
to avoid inefficiencies that were discovered. The main conclusions of the
project are presented together with topics requiring further work.
%T A Database Programming Language: Definition, Implementation and Correctness Proofs
%A Ann Welsh
%D October 1984
%X A high level binary relational database programming language is designed
using formal specification. The restrictions imposed by the underlying
data model are minimised, thus removing the distinction between implicit
and explicit database constraints and increasing the freedom of the user.
A new method is devised for handling semantic integrity by means of
program correctness proofs, rather than the conventional approach of
dynamic checking. For this purpose, a set of database-oriented
Hoare-style axioms and proof rules are designed for a subset of the
language. This method avoids many associated constraint handling problems
such as efficiency of implementation and action on constraint violation.
The implementation of the language on MDB, a formally specified database
system, is described.
%T The Formal Semantics of Deterministic Dataflow Programs
%A Jose N. F. Oliveira
%R PhD thesis
%D February 1984
%X For more than a decade, research into dataflow computing has been
targeted at designing novel architectures which attempt to supersede the
von Neumann design for efficient, parallel execution of computations.
Research at Manchester University has led to the implementation of a
prototype dataflow computer which has been operational since 1981.
During the same period, a dramatic reduction in hardware costs has
uncovered deficiencies in software development technology, which has been
unable to produce the required quantity of cheap, reliable and easily
maintainable software.
_
This thesis is an attempt to assess the impact of dataflow computing on
software technology. The Manchester dataflow (MDF) system design is used
as an example for this study. Semantics for the MDF-machine are described at
the usual graphical level using notation characteristic of dataflow
programming. Properties of the MDF-system are studied from both
theoretical and practical points of view. Based on this semantics, methods
for constructive design and proof of programs written for the MDF-system
are proposed. A transformational approach to dataflow program development
based on functional programming is presented as the basis for a more
ambitious reasoning methodology.
%T A Denotational Semantics for Concurrent Ada Programs
%A Ian Mearns
%D October 1983
%X The dynamic semantics for a subset of the Ada tasking constructs is
presented. The constructs denote processes, following the general theory
developed by de Bakker and Zucker, and do not require translation into an
intermediate language. All operations on processes are designed and
theoretically justified. An existing proof system for Ada tasks is
revised, and proved to be sound and relatively complete with respect to the
semantics.
%T The Specification, Design and Implementation of NDB
%A Ann Welsh
%D October 1982
%X An attempt is made to become familiar with the "rigorous" method of
software development and to identify some associated problems, using the
binary-relational non-programmer database system, NDB, as a vehicle for
this research.
_
A complete specification and development of NDB, and an implementation in
Pascal, are produced.
_
The decomposition of specifications and designs into manageable and
reusable units is seen to be desirable, especially when concerned with a
large and reusable system.
%T A Multilayered Data Flow Computer Architecture
%A John Gurd
%A Ian Watson
%A John Glauert
%D March 1980
%X This paper introduces a multilayered data flow computer architecture based
upon a high level language and underlying graphical notation for
representing computations.
_
The language, LAPSE, employs a single assignment rule and several more
familiar parallel programming constructs. It permits both iteration and
recursion.
_
The graphical notation utilises the concept of labels for those tokens which
use computational subgraphs reentrantly. The operation of labels in
implementing iteration, recursion and data structures is illustrated.
_
The architecture is based on a ring-structure in which binary
representations of tokens, known as results, circulate. A processing unit
computes results as required by a stored program which represents the
computational graph. Each result carries with it a name which is derived
from the notational label. Names perform two functions: firstly, they
separate different instantiations of reentrant code; secondly, they can be
associatively matched with other inputs to their destination node, to
determine when the node may be executed.
The ring-structure can be pipelined to achieve an instruction execution
rate which is limited by the amount of parallel activity permitted by the
program and the technology used.
_
An extensible version of the architecture contains many such rings in a
multilayered structure whose execution rate is bound solely by the inherent
parallelism of the programs running on it. The architecture can also be
organised so as to exhibit a high degree of tolerance to hardware
component failures.