There are many meanings of the word "responsibility" (see Lucas for a detailed discussion), but we shall use the word informally for the moment. The DIRC project is about people and computers, and under the responsibility theme we explore one of the major differences between them: people can be given, or can assume, responsibilities; computers cannot. Things that look like attempts to give computers responsibility, such as the common "It's a computer error" excuse, are really a way of saying "We know it's a human error, but we don't know whose; a computer was implicated in some way, so we'll blame that". We do not really ascribe moral agency to computers, since we never punish them, no matter how egregious the error. We all know the computer only did as it was told, though we may not know who told it, what they told it, or why. The real responsibility rests with a person.
In fact, mix-ups over responsibility are a common cause of system failure. For example, Alice may fail to take some action because she thinks Bob is responsible for it, while Bob believes it is Alice's responsibility. Such problems can arise because responsibilities are not well defined, are often implicitly assigned and delegated, and may be interpreted differently by different people.
The DIRC view is that, to reduce responsibility-related failures, we need to develop a deeper understanding of these failures, to understand how responsibilities interact in complex computer-based systems and to invent ways of making responsibilities explicit in models that can be used to inform system design. Such models may provide a basis for reasoning about responsibilities and allow identification of areas of critical conflict or vulnerability in a computer-based system.
We are analysing case studies of failures to infer the role of responsibility as well as developing fundamental concepts that allow articulation and discussion of different types of responsibility. Accident and incident reports often focus on technical and organisational issues but sometimes fail to discuss responsibilities.
The report of the inquiry into the Ladbroke Grove rail accident is an exception and has been the focus of our first major responsibility case study. The Ladbroke Grove accident involved a head-on collision between two trains, one of which had failed to stop at a red signal. There is an obvious question about the visibility of this signal, and then a further question about who was responsible for ensuring its visibility. A conclusion of the inquiry is telling: "It is impossible to say as regards any of these areas that any one individual person could properly be fixed with responsibility for the errors and omissions which were found." We have carried out a detailed analysis of the responsibilities involved, which is described in two of the papers listed below.
But the use of the word "responsibility" in this conclusion misses a key distinction between types of responsibility: consequential responsibility (who takes the blame for something) and causal responsibility (who makes something happen). We believe these are quite different: a machine, for example, may be causally responsible for ensuring that some state is maintained, but a human is consequentially responsible if that machine fails.
We are currently refining these concepts and developing notations to denote both consequential and causal responsibilities. We believe that these will help system designers identify system vulnerabilities and provide a basis for supporting recovery from failure.
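To illustrate the distinction between the two kinds of responsibility, it might be sketched as a small data structure. This is purely an illustrative sketch, not DIRC's actual notation; the agent names and the rule that only humans can hold consequential responsibility are drawn from the discussion above, but the class and method names are invented for the example:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Agent:
    """An agent that may hold responsibilities: a person, a role, or a machine."""
    name: str
    is_human: bool  # only humans can hold consequential responsibility

@dataclass
class Responsibility:
    """A state of affairs to be maintained, keeping the two kinds of holder distinct."""
    state: str
    causal: set = field(default_factory=set)         # who makes it happen
    consequential: set = field(default_factory=set)  # who takes the blame on failure

    def add_causal(self, agent: Agent) -> None:
        self.causal.add(agent)

    def add_consequential(self, agent: Agent) -> None:
        if not agent.is_human:
            raise ValueError(f"{agent.name}: a machine cannot be consequentially responsible")
        self.consequential.add(agent)

    def vulnerabilities(self) -> list:
        """Flag states that lack either kind of holder."""
        issues = []
        if not self.causal:
            issues.append(f"'{self.state}': nobody makes this happen")
        if not self.consequential:
            issues.append(f"'{self.state}': nobody takes the blame if this fails")
        return issues

# Hypothetical example: a machine maintains signal visibility, but no human
# has been made consequentially responsible for it.
signal_visibility = Responsibility("signal is visible to drivers")
signal_visibility.add_causal(Agent("signalling system", is_human=False))
print(signal_visibility.vulnerabilities())
```

Running the sketch reports that nobody takes the blame if the state fails, which is exactly the kind of vulnerability a responsibility model would surface for a designer.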
Responsibilities cannot be directly observed in the way that people's actions, writings, and speech can. The way that people interpret their responsibilities has to be inferred and constructed. This can happen, for example, in the course of an ethnography or in an inquiry following a failure. But it does raise the question: what does the concept of responsibility give us that primary observation does not?
The best answer is a methodological one. One way of detecting weaknesses in an organisation is that advocated by the Soft Systems Methodology: make a normative model (one which shows what characteristics an organisation needs to have in order to be counted as an organisation of a certain type), compare it with a descriptive model of the actual organisation claiming or desiring to be of that type, and see where the discrepancies are. Then use these discrepancies as points for debate about the nature of the organisational change required. Within DIRC, we construct the normative model in terms of responsibilities (which imply the actions and resources required to discharge those responsibilities) and construct the descriptive model on the basis of ethnographic observation. We are using this approach in looking at a hospital situation.
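The comparison step of this approach can be sketched in a few lines. In this minimal sketch responsibilities are plain strings and both models are simply sets; real models would also carry the actions and resources needed to discharge each responsibility, and the hospital responsibilities shown are invented for the example:

```python
def discrepancies(normative: set, descriptive: set) -> dict:
    """Compare what the organisation ought to discharge with what was observed."""
    return {
        # required but not observed: a vulnerability, a point for debate
        "unassigned": normative - descriptive,
        # observed but not required: possibly informal work the model missed
        "unaccounted": descriptive - normative,
    }

# Hypothetical normative model (what a hospital ward of this type must discharge)
normative = {"triage incoming patients", "maintain drug records", "audit equipment"}
# Hypothetical descriptive model (what ethnographic observation actually found)
descriptive = {"triage incoming patients", "maintain drug records", "chase missing notes"}

gaps = discrepancies(normative, descriptive)
# gaps["unassigned"] == {"audit equipment"}
# gaps["unaccounted"] == {"chase missing notes"}
```

The two resulting sets are not answers in themselves; in the Soft Systems spirit they are starting points for debate about what organisational change, if any, is required.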
J. Lucas, Responsibility, Oxford University Press, 1996.
P. Checkland, Soft Systems Methodology, John Wiley, Chichester, 1984.
For our approach to ethnography, see our summary of ethnography.
For its relationship to design, see Ethnography and Design.
For a preliminary report on the hospital situation, see our summary of hospital process modelling.
For an approach to modelling, see situation modelling.
For the elaboration of responsibility modelling and its relationship to dependability, see Chapter 2 in our book on Trust in Technology.
Dobson, J.E. 2005. Enterprise modelling based on responsibility. In Trust in Technology: A Socio-Technical Perspective, Rouncefield, M., Clarke, K. and Hardstone, G. Springer.
J.E. Dobson and I. Sommerville, Roles are Responsibility Relationships Really, IEE Symposium on People and Computers, October 2005 (to appear).
J.E. Dobson, S. Lock and D.B. Martin, Complexities of Multi-Organisational Error Management, Proceedings of the 2nd Workshop on Complexity in Design and Engineering, Glasgow, March 2005.
Martin, D., Rouncefield, M. and Sharrock, W. 2006. Dependability and Responsibility. To appear.
Last Modified: 10 August, 2005