11 What has not been covered
OSI is now a very large subject, and inevitably in any text of reasonable length there is a large amount left uncovered. This text has aimed to cover the basic work and principles, together with more recent work that affects principles, or infrastructure facilities that affect what is available to the application designer. Its treatment of specific applications has, of necessity, been somewhat sketchy. The reader who has persevered this far should be well-placed to understand discussions of specific standards, or to read the ISO or ITU-T texts themselves.
All that remains in this last brief chapter is to provide some pointers to other work that has been discussed only very briefly, or not at all, in the earlier text.
11.1 Functional profiles
The problem of options remaining in base standards was originally addressed by implementors' groups in the USA, Europe, and the Pacific Basin, and latterly by regional standards bodies in these regions. These bodies produce functional profiles, which specify nothing more than a selection of options and a combination of base standards to support specific applications or specific environments. Their output now feeds into further ISO activity on International Standardised Profiles (ISPs), a new form of ISO publication that is neither an International Standard nor a Technical Report. Thus international specifications are now beginning to emerge that can be referenced in procurement and that provide "standards" with few options (though often lacking much of the functionality of the base standards). The eventual importance of the ISP work is unclear. We may see base Standard features that are not included in ISPs never implemented, and effectively lost; or we may see base standards fully implemented, in which case ISPs will become irrelevant. It is at least arguable that ISPs are important in the short term of partial implementation to ensure interworking, but that they will become steadily less important as vendors implement the complete contents of the standards.
11.2 Conformance testing
Another major area of work (closely linked to formal description techniques, but separate from them) is that of conformance testing. There has always been concern about a problem inherent in OSI: the potential non-interworking of systems from different vendors, all of which claim conformance to the same Standard. This is not just a question of options; it also arises from errors in interpreting the standards or in implementing them. To address these problems, standards have been produced that specify a series of tests to be carried out (in themselves a further protocol, exercising all the options - and some error cases - of the base Standard). A lot of work has also been done on the architecture of testing conformance to a given layer Standard (such as the Session Layer). The reader will remember that conformance to OSI relates mainly to the bits on the line. Interfaces in implementations are not specified, nor is there any requirement for an interface of any sort corresponding to a layer service boundary. Moreover, the mapping of top-level service primitives (such as those of FTAM) on to real events in real systems is a matter for implementation design, not something laid down by standards. Despite these problems, a lot of progress has been made on conformance testing, with a number of centres in all parts of the world providing OSI conformance testing services.
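The flavour of such a test suite can be sketched in outline. The code below is purely illustrative - the names, PDUs, and verdicts are invented for this sketch and are taken from no actual conformance-testing Standard - but it shows the basic shape: each abstract test case applies a stimulus to the implementation under test (IUT) and compares the response against what the base Standard requires, covering normal options as well as deliberate error cases.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class TestCase:
    name: str
    stimulus: str   # PDU sent to the implementation under test (invented)
    expected: str   # response the base Standard is assumed to require

def run_suite(iut: Callable[[str], str], suite: list[TestCase]) -> dict:
    # Apply each stimulus to the IUT and record a verdict per test case.
    verdicts = {}
    for tc in suite:
        actual = iut(tc.stimulus)
        verdicts[tc.name] = "PASS" if actual == tc.expected else "FAIL"
    return verdicts

# A toy "implementation" standing in for a real protocol entity: it accepts
# a well-formed connect request and rejects anything malformed.
def toy_iut(pdu: str) -> str:
    return "ACCEPT" if pdu == "CONNECT" else "REJECT"

suite = [
    TestCase("valid connect", "CONNECT", "ACCEPT"),   # exercises a normal option
    TestCase("malformed pdu", "GARBAGE", "REJECT"),   # deliberate error case
]
```

In a real test architecture the stimuli are complete PDUs of the layer protocol under test, and the verdicts (pass, fail, inconclusive) are assigned by a tester that sits, in effect, as the peer on the line.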
11.3 Security
We have touched on security issues from time to time throughout this text. In the early days of OSI there was little attention to security. One of the interesting snippets in the early OSI documents lodged in the UK Archives of the History of Computing was a 1978 proposal that a series of meetings should take place to discuss security issues in OSI; it was blocked by the BSI Secretariat on the grounds that "It is not clear what standard it will produce." In fact, security issues were not seriously addressed in a general way in OSI until the late 1980s (about ten years later), with the first Reference Model and most early standards paying little attention to security. Indeed, there were people who argued that "security" (usually implying some degree of closed operation) and "Open" Systems Interconnection were a contradiction in terms. The turning point came with the publication in 1989 of Part 2 of the Basic Reference Model: Security Architecture, and with the almost simultaneous maturing of the 1988 X.400 and X.500 specifications, with their significant security-related features.
Work in the late 1980s and the early 1990s has been guided by the (rather complex) approach shown in figure 11.1: Approach to security specification. This recognises the two main strands of standards work: Open Systems Interconnection on the left, and Open Distributed Processing on the right. Central to the approach is a series of "Framework" documents identifying the options and techniques for providing particular security features, an Upper and Lower Layers security architecture document to supplement Part 2 of the Basic Reference Model (the Security Architecture), and a variety of models specific to particular application areas, leading finally to actual security features in OSI Service and Protocol Standards. Whilst this outline has not been completely adhered to, most of the documents shown in the figure were at some stage of draft in 1995, although few could be said to be really mature.
Specific security features in FTAM and X.400 and X.500 have already been briefly discussed, but the reader should also be aware of four other pieces of ongoing work in the early 1990s whose detailed treatment is beyond this text:
- In the lower layers, work was maturing rapidly for major security additions to services and protocols at the transport/network boundary.
- Work was in progress in the application layer on specifying interfaces to locally provided security features that could then be referenced in protocol standards.
- Work was in progress defining a Security ASE that would perform a number of security-related functions (particularly authentication and initialization for secure exchanges) that were not application-specific.
- A Generic Transfer Syntax for security had been drafted. This recognised that secure transfer syntaxes could be developed by applying a series of transformations to an abstract value, starting with the application of a normal encoding rule (such as BER), followed by encryption or signature transformations. The generic transfer syntax effectively applied the transformations in sequence, and carried at the head of the resulting encoding an ASN.1 data structure identifying the transformations (and any applicable parameters) that had been applied.
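The transformation pipeline behind the Generic Transfer Syntax can be sketched as follows. This is a minimal illustration under stated substitutions: JSON stands in for a BER encoding of the abstract value, a toy reversible XOR transform stands in for real encryption, an appended SHA-256 digest stands in for a real signature, and the header format is invented - the real scheme carries an ASN.1 structure at the head of the encoding.

```python
import hashlib
import json

def encode_base(value) -> bytes:
    # Stand-in for applying a normal encoding rule (BER in the real scheme).
    return json.dumps(value).encode()

def xor_transform(data: bytes, key: bytes) -> bytes:
    # Toy reversible "encryption" transformation (illustrative only).
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def sign(data: bytes) -> bytes:
    # Toy "signature" transformation: append a digest of the data.
    return data + hashlib.sha256(data).digest()

def generic_transfer_encode(value, key: bytes) -> bytes:
    # Apply the transformations in sequence, recording each one applied.
    data = encode_base(value)
    transformations = []
    data = xor_transform(data, key)
    transformations.append({"id": "xor-demo", "keylen": len(key)})
    data = sign(data)
    transformations.append({"id": "sha256-append"})
    # Carry, at the head of the final encoding, a structure identifying the
    # transformations (and their parameters) so the receiver can undo them.
    header = json.dumps(transformations).encode()
    return len(header).to_bytes(2, "big") + header + data
```

A receiver reads the header first, then undoes the listed transformations in reverse order (check the digest, decrypt, then decode) to recover the abstract value.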
11.4 Changes resulting from high bandwidth requirements
Developments for high bandwidth networks and applications have already been touched on at the end of the Network Layer discussions. If followed through appropriately, these could give rise to networking where round-trip times, buffers in the network, and flow control (and hence the need for expedited data and perhaps for resynchronization) become an irrelevance, with consequent knock-on effects on the Session Layer. By 1995, however, the only specific proposal that had been made was for a "graceful termination" feature to be added to the OSI Transport Layer, bringing it nearer to TCP in functionality, and perhaps making S-RELEASE unnecessary (or at least, provided without Session Layer protocol). At the time of writing this text, it is only possible to say that the outcomes of this new work could be fundamental and far-reaching, but could also prove to be a damp squib with little real impact. It is hard to tell.
11.5 Other application standards
There are a number of other standards that are loosely related to OSI. Here we merely list them (and there will be others that we have missed - there is no definitive list of what counts as an OSI Standard):
- Office Document Architecture (ODA) and Standard Generalized Markup Language (SGML) are both standards which are intended to assist in the production, maintenance, and transfer of highly structured documents. ODA is ASN.1-based, and can be expected to extend to multi-media documents, but the extent to which it can replace or encompass a real multi-media authoring system is very unclear. SGML is more text-based on the surface, but does have the capability to include other material. Major implementations supporting local production and editing of ODA documents, with transfer to other systems using ODIF (ODA Interchange Format), are expected to become commonplace in the late 1990s.
- Manufacturing Message Specification (MMS) was a standard originally produced by General Motors as the main application standard in its MAP (Manufacturing Automation Protocol). The protocol in MAP 1 was hand-crafted, but it was later rewritten using ASN.1 and eventually Standardised by ISO.
- CASE (Computer Aided Software Engineering) Data Interchange Format (CDIF) was a further application-specific standard, developed in the early 1990s. The group developing it was largely unaware of ASN.1, and CDIF adopted its own model and its own approach to syntax definition, with its own equivalent of the ASN.1 object identifier. The importance of this work will only become clear in the late 1990s.
- Job Transfer and Manipulation (JTM) became an International Standard in the late 1980s. It was one of the initial three application standards (VT, FTAM, and JTM) begun in the late 1970s. It has a number of interesting features, but suffers from being monolithic (making no use of ROSE or RTSE), large, and complex to implement. The scope implied by its title - submission of background jobs to a number-cruncher queue and (much) later distribution of the output - whilst not the only thing it can be used for, has largely become a requirement of the past. There have been no major implementations of the Standard, and in the early 1990s it is largely seen as an irrelevance.
- Open Distributed Processing has been mentioned very briefly earlier in this text. Whilst not really an application standard, its impact is generally expected to be mainly in the application area. In the early 1990s the ODP Reference Model was beginning to mature, but with the exception of the trader concept discussed in the X.500 section, it was still unclear precisely what sort of standards would emerge and what its impact would be. Watch this space!
11.6 Postscript
It is hoped that the reader has at least enjoyed this text, and now has an improved perception of what OSI is about. This book has attempted to present the more interesting and/or difficult concepts and approaches introduced by OSI, and to answer wherever possible and appropriate the "Why?" questions.
OSI Standardization is not a static subject. It is highly dynamic, with new proposals arising all the time, and with areas hitherto considered stable and "finished" (such as the Transport Protocol, and maybe even the most basic concept of seven layers) suddenly becoming highly unstable through the acceptance of New Work Item proposals. It should also be remembered that all ISO standards are required to be formally reviewed every five years, and then modified, withdrawn, or reaffirmed. Thus any knowledge of OSI inevitably becomes dated and requires frequent updating, but it is hoped and expected that the reader will continue to benefit from the understanding gained from this text, will not find it difficult to read and discuss any primary OSI material encountered in the future, and may even contribute to the ongoing OSI standardization process. If so, this book will have served its purpose.