9 And yet more tools

This chapter and the next chapter are concerned with the application layer. The concern, however, is with the layer as a whole, and individual applications are in the main treated only where they illustrate more general points about application design. For a detailed treatment of any one application other books exist and should be consulted.

9.1 Application Layer Structure

Before reading this chapter, the reader is invited to review the discussion in Chapter 2 on the OSI Architecture. It will, however, be hard for the reader to grasp and remember abstract model concepts before the practical examples have been presented. On the other hand, it is easier to show how the examples relate to and make use of the model concepts if the model has been presented first! The reader may therefore wish to return to and re-read this Chapter after the later Chapters have been read.

There was a great deal of change (from 1978 to 1995) in the theory of how to define computer protocols, and in the associated terminology and model concepts.

First let us introduce the term application association, or just an association. In the lower layers we talk about a network connection, or a presentation connection. The definition in the Basic Reference Model of connection is based on the idea of layer (N) providing a service to entities in layer (N+1), and an (N)-connection is defined (see figure 9.1: Definition of an N-connection) as "an association between two entities in layer (N+1) provided by the entities in layer (N)". Thus a network connection is an association between a pair of transport entities provided by the entities in the network layer. This clearly fits with normal usage. But now consider what an application connection would mean: this would be an association between entities above the application layer (and there are no such things) provided by the application layer. Thus the term application connection is an inappropriate one, and is not used. Applying the definition again, however, we see that a presentation connection is "an association between two application entities" provided by the presentation layer. Thus we can use the term application association as a synonym for presentation connection if we want to focus on the interaction between the applications, rather than on the underlying communications (which we do).

There have been attempts to use the term application association in a somewhat broader fashion. If two application entities are communicating interactively using connectionless communication, with some sort of state information shared between them for the duration of the communication, then one might wish to speak of an application association between them, even though there is no presentation connection. Similarly, if two applications have recovery procedures in place to reestablish their operation from a checkpoint following loss of a presentation connection, one might again wish to say that their application association had not been lost. At the present time, however, application association equates with presentation connection, with the ACSE (Association Control Service Element) Standard providing an A-ASSOCIATE service primitive to "establish an application association" (which it does by issuing a P-CONNECT), and a corresponding A-RELEASE, A-U-ABORT, and A-P-ABORT which marry together loss of the application association with loss of the presentation connection. (ACSE is discussed in more detail below, but figure 9.2: Services from Presentation + ACSE is worth a moment's inspection. It shows an application layer designer choosing to use the ACSE tool as having available to him a set of service primitives consisting of the P-service primitives less those "stolen" by ACSE, plus the A-service primitives provided by ACSE.) With that introduction of the term association, and a brief mention of ACSE, we now return to the problem of modelling the application layer, and the terminology introduced.
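
The "set arithmetic" of figure 9.2 can be sketched as follows. This is purely an illustration in Python; the primitive names are taken from the text, but the function and variable names are invented and do not come from any standard API:

```python
# A minimal sketch of the service set seen by an application designer
# who uses ACSE: the P-service primitives, less those "stolen" by
# ACSE, plus the A-service primitives that ACSE provides in their place.

P_SERVICE = {
    "P-CONNECT", "P-RELEASE", "P-U-ABORT", "P-P-ABORT",
    "P-DATA", "P-TYPED-DATA", "P-SYNC-MAJOR", "P-RESYNCHRONIZE",
}

# Primitives ACSE takes over, and the A-primitives offered instead.
STOLEN_BY_ACSE = {"P-CONNECT", "P-RELEASE", "P-U-ABORT", "P-P-ABORT"}
A_SERVICE = {"A-ASSOCIATE", "A-RELEASE", "A-U-ABORT", "A-P-ABORT"}

def available_service(p_service, stolen, added):
    """Total service available above an ASE that steals some primitives."""
    return (p_service - stolen) | added

total = available_service(P_SERVICE, STOLEN_BY_ACSE, A_SERVICE)
assert "P-CONNECT" not in total   # stolen by ACSE
assert "A-ASSOCIATE" in total     # provided by ACSE in its place
assert "P-DATA" in total          # passed through untouched
```

The same computation applies, in principle, to any ASE that steals primitives, which is why two ASEs cannot steal the same P-service primitive.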

In the earliest work, people were expecting simple (and probably large) monolithic specifications for an application on top of the Presentation Service. All necessary common tools were provided by the middle layers, and all that remained was to produce the real application standards. In the late 1970s, work was started on three application standards: FTAM - File Transfer, Access and Management; VT - Virtual Terminals; and JTM - Job Transfer and Manipulation. Work was well-advanced on these by the time other ideas of application layer structure emerged, and these three standards (now full international standards) remained largely monolithic standards, covering in one standard all the necessary details of the protocol exchanges for these three applications.

The original ISO model terminology talked about entities in each layer that communicated, using the services of the layer below. Thus we had network entities, transport entities, session entities, presentation entities, and application entities. These entities were the abstraction of a protocol handler, and were broadly in one-to-one correspondence with protocol standards.

The beginnings of the idea that this was not the right approach came from the CCITT OSI Reference Model, with the idea that, in the application layer, there would not be a single monolithic Standard specifying the behaviour of the application entity, but rather a collection of standards used together. Thus whilst for some aspects of the behaviour of the application entity we can still regard it as atomic (it is still a useful model term), we can peer inside it and find it is made up of Common Application Service Elements (CASEs) and Specific Application Service Elements (SASEs), rather as a physical atom is made up of neutrons and protons. See figure 9.3: Earliest approach to ALS for the view of application layer structure taken in the earliest Reference Model Standard. The CASE standards provide additional infrastructure in the application layer and the SASE standards provide for real applications using that infrastructure. Thus ACSE was a CASE Standard. (In fact, the Reference Model, as shown in figure 9.3, also included a User Element, which was supposed to provide the top-level use of the Standardised services. This term was dropped in later work.)

It was in the middle to late 1980s that this was seen to be a move in slightly the wrong direction, and the terms CASE and SASE were abandoned in favour of describing the basic building blocks (main standards) of the application layer simply as ASEs (Application Service Elements), with a collection of ASEs used to support some application (forming an application entity). The distinction, blurred as it may be, is nonetheless useful for tutorial purposes, and roughly speaking the standards discussed in this Chapter are infrastructure or CASE standards, and those in the next Chapter are real applications or SASE standards.

The concept of ASEs was present in the first Basic Reference Model Standard, but there was still a lot of work needed to make the concept usable. The most important question was: "What glue is needed to put a set of ASEs together?" The immediately obvious answer is the service primitive concept, designed in the lower layers to enable one protocol standard to invoke the procedures of some adjacent layer protocol standard. This notation, however, was designed primarily to pass parameters to fill in holes in parameterised messages, and assumed a strict hierarchy of layers, with one layer completely hiding the services of the layers beneath. Nonetheless, an attempt was made to use this approach. Every Application Layer Standard defined not merely its protocol, but a set of service primitives by which some other ASE could reference it and cause its procedures to be invoked. The concept from the lower layers was slightly modified for use in the application layer to reflect the idea that the lowest ASEs (like ACSE) will "steal" some P-service primitives, providing a total service consisting of the rest of the P-service primitives and the new services of these ASEs. At first sight, any of these ASEs can be used in any combination (provided they don't "steal" the same P-service primitives) to provide a total service which can be used by other application designers to build yet more ASEs. This model of nested service primitives is illustrated in figure 9.4: Nested service primitives model. Unfortunately, there are some problems with this simple approach.

The reader might, for example, like to consider that layering and service primitives are all about filling in "holes". How then does this nested service primitive model relate to the use of ASN.1 macros or information objects to fill in holes? There was, in the early 1990s, no clear answer to this and to many other similar questions.

Part of the general problem of combining ASEs through the nested service primitive model comes in clearly defining what is the total service available to someone writing a new application when there are a lot of ASE specifications (building blocks) available. For example, suppose one ASE (providing a set of X-service primitives, say) has been written using the Session Service activity functional unit, and another ASE (providing a set of Y-service primitives, say) has been written using the major synchronization functional unit. Rules concerning the order in which the X-service and Y-service primitives can be issued (by some third application layer designer who wishes to use these two ASEs as building blocks) can be deduced from the Session Layer rules, but are not necessarily obvious from the definition of the X-service and Y-service, which may not even specify what Session Service primitives they are mapped onto. An even bigger problem arises if the X-ASE uses the TWA Session Service and the Y-ASE uses the TWS Session Service! What we really want is two independent sets of Session Service primitives and messages, one for each ASE, but operating in a coordinated fashion so that messages from the different ASEs do not get out of order. We also have to worry about the effects of resynchronization invoked by one ASE on the messages of the other ASE. Even if both ASEs use similar Session Services, there can still be problems in combining them. Suppose they both define messages that need to be carried on a major synchronization primitive. What is probably required is to combine them in such a way that both their messages are carried on a single major synchronization primitive, not to issue two separate primitives.

Finally, let us consider the use of ACSE. It is the issuing of an A-ASSOCIATE request that establishes a presentation connection to support an application association. With the simple model of nested service primitives described above, there can be precisely one ASE that issues the A-ASSOCIATE primitives. But the broad characteristics of the association are determined when it is first established (and in particular the functional units) by parameters of the A-ASSOCIATE primitives. Thus if the model of flexible re-use of ASEs is to work, we cannot have the A-ASSOCIATE primitives "stolen" by any one ASE.

Again a small digression: the FTAM and JTM standards specify completely the use of A-ASSOCIATE, and follow very much the "nested service primitives" approach. The VT Standard, produced slightly later, merely makes clear the characteristics of the application association that it requires, and makes no pretence of "stealing" A-ASSOCIATE. TP, produced later still, talks about "managing a pool of associations", with no concern about how or when they were established.

These, then, are some of the issues to be addressed in making the ASE concept work.

An attempt was made to produce the beginnings of a solution in the late 1980s with the development of a new Application Layer Standard (ISO 9545) called "Application Layer Structure" (ALS). Much of the work leading up to this was dominated by concerns over whether an application-entity could be described as handling only a single connection, with things dealing with more than one connection not part of OSI, but part of something else. There were at that time two standards at a late stage of development that were indeed concerned with coordinated activity on more than one connection, CCR (Commitment, Concurrency, and Recovery) and JTM (Job Transfer and Manipulation), and it was apparent that coordinating activity over more than one connection was an important part of the specification for some applications. The resulting ALS structure therefore recognised two levels in the building up of a complete application entity. (See figure 9.5: Application layer structure circa 1989) First, there were ASEs that operated over a single application association, forming a Single Association Object. These were shown as a vertical stack with ACSE at the bottom, implying something of the nested service model, and alongside them there was shown a Single Association Control Function (SACF). The SACF was the necessary specification of how those ASEs were to be used together on the single association, but no SACF standards as such were ever produced: the idea was that this picture modelled the sorts of specification that were needed, but the SACF would in fact be text distributed through the main ASE standards relating to their use of or use by other ASEs. The Single Association Objects might then be combined into a Multiple Association Object by some Multiple Association Control Function (MACF) text that said how activity on the different associations was to be related.

This was a reasonable starting point, but again it proved to be a slightly off-track development. The picture really implies there is an ASE operating for each association. Whilst this may be appropriate for ACSE, which is only concerned with a single association, it is highly inappropriate for CCR and TP (described later) or JTM, that are very much concerned with the coordinated handling of many associations. The next step in the thinking involved work in the early 1990s on what was called "The Extended Application Layer Structure" (the XALS), which became an International Standard in 1992. This work was published as Amendment 1 to ISO 9545, but the amendment actually struck out virtually all but the introductory text and replaced it with completely new text! It is effectively a new Application Layer Structure Standard.

What are the new concepts this amendment introduced? First, the concepts of Single Association Objects and Multiple Association Objects, and the related concepts of SACF and MACF were deleted from the main text, and were relegated to an informative annex: the concepts and distinctions they implied were not considered useful in the practical job of combining ASE specifications into complete applications.

The new concepts introduced recognised (like the nested service primitives model) an arbitrarily deep hierarchy of nested specifications, but placed much more emphasis on the concept of a specification of a Control Function that would relate component parts together. The major new term introduced was the Application Service Object (ASO). (Everybody started talking about objects in the late 1980s and early 1990s, and to be fair there was a real attempt to apply the information hiding concepts of object-orientation. The class concept and inheritance were less frequently introduced, and were not present in the XALS.) The other "actors" in the model were Control Functions (CFs) and Application Service Elements (ASEs), with Application Entities (AEs) still as the outermost structure. How do these relate together?

An Application Service Element is a basic building block, and has no component parts. An ASO, by contrast, has a recognised structure involving precisely one CF (Control Function) together with a collection (one or more) of either ASOs or ASEs. The CF within the ASO determines how the component parts are to be combined to form the ASO. An AE (Application Entity) is now defined as the outermost ASO operating on an association. The focus, then, is on the structured combining of objects in a hierarchy of specifications, each of which must contain CF text.
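
The recursive structural rules can be captured in a few lines of code. The following Python sketch is purely illustrative (the class and field names are invented, and no real standard defines such an interface); it enforces only the rules just stated: an ASE is atomic, an ASO is exactly one CF plus one or more ASO or ASE components, and an AE is simply the outermost ASO:

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass
class ASE:
    """Basic building block: no component parts."""
    name: str

@dataclass
class CF:
    """Control Function: text saying how the components are combined."""
    description: str

@dataclass
class ASO:
    name: str
    cf: CF                               # precisely one CF
    components: List[Union["ASO", ASE]]  # one or more ASOs or ASEs

    def __post_init__(self):
        if not self.components:
            raise ValueError("an ASO needs at least one ASO or ASE component")

# An AE is just the outermost ASO: here, a hypothetical entity built
# from an ACSE ASE and an application-specific ASE under one CF.
ae = ASO(
    name="example-AE",
    cf=CF("maps messages onto P-primitives and invokes ACSE"),
    components=[ASE("ACSE"), ASE("application-messages")],
)
```

Note how the recursion falls out naturally: an ASO can appear in the component list of a larger ASO, giving the arbitrarily deep hierarchy of nested specifications described above.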

Note that an ASE (in the thinking of the 1990s) does not make use of other ASEs. In service primitive terms, it is self-contained and interacts with other ASEs (or ASOs) through the specification of a CF. Note also that an Application Entity (AE) is never just an ASE. It is defined as an ASO - that is, it has a CF and one or more ASE and ASO components. This can be interpreted as meaning that some standards previously called ASEs (particularly those originally termed SASEs) are no longer ASEs with the new definition. In particular, those that reference and use A-ASSOCIATE (like FTAM and JTM), or the CCR service primitives (like TP) are strictly speaking now ASOs. In some cases (and particularly that of FTAM), there is a view that it should indeed be an ASE, making no use of ACSE, and that a number of ASOs should be defined as actual international standards for the combination of an FTAM ASE with other ASEs (particularly with CCR).

There is one final important aspect to the XALS work: the ASE specifications are no longer expected to identify the presentation layer service that the application's messages are to be carried by. That is a piece of specification that should appear only in the outermost CF (the one that specifies the complete Application Entity). The advantages of this for combining specifications that currently appear wholly incompatible (such as ones that use session activities with ones that do not) will be obvious, but it will take some time before there is a clear separation in the documentation of the syntax and semantics of application messages from the actual service primitives carrying these messages.

The ramifications of the XALS specification were not clear in the early 1990s (most of the Standards Experts who were not directly concerned with the work had little knowledge of it), but its importance for obtaining full re-usability of specifications was recognised. There were, however, serious doubts about whether the concepts had arrived too late, most of the initial set of OSI standards being complete, and there being little enthusiasm to revise them to produce the separation of text identified above. Work has, however, been undertaken to:

  • Modify ACSE to take account of the new model. (The model not only changed terminology, it also introduced some new naming concepts for identifying ASOs within ASOs).
  • Get a consistent treatment of references or non-references to ACSE in other standards.
  • Investigate the problems of splitting some standards into an ASE component and a CF component.

In the latter case, the ASE component would consist of the messages they defined (with their semantics and the properties needed for their carriage: whether major sync, resynchronization-like behaviour, and so on), whilst the ASO component would contain the CF that would specify how those messages were carried in P-service primitives and how ACSE was used. These could be separate standards, or separate Parts of one Standard. This would pave the way for other ASO specifications that could use the same ASE part, perhaps with ASEs from some other Standard, to define a different ASO. Unfortunately, this book has been written too soon to describe the outcome (or even the likely outcome) of this work! Look out for the second edition!

There is one further point to consider, and this relates to the entire upper layer architecture (Session, Presentation, and Application). We have already made some mention of the way in which the contents of "holes" need their syntaxes identifying in much the same way as the Presentation Layer identifies the outer level. If we are to have relay-safe encodings, this identification must not be by reference to a real outer-level presentation concept. Thus we begin to get the concept of mini presentation layers inside each hole (and, of course, holes recursively nest, and maybe more or less correspond to ASE/ASO structuring).

Further, we have spoken of the problems of combining one ASE/ASO that uses the session activity concept with one that uses major sync. These problems might be addressed by divorcing their messages and semantics from the P-service primitives carrying them, but there is an alternative: could we (in some way to be determined) provide each ASE/ASO with its own independent session functionality, where any session purging purges only the messages of that ASE/ASO, and not those of any other ASE/ASO with which it might be combined?

What we would end up with if this approach is followed would be a five layer model, with the bottom four layers (up to the Transport Layer) as now, and with the top layer being a recursively nested structure of ASOs, each ASO containing its own session and presentation functionality (layer would now be the wrong term). A simple unstructured application would still have a session, presentation, and application layer as now, so existing standards would not necessarily be disrupted. A paper from a USA Expert advocating the adoption of this "five plus a three-layer recursion" model was informally circulated round the world early in 1992, but was considered by many to be too radical a change for introduction so late in the development of OSI.

Nonetheless, in mid-1995 drafts are in preparation for amendments to the session, presentation, and ACSE Standards that will support "nested session connections", allowing the establishment of session connections (with presentation and ACSE and other ASEs on top) within an existing connection. This nested connection can be used to support an ASO (defined earlier perhaps as a complete specification) which is now being embedded within a newly defined ASO. The inner connection has its own set of session primitives, and any purging on this nested connection does not affect its parent (but purging of the parent does affect the nested connection).

In case the reader is getting alarmed, however, it is important to note that we are here discussing "architecture", that is, the way we structure the documentation of what has to be implemented. Changing this documentation structure need not, and indeed will not, affect the actual "bits on the line".

At this stage the discussion has to be drawn to a close! The text has moved into very uncertain waters, and the reader is cautioned that the later parts of the above discussion are still somewhat speculative. The ideas presented may provide dramatic changes in the modelling and architecture of OSI, or they may quietly die.

9.2 ACSE and addressing issues

We have already said quite a lot about ACSE. Let us try to complete the picture. First, we need to discuss briefly the addressing provision in the OSI layers, then we will look at the added value provided by ACSE beyond that available from the Presentation Service.

The OSI architecture has the concept of a Network Service Access Point (NSAP) address, discussed earlier, which is world-wide unambiguous, which is carried in Network Layer protocol, and which is used by routers to establish a connection (or transfer connectionless traffic) to the remote end-system. NSAP addresses are passed in the N-CONNECT primitives. The Transport Layer carries in its connect message additional addressing information that can be used to "fan-out" within the end-system. This information is called the Transport Selector, and the combination of an NSAP address and a Transport Selector forms a Transport Service Access Point (TSAP) address. It is TSAP addresses that are passed in T-CONNECT service primitives. Similarly, in the Session Layer there is a Session Selector, and in the Presentation Layer a Presentation Selector. We have (see also figure 9.6: Structure of layer addresses):

                PSAP address = SSAP address + Presentation Selector
                SSAP address = TSAP address + Session Selector
                TSAP address = NSAP address + Transport Selector
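
The nesting of these addresses can be made concrete with a small sketch. The following Python fragment is purely illustrative (the function names, dictionary keys, and example values are invented, not part of any standard API): each SAP address is simply the SAP address of the layer below plus that layer's selector, so a PSAP address unwinds to an NSAP address plus three selectors.

```python
# Build up layer addresses exactly as in the formulas above:
# each step adds one selector to the address of the layer below.

def tsap(nsap, t_sel):
    """TSAP address = NSAP address + Transport Selector."""
    return {"nsap": nsap, "t_sel": t_sel}

def ssap(tsap_addr, s_sel):
    """SSAP address = TSAP address + Session Selector."""
    return dict(tsap_addr, s_sel=s_sel)

def psap(ssap_addr, p_sel):
    """PSAP address = SSAP address + Presentation Selector."""
    return dict(ssap_addr, p_sel=p_sel)

# Illustrative values only; a null selector is simply empty.
addr = psap(ssap(tsap("example-nsap", t_sel=b"FTAM"), s_sel=b""), p_sel=b"")
```

Here only the transport selector is non-null, corresponding to an implementation where fan-out happens at the top of the Transport Layer, as discussed next.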

Why do we need to provide for fan-out in all three layers? There are two reasons. The first relates to the lack of a protocol identifier in the OSI layers. The selectors provide the ultimate fall-back for identifying (using addressing information) the protocol in the layer above. The second relates to recognition that there will be many varying implementation architectures. Suppose, for example, that in one implementation the whole of the network and transport layers are implemented behind the operating system fire-walls by the main computer vendor, but the applications are each implemented by a third-party vendor as separate monolithic implementations of session, presentation, and application, running as processes on that operating system. It would be natural to use the transport selector to determine which of these applications is to receive an incoming call. On the other hand, change the implementation architecture so that all layers up to and including the Presentation Layer are provided by the computer vendor, with third party products being processes implementing only the application layer, and fan-out using information carried in the presentation layer (whilst not essential) is natural. Thus we are led to expect that, in any real system, all selectors except one might be null, but the non-null one might vary with implementation architecture.

This is still a bit too simplistic, however. Suppose the computer vendor provides (perhaps for historical reasons) direct interfaces to both the transport and to the presentation layer functions, allowing implementations by third-party vendors of both the above sorts to co-exist as processes on his system. In this case, fan-out might be used in the transport layer (with one value of the transport selector saying "stay inside"), and also (for that value of the transport selector) fan-out in the presentation layer. Clearly in the general case we might have non-null selectors in all the layers.
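
The fan-out mechanism itself is nothing more than a lookup keyed by the selector. A minimal sketch (all names and registry entries are invented for illustration; no real interface is being described):

```python
# Selector-based fan-out within an end-system: a layer consults a
# registry, keyed by the selector value from the incoming connect
# message, to decide which local user receives the connection.

registry = {
    b"FTAM-APP": "third-party FTAM process",
    b"":         "vendor-supplied upper-layer stack",  # null selector
}

def fan_out(selector):
    """Deliver an incoming connection to the user registered for selector."""
    try:
        return registry[selector]
    except KeyError:
        raise LookupError("no user registered for this selector")

assert fan_out(b"FTAM-APP") == "third-party FTAM process"
```

In the mixed architecture just described, the null transport selector would play the role of "stay inside", handing the connection to the vendor's own session layer, where a further fan-out of the same shape might occur on the presentation selector.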

This addressing structure is clearly not very human-friendly, and may also be subject to change as implementation architectures and the positioning of applications within one of a set of systems changes due to software or system upgrades. As with many protocol architectures (compare the Domain Name of TCP/IP), there is a separate level of world-wide unambiguous, implementation-structure-independent naming provided in the application layer to identify application entities. This is the Application Entity title (AE-title), and its precise form is defined as an ASN.1 datatype in the ACSE Standard (from where it is imported by FTAM and other standards). The first ACSE Standard had a raw ASN.1 "ANY" as the definition of the AE-title, because of lack of agreement on its form. This was later replaced (using the defect report mechanism!) by a definition of the AE-title as either an ASN.1 OBJECT IDENTIFIER (not very human-friendly, but certainly not dependent on the structure of the implementation or the positioning of the application in some particular system) or as a Directory Distinguished Name. The latter is defined by importation from the X.500 standards, and provides an organizationally structured human-friendly name. In the strict addressing architecture of OSI (Part 3 of the Basic Reference Model), the AE-title is made up of an Application Process title (AP-title) with an AE-qualifier. This in principle means that one can identify a number of application entities as associated with the same application process. However, the concept of application process is ill-defined, and no OSI application to date makes any use of this structure. To all intents and purposes, the AE-title can be regarded as an atomic entity.

Both forms of name can, in principle, be passed to the X.500 Directory Service and will result in a world-wide search which maps the name into the PSAP address needed to access the named application entity. In early 1992, however, the use of X.500 to perform searches using an ASN.1 OBJECT IDENTIFIER was still not sufficiently well-specified in standards, nor were X.500 implementations yet widely deployed. Thus in the early 1990s, the distribution of information about the mapping from application entity titles to PSAP addresses had to be manual.

Let us now turn to the functionality of the ACSE Standard. It has two main pieces of added value compared with direct use of the Presentation Service: the transfer between the communicating partners of their AE-titles and the negotiation of an application context. An application context is identified by an ASN.1 OBJECT IDENTIFIER. Standards which expect to be used as complete applications (hitherto called SASEs, now called ASOs that can - on their own - form application-entities), such as FTAM, X.400, X.500, VT, JTM, all allocate a value for their application context within the base Standard. It is effectively a protocol identifier for that Standard. The negotiation is very simple: an OBJECT IDENTIFIER goes across in the request/indication, and another one returns in the response/confirm. The relationship between these (if any!) is a matter for the application Standard, but they can, for example, be used to negotiate a basic class operation or a full class operation by use of two object identifiers, with the rule that if basic is offered basic has to be accepted and if full is offered either basic or full can be accepted. JTM (Job Transfer and Manipulation) uses them in this way. In the XALS terminology, this specification would be part of the CF of the outer-level ASO.
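
The JTM-style basic/full negotiation rule can be stated precisely in a few lines. The sketch below is illustrative only: the identifier strings and function names are invented, and a real implementation would of course carry genuine ASN.1 OBJECT IDENTIFIER values. The rule is the one just described: if basic is offered, basic must be accepted; if full is offered, the responder may accept either basic or full.

```python
# Illustrative application context identifiers (not real object
# identifier values).
BASIC = "example-context-basic"
FULL  = "example-context-full"

def respond(offered, responder_supports_full):
    """Choose the application context to return in the response/confirm."""
    if offered == BASIC:
        return BASIC        # basic offered: basic must be accepted
    if offered == FULL:
        # full offered: responder may accept full or fall back to basic
        return FULL if responder_supports_full else BASIC
    raise ValueError("unknown application context offered")

assert respond(BASIC, responder_supports_full=True) == BASIC
assert respond(FULL, responder_supports_full=False) == BASIC
```

The relationship between the offered and returned identifiers is, as the text notes, entirely a matter for the application Standard; ACSE itself merely carries one OBJECT IDENTIFIER in each direction.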

The value of exchanging application contexts will be clear: where a single implementation can handle a number of protocols, a single address can be used and the protocol to be used can be dynamically selected. This is probably of more value where the protocols are closely-related (as in the JTM full/basic case) than if they are totally unrelated (like FTAM or VT), but it is clearly a useful provision. The value of exchanging the application entity titles is less clear, but consider the situation where the mapping from AE-titles to addresses has got knotted (for whatever reason). It is desirable at an early stage to discover that you are, in fact, knocking on the wrong door! A perhaps more important reason is that a recipient who knows only the PSAP address of a caller (passed up in the P-CONNECT indication) cannot use that to find out anything further about him from the X.500 Directory Service. On the other hand, if you have got his AE-title, then you can go to the Directory (at least in principle, once X.500 is fully deployed) and get his public key to check authentication information, to find out what abstract and transfer syntaxes he might support, or even what machine the call is from. In other words, you have a usable identification of your peer. With the AE-title, ACSE also carries an AE-invocation-identifier. The use of this is usually left as implementation-dependent, and no OSI application layer standards currently make any use of it. Notionally, it identifies this particular invocation of this application entity, and could be useful for logging and diagnostic purposes, or for an application Standard specifying some form of recovery mechanism.

ACSE also provides an A-service to replace use of P-U-ABORT, P-P-ABORT, and P-RELEASE, on the grounds that, since it is used to set up an association, it should sensibly be used to tear one down. There is, however, only very minor added value on these primitives (slightly changed error codes and reasons). The complete set of parameters on the A-ASSOCIATE request/indication (apart from those which are part of the P-CONNECT and are transparently passed through ACSE) are:

  • Application Context Name.
  • Called and calling AE-title and AE-invocation-identifier.
  • Implementation Information (an ASN.1 GraphicString - unlimited length, any characters registered in the International Register of Character Sets).

On the response/confirm we get the same set, except that the called and calling AE-title and invocation-identifier are replaced by a single responding AE-title and invocation-identifier.
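As an illustration, the parameter sets above might be modelled as follows. This is a hypothetical sketch; the field names are illustrative and do not come from any real ACSE API:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical modelling of the ACSE-specific A-ASSOCIATE parameters
# (excluding those passed transparently through to the P-CONNECT).
@dataclass
class AAssociateRequest:
    application_context_name: str                     # an object identifier
    called_ae_title: Optional[str] = None
    calling_ae_title: Optional[str] = None
    called_ae_invocation_id: Optional[int] = None
    calling_ae_invocation_id: Optional[int] = None
    implementation_information: Optional[str] = None  # GraphicString, any length

# On the response/confirm, the called/calling pairs collapse into a single
# responding AE-title and invocation-identifier.
@dataclass
class AAssociateResponse:
    application_context_name: str
    responding_ae_title: Optional[str] = None
    responding_ae_invocation_id: Optional[int] = None
    implementation_information: Optional[str] = None
```

Every field except the application context name is optional, reflecting the fact that AE-titles and invocation-identifiers need not be exchanged at all.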

In one respect ACSE is negative. The P-CONNECT has a list of presentation data values in its user data parameter. ACSE uses just one of these to carry the ACSE-defined ASN.1 datatype, but provides no access to the others: it offers a list of presentation data values as a parameter of the A-ASSOCIATE, but maps these into embedded pdvs using SEQUENCE OF EXTERNAL, with the problems discussed earlier for an EXTERNAL carried in the P-CONNECT when contexts have not yet been established and multiple encodings may be needed. (It may be that, with the maturing of the ASE/ASO concept, ACSE should increasingly be seen as stealing only the first presentation data value of the P-CONNECT, with the others available directly to other ASEs/ASOs sharing the application association.)

It would in principle have been possible to put the whole of the ACSE parameters as additional fields in the P-CONNECT. This would not noticeably have increased the size of the Presentation Layer standards, but it would have been regarded as a violation of the layering principles: the Presentation Layer is about negotiation of representations, and these parameters have nothing to do with that.

Two final points concern addenda to ACSE that have been produced. One permits the carriage of authentication information on the A-ASSOCIATE exchange. It is nothing more than an OBJECT IDENTIFIER identifying the authentication algorithm, plus a "hole" for information or parameters associated with that algorithm. There is as yet no International Standard for such algorithms (see the later discussion on security in the last chapter). The second addendum provides an A-UNITDATA primitive (not yet used by any International Standard in 1995), carrying the same parameters as the A-ASSOCIATE, thus completing the provision for connectionless services right up to the application layer. This, of course, "steals" the P-UNITDATA.

9.3 ROSE and RTSE

The reader may be wondering what else is coming. We have surely got enough tools now to build our real applications without more infrastructure? Well, not quite. ROSE and RTSE were developed specifically (both originally with CCITT Recommendation X.410) to ease the task of developing the X.400 electronic mail application. They were later seen as an important part of the OSI infrastructure, equivalent ISO standards were produced, and they were moved into the X.200 series.

ROSE (Remote Operations Service Element) has been repeatedly discussed earlier in the text. It is again quite a short Standard. It defines an ASN.1 type that is a CHOICE of four types - its messages. Protocol messages are often called Protocol Data Units (PDUs), a term introduced in the Basic Reference Model. The first is the RO-Invoke PDU, the second the RO-Result PDU, the third the RO-Error PDU, and the fourth the RO-Reject PDU. All except the last have "holes" in them.

These messages are used to support the concept of sending a message that invokes some operation or processing on a remote system, and to tie together the invocation message with the eventual reply carrying the result of the operation or processing.

ROSE defines neither an abstract syntax nor an application context. It merely provides these datatypes with rules for their use. Once the holes are filled, an abstract syntax and an application context can be defined by the using application. Originally ROSE was ROS (Service, no Element), then it was considered to be an application service element (ASE), and we got ROSE. But with the new XALS (ASOs, CFs, etc) how do we view ROSE? Pass!

The RO-Invoke PDU carries an OPERATION identifier, an ASN.1 datatype for the arguments of that operation, and an invocation identifier. Invocation identifiers are used sequentially to identify the invocation of an operation within an association (ROSE messages are carried as P-DATA within an application association, although recently proposed changes discuss the use of A-UNITDATA). An operation can be invoked, then another and another and another (of the same or a different operation) before the first has completed (see figure 9.7: Pattern of ROSE invokes and results). Results do not necessarily come back in the order of the invokes. The invocation identifier is thus used to tie together the RO-Invoke PDU and the later RO-Result PDU, which carries only the invocation identifier and an ASN.1 datatype carrying the results of the operation.
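The tying together of invokes and results can be sketched as follows. This is a hypothetical Python sketch, not any real ROSE implementation; in reality the PDUs would be ASN.1-encoded and carried as P-DATA:

```python
# Minimal sketch of how an invoker uses invocation identifiers to tie an
# RO-Result back to its RO-Invoke when results may return out of order.
class RoseInvoker:
    def __init__(self):
        self._next_id = 0
        self._pending = {}   # invocation-id -> operation name

    def invoke(self, operation, argument):
        inv_id = self._next_id
        self._next_id += 1                 # identifiers assigned sequentially
        self._pending[inv_id] = operation
        return {"pdu": "RO-Invoke", "invocation-id": inv_id,
                "operation": operation, "argument": argument}

    def result(self, pdu):
        # An RO-Result carries only the invocation-id and the result value.
        operation = self._pending.pop(pdu["invocation-id"])
        return operation, pdu["result"]

inv = RoseInvoker()
a = inv.invoke("debit", {"account": 1, "amount": 100})
b = inv.invoke("credit", {"account": 2, "amount": 100})
# Results may arrive in either order; the identifier resolves the ambiguity.
op, res = inv.result({"pdu": "RO-Result",
                      "invocation-id": b["invocation-id"], "result": "done"})
print(op)  # credit
```

The dictionary of pending invocations is the essential state: without the identifier, a late-arriving result could not be matched to the operation that produced it.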

The ROSE model is of a complete set of operations that are related through having common error returns. Thus as well as defining a set of operations, the user defines a set of error codes (and an ASN.1 datatype for each one which carries parameter information associated with it). Each operation has associated with it one or more of these errors. When an operation is invoked, there is either a RO-Reject PDU returned (which has no holes - this is used when the invocation fails for reasons independent of the operation, such as overload, or unknown operation), or an RO-Result carrying a successful completion, or an RO-Error carrying an error code and parameters. Operations can also be defined that do not report results or do not report errors.

An additional feature is the provision for call-back. This is quite an important feature in supporting invocation across a network. If we consider procedure calls in a programming language, it is common to have parameters passed "by value" or "by reference". In the former case the value is copied into the procedure on entry (and, for result parameters, copied back on return). In the latter case, an address is passed which can be used from within the procedure to access the parameter. Clearly only the former mechanism is directly available when the calling and called parties are separated by a network. But the "call by reference" mechanism is an important optimization where the value is a large array and its copying would be expensive in CPU cycles or in memory. There is equally a problem if such a large array were to be transferred across the network when the called procedure is actually only going to look at and/or change a small part of it.

The call-back mechanism is designed to address this problem. When the called procedure needs to make an access that would previously have been handled by "call by reference", it invokes a linked operation on the system from which the call came in order to perform the necessary actions. It is clearly important that such invocations identify the original invocation to which they are linked, as well as the operation they are now invoking, and ROSE makes provision for this in the protocol, as well as for defining which operations are linked to which in the notation for defining operations (originally the ASN.1 OPERATION macro, now the ASN.1 OPERATION Information Object Class, described earlier). Figure 9.8: Linked operations shows a possible flow of control with a set of linked operations.
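A linked invocation can be pictured as an RO-Invoke that carries, in addition to its own invocation identifier, the identifier of the parent invocation it is linked to. The sketch below is hypothetical; the field names are illustrative only:

```python
# Hypothetical sketch of a linked (call-back) RO-Invoke: the child invocation
# names the parent invocation it is linked to, so the original invoker can
# route the call-back to the right outstanding operation.
def linked_invoke(parent_invocation_id, child_id, operation, argument):
    return {"pdu": "RO-Invoke", "invocation-id": child_id,
            "linked-id": parent_invocation_id,
            "operation": operation, "argument": argument}

# e.g. the called procedure reads a small slice of a large remote array
# instead of having the whole array copied across the network.
pdu = linked_invoke(parent_invocation_id=7, child_id=0,
                    operation="read-chunk",
                    argument={"offset": 0, "length": 512})
print(pdu["linked-id"])  # 7
```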

ROSE provides a useful and simple piece of infrastructure, supported by a well-defined notation, for the application designer with a set of requirements that can easily be mapped into the ROSE model of invoking operations on a remote system. In fact, this is a very convenient model for a great deal of protocol design, and the combination of ROSE for the invocation model and ASN.1 for defining the datatypes for the arguments, results and error parameters of the operations makes specifying an application protocol in this way a relatively easy task.

Turning now to RTSE .... this is also quite small and simple, and provides for the other main feature that an application designer might need to consider: how to transfer a series of documents (typically stacked up on disk) between a pair of systems with checkpointing and restart to cope with failures. This is the requirement for relaying X.400 mail, and this is the part of X.400 that has to be compatible with Teletex. Almost any reader who has managed to read this far, and has fully understood the Session Layer, could make a good attempt at writing the RTSE Standard, and would come up with the same result: there are no surprises. It uses activities in the expected way, a TWA session service, P-TOKEN-PLEASE and P-CONTROL-GIVE to determine which end is transmitting the documents, and P-SYNC-MINOR to support checkpointing. The added value over the session service is in precisely specifying the primitives to be used and the application of checkpointing. The process is to get control, start an activity, transmit the document, issue minor syncs, and then end, discard, or interrupt the activity. If a crash occurs, the association can be re-established and an activity restart enables continuation from the last checkpoint.

Originally written to sit directly on top of the Session Service, RTSE can use either the X.410-1984-mode or the normal-mode Presentation Service. In fact A-ASSOCIATE formally also has an X.410-1984 mode which makes A-ASSOCIATE completely transparent (no transfer of AE-titles, etc.). This allows the fiction to be presented that RTSE works with ACSE, but in reality it is direct Presentation Layer access.

A rather more interesting point arises from the data transfer. In X.410-1984 mode, the P-DATA request/indication primitive carries what is described in ISO 8822 as "a single presentation data value which is the value of an ASN.1 octet string". This value is mapped transparently (in X.410-1984 mode) onto the octet string of the S-DATA user data parameter, making the Presentation Layer completely null after connection establishment.

Continuing the modelling of RTSE operation, the document to be transferred is an abstract value (typically the value of a large and complex ASN.1 datatype). RTSE specifies the encoding of this value using a syntax matching service, which is a local implementation way of determining the negotiated transfer syntax and performing the encoding. This produces an OCTET STRING value (at least, that is what RTSE assumes!), which is then fragmented to allow the issue of minor syncs at suitable points, and each fragment is passed as the octet string value of a P-DATA, with interspersed minor syncs. This same approach is continued in normal mode: RTSE encodes (using "local magic" to determine the transfer syntax) into an OCTET STRING, which is fragmented to produce values that go into an ASN.1 OCTET STRING. This is the dreaded OCTET STRING hole, with some local magic to ensure that the end result can still make use of negotiation.
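The fragment-and-checkpoint pattern can be sketched as follows. This is a simplified, assumed model; real RTSE also brackets the transfer within a session activity:

```python
# Sketch of the RTSE transfer pattern: the whole document has already been
# encoded to an octet string; it is fragmented, and each fragment is sent as
# P-DATA with a minor synchronization point (checkpoint) after it.
def rtse_transfer(document_octets, fragment_size):
    primitives = []
    for offset in range(0, len(document_octets), fragment_size):
        fragment = document_octets[offset:offset + fragment_size]
        primitives.append(("P-DATA", fragment))
        # The checkpoint records how many octets have been secured so far;
        # after a crash, restart resumes from the last confirmed checkpoint.
        primitives.append(("P-SYNC-MINOR", offset + len(fragment)))
    return primitives

flow = rtse_transfer(b"x" * 10, fragment_size=4)
# Three fragments (4, 4, and 2 octets), each followed by a checkpoint.
print([name for name, _ in flow])
```

Note that the checkpoints here are positions in the *encoding*, not semantically meaningful points in the abstract value, which is exactly the limitation discussed below.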

To be fair to the RTSE workers, there is a very real problem here. How do you place checkpoints in the transfer of pure information (an abstract value)? Or to put it another way, how can an abstract value, even if defined as the value of an ASN.1 type, be fragmented into smaller values so that each fragment can be sent on a separate P-DATA with a P-SYNC-MINOR between them? There is no easy answer to this question. The FTAM Standard addressed the problem by requiring the definer of a document for checkpointed transfer to specify it as a series of (small) abstract values, between any pair of which a checkpoint (minor sync) can be placed during transfer. This then allows the full power of the Presentation Layer to operate, with checkpoints at what are called semantically meaningful points - in other words, at points that are not dependent on the encoding - thus allowing negotiation of a different transfer syntax (because the back-up line has different QOS characteristics) when recovering from a crash. But the resulting complexity in defining the form of a document is a high price. RTSE wanted to be able to handle any (large and structured) document that can be defined using ASN.1, and in the days when BER was the only encoding rule available, the approach taken was not wholly unreasonable; but the penalty is not being able to use a different transfer syntax on the back-up line, and some slightly dubious model additions (the syntax matching service).

Work was proposed in early 1992 to specify, once and for all, an algorithm which would map any arbitrary ASN.1 type into a list of semantically meaningful presentation data values (component parts of the ASN.1 structure). The proposal was actually made in the context of FTAM support, but it would be equally applicable to a revised RTSE; at the time of writing this text it was not clear whether that work would proceed.

Thus we see that RTSE provides another important tool for application layer designers, but its use of TWA session, of activities, and the lack of real support for Presentation Layer concepts make it unattractive to many Experts. Broadly, use of RTSE in new work is supported by CCITT/ITU-T workers, and opposed by ISO workers, reflecting the broad nature of the views of many of those in the two groups on session activities, TWA, and the Presentation Layer.

9.4 CCR and TP

A book could be written (and no doubt soon will be!) solely about CCR (Commitment, Concurrency, and Recovery) and TP (Transaction Processing). (TP is sometimes called DTP: Distributed Transaction Processing.)

It is not therefore possible or appropriate in this text to undertake a detailed technical coverage of these standards. But in order to understand the way they fit into the architecture, the problems the architecture has to address in order to accommodate them, and the tools they provide for you as a potential application designer, it is necessary to give a brief introduction to what these standards are about.

We have got the Session Layer tools. We have got the Presentation Layer and ASN.1. We have got ROSE and RTSE. What other problem is there that could sensibly be solved by another ASE Standard?

What we are addressing in this section is an application that needs to operate with more than two systems. Consider figure 9.9: Requirement for consistency. You as the application designer have to specify a financial services application protocol that will enable system A that is accessing a bank B to debit an account on B with a million dollars, and at the same time to credit an account on bank C with the million dollars. You design a simple protocol where you simultaneously open up a pair of associations, and send on each one an RO-Invoke (or design an exchange of ASN.1-defined messages, whichever you prefer), on the association to B requesting a debit and on that to C requesting a credit. B replies saying: "OK, done", and C replies saying: "Sorry, the account does not exist with this bank". And a bulldozer goes through your communications lines to the outside world. The banking system is now short of a million dollars, and nobody is getting any interest on it! This is almost but not quite as bad as the situation where C did the credit but B refused the debit, and before your line was repaired somebody drew out the million dollars from C!

This is just one of many possible applications where there is an implicit or explicit consistency condition to be met which goes across the systems involved. In this case, the condition is that the amount of money in the banking system as a whole should not change as a result of this transfer operation. Database designers on stand-alone systems are used to this problem. A database is nothing more than a large structured file except for one thing: the software supporting it maintains integrity constraints (consistency conditions) across the entire database. In the better systems, these are explicitly specified to the database management system as part of the definition of the database, and application programs (perhaps written in COBOL) cannot cause the conditions to be violated, at least as seen by an outside observer. In practice, the COBOL program may need to change many parts of the database to perform its functions and leave the database in a consistent state, and it can only do one operation at a time, just as our application A can only really do one operation at a time. If only some of these operations are performed, consistency is violated. Database systems therefore introduced the concept of atomic actions: a set of operations by the COBOL program, with a known beginning and a known end, which (to an outside observer) are either all performed (leaving the database in a consistent state) or not performed at all.

In order to achieve this, we have the following requirements:

  • The atomic action has to be delimited (the reader will recognise that in communication this looks like a candidate for a major sync or an activity exchange).
  • The database management software has to be concerned with commitment (will it commit itself to accepting the changes the COBOL program has made, or will it rollback to the start of the atomic action?)
  • There also has to be concern for concurrency (some form of lock had better be applied to prevent other users from accessing those parts of the database affected by changes not yet committed, or from changing data that was used in developing the changes being made).
  • Finally, we have to consider recovery (if the complete computer system crashes, then on restarting it, the atomic action had better be rolled back).

These key concerns - Commitment, Concurrency, and Recovery - give us the title of the Standard, usually known simply as CCR.

How would the basic CCR exchange between A and B and A and C work? CCR is concerned with a tree of activity, originated by A. It may have one, two, or more subordinates (two - B and C - in our example), and these may in turn have further subordinates.

Using CCR, A issues C-BEGIN (carried on P-SYNC-MAJOR) to B and to C to indicate that what is to follow is not actually to be done (committed, made visible) yet. It signals the start of an atomic action. A then conducts an application-specific exchange with B and C. The details of this exchange are quite outside the CCR Standard: it might invoke ROSE operations, use P-DATA, or use RTSE - - - Whoops! Not RTSE: RTSE uses activities, and we can't put major syncs round activities, only inside them, remember?

Once A has told B and C what it would like done, it asks B and C whether they are prepared to commit to the changes by issuing a C-PREPARE (mapped to P-TYPED-DATA). At this stage, A has not lost control. It has not itself committed to the actions; it is merely seeking to determine whether the actions can be committed at all sites. If all subordinates (having used CCR in a similar way with any subordinates they might have) reply saying: "Yes, I am prepared to COMMIT the changes" (the C-READY primitive, carried on P-TYPED-DATA again - the data token might be at the wrong end in TWA), then A will issue the CCR C-COMMIT to all subordinates. This maps to P-SYNC-MAJOR, is a confirmed service, and both orders commitment and obtains confirmation that commitment has indeed occurred. Alternatively, if one or more subordinates said: "No, we want to ROLLBACK this action" (the C-ROLLBACK primitive, mapped onto P-RESYNCHRONIZE), then either the atomic action being attempted must be modified by using another subordinate or by some further exchange (if it is not too late) with other subordinates, or else a C-ROLLBACK must be issued to all subordinates.
This latter is an example of a MACF (Multiple Association Control Function) rule. TP, discussed later, imposes the MACF rule that a rollback on one association has to result in a rollback on all others up and down the tree. CCR is not quite so strict, and imposes the minimum necessary MACF rules to ensure that the atomicity is not lost.

This is often called two-phase commitment, and involves a minimum of two confirmed exchanges: in Phase I, there is a "start atomic action" message followed by an exchange of messages defining the action precisely, concluding with a reply saying "OK, I am prepared to commit" or "Won't". In Phase II (assuming an "OK" was received in Phase I) there is a "Do it now" with a "Done" response.
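The two phases can be sketched as follows, under stated assumptions: the subordinate interface here is hypothetical, and real CCR maps each step onto the session-service primitives named earlier (C-BEGIN on P-SYNC-MAJOR, C-PREPARE and C-READY on P-TYPED-DATA, C-COMMIT on P-SYNC-MAJOR, C-ROLLBACK on P-RESYNCHRONIZE):

```python
# Hypothetical subordinate in an atomic action tree (e.g. a bank).
class Subordinate:
    def __init__(self, will_accept):
        self.will_accept = will_accept   # can this site perform the action?
        self.state = "idle"
    def begin(self):      self.state = "in-action"    # C-BEGIN received
    def do(self, action): pass                        # application-specific exchange
    def prepare(self):    return self.will_accept     # C-PREPARE -> C-READY?
    def commit(self):     self.state = "committed"    # C-COMMIT
    def rollback(self):   self.state = "rolled back"  # C-ROLLBACK

def two_phase_commit(subordinates, action):
    # Phase I: begin the atomic action, describe it, and collect the votes.
    votes = []
    for sub in subordinates:
        sub.begin()
        sub.do(action)
        votes.append(sub.prepare())
    # Phase II: commit only if every subordinate offered commitment;
    # otherwise order rollback everywhere, so atomicity is preserved.
    if all(votes):
        for sub in subordinates:
            sub.commit()
        return "committed"
    for sub in subordinates:
        sub.rollback()
    return "rolled back"

bank_b = Subordinate(will_accept=True)
bank_c = Subordinate(will_accept=False)   # "the account does not exist"
print(two_phase_commit([bank_b, bank_c], "transfer $1M"))  # rolled back
```

Because B merely *offered* commitment in Phase I, its debit can still be undone when C's refusal forces a rollback - which is exactly what the naive protocol in the bank example could not do.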

Notice that whilst we cannot prevent bulldozers from going through lines when commitment (or rollback) commands have got through to some but not all systems, this will not leave the universe in an inconsistent state. It will merely (!) mean that concurrency controls (locks) will be present on some systems whilst released on others until the recovery procedures have been applied and the atomic action can be correctly completed. Unfortunately, for some (but not all) applications, it can be more damaging to keep concurrency controls in place for long periods of time (for example, while waiting 48 hours for a spare part to be flown from the USA to repair a broken system A). It can also happen that systems not only fall over, but their disks are wiped clean, in which case the CCR protocol will not be honoured and recovery will never be instituted. (The whole CCR concept is based on the updating of what is called atomic action data - data that survives a crash: in other words, data stored on disk - at critical points in the protocol exchange. Loss of such data is a recognised possibility, but represents a complete breakdown of CCR.) In these circumstances it is necessary to allow systems that have locks in place (have offered commitment but not yet received any order to commit or rollback) either to commit or to rollback and release the locks. This is called a heuristic decision because it requires some (possibly human) intelligence to determine whether, for this application, rollback or commitment is more likely to be ordered, which wrong guess would be the more damaging, or whether locks should be kept in place anyway!

The history of CCR was very chequered. It was originally developed to support JTM (Job Transfer and Manipulation), the work starting in about 1980 and maturing in about 1984. The initial work was done in the management group of OSI, because CCR was thought to be about managing multiple associations. As soon as it was registered as a Draft Proposal (the very first Draft Proposal from the management group!), it was taken away from them and given to the group looking after ACSE, as CCR was seen to be another CASE.

Shortly thereafter, IBM announced the LU6.2 sync point verb as part of SNA. This rather curious-sounding term is actually more or less synonymous with CCR in terms of the functionality of the exchange and what it is trying to do. This resulted in a comparison between LU6.2 sync point and CCR, a greater concern with heuristic commitment, and a slight destabilisation of what was very nearly a full Standard. This, however, was nothing compared with later developments.

A New Work Item Proposal was made and accepted that ISO should develop a Standard for Distributed Transaction Processing (DTP or TP). At the time the precise relationship to CCR was unclear, but where CCR had been developed by a group whose attendance rarely exceeded a dozen people, the TP group had an attendance of close to 100, most of them with a good knowledge of LU6.2!

The TP Standard provides a gloss or interface to CCR. It "steals" the CCR primitives, and provides some added value with user data on the CCR primitives and by direct exchanges. For example, CCR is concerned solely with a single atomic action; TP allows notification that, following commitment to the current action, another will immediately begin (a chained transaction), or not (unchained). TP provides a single set of service primitives that allow an application to control an atomic action through a single service access point, mapping each one onto as many C-service primitives as are needed (one for each association). TP steals all primitives, and provides a complete service, including a TP-DATA primitive. TP also allows exchanges within a tree structure without using CCR (no guarantees of atomicity), and supports TWA and TWS exchanges explicitly. The TP work resulted in a serious destabilisation of CCR, with a major change to the recovery procedures (introduction of something called the presumed abort paradigm), and a request for the introduction of a new synchronization service from Session (acceded to) to replace the use of major sync to start an atomic action - the TP group was powerful!

The end-result is a TP Standard that has enabled a prominent computer vendor to offer a product that allows a COBOL program to use an operating system interface providing the functionality of the TP service primitives to communicate with other systems. One branch of the atomic action tree can be using TP over CCR over the OSI stack, and another the vendor's own protocol, and the COBOL program does not know the difference. Neat?

It is worth talking a little about TP-DATA. The TP group recognised the Presentation Layer concepts, and the value of the separation of abstract and transfer syntax, but were concerned that small groups (or even individuals) wanting to write COBOL programs to use TP would not have access to object identifiers, nor perhaps to tools supporting the use of ASN.1. They therefore included in the TP work a Part 6: Unstructured Data Transfer. This was designed for those who were content for their COBOL programs to exchange (using TP-DATA) information that was simply the value of an arbitrary-length octet string (with any syntax conversion performed by the application). It defined this as the (only) parameter of the TP-DATA, and thereby formally closed all the "holes", making this no longer an ASE/ASO but a fully-defined SASE-like Standard. This Part allocates an abstract syntax object identifier for the values of this octet string, a transfer syntax object identifier for a transfer syntax in which the transmitted bit-pattern is identical to the octet string value, and an application context object identifier for the resulting complete protocol.

Putting aside the Unstructured Data Transfer Part, there is a lot of interest in combining the use of CCR/TP with other ASE specifications, particularly ROSE/RPC, FTAM, and so on, giving practical reality to many of the XALS considerations.

9.5 Remote Procedure Call (RPC)

RPC has already been briefly mentioned. In some ways it bears the same relationship to ROSE that TP bears to CCR. It adds little to the underlying model and exchanges, but puts a big gloss on the interface and access to the services.

At the beginning of 1992, RPC was still at the Committee Draft level (the first stage of balloting). Its major technical content was the RPC Interface Definition Notation (IDN). This provided a notation broadly equivalent to (a subset of) ASN.1, but which was somewhat closer to the sort of notations used in programming languages. The main hope and intent of the effort was to encourage the provision, within programming languages, of support for the invocation of RPC calls to support the calling of procedures in one machine from programs in another, possibly written in different programming languages.

The Standard specifies that when an interface is defined, an object identifier is specified for it. The interface definition maps into ASN.1 data structures and ROSE operations in a defined way, and the object identifier maps into the necessary abstract syntax object identifier. An application context object identifier is defined for Basic RPC within the Standard. RPC was the first ASN.1 user group to take a serious interest in the ASN.1 Packed Encoding Rules, seeing them perhaps as the encoding rules to be made mandatory for implementations of RPC.

9.6 Management standards framework

The work on management in OSI was again one of the areas begun at about the same time as the Reference Model, and it has now grown into an extensive and still growing set of standards.

A detailed treatment of OSI management standards is outside the scope of this text, but the architecture and overall approach is covered here.

The earliest versions of the Reference Model spoke about Application Management, Systems Management, and Layer Management, but there was a very limited amount of text discussing their differences, and for many years the management working group tried to put some flesh on the bones. What exactly was OSI management trying to manage? What was the difference between these forms of management? There was, in the beginning, the view among some parties that anything that was not a simple interchange between two systems was "management". There was also the view that OSI management was about developing management protocols for anything that might need managing over a network. This was a very broad brief!

We have already noted that the CCR work was originally considered "management", and was progressed to a Draft Proposal (the original name for Committee Drafts) by the management working group. Even today, the management working group is actually two almost independent groups with little overlap of membership: the first is concerned with real management standards, and the second with the X.500 Directory standards. We are discussing only the former work here.

It was as late as 1984 that significant progress started to be made in OSI management, and this stemmed directly from agreement in two important areas: the Management Framework (eventually published as Part 4 of the Basic Reference Model), and Common Management Information Service and Protocol (CMIS/CMIP).

The Management Framework delineated the scope of OSI management with the very important piece of text: "OSI management is concerned with the standards needed to manage the OSI resource." In other words, the protocols to be developed were to be concerned only with the management of those parts of a computer system concerned with the implementation of OSI functionality and standards, not with such things as registering users on systems, access to and transfer of student records, distribution of general operating system or applications software, or anything else that might be described as a "management" activity. Of course, it has turned out that the protocols developed for managing the OSI resource (for example, for controlling the operation of a network layer router) are actually pretty good at controlling the operation of things like a modern computer-controlled radar dish or a telescope, but requirements specific to such applications were excluded from consideration.

The Management Framework also introduced the very important concept of the Management Information Base (the MIB), and put to rest once and for all the distinction between layer management and systems management. (All this in a six page document!)

So .... what is the MIB? The idea is that an implementation of a layer protocol will have a variety of pieces of information associated with it, both dynamic and static, both controlling its actions and reflecting its state. For example, the handler of a connection-oriented protocol will probably have some limit on the number of simultaneous connections it can handle. There may be an absolute limit based on memory size, but there may also be a more flexible limit which could sensibly be large at night and low in the day-time. It will probably have a number of states, for example "running normally", "closing down" (not accepting new connections), and "closed down". There may be variables that affect the way it behaves: for example, accept connections from any source, or accept connections only from a priority list. There may be events occurring within it, for example failure of an outgoing connection attempt (perhaps subdivided by defined failure reasons). These events could perhaps be logged, or at least counted, if the implementation supports this.

The concept of the MIB, then, is of a model of the total information that reflects or results from the operation of the OSI implementation on a system, or controls that implementation. It is specifically not the prescription of a real database on disk. The way the MIB information is held, modified, and obtained (when it is being read) is a local implementation matter, and may involve direct interaction with layer implementation code and in-core state, or indirect communication via a real database. Finally, the MIB will potentially contain information that would relate to almost any conceivable implementation of an OSI protocol, together with information that is very specific to a particular implementation. The precise definition of MIB information (either Standardised or vendor-specific) was a growing trade in the late 1980s and early 1990s.
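The connection-handler example above might be modelled as follows. This is an illustrative sketch only, not a standardized MIB definition; all names are invented:

```python
# Illustrative MIB-style managed information for a connection-oriented
# protocol handler: some attributes control behaviour, some reflect state,
# and events occurring in the implementation are counted.
mib_entry = {
    "maxConnections": 64,           # flexible limit: raise at night, lower by day
    "operationalState": "running",  # "running" / "closing down" / "closed down"
    "acceptPolicy": "any-source",   # or "priority-list-only"
    "counters": {"outgoingConnectFailures": 0},
}

def record_connect_failure(entry):
    # An event in the protocol implementation is reflected in the MIB;
    # how the information is actually held is a local implementation matter.
    entry["counters"]["outgoingConnectFailures"] += 1

record_connect_failure(mib_entry)
print(mib_entry["counters"]["outgoingConnectFailures"])  # 1
```

A real implementation might answer a read of these attributes by querying in-core state directly rather than consulting any stored table: the MIB is a model of the information, not a database schema.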

Once the MIB concept is in place, we can give what at first sight seems an almost arbitrary answer to the questions "What is layer management?" and "What is systems management?" (See figure 9.10: The Management Information Base (MIB).) We define:

  • Access to MIB information by a management protocol operating in a particular layer is restricted to those parts of the MIB that relate to the implementation of protocols defined for that layer; this is called layer management, and the necessary protocols have to be defined by the Working Group responsible for that layer.
  • A protocol operating through the full OSI stack (involving a Systems Management Application Entity), to be defined by the management working group, can access any part of the MIB; this is called systems management.
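The distinction amounts to a difference in scope of access to the one MIB. A minimal sketch, with a hypothetical MIB laid out as a tree keyed by layer (all names and values invented):

```python
# Hypothetical sketch: one MIB, two styles of access. A layer manager
# is scoped to its own layer's part of the MIB at creation; systems
# management (through the full seven-layer stack) may reach any part.
mib = {
    "network":   {"active-connections": 12, "max-connections": 64},
    "transport": {"retransmit-count": 3},
}

def make_layer_manager(layer):
    scope = mib[layer]          # restricted to this layer's information
    def read(attribute):
        return scope[attribute]
    return read

def systems_management_read(layer, attribute):
    return mib[layer][attribute]  # any part of the MIB

network_lm = make_layer_manager("network")
```

Here `network_lm` simply cannot name transport-layer information, while `systems_management_read` can address the whole tree.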

This was probably the most controversial part of the Management Framework, but it stuck. Interestingly, in the early 1980s there was a lot of concern that network layer "boxes" (routers, X.25 switches, and the like) should not have to implement protocol layers above those needed for their main function (the Network Layer down) in order to be managed. In the 1990s, with the possible exception of link layer protocols to manage bridges on local area networks, there is almost universal acceptance in OSI that management control of almost everything will be by systems management (layer 7 protocols), and there is very little interest within the layer groups in defining layer management protocols. Efforts were instead concentrated on defining those parts of the MIB that were amenable to international standardization. After all, if the separation out of Transport and Session and Presentation functions was relevant and important for general application standardization, why should it be less important for exchanges related to management?

Where, then, does that leave application management? There was only one real piece of work started that could be described as "application management", and that was abandoned (formally, merely suspended) in the mid-1980s. This was called Control of Application Process Groups. The main interest in progressing it came from Japan, and there was generally a fairly widespread lack of interest in and/or understanding of the work. The idea is roughly as follows: Suppose you are the implementor (a mere user, not a computer vendor) of a new (OSI) (application) protocol that involves exchanges between four (say) systems. How do you test it? Or maybe later run it? The answer in the early 1980s was that you would line up four dumb terminals on your desk, log in to your four systems, invoke the necessary code in the foreground, and monitor them to see what status messages they generated, and whether they "fell over". Actually, apart from using one intelligent workstation with four windows, you couldn't do much better in the early 1990s! Now suppose the resources this application process group needed prohibited running in the foreground, and required scheduling as background tasks by the operating system. What then? This was the problem being addressed.

The approach was to postulate that each Open System would contain an Application Management Application Entity (AMAE), and that the implementor would first define (through his local system and protocol exchanges between the AMAEs) an application process group: the set of his programs that needed to be run simultaneously. He would then request from his local system that his application process group be activated. After appropriate scheduling negotiation between the AMAEs, the relevant programs would start to run, and would be monitored by the local AMAE, with status messages and "it's fallen over" messages returned to the AMAE where the activation was initiated, and hence to the implementor. That is about as far as it got. The work was abandoned for lack of international interest, and there has been no other OSI management work in the application management area.
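The proposed flow can be illustrated with a toy sketch. Everything below is invented for illustration (the AMAE work never reached the point of defining concrete interfaces): each system's AMAE activates its local member of the group and reports status, including failures, back to the initiating AMAE.

```python
# Hypothetical sketch of the abandoned "Control of Application Process
# Groups" idea: a local AMAE runs and monitors its member of the group,
# reporting "running" or "fallen over" back to the initiating AMAE.
class AMAE:
    def __init__(self, system_name):
        self.system_name = system_name
        self.status_log = []          # status messages received here

    def activate(self, program, report_to):
        # After scheduling negotiation, run the program and monitor it.
        try:
            program()
            report_to.status_log.append((self.system_name, "running"))
        except Exception:
            report_to.status_log.append((self.system_name, "fallen over"))

# The implementor's group spans four systems; activation is initiated
# on system-1, and the other AMAEs report their status back to it.
initiator = AMAE("system-1")
for name in ("system-2", "system-3", "system-4"):
    AMAE(name).activate(lambda: None, report_to=initiator)
```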

Let us now turn to CMIS/CMIP. Given the concept of the MIB, and of systems management, we clearly need a protocol to enable remote systems to read and write elements of the MIB. This work, in the late 1980s, adopted the object-oriented paradigm. We talk about classes of managed objects (for example, the class of all connection-oriented layer protocol handlers), and sub-classes (for example, the network layer connection-oriented protocol handlers). The object-oriented concept of class inheritance can now be applied: there are some properties that the subclass inherits from the definition of the superclass, and there are some that are specific to the subclass. What sort of things do we want to define about a class of managed object? Well, there will be specific instances of objects of that class. And perhaps we can talk about creating and destroying such instances as a way of modelling the switching on and off of a communications capability in some system. A managed object will have attributes that can be read (to determine its current state) or set (to affect its operation). Reading or setting these attributes will require the transfer of a value of some ASN.1 type specific to that attribute of that object. From time to time certain events might occur related to that managed object, and such events might be logged or counted in various ways. Such logs and counters might need to be created and destroyed. Finally, the occurrence of certain events, or a counter passing a certain threshold, might require that an alarm be generated and sent to some nominated system that was managing the system where the alarm occurred.
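The inheritance idea maps directly onto class inheritance in any object-oriented language. A minimal sketch, with invented attribute names (real managed object class definitions specify their attributes formally, with ASN.1 types):

```python
# Hypothetical sketch of managed object class inheritance: the
# network-layer connection-oriented handler (subclass) inherits the
# attributes of the generic connection-oriented handler (superclass)
# and adds attributes of its own.
class ConnectionOrientedHandler:                   # superclass
    def __init__(self):
        self.attributes = {"operational-state": "enabled",
                           "max-connections": 64}

    def get(self, name):                           # read an attribute
        return self.attributes[name]

    def set(self, name, value):                    # set an attribute
        self.attributes[name] = value

class NetworkLayerCOHandler(ConnectionOrientedHandler):   # subclass
    def __init__(self):
        super().__init__()
        # subclass-specific attribute, inherited ones remain available
        self.attributes["x25-calls-refused"] = 0
```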

With this model, we clearly need a protocol to create and destroy instances of managed object classes, to read and set attributes, to create, read, reset, and destroy logs and counters, and to handle the reporting of alarms. Moreover, we can start to define attribute classes (for example, those concerned with starting up, running, and shutting down) that are likely to be relevant and useful for a number of classes of managed objects. Thus began a whole range of standardization activity, initially with the relatively simple CMIS/CMIP.
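The shape of the operation set can be sketched as follows. The method names deliberately echo the CMIS services (M-CREATE, M-DELETE, M-GET, M-SET, M-EVENT-REPORT), but this is only an illustrative model, with no real encoding or remote operation behind it:

```python
# Hypothetical sketch of the operations the protocol needs: create and
# destroy managed-object instances, read and set attributes, and report
# events (which here simply accumulate as alarms).
class ManagedSystem:
    def __init__(self):
        self.instances = {}   # instance name -> attribute dictionary
        self.alarms = []      # alarms reported to the managing system

    def m_create(self, name, attributes):
        self.instances[name] = dict(attributes)

    def m_delete(self, name):
        del self.instances[name]

    def m_get(self, name, attribute):
        return self.instances[name][attribute]

    def m_set(self, name, attribute, value):
        self.instances[name][attribute] = value

    def m_event_report(self, name, event):
        # e.g. a counter passing a threshold generates an alarm
        self.alarms.append((name, event))
```

In the real protocol each of these is a remote operation carried (via ROSE) between the managing and managed systems.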

How are we going to identify managed object classes, instances of them, their attributes, etc? I hope the reader will by now have learnt to answer "With ASN.1 OBJECT IDENTIFIERS, of course!" When the first draft of CMIP was produced in the mid-1980s, it was about half-a-dozen pages long, most of it an ASN.1 definition, and just about every other line was an OBJECT IDENTIFIER datatype (identifying an object or an attribute) and an ANY (to hold the datatype for writing to or reading an attribute, or associated with an event). It was full of "holes"! And at this time not a single actual object or attribute had been defined! Note also that CMIP's use of communications is through a ROSE OPERATIONs macro: CMIS (sometimes now called CMISE - SE for Service Element) is an ASE that uses the ROSE ASE.
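The "holes" can be made concrete with a small sketch. The protocol itself carries only an OBJECT IDENTIFIER and an uninterpreted value (the ANY); it is the separately supplied object and attribute definitions that say what type fills each hole. Here a Python registry stands in for those later definitions, and all OID values are invented:

```python
# Hypothetical sketch: CMIP-style "holes". The attribute registry
# plays the role of the (initially non-existent!) object and attribute
# definitions, mapping an OBJECT IDENTIFIER (modelled as a tuple of
# integers) to the type expected in the ANY.
attribute_registry = {
    (1, 3, 6, 1, 4, 1, 42, 1, 5): int,    # e.g. a connection counter
    (1, 3, 6, 1, 4, 1, 42, 1, 6): str,    # e.g. an operational state
}

def check_set_request(attribute_oid, value):
    # The protocol says only "an OID and an ANY"; whether the value is
    # acceptable depends entirely on a definition supplied from outside.
    expected = attribute_registry.get(attribute_oid)
    return expected is not None and isinstance(value, expected)
```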

An awful lot has happened since the early work. With this model, there is clearly a need for a notation to aid the definition of managed object classes and their attributes (what is to fill the "holes" in CMIS/CMIP). Other protocols would have used the ASN.1 macro facility to provide this notation, but by then macros were in disrepute, and English language was used to specify GDMO (Guidelines for the Definition of Managed Objects), the notation to be used. When the Information Object Class concept was introduced into ASN.1, work by the Japanese demonstrated that it was "man enough" to take over from GDMO, but of course by then GDMO was firmly established in this role, and the take-over did not occur.

There is now a very large amount of text in a series of OSI standards concerned with extensions and specializations and additional functions broadly building on this model, and on the structure and definition of MIB information. There is also a growing body of MIB definitions, particularly related to the management of Network Layer functionality (Network Management), but the reader should note the remark made earlier: CMIS/CMIP can be used to remotely control any piece of computer-controlled equipment, provided only that the control procedure can be adequately described as the reading and setting of attributes, the recording of events, the generation of alarms, and so on.

In the early 1990s, after previously ignoring the management work, the ITU-T (as it had by then become) grew seriously interested, and the work has now been included in ITU-T Recommendations as the X.700 series. (The X.600 series is used for the many Recommendations that specify how to provide the OSI Network Service over the many different real-world communications links.)

A final comment is needed in this section on CMOT and SNMP. The TCP/IP suite had little by way of management functionality before the ISO work began, and it imported OSI concepts and definitions at an early stage. CMOT is CMIP Over TCP/IP: the specification of how to carry the CMIP protocol over TCP/IP for the purposes of managing TCP/IP network boxes. There was little implementation interest in it up to the early 1990s. SNMP is the Simple Network Management Protocol, a simple ASN.1-defined protocol using the OSI Managed Object and MIB concepts. There is also a large body of MIB definitions within the TCP/IP suite. In the early 1990s, there was probably more use of SNMP than of CMIS/CMIP to control network boxes, but the ongoing standardization effort related to OSI, and particularly the inclusion of work related to security and to management domains and the introduction of the X.700 series, is expected to change this situation.
