Understanding OSI

2 The architecture

2.1 Introduction

Perhaps the best-known aspect of OSI is the 7-layer model (formally The Basic Reference Model for Open Systems Interconnection). Most readers will have at least heard of it, and can probably recite the names of the seven layers. What is the purpose of the layer concept? Why do we call it an architecture? Why seven layers? In this introductory discussion we will take a very broad look, and develop more detail as we proceed.

First, why seven? Well, why not? Seven has long had magical significance. The tradition of the seven wonders of the world, the seven labours of Hercules (actually, it wasn't seven, but it should have been!), the seven days of creation, and the seven deadly sins make seven layers for OSI almost mandatory! We will find a more convincing reason for seven later in this Chapter, but who knows, the power of the human psyche is such that the magic of seven could well have been the major determinant in the early work.

What is a protocol architecture? Most recent specifications of computer communications protocols (messages, their meanings, and rules for their interchange) provide the total specification for an exchange as a series of related documents which provide support for a variety of applications when used in varying combinations. The architectural description of the protocol suite identifies the structure of the specifications that are used to define the communications, the broad functions performed by the protocol in each specification, and the way the specifications are combined to form complete useful applications.

2.2 Basic handling of "holes"

Separate specifications in OSI (and in many other architectures) are combined by having each layer define messages that contain "holes": placeholders to be filled with material supplied by the specification of the layer above.

This approach is called layered protocol definition or just layering (see figure 2.1: Layered build-up of a message). Each of the layered specifications adds some value to the functionality of the layer below or, in OSI terms, uses the services of the layer below to build an enriched service. Layering, then, is little more than a documentation tool to assist in the development of a complete specification. It performs two useful functions: first, if carefully done, it permits independent and simultaneous definition of each layer, allowing a number of committees to work simultaneously on different parts of the total task; secondly, it enables lower layer specifications to be reused as parts of wholly different applications without having to repeat text or reinvent functions and mechanisms.
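The build-up of a message by layering can be sketched in a few lines of code. This is an illustration of the idea only, not of any actual OSI protocol: each layer treats the message from the layer above as opaque payload filling its own "hole", and prepends its own header.

```python
# A minimal sketch (not from the OSI standards) of layered message
# build-up: each layer wraps the message from the layer above, adding
# its own header -- the payload fills the lower layer's "hole".

def wrap(layer_name: str, payload: bytes) -> bytes:
    """Add this layer's header in front of the payload it carries."""
    header = f"[{layer_name}]".encode()
    return header + payload

def build_message(application_data: bytes) -> bytes:
    """Build up a message through three illustrative layers."""
    msg = application_data
    for layer in ("transport", "network", "link"):   # top to bottom
        msg = wrap(layer, msg)
    return msg

print(build_message(b"hello"))
# b'[link][network][transport]hello'
```

Note that each layer's specification needs to know nothing about the contents of the payload it carries; that independence is what lets different committees work on different layers simultaneously.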

2.3 Tying things together

An important part of understanding OSI is to understand the differing concepts of service standards and protocol standards. OSI talks about protocol specifications and service definitions. This is a deliberate use of different terms. "Definition" implies that we are defining terms or a notation, and this is exactly what a service definition is. "Specification" implies that we are giving rules that an implementation must comply with, and this is exactly what a protocol specification is.

The problem with the above discussion on layering is that it is too simplistic. At each layer, there are typically a number of different types of message which can fill a lower layer hole, each type of message providing a "hole" to the layer above, and each one having a different semantics and different information fields. The Service Definition Standards provide a notational means of identifying which particular hole (in which particular lower layer message) a given message fragment is carried in.

The middle layers of OSI have typically two standards associated with each layer: the first is the service definition for the layer, and the second is the protocol specification. Let us take as an illustration the Transport Layer, which lies beneath the Session Layer and above the Network Layer. (See figure 2.2: Service and protocol standards) (For the present, it does not matter what the layers actually do, so if you have little or no current knowledge of OSI layers, just treat these terms as the labels for three adjacent layers that could just as well have been called Jack, Jill and Fred.)

Then we have a transport service definition standard that specifies a notation which is used in both the transport protocol specification and in the session protocol specification and serves as the glue that enables the two specifications to be combined into a single specification. Once combined, the transport service definition can be discarded. In a similar way, the network service definition notation is used in the transport layer to reference procedures and messages defined in the network layer, and the session service definition notation is used in the session protocol to provide hooks for higher layer specifications.

The form of service definition notations is formalised in OSI (although there are in practice some variations on the general theme). Whenever a message is defined by a layer with a hole in it, then the service definition for that layer contains a notation for specifying the issuing of a service request primitive (identifying the lower layer message to be sent), with parameters of the request primitive corresponding to holes in the message. When a higher layer specification wants to cause a message to be sent, it talks about "issuing the service request primitive" to the layer below with specific values for the parameters (information to fill in the holes). In the same way, a layer will describe the effect of receiving a message as the issue of an indication primitive to the layer above (identifying the type of the received message) with parameters corresponding to the information actually received in the holes. A typical layer protocol has a number of different types of message which need to be visible to the layer above, each with a variety of holes. Each of these has an associated service primitive (request primitive and indication primitive) and parameters defined. It will often also have messages (typically acknowledgements or flow control messages) that aid the operation of its protocol, but have no holes, and are not visible to the layer above. These have no corresponding service primitives.

An example always helps. The transport layer has a message used to establish a connection. The corresponding service primitives are the T-CONNECT request and the T-CONNECT indication. The parameters of these primitives relate to addressing information, properties of the connection, and a limited amount of user data (data provided by the layer above) that can be carried in the connection establishment messages. It has another message used to pass data, and the corresponding service primitives are the T-DATA request and the T-DATA indication, with a single parameter which is user data. When an implementor reads the session protocol specification, it instructs the implementor to "Issue a T-DATA request with ... as the parameter." (See figure 2.3: Cross-referencing protocol procedures). The implementor then turns to the transport protocol specification that says "When a T-DATA request is issued, then you do ... ." Similarly, the transport protocol specification says that on receipt of certain messages (actually, parameters of network service indication primitives), "A T-DATA indication is to be issued." The implementor then goes to the session protocol specification which says "When a T-DATA indication is issued, then you do ... ." Note that the implementor never needs to refer to the transport service definition. The main purpose of that standard is to ensure that the session and the transport protocol specifications (which are typically produced by different committees) are actually consistent. The main (in principle the only) interaction between the two committees is to jointly agree the service definition which links them. Traditionally in OSI, responsibility for producing the service definition standard has been given to the committee defining the protocol below that service definition, in consultation with the committee responsible for the layer above.
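The request/indication pairing described above can be sketched in code. This is a toy in-memory model, not the OSI transport protocol: the class and method names below (other than the T-CONNECT and T-DATA primitive names themselves) are invented, and a real transport provider would of course run a protocol over a network rather than call its peer directly.

```python
# A hypothetical sketch of issuing service primitives: the session
# implementation "issues a T-DATA request", and the transport provider
# delivers the corresponding T-DATA indication at the far end.

class TransportService:
    """Toy in-memory transport provider; a real one runs a protocol."""
    def __init__(self):
        self.peer = None            # the TransportService at the other end

    def t_connect_request(self, called_peer, user_data=b""):
        # Establish the connection; a T-CONNECT indication is issued
        # to the layer above at the called end.
        self.peer = called_peer
        called_peer.peer = self
        return called_peer.t_connect_indication(user_data)

    def t_connect_indication(self, user_data):
        return ("T-CONNECT.indication", user_data)

    def t_data_request(self, user_data):
        # T-DATA has a single parameter: the user data itself.
        return self.peer.t_data_indication(user_data)

    def t_data_indication(self, user_data):
        return ("T-DATA.indication", user_data)

a, b = TransportService(), TransportService()
print(a.t_connect_request(b, b"hi"))   # ('T-CONNECT.indication', b'hi')
print(a.t_data_request(b"payload"))    # ('T-DATA.indication', b'payload')
```

The point of the sketch is the cross-referencing: the caller's "issue a T-DATA request" and the receiver's "a T-DATA indication is issued" are two views of the same event, linked by the service definition notation.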

The above treatment has been somewhat simplistic, but should be sufficient for the reader to understand the service primitive concept, more detailed texts on service primitives, or even the actual Standards!

The definition of primitives (with their completely defined parameter types and ranges) looks rather like a computer programming language interface or procedure call definition, and this has led some people to think that the documentation structure and service definition primitives and parameters have to be reflected in implementation structure and visible interfaces and parameters in implementations. This is not so. There is strong text in every service definition saying that this has no implications for implementations. Of course, some of the reasons for modularising a specification to permit reusability and independent work by different people on the various parts apply equally to the production of computer software. Thus in practice, particularly with prototype or early implementations where speed and ease of production and robustness are perhaps more important than efficiency, one does find software implementations with structures corresponding quite closely with the layered specification structure, and with interfaces and parameters corresponding quite closely with the layer service definitions. Such structures are, however, the implementor's choice; they are not a requirement of the OSI standards, and the match is seldom perfect. Thus it is important for the reader to recognise that the terms network layer, transport layer, session layer are strictly only applicable to the documentation structure of OSI. It is generally inappropriate, and can be very misleading, to talk about "the presentation layer" as part of an implementation rather than as a grouping of standards.

2.4 The magic seven

2.4.1 From one layer to two

In the earliest days of computer communication, typically when a single link of copper wire or a radio-link connected the two communicating computers, a single monolithic specification completely defined all aspects of communication in an unstructured manner. Aspects which were very application-specific (such as signalling the end of a deck of cards, or pressing some specific button on one of the communicating machines) were signalled using some specific voltage or current on the wire. In other words, the low-level signalling that was used did not merely indicate zeros and ones that higher layers of specification used to carry information. Rather, the signalling system and the application were inextricably intertwined. Some of these so-called "link" protocols lasted into the 1990s, mainly in the military communications area, but they are very inflexible because of the intermingling of application matters with details of the signalling system and hence the particular medium of communication. They should be regarded only as interesting historical relics.

If we are trying to produce International Standards for a large and ever-increasing number of applications (m say) to run over a large and ever-increasing number of different types of media and signalling systems (n say), then if we were to produce monolithic specifications, we would need m times n standards, an unmanageable task. (See figure 2.4: Monolithic standards: m x n problem).

The first and most important step must be to try to separate the application-dependent aspects from the signalling and routing technology dependent aspects by defining the functionality to be provided by the latter, and assumed by the former. The service primitive definition described above is a suitable vehicle for defining this functionality, so we talk about "defining the OSI Network Service", that is, specifying the functionality for end-to-end communication to be provided by networks and to be used by application specifications. In principle we have now reduced the problem to an m plus n plus 1 problem - one end-to-end Network Service Definition, m application specifications assuming this functionality, and n network technologies providing this functionality to end-systems wishing to communicate. This would produce a two-layer model - one network layer, and one application layer. There have indeed been (non-OSI) protocol suites that have adopted this architecture. It is the minimum layering that makes any sense today. (See figure 2.5: Simple application and media separation).

Notice that this layering fits well with the idea of a network provider as an organizational entity (such as a PTT) providing services for many customers whose computers are end-systems on the network, and all running many different applications over the same network provision. Equally, provided the OSI Network Service is fully implemented by all network providers, and provided the application specifications use only features present in the OSI Network Service Definition, applications can be expected to run over a wide variety of carriers with no additional specifications, and minimum changes to software code.

This is a very nice simple approach, but it is unfortunately complicated a little by two quite radically different views of what is the most appropriate end-to-end network functionality to standardise. These two views are discussed later when the network service is examined in more detail.

2.4.2 From two to four layers

It is clear that in providing an end-to-end Network Service to support the communications of end-systems, the service will be provided by a combination of one or more passive links (each using potentially a different medium and signalling mechanism), and one or more active relay systems or network nodes. In OSI terminology, these nodes are called intermediate systems, in contrast to end-systems which are the systems containing the applications trying to communicate. (Note, however, reverting back to the discussion of ODP, that this distinction is clear when one focusses on the interconnection of open (end) systems. It is less clear if one's focus is on open distributed processing.)

It is natural, therefore, to consider specifying the provision of the network service by separately specifying the media and signalling mechanisms, the operation of each individual link, and the behaviour of the relaying nodes.

Following this approach, we introduce three layers at the bottom: the Physical Layer, concerned with media and signalling; the Data Link Layer, concerned with the operation of a single link; and the Network Layer concerned with the behaviour of nodes to provide the overall Network Service.

This is again a nice simple concept, but yet again the reality gets more complicated. There is near total agreement that the Network Service Definition (allowing for the two approaches mentioned above) is all that is required, and that it is "right" for any media, signalling system, and network routing or switching ideas that might arise in the future. (At least, for now!). But when it comes to the separation of functions between the Physical and Data Link and Network Layers, there is less agreement. Physical and Data Link Service Definitions have been produced, but the separation of the total specification of how to provide the Network Service into three parts glued together by these Service Definitions is still more an ideal than a reality. The problem is at least partly one of making use of historical systems, but is also at least partly one of logical problems which arise in attempting efficient separation of function between these layers. We will look at two issues of layer separation.

First, let us consider the nature of the Physical Service Definition. It would seem natural to require providers of this service (signalling systems) to provide for the transmission of arbitrary strings of zeros and ones as the fundamental feature of their operation. It turns out (and I don't know if this can be formally proved to always hold, or whether some reviewer will find an exception) that all current signalling systems either provide exactly this capability, or provide it together with at least one additional state or symbol beyond zero and one.

In particular, the signalling mechanisms used in all the Local Area Network (LAN) Standards have an extra state/symbol which is used either to terminate a block of data, or to signal a token being passed. If you are an architectural purist, this is layer violation, as such a feature is not part of the Physical Service Definition, and token passing and termination of blocks of data are functions assigned to the Data Link Layer, not to the Physical Layer. This is why, in the LAN Standards, the architectural diagram (see figure 2.6: Architecture of LAN standards) shows the so-called "MAC-Service" spanning the Physical Layer (whose upper boundary is shown as a dotted line) and the lower part of the Data Link Layer. There was a time when people worried about this discrepancy between the ideal and the reality, but that time is long past!

The second question we need to look at is the nature of the Data Link Service Definition if the technology we are considering (again, typically, a LAN technology) has a passive link connecting a large number of stations, rather than just two. We typically call such a system a local area network, yet all the standards related to it are, by common agreement, restricted to the Data Link Layer and below. In order to accommodate such systems, the Data Link Service Definition ends up looking remarkably like the Network Service Definition (and in particular needs an address parameter to determine the recipient of data), and the value of separating these layers becomes less clear. But to make the distinction more obvious, and to emphasise an important point, the OSI Network Service is about the provision of a world-wide (and out to the stars in due course) interconnection capability, with sufficient power in the addressing and routing mechanisms to handle such a remit. By contrast, addressing used on passive links such as Ethernet is intended primarily to support the local dissemination of information over the passive link, not for global addressing. (This despite the fact that allocation mechanisms exist to make Ethernet addresses globally unambiguous).

So ... what is the real OSI architecture of the bottom three layers providing the OSI Network Service? This is contained in a Standard entitled The Internal Organization of the Network Layer (IONL). This Standard makes two very important points. First, it accepts that real networks exist, and provide some sort of data transmission service. These are called subnetworks. Any particular subnetwork can be enhanced by adding specifications for the behaviour of systems connected to it (which may be OSI intermediate or end systems) which enable the OSI Network Service to be provided across it. Such specifications are called convergence protocols. In particular, the X.25 (1980) specification does not provide the OSI Network Service, but can be made to do so with a convergence protocol. By contrast, X.25 (1984) does provide the OSI Network Service with no additional convergence protocol. The convergence protocol for X.25 (1980) was developed, but was abandoned before proceeding to International Standard status because everybody thought implementations of X.25 (1980) would have a short life! Yet even well into the 1990s, many PTTs were still offering only X.25 (1980) services, and hence not fully supporting the OSI Network Service.

When I lecture, I usually summarise the real architecture of the bottom layers with a slide saying "It doesn't matter what goes on down below. All that matters is the decking!" The "decking" is, of course, the OSI Network Service.

The second point made by the IONL is that the OSI Network Service as actually defined has the important property that, given the provision of that service (using some set of protocols) between systems A and B, and given the provision of that service (using potentially different protocols) between systems B and C, then very simple (and specified) behaviour by system B gives us the OSI Network Service between systems A and C (see figure 2.7: Tandem use of OSI Network Service).
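The tandem property can be sketched in code. The model below is deliberately minimal and the names are invented: a "hop" is just a callable that delivers data, standing in for a whole protocol stack, and system B's relaying behaviour is simply to pass onward whatever arrives.

```python
# A sketch of the tandem property: if B can reach A over one set of
# protocols and C over another, simple relaying behaviour in B extends
# the Network Service from A all the way to C. Each hop is modelled as
# a plain callable; everything beneath it is "what goes on down below".

def make_hop(deliver):
    """A network-service hop: data sent into it arrives via `deliver`."""
    return deliver

def relay(next_hop):
    """System B's behaviour: pass whatever arrives onward unchanged."""
    def on_arrival(data):
        return next_hop(data)
    return on_arrival

received_at_c = []
hop_bc = make_hop(received_at_c.append)    # B -> C (one protocol stack)
hop_ab = make_hop(relay(hop_bc))           # A -> B (a different stack)

hop_ab(b"from A")                          # A sends; C receives
print(received_at_c)                       # [b'from A']
```

Nothing in the sketch limits it to two hops: relays compose, which is exactly why the property is so powerful for linking heterogeneous real-world networks.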

Thus we have the situation that any concatenation of subnetworks, each enhanced where necessary (by a convergence protocol) to provide the OSI Network Service, itself provides the OSI Network Service between the end-systems at its extremities.

This is a very powerful provision, both for linking the world now, and for handling future networking developments. In particular, many vendors provide the OSI Network Service across their vendor-specific networks, with (typically) gateways extending that service across an X.25 network or an Ethernet using the standard protocols on these external networks.

So ... to summarise: despite the above comments, formally at least, we have now justified four layers - three bottom ones providing the OSI Network Service, and a top one which contains a series of application-specific standards, written assuming (and using the notation of) the OSI Network Service Definition.

2.4.3 Going beyond four

We have (hopefully) separated out technology-dependence in the bottom three layers from application-dependence above them, using a boundary (service definition) which is independent of any application and independent of any networking technology, now or in the future. What else is worth doing?

At least in principle, we can recognise that in writing the m different application specifications we will find ourselves needing to consider similar problems for each application, and write similar (or worse - different) text to solve these problems for each application. In other words, we can conceive of some parts of the total specification that solve problems that are both application-independent and technology-independent. If we can identify such problems, then these are clear candidates for providing solutions by introducing additional layers between the Application Layer and the Network Service.

What can we identify? Do we find three major problems (giving us seven layers), or only two (giving six layers), or four (giving eight layers)? Surprise, surprise, we find three problems. I leave the reader to judge whether maybe two would have been better - or maybe it should have been four? Or maybe one more would have been best? But as I often say in my lectures:

"There is no one right layering; arguments over "the best" separation of layer functions can continue indefinitely: but ... we do have an International Standard architecture, for better or for worse."

I usually add the further quotation from the Encyclopedia Galactica - 20085 version shown in figure 2.8: Encyclopedia Galactica, p29463.

2.4.4 From four to five

The first application-independent and network-technology-independent problem that we can recognise relates to the quality of the service provided by the particular network connection we are using and the needs of the application we are running. It is again a fact (formally unproven, I think, but I state it as a theorem that no one has disproved) that there can be no combination of signalling system, error detection algorithm and error correction code that can produce a zero probability of error in the transfer of data over computer networks. (We can get arbitrarily close to zero probability of error, but ...). All signalling of zeros and ones is dependent on detection of some threshold being crossed, and all media have some external disturbance that can corrupt the data. Moreover, it is in the nature of economically-viable systems that if you have a low error rate you can increase the speed of transmission (with little extra expenditure on the hardware but with an increased error rate). Thus basic error rates tend to be the product of commercial decisions, not simply the result of properties of the technology.

Suppose we transmit the data, then send it back and check it is still the same as the data we sent. There is still no guarantee that the data has been received correctly, because precisely the corruption that occurred on the forward path can accidentally be exactly reversed by corruption on the reverse path! This is particularly true if we have intelligent human interference: there is nothing humans can do by way of security violations that hardware errors can't do just as effectively, but maybe less predictably!
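The possibility of compensating corruptions is easy to demonstrate with a weak check. The additive checksum below is chosen purely for illustration; stronger codes (CRCs, cryptographic hashes) shrink the undetected-error probability enormously, but none reduces it to zero.

```python
# A sketch of why no check gives zero error probability: with a simple
# additive checksum, two compensating corruptions cancel out and the
# damage goes undetected.

def checksum(data: bytes) -> int:
    return sum(data) % 256

original = bytes([10, 20, 30, 40])
# One byte corrupted up by 5, another down by 5: the sum is unchanged.
corrupted = bytes([15, 20, 25, 40])

assert corrupted != original
print(checksum(original) == checksum(corrupted))   # True: error undetected
```

The same cancellation argument applies, with smaller probability, to any finite-length check field: there are always more possible messages than check values, so distinct messages must share a check.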

It would in principle be possible to set the standard for the OSI Network Service at some arbitrary quality level, in terms of residual undetected error rate, rate of detected (signalled) but uncorrected errors, throughput, round-trip time, cost, and so on. This approach would suffer not only from requiring a Rolls Royce service from all suppliers and cutting Minis (or the Model T Ford) out of the market, but would more significantly date the Network Service Standard to the expectations of current technology.

Such quality level specifications do not appear. The Quality of Service (QOS) provided by the Network Service is accepted as being something that will vary depending on the real-world networks being crossed by any particular connection, and perhaps even by trade-offs of cost, bandwidth, and error rate selected for a particular hop over a real-world network. Each specification above the Network Service, therefore, needs to address the problem of what additional exchanges to add to ensure that the application can operate over the worst connections that might be encountered as well as over the best.

The problem of designing appropriate mechanisms to provide the QOS required by an application, given the QOS available on a particular network connection, was the task given to the Transport Layer, the layer introduced immediately above the Network Layer. The Transport Layer (and QOS issues) will be discussed a little further later when we look at each layer in turn, but for now, it gets us from four to five layers!
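The flavour of what a transport protocol adds can be sketched with the simplest possible mechanism: retransmit until delivery succeeds. This is not the OSI transport protocol itself (which is far richer), and the names and loss model below are invented for the example.

```python
# A minimal sketch of a QOS-improvement mechanism of the kind assigned
# to the Transport Layer: retransmit over a lossy network connection
# until the data gets through, turning a poor connection into a more
# reliable one (at a cost in delay and bandwidth).

import random

_rng = random.Random(42)            # fixed seed so the sketch is repeatable

def lossy_send(data, loss_rate=0.5):
    """The underlying network: delivers data, or silently loses it."""
    return data if _rng.random() > loss_rate else None

def reliable_send(data, max_tries=20):
    """Stop-and-wait style: keep sending until delivery succeeds."""
    for attempt in range(1, max_tries + 1):
        if lossy_send(data) is not None:
            return attempt          # number of tries that were needed
    raise TimeoutError("giving up: QOS of this connection is too poor")

print(reliable_send(b"record"))     # a small number of tries on this seed
```

A real transport protocol also needs sequence numbers and acknowledgements so that retransmitted duplicates can be discarded at the receiver; the sketch deliberately omits that machinery.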

2.4.5 From five to six

OK. So we now have an end-to-end connection capability on a world-wide basis, and have allocated responsibility for producing any necessary specifications to address the QOS issue. What other problem will most application specifications have to solve that can sensibly be addressed by a specification that is independent of both the application and the networking technology? This one is not so obvious or clear as the Transport Layer or (to come) the Presentation Layer. It is also difficult to find a similar separation of the problem we are about to address in other communications architectures. The layer we are going to introduce (above the Transport Layer) is the Session Layer.

The jargon phrase, repeated in so many text-books on communications, is that "The session layer is concerned with dialogue control and dialogue separation." What on earth does that lot mean? What real problem are we trying to solve? Even worse, some older text-books equate the word "session" with "login-session" and think the problems addressed are all about usernames and passwords, that is, security. In fact, the Session Layer is the one layer in the OSI model which, in the Standard for OSI Security Architecture, has been given no security responsibilities.

There are rather a large number of functions performed by the Session Layer specification, and to understand the need for some of them (particularly the so-called "orderly termination"), the reader needs rather more detail about the nature of the Network Service and the Transport Service than we have been able to provide so far. So a detailed description of all the problems the Session Layer group was asked to solve must come later. For now, however, let us concentrate on two problems. The first relates to the "style" of message exchange that an application protocol designer wants to adopt. There are those who would argue that it is easier to produce a correct protocol specification (and to produce a software implementation) if the lower layers guarantee that normal messages do not cross in transit. Of course, we have to allow for signalling of exceptions, but the basic protocol design is easier (some would claim) if one end is always the sender and the other the receiver of messages, with a mechanism provided by a lower layer for signalling the passing of the turn to send (or in Session Layer terminology, passing the data token). The Session Layer therefore provides the means of operating in two-way alternate mode (TWA) in addition to (no added-value) two-way simultaneous mode (TWS) for those application designers that prefer that mode of operation. This is what is meant by dialogue control.
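Two-way alternate operation can be sketched as a data token that only one end holds at a time. The class and method names below are illustrative inventions, not the Session Layer's actual service primitives:

```python
# A sketch of two-way alternate (TWA) dialogue control: only the holder
# of the data token may send, and passing the turn to send is itself a
# signalled event.

class TwaDialogue:
    def __init__(self):
        self.token_holder = "A"     # A starts as the sender
        self.log = []

    def send(self, who, message):
        if who != self.token_holder:
            raise RuntimeError(f"{who} does not hold the data token")
        self.log.append((who, message))

    def give_token(self, who):
        if who != self.token_holder:
            raise RuntimeError(f"{who} cannot give a token it lacks")
        self.token_holder = "B" if who == "A" else "A"

d = TwaDialogue()
d.send("A", "request")
d.give_token("A")                   # the turn to send passes to B
d.send("B", "response")
print(d.log)                        # [('A', 'request'), ('B', 'response')]
```

The claimed benefit is visible even in this toy: because messages can never cross in transit, each end's protocol state machine only ever has to handle "I am sending" or "I am receiving", never both at once.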

The second problem relates to the establishment of checkpoints. Here it is important to recognise that in the late 1970s, when the OSI architecture was developed, checkpointing and associated restart mechanisms to cope with system failure during long-running tasks was an absolute necessity. The reader must judge whether the same imperative exists today, but let us nonetheless assume that many applications will need to make provision for checkpointing their activity from time to time to guard against system failures. What protocol support is needed (that can be provided in an application-independent fashion) to make it easy for application-protocol designers to incorporate checkpointing into their designs? Why is the specification of checkpointing such a hard problem that it warrants major discussion as part of a separate layer of specification?

Let us consider checkpointing of an application on a stand-alone machine. It is a simple problem. The designer merely says that after some specified time, the complete state of the application is to be recorded on disk. (My word-processor has as I write this said "* Please Wait *" while it did precisely that.) What is the problem?

Now let us consider an OSI application involving the interconnection of two (two will do for now - more is a different problem) computers. What we need to do is to arrange for them both to check-point their state at some agreed point in time. Oh dear - help us please Einstein: simultaneity (synchronization) of events at different points in space is a hard problem!

In reality, the problem is even harder - the total state of the application includes the state of network relays that contain messages that are currently in transit. So we need not merely to checkpoint (simultaneously) our two end-systems, but also to ask our friendly network provider to please check-point (and later restart!) the network switches when we need to do our checkpointing. Now the problem is rather clearer, and we can see why it may warrant assignment to a special group as part of the functions to be provided in a separate layer of specification.

Of course, if the application involves a very simple use of the connection to transfer sequential data in one direction only, checkpointing is not so much of a problem. What we do is put a marker in the flow of data that separates an earlier part of the transfer (the dialogue) from a later part, and we checkpoint at the sending end when we insert the marker and at the receiving end when we receive it. The marker provides a dialogue separation between the two parts of the dialogue, and enables (as an application-designer specification) checkpointing to take place. In terms of Session Layer jargon, separation of this rather simple one-way dialogue (not really a dialogue at all, rather a monologue) is called "establishing a minor synchronization point". It is minor because it only works for this rather simple (but very common) dialogue. (See figure 2.9: Minor synchronization dialogue separation.)
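The one-way case can be sketched directly: the sender inserts an in-band marker into the stream, and both ends checkpoint against it. The marker value and function names are invented placeholders, not Session Layer notation:

```python
# A sketch of minor synchronization on a one-way flow: the sender puts
# a marker into the stream and checkpoints there; the receiver
# checkpoints when the marker arrives, so both checkpoints separate the
# same two parts of the dialogue.

MINOR_SYNC = object()               # the in-band separation marker

def sender(records, every=3):
    """Yield records, inserting a sync marker after every `every`."""
    for i, rec in enumerate(records, 1):
        yield rec
        if i % every == 0:
            yield MINOR_SYNC        # sender checkpoints at this point

def receiver(stream):
    """Collect records; record a checkpoint whenever the marker arrives."""
    delivered, checkpoints = [], []
    for item in stream:
        if item is MINOR_SYNC:
            checkpoints.append(len(delivered))   # restartable position
        else:
            delivered.append(item)
    return delivered, checkpoints

recs, cps = receiver(sender(["r1", "r2", "r3", "r4", "r5"]))
print(cps)                          # [3] -- one checkpoint, after r3
```

Because the marker travels in sequence with the data, no clock synchronization between the two machines is needed: the ordering of the stream itself defines "the same point" at both ends.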

In the general case of messages flowing in both directions, perhaps with some exception signals that might overtake normal data flow, providing a separation of the total interchange into dialogues A and B such that checkpointing can be applied (should the application designer so wish) between the two dialogues is a more interesting problem. This is solved by what Session Layer terms major synchronization - a steam-hammer that can crack any nut! (The handshakes for major synchronization are discussed in chapter six).

The whole issue, then, of the different forms of synchronization (to permit different forms of checkpointing) and how to provide them is an interesting and complex study, and provides the second major problem that justifies (? - you the reader must judge - ?) the existence of the Session Layer.

2.4.6 From six to the magic seven

So ... we have an end-to-end connection capability, have all we need by way of QOS improvement mechanisms, and have all we need by way of dialogue control and establishment of synchronization points for a variety of types of dialogue. What else can usefully be separated out as a sufficiently important problem to warrant writing another layer into the architecture?

When I ask that question in seminars and lectures, I often get answers of either "Management" or "Security". The idea of a management layer that monitors traffic, introduces charging information, or whatever, is an interesting possibility, but in fact management functions are seen not as an integral part of all applications but rather as applications in their own right. This is probably an appropriate view. Thus not providing a management layer can be fairly easily justified. Justifying the non-provision of a security layer is certainly harder. There is no doubt that security features must permeate the operation of all applications. However, the OSI architecture sees security features being provided by many (most) layers (with some additional support in the application layer that is discussed briefly in chapter 11), but does not include a layer specifically concerned with security.

So what is the application-independent problem that gets us to the magic figure of seven layers? Let us consider the tasks an OSI application protocol designer has to undertake, and see if there is anything we can do to provide assistance. Is there any identifiable problem that most application protocol designers will face, and which may be amenable to application-independent treatment? First, the designer must examine the application and determine the broad nature of the information that has to be exchanged. We talk about "determining the semantics of messages". Next, the designer needs to specify (somehow) the data structures (message formats) that are going to be used. We talk about "defining the abstract syntax of messages". Finally, the designer needs to determine an appropriate bit representation for values of the data-structures in use. The bit representation has to be independent of any particular compiler or computer system. In particular, if messages contain integers, real numbers, or character strings, the designer has to determine the representation of these elements, and of any structuring information that groups them into composite elements. We talk about "defining the transfer syntax of messages". In the early days, the names abstract transfer syntax (for the form of messages without concern about bit-pattern representation) and concrete transfer syntax (for their bit-pattern encoding) were used, but these were later abbreviated to abstract syntax and transfer syntax, which are the terms exclusively used today.
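The distinction between abstract syntax and transfer syntax can be made concrete with a toy example. The message layout and both encodings below are invented purely for illustration: one abstract message, two interchangeable byte-level representations.

```python
import json
import struct

# One abstract message (the "abstract syntax" level: fields and meanings).
message = {"seq": 7, "text": "hello"}

# Transfer syntax 1: a verbose, human-readable text encoding.
verbose = json.dumps(message).encode("ascii")

# Transfer syntax 2: a compact binary encoding - big-endian 4-byte integer,
# 2-byte length, then the string bytes.
body = message["text"].encode("ascii")
compact = struct.pack(">IH", message["seq"], len(body)) + body

# Both byte strings represent the same abstract value; each decoder
# recovers an identical message.
seq, n = struct.unpack(">IH", compact[:6])
decoded_compact = {"seq": seq, "text": compact[6:6 + n].decode("ascii")}
decoded_verbose = json.loads(verbose)
```

The semantics and abstract syntax are fixed once; the choice between the two transfer syntaxes is a separate, application-independent decision - which is precisely the separation the Presentation Layer institutionalises.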

It is clear that the application designer needs to define the semantics of protocol exchanges, and once we require this specification to be done formally rather than in English, the designer has effectively also defined an abstract syntax for the messages. There is, however, no reason to force the designer to worry about the transfer syntax (bit-pattern encoding). We ought to be able to at least provide some support in this area in an application-independent manner, and hence a layer concerned with the representation of information during transfer is an appropriate one to introduce. This is the Presentation Layer. The term "Presentation" Layer is a little misleading. (In retrospect "Representation Layer" or "Encoding Layer" would have been better). A number of early books on OSI contained text saying that the Presentation Layer was all about the presentation of information on a terminal display to a human being - in other words, entirely concerned with a terminal handling application. This is (and always was) a wrong statement.

The Presentation Layer, then, is concerned with bit-pattern representation during transfer. In particular, in the OSI architecture, we recognise that any particular set of application messages (abstract syntax) can have associated with it multiple possible representations (transfer syntaxes) to be used in different circumstances (see figure 2.10: Multiple transfer syntaxes). The presentation protocol is given the responsibility for negotiating the encodings to be used in any particular connection, and the Presentation Layer group was (later) also given the responsibility for standardising a language (notation) for specifying abstract syntaxes (Abstract Syntax Notation One - ASN.1), and a set of encoding rules associated with use of that language.
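A minimal model of that negotiation might look like the following. The syntax names and the first-preference rule are assumptions made for illustration; the actual presentation protocol exchange is rather richer than this.

```python
def negotiate(offered, supported):
    """Pick the first transfer syntax in the initiator's preference order
    that the responder also supports; None means no common encoding."""
    for ts in offered:
        if ts in supported:
            return ts
    return None

# The initiator prefers its vendor-specific encoding but can fall back
# to an internationally standardised one.
choice = negotiate(["vendor-A-native", "ASN.1-BER"], {"ASN.1-BER", "ASN.1-PER"})
```

The important architectural point is simply that the abstract syntax is agreed by the application designers, while the transfer syntax is chosen per connection.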

In the early days of OSI, the importance of clearly separating the definition of abstract syntax and semantics from transfer syntax was recognised, but without a formal notation for abstract syntax definition, it was considered necessary to have, for each application, at least two Standards - one specifying the semantics and abstract syntax, and others (which would be application-specific) specifying possible bit-pattern encodings for that application. With the emergence of languages for abstract syntax definition, all Presentation Layer matters (representation issues) can now be treated in an application-independent manner, and there are today no application-dependent Presentation Layer Standards.

The questioning reader might still be asking why it is appropriate to recognise the idea of different encodings that are negotiated, and what the advantages are of a clear separation of the transfer syntax definition from the abstract syntax definition. After all:

So why isn't that approach good enough in the OSI Application Layer? There are a number of points to be made which show why the Application Layer is different from other layers, and why the introduction of the Presentation Layer as a separate layer is at least arguably a "good idea".

First, if we look at the complexity of the data structures needed for the messages in the different layers, we find that in the Data Link and Network Layers we have very simple structures, with a fixed set of parameters of fixed length in fixed positions in the messages. Drawing a picture of the message is easy. In the Transport Layer, we find the need for optional parameters, and variable-length parameters. In the Session Layer we see the introduction of the further concept of parameter groups - sets of parameters appearing together in the message, or omitted in their entirety. When we get to Application Layer protocol design, the complexity of the data-structures we need becomes even greater, with optional groups of information, arbitrary repetition, groups within groups to any depth, and so on. A more powerful and user-friendly descriptive technique than simply drawing a bit-map of the message is needed.
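A small recursive tag-length-value encoder shows why nested, optional structure outgrows the bit-map drawing. This sketch borrows the flavour of ASN.1's Basic Encoding Rules (tags 0x02, 0x0C and 0x30) but is deliberately simplified: single-byte lengths only, fixed-width integers, and no optional-field tagging.

```python
def encode(value):
    """Encode integers, strings and arbitrarily nested lists as simple
    tag-length-value triplets (a BER-flavoured sketch, not real BER)."""
    if isinstance(value, int):
        tag, body = 0x02, value.to_bytes(4, "big", signed=True)
    elif isinstance(value, str):
        tag, body = 0x0C, value.encode("utf-8")
    elif isinstance(value, list):          # a group - may contain further groups
        tag, body = 0x30, b"".join(encode(v) for v in value)
    else:
        raise TypeError(type(value))
    if len(body) > 0x7F:
        raise ValueError("sketch supports single-byte lengths only")
    return bytes([tag, len(body)]) + body

# Groups within groups, to any depth:
wire = encode([1, "a", [2, [3]]])
```

Because the length of every element is carried explicitly, elements can be optional, repeated, or nested to any depth - none of which a fixed-position bit-map can express.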

A second point relates to the skills needed in protocol design in the different layers. In the lower layers the skills needed are largely those of the communications expert, to whom "bit-twiddling" to produce efficient bit-pattern representations is a natural pastime. In the Application Layer, however, the most important skill is a good knowledge and understanding of the application domain, not of communications or computer software. Detailed design of bit-pattern representations is likely to be at best a boring chore to those with application domain skills.

A third point relates to the nature of the protocol definition in the different layers. In the lower layers, we are concerned with some header (and perhaps trailer) information, with the bulk of the message left as a hole to be completed by the next higher layer. If the OSI protocols work efficiently, the total of all the headers and trailers in all the lower layers will not account for more than perhaps 10% of the total communications traffic. By contrast, the Application Layer data is the 90% bulk of the transfer. Thus getting the "best" encoding for the lower layer protocols is not likely to be significant. For an Application Layer protocol, it could be very important.
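The 10%/90% split is easy to check with back-of-envelope arithmetic. The header sizes below are illustrative round numbers, not taken from any particular standard.

```python
# Rough illustration of the lower-layer overhead against a typical
# Application Layer payload. All sizes are illustrative, in bytes.
headers = {"data link": 18, "network": 20, "transport": 20,
           "session": 4, "presentation": 4}
payload = 1400  # Application Layer data carried in one message

total = sum(headers.values()) + payload
overhead = sum(headers.values()) / total
print(f"lower-layer overhead: {overhead:.1%}")  # under 5% for this payload size
```

Shaving a few bits off those headers saves almost nothing; a better encoding of the 1400-byte payload can save a great deal.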

So what is the "best" encoding? This is where we see the desirability of introducing the idea of multiple encodings selected on a per-connection basis by negotiation between the two end-systems. Consider two systems communicating over a high band-width line with a low-level encryption device in place at both ends. The "best" encoding of the application protocol will be a clear encoding with no compression which minimises the number of CPU cycles needed by the two end-systems to convert between their local representation and the transfer syntax. But suppose that a bull-dozer goes through the high-bandwidth line, and the back-up provision is a dial-up telephone connection with modems - low band-width and insecure. The "best" representation is now likely to be one that introduces selective encryption of some fields at the presentation layer and does as much compression as possible, regardless of the cost in CPU cycles.
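The size side of that trade-off is easy to demonstrate with a general-purpose compressor. Here zlib stands in for a compressed transfer syntax; the data is invented and highly repetitive, so the ratio is flattering, but structured application data often compresses well.

```python
import zlib

# Highly structured application data tends to be repetitive, so a
# compressed transfer syntax can shrink it dramatically - at a cost
# in CPU cycles at both ends.
data = b"name=widget;qty=00042;" * 4096   # invented, repetitive payload

clear = data                            # "clear" syntax: zero encode/decode cost
packed = zlib.compress(data, level=9)   # compressed syntax: CPU for bandwidth

assert zlib.decompress(packed) == clear  # lossless round trip
ratio = len(packed) / len(clear)
```

On the high-bandwidth encrypted line, `clear` is the better choice; on the dial-up back-up, `packed` wins despite the extra CPU cycles.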

Let us look at another scenario (see figure 2.11: Conversion v. vendor-specific encoding) for determining the "best" encoding. Suppose the application (not unrealistically) involves the transfer of a few gigabytes of information comprising a highly-structured and Standardised document format for a large technical document. The way that is represented on disk will be chosen by computer vendor A to suit his system, and could, for example, be an EBCDIC encoding of characters, with some highly indexed set of files for holding the structure of the document. Computer vendor B, on the other hand, might use a more embedded structure with less indexing and an ASCII representation of the text. If, in an instance of communication, a machine from vendor A (A1) is communicating this document with a machine from vendor B (B1), we will need a vendor-independent transfer syntax, and both machines will need to convert the few gigabytes of data between the local representation and the vendor-independent transfer syntax, despite the CPU-cycle and possible disk-churning costs. On the other hand, if, in this instance of communication, the machine from vendor A (A1) happens to be communicating with another machine from vendor A (A2), it is highly desirable to allow a transfer syntax that is as close as possible to the local representation on the vendor A disks. If OSI did not permit this, then there would be tremendous pressure from vendor A customers for a vendor-A-specific protocol to be made available and used, and OSI protocols would only ever be used between dissimilar machines.
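Python's codec machinery lets us see the representation gap directly. The code page cp500 is one common EBCDIC encoding; the choice of code pages here is my assumption for illustration, not something the scenario above mandates.

```python
text = "A large technical document"

ebcdic = text.encode("cp500")   # an EBCDIC code page - vendor A's native form
ascii_ = text.encode("ascii")   # vendor B's native form

# Same abstract text, different byte-level representations: an A-to-B
# transfer must convert through a vendor-independent syntax, while an
# A-to-A transfer could legitimately ship the EBCDIC bytes untouched.
assert ebcdic != ascii_
assert ebcdic.decode("cp500") == ascii_.decode("ascii") == text
```

Multiply the cost of that per-character conversion by a few gigabytes, and the case for negotiating a vendor-specific transfer syntax between like machines becomes clear.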

This discussion of the Presentation Layer would not be complete without some mention of a few problems.

First, the Presentation Layer of OSI was the last of the non-application layers to mature and stabilise. One major application (X.400 electronic mail) was first published in 1984 before the Presentation Layer Standard was ready, and was written to sit directly on top of the Session Layer, assuming effectively a six-layer model. (This was corrected in the 1988 version, but not without a lot of difficulty - see the discussion in chapter seven). Moreover, X.400 is very much concerned with relaying information from a mail originator through mail relays to a mail receiver. In this relay situation, negotiation of appropriate encodings becomes at first sight problematic, and even on further thought difficult. Thus the Presentation Layer concepts were not strongly supported by CCITT workers (a situation that to some extent persisted into the 1990s), despite their formal acceptance of the Presentation Layer in their Reference Model (CCITT/ITU-T Recommendation X.200).

Second, the Presentation Layer concepts will only be exercised in reality if a number of transfer syntaxes (some of which will be vendor-specific, some standardised with varying verbosity and CPU-cycle properties) have been defined. Up to the start of the 1990s, such definitions of transfer syntaxes did not exist. In practice, there was precisely one transfer syntax defined at the international level for each application layer standard, based on the Basic Encoding Rules of ASN.1. Thus negotiation of transfer syntax was a nice theory, but did not occur in practice. Moreover, implementors were far more concerned to implement a standard transfer syntax to get open interworking with other vendors than to bother defining an efficient transfer syntax for use between their own machines.

Thirdly, the level of understanding of the Presentation Layer and sympathy with its aims (even among implementors and some OSI application protocol definers) has not always been as great as might be wished.

This situation changed quite rapidly in the early 1990s, with a number of additional Encoding Rules for ASN.1 (with much improved properties over the Basic Encoding Rules) being progressed to International Standards. Thus encoding choices and real negotiation of encodings will become possible in the late-1990s. The reader should note, however, that there is a very real danger of a proliferation of encodings, with any one vendor implementing only a small subset (and with some pairs of machines having no implemented encoding in common) resulting in new interworking problems. In the ideal world, we would have a small number of internationally standardised encodings, each with important useful properties in relation to trade-offs on compression, CPU cycles, and security, with all of them implemented by all vendors and the most appropriate selected for each instance of communication according to circumstances. At the present time there is no real thought being given to the set of encodings (and their characteristics) that are needed, standardization of new encodings being somewhat haphazard. (The reader should, however, note that the last time I made a similar sort of remark in a text for publication, the situation was rectified before the text was published!)

2.4.7 And going beyond seven ...

We said that the OSI architecture had a magic seven layers. But the above discussion will surely have alerted the reader to the fact that there are potentially a pretty large number of other problems that occur in application layer standardization and which will be common to a number of applications and hence worth specifying as separate application-independent Standards. Can the architecture embrace this situation without requiring the addition of more and more layers?

To be fair to CCITT (see 1.2.1), the main difference between the CCITT Reference Model in 1984 and the ISO Reference Model was a recognition by CCITT that one had to accept the situation that there would be some application-independent specifications, Common Application Service Elements (CASEs), in the Application Layer that would be referenced by real application-dependent standards, Specific Application Service Elements (SASEs), to provide the total layer seven specification.

In the late 1980s, however, it became clear that it was hard to distinguish between SASEs and CASEs. The process was really one of producing more and more building blocks, with a gradually increasing amount of application-dependence. What to some people was a very specific application (file transfer or electronic mail) was to others (for example the banking or international trade communities) merely a carrier standard to be used to support their "real" application. The S and the C were therefore dropped, and today we merely talk about Application Service Elements (ASEs) as the jargon term for Standards in the Application Layer that combine with each other to form either bigger building blocks, complete rooms, or livable-in houses.

A discussion of the more important ASEs which form low-level building blocks appears in chapter nine, but it is worth noting for the present that when the first of these building blocks, the Association Control Service Element (once known as the unique CASE) was being discussed, there was a significant lobby from the Experts of one National Body for that to be effectively inserted as an eighth layer between the Presentation Layer and the real Application Layer, with a complete set of service primitive definitions. In the event, this approach was not adopted.

The model now is that an ASE will "steal" some of the service primitives from the Presentation Layer or from some other ASE (that is, if a referencing Standard wants to reference the service primitives of that ASE, it is not allowed to reference the "stolen" services) and provide a richer service (with its own service primitives) on top of these (much as layers do), but leave all other service primitives available for direct access by a referencing standard (as layers do not). To summarise the position (see figure 2.12: Stealing Service Primitives):

The elaboration of the concept of ASEs and the way they work together occurred after the completion of the Basic Reference Model, and is included in a Standard called Application Layer Structure. This underwent some further development in the early 1990s to provide an Extended Application Layer Structure, discussed more fully in chapter 9.

Figure 2.13: Final 7-layer model is the final figure depicting the OSI 7-layer architecture. The architecture envisages a never-ending set of Standards in the Physical Layer, perhaps involving additional Standards right up to (but not beyond) the Network Layer as signalling and switching technology advances. OSI standardization will never be complete in these layers. Equally, in the Application Layer, we expect to see an ever-increasing set of ASE Standards as new applications are proposed for standardization. Again, OSI standardization will never be complete in this layer. In the middle layers of application-independent and technology-independent standardization (Network Service, Transport Layer, Session Layer, and Presentation Layer - the heart of OSI), the standardization work is largely complete now, with only minor changes and extensions likely in the future. (But ... see the end of chapter four!)

2.5 Comparison with other architectures

There is no doubt that the OSI architecture is richer and more complicated than any other protocol architecture. It may be illuminating to compare the de facto architecture of another major suite of protocol specifications with the architecture of OSI. The suite we compare with is that of the USA Department of Defense (DoD) Advanced Research Projects Agency (DARPA) Internet Community, better known as TCP/IP.

There is no formal document specifying the architecture of the TCP/IP-related (Internet) specifications, so there can be different views by different authors on the de facto architecture, and particularly on its relationship to the OSI architecture. The following treatment would probably be accepted by most workers, however.

The Internet became widely known and talked about in the early to mid 1990s. It is a world-wide collection of interlinked (hence the Inter) wide-area networks, with associated local area networks. In the 1970s it was better known as the Arpanet, when it was the first network to establish the viability of wide-area computer communication. It was later known for a time as the Darpanet, but the term the Internet is preferred today. Up to the mid-1990s it was largely a research, military and educational network, with a very limited and restricted amount of commercial traffic over it. This situation changed dramatically in the mid-1990s with the growth of interest in the provision of World-Wide Web pages. (Whilst all the early TCP/IP protocols have equivalent or better OSI counterparts, there is no OSI equivalent for the protocol underlying the World-Wide Web, although the markup language used to author pages (HTML - Hyper-Text Markup Language) is based on an ISO Standard (SGML - Standard Generalized Markup Language).)

Communication over the Internet is characterised by the use of Transmission Control Protocol (TCP) and Internet Protocol (IP), but particularly the latter, with a variety of other protocols on top. All the protocols in the suite are generally collectively known as "the TCP/IP protocols" (even if they do not actually use TCP), or more accurately as "the Internet protocols".

The Internet protocols are vendor-independent, and their specification is controlled by open public discussion that is not dominated by any one vendor. They are also widely implemented by a variety of vendors and hence fulfil the definition of "open" as used in OSI. Today, when people talk about "open networking", they can mean implementation of either OSI or TCP/IP, and advertisements need to be examined carefully to see which is meant.

Originally, TCP/IP was very much aimed at wide-area networking, but its adoption in the early 1980s by the UNIX developers led to its wide-spread use in the late 1980s on local area networks. It is probably fair to say that much of the success of UNIX in the 1980s was due to its incorporation of TCP/IP. At the end of the 1980s and in the early 1990s, stand-alone LANs running UNIX workstations were frequently being linked by high-band-width leased lines and cheap commercial routers, so that wide-area TCP/IP company networks (that were initially not necessarily part of the Internet) grew up rapidly in the UK and other parts of Europe. It is probably true to say that whilst in the 1980s TCP/IP local area communications sold UNIX systems, in the 1990s the wide-spread use of UNIX systems sold wide-area TCP/IP.

So ... what is the TCP/IP architecture? It has strong similarities with parts of OSI, but is broadly much simpler. As with OSI, there is critically an end-to-end network service provided by the use of IP, with network switches understanding only the IP protocol and associated routing and management protocols. Beneath the IP layer there is whatever real networks are around, and there are a series of specifications (Internet specifications are called, somewhat misleadingly, Requests for Comment (RFCs)) that specify how to transmit IP messages over a whole range of real-world networks, including some vendor-specific ones. This part of the architecture then is very similar to the OSI "Internal Organization of the Network Layer" described earlier. Above the IP layer, there is a layer corresponding quite closely to the Transport Layer of OSI, and containing TCP and another (very simple) protocol called User Datagram Protocol (UDP). On top of these there sit monolithic specifications for applications.
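The "thin transport over IP" flavour of UDP is visible directly from the sockets API. A minimal loopback exchange, using only the Python standard library:

```python
import socket

# UDP adds little more than port numbers to IP datagrams: no connection,
# no ordering, no retransmission. A minimal loopback exchange:
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))            # let the OS choose a free port
server.settimeout(5)                     # avoid blocking forever if lost
addr = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"ping", addr)

data, peer = server.recvfrom(1024)
client.close()
server.close()
```

Everything above TCP or UDP - the monolithic application specifications - is layered on this thin transport, with no separately specified session or presentation functionality.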

To avoid learning wrong lessons at this stage, it is important to note that the IP and TCP are roughly equivalent to half - the so-called connectionless half - of the corresponding OSI layers. This issue will be discussed later, and should be ignored for now. Also ignore UDP for now - that is not a difference between TCP/IP and OSI - OSI has an equivalent protocol which will again be discussed later.

The main difference between the OSI and the TCP/IP architectures is that in TCP/IP the Session and Presentation Layer functionality is not factored out into separate specifications, nor are application specifications normally broken down into a set of ASEs. For those readers who know TCP/IP well, one can discern some elements of the ASE concept in the TCP/IP TELNET protocol, which is used both as an actual application (terminal login) and also to support other application protocols (file transfer and electronic mail). With this exception, however, Internet specifications above TCP or UDP tend to be self-contained.

In summary, and to simplify slightly, the main difference between the OSI and the TCP/IP architectures is "monolithic over TCP" versus "session plus presentation plus ASEs over OSI Transport". Which is best? Does it matter?

Certainly one must not confuse discussions of the architecture (how specifications are structured) with the quality of the final protocol. A bad protocol can be specified in a highly structured manner, a good one in a monolithic way, and vice versa. In principle one would expect more commonality in the way functions are performed with the OSI approach than with the TCP/IP approach. In practice ...

The lack of a separated-out Session Layer seems to cause few problems in TCP/IP, although one major feature of Session Layer (orderly termination) comes out as part of TCP because of detailed technical differences between the OSI Transport Protocol and TCP, so the comparison is not quite fair. As far as the Presentation Layer is concerned, one can discern in the more recent TCP/IP protocol specifications elements of the Presentation Layer approach, and of tools very similar to ASN.1. (For those wanting to probe further, and knowing TCP/IP, look at the XDR specification that supports the SUN network file servers (NFS).) TCP/IP also introduced in the 1980s its own Remote Procedure Call (RPC) Protocol that is being used as an ASE comparable to the Remote Operations Service Element ASE of OSI. Thus some of the concepts present in the OSI architecture can be seen in the latest TCP/IP work.