Physical and data link layers of the OSI model. Frequency, time, and phase separation of signals (channels). Channel and message switching

The use of P- and V-operations to organize process interactions in a system is adequate only as long as no better communication mechanism is available. One proposal for improving

Figure 8.7. P/V system of processes for two nodes of the computation graph in Fig. 8.2.

Figure 8.8. Adding P/V systems to the model hierarchy.

this mechanism is to use messages. A message passing system is a collection of processes that communicate through messages. Two operations are possible on messages: send and receive. Sending a message is analogous to a V-operation, and receiving a message is analogous to a P-operation. If no message is available when a receive is executed, the receiver waits until a message is sent.
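
To make the analogy concrete, here is a minimal sketch (not from the original text) of a mailbox-style channel process in Python: send never blocks, like a V-operation, while receive waits for a message, like a P-operation.

```python
# A minimal sketch: a "channel process" (mailbox) built on Python's thread-safe
# queue.  put() plays the role of send (like V), get() the role of receive (like P,
# blocking until a message is available).
import queue
import threading

mailbox = queue.Queue()          # holds messages that were sent but not yet received

def producer():
    mailbox.put("request")       # send: never blocks

def consumer():
    msg = mailbox.get()          # receive: waits until a message arrives
    print("received:", msg)

t1 = threading.Thread(target=consumer)
t2 = threading.Thread(target=producer)
t1.start(); t2.start()
t1.join(); t2.join()
```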

Message passing is the basis of the modeling scheme proposed by Riddle, which seems the most suitable for modeling protocols in computer networks. Riddle considers a (finite) set of processes that communicate via messages. Messages are sent to and requested from special processes called channel processes (mailboxes). A channel process holds, essentially, the set of messages that have been sent but not yet received, or the set of message requests from receivers that have been issued but not yet satisfied. The other processes of the system are called program processes and are described in a program process modeling language (below, LMPP).

An example of a three-process system is shown in Fig. 8.9. As the example shows, a process description in the LMPP is, in essence, a scheme: only the message passing activity in the system is of interest. Messages are abstract elements whose only characteristic is their type, and the number of message types in the system is finite. Messages are sent from, or received into, the message buffer of each process; there is only one buffer per process. The statements of the LMPP are:

    place a message of a given type in the message buffer;
    send the message in the message buffer to a channel process;
    request (receive) a message from a channel process, waiting if necessary until a message arrives; the received message is placed in the message buffer;
    check the type of the message in the message buffer and jump to a given statement if the message is of a different type;
    simulate an internal data-dependent decision: either continue with the next statement, or jump to a labeled statement;
    transfer control to a given statement;
    end the process.

A system described in the LMPP simulates many parallel processes. Each process starts at the beginning of its program and executes it statement by statement until it reaches its end-process statement. Riddle shows how to construct a message transfer expression that represents the possible message flows in the system, and uses this expression to investigate the structure of the system and verify its correct operation. This message transfer expression serves the same purposes as a Petri net language. We therefore show how a description of a system of processes in the LMPP can be transformed into a Petri net whose language coincides with the message transfer expression of Riddle's analysis. The transformation ignores the execution of individual statements of the LMPP description, although with minor modifications they too could be represented in the Petri net language.

To simulate a process with a Petri net, we use one token per process as a program counter. The presence of a message in a channel process is also represented by a token. Since messages are identified only by type, each message type in a channel process must be modeled by a separate position. A very important property of LMPP systems is that the number of message types is finite. Each program process is also finite; only the message queues occupy a potentially unbounded amount of memory. Thus, the ability to model channel processes and to represent the send and receive statements correctly are the most important aspects of transforming an LMPP description into a Petri net. Modeling

channel processes by sets of positions (one per message type), we can represent a send statement by a transition that places a token in the position corresponding to the given channel process and message type. A receive statement simply removes a token from one of the positions of the channel process; the particular position that supplies the token determines the type of the received message, and this information can be used by any subsequent statement.

The only symbols in a message transfer expression are the message types of the messages sent to or received from channel processes. Since each transition firing contributes a symbol to the language of the Petri net, only the send and receive statements of the LMPP system need to be modeled. Thus there are two kinds of positions in the Petri net. Positions of the first kind act as counters of the number of messages of a given type in a channel process. Positions of the second kind represent the send and receive statements of the LMPP program. Let these statements be uniquely labeled; the position representing a statement is labeled by the statement together with the type of message in the message buffer, and a token in the position associated with a statement means that the statement has just been executed. Figure 8.10 illustrates how these statements are modeled by a Petri net; the topmost position in Fig. 8.10 represents the position associated with whatever statement precedes the one being modeled.

It remains to show that the statements preceding a given statement in an LMPP program can be determined. Note that each statement can be considered as a pair consisting of a message type and a statement number, since the same statement with different message types in the message buffer is modeled by the Petri net in different ways. The most obvious way to determine the predecessors of a statement is to introduce at the beginning of each LMPP program a special start statement (which becomes the start position) and to generate, from the program description, all possible subsequent send and receive statements together with the corresponding contents of the message buffer. This process is repeated for every statement that appears, until all send and receive statements have been generated and their successors identified. Since the number of statements in the LMPP description and the number of message types are finite, only a finite number of (statement, message type) pairs is generated. This procedure is similar to the characteristic equations used by Riddle to construct a message transfer expression. Fig. 8.11 lists the statements

Figure 8.10 (see scan). Transformation of send and receive statements into Petri net transitions. Top: model of a send statement s_k with a given message type in the message buffer and the corresponding channel process. Bottom: model of a receive statement s_k from a channel process, for each of its possible message types.

and their possible successors for the LMPP system shown in Fig. 8.9.

Once the successors of each statement are determined, we can use this information to identify the possible predecessors of each statement and therefore construct a Petri net equivalent to the LMPP system, using transitions similar to those shown in Fig. 8.10. The special start position is the predecessor of the first statement of each process in the system. Fig. 8.12 shows the LMPP system of Fig. 8.9 converted into an equivalent Petri net.
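
As an illustration of this construction, the sketch below uses invented statement and channel names (s1, s2, s3, channel C, message type m), not Riddle's notation. Places are either (statement, buffer type) program-counter positions or (channel, message type) counters; a send statement is a transition that adds a token to a channel place, and a receive statement removes one.

```python
# A minimal sketch of the LMPP-to-Petri-net construction with invented names.
marking = {
    ("s1", "m"): 1,   # the process has just executed statement s1 with a type-m buffer
    ("s2", "m"): 0,
    ("s3", "m"): 0,
    ("C", "m"): 0,    # channel C currently holds no messages of type m
}

transitions = {
    "send_s2":    {"in": [("s1", "m")], "out": [("C", "m"), ("s2", "m")]},
    "receive_s3": {"in": [("s2", "m"), ("C", "m")], "out": [("s3", "m")]},
}

def enabled(name: str) -> bool:
    return all(marking[p] > 0 for p in transitions[name]["in"])

def fire(name: str) -> None:
    assert enabled(name), f"{name} is not enabled"
    for p in transitions[name]["in"]:
        marking[p] -= 1
    for p in transitions[name]["out"]:
        marking[p] += 1

fire("send_s2")       # a token appears in the channel place ("C", "m")
fire("receive_s3")    # the receive removes it and advances the program counter
print(marking)
```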

This brief description of the transformation of message passing systems into Petri nets shows that, in modeling power, this model is included in Petri nets. It also shows that the set of message transfer expressions, considered as a class of languages, is a subset of the class of Petri net languages.

Since P/V systems can be modeled by message passing systems with only one message type, P/V systems

Figure 8.11 (see scan). Statements and their successors for the LMPP system shown in Fig. 8.9.

are included in message passing systems. It is easy to build a message passing system that solves the cigarette smokers' problem, so the inclusion of P/V systems in message passing systems is proper. On the other hand, message passing systems cannot receive input messages from several sources simultaneously and are therefore not equivalent to Petri nets.

When trying to simulate a transition with multiple inputs, one of the following two situations may occur:

1. The process tries to receive tokens (messages) from all of its inputs, but some of them may be unavailable; the process then blocks while holding tokens that are needed to allow other transitions to continue working. This leads to deadlocks in the message passing system that do not correspond to deadlocks in the Petri net, which violates the third constraint.

2. The process avoids creating unnecessary deadlocks by determining that the remaining required tokens are missing and returning


the tokens to the positions (channel processes) from which they were received. Such actions can be performed arbitrarily often, which means there is no bound on the length of a sequence of actions in the message passing system corresponding to a bounded firing sequence in the Petri net. Thus, this violates our second constraint.

Figure 8.13. Adding message passing systems to the hierarchy of models.

Riddle presented a transformation corresponding to case 1, which leads to unnecessary deadlocks. In any case, we see that message passing systems cannot simulate arbitrary Petri nets (under the constraints we have formulated). As a result we obtain the hierarchy shown in Fig. 8.13.

The physical layer is the lowest layer of the OSI network model, intended directly for transmitting data streams. It transmits electrical or optical signals into a cable or over the air and, correspondingly, receives them and converts them into data bits according to the digital signal coding methods in use. In other words, it provides the interface between the network medium and the network device.

Concentrators (hubs), signal repeaters, and media converters operate at this layer.

Physical layer functions are implemented in all devices connected to the network. On the computer side, they are performed by the network adapter or a serial port. The physical layer covers the physical, electrical, and mechanical interfaces between two systems. It defines such data transmission media as optical fiber, twisted pair, coaxial cable, satellite links, etc. Standard types of network interfaces related to the physical layer are V.35, RS-232C, RJ-11, RJ-45, and AUI and BNC connectors.

The physical layer features of the OSI model are conveniently viewed using the following figure:

The following sublevels can be distinguished:

Reconciliation - the reconciliation sublayer. Serves to translate MAC layer commands into the corresponding electrical signals of the physical layer.
MII - Medium Independent Interface, medium independent interface. Provides a standard interface between the MAC layer and the physical layer.
PCS - Physical Coding Sublayer, physical coding sublayer. Encodes and decodes data sequences from one representation to another.
PMA - Physical Medium Attachment, sublayer of connection to the physical medium. Converts data to a bitstream of serial electrical signals, and vice versa. In addition, it provides synchronization of transmit / receive.
PMD - Physical Medium Dependent, sublayer of communication with the physical environment. Responsible for signal transmission in the physical environment (signal amplification, modulation, signal shaping).
AN - Auto-negotiation, speed negotiation. Used to automatically select the communication protocol by devices.
MDI - Medium Dependent Interface, a medium dependent interface. Defines the various kinds of connectors for different physical media and PMD devices.

Data transmission medium

A data transmission medium is a physical medium suitable for a signal to pass through. For computers to exchange encoded information, the medium must provide a physical connection between them. Several types of media are used to connect computers:
coaxial cable;
unshielded twisted pair;
shielded twisted pair;
fiber optic cable.

Coaxial cable was the first type of cable used to connect computers into a network. A cable of this type consists of a central copper conductor covered with plastic insulation, which in turn is surrounded by a copper braid and/or aluminum foil. This outer conductor provides grounding and protects the central conductor from external electromagnetic interference. Two types of cable are used for networking, Thicknet and Thinnet; the maximum segment length ranges from 185 to 500 m, depending on the cable type.

Twisted pair is one of the most common cable types today. It consists of several pairs of copper wires in a plastic sheath. The wires that make up each pair are twisted around each other, which provides protection against mutual interference. Cables of this type are divided into two classes: shielded twisted pair and unshielded twisted pair. The difference is that shielded twisted pair is better protected from external electromagnetic interference, thanks to an additional shield of copper braid and/or aluminum foil surrounding the pairs. Depending on the category, twisted pair cables provide transmission at speeds from 10 Mbit/s to 1 Gbit/s. The length of a cable segment cannot exceed 100 m (up to 100 Mbit/s) or 30 m (1 Gbit/s).

Fiber optic cable is the most advanced cable technology, providing high-speed data transmission over long distances together with resistance to interference and eavesdropping. A fiber optic cable consists of a central glass or plastic conductor surrounded by a layer of glass or plastic cladding and an outer sheath. Data is transmitted by a laser or LED transmitter that sends unidirectional light pulses through the central conductor. At the other end the signal is received by a photodiode receiver, which converts the light pulses into electrical signals that can be processed by a computer. The transmission speed in fiber optic networks ranges from 100 Mbit/s to 2 Gbit/s, and the segment length limit is 2 km.

Link layer

The data link layer (English: Data Link layer) is the layer of the OSI network model designed to ensure the interaction of networks at the physical layer and to control the errors that may occur there. It packs the data received from the physical layer into frames, checks their integrity, corrects errors if necessary (requests retransmission of a damaged frame), and passes them to the network layer. The data link layer can interact with one or several physical layers, controlling and managing this interaction. The IEEE 802 specification divides this layer into two sublayers: MAC (Media Access Control) regulates access to the shared physical medium, and LLC (Logical Link Control) provides services to the network layer.

Switches and bridges operate at this level.

In programming, this layer is represented, for example, by the network card driver; operating systems provide a software interface for the interaction of the data link and network layers with each other. This is not a new layer, but simply an implementation of the model for a specific OS.

The task of the link layer is to ensure the interaction of devices within the local network by transferring special data blocks called frames. As frames are formed, they are supplied with the service information (a header) necessary for correct delivery to the recipient and, in accordance with the rules of access to the transmission medium, are passed to the physical layer.

When receiving data from the physical layer, the link layer must select the frames intended for this device, check them for errors, and pass them on to the service or protocol for which they were intended.

It should be noted that it is the link layer that sends, receives, and repeats frames in case of collision, while the physical layer determines the state of the shared medium. Therefore, the access process (with the necessary clarifications) was described in detail in the previous chapter.

Communication at the data link layer of Ethernet networks, as at the physical layer, is customarily divided into additional sublayers that were not provided for by the OSI-7 standard:

LLC (Logical Link Control) - logical link control sublayer;
MAC (Media Access Control) - media access control sublayer.

MAC sublayer

In the ideology of multiple access to the Ethernet medium, data transmission has to be implemented on the broadcasting principle. This cannot but leave its imprint on the process of forming and recognizing frames. Let us consider the structure of the Ethernet DIX frame, as the one most frequently used for carrying IP traffic.

To identify devices, 6-byte MAC addresses are used, which the sender must indicate in the transmitted frame. The upper three bytes are the vendor code, and the lower three are the individual device identifier.

The equipment manufacturer is responsible for the uniqueness of the latter. The situation with vendor IDs is more complicated: a special organization within the IEEE maintains the list of vendors, allocating each of them its own range of addresses. Incidentally, getting your own record there is not at all expensive, only US$50. It can be noted that the creators of Ethernet technology, Xerox and DEC, occupy the first and last lines of the list respectively.

Such a mechanism exists to ensure that the physical address of any device is unique, and there is no situation of its accidental coincidence in the same local network.

It should be especially noted that on most modern adapters, you can programmatically set any address. This poses a definite threat to the performance of the network, and can be the cause of severe "mystical" malfunctions.

A MAC address can be written in various forms. The most common is hexadecimal, in which the bytes (pairs of hexadecimal digits) are separated by "-" or ":" characters. For instance, the Realtek network card installed on my home computer has the address 00:C0:DF:F7:A4:25.

The MAC address allows for Unicast, Multicast and Broadcast frame addressing.

Single addressing means that the source node directs its message to only one recipient, whose address is explicitly indicated.

In multicast mode, the frame is processed by those stations that have been configured to receive the given group address. The sign of such a message is a "1" in the least significant bit of the first byte of the destination MAC address (X1:XX:XX:XX:XX:XX). This format is quite convenient for "proprietary" interaction between devices, but in practice it is rarely used.

A broadcast message is another matter: the recipient's address is the special value FF-FF-FF-FF-FF-FF, and the transmitted frame is received and processed by all stations in the local network.
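
As a small illustration (a Python sketch, using the address mentioned above), the following classifies a destination MAC address by checking for the broadcast value and the group bit of the first byte.

```python
# A small sketch: classify a destination MAC address as unicast, multicast or
# broadcast.  Broadcast is FF:FF:FF:FF:FF:FF; a multicast (group) address has
# the least significant bit of the first byte set to 1.
def classify_mac(mac: str) -> str:
    octets = bytes(int(b, 16) for b in mac.replace("-", ":").split(":"))
    if octets == b"\xff" * 6:
        return "broadcast"
    if octets[0] & 0x01:
        return "multicast"
    return "unicast"

print(classify_mac("00:C0:DF:F7:A4:25"))   # unicast (the Realtek card above)
print(classify_mac("01:00:5E:00:00:01"))   # multicast
print(classify_mac("FF-FF-FF-FF-FF-FF"))   # broadcast
```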

For successful delivery, one destination address is clearly not enough. Additional service information is needed - the length of the data field, the type of network protocol, etc.

Preamble. Consists of 8 bytes. The first seven contain the same cyclic bit sequence (10101010), which is well suited for synchronizing transceivers. The last one (Start-of-frame delimiter, SFD), 1 byte (10101011), marks the beginning of the information part of the frame. This field is not counted when determining the frame length and is not included in the checksum.
Destination Address (DA) MAC address.
The source MAC address (SA). The first bit is always zero.
Length or type field (Length/Type, L/T). Two bytes that either explicitly indicate the length (in bytes) of the data field in the frame or indicate the data type. Below, in the description of LLC, it will be shown that this makes simple automatic recognition of different frame types possible.
Data. Frame payload, data from upper OSI layers. It can be from 0 to 1500 bytes in length.
For correct collision detection, a frame of at least 64 bytes is required. If the data field is less than 46 bytes, then the frame is padded with a padding field.
Checksum (Frame Check Sequence, FCS). 4 bytes, which contains the checksum of all information fields of the frame. The calculation is performed according to the CRC-32 algorithm by the sender and added to the frame. After receiving the frame into the buffer, the receiver performs a similar calculation. In case of discrepancy in the calculation results, an error is assumed during transmission, and the frame is destroyed.
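
To tie the field list together, here is a rough Python sketch of assembling a DIX frame in software. The preamble and SFD are generated by the hardware and are omitted; zlib.crc32 uses the same CRC-32 polynomial as the Ethernet FCS, but the exact on-the-wire bit ordering is left to the adapter, so treat the checksum here as illustrative.

```python
# A rough sketch of assembling an Ethernet DIX frame in software.
import struct
import zlib

def build_dix_frame(dst: bytes, src: bytes, ethertype: int, payload: bytes) -> bytes:
    if len(payload) < 46:                          # pad short payloads to the 46-byte minimum
        payload = payload + b"\x00" * (46 - len(payload))
    header = dst + src + struct.pack("!H", ethertype)
    fcs = struct.pack("<I", zlib.crc32(header + payload) & 0xFFFFFFFF)
    return header + payload + fcs                  # 64..1518 bytes without the preamble

frame = build_dix_frame(bytes(6 * [0xFF]),                 # broadcast destination
                        bytes.fromhex("00C0DFF7A425"),     # source MAC
                        0x0800,                            # IPv4 EtherType
                        b"hello")
print(len(frame))   # 64 (14-byte header + 46 bytes of padded data + 4-byte FCS)
```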

LLC sublayer

This sublayer provides the upper (network) layer with a single interface that is independent of the access method. In essence, it defines the logical structure of the Ethernet frame header.
...

Network adapters

Network adapters convert data packets into signals for transmission over the network. During manufacture, each network adapter is assigned a physical address, which is written into a special chip on the adapter board. On most network adapters the MAC address is burned into ROM. When the adapter is initialized, this address is copied into the computer's RAM. Since the MAC address is determined by the network adapter, replacing the adapter changes the physical address of the computer: it becomes the MAC address of the new network adapter.
For example, imagine a hotel. Suppose that room 207 has a lock that opens with key A, and room 410 has a lock that opens with key F. It is decided to swap the locks of rooms 207 and 410. After the swap, key A will open room 410, and key F will open room 207. In this example, the locks act as network adapters and the keys as MAC addresses: if the adapters are swapped, the MAC addresses are swapped with them.

To be continued...

------
Networking Basics
wiki
nag.ru

11.04.2007 17:46

Different approaches to performing switching
In general, the solution to each of the particular switching tasks - determining flows and corresponding routes, fixing routes in configuration parameters and tables of network devices, recognizing flows and transferring data between interfaces of one device, multiplexing / demultiplexing flows and dividing the transmission medium - is closely related to the solution of all the rest. The complex of technical solutions for the generalized switching problem in the aggregate constitutes the basis of any network technology. The fundamental properties of a particular network technology depend on the mechanism for laying routes, forwarding data and sharing communication channels.

Among the many possible approaches to solving the problem of switching subscribers in networks, two fundamental ones are distinguished:

    circuit switching;

    packet switching

Outwardly, both of these schemes correspond to the network structure shown in Fig. 1, but their capabilities and properties are different.

Figure 1. General structure of a network with subscriber switching.

Circuit-switched networks have a much richer history, descending from the earliest telephone networks. Packet-switched networks are relatively young; they appeared in the late 60s as a result of experiments with the first global computer networks. Each of these schemes has its own advantages and disadvantages, but according to long-term forecasts of many specialists, the future belongs to packet switching technology, as it is more flexible and universal.

Circuit switching
With circuit switching, the switching network forms a continuous composite physical channel between the end nodes out of intermediate channel sections connected in series by switches. The condition for several serially connected physical channels to form a single physical channel is equality of the data transmission rates in each of the constituent channels. Equal rates also mean that the switches of such a network do not have to buffer the transmitted data.

In a circuit-switched network, before transmitting data it is always necessary to perform a connection establishment procedure, during which the composite channel is created. Only after that can data transmission begin.

For example, if the network shown in Fig. 1 operates on circuit switching technology, then node 1, in order to transmit data to node 7, must first send a special connection establishment request to switch A, specifying the destination address 7. Switch A must choose a route for the future composite channel and then pass the request on to the next switch, in this case E. Switch E then sends the request to switch F, which in turn passes it to node 7. If node 7 accepts the connection request, it sends a response to the originating node along the already established channel, after which the composite channel is considered switched, and nodes 1 and 7 can exchange data over it.

Figure 2. Establishing a composite channel.

The circuit switching technique has its advantages and disadvantages.

Advantages of circuit switching

    Constant and known data transfer rate over the channel established between the end nodes. This allows the network user, based on a prior estimate of the bandwidth required for high-quality data transmission, to set up a channel of the required speed in the network.

    Low and consistent network latency. This allows high-quality transmission of delay-sensitive data (also called real-time traffic) - voice, video, various technological information.

Disadvantages of circuit switching

    Refusal to serve a connection request. Such a situation may arise because on some section of the network the connection would have to be established over a channel through which the maximum possible number of information flows already passes. A refusal can also occur at the end of the composite channel - for example, if the subscriber can support only one connection, which is typical of many telephone networks. When a second call arrives for a subscriber who is already talking, the network sends the caller short beeps - a busy signal.

    Irrational use of the bandwidth of the physical channels. The part of the bandwidth allocated to the composite channel after the connection is established is reserved for it the entire time, that is, until the connection is terminated. However, subscribers do not always need the channel bandwidth during the connection; for example, there may be pauses in a telephone conversation, and the interaction of computers is even more uneven in time. The impossibility of dynamically redistributing bandwidth is a fundamental limitation of a circuit-switched network, since the unit of switching here is the entire information flow.

    Mandatory delay before data transmission due to the connection setup phase.

The advantages and disadvantages of any network technology are relative. In certain situations the advantages come to the fore and the disadvantages become insignificant. Thus, the circuit switching technique works well when only telephone-conversation traffic needs to be transmitted. Here one can put up with the impossibility of "cutting out" pauses from the conversation and using the trunk physical channels between switches more rationally. But when very uneven computer traffic is transmitted, this irrationality comes to the fore.

Packet switching
This switching technique has been specifically designed to efficiently transfer computer traffic. The first steps towards the creation of computer networks based on circuit switching technology showed that this type of switching does not allow achieving high overall network throughput. Typical network applications generate traffic very unevenly, with high data rate ripple. For example, when accessing a remote file server, the user first looks at the contents of the directory of that server, which generates a small amount of data transfer. Then he opens the required file in a text editor, and this operation can create quite intensive data exchange, especially if the file contains voluminous graphic inclusions. After displaying several pages of the file, the user works with them locally for some time, which does not require any data transfer over the network at all, and then returns the modified copies of the pages to the server - and this again generates intensive data transfer over the network.

The traffic ripple ratio of an individual network user, equal to the ratio of the average data exchange rate to the maximum possible one, can reach 1:50 or even 1:100. If a channel were switched between the user's computer and the server for the described session, the channel would be idle most of the time. At the same time, the switching capabilities of the network would be tied up by this pair of subscribers and unavailable to other network users.
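
For illustration only, the ripple ratio can be computed from a made-up trace of per-second transfer rates:

```python
# A tiny sketch: the ripple ratio is the ratio of the average rate to the peak rate.
# The sample rates below are invented for illustration.
rates_kbit_s = [0, 0, 1800, 40, 0, 0, 0, 0, 2000, 0, 0, 0, 10, 0, 0, 0]

average = sum(rates_kbit_s) / len(rates_kbit_s)
peak = max(rates_kbit_s)
print(f"ripple ratio ~ 1:{peak / average:.0f}")
```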

With packet switching, all messages transmitted by the user are split at the source node into relatively small parts called packets. Recall that a message is a logically complete piece of data: a request to transfer a file, a response to this request containing the entire file, and so on. Messages can be of arbitrary length, from a few bytes to many megabytes. Packets, by contrast, can also be of variable length, but only within narrow limits, for example from 46 to 1500 bytes. Each packet is provided with a header that contains the address information required to deliver the packet to the destination node, as well as the packet number that will be used by the destination node to reassemble the message (Figure 3). Packets are transported over the network as independent blocks of information. Network switches receive packets from end nodes and, based on the address information, pass them to each other and, ultimately, to the destination node.

Figure: 3. Splitting the message into packets

Packet network switches differ from circuit switches in that they have internal buffer memory for temporary storage of packets, used if the output port of the switch is busy transmitting another packet at the moment a packet is received (Fig. 3). In this case, the packet stays for some time in the packet queue in the buffer memory of the output port, and when its turn comes it is forwarded to the next switch. Such a data transmission scheme smooths out traffic ripple on the backbone links between switches and thus uses them most efficiently to increase the throughput of the network as a whole.
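
The store-and-forward behaviour of an output port can be sketched as a simple FIFO model (illustrative Python, not an actual switch implementation):

```python
# A minimal sketch: packets arriving while the output port is busy wait in the
# port's FIFO buffer and are transmitted one after another.
from collections import deque

class OutputPort:
    def __init__(self, rate_bps: float):
        self.rate_bps = rate_bps
        self.queue = deque()          # buffered packets (sizes in bytes)
        self.busy_until = 0.0         # time when the current transmission ends

    def enqueue(self, now: float, size_bytes: int):
        self.queue.append(size_bytes)
        self.drain(now)

    def drain(self, now: float):
        # transmit queued packets back to back, starting when the port becomes free
        start = max(now, self.busy_until)
        while self.queue:
            size = self.queue.popleft()
            start += size * 8 / self.rate_bps
        self.busy_until = start

port = OutputPort(rate_bps=2_000_000)
port.enqueue(0.0, 1500)
port.enqueue(0.0, 1500)              # this packet waits for the first one
print(f"port free again at t = {port.busy_until * 1000:.1f} ms")   # ~12 ms
```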

Indeed, for a pair of subscribers, it would be most efficient to provide them with a switched communication channel for sole use, as is done in circuit-switched networks. In this case, the interaction time of this pair of subscribers would be minimal, since data would be transferred from one subscriber to another without delay. Subscribers are not interested in channel downtime during transmission pauses; it is important for them to solve their problem faster. A packet-switched network slows down the process of interaction of a particular pair of subscribers, since their packets can wait in the switches while other packets that have come to the switch earlier are transmitted along the backbone links.

However, the total amount of computer data transmitted by the network per unit of time with packet switching will be higher than with circuit switching. This is because the ripples of individual subscribers, in accordance with the law of large numbers, are distributed over time in such a way that their peaks do not coincide. Therefore the switches are constantly and fairly evenly loaded with work if the number of subscribers they serve is really large. Fig. 4 shows that the traffic from end nodes to switches is distributed very unevenly over time. However, the higher-level switches, which service the connections between the lower-level switches, are loaded more evenly, and the flow of packets in the backbone channels connecting the upper-level switches is utilized almost to capacity. Buffering smooths out the ripple, so the ripple ratio on the trunk channels is much lower than on the subscriber access channels; it can be 1:10 or even 1:2.

Figure: 4. Smoothing traffic ripple in a packet-switched network

The higher efficiency of packet-switched networks compared with circuit-switched networks (with equal bandwidth of the communication channels) was proved in the 1960s both experimentally and by simulation. An analogy with multiprogramming operating systems is appropriate here. Each individual program in such a system takes longer to execute than in a single-program system, where the program is allocated all of the processor time until it completes. However, the total number of programs executed per unit of time is greater in a multiprogramming system than in a single-program one.
A packet-switched network slows down the interaction of a specific pair of subscribers, but increases the bandwidth of the network as a whole.

Delays at the transmission source:

    time to transfer headers;

    delays caused by the intervals between transmission of each next packet.

Delays in each switch:

    packet buffering time;

    switching time, which consists of:

    • waiting time for a packet in the queue (variable);

    • time to move the packet to the output port.

The benefits of packet switching

    High overall network throughput for bursty traffic.

    The ability to dynamically redistribute the bandwidth of physical communication channels between subscribers in accordance with the real needs of their traffic.

Disadvantages of packet switching

    Uncertainty of the data transfer rate between network subscribers due to the fact that the delays in the buffer queues of the network switches depend on the overall network load.

    A variable value of the delay of data packets, which can be quite long during moments of instantaneous network congestion.

    Potential data loss due to buffer overflows.

Currently, methods are being actively developed and implemented to overcome these disadvantages, which are especially acute for delay-sensitive traffic requiring a constant transmission rate. Such methods are called Quality of Service (QoS) methods.

Packet-switched networks, in which methods of ensuring quality of service are implemented, allow simultaneous transmission of various types of traffic, including such important ones as telephone and computer. Therefore, packet switching methods are today considered the most promising for building a converged network that will provide comprehensive high-quality services for subscribers of any type. However, circuit switching methods cannot be discounted as well. Today, they not only successfully operate in traditional telephone networks, but are also widely used to form high-speed permanent connections in the so-called primary (backbone) networks of SDH and DWDM technologies, which are used to create trunk physical channels between switches of telephone or computer networks. In the future, it is quite possible that new switching technologies will appear, in one form or another combining the principles of packet and channel switching.

Switching messages
Message switching in its principles is close to packet switching. Message switching is understood as the transfer of a single block of data between transit computers on the network with temporary buffering of this block on the disk of each computer. A message, unlike a packet, has an arbitrary length, which is determined not by technological considerations, but by the content of the information that makes up the message.

Transit computers can be interconnected by either a packet-switched network or a circuit-switched network. A message (this can be, for example, a text document, a file with a program code, an e-mail) is stored in a transit computer on a disk, and for a rather long time if the computer is busy with other work or the network is temporarily overloaded.

This scheme usually transfers messages that do not require an immediate response, most often e-mail messages. The transfer mode with intermediate storage on disk is called store-and-forward.

Message switching unloads the network for the transmission of traffic that requires a fast response, such as WWW or file service traffic.

They usually try to reduce the number of transit computers. If the computers are connected to a packet-switched network, the number of intermediate computers is reduced to two. For example, a user sends a mail message to his outgoing mail server, which immediately tries to forward it to the recipient's incoming mail server. But if computers are interconnected by a telephone network, then several intermediate servers are often used, since direct access to the final server may not be possible at the moment due to overload of the telephone network (the subscriber is busy) or economically unprofitable due to high tariffs for long-distance telephone communication.

Message switching technology appeared in computer networks before packet switching technology, but was later supplanted by the latter as more efficient in terms of network throughput. Writing a message to disk takes considerable time, and in addition the presence of disks implies specialized computers as switches, which entails significant costs for organizing the network.
Today, message switching works only for some non-time-critical services, and most often on top of a packet-switched network, as an application layer service.

Comparison of switching methods

Comparison of circuit switching and packet switching

Circuit switching | Packet switching
Guaranteed bandwidth (throughput) for the interacting subscribers | The bandwidth available to subscribers is unknown; transmission delays are random
The network may refuse the subscriber a connection | The network is always ready to receive data from the subscriber
Real-time traffic is transmitted without delays | Network resources are used efficiently when transmitting bursty traffic
The address is used only at the connection establishment stage | The address is transmitted with every packet

Permanent and dynamic switching

Both packet-switched and circuit-switched networks can be divided into two classes:

    networks with dynamic switching;

    networks with permanent switching.

In networks with dynamic switching:

    it is allowed to establish a connection at the initiative of a network user;

    switching is performed only for the duration of the communication session, and then (at the initiative of one of the users) is broken;

    in general, a network user can connect to any other network user;

    the connection time between a pair of users with dynamic switching ranges from several seconds to several hours and ends after performing a certain job - transferring a file, viewing a page of text or an image, etc.

Examples of networks that support dynamic switching are public telephone networks, local area networks, and TCP/IP networks.

A network with permanent switching:

    allows a pair of users to order a connection for a long period of time;

    the connection is established not by users, but by the personnel serving the network;

    the period for which permanent switching is established is usually several months;

    the permanent switching mode in circuit-switched networks is often called the service of dedicated or leased channels;

    when a permanent connection through a network of switches is established using automated procedures initiated by service personnel, it is often referred to as a semi-permanent connection, in contrast to the manual configuration of each switch.

The most popular networks operating in the constant switching mode today are SDH technology networks, on the basis of which dedicated communication channels with a bandwidth of several gigabits per second are built.

Some types of networks support both modes of operation. For example, X.25 and ATM networks can provide a user with the ability to dynamically communicate with any other user on the network and at the same time send data over a permanent connection to a specific subscriber.

Packet-switched bandwidth
One of the differences between the packet switching method and the circuit switching method is the uncertainty of the bandwidth of the connection between two subscribers. With circuit switching, once the composite channel has been formed, the network bandwidth for transferring data between the end nodes is known: it is the bandwidth of the channel. After the delay associated with establishing the channel, data is transmitted at the maximum rate of the channel (Fig. 5). The message transmission time in a circuit-switched network is the sum of the signal propagation delay along the communication line and the message transmission delay. The propagation delay depends on the speed of propagation of electromagnetic waves in the particular physical medium, which ranges from 0.6 to 0.9 of the speed of light in a vacuum. The message transmission time is V/C, where V is the message volume in bits and C is the channel bandwidth in bits per second.

In a packet-switched network, the picture is quite different.

Figure: 5 Delays in data transmission in circuit-switched networks.

The procedure for establishing a connection in these networks, if used, takes about the same time as in circuit-switched networks, so we will only compare the data transfer time.

Figure: 6. Delays in data transmission in packet-switched networks.

Fig. 6 shows an example of data transmission in a packet-switched network. It is assumed that a message of the same size as the one in Fig. 5 is transmitted over the network, but this time it is divided into packets, each with its own header. The message transmission time in the packet-switched network is marked in the figure. Additional delays arise when this packetized message is transmitted over the network. First, there are delays at the transmission source, which, in addition to transmitting the message itself, spends extra time transmitting the headers; delays are also added by the intervals between the transmission of successive packets (this time is spent by the protocol stack on forming the next packet).

Secondly, extra time is spent in each switch. Here the delay is the sum of the packet buffering time (the switch cannot start transmitting a packet until it has received it completely into its buffer) and the switching time. The buffering time equals the time needed to receive the packet at the protocol's bit rate. The switching time is the sum of the waiting time of the packet in the queue and the time it takes to move the packet to the output port. While the packet transfer time is fixed and usually short (from a few microseconds to several tens of microseconds), the waiting time in the queue fluctuates within very wide limits and is not known in advance, since it depends on the current network load.

Let us make a rough estimate of the delay in data transmission in packet-switched networks compared to circuit-switched networks using a simple example. Let the test message to be transmitted in both types of networks have a size of 200 KB. The sender is at a distance of 5000 km from the recipient. The throughput of communication lines is 2 Mbit / s.

The data transmission time over the circuit-switched network is the sum of the signal propagation time, which for a distance of 5000 km can be estimated at roughly 25 ms (assuming the signal propagates at 2/3 of the speed of light), and the message transmission time, which at a bandwidth of 2 Mbit/s and a message length of 200 KB is approximately 800 ms. In the calculation, the exact value of K (2^10 = 1024) was rounded to 1000, and similarly M (2^20 = 1,048,576) was rounded to 1,000,000. Thus the data transfer is estimated at 825 ms.

It is clear that when this message is transmitted over a packet-switched network with the same total channel length and bandwidth from sender to receiver, the signal propagation time and the data transmission time will be the same, 825 ms. However, because of delays at intermediate nodes, the total data transfer time will increase. Let us estimate by how much. Assume the path from sender to receiver runs through 10 switches, and the original message is split into 1 KB packets, 200 packets in total. First, let us estimate the delay that occurs at the source node. Assume the share of service information in the packet headers relative to the total message volume is 10%. The additional delay associated with transmitting the packet headers is therefore 10% of the transmission time of the whole message, that is, 80 ms. If we take the interval between sending packets to be 1 ms, the additional losses due to the intervals amount to 200 ms. Thus, packetizing the message adds a delay of 280 ms at the originating node.

Each of the 10 switches introduces a switching delay that can range from fractions of a millisecond to thousands of milliseconds. In this example we assume that switching takes 20 ms on average. In addition, packet buffering delays arise as the message passes through each switch; for a 1 KB packet and a 2 Mbit/s line this delay is 4 ms. The total delay introduced by the 10 switches is thus approximately 240 ms. As a result, the additional delay introduced by the packet-switched network is 520 ms. Considering that the entire transfer in the circuit-switched network took 825 ms, this additional delay is significant.
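
The same rough estimate can be reproduced with a few lines of Python, using only the figures given in the text:

```python
# A sketch reproducing the rough estimate above (all inputs taken from the text).
V = 200 * 1000 * 8          # message size, bits (200 KB, with K rounded to 1000)
C = 2_000_000               # channel bandwidth, bit/s
t_prop = 0.025              # propagation delay over 5000 km at ~2/3 the speed of light, s

t_circuit = t_prop + V / C                          # circuit switching: 0.825 s

packets = 200                                       # 1 KB packets
t_headers = 0.10 * (V / C)                          # 10 % header overhead: 0.080 s
t_intervals = packets * 0.001                       # 1 ms between packets: 0.200 s
switches = 10
t_switching = switches * 0.020                      # 20 ms per switch: 0.200 s
t_buffering = switches * (1000 * 8 / C)             # 4 ms per switch: 0.040 s

extra = t_headers + t_intervals + t_switching + t_buffering
print(f"circuit-switched: {t_circuit*1000:.0f} ms, "
      f"extra packet-switched delay: {extra*1000:.0f} ms")
```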

Although this calculation is very rough, it explains why the transmission process for a given pair of subscribers in a packet-switched network is slower than in a circuit-switched network.

The uncertain bandwidth of a packet-switched network is a price to pay for its overall efficiency, with some infringement on the interests of individual subscribers. Likewise, in a multiprogramming operating system, the execution time of an application cannot be predicted, since it depends on the number of other applications with which the application shares the processor.

Network efficiency is affected by the size of the packets transmitted by the network. Packet sizes that are too large bring a packet-switched network closer to a circuit-switched network, so the efficiency of the network decreases. In addition, large packet sizes increase the buffering time on each switch. Too small packets significantly increase the share of overhead, since each packet contains a header of a fixed length, and the number of packets into which messages are split will increase dramatically with decreasing packet size. There is some "golden mean" when the maximum efficiency of the network is ensured, but this ratio is difficult to determine precisely, since it depends on many factors, including those that change during the operation of the network. Therefore, the developers of protocols for packet-switched networks choose the limits in which the packet size, or rather its data field, can be located, since the header, as a rule, has a fixed length. Usually, the lower limit of the data field is chosen equal to zero, which makes it possible to transmit service packets without user data, and the upper limit does not exceed 4 KB. When transferring data, applications try to occupy the maximum size of the data field in order to complete the exchange faster, and small packets are usually used for short service messages containing, for example, confirmation of packet delivery.

When choosing a packet size, you must also consider the channel bit error rate. On unreliable links, it is necessary to reduce the packet sizes, as this reduces the amount of retransmitted data in case of packet corruption.
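
The trade-off can be illustrated numerically. In the sketch below the header size and the bit error rate are assumed values chosen to make the "golden mean" visible; they are not figures from the text.

```python
# A hedged sketch of the packet-size trade-off: header overhead falls as packets
# grow, while the expected cost of retransmissions after bit errors rises.
HEADER = 18          # bytes of header per packet (assumed)
BER = 1e-5           # bit error rate of an unreliable link (assumed)

def cost_per_payload_bit(payload_bytes: int) -> float:
    frame_bits = (payload_bytes + HEADER) * 8
    p_frame_error = 1 - (1 - BER) ** frame_bits        # probability the frame is corrupted
    expected_transmissions = 1 / (1 - p_frame_error)   # retransmit until received intact
    return expected_transmissions * frame_bits / (payload_bytes * 8)

for size in (64, 512, 1500, 4096, 16384):
    print(f"{size:>6} B payload -> {cost_per_payload_bit(size):.3f} bits sent per payload bit")
```

For the assumed numbers the minimum lies between a few hundred bytes and roughly the Ethernet data field size; with a better link the optimum shifts toward larger packets.
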
Ethernet is an example of standard packet switching technology

Let us consider how the general approaches to solving the problems of building networks described above are embodied in the most popular network technology - Ethernet. (Note that we will not now consider the technology itself in detail - we will postpone this important issue until the next course, and today we will dwell only on some fundamental points that illustrate a number of the basic concepts already considered.)
A networking technology is a coordinated set of standard protocols and of the software and hardware implementing them (for example, network adapters, drivers, cables, and connectors) that is sufficient to build a computer network.

The epithet "sufficient" underlines the fact that we are talking about the minimum set of tools with which you can build a workable network. This network can be improved, for example, by allocating subnets in it, which will immediately require, in addition to the Ethernet standard protocols, the use of the IP protocol, as well as special communication devices - routers. The improved network will most likely be more reliable and faster, but at the expense of add-ons to the Ethernet technology that formed the basis of the network.

The term "network technology" is most often used in the narrow sense described above, but sometimes its extended interpretation is also used as any set of tools and rules for building a network, for example, "end-to-end routing technology", "technology of creating a secure channel", "technology of IP networks ".

The protocols on which a network of a certain technology (in the narrow sense) is built were created specifically to work together, so no additional effort is required from the network developer to organize their interaction. Network technologies are sometimes called basic technologies, meaning that the basis of any network is built on them. Examples of basic network technologies include, along with Ethernet, such well-known local area network technologies as Token Ring and FDDI, and such wide area network technologies as X.25 and frame relay. To obtain a workable network in this case, it is enough to purchase software and hardware related to one basic technology - network adapters with drivers, hubs, switches, cabling, etc. - and connect them in accordance with the requirements of the standard for that technology.

So, Ethernet network technology is characterized by:

    packet switching;

    typical topology "common bus";

    flat numeric addressing;

    shared transmission medium.

The basic principle behind Ethernet is a random method of accessing shared media. This medium can be thick or thin coaxial cable, twisted pair, fiber optic or radio waves (by the way, the first network built on the principle of random access to a shared medium was the Aloha radio network of the University of Hawaii).

The Ethernet standard strictly fixes the topology of electrical connections. Computers are connected to the shared environment according to a typical shared bus structure (Figure 7). Using a time-shared bus, any two computers can exchange data. Access control to the communication line is carried out by special controllers - Ethernet network adapters. Each computer, or rather, each network adapter, has a unique address. Data transfer occurs at a speed of 10 Mbit / s. This value is the bandwidth of the Ethernet network.

Figure: 7. Ethernet network.

The essence of the random access method is as follows. A computer on an Ethernet network can transmit data over the network only if the network is free, that is, if no other computer is currently exchanging data. Therefore, an important part of Ethernet technology is the procedure for determining media availability.

After the computer has made sure that the network is free, it starts transmitting and thus "captures" the medium. The time of exclusive use of the shared medium by one node is limited by the transmission time of one frame. A frame is a unit of data that is exchanged between computers on an Ethernet network. The frame has a fixed format and, along with the data field, contains various service information, for example, the recipient's address and the sender's address.

The Ethernet network is designed so that when a frame enters the shared data transmission medium, all network adapters begin to simultaneously receive this frame. All of them parse the destination address located in one of the initial fields of the frame, and if this address matches their own, the frame is placed in the internal buffer of the network adapter. Thus, the destination computer receives the data intended for it.

A situation may arise when several computers simultaneously decide that the network is free and begin to transmit information. This situation, called a collision, prevents the correct transmission of data over the network. The Ethernet standard provides an algorithm for detecting and correctly handling collisions. The likelihood of a collision depends on the amount of network traffic.

After detecting a collision, network adapters that tried to transmit their frames stop transmitting and, after a pause of a random length, try again to access the medium and transmit the frame that caused the collision.
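
The random pause can be sketched in the spirit of Ethernet's truncated binary exponential backoff (the 51.2 µs slot time is that of classic 10 Mbit/s Ethernet; the fact that the adapter gives up after 16 attempts is not modeled here):

```python
# A sketch of the random pause after a collision: after the n-th consecutive
# collision the adapter waits a random number of slot times in [0, 2**min(n, 10) - 1].
import random

SLOT_TIME_US = 51.2        # slot time of classic 10 Mbit/s Ethernet, microseconds

def backoff_delay_us(collision_count: int) -> float:
    slots = random.randint(0, 2 ** min(collision_count, 10) - 1)
    return slots * SLOT_TIME_US

for attempt in range(1, 5):
    print(f"after collision {attempt}: wait {backoff_delay_us(attempt):.1f} us")
```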

Key Benefits of Ethernet Technology
1. The main advantage of Ethernet networks that made them so popular is their cost effectiveness. To build a network, it is enough to have one network adapter for each computer plus one physical segment of a coaxial cable of the required length.
2. In addition, in Ethernet networks, fairly simple algorithms for access to the medium, addressing and data transmission are implemented. The simplicity of the network logic leads to a simplification and, accordingly, a decrease in the cost of network adapters and their drivers. For the same reason, Ethernet adapters are highly reliable.
3. And finally, one more remarkable property of Ethernet networks is their good scalability, that is, the ability to connect new nodes.

Other basic networking technologies such as Token Ring and FDDI, while distinctly different, have much in common with Ethernet. First of all, this is the use of regular fixed topologies ("hierarchical star" and "ring"), as well as shared data transmission media. Significant differences between one technology and another are related to the peculiarities of the used method of access to the shared environment. So, the differences between Ethernet technology and Token Ring technology are largely determined by the specifics of the media separation methods embedded in them - a random algorithm for access to Ethernet and an access method by transferring a token in Token Ring.

Datagram transmission

There are two classes of packet transmission mechanisms currently used in packet-switched networks:

    datagram transmission;

    virtual channels.

Examples of networks that implement the datagram transmission mechanism are Ethernet, IP, and IPX networks. X.25, frame relay, and ATM networks carry data using virtual circuits. We will first consider the basic principles of the datagram approach.

The datagram method of data transmission is based on the fact that all transmitted packets are processed independently of one another, packet by packet. Whether a packet belongs to a particular flow between two end nodes and two applications running on those nodes is not taken into account in any way.

The next hop — for example, an Ethernet switch or an IP / IPX router — is selected based solely on the destination host address contained in the packet header. The decision about which node to send the incoming packet to is made on the basis of a table containing a set of destination addresses and address information that uniquely identifies the next (transit or final) node. Such tables have different names - for example, for Ethernet networks they are usually called the forwarding table, and for network protocols such as IP and IPX, they are called routing tables. In what follows, for simplicity, we will use the term "routing table" as a generic name for this kind of tables used for datagram transmission based only on the destination address of the end node.

The routing table may contain several entries for the same destination address, pointing to different next-router addresses. This approach is used to improve network performance and reliability. In the example in Fig. 8, packets arriving at router R1 for the destination node with address N2, A2 are distributed between the next two routers, R2 and R3, for load-balancing purposes; this reduces the load on each of them, which in turn shortens queues and speeds up delivery. Some "fuzziness" of the paths taken through the network by packets with the same destination address is a direct consequence of the per-packet independence inherent in datagram protocols. Packets traveling to the same destination can also reach it by different routes because of changes in the state of the network, for example the failure of intermediate routers.
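
A minimal sketch of per-packet forwarding with several possible next hops (the router names are taken from Fig. 8; the balancing policy is an arbitrary illustration):

```python
# Datagram forwarding: the decision is made per packet, using only the
# destination address; several entries for one destination spread the load.
import random

routing_table = {
    "N2,A2": ["R2", "R3"],       # two possible next routers for the same destination
    "N3,A3": ["R5"],
}

def next_hop(destination: str) -> str:
    candidates = routing_table[destination]
    return random.choice(candidates)     # a trivial balancing policy for the sketch

for _ in range(4):
    print("packet for N2,A2 ->", next_hop("N2,A2"))
```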

Figure: 8. Datagram principle of packet transmission.
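To make the datagram principle more concrete, below is a minimal sketch of forwarding based only on the destination address, with several next-hop entries for one destination; the router names, addresses and the round-robin choice are invented for illustration and do not reproduce any specific protocol.

from itertools import cycle

# Hypothetical routing table of router R1: destination network -> possible next hops.
# Several entries for the same destination ("N2") allow the load to be balanced.
routing_table = {
    "N2": ["R2", "R3"],
    "N3": ["R5"],
}

# One round-robin iterator per destination spreads packets over the alternative next hops.
next_hop_choice = {dest: cycle(hops) for dest, hops in routing_table.items()}

def forward(packet):
    """Choose the next node using only the destination address from the packet header."""
    dest_network = packet["dest"].split(",")[0].strip()   # "N2, A2" -> "N2"
    return next(next_hop_choice[dest_network])

# Every packet is handled independently, so packets of one flow may take different paths.
for i in range(4):
    packet = {"src": "N1, A1", "dest": "N2, A2", "payload": "data %d" % i}
    print(packet["payload"], "->", forward(packet))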

Such a feature of the datagram mechanism as the blurring of traffic paths through the network can in some cases also be a disadvantage, for example when the packets of a particular session between two end nodes must be given a specified quality of service. Modern QoS support methods work best when the traffic that requires service guarantees always passes through the same intermediate nodes.
Virtual circuits in packet-switched networks

The mechanism of virtual circuits (also called virtual channels) creates stable traffic paths through a packet-switched network. This mechanism takes into account the existence of data flows in the network.

If the goal is to lay a single path through the network for all packets of a flow, then a necessary (but not always the only) attribute of such a flow is a common entry point into the network and a common exit point from it for all of its packets. It is for the transmission of such flows that virtual channels are created. Figure 9 shows a fragment of a network in which two virtual channels are laid. The first goes from the end node with the address N1, A1 to the end node with the address N2, A2 through the intermediate switches R1, R3, R7 and R4. The second carries data along the path N3, A3 - R5 - R7 - R4 - N2, A2. Several virtual channels can be laid between two end nodes, and their routes through the transit nodes may coincide completely or differ.

Figure: 9. The principle of the virtual channel.

The network only provides the ability to transmit traffic along a virtual channel; which flows are sent over these channels is decided by the end nodes themselves. A node can use the same virtual circuit to carry all flows that share its endpoints, or only some of them. For example, one virtual circuit can be used for real-time traffic and another for e-mail traffic. In the latter case the different virtual circuits will have different quality-of-service requirements, and these are easier to satisfy than when traffic with different QoS requirements is carried over a single virtual circuit.

An important feature of virtual circuit networks is the use of local packet labels when making the forwarding decision. Instead of a rather long destination node address (its length must allow every node and subnet in the network to be identified uniquely; ATM technology, for example, operates with 20-byte addresses), a local label is used, one that changes from node to node and marks all packets moving along a specific virtual channel. This label is called differently in different technologies: in X.25 it is the logical channel number (LCN), in frame relay the data link connection identifier (DLCI), and in ATM the virtual channel identifier (VCI). Its purpose, however, is the same everywhere: an intermediate node, called a switch in these technologies, reads the label value from the header of an incoming packet and looks it up in its switching table, which indicates to which output port the packet should be forwarded. The switching table contains entries only for the virtual circuits passing through this switch, not for all nodes in the network (or subnets, if hierarchical addressing is used). In a large network, the number of virtual circuits laid through a node is usually far smaller than the number of nodes and subnets, so the switching table is much smaller than a routing table; looking it up therefore takes much less time and requires less computing power from the switch.

The virtual channel identifier (this is the name of the label that will be used below) is also much shorter than an end node address, for the same reason; as a result, the overhead of the packet header, which now carries only the identifier instead of a long address, is much smaller.
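The following sketch illustrates label switching in a single switch; the port numbers, label values and table contents are invented and do not correspond to real X.25, frame relay or ATM configurations.

# Hypothetical switching table of a single switch: (input port, incoming label) ->
# (output port, outgoing label). It describes only the virtual circuits that pass
# through this switch, not all nodes of the network.
switching_table = {
    (1, 105): (3, 22),
    (2, 17):  (3, 48),
}

def switch(in_port, packet):
    """Forward a packet by its short local label rather than by a full destination address."""
    out_port, out_label = switching_table[(in_port, packet["label"])]
    forwarded = dict(packet, label=out_label)   # the label is rewritten at every switch
    return out_port, forwarded

print(switch(1, {"label": 105, "payload": "hello"}))   # (3, {'label': 22, 'payload': 'hello'})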

Information transfer is a term covering a great many physical processes by which information moves through space. Each of these processes involves a source and a receiver of data, a physical carrier of the information, and a channel (medium) over which it is transmitted.

Information transfer process

The data to be transferred takes the form of messages transmitted from their sources to receivers, with information transmission channels located between them. Special converter devices (encoders) form physical data carriers, signals, from the content of the messages. The signals undergo a number of transformations, including encoding, compression and modulation, and are then sent into the communication lines. Having passed through them, the signals undergo the inverse transformations (demodulation, decompression and decoding), as a result of which the original messages, perceived by the receivers, are recovered from them.

Information messages

A message is a description of a phenomenon or object, expressed as a collection of data with a recognizable beginning and end. Some messages, such as speech and music, are continuous functions of time (for example, of sound pressure). In telegraph communication, a message is the text of a telegram in the form of an alphanumeric sequence. A television message is a sequence of frames that the camera lens "sees" and captures at the frame rate. The overwhelming majority of messages transmitted nowadays through information transmission systems are numerical arrays, text, graphics, and audio and video files.

Information signals

Transmission of information is possible only if it has a physical carrier whose characteristics change according to the content of the transmitted message, in such a way that these changes pass through the transmission channel with minimal distortion and can be recognized by the receiver. These changes in the physical carrier form an information signal.

Today, information is transmitted and processed using electrical signals in wired and radio communication channels, as well as thanks to optical signals in fiber-optic communication lines.

Analog and digital signals

A well-known example of an analog signal, i.e. one that changes continuously in time, is the voltage taken from a microphone, which carries a speech or music message. It can be amplified and transmitted over wired channels to the sound-reproduction system of a concert hall, carrying speech and music from the stage to the audience in the gallery.

If, in accordance with the magnitude of the voltage at the microphone output, the amplitude or frequency of high-frequency electrical oscillations in a radio transmitter is continuously varied in time, an analog radio signal can be transmitted over the air. A television transmitter in an analog television system generates an analog signal in the form of a voltage proportional to the current brightness of the image elements perceived by the camera lens.

However, if the analog voltage from the microphone output is passed through an analog-to-digital converter (ADC), its output is no longer a continuous function of time but a sequence of samples of this voltage taken at regular intervals at the sampling frequency. In addition, the ADC quantizes the initial voltage by level, replacing the entire possible range of its values with a finite set of values determined by the number of binary bits of its output code. As a result, a continuous physical quantity (in this case, the voltage) is turned into a sequence of digital codes (is digitized) and can then be stored, processed and transmitted through information transmission networks in digital form. This significantly increases the speed and noise immunity of these processes.
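A minimal sketch of this digitization step is given below; the sine-wave "microphone voltage", the sampling rate and the bit depth are arbitrary illustrative choices.

import math

def digitize(signal, duration_s, f_s, bits, v_max):
    """Sample an analog signal at rate f_s and quantize each sample to 2**bits levels."""
    levels = 2 ** bits
    step = 2.0 * v_max / levels              # quantization step over the range -v_max..+v_max
    codes = []
    for i in range(int(duration_s * f_s)):
        v = signal(i / f_s)                  # sampling at regular intervals
        code = int((v + v_max) / step)       # quantization by level
        codes.append(min(max(code, 0), levels - 1))
    return codes

# A 1 kHz tone standing in for the microphone voltage, sampled at 8 kHz with 4-bit codes.
tone = lambda t: 0.8 * math.sin(2 * math.pi * 1000 * t)
print(digitize(tone, duration_s=0.002, f_s=8000, bits=4, v_max=1.0))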

Communication channels

This term usually refers to the set of technical means involved in transferring data from the source to the receiver, together with the medium between them. Using typical means of information transmission, the structure of such a channel can be represented by the following sequence of transformations:

AI - PS - (KI) - KK - M - LPI - DM - DK - (DI) - PI

AI is a source of information: a person or other living creature, book, document, image on a non-electronic medium (canvas, paper), etc.

PS - converter of information message into information signal, performing the first stage of data transmission. Microphones, television and video cameras, scanners, faxes, PC keyboards, etc. can act as PS.

KI - information encoder, which reduces the volume of (compresses) the information in order to increase its transmission rate or to reduce the frequency band required for transmission. This link is optional, which is indicated by the parentheses.

KK - channel encoder, used to improve the noise immunity of the information signal.

M - signal modulator, which changes the characteristics of an intermediate carrier signal depending on the value of the information signal. A typical example is amplitude modulation of a high-frequency carrier signal by the value of a low-frequency information signal.

LPI is an information transmission line representing a combination of a physical medium (for example, an electromagnetic field) and technical means for changing its state in order to transmit a carrier signal to a receiver.

DM - demodulator, which separates the information signal from the carrier signal. Present only if M is present.

DK - channel decoder for detecting and/or correcting errors in the information signal that have arisen on the LPI. Present only if KK is present.

DI - information decoder. Present only if KI is present.

PI - receiver of information (computer, printer, display, etc.).

If the transmission of information is two-way (a duplex channel), then on both sides of the LPI there are modem blocks (MOdulator-DEModulator), which combine the M and DM links, as well as codec blocks (COder-DECoder), which combine the encoders (KI and KK) and decoders (DI and DK).
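The chain above can be imitated with a toy pipeline; the repetition code, the two-level modulation and the Gaussian noise are deliberate simplifications chosen only to show the order of the transformations, not a description of any real modem or codec.

import random

def channel_encode(bits):               # KK: add redundancy (here a crude 3x repetition code)
    return [b for bit in bits for b in (bit, bit, bit)]

def modulate(bits):                     # M: map bits onto carrier levels +1 / -1
    return [1.0 if b else -1.0 for b in bits]

def line(symbols, noise=0.8):           # LPI: the transmission line adds noise
    return [s + random.gauss(0, noise) for s in symbols]

def demodulate(symbols):                # DM: threshold decision
    return [1 if s > 0 else 0 for s in symbols]

def channel_decode(bits):               # DK: majority vote inside each group of three
    return [1 if sum(bits[i:i + 3]) >= 2 else 0 for i in range(0, len(bits), 3)]

message = [1, 0, 1, 1, 0, 0, 1, 0]
received = channel_decode(demodulate(line(modulate(channel_encode(message)))))
print(message, received, "ok" if received == message else "residual errors")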

Characteristics of transmission channels

The main distinguishing features of the channels are bandwidth and noise immunity.

In the channel, the information signal is exposed to noise and interference. These can be caused by natural sources (for example, atmospheric interference in radio channels) or be created deliberately by an adversary.

The interference immunity of transmission channels is increased by using various kinds of analog and digital filters to separate information signals from noise, as well as special message transmission methods that minimize the effect of noise. One of these methods is to add extra characters that do not carry useful content, but help control the correctness of the message, as well as correct errors in it.
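One of the simplest examples of such extra characters is a parity bit; the sketch below is purely illustrative and is not tied to any particular transmission method.

def add_parity(bits):
    """Append one even-parity bit: the total number of ones becomes even."""
    return bits + [sum(bits) % 2]

def parity_ok(bits_with_parity):
    """True if no single-bit error is detected."""
    return sum(bits_with_parity) % 2 == 0

frame = add_parity([1, 0, 1, 1, 0, 1, 0])
print(parity_ok(frame))      # True: received correctly
frame[2] ^= 1                # a single bit corrupted by noise
print(parity_ok(frame))      # False: the error is detected (though not located or corrected)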

The channel capacity is the maximum number of binary symbols (bits) that the channel can transmit per second in the absence of interference. For different channels it ranges from a few kbps to hundreds of Mbps and is determined by their physical properties.
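For a channel with noise, the limiting capacity is usually estimated with the Shannon-Hartley formula (it is not stated in the text above but uses the same notion of capacity); here B is the channel bandwidth in hertz and S/N is the signal-to-noise power ratio:

C = B \log_2\left(1 + \frac{S}{N}\right)

For example, a voice-grade channel with B = 3100 Hz and S/N = 1000 (30 dB) gives C = 3100 * log2(1001), which is approximately 30.9 kbit/s.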

Information transfer theory

Claude Shannon is the author of a special theory of coding for transmitted data, which provides methods of dealing with noise. One of the main ideas of this theory is that the digital code transmitted over information transmission lines must contain redundancy. This makes it possible to recover the message if part of the code is lost or corrupted during transmission. Such codes (digital information signals) are called error-correcting codes. However, the redundancy of the code cannot be made too large: it delays the transmission of information and raises the cost of communication systems.

Digital signal processing

Another important component of the theory of information transmission is a system of methods for digital signal processing in transmission channels. These methods include algorithms for digitizing the original analog information signals with a certain sampling rate determined on the basis of Shannon's theorem, as well as methods for generating noise-immune carrier signals for transmission over communication lines and digital filtering of received signals in order to separate them from interference.
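The sampling theorem mentioned here is usually written as a condition relating the sampling frequency f_s to the highest frequency F_max present in the spectrum of the analog signal:

f_s \ge 2 F_{\max}

For example, telephone speech limited to about 3.4 kHz is conventionally sampled at 8 kHz, which satisfies the condition with some margin.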

The Physical layer deals with the transfer of bits over physical communication channels, such as coaxial cable, twisted-pair cable, fiber-optic cable or a digital terrestrial link. This layer is concerned with the characteristics of physical data transmission media, such as bandwidth, noise immunity, characteristic (wave) impedance and others. At the same layer, the characteristics of the electrical signals carrying discrete information are defined, for example the steepness of the pulse edges, the voltage or current levels of the transmitted signal, the type of coding, and the signal transmission rate. In addition, the types of connectors and the purpose of each contact are standardized here.

Physical layer functions are implemented in all devices connected to the network. On the computer side, the physical layer functions are performed by a network adapter or serial port.

An example of a physical layer protocol is the 10Base-T specification of Ethernet technology, which defines category 3 unshielded twisted-pair cable with a characteristic impedance of 100 Ohm, an RJ-45 connector, a maximum physical segment length of 100 meters, Manchester coding for representing data on the cable, and some other characteristics of the medium and the electrical signals.
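A small sketch of Manchester coding is given below; the convention used (a logical 1 as a low-to-high transition in the middle of the bit interval, a 0 as high-to-low) is one common variant and is shown only to illustrate the idea of a self-synchronizing code.

def manchester_encode(bits):
    """Each bit occupies two half-intervals with a guaranteed transition in the middle."""
    signal = []
    for b in bits:
        signal.extend((0, 1) if b else (1, 0))   # 1: low-to-high, 0: high-to-low
    return signal

def manchester_decode(signal):
    return [1 if signal[i] < signal[i + 1] else 0 for i in range(0, len(signal), 2)]

bits = [1, 0, 1, 1, 0]
line_signal = manchester_encode(bits)
print(line_signal)                             # [0, 1, 1, 0, 0, 1, 0, 1, 1, 0]
print(manchester_decode(line_signal) == bits)  # True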

Link layer

At the physical layer, bits are simply transferred. It does not take into account that in some networks, in which the communication lines are used (shared) alternately by several pairs of interacting computers, the physical transmission medium may be busy. Therefore, one of the tasks of the Data Link layer is to check the availability of the transmission medium. Another task of the data link layer is to implement error detection and correction mechanisms. To do this, at the data link layer the bits are grouped into sets called frames. The link layer ensures the correct transmission of each frame by placing a special bit sequence at the beginning and end of the frame to delimit it, and also calculates a checksum by processing all bytes of the frame in a certain way and adds the checksum to the frame. When a frame arrives over the network, the receiver computes the checksum of the received data again and compares the result with the checksum from the frame. If they match, the frame is considered correct and is accepted. If the checksums do not match, an error is recorded. The link layer can not only detect errors, but also correct them by retransmitting damaged frames. It should be noted that the error correction function is not mandatory for the link layer, so it is absent in some protocols of this layer, for example in Ethernet and frame relay.
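The following sketch imitates this framing and checksum procedure; the frame delimiter and the choice of CRC-32 as the checksum are assumptions made for the example and do not reproduce the exact format of any particular link-layer protocol.

import zlib

FLAG = b"\x7e"   # a made-up delimiter marking the beginning and end of the frame

def build_frame(payload):
    """Append a CRC-32 checksum and surround the frame with delimiters."""
    crc = zlib.crc32(payload).to_bytes(4, "big")
    return FLAG + payload + crc + FLAG

def frame_ok(frame):
    """Recompute the checksum on receipt and compare it with the transmitted one."""
    body = frame[1:-1]
    payload, crc = body[:-4], body[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == crc

frame = build_frame(b"hello, link layer")
print(frame_ok(frame))                         # True: the frame is accepted
damaged = frame[:5] + b"\x00" + frame[6:]      # a bit error somewhere on the line
print(frame_ok(damaged))                       # False: an error is recorded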

The link-layer protocols used in local networks are built around a particular structure of connections between computers and particular methods of addressing them. Although the link layer ensures the delivery of a frame between any two nodes of a local network, it does so only in a network with a completely defined topology of links, exactly the topology for which it was designed. The typical topologies supported by LAN link-layer protocols are the common bus, the ring and the star, as well as structures derived from them with the help of bridges and switches. Examples of link-layer protocols are Ethernet, Token Ring, FDDI and 100VG-AnyLAN.

In local area networks, link layer protocols are used by computers, bridges, switches, and routers. In computers, link layer functions are implemented jointly by network adapters and their drivers.

In wide area networks, which rarely have a regular topology, the data link layer often allows messages to be exchanged only between two neighboring computers connected by a single link. Examples of point-to-point protocols (as such protocols are often called) are the widely used PPP and LAP-B protocols. In such cases, network layer facilities are used to deliver messages between end nodes across the entire network. This is how X.25 networks are organized. Sometimes in wide area networks, it is difficult to isolate link layer functions in their pure form, since in the same protocol they are combined with network layer functions. ATM and frame relay protocols are examples of this approach.

In general, the data link layer provides a very powerful and complete set of functions for sending messages between network nodes. In some cases, link-layer protocols turn out to be self-sufficient transport facilities and can allow application-layer protocols or applications to work directly on top of them, without involving the means of the network and transport layers. For example, there is an implementation of the SNMP network management protocol directly over Ethernet, although by default this protocol runs over the IP network protocol and the UDP transport protocol. Naturally, the use of such an implementation is limited: it is not suitable for composite networks built from different technologies, for example Ethernet and X.25, nor even for a network in which Ethernet is used in all segments but there are loop-like connections between the segments. But in a two-segment Ethernet network connected by a bridge, an implementation of SNMP directly over the link layer will be quite workable.

Nevertheless, to ensure high-quality transport of messages in networks of any topologies and technologies, the functions of the link layer are not enough, therefore, in the OSI model, the solution of this problem is assigned to the next two levels - network and transport.

Network layer

The Network layer serves to form a single transport system that unites several networks, and these networks can use completely different principles for transferring messages between end nodes and have an arbitrary structure of connections. The functions of the network layer are quite diverse. Let's start their consideration by the example of combining local networks.

Data link-layer protocols of local area networks ensure data delivery between any nodes only in a network with a corresponding typical topology, for example, a hierarchical star topology. This is a very severe limitation that does not allow building networks with a developed structure, for example, networks that combine several enterprise networks into a single network, or highly reliable networks in which there are redundant connections between nodes. It would be possible to complicate the link-layer protocols to maintain loop-like redundant links, but the principle of separation of duties between the layers leads to a different solution. In order, on the one hand, to preserve the simplicity of data transfer procedures for typical topologies, and on the other to allow the use of arbitrary topologies, an additional network layer is introduced.

At the network level, the term network itself is given a specific meaning. Here, a network is understood as a set of computers interconnected in accordance with one of the standard typical topologies and using one of the link-layer protocols defined for that topology for data transmission.

Within a network, data delivery is provided by the appropriate link layer, while the network layer is responsible for the delivery of data between networks; it supports the ability to choose the correct route for transmitting messages even when the structure of the links between the constituent networks differs from that adopted in the link-layer protocols. Networks are interconnected by special devices called routers. A router is a device that collects information about the topology of the interconnections and, based on it, forwards network-layer packets towards the destination network. To send a message from a sender on one network to a recipient on another network, a certain number of transit transfers between networks, or hops, have to be made, each time choosing a suitable route. Thus, a route is a sequence of routers through which a packet passes.

Fig. 1.27 shows four networks connected by three routers. There are two routes between nodes A and B in this network: the first through routers 1 and 3, and the second through routers 1, 2 and 3.

Figure: 1.27. Example of a composite network

The problem of choosing the best path is called routing, and its solution is one of the main tasks of the network layer. This problem is compounded by the fact that the shortest path is not always the best. Often the criterion for choosing a route is the time of data transmission along that route; it depends on the bandwidth of the communication channels and the traffic rate, which can change over time. Some routing algorithms try to adapt to changes in load, while others make decisions based on averages over time. Route selection can be carried out according to other criteria, for example, transmission reliability.
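As a rough illustration of route selection by cost rather than by hop count, the sketch below runs a least-cost (Dijkstra) search over a made-up topology; the link costs are invented and only stand in for a metric such as delay.

import heapq

# A made-up composite network (in the spirit of Fig. 1.27); the link costs could be
# transmission delays, and the numbers are chosen only for illustration.
links = {
    "A":  {"R1": 1},
    "R1": {"R2": 2, "R3": 5},
    "R2": {"R3": 1},
    "R3": {"B": 1},
    "B":  {},
}

def best_route(src, dst):
    """Dijkstra's algorithm: the least-cost sequence of routers from src to dst."""
    queue = [(0, src, [src])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, link_cost in links[node].items():
            if neighbor not in visited:
                heapq.heappush(queue, (cost + link_cost, neighbor, path + [neighbor]))
    return None

# The route with more hops wins here because its total cost is lower.
print(best_route("A", "B"))   # (5, ['A', 'R1', 'R2', 'R3', 'B'])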

In the general case, the functions of the network layer are broader than the functions of transferring messages over links with a non-standard structure, which we have now considered using the example of combining several local networks. The network layer also solves the problems of harmonizing different technologies, simplifying addressing in large networks, and creating reliable and flexible barriers to unwanted traffic between networks.

Network-layer messages are usually called packets. When organizing packet delivery at the network level, the concept of a network number is used. In this case, the recipient's address consists of an upper part, the network number, and a lower part, the node number within that network. All nodes of one network must have the same upper part of the address, so the term "network" at the network level can be given another, more formal definition: a network is a collection of nodes whose network addresses contain the same network number.
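The division of an address into a network number and a node number can be shown with IP addressing, where a mask separates the two parts; the specific address and mask below are chosen only as an example.

import ipaddress

# With IP addressing, the mask separates the upper part of the address (the network
# number) from the lower part (the node number); the address below is just an example.
iface = ipaddress.ip_interface("192.168.5.27/24")
network_number = iface.network                              # 192.168.5.0/24
node_number = int(iface.ip) & int(iface.network.hostmask)   # 27
print(network_number, node_number)

# Two nodes belong to the same network if their addresses share the network number.
print(ipaddress.ip_address("192.168.5.200") in iface.network)   # True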

At the network level, two kinds of protocols are defined. The first kind, network protocols (routed protocols), implement the forwarding of packets through the network. These are the protocols that are usually meant when people talk about network-layer protocols. However, another type of protocol is often assigned to the network layer as well: routing information exchange protocols, or simply routing protocols. Routers use these protocols to collect information about the topology of the interconnections. Network-layer protocols are implemented by software modules of the operating system, as well as by the software and hardware of routers.

At the network level there are also protocols of another kind, which are responsible for mapping the node address used at the network level to a local network address. Such protocols are often called address resolution protocols (Address Resolution Protocol, ARP). Sometimes they are assigned not to the network layer but to the data link layer, although these subtleties of classification do not change their essence.

Examples of network layer protocols are the IP interworking protocol of the TCP / IP stack and the IPX internetworking protocol of the Novell stack.

Transport layer

On the way from sender to receiver, packets can be corrupted or lost. While some applications have their own error-handling facilities, there are others that prefer to rely on a reliable connection straight away. The Transport layer provides applications, or the upper layers of the stack (application and session), with data transfer at the degree of reliability they require. The OSI model defines five classes of service provided by the transport layer. These classes of service differ in the quality of the services provided: urgency, the ability to restore an interrupted connection, the availability of multiplexing facilities for several connections between different application protocols over a common transport protocol, and, most importantly, the ability to detect and correct transmission errors such as distortion, loss and duplication of packets.

The choice of transport-layer service class is determined, on the one hand, by the extent to which the reliability problem is solved by the applications themselves and by protocols above the transport layer, and on the other hand by how reliable the data transport system provided by the layers below the transport layer (network, data link and physical) is. So, for example, if the quality of the communication channels is very high and the probability of errors not detected by the lower-layer protocols is small, it is reasonable to use one of the lightweight transport-layer services, not burdened with numerous checks, acknowledgements and other means of increasing reliability. If the lower-layer transport facilities are initially very unreliable, it is advisable to turn to the most developed transport-layer service, which uses the maximum means of detecting and eliminating errors: preliminary establishment of a logical connection, control of message delivery with checksums and cyclic packet numbering, delivery timeouts, and so on.
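The sketch below imitates some of these mechanisms in toy form (cyclic sequence numbering, a delivery timeout represented by a retry limit, and retransmission); the loss probability and retry limit are arbitrary, and no real transport protocol is reproduced.

import random

def unreliable_service(packet, loss_probability=0.3):
    """Imitates an unreliable lower-layer service that sometimes loses packets."""
    return None if random.random() < loss_probability else packet

def reliable_deliver(messages, max_retries=10):
    """Toy stop-and-wait transfer: cyclic sequence numbers plus retransmission on timeout."""
    delivered = []
    seq = 0
    for msg in messages:
        for attempt in range(max_retries):
            ack = unreliable_service({"seq": seq, "data": msg})
            if ack is not None:                    # an acknowledgement arrived in time
                delivered.append(ack["data"])
                break
            # otherwise the delivery timeout expires and the packet is sent again
        else:
            raise RuntimeError("too many losses, the connection is considered broken")
        seq = (seq + 1) % 2                        # sequence numbers are reused cyclically
    return delivered

print(reliable_deliver(["a", "b", "c"]))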

As a rule, all protocols, starting from the transport layer and higher, are implemented by the software of the end nodes of the network - the components of their network operating systems. Examples of transport protocols include TCP and UDP in the TCP / IP stack and SPX in the Novell stack.

The protocols of the lower four levels are collectively called network transport or transport subsystem, since they completely solve the problem of transporting messages with a given level of quality in composite networks with arbitrary topology and various technologies. The other three upper levels solve the problem of providing application services based on the existing transport subsystem.

Session level

The Session layer provides dialogue control: it keeps track of which party is currently active and provides synchronization facilities. The latter allow checkpoints to be inserted into long transfers so that, in the event of a failure, work can resume from the last checkpoint instead of starting over. In practice, few applications use the session layer, and it is rarely implemented as separate protocols; the functions of this layer are often combined with those of the application layer and implemented in a single protocol.

Presentation layer

The Presentation layer deals with the form of presentation of information transmitted over the network, without changing its content. Due to the presentation layer, the information transmitted by the application layer of one system is always understood by the application layer of the other system. With the help of this layer, application protocols can overcome syntactic differences in data representation or differences in character codes, such as ASCII and EBCDIC. At this level, data encryption and decryption can be performed, thanks to which the secrecy of data exchange is ensured immediately for all application services. An example of such a protocol is Secure Socket Layer (SSL), which provides secure messaging for the application layer protocols of the TCP / IP stack.
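As a small illustration of overcoming differences in character codes, the example below converts the same text to ASCII and to EBCDIC (using the cp500 codec available in Python as one EBCDIC variant); it only shows that the representation changes while the content does not.

text = "HELLO"
ebcdic_bytes = text.encode("cp500")     # cp500 is one EBCDIC variant available in Python
ascii_bytes = text.encode("ascii")
print(ebcdic_bytes.hex())               # c8c5d3d3d6 - the same characters as EBCDIC codes
print(ascii_bytes.hex())                # 48454c4c4f - and as ASCII codes
print(ebcdic_bytes.decode("cp500") == ascii_bytes.decode("ascii"))   # True: content unchanged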

Application level

The Application layer is really just a collection of different protocols with which network users access shared resources, such as files, printers or hypertext Web pages, and organize their collaboration, for example by means of an e-mail protocol. The unit of data that the application layer operates on is usually called a message.

There are a wide variety of application services. Let's take as an example at least a few of the most common file service implementations: NCP in the Novell NetWare operating system, SMB in Microsoft Windows NT, NFS, FTP, and TFTP that are part of the TCP / IP stack.

Network-dependent and network-independent layers

The functions of all layers of the OSI model can be assigned to one of two groups: either functions that depend on the specific technical implementation of the network, or functions oriented towards working with applications.

The three lower layers - physical, channel and network - are network dependent, that is, the protocols of these layers are closely related to the technical implementation of the network and the used communication equipment. For example, the transition to FDDI equipment means a complete change of the physical and link layer protocols at all network nodes.

The top three layers - application, presentation and session - are application oriented and depend little on the technical features of the network design. The protocols of these layers are not affected by changes in network topology, replacement of equipment, or migration to another network technology. Thus, the transition from Ethernet to the high-speed 100VG-AnyLAN technology will not require any changes in the software tools that implement the functions of the application, presentation and session layers.

The transport layer is intermediate: it hides all the details of the functioning of the lower layers from the upper ones. This makes it possible to develop applications that do not depend on the technical means of direct message transport. Fig. 1.28 shows the layers of the OSI model at which various network elements operate. A computer with a network operating system installed on it communicates with another computer using protocols of all seven layers. Computers carry out this interaction indirectly through various communication devices: hubs, modems, bridges, switches, routers, multiplexers. Depending on its type, a communication device can work either only at the physical layer (repeater), or at the physical and data link layers (bridge), or at the physical, data link and network layers, sometimes also including the transport layer (router). Fig. 1.29 shows how the functions of various communication devices correspond to the layers of the OSI model.

Figure: 1.28. Network-dependent and network-independent layers of the OSI model

Figure 1.29. Correspondence of the functions of various network devices to the layers of the OSI model

The OSI model, although very important, is only one of many communication models. These models and their associated protocol stacks can differ in the number of layers, their functions, message formats, services supported at the upper layers, and other parameters.

1.3.4. The concept of "open system"

The OSI model, as its name suggests (Open System Interconnection), describes the interconnections of open systems. What is an open system?

In a broad sense, an open system is any system (a computer, a computer network, an OS, a software package, or other hardware and software product) that is built in accordance with open specifications.

Recall that the term “specification” (in computer science) is understood as a formalized description of hardware or software components, how they function, interactions with other components, operating conditions, limitations, and special characteristics. It is clear that not every specification is a standard. In turn, open specifications mean published, publicly available specifications that meet standards and are adopted as a result of reaching agreement after extensive discussion by all interested parties.

The use of open specifications in the development of systems allows third parties to develop various hardware or software extensions and modifications for these systems, as well as create software and hardware complexes from products from different manufacturers.

For real systems, complete openness is an unattainable ideal. As a rule, even in systems called open, only a few parts that support external interfaces meet this definition. For example, the openness of the Unix family of operating systems consists, among other things, in the presence of a standardized software interface between the kernel and applications, which makes it easy to port applications from one version of Unix to another. Another example of partial openness is the use of the Open Driver Interface (ODI) in the fairly closed Novell NetWare operating system to include third-party network adapter drivers in the system. The more open specifications are used in the development of a system, the more open it is.

The OSI model deals with only one aspect of openness, namely the openness of the means of interaction between devices connected to a computer network. Here, an open system is understood as a network device that is ready to interact with other network devices using standard rules that determine the format, content and meaning of messages sent and received.

If a network is built in compliance with the principles of openness, this gives the following advantages:

    the ability to build a network of hardware and software from different manufacturers adhering to the same standard;

    the possibility of painless replacement of individual network components with other, more advanced ones, which allows the network to develop with minimal costs;

    the ability to easily pair one network with another;

    ease of development and network maintenance.

A striking example of an open system is the worldwide Internet. This network has evolved in full accordance with the requirements for open systems. Thousands of specialists - users of this network from universities, scientific organizations and manufacturers of computing equipment and software working in different countries - took part in the development of its standards. The very name of the standards that define the operation of the Internet, Request For Comments (RFC), which can be translated as "request for comments", reflects the open and public nature of the adopted standards. As a result, the Internet has managed to unite the most diverse equipment and software of a huge number of networks scattered around the world.