Fast networks. Description of Fast Ethernet technology. DSAP and SSAP field values

Fast Ethernet - the IEEE 802.3u specification, officially adopted on October 26, 1995 - defines a data link protocol standard for networks operating over both copper and fiber-optic cables at a speed of 100 Mbps. The new specification is the successor to the IEEE 802.3 Ethernet standard: it uses the same frame format, the same CSMA/CD media access mechanism, and the same star topology. Several physical layer configuration elements evolved to increase throughput, including cable types, segment lengths, and the number of hubs.

Physical layer

The Fast Ethernet standard defines three types of media for 100 Mbps Ethernet.

· 100Base-TX - two twisted pairs of wires. Transmission follows the standard for data transmission over a twisted-pair physical medium developed by ANSI (American National Standards Institute). The twisted-pair data cables can be shielded or unshielded. Uses the 4B/5B data coding algorithm and the MLT-3 physical coding method.

· 100Base-FX - two strands of fiber-optic cable. Transmission also follows the ANSI standard for data transmission over fiber-optic media. Uses the 4B/5B data coding algorithm and the NRZI physical coding method.

· 100Base-T4 - a special specification developed by the IEEE 802.3u committee. According to this specification, data transmission is carried out over four twisted pairs of telephone cable, known as UTP Category 3 cable. It uses the 8B/6T data coding algorithm.
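The 4B/5B block code mentioned for 100Base-TX and 100Base-FX can be illustrated with a minimal sketch. The table below contains the standard 4B/5B data symbols; the function name and interface are illustrative, not part of any standard API:

```python
# A minimal sketch of the 4B/5B block code used by 100Base-TX and
# 100Base-FX: every 4 data bits are replaced by a 5-bit symbol chosen
# so that the line signal never lacks transitions for too long, which
# lets the receiver recover the clock from the data stream.
FOUR_B_FIVE_B = {
    0x0: 0b11110, 0x1: 0b01001, 0x2: 0b10100, 0x3: 0b10101,
    0x4: 0b01010, 0x5: 0b01011, 0x6: 0b01110, 0x7: 0b01111,
    0x8: 0b10010, 0x9: 0b10011, 0xA: 0b10110, 0xB: 0b10111,
    0xC: 0b11010, 0xD: 0b11011, 0xE: 0b11100, 0xF: 0b11101,
}

def encode_4b5b(data: bytes) -> str:
    """Return the 5-bit symbol stream for a byte string (high nibble first)."""
    out = []
    for byte in data:
        out.append(f"{FOUR_B_FIVE_B[byte >> 4]:05b}")   # upper 4 bits
        out.append(f"{FOUR_B_FIVE_B[byte & 0xF]:05b}")  # lower 4 bits
    return "".join(out)
```

Note the 25% overhead: 5 line bits carry 4 data bits, which is why a 100 Mbps data rate requires a 125 Mbaud symbol rate on the wire.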

Multimode cable

This type of fiber-optic cable uses a fiber with a 50 or 62.5 micrometer core and a 125 micrometer outer cladding. Such a cable is referred to as 50/125 (or 62.5/125) micrometer multimode optical fiber. An LED transceiver with a wavelength of 850 (or 820) nanometers is used to transmit a light signal over multimode cable. If a multimode cable connects two switch ports operating in full-duplex mode, it can be up to 2000 meters long.

Single mode cable

Single-mode fiber has a smaller core diameter than multimode fiber - about 10 micrometers - and uses a laser transceiver, which together provide efficient transmission over long distances. The wavelength of the transmitted light signal is 1300 nanometers, comparable to the core diameter; this value is known as the zero-dispersion wavelength. In a single-mode cable, dispersion and signal loss are very low, which allows light signals to be transmitted over longer distances than with multimode fiber.


38. Gigabit Ethernet technology, general characteristics, specification of the physical environment, basic concepts.
3.7.1. General characteristics of the standard

Soon after Fast Ethernet products appeared on the market, network integrators and administrators ran into certain limitations when building corporate networks. In many cases, servers connected over 100 Mbps channels overloaded network backbones that also operated at 100 Mbps (FDDI and Fast Ethernet backbones). There was a need for the next level of the speed hierarchy. In 1995, only ATM switches could provide a higher speed, but in the absence at that time of convenient means of migrating this technology to local networks (although the LAN Emulation (LANE) specification was adopted in early 1995, its practical implementation was still ahead), almost no one dared to deploy ATM in a local network. In addition, ATM technology was very expensive.

So the next step taken by the IEEE seemed logical: 5 months after the final adoption of the Fast Ethernet standard in June 1995, the IEEE High-Speed Technology Research Group was instructed to look into the possibility of developing an Ethernet standard with an even higher bit rate.

In the summer of 1996, the 802.3z group was announced, with the task of developing a protocol as similar to Ethernet as possible but with a bit rate of 1000 Mbps. As with Fast Ethernet, the news was received with great enthusiasm by Ethernet proponents.



The main reason for the enthusiasm was the prospect of the same smooth migration of network backbones to Gigabit Ethernet, similar to the migration of congested Ethernet segments at the lower levels of the network hierarchy to Fast Ethernet. In addition, experience in transferring data at gigabit speeds was already available, both in wide-area networks (SDH technology) and in local networks - Fiber Channel technology, which is mainly used to connect high-speed peripherals to large computers and transmits data over fiber-optic cable at speeds close to a gigabit using the redundant 8B/10B code.

The first version of the standard was reviewed in January 1997, and the 802.3z standard was finally adopted on June 29, 1998, at a meeting of the IEEE 802.3 committee. Work on implementing Gigabit Ethernet over Category 5 twisted pair was transferred to a special 802.3ab committee, which has already considered several draft versions of this standard; since July 1998 the draft has been quite stable. Final adoption of the 802.3ab standard is expected in September 1999.

Without waiting for the standard's adoption, some companies released the first Gigabit Ethernet equipment for fiber-optic cable by the summer of 1997.

The main idea of the developers of the Gigabit Ethernet standard is to preserve the ideas of classic Ethernet technology as much as possible while reaching a bit rate of 1000 Mbps.

Although it is natural to expect some technical innovations when developing a new technology, following the general course of development of network technologies, it is important to note that Gigabit Ethernet, like its slower counterparts, will not, at the protocol level, support:

  • quality of service;
  • redundant connections;
  • testing the operability of nodes and equipment (except for testing port-to-port communication, as is done for Ethernet 10Base-T, 10Base-F, and Fast Ethernet).

All three of these properties are considered very promising and useful in modern networks, and especially in networks of the near future. Why are the authors of Gigabit Ethernet abandoning them?

The main idea of the developers of the Gigabit Ethernet technology is that many networks exist and will continue to exist in which the high speed of the backbone and the ability to assign priorities to packets in switches will be quite sufficient to ensure the quality of transport service for all network clients. Only in those rare cases when the backbone is heavily loaded and the quality-of-service requirements are very strict is it necessary to use ATM technology, which, at the price of high technical complexity, guarantees quality of service for all major types of traffic.


39. Structural cabling system used in network technologies.
A Structured Cabling System (SCS) is a set of switching elements (cables, connectors, patch panels, and cabinets), together with a technique for their joint use, which makes it possible to create regular, easily expandable communication structures in computer networks.

The structured cabling system is a kind of "constructor set" with which the network designer builds the required configuration from standard cables connected by standard connectors and interconnected via standard patch panels. If necessary, the configuration of connections can easily be changed: a computer, segment, or switch can be added, unnecessary equipment removed, and the connections between computers and hubs rearranged.

When building a structured cabling system, it is assumed that every workplace in the enterprise should be equipped with sockets for connecting a telephone and a computer, even if this is not currently required. That is, a good structured cabling system is redundant. This can save money in the future, since connecting new devices can be handled by re-patching existing cables.

A typical hierarchical structure of a structured cabling system includes:

  • horizontal subsystems (within a floor);
  • vertical subsystems (inside the building);
  • a campus subsystem (within one territory with several buildings).

The horizontal subsystem connects the floor cross-connect cabinet to the users' outlets. Subsystems of this type correspond to the floors of a building. The vertical subsystem connects the cross-connect cabinets on each floor to the central control room of the building. The next step in the hierarchy is the campus subsystem, which connects several buildings to the main control room of the entire campus. This part of the cabling system is usually called the backbone.

There are many advantages to using structured cabling instead of chaotic cables.

· Versatility. A structured cabling system with a well-thought-out organization can become a unified medium for the transmission of computer data in a local computer network, the organization of a local telephone network, the transmission of video information, and even the transmission of signals from fire safety sensors or security systems. This makes it possible to automate many processes of control, monitoring, and management of economic services and life-support systems of the enterprise.

· Increased service life. The obsolescence period of a well-structured cabling system can be 10-15 years.

· Reduced cost of adding new users and changing their placement. It is known that the cost of a cabling system is significant and is mainly determined not by the cost of the cable but by the cost of laying it. Therefore, it is more profitable to lay the cable once, possibly with a large margin in length, than to lay it several times, extending the cable each time. With this approach, all work on adding or moving a user is reduced to connecting the computer to an existing outlet.

· Possibility of easy network expansion. The structured cabling system is modular and therefore easy to expand. For example, a new subnet can be added to a trunk without affecting existing subnets. The cable type on a separate subnet can be changed independently of the rest of the network. The structured cabling system is the basis for dividing the network into easily manageable logical segments, since it is itself already divided into physical segments.

· More efficient service. The structured cabling system makes maintenance and troubleshooting easier than a bus cabling system. With bus cabling, the failure of one of the devices or connecting elements leads to a hard-to-locate failure of the entire network. In structured cabling systems, the failure of one segment does not affect the others, since segments are aggregated using hubs. The hubs diagnose and localize the faulty area.

· Reliability. A structured cabling system has increased reliability, since the manufacturer of such a system guarantees not only the quality of its individual components but also their compatibility.


40. Hubs and network adapters, principles, use, basic concepts.
Hubs, along with network adapters and cabling, represent the minimum amount of equipment with which a local area network can be created. Such a network will represent a common shared medium.

A Network Adapter (Network Interface Card, NIC), together with its driver, implements the second, data link layer of the open systems model in the end network node - the computer. More precisely, the adapter-driver pair performs only the functions of the physical and MAC layers, while the LLC layer is usually implemented by a module of the network operating system that is common to all drivers and network adapters. This is how it should be according to the model of the IEEE 802 protocol stack. For example, in Windows NT the LLC level is implemented in the NDIS module, which is common to all network adapter drivers, regardless of which technology the driver supports.

The network adapter, together with its driver, performs two operations: frame transmission and frame reception.
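The transmit side of that work can be sketched as follows. This is a simplified illustration of the framing step - header assembly, padding to the minimum payload size, and appending the frame check sequence (FCS); the function name is invented for this sketch, and real hardware handles the exact FCS bit ordering:

```python
import struct
import zlib

def build_ethernet_frame(dst: bytes, src: bytes, ethertype: int,
                         payload: bytes) -> bytes:
    """Sketch of what a NIC does on transmit: build the Ethernet header,
    pad the payload to the 46-byte minimum, and append the 32-bit FCS
    (zlib.crc32 uses the same CRC-32 polynomial as IEEE 802.3)."""
    if len(payload) < 46:                        # enforce minimum payload size
        payload = payload.ljust(46, b"\x00")
    frame = dst + src + struct.pack("!H", ethertype) + payload
    fcs = zlib.crc32(frame) & 0xFFFFFFFF         # frame check sequence
    return frame + struct.pack("<I", fcs)
```

A 6-byte destination, 6-byte source, 2-byte type field, 46-byte minimum payload, and 4-byte FCS add up to the familiar 64-byte minimum Ethernet frame.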

With adapters for client computers, much of the work is offloaded to the driver, making the adapter simpler and cheaper. The disadvantage of this approach is the high load placed on the computer's central processor by the routine work of transferring frames from the computer's RAM to the network. The central processor is forced to do this work instead of running user application tasks.

The network adapter must be configured before being installed in a computer. Configuration typically specifies the IRQ used by the adapter, the DMA channel (if the adapter supports DMA mode), and the base address of the I/O ports.

In almost all modern local network technologies there is a device with several equivalent names: hub, concentrator, repeater. Depending on the field of application of this device, the composition of its functions and its design change significantly. Only the main function remains unchanged: frame repetition, either on all ports (as defined in the Ethernet standard) or only on some ports, according to the algorithm defined by the corresponding standard.
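That single essential function - repeat every incoming frame to all other ports - can be captured in a few lines. A toy model, not a description of any real device:

```python
class Hub:
    """Toy model of an Ethernet hub: every frame received on a port is
    repeated to all other ports, so all attached segments behave as a
    single shared medium (and a single collision domain)."""
    def __init__(self, num_ports: int):
        # port number -> list of frames delivered out of that port
        self.ports = {i: [] for i in range(num_ports)}

    def receive(self, in_port: int, frame: bytes) -> None:
        for port, delivered in self.ports.items():
            if port != in_port:          # repeat everywhere except the source
                delivered.append(frame)
```

Note that, unlike a bridge or switch, the hub inspects no addresses at all; the decision of whose frame it is falls to the end nodes.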

A hub usually has several ports to which the end nodes of the network (computers) are connected using separate physical cable segments. The hub combines the separate physical network segments into a single shared medium, access to which is carried out in accordance with one of the LAN protocols considered above: Ethernet, Token Ring, etc. Since the logic of access to the shared medium depends significantly on the technology, separate hubs are produced for each type of technology: Ethernet, Token Ring, FDDI, and 100VG-AnyLAN. For a specific protocol, a specialized name for this device is sometimes used that more accurately reflects its functions or is used by virtue of tradition; for example, the name MSAU is characteristic of Token Ring concentrators.

Each hub performs some basic function defined in the corresponding protocol of the technology it supports. Although this function is defined in some detail in the technology standard, when implemented, hubs from different manufacturers may differ in such details as the number of ports, support for several types of cables, etc.

In addition to the main function, the hub can perform a number of additional functions, which are either not defined at all in the standard, or are optional. For example, a Token Ring hub can perform the function of disabling malfunctioning ports and switching to a backup ring, although such capabilities are not described in the standard. The hub turned out to be a convenient device for performing additional functions that facilitate the monitoring and operation of the network.


41. The use of bridges and switches, principles, features, examples, limitations
Structuring with bridges and switches

The network can be divided into logical segments using two types of devices: bridges and/or switches (switch, switching hub).

The bridge and the switch are functional twins. Both devices forward frames based on the same algorithms. Bridges and switches use one of two algorithms: the transparent bridge algorithm, described in the IEEE 802.1D standard, or the source routing bridge algorithm from IBM for Token Ring networks. These standards were developed long before the first switch appeared, so they use the term "bridge". When the first industrial switch model for Ethernet technology appeared, it performed the same IEEE 802.1D frame forwarding algorithm that had already been proven in local and wide-area networks.

The main difference between a switch and a bridge is that a bridge processes frames sequentially, while a switch processes frames in parallel. This circumstance is due to the fact that bridges appeared in the days when the network was divided into a small number of segments and intersegment traffic was small (it obeyed the 80/20 rule).

Today bridges still work in networks, but only on fairly slow wide-area links between two remote LANs. These bridges are called remote bridges, and their algorithm is the same 802.1D or Source Routing.

Transparent bridges can, in addition to transmitting frames within the same technology, translate LAN protocols, for example Ethernet to Token Ring, FDDI to Ethernet, etc. This property of transparent bridges is described in the IEEE 802.1H standard.

In what follows, we will use the modern term "switch" for a device that forwards frames by the bridge algorithm and works in a local network. When describing the 802.1D and Source Routing algorithms themselves in the next section, we will traditionally call the device a bridge, as it is actually called in these standards.


42. Switches for local networks, protocols, modes of operation, examples.
Each of the 8 10Base-T ports is served by its own Ethernet Packet Processor (EPP). In addition, the switch has a system module that coordinates the work of all the EPP processors. The system module maintains the switch's common address table and provides switch management via the SNMP protocol. To transfer frames between ports, a switching matrix is used, similar to those found in telephone exchanges or multiprocessor computers, where it connects multiple processors to multiple memory modules.

The switching matrix works on the principle of circuit switching. For 8 ports, the matrix can provide 8 simultaneous internal channels with half-duplex port operation and 16 with full-duplex operation, when the transmitter and receiver of each port operate independently of each other.

When a frame arrives at a port, the EPP processor buffers the first few bytes of the frame in order to read the destination address. After receiving the destination address, the processor immediately decides to transfer the frame, without waiting for the arrival of its remaining bytes.
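The decision the EPP makes at that moment can be sketched as follows. This is a simplified model of the cut-through lookup, with invented names; it relies only on the fact that the 6-byte destination address comes first in an Ethernet frame:

```python
def cut_through_decision(bytes_received: bytes, address_table: dict):
    """Sketch of the cut-through lookup: as soon as the 6-byte destination
    address has arrived, choose the output port, before the rest of the
    frame has been received. address_table maps a destination MAC
    (bytes) to an output port number."""
    if len(bytes_received) < 6:
        return None                       # destination address still incomplete
    dst = bytes_received[:6]
    return address_table.get(dst, "flood")  # unknown address: flood all ports
```

The latency win is exactly this: the forwarding decision needs only the first 6 bytes, not the up-to-1518-byte frame.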

If the frame needs to be transmitted to another port, the processor contacts the switching matrix and tries to establish a path in it connecting its port with the port through which the route to the destination address lies. The switching matrix can do this only if the destination port is free at that moment, that is, not connected to another port; if the port is busy, then, as in any circuit-switched device, the matrix refuses the connection. In this case, the frame is fully buffered by the input port processor, after which the processor waits for the output port to be released and for the switching matrix to form the desired path. Once the path is established, the buffered frame bytes are sent through it and received by the output port processor. As soon as the output processor gains access to the attached Ethernet segment using the CSMA/CD algorithm, the frame bytes are immediately transferred to the network. This method of transmitting a frame without fully buffering it is called "on-the-fly" or "cut-through" switching.

The main reason for the improvement in network performance when using a switch is the parallel processing of multiple frames. This effect is illustrated in Fig. 4.26. The figure shows an ideal situation in terms of performance: four of the eight ports transmit data at the maximum speed for the Ethernet protocol, 10 Mbps, and they transmit this data to the other four ports of the switch without conflicts, since the data flows between the network nodes are distributed so that each receiving port has its own output port. If the switch manages to process the incoming traffic even at the maximum rate of frame arrival on the input ports, then the total switch throughput in this example is 4 x 10 = 40 Mbps, and, generalizing the example to N ports, (N/2) x 10 Mbps. It is said that the switch provides each station or segment connected to its ports with the dedicated bandwidth of the protocol.
Naturally, the situation in the network does not always develop as shown in Fig. 4.26. If two stations, for example the stations connected to ports 3 and 4, need to write data at the same time to the same server connected to port 8, then the switch cannot allocate a 10 Mbps data stream to each station, since port 8 cannot transmit data at 20 Mbps. The stations' frames will wait in the internal queues of input ports 3 and 4 until port 8 becomes free to transmit the next frame. Obviously, a good solution for such a distribution of data flows would be to connect the server to a higher-speed port, for example Fast Ethernet. Since the main advantage of the switch, thanks to which it has won a very strong position in local networks, is its high performance, switch developers strive to produce so-called non-blocking switch models.
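The best-case throughput figures above can be reproduced with a small calculation. A sketch with invented names, covering the idealized case only (disjoint port pairs, no contention):

```python
def aggregate_throughput_mbps(num_ports: int,
                              port_speed_mbps: float = 10.0,
                              full_duplex: bool = False) -> float:
    """Best-case aggregate throughput of a non-blocking switch: at most
    N/2 disjoint port pairs transfer simultaneously; full duplex doubles
    the traffic each pair can carry (both directions at once)."""
    pairs = num_ports // 2
    per_pair = port_speed_mbps * (2 if full_duplex else 1)
    return pairs * per_pair
```

For the 8-port example in the text this gives 4 x 10 = 40 Mbps half-duplex; the contended case (two stations writing to one server port) is precisely where this ideal figure is not reached.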


43. Algorithm of the transparent bridge.
Transparent bridges are invisible to the network adapters of end nodes, since they independently build a special address table on the basis of which they decide whether an incoming frame needs to be transmitted to some other segment or not. When transparent bridges are used, network adapters work just as they do in the absence of bridges, that is, they take no additional action to get a frame through the bridge. The transparent bridging algorithm is independent of the LAN technology in which the bridge is installed, so transparent Ethernet bridges work just like transparent FDDI bridges.

A transparent bridge builds its address table based on passive monitoring of traffic circulating in segments connected to its ports. In this case, the bridge takes into account the addresses of the sources of data frames arriving on the bridge ports. Based on the frame source address, the bridge concludes that this node belongs to one or another network segment.

Consider the process of automatically creating a bridge address table and using it using the example of a simple network shown in Fig. 4.18.

Fig. 4.18. How the transparent bridge works

The bridge connects two logical segments. Segment 1 consists of computers connected with one length of coaxial cable to port 1 of the bridge, and segment 2 consists of computers connected with another length of coaxial cable to port 2 of the bridge.

Each bridge port acts as an end node of its segment, with one exception: a bridge port does not have its own MAC address. The bridge port operates in so-called promiscuous packet capture mode, in which all packets arriving at the port are stored in buffer memory. In this mode, the bridge monitors all traffic transmitted in the segments attached to it and uses the packets passing through it to learn the composition of the network. Since all packets are written to the buffer, the bridge does not need a port address.

In the initial state, the bridge knows nothing about which computers with which MAC addresses are connected to each of its ports. Therefore the bridge simply transmits any captured and buffered frame to all of its ports except the one from which the frame was received. In our example, the bridge has only two ports, so it transmits frames from port 1 to port 2 and vice versa. When the bridge is about to transmit a frame from segment to segment, for example from segment 1 to segment 2, it again tries to access segment 2 as an end node, according to the rules of the access algorithm - in this example, the rules of the CSMA/CD algorithm.

Simultaneously with the transmission of the frame to all ports, the bridge learns the source address of the frame and makes a new entry about its segment membership in its address table, which is also called the filtering or routing table.

After the bridge has passed the learning phase, it can operate more efficiently. When it receives a frame sent, for example, from computer 1 to computer 3, it scans the address table for an entry matching destination address 3. Since such an entry exists, the bridge performs the second stage of table analysis: it checks whether the computers with the source address (in our case, address 1) and the destination address (address 3) are in the same segment. Since in our example they are in different segments, the bridge performs the forwarding operation: it transmits the frame to the other port, having first gained access to the other segment.

If the destination address is unknown, the bridge transmits the frame to all of its ports except the source port, just as in the initial stage of the learning process.
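The learn/filter/forward/flood logic described above can be condensed into a short model. A sketch of the 802.1D-style behavior, with invented names and no aging of table entries:

```python
class TransparentBridge:
    """Toy model of transparent-bridge logic: learn source addresses,
    filter intra-segment traffic, forward known destinations, and
    flood frames with unknown destinations."""
    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.table = {}                          # MAC address -> port

    def handle(self, in_port: int, src: str, dst: str) -> list:
        """Return the list of ports the frame goes out of."""
        self.table[src] = in_port                # learning step
        if dst in self.table:
            out = self.table[dst]
            # same segment: filter (drop); other segment: forward
            return [] if out == in_port else [out]
        # unknown destination: flood to every port except the source
        return [p for p in range(self.num_ports) if p != in_port]
```

A real bridge also ages out stale entries and handles broadcast addresses; those details are omitted here.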


44. Bridges with source routing.
Source-routed bridges are used to connect Token Ring and FDDI rings, although transparent bridges can also be used for the same purpose. Source Routing (SR) is based on the sending station placing in each frame sent to another ring all the address information about the intermediate bridges and rings that the frame must pass through before arriving at the ring to which the recipient station is connected.

Let us consider the principles of operation of Source Routing bridges (hereinafter, SR bridges) using the example of the network shown in Fig. 4.21. The network consists of three rings connected by three bridges. Rings and bridges are assigned identifiers used to define the route. SR bridges do not build an address table; when forwarding frames, they use the information contained in the corresponding fields of the data frame.

Fig. 4.21. Source Routing bridges

On receiving each frame, an SR bridge only needs to check the Routing Information Field (RIF, present in a Token Ring or FDDI frame) for its own identifier. If its identifier is present there and is accompanied by the identifier of a ring connected to this bridge, the bridge copies the incoming frame onto that ring. Otherwise, the frame is not copied to another ring. In either case, the original copy of the frame returns along the original ring to the sending station, and if the frame was transmitted to another ring, the A (address recognized) and C (frame copied) bits of the frame status field are set to 1 to inform the sending station that the frame was received by the destination station (in this case, passed by the bridge to another ring).

Since routing information is not always needed in a frame, but only for frame transmission between stations connected to different rings, the presence of the RIF field is indicated by setting the individual/group (I/G) bit of the source address to 1 (this bit is not used for its intended purpose, since a source address is always individual).

The RIF has a control subfield consisting of three parts.

  • The frame type defines the type of the RIF field. Different RIF field types are used for finding a route and for sending a frame along a known route.
  • The maximum frame length field is used by a bridge connecting rings with different MTU values. Using this field, the bridge notifies the station of the maximum possible frame length (that is, the minimum MTU value along the entire composite route).
  • The RIF field length is necessary because the number of route descriptors, which specify the identifiers of the crossed rings and bridges, is not known in advance.
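The per-bridge forwarding decision over those route descriptors can be sketched as follows. This is a simplified model: descriptors are represented as (ring, bridge) pairs, and the exact bit layout of the RIF is not reproduced; all names are invented for the sketch:

```python
def bridge_copies_frame(rif_descriptors, my_bridge_id, attached_rings):
    """Sketch of the SR-bridge decision: scan the RIF route descriptors,
    modeled as (ring_id, bridge_id) pairs, for this bridge's own
    identifier followed by a ring attached to this bridge. Return the
    ring to copy the frame onto, or None if the frame is not copied."""
    for i, (ring, bridge) in enumerate(rif_descriptors):
        if bridge == my_bridge_id and i + 1 < len(rif_descriptors):
            next_ring = rif_descriptors[i + 1][0]
            if next_ring in attached_rings:
                return next_ring          # copy the frame onto this ring
    return None                           # otherwise do not copy the frame
```

This is why SR bridges need no address table: the whole route travels inside the frame itself.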

For the route-finding part of the source routing algorithm, two additional frame types are used: the single-route broadcast frame (SRBF) and the all-route broadcast frame (ARBF).

All SR bridges must be manually configured by the administrator to forward ARBF frames to all ports except the frame's source port, while for SRBF frames some bridge ports must be blocked so that there are no loops in the network.

Advantages and Disadvantages of Source Routing Bridges

45. Switches: technical implementation, functions, characteristics that affect their work.
Features of the technical implementation of switches. Many first-generation switches were similar to routers, that is, based on a general-purpose central processing unit connected to interface ports via an internal high-speed bus. The main disadvantage of such switches was their low speed: the general-purpose processor could not cope with the large volume of specialized operations needed to transfer frames between interface modules. In addition to the processor chips, successful non-blocking operation requires a high-speed node for transferring frames between the port processor chips. Switches currently use one of three schemes as the basis on which such an exchange node is built:

  • switching matrix;
  • shared multi-input memory;
  • common bus.

Introduction

The purpose of this report is a short and accessible presentation of the basic principles of operation and features of computer networks, using Fast Ethernet as an example.

A network is a group of connected computers and other devices. The main purpose of computer networks is the sharing of resources and the implementation of interactive communication both within one firm and outside it. Resources are data, applications and peripherals, such as an external drive, printer, mouse, modem or joystick. Interactive communication between computers implies real-time messaging.

There are many sets of standards for data transmission in computer networks. One of them is the Fast Ethernet standard.

From this material you will learn about:

  • Fast Ethernet technologies
  • Switches
  • FTP cable
  • Connection types
  • Computer network topologies

In my work, I will show the principles of a network based on the Fast Ethernet standard.

Local area network (LAN) switching and Fast Ethernet technologies were developed in response to the need to improve the efficiency of Ethernet networks. By increasing throughput, these technologies can eliminate network bottlenecks and support applications that require high data rates. The appeal of these solutions is that you don't have to choose one or the other. They are complementary, so network efficiency can most often be improved by using both technologies.

The collected information will be useful both for those who are beginning to study computer networks and for network administrators.

1. Network diagram

2. Fast Ethernet technology


Fast Ethernet is an evolution of Ethernet technology. Based on, and keeping intact, the same CSMA/CD (Carrier Sense Multiple Access with Collision Detection) method, Fast Ethernet devices operate at 10 times the speed of Ethernet: 100 Mbps. Fast Ethernet provides sufficient bandwidth for applications such as computer-aided design and manufacturing (CAD/CAM), graphics and imaging, and multimedia. Fast Ethernet is compatible with 10 Mbps Ethernet, so it is more convenient to integrate Fast Ethernet into a LAN using a switch rather than a router.

Switch

Using switches, many workgroups can be linked together to form a large LAN (see Fig. 1). Inexpensive switches outperform routers and deliver better LAN performance. Fast Ethernet workgroups of one or two hubs can be connected through a Fast Ethernet switch to further increase the number of users and cover a larger area.

As an example, consider the following switch:

Fig. 1. D-Link DES-1228/ME

The DES-1228/ME series includes configurable Layer 2 "premium" class Fast Ethernet switches. With advanced functionality, the DES-1228/ME is a low-cost solution for building a secure, high-performance network. The switch features high port density, 4 Gigabit uplink ports, fine-grained bandwidth management, and improved network administration. These switches allow you to optimize your network in terms of both functionality and cost.

FTP cable

The LAN-5EFTP-BL cable consists of 4 pairs of single-core copper conductors.

Conductor diameter: 24 AWG.

Each conductor is encased in HDPE (high density polyethylene) insulation.

Two conductors, twisted at a specially selected pitch, form one twisted pair.

4 twisted pairs are wrapped in polyethylene film and, together with a single-core copper grounding conductor, are enclosed in a common foil shield and PVC sheath.

Straight through

It serves:

  • 1. To connect a computer to a switch or hub through the computer's network card
  • 2. To connect network peripherals, such as printers and scanners, to a switch or hub
  • 3. For an uplink to an upstream switch or hub; modern switches can automatically configure the connector's contacts for reception and transmission

Crossover

It serves:

  • 1. For direct connection of 2 computers to a local network, without the use of switching equipment (hubs, switches, routers, etc.).
  • 2. For an uplink connection to a higher-level switch in a complex LAN structure; older hubs and switches have a separate connector for this, marked "UPLINK" or "X".

Star topology

Star is a basic computer network topology in which all computers on the network are connected to a central node (usually a switch), forming a physical network segment. Such a segment can function either separately or as part of a complex network topology (usually a tree). All information exchange passes exclusively through the central node, which therefore carries a very heavy load and can do nothing other than serve the network. As a rule, the central node is the most powerful device, and all the functions of managing the exchange are entrusted to it. In principle, no conflicts are possible in a star-topology network, because management is fully centralized.

Application

Classic 10-Mbit Ethernet satisfied most users for about 15 years. However, in the early 1990s its insufficient capacity began to be felt. For computers based on Intel 80286 or 80386 processors with ISA (8 MB/s) or EISA (32 MB/s) buses, the throughput of an Ethernet segment was 1/8 or 1/32 of the memory-to-disk channel, and this agreed well with the ratio of data processed locally to data transferred over the network. For more powerful client stations with the PCI bus (133 MB/s), this share dropped to 1/133, which was clearly not enough. As a result, many 10-Mbit Ethernet segments became congested, server responsiveness dropped significantly, and collision rates increased dramatically, further reducing usable bandwidth.

The need arose to develop a "new" Ethernet, that is, a technology that would be equally efficient in terms of price/quality ratio at a performance of 100 Mbps. As a result of searches and research, specialists split into two camps, which ultimately led to the emergence of two new technologies: Fast Ethernet and 100VG-AnyLAN. They differ in their degree of continuity with classic Ethernet.

In 1992, a group of network equipment manufacturers, including Ethernet technology leaders such as SynOptics, 3Com, and others, formed the Fast Ethernet Alliance, a nonprofit association, to standardize a new technology that would preserve the features of Ethernet as much as possible.

The second camp was led by Hewlett-Packard and AT&T, who offered to take advantage of the opportunity to address some of the known flaws in Ethernet technology. Some time later, IBM joined these companies, which contributed to the proposal to provide some compatibility with Token Ring networks in the new technology.

At the same time, a research group was formed in committee 802 of the IEEE to study the technical potential of new high-speed technologies. Between the end of 1992 and the end of 1993, the IEEE group examined 100-megabit solutions from various manufacturers. In addition to the Fast Ethernet Alliance offering, the group also reviewed high-speed technology from Hewlett-Packard and AT&T.

Discussion focused on the issue of preserving the random access method of CSMA / CD. The Fast Ethernet Alliance proposal maintained this method and thereby ensured the continuity and consistency of 10 Mbps and 100 Mbps networks. A coalition of HP and AT&T, which had the backing of significantly fewer vendors in the networking industry than the Fast Ethernet Alliance, proposed a completely new access method called Demand Priority - priority access on demand. It significantly changed the picture of the behavior of nodes in the network, so it could not fit into the Ethernet technology and the 802.3 standard, and a new IEEE 802.12 committee was organized to standardize it.

In the fall of 1995, both technologies became IEEE standards. The IEEE 802.3 committee adopted the Fast Ethernet specification not as a stand-alone standard but as a supplement to the existing 802.3 standard, in the form of chapters 21 through 30. The 802.12 committee adopted the 100VG-AnyLAN technology, which uses the new Demand Priority access method and supports frames in two formats: Ethernet and Token Ring.

Physical layer of Fast Ethernet technology

All the differences between Fast Ethernet and Ethernet are concentrated at the physical layer (Fig. 3.20). The MAC and LLC layers in Fast Ethernet remain exactly the same and are described by the 802.3 and 802.2 standards covered in the previous chapters. Therefore, in considering Fast Ethernet technology we will study only the several variants of its physical layer.

The more complex structure of the physical layer of Fast Ethernet technology is caused by the fact that it uses three variants of cable systems:

  • · Fiber-optic multimode cable, two fibers are used;
  • · Twisted pair of category 5, two pairs are used;
  • · Twisted pair of category 3, four pairs are used.

The coaxial cable, which gave the world the first Ethernet network, was not included in the number of allowed data transmission media of the new Fast Ethernet technology. This is a common trend in many new technologies, because over short distances, Category 5 twisted pair can transfer data at the same speed as coaxial cable, but the network is cheaper and easier to use. Over long distances, optical fiber has much higher bandwidth than coax, and the network cost is not much higher, especially when you consider the high troubleshooting costs of a large coaxial cabling system.


Differences between Fast Ethernet technology and Ethernet technology

The rejection of coaxial cable means that Fast Ethernet networks always have a hierarchical tree structure built on hubs, just like 10Base-T/10Base-F networks. The main difference of Fast Ethernet configurations is the reduction of the network diameter to about 200 m, explained by a tenfold reduction in the transmission time of a minimum-length frame due to the tenfold increase in transmission speed compared to 10-Mbps Ethernet.
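The tenfold reduction in minimum-frame transmission time can be turned into a rough diameter estimate with some back-of-the-envelope timing (a sketch; the propagation speed and the delay budget are illustrative assumptions, not values from the standard):

```python
# Why a half-duplex Fast Ethernet collision domain shrinks to about 200 m:
# a collision must be detected before the minimum frame finishes sending.

BIT_TIME_NS = 10              # 100 Mbps -> 10 ns per bit
MIN_FRAME_BITS = 64 * 8       # 64-byte minimum Ethernet frame

slot_time_ns = MIN_FRAME_BITS * BIT_TIME_NS
print(slot_time_ns)           # 5120 ns, ten times less than in 10-Mbit Ethernet

# The signal must reach the far end and return within one slot time.
# Assume ~0.2 m/ns propagation speed in copper (about 2/3 of the speed of light):
PROPAGATION_M_PER_NS = 0.2
max_one_way_m = slot_time_ns * PROPAGATION_M_PER_NS / 2
print(max_one_way_m)          # 512.0 m of raw cable budget

# Repeater, adapter, and interframe delays consume much of this budget,
# which is what leaves roughly 200 m of usable collision-domain diameter.
```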

Nevertheless, this circumstance does not really impede the construction of large networks based on Fast Ethernet technology. The fact is that the mid-90s were marked not only by the widespread use of inexpensive high-speed technologies, but also by the rapid development of local area networks based on switches. When using switches, the Fast Ethernet protocol can operate in full-duplex mode, in which there are no restrictions on the total length of the network, and only restrictions on the length of physical segments connecting neighboring devices (adapter - switch or switch - switch) remain. Therefore, when creating long-distance LAN backbones, Fast Ethernet technology is also actively used, but only in a full-duplex version, together with switches.

This section discusses the half-duplex variant of Fast Ethernet operation, which fully complies with the definition of an access method described in the 802.3 standard.

Compared with the physical implementation variants of Ethernet (of which there are six), in Fast Ethernet the differences between the variants run deeper: both the number of conductors and the coding methods change. And since the physical variants of Fast Ethernet were created simultaneously rather than evolutionarily, as with Ethernet networks, it was possible to define in detail those sublayers of the physical layer that do not change from variant to variant, and those sublayers that are specific to each physical medium variant.

The official 802.3 standard established three different specifications for the Fast Ethernet physical layer and gave them the following names:

Fast Ethernet physical layer structure

  • · 100Base-TX for two-pair cable on unshielded twisted pair UTP Category 5 or shielded twisted pair STP Type 1;
  • · 100Base-T4 for four-pair cable on unshielded twisted pair UTP Category 3, 4, or 5;
  • · 100Base-FX for multimode fiber-optic cable, two fibers are used.

The following statements and characteristics apply to all three standards.

  • · Fast Ethernet frame formats do not differ from the frame formats of 10-Mbit Ethernet.
  • · The interpacket gap (IPG) is 0.96 µs, and the bit interval is 10 ns. All time parameters of the access algorithm (backoff interval, transmission time of a minimum-length frame, and so on), measured in bit intervals, remained the same, so no changes were made to the sections of the standard concerning the MAC level.
  • · A sign that the medium is free is the transmission of the Idle symbol of the corresponding redundant code over it (rather than the absence of signals, as in the 10-Mbit/s Ethernet standards).

The physical layer includes three elements:

  • o the reconciliation sublayer;
  • o the media independent interface (MII);
  • o the physical layer device (PHY).

The reconciliation sublayer is needed so that the MAC level, designed for the AUI interface, can work with the physical layer through the MII interface.

The physical layer device (PHY) consists, in turn, of several sublayers (see Fig. 3.20):

  • · the logical data coding sublayer, which converts the bytes coming from the MAC level into 4B/5B or 8B/6T code symbols (both codes are used in Fast Ethernet technology);
  • · the physical attachment and physical medium dependent (PMD) sublayers, which provide signal transmission in accordance with a physical coding method such as NRZI or MLT-3;
  • · the auto-negotiation sublayer, which allows two communicating ports to automatically select the most efficient mode of operation, for example half-duplex or full-duplex (this sublayer is optional).

The MII interface supports a physical-medium-independent way of exchanging data between the MAC sublayer and the PHY sublayer. This interface is similar in purpose to the AUI interface of classic Ethernet, except that the AUI interface was located between the physical signal coding sublayer (the same Manchester physical coding method was used for all cable variants) and the physical attachment sublayer, while the MII interface is located between the MAC sublayer and the signal coding sublayers, of which there are three in the Fast Ethernet standard: FX, TX, and T4.

The MII connector, unlike the AUI connector, has 40 pins; the maximum MII cable length is one meter. The signals transmitted over the MII interface have an amplitude of 5 V.

Physical layer 100Base-FX - multimode fiber, two fibers

This specification defines the operation of the Fast Ethernet protocol over multimode fiber in half-duplex and full-duplex modes, based on the well-proven FDDI coding scheme. As in the FDDI standard, each node is connected to the network by two optical fibers, one coming from the receiver (Rx) and one from the transmitter (Tx).

There are many similarities between the 100Base-FX and 100Base-TX specifications, so properties common to the two will be described under the generic name 100Base-FX/TX.

While 10 Mbps Ethernet uses Manchester coding to represent data on the cable, Fast Ethernet defines a different coding method, 4B/5B. This method had already shown its effectiveness in the FDDI standard and was carried over to the 100Base-FX/TX specification without changes. In this method, every 4 bits of MAC sublayer data (called symbols) are represented by 5 bits. The redundant bit provides excess code combinations: only some of the 32 possible 5-bit combinations are valid, so the existence of prohibited symbol combinations lets the receiver reject erroneous symbols, which increases the stability of 100Base-FX/TX networks.

To separate the Ethernet frame from the Idle symbols, a Start Delimiter is used (the pair of symbols J (11000) and K (10001) of the 4B/5B code), and after the end of the frame a T symbol is inserted before the first Idle symbol.
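As an illustration of the coding and framing just described, here is a minimal sketch (the data-symbol table is the standard FDDI/100Base-X 4B/5B table; the helper function is hypothetical, not part of any real driver API):

```python
# A sketch of 4B/5B encoding with the J/K start delimiter and T end delimiter.

DATA_4B5B = {
    0x0: "11110", 0x1: "01001", 0x2: "10100", 0x3: "10101",
    0x4: "01010", 0x5: "01011", 0x6: "01110", 0x7: "01111",
    0x8: "10010", 0x9: "10011", 0xA: "10110", 0xB: "10111",
    0xC: "11010", 0xD: "11011", 0xE: "11100", 0xF: "11101",
}
IDLE, J, K, T = "11111", "11000", "10001", "01101"

def frame_symbols(nibbles):
    """J/K start delimiter, one 5-bit symbol per 4-bit group, then T."""
    return [J, K] + [DATA_4B5B[n] for n in nibbles] + [T]

print(frame_symbols([0xA, 0x5]))
# ['11000', '10001', '10110', '01011', '01101']
# 4 data bits travel as 5 code bits, so 100 Mbps of data needs 125 Mbaud on
# the wire; the unused 5-bit combinations let the receiver detect errors.
```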


Continuous data stream of the 100Base-FX/TX specifications

After the 4-bit MAC code portions are converted into 5-bit physical layer portions, they must be represented as optical or electrical signals in the cable connecting the network nodes. The 100Base-FX and 100Base-TX specifications use different physical coding methods for this: NRZI and MLT-3, respectively (as in FDDI technology over fiber and over twisted pair).

Physical layer 100Base-TX - twisted pair UTP Cat 5 or STP Type 1, two pairs

The 100Base-TX specification uses UTP Category 5 cable or STP Type 1 cable as the transmission medium. The maximum cable length in both cases is 100 m.

The main differences from the 100Base-FX specification are the use of the MLT-3 method to transmit the 5-bit symbols of the 4B/5B code over twisted pair, and the presence of the Auto-negotiation function for selecting the port's operating mode. The auto-negotiation scheme allows two physically connected devices that support several physical layer standards, differing in bit rate and number of twisted pairs, to choose the most advantageous mode of operation. The auto-negotiation procedure typically occurs when a network adapter capable of operating at 10 and 100 Mbps is connected to a hub or switch.

The Auto-negotiation scheme described below is now the standard for 100Base-T technology. Before it, manufacturers used various proprietary schemes for automatically detecting the speed of interacting ports, which were not compatible with one another. The standard Auto-negotiation scheme was originally proposed by National Semiconductor under the name NWay.

A total of five different modes of operation are currently defined that can be supported by 100Base-TX or 100Base-T4 devices over twisted pair:

  • 10Base-T - 2 pairs of Category 3;
  • 10Base-T full-duplex - 2 pairs of Category 3;
  • 100Base-TX - 2 pairs of Category 5 (or Type 1A STP);
  • 100Base-T4 - 4 pairs of Category 3;
  • 100Base-TX full-duplex - 2 pairs of Category 5 (or Type 1A STP).

10Base-T has the lowest priority and 100Base-TX full-duplex the highest. The negotiation process occurs when the device is powered on, and can also be initiated at any time by the device's control module.

The device that initiates the auto-negotiation process sends its partner a burst of special pulses, a Fast Link Pulse (FLP) burst, which contains an 8-bit word encoding the proposed mode of interaction, starting with the highest-priority mode supported by the node.

If the partner node supports the Auto-negotiation function and can also support the proposed mode, it responds with its own FLP burst confirming that mode, and the negotiation ends there. If the partner node can support only a lower-priority mode, it indicates that mode in its response, and that mode is selected as the working one. Thus, the highest-priority mode common to both nodes is always selected.

A node that supports only 10Base-T technology sends Manchester link-test pulses every 16 ms to check the continuity of the line connecting it to its neighbor. Such a node does not understand the FLP request that an Auto-negotiation node sends it and continues to send its pulses. A node that receives only line-continuity-test pulses in response to an FLP request concludes that its partner can work only according to the 10Base-T standard, and sets that operating mode for itself.
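The selection rule described above can be sketched as follows (illustrative code; the mode strings and the helper function are not from any real driver API, and the priority order is the one listed in the text, lowest first):

```python
# Sketch of the Auto-negotiation priority rule: each side advertises the
# modes it supports, and the highest-priority mode common to both is chosen.

PRIORITY = [
    "10Base-T",
    "10Base-T full-duplex",
    "100Base-TX",
    "100Base-T4",
    "100Base-TX full-duplex",
]

def negotiate(local_modes, partner_modes):
    """Return the best mode both ports support, or None if they share none."""
    common = set(local_modes) & set(partner_modes)
    return max(common, key=PRIORITY.index) if common else None

# A 10/100 adapter plugged into a hub port that only does 100Base-TX half-duplex:
adapter = ["10Base-T", "100Base-TX", "100Base-TX full-duplex"]
hub = ["10Base-T", "100Base-TX"]
print(negotiate(adapter, hub))   # 100Base-TX
```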

Physical layer 100Base-T4 - twisted pair UTP Cat 3, four pairs

The 100Base-T4 specification was developed to leverage existing Category 3 twisted-pair wiring for high-speed Ethernet. This specification improves overall throughput by simultaneously transmitting bit streams across all 4 cable pairs.

The 100Base-T4 specification appeared later than the other Fast Ethernet physical layer specifications. The developers of the technology primarily wanted to create physical specifications closest to 10Base-T and 10Base-F, which worked over two data channels: two pairs or two fibers. To implement operation over two twisted pairs, however, it was necessary to move to the higher-quality Category 5 cable.

At the same time, the developers of the competing 100VG-AnyLAN technology relied from the start on Category 3 twisted pair; its main advantage lay not so much in cost as in the fact that it was already installed in the vast majority of buildings. Therefore, after the release of the 100Base-TX and 100Base-FX specifications, the developers of Fast Ethernet technology implemented their own physical layer variant for Category 3 twisted pair.

Instead of 4B/5B coding, this method uses 8B/6T coding, which has a narrower signal spectrum and, at a speed of 33 Mbps, fits into the 16 MHz band of a Category 3 twisted-pair cable (with 4B/5B coding the signal spectrum does not fit into this band). Every 8 bits of MAC-level information are encoded with 6 ternary symbols, that is, digits with three states. Each ternary digit lasts 40 ns. A group of 6 ternary digits is then transmitted on one of the three transmitting twisted pairs, independently and sequentially.

The fourth pair is always used to listen for a carrier in order to detect collisions. The data rate on each of the three transmitting pairs is 33.3 Mbps, so the overall speed of the 100Base-T4 protocol is 100 Mbps. At the same time, thanks to the adopted coding method, the signal change rate on each pair is only 25 Mbaud, which allows the use of Category 3 twisted pair.
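The figures in the last two paragraphs are easy to verify (a worked-arithmetic sketch; the constants are the ones quoted in the text):

```python
# The arithmetic behind the 100Base-T4 figures: 8 data bits are carried by
# 6 ternary symbols, three pairs transmit in parallel, and the fourth pair
# only listens for collisions.

BITS_PER_GROUP = 8
SYMBOLS_PER_GROUP = 6
SYMBOL_TIME_NS = 40                     # each ternary digit lasts 40 ns

# Data rate of a single transmitting pair, in Mbps:
pair_mbps = BITS_PER_GROUP / (SYMBOLS_PER_GROUP * SYMBOL_TIME_NS) * 1000
print(round(pair_mbps, 1))              # 33.3

# Three pairs transmit simultaneously:
total_mbps = 3 * pair_mbps
print(round(total_mbps))                # 100

# The signalling rate per pair, however, is only one symbol per 40 ns:
baud_mbaud = 1000 / SYMBOL_TIME_NS
print(baud_mbaud)                       # 25.0 Mbaud, low enough for Cat 3 cable
```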

Fig. 3.23 shows the connection of the MDI port of a 100Base-T4 network adapter to the MDI-X port of a hub (the X prefix indicates that at this connector the receiver and transmitter connections are swapped between cable pairs relative to the adapter connector, which makes it easier to connect the wire pairs in the cable, without crossing). Pair 1-2 is always required to transfer data from the MDI port to the MDI-X port, pair 3-6 to receive data at the MDI port from the MDI-X port, and pairs 4-5 and 7-8 are bidirectional and are used for both receiving and transmitting, as needed.


Connection of nodes according to the 100Base-T4 specification

The ComputerPress test laboratory tested Fast Ethernet network cards for the PCI bus intended for use in 10/100 Mbit/s workstations. The most widely used cards with 10/100 Mbit/s bandwidth were selected because, first, they can be used in Ethernet, Fast Ethernet, and mixed networks and, second, the promising Gigabit Ethernet technology (bandwidth up to 1000 Mbit/s) is still most often used to connect powerful servers to the network equipment of the network core. The quality of the passive network equipment (cables, sockets, etc.) used on the network is extremely important. It is well known that while Category 3 twisted pair is sufficient for Ethernet networks, Category 5 is required for Fast Ethernet. Signal dispersion and poor noise immunity can significantly reduce network bandwidth.

The purpose of testing was to determine, first of all, the effective performance index (Performance/Efficiency Index Ratio, hereinafter the P/E index), and only then the absolute throughput. The P/E index is calculated as the ratio of the network card's throughput in Mbps to the CPU utilization in percent. This index is the industry standard for determining the performance of network adapters. It was introduced to take into account the network cards' use of CPU resources, because some network adapter manufacturers try to maximize performance by using more CPU cycles to perform network operations. Low CPU usage and relatively high bandwidth are essential for running mission-critical business, multimedia, and real-time applications.
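As a sketch, the index computes as follows; the throughput and CPU-load figures here are hypothetical, chosen only to show how the index rewards low CPU usage:

```python
def pe_index(throughput_mbps, cpu_load_percent):
    """Performance/Efficiency index: Mbps of throughput per percent of CPU."""
    return throughput_mbps / cpu_load_percent

# Two hypothetical cards with identical throughput but different CPU cost:
card_a = pe_index(79.2, 60.0)   # heavier use of the CPU
card_b = pe_index(79.2, 40.0)   # lighter use of the CPU
print(round(card_a, 2))  # 1.32
print(round(card_b, 2))  # 1.98  <- the same speed at lower CPU cost wins
```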

We have tested the cards that are currently most often used for workstations in corporate and local networks:

  1. D-Link DFE-538TX
  2. SMC EtherPower II 10/100 9432TX / MP
  3. 3Com Fast EtherLink XL 3C905B-TX-NM
  4. Compex RL 100ATX
  5. Intel EtherExpress PRO / 100 + Management
  6. CNet PRO-120
  7. NetGear FA 310TX
  8. Allied Telesyn AT 2500TX
  9. Surecom EP-320X-R

The main characteristics of the tested network adapters are given in Table 1. Let us explain some of the terms used in the table. Automatic detection of the connection speed means that the adapter itself determines the maximum possible operating speed. In addition, if autosensing is supported, no additional configuration is required when switching from Ethernet to Fast Ethernet and vice versa; that is, the system administrator does not have to reconfigure the adapter and reload the drivers.

Bus Master support allows data transfer directly between the network card and the computer memory. This frees the central processor for other operations. This property has become the de facto standard. No wonder all known network cards support Bus Master mode.

Wake on LAN allows you to turn on your PC over the network. That is, it becomes possible to service the PC after hours. For this purpose, three-pin connectors on the motherboard and network adapter are used, which are connected with a special cable (included in the package). In addition, special control software is required. Wake on LAN technology is developed by the Intel-IBM alliance.

Full duplex mode allows you to transmit data simultaneously in both directions, half duplex - only in one. Thus, the maximum possible throughput in full duplex mode is 200 Mbps.

DMI (Desktop Management Interface) provides the ability to obtain information about the configuration and resources of the PC using network management software.

Support for the WfM (Wired for Management) specification enables the network adapter to communicate with network management and administration software.

To remotely boot a computer's OS over a network, network adapters are equipped with a special BootROM memory. This allows for efficient use of diskless workstations on the network. Most tested cards only had a BootROM slot; the BootROM itself is usually a separately ordered option.

ACPI (Advanced Configuration and Power Interface) support helps reduce power consumption. ACPI is a new power-management technology that relies on both hardware and software tools. In essence, Wake on LAN is part of ACPI.

Proprietary performance-enhancing features can increase the efficiency of a network card. The best known of them are Parallel Tasking II from 3Com and Adaptive Technology from Intel. These features are usually patented.

Support for major operating systems is provided by almost all adapters. The main operating systems include: Windows, Windows NT, NetWare, Linux, SCO UNIX, LAN Manager and others.

The level of service support is assessed by the availability of documentation, a driver diskette, and the ability to download the latest drivers from the company's website. Packaging also plays an important role. From this point of view, the best, in our opinion, are the network adapters from D-Link, Allied Telesyn, and Surecom. On the whole, though, the level of support was satisfactory for all the cards.

Usually the warranty covers the entire service life of the network adapter (a lifetime warranty). Sometimes it is limited to 1-3 years.

Testing technique

All tests used the latest NIC drivers downloaded from the respective vendors' Internet servers. In the case when the driver of the network card allowed any settings and optimization, the default settings were used (except for the Intel network adapter). Note that the richest additional features and functions are provided by cards and corresponding drivers from 3Com and Intel.

Performance was measured using Novell's Perform3 utility. The principle of the utility is that a small file is copied from a workstation to a shared network drive on the server, after which it remains in the server's file cache and is read from there many times over a specified period of time. This makes it possible to measure memory-network-memory exchange and eliminates the impact of disk latency. The utility's parameters include the initial file size, the final file size, the resizing step, and the test time. Perform3 displays the performance values for the different file sizes, along with the average and maximum performance (in KB/s). The following parameters were used to configure the utility:

  • Initial file size - 4095 bytes
  • Final file size - 65,535 bytes
  • File increment - 8192 bytes

The test time with each file was set to twenty seconds.

Each experiment used a pair of identical network cards, one running on the server and the other on a workstation. This may not seem in line with common practice, since servers usually use specialized network adapters with a number of additional functions. But this is exactly how all the well-known test laboratories in the world (KeyLabs, Tolly Group, etc.) carry out testing: with the same network cards installed on the server and on the workstations. The results are somewhat lower, but the experiment is clean, since only the analyzed network cards operate on all computers.

Compaq DeskPro EN client configuration:

  • Pentium II 450 MHz processor
  • 512 KB cache
  • 128 MB RAM
  • 10 GB hard drive
  • Microsoft Windows NT Server 4.0 with SP 6a
  • TCP/IP protocol.

Compaq DeskPro EP server configuration:

  • Celeron 400 MHz processor
  • 64 MB RAM
  • 4.3 GB hard drive
  • Microsoft Windows NT Workstation 4.0 with SP 6a
  • TCP/IP protocol.

Testing was conducted with the computers connected directly by a UTP Category 5 crossover cable. During these tests, the cards operated in 100Base-TX Full Duplex mode. In this mode the throughput is somewhat higher because part of the service information (for example, acknowledgments) is transmitted simultaneously with the useful information, whose volume is what is being estimated. In these conditions rather high throughput values were recorded; for example, the 3Com Fast EtherLink XL 3C905B-TX-NM adapter averaged 79.23 Mbps.

The processor load was measured on the server using the Windows NT Performance Monitor utility; the data was written to a log file. Perform3 was run on the client so as not to affect the server's processor load. An Intel Celeron, whose performance is significantly lower than that of Pentium II and III processors, was deliberately used as the server's processor: since the processor load is determined with a fairly large absolute error, larger absolute values make the relative error smaller.

After each test, Perform3 utility places the results of its work in a text file as a dataset of the following form:

65535 bytes. 10491.49 KBps. 10491.49 Aggregate KBps.
57343 bytes. 10844.03 KBps. 10844.03 Aggregate KBps.
49151 bytes. 10737.95 KBps. 10737.95 Aggregate KBps.
40959 bytes. 10603.04 KBps. 10603.04 Aggregate KBps.
32767 bytes. 10497.73 KBps. 10497.73 Aggregate KBps.
24575 bytes. 10220.29 KBps. 10220.29 Aggregate KBps.
16383 bytes. 9573.00 KBps. 9573.00 Aggregate KBps.
8191 bytes. 8195.50 KBps. 8195.50 Aggregate KBps.
10844.03 Maximum KBps. 10145.38 Average KBps.

The file size is displayed, along with the corresponding throughput for the selected client and for all clients (here there is only one client), as well as the maximum and average throughput for the whole test. The resulting average value for each test was converted from KB/s to Mbit/s using the formula:
(KB × 8) / 1024,
and the P/E index was calculated as the ratio of the throughput to the processor load in percent. The average P/E index was then computed from the results of three measurements.
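Applied to the sample Perform3 output above, the conversion can be checked directly (a small sketch; the parsing pattern simply matches the two summary figures on the final line):

```python
import re

# Tail of the Perform3 summary shown above:
summary = "10844.03 Maximum KBps. 10145.38 Average KBps."
max_kbps, avg_kbps = (float(m) for m in re.findall(r"([\d.]+) \w+ KBps", summary))

def kbps_to_mbps(kbps):
    # (KB x 8) / 1024: kilobytes of 1024 bytes, megabits of 1024 * 1024 bits
    return kbps * 8 / 1024

print(round(kbps_to_mbps(avg_kbps), 2))   # 79.26 Mbit/s average
print(round(kbps_to_mbps(max_kbps), 2))   # 84.72 Mbit/s maximum
```

The 79.26 Mbit/s figure agrees closely with the ~79.23 Mbps average quoted earlier for the 3Com adapter.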

When using the Perform3 utility on Windows NT Workstation, the following problem arose: in addition to being written to the network drive, the file was also written to the local file cache, from which it was subsequently read very quickly. The results were impressive but unrealistic, since no data as such was transmitted over the network. To let applications treat shared network drives as ordinary local drives, the operating system uses a special network component, the redirector, which redirects I/O requests over the network. Under normal operating conditions, when writing a file to a shared network drive, the redirector uses the Windows NT caching algorithm; that is why a write to the server is also written to the local file cache of the client machine. For testing, however, caching must occur only on the server. To prevent caching on the client computer, parameter values in the Windows NT registry were changed to disable the caching performed by the redirector. Here is how it was done:

  1. Registry path:

    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Rdr\Parameters

    Parameter name:

    UseWriteBehind enables write-behind optimization for files being written

    Type: REG_DWORD

    Value: 0 (default: 1)

  2. Registry path:

    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Lanmanworkstation\Parameters

    Parameter name:

    UtilizeNTCaching specifies whether the redirector will use the Windows NT cache manager to cache file contents.

    Type: REG_DWORD

    Value: 0 (default: 1)

Intel EtherExpress PRO / 100 + Management Network Adapter

The card's throughput and processor utilization are nearly the same as those of the 3Com card. The parameter-setting windows for this card are shown below.

The new Intel 82559 controller in this card provides very high performance, especially in Fast Ethernet networks.

The technology Intel uses in its EtherExpress PRO/100+ card is called Adaptive Technology. Its essence is to automatically vary the time intervals between Ethernet packets depending on the network load. As network congestion grows, the spacing between individual Ethernet packets is dynamically increased, which reduces collisions and increases throughput. Under low network load, when the probability of collisions is small, the intervals between packets are reduced, which also increases performance. The advantages of this method should be greatest in large, collision-prone Ethernet segments, that is, where hubs rather than switches dominate the network topology.

Intel's new technology, called Priority Packet, allows traffic through the NIC to be tuned according to the priorities of individual packets. This provides the ability to increase data transfer rates for mission-critical applications.

VLAN support is provided (IEEE 802.1Q standard).

The board has only two indicators: activity/link and 100 Mbps speed.

www.intel.com

SMC EtherPower II 10/100 SMC9432TX / MP Network Adapter

The architecture of this card uses two promising technologies, SMC SimulTasking and Programmable InterPacket Gap. The first is similar to 3Com's Parallel Tasking technology. Comparing the test results for the cards of these two manufacturers, one can judge how efficiently these technologies are implemented. Note also that this network card showed the third-best result in throughput and P/E index, outperforming all cards except those from 3Com and Intel.

There are four LED indicators on the card: speed 100, transmission, connection, duplex.

The company's main Web site is www.smc.com

For all its success, Ethernet has never been elegant. Network cards have only a rudimentary notion of intelligence: they really do send a packet first, and only then check whether anyone else was transmitting data at the same time. Someone has compared Ethernet to a society in which people can communicate with each other only as long as everyone shouts at once.

Like its predecessor, Fast Ethernet uses the CSMA/CD method (Carrier Sense Multiple Access with Collision Detection). Behind this long and obscure acronym hides a very simple technology. When an Ethernet card needs to send a message, it first waits for silence, then sends the packet while simultaneously listening to check whether anyone else sent a message at the same time. If that happened, neither packet reaches its destination. If there was no collision but the card has more data to transmit, it still waits a few microseconds before attempting to send a new batch. This is done so that other cards can also work and no one can monopolize the channel. In the event of a collision, both devices fall silent for a small, randomly generated interval of time, and then make a new attempt to transfer data.

Because of collisions, neither Ethernet nor Fast Ethernet can ever reach its maximum performance of 10 or 100 Mbps. As network traffic begins to grow, the time delays between sending individual packets shrink and the number of collisions rises. Real Ethernet performance cannot exceed about 70% of its potential bandwidth, and it may be even lower if the line is seriously overloaded.

Ethernet uses a packet size of 1518 bytes, which was perfectly adequate when it was created. Today this is considered a disadvantage when Ethernet is used to connect servers, since servers and communication lines tend to exchange a large number of small packets, which overloads the network. In addition, Fast Ethernet imposes a limit on the distance between connected devices of no more than 100 meters, and this forces extra caution when designing such networks.

Ethernet was originally designed around a bus topology, in which all devices connected to a common cable, thin or thick. The move to twisted pair changed the protocol only partially. With coaxial cable, a collision was detected at once by all stations. With twisted pair, a "jam" signal is used: as soon as a station detects a collision, it sends the signal to the hub, which in turn sends "jam" to every device connected to it.

To reduce congestion, Ethernet networks are split into segments that are joined by bridges and routers. This allows only necessary traffic to be transferred between segments. A message passed between two stations in the same segment is not forwarded to another segment and cannot cause overload there.

Today, when building a central backbone uniting servers, switched Ethernet is used. Ethernet switches can be regarded as high-speed multiport bridges that are able to determine on their own which of their ports a packet is addressed to. The switch examines packet headers and in this way builds a table mapping each subscriber's physical address to a port. This makes it possible to confine the packet's propagation and reduce the likelihood of overflow by sending the packet only to the correct port. Only broadcast packets are sent out all ports.
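The address-learning behavior just described can be sketched in a few lines of Python. This is a didactic model, not vendor firmware; the port numbers and addresses are invented for illustration.

```python
class EthernetSwitch:
    """Minimal sketch of a learning switch's forwarding table."""

    BROADCAST = "ff:ff:ff:ff:ff:ff"

    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}  # physical address -> port number

    def handle_frame(self, src_mac, dst_mac, in_port):
        # Learn: remember which port the sender's address lives on.
        self.mac_table[src_mac] = in_port
        # Broadcast frames and unknown destinations are flooded to all
        # other ports; a known destination goes out a single port.
        if dst_mac == self.BROADCAST or dst_mac not in self.mac_table:
            return [p for p in range(self.num_ports) if p != in_port]
        return [self.mac_table[dst_mac]]
```

For example, on a 4-port switch the first frame from an unknown sender to an unknown destination is flooded to the three other ports, while the reply, whose destination has by then been learned, leaves through exactly one port.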

100BaseT - the big brother of 10BaseT

The idea of Fast Ethernet technology was born in 1992. In August of the following year, a group of manufacturers merged into the Fast Ethernet Alliance (FEA). The FEA's goal was to obtain formal approval of Fast Ethernet from committee 802.3 of the Institute of Electrical and Electronics Engineers (IEEE), since this committee deals with Ethernet standards. Luck accompanied the new technology and its supporting alliance: in June 1995 all formal procedures were completed, and Fast Ethernet technology received the name 802.3u.

With the IEEE's light hand, Fast Ethernet is referred to as 100BaseT. The explanation is simple: 100BaseT is an extension of the 10BaseT standard with bandwidth raised from 10 Mbps to 100 Mbps. The 100BaseT standard includes the CSMA/CD (Carrier Sense Multiple Access with Collision Detection) media access protocol, which is also used in 10BaseT. In addition, Fast Ethernet can operate over cables of several types, including twisted pair. Both of these properties of the new standard are very important to potential buyers, and thanks to them 100BaseT turns out to be a good migration path for networks based on 10BaseT.

The main selling point of 100BaseT is that Fast Ethernet is based on inherited technology. Since Fast Ethernet uses the same message transfer protocol as older versions of Ethernet, and the cable systems of these standards are compatible, migrating to 100BaseT from 10BaseT requires smaller capital investment than installing other types of high-speed networks. Moreover, since 100BaseT is a continuation of the old Ethernet standard, all the tools and procedures of network analysis, as well as all software running on older Ethernet networks, should keep working in this standard. Hence the 100BaseT environment will be familiar to network administrators with Ethernet experience. This means that staff training will take less time and cost significantly less.

PRESERVATION OF THE PROTOCOL

Perhaps the greatest practical benefit of the new technology came from the decision to leave the message transfer protocol unchanged. The message transfer protocol, in our case CSMA/CD, defines the way in which data is transmitted over the network from one node to another through the cable system. In the ISO/OSI model, the CSMA/CD protocol is part of the media access control (MAC) layer. This layer defines the format in which information is transmitted over the network and the way a network device obtains access to the network (or control of the network) for data transmission.

The name CSMA/CD can be broken into two parts: Carrier Sense Multiple Access and Collision Detection. The first part of the name tells how a node with a network adapter determines the moment when it should send a message. In accordance with the CSMA protocol, the node first "listens" to the network to determine whether any other message is being transmitted at that moment. If a carrier tone is heard, the network is currently busy with another message: the node enters a waiting mode and stays in it until the network is free. When silence falls on the network, the node begins transmitting. In fact, the data is sent to all nodes of the network or segment, but it is accepted only by the node to which it is addressed.

Collision Detection, the second part of the name, is used to resolve situations in which two or more nodes try to send messages simultaneously. According to the CSMA protocol, each node ready to transmit must first listen to the network to determine whether it is free. But if two nodes are listening at the same moment, both will decide that the network is free and begin transmitting their packets simultaneously. In this situation the transmitted data overlap one another (network engineers call this a conflict), and neither of the messages reaches its destination. Collision Detection requires that the node keep listening to the network after transmitting a packet. If a conflict is detected, the node repeats the transmission after a randomly chosen time interval and checks again whether a conflict occurred.

THREE KINDS OF FAST ETHERNET

Along with the preservation of the CSMA/CD protocol, another important decision was to design 100BaseT so that it could use cables of different types: both those used in older versions of Ethernet and newer models. The standard defines three modifications for working with different Fast Ethernet cable types: 100BaseTX, 100BaseT4 and 100BaseFX. The 100BaseTX and 100BaseT4 modifications are designed for twisted pair, while 100BaseFX was designed for optical cable.

The 100BaseTX standard requires two pairs of UTP or STP wire. One pair is used for transmission, the other for reception. Two major cabling standards meet these requirements: EIA/TIA-568 Category 5 UTP and IBM's Type 1 STP. Attractive features of 100BaseTX include support for full duplex mode when working with network servers, and the use of only two of the four pairs of an eight-wire cable, leaving the other two pairs free for later use to expand the network's capabilities.

However, if you are going to work with 100BaseTX over Category 5 wiring, you should be aware of its shortcomings. This cable is more expensive than other eight-wire cables (for example, Category 3). In addition, working with it requires punchdown blocks, connectors and patch panels that meet Category 5 requirements. It should be added that to support full duplex mode, full duplex switches must be installed.

The 100BaseT4 standard has softer requirements for the cable used. The reason is that 100BaseT4 employs all four pairs of an eight-wire cable: one for transmission, another for reception, and the remaining two working both for transmission and for reception. Thus in 100BaseT4 both reception and transmission of data can each be carried over three pairs. By spreading 100 Mbps across three pairs, 100BaseT4 lowers the signal frequency, so a less high-quality cable suffices. 100BaseT4 networks can be implemented with UTP Category 3 and 4 cables, as well as UTP Category 5 and STP Type 1.
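The frequency reduction is easy to quantify. The sketch below assumes the usual 100BaseT4 figures: 100 Mbps split across three simultaneously used pairs, and the 8B/6T code mapping every 8 bits onto 6 ternary symbols.

```python
# 100BaseT4 per-pair rate, under the assumptions stated above.
rate_mbps = 100
pairs_in_use = 3                     # three pairs carry data at a time

per_pair_mbps = rate_mbps / pairs_in_use
# 8B/6T: 8 bits are encoded as 6 ternary symbols, so the symbol
# (baud) rate is 6/8 of the bit rate on each pair.
per_pair_mbaud = rate_mbps * 6 / (pairs_in_use * 8)

print(per_pair_mbaud)  # 25.0
```

A 25 MBaud symbol rate per pair is what lets 100BaseT4 run over lower-grade Category 3 cable.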

The advantage of 100BaseT4 is its less rigid wiring requirements. Category 3 and 4 cables are more widespread and, moreover, significantly cheaper than Category 5 cables, which is worth remembering before installation work begins. The disadvantages are that 100BaseT4 requires all four pairs and that full duplex mode is not supported by this protocol.

Fast Ethernet also includes a standard for working with multimode optical fiber with a 62.5-micron core and 125-micron cladding. The 100BaseFX standard is oriented mainly toward backbones, for connecting Fast Ethernet repeaters within one building. The traditional advantages of optical cable are inherent in 100BaseFX: immunity to electromagnetic noise, improved data protection, and greater distances between network devices.

A RUNNER FOR SHORT DISTANCES

Although Fast Ethernet is a continuation of the Ethernet standard, the migration from 10BaseT to 100BaseT cannot be regarded as a mechanical replacement of equipment: it may require changes in network topology.

The theoretical limit on the diameter of a Fast Ethernet segment is 250 meters; this is only 10 percent of the theoretical size limit of an Ethernet network (2500 meters). This limitation stems from the nature of the CSMA/CD protocol and the 100 Mbps transmission speed.

As noted earlier, a transmitting workstation must listen to the network for a period of time to make sure the data has reached the destination station. On an Ethernet network with 10 Mbps bandwidth (for example 10Base5), the time interval a workstation needs to listen to the network for a collision is determined by the distance that a 512-bit frame (the frame size is specified in the Ethernet standard) will travel during the processing of this frame by the workstation. For Ethernet with 10 Mbps bandwidth this distance is 2500 meters.

On the other hand, the same 512-bit frame (the 802.3u standard specifies a frame of the same size as 802.3, that is, 512 bits) transmitted by a workstation in a Fast Ethernet network will travel only 250 m before the workstation completes its processing. If the receiving station were farther than 250 m from the transmitting one, the frame could come into collision with another frame somewhere further down the line, and the transmitting station, having completed its transmission, would no longer pick up this collision. Therefore the maximum diameter of a 100BaseT network is 250 meters.
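The arithmetic behind these numbers is easy to reproduce. The sketch below assumes a signal propagation speed of roughly 200 meters per microsecond in copper and ignores repeater delays, so the results are rough order-of-magnitude checks rather than values mandated by the standard.

```python
def slot_time_us(bits=512, rate_mbps=10):
    """Time to transmit the minimum 512-bit frame, in microseconds."""
    return bits / rate_mbps  # bits / (Mbit/s) -> microseconds

# Assumed propagation speed in copper cable: ~200 m per microsecond.
PROPAGATION_M_PER_US = 200

def max_round_trip_m(rate_mbps):
    # A collision must be heard before the 512-bit frame finishes,
    # so the signal may travel at most one slot time (out and back).
    return slot_time_us(rate_mbps=rate_mbps) * PROPAGATION_M_PER_US

# At 10 Mbps the slot time is 51.2 us; at 100 Mbps it is 5.12 us,
# a tenth of the budget, which is why the network diameter shrinks
# by roughly the same factor (repeater delays eat into it further).
print(slot_time_us(rate_mbps=10))    # 51.2
print(slot_time_us(rate_mbps=100))   # 5.12
```

The 250-meter figure quoted in the text is what remains of the 100 Mbps round-trip budget once repeater and transceiver delays are subtracted with a safety margin.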

To use the full allowed distance, two repeaters are needed to connect all nodes. According to the standard, the maximum distance between a node and a repeater is 100 meters; in Fast Ethernet, as in 10BaseT, the distance between a hub and a workstation must not exceed 100 meters. Since connecting devices (repeaters) introduce additional delays, the real working distance between nodes can be even smaller. It therefore seems reasonable to take all distances with a certain margin.

To work over long distances, optical cable will have to be purchased. For example, 100BaseFX equipment in half duplex mode allows a switch to be connected to another switch or to a terminal station located at a distance of up to 450 meters. With full duplex 100BaseFX installed, two network devices can be connected at a distance of up to two kilometers.

HOW TO INSTALL 100BASET

Besides the cables we have already discussed, installing a Fast Ethernet network requires network adapters for workstations and servers, 100BaseT hubs and, possibly, some 100BaseT switches.

The adapters needed to organize a 100BaseT network are called 10/100 Mbps Ethernet adapters. These adapters are capable (a requirement of the 100BaseT standard) of distinguishing 10 Mbps from 100 Mbps on their own. To serve a group of servers and workstations transferred to 100BaseT, a 100BaseT hub is also required.

When a server or personal computer with a 10/100 adapter is switched on, the adapter issues a signal announcing that it can provide a bandwidth of 100 Mbps. If the receiving station (most likely a hub) is also designed to work with 100BaseT, it answers with a signal, upon which both the hub and the PC or server automatically switch into 100BaseT mode. If the hub works only with 10BaseT, it gives no answering signal, and the PC or server automatically switches into 10BaseT mode.
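The handshake just described can be modeled as a simple function. This is a behavioral sketch of the article's description (announce, answer, fall back), not the Fast Link Pulse encoding that real auto-negotiation hardware uses.

```python
def negotiate_speed(adapter_speeds, partner_speeds):
    """Pick the highest speed (in Mbps) both ends support.

    A 10/100 adapter advertises {10, 100}; an old 10BaseT hub that
    never answers the 100 Mbps announcement is modeled as {10}.
    """
    common = set(adapter_speeds) & set(partner_speeds)
    if not common:
        raise ValueError("no common operating mode")
    return max(common)

# A 10/100 adapter facing a 100BaseT hub: both switch to 100 Mbps.
assert negotiate_speed({10, 100}, {10, 100}) == 100
# The same adapter facing a 10BaseT-only hub falls back to 10 Mbps.
assert negotiate_speed({10, 100}, {10}) == 10
```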

In small-scale 100BaseT configurations it is possible to use a 10/100 bridge or switch, which will link the part of the network working with 100BaseT to a pre-existing 10BaseT network.

DECEPTIVE SPEED

Summing up the above, we note that, as it seems to us, Fast Ethernet is best for solving problems of high peak loads. For example, if some user works with CAD or image-processing programs and needs increased throughput, then Fast Ethernet may be a good way out. However, if the problems are caused by an excess of users on the network, then 100BaseT begins to slow down the exchange of information at roughly 50% network load, in other words, at the same level as 10BaseT. But in the end, it is, after all, nothing more than an extension.

Fast Ethernet


Fast Ethernet structure

To better understand the operation and interaction of the Fast Ethernet elements, let us turn to Figure 1.

Figure 1. Fast Ethernet System

Logic Link Control (LLC) Sublayer

The IEEE 802.3u specification divides link layer functions into two sublayers: logical link control (LLC) and media access control (MAC), which will be discussed below. LLC, whose functions are defined by the IEEE 802.2 standard, provides the interconnection with higher-level protocols (for example, IP or IPX), offering three types of communication service:

  • Unacknowledged connectionless service. A simple service that provides no flow control or error control and does not guarantee correct delivery of data.
  • Connection-oriented service. A fully reliable service that guarantees correct data delivery by establishing a connection with the receiving system before data transfer begins and by using error control and data flow control mechanisms.
  • Acknowledged connectionless service. A moderately complex service that uses acknowledgment messages to ensure delivery but does not establish a connection before transferring data.

On the transmitting system, data passed down from the Network Layer protocol is first encapsulated by the LLC sublayer; the standard calls the result a Protocol Data Unit (PDU). When the PDU is handed down to the MAC sublayer, where it is again framed with a header and trailer, it can technically be called a frame. For an Ethernet packet this means that the 802.3 frame contains a three-byte LLC header in addition to the Network Layer data. Thus, the maximum allowable data length in each packet is reduced from 1500 to 1497 bytes.

The LLC header consists of three fields: DSAP (Destination Service Access Point), SSAP (Source Service Access Point) and Control.

In some cases, LLC frames play a minor role in the network communication process. For example, on a network that uses TCP/IP alongside other protocols, the only function of LLC may be to let 802.3 frames carry a SNAP header which, like an Ethertype, indicates the Network Layer protocol to which the frame should be delivered. In this case all LLC PDUs use the unnumbered information format. However, other higher-level protocols demand more advanced services from LLC. For example, NetBIOS sessions and several NetWare protocols make wider use of LLC connection-oriented services.

SNAP header

The receiving system needs to determine which of the Network Layer protocols should receive the incoming data. For this, 802.3 packets use another protocol within the LLC PDU, called the Sub-Network Access Protocol (SNAP).

The SNAP header is 5 bytes long and is located immediately after the LLC header in the data field of the 802.3 frame, as shown in the figure. The header contains two fields.

Organization code. The organization or vendor code is a 3-byte field that takes the same value as the first 3 bytes of the source MAC address in the 802.3 header.

Local code. The local code is a 2-byte field that is functionally equivalent to the Ethertype field in the Ethernet II header.
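A SNAP header can be unpacked with a few lines of Python. This sketch assumes only the layout described above (a 3-byte organization code followed by a 2-byte local code); the example bytes are illustrative.

```python
import struct

def parse_snap(header: bytes):
    """Split a 5-byte SNAP header into its two fields."""
    if len(header) != 5:
        raise ValueError("SNAP header is exactly 5 bytes")
    org = header[:3]                            # organization (vendor) code
    (local,) = struct.unpack("!H", header[3:])  # local code, big-endian
    return org.hex(), local

# An organization code of 00:00:00 with local code 0x0800 is the
# common way of carrying an Ethertype (here IPv4) inside SNAP.
assert parse_snap(bytes.fromhex("0000000800")) == ("000000", 0x0800)
```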

Reconciliation sublayer

As stated earlier, Fast Ethernet is an evolutionary standard. The MAC was designed for the AUI interface and must be mapped onto the MII interface used in Fast Ethernet; that is what this sublayer is for.

Media Access Control (MAC)

Each node in a Fast Ethernet network has a media access controller (MAC). The MAC is key to Fast Ethernet and serves three purposes:

The most important of the three MAC purposes is the first. For any network technology that uses a shared medium, the media access rules that determine when a node may transmit are its primary characteristic. Several IEEE committees work on rules for accessing the medium. The 802.3 committee, often called the Ethernet committee, defines LAN standards that use the rules named CSMA/CD (Carrier Sense Multiple Access with Collision Detection).

CSMA/CD defines the media access rules for both Ethernet and Fast Ethernet. It is in this area that the two technologies coincide completely.

Since all nodes in Fast Ethernet share the same medium, they can only transmit when it is their turn. This queue is defined by CSMA / CD rules.

CSMA / CD

The MAC Fast Ethernet controller listens to the carrier before transmitting. The carrier exists only when another node is transmitting. The PHY layer detects the presence of a carrier and generates a message for the MAC. The presence of a carrier indicates that the environment is busy and the listening node (or nodes) must yield to the transmitting one.

A MAC that has a frame to transmit must wait a minimum amount of time after the end of the previous frame before sending it. This time is called the interpacket gap (IPG) and lasts 0.96 microseconds, one-tenth of the 9.6-microsecond interpacket gap of ordinary 10 Mbps Ethernet (the IPG is the only interval always specified in microseconds rather than in bit times); see Figure 2.


Figure 2. Interpacket gap

After packet 1 ends, all LAN nodes must wait out the IPG time before they may transmit. The intervals between packets 1 and 2, and between packets 2 and 3, in Figure 2 are the IPG time. After packet 3 finished, no node had anything to transmit, so the interval between packets 3 and 4 is longer than the IPG.

All nodes on the network must comply with these rules. Even if a node has many frames to transmit and is the only transmitting node, after sending each packet it must still wait at least the IPG time.
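The IPG figures can be checked with a couple of lines of arithmetic: the gap is fixed at 96 bit times, so its duration in microseconds depends only on the bit rate.

```python
# The interpacket gap is 96 bit times at any Ethernet speed.
IPG_BIT_TIMES = 96

def ipg_us(rate_mbps):
    """Interpacket gap in microseconds at a given bit rate."""
    return IPG_BIT_TIMES / rate_mbps

assert ipg_us(10) == 9.6     # classic 10 Mbps Ethernet
assert ipg_us(100) == 0.96   # Fast Ethernet
```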

This is the CSMA part of the Fast Ethernet media access rules. In short, many nodes have access to the medium and use the carrier to monitor whether it is busy.

The early experimental networks applied exactly these rules, and such networks worked very well. However, using CSMA alone caused a problem. Often two nodes, each having a packet to transmit and having waited out the IPG time, would start transmitting at the same time, corrupting the data on both sides. This situation is called a collision, or conflict.

To overcome this obstacle, early protocols used a fairly simple mechanism. Packets were divided into two categories: commands and responses. Each command sent by a node required a response. If no response was received within some time (called the timeout period) after the command was sent, the command was issued again. This could happen several times (up to a maximum number of timeouts) before the sending node recorded an error.

This scheme worked fine, but only up to a point. Conflicts led to a sharp drop in performance (usually measured in bytes per second), because nodes often stood idle waiting for responses to commands that never reached their destination. Network congestion and growth in the number of nodes are directly related to growth in the number of conflicts and, therefore, to reduced network performance.

Early network designers quickly found a solution to this problem: each node must detect the loss of a transmitted packet by detecting the conflict itself (rather than waiting for a response that will never come). This means that packets lost to a conflict can be retransmitted immediately, before the timeout expires. If a node transmitted the last bit of a packet without a conflict, the packet was transmitted successfully.

Carrier sensing combines well with collision detection. Collisions still occur, but this does not affect network performance, since the nodes quickly dispose of them. The DIX group, having developed the CSMA/CD media access rules for Ethernet, formalized them as a simple algorithm (Figure 3).


Figure 3. Algorithm of CSMA / CD operation
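One way to render this decision flow in code is the sketch below. It is a didactic model: the medium, the jam signal and real timing are abstracted behind two callbacks, while the backoff shown is the truncated binary exponential backoff CSMA/CD actually uses (after the n-th collision, wait a random number of slot times in [0, 2^n - 1], with the exponent capped at 10 and the attempt count at 16).

```python
import random

MAX_ATTEMPTS = 16  # the standard gives up after 16 collisions

def backoff_slots(attempt):
    """Truncated binary exponential backoff: random delay in slot
    times after the attempt-th collision (attempt starts at 1)."""
    return random.randint(0, 2 ** min(attempt, 10) - 1)

def send_frame(medium_idle, transmit):
    """Sketch of the CSMA/CD flow for one frame.

    medium_idle() -> True when no carrier is sensed;
    transmit()    -> True if the frame went out collision-free.
    Returns the number of attempts used on success.
    """
    for attempt in range(1, MAX_ATTEMPTS + 1):
        while not medium_idle():    # carrier sense: defer while busy
            pass
        if transmit():              # keep listening while sending
            return attempt
        # Collision: the jam is assumed to be sent inside transmit();
        # back off a random number of slot times, then retry.
        for _ in range(backoff_slots(attempt)):
            pass                    # stands in for waiting slot times
    raise RuntimeError("excessive collisions, frame dropped")
```

Feeding the sketch a transmit stub that collides twice and then succeeds shows the frame going out on the third attempt.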

Physical layer device (PHY)

Since Fast Ethernet can use different cable types, each medium requires its own signal pre-conversion. Conversion is also required for efficient data transmission: to make the transmitted code resistant to interference and to the possible loss or distortion of its individual elements (bauds), and to ensure effective synchronization of the clocks on the transmitting and receiving sides.

Coding Sublayer (PCS)

Encodes and decodes data coming from and to the MAC layer, using the 4B/5B or 8B/6T algorithms.

Physical attachment and physical media sublayers (PMA and PMD)

The PMA and PMD sublayers communicate between the PCS sublayer and the MDI interface, shaping the signal in accordance with the physical coding method: MLT-3 or NRZI.

Auto-negotiation sublayer (AUTONEG)

The auto-negotiation sublayer allows two communicating ports to automatically select the most efficient mode of operation: full duplex or half duplex, 10 or 100 Mbps.

Physical layer

The Fast Ethernet standard defines three types of media for 100 Mbps Ethernet.

  • 100Base-TX - two twisted pairs of wires. Transmission is carried out in accordance with the standard for data transmission over a twisted-pair physical medium developed by ANSI (American National Standards Institute). Twisted-pair data cables can be shielded or unshielded. Uses the 4B/5B data coding algorithm and the MLT-3 physical coding method.
  • 100Base-FX is a two-core fiber optic cable. The transmission is also carried out in accordance with the ANSI standard for data transmission in fiber optic media. Uses 4B / 5B data coding algorithm and NRZI physical coding method.

100Base-TX and 100Base-FX specifications are also known as 100Base-X

  • 100Base-T4 is a special specification developed by the IEEE 802.3u committee. According to this specification, data transmission is carried out over four twisted pairs of telephone cable, which is called UTP Category 3 cable. It uses 8B / 6T data coding algorithm and NRZI physical coding method.

Additionally, the Fast Ethernet standard includes guidelines for the use of Type 1 shielded twisted pair (STP) cable, the cable traditionally used in Token Ring networks. Support for STP cabling in Fast Ethernet provides a migration path for customers with existing STP wiring.

The Fast Ethernet specification also includes an auto-negotiation mechanism that allows a host port to automatically adjust to a data transfer rate of 10 Mbps or 100 Mbps. This mechanism is based on the exchange of a number of packets with a port of a hub or switch.

100Base-TX environment

Two twisted pairs serve as the 100Base-TX transmission medium, with one pair used to transmit data and the other to receive it. Since the ANSI TP-PMD specification describes both shielded and unshielded twisted pair, the 100Base-TX specification includes support both for unshielded twisted pair and for Type 1 and 7 shielded twisted pair.

MDI (Medium Dependent Interface) connector

The media-dependent 100Base-TX link interface comes in one of two types. For unshielded twisted pair cable, the MDI connector is an 8-pin Category 5 RJ-45 connector. The same connector is used on a 10Base-T network, which provides backward compatibility with existing Category 5 cabling. For shielded twisted pair cable, the MDI is the IBM STP Type 1 connector, a shielded DB9 connector commonly used in Token Ring networks.

Category 5 (e) UTP cable

The 100Base-TX UTP media interface uses two pairs of wires. To minimize crosstalk and possible signal distortion, the remaining four wires should not be used to carry any signals. The transmit and receive signals of each pair are polarized: one wire carries the positive (+) signal and the other the negative (-) signal. The color coding of the cable wires and the connector pin numbers for a 100Base-TX network are shown in Table 1. Although the 100Base-TX PHY layer was developed after adoption of the ANSI TP-PMD standard, the RJ-45 pin numbers were changed to match the pinout already used in 10Base-T. The ANSI TP-PMD standard uses pins 7 and 9 to receive data, while the 100Base-TX and 10Base-T standards use pins 3 and 6. This pinout makes it possible to use 100Base-TX adapters in place of 10Base-T adapters and connect them to the same Category 5 cables without rewiring. In the RJ-45 connector, the pairs of wires used are connected to pins 1, 2 and 3, 6. To connect the wires correctly, follow their color coding.

Table 1. Pin assignments of the MDI connector for 100Base-TX UTP cable

Nodes interact with each other by exchanging frames. In Fast Ethernet the frame is the basic unit of network exchange: any information transmitted between nodes is placed in the data field of one or more frames. Forwarding frames from one node to another is possible only if there is a way to uniquely identify all network nodes. Therefore each node on a LAN has an address called its MAC address. This address is unique: no two nodes on a local network can have the same MAC address. Moreover, in no LAN technology (except ARCNet) can any two nodes in the world have the same MAC address. Any frame contains at least three main pieces of information: the recipient's address, the sender's address, and data. Some frames have other fields as well, but only the three listed are required. Figure 4 shows the Fast Ethernet frame structure.

Figure 4. Frame structure Fast Ethernet

  • recipient address - the address of the node receiving the data;
  • sender address - the address of the node that sent the data;
  • length / type (L / T - Length / Type) - contains the length of the data field or the type of the transmitted data;
  • frame check sequence (FCS - Frame Check Sequence) - used to verify the correctness of the frame received by the receiving node.

The minimum frame size is 64 octets, or 512 bits (the terms octet and byte are synonyms). The maximum frame size is 1518 octets, or 12144 bits.
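These size limits are easy to express in code. The sketch below assumes the field sizes quoted above (6-byte addresses, 2-byte L/T, 4-byte FCS, data field of 46 to 1500 octets), with short data fields padded up to the minimum.

```python
# Field sizes in octets, from the frame structure described above.
DST, SRC, LT, FCS = 6, 6, 2, 4
MIN_DATA, MAX_DATA = 46, 1500

def frame_size(data_len):
    """Total frame size in octets for a given data-field length,
    padding short data fields up to the 46-octet minimum."""
    if data_len > MAX_DATA:
        raise ValueError("data field exceeds 1500 octets")
    padded = max(data_len, MIN_DATA)
    return DST + SRC + LT + padded + FCS

assert frame_size(0) == 64        # minimum frame: 512 bits
assert frame_size(1500) == 1518   # maximum frame: 12144 bits
```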

Frame addressing

Each node on a Fast Ethernet network has a unique number called the MAC address, or node address. This number consists of 48 bits (6 bytes) assigned to the network interface when the device is manufactured and programmed during initialization. Therefore the network interfaces of all LANs (with the exception of ARCNet, which uses 8-bit addresses assigned by the network administrator) have a built-in unique MAC address, different from all other MAC addresses on Earth and assigned by the manufacturer in agreement with the IEEE.

To simplify the management of network interfaces, the IEEE proposed dividing the 48-bit address field into four parts, as shown in Figure 5. The first two bits of the address (bits 0 and 1) are address type flags. The values of the flags determine how the address part (bits 2-47) is interpreted.


Figure 5. Format of the MAC address

The I/G bit is called the individual/group address flag and shows what kind of address (individual or group) this is. An individual address is assigned to only one interface (or node) on the network. Addresses with the I/G bit set to 0 are MAC addresses, or node addresses. If the I/G bit is set to 1, the address belongs to a group and is usually called a multipoint (multicast) address or functional address. A multicast address can be assigned to one or more LAN network interfaces. A frame sent to a multicast address is received or copied by all LAN network interfaces that hold it. Multicast addresses allow a frame to be sent to a subset of the nodes on a local network. If the I/G bit is set to 1, bits 46 through 0 are treated as a multicast address rather than as the U/L, OUI and OUA fields of a normal address. The U/L bit is called the universal/local control flag and determines how the address was assigned to the network interface. If both bits, I/G and U/L, are set to 0, the address is the unique 48-bit identifier described earlier.

OUI (organizationally unique identifier). The IEEE assigns one or more OUIs to each manufacturer of network adapters and interfaces. Each manufacturer is responsible for correctly assigning the OUA (organizationally unique address) that every device it creates must have.

When the U/L bit is set, the address is locally administered. This means that it is not set by the manufacturer of the network interface. Any organization can create its own MAC address for a network interface by setting the U/L bit to 1 and bits 2 through 47 to some chosen value. A network interface, having received a frame, first of all decodes the destination address. When the I/G bit is set in the address, the MAC layer will receive the frame only if the destination address is in a list kept on the node. This technique allows one node to send a frame to many nodes.

There is a special multicast address called the broadcast address. In the 48-bit IEEE broadcast address, all bits are set to 1. If a frame is sent to the broadcast destination address, all nodes on the network will receive and process it.
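The bit layout described above is easy to check in code. A minimal Python sketch (the function name `classify_mac` is ours, not from the standard): in the first transmitted octet, the I/G bit is the least significant bit and the U/L bit is the next one.

```python
def classify_mac(mac: str) -> dict:
    """Inspect the I/G and U/L bits of a colon-separated MAC address.

    In the first address octet, bit 0 is the I/G flag (1 = group/multicast)
    and bit 1 is the U/L flag (1 = locally administered).
    """
    octets = bytes(int(part, 16) for part in mac.split(":"))
    return {
        "multicast": bool(octets[0] & 0x01),             # I/G bit
        "locally_administered": bool(octets[0] & 0x02),  # U/L bit
        "broadcast": octets == b"\xff" * 6,              # all 48 bits set
    }

print(classify_mac("00:1a:2b:3c:4d:5e"))  # individual, universally administered
print(classify_mac("01:00:5e:00:00:01"))  # multicast
print(classify_mac("ff:ff:ff:ff:ff:ff"))  # broadcast
```

Note that the broadcast address is just the extreme case of a multicast address with every bit set.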

Length / Type Field

The L/T (Length/Type) field serves two different purposes:

  • to determine the length of the frame's data field, excluding any pad bytes;
  • to denote the type of protocol data carried in the data field.

An L/T field value between 0 and 1500 gives the length of the frame's data field; a value of 1536 (0x0600) or greater identifies the protocol type of the data.

In general, the L/T field is a historical remnant of Ethernet's standardization in the IEEE, which gave rise to a number of compatibility problems for equipment released before 1983. Ethernet and Fast Ethernet hardware itself never uses the L/T field; it serves only for coordination with the software that processes frames (that is, with the protocols). The only truly standard purpose of the L/T field is as a length field; the 802.3 specification does not even mention its possible use as a data-type field. The standard states: "Frames with a length field value greater than that specified in clause 4.4.2 may be ignored, discarded, or used privately. The use of these frames is outside the scope of this standard."

Summarizing what has been said, the L/T field is the primary mechanism by which the frame type is distinguished. Fast Ethernet and Ethernet frames in which the L/T field value specifies a length (L/T ≤ 1500) are called 802.3 frames; frames in which the same field sets the data type (L/T > 1500) are called Ethernet II or DIX frames.
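The distinction above reduces to one comparison. A small sketch (the function name `frame_kind` is ours) that classifies a frame by its L/T value, treating the gap between 1500 and 1536 as undefined, as 802.3 does:

```python
def frame_kind(length_type: int) -> str:
    """Classify a frame by the value of its L/T (Length/Type) field.

    Values up to 1500 are a data-field length (an 802.3 frame);
    values of 1536 (0x0600) and above are a protocol type
    (an Ethernet II / DIX frame).
    """
    if length_type <= 1500:
        return "802.3 (length)"
    if length_type >= 1536:
        return "Ethernet II / DIX (type)"
    return "undefined (1501-1535)"

print(frame_kind(46))      # 802.3 (length)
print(frame_kind(0x0800))  # Ethernet II / DIX (type): 0x0800 is the IPv4 type
```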

Data field

The data field contains the information that one node sends to another. Unlike other fields, which store very specific information, the data field can contain almost anything, as long as its size is at least 46 and at most 1500 bytes. How the content of the data field is formatted and interpreted is determined by the protocols.

If the data to be sent is less than 46 bytes long, the LLC layer appends bytes of unspecified value, called pad data, to the end. As a result, the field length becomes 46 bytes.

If the frame is of the 802.3 type, the L/T field indicates the amount of valid data. For example, if a 12-byte message is sent, the L/T field contains the value 12 and the data field contains 34 additional pad bytes. The addition of pad bytes is initiated by the Fast Ethernet LLC layer and is usually implemented in hardware.
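The padding rule and the 12-byte example can be sketched in a few lines of Python (the helper name `pad_data` and the use of zero-valued pad bytes are our choices; the standard leaves the pad values unspecified):

```python
MIN_DATA = 46  # minimum Ethernet data-field length in bytes

def pad_data(payload: bytes):
    """Pad a payload to the 46-byte minimum.

    Returns (data_field, real_length); real_length is what an 802.3
    frame carries in its L/T field so the receiver can strip the pad.
    """
    pad = max(0, MIN_DATA - len(payload))
    return payload + b"\x00" * pad, len(payload)

field, lt = pad_data(b"x" * 12)
print(len(field), lt)  # 46 12  -> 34 pad bytes, L/T field = 12
```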

The MAC layer does not specify the content of the L/T field; software does. The value of this field is almost always set by the network interface driver.

Frame checksum

The frame check sequence (FCS) ensures that received frames are not corrupted. When the transmitted frame is formed at the MAC level, a special mathematical formula, CRC (Cyclic Redundancy Check), is used to calculate a 32-bit value. The bytes of the frame, starting from the first byte of the destination address and ending with the last byte of the data field, are fed to the input of the MAC-layer element that calculates the CRC, and the resulting value is placed in the FCS field of the frame. The FCS field is the primary and most important Fast Ethernet error detection mechanism.
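As a rough illustration, Python's `zlib.crc32` implements the same IEEE CRC-32 polynomial that Ethernet uses, so a receiver-side check can be sketched as below. This is a simplification: bit ordering and the byte order of the FCS on the wire are omitted, and the frame bytes here are made-up placeholders.

```python
import zlib

def compute_fcs(frame_without_fcs: bytes) -> int:
    """Compute a 32-bit CRC over destination address .. data field.

    Uses the IEEE CRC-32 polynomial (the one zlib.crc32 implements);
    on-the-wire bit/byte ordering details are not modeled here.
    """
    return zlib.crc32(frame_without_fcs) & 0xFFFFFFFF

# Placeholder frame: broadcast destination, zero source, L/T = 46, 46 zero bytes.
frame = bytes.fromhex("ffffffffffff") + bytes(6) + b"\x00\x2e" + b"\x00" * 46
fcs = compute_fcs(frame)
print(f"FCS = 0x{fcs:08x}")

# A receiver recomputes the CRC over the received bytes and compares
# it with the FCS field; any mismatch means the frame is corrupted.
assert compute_fcs(frame) == fcs
```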

DSAP and SSAP Field Values

DSAP / SSAP value (hex)    Description
02                         Individual LLC Sublayer Management
03                         Group LLC Sublayer Management
04                         SNA Path Control
06                         Reserved (DoD IP)
FE                         ISO CLNS IS 8473
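The well-known SAP assignments listed above fit naturally into a lookup table; a minimal sketch (the dictionary and the function name `describe_sap` are ours, and the hex values are the standard IEEE SAP assignments, which the extracted table above no longer shows explicitly):

```python
# Well-known IEEE SAP values and their assignments.
SAP_VALUES = {
    0x02: "Individual LLC Sublayer Management",
    0x03: "Group LLC Sublayer Management",
    0x04: "SNA Path Control",
    0x06: "Reserved (DoD IP)",
    0xFE: "ISO CLNS IS 8473",
}

def describe_sap(value: int) -> str:
    """Return the assignment for a DSAP/SSAP value, if it is well known."""
    return SAP_VALUES.get(value, "not in this table")

print(f"0x06 -> {describe_sap(0x06)}")
```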

The 8B6T coding algorithm converts each eight-bit data octet (8B) into a group of six ternary symbols (6T). The 6T code groups are transmitted in parallel over three twisted pairs of the cable, so the effective data transfer rate for each twisted pair is one third of 100 Mbit/s, that is, 33.33 Mbit/s. The ternary symbol rate for each twisted pair is 6/8 of 33.33 Mbit/s, which corresponds to a clock frequency of 25 MHz; this is the frequency at which the MII interface timer operates. Unlike binary signals, which have two levels, the ternary signals transmitted on each pair can have three levels.
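The rate arithmetic above checks out in a few lines:

```python
# Per-pair rates for 100Base-T4 with 8B6T coding, as derived above.
total_rate = 100.0                            # Mbit/s, carried over three pairs
per_pair_data_rate = total_rate / 3           # ~33.33 Mbit/s of data per pair
symbol_rate = per_pair_data_rate * 6 / 8      # 6 ternary symbols carry 8 bits

print(f"data rate per pair:  {per_pair_data_rate:.2f} Mbit/s")
print(f"ternary symbol rate: {symbol_rate:.2f} MBaud (25 MHz clock)")
```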

Character encoding table (linear code / symbol)

MLT-3 (Multi Level Transmission - 3) is a line code somewhat similar to NRZ but, unlike the latter, it uses three signal levels.

A logical one corresponds to a transition from one signal level to another, and the level changes cyclically, taking the previous transition into account. When a zero is transmitted, the signal level does not change.

Like NRZ, this code requires preliminary encoding of the data (in 100Base-TX, the 4B/5B block code).
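The level-cycling rule can be sketched as a small encoder (the function name `mlt3_encode` and the starting level of 0 are our choices for illustration):

```python
def mlt3_encode(bits):
    """Encode a bit sequence with MLT-3.

    The signal cycles through the levels 0, +1, 0, -1: each '1' bit
    advances to the next level in the cycle, each '0' bit holds the
    current level.
    """
    cycle = [0, +1, 0, -1]
    idx = 0                      # start at level 0
    out = []
    for bit in bits:
        if bit:
            idx = (idx + 1) % len(cycle)
        out.append(cycle[idx])
    return out

print(mlt3_encode([1, 1, 1, 1, 0, 1]))  # [1, 0, -1, 0, 0, 1]
```

Note that a run of ones produces a signal of one quarter the bit rate, which is what keeps the fundamental frequency low on the wire.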

Compiled on the basis of the following materials:

  1. Liam Quinn, Richard Russell, "Fast Ethernet";
  2. K. Zacker, "Computer Networks";
  3. V.G. Olifer and N.A. Olifer, "Computer Networks".