User talk:Hcberkowitz/Sandbox-BW


As well as defining information, Shannon analyzed the ability to send information through a communications channel. He found that a channel had a certain maximum transmission rate that could not be exceeded. Today we call that the bandwidth of the channel. Shannon demonstrated mathematically that even in a noisy channel with a low bandwidth, essentially perfect, error-free communication could be achieved by keeping the transmission rate within the channel's bandwidth and by using error-correcting schemes: the transmission of additional bits that would enable the data to be extracted from the noise-ridden signal.[1]

Assuming a communications technique that encodes single bits, the maximum theoretical bandwidth is inversely proportional to the bit time. A technique whose bits are one microsecond long, for example, would have a maximum possible bandwidth of one megabit per second.
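The inverse relationship between bit time and maximum rate can be sketched in a few lines (an illustrative calculation only, assuming one bit per symbol and no overhead):

```python
# Sketch: maximum theoretical bit rate from bit time, assuming a
# technique that encodes single bits and has no overhead.
def max_bit_rate(bit_time_seconds: float) -> float:
    """Return the maximum theoretical rate in bits per second."""
    return 1.0 / bit_time_seconds

# A one-microsecond bit time yields roughly one megabit per second.
print(max_bit_rate(1e-6))
```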

With real protocols, however, the maximum theoretical bandwidth may have to be reduced due to a variety of overhead factors. Remember, however, that the universal answer to any network question is "it depends". Some protocols actually make the full theoretical bandwidth available to the entity making use of the protocol, but, internally, the protocol runs faster than the official maximum rate, to leave room for some overhead functions.

Physical protocol overhead

Unfortunately, with real rather than theoretical transmission systems, the "maximum theoretical rate" is quoted in different ways. If one considers the rate of the physical protocol as the rate available to the function above it, such as a data link entity, that rate may be the theoretical maximum or something less. Almost any modern physical protocol has some internal overhead; protocol specifications vary as to whether that overhead is visible to the user of the physical protocol. Sometimes the bit time is accurately stated, but the theoretical maximum is not reachable because some of the bits are required for overhead in the transmission system. With other technologies, the real bit time is shorter than the one used to compute the available bit rate, because the writers of the specification wanted the user to see the achievable maximum that the physical layer could deliver to the service using it. Let's examine two examples.

DS1: some bandwidth reserved for overhead

The first widely used digital transmission format inside telephone networks is called DS1, part of the plesiochronous digital hierarchy (PDH). For many people, DS1 and T1 are interchangeable terms, although T1 carrier was the first specific implementation. The DS1 rate of 1.544 Mbps is made up of 24 DS0 digitized voice subchannels of 64 kbps each, which add up to 1.536 Mbps. For every 24 DS0 channels, which can be combined in various ways to reach 1.536 Mbps, an additional 8 kbps is reserved for framing and synchronization.
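The DS1 arithmetic above can be checked directly (a sketch using only the rates stated in the text):

```python
# Sketch of the DS1 arithmetic: 24 DS0 channels of 64 kbps each,
# plus 8 kbps of framing overhead per group of 24 channels.
DS0_RATE_KBPS = 64
CHANNELS = 24
FRAMING_KBPS = 8

payload_kbps = DS0_RATE_KBPS * CHANNELS   # 1536 kbps available to users
ds1_kbps = payload_kbps + FRAMING_KBPS    # 1544 kbps on the line
print(payload_kbps, ds1_kbps)  # 1536 1544
```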

FDDI: overhead hidden from the physical layer user

With the obsolescent Fiber Distributed Data Interface (FDDI), the rate normally quoted is 100 Mbps, and that rate is indeed fully available to the function that uses the FDDI stream. On the medium itself, however, the signal carries each four bits of data with an additional code bit used for time synchronization. So, while the FDDI physical service makes 100 Mbps available to the function that uses it, the real line rate is 125 Mbps.
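The hidden-overhead arithmetic is simple: one extra bit per four data bits means the line runs 25% faster than the rate the user sees. A quick sketch:

```python
# Sketch of the FDDI overhead arithmetic: each 4 data bits are sent
# with 1 extra synchronization bit, so 5 line bits carry 4 data bits.
user_rate_mbps = 100
line_bits_per_data_bits = 5 / 4

line_rate_mbps = user_rate_mbps * line_bits_per_data_bits  # 125.0 Mbps
overhead_fraction = 1 - user_rate_mbps / line_rate_mbps    # 20% of line bits
print(line_rate_mbps)  # 125.0
```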

Data link control overhead

When one moves up to data link, which, in turn, has its user entity, the bandwidth has several components:

  • The maximum frame rate; when testing LAN connections, the frame rate SHOULD be the listed theoretical maximum rate for the frame size on the media.[2]
  • The number of bits that are available to be used for data from the user entity

The product of the maximum number of frames and the bits available per frame would seem to be a reasonable estimate of bandwidth at the frame level, but this assumes all frames are of equal length. Since the overhead header and trailer are the same regardless of the frame payload field length, long frames, when the framing protocol allows variable lengths, are more bandwidth-efficient than short frames. Ethernet, for example, allows frames as short as 64 bytes (a 46-byte payload) and payloads as large as 1500 bytes. Bandwidth figures, therefore, are not very meaningful if the test frame length is not known.

Example of frame overhead

In the real world, however, a number of factors reduce the bandwidth to something below the theoretical maximum. While a nominal 10-megabit Ethernet/IEEE 802.3 stream doesn't really use 100-nanosecond bits on the line, assume that it does. Immediately, reductions appear due to overhead transmissions and protocol-enforced quiet times. In reality, the line carries not a sequence of bits, but a sequence of frames. Each frame has 64 bits of hardware synchronization preamble, 96 bits of addressing, 16 bits that identify aspects of the payload, a 32-bit error-checking field, and up to 12,000 bits of data.

Ignoring additional complications of the protocol, out of every 1526 bytes, the system can transmit 1500 data bytes.
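The frame arithmetic above can be sketched as follows, using the byte counts from the text (interframe gap and other complications ignored, as the text does):

```python
# Sketch of the Ethernet frame-overhead arithmetic for a maximum-
# length frame, with all byte counts taken from the text above.
PREAMBLE = 8    # 64 bits of hardware synchronization preamble
ADDRESSES = 12  # 96 bits of addressing
TYPE_FIELD = 2  # 16 bits identifying aspects of the payload
FCS = 4         # 32-bit error-checking field
PAYLOAD = 1500  # up to 12,000 bits of data

frame_bytes = PREAMBLE + ADDRESSES + TYPE_FIELD + FCS + PAYLOAD
efficiency = PAYLOAD / frame_bytes
print(frame_bytes, round(efficiency, 3))  # 1526 0.983
```

So about 98.3% of the transmitted bytes are payload at the maximum frame size; the figure drops sharply for minimum-length frames, which carry the same 26 bytes of overhead for only 46 bytes of payload.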

References

  1. Shannon C (1948), "A Mathematical Theory of Communication", Bell System Technical Journal
  2. Bradner S, McQuaid J (March 1999), Benchmarking Methodology for Network Interconnect Devices, RFC 2544, Section 20, p. 12