Data Communication Electronics Page

    General information

    In data communications there are many different interfaces and network types. A baseband network is one that provides a single channel for communications across the physical medium (e.g., cable), so only one device can transmit at a time. Devices on a baseband network, such as Ethernet, are permitted to use all the available bandwidth for transmission, and the signals they transmit do not need to be multiplexed onto a carrier frequency. An analogy is a single phone line such as you usually have to your house: only one person can talk at a time; if more than one person wants to talk, everyone has to take turns.

    In a broadband network, the physical cabling is virtually divided into several different channels, each with its own unique carrier frequency, using a technique called frequency division multiplexing (or another technique that provides the same type of functionality). These different frequencies are multiplexed onto the network cabling in such a way as to allow multiple simultaneous "conversations" to take place. The effect is similar to having several virtual networks traversing a single piece of wire. Network devices "tuned" to one frequency can't hear the "signal" on other frequencies, and vice versa. Cable TV is an example of a broadband network: multiple conversations (channels) are transmitted simultaneously over a single cable; you pick which one you want to listen to by selecting one of the frequencies being broadcast. The term "broadband" is also often used to refer to any high-bitrate communications. What is considered broadband varies depending on whom you ask: in Internet access many consider anything above 128 kbit/s broadband, while for people working in the networking field the speed needs to reach many megabits per second before it can be called broadband.

    In data communications and telecommunications, one physical medium often needs to be shared by many different data streams that need to be transported.
The general idea is to collect many data streams, combine them onto one wire, and separate them again at the other end. The process of combining different signals onto one is called multiplexing, and the process of separating them again is called demultiplexing. There are many techniques for doing this. Here are the main ones:

    • Time division multiplexing (TDM): A type of multiplexing where two or more channels of information are transmitted over the same link by allocating a different time interval ("slot" or "slice") for the transmission of each channel, i.e. the channels take turns using the link. Some kind of periodic synchronising signal or distinguishing identifier is usually required so that the receiver can tell which channel is which. TDM works very well in applications where the division of the communications channels is quite fixed or does not change too often (a typical application is the telephone network). TDM becomes inefficient when traffic is intermittent, because a time slot is still allocated even when the channel has no data to transmit.
    • Statistical time division multiplexing (STDM, StatMUX): STDM is a special type of multiplexing where the use of the medium is divided in time, but there are no fixed time slots. STDM uses a variable time slot length and allows channels to vie for any free slot space. It employs a buffer memory which temporarily stores the data during periods of peak traffic. This scheme allows STDM to waste no high-speed line time on inactive channels. STDM requires each transmission to carry identification information (i.e. a channel identifier). To reduce the cost of this overhead, a number of characters for each channel are grouped together for transmission.
    • Frequency division multiplexing (FDM): The physical cabling is virtually divided into several different channels, each with its own unique carrier frequency. These different frequencies are all put onto the network cabling in such a way as to allow multiple simultaneous "conversations" to take place (each operates at its own frequency, so they do not interfere with each other). The effect is similar to having several virtual networks traversing a single piece of wire. Network devices "tuned" to one frequency can't hear the "signal" on other frequencies, and vice versa. Cable TV is an example of a broadband network that uses frequency division multiplexing: multiple conversations (channels) are transmitted simultaneously over a single cable; you pick which one you want to listen to by selecting one of the frequencies being broadcast. Many radio networks also use frequency division multiplexing (different radio channels).
    • Code Division Multiple Access (CDMA): Code division multiple access is a multiple access scheme where a special coding is used to separate different signals on the same transmission medium. CDMA is used in spread spectrum systems to enable multiple access. It is a transmission technique in which the frequency spectrum of a data signal is spread using a code uncorrelated with that signal and unique to every addressee. By using a unique code to distinguish each different call, a CDMA wireless system enables many more people to share the airwaves at the same time.
    • Wavelength Division Multiple Access (WDMA): Wavelength division multiple access is a multiplexing method used in fiber optics and other optical communication systems. The idea in WDMA is that different information streams are transmitted as optical signals at different wavelengths, and the optical receivers are made wavelength-specific so that they receive only the signals meant for them. The basic idea is similar to FDM, but it uses different optical carrier wavelengths instead of different electrical carrier frequencies.
    • Space Division Multiple Access (SDMA): The idea in space division multiplexing is to divide the communication medium physically in some way, so that different users can access the same medium with the same kind of signal without interfering with each other. The medium needs to be some kind of space, not a simple wire. For example, space division multiplexing can be used in radio communications by using directional antennas on point-to-point links. The same applies to free space optics. Cellular network topology allows using the same radio frequencies over and over (the users using the same frequencies are so far away from each other that they do not interfere). The idea behind cellular networks is the subdivision of the geographical area covered by a network into a number of smaller areas called cells. Sometimes space division multiplexing is also used to refer to the case where many signals are transmitted through the same cable, separated onto different pairs of wires in it, or onto different optical fibers in one large cable.
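    The TDM idea above can be sketched in a few lines of code. This is a minimal illustration, not a real multiplexer: the fixed round-robin slot assignment is exactly what makes plain TDM simple but wasteful when a channel is idle. The channel contents are made-up example values.

```python
# Sketch of time division multiplexing (TDM): each channel gets a fixed,
# repeating time slot on the shared link, and the receiver separates the
# channels again by counting slots.

def tdm_multiplex(channels):
    """Interleave equal-length channel streams into one slot sequence."""
    return [ch[i] for i in range(len(channels[0])) for ch in channels]

def tdm_demultiplex(stream, n_channels):
    """Recover each channel by taking every n-th slot."""
    return [stream[i::n_channels] for i in range(n_channels)]

channels = [list("AAAA"), list("BBBB"), list("CCCC")]
line = tdm_multiplex(channels)
print(line)                      # slots alternate A, B, C, A, B, C, ...
print(tdm_demultiplex(line, 3))  # the original three channels, recovered
```

    Note that an idle channel would still occupy its slots here; a statistical multiplexer avoids that by tagging each slot with a channel identifier instead of relying on slot position.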
    In networking there are different technologies, usually divided into the following classes:
    • Circuit switching: Circuit switching establishes a dedicated connection between the sender and receiver. In circuit switching, a caller must first establish a connection to a callee before any communication is possible. During the connection establishment, resources are allocated between the caller and the callee. Generally, the resources are frequency intervals in a Frequency Division Multiplexing (FDM) scheme or, more recently, time slots in a Time Division Multiplexing (TDM) scheme. The set of resources allocated for a connection is called a circuit. This circuit connection consumes network capacity whether or not there is an active transmission taking place. Currently, circuit switching is mainly used in telephone networks to transmit voice and data signals.
    • Packet switching: In packet switching the data is sent to the network in small segments of transmission called packets. Each packet contains the information about where it should be sent (the destination address) and the actual data. The transmitted packets can be fixed length or variable length depending on the network technology used. Inside the network the packets are routed to the destination based on the destination address in the packet. In a packet switching system no specific connections are made; packets are just sent to the network and they find their way to the destination. A single communication line can be shared by many users, because packets are sent on the line one after another. If no data is available at the sender at some point during a communication, then no packet is transmitted over the network and no resources are wasted. Packet switching is the generic name for two different techniques: datagram packet switching and virtual circuit packet switching. Packet switching is widely used in LAN and WAN networks. Internet operation is based on the TCP/IP packet switching network.
    • Cell switching: Cell switching is a special form of packet switching. It uses short fixed-length packets which are called cells. The best known technology which uses cell switching is Asynchronous Transfer Mode (ATM). ATM uses cells that are 53 octets in length (5 octets of header + 48 octets of data). These cells are stored and forwarded in the network.
    • Virtual circuit packet switching: Virtual circuit packet switching (VC-switching) is a packet switching technique which merges datagram packet switching and circuit switching to extract both of their advantages. VC-switching is a variation of datagram packet switching where packets flow on so-called logical circuits for which no physical resources like frequencies or time slots are allocated. Each packet carries a circuit identifier which is local to a link and updated by each switch on the path of the packet from its source to its destination. A virtual circuit is defined by the sequence of the mappings between a link taken by packets and the circuit identifier packets carry on this link. This sequence is set up at connection establishment time and identifiers are reclaimed during the circuit termination.
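    The datagram forwarding idea above can be sketched as follows. This is a deliberately tiny illustration, assuming made-up node names and routing tables: each packet carries its own destination address, and each switch independently looks up the next hop for that destination.

```python
# Minimal sketch of datagram packet switching: packets carry a destination
# address and are forwarded hop by hop using per-node routing tables.
# Node names ("A", "B", "C") and table contents are hypothetical.

packets = [{"dst": "C", "data": "hello"}, {"dst": "C", "data": "world"}]

# routing table per node: destination -> next hop
tables = {
    "A": {"C": "B"},
    "B": {"C": "C"},
}

def route(packet, start):
    """Follow next-hop entries until the packet reaches its destination."""
    node, path = start, [start]
    while node != packet["dst"]:
        node = tables[node][packet["dst"]]
        path.append(node)
    return path

for p in packets:
    print(p["data"], "took path", route(p, "A"))   # A -> B -> C
```

    In virtual circuit switching the lookup key would instead be a short circuit identifier set up at connection time, rather than the full destination address in every packet.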
    There are also definitions of synchronous and asynchronous transmission types:
    • In synchronous transmission, the sender and receiver share a common timing reference, so there is a constant time interval between successive bits; data is sent as a continuous stream and the receiver recovers the clock from the signal or from a separate clock line.
    • In asynchronous transmission, no such shared clock is needed: each character is framed individually, with a start bit preceding and a stop bit following it, so the receiver can resynchronise at the start of every character.
    In data communications you often hear the term OSI model. The Open Systems Interconnection (OSI) reference model is the ISO (International Organization for Standardization) structure for the "ideal" network architecture. This model outlines seven areas, or layers, for the network. These layers are (from highest to lowest):
    • 7.) Applications: Where the user applications software lies. Such issues as file access and transfer, virtual terminal emulation, interprocess communication and the like are handled here.
    • 6.) Presentation: Differences in data representation are dealt with at this level. For example, UNIX-style line endings (LF only) might be converted to MS-DOS style (CRLF), or one character set converted to another.
    • 5.) Session: Communications between applications across a network is controlled at the session layer. Testing for out-of-sequence packets and handling two-way communication are handled here.
    • 4.) Transport: Makes sure the lower three layers are doing their job correctly, and provides a transparent, logical data stream between the end user and the network service he/she is using. This is the lowest layer that provides end-to-end services to the user.
    • 3.) Network: This layer makes certain that a packet sent from one device to another actually gets there in a reasonable period of time. Routing and flow control are performed here. This is the lowest layer of the OSI model that can remain ignorant of the physical network.
    • 2.) Data Link: This layer deals with getting data packets on and off the wire, error detection and correction and retransmission. This layer is generally broken into two sub-layers: The LLC (Logical Link Control) on the upper half, which does the error checking, and the MAC (Medium Access Control) on the lower half, which deals with getting the data on and off the wire.
    • 1.) Physical: Here is where the cable, connector and signaling specifications are defined.
    NOTE: There is also the undocumented but widely recognized ninth network layer: 9.) Bozone (a.k.a., loose nut behind the wheel): The user sitting at and using (or abusing, as the case may be) the networked device. All the error detection/correction algorithms in the world cannot protect your network from the problems initiated at the Bozone layer.

    Error correction

    In many data communications systems you need to transfer the data from one end to the other correctly. This is where error detection and correction systems come into the picture.

    Error detection coding is designed to permit the detection of errors. Once an error is detected, the receiver may ask for a re-transmission of the erroneous bits, or it may simply inform the recipient that the transmission was corrupted. In a binary channel, error checking codes are called parity check codes. A very common code is the single parity check code. Practical codes are normally block codes. Parity checking in this way provides good protection against single and multiple bit errors when the errors occur independently of each other.
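    The single parity check code mentioned above is simple enough to show completely. This sketch uses even parity: one extra bit is appended so that every codeword contains an even number of ones.

```python
# Single parity check code: append one bit so every codeword has an even
# number of ones. Any single-bit error (in fact, any odd number of flipped
# bits) is detected, but the code cannot locate or correct the error.

def add_even_parity(bits):
    return bits + [sum(bits) % 2]

def parity_ok(codeword):
    return sum(codeword) % 2 == 0

word = [1, 0, 1, 1, 0, 1, 0]
sent = add_even_parity(word)
assert parity_ok(sent)

received = sent.copy()
received[2] ^= 1                 # flip one bit in transit
assert not parity_ok(received)   # error detected, but position unknown
```

    Note that flipping an even number of bits would go undetected, which is why burst errors defeat simple parity and motivate the polynomial codes described next.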

    However, in many circumstances, errors occur in groups, or bursts. Parity checking of the kind just described then provides little protection. In these circumstances, a polynomial code is used. Polynomial codes work on each frame. Additional digits are added to the end of each frame. These digits depend on the contents of the frame. The number of added digits depends on the length of the expected error burst; typically 16 or 32 digits are added. The computed digits are called the frame check sequence (FCS) or cyclic redundancy check (CRC). Before transmission, each frame is divided by a generator polynomial. The remainder of this division is added to the frame. On reception, the division is repeated. Since the remainder has been added, the result should be zero. A non-zero result indicates that an error has occurred. A polynomial code can detect any error burst of length less than or equal to the degree of the generator polynomial. CRC error checking is now quite common, and its use will increase. CRC error checking is often implemented in hardware integrated into the networking card.
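    The divide-and-append scheme above can be demonstrated directly. This sketch uses a toy 3-bit generator polynomial (x^3 + x + 1, bits 1011) so the arithmetic is easy to follow by hand; real links typically use 16- or 32-bit generators. Division is over GF(2), i.e. subtraction is exclusive OR.

```python
# Sketch of CRC generation and checking by polynomial division over GF(2).
# GEN is a toy generator polynomial (x^3 + x + 1); real CRCs use longer ones.

GEN = 0b1011
GEN_DEGREE = 3          # degree of GEN -> 3 check bits appended

def crc_remainder(frame, n_data_bits):
    """Divide the frame (as an int) by GEN over GF(2); return the remainder."""
    for i in range(n_data_bits - 1, -1, -1):
        if frame & (1 << (i + GEN_DEGREE)):
            frame ^= GEN << i          # XOR is GF(2) subtraction
    return frame

def crc_encode(data, n_data_bits):
    """Shift the data left and append the CRC remainder."""
    shifted = data << GEN_DEGREE
    return shifted | crc_remainder(shifted, n_data_bits)

frame = crc_encode(0b1101, 4)
print(bin(frame))                         # 0b1101001: data 1101 + CRC 001
assert crc_remainder(frame, 4) == 0       # clean frame divides evenly
assert crc_remainder(frame ^ 0b10, 4) != 0  # corrupted frame does not
```

    On reception the same division is performed over the whole frame; a zero remainder means the frame is accepted, anything else flags an error, exactly as described above.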

    Error correction coding is more sophisticated than error detection coding. Its aim is to detect and locate errors in transmission. Once an error is located, the correction is trivial: the bit is inverted. Error correction coding requires much more processing and more error correction information than simple error detection coding. It is therefore uncommon in terrestrial communication, where better performance is usually obtained with error detection and retransmission. However, in satellite communications, the propagation delay often means that many frames are transmitted before an instruction to retransmit is received (which can make the task of data handling very complex). Data transmission systems that use retransmission also do not suit real-time transmission well, because the retransmissions can cause unexpected delays in the signals. It is necessary to get it right the first time. In these special circumstances, the additional bandwidth and extra processing required for the redundant check bits is an acceptable price.

    Forward error correction (FEC) is nowadays highly valued in communication links because it allows virtually error-free communications over a noisy channel. FEC improves system capacity by permitting high data rates within the communications link while providing improved transmission power efficiency. There are two principal types of error correction codes: block codes (such as Hamming codes) and convolutional codes.

    A Hamming code is a block code capable of identifying and correcting any single bit error occurring within the block. It is identified by the numbers K and N; we talk of an (N, K) Hamming code. Hamming codes employ modulo 2 arithmetic (exclusive OR as the addition operator). Hamming codes suffer from the same difficulty as other block codes: they offer protection against single-bit errors, but little protection against burst errors.
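    The "locate, then invert" idea can be shown with the classic (7,4) Hamming code: 4 data bits, 3 parity bits, all arithmetic modulo 2. The parity checks recomputed at the receiver form a syndrome whose value is directly the position of a single flipped bit.

```python
# Sketch of a (7,4) Hamming code. Bit positions are numbered 1..7 with
# parity bits at positions 1, 2 and 4 (the powers of two); each parity bit
# covers the positions whose binary index contains that power of two.

def hamming74_encode(d):
    """d = [d1, d2, d3, d4] -> 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4       # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4       # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4       # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Recompute the parities; the syndrome is the index of the bad bit."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # position 1..7, or 0 if clean
    if syndrome:
        c[syndrome - 1] ^= 1          # trivial correction: invert the bit
    return c

code = hamming74_encode([1, 0, 1, 1])
garbled = code.copy()
garbled[4] ^= 1                       # single-bit error at position 5
assert hamming74_correct(garbled) == code
```

    Two or more errors in the same block defeat this code (the syndrome then points at the wrong position), which is the burst-error weakness mentioned above.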

    Convolutional codes are designed to deal with circumstances where burst errors are present. Convolutional codes work in a statistical sense, meaning that we cannot say that, for example, every single-bit error will be corrected. We can only say that, on average, the use of a convolutional code will improve the error rate; it typically provides an error rate improvement of three orders of magnitude at a code rate of 1/2. Convolutional codes are widely used to encode digital data before transmission through noisy or error-prone channels. During encoding, k input bits are mapped to n output bits to give a rate k/n coded bitstream. The encoder consists of a shift register of kL stages, where L is described as the constraint length of the code. At the receiver, the bitstream can be decoded to recover the original data, correcting errors in the process.
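    The shift-register encoder described above can be sketched for the common rate-1/2, constraint length 3 case, using the classic generator polynomials 111 and 101 (octal 7 and 5). Each input bit produces two output bits computed from the bit itself and the two previous bits held in the register.

```python
# Sketch of a rate-1/2 convolutional encoder, constraint length L = 3.
# Generators: 111 (current ^ previous ^ one-before) and 101 (current ^
# one-before). One input bit in -> two coded bits out.

def conv_encode(bits):
    r = [0, 0]                        # shift register, most recent first
    out = []
    for b in bits:
        out.append(b ^ r[0] ^ r[1])   # generator 111
        out.append(b ^ r[1])          # generator 101
        r = [b, r[0]]                 # shift the register
    return out

print(conv_encode([1, 0, 1, 1]))      # [1, 1, 1, 0, 0, 0, 0, 1]
```

    Decoding is the hard part: the receiver must search for the valid coded sequence closest to what it received, which is the maximum-likelihood problem the Viterbi algorithm (described below) solves efficiently.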

    The optimum decoding method is maximum-likelihood decoding where the decoder attempts to find the closest "valid" sequence to the received bitstream. The most popular algorithm for maximum-likelihood decoding is the Viterbi Algorithm. The possible received bit sequences form a "trellis" structure and the Viterbi Algorithm tracks likely paths through the trellis before choosing the most likely path. A Reed-Solomon Codec (Coder-Decoder) is a block-oriented coding system that is applied typically on top of standard Viterbi coding. It corrects the bulk of the data errors that are not detected by other coding systems, significantly reducing the BERs at nominal signal-to-noise levels.

    For the past few years, turbo coding has been discussed and touted as the key FEC technique for improving channel performance. Now a newer technique, called low-density parity check (LDPC) coding, is emerging and could replace turbo coding as the FEC of choice by taking designers even closer to the Shannon limit. For communication designers, especially those in the networking and wireless fields, the Shannon limit can be seen as the Holy Grail, and since it was first defined in the late 1940s, designers have developed and implemented error correction coding techniques to push channel performance closer and closer to it.

    Signal coding

    Usually the actual serial binary data to be transmitted over the cable is not sent as a plain sequence of logic 1's and 0's, known technically as Non Return to Zero (NRZ). Instead, the bits are usually translated into a slightly different format that has a number of advantages over straight binary encoding. There are many coding systems in use. Here are a few details of some commonly used codes:

    • NRZ Coding: In this scheme, ones are represented by a high signal and zeros by a low signal. NRZ relies on the state during the bit period, with transitions occurring between measurement periods.
    • Manchester Coding: In this scheme, a one is represented by a transition from high-to-low while a zero is represented by a transition from low-to-high. Unlike NRZ, Manchester relies upon the transitions within the measurement period to define the data.
    • Miller Coding: In this scheme, a one is represented by a transition (either high-to-low or low-to-high) within the measurement period while a zero is represented by the lack of a transition.
    • 4B5B (4 bit to 5 bit): 4B5B is the line coding used in FDDI and 100 Mbit/s Ethernet. Each 4-bit data nibble is encoded as a 5-bit code with additional bit transitions.
    • 8B10B (8 bit to 10 bit): 8B10B is the line coding used in Gigabit Ethernet and Fibre Channel systems. 8B10B codes 8 data bits into 10 line bits, adding extra transitions to the signal. The added bits allow reliable transmission of clock information and other framing data.
    The most common advantages of these coding systems are the inclusion of the data clock on the same line as the data itself and the removal of the DC component from the transmitted signals (this allows the signal to pass nicely through capacitive and transformer coupling, and the signal can be easily amplified with an AC coupled amplifier). Environmental parameters are often taken into consideration when the coding scheme is selected. Variables such as power sources, acceptable error rates and correction procedures, and modulation type (ASK, FSK, or PSK) can make one coding type better than another in a specific application. Just keep in mind that coding schemes are not magic. Like Morse code, they are simply a system for encoding data onto the line or carrier in a manner that is shared and meaningful to both the transmitter and the receiver.
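    The difference between NRZ and Manchester coding can be seen by sampling each bit period twice. This small sketch follows the conventions described in the list above (Manchester: one = high-to-low, zero = low-to-high); note how Manchester guarantees a mid-period transition for every bit, which is what carries the clock with the data and removes the DC component.

```python
# Sketch comparing NRZ and Manchester line coding at two samples per bit.
# NRZ holds the level for the whole bit period; Manchester always places a
# transition in the middle of the period.

def nrz(bits):
    return [lvl for b in bits for lvl in (b, b)]

def manchester(bits):
    # one: high-to-low; zero: low-to-high (as in the list above)
    return [lvl for b in bits for lvl in ((1, 0) if b else (0, 1))]

bits = [1, 1, 0, 1]
print("NRZ:       ", nrz(bits))         # long runs possible -> no clock info
print("Manchester:", manchester(bits))  # a transition in every bit period
```

    A long run of identical bits produces a flat NRZ line with no transitions at all, which is exactly the clock-recovery problem the other codes solve.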

    Even though the information transported is digital in nature, the actual signals are analog. An ideal digital pulse signal only possesses two states, either "zero" or "one". A real-world pulse signal possesses many other characteristics, including amplitude, rise/fall time, over/undershoot, ringing, long-term droop, etc. In the simplest applications the digital signals on the line are either "zero" or "one", but there are also more complicated multi-level codes whose signals have multiple possible states (they can transport more than one bit per symbol on the line).

    To design, characterize, and troubleshoot fast data communications systems, engineers and technicians eventually need to observe the actual system pulse waveforms. To make this measurement, engineers generally use a photodetector and an oscilloscope. The most common time domain measurement for a transmission system is the eye diagram (see figure). The eye diagram is a plot of data points repetitively sampled from a pseudo-random bit sequence and displayed by an oscilloscope. The time window of observation is two data periods wide.

    The Shannon limit or Shannon capacity of a communications channel is the theoretical maximum information transfer rate of the channel. In information theory, the Shannon-Hartley theorem states the maximum amount of error-free digital data (that is, information) that can be transmitted over a communication link with a specified bandwidth in the presence of noise. The law is named after Claude Shannon and Ralph Hartley. Shannon and Hartley asked: how do bandwidth and noise affect the rate at which information can be transmitted over an analog channel? Bandwidth limitations alone do not impose a cap on the maximum information transfer rate, because information can also be transported by coding it into many different signal voltage levels.

    When we combine both noise and bandwidth limitations, however, we do find there is a limit to the amount of information that can be transferred. Considering all possible multi-level and multi-phase encoding techniques, Shannon's theorem gives the theoretical maximum rate C of clean (or arbitrarily low bit error rate) data, with a given average signal power, that can be sent through an analog communication channel subject to additive white Gaussian noise. The Shannon theorem can also be used (together with error correction theory) to determine the maximum possible efficiency of error-correcting methods versus levels of noise interference and data corruption. The theory does not describe how to construct the error-correcting method or which coding system is best to use; it only tells us how good the best possible method can be. Shannon's theorem has wide-ranging applications in both communications and data storage.

    The V.34 modem standard advertises a rate of 33.6 kbit/s, and V.90 claims a rate of 56 kbit/s, apparently in excess of the Shannon limit calculated for a normal telephone line (the telephone bandwidth is about 3.3 kHz and there is always a considerable amount of noise on the line). In fact, neither standard actually reaches the Shannon limit, but both approach it closely. The speed improvement of V.90 was made possible by the elimination of some signal conversions done in the normal telephone system (there is fully digital equipment at the other end of a V.90 modem connection, so there is only one analogue-to-digital conversion on the way). This improves the S/N ratio, which in turn produces the required headroom to exceed 33.6 kbit/s, which was otherwise near the Shannon limit.
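    The telephone-line numbers above can be checked with a back-of-the-envelope calculation of the Shannon-Hartley formula C = B log2(1 + S/N). The 35 dB signal-to-noise ratio below is an assumed, typical value for a good analog line, not a measured figure.

```python
# Shannon-Hartley capacity of a telephone channel: C = B * log2(1 + S/N).
# Bandwidth is the ~3.3 kHz figure from the text; the S/N is an assumption.

from math import log2

bandwidth_hz = 3300
snr_db = 35                         # assumed S/N for a good line
snr_linear = 10 ** (snr_db / 10)    # convert dB to a power ratio

capacity = bandwidth_hz * log2(1 + snr_linear)
print(f"{capacity / 1000:.1f} kbit/s")   # roughly 38 kbit/s
```

    The result, around 38 kbit/s, shows why 33.6 kbit/s was close to the limit of a fully analog line, and why V.90's improved S/N ratio was needed to go faster.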

    • Shannon-Hartley theorem - In information theory, the Shannon-Hartley theorem states the maximum amount of error-free digital data (that is, information) that can be transmitted over a communication link with a specified bandwidth in the presence of noise interference. The Shannon limit or Shannon capacity of a communications channel is the theoretical maximum information transfer rate of the channel.
    • Spectral content of NRZ test patterns - NRZ-test-pattern properties, such as data rate and pattern length, determine important time- and frequency-domain characteristics.
    • Manchester Encoding - Manchester encoding is a synchronous clock encoding technique used by the OSI physical layer to encode the clock and data of a synchronous bit stream. In this technique, the actual binary data to be transmitted over the cable are not sent as a sequence of logic 1's and 0's (known technically as Non Return to Zero (NRZ)). Instead, the bits are translated into a slightly different format that has a number of advantages over using straight binary encoding (i.e. NRZ).
    • Communication system block diagram
    • Examples of 4B/5B and 8B/6T Data Encoding

    Asynchronous Transfer Mode (ATM)

    Asynchronous Transfer Mode (ATM) is a very high speed transmission technology for voice, data, video, and television that tries to combine the best of circuit switching and packet switching. ATM is a compromise between the synchronous circuit-switched and the packet-switched systems in delays, resource use, and complexity. Cell switching is a preferred technology for the Broadband ISDN (B-ISDN) because of its flexible data transfer rates. Asynchronous Transfer Mode (ATM) is based on the use of fixed-length cells. Each fixed-length cell contains 53 bytes, five of which are a header that carries the addressing and routing information, header error control, plus a bit for priority handling and network management. The remaining 48 bytes are the user data. Cells in the network can be filled from various sources: voice, video, data, or television. ATM can be used in existing twisted pair, fiber-optic, coaxial, and hybrid fiber/coax (HFC) networks for local area network (LAN) and wide area network (WAN) communications. Because ATM was developed to have such a wide range of compatibility with existing networks, its implementation does not require replacement or over-building of telephone, data, or cable networks. ATM is also compatible with wireless and satellite communications. The downside of the wide range of compatibility is the complexity of the ATM network control layer.
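    The fixed 53-octet cell layout described above can be illustrated by segmenting a byte stream into cells. This sketch deliberately leaves the header fields opaque (a placeholder 5-octet header is used) and only shows the fixed-size framing: 5 octets of header plus 48 octets of payload, with the final cell padded.

```python
# Sketch of ATM-style segmentation: a byte stream is cut into fixed 53-octet
# cells (5-octet header + 48-octet payload). The header contents here are a
# placeholder; real ATM headers carry VPI/VCI routing fields, HEC, etc.

CELL_SIZE, HEADER_SIZE = 53, 5
PAYLOAD_SIZE = CELL_SIZE - HEADER_SIZE    # 48 octets

def segment_into_cells(data: bytes, header: bytes) -> list[bytes]:
    assert len(header) == HEADER_SIZE
    cells = []
    for i in range(0, len(data), PAYLOAD_SIZE):
        # pad the last payload to keep every cell exactly 53 octets
        payload = data[i:i + PAYLOAD_SIZE].ljust(PAYLOAD_SIZE, b"\x00")
        cells.append(header + payload)
    return cells

cells = segment_into_cells(b"x" * 100, b"\x00" * 5)
print(len(cells), len(cells[0]))   # 100 bytes -> 3 cells, each 53 octets
```

    The fixed cell size is what makes hardware switching simple and switching delay predictable; the cost is the padding and the 5/53 header overhead visible here.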


      Voice over ATM

      • Voice Telephony over Asynchronous Transfer Mode (VToA) Tutorial - Voice telephony over asynchronous transfer mode (VToA) is a single integrated infrastructure, able to manage and deliver all subscriber signals (audio, data, voice, and video) and switched and dedicated services reliably and efficiently. The goal of this tutorial is to provide an understanding of what VToA is, why it came into existence, and how it will benefit the public by increasing the availability and quality of telephone service worldwide.

    Dynamic Synchronous Transfer Mode (DTM)

    Dynamic synchronous transfer mode (DTM) is an exciting networking technology. The idea behind it is to provide high-speed networking with top-quality transmissions and the ability to adapt the bandwidth to traffic variations quickly. DTM is designed to be used in integrated service networks for both distribution and one-to-one communication.

    • Dynamic Synchronous Transfer Mode (DTM) Fundamentals and Network Solutions Tutorial - Dynamic synchronous transfer mode (DTM) is an exciting networking technology. The idea behind it is to provide high-speed networking with top-quality transmissions and the ability to adapt the bandwidth to traffic variations quickly. This tutorial explores the development of DTM in light of the demand for network-transfer capacity. DTM combines the two basic technologies used to build high-capacity networks, circuit and packet switching, and therefore offers many advantages. It also provides several service-access solutions to city networks, enterprises, residential and small offices, content providers, video production networks, and mobile network operators.

    Frame relay

    The use of frame relay has grown significantly over the past several years as traditional host-based systems with dedicated point-to-point links have been replaced with distributed client/server networks. Frame relay tends to offer a cost-effective, reliable, and flexible wide-area network service. Frame relay costs are typically lower than dedicated point-to-point circuits, since customers can aggregate multiple virtual circuits on a single physical circuit and the customer premises equipment is relatively inexpensive. As the use of frame relay has grown, several physical and data link layer components have evolved, which has increased the efficiency and interoperability of frame relay devices.

