Digital Video Page

    General information

    Digital video technology is a method of representing a video image signal using binary numbers. Simply stated, digital video is nothing more than the digitizing of the signal now in use in analog video. An analog video signal is converted to digital with an analog-to-digital (A/D) converter chip, which takes samples of the signal at a fixed time interval (the sampling frequency) and assigns a binary number to each sample. This digital stream is then recorded onto storage media (magnetic tape, optical disc, hard disk or computer memory) or sent over a transmission path (telecommunication network, Internet, digital satellite, digital TV transmission). Upon playback, a digital-to-analog (D/A) converter chip reads the binary data and reconstructs the original analog signal. This process virtually eliminates generation loss, as every digital-to-digital copy is theoretically an exact duplicate of the original; the video material can be copied many times without the degradation that many analogue systems cause to the image quality. Digital signals are virtually immune to noise, distortion, crosstalk, and other quality problems (if the systems are working properly). In addition, digitally based equipment often offers advantages in cost, features, performance and reliability when compared to analog equipment. Digital systems are not perfect, and specialized hardware/software inside the equipment corrects all but the most severe data loss. Because a video signal converted to digital format needs a lot of data bandwidth, many applications use some form of (lossy) video data compression to keep the amount of data to be stored and transmitted within reasonable limits.

    Modern digital video compression systems can reduce the amount of data needed to a very small fraction of the original A/D converter data rate without much degradation in picture quality. Using computers and communication systems, it is easy to acquire, process, transmit, and display photographic-quality still color pictures. The technologies of digital video are necessary to achieve smooth motion and accurate color representation. Digital video technologies are an essential part of multimedia, image communication and the broadcast industry (in both video material production and distribution).
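The A/D conversion described above can be sketched in a few lines: sample an analog waveform at a fixed interval and map each sample to a binary number. The sample rate, bit depth and test waveform below are illustrative assumptions, not values from any particular video standard.

```python
import math

def digitize(signal_func, sample_rate_hz, duration_s, bits=8):
    """Sample an analog waveform at fixed intervals and quantize
    each sample to an unsigned integer of the given bit depth."""
    levels = 2 ** bits
    n_samples = int(sample_rate_hz * duration_s)
    samples = []
    for n in range(n_samples):
        t = n / sample_rate_hz                      # fixed sampling interval
        v = signal_func(t)                          # analog value in -1.0..+1.0
        code = int((v + 1.0) / 2.0 * (levels - 1))  # quantize to 0..2^bits-1
        samples.append(code)
    return samples

# Digitize one millisecond of a 1 kHz sine sampled at 48 kHz, 8 bits.
codes = digitize(lambda t: math.sin(2 * math.pi * 1000 * t), 48000, 0.001)
```

On playback, a D/A converter would perform the inverse mapping from each code back to a voltage, reconstructing an approximation of the original waveform.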

    Video compression

    Digital signal compression is the process of digitizing an analog television signal by encoding a TV picture as "1s" and "0s". In video compression, certain redundant details are stripped from each frame of video. This enables more data to be squeezed through a coaxial cable, into a satellite transmission, or onto a compact disc. The signal is then decoded inside a TV set-top box or CD player. In simpler terms, "video compression is like making concentrated orange juice: water is removed from the juice to more easily transport it, and added back later by the consumer." Heavy research and development into digital compression has taken place over the past years because of the enormous advantages that digital technology can bring to the broadcasting, telecommunications, and computer industries. Using compressed digital video instead of analog video lowers video distribution costs, increases the quality and security of video, and allows for interactivity.

    Currently, a number of compression technologies are available, for example MPEG-1, MPEG-2, MPEG-4, Indeo, Cinepak, Motion JPEG, RealVideo, H.261, DV and DivX. Digital compression can take these many forms and be suited to a multitude of applications. Each compression scheme has its strengths and weaknesses: the codec you choose determines how good the images will look, how smoothly they will flow, and what video data rate is needed for usable picture quality.

      MPEG 1 and MPEG 2

      MPEG (pronounced M-peg), which stands for Moving Picture Experts Group, is the name of a family of standards used for coding audio-visual information (e.g., movies, video, music) in a digital compressed format. The major advantage of MPEG compared to other video and audio coding formats is that MPEG files are much smaller for the same quality, because MPEG uses very sophisticated compression techniques. The MPEG-1 and MPEG-2 standards made interactive video on CD-ROM and digital television possible.

      MPEG-1, the first standard of the family, was adopted in 1991. The goal of MPEG-1 was to develop an algorithm that could compress a video signal so that it could be played back from a CD-ROM or over telephone lines at a low bit rate (less than 1.2 Mbit per second), at a quality level that could deliver full-motion, full-screen, VHS-quality video from a variety of sources. The MPEG-1 standard is primarily intended to process video at what is known as SIF (Source Input Format) resolution, that is 352x240 pixels at 30 frames per second. This is one-fourth the resolution of the broadcast television standard called CCIR 601. The MPEG-1 standard consists of three layers: video, audio, and system. The most common applications for MPEG-1 have been VideoCD and computer video CD-ROMs.
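A quick calculation shows why such heavy compression is needed to reach CD-ROM rates. This sketch assumes 8-bit 4:2:0 sampling (full-resolution luma plus two quarter-resolution chroma planes, i.e. 12 bits per pixel on average), which is the kind of source MPEG-1 typically processes:

```python
# Raw SIF (352x240, 30 fps) data rate at 8-bit 4:2:0 sampling:
# 8 bits of luma per pixel plus two chroma planes at quarter
# resolution averages 12 bits per pixel.
bits_per_pixel = 8 + 2 * 2              # 12 bits/pixel effective
raw_bps = 352 * 240 * 30 * bits_per_pixel

target_bps = 1.2e6                      # MPEG-1 / CD-ROM target rate
ratio = raw_bps / target_bps            # compression needed to fit
```

The raw stream is roughly 30 Mbit/s, so hitting the 1.2 Mbit/s target requires on the order of 25:1 compression, which is why MPEG-1 combines DCT coding with motion compensation rather than compressing each frame independently.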

      MPEG-2 was created by the ISO committee to improve on MPEG-1, which did not serve the requirements of the broadcast industry. The group developed a compression algorithm that processed video at full resolution matching CCIR 601 video (704x480 NTSC, 704x576 PAL). MPEG-2 took advantage of the higher bandwidths available to deliver higher image resolution and picture quality. It targets increased image quality, support of interlaced video formats, and provision for multi-resolution scalability. It allows compression at higher resolutions and higher bit rates than MPEG-1. MPEG-2 typically runs at a data rate of around 6.0 Mbps and is designed for broadcast-quality video that delivers better quality at a faster data rate. Like its predecessor, the MPEG-2 standard consists of three layers: video, audio, and system. The most common applications for MPEG-2 are digital television and DVD.

      MPEG 3

      Along with the development of MPEG-2 began work on the MPEG-3 standard. This standard was directed towards the expected market of High Definition Television, HDTV. MPEG-3 targeted HDTV applications with sampling dimensions up to 1920x1080 at 30 Hz and coded bit rates between 20 and 40 Mbit/s. However, after research, it was discovered that the MPEG-2 and MPEG-1 syntax could work well together for HDTV-rate video. With some fine tuning, MPEG-2 was found to be suitable for HDTV as well. MPEG-3 no longer exists, because HDTV became part of the MPEG-2 standard.

      MPEG 4

      MPEG-4 is an ISO/IEC standard developed by MPEG (Moving Picture Experts Group), the committee that also developed the Emmy Award winning standards known as MPEG-1 and MPEG-2. MPEG-4 work started in 1993. The MPEG-4 Version 1 standard was finalized in October 1998 and became an International Standard in the first months of 1999. The fully backward compatible extensions under the title of MPEG-4 Version 2 were frozen at the end of 1999 and acquired formal International Standard status early in 2000. Some work on extensions in specific domains is still in progress.

      Since MPEG-4 adopted an object-based audiovisual representation model with hyperlinking and interaction capabilities, and supports both natural and synthetic content, it is expected that this standard will become the information coding playground for future multimedia applications. MPEG-4 provides better compression and more options for future applications. The MPEG-4 Visual standard allows the hybrid coding of natural (pixel based) images and video together with synthetic (computer generated) scenes. MPEG-4 provides the standardized technological elements enabling the integration of the production, distribution and content access paradigms of three fields: digital television, interactive graphics applications (synthetic content) and interactive multimedia (World Wide Web, distribution of and access to content).


      Motion JPEG

      Motion JPEG (MJPEG, M-JPEG) is a common name used to refer to many different digital video formats which store the video as a series of JPEG-compressed images (video frames or fields). Motion JPEG compression exploits the limits of our visual perception to discard information we don't use. Motion JPEG treats each frame as a single image to which it applies JPEG compression. In the compression process, each frame is first broken into 8x8 pixel blocks, and then the pixel values (brightness and color) are converted to frequencies. This conversion is done using the Discrete Cosine Transformation (DCT). At its simplest level, we can compress each block of 8x8 pixels by reducing the number of values that are acceptable for the block. Uncompressed we would have 64 values, although many would most likely be similar given that they are in the same area of the picture. Various methods are used to reduce the amount of pixel data that needs to be stored while still giving a "close enough" picture compared to the original one.

      DCT compression works very well with soft images, images with large expanses of almost flat color, and images that don't contain a lot of detail. DCT compression has more difficulty with fine detail, gradients (blue skies or underwater scenes can look 'quantized'), and noise. In general terms, the practical balance of best picture quality as opposed to storage space taken falls like this:

      VHS        2-2.5 MB/sec   (80-100 Kbyte/frame, 11:1-8.5:1 compression ratio)
      SVHS       3-3.75 MB/sec  (120-150 Kbyte/frame, 7:1-5:1 compression ratio)
      BetacamSP  4.5 MB/sec     (180 Kbyte/frame, 4.8:1 compression ratio)
      Compression ratios of around 4:1 or 5:1 are nearly always without visual loss of image quality (unless the material is very hard to compress). Critical applications might use higher data rates. Some non-linear editing systems require you to use one compression rate for the entire program, while others allow mixed data rates in the same program. For those that require a data rate (compression ratio) to be chosen for the project, knowing your images and how they will compress will help you choose the rate that gives the best image quality while minimizing the hard drive space needed.
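The 8x8 DCT at the heart of this compression can be sketched straight from its textbook definition. The toy block below is perfectly flat (all pixels the same brightness), so after the transform every bit of signal energy lands in the single DC coefficient, which is exactly why large expanses of flat color compress so well:

```python
import math

N = 8  # JPEG block size

def dct2(block):
    """2-D DCT-II of an 8x8 block, computed directly from the formula
    F(u,v) = C(u)C(v) * sum f(x,y) cos(...) cos(...)."""
    def c(u):
        return math.sqrt(1.0 / N) if u == 0 else math.sqrt(2.0 / N)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = c(u) * c(v) * s
    return out

# A flat 8x8 block of brightness 128: only the DC term survives,
# so 64 pixel values collapse into one significant coefficient.
flat = [[128] * N for _ in range(N)]
coeffs = dct2(flat)
```

A real encoder follows this transform with quantization (dividing coefficients by a quality-dependent table and rounding), which is where the lossy "reduce the number of acceptable values" step actually happens.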


      DV

      DV is nowadays a popular digital video camera format used for consumer and semi-professional use. MiniDV is the "low-cost" digital video (DV) format targeted for consumer use. The resolution quality of MiniDV and Betacam SP are perceptibly similar (=very good), although MiniDV has some limitations. The DV and MiniDV formats use the IEEE 1394 interface for connecting the camcorder to a computer to transfer the video in digital format. The DV system uses 4:1:1 sampling for the video signal. It has 480 active lines for NTSC and around 500 lines of horizontal resolution. The DV system uses 5:1 DCT-based video compression. Many detractors of the DV format arbitrarily categorize 5:1 compression as "excessive" for broadcast and corporate video production; it is very good for most uses, but is not always free of artifacts. The use of DV codecs has become popular after the introduction of computer-based non-linear editing systems for DV cameras. Those systems take the DV data from the tape as-is to the computer through the IEEE 1394 interface, and then process the data in the computer in the native DV camera format.
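The sampling and compression figures above combine into the format's well-known 25 Mbit/s video rate. This sketch assumes 8-bit 4:1:1 NTSC sampling at 720x480 and 30 frames per second, and ignores the audio, subcode and error-correction overhead that also goes onto the tape:

```python
# 4:1:1 sampling: full-resolution luma (8 bits/pixel) plus two chroma
# planes at quarter horizontal resolution averages 12 bits per pixel.
bits_per_pixel = 8 + 2 * 2
raw_bps = 720 * 480 * 30 * bits_per_pixel   # ~124.4 Mbit/s uncompressed
dv_bps = raw_bps / 5                        # after 5:1 DCT compression
```

The result is about 24.9 Mbit/s, the figure behind the "DV25" name.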



      Serial Digital Interface (SDI)

      The Serial Digital Interface, SDI (SMPTE 259M), grew out of the need for longer-distance connection of component digital television equipment, the result being the viability of a truly digital broadcast station. SDI is capable of running hundreds of feet, and can run thousands of feet if properly distributed.

      To understand SDI you must understand some history of digital video interfaces. The impetus for serial digital coding and transmission of video heightened with the introduction of the first component digital production video tape recorder in the mid-'80s, known as D1 or CCIR 601. Digital component recording began in 1987 with the creation of the D1 format (SMPTE 125M). The D1 interface is an 8/10 bit parallel system intended for close-in connection between digital tape recorders (19 mm tape). Its interface cabling is short due to the difficulty of maintaining proper bit timing over a byte-wide data channel. Reformatting the byte-wide D1 data via a serializer yields a very high-speed serial data stream: serializing a 10-bit data word results in a data rate ten times faster, so the 27 MHz D1 data becomes serial data at 270 megabits per second for standard component NTSC.

      Although SDI bit rates are very high, distribution of serial data as a single cable connection presents significant advantages. First, it's much easier (read: cheaper) to route and switch one cable than a parallel system of cables. Having all data bits organized as one stream means there are no issues with clock and data synchronization, managing bit timing and cable equalization is easier, and the data skew problems encountered with multi-conductor cables do not exist.

      The SDI format utilizes a differential signaling technique and NRZI (non-return to zero inverted) coding. Although SDI is transmitted as an unbalanced signal on 75-ohm coax, transmission and reception involve differential amplifiers that format and detect, respectively, both data phases.
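Two details from this history are easy to illustrate: serializing 10-bit words at the 27 MHz word clock gives the 270 Mbit/s line rate, and NRZI coding, commonly described as "toggle the line level on a 1 bit, hold it on a 0 bit," makes the decoded data independent of signal polarity. The encoder below is a minimal sketch of that description, not the full SMPTE channel coding:

```python
# Serializing 10-bit parallel words clocked at 27 MHz:
word_clock_hz = 27e6
serial_bps = word_clock_hz * 10        # 270 Mbit/s component SDI

def nrzi_encode(bits, level=0):
    """NRZI sketch: a 1 bit toggles the line level, a 0 bit leaves it
    unchanged. Since only transitions carry information, inverting the
    whole waveform decodes to the same bits -- the receiver does not
    need to know which data phase it is looking at."""
    out = []
    for b in bits:
        if b:
            level ^= 1
        out.append(level)
    return out

waveform = nrzi_encode([1, 0, 1, 1, 0])
```

This polarity insensitivity is what lets the differential receiver freely invert one phase and sum it with the other, as described below, without corrupting the data.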
      Utilizing differential reception creates additional headroom and robustness in signal-to-noise performance. SDI is very immune to extraneous noise and low-frequency components (hum) because the receiver takes one phase of the data transmission, inverts it, and then adds it to the in-phase portion. As in a regular analog differential amplifier, common-mode noise induced into the signal is cancelled out during this inversion and addition operation. SMPTE 259M supports four SDI transmission rates, and SMPTE 292M supports 1.485 Gbps for HD SDI.

      Currently, most serial digital applications involve standard definition television, and here the serial digital component formats are the most often used. Component serial digital (4:2:2 digital component PAL or NTSC) requires 270 megabits per second. The SDI encoding algorithm ensures enough signal transitions to embed the clock within the data and minimize any DC component. SDI coaxial cable drivers AC-couple the serial data into the transmission cable, thus providing DC isolation between source and receiver.

      Cable loss is a serious issue at these high data rates. Well-designed receivers, called Class A type, can recover serial digital data as low as -30 dB at one-half the clock rate from a pristine source, or about 25 millivolts. The one-half clock rate frequency is used to calculate SDI cable loss; for 270 Mbps component SDI, the rate would be 135 MHz. Cable loss specifications for standard SDI, SDTI, and uncompressed SDTV are addressed in SMPTE 259M and ITU-R BT.601. In these standards, the maximum recommended cable length equals 30 dB loss at one-half the clock frequency. This high serial digital signal loss level is acceptable because serial digital receivers have special signal recovery processing. SMPTE 259M mentions a typical range of expected SDI receiver sensitivity between 20 dB and 30 dB at one-half the data clock frequency.
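The "-30 dB ... about 25 millivolts" figure is just the voltage-ratio dB formula applied to the nominal 800 mV peak-to-peak SDI launch amplitude, evaluated at one-half the clock rate:

```python
import math

launch_mv = 800.0                        # nominal SDI launch level, mV p-p
loss_db = -30.0                          # Class A receiver recovery limit
received_mv = launch_mv * 10 ** (loss_db / 20.0)  # V2 = V1 * 10^(dB/20)

half_clock_mhz = 270 / 2                 # loss is specified at 135 MHz
```

The computation gives about 25.3 mV, matching the "about 25 millivolts" quoted above for a Class A receiver on 270 Mbps component SDI.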
      Like analog signals, SDI data can be corrupted by improper termination or routing that results in cable reflections. Maintaining a clean distribution path with SDI means that decoding will largely be a function of the decoder sensitivity on the receiving end. Assuming that bit transitions are recognizable, the decoder will only be limited by its peak-to-peak sensitivity. For HD SDI running at 1.5 Gbps, SMPTE 292M governs cable loss calculations; in that standard, the maximum cable length equals 20 dB loss at one-half the clock frequency. Due to the data coding scheme, the bit rate is effectively the same as the clock frequency in MHz. Recall that digital systems do not degrade linearly with cable losses: system performance depends on the cable loss and on the receiver performance. The economy of distributing SDI and HD SDI lies in the ability of the serial digital receiver to recover a low-level signal. In all cases, your system must operate solidly before the "cliff region" where sudden signal dropout occurs. Recommendations among cable manufacturers will certainly vary, but it is good practice to limit your run lengths to no more than 90% of the calculated value. This provides leeway for cable variations, connector loss, patching equipment, etc. Here is information on some SDI signal versions:

      • SMPTE 259 Level A: 143 Mbps clock, NTSC 4fsc Composite signal, timing/alignment jitter 1.40 nS
      • SMPTE 259 Level B: 177 Mbps clock, PAL 4fsc Composite signal, timing/alignment jitter 1.13 nS
      • SMPTE 259 Level C: 270 Mbps clock, 525/625 line Component signal, timing/alignment jitter 0.74 nS
      • SMPTE 259 Level D: 360 Mbps clock, 525/625 line Component signal, timing/alignment jitter 0.56 nS
      • SMPTE 292: 1485 Mbps clock, HDTV signal, timing jitter 0.67 nS, alignment jitter 0.13 nS
      All these systems use an 800 mV (peak-to-peak) signal level with 0 V +-0.5 V DC offset. The rise/fall times in the SMPTE 259 specification are in the 0.40-1.50 nS range, and the rise/fall time differential is limited to 0.5 nS. In SMPTE 292 the rise/fall time is limited to a maximum of 0.27 nS. The allowed overshoot is a maximum of 10%.


      SDTI

      The Serial Digital Transport Interface (SDTI) utilizes the SDI data format for the transport of other types of digital data. In particular, it is well suited for transporting compressed SDTV and HDTV throughout a television plant. Any data capable of fitting within the data transport structure (270 Mbps or 360 Mbps) of the standard may be routed via existing SDI equipment. SDTI is defined by SMPTE 305M.

      Firewire / IEEE 1394

      IEEE 1394 is a fast (up to 400 Mbit/s) serial bus interface. IEEE 1394 was called Firewire before it was standardized in the IEEE as standard IEEE 1394. Firewire, or IEEE 1394, is that tiny, square-ish connector tucked away on the side of your digital camcorder that allows you to upload DV format video to your computer, among other things. IEEE 1394 is nowadays used mainly for interconnecting modern digital video equipment to a PC. For example, practically every DV camera has an IEEE 1394 interface in it, so with an IEEE 1394 interface card and suitable software you can transfer your movies from a DV camera to a PC hard disk for editing. The DV (Digital Video) recording standard now driving most consumer camcorder purchases is a serial digital format of 25 Mbps, sometimes called DV25. The Firewire (IEEE 1394) interface conveniently handles the data rate of DV, and then some. The DV format is the first application making tremendous use of the IEEE 1394 capability. IEEE 1394 is also designed to become a universal digital interface between digital consumer video equipment like DV cameras, DVD players and digital flat panel displays. Devices on the IEEE 1394 bus are hot-swappable, which means that the bus allows live connection/disconnection of devices. The digital interface supports both asynchronous and isochronous data transfers. Addressing is used to reach a particular device on the bus; each device determines its own address. IEEE 1394 supports up to 63 devices at a maximum cable distance between devices of 4.5 meters. However, "powered" Firewire devices and repeaters will repeat a signal and allow you to extend the run by another 4.5 meters (about 15 feet). The maximum number of cable hops on the bus is 16, allowing a total maximum cable distance of 72 meters. The 1394 specification limits cable length to 4.5 meters in order to satisfy the round-trip time maximum required by the arbitration protocol. Some applications may run longer lengths when the data rate is lowered to the 100 Mbps level.
      The 1394 system utilizes two shielded twisted pairs and two single wires. The twisted pairs handle differential data and strobe (which assists in clock regeneration), while the separate wires provide power and ground for remote devices needing power support. The signal level is 265 mV differential into 110 ohms. The 1394 specification provides electrical performance requirements, which leave open the actual parameters of the cable design. As with all differential signaling systems, pair-to-pair data skew is critical (less than 0.40 nanoseconds). Crosstalk must be maintained below -26 dB from 1 to 500 MHz. The only requirement on the size of wire used is that the propagation delay must not exceed 5.05 nS/meter. The typical cable has 28-gauge copper twisted pairs and 22-gauge wires for power and ground. A Firewire-connected appliance may or may not need power from its host, but must be capable of providing limited power for downstream devices. The 1394 specification supports two plug configurations: a four-pin version and a six-pin version. Six-pin versions carry all six connections and are capable of providing power to appliances that need it. For independently powered appliances, like camcorders, the four-pin version is used for its compactness. Cable assemblies have the data signal pairs crossed over to avoid polarity issues. All 1394-type appliances have receptacles, which makes for easy upstream-downstream connection with a male-to-male cable. Newer versions of the standard have extended the available media beyond the original short "Firewire" cable: transmitting data over CAT5 cable allows data at 100 Mbps to travel 100 m (specified in IEEE 1394b), and fiber cable will allow 100 meter distances at any speed (the maximum speed depends on the type of fiber cable).
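The bus-length and bandwidth headroom figures above follow from simple arithmetic: 16 cable hops of 4.5 m each, and a 400 Mbit/s bus carrying 25 Mbit/s DV streams:

```python
hop_m = 4.5                  # maximum cable length per hop
max_hops = 16                # maximum hops across the bus
total_m = hop_m * max_hops   # 72 m maximum end-to-end distance

bus_mbps = 400               # top IEEE 1394 rate
dv_mbps = 25                 # one DV25 stream
headroom = bus_mbps // dv_mbps   # the bus could carry many such streams
```

This is why the text can say Firewire handles the DV data rate "and then some": a single 400 Mbit/s bus has sixteen-fold headroom over one DV25 stream (before protocol overhead).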

