Video Signal Standards and Conversion Page

    Basics of video signals

    Video signals are complex waveforms that carry both a picture and the timing information needed to display it. You can think of an image as a two-dimensional array of intensity or color data; a camera, however, outputs a one-dimensional stream of analog or digital data. Standard analog video signals are designed to be broadcast and displayed on a television screen, so a scanning scheme specifies how the incoming video signal is converted to the individual pixel values of the display. An analog video signal consists of a low-voltage signal containing the intensity information for each line, combined with timing information that keeps the display device synchronized with the signal. The signal for a single horizontal video line consists of a horizontal sync pulse, back porch, active pixel region, and front porch. The horizontal sync (HSYNC) pulse signals the beginning of each new video line. It is followed by the back porch, which is used as a reference level to remove any DC component from the floating (AC-coupled) video signal; for monochrome signals this clamping takes place on the back porch. Color information can be included along with the monochrome video signal (NTSC and PAL are common standard formats): a composite color signal consists of the standard monochrome signal (RS-170 or CCIR) with color information added. Another aspect of the video signal is the vertical sync (VSYNC) pulse. This is actually a series of pulses that occur between fields, signalling the monitor to perform a vertical retrace and prepare to scan the next field. Several lines between fields contain no active video information: some contain only HSYNC pulses, while others carry a series of equalizing and VSYNC pulses. These pulses were defined in the early days of broadcast television and have been part of the standard ever since, although newer hardware technology has eliminated the need for some of the extra pulses.
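The nominal durations of these intervals are fixed by the standard. As an illustration, here is a small Python sketch using the textbook RS-170/NTSC horizontal line timings (the figures are standard values, not taken from this page):

```python
# Nominal timing of one RS-170/NTSC horizontal line, in microseconds.
# These are standard textbook figures, not values taken from this page.
LINE_TOTAL  = 63.556  # one full line = 1 / 15734.26 Hz
HSYNC       = 4.7     # horizontal sync pulse
BACK_PORCH  = 4.7     # blanking after sync (clamping / colour burst region)
FRONT_PORCH = 1.5     # blanking before the next sync pulse

active = LINE_TOTAL - (HSYNC + BACK_PORCH + FRONT_PORCH)
print(f"active picture time per line: {active:.2f} us")  # ~52.66 us
```

Roughly 52.7 of the 63.6 microseconds of each line carry picture; the rest is sync and blanking.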

    Most video sources use interlaced video: the signals used in analogue TV broadcasts, on VCR tapes and on many DVDs are interlaced. Interlacing sacrifices some picture quality to reduce bandwidth demands; it is in fact a clever way to compress a moving picture without digital compression methods. Interlacing halves the bandwidth (nowadays, the storage space) without losing vertical resolution in still areas of the picture; in moving areas you do not notice the loss much, because the content changes 50 or 60 times per second. In effect the non-moving parts of the picture are displayed at full resolution and the moving parts at half resolution, but fluidly, which allows a quick enough screen refresh at reasonable bandwidth so that the TV image does not flicker too much and motion is smooth. An interlaced video display builds the image on the picture tube in two phases, known as "fields", consisting of the even and the odd horizontal lines. The complete image (a "frame") is created by scanning an electron beam horizontally across the screen, starting at the top and moving down after each horizontal scan until the bottom of the screen is reached, at which point the scan starts again at the top. On an interlaced display, the even-numbered scan lines are drawn in the first field and the odd-numbered lines in the second. For a given screen resolution, refresh rate and phosphor persistence, interlacing reduces flicker because the top and bottom of the screen are redrawn twice as often as they would be if the scan simply proceeded from top to bottom in a single vertical sweep. Analog camcorders and VCRs do not mix the recorded pictures: they too use "odd" and "even" sets of scan lines, but they record field after field without intermixing them into one frame.
    In a typical interlaced camera signal the "odd" and "even" fields are captured at different times. Interlacing works well with traditional analogue televisions, but it is a nuisance when the video needs to be processed on a computer or shown on a non-interlaced display. To display interlaced video on a non-interlaced display, you need a process called deinterlacing. There are several deinterlacing techniques, but none of them is perfect. On a computer screen interlaced recordings are annoying to watch because the field lines are clearly visible, especially in scenes with horizontal movement.
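The two classic deinterlacing strategies, "weave" (merge the two fields back into one frame) and "bob" (line-double each field), can be sketched in a few lines of Python; the function names and the tiny list-of-rows frames are purely illustrative:

```python
# Two classic deinterlacing strategies sketched on tiny frames represented
# as lists of rows (each row a list of pixel values). Function names are
# illustrative only, not from any standard API.

def weave(even_field, odd_field):
    """Interleave the two fields back into one frame.
    Perfect for still scenes; moving edges show 'combing' artifacts."""
    frame = []
    for e, o in zip(even_field, odd_field):
        frame.append(e)
        frame.append(o)
    return frame

def bob(field):
    """Line-double a single field: full temporal rate, half vertical detail."""
    frame = []
    for row in field:
        frame.append(row)
        frame.append(list(row))  # repeat each line
    return frame

even = [[1, 1], [3, 3]]  # scan lines 0 and 2
odd  = [[2, 2], [4, 4]]  # scan lines 1 and 3
print(weave(even, odd))  # [[1, 1], [2, 2], [3, 3], [4, 4]]
```

Neither is perfect: weave combs on motion, bob halves the vertical resolution; real deinterlacers blend the two adaptively.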

    To capture and use these complex signals, you need special electronics. Your TV receiver sorts out this information by sampling the level (in % of modulation) of the complex signal and displaying a picture on the screen accordingly. A composite video signal contains the same information but is not modulated onto an RF carrier. There are also many other video signal formats in use in various applications. Typical video signals seen nowadays are analogue: "analog" refers to converting the original signal acquired in the camera into something that represents it, in this case a waveform transferred through a video cable or other transmission medium (such as the air in TV broadcasts). There are also digital video signals, where the picture content is encoded in digital form (a series of bits representing numbers). Video systems are generally based on the RGB color space: both the image source (in this case, the camera) and the display (a CRT) are "speaking" in terms of RGB. On the way from image source to display device the signal is in many cases converted to other formats that are more convenient to transport than three separate RGB signals (for example, a composite video signal travels through one coax cable). The RGB system is not a perfect system. Accurate reproduction of color imagery via electronic means is a topic that can fill a book, but in simplified terms the major problems are these:

    • No three-primary system can ever cover the entire range of colors perceivable by the human eye (due to the nature and overall shape of the "color space" in which they must be represented).
    • None of the three primaries used - either by the camera or in the CRT - is fully saturated (due to technical/physical limitations), which additionally restricts the range of colors that can actually be reproduced correctly. Truly saturated purples and yellows are difficult if not impossible to produce in the typical RGB system.
    • Both the image source (in this case, the camera) and the display (a CRT) may be "speaking" in terms of RGB values, but may not be using quite the same red, green, and blue or how much of each is supposed to be combined to make "white." This will alter the appearance of the color as displayed vs. what the camera "intended." The RGB values used are supposedly standardized by the broadcast television specifications, but there are still some differences (different broadcast specifications, own specifications in computer systems etc.)
    In video systems, RGB is generally the best you can get; other video color coding systems are generally more limiting. The color encoding system used in television in the USA ("NTSC" encoding) imposes additional limits, and also some unique problems in obtaining accurate and repeatable colors. The color encoding system used in television in much of Europe ("PAL" encoding) imposes its own limits and unique problems: the hues themselves are generally accurate, but the color saturation can have accuracy problems. PAL and NTSC systems also have their own limitations on the color space they can produce (they cannot properly reproduce some very saturated colors).

    Video signal standards

    There are many different video formats and interfaces in use in video systems. They are used in different applications for various technical and economic reasons. All the interfaces below are designed to use 75 ohm coaxial cables (one or more coaxial cables per interface, typical impedance 75 ohm ± 10%). The standard video signal output level is 1 Vpp ± 10% into 75 ohms. This level applies to video signals, like composite video, that carry sync information. Video inputs are generally designed to accept this specified level at ± 3 dB or ± 6 dB accuracy (meaning a 0.5-2 V signal). Video signals that do not carry sync (for example RGB component signals) use a level of 0.7 Vpp (the same level as the picture part of a normal video signal). Here is a short primer on the most commonly used signal interface types, from best to worst in picture quality:
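As a quick sanity check of the level figures above, a small Python snippet shows how the ± 3 dB and ± 6 dB tolerances map to voltages around the nominal 1 Vpp level (20·log10 voltage scale):

```python
def db_to_voltage_ratio(db):
    """Voltage ratio for a gain or loss given in dB (20*log10 scale)."""
    return 10 ** (db / 20)

nominal = 1.0  # Vpp, nominal composite video level into 75 ohms
for tol in (3, 6):
    lo = nominal * db_to_voltage_ratio(-tol)
    hi = nominal * db_to_voltage_ratio(+tol)
    # the +/-6 dB case gives the 0.5..2 V range quoted above
    print(f"+/-{tol} dB: {lo:.2f} .. {hi:.2f} Vpp")
```

This prints roughly 0.71..1.41 Vpp for ± 3 dB and 0.50..2.00 Vpp for ± 6 dB, matching the 0.5-2 V range in the text.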

    • RGB video is the highest quality video used in the professional A/V presentation industry and in computer video. It has one wire for each colour, usually with its own RF shielding to reduce interference and the resulting quality degradation. Nothing is better. The standard signal level for an RGB interface is 0.7 Vpp into a 75 ohm load. How the sync information is transferred varies from one RGB application to another (possibilities are sync-on-green, a separate composite sync, or separate HSYNC + VSYNC signals).
    • Component Video is a bit of a misnomer: RGB is technically also component video, i.e. video whose components are transmitted separately. Usually, however, when someone refers to Component Video they mean colour-difference component video, in which the picture is transported as a luminance signal plus two colour-difference signals. Colour-difference component video [YUV or YCrCb] is the highest quality form of video typically used in the TV broadcasting industry. All signal components use 75 ohm coaxial lines. The Y component has an amplitude of around 1 Vpp, the other components somewhat less. Different names are used for the component formats, and they correspond to each other in the following way: YCrCb = YPrPb = YUV = Y, R-Y, B-Y
    • In the S-video format the video information is combined into two signals, luma and chroma (Y + C), or brightness and colour, again each with its own shielding to prevent interference. The Y signal has a nominal level of 1 Vpp and the C signal a level of around 0.5 V. Both use 75 ohm terminated coaxial lines as the medium.
    • Composite video (sometimes labelled just "video" on connectors) uses one wire (with its own shielding) to carry all video information (red, green, blue and sync) mixed together. This generally gives a pretty good picture, but the result depends greatly on the quality of the generating and receiving equipment. This format is often referred to as PAL video or NTSC video depending on which video standard is used. The nominal signal level is 1 Vpp on a 75 ohm terminated line.
    • RF video is the format that goes into the antenna plug on the back of your TV. This is one shielded wire carrying not only the NTSC or PAL video information but also the sound. In the case of the cable coming out of your wall, this one wire carries many (in some cases hundreds of) channels. Unfortunately, in real-life situations those many channels, and the sound sharing the channel with the video, can interfere with each other and degrade picture quality. Antenna networks try to deliver a signal level of 60..80 dBuV (1..10 mV) for your TV to be happy with the signal.
    There are also different video signal standards in use. The most common are the three color TV systems used around the world: PAL, NTSC and SECAM. Besides those there are the special computer video formats (VGA, SVGA, XGA etc.) used by the computer industry, and then there are also various digital video broadcasting formats (DVB, ATSC etc.) which either resemble the analogue systems (DVB-T) or introduce their own picture formats (the different HDTV formats). The three main analogue TV broadcasting standards, PAL, NTSC and SECAM, are each incompatible with the others; for example, a recording made in France could not be played on an American VCR. All three color TV systems have several things in common. They are all interlaced, with two fields making up one full frame. Interlacing is used so that the picture does not flicker too much on the screen even though the picture frame rate is quite low (25 Hz or 30 Hz): it gives the TV system double the field rate compared to the frame rate, so the screen is refreshed 50 to 60 times per second. In an interlaced picture one frame consists of two fields transmitted one after the other (called the odd and even fields). The picture repetition frequency (usually called the field rate or frame rate) is an important factor in video signal properties. Since the mid-1930s this frequency has been the same as the mains frequency, either 50 or 60 Hz according to the frequency used in each country. This is for two very good reasons. Studio lighting generally uses alternating-current lamps, and if these were not synchronised with the field frequency, an unwelcome strobe effect could appear on TV pictures. Secondly, in days gone by, the smoothing of power supply circuits in TV receivers was not as good as it is today, and ripple superimposed on the DC could cause visual interference.
    If the picture was locked to the mains frequency, this interference would at least be static on the screen and thus less obtrusive. In TV broadcasting two signals are transmitted at the same time: the black/white signal (with sync pulses) and the color signal. The black/white signal is the main signal and has a frequency response of about 0-5 MHz, depending on the TV standard. The black/white signal is made by mixing the three colors: 0.30 red, 0.59 green and 0.11 blue. Two color signals are needed to make a full color picture: R-Y (red minus the black/white signal) and B-Y (blue minus the black/white signal). Since our eyes are less sensitive to color than to brightness, less resolution is needed for the color. The two bandwidth-limited color signals are multiplexed onto a color subcarrier; this makes up the color signal. To be able to decode those two signals from the color subcarrier, the TV standards include a short burst of unmodulated color carrier right after every horizontal sync pulse (the TV decoding electronics can sync to this). The frame rates of the TV systems were chosen to be very close to the frequency of the power system to minimize interference. The line frequency, frame frequency and color carrier frequency are all related to, and actually locked to, each other. Here is a short primer on the three main TV broadcasting standards:
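The luma mixing and colour-difference derivation described above can be written out directly; here is a minimal Python sketch using the 0.30/0.59/0.11 weights from the text:

```python
def rgb_to_luma_chroma(r, g, b):
    """Derive luminance and the two colour-difference signals from RGB,
    using the 0.30/0.59/0.11 luma weights quoted above."""
    y = 0.30 * r + 0.59 * g + 0.11 * b
    return y, r - y, b - y  # Y, R-Y, B-Y

# For white the colour-difference signals vanish, as they should:
y, ry, by = rgb_to_luma_chroma(1.0, 1.0, 1.0)
print(y, ry, by)  # essentially 1, 0, 0 (up to float rounding)
```

A black-and-white receiver simply displays Y and ignores the colour subcarrier, which is why the systems are backwards compatible with monochrome sets.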
    • NTSC: The USA uses 60 Hz power, so the NTSC field rate is 59.94 Hz. The picture frame has 525 lines. The color carrier for NTSC is 3.579545 MHz, "3.58 MHz" for short. In NTSC the two color signals are modulated onto the color carrier using quadrature AM modulation. This modulation is problematic, because transmission impairments such as severe phase changes can cause color errors (which is why NTSC sets need a HUE or TINT control).
    • PAL: Europe uses 50 Hz power, so the field rate is 50 Hz. The picture frame has 625 lines. The color information is transmitted on a 4.43 MHz color carrier with about 1.4 MHz bandwidth. The PAL system uses a specially modified quadrature AM modulation in which the subcarrier phase is shifted between picture lines. This allows the decoder, using a delay line in the TV set, to combine the color information of two lines so that phase errors cancel: severe phase changes in the transmission of a PAL signal show up as weakened, but correctly hued, colors. There are several variations of the PAL system in use. Common types are B, G and H; less common types include D, I, K, N and M. The different types are generally not compatible at the TV broadcasting level (the RF signals you pick up with an antenna), but most versions are compatible as composite video. PAL-B, G, H, I and D are, as far as the actual video is concerned, all the same format: all use the 625/50 line/field rate, scan at 15,625 lines/second and use a 4.433618 MHz color subcarrier. The only difference is in how the signal is modulated for broadcast; thus B, G, H, I and D designate broadcast variations (different luminance bandwidths and different audio subcarrier frequencies) rather than variations of the video format. PAL-I, for example, has been allocated a wider bandwidth than PAL-B, so its sound carrier is placed 6 MHz above the picture carrier instead of 5.5 MHz. PAL-M and PAL-N are considerably different from the other versions, as their line/field rates and color subcarrier frequencies differ from standard PAL. The PAL system was originally developed by Walter Bruch at Telefunken in Germany and is used in much of western Europe, Asia, throughout the Pacific and in southern Africa.
    • SECAM: SECAM was developed in France and is used in France and its territories, much of Eastern Europe, the Middle East and northern Africa. This system uses the same resolution as PAL, 625 lines, and the same frame rate, 25 per second, but the way SECAM processes the color information is unique. SECAM was not developed for any technical reason of merit but mainly as a political statement, and to protect French manufacturers from stiff foreign competition. The Eastern Bloc countries during the cold war adopted variations of SECAM simply because it was incompatible with everything else. For picture scanning and format SECAM is the same as PAL, but where color transmission is concerned it is a totally different system from both PAL and NTSC. SECAM uses FM modulation on the color subcarrier and sends one color component at a time: one line transmits the R-Y signal, the next line transmits the B-Y signal. With delay lines these color signals, sent at different times, can be combined to decode the color picture. SECAM has some problems, however. You cannot mix together two SECAM video signals, which is possible with two locked NTSC or PAL signals; most SECAM TV studios therefore use PAL equipment, and the signal is converted to SECAM before it goes on the air. The color noise is also higher in SECAM, and recording a SECAM signal to video tape is hard (it easily gives poor picture quality). If that wasn't bad enough, there are further variations of SECAM: SECAM-L (also known as French SECAM) used in France and its now former territories, and MESECAM and SECAM-D, used primarily in the C.I.S. and the former Eastern Bloc countries. Naturally, none of the three variations is compatible with the others.
    Combinations of the above systems are used in some special cases and in some countries. There might be a typical 625/50 scanning system as in PAL, but with the color actually coded as NTSC: the carrier is the typical PAL 4.43 MHz, but the modulation on the carrier is NTSC. The reverse is also used: PAL60 is a signal with NTSC timing but 4.43 MHz PAL color coding.
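The locking of line, field and colour-carrier frequencies mentioned above can be illustrated with the standard NTSC figures; the 227.5 and 262.5 ratios below are textbook NTSC relationships, not values taken from this page:

```python
# The NTSC line, field and colour subcarrier frequencies are locked together.
# The ratios below are the textbook NTSC relationships (assumed, not from the text).
F_SC = 3_579_545.0        # colour subcarrier in Hz (the "3.58 MHz" carrier)
f_line = F_SC / 227.5     # subcarrier is 227.5 times the line frequency
f_field = f_line / 262.5  # 525 lines per frame, 2 fields per frame

print(f"line frequency : {f_line:.2f} Hz")   # ~15734.26 Hz
print(f"field rate     : {f_field:.2f} Hz")  # ~59.94 Hz
```

Dividing the subcarrier down like this is exactly why the NTSC field rate is 59.94 Hz rather than a round 60 Hz.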

    In the practical video world there are three color video encoding standards: PAL, NTSC and SECAM. With each of them you will get a color picture if the receiving equipment can decode that particular video format. If the receiving equipment can't decode the particular video standard, the result is most often a black and white picture (sometimes no usable picture at all): the black and white part of the picture is quite similar across these video standards, but the color information is very different, and if the TV can't find proper color information it will just display the black and white part of the picture. There are also some other standards and de-facto standards that you might encounter in the video world:

    • ATSC: The Advanced Television Systems Committee (ATSC) is the committee that specifies the digital TV broadcasting system used in the USA. The standard supports both standard-definition and HDTV broadcasts. There are 18 approved formats for digital TV broadcasts, covering both SD (640x480 and 704x480 at 24p, 30p, 60p, 60i) and HD (1280x720 at 24p, 30p and 60p; 1920x1080 at 24p, 30p and 60i).
    • CIF: Common Interface Format. This video format was developed to easily allow video phone calls between countries. The CIF format has a resolution of 352 x 288 active pixels and a refresh rate of 29.97 frames per second.
    • QCIF: Quarter Common Interface Format. A video format that allows the implementation of cheaper video phones. It has a resolution of 176 x 144 active pixels and a refresh rate of 29.97 frames per second.
    • CCIR: The CCIR is a standards body that originally defined the 625 line 25 frames per second TV standard used in many parts of the world. The CCIR standard defines only the monochrome picture component, and there are two major colour encoding techniques used with it, PAL and SECAM.
    • CCIR video: Video signal with same timings as PAL and SECAM systems use in Europe (50 fields per second, 625 lines per frame).
    • HDTV: HDTV (high-definition TV) encompasses both analog and digital televisions that have a 16:9 aspect ratio and approximately 5 times the resolution of standard TV (double vertical, double horizontal, wider aspect). High definition is generally defined as any video signal that is at least twice the quality of the current 480i (interlaced) analog broadcast signal. Generally the video formats 720p and 1080i are considered the proper definition of the term HDTV.
    • ITU-R BT.470 Conventional television systems characteristics
    • ITU-R BT.601 Studio encoding parameters of digital television for standard 4:3 and wide-screen 16:9 aspect ratios
    • ITU-R BT.656-4 Interfaces for digital component video signals in 525-line and 625-line television systems operating at the 4:2:2 level of Recommendation ITU-R BT.601 (Part A)
    • ITU-R BT.709 Video color space standard (old standard)
    • ITU-R BT.804 Characteristics of TV receivers essential for frequency planning with PAL/SECAM/NTSC television systems
    • RS-170: The standard that was used for black and white TV. It defines voltage levels, blanking times, width of the sync pulses, etc.; the specification spells out everything required for a receiver to display a monochrome picture. For example, the output of black and white security cameras conforms to the RS-170 specification. When the NTSC decided on the color broadcast standard, RS-170 was modified slightly so that color could be added, with the result called RS-170A.
    • RS-170 RGB: Refers to RGB signals timed to RS-170 specifications.
    • RS-330: A standard recommended by EIA for signals generated by closed-circuit TV cameras scanned at 525/60 and interlaced 2:1. The standard is more or less similar to RS-170, but H-sync pulses are absent during V-sync. Equalizing pulses are not required and may be added optionally during the V-blanking interval. This standard is also used for some color television studio electrical signals.
    • RS-343: A standard or specification for video. RS-343 is used for high resolution video (workstations) while RS-170A is for lower resolution video. RS-343 was introduced later than RS-170 and was intended, according to its title, as a signal standard for "high-definition closed-circuit television". RS-343 specifies a 60 Hz non-interlaced (progressive) scan with a composite sync signal, with timings that produce a picture of 675 to 1023 lines. This standard is used by some computer systems and high resolution video cameras.
    • RS-343A: EIA standards for high resolution monochrome CCTV. Based on RS-343.
    • VGA: VGA (Video Graphics Array) originates from the 640x480 color graphics adapter used in the first IBM PS/2 computers. There never really was an official standard for VGA video, but it has been used as a loosely defined "nearly industry standard" by many makers of graphics cards and display devices. VGA uses RGB signals and separate sync signals. VGA is quite closely related to RS-343 in many details, the main difference being that VGA uses a non-interlaced picture.
    • SVGA: SVGA (Super VGA) is an extension to the normal VGA system. The term has been used in different places to mean different kinds of extensions with higher resolutions; generally SVGA is used to refer to an 800x600 resolution computer video signal.
    • XGA: XGA (Extended Graphics Array) is an IBM high resolution graphics card. The term is generally used to indicate a 1024x768 resolution computer video signal.
    • SXGA: SXGA (Super XGA) is used to refer to resolutions that are higher than XGA. SXGA is used often to refer to resolutions like 1280x1024 and 1400x1050 pixels.
    • UXGA: UXGA (Ultra XGA) is used to refer to 1600x1200 pixels resolution.
    Some of those standards come from the traditional video broadcasting field and some from the computer world; they are all included in the same list because the video field has nowadays become very much integrated with the computer industry. Video resolution is a term that is often seen in the video world, expressed in different ways. In the computer world the resolution is simply expressed in terms of the pixels in the picture. In the analogue video world things are different, because for example in the horizontal direction there is no single exact point where resolution ends: tiny details simply attenuate as the signal approaches the available bandwidth. "Lines of resolution" is a technical parameter that has been in use since the introduction of television to the world (so long before digital and pixels, and so forth). The measurement of "lines of resolution" attempts to give a comparative value to enable you to evaluate one television or video system against another in terms of overall resolution. This measurement refers to a complete video or television system, including everything used to record and display an image: the lens, the camera, the video tape, and all the electronics that make the entire system work. This number (which can be a horizontal or a vertical value) tells us something about the overall resolution a complete television or video system is capable of. There are two types of measurement: (1) "lines of horizontal resolution", also known as LoHR, and (2) "lines of vertical resolution", or LoVR; however, it is much more common to see the term "TVL" (= TV Lines). In precise technical terms, "lines of resolution" refers to the limit of visually resolvable lines per picture height (i.e. TVL/ph = TV Lines per Picture Height).
    In other words, it is measured by counting the number of horizontal or vertical black and white lines that can be distinguished within an area that is as wide as the picture is high; the idea is to make the measurement independent of the aspect ratio. "Lines of resolution" is not the same as the number of pixels (either horizontal or vertical) found on a camera's CCD, or on a digital monitor or other display like a video projector, and it is also not the same as the number of scanning lines used in an analog camera or television system such as PAL, NTSC or SECAM. If a system has a vertical resolution of, say, 500 lines, then the whole system (lens + camera + tape + electronics) can distinguish 250 black lines and the 250 white spaces in between them (250 + 250 = 500 lines). "Lines of resolution" may ultimately be replaced by a true pixel count when referring to resolution in the future (especially in all-digital systems). For example, since the current DVD format has 720 horizontal pixels (on both NTSC and PAL discs), the true horizontal resolution can be calculated by dividing 720 by 1.33 (for a 4:3 aspect ratio) to get about 540 lines. In the same way, VGA 640x480 pixel resolution on a TV screen would correspond to 480 TV lines of resolution.
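The DVD and VGA examples above follow from one simple formula: divide the horizontal pixel count by the aspect ratio. A small Python sketch (the helper name is illustrative):

```python
def tv_lines(h_pixels, aspect_w=4, aspect_h=3):
    """Approximate horizontal lines of resolution (TVL/ph) from a
    horizontal pixel count, normalised to the picture height.
    Helper name and defaults are illustrative, not a standard API."""
    return h_pixels * aspect_h / aspect_w

print(tv_lines(720))  # DVD: 720 / (4/3) = 540.0 TVL
print(tv_lines(640))  # VGA on a 4:3 screen: 480.0 TVL
```

Remember this is only an upper bound: the lens, tape and electronics of a real system usually resolve fewer lines than the pixel count suggests.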

    Here is an overview of different video display resolution standards and de-facto standards in use:

    Computer Standard Resolution
    VGA 640 x 480 (4:3)
    SVGA 800 x 600 (4:3)
    XGA 1024 x 768 (4:3)
    WXGA 1280 x 768 (15:9)
    SXGA 1280 x 1024 (5:4)
    SXGA+ 1400 x 1050 (4:3)
    WSXGA 1680 x 1050 (16:10)
    UXGA 1600 x 1200 (4:3)
    UXGAW 1900 x 1200 (1.58:1)
    QXGA 2048 x 1536 (4:3)
    QVGA (quarter VGA) 320 x 240 (4:3)
    Analogue TV Standard Resolution
    PAL 720 x 576
    PAL VHS 320 x 576 (approx.)
    NTSC 640 x 482
    NTSC VHS 320 x 482 (approx.)
    Digital TV Standard Resolution
    NTSC (preferred format) 648 x 486
    D-1 NTSC 720 x 486
    D-1 NTSC (square pixels) 720 x 540
    PAL 720 x 486
    D-1 PAL 720 x 576
    D-1 PAL (square pixels) 768 x 576
    HDTV 1920 x 1080
    Digital Film Standard Resolution
    Academy standard 2048 x 1536


      S-Video is one of the high quality methods of transmitting a television signal from a device such as a camcorder, DVD player, or digital satellite receiver. The S-video signal is also known by the name Y/C video; sometimes you also see the name S-VHS video used (use of this name is not recommended). In S-video, the chroma and luminance are separated to eliminate noise and to allow a higher bandwidth for each. S-video (Y/C) uses two separate video signals: the luminance (Y) is the black & white portion, providing brightness information, and the chrominance, or chroma (C), is the colour portion, providing hue and saturation information. This signal separation prevents nasty effects like color bleeding and dot crawl, and helps increase clarity and sharpness. S-Video is essentially the same thing as chroma & luma, brightness & color, or Y/C; all these names mean the same thing, so don't get confused if you see different names for this connection. S-Video appeared with the first S-VHS VCR systems, and was also used in the small computer industry starting from the late 1980s. Separating the color (C) and luminance (Y) information and carrying them through separate wires, a system that became known as YC and later S-Video, was reasonably easy to integrate into existing equipment while providing a distinct increase in picture quality. Since the color and luminance are carried on separate wires, S-video has the potential to eliminate both the cross-color problem and the trap problem, although while most equipment takes full advantage of the signal separation, not all equipment properly implements the two separate channels. S-Video has also become a popular standard for home use, especially with DVD players. Panasonic's version of S-Video (using the 4-pin mini-DIN connector) seems to be the de-facto standard these days.
      This means that S-Video (also called Y/C video) is usually carried on cables that end in 4-pin mini-DIN connectors (other connectors can also be used, such as a pair of BNC connectors, a SCART connector, or the 7-pin mini-DIN on some computer graphics cards). Quite often you see documents that compare composite video and S-video to each other. The advantage of S-video over composite video isn't one of bandwidth: S-video can provide a better image simply because the luminance and chrominance components of the TV signal (also known as "Y" and "C") are kept separate and should therefore never suffer from mutual interference, which causes effects such as the infamous "chroma crawl" problem. However, there is no reason to think that an S-video connection will provide a chrominance signal of greater bandwidth. It could, were such a signal available, but there's just no reason to expect that it will.

      Component video formats

      There are many component video formats in use. The most commonly used are RGB, YPbPr and YCbCr. The RGB format is the basic format in which the signal is generated in the video camera. In the other formats the Y component is the black and white information contained within the original RGB signal, while the Pb and Pr signals are colour-difference signals mathematically derived from the original RGB signal. It is important to realize that what is commonly called "component video" (YPbPr or YCbCr) output and RGB video output are not the same and are not directly compatible with each other; however, they are easily converted either way, at least in theory.
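As an illustration of how the colour-difference signals are derived mathematically from RGB, here is a minimal Python sketch using the standard-definition (BT.601) luma weights and the commonly used Pb/Pr scaling factors; treat the exact coefficients as textbook values rather than something specified on this page:

```python
def rgb_to_ypbpr(r, g, b):
    """Convert normalised (0..1) RGB to YPbPr using BT.601 luma weights.
    The 0.564 / 0.713 factors scale Pb and Pr into roughly -0.5..+0.5."""
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    pb = 0.564 * (b - y)
    pr = 0.713 * (r - y)
    return y, pb, pr

# Pure red: low luma, strongly positive Pr, negative Pb.
print(rgb_to_ypbpr(1.0, 0.0, 0.0))
```

Going the other way is just the inverse of the same linear equations, which is why RGB and YPbPr are easily converted either way in theory, even though the connectors are not interchangeable.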

      Program Delivery Control (PDC)

      PDC is an invention that enables you to set your video recorder to tape a programme, knowing that it will be recorded in full, even if the programme is shown later than advertised. Programme Delivery Control (PDC) is a system which permits simple programming and recording control of VCRs using Teletext technology. It promises simplified VCR programming (through information on the teletext system), and programmes are recorded even if the broadcaster changes the transmission times due to over-runs, schedule changes, etc. Under PDC the VCR can be programmed to look out for and record certain types or categories of programme. In addition to on/off recording control information, data is also transmitted about programme categories and intended audiences. General categories include sport, music, leisure, etc. Intended audience data identifies different age groups, disabled people etc. For example it is possible to programme a VCR to record all programmes featuring rock music or athletics. For PDC to work, the broadcaster should be transmitting PDC data and the viewer should have a VCR capable of making and controlling PDC selections. Technically, PDC is just extra data transmitted within the teletext data stream. ITU-R BT.809 is the standard for the programme delivery control (PDC) system for video recording.


      Teletext is a method to transfer text pages to television sets within the TV broadcasts. The teletext system is in use in many European countries. A teletext service consists of a number of pages, each page consisting of a screen of information. These pages are transmitted one at a time, utilising spare capacity in the television composite video signal. When the complete service has been transmitted, the cycle is repeated, although the broadcaster can choose to transmit some pages more frequently if required. A domestic television set equipped with a suitable teletext decoder can display any one of these pages at a time. The viewer selects the page for display by means of a remote handset. The service is one way; the user is unable to request a page directly and can only instruct the decoder to search for a particular page in the teletext data stream. There will usually be a delay before the requested page appears in the transmission cycle. When the page is detected, the decoder captures and displays the information contained in the page. Thus the more pages within the service, the longer the access time. For this reason, broadcasters usually adjust the size of their services to obtain a cycle time of around 30 seconds, and therefore an average access time of 15 seconds. A teletext service is divided into up to eight magazines; each magazine can contain up to 100 pages. Magazine 1 comprises page numbers 100 to 199, magazine 2 numbers 200 to 299, and so on. Each page can also have associated sub-pages, which can be used to extend the number of individual pages within each magazine. A teletext display consists of letters, numbers, symbols and simple graphic shapes. In addition, there are a number of control codes which allow selection of graphic or text colours and other display features known as attributes.
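The relationship between service size and access time can be estimated with simple arithmetic. The figures below are illustrative assumptions, not data from any particular broadcaster: all of lines 6-22 (17 lines) used for teletext in both fields of a 625-line/50 Hz signal, one packet per VBI line, and 24 packets per page:

```python
# Rough teletext cycle-time estimate (illustrative assumptions).
LINES_PER_FIELD = 17     # lines 6-22 inclusive, if all carry teletext
FIELDS_PER_SECOND = 50   # 625-line / 50 Hz system
PACKETS_PER_PAGE = 24    # 24 display rows (packets) per page

# How many complete pages can be transmitted per second.
pages_per_second = LINES_PER_FIELD * FIELDS_PER_SECOND / PACKETS_PER_PAGE

def cycle_time_seconds(num_pages):
    """Time for the whole service to repeat once.
    The average access time is roughly half of this."""
    return num_pages / pages_per_second
```

Under these assumptions a service of about 1000 pages cycles in roughly 28 seconds, in line with the ~30-second cycle time mentioned above.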
The characters available for display in the teletext system are the letters A to Z, a to z, numerals 0 to 9, the common punctuation marks and symbols, including currency signs, accents and a whole range of other character sets. Other graphic shapes (termed block graphics) are used to create simple pictures. The history of teletext starts in the early 1970s, when British broadcasters investigated extending the use of the existing UHF TV channels to carry a variety of information. After discussions with industry a common teletext standard was agreed, and following extensive trials a pilot teletext service was started in 1974. Teletext was introduced into the UK on a full commercial basis in 1976. Fastext, a means of reducing wait times for pages, was introduced in 1987 and helped to spread consumer acceptance. Teletext is now included as a standard feature on many European TV sets. The characters that make up the teletext page are transmitted in the Vertical Blanking Interval (VBI) of the television signal. Lines 6 to 22 in field 1 and 319 to 335 in field 2 are available to carry teletext data. Each character or control code is represented by a 7-bit code plus an error-checking parity bit. If the teletext decoder detects a parity error, the character is not displayed. The data for one teletext display row, together with addressing information, is inserted in one VBI line. Since there are 24 display rows or packets per teletext page, it takes 24 data lines to transmit a teletext page. Bits are represented by a two-level NRZ signal. Synchronisation information is included at the start of each packet to indicate bit and byte positions. ITU-R BT.653 (formerly known as CCIR 653) is a recommendation that defines the various teletext standards used around the world. TV systems A, B, C, and D for both 525-line and 625-line TV systems are defined. The teletext system in the USA is called NABTS (North American Broadcast Teletext Specification). It is specified in EIA-516/ITU-R BT.653.
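The 7-bit-plus-parity character encoding can be sketched as follows. Teletext character bytes use odd parity, and a decoder simply leaves undisplayed any character whose byte fails the check (the function name is our own for this example):

```python
def decode_teletext_char(byte):
    """Return the 7-bit character code if the byte passes the odd-parity
    check, or None if the decoder should not display the character."""
    ones = bin(byte & 0xFF).count("1")
    if ones % 2 == 1:        # odd number of 1 bits -> parity is valid
        return byte & 0x7F   # strip the parity bit, keep the 7-bit code
    return None              # parity error: character is not displayed
```

For example, 'A' (0x41) has two 1 bits, so it is transmitted with the parity bit set (0xC1); any single-bit transmission error flips the parity and the character is dropped rather than shown corrupted.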
Notes on recording the teletext signal with a video recording device:

      • Normal VHS tape recorders do not have sufficient bandwidth to record teletext signals
      • Decent S-VHS recorders have (just) sufficient bandwidth to record the teletext data stream in S-VHS mode
      • Several VHS models are available that record decoded subtitles 'burnt in' to the picture
      • DV recorders (and presumably Digital 8) WON'T record the vertical interval of the video signal that carries the teletext information
      • DVD+RW recorders should have sufficient bandwidth, and digital stability on playback, to allow the data stream to be recorded, PROVIDED they aren't stripping off the vertical interval
      The combination of digital TV and the teletext system is uncertain. Only some digital satellite channels use analogue teletext; subtitles are their own separate digital data stream. And new digital TV broadcasting systems seem to have shifted from teletext to "super text TV", a completely different text information transfer system that can do more than standard teletext.

      Closed Captioning

      Captions are text versions of the spoken word. Captions make audio perceivable to those who do not have access to the audio (cannot hear it). Though captioning is primarily intended for those who cannot receive the benefit of audio, it has been found to greatly help those who can hear the audio and those who may not be fluent in the language in which the audio is presented. Closed captions are the visible text for spoken audio, transmitted invisibly within the TV signal in the USA. Closed captions are captions that are hidden in the video signal, invisible without a special decoder. The place they are hidden is called line 21 of the vertical blanking interval (VBI). On newer televisions, closed captions can be accessed via the menu. The US captioning system is not compatible with teletext. Captioning uses video line 21 to transport its data. The method of encoding used in North America allows for two characters of information to be placed in each frame of video, and there are about 30 frames in a second. This corresponds (roughly) to 60 characters per second, or about 600 words per minute. The character set was designed for the United States, and really has very little beyond basic letters, numbers, and symbols. Closed captioning can be recorded to a VCR along with the video signal. Because of the slow data rate, closed captioning survives even on poor VHS recordings. Digital television broadcasts in the USA also have closed captioning functionality, but it is implemented differently than in the NTSC TV system. Digital Television Closed Captioning (DTVCC), formerly known as Advanced Television Closed Captioning (ATVCC), is the migration of the closed-captioning concepts and capabilities developed in the 1970s for NTSC television video signals to the high-definition television environment defined by the ATV Grand Alliance and standardized by the ATSC (Advanced Television Systems Committee).
This new environment provides for larger screens, higher screen resolutions, enhanced closed captions, and higher transmission data rates for closed-captioning. The DTVCC specification is defined in the Electronic Industries Association publication EIA-708.
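The "two characters per frame" figure for line-21 captioning translates directly into the data rate quoted above; the word-length assumption below (about six characters per word, including the trailing space) is ours, used only to turn characters per second into words per minute:

```python
# Line-21 closed captioning carries 2 characters per video frame.
NTSC_FRAME_RATE = 30000 / 1001   # ~29.97 frames per second
CHARS_PER_FRAME = 2

# Roughly 60 characters per second...
chars_per_second = CHARS_PER_FRAME * NTSC_FRAME_RATE

# ...or about 600 words per minute, assuming ~6 characters per word.
words_per_minute = chars_per_second * 60 / 6
```

The slow rate is also why the signal is so robust: even a poor VHS recording preserves enough of line 21 to recover the data.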


      The FCC adopted a system in the USA to block the display of television programming based upon its rating. The V-Chip reads information encoded in the rated program and blocks programs from the set based upon the rating selected by the parent. The V-chip is a standard for placing program rating information on a television program, so that parents can choose to filter what their children see. This information is carried in field 2 of the caption area. The standard for television content advisories ("ratings") is EIA-744-A.

      • V-Chip Homepage - the FCC adopted rules requiring all television sets with picture screens 33 centimeters (13 inches) or larger to be equipped with features to block the display of television programming based upon its rating

      Time code

      Time code is time information included in the video signal or stored alongside a taped video signal. This time information is very useful in video editing and post-processing applications. When you require pieces of audio, video, or music technology equipment (e.g. a tape recorder and a sequencer) to work together, you may need some means to make sure that they play in time with each other. This is called 'synchronisation' or 'synchronization', which gets shortened to 'sync' or even 'synch'. The SMPTE/EBU timecode standard defines the predominant internationally accepted standard for a sync tone, and it allows devices to 'chase' or locate to a precise position. SMPTE timecode comes in two versions: "LTC" timecode, a sync tone which can be recorded onto the audio track of a video tape or onto an audio tape, and "VITC" timecode, which is stored inside the video signal. There are four standard frame-rate formats. The SMPTE frame rate of thirty frames per second (fps) is often used for audio in America (for example the Sony 1630 format for CD mastering). It has its origins in the obsolete American mono television standard. The American colour television standard has a slightly different frame rate of about 29.97 fps. This is accommodated by the SMPTE format known as thirty Drop Frame and is required for video work in America, Japan and generally the 60 Hz (mains frequency), NTSC (television standard) world. The EBU (European Broadcasting Union) standard of 25 fps is used throughout Europe, Australia and wherever the mains frequency is 50 Hz and the colour TV system is PAL or SECAM. The remaining rate of 24 fps is required for film work. One of the wonderful things about professional video equipment is that every field of every frame on a videotape has a unique address indicated by time code. The address is recorded on a special track using SMPTE timecode. This timecode track is in addition to the CTL (control) track, linear audio tracks, and helical-scan video track.
The address is displayed in decimal format as HH:MM:SS;FF. The consequences of every frame being permanently labeled are enormous. It makes it possible to eject a tape from its player and reload it later, and still be able to find exactly the same frame as before. Having the timecode "permanently" associated with the video means that frame-accurate "cue sheets" can be drawn up, so that the director or editor can find important points in the program just by seeking to a specified timecode number. Timecode thus allows editing sessions to be spread out over days or even weeks, with perfect confidence that any edit point can be precisely re-visited at any time. Commonly, SMPTE code would only be used where video is involved, and the video machine becomes the master with everything else slaving to that. In video editing the time code is usually adopted for equipment synchronization and for finding exact positions on the video tape (when video is recorded, time code is also recorded, so it can be used to find specific positions again and again). In audio post-production, SMPTE has been adopted for machine synchronisation and as a reference of tape position. You record SMPTE timecode to a spare track on tape. You feed this (audio) signal into a box which converts it into MIDI Time Code, which much audio equipment uses. There are also other time code systems in use in some special applications (for example MIDI time code, proprietary video time code systems etc.)
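The thirty Drop Frame format mentioned above can be illustrated with a small conversion routine. This is a sketch of the standard drop-frame counting rule (frame numbers 00 and 01 are skipped at the start of every minute, except every tenth minute), not code from any particular editing product:

```python
def frames_to_dropframe(frame_number):
    """Convert a frame count at 29.97 fps into SMPTE drop-frame
    timecode HH:MM:SS;FF. Dropping 2 frame NUMBERS (no picture frames
    are discarded) per minute, except every 10th minute, keeps the
    displayed timecode in step with wall-clock time."""
    frames_per_10min = 10 * 60 * 30 - 9 * 2   # 17982 frames per 10 minutes
    d, m = divmod(frame_number, frames_per_10min)
    if m < 2:
        # lands in the first two frame slots of a 10-minute boundary: no drop
        frame_number += 18 * d
    else:
        frame_number += 18 * d + 2 * ((m - 2) // 1798)
    ff = frame_number % 30
    ss = (frame_number // 30) % 60
    mm = (frame_number // (30 * 60)) % 60
    hh = frame_number // (30 * 60 * 60)
    return f"{hh:02d}:{mm:02d}:{ss:02d};{ff:02d}"
```

For example, the frame one nominal minute in (frame 1800) carries the address 00:01:00;02, because numbers 00 and 01 were skipped at the minute boundary.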

    Video signal distribution

    Generally the majority of video interfaces are designed for point-to-point connection: you have the signal source on one end of the cable and the signal receiver on the other end. Transmission lines have a characteristic impedance (Zo) with which they should be driven and terminated; in video, the most popular is the 75 ohm coaxial cable. Video signals are wide-bandwidth signals that cover six or more octaves of frequency range. A typical video signal can start from a few tens of Hz (even DC in some applications) and can extend easily to tens of MHz (typically up to 5-6 MHz for broadcast TV video). Video engineers must match impedances to avoid reflections when driving transmission lines. The video transmission lines are traditionally 75 ohm coaxial cables. Only dissipative elements (resistors) can be relied on for matching over such wide bandwidths. The use of resistors creates a loss. The driver must compensate with added gain. That's why most video drivers have a fixed gain of two, though some are settable. The first information needed when designing or choosing a video driver is the bandwidth. Microscopically, video is a bit-stream, and the high-frequency end depends on the rise/fall time of the waveform. To reproduce the waveform with satisfactory fidelity, the upper -3 dB point should be between 0.35 and 0.50 divided by the rise/fall time of the video signal, thus putting the high end of the video bandpass in the tens or hundreds of MHz depending on the application. Macroscopically, video is an image, and to reproduce it we have to pass the rate at which it was sampled, i.e. the frame rate. This sets the low end around 2.5 to 5 Hz. AC coupling would require large capacitors, which is why most applications are DC coupled. What if we wanted to display the camera's signal on multiple monitors? One way is to loop or tee the signal through a monitor and connect the first monitor to a second monitor.
Professional video equipment typically provides loop-thru connections that allow this. These connections may terminate the signal automatically when a connection is made. If not, the signal needs to be terminated manually. This is done by using either a switch near the connector or a special connector with a precision resistor inside, a 75-ohm termination. Only one termination should be placed on the video signal, and it should be at the end. Double-terminated signals look dark, while unterminated signals appear overly bright. Consumer video equipment normally terminates internally and does not allow for looping signals. It is perfectly acceptable to loop video signals as long as the overall cable length and the number of loop-thrus are not excessive. Five or fewer loop-thrus is generally acceptable. A better way to increase the number of signals is through a distribution amplifier (DA). DAs, available for most signal types, amplify the signal and typically provide four to eight outputs for each input. This allows you to feed the same signal to up to eight picture monitors or other destinations with the signal from one camera. Looping an input signal through several DA inputs can quickly allow 50 to 100 outputs. Sometimes, when a video signal needs to be distributed to many receivers, an RF distribution system is used. This works exactly like a common antenna system or cable TV. You feed the signal or several signals in from the start of the network, and this signal (or signals) can be received by all TVs connected to this antenna network. With a modulator, you can take a video and its associated audio and modulate them up to RF. Channel 3-4 modulators are usually easily available and allow the use of a television for monitoring both video and audio (most home VCRs also have a modulator in them). Although this kind of RF system is not recommended in a studio environment, it can be quite handy while out on the road.
When video lines get long, the cable length needs to be taken into consideration. A video cable is a transmission line, and a transmission line's bandwidth depends on its length. For example, at 10 MHz, 100 ft (30 m) of RG-59A has 1.1 dB IL (insertion loss), 200 ft (60 m) has 2.2 dB IL, and 300 ft (90 m) has 3.3 dB. Depending on the length, NTSC or PAL video experiences little loss, but HDTV or SXGA video would be very much affected. To correct for this, the line is "equalized" to restore the overall response to the necessary application bandwidth. The equalizer has an inverse frequency characteristic, compared to that of the transmission line, to create a flat response at the end of the line. There are so-called "cable compensating" video amplifiers that can do the cable compensation. Generally those amplifiers have some form of setting where the user sets the amount of compensation to add. The need for compensation depends on the cable length and the type of cable used (a more lossy cable needs more compensation).
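The two rules of thumb above (bandwidth from rise time, and loss growing with cable length) can be sketched numerically. The square-root-of-frequency scaling below is an assumption (it models skin-effect-dominated coax loss), and the reference point is the RG-59A figure quoted above; both function names are ours:

```python
import math

def required_bandwidth_hz(rise_time_s, k=0.35):
    """-3 dB bandwidth needed to reproduce a signal with the given
    rise/fall time, using the 0.35-0.50 divided by t_r rule of thumb."""
    return k / rise_time_s

def coax_loss_db(length_ft, freq_mhz):
    """Estimated insertion loss for RG-59A style coax. Assumes loss
    scales linearly with length and with sqrt(frequency) (skin effect);
    reference point is 1.1 dB per 100 ft at 10 MHz."""
    return 1.1 * (length_ft / 100.0) * math.sqrt(freq_mhz / 10.0)
```

With these figures, 300 ft at 10 MHz gives the 3.3 dB quoted above; an equalizer restores flatness by applying roughly the inverse of this loss curve at the receiving end.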

    • Driving Video Lines - When does a trace or a wire become a transmission line? Bandwidth, characteristic impedance, ESD, and shoot-through considerations for selecting the proper video driver, receiver, mux-amp, or buffer.

    Video signal switching

    Video signals can be switched with mechanical switches. With two cameras, the simplest method is to use an A-B switch. A-B switches have two inputs and one output. Camera A goes to one input, camera B to the other; connect the monitor to the output or common terminal. With simple (passive) switches, the picture will likely roll every time the feed is switched. Avoiding this requires a vertical interval switch. Switching video signals during the vertical interval keeps the switch out of the picture area and reduces the likelihood of a vertical roll. Eliminating the roll requires a vertical interval switch and that the signals be synchronized or genlocked. If the signals are not genlocked, it is not always possible to make the switch during the vertical intervals of both signals.

    Film to video conversion

    Telecining is a method by which progressive video that runs at 24 fps (such as a film) is converted to a format which can be displayed on a TV. The word "telecine" is derived from the words television and cinema. Telecining is a process by which video that runs at 24 frames per second (fps) is converted to run at ~30 fps or 25 fps. This is necessary because a television can only display video at ~30 fps (in NTSC-based countries) or 25 fps (in PAL/SECAM-based countries). The telecining process is used on many types of video, such as films, most cartoons, and many other kinds of programs. A telecine is a device used for scanning photographic motion-picture images and transcoding them into video images in one of the standardized video formats. Its most common usage is to prepare videotape transfers from completed film programs. Film scanner is a more general term, and telecine is frequently reserved for a scanner that operates only in real time. In addition to scanning the film images, telecines must reconcile the speed and frame count differences between various film and video formats. NTSC telecining is usually done using a process called 3:2 pulldown. In this process one film frame is shown for two TV fields, the next film frame for three TV fields, the next for two TV fields, and so on. This process works well, looks quite nice on TV and is technically quite simple. In general, PAL/SECAM telecining does not use duplicated fields. Instead, most 24 fps films are simply sped up by 4% to play at 25 fps on a PAL/SECAM system. In most cases, one film frame makes one TV picture frame, but in some cases of the conversion from 24 fps to 25 fps, one field of video is shifted by a frame (this looks fine on TV but can cause problems when the video signal is processed further).
In very rare cases a 24 fps video is converted to 25 fps for PAL/SECAM video by duplicating 2 fields over 24 frames to produce 25 frames. In addition to the different frame rate and interlacing, TV displays and movies use different aspect ratios. The normal TV screen has a 4:3 aspect ratio, whereas practically all movies (except some very old ones) use a much wider picture format. Panning & scanning is the oldest and most used method of converting a widescreen image to fit an old-fashioned 4:3 TV screen. In this way of transferring images, the resolution is kept as high as it can be, but at the cost of missing area (you see only part of the picture the movie theater audience sees). Another way to do the aspect ratio conversion is letterboxing. In this conversion the whole movie frame is shown on the TV screen, but it fills only part of the 4:3 TV screen. The unused parts (above and below the movie image) are filled with black video signal. This can be viewed nicely on a normal 4:3 TV (although the picture can look somewhat small) and looks good on a 16:9 TV as well (with the "zoom" feature the 16:9 TV owner can get the whole screen filled with the picture). The problem inherent with letterboxing is that lots of perfectly good video resolution is lost in the black lines. However, this has now changed. With DVD taking a larger market share all over the world, HDTV coming to the USA and digital TV coming to Europe, there is a new video format that can be used. This video mode has many names, like "16:9 Enhanced Widescreen", "Anamorphic widescreen" or simply "16:9". The point is that instead of optimizing the video image for a 4:3 TV set, it is optimized for a 16:9 TV set. For example, all 20" or bigger 4:3 TV sets sold in the European Union since 1995 or so have an "anamorphic squeeze" to be able to take this new format. Also many digital playback devices (DVD players, digital TV set-top boxes) have options to set whether this signal format is used or not.
If you try to push a "16:9 Enhanced Widescreen" signal to a TV which does not support it, you will get a picture that is around 33% too tall. Playing a telecined video on a television will usually look okay. However, problems can arise when capturing and playing a telecined video on a computer. For NTSC telecining, interlacing artifacts are caused by the duplicate fields of video that are used to increase the number of frames displayed each second. Generally a telecined PAL signal does not cause problems in computer processing, but in some cases where one field of video is shifted by a frame, you get a video signal which looks okay on an interlaced display (television) but produces interlacing artifacts on a computer monitor and when the video is digitized.
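The 3:2 pulldown cadence described above can be sketched as a simple mapping from film frames to video fields (the function name and list representation are ours, for illustration only):

```python
def pulldown_32(film_frames):
    """Map film frames (24 fps) onto interlaced video fields
    (~59.94 fields/s): frames alternately contribute 2 and 3 fields,
    so every 4 film frames become 10 fields, i.e. 5 video frames."""
    fields = []
    for i, frame in enumerate(film_frames):
        fields.extend([frame] * (2 if i % 2 == 0 else 3))
    return fields
```

Calling pulldown_32(["A", "B", "C", "D"]) yields the familiar AA BBB CC DDD field sequence; the video frames assembled from fields of two different film frames (such as a B field paired with a C field) are the source of the interlacing artifacts seen when telecined NTSC material is viewed on a progressive computer display.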

    • How film is transferred to video - The goal of this article is to make the reader understand how a movie is shot and later transferred to home video.
    • How Video Formatting Works - If you've watched many movies on video, you've probably read the words, "This film has been modified from its original version." But how has it been modified? The message that appears at the beginning of video tapes isn't very specific. As it turns out, there are a number of ways video producers modify theatrical films for video release, and elements of these processes have sparked heated debates about maintaining artistic visions.
    • Letterbox and Widescreen Advocacy Page - This page describes the difference of letterbox and widescreen picture formats.
    • Telecining - Telecining is a process by which video that runs at 24 frames per second (fps) is converted to run at ~30 fps or 25 fps. The telecining process is used on many types of video, such as films, most cartoons, and many other kinds of programs.
    • What Is 3:2 Pulldown? - Film runs at a rate of 24 frames per second, and video that adheres to the NTSC television standard runs at 30 frames a second. When film is converted to NTSC video, the frame rates need to be matched in order for the film-to-video conversion to take place. This process of matching the film frame rate to the video frame rate is called 3:2 pulldown.

<[email protected]>

Back to ePanorama main page