Video production technology page

    Video production and editing

      Video editing technology

      Linear editing, simply stated, involves the use of two tape machines and a device that controls those two machines. One tape machine is the "playback" device, which contains the original tape to be edited, and the second machine is the "record" device that records the new edited sequence. The device that controls the syncing of these two tape machines is an edit controller. A more complex system would include a third tape machine used as an additional "playback" device, which would allow two separate tapes to be combined during the edit process.

      As the name implies, linear editing involves editing a sequence of events that are spread over a constant line, or in this case laid down on tape in a constant sequence. In other words, everything is recorded one scene after another. With linear editing, you would have to spend hours moving the tape back and forth through the "playback" machine, picking out the scenes you wanted to keep and placing them in the right order. The time required to move the tape through the machine, skipping all the unwanted video, can be mind-boggling.

      With the revolution of digital video, the information that is laid down on videotape can now be converted to digital information. Written in a digital format, it can be manipulated by a computer, the heart of the non-linear editing system. This solves the problem of mechanically moving the "playback" tape back and forth during the linear editing process, because the original tape is loaded onto the computer's hard disk drive as digital data. The hard disk can access any scene of the video without having to read the entire tape from the beginning, so the editing process is no longer bound to the time requirements of moving tape. Scenes can be picked in any order, played and reviewed at any time during the edit process. They can be fine-tuned and transitions added between scenes.
Because the computer sees the video as digital data, it is simple to manipulate. Sound can be added at any time, from almost any source. And there is no generation loss with non-linear digital editing: the edited copy will be as clear as the original, and you can edit as many times as you wish without any loss of video quality. Many video editing systems are computer-based. This type of system includes capture hardware that is installed in a standard computer along with editing software. The best news of all is the cost of non-linear video editing: the equipment needed is a fraction of the cost of a full linear editing system. Generally you need a computer (Pentium III or better), a capture card, enough hard disk space (easily tens of gigabytes) and suitable editing software (something suitable usually comes with the capture card, or you buy it separately). The capture card takes the analog video output signal from your camera or VCR and converts it to the digital data needed by the computer. If you have a digital camera, then instead of a video capture card you will need an adapter card that accepts direct digital video.

      Time code

      Timecode is the general name given to time information sent between pieces of equipment so that they can synchronise with each other. There are many different types:

      • Vertical Interval TimeCode (VITC): This time code is incorporated into a video signal in the vertical refresh time period (stored as digital code in a non-visible video line).
      • Longitudinal TimeCode (LTC): Longitudinal TimeCode is a code which can be recorded on audio tape or CD, recorded to video recorder audio track or transmitted as an audio signal. It comes in four flavours of its own:
        • Film: This operates at 24fps and is used by the film industry
        • EBU: This operates at 25fps, the European video standard
        • SMPTE (Society of Motion Picture & Television Engineers): This universal standard operates at 30fps, one of the US video formats
        • DF (Drop Frame): This operates at 29.97fps, the other US video standard
      • MIDI TimeCode (MTC): MIDI TimeCode is time code that is transmitted over a MIDI cable. It is the MIDI equivalent of SMPTE and operates at any one of the frame rates above.
      Time code is a very useful tool when there is a need to synchronize different devices to each other (for example video, sound, lights etc.). Time code is also used in video editing to be able to accurately determine the videotape position when making "cuts" in editing.
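      The drop-frame counting mentioned above can be sketched in code. The following Python function is an illustrative implementation (not taken from any particular product) of the standard drop-frame rule: frame numbers 00 and 01 are skipped at the start of every minute, except minutes divisible by 10, so the timecode label keeps step with real time at 29.97 fps.

```python
def frames_to_dropframe(frame_count):
    """Convert a 29.97 fps frame count to a drop-frame timecode label.

    Illustrative sketch: 17982 frame numbers fit in 10 minutes and
    1798 in each dropped minute; 18 labels are skipped per 10 minutes.
    """
    tens, rem = divmod(frame_count, 17982)       # whole 10-minute blocks
    if rem >= 2:
        frame_count += 2 * ((rem - 2) // 1798)   # labels skipped this block
    frame_count += 18 * tens                     # labels skipped in past blocks
    ff = frame_count % 30
    ss = frame_count // 30 % 60
    mm = frame_count // 1800 % 60
    hh = frame_count // 108000
    return f"{hh:02d}:{mm:02d}:{ss:02d};{ff:02d}"  # ';' marks drop-frame
```

      For example, the 1800th frame (exactly one real minute in) is labelled 00:01:00;02, because labels ;00 and ;01 of minute 1 are dropped.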

      Video cameras

      In TV, everything starts with the camera. Get that right and whatever problems you have later should only be creative. The work that you do will determine the camera you need to buy. The smaller the camera the more discreet you can be, but there are trade-offs. The CCDs are the part of the camera that is sensitive to light - they are like the film or the retina of your eye. For professional results, you need 3 CCD chips (one each for red, green and blue light). Single-chip cameras (generally) belong in the domestic market. Most prosumer cameras use 1/4" CCDs, mid-range cameras usually have 1/2" CCDs, while pro cameras often use 2/3" CCDs. A larger CCD always results in better pictures, even if the pixel count is the same. FIT (Frame Interline Transfer) CCDs are better than IT (Interline Transfer), because they are more resistant to vertical smear (which shows up if you point the camera at lights). Both have improved so much recently that vertical smear is hardly a problem nowadays. CCD shape is also important. Low-budget cameras only have a 4:3 aspect ratio. To future-proof your camera, you probably need a 16:9/4:3 switchable model to be able to shoot "widescreen" video as well. Low-end cameras are stuck with the lens they ship with. More expensive cameras allow changing lenses. If you want a good lens set, you need at least two zoom lenses: telephoto and wide angle. The reason for this is that it is difficult to cover all eventualities with one lens (many lenses do not perform well at the extremes). Typical problems are that as you zoom out you may notice that strong verticals become curved, and as you zoom in the picture may get slightly darker (the lens is ramping). Most cameras can get a very good picture on bright sets (a small aperture, usually f8 or f11). Camera performance in low light is also important in many applications, and this varies greatly between different cameras.
Fast shutter speeds are useful for fast sports, but in normal applications you rarely need the wide range of speeds they offer. If you expect to shoot computer screens you need a finely variable shutter so you can sync the shutter speed with the computer screen refresh rate; this reduces annoying computer monitor flicker and roll bars. A 25 fps shutter setting can help give your video a film look. Nowadays digital DV camera technology is becoming very popular. You won't find a better videotape format in terms of price/performance for standard-definition television than DV or its related formats DVCAM and DVCPRO. Also, DV is the first broadcast-quality format small enough for a camera master to fall into a cup of tea. There are three varieties of DV camera: miniDV (domestic cameras), Sony's DVCAM (prosumer, or what was once termed industrial) and Panasonic's DVCPRO (professional). All DV tapes have CD-quality audio. When buying cameras, the connectors for video and audio are important. On low-end cameras, audio input is via mini jack. These cannot cope with daily use (they will eventually become loose and the sound intermittent). XLR sockets and plugs are more robust and plug directly into professional microphones. If your choice of camera has mini jacks you'll need a sound adapter box to connect XLR microphones to it. There are several ways to take pictures out of the camera, besides ejecting the tape. High-end cameras have BNC outputs, plus BNC inputs to allow you to link two cameras together to lock timecode. Digital DV cameras have a FireWire IEEE-1394 connection (called i.LINK by Sony). This allows you to play straight from the camera into an editing system with no conversion. When using cameras, please note that some cameras might also be sensitive to radiation other than visible light.
Most black and white CCD cameras are sensitive to near-IR light, meaning that with ten or so high-output IR LEDs you can create enough "invisible light" to get a reasonable picture inside a dark room (colour CCD cameras would actually work too, but not well, because IR light is usually pretty much filtered out inside them, or the colour separation filter/prism removes it). You can test the sensitivity of your CCD camera to IR by operating your IR remote controller in front of it. If you see flashing on the remote controller where its IR transmitters are, your camera is somewhat sensitive to IR light.

        TV studio camera information

        In a TV studio it is typical that all the cameras are controlled from the main control room. A camera control unit (CCU) allows the camera settings to be controlled remotely. In this arrangement the camera operator just needs to aim the camera in the right direction and do the shooting. All other camera settings are done through the CCU from the control room. This allows the technician there to easily match the pictures from different cameras to look the same, because he/she has access to control all of them.

        The camera is connected to the CCU using a cable. This cable generally carries the video signal from the camera (in a suitable video format), audio, power to the camera, intercom communications, tally light control (12-24V voltage or closure) and camera control signals. Traditionally the CCU has been connected to the camera through a multi-core cable or through a triax cable.

        One option for the CCU-to-camera connection is to use a multicore cable or separate cables for all signals. A multicore cable is a cable with multiple different kinds of wires inside it. There are conductors for carrying video signals (generally miniature coaxial cables) and other signals (usually twisted pairs). Different camera systems use different kinds of multicore cables. One commonly used multicore cable type is the 26-pin multicore camera/remote cable using round Hirose connectors. This cable can carry analogue Y/R-Y/B-Y component, RGB, Y/C or composite video depending on the equipment connected to it. In addition to video it carries power (typically 12V), audio and control signals. The 26-pin multicore (usually referred to as CCZ 26 pin) is used by Sony, Panasonic, Hitachi, Ikegami, JVC, and Toshiba; this is what the majority of professional TV camera multicore cables use. This kind of multicore cable system can generally transfer the video signal up to 100-200 meters (depending on the equipment connected). There are also other multicore cable systems, for example the 10- and 14-pin camera connectors used in older consumer/prosumer video cameras, to mention a few.

        Triax cables are used in the TV broadcast industry for TV camera interconnections (connecting the camera to the CCU and supplying power to the camera). The triax system is covered in more detail in the next section.

        Triax cable

        Triax is a clever (though complex and expensive) system to enable a broadcast television camera to 'communicate' with its base station by means of a single fairly lightweight co-axial cable. Triax cables are used in the TV broadcast industry for TV camera interconnections (connecting the camera to the CCU and supplying power to the camera).

        Triax cable is designed with two isolated shields to provide multiple functions, such as power, through one cable to your camera. There are two versions of triax cable commonly used in the TV industry: RG59 (3/8") and RG11 (1/2").

        Triaxial cables are constructed with a solid or stranded center conductor and two isolated shields. The center conductor and the inner isolated shield make up a coaxial cable configuration that carries the video signal. The outer isolated shield can be used for several separate signals by means of multiplexing, which may include a power feed, teleprompter feeds and control for automation. This means that one triax cable can replace a large number of separate cables (or one multi-conductor cable) between the video camera and the control room equipment. Triax systems are quite expensive, but they are economical in TV production because they are reliable (much more reliable than multi-wire camera cables), allow flexible system construction (just one cable for everything) and allow quite long distances.

        Triax cables use special triax connectors. The TV industry generally uses the connectors made by Lemo and Fischer. There are at least three different commonly used triax connectors in the TV production industry. A typical triax camera system can send the picture over a triax cable for up to 500 meters with no degradation. Camera set-up parameters can be remotely adjusted through the cable, which usually carries intercom functions as well.

        There are (at least have been) two types of triax systems in use in broadcast industry: analogue triax and digital triax.

        In conventional analogue triax the signals (component video, audio, intercom, control etc.) are modulated onto FM carriers at different frequencies which are carried through the same cable. Digital triax is component digital video (plus other signals) running down the cable in digital format. Typically the various video and audio signals from the camera are modulated onto various FM radio carriers and sent down the centre conductor. At the same time, RF-modulated communication, video feeds and control instructions travel in the opposite direction back to the camera. The triax adapter sorts all of this out. The camera has power, synchronising and control signals, the camera operator has video feeds and two-way communication, and the director has 'perfect' video and audio signals.

        When using triax cable, the overall system is powered by AC power at the CCU (some CCUs might also accept local DC power from batteries). Because the cable length can be very long (up to hundreds of meters) and considerable power needs to be transported (a large camera and a local monitor), the voltages transported through the triax cable can be quite high (up to 160V DC or 250V AC on some systems) to allow long-distance power transfer (the resistance can be 5-30 ohms per kilometer). The high supply voltage is converted in the camera adapter to 12V DC by a switched-mode power supply. Because of the high voltages on the cable, various special precautions are taken to monitor earth leakage etc. to prevent electric shock under fault (damaged cable) conditions.
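        A first-order Ohm's-law estimate shows why such a high feed voltage is used. The figures below (80 W camera head, 300 m cable, 20 ohms/km per conductor) are illustrative assumptions, not values from any specific CCU system.

```python
def triax_voltage_drop(supply_v, load_w, length_m, ohms_per_km):
    """Estimate the voltage drop on a triax power feed.

    First-order sketch: assumes the load current is roughly
    load_w / supply_v, i.e. a constant-power load drawing current
    at about the supply voltage. All figures are illustrative.
    """
    loop_ohms = 2 * (length_m / 1000.0) * ohms_per_km  # out + return path
    current_a = load_w / supply_v
    return current_a * loop_ohms

# With a 160 V feed, 80 W over 300 m at 20 ohms/km drops only about
# 6 V in the cable; feeding the same 80 W at 12 V would draw almost
# 7 A and lose most of the supply voltage in the cable resistance.
```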

        CCU (Camera Control Unit) refers to a range of equipment and operations related to remote control of video/television camera functions. This can include either partial or complete camera control. CCU operations are an important component in many types of television production, in particular multi-camera productions. The person operating the CCU units is known as a CCU Operator, Vision Controller or (in some cases) a Technical Director (TD).

        Partial CCU Control is a common method for controlling camera functions in television production. It is a professional approach, allowing for maximum control and quality. Most of the camera functions (framing, focus, etc) are controlled normally by a camera operator, whilst certain functions (colour balance, shutter speed, etc) are controlled remotely by the CCU operator. This allows the camera operator to concentrate on framing and composition without being distracted by technical issues. At the same time the CCU operator, who is a specialist in the more technical issues, is concentrating on the quality and consistency of the pictures. In a multi-camera production the CCU operator will usually be responsible for more than one camera (2-3 cameras is common, but up to 10 is possible). For example, a 20-camera broadcast could have 5 CCU operators, each controlling 4 cameras.

        Since the advent of high-performance remote-controlled cameras, CCU can also refer to cameras which are completely controlled by the CCU operator (the camera itself is unmanned). Such controllers may include any of the features mentioned above, with the addition of pan/tilt, zoom and focus controls.

        The Technical Director is the person responsible for setting up and maintaining the technical parameters of the production's video images. In many cases this is the same person as the CCU operator, but in any case the two jobs are closely linked. The TD's responsibilities include making sure all vision sources (cameras, tape machines, graphic generators, etc) meet the technical requirements for broadcast, and that their outputs are consistent and stable. In many older systems this involves monitoring video signals with a waveform monitor and vectorscope (some modern systems do this more or less automatically). A good CCU operator should have a solid technical understanding of how video and television works.

        DV camera information

        DV is a digital video camera format. The DV spec is a 720x480 image size with 5:1 compression at the camera source. The video signal in a DV system is compressed to a constant throughput of 3,600 kilobytes per second, which averages out to 5:1 compression. DV offers good picture quality and good quality editing. What makes DV so great is that the imagery captured on the hard drive is an exact duplicate of the image captured on tape. There is no loss, because the camera and your computer share the same "codec" and each recognizes the data stream the same as any other computer data - as a binary stream of ones and zeroes. This is a benefit of the format, not of any specific piece of hardware. The digital interfacing between a DV camera and a computer is done using the IEEE 1394 interface (also known as FireWire).
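        The constant 3,600 kB/s throughput quoted above makes disk-space planning simple arithmetic. This small helper (an illustrative calculation, using decimal gigabytes) shows that an hour of DV footage needs roughly 13 GB of disk:

```python
DV_RATE_KB_PER_S = 3600  # constant DV throughput quoted above

def dv_storage_gb(minutes):
    """Disk space (decimal gigabytes) needed to capture DV footage."""
    bytes_needed = DV_RATE_KB_PER_S * 1000 * 60 * minutes
    return bytes_needed / 1_000_000_000

# One hour of DV works out to about 12.96 GB.
```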

      Video mixing and effects

      Video mixers are used to combine and select sources for playback and recording. A mixer allows you to select a source, then use a particular transition (such as a wipe or crossfade) to blend from the previous source to the new one.

      Traditional video mixers have two busses, an A bus and a B bus. Much like a two-scene preset light board, when the A bus is active, a new source can be selected on the B bus with the source select buttons. Once a transition is selected, the t-bar can be used to manually fade from one bus to the other, applying the transition as quickly or as slowly as the bar is moved. Alternatively, the take button can be used instead of the t-bar to automatically perform the fade from one bus to the other at a defined speed (take speed control). The transition from one picture to the next can be anything from a simple crossfade to a complex video effect, depending on the capabilities of the video mixer. The effects controls allow you to select transitions, apply video effects (such as negative, solarizing, and other special effects), as well as apply chroma keying (making video transparent based on a certain color) or luminance keying (making video transparent based on a certain brightness).
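      A crossfade is the simplest of these transitions: at each t-bar position, every output pixel is a weighted mix of the two sources. The sketch below (an illustrative function, with frames modelled as flat lists of 0-255 pixel values) shows the idea:

```python
def crossfade(frame_a, frame_b, t_bar):
    """Blend two video frames pixel by pixel.

    t_bar models the t-bar position: 0.0 shows frame_a only,
    1.0 shows frame_b only, values in between mix the two.
    Frames are flat lists of 0-255 pixel values (illustrative model).
    """
    if not 0.0 <= t_bar <= 1.0:
        raise ValueError("t-bar position must be between 0.0 and 1.0")
    return [round((1.0 - t_bar) * a + t_bar * b)
            for a, b in zip(frame_a, frame_b)]
```

      With the t-bar halfway, a black pixel mixed with a 200-level pixel gives level 100; a real mixer does the same arithmetic on every pixel of every frame, which is why the sources must be frame- and phase-synchronized.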

      Some modern video mixers use the bus system in a slightly different way. Instead of the T-Bar transitioning between busses, the t-bar / take button transitions between a current source and a next source. The source select buttons allow you to cue up a source as 'next' or to cut directly to it. The benefit here is that the take button and the T-bar ALWAYS fade from the current source to the next source (you cannot get lost).

      Some technical background on video mixing:

      When working together with other video equipment, it's important that these devices are tightly synchronized with each other. Whenever we use two or more video signals for a production, whether it's being recorded or broadcast, we need to be sure that the signals are synchronized. To do any type of complex switching/mixing requires that all the signals are genlocked and timed. Timing involves aligning the various system delays such that the sync and color phases match. Vertical, horizontal, color subcarrier components within the two signals each need to match their counterparts in order to avoid a picture roll, tear, or hue shift, respectively, when switching between sources. Genlock is the term used to describe the process of synchronizing these components within video signals.

      When signals are synced together, it is possible to switch seamlessly between different video sources, and it is possible to mix video signals together using a video mixer. If the incoming video signals are already synchronous, mixing them with some electronics isn't terribly complicated. If they aren't, things quickly get complicated. Not all equipment can be genlocked. If the video sources are not synchronized, this is a non-trivial problem. You will need to either:

      • Genlock the two video signals so that they are in sync before mixing. Depending on the sources, the difficulty may range from easy to impossible. Production video equipment will probably have the necessary inputs and outputs; consumer equipment probably will not. For mixing N signal sources, N-1 will need to have genlock inputs.
      • For two-source mixing you need a real-time programmable video delay. This would typically consist of a video A/D, a dual-ported frame store, readout delay timing logic, and a video D/A. Since there is no way to assure the precise phase stability needed for PAL encoding, you would probably need to separate the luminance and chrominance and deal with them separately. The delay would need to be anywhere up to 1/2 frame (or 1 frame if only one of the sources can be delayed). Not an afternoon project. For N sources, you would need N-1 zero-to-full-frame delay units, plus an automatic adjustment scheme to maintain synchronization.
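      A back-of-envelope calculation shows the scale of the frame store involved. Assuming 8-bit 4:2:2 sampling of a 625-line (PAL) picture, which averages 2 bytes per pixel, one frame of delay memory and the worst-case delay time work out as follows (the sampling figures are assumptions for illustration):

```python
def frame_store_bytes(width=720, height=576, bytes_per_pixel=2):
    """Memory needed for one full frame in the delay store.

    Defaults assume 8-bit 4:2:2 sampling of a 720x576 (PAL)
    picture, which averages 2 bytes per pixel (assumed figures).
    """
    return width * height * bytes_per_pixel

def max_delay_ms(full_frame=True, frame_rate=25):
    """Worst-case delay needed to line up an unsynchronized source."""
    frame_ms = 1000 / frame_rate
    return frame_ms if full_frame else frame_ms / 2
```

      So one PAL frame of 4:2:2 video needs roughly 830 kB of dual-ported memory, and the delay logic must be able to hold a source for up to 40 ms (or 20 ms if both sources can be delayed).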

      In practical video systems this syncing is usually done using devices called frame synchronizers (or frame syncs), which have their outputs locked to a genlock signal while their inputs follow an unlocked signal. At least one or two frame syncs can be found in nearly any professional video facility.

      Much professional video equipment, like tape machines and cameras, has the capability to be synchronized to an external video clock. This keeps them all in sync. A genlocked camera system is a system designed to use the vertical and/or horizontal sync pulses from one camera to drive all the cameras in the system. The biggest problem with this type of system is that it requires an additional coaxial cable run between each of the cameras. For example, in a four-camera system you can have camera 1 be the sync source (set it to free-running mode). You'll need to set cameras 2, 3 and 4 to the external sync position. This is usually achieved by flipping a switch on the camera. The key is that you're setting all cameras to accept the same system sync. In some cases the sync source for the cameras is taken from some external sync source (a studio master sync device or similar).

      Normally, a sync generator is used as a common reference in fixed studio systems. However, just about any composite video signal provides enough information to genlock another video device, assuming it can be genlocked. So genlock actually means to feed the video signal of a master camera to a number of slave cameras so that they are driven at the same speed and in phase. True genlock requires three parameters: vertical and horizontal sync, vertical and horizontal blanking, and color burst. These signals are usually combined into a composite signal called black with burst, or black burst. Black burst is really just a black video signal, the same thing that you would get out of a capped camera. The genlock signal is not a special signal; it is a whole video signal just transmitting a black picture (it can sometimes have some picture material in it, but usually a black signal is preferred). The slave camera should be set up as a slave - normally this is performed with a switch or menu system built into the camera.

      You can buy special devices to split a genlock signal, but usually you can simply use a BNC T-connector on the genlock input of each camera. Genlock cameras usually provide two controls: H PHASE (Horizontal Phase) and SC PHASE (Colour Sub-Carrier Phase). You will need to adjust these controls on the genlocked camera so that its horizontal phase and SC phase match the master camera. This type of genlock runs into problems when long cable runs between cameras are used, because of the propagation delay (usually measured in nanoseconds) from camera number one to the last camera in the system. Genlock is best applied in small systems of 4 cameras or less with a total cable length between all cameras of less than 300m.
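      The nanosecond-scale delay mentioned above can be estimated from the cable length and the cable's velocity factor. The sketch below assumes a velocity factor of 0.66, a typical value for solid-dielectric coax (real cables range from roughly 0.66 to 0.85):

```python
C_M_PER_S = 299_792_458  # speed of light in vacuum, m/s

def cable_delay_ns(length_m, velocity_factor=0.66):
    """Signal propagation delay along a coax run, in nanoseconds.

    velocity_factor is an assumed typical value for solid-dielectric
    coax; check the datasheet of the actual cable in use.
    """
    return length_m / (C_M_PER_S * velocity_factor) * 1e9
```

      At this velocity factor, 100 m of cable adds roughly half a microsecond of delay, which is why the H PHASE control is needed to re-time each genlocked camera.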

      When computer-generated graphics are added to the mix, a device called a "genlock" is usually used. This external interface generates the video clock for the computer-generated graphics and can additionally overlay them on another video signal. This way you can, for example, create a film title that is blended over the video picture.

      When you need to connect video devices that cannot be synchronized to your existing system, you need some form of synchronizing device in between (frame synchronizer, field synchronizer, time base corrector etc.). Some video mixers have this kind of feature built into them, so they can mix video signals from sources that are not synced to each other. Video mixers with this feature are usually "low end" or "middle class" mixers, because you can never get ideal results from non-synced signal sources (there is always the possibility of some signal glitches).

<[email protected]>
