Dr. Mohieddin Moradi
mohieddinmoradi@gmail.com
1
Clear Scan
2
– When the CRT’s vertical frequency is lower than the camera’s operating frequency, the camera CCD will
output (read out) the image before all lines of the CRT have been scanned.
– The lines that were not scanned within the CCD charge accumulation period will appear black.
3
Clear Scan
t_CRT: frame time on the display = 1/50 sec
t_CCD: CCD charge accumulation time (scan time) = 1/70 sec
Clear Scan
– Most CRT computer displays have a higher vertical frequency than video cameras operating at 50Hz (for
PAL areas).
– When capturing the CRT image using such cameras, the camera CCD will capture part of the computer
scan twice.
– This results in more light being captured for that part of the scan and a white bar being output.
4
t_CRT: frame time on the display = 1/80 sec
t_CCD: CCD charge accumulation time (scan time) = 1/50 sec
– By activating Clear Scan, the electronic shutter speed (= CCD charge accumulation period) can be
controlled in small increments so it can be matched to the computer display’s vertical frequency.
• In this way, the banding is effectively eliminated.
• This banding effect, both white and black, is not seen when shooting a plasma or LCD display.
– Clear Scan is also effective for eliminating the flicker effect when shooting under fluorescent lights, whose
flicker frequency differs from the standard CCD accumulation period.
5
Clear Scan
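The black-bar and white-bar conditions above can be sketched numerically. The following is an illustrative Python sketch (the function name and frequency values are my own, not a camera API): it simply compares the display's vertical frequency with the camera's scan frequency.

```python
# Hedged sketch: predict the CRT banding artifact from the two vertical
# frequencies involved. Purely illustrative; not a real camera API.

def banding(crt_hz: float, ccd_hz: float) -> str:
    """Compare the CRT's vertical frequency with the camera's scan rate."""
    if ccd_hz > crt_hz:
        # CCD reads out before the CRT finishes one scan:
        # the unscanned lines appear as a black bar.
        return "black bar"
    if ccd_hz < crt_hz:
        # Part of the CRT scan is captured twice: a white bar appears.
        return "white bar"
    return "no banding"

print(banding(crt_hz=50, ccd_hz=70))  # 50 Hz CRT, 1/70 s scan -> black bar
print(banding(crt_hz=80, ccd_hz=50))  # 80 Hz CRT, 50 Hz camera -> white bar
print(banding(crt_hz=72, ccd_hz=72))  # Clear Scan matched -> no banding
```

Clear Scan corresponds to the third case: the accumulation period is trimmed until the two frequencies coincide.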
Slow Shutter
– The Slow Shutter feature extends the CCD accumulation period to longer than the frame (or field) rate,
instead of shortening it, which is the case with conventional electronic shutters.
– For example, by setting the Slow Shutter speed to 16 frames (32 fields), a unique blur effect can be
produced.
– This is because the CCD captures movement across the 16-frame period as one image.
6
Normal shutter speed Blur effect using slow shutter (16-frame accumulation)
Slow Shutter
– In addition to producing such effects, the Slow Shutter mechanism can also help when shooting dark
scenes, by providing higher sensitivity.
– The longer accumulation period allows more electrical charges to be accumulated under low light
conditions.
7
Normal shutter speed High sensitivity using slow shutter (16-frame accumulation)
– A 1/10 second shutter speed translates into an
accumulation period six times longer than the signal’s
field period (1/10 sec = 1/60 sec × 6).
– This means that only 10 images per second are
available to generate a 60-field video signal.
– This ‘field count’ discrepancy is compensated for using
a memory buffer. Each image is read out from the buffer
six times, at the video signal’s 1/60-second field rate.
– After the sixth output, the memory is refreshed with the
next image.
– This mechanism allows moving pictures to be
reproduced with unique effects and high sensitivity.
– However, motion may sometimes appear jerky rather
than smooth when the accumulation period is
excessively long relative to the motion in the scene.
Slow Shutter Example
8
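The buffer arithmetic above can be sketched in one line of Python (an illustrative helper, not a camera API): each accumulated image must be repeated enough times to keep the output field rate constant.

```python
# Sketch of the slow-shutter buffer readout arithmetic: how many times one
# accumulated image is read from the memory buffer before it is refreshed.

def readout_repeats(shutter_s: float, field_rate_hz: float) -> int:
    """Number of field-rate readouts per accumulated image."""
    return round(shutter_s * field_rate_hz)

print(readout_repeats(1 / 10, 60))   # 1/10 s accumulation at 60 fields/s -> 6
print(readout_repeats(32 / 60, 60))  # 16-frame (32-field) accumulation -> 32
```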
– In general, the frequency response of a camera is defined by
its CCD’s pixel count.
– Due to this fact, the frequency response of a 720P signal varies
depending on the pixel count of the CCD used to produce it.
– The vertical frequency response of a native 720P camera
(using a CCD with 720 vertical pixels) draws a gradual
downward curve toward the 720 TV line vertical resolution.
– In contrast, a super-sampled 720P signal originating from a
1080P camera (using 1080 vertical pixels) maintains a higher
response level up to the 720 TV line range.
– This higher response allows the reproduction of much sharper
picture edges.
– This is because Super Sampling uses the full pixel count of the
1080P CCD.
9
Super Sampling
– The 720P output is achieved using a digital filtering method
called Super Sampling technology.
– Super Sampling digitally cuts off the vertical frequency
response right before the 720 TV line resolution.
– This is achieved without any degradation of the higher 1080P
response level.
– As a result, excellent response characteristics (almost flat)
are obtained for the 720P output.
10
Super Sampling
Optical Low Pass Filter
Effective in studio
11
Optical Low Pass Filter
– Due to the physical size and alignment of the photo sensors on a CCD imager, when an object with fine
detail (such as a fine striped pattern) is shot, a rainbow-colored pattern known as Moiré may appear
across the image.
– This tends to happen when the image’s spatial frequency exceeds the CCD’s spatial-offset frequency or,
put more simply, when the image details are smaller than the spacing between the photo sensors.
– To prevent such Moiré patterns from appearing, optical low pass filters are used in CCD cameras.
– An optical low pass filter is placed in front of the CCD prism block to blur image details that would
otherwise result in Moiré.
– Since this type of filtering can reduce picture resolution, the characteristics of an optical low pass filter are
determined with special care to effectively reduce Moiré, but without degrading the camera’s maximum
resolving power.
12
Optical Low Pass Filter
13
Cross Color Suppression
− Cross color is an artifact seen across very fine striped patterns when displaying a composite signal feed on
a picture monitor.
− It is observed as a rainbow-like cast, moving across the stripe pattern.
14
Cross Color
Cross Color Suppression ON
Cross Color Suppression
– Even with the latest filtering technology, composite signals cannot be perfectly separated into their
original luminance and chrominance components. This results in the cross color effect.
– Cross Color Suppression technology presents a solution to the limitations of Y/C separation on TV receivers.
– The idea behind Cross Color Suppression is to eliminate signal components that can result in cross color
before the camera outputs the composite signal.
– These signal components are eliminated from the Y/R-Y/B-Y signals within the camera head using
sophisticated digital three-line comb filtering.
– Adding this process allows the output composite signal to easily be separated into its chrominance and
luminance components at the TV receiver.
– This results in greatly reduced cross color and dot crawl (dot noise appearing at the boundaries of
different colors) as compared to a composite output that does not use this process.
15
CCD Image Chip
[Figure: individual pixels on the CCD image chip; charge levels on the pixels and the CCD output signal after integration of the charges (volts vs. time / H location).]
16
Detail Correction, Detail Signal and Detail Level
[Figure: ideal signal vs. CCD output signal (volts vs. time / H location). A distinct edge is an instantaneous transition from black to white; a blurred edge is a gradual transition from black to white.]
17
Detail Correction, Detail Signal and Detail Level
[Figure: CCD output signal + correction signal = corrected signal, approximating the ideal signal.]
18
Detail Correction, Detail Signal and Detail Level
– The video signal output from a CCD unit lacks detail information because the unit’s ability to resolve detail
is limited by the size of the light sensitive pixels.
– An electronic circuit in the camera compensates for the missing detail information in the video signal.
– It makes picture edges appear sharper than the native resolution of the camera allows (this is also called
“image enhancement”).
– This is achieved by overshooting the signal at the picture edges using a spike-shaped signal called the
detail signal.
– The amount of detail correction can usually be adjusted. This is called detail level.
• Increasing the detail level sharpens the picture.
• Decreasing it softens the picture.
19
Detail Correction, Detail Signal and Detail Level
Horizontal Detail Correction
(a) Original signal
(b) = (a) delayed by 50 nsec
(c) = (a) delayed by 100 nsec
(d) = (a) + (c)
(e) = 2(b) - (d)
Correction signal = 2(e)
20
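The (a)-(e) signal chain above can be modeled in a few lines. This is a simplified, sample-based sketch (my own illustration, not camera firmware): the 50 ns and 100 ns delays become one- and two-sample delays, and the correction signal 2(e) = 2(2(b) - (a) - (c)) is a scaled second difference that spikes at edges.

```python
# Simplified model of the two-delay-line detail generator (illustrative).
# b = one-sample delay of a, c = two-sample delay; detail = 2*(2b - (a + c)).

def detail_signal(a):
    b = [a[max(i - 1, 0)] for i in range(len(a))]  # ~50 ns delay
    c = [a[max(i - 2, 0)] for i in range(len(a))]  # ~100 ns delay
    return [2 * (2 * b[i] - (a[i] + c[i])) for i in range(len(a))]

# A step edge produces the characteristic undershoot/overshoot spike pair
# that, added back to the original, sharpens the transition.
print(detail_signal([0, 0, 0, 1, 1, 1]))  # -> [0, 0, 0, -2, 2, 0]
```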
Vertical Detail Signal
– The mechanism for creating the vertical detail
signal is basically the same as horizontal detail
correction.
– The only difference is that the delay periods for
creating signals (b) and (c) are one horizontal
scanning line and two horizontal scanning lines,
respectively.
Note:
− Excessive detail correction can lead to an
artificial appearance of the picture, as though
objects have been cut out from the background.
− Therefore, detail correction must be applied with
care.
21
(a) Original signal
(b) = (a) delayed by one horizontal scanning line
(c) = (a) delayed by two horizontal scanning lines
(d) = (a) + (c)
(e) = 2(b) - (d)
Correction signal = 2(e)
H/V Ratio
Detail correction is applied to both horizontal and vertical picture edges using separate horizontal detail and
vertical detail circuits.
H/V Ratio:
The ratio between the amount of detail applied to the horizontal and vertical picture edges.
– It is important to maintain the balance of the horizontal and vertical detail signals to achieve natural
picture enhancement.
H/V Ratio should thus be checked every time detail signals are adjusted.
22
– When shooting a shiny and velvety object, the texture of the object’s surface may sometimes be overemphasized or
blurred on the screen.
– For example, if the object is a wrapped bar of soap, the image may appear as shown in the below “Before setting,” where
the wrinkles are emphasized by dark shadows.
– This is because video cameras add a signal (the DETAIL signal) that emphasizes dark-to-bright transitions, in this particular
case around the wrinkles on the left side of the transparent wrapping.
– In such situations, the sheen and texture of the object can be reproduced more naturally by adjusting the level of the
detail signal to be added.
Tips for Reproducing Sheen and Textures of Objects Realistically
23
Tips for Reproducing Sheen and Textures of Objects Realistically
24
Tips for Reproducing Sheen and Textures of Objects Realistically
25
Highlight DTL
− Provides better edge expression in highlight (high-amplitude) scenes.
Conventional model
26
Fine DTL
− Expands small edge components in low-contrast objects.
− Compresses edge components in high-contrast objects.
− The glare (a bright, unpleasant look) of a picture with too much edge enhancement is reduced, and a
more natural image is obtained.
27
Compressing the edge
component in the high
contrast object
Expanding the small
edge component in the
low contrast object
Mix Ratio
– The term Mix Ratio describes the ratio between the amount of detail applied by the pre-gamma detail
correction and by the post-gamma detail correction.
– There is no standard setting for Mix Ratio.
– It is a parameter adjusted completely according to the operator’s preference.
28
[Figure: camera gamma (γ_c = 0.45), display gamma (γ_m = 2.22), overall linear response (γ_m × γ_c = 1). Gamma boosts the contrast at dark picture areas while compressing contrast in highlights, so the detail signals added to the dark picture areas are amplified while those added to the highlight areas are compressed; the post-gamma detail correction is not subject to the gamma correction.]
Pre-gamma and post-gamma detail correction
– Gamma correction used in video cameras boosts the
contrast at dark picture areas while compressing the
contrast in highlights.
– Detail signals generated by the pre-gamma detail
correction are also subject to this gamma correction. For
this reason:
• The detail signals added to the dark picture areas are
amplified.
• The detail signals added to the highlight areas are
compressed.
– Pre-gamma detail correction is therefore effective for
enhancing contrast at dark areas of the image, but not
for highlights.
– This issue is solved by the post-gamma detail
correction, which is not subject to the gamma correction
and thus effectively enhances the brighter parts of the image.
Mix Ratio
29
Pre-gamma and post-gamma detail correction
30
Knee Aperture
Knee Correction / Knee Aperture
The KNEE APERTURE function enhances signals in the highlight areas.
Signals in the highlight areas that were compressed by the KNEE function
are enhanced by the KNEE APERTURE function.
− Knee Correction is an effective function for preventing image highlights from being overexposed, by
compressing them to fall within the standard video signal range (100 to 109%).
− This function can sometimes degrade picture contrast and sharpness in the highlight areas. This is because:
1. Compressing highlight signal levels also compresses highlight contrast (contrast loss).
2. Detail signals applied to such areas are also compressed by the Knee Correction (sharpness loss).
– To compensate for this contrast and sharpness loss, cameras have a Knee Aperture circuit placed right
after the Knee Correction process.
– This function emphasizes the picture edges of highlight areas which were compressed by the Knee
Correction process (highlights above the knee point).
– Knee Aperture can be adjusted in the same way as Detail Correction, but only for signal levels above the
knee point.
31
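As a rough numeric sketch of the knee process itself (the knee point and slope values here are illustrative, not from any camera specification), highlight levels above the knee point are mapped onto a shallower slope:

```python
# Illustrative knee compression: levels above the knee point are compressed
# onto a shallower slope so highlights stay inside the legal signal range.

def knee(level: float, knee_point: float = 0.85, slope: float = 0.2) -> float:
    if level <= knee_point:
        return level  # below the knee point: passed through untouched
    return knee_point + (level - knee_point) * slope  # compressed highlight

print(knee(0.5))  # mid-tone passes through unchanged -> 0.5
print(knee(1.2))  # a 120% highlight is pulled back toward the knee point
```

Because any detail signal riding on these highlights is compressed by the same slope, edge contrast above the knee point drops too, which is exactly what the Knee Aperture circuit then re-emphasizes.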
Knee Aperture
– When shooting a bouquet using a spotlight, the bright areas of the image can be overexposed.
– This phenomenon can be eliminated using the KNEE function, keeping the brightness level (luminance level) of the image
within the video signal’s dynamic range.
– However, in certain cases, the KNEE process can also cause the picture edges of objects to appear blurred. This is
because the contrast of highlight areas is reduced as a result of compressing the luminance signal.
– The “Before setting” image demonstrates how the KNEE function eliminates highlight “washed-outs,” but also shows that
the picture edges of bright objects such as the flower petals and plastic cubes get blurred.
– In such situations, the picture edges of the highlight areas can be reproduced with more contrast by applying image
enhancement only to the signals compressed by the KNEE function.
Tips for Reproducing Solid Picture Edges of an Image’s Highlights
32
Tips for Reproducing Solid Picture Edges of an Image’s Highlights
33
Tips for Reproducing Solid Picture Edges of an Image’s Highlights
34
Crispening
35
The detail signals with small amplitudes are regarded as noise and
removed to avoid detail signals being generated around noise
The Crispening is a function used to avoid detail signals being generated around noise.
Crispening
Crispening is a function used to avoid detail signals being
generated around noise.
– By activating this function, detail signals with small amplitudes,
which are most likely generated around noise, are removed
from the signal.
– In the Crispening process, only detail signals that exceed a
designated threshold are used for image enhancement.
– Conversely, detail signals with small amplitudes are regarded
as noise and removed.
36
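The thresholding idea can be sketched as a simple coring function (an illustrative model, not actual camera firmware): only detail samples whose amplitude exceeds the designated threshold survive.

```python
# Illustrative crispening (coring): small-amplitude detail samples are
# treated as noise and zeroed; larger ones pass through for enhancement.

def crispen(detail, threshold):
    return [d if abs(d) > threshold else 0.0 for d in detail]

noisy_detail = [0.02, -0.01, 0.4, -0.35, 0.03]
print(crispen(noisy_detail, threshold=0.05))  # -> [0.0, 0.0, 0.4, -0.35, 0.0]
```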
– Increasing a camera’s DETAIL LEVEL can effectively sharpen the picture edges of an image.
– However, as shown in the “Before setting” image, this operation can also coarsen the entire image, even
though the picture edges of the perfume bottles and plastic cubes are correctly enhanced.
– This effect occurs because the DETAIL process is applied to all areas of the image, including unnecessary
noise.
– In such situations, adjusting the Crispening level can reduce this effect while picture edges are kept sharp.
Tips for Improving Picture Sharpness without Coarsening the Image
37
Tips for Improving Picture Sharpness without Coarsening the Image
38
Tips for Improving Picture Sharpness without Coarsening the Image
39
Level Dependent
− Level Dependent allows operators to suppress the detail signals generated in the low luminance areas
alone.
40
Level Dependent allows low luminance detail to be suppressed
Level Dependent
41
Level Dependent
– Noise is most noticeable in dark picture areas and applying heavy detail
correction can significantly emphasize it.
– This can make it challenging to capture an image with both extremely fine
picture detail and dark image areas.
– With Crispening, the detail signals generated around noise can be
suppressed, but this also reduces the detail signals of other fine picture
areas.
– Crispening removes small detail signals at all signal levels, whereas Level
Dependent allows operators to suppress only the detail signals generated
in the low-luminance areas; Level Dependent can therefore solve this issue.
– With Level Dependent, the picture edges of the main content (with fine
detail) can be kept sharp, while detail signals generated around noise in
the low-luminance areas are suppressed.
42
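The difference from Crispening can be made concrete with a small sketch (illustrative names and thresholds): instead of gating on the detail amplitude everywhere, the detail is suppressed only where the underlying luminance is low.

```python
# Illustrative Level Dependent: suppress detail only in low-luminance areas,
# leaving detail on brighter picture content untouched.

def level_dependent(detail, luma, cutoff=0.15):
    return [0.0 if y < cutoff else d for d, y in zip(detail, luma)]

detail = [0.3, 0.3, 0.3]
luma = [0.05, 0.5, 0.9]  # a dark, a mid-tone, and a bright pixel
print(level_dependent(detail, luma))  # -> [0.0, 0.3, 0.3]
```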
– To reproduce sharp images, the DETAIL function is used.
– However, DETAIL can also cause black or dark areas of an image to coarsen. For example, as shown in the “Before
Setting” image, the cockles of the leather and the texture of the metal are reproduced sharply, but the dark areas of the
background and the bottom of the image look coarsened.
– This phenomenon occurs because noise in dark areas of the image is also emphasized by the DETAIL function.
– In such situations, adjusting the camera so the DETAIL process is not applied to low luminance signal levels, reduces the
coarseness in the black or dark areas of the image.
Tips for Shooting Without Coarsening Dark Areas of an Image
43
Tips for Shooting Without Coarsening Dark Areas of an Image
44
Tips for Shooting Without Coarsening Dark Areas of an Image
45
Limiter (Detail Limiter)
– When there is a large luminance level variance at
dark-to-light or light-to-dark picture edges, the detail
circuit can generate over-emphasized picture edges,
making objects appear to ‘float’ on top of the
background.
– This is because detail signals are generated in
proportion to the luminance level change at the
picture edge.
– A limiter is a circuit used to suppress this unwanted
effect.
46
Limiters prevent excessive detail correction
Skin Tone Detail Correction
47
48
Skin Tone Detail Correction
Eliminates the DTL edge only in the high-frequency areas of the skin tone, for greater effect.
49
Skin Tone Detail Correction
Skin Tone Detail Correction is a function that allows the detail level of a user-specified color to be adjusted
(enhanced or reduced) independently, without affecting the detail signals of other picture areas.
– Skin Tone Detail Correction was originally developed to reduce unwanted image enhancement (detail signals)
on facial imperfections such as wrinkles, smoothing the reproduction of human skin.
– By selecting a specific skin color, the detail signals for that skin color can be individually controlled and
suppressed.
– High-end professional video cameras offer a Triple Skin Tone Detail Function, which allows independent detail
control over three user-specified colors.
– This enhances the flexibility of Detail Correction
• one color selection can be used for reducing the detail level of skin color
• two other selections can be used for either increasing or decreasing the detail level of two other objects.
Skin Tone Detail Correction
50
Electronic Soft Focus
– Skin Tone Detail can be effective for reducing the picture sharpness of objects with specific colors.
– However, for some applications it does have its limitations.
– This is because Skin Tone Detail Correction does not really blur the image, it simply decreases the detail
signal level to make the selected color look less sharp.
– To apply further softness across images with specific colors, Electronic Soft Focus is used.
51
– This function uses the detail signal to reduce, rather than increase, the sharpness of a picture.
– By subtracting the detail signal from the original signal, as opposed to adding it as in image enhancement,
Electronic Soft Focus provides a picture that is ‘softer’ than that achieved with the detail correction switched
off completely.
52
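The sign flip described above can be shown in a short sketch (illustrative code, my own names): the same detail signal that would normally be added is instead subtracted.

```python
# Illustrative Electronic Soft Focus: subtracting the detail signal softens
# edges beyond what simply disabling detail correction would give.

def enhance(signal, detail):
    return [s + d for s, d in zip(signal, detail)]

def soft_focus(signal, detail, amount=1.0):
    return [s - amount * d for s, d in zip(signal, detail)]

signal = [0.0, 0.0, 1.0, 1.0]
detail = [0.0, -0.5, 0.5, 0.0]
print(enhance(signal, detail))     # sharpened: [0.0, -0.5, 1.5, 1.0]
print(soft_focus(signal, detail))  # softened:  [0.0, 0.5, 0.5, 1.0]
```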
Electronic Soft Focus
Zebra
53
A 75 IRE Zebra pattern is displayed
− Zebra is a feature used to assist manual iris adjustments by displaying a striped pattern (called a ‘zebra
pattern’) in the viewfinder across image highlights above a designated brightness level.
Two types of zebra modes are available:
I. One to indicate highlights above 100 IRE
II. One to indicate signal levels between the 70 and 90 IRE range
• The 100 IRE Zebra displays a zebra pattern only across picture areas which exceed 100 IRE, the video level
of pure white.
 Using this zebra mode, camera operators adjust the lens iris ring until this zebra pattern appears in the brightest areas
of the picture.
• The second zebra mode displays a zebra pattern across highlights between 70-90 IRE, and disappears
above the 90 IRE level.
 This is useful for determining the correct exposure for facial skin tones, since properly exposed skin (in the case of
Caucasian skin) usually falls around the 80 IRE level.
Zebra
54
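The two zebra modes can be modeled as simple IRE-range tests (an illustrative sketch; the thresholds follow the description above).

```python
# Illustrative zebra modes: mark which pixels would show the striped pattern.

def zebra_mask(ire_levels, mode="70-90"):
    if mode == "100":
        # Highlights at or above 100 IRE, the video level of pure white.
        return [v >= 100 for v in ire_levels]
    # Skin-tone exposure range: pattern appears between 70 and 90 IRE.
    return [70 <= v <= 90 for v in ire_levels]

levels = [50, 80, 95, 105]
print(zebra_mask(levels, mode="100"))  # -> [False, False, False, True]
print(zebra_mask(levels))              # -> [False, True, False, False]
```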
Low Key Saturation
– Low-light areas of a picture can be subject to a reduction in color saturation.
– The Low Key Saturation function adjusts the color saturation at low-light levels by boosting the
chrominance signals to an optimized level, thus providing more natural color reproduction.
55
Low Key Saturation
With the Low Key Saturation function activated, the color of the bottle is reproduced with more richness and depth.
56
– When shooting in an environment with insufficient lighting on the subject, the colors in dark areas of the image may not be
fully reproduced.
– For example, in the “Before Setting” image, the colors of the candies are not fully reproduced, causing the image to look
less attractive.
– This is due to a drop in the color-difference signal levels that determine the color saturation of the reproduced image.
– In such situations, it is required to manually adjust the color-difference signal levels of low-light areas.
Enhancing Colors in Low-Light Areas (Low Key Saturation function)
57
Enhancing Colors in Low-Light Areas (Low Key Saturation function)
58
Enhancing Colors in Low-Light Areas (Low Key Saturation function)
59
Linear Matrix Circuit
– All hues in the visible spectrum can be
matched by mixing the three primary colors
• Red (R)
• Green (G)
• Blue (B)
The ideal spectrum characteristics of three primary colors
60
− Some areas of these spectrum characteristics contain a negative spectral response.
− Since negative light cannot be produced, some colors cannot be matched using any R,
G, and B combination.
– In video cameras, this would result in particular colors not being faithfully reproduced.
– The Linear Matrix Circuit compensates for these negative light values by electronically generating and
adding them to the corresponding R, G, and B video signals.
– This circuit is placed before the gamma correction so that compensation does not vary due to the
amount of gamma correction applied.
– In today’s cameras, the Linear Matrix Circuit is used to create a specific color look, such as defined by the
broadcaster.
Linear Matrix Circuit
61
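Mathematically, a linear matrix is just a 3×3 mix of the R, G, B channels applied before gamma. A minimal sketch (the coefficients here are illustrative, not from any broadcaster's look; real cameras expose them as paint parameters):

```python
# Illustrative linear matrix: each output channel is a weighted mix of the
# R, G, B inputs. Rows summing to 1 leave neutral (R = G = B) colors intact.

def linear_matrix(rgb, m):
    r, g, b = rgb
    return tuple(row[0] * r + row[1] * g + row[2] * b for row in m)

matrix = [[1.10, -0.10, 0.00],   # R' : a little more red, less green
          [-0.05, 1.05, 0.00],   # G'
          [0.00, -0.10, 1.10]]   # B'

print(linear_matrix((1.0, 1.0, 1.0), matrix))  # white stays (close to) white
print(linear_matrix((0.8, 0.4, 0.2), matrix))  # a warm tone gets nudged
```

Keeping each row summing to 1 is the usual design choice, since it guarantees the matrix cannot disturb white balance.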
– The colors are selected by their Hue (Phase), Saturation, and Width (hue range).
– In conventional color correction or matrix control, control parameters interact with each other.
– The Multi Matrix function allows color adjustments to be applied over a single color range, while keeping
other colors intact.
– The Multi Matrix function divides the color spectrum into 16 areas of adjustment, where the operator can
select the hue and/or saturation of the area to be color modified.
Multi Matrix
62
Multi Matrix
63
Built-in 16-axis color matrix and focus-assist function
Multi Matrix
– The example shows the orange pencil being changed to
pink, while the other color pencils remain unchanged.
– In addition to such special color effects, this function is
also useful for matching the color reproduction of
multiple cameras.
64
Advanced Matrix
65
TLCS (Total Level Control System)
66
By activating TLCS, the correct exposure is automatically set for normal, dark, and very bright shooting
environments.
− With conventional cameras, the exposure control range is often limited between the smallest and widest
opening of the lens iris.
− TLCS widens this range by combining three exposure control features into one:
• The Lens Automatic Iris
• The CCD Electronic Shutter
• The Automatic Gain Control
– When proper exposure can be achieved within the lens iris range, TLCS will only control the lens iris.
– For scenes that are too dark even with the widest iris opening, TLCS will activate the Automatic Gain
Control function and boost the signal to an appropriate level.
– For scenes that are too bright even with the smallest iris opening, TLCS will activate the electronic shutter
so the video signal level falls within the 1.0 V signal range.
67
TLCS (Total Level Control System)
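The three-stage hand-off can be sketched as a decision rule (illustrative thresholds and names, not Sony's actual control law): iris first, then gain for dark scenes, then shutter for bright ones.

```python
# Illustrative TLCS hand-off: pick the active exposure control from a
# hypothetical "ideal f-number" that the auto-exposure would like to use.

def tlcs_mode(ideal_f: float, iris_min: float = 1.4, iris_max: float = 22.0) -> str:
    if ideal_f < iris_min:
        return "auto gain"           # too dark even wide open -> boost gain
    if ideal_f > iris_max:
        return "electronic shutter"  # too bright even stopped down -> shutter
    return "auto iris"               # normal case: the lens iris is enough

print(tlcs_mode(5.6))   # -> auto iris
print(tlcs_mode(0.7))   # -> auto gain
print(tlcs_mode(64.0))  # -> electronic shutter
```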
EZ Mode
− EZ Mode is a feature that instantly sets the camera’s main parameters to their standard positions and
activates automatic functions such as
• ATW (Auto Tracing White Balance)
• TLCS (Total Level Control System)
• DCC (Dynamic Contrast Control )
68
Knee Saturation
− When a camera’s KNEE function is set to ON, the bright areas of the image are compressed in the KNEE circuit. In this
process, both the luminance (Y) signals and color-difference (R-Y, B-Y) signals are compressed together, which can
sometimes cause the color saturation of the image to drop.
− KNEE SATURATION is a function that eliminates this saturation drop while maintaining the original function of the KNEE
circuit.
− When KNEE SATURATION is set to ON, the color-difference (R-Y, B-Y) signals are sent to the KNEE SATURATION circuit, which
applies a knee function optimized for these color signals.
− These signals are then added back to the main signal path, producing the final output signal.
69
– When shooting a bouquet using a spotlight, the bright areas of the image can be overexposed and “washed out” on the
screen.
– This phenomenon can be eliminated using the camera’s KNEE function, keeping the image’s brightness (luminance) of
objects within the video signal’s dynamic range (the range of the luminance that can be processed).
– However, in some cases the KNEE process can cause the image color to look pale.
– This is because the KNEE function is also applied to the chroma signals, resulting in a drop in color saturation.
– For example, as shown in the “Before setting” image below, the colored areas look paler than they actually are.
– In such situations, the bouquet can be reproduced more naturally by using a separate KNEE process for the chroma
signals (R-Y/B-Y).
Tips for Reproducing Vivid Colors Under a Bright Environment
70
Tips for Reproducing Vivid Colors Under a Bright Environment
71
Tips for Reproducing Vivid Colors Under a Bright Environment
72
TruEye™ Processing
– TruEye processing is an innovative function that has been developed to overcome some of the drawbacks of
conventional Knee Correction.
– This technology makes it possible to reproduce color much more naturally even when shooting scenes with severe
highlights.
– The effect of TruEye processing is observed in the color reproduction of highlight areas.
73
– Knee Correction is applied individually to each R, G, and B channel. The issue with this method is that only those channels
that exceed the knee point will be compressed.
– As shown in figure (a), suppose that only the red channel exceeds the knee point at a given time, T1. Since only the red
channel is compressed at the knee point using a preset knee slope, the color balance between the Red, Green and Blue
channel changes.
– This is observed as the hue being rotated and saturation being reduced where the Knee Correction was applied.
74
TruEye™ Processing
– The TruEye process overcomes this problem by applying the same Knee Correction (compression) to all channels,
regardless of whether or not they all exceed the knee point.
– This is shown in figure (b) where only the red channel exceeds the knee point. The green and blue channels are also
compressed using the same red knee slope, maintaining the correct color balance between the three channels, while
effectively achieving highlight compression for the red channel.
75
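The difference between the two schemes can be shown numerically (illustrative code; the knee point and slope are arbitrary values, not from any camera). A per-channel knee changes the R:G:B ratios, while a TruEye-style common compression preserves them:

```python
# Illustrative comparison: per-channel knee vs. a TruEye-style common knee.

KP, SLOPE = 0.8, 0.2  # arbitrary knee point and knee slope

def knee_per_channel(rgb):
    """Conventional: each channel is kneed independently -> hue can shift."""
    return tuple(KP + (v - KP) * SLOPE if v > KP else v for v in rgb)

def knee_common(rgb):
    """TruEye-style: compress all channels by the same factor, derived from
    the largest channel, preserving the R:G:B ratios (hue and saturation)."""
    peak = max(rgb)
    if peak <= KP:
        return rgb
    scale = (KP + (peak - KP) * SLOPE) / peak
    return tuple(v * scale for v in rgb)

red_highlight = (1.0, 0.5, 0.25)        # only R exceeds the knee point
print(knee_per_channel(red_highlight))  # R compressed alone: ratios change
print(knee_common(red_highlight))       # all scaled alike: ratios preserved
```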
TruEye™ Processing
− With TruEye turned off, the highlight areas are tinged with yellow, and when turned on, the correct color balance is
reproduced.
76
TruEye™ Processing
Decibels (dB)
Imagine referring to an amount of bread or rice. In both cases, it would be confusing to describe them in
ounces or by the number of rice grains.
– Instead, we use a more convenient expression such as a ‘slice’ or a ‘loaf’ of bread, or a ‘bowl’ of rice.
– The point to note here is that, instead of referring to the actual amount, it is often more convenient to
describe it using a common point of reference.
– This also holds true when describing values related to the video signal.
– In video electronics, signal values must be handled over a wide range – from the very smallest to those
that can be several million times larger.
For this reason, decibels are defined using a logarithmic equation, which offers a convenient way of
expressing this whole range of values together.
77
Decibels (dB)
The decibel is defined by the following equation:
dB = 20 × log10(V’/V)
(V’: the value to express in decibels; V: a well-known reference value = 1.0 V)
 This can be rearranged as:
V’ = V × 10^(dB/20)
Since a relative value is being discussed, substituting
V = 1.0 (volt) gives:
V’ = 10^(dB/20)
78
The decibel values most worth remembering are shown in the following table.
Referring to this table:
 A 20 dB signal gain up means the signal level has been boosted by 10 times.
 A 6 dB signal drop (= minus 6 dB) means the signal level has fallen to one half.
79
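These rules of thumb follow directly from the definition and can be checked with the standard library:

```python
import math

def to_db(ratio: float) -> float:
    """Voltage ratio -> decibels (20 * log10)."""
    return 20 * math.log10(ratio)

def from_db(db: float) -> float:
    """Decibels -> voltage ratio (10^(dB/20))."""
    return 10 ** (db / 20)

print(round(to_db(10), 1))    # a 10x boost -> 20.0 dB
print(round(to_db(1000), 1))  # a 1000x ratio -> 60.0 dB
print(round(from_db(-6), 2))  # -6 dB -> 0.5 (one half)
```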
Decibels (dB)
S/N (Signal-to-Noise) Ratio
Noise refers to distortion of the original signal due to external electrical or mechanical factors.
(S/N )ratio = 20 × log (Vp-p/Nrms) (dB)
A 60 dB S/N means that the noise level is one-thousandth of the signal level.
Test signals
– A horizontal shallow ramp signal of about 20 to 25 IRE units amplitude with a pedestal level of 40 IRE units.
– If this shallow ramp test signal is not available, a 50 IRE units flat field signal with the dither signal could be
used.
80
[Figure 3: Test signals for S/N measurements. a) Shallow-ramp signal (20-25 IRE units amplitude on a 40 IRE unit pedestal); b) flat-field signal with a 30 mV dither signal.]
S/N (Signal-to-Noise) Ratio
– For video, the signal amplitude (Vp-p) is calculated as 1 volt, which is the voltage (potential difference)
from the bottom of the H-sync signal (sync tip) to the white video level.
(S/N )ratio = 20 × log (1 Volt /Nrms) (dB)
• Noise level changes over time, and amplitude cannot be used to express its amount.
• Root mean square (rms) is a statistical measure for expressing a group of varying values, and allows the
magnitude of noise to be expressed with a single value.
• Root mean square can be considered a kind of average of temporal values.
81
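The formula above can be exercised directly (standard library only; the 1 mV noise figure is just an example value, matching the statement that a 60 dB S/N means the noise is one-thousandth of the signal):

```python
import math

def snr_db(signal_vpp: float, noise_rms: float) -> float:
    """Video S/N ratio: 20 * log10(Vp-p / Nrms), with Vp-p taken as 1 V."""
    return 20 * math.log10(signal_vpp / noise_rms)

# 1 V p-p video with 1 mV rms noise: the noise is one-thousandth of the
# signal, i.e. a 60 dB S/N ratio.
print(round(snr_db(1.0, 0.001), 1))  # -> 60.0
```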
Gain Up
– When shooting in low-light conditions, the signal level (amplitude) may be insufficient due to a lack of
light directed to the imager and thus fewer charges generated by the photoelectric conversion.
– To overcome this, most video cameras have a Gain Up function, which is used to electronically amplify
the video signal to a sufficient level for practical use.
– The Gain Up function usually offers several Gain Up values, which are selectable by the operator for
different lighting conditions.
– When using the Gain Up function, it is important to note that a large Gain Up value will degrade the S/N
ratio, since the noise is amplified along with the signal.
– Some cameras also have a minus (negative) Gain setting to improve the camera’s S/N ratio.
82
Turbo Gain
– It helps shooting in the dark.
– Turbo Gain is an extension of the conventional Gain Up function but offers a much larger level boost (+42
dB) of the video signal, to achieve a lower minimum illumination.
83
File System
− It allows a variety of detailed and complex adjustments in order to reproduce the desired colorimetry for
each shooting scenario.
− It also allows compensation for technical limitations in certain camera components.
84
− For broadcasters and large video production facilities, it is imperative that all cameras are set up to have
a consistent color tone or ‘look’, specified by that facility.
I. Reference File stores the standard image factory setting data, and this file contains the reference
values of the auto setup adjustment. This file can be stored in the camera and memory stick.
II. Reference File is used to store user-defined reference settings (current paint data), so they can be
quickly recalled , reloaded, or transferred from camera to camera.
− The parameters that can be stored in a camera’s reference file may slightly differ between camera types.
This difference is due to the different design philosophy of which base parameters should be commonly
shared between the cameras.
Reference File
85
The standard image factory setting data
(the reference values of the auto setup adjustment)
User-defined reference settings
86
Reference File
− In general, each camera lens has different ‘offset’ characteristics which are compensated for within the
camera by applying appropriate adjustments.
− This compensation must be performed on a lens basis.
• Therefore, when multiple lenses are used on the same camera, the camera must be readjusted each time
the lens is changed.
• Camera operators can store lens compensation settings for individual lenses within the camera as data
files. These files are called lens files.
• Since each lens file is assigned a file number designated by the operator, pre-adjusted lens-compensation
data can be instantly recalled by selecting the corresponding file number.
87
Lens File
− Reference File is used to store parameter data that governs the overall ‘look’ common to each camera
used in a facility.
− Scene Files, in contrast, are used to store parameter settings made for individual ‘scenes’.
• It stores the temporary video setting data according to the scene. This file can be stored in the camera and
memory stick.
• A Scene File can be easily created and stored for the desired scene by overriding the data in the Reference File.
• Scene Files allow camera operators to instantly recall previously adjusted camera data such as created for
scenes outdoors, indoors, or under other lighting conditions.
The parameter settings
made for individual ‘scenes’
88
Scene File
− It stores the items displayed on the viewfinder and switch
settings for camera operator.
− This file can be stored in the memory stick, yet the video
data (paint data) cannot be stored.
89
Operator File
Super Motion
Super Motion is a function designed to reproduce high
quality slow-motion video, which is often required in
sports TV programs.
– The core of Super Motion system is a high speed
camera that operates and captures images at 180i
(150i for PAL) compared to conventional video
cameras operating at 60i (50i for PAL).
– The Super Motion camera drives its internal
clocking, the CCD imager, and all image
processing three times faster.
– By replaying the high-rate 180i video at 60i normal
speed, a clean 1/3-speed image is reproduced.
90
Super Motion
To capture images at 180i
– Images must be transferred to the recording device at the 180i rate.
– The data rate of these images is three times larger than HD images captured by conventional HD
cameras (50i).
– The Super Motion system therefore requires a high bit-rate transmission interface and a recorder with high-
speed signal processing.
– For signal transmission, one optical fiber cable is used between the camera and CCU (camera control
unit) , and three HD-SDI connections are used between the CCU and the recorder.
– At the CCU output, the Super Motion system divides the data into three sub data segments in order to fit
the full data into the three HD-SDI connections.
– The sub-data segments are rebuilt into one signal to form the original 180i data in the recorder.
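The divide-and-rebuild idea can be illustrated with a toy demultiplexer; the round-robin frame mapping below is purely an assumption for illustration — the actual sub-data segmentation used by the Super Motion CCU is not specified here:

```python
def split_to_links(frames, n_links=3):
    """Divide a high-rate frame sequence across n lower-rate links (round-robin).
    Illustrative only: the real Super Motion segment mapping is not documented here."""
    links = [[] for _ in range(n_links)]
    for i, frame in enumerate(frames):
        links[i % n_links].append(frame)
    return links

def rebuild(links):
    """Re-interleave the sub-data segments back into the original frame order."""
    n = len(links)
    total = sum(len(link) for link in links)
    return [links[i % n][i // n] for i in range(total)]

frames = list(range(9))                      # nine 180i frames
assert rebuild(split_to_links(frames)) == frames
```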
91
Variable Frame Rate Recording
Over-cranking (Slow motion):
Increase the frame rate to slow down action in the
scene.
Under-cranking (Quick motion):
Decrease the frame rate to speed up action in the
scene.
Examples:
• Recording 20 frames/sec, then playing 10 frames/sec:
– Slow scene (1/2 normal speed)
• Recording 6 frames/sec, then playing 24 frames/sec:
– Fast scene (4x normal speed)
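The over/under-cranking arithmetic in the examples above reduces to one ratio:

```python
def playback_speed(record_fps, play_fps):
    """Apparent motion speed when footage recorded at record_fps is played at play_fps."""
    return play_fps / record_fps

assert playback_speed(20, 10) == 0.5   # over-cranking: half-speed slow motion
assert playback_speed(6, 24) == 4.0    # under-cranking: 4x quick motion
```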
92
Variable Frame Rate Recording
93
Picture Cache Recording
94
– The Picture Cache Recording function has a buffer memory to store both video and audio data. This
buffer memory is kept active and repeats storing video and audio data regardless of whether or not the
camcorder is in REC mode.
– The buffer memory can store approximately seven to eight seconds of video and audio data.
– As soon as the REC button is pressed, the buffered data is first read out from memory and recorded onto
the recording media – tape or disc.
– While Picture Cache Recording can be a convenient function, it is important to note that it introduces a
timing gap between the capture of scenes and the actual recording to the tape/disc media. This is
because the buffered data is recorded to the media first, extending the total recording duration to
include both the buffer data length and the duration between the REC start and stop actions.
– Hence, the REC start button cannot be pressed immediately after the previous recording – the operator
needs to wait for the length of the buffered data.
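The timing gap described above can be sketched as simple arithmetic (the 8-second cache is an assumed value within the stated seven-to-eight-second range):

```python
def media_duration(cache_s, rec_start_s, rec_stop_s):
    """Total material recorded to tape/disc: the cached pre-roll plus the REC interval."""
    return cache_s + (rec_stop_s - rec_start_s)

# with an 8 s cache, pressing REC at t=10 and stopping at t=40 records 38 s of media
assert media_duration(8, 10, 40) == 38
```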
95
Picture Cache Recording
Interval Recording
– The camera captures images at normal frame rate, but stores them only intermittently into a buffer
memory at pre-determined intervals.
– Once the buffer memory is full, the video images are read out sequentially from the memory and
recorded to tape or disc as seamless video.
96
− This function is very useful when precise, simple or complex
changes to the lens or camera settings are required during the
scene - for example, when changing the focus from the
background to the foreground of a scene.
− It allows for smooth, precise and repeatable automatic scene
transitions to occur. The operator can program the duration and
select from three transition profiles: Linear, Soft Stop or Soft
Transition.
− Many lens parameters such as the start and end settings for zoom,
focus and/or camera parameters such as white balance and gain
can be programmed to transition in unison. It works by
automatically calculating the intermediate values during the scene
transition.
− The Shot Transition function can be triggered manually or
synchronised with the camera’s REC start function
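A sketch of how the intermediate values might be calculated during a transition; the profile names come from the slide, but the exact easing curves (`soft_stop`, `soft_transition`) are assumptions:

```python
def shot_transition(start, end, t, profile="linear"):
    """Intermediate parameter value (zoom, focus, gain, ...) at normalized time t in [0, 1].
    Easing curves are illustrative assumptions, not the documented Sony profiles."""
    if profile == "soft_stop":          # ease-out: fast start, gentle landing
        t = 1 - (1 - t) ** 2
    elif profile == "soft_transition":  # ease-in-out: gentle at both ends (smoothstep)
        t = t * t * (3 - 2 * t)
    return start + (end - start) * t

# focus value halfway through a linear transition from 1.0 m to 5.0 m
assert shot_transition(1.0, 5.0, 0.5) == 3.0
```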
97
Shot Transition™ function
Viewfinder
98
– The primary role of a viewfinder is to make focus adjustments and correctly frame the desired image.
– There are three important points to remember when selecting a viewfinder:
I. Larger viewfinders typically have a higher resolution than smaller sizes.
II. Black and white viewfinders have a higher resolution and are less expensive than color viewfinders, hence they are
still the most popular.
III. Color viewfinders are convenient when subjects must be identified by their color.
99
Viewfinder
– In viewfinders, due to their small screens,
resolution can be limited, making precise focus
adjustments difficult.
– This function boosts the viewfinder signal in frequency
ranges that correspond to the image’s VERTICAL
picture edges.
– As a result, sharp images are reproduced on the
viewfinder screen, allowing correct focus adjustments.
– The PEAKING level, which determines the boost level,
is adjustable depending on the operator’s
preferences.
Viewfinder Peaking
100
− Although higher PEAKING levels offer sharper
viewfinder images, when adjusting PEAKING, two
factors must be considered:
I. Raising PEAKING level equally boosts the
viewfinder signal’s noise level
II. Too much PEAKING can create excessively
bright picture edges along viewfinder
characters and icons, such as on-screen
indications including Gain, ND/CC, Shutter
settings, and markers.
− It is therefore important to balance these factors
with the required image sharpness on the
viewfinder.
101
Viewfinder Peaking
– As cameras offer higher image resolutions, focus accuracy becomes a more critical issue than ever
before.
– In addition to the viewfinder PEAKING function, high-end cameras incorporate a VF DETAIL function,
offering a better choice for facilitating focus adjustments.
– Compared to PEAKING, the VF DETAIL function offers two unique features.
• While PEAKING sharpens only vertical picture edges, VF DETAIL increases the sharpness of viewfinder
images both vertically and horizontally.
• While PEAKING is processed within the viewfinder, VF DETAIL takes place within the camera.
 This means that the VF DETAIL function applies sharpness only to video signals shot by the camera, preventing on-
display characters created in the viewfinder from being overemphasized.
The VF DETAIL mechanism uses a process similar to the camera’s main detail function. However, an exclusive detail
circuit is used to create the detail signals and add them to the video signal sent to the viewfinder display.
102
Viewfinder (VF) Detail
Clear VF DTL
– Clear VF DTL supports fine focusing in critical HDTV shooting situations
– Enables the camera operator to focus much more easily
103
Focus Assist Function
This makes it easy for the camera operator to adjust focus in the VF.
104
New Focus Assist Function for 4K
105
Expanded Focus
− The center of the screen on the LCD monitor and viewfinder of the camcorder can be magnified to about
twice the size, making it easier to confirm focus settings during manual focusing.
106
EZ Focus
EZ Focus is a feature that makes manual focusing adjustments much easier.
– When activated, the camera automatically opens the lens iris to its widest aperture, minimizing the depth of field. This allows the camera
operator to make correct focus adjustments much more easily.
– To avoid over-exposure during this mode, the video level is automatically adjusted by activating the
electronic shutter.
– The lens iris will be kept open for several seconds and then return to the same iris opening before EZ Focus
was activated.
107
MIX VF Function
– The MIX VF function is a method of displaying Return Video images on a
camera’s viewfinder screen.
– The MIX VF function enables both the camera’s images and Return
Video images to be overlapped on the viewfinder using a mix effect.
– It offers convenient operation by eliminating the need to toggle
between the two signals for display.
– In MIX VF mode, both images are kept at full-screen size.
– When using a color viewfinder, the Return Video image can be
displayed as a black/white image, while the image shot by the camera
is displayed in color, so the two can be easily identified.
Return Video
Camera Image
Return Video
Camera Image
108
Safety Zone Marker:
– The Safety Zone Marker indicates the area where all consumer TVs can display images.
– Using the Safety Zone Marker as a guide, camera operators can shoot images so they are correctly
displayed on all home TVs.
VF Markers
(Safety Zone Marker, Aspect Marker, Center Marker)
109
Aspect Marker:
– Aspect Markers indicate the picture area specified by aspect ratios. Using these markers, camera angles
can be correctly selected in consideration of the final content’s aspect ratio.
– For example, when shooting in 16:9 aspect ratio for 4:3 on air, it is imperative to keep camera angles within
the 4:3 area, but also prevent unnecessary objects from appearing in the 16:9 area.
110
VF Markers
(Safety Zone Marker, Aspect Marker, Center Marker)
Center Marker:
− The Center Marker allows the camera operator to maintain the center of the image when zooming in to or
zooming out of a subject.
− This is important since lenses usually have slightly different optical centers in their wide and telephoto
positions.
111
VF Markers
(Safety Zone Marker, Aspect Marker, Center Marker)
Cursor Marker:
− The Cursor Marker is used to indicate the position and size of where an image is planned to be added to
the camera’s raw output, such as a station logo prior to on-air transmission.
− The Cursor Marker keeps camera operators aware of these areas, to avoid framing important subjects
from being hidden. This allows camera angles to be flexibly decided, while keeping important content
viewable on the picture screen.
112
VF Markers
(Safety Zone Marker, Aspect Marker, Center Marker)
– The ClipLink allows shooting data to be effectively used in the entire production process. (a unique feature
in DVCAM camcorders)
– With conventional video cameras, shot lists (logging time code, etc.) were typically generated manually
using a clipboard, a pencil, and a stopwatch.
– The ClipLink feature relieves shooting crews of such tedious manual work.
– During acquisition with a ClipLink-equipped camcorder, the in-point/out-point time code of each shot,
together with their OK/NG status, is recorded (in the DVCAM Cassette Memory).
– This data can then be transferred to the appropriate (DVCAM) editing system, and the in-point/out-point
time codes can be immediately used as a rough EDL for the editing process.
113
ClipLink
Index Picture
– The ClipLink feature also generates a small still image of each in-point, called the Index Picture, which is
recorded (to the DVCAM tape).
– This provides visual information of each shot taken.
– When used with the appropriate logging software, the entire ClipLink data, including the Index Pictures,
the in-points/out-points, and the OK/NG status, can be imported.
– This greatly enhances subsequent editing operation by relieving the editor from having to review all the
media (tapes) to pick up and sort the necessary shots before starting editing.
114
– This function automatically records, or logs, the camera settings - including the iris opening, the gain up
setting, and filter selection, as well as basic and advanced menu settings (to a DVCAM tape) (a unique
feature in DVCAM camcorders).
– SetupLog data is constantly recorded to the Video Auxiliary data area of the DVCAM tape while the
camcorder is in Rec mode - therefore making it possible to recall the camera settings that were used
during any given shoot.
– SetupLog is particularly useful when the camera must be serviced, since the technician can recall the
camera settings that were used when a recording was not made correctly to examine what went
wrong.
115
SetupLog
– As opposed to SetupLog, which takes a log of the camera settings in real-time, SetupNavi is specifically
intended for copying camera setup data between cameras using a DVCAM tape as the copy medium (a
unique feature in DVCAM camcorders).
– Activating this feature allows all camera setup parameters, including the key/button settings, basic and
advanced menus, service menus, etc to be recorded to a DVCAM tape.
– It is important to note that SetupNavi data is recorded only when activated by the operator and that the
DVCAM tape used to record this data cannot be used for other purposes.
– It is also convenient when the camera is used by multiple operators or for multiple projects, since the
exact camera settings can be quickly recalled by inserting the DVCAM tape with the SetupNavi data.
116
SetupNavi
FWIGSS
Focus
• Prior to the start of recording,
the camera operator manually
sets the focus following four
simple steps.
1. Switch to manual focus mode
2. Zoom in on subject’s eyes
3. Adjust focus ring until sharp
4. Zoom out to compose shot
The six primary device settings:
• Focus
• White balance
• Iris
• Gain
• Shutter speed
• Sound
117
FWIGSS
FWIGSS
White Balance
• Prior to the start of recording, the
camera operator manually
zooms in on a white card held by
the subject to set the white
balance.
118
FWIGSS
FWIGSS
Iris, Gain, and Shutter Speed
• Prior to the start of recording, the camera
operator adjusts the iris, gain, and shutter speed
as required or desired until the shot is properly
exposed.
• The zebra lines are an aid in setting exposure.
119
FWIGSS
FWIGSS
Sound
• Prior to the start of recording, the
camera operator conducts a sound
check and adjusts the record levels
for optimal sound reproduction.
120
FWIGSS
4K/HD Simulcast
– Independent image processing for 4K and HD.
– GAMMA Curve, Color and DTL can be adjusted together or separately.
121
4K/HD Simulcast and HDR
122
HD Cutout Function for Clear Images
In Zoom & Perspective mode, one portion can be cut
out while performing perspective transformation,
according to the focal length of the lens (The cutout
region can be controlled with a mouse).
In simple HD mode, two portions can be cut out at
the same time (The cutout region can be controlled
with a mouse).
123
Super Resolution Process
– It is not just upscaling from the HD signal.
– It includes so-called Super Resolution with image enhancement in the Ultra HD band, a new technology to
reconstruct high-resolution signals that is not possible with conventional HD processing!
124
125
Super Resolution Process
NIT
− The measure of light output over a given surface area.
1 Nit = 1 Candela per Square Meter
Dynamic Range
− The range of dark to light in an image or system.
− Dynamic range is the ratio between the whitest whites and blackest blacks in an image (e.g. 10,000:0.1).
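Such a ratio is often expressed in photographic stops, where each stop is a doubling of light; a one-line sketch:

```python
import math

def dynamic_range_stops(white_nits, black_nits):
    """Dynamic range expressed in photographic stops (each stop doubles the light)."""
    return math.log2(white_nits / black_nits)

# a 10,000:0.1 range corresponds to roughly 16.6 stops
print(round(dynamic_range_stops(10000, 0.1), 1))  # 16.6
```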
High Dynamic Range
− Wider range of dark to light.
126
Light Levels
In Rec. 709, 100% white (700 mV) is referenced to 100 nits
127
Light Levels
– There are two parts to High Dynamic Range (HDR):
– Monitor (Display)
– Camera (Acquisition)
– On the monitor side, the goal is to present the full range of the material: making things brighter, with more tonal resolution.
– On the camera side, the goal is to capture many more ‘F’ stops – a wider dynamic range, with the data to carry that range.
– HDR increases the subjective sharpness of images, perceived color saturation and immersion.
– SDR or LDR is Standard (Low) Dynamic Range
128
HDR Two Parts
(Inner triangle: HDTV primaries, Outer triangle: UHDTV primaries)
[CIE 1931 xy chromaticity plots of measured flower colors against both gamuts: (a) Carnation, (b) Geranium and marigold]
129
Wide Color Gamut Makes Deeper Colors Available
BT. 601 and BT.709 Color Spaces
– The maximum (“brightest”) and minimum
(“darkest”) values of the three
components R, G, B define a volume in
that space known as the “color volume”.
– Rec-601 and Rec-709 are basically on
top of each other.
– So, we can use the same screen for SD
and HD without going through a
conversion in the monitor to change the
color space.
130
BT. 2020 Color Space
– Rec. 2020 color space covers 75.8% of
CIE 1931, while Rec. 709 covers 35.9%.
131
Color Gamut Conversion (Gamut Mapping and Inverse Mapping)
132
Wide Color Space (ITU-R Rec. BT.2020): 75.8% of CIE 1931
Color Space (ITU-R Rec. BT.709): 35.9% of CIE 1931
[Diagram: RGB 100% color bar mapped between the Rec. 709 and Rec. 2020 gamuts (points A–D, 1–3) on the CIE 1931 color space]
Transformation from a Wider Gamut Space to a Smaller One
133
BT.2020 signal displayed on a BT.709 display:
– Without any corrections (gamut mapping), the image appears less saturated.
[Munsell Chart comparison]
Three Approaches:
I. Clipping the RGB (clipping distortions)
II. Perceptual gamut mapping (more computations and possibly
changing the ‘creative intent’)
III. Leaving the RGB values as they are and let the screen think that they
relate to primaries of ITU-R BT.709.
– Without any corrections, color saturation will be increased.
Smaller Gamut Space in a Wide Gamut Display
134
BT.709 signal displayed on a BT.2020 display:
[Munsell Chart comparison between ITU-R Rec. BT.709 and ITU-R Rec. BT.2020]
– Opto-Electronic Transfer Function (OETF): Scene light to electrical signal
– Electro-Optical Transfer Function (EOTF): Electrical signal to displayed light
Gamma, EOTF, OETF
135
The CRT EOTF is commonly
known as gamma
136
Gamma, EOTF, OETF
– Opto-Electronic Transfer Function (OETF): Scene light to electrical signal
– Electro-Optical Transfer Function (EOTF): Electrical signal to displayed light
– Adjustment or Artistic Intent (Non-Linear Overall Transfer Function)
– System (total) gamma to adjust the final look of displayed images (Actual scene light to display luminance Transfer function)
– The “reference OOTF” compensates for difference in tonal perception between the environment of the camera and that of the display
specification (OOTF varies according to viewing environment and display brightness)
OOTF (Opto-Optical Transfer Function)
OOTF
Same Look
137
– On a flat screen display (LCD,
Plasma, …) without OOTF, it appears as
if the black level is elevated a little.
– To compensate for the black level
elevation and to make images look
closer to CRT, a display gamma of
2.4 has been defined under BT.1886.
– As a result, OOTF = 1.2
Display EOTF
gamma 2.4
Camera OETF
1/2.2
OOTF = 1.2
OOTF (Overall System Gamma, Artistic Rendering Intent)
Opto-Optical Transfer Function (OOTF)
Non-Linear Overall Transfer Function
138
– Perceptual Quantization (PQ) (Optional Metadata)
– Hybrid Log-Gamma (HLG)
OOTF Position
For viewing in the end-user consumer TV, a display mapping should be performed to adjust the reference OOTF on
the basis of mastering peak luminance metadata of professional display
139
OOTF is implemented within the display and is aware of its peak luminance and environment (No metadata)
Scene-Referred and Display-Referred
Scene-Referred:
– The HLG signal describes the relative light in the scene
– Every pixel in the image represents the light intensity in the captured scene
– The signal produced by the camera is independent of the display
– The signal is specified by the camera OETF characteristic
Display-Referred:
– The PQ signal describes the absolute output light from the mastering display
– The signal is specified by the display EOTF
140
Code Levels Distribution in HDR
Uniform CodeWords for Perceived Brightness
141
Barten Ramp
Human eye’s sensitivity to contrast in different levels
[Plot: minimum detectable contrast (%) versus luminance (nit); quantization steps above the curve become visible as contouring/banding]

Minimum detectable contrast (%) = (minimum detectable difference in luminance / luminance) × 100 = (∆L / L) × 100

– Where ∆L and L are large, fewer bits are required (larger quantization step size).
– Where ∆L and L are small, more bits are required (smaller quantization step size).
142
 The threshold of visibility for quantization error (Minimum detectable contrast) (banding or
contouring) becomes higher as the image gets darker.
 The threshold for perceiving quantization error (banding or contouring) is approximately
constant in the brighter parts and highlights of an image.
PQ EOTF
Code words are equally spaced in perceived brightness over this luminance range (in nits).
[Plot: code words versus brightness]
143
PQ EOTF
Code words are equally spaced in perceived brightness over this luminance range (in nits).
[Plot: code words versus brightness]
144
Minimum detectable contrast (%) = (minimum detectable difference in luminance / luminance) × 100 = (∆L / L) × 100
Code Words Utilization by Luminance Range in PQ
– PQ headroom from 5000 to 10,000
nits = 7% of code space
– 100 nits is near the midpoint of the
code range
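Both claims can be checked against the SMPTE ST 2084 inverse EOTF (absolute luminance in, normalized code value out); a minimal sketch:

```python
def pq_encode(nits):
    """SMPTE ST 2084 (PQ) inverse EOTF: absolute luminance -> normalized signal value."""
    m1, m2 = 2610 / 16384, 2523 / 4096 * 128
    c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
    y = (nits / 10000.0) ** m1
    return ((c1 + c2 * y) / (1 + c3 * y)) ** m2

# 100 nits sits near the middle of the code range...
print(round(pq_encode(100), 3))                      # 0.508
# ...and the 5000-10,000 nit headroom occupies only ~7% of code space
print(round(pq_encode(10000) - pq_encode(5000), 2))  # 0.07
```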
145
[Plot: signal value versus linear light for the SDR gamma curve, SDR with knee, and HDR HLG transfer characteristics]
Hybrid Log-Gamma (HLG) HDR-TV
E : The signal for each color component {RS, GS, BS} proportional to scene linear light and scaled by
camera exposure, normalized to the range [0:12].
E′ : The resulting non-linear HLG coded signal {R', G', B'} in the range [0:1].
a = 0.17883277, b = 0.28466892, c = 0.55991073
More Code Words for Dark Area

HLG OETF (E normalized to [0:12]):
E′ = √E / 2, for 0 ≤ E ≤ 1
E′ = a · ln(E − b) + c, for 1 < E ≤ 12
146
Less Code Words for Bright Area
ITU-R Application 2, ARIB STD-B67 (Association of Radio Industries and Businesses)
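Using the constants a, b and c given above, the HLG OETF can be sketched directly:

```python
import math

# HLG constants as given on the slide (ARIB STD-B67 / ITU-R BT.2100)
a, b, c = 0.17883277, 0.28466892, 0.55991073

def hlg_oetf(E):
    """HLG OETF for scene light E normalized to [0, 12]:
    square-root segment for darks, logarithmic segment for brights."""
    if E <= 1.0:
        return math.sqrt(E) / 2.0
    return a * math.log(E - b) + c

# the two segments join smoothly at E = 1 (signal level 0.5),
# and peak scene light E = 12 maps to signal level ~1.0
print(round(hlg_oetf(1.0), 3))   # 0.5
print(round(hlg_oetf(12.0), 3))  # 1.0
```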
HDR
BT.2020
SDR
BT.709
147
HDR & SDR Mastering
Tone Mapping and Inverse Tone Mapping
Tone Mapping (Down-conversion)
Limiting Luminance Range
Inverse Tone Mapping (Up-conversion)
Expanding Luminance Range
148
HDR
BT.2020
SDR Signal
(BT.709 or BT.2020) HDR
HDR Signal
(BT.2020)
SDR Signal
(BT.709 or BT.2020)
SDR
(BT.709 or BT.2020)
HDR Signal
(BT.2020) SDR
149
– Optimized only for the brightest scene in the contents
– This avoids hard clipping of detail in the highlights
– It is not invariant under blind multiple round-trip conversions.
Static Tone Mapping (HDR10)
Static and Dynamic Tone Mapping
200 1500
150
– Optimized for each scene in the contents
– Ex: frame-by-frame, or scene-by-scene basis (Varying the EETF based on statistics of the image).
– This approach could survive multiple round-trip conversions
Dynamic Tone Mapping
Static and Dynamic Tone Mapping
Static and Dynamic Metadata in HDR
Static Metadata
– Mastering Display Color Volume (MDCV) Metadata (SMPTE ST2086)
– The chromaticity of the red, green, and blue display primaries
– White point of the mastering display
– Black level and peak luminance level of the mastering display
– Content Light Levels Metadata (The Blu-ray Disc Association and DECE groups):
– MaxCLL (Maximum Content Light Level): Largest individual pixel light value of any video frame in the program
– MaxFALL (Maximum Frame-Average Light Level): Largest average pixel light value of any video frame in the program
(The maximum value of frame-average maxRGB for all frames in the content)
(The frame-average maxRGB : The average luminance of all pixels in each frame)
– They could be generated by the color grading software or other video analysis software.
Dynamic Metadata
– Content-dependent Metadata (SMPTE ST2094 (pending))
– Frame-by-frame or scene-by-scene Color Remapping Information (CRI)
– Variable color transformation along the content timeline.
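A minimal sketch of how MaxCLL and MaxFALL could be derived from per-pixel maxRGB light values; real grading or analysis tools operate on full frames, and the tiny frames here are purely illustrative:

```python
def content_light_levels(frames):
    """Compute (MaxCLL, MaxFALL) from per-pixel maxRGB light values in nits.
    frames: list of frames, each frame a list of per-pixel maxRGB values."""
    max_cll = max(max(frame) for frame in frames)                 # brightest pixel anywhere
    max_fall = max(sum(frame) / len(frame) for frame in frames)   # brightest frame average
    return max_cll, max_fall

frames = [[100, 200, 300], [50, 50, 1000]]
assert content_light_levels(frames) == (1000, 1100 / 3)
```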
151
Mapping
– During the transition from SDR to HDR production (More SDR Display) or due to content owner preference
– To preserve the “look” of the SDR content on HDR Display
– Display-referred mapping
To preserve the colors and relative tones of SDR on HDR Display
– Scene-referred mapping
To match the colors, lowlights and mid-tones of the SDR camera with those of the HDR camera.
152
SDR camera output
(BT.709 or BT.2020)
HDR Signal HDR
BT.2020
Preserved SDR Look
in HDR Program (Ex: 20%)
(Without Expanded
Luminance Range)
HDR Signal
HDR
BT.2020
SDR Content
(BT.709 or BT.2020)
(Without Expanded
Luminance Range)
Preserved SDR Look
in HDR Program (Ex:20%)
Backwards Compatibility
– Most encoders/decoders and TVs are SDR (encoder/decoder replacement!?)
– Dolby Vision, Technicolor, Philips and BBC/NHK are all backwards compatible.
– Backwards compatibility is less of an issue in over-the-top (OTT).
HDR Signal
SDR UHDTV
ITU-R BT.709 color space; the HDR metadata is simply ignored
(Limited compatibility)
153
(Color Signal)
(B & W Display)
HLG and PQ Backwards Compatibility with SDR Displays
HLG
BT.2020
SDR
BT.2020 color space
− It has a degree of compatibility.
− Hue changes can be perceptible in bright areas of highly
saturated color or very high code values (Specular)
− Both PQ and HLG provide limited compatibility
HLG/PQ
BT.2020
SDR
BT.709 color space
154
Ex: Benefit of 4K Lens for WCG and HDR
– Both HD and 4K lens covers BT.2020.
– Improve the transparency of Blue in 4K lens
– Better S/N ratio.
– 4K lens can cut the flare and reduce black floating even in
backlit conditions.
– Black floating is more noticeable in HDR.
– Same object and same white level, but black level of
– HD: 21.9% (HD lens reduces dynamic range!)
– Full 4K:11.6%
Same object and
same white level, but
different black level
155
HDR & HDMI
HDMI 2.0a supports ST2084 (PQ) and ST2086 (Mastering Display Color Volume Metadata)
HDMI 2.0b followed up on HDMI 2.0a and added support for HLG and HDR10
The HDMI 2.1 Specification will supersede 2.0b and will support dynamic metadata and High Frame Rate
156
HDR Standards
Dynamic Metadata for Color Volume Transform (DMCVT)
Dolby Vision, HDR10+ (License-free Dynamic Metadata), SL-HDR1, Technicolor (PQ)
Static Metadata (Mastering Display Color Volume (MDCV) Metadata+ MaxCLL+ MaxFALL)
HDR10 (PQ + static metadata)
PQ10 (+ Optional static metadata)
No Metadata
HLG10, PQ10 (without metadata)
[Scale: Standout Experience ↔ Simplicity]
157
HDR Metadata and HDR/SDR Signal ID
158
(FIFA World Cup 2018)
159
Global Picture of Sony “SR Live” for Live Productions (FIFA World Cup 2018)
– 8 Cameras Dual output UHD/HDR and HD/SDR
– 11 Cameras Dual output HD/HDR and HD/SDR
– 21 Cameras Single output HD/SDR
– All Replays HD/SDR
Shading of all cameras is done
on the HD/SDR (BT. 709)
160
Global Picture of “HLG-Live” for Live Productions
Shading of all cameras is done
on the HD/SDR (BT. 709)
161
SD and HD Vectors
709 Color Space 601 Color Space
The vector displays look the same as each other
162
BT.2020 and BT.709 Vectors
709 Color Space 2020 Color Space
The vector displays look the same as each other
163
Standard Definition 100% color bar test pattern.
Standard Definition 100% color bar RGB parade
Standard Definition 100% color bar YPbPr parade
High Definition 100% color bar YPbPr parade
Why are there small spikes in the RGB waveform parade? The
unequal rise times between the luma and color-difference
bandwidths, and the conversion of SDI Y’P’bP’r back to
R’G’B’ in the waveform display.
164
HD 100% color bars YPbPr parade, Rec. 709.
UHD 100% color bars YPbPr parade, Rec. 709.
UHD 100% Color Bars YPbPr parade, Rec. 2020.
Spike transitions are normal because no video filtering is
applied to each link. This allows the quad links to be
seamlessly stitched together; otherwise a thin black line
would be seen between the links.
165
UHD 100% Split Field Color Bars with both 709 and 2020 color spaces in YPbPr Parade display.
166
RGB Paraded waveform display of 100% Color Bar split field test signal with Rec. 709 and Rec. 2020 color spaces.
 In some cases the SMPTE 352 VPID may contain information on the colorimetry that is used.
Often, however, this may not be the case, and a known test signal such as color bars will be
necessary to assist the user in determining the correct color space.
 The user must manually select from the configuration menu between the 709 and 2020
colorspaces.
 When the correct colorspace is selected, the traces will be at the 0% and 100% (700 mV) levels.
167
White and Highlight Level Determination for HDR
Diffuse white (reference white) in video:
Diffuse white is the reflectance of an illuminated white object (white on a calibration card).
Since perfectly reflective objects don’t occur in the real world, diffuse white is about 90% reflectance (a 100% reflectance white card is also used in some workflows).
The reference level, HDR Reference White, is defined as the nominal signal level of a white card.
Highlights (specular reflections & emissive (self-luminous) objects):
The luminances that are higher than reference white are referred to as highlights. In traditional video, highlight levels were generally set to be no
higher than 1.25× the diffuse white level (in cinema, up to 2.7× diffuse white).
- Specular reflections
The luminance of specular regions can be over 1000 times higher than that of the diffuse surface.
- Emissive (self-luminous) objects
Emissive objects and their resulting luminance levels can have magnitudes much higher than the diffuse range in a scene or image (the Sun
has a luminance of ~1.6 billion nits).
A more unique aspect of emissive objects is that they can also be of very saturated color (sunsets, magma, neon, lasers, etc.).
[Test chart patches: Black, White, 18% Reflectance]
168
Nominal signal levels for PQ and HLG production
Reflectance Object or Reference
(Luminance Factor, %)
Nominal Luminance Value, nit
(PQ & HLG)
[Display Peak Luminance, 1000 nit]
Nominal
Signal Level
(%)
PQ
Nominal
Signal Level
(%)
HLG
Grey Card (18%) 26 nit 38 38
Greyscale Chart Max (83%) 162 nit 56 71
Greyscale Chart Max (90%) 179 nit 57 73
Reference Level:
HDR Reference White (100%) also
Diffuse White and Graphics White
203 nit 58 75
− Values are for PQ and HLG production on a display with 1000 nit nominal peak luminance, under controlled studio lighting (the test chart should be illuminated by forward lights and the camera should shoot it from a non-specular direction).
− The percentages represent signal values that lie between the minimum and maximum non-linear values, normalized to the range 0 to 1.
− 18% reflectance is the closest standard reflectance card to skin tones.
− Here the reference level, HDR Reference White, is defined as the nominal signal level of a 100% reflectance white card (the signal level that would result from a 100% Lambertian reflector placed at the center of interest within a scene under controlled lighting, commonly referred to as diffuse white).
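The PQ percentages in the table above can be checked directly from the SMPTE ST 2084 inverse EOTF. A minimal sketch in Python (full-range signal fraction; the constants are those defined in ST 2084):

```python
# SMPTE ST 2084 (PQ) inverse EOTF: absolute luminance (nit) -> signal level (0..1).
m1 = 2610 / 16384          # 0.1593017578125
m2 = 2523 / 4096 * 128     # 78.84375
c1 = 3424 / 4096           # 0.8359375
c2 = 2413 / 4096 * 32      # 18.8515625
c3 = 2392 / 4096 * 32      # 18.6875

def pq_signal(nits: float) -> float:
    """Return the PQ non-linear signal level (0..1) for a luminance in nit."""
    y = (nits / 10000.0) ** m1
    return ((c1 + c2 * y) / (1.0 + c3 * y)) ** m2

# Nominal levels from the table:
for label, nits in [("18% grey card", 26), ("90% greyscale max", 179),
                    ("HDR reference white", 203)]:
    print(f"{label}: {pq_signal(nits) * 100:.0f}%")
```

The same function gives about 75% for 1000 nits, the display peak assumed in the table.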
169
Waveform View in HD, UHD or 4K ?
170
Camera Black Set (Lightning)
171
Capturing Camera Log Footage (Spider Cube)
− Use a suitable grey scale camera chart or a SpyderCube.
− The cube has a light-trap hole that produces super black, a reflective black base, and faces for 18% grey and 90% reflective white. The ball bearing on top produces specular highlights.
− Set up the test chart within the scene.
− Adjust the lighting to evenly illuminate the chart.
− Adjust the camera controls to set the levels:
  – ISO/Gain, Iris, Shutter, White Balance
(Figure: Datacolor SpyderCube — specular highlights, 18% grey, 90% reflectance white, black, super black.)
172
SMPTE 2084 PQ (1K) scale with 100% reflectance white
− The 90% reflectance white of the signal should sit at about the 51% level, equivalent to 100 nits.
− The 18% grey will be at the 36% level, equivalent to 20 nits.
− The 2% black point will be at the 19% level, equivalent to 2.2 nits.
− 10,000 nits corresponds to the 100% level of the HDR signal.
− Camera operators can use the graticule lines at 2%, 18% and 90% reflectance to properly set camera exposure with a camera test chart of 2% black, 18% grey and 90% white.
(Figure: Datacolor SpyderCube on a PQ waveform, nits vs. level (%).)
173
SMPTE 2084 (10K) with 90% reflectance white with graticule scale in terms of reflectance
− The 90% reflectance white level of the signal should sit at about the 51% level, equivalent to 90 nits.
− The 18% grey level will be at the 36% level, equivalent to 18 nits.
− The 2% black point level will be at the 19% level, equivalent to 2 nits.
− 9,000 nits corresponds to the 100% level of the HDR signal.
(Figure: Datacolor SpyderCube on a PQ waveform with the graticule scaled in reflectance, nits vs. level (%).)
174
SMPTE 2084 (10K) with 90% reflectance white with graticule scale in terms of Code Values
(Axes: level in hex; code value in decimal.)
175
SMPTE 2084 (10K) with 90% reflectance white with graticule scale in terms of STOPS
(Axes: stops; level (%).)
176
10K PQ with a 1000 nits Limit, Full range
Waveform in 10K PQ full range with the video graded at 1K.
− If you use the full 10K curve and set your video grade to 1000 nits, about the top 25% of the waveform screen is unused.
− We have implemented both Narrow (SMPTE) SDI levels and Full SDI levels.
− Waveform setting in 10K PQ full range: on the waveform, 4d reads as 0 nits and 1019d as 10,000 nits in Full.
177
HDR 1k Grade SMPTE Levels
− Normal reflective whites are around 100 nits.
− Peak whites go up to 1000 nits, no higher.
− In HDR the blacks are stretched and the whites are compressed.
178
HDR Reflectance View
− The normal reflective whites are around 100 nits, which sits at 90% reflectance (the BT.709 100 IRE point).
− 18% grey will be at the 36% level, equivalent to 18 nits.
− 2% black will be at the 19% level, equivalent to 2 nits.
− 1000 nits shows up at 100% reflectance.
179
Stop View
(Relative to 20 nits)
− The normal reflective whites are around 100 nits, which is at +2.3 stops (the BT.709 100 IRE point).
− 0 stops is shown as the 18% grey point (= 20 nits).
− The 2% black point sits at about −3.1 stops.
− The 1000 nit point:

Stop value for 1000 nits = log₂(1000 nit / 20 nit) ≈ 5.6
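The stop graticule is simply a base-2 logarithm of the luminance ratio to the 18% grey point; a quick check:

```python
import math

def stops_rel_20nit(nits: float) -> float:
    """Stops above/below the 18% grey point, taken here as 20 nit."""
    return math.log2(nits / 20.0)

print(f"100 nit diffuse white: {stops_rel_20nit(100):+.1f} stops")   # +2.3
print(f"1000 nit peak:         {stops_rel_20nit(1000):+.1f} stops")  # +5.6
```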
180
HDR 2K Grade SMPTE Levels
− Normal whites are around 100 nits, or just a little higher.
− Maximum white is at 2000 nits.
− In HDR the blacks are stretched and the whites are compressed.
181
HDR 1K Grade Full Levels
− Black (0) is at 4h.
− White is around 100 nits.
− Highlights go up to 1000 nits.
− Waveform setting in 10K PQ full range: on the waveform, 4d reads as 0 nits and 1019d as 10,000 nits in Full.
182
Rec 709 Video on the HDR Graticule
− Whites go up to 100%.
− The blacks are all down at the bottom of the waveform.
− The whites are stretched to 100%.
183
HDR Zoom Mode
184
Specular Highlights Bright Ups
185
HDR Heat-map tool
− 7 simultaneous, programmable color overlay bands
− Individual upper and lower overlay threshold controls
− User presets for SDR and HDR modes
− Selectable grey/color background
− Identify shadows, mid-tones or specular highlights
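The banding logic behind such a heat-map tool can be sketched as a simple threshold lookup. The band names and nit edges below are illustrative placeholders, not the instrument's actual presets:

```python
# Illustrative heat-map banding: map a pixel's luminance (nit) to one of
# seven overlay bands, each with its own lower/upper threshold.
BANDS = [  # (name, lower nit, upper nit) -- example thresholds only
    ("shadows",    0,    1),
    ("low",        1,    5),
    ("mid-low",    5,    20),
    ("mid-tones",  20,   100),
    ("mid-high",   100,  203),
    ("highlights", 203,  1000),
    ("speculars",  1000, 10000),
]

def band_for(nits: float) -> str:
    """Return the overlay band name for a luminance value, or 'background'."""
    for name, lo, hi in BANDS:
        if lo <= nits < hi:
            return name
    return "background"

print(band_for(18))    # mid-low
print(band_for(500))   # highlights
```

Pixels falling outside every band would be shown in the selectable grey/color background.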
186
Capturing a Camera Log Image
Gamma | 0% Black, 10-bit Code Value | % | 18% Grey, 10-bit Code Value (20 nit illumination) | % | 90% Reflectance, 10-bit Code Value | %
S-Log1 | 90 | 3 | 394 | 37.7 | 636 | 65
S-Log2 | 90 | 3 | 347 | 32.3 | 582 | 59
S-Log3 | 95 | 3.5 | 420 | 40.6 | 598 | 61
Log C (ARRI) | 134 | 3.5 | 400 | 38.4 | 569 | 58
C-Log (Canon) | 128 | 7.3 | 351 | 32.8 | 614 | 63
ACES (Proxy) | ND | ND | 426 | 41.3 | 524 | 55
BT.709 | 64 | 0 | 423 | 41.0 | 940 | 100
− Today’s video cameras can capture a wide dynamic range of 14–16 stops, depending on the camera.
− To record this information, each camera manufacturer uses a log curve so that this wide dynamic range can be stored effectively with 12–16 bits of resolution in a camera RAW file.
− Each curve has defined Black, 18% Grey and 90% reflectance white levels.
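The S-Log3 row of the table can be reproduced from Sony's published S-Log3 OETF (scene reflectance in, 10-bit code value out):

```python
import math

def slog3_cv(reflectance: float) -> float:
    """Sony S-Log3 OETF: scene reflectance (0.18 = 18% grey) -> 10-bit code value."""
    if reflectance >= 0.01125:
        y = (420.0 + math.log10((reflectance + 0.01) / (0.18 + 0.01)) * 261.5) / 1023.0
    else:  # linear toe below the breakpoint
        y = (reflectance * (171.2102946929 - 95.0) / 0.01125 + 95.0) / 1023.0
    return y * 1023.0

print(round(slog3_cv(0.0)))   # 95  (black)
print(round(slog3_cv(0.18)))  # 420 (18% grey)
print(round(slog3_cv(0.90)))  # 598 (90% white)
```

Note that the table's percentage columns are these code values re-expressed on the narrow 10-bit range, (CV − 64)/876; for example (420 − 64)/876 ≈ 40.6%.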
187
S-Log2 Waveform to nits
− 540 or 1000 nits: maximum highlights, monitor dependent (display with 540 or 1000 nit peak).
− 100 nits (59%): normal white.
− 20 nits (32.3%): 18% grey.
188
Spider Cube S-Log2 as Shot from the Camera in Log
(Shown with digital values and stop values.)
189
Spider Cube S-Log2 as Shot from the Camera in Log
Showing S-Log2 on normal BT.709-type screens
190
S-Log2 to Rec. 709
191
Camera (Scene) Referenced BT.709 to PQ LUT Conversion
− SDR and HDR displays DO NOT match.
− Blacks are stretched on the BT.1886 display but not on the PQ display (which matches the scene).
− Camera-side conversion, BT.709 to PQ (ST 2084 HDR), signal levels (%):

Reflectance | 0% | 2% | 18% | 90% | 100%
BT.709, 100 nits | 0 | 9 | 41 | 95 | 100
HDR, 1000 nits | 0 | 37 | 58 | 75 | 76
HDR, 2000 nits | 0 | 31 | 51 | 68 | 68
HDR, 5000 nits | 0 | 24 | 42 | 58 | 59
192
Specification of color bar test pattern for high dynamic range TV systems
− Pattern components: 40% grey background, 75% colour bars, 100% colour bars, BT.709 colour bars, −2%/+2%/+4% black steps, a ramp (−7% to 109%) and a stair (−7%, 0%, 10%, 20%, …, 90%, 100%, 109% HLG).
Recommendation ITU-R BT.2111-0 (12/2017), "Specification of colour bar test pattern for high dynamic range television systems", BT Series, Broadcasting service (television).
193
Color Correcting your 4K Content
Image without full dynamic range: blacks are lifted (above 0) and whites do not reach 100% (700 mV).
194
Tonal range after spreading
195
Before the gamma adjustment
196
After the gamma adjustment
Neutral image
Warm, “golden hour” image. Cool, contrasty image
197
Color Correcting your 4K Content
Misbalanced Chip Chart
198
Balanced Chip Chart
Misbalanced chip chart. A chip chart is made up only of black, white and grey chips, so the entire trace should sit very close to the center.
199
Balanced Chip Chart
A fairly balanced image on an RGB Parade waveform monitor, but the image contains a lot of green grass
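The idea behind reading balance from an RGB Parade — a neutral patch is balanced when its R, G and B levels coincide — can be sketched with a hypothetical helper (not part of any waveform monitor):

```python
# A grey-chart patch is balanced when its R, G and B levels match; a simple
# check flags any patch whose channel spread exceeds a tolerance.
def is_balanced(patch_rgb, tol=0.02):
    """patch_rgb: (r, g, b) mean levels, normalized 0..1, for one chart patch."""
    return max(patch_rgb) - min(patch_rgb) <= tol

print(is_balanced((0.45, 0.45, 0.46)))  # True  -- neutral grey
print(is_balanced((0.50, 0.45, 0.40)))  # False -- colour cast
```

Image content such as the green grass here would legitimately fail such a check, which is why balance is judged on the chart patches, not on the whole frame.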
200
Questions??
Discussion!!
Suggestions!!
Criticism!!
201
Broadcast Camera Technology, Part 3

  • 3. – When the CRT’s vertical frequency is lower than the camera’s operating frequency, the camera CCD will output (readout) the image before all lines of the CRT are scanned. – The lines that were not scanned within the CCD charge accumulation period will appear black. 3 Clear Scan 𝒕 𝑪𝑹𝑻: Frame time on display =1/50 sec 𝒕 𝑪𝑪𝑫 : CCD charge and discharge time (scan time) =1/70 sec
  • 4. Clear Scan – Most CRT computer displays have a higher vertical frequency than video cameras operating at 50Hz (for PAL areas). – When capturing the CRT image using such cameras, the camera CCD will capture part of the computer scan twice. – This results in more light being captured for that part of the scan and a white bar being output. 4 𝒕 𝑪𝑹𝑻: Frame time on display =1/80 sec 𝒕 𝑪𝑪𝑫 : CCD charge and discharge time (scan time) =1/50 sec
  • 5. – By activating Clear Scan, the electronic shutter speed (= CCD charge accumulation period) can be controlled in small increments so it can be matched to the computer display’s vertical frequency. • In this way, the banding is effectively eliminated. • This banding effect, both white and black, is not seen when shooting a plasma or LCD display. – Clear Scan is also effective for eliminating the flicker effect when shooting under fluorescent lights, whose flicker frequency differs from the standard CCD accumulation period. 5 Clear Scan
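The matching logic amounts to a small calculation: Clear Scan effectively sets the accumulation period to a whole number of display refresh periods, provided that period still fits inside one camera frame. The function below is an illustrative sketch with hypothetical names, not a camera API; real cameras expose a list of discrete Clear Scan frequencies instead.

```python
def clear_scan_period(display_hz: float, camera_hz: float) -> float:
    """Pick an accumulation period equal to a whole number of display
    refresh periods that still fits within one camera frame period."""
    frame_period = 1.0 / camera_hz
    refresh_period = 1.0 / display_hz
    n = int(frame_period // refresh_period)  # whole refreshes per frame
    if n < 1:
        # Display refresh is longer than the camera frame: Clear Scan can
        # only shorten the accumulation period, so no match exists here.
        raise ValueError("display refresh period exceeds the camera frame")
    return n * refresh_period
```

For the 80 Hz display and 50 Hz camera example above, this yields a 1/80 s shutter, so exactly one complete display scan is accumulated per frame.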
  • 6. Slow Shutter – The Slow Shutter feature extends the CCD accumulation period to longer than the frame (or field) rate, instead of shortening it, which is the case with conventional electronic shutters. – For example, by setting the Slow Shutter speed to 16 frames (32 fields), a unique blur effect can be produced. – This is because the CCD captures movement across the 16-frame period as one image. 6 Normal shutter speed Blur effect using slow shutter (16-frame accumulation)
  • 7. Slow Shutter – In addition to producing such effects, the Slow Shutter mechanism can help shooting dark scenes with higher sensitivity. – The longer accumulation period allows more electrical charges to be accumulated under low light conditions. 7 Normal shutter speed High sensitivity using slow shutter (16-frame accumulation)
  • 8. – A 1/10 second shutter speed translates into an accumulation period six times longer than the signal’s field rate (1/10 sec = 1/60 sec × 6). – This means that only 10 distinct images per second are available to generate a 60-field-per-second video signal. – This ‘field count’ discrepancy is compensated for using a memory buffer. Each image is read out from the buffer six times, at the video signal’s 1/60-second field rate. – After the 6th output, the memory is refreshed with the next image. – This mechanism allows moving pictures to be reproduced with unique effects and high sensitivity. – However, motion may sometimes appear jerky (not smooth) when the accumulation period is excessively long relative to the motion in the scene. Slow Shutter Example 8
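The buffer readout described above can be sketched in a few lines (illustrative only; the names are hypothetical):

```python
def slow_shutter_readout(accumulated_images, repeats_per_image=6):
    """Each image captured over the long accumulation period is read out
    of the memory buffer several times at the normal field rate; the
    buffer is then refreshed with the next accumulated image."""
    fields = []
    for image in accumulated_images:
        fields.extend([image] * repeats_per_image)  # repeated readouts
    return fields
```

Ten 1/10 s images per second thus yield the 60 fields per second the output format requires.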
  • 9. – In general, the frequency response of a camera is defined by its CCD’s pixel count. – Due to this fact, the frequency response of a 720P signal varies depending on the pixel count of the CCD used to produce it. – The vertical frequency response of a native 720P camera (using a CCD with 720 vertical pixels) draws a gradual downward curve toward the 720 TV line vertical resolution. – In contrast, a super-sampled 720P signal, originating from a 1080P camera (using 1080 vertical pixels), maintains a higher response level up to the 720 TV line range. – This higher response allows the reproduction of much sharper picture edges. – This is because Super Sampling uses the full pixel count of the 1080P CCD. 9 Super Sampling
  • 10. – The 720P output is achieved using a digital filtering method called Super Sampling technology. – Super Sampling digitally cuts off the vertical frequency response right before the 720 TV line resolution. – This is achieved without any degradation of the higher 1080P response level. – As a result, excellent response characteristics (almost flat) are obtained for the 720P output. 10 Super Sampling
  • 11. Optical Low Pass Filter Effective in studio 11
  • 12. Optical Low Pass Filter – Due to the physical size and alignment of the photo sensors on a CCD imager, when an object with fine detail (such as a fine striped pattern) is shot, a rainbow-colored pattern known as Moiré may appear across the image. – This tends to happen when the image’s spatial frequency exceeds the CCD’s spatial-offset frequency or, put more simply, when the image details are smaller than the spacing between each photo sensor. – To prevent such Moiré patterns from appearing, optical low pass filters are used in CCD cameras. – An optical low pass filter is placed in front of the CCD prism block to blur image details that may result in Moiré. – Since this type of filtering can reduce picture resolution, the characteristics of an optical low pass filter are determined with special care to effectively reduce Moiré, but without degrading the camera’s maximum resolving power. 12
  • 13. Optical Low Pass Filter 13
  • 14. Cross Color Suppression − Cross color is an artifact seen across very fine striped patterns when displaying a composite signal feed on a picture monitor. − It is observed as a rainbow-like cast, moving across the stripe pattern. 14 [Images: cross color; Cross Color Suppression ON]
  • 15. Cross Color Suppression – Even with the latest filtering technology, composite signals cannot be perfectly separated into their original luminance and chrominance components. This results in the cross color effect. – Cross Color Suppression technology presents a solution to the limitations of Y/C separation on TV receivers. – The idea behind Cross Color Suppression is to eliminate signal components that can result in cross color before the camera outputs the composite signal. – These signal components are eliminated from the Y/R-Y/B-Y signals within the camera head using sophisticated digital three-line comb filtering. – Adding this process allows the output composite signal to easily be separated into its chrominance and luminance components at the TV receiver. – This results in greatly reduced cross color and dot crawl (dot noise appearing at the boundaries of different colors) as compared to a composite output that does not use this process. 15
  • 16. Detail Correction, Detail Signal and Detail Level 16 [Figure: CCD image chip and individual pixels; charge levels on pixels and the CCD output signal after integration of charges, plotted as volts vs. time/H location]
  • 17. Detail Correction, Detail Signal and Detail Level 17 [Figure: ideal signal vs. CCD output signal (volts vs. time/H location). Distinct edge: instantaneous transition from black to white; blurred edge: gradual transition from black to white]
  • 18. Detail Correction, Detail Signal and Detail Level 18 [Figure: CCD output signal + correction signal = corrected signal, approximating the ideal signal]
  • 19. – The video signal output from a CCD unit lacks detail information because the unit’s ability to resolve detail is limited by the size of the light sensitive pixels. – An electronic circuit in the camera compensates for the missing detail information in the video signal. – This makes picture edges appear sharper than the native resolution provided by the camera (also called “image enhancement”). – This is achieved by overshooting the signal at the picture edges using a spike-shaped signal called the detail signal. – The amount of detail correction can usually be adjusted. This is called detail level. • Increasing the detail level sharpens the picture. • Decreasing it softens the picture. 19 Detail Correction, Detail Signal and Detail Level
  • 20. Horizontal Detail Correction 20 [Figure: (a) original signal; (b) = (a) delayed by 50 nsec; (c) = (a) delayed by 100 nsec; (d) = (a)+(c); (e) = 2(b)−(d); detail signal = 2(e)]
  • 21. Vertical Detail Signal – The mechanism for creating the vertical detail signal is basically the same as horizontal detail correction. – The only difference is that the delay periods for creating signals (b) and (c) are one horizontal scanning line and two horizontal scanning lines, respectively. Note: − Excessive detail correction can lead to an artificial appearance of the picture, as though objects have been cut out from the background. − Therefore, detail correction must be applied with care. 21 [Figure: (a) original signal; (b) = (a) delayed by one horizontal scanning line; (c) = (a) delayed by two horizontal scanning lines; (d) = (a)+(c); (e) = 2(b)−(d); detail signal = 2(e)]
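The delay-and-subtract scheme on the two slides above can be sketched in one dimension. A one-sample delay stands in for the 50 nsec (horizontal) or one-line (vertical) delay; the function name and the unity detail gain are assumptions, not camera internals:

```python
def detail_correct(a, gain=1.0):
    """1-D detail correction sketch: with b and c as one- and two-sample
    delayed copies of the input a, the detail signal is 2b - (a + c);
    it is added to the aligned delayed original to overshoot edges."""
    b = [0.0] + a[:-1]        # a delayed by one sample
    c = [0.0, 0.0] + a[:-2]   # a delayed by two samples
    detail = [2 * bb - (aa + cc) for aa, bb, cc in zip(a, b, c)]
    corrected = [bb + gain * dd for bb, dd in zip(b, detail)]
    return corrected, detail
```

Fed a step edge, the output undershoots just before the transition and overshoots just after it, which is exactly the spike-shaped correction the slides describe.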
  • 22. H/V Ratio Detail correction is applied to both horizontal and vertical picture edges using separate horizontal detail and vertical detail circuits. H/V Ratio: The ratio between the amount of detail applied to the horizontal and vertical picture edges. – It is important to maintain the balance of the horizontal and vertical detail signals to achieve natural picture enhancement. H/V Ratio should thus be checked every time detail signals are adjusted. 22
  • 23. – When shooting a shiny and velvety object, the texture of the object’s surface may sometimes be overemphasized or blurred on the screen. – For example, if the object is a wrapped bar of soap, the image may appear as shown in the below “Before setting,” where the wrinkles are emphasized by dark shadows. – This is because video cameras add a signal (DETAIL signal) that emphasizes dark to- bright transitions, in this particular case, around the wrinkles on the left side of the transparent wrapping. – In such situations, the sheen and texture of the object can be reproduced more naturally by adjusting the level of the detail signal to be added. Tips for Reproducing Sheen and Textures of Objects Realistically 23
  • 24. Tips for Reproducing Sheen and Textures of Objects Realistically 24
  • 25. Tips for Reproducing Sheen and Textures of Objects Realistically 25
  • 26. Highlight DTL − Provides better expression in highlight scenes (high amplitude) Conventional model 26
  • 27. Fine DTL − Expands small edges in low contrast objects − Compresses the edge component in high contrast objects − The glare (a bright, unpleasant light) of a picture with too much edge enhancement is reduced and a natural image can be obtained. 27 Compressing the edge component in the high contrast object Expanding the small edge component in the low contrast object
  • 28. Mix Ratio – The term Mix Ratio describes the ratio between the amount of detail applied at the pre-gamma detail correction and the post-gamma detail correction. – There is no standard setting for Mix Ratio. – It is a parameter adjusted completely according to the operator’s preference. 28 [Figure: pre-gamma and post-gamma detail correction. Camera gamma (γc = 0.45), display gamma (γm = 2.22), overall linear (γmγc = 1). Gamma boosts the contrast in dark picture areas while compressing contrast in highlights, so detail signals added to dark areas are amplified while those added to highlights are compressed; post-gamma detail correction is not subject to the gamma correction]
  • 29. – Gamma correction used in video cameras • boosts the contrast in dark picture areas • compresses the contrast in highlights – Detail signals generated in the pre-gamma detail correction are also subject to this gamma correction. For this reason – The detail signals added to the dark picture areas are amplified – The detail signals added to the highlight areas are compressed. – For this reason, pre-gamma detail correction is effective for enhancing contrast in dark areas of the image, but not in highlights. – This issue is solved using the post-gamma detail correction, which effectively enhances brighter parts of the image (it is not subject to the gamma correction). Mix Ratio 29 [Figure: pre-gamma and post-gamma detail correction]
  • 30. 30 Knee Aperture Knee Correction Knee Aperture The KNEE APERTURE function enhances signals in the highlight areas. Signals in the highlight areas that were compressed by the KNEE function. Compressed signals enhanced by the KNEE APERTURE function.
  • 31. − Knee Correction is an effective function for preventing image highlights from being overexposed by compressing them to fall within the standard video signal range (100 to 109%). − This function can sometimes degrade picture contrast and sharpness in the highlight areas; this is because 1. Compressing highlight signal levels also results in compressing highlight contrast (contrast loss) 2. Detail signals applied to such areas are also compressed by the Knee Correction (sharpness loss) – To compensate for this contrast and sharpness loss, cameras have a Knee Aperture circuit placed right after the Knee Correction process. – This function emphasizes the picture edges of highlight areas which were compressed by the Knee Correction process (highlights above the knee point). – Knee Aperture can be adjusted in the same way as Detail Correction, but only for signal levels above the knee point. Knee Correction Knee Aperture 31 Knee Aperture
  • 32. – When shooting a bouquet using a spotlight, the bright areas of the image can be overexposed. – This phenomenon can be eliminated using the KNEE function, keeping the brightness level (luminance level) of the image within the video signal’s dynamic range. – However, in certain cases, the KNEE process can also cause the picture edges of objects to appear blurred. This is because the contrast of highlight areas is reduced as a result of compressing the luminance signal. – The “Before setting” image demonstrates how the KNEE function eliminates highlight “wash-outs,” but also shows that the picture edges of bright objects such as the flower petals and plastic cubes get blurred. – In such situations, the picture edges of the highlight areas can be reproduced with more contrast by applying image enhancement only to the signals compressed by the KNEE function. Tips for Reproducing Solid Picture Edges of an Image’s Highlights 32
  • 33. Tips for Reproducing Solid Picture Edges of an Image’s Highlights 33
  • 34. Tips for Reproducing Solid Picture Edges of an Image’s Highlights 34
  • 35. Crispening 35 Detail signals with small amplitudes are regarded as noise and removed, to avoid detail signals being generated around noise. Crispening is a function used to avoid detail signals being generated around noise.
  • 36. Crispening Crispening is a function used to avoid detail signals being generated around noise. – By activating this function, detail signals with small amplitudes, which are most likely generated around noise, are removed from the signal. – In the Crispening process, only detail signals that exceed a designated threshold are used for image enhancement. – Conversely, detail signals with small amplitudes are regarded as noise and removed. 36
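A common way to realize this threshold is a "coring" transfer curve applied to the detail signal. The sketch below shows one plausible shape, assuming the threshold is subtracted from the components that pass so the curve stays continuous; actual cameras use their own, adjustable curves.

```python
def crispen(detail, threshold):
    """Detail components at or below the threshold are treated as noise
    and removed; larger components pass, reduced by the threshold so
    the transfer curve has no discontinuity at the cut-off point."""
    out = []
    for d in detail:
        if abs(d) <= threshold:
            out.append(0.0)            # regarded as noise: removed
        elif d > 0:
            out.append(d - threshold)  # passes, re-based at zero
        else:
            out.append(d + threshold)
    return out
```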
  • 37. – Increasing a camera’s DETAIL LEVEL can effectively sharpen the picture edges of an image. – However, as shown in the “Before setting” image, this operation can also coarsen the entire image, even though the picture edges of the perfume bottles and plastic cubes are correctly enhanced. – This effect occurs because the DETAIL process is applied to all areas of the image, including unnecessary noise. – In such situations, adjusting the Crispening level can reduce this effect while picture edges are kept sharp. Tips for Improving Picture Sharpness without Coarsening the Image 37
  • 38. Tips for Improving Picture Sharpness without Coarsening the Image 38
  • 39. Tips for Improving Picture Sharpness without Coarsening the Image 39
  • 40. Level Dependent − Level Dependent allows operators to suppress the detail signals generated in the low luminance areas alone. 40 Level Dependent allows low luminance detail to be suppressed
  • 42. Level Dependent – Noise is most noticeable in dark picture areas and applying heavy detail correction can significantly emphasize it. – This can make it challenging to capture an image with both extremely fine picture detail and dark image areas. – With Crispening, the detail signals generated around noise can be suppressed, but this will also reduce the detail signals of other fine picture areas. – ‘Crispening’ removes detail signals generated by noise at all signal levels, but Level Dependent allows operators to suppress the detail signals generated in the low luminance areas alone (therefore, Level Dependent can solve this issue). – With Level Dependent, the picture edges of the main content (with fine detail) can be kept sharp, while detail signals generated around noise in the low luminance areas can be suppressed. 42
  • 43. – To reproduce sharp images, the DETAIL function is used. – However, DETAIL can also cause black or dark areas of an image to coarsen. For example, as shown in the “Before Setting” image, the cockles of the leather and the texture of the metal are reproduced sharply, but the dark areas of the background and the bottom of the image look coarsened. – This phenomenon occurs because noise in dark areas of the image is also emphasized by the DETAIL function. – In such situations, adjusting the camera so the DETAIL process is not applied to low luminance signal levels reduces the coarseness in the black or dark areas of the image. Tips for Shooting Without Coarsening Dark Areas of an Image 43
  • 44. Tips for Shooting Without Coarsening Dark Areas of an Image 44
  • 45. Tips for Shooting Without Coarsening Dark Areas of an Image 45
  • 46. Limiter (Detail Limiter) – When there is a large luminance level variance at dark-to-light or light-to-dark picture edges, the detail circuit can generate over-emphasized picture edges, making objects appear to ‘float’ on top of the background. – This is because detail signals are generated in proportion to the luminance level change at the picture edge. – A limiter is a circuit used to suppress this unwanted effect. 46 Limiters prevent excessive detail correction
  • 47. Skin Tone Detail Correction 47
  • 48. 48 Skin Tone Detail Correction
  • 49. Eliminates the DTL edge only for high frequency area of skin tone to have more effect. 49 Skin Tone Detail Correction
  • 50. Skin Tone Detail Correction is a function that allows the detail level of a user-specified color to be adjusted (enhanced or reduced) independently, without affecting the detail signals of other picture areas. – Skin Tone Detail Correction was originally developed to reduce unwanted image enhancement (detail signals) on facial imperfections such as wrinkles, smoothing the reproduction of human skin. – By selecting a specific skin color, the detail signals for that skin color can be individually controlled and suppressed. – High-end professional video cameras offer a Triple Skin Tone Detail Function, which allows independent detail control over three user-specified colors. – This enhances the flexibility of Detail Correction • one color selection can be used for reducing the detail level of skin color • two other selections can be used for either increasing or decreasing the detail level of two other objects. Skin Tone Detail Correction 50
  • 51. Electronic Soft Focus – Skin Tone Detail can be effective for reducing the picture sharpness of objects with specific colors. – However, for some applications it does have its limitations. – This is because Skin Tone Detail Correction does not really blur the image, it simply decreases the detail signal level to make the selected color look less sharp. – To apply further softness across images with specific colors, Electronic Soft Focus is used. 51
  • 52. – This function uses the detail signal to reduce, rather than increase, the sharpness of a picture. – By subtracting the detail signal from the original signal as opposed to adding it in image enhancement, Electronic Soft Focus provides a picture that is ‘softer’ than achieved with the detail correction switched off completely. 52 Electronic Soft Focus
  • 53. Zebra 53 A 75 IRE Zebra pattern is displayed
  • 54. − Zebra is a feature used to assist manual iris adjustments by displaying a striped pattern (called a ‘zebra pattern’) in the viewfinder across image highlights above a designated brightness level. Two types of zebra modes are available: I. One to indicate highlights above 100 IRE II. One to indicate signal levels between the 70 and 90 IRE range • The 100 IRE Zebra displays a zebra pattern only across picture areas which exceed 100 IRE, the video level of pure white.  Using this zebra mode, camera operators adjust the lens iris ring until this zebra pattern appears in the brightest areas of the picture. • The second zebra mode displays a zebra pattern across highlights between 70-90 IRE, and disappears above the 90 IRE level.  This is useful to determine the correct exposure for facial skin tones since properly exposed skin (in the case of Caucasian skin) usually falls around the 80 IRE level. Zebra 54
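The two zebra modes amount to simple level tests on the luminance signal. A sketch in IRE units (the function name and mode strings are hypothetical):

```python
def zebra_mask(luma_ire, mode="100"):
    """Return, per sample, whether the zebra stripes would be overlaid.
    mode "100":   levels above 100 IRE (pure white and beyond)
    mode "70-90": levels between 70 and 90 IRE (skin-tone exposure aid)"""
    if mode == "100":
        return [y > 100 for y in luma_ire]
    if mode == "70-90":
        return [70 <= y <= 90 for y in luma_ire]
    raise ValueError(f"unknown zebra mode: {mode}")
```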
  • 55. Low Key Saturation – Low-light areas can be subject to a reduction in color saturation. – The Low Key Saturation function adjusts the color saturation at low-light levels by boosting the chrominance signals to an optimized level, thus providing more natural color reproduction. 55
  • 56. Low Key Saturation With the Low Key Saturation function activated, the color of the bottle is reproduced with more richness and depth. 56
  • 57. – When shooting in an environment with insufficient lighting on the subject, the colors in dark areas of the image may not be fully reproduced. – For example, in the “Before Setting” image, the colors of the candies are not fully reproduced, causing the image to look less attractive. – This is due to a drop in the color-difference signal levels that determines the color saturation of the reproduced image. – In such situations, it is required to manually adjust the color-difference signal levels of low-light areas. Enhancing Colors in Low-Light Areas (Low Key Saturation function) 57
  • 58. Enhancing Colors in Low-Light Areas (Low Key Saturation function) 58
  • 59. Enhancing Colors in Low-Light Areas (Low Key Saturation function) 59
  • 60. Linear Matrix Circuit – All hues in the visible spectrum can be matched by mixing the three primary colors • Red (R) • Green (G) • Blue (B) The ideal spectrum characteristics of three primary colors 60
  • 61. − Some areas contain negative spectral response. − Since negative light cannot be produced, this means that some colors cannot be matched using any R, G, and B combination. – In video cameras, this would result in particular colors not being faithfully reproduced. – The Linear Matrix Circuit compensates for these negative light values by electronically generating and adding them to the corresponding R, G, and B video signals. – This circuit is placed before the gamma correction so that compensation does not vary due to the amount of gamma correction applied. – In today’s cameras, the Linear Matrix Circuit is used to create a specific color look, such as defined by the broadcaster. Linear Matrix Circuit 61
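Numerically, the correction is a 3×3 matrix applied to each linear-light RGB sample before gamma. The coefficients below are invented for illustration; each row sums to 1.0 so that white (R = G = B) passes through unchanged and only mixed colors are re-balanced:

```python
def linear_matrix(rgb, m):
    """Apply a 3x3 linear matrix to one linear-light RGB sample."""
    r, g, b = rgb
    return tuple(m[i][0] * r + m[i][1] * g + m[i][2] * b for i in range(3))

# Hypothetical coefficient set; each row sums to 1.0 (white preserved).
M = [[1.10, -0.06, -0.04],
     [-0.05, 1.12, -0.07],
     [-0.02, -0.08, 1.10]]
```

The negative off-diagonal terms are what model the "negative light" contributions that the optics alone cannot produce.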
  • 62. – The colors are selected by their Hue (Phase), Saturation, and Width (hue range). – In conventional color correction or matrix control, control parameters interact with each other. – The Multi Matrix function allows color adjustments to be applied over a single color range, while keeping other colors intact. – The Multi Matrix function divides the color spectrum into 16 areas of adjustment, where the operator can select the hue and/or saturation of the area to be color modified. Multi Matrix 62
  • 63. Multi Matrix 63 Built-in 16-axis color matrix and focus-assist function
  • 64. Multi Matrix – The example shows the orange pencil being changed to pink, while the other color pencils remain unchanged. – In addition to such special color effects, this function is also useful for matching the color reproduction of multiple cameras. 64
  • 66. TLCS (Total Level Control System) 66
  • 67. By activating TLCS, the correct exposure is automatically set for normal, dark, and very bright shooting environments. − With conventional cameras, the exposure control range is often limited between the smallest and widest opening of the lens iris. − TLCS widens this range by combining three exposure control features into one: • The Lens Automatic Iris • The CCD Electronic Shutter • The Automatic Gain Control – When proper exposure can be achieved within the lens iris range, TLCS will only control the lens iris. – For scenes that are too dark even with the widest iris opening, TLCS will activate the Automatic Gain Control function and boost the signal to an appropriate level. – For scenes that are too bright even with the smallest iris opening, TLCS will activate the electronic shutter so the video signal level falls within the 1.0 V signal range. 67 TLCS (Total Level Control System)
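The three-stage hand-off can be sketched as a cascade. Everything here is a simplification with hypothetical numbers: exposure error is expressed in stops (EV), gain is assumed to add 6 dB per stop of shortfall, and the shutter simply halves the accumulation per stop of excess:

```python
def tlcs_control(scene_ev, iris_min_ev=-2.0, iris_max_ev=2.0):
    """TLCS sketch: the iris handles exposure within its range; gain is
    added for darker scenes, and the electronic shutter engages for
    brighter ones. EV values are relative to correct exposure."""
    if scene_ev < iris_min_ev:                 # too dark: gain up
        shortfall = iris_min_ev - scene_ev
        return {"iris_ev": iris_min_ev, "gain_db": 6.0 * shortfall,
                "shutter_fraction": 1.0}
    if scene_ev > iris_max_ev:                 # too bright: shutter
        excess = scene_ev - iris_max_ev
        return {"iris_ev": iris_max_ev, "gain_db": 0.0,
                "shutter_fraction": 0.5 ** excess}
    return {"iris_ev": scene_ev, "gain_db": 0.0, "shutter_fraction": 1.0}
```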
  • 68. EZ Mode − EZ Mode is a feature that instantly sets the camera’s main parameters to their standard positions and activates automatic functions such as • ATW (Auto Tracing White Balance) • TLCS (Total Level Control System) • DCC (Dynamic Contrast Control ) 68
  • 69. Knee Saturation − When a camera’s KNEE function is set to ON, the bright areas of the image are compressed in the KNEE circuit. In this process, both the luminance (Y) signals and color-difference (R-Y, B-Y) signals are compressed together, which can sometimes cause the color saturation of the image to drop. − KNEE SATURATION is a function that eliminates this saturation drop while maintaining the original function of the KNEE circuit. − When KNEE SATURATION is set to ON, the color-difference (R-Y, B-Y) signals are sent to the KNEE SATURATION circuit which applies a knee function optimized for these color signals. − These signals are then added back to the mainstream signals, obtaining the final output signal. 69
  • 70. – When shooting a bouquet using a spotlight, the bright areas of the image can be overexposed and “washed out” on the screen. – This phenomenon can be eliminated using the camera’s KNEE function, keeping the image’s brightness (luminance) of objects within the video signal’s dynamic range (the range of the luminance that can be processed). – However, in some cases the KNEE process can cause the image color to look pale. – This is because the KNEE function is also applied to the chroma signals, resulting in a drop in color saturation. – For example, as shown in the “Before setting” image below, the colored areas look paler than they actually are. – In such situations, the bouquet can be reproduced more naturally by using a separate KNEE process for the chroma signals (R-Y/B-Y). Tips for Reproducing Vivid Colors Under a Bright Environment 70
  • 71. Tips for Reproducing Vivid Colors Under a Bright Environment 71
  • 72. Tips for Reproducing Vivid Colors Under a Bright Environment 72
  • 73. TruEye™ Processing – TruEye processing is an innovative function that has been developed to overcome some of the drawbacks of conventional Knee Correction. – This technology makes it possible to reproduce color much more naturally even when shooting scenes with severe highlights. – The effect of TruEye processing is observed in the color reproduction of highlight areas. 73
  • 74. – Knee Correction is applied individually to each R, G, and B channel. The issue with this method is that only those channels that exceed the knee point will be compressed. – As shown in figure (a), suppose that only the red channel exceeds the knee point at a given time, T1. Since only the red channel is compressed at the knee point using a preset knee slope, the color balance between the Red, Green and Blue channel changes. – This is observed as the hue being rotated and saturation being reduced where the Knee Correction was applied. 74 TruEye™ Processing
  • 75. – The TruEye process overcomes this problem by applying the same Knee Correction (compression) to all channels, regardless of whether or not they all exceed the knee point. – This is shown in figure (b) where only the red channel exceeds the knee point. The green and blue channels are also compressed using the same red knee slope, maintaining the correct color balance between the three channels, while effectively achieving highlight compression for the red channel. 75 TruEye™ Processing
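The difference between the two approaches can be made concrete with a small sketch (hypothetical knee point and slope; the real TruEye processing is more elaborate):

```python
def knee_per_channel(rgb, point=1.0, slope=0.5):
    """Conventional knee: each channel exceeding the knee point is
    compressed independently, which changes the R:G:B balance (hue)."""
    return tuple(v if v <= point else point + (v - point) * slope
                 for v in rgb)

def knee_trueye_style(rgb, point=1.0, slope=0.5):
    """TruEye-style knee (sketch): the compression needed by the largest
    channel is applied equally to all three, preserving the RGB ratios."""
    peak = max(rgb)
    if peak <= point:
        return tuple(rgb)
    ratio = (point + (peak - point) * slope) / peak
    return tuple(v * ratio for v in rgb)
```

For an over-range red of (1.2, 0.6, 0.3), the per-channel knee changes the G:R ratio from 0.50 to about 0.55, while the TruEye-style knee keeps it at exactly 0.50.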
  • 76. − With TruEye turned off, the highlight areas are tinged with yellow, and when turned on, the correct color balance is reproduced. 76 TruEye™ Processing
  • 77. Decibels (dB) Imagine referring to an amount of bread or rice. In both cases, it would be confusing to describe them in ounces or by the number of rice grains. – Instead, we use a more convenient expression such as a ‘slice’ or a ‘loaf’ of bread, or a ‘bowl’ of rice. – The point to note here is that, instead of referring to the actual amount, it is often more convenient to describe it using a common point of reference. – This also holds true when describing values related to the video signal. – In video electronics, signal values must be handled over a wide range – from the very smallest to those that can be several million times larger. For this reason, decibels are described using a ‘logarithm’ equation, which offers an easy way of expressing all values together. 77
  • 78. Decibels (dB) The decibels are defined by the following equation: dB = 20 × log₁₀(V′/V) (V′: value to express in decibels; V: well-known reference value = 1.0 (V)) This can be rearranged as: V′ = V × 10^(dB/20) Since the relative value is being discussed, by substituting V = 1.0 (volt) the given equation is: V′ = 10^(dB/20) 78
  • 79. The decibel values that most need to be remembered are shown in the following table. Referring to this table:  A 20 dB signal gain up means the signal level has been boosted by 10 times. A 6 dB signal drop (= minus 6 dB) means the signal level has fallen to one half. 79
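The two conversions are one-liners; running them confirms the rules of thumb above:

```python
import math

def db_from_ratio(ratio):
    """dB = 20 * log10(V'/V), for voltage (amplitude) ratios."""
    return 20.0 * math.log10(ratio)

def ratio_from_db(db):
    """V'/V = 10 ** (dB / 20)."""
    return 10.0 ** (db / 20.0)
```

db_from_ratio(10.0) gives 20 dB, and ratio_from_db(-6.0) gives about 0.501, i.e. roughly one half.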
  • 80. S/N (Signal-to-Noise) Ratio Noise refers to the distortion of the original signal due to external electrical or mechanical factors. (S/N) ratio = 20 × log (Vp-p/Nrms) (dB) A 60 dB S/N means that the noise level is one-thousandth of the signal level. Test signals – A horizontal shallow ramp signal of about 20 to 25 IRE units amplitude with a pedestal level of 40 IRE units. – If this shallow ramp test signal is not available, a 50 IRE units flat field signal with the dither signal could be used. 80 [Figure 3: Test signals for S/N measurements. a) Shallow-ramp signal (20-25 IRE units on a 40 IRE pedestal); b) flat-field with 30 mV dither signal]
  • 81. S/N (Signal-to-Noise) Ratio – For video, the signal amplitude (Vp-p) is calculated as 1 volt, which is the voltage (potential difference) from the bottom of the H-sync signal (sync tip) to the white video level. (S/N )ratio = 20 × log (1 Volt /Nrms) (dB) • Noise level changes over time, and amplitude cannot be used to express its amount. • Root mean square (rms) is a statistical measure for expressing a group of varying values, and allows the magnitude of noise to be expressed with a single value. • Root mean square can be considered a kind of average of temporal values. 81
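Putting the pieces together, the measurement reduces to an rms calculation followed by the dB formula (a sketch using synthetic noise samples):

```python
import math

def snr_db(signal_vpp, noise_samples):
    """S/N = 20 * log10(Vp-p / Nrms), where Nrms is the root mean
    square of the noise samples."""
    n_rms = math.sqrt(sum(n * n for n in noise_samples)
                      / len(noise_samples))
    return 20.0 * math.log10(signal_vpp / n_rms)
```

With a 1 V signal and noise whose rms works out to 1 mV, the result is 60 dB: the noise is one-thousandth of the signal.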
  • 82. Gain Up – When shooting in low-light conditions, the signal level (amplitude) may be insufficient due to a lack of light directed to the imager and thus fewer charges generated by the photoelectric conversion. – To overcome this, most video cameras have a Gain Up function, which is used to electronically amplify the video signal to a sufficient level for practical use. – The Gain Up function usually offers several Gain Up values, which are selectable by the operator for different lighting conditions. – When using the Gain Up function, it is important to note that a large Gain Up value will result in degrading the S/N ratio, since the noise is amplified along with the signal. – Some cameras also have a minus Gain Up setting to improve the camera’s S/N ratio. 82
  • 83. Turbo Gain – It helps when shooting in the dark. – Turbo Gain is an extension of the conventional Gain Up function but offers a much larger level boost (+42 dB) of the video signal, to achieve a lower minimum illumination. 83
  • 84. File System − It allows a variety of detailed and complex adjustments in order to reproduce the desired colorimetry for each shooting scenario − It allows the operator to compensate for technical limitations in certain camera components. 84
  • 85. − For broadcasters and large video production facilities, it is imperative that all cameras are set up to have a consistent color tone or ‘look’, specified by that facility. I. Reference File stores the standard image factory setting data, and this file contains the reference values of the auto setup adjustment. This file can be stored in the camera and memory stick. II. Reference File is used to store user-defined reference settings (current paint data), so they can be quickly recalled, reloaded, or transferred from camera to camera. − The parameters that can be stored in a camera’s reference file may slightly differ between camera types. This difference is due to the different design philosophy of which base parameters should be commonly shared between the cameras. Reference File 85
• 86. The standard image factory setting data (the reference values of the auto setup adjustment) and user-defined reference settings. 86 Reference File
• 87. − In general, each camera lens has different 'offset' characteristics, which are compensated for within the camera by applying appropriate adjustments. − This compensation must be performed on a per-lens basis. • Therefore, when multiple lenses are used on the same camera, the camera must be readjusted each time the lens is changed. • Camera operators can store lens compensation settings for individual lenses within the camera as data files. These files are called lens files. • Since each lens file is assigned a file number designated by the operator, pre-adjusted lens-compensation data can be instantly recalled by selecting the corresponding file number. 87 Lens File
• 88. − The Reference File is used to store parameter data that governs the overall 'look' common to each camera used in a facility. − Scene Files, in contrast, are used to store parameter settings made for individual 'scenes'. • A Scene File stores temporary video setting data specific to a scene, and can be stored in the camera and on a Memory Stick. • A Scene File can be easily created and stored for the desired scene by overriding the data in the Reference File. • Scene Files allow camera operators to instantly recall previously adjusted camera data, such as settings created for scenes outdoors, indoors, or under other lighting conditions. The parameter settings made for individual 'scenes' 88 Scene File
• 89. − It stores the items displayed on the viewfinder and the switch settings for the camera operator. − This file can be stored on a Memory Stick; however, the video data (paint data) cannot be stored. 89 Operator File
• 90. Super Motion Super Motion is a function designed to reproduce high-quality slow-motion video, which is often required in sports TV programs. – The core of the Super Motion system is a high-speed camera that operates and captures images at 180i (150i for PAL), compared to conventional video cameras operating at 60i (50i for PAL). – The Super Motion camera drives its internal clocking, the CCD imager, and all image processing three times faster. – By replaying the high-rate 180i video at normal 60i speed, a clean 1/3-speed image is reproduced. 90
• 91. Super Motion To capture images at 180i – Images must be transferred to the recording device at 180i. – The data rate of these images is three times larger than that of HD images captured by conventional HD cameras (50i). – The Super Motion system therefore requires a high bit-rate transmission interface and a recorder with high-speed signal processing. – For signal transmission, one optical fiber cable is used between the camera and CCU (camera control unit), and three HD-SDI connections are used between the CCU and the recorder. – At the CCU output, the Super Motion system divides the data into three sub-data segments in order to fit the full data into the three HD-SDI connections. – The sub-data segments are rebuilt into one signal to form the original 180i data in the recorder. 91
  • 92. Variable Frame Rate Recording Over-cranking (Slow motion): Increase the frame rate to slow down action in the scene. Under-cranking (Quick motion): Decrease the frame rate to speed up action in the scene. Examples: • Recording 20 frames/sec, then playing 10 frames/sec: – Slow scene (1/2 normal speed) • Recording 6 frames/sec, then playing 24 frames/sec: – Fast scene (4x normal speed) 92
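The relationship between the recording and playback frame rates in the examples above reduces to a single ratio; a minimal illustration (the function name is ours, not camera terminology):

```python
def playback_speed(record_fps, playback_fps):
    # One second of real-world action occupies record_fps frames; played back
    # at playback_fps, it lasts record_fps / playback_fps seconds on screen.
    return playback_fps / record_fps

print(playback_speed(20, 10))  # 0.5  (half normal speed - over-cranking)
print(playback_speed(6, 24))   # 4.0  (4x normal speed - under-cranking)
```

A result below 1.0 is slow motion (over-cranking); above 1.0 is quick motion (under-cranking).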
  • 93. Variable Frame Rate Recording 93
• 95. – The Picture Cache Recording function has a buffer memory to store both video and audio data. This buffer memory is kept active and keeps storing video and audio data regardless of whether or not the camcorder is in REC mode. – The buffer memory can store approximately seven to eight seconds of video and audio data. – As soon as the REC button is pressed, the buffered data is first read out from memory and recorded onto the recording media – tape or disc. – While Picture Cache Recording can be a convenient function, it is important to note that it introduces a timing gap between the capture of scenes and the actual recording to the tape/disc media. This is because the buffered data is recorded to the media first, extending the total recording duration to include both the buffer data length and the duration between the REC start and stop actions. – Hence, the REC start button cannot be pressed immediately after the previous recording – the operator needs to wait for the length of the buffered data. 95 Picture Cache Recording
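The buffering behavior described above can be sketched with a ring buffer; this is a minimal illustration of the logic, and the class name, frame-based granularity and default cache length are our assumptions, not Sony's implementation:

```python
from collections import deque

class PictureCacheRecorder:
    """Sketch of picture-cache recording: frames are cached continuously
    in a ring buffer, and pressing REC flushes the buffer to the media
    before live frames are appended."""

    def __init__(self, cache_seconds=8, fps=25):
        # Ring buffer: once full, the oldest frame is dropped automatically.
        self.cache = deque(maxlen=cache_seconds * fps)
        self.recorded = []          # stands in for the tape/disc media
        self.recording = False

    def on_frame(self, frame):
        if self.recording:
            self.recorded.append(frame)
        else:
            self.cache.append(frame)  # keep caching even outside REC mode

    def press_rec(self):
        # Buffered frames are written to the media first, which is why the
        # next recording cannot start immediately after the previous one.
        self.recorded.extend(self.cache)
        self.cache.clear()
        self.recording = True
```

For example, feeding frames 0–9 into a 4-frame cache and then pressing REC leaves only the last four cached frames on the media ahead of the live ones.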
  • 96. Interval Recording – The camera captures images at normal frame rate, but stores them only intermittently into a buffer memory at pre-determined intervals. – Once the buffer memory is full, the video images are read out sequentially from the memory and recorded to tape or disc as seamless video. 96
• 97. − This function is very useful when precise, simple or complex changes to the lens or camera settings are required during a scene – for example, when changing the focus from the background to the foreground. − It allows smooth, precise and repeatable automatic scene transitions. The operator can program the duration and select from three transition profiles: Linear, Soft Stop or Soft Transition. − Many lens parameters, such as the start and end settings for zoom and focus, and/or camera parameters such as white balance and gain, can be programmed to transition in unison. The function works by automatically calculating the intermediate values during the scene transition. − The Shot Transition function can be triggered manually or synchronised with the camera's REC start function. 97 Shot Transition™ function
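The intermediate-value calculation can be sketched as parameter interpolation; the profile names come from the slide, but the exact easing curves the camera uses are not published, so the curves below are assumptions:

```python
def transition_value(start, end, t, profile="linear"):
    """Interpolate one lens/camera parameter during a shot transition.
    t runs from 0.0 (transition start) to 1.0 (transition end)."""
    if profile == "linear":
        w = t
    elif profile == "soft_stop":          # decelerate into the end point
        w = 1.0 - (1.0 - t) ** 2
    elif profile == "soft_transition":    # ease in and out (smoothstep)
        w = t * t * (3.0 - 2.0 * t)
    else:
        raise ValueError("unknown profile: " + profile)
    return start + (end - start) * w

# Focus position moving from 1.2 m to 4.0 m, halfway through a soft transition:
print(round(transition_value(1.2, 4.0, 0.5, "soft_transition"), 2))  # 2.6
```

All three profiles start and end at the programmed values; they differ only in how the parameter accelerates between them.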
• 99. – The primary role of a viewfinder is to allow focus adjustments and correct framing of the desired image. – There are three important points to remember when selecting a viewfinder: I. Larger viewfinders typically have a higher resolution than smaller ones. II. Black-and-white viewfinders have a higher resolution and are less expensive than color viewfinders, hence they are still the most popular. III. Color viewfinders are convenient when subjects must be identified by their color. 99 Viewfinder
• 100. – In viewfinders, due to their small screen size, resolution can be limited, making precise focus adjustments difficult. – This function boosts the viewfinder signal in frequency ranges that correspond to the image's VERTICAL picture edges. – As a result, sharp images are reproduced on the viewfinder screen, allowing correct focus adjustments. – The PEAKING level, which determines the boost level, is adjustable depending on the operator's preferences. Viewfinder Peaking 100
• 101. − Although higher PEAKING levels offer sharper viewfinder images, two factors must be considered when adjusting PEAKING: I. Raising the PEAKING level equally boosts the viewfinder signal's noise level. II. Too much PEAKING can create excessively bright picture edges along viewfinder characters and icons, such as on-screen indications including Gain, ND/CC, Shutter settings, and markers. − It is therefore important to balance these factors with the required image sharpness on the viewfinder. 101 Viewfinder Peaking
• 102. – As cameras offer higher image resolutions, focus accuracy becomes a more critical issue than ever before. – In addition to the viewfinder PEAKING function, high-end cameras incorporate a VF DETAIL function, offering a better choice for facilitating focus adjustments. – Compared to PEAKING, the VF DETAIL function offers two unique features. • While PEAKING sharpens only vertical picture edges, VF DETAIL increases the sharpness of viewfinder images both vertically and horizontally. • While PEAKING is processed within the viewfinder, VF DETAIL takes place within the camera.  This means that the VF DETAIL function applies sharpness only to video signals shot by the camera, preventing on-display characters created in the viewfinder from being overemphasized. The VF DETAIL mechanism uses a process similar to the camera's main detail function; however, an exclusive detail circuit is used to create the detail signals and add them to the video signal sent to the viewfinder display. 102 Viewfinder (VF) Detail
• 103. Clear VF DTL – Clear VF DTL supports fine focusing in critical HDTV shooting situations – It enables the camera operator to focus much more easily 103
• 104. Focus Assist Function This makes it easy for the camera operator to adjust focus in the viewfinder. 104
  • 105. New Focus Assist Function for 4K 105
  • 106. Expanded Focus − The center of the screen on the LCD monitor and viewfinder of the camcorder can be magnified to about twice the size, making it easier to confirm focus settings during manual focusing. 106
• 107. EZ Focus EZ Focus is a feature that makes manual focusing adjustments much easier. – When activated, the camera automatically opens the lens iris to its widest aperture. This allows the camera operator to make correct focus adjustments much more easily. – To avoid over-exposure during this mode, the video level is automatically adjusted by activating the electronic shutter. – The lens iris is kept open for several seconds and then returns to the same iris opening as before EZ Focus was activated. 107
• 108. MIX VF Function – The MIX VF function is a method of displaying Return Video images on a camera's viewfinder screen. – The MIX VF function enables both the camera's images and the Return Video images to be overlaid on the viewfinder using a mix effect. – It offers convenient operation by eliminating the need to toggle between the two signals for display. – In MIX VF mode, both images are kept at full-screen size. – When using a color viewfinder, the Return Video image can be displayed as a black-and-white image while the image shot by the camera is displayed in color, so the two can be easily identified. 108
  • 109. Safety Zone Marker: – The Safety Zone Marker indicates the area where all consumer TVs can display images. – Using the Safety Zone Marker as a guide, camera operators can shoot images so they are correctly displayed on all home TVs. VF Markers (Safety Zone Marker, Aspect Marker, Center Marker) 109
  • 110. Aspect Marker: – Aspect Markers indicate the picture area specified by aspect ratios. Using these markers, camera angles can be correctly selected in consideration of the final content’s aspect ratio. – For example, when shooting in 16:9 aspect ratio for 4:3 on air, it is imperative to keep camera angles within the 4:3 area, but also prevent unnecessary objects from appearing in the 16:9 area. 110 VF Markers (Safety Zone Marker, Aspect Marker, Center Marker)
  • 111. Center Marker: − The Center Marker allows the camera operator to maintain the center of the image when zooming in to or zooming out of a subject. − This is important since lenses usually have slightly different optical centers in their wide and telephoto positions. 111 VF Markers (Safety Zone Marker, Aspect Marker, Center Marker)
• 112. Cursor Marker: − The Cursor Marker is used to indicate the position and size of where an image is planned to be added to the camera's raw output, such as a station logo prior to on-air transmission. − The Cursor Marker keeps camera operators aware of these areas, to avoid important subjects being hidden. This allows camera angles to be flexibly decided while keeping important content viewable on the picture screen. 112 VF Markers (Safety Zone Marker, Aspect Marker, Center Marker)
• 113. – ClipLink allows shooting data to be used effectively throughout the production process (a unique feature of DVCAM camcorders). – With conventional video cameras, shot lists (logging time code, etc.) were typically generated manually using a clipboard, a pencil, and a stopwatch. – The ClipLink feature relieves shooting crews of this tedious task. – During acquisition with a ClipLink-equipped camcorder, the in-point/out-point time code of each shot, together with its OK/NG status, is recorded (in the DVCAM Cassette Memory). – This data can then be transferred to the appropriate (DVCAM) editing system, and the in-point/out-point time codes can be immediately used as a rough EDL for the editing process. 113 ClipLink
  • 114. Index Picture – The ClipLink feature also generates a small still image of each in-point, called the Index Picture, which is recorded (to the DVCAM tape). – This provides visual information of each shot taken. – When used with the appropriate logging software, the entire ClipLink data, including the Index Pictures, the in-points/out-points, and the OK/NG status, can be imported. – This greatly enhances subsequent editing operation by relieving the editor from having to review all the media (tapes) to pick up and sort the necessary shots before starting editing. 114
• 115. – This function automatically records, or logs, the camera settings – including the iris opening, the gain-up setting and filter selection, as well as basic and advanced menu settings – (to a DVCAM tape) (a unique feature of DVCAM camcorders). – SetupLog data is constantly recorded to the Video Auxiliary data area of the DVCAM tape while the camcorder is in REC mode, making it possible to recall the camera settings that were used during any given shoot. – SetupLog is particularly useful when the camera must be serviced, since the technician can recall the camera settings that were used when a recording was not made correctly and examine what went wrong. 115 SetupLog
• 116. – As opposed to SetupLog, which takes a log of the camera settings in real time, SetupNavi is specifically intended for copying camera setup data between cameras using a DVCAM tape as the copy medium (a unique feature of DVCAM camcorders). – Activating this feature allows all camera setup parameters, including the key/button settings, basic and advanced menus, service menus, etc., to be recorded to a DVCAM tape. – It is important to note that SetupNavi data is recorded only when activated by the operator and that the DVCAM tape used to record this data cannot be used for other purposes. – It is also convenient when the camera is used by multiple operators or for multiple projects, since the exact camera settings can be quickly recalled by inserting the DVCAM tape with the SetupNavi data. 116 SetupNavi
• 117. FWIGSS Focus • Prior to the start of recording, the camera operator manually sets the focus following four simple steps: 1) switch to manual focus mode; 2) zoom in on the subject's eyes; 3) adjust the focus ring until sharp; 4) zoom out to compose the shot. The six primary device settings: • Focus • White balance • Iris • Gain • Shutter speed • Sound 117 FWIGSS
  • 118. FWIGSS White Balance • Prior to the start of recording, the camera operator manually zooms in on a white card held by the subject to set the white balance. 118 FWIGSS
  • 119. FWIGSS Iris, Gain, and Shutter Speed • Prior to the start of recording, the camera operator adjusts the iris, gain, and shutter speed as required or desired until the shot is properly exposed. • The zebra lines are an aid in setting exposure. 119 FWIGSS
  • 120. FWIGSS Sound • Prior to the start of recording, the camera operator conducts a sound check and adjusts the record levels for optimal sound reproduction. 120 FWIGSS
• 121. 4K/HD Simulcast – Independent image processing for 4K and HD. – Gamma curve, color and DTL can be adjusted together or separately. 121
• 123. HD Cutout Function for Clear Images In Zoom & Perspective mode, one portion can be cut out while a perspective transformation is performed according to the focal length of the lens (the cutout region can be controlled with a mouse). In simple HD mode, two portions can be cut out at the same time (the cutout regions can be controlled with a mouse). 123
• 124. Super Resolution Process – It is not just upscaling from an HD signal. – It includes so-called Super Resolution with image enhancement in the Ultra HD band, a new technology to reconstruct high-resolution signals that is not possible with conventional HD processing! 124
• 125. 125 Super Resolution Process
• 126. NIT − A measure of light output over a given surface area: 1 nit = 1 candela per square meter. Dynamic Range − The range from dark to light in an image or system. − Dynamic range is the ratio between the whitest whites and blackest blacks in an image (e.g., 10,000:0.1). High Dynamic Range − A wider range from dark to light. 126 Light Levels
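Dynamic-range ratios like 10,000:0.1 are often quoted in photographic stops; the conversion is a single base-2 logarithm (a small illustrative helper, not a term from the slide):

```python
import math

def dynamic_range_stops(white_nits, black_nits):
    # Each photographic stop is a doubling of luminance, so the number of
    # stops is log2 of the white-to-black luminance ratio.
    return math.log2(white_nits / black_nits)

print(round(dynamic_range_stops(10000, 0.1), 1))  # 16.6 stops for 10,000:0.1
print(round(dynamic_range_stops(100, 0.1), 1))    # 10.0 stops for 100:0.1
```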
• 127. In Rec. 709, 100% white (700 mV) is referenced to 100 nits 127 Light Levels
• 128. – There are two parts to High Dynamic Range (HDR): – Monitor (Display) – Camera (Acquisition) – The monitor side aims to present the full range of the material delivered to it, making things brighter with more resolution. – The camera side aims to capture many more f-stops: a wider dynamic range, with the data to represent that range. – HDR increases the subjective sharpness of images, perceived color saturation and immersion. – SDR (or LDR) is Standard Dynamic Range. 128 HDR Two Parts
• 129. CIE chromaticity diagrams (inner triangle: HDTV primaries, outer triangle: UHDTV primaries): (a) Carnation, (b) Geranium and marigold. 129 Wide Color Gamut Makes Deeper Colors Available
• 130. BT.601 and BT.709 Color Spaces – The maximum ("brightest") and minimum ("darkest") values of the three components R, G, B define a volume in that space known as the "color volume". – Rec. 601 and Rec. 709 are basically on top of each other. – So we can use the same screen for SD and HD without going through a conversion in the monitor to change the color space. 130
• 131. BT.2020 Color Space – The Rec. 2020 color space covers 75.8% of CIE 1931, while Rec. 709 covers 35.9%. 131
• 132. Color Gamut Conversion (Gamut Mapping and Inverse Mapping) 132 Wide color space (ITU-R Rec. BT.2020): 75.8% of CIE 1931. Color space (ITU-R Rec. BT.709): 35.9% of CIE 1931. (Diagram: conversion paths labeled A 1, B, C 2, D 3 between an RGB 100% color bar in Rec. 709, Rec. 2020 and the CIE 1931 color space.)
• 133. Transformation from a Wider Gamut Space to a Smaller One (ITU-R Rec. BT.2020 → ITU-R Rec. BT.709) 133 BT.2020 Signal → BT.709 – Without any corrections (gamut mapping), the image appears less saturated. Munsell Chart A 1 Three approaches: I. Clipping the RGB (clipping distortions) II. Perceptual gamut mapping (more computation and possibly a change to the 'creative intent') III. Leaving the RGB values as they are and letting the screen assume they relate to the primaries of ITU-R BT.709.
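Approach I (clipping) can be sketched as follows. The 3×3 matrix is the published linear-light BT.2020-to-BT.709 conversion (ITU-R BT.2087), and the conversion must be applied to linear RGB, i.e. after removing the transfer function, not to gamma-coded values:

```python
# Linear-light BT.2020 -> BT.709 conversion matrix (ITU-R BT.2087),
# followed by hard clipping of out-of-gamut values (approach I above).
M2020_TO_709 = [
    [ 1.6605, -0.5876, -0.0728],
    [-0.1246,  1.1329, -0.0083],
    [-0.0182, -0.1006,  1.1187],
]

def bt2020_to_bt709(rgb_linear):
    out = [sum(M2020_TO_709[r][c] * rgb_linear[c] for c in range(3))
           for r in range(3)]
    # Clipping keeps the signal legal but distorts colors near the gamut edge.
    return [min(max(v, 0.0), 1.0) for v in out]

print(bt2020_to_bt709([1.0, 0.0, 0.0]))  # pure BT.2020 red clips to [1.0, 0.0, 0.0]
```

Note that achromatic colors (greys and white) survive the conversion unchanged, which is why the distortion is only visible in saturated areas.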
• 134. – Without any corrections, color saturation will be increased. Smaller Gamut Space in a Wide Gamut Display 134 Munsell Chart BT.709 Signal → BT.2020 (ITU-R Rec. BT.2020) (ITU-R Rec. BT.709) D 3
  • 135. – Opto-Electronic Transfer Function (OETF): Scene light to electrical signal – Electro-Optical Transfer Function (EOTF): Electrical signal to scene light Gamma, EOTF, OETF 135
  • 136. The CRT EOTF is commonly known as gamma 136 Gamma, EOTF, OETF – Opto-Electronic Transfer Function (OETF): Scene light to electrical signal – Electro-Optical Transfer Function (EOTF): Electrical signal to scene light
• 137. OOTF (Opto-Optical Transfer Function) – Adjustment for artistic intent (the non-linear overall transfer function). – System (total) gamma to adjust the final look of displayed images (the transfer function from actual scene light to display luminance). – The "reference OOTF" compensates for the difference in tonal perception between the environment of the camera and that of the display specification (the OOTF varies according to viewing environment and display brightness). OOTF Same Look 137
• 138. – On a flat-panel display (LCD, Plasma, …) without OOTF, it appears as if the black level is elevated a little. – To compensate for the black-level elevation and to make images look closer to a CRT, a display gamma of 2.4 has been defined in BT.1886. – As a result, OOTF = 1.2. Display EOTF gamma 2.4, camera OETF 1/2.2, OOTF = 1.2. OOTF (Overall System Gamma, Artistic Rendering Intent) Opto-Optical Transfer Function (OOTF) Non-Linear Overall Transfer Function 138
• 139. OOTF Position – Perceptual Quantization (PQ) (optional metadata): for viewing on the end-user consumer TV, a display mapping should be performed to adjust the reference OOTF on the basis of the mastering peak luminance metadata from the professional display. – Hybrid Log-Gamma (HLG): the OOTF is implemented within the display, which is aware of its own peak luminance and environment (no metadata). 139
  • 140. Scene-Referred and Display-Referred Scene-Referred: – The HLG signal describes the relative light in the scene – Every pixel in the image represents the light intensity in the captured scene – The signal produced by the camera is independent of the display – The signal is specified by the camera OETF characteristic Display-Referred: – The PQ signal describes the absolute output light from the mastering display – The signal is specified by the display EOTF 140
• 141. Code Level Distribution in HDR – Uniform Code Words for Perceived Brightness 141
• 142. Barten Ramp – The human eye's sensitivity to contrast at different luminance levels (plot: minimum detectable contrast / minimum contrast step in % vs. luminance in nits; quantization steps above the curve show as contouring/banding). Minimum detectable contrast (%) = (minimum detectable difference in luminance / luminance) × 100 = ΔL/L × 100. Where ΔL and L are large, fewer bits are required (a larger quantization step size); where ΔL and L are small, more bits are required (a smaller quantization step size). 142  The threshold of visibility for quantization error (minimum detectable contrast, seen as banding or contouring) becomes higher as the image gets darker.  The threshold for perceiving quantization error (banding or contouring) is approximately constant in the brighter parts and highlights of an image.
• 143. PQ EOTF – Code words are equally spaced in perceived brightness over this luminance range (axes: code words vs. brightness in nits). 143
• 144. PQ EOTF – Code words are equally spaced in perceived brightness over this luminance range (axes: code words vs. brightness in nits). 144 Minimum detectable contrast (%) = (minimum detectable difference in luminance / luminance) × 100 = ΔL/L × 100
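The PQ curve plotted above can be evaluated directly; this minimal sketch uses the published SMPTE ST 2084 constants (the constants come from the standard, not from the slide):

```python
# SMPTE ST 2084 (PQ) EOTF: non-linear signal E' in [0, 1] -> absolute
# display luminance in nits (cd/m^2).
M1 = 2610 / 16384            # 0.1593017578125
M2 = 2523 / 4096 * 128       # 78.84375
C1 = 3424 / 4096             # 0.8359375
C2 = 2413 / 4096 * 32        # 18.8515625
C3 = 2392 / 4096 * 32        # 18.6875

def pq_eotf(signal):
    p = signal ** (1.0 / M2)
    return 10000.0 * (max(p - C1, 0.0) / (C2 - C3 * p)) ** (1.0 / M1)

print(pq_eotf(1.0))  # 10000.0 - the top code value reaches 10,000 nits
print(pq_eotf(0.0))  # 0.0
```

Note how strongly non-linear the curve is: a signal of 0.58 already corresponds to roughly 200 nits, leaving almost half the code range for the highlights above that.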
  • 145. Code Words Utilization by Luminance Range in PQ – PQ headroom from 5000 to 10,000 nits = 7% of code space – 100 nits is near the midpoint of the code range 145
• 146. Hybrid Log-Gamma (HLG) HDR-TV (ITU-R Application 2, ARIB STD-B67, Association of Radio Industries and Businesses) – Plot: signal value vs. linear light for the SDR gamma curve, SDR with knee, and HDR HLG. E: the signal for each color component {RS, GS, BS}, proportional to scene linear light and scaled by camera exposure, normalized to the range [0:12]. E′: the resulting non-linear HLG-coded signal {R', G', B'} in the range [0:1]. OETF: E′ = sqrt(E)/2 for 0 ≤ E ≤ 1, and E′ = a·ln(E − b) + c for 1 < E ≤ 12, with a = 0.17883277, b = 0.28466892, c = 0.55991073. The curve assigns more code words to dark areas and fewer code words to bright areas. 146
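The HLG OETF can be implemented in a few lines using the a, b, c constants given on the slide (ARIB STD-B67 form, with scene light E normalized to [0, 12]):

```python
import math

# HLG OETF: square-root segment for darks, logarithmic segment for highlights.
A = 0.17883277
B = 0.28466892
C = 0.55991073

def hlg_oetf(e):
    if e <= 1.0:
        return math.sqrt(e) / 2.0    # 0 <= E <= 1: more code words for darks
    return A * math.log(e - B) + C   # 1 <  E <= 12: log segment for highlights

print(hlg_oetf(0.0))             # 0.0
print(hlg_oetf(1.0))             # 0.5 - the two segments join here
print(round(hlg_oetf(12.0), 4))  # 1.0 - peak scene light maps to full signal
```

The constants are chosen precisely so the two segments meet with matching value and slope at E = 1 and reach 1.0 at E = 12.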
  • 148. Tone Mapping and Inverse Tone Mapping Tone Mapping (Down-conversion) Limiting Luminance Range Inverse Tone Mapping (Up-conversion) Expanding Luminance Range 148 HDR BT.2020 SDR Signal (BT.709 or BT.2020) HDR HDR Signal (BT.2020) SDR Signal (BT.709 or BT.2020) SDR (BT.709 or BT.2020) HDR Signal (BT.2020) SDR
• 149. 149 Static Tone Mapping (HDR10) – Optimized only for the brightest scene in the content – This avoids hard clipping of detail in the highlights – It is not invariant under blind multiple round-trip conversions. Static and Dynamic Tone Mapping
• 150. 150 Dynamic Tone Mapping – Optimized for each scene in the content – E.g. on a frame-by-frame or scene-by-scene basis (varying the EETF based on statistics of the image) – This approach can survive multiple round-trip conversions. Static and Dynamic Tone Mapping
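As an illustration of what a static tone-mapping curve looks like (e.g. 1500-nit content mapped onto a 1000-nit display), here is a toy roll-off. This is our own simplified curve for illustration only, not the ITU-R BT.2390 reference EETF or any vendor's mapping:

```python
def static_tone_map(nits, src_peak=1500.0, dst_peak=1000.0, knee=750.0):
    """Toy static tone map: pass lows/mids through unchanged, then roll off
    the highlight range smoothly so src_peak lands exactly on dst_peak."""
    if nits <= knee:
        return nits                      # below the knee: untouched
    h_in, h_out = src_peak - knee, dst_peak - knee
    t = (nits - knee) / h_in             # 0 at the knee, 1 at src_peak
    # Exponent h_in/h_out makes the curve's initial slope match the linear
    # part, so there is no visible kink at the knee.
    return knee + h_out * (1.0 - (1.0 - t) ** (h_in / h_out))

print(static_tone_map(500.0))   # 500.0 - mid-tones preserved
print(static_tone_map(1500.0))  # 1000.0 - source peak lands on display peak
```

A dynamic tone mapper would recompute parameters such as the knee per scene or per frame from image statistics, instead of fixing them for the whole program.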
  • 151. Static and Dynamic Metadata in HDR Static Metadata – Mastering Display Color Volume (MDCV) Metadata (SMPTE ST2086) – The chromaticity of the red, green, and blue display primaries – White point of the mastering display – Black level and peak luminance level of the mastering display – Content Light Levels Metadata (The Blu-ray Disc Association and DECE groups): – MaxCLL (Maximum Content Light Level): Largest individual pixel light value of any video frame in the program – MaxFALL (Maximum Frame-Average Light Level): Largest average pixel light value of any video frame in the program (The maximum value of frame-average maxRGB for all frames in the content) (The frame-average maxRGB : The average luminance of all pixels in each frame) – They could be generated by the color grading software or other video analysis software. Dynamic Metadata – Content-dependent Metadata (SMPTE ST2094 (pending)) – Frame-by-frame or scene-by-scene Color Remapping Information (CRI) – Variable color transformation along the content timeline. 151
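The MaxCLL/MaxFALL definitions above can be sketched in a few lines; the frame representation (a list of RGB tuples in nits) is a simplified assumption for illustration, not how real decoded video is laid out:

```python
def content_light_levels(frames):
    """Compute MaxCLL / MaxFALL from linear-light frames, where each frame
    is a list of (R, G, B) pixel light values in nits."""
    max_cll = 0.0   # brightest single pixel anywhere in the program
    max_fall = 0.0  # brightest frame-average of per-pixel maxRGB
    for frame in frames:
        max_rgb = [max(pixel) for pixel in frame]      # per-pixel maxRGB
        max_cll = max(max_cll, max(max_rgb))
        max_fall = max(max_fall, sum(max_rgb) / len(max_rgb))
    return max_cll, max_fall

frames = [
    [(100, 50, 0), (200, 100, 0)],   # frame 1: maxRGB = [100, 200]
    [(50, 50, 50), (50, 50, 50)],    # frame 2: maxRGB = [50, 50]
]
print(content_light_levels(frames))  # (200, 150.0)
```

MaxCLL here is driven by the single 200-nit pixel, while MaxFALL comes from frame 1's average of 150 nits.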
• 152. Mapping – Used during the transition from SDR to HDR production (while more SDR displays are in use) or due to content-owner preference – To preserve the 'look' of the SDR content on an HDR display – Display-referred mapping: preserves the colors and relative tones of SDR on an HDR display – Scene-referred mapping: matches the colors, lowlights and mid-tones of an SDR camera with an HDR camera. 152 SDR camera output (BT.709 or BT.2020) → HDR Signal, HDR BT.2020, preserved SDR look in HDR program (e.g. 20%) (without expanded luminance range); SDR content (BT.709 or BT.2020) → HDR Signal, HDR BT.2020, preserved SDR look in HDR program (e.g. 20%) (without expanded luminance range)
• 153. Backwards Compatibility – Most encoders/decoders and TVs are SDR (encoder/decoder replacement!?) – Dolby Vision, Technicolor, Philips and BBC/NHK are all backwards compatible. – Backwards compatibility is less of an issue in over-the-top (OTT) delivery. HDR Signal → SDR UHDTV, ITU-R BT.709 color space: the HDR metadata is simply ignored (limited compatibility). (Analogy: a color signal shown on a B&W display.) 153
  • 154. HLG and PQ Backwards Compatibility with SDR Displays HLG BT.2020 SDR BT.2020 color space − It has a degree of compatibility. − Hue changes can be perceptible in bright areas of highly saturated color or very high code values (Specular) − Both PQ and HLG provide limited compatibility HLG/PQ BT.2020 SDR BT.709 color space 154
• 155. Ex: Benefit of a 4K Lens for WCG and HDR – Both HD and 4K lenses cover BT.2020. – Improved transparency of blue in the 4K lens – Better S/N ratio. – A 4K lens can cut flare and reduce black floating even in backlit conditions. – Black floating is more noticeable in HDR. – With the same object and the same white level, the black level differs: – HD: 21.9% (the HD lens reduces dynamic range!) – Full 4K: 11.6% Same object and same white level, but different black level 155
• 156. HDR & HDMI HDMI 2.0a supports ST 2084 (PQ) and ST 2086 (Mastering Display Color Volume metadata). HDMI 2.0b followed HDMI 2.0a and added support for HLG and HDR10. The HDMI 2.1 specification will supersede 2.0b and will support dynamic metadata and High Frame Rate. 156
• 157. HDR Standards – Dynamic Metadata for Color Transform (DMCVT): Dolby Vision, HDR10+ (license-free dynamic metadata), SL-HDR1, Technicolor (PQ) – Static Metadata (Mastering Display Color Volume (MDCV) metadata + MaxCLL + MaxFALL): HDR10 (PQ + static metadata), PQ10 (+ optional static metadata) – No Metadata: HLG10, PQ10 (without metadata) (Axis: standout experience vs. simplicity) 157
  • 158. HDR Metadata and HDR/SDR Signal ID 158
  • 159. (FIFA World Cup 2018) 159
  • 160. Global Picture of Sony “SR Live” for Live Productions (FIFA World Cup 2018) – 8 Cameras Dual output UHD/HDR and HD/SDR – 11 Cameras Dual output HD/HDR and HD/SDR – 21 Cameras Single output HD/SDR – All Replays HD/SDR Shading of all cameras is done on the HD/SDR (BT. 709) 160
  • 161. Global Picture of “HLG-Live” for Live Productions Shading of all cameras is done on the HD/SDR (BT. 709) 161
• 162. SD and HD Vectors – 709 color space and 601 color space: the vectorscope displays look the same as each other. 162
• 163. BT.2020 and BT.709 Vectors – 709 color space and 2020 color space: the vectorscope displays look the same as each other. 163
• 164. Standard Definition 100% color bar test pattern. Standard Definition 100% color bar RGB parade. Standard Definition 100% color bar YPbPr parade. High Definition 100% color bar YPbPr parade. Why are there small spikes in the RGB waveform parade? Because of the unequal rise times between the luma and color-difference bandwidths, and the conversion of SDI Y'P'bP'r back to R'G'B' in the waveform display. 164
• 165. HD 100% color bars YPbPr parade, Rec. 709. UHD 100% color bars YPbPr parade, Rec. 709. UHD 100% color bars YPbPr parade, Rec. 2020. The spike transitions are normal because no video filtering is applied to each link. This allows the quad links to be seamlessly stitched together; otherwise a thin black line would be seen between the links. 165
  • 166. UHD 100% Split Field Color Bars with both 709 and 2020 color spaces in YPbPr Parade display. 166
• 167. RGB paraded waveform display of a 100% color bar split-field test signal with Rec. 709 and Rec. 2020 color spaces.  In some cases the SMPTE 352 VPID may contain information on the colorimetry used. Often, however, this is not the case, and a known test signal such as color bars will be necessary to help the user determine the correct color space.  The user must manually select between the 709 and 2020 color spaces from the configuration menu.  When the correct color space is selected, the traces will be at the 0% and 100% (700 mV) levels. 167
• 168. White and Highlight Level Determination for HDR Diffuse white (reference white) in video: Diffuse white is the reflectance of an illuminated white object (white on a calibration card). Since perfectly reflective objects don't occur in the real world, diffuse white is about 90% reflectance (a 100% reflectance white card is also used). The reference level, HDR Reference White, is defined as the nominal signal level of the white card. Highlights (specular reflections and emissive, self-luminous objects): Luminances higher than reference white are referred to as highlights. In traditional video, highlight levels were generally set no higher than 1.25× the diffuse white level (in cinema, up to 2.7×). - Specular reflections: specular regions can have luminances over 1000 times higher than the diffuse surface, in nits. - Emissive objects (self-luminous): emissive objects can reach luminance levels with magnitudes much higher than the diffuse range in a scene or image (the sun has a luminance of ~1.6 billion nits). A more unique aspect of emissive sources is that they can also be of very saturated color (sunsets, magma, neon, lasers, etc.). Black White 18% Reflectance 168
• 169. Nominal signal levels for PQ and HLG production

Reflectance object or reference (luminance factor, %) | Nominal luminance value, nit (PQ & HLG) [display peak luminance: 1000 nit] | PQ nominal signal level (%) | HLG nominal signal level (%)
Grey card (18%) | 26 | 38 | 38
Greyscale chart max (83%) | 162 | 56 | 71
Greyscale chart max (90%) | 179 | 57 | 73
Reference level: HDR Reference White (100%), also diffuse white and graphics white | 203 | 58 | 75

− PQ and HLG production on a display with 1000-nit nominal peak luminance, under controlled studio lighting (the test chart should be illuminated by the forward lights, and the camera should shoot it from a non-specular direction). − The percentages represent signal values that lie between the minimum and maximum non-linear values normalized to the range 0 to 1. − 18% reflectance is the closest standard reflectance card to skin tones. − Here, the reference level, HDR Reference White, is defined as the nominal signal level of a 100% reflectance white card (the signal level that would result from a 100% Lambertian reflector placed at the center of interest within a scene under controlled lighting, commonly referred to as diffuse white). 169
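The PQ and HLG columns of this table can be cross-checked numerically from the published transfer functions, assuming a 1000-nit display, zero black level, and the HLG system gamma of 1.2 (the ST 2084 constants and HLG a/b/c constants are the standard published values):

```python
import math

# PQ: SMPTE ST 2084 inverse EOTF (luminance in nits -> signal level).
# HLG: invert the system gamma (OOTF) to get scene light, apply the OETF.
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
A, B, C = 0.17883277, 0.28466892, 0.55991073

def pq_level(nits):
    y = (nits / 10000.0) ** M1
    return ((C1 + C2 * y) / (1.0 + C3 * y)) ** M2

def hlg_level(nits, peak=1000.0, gamma=1.2):
    e = 12.0 * (nits / peak) ** (1.0 / gamma)   # scene light, range [0, 12]
    return math.sqrt(e) / 2.0 if e <= 1.0 else A * math.log(e - B) + C

print(round(pq_level(203) * 100))   # 58 - HDR Reference White, PQ column
print(round(hlg_level(203) * 100))  # 75 - HDR Reference White, HLG column
print(round(pq_level(26) * 100))    # 38 - 18% grey card, PQ column
print(round(hlg_level(26) * 100))   # 38 - 18% grey card, HLG column
```

The computed levels reproduce the table rows, which is a useful sanity check when calibrating waveform-monitor graticules for HDR production.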
  • 170. Waveform View in HD, UHD or 4K? 170
  • 171. Camera Black Set (Lightning) 171
  • 172. Capturing Camera Log Footage (SpyderCube)
− Use a suitable grey-scale camera chart or the Datacolor SpyderCube.
− The cube has a hole that produces super black, a reflective black base, and segments for 18% grey and 90% reflective white. The ball bearing on top produces reflective specular highlights.
− Set up the test chart within the scene.
− Adjust the lighting to evenly illuminate the chart.
− Adjust the camera controls (ISO/Gain, Iris, Shutter, White Balance) to set the levels.
(Chart references: Specular Highlights, 18% Grey, 90% Reflectance White, Super Black, Black.) 172
  • 173. SMPTE 2084 PQ (1K) Scale with 100% Reflectance White — graticule in nits / level (%)
− 10,000 nits corresponds to the 100% level of the HDR signal.
− The 90% reflectance white of the signal should sit at about the 51% level, which is equivalent to 100 nits.
− The 18% grey will be at the 36% level, which is equivalent to 20 nits.
− The 2% black point will be at the 19% level, which is equivalent to 2.2 nits.
− Camera operators can use the graticule lines at 2%, 18% and 90% reflectance to properly set up camera exposure with a camera test chart of 2% black, 18% grey and 90% white. 173
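These graticule positions can be verified directly from the SMPTE ST 2084 inverse EOTF; the short sketch below does so (the helper function is illustrative, not part of any waveform product):

```python
def pq_level(nits):
    """SMPTE ST 2084 inverse EOTF: nits -> signal level (0..1 over 0..10,000 nits)."""
    y = (nits / 10000) ** (2610 / 16384)
    return ((0.8359375 + 18.8515625 * y) / (1 + 18.6875 * y)) ** 78.84375

# Graticule points: 2% black card (2.2 nits), 18% grey (20 nits), 90% white (100 nits)
for nits in (2.2, 20, 100):
    print(f"{nits:6.1f} nits -> {pq_level(nits):.0%}")
```

The three points land at 19%, 36% and 51% of the PQ signal, matching the graticule lines above.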
  • 174. SMPTE 2084 (10K) with 90% Reflectance White — graticule scale in terms of reflectance, nits / level (%)
− The 90% reflectance white level of the signal should sit at about the 51% level, which is equivalent to 90 nits.
− The 18% grey level will be at the 36% level, which is equivalent to 18 nits.
− The 2% black point level will be at the 19% level, which is equivalent to 2 nits.
− 9,000 nits is equal to the 100% level of the HDR signal. 174
  • 175. SMPTE 2084 (10K) with 90% reflectance white — graticule scale in terms of code values, level (hex) / code value (decimal) 175
  • 176. SMPTE 2084 (10K) with 90% reflectance white — graticule scale in terms of stops, stop / level (%) 176
  • 177. 10K PQ with a 1000-nit Limit, Full Range
− Waveform in 10K PQ full range with the video graded to 1K. If you use the full 10K curve and set your video grading to 1000 nits, about the top 25% of the waveform screen will not be used.
− Both Narrow (SMPTE) SDI levels and Full SDI levels are implemented.
− Waveform setting on 10K PQ full range: on the waveform, 4d is 0 nits and 1019d is 10,000 nits in Full. 177
  • 178. HDR 1K Grade, SMPTE Levels
− Normal reflective whites are around 100 nits.
− Peak goes to 1000 nits, no more.
− HDR stretches the blacks and compresses the whites. 178
  • 179. HDR Reflectance View
− The normal reflective whites are around 100 nits, which is at 90% reflectance (Rec. 709 100 IRE).
− 18% grey will be at the 36% level, which is equivalent to 18 nits.
− 2% black will be at the 19% level, which is equivalent to 2 nits.
− 1000 nits shows up at 100% reflectance. 179
  • 180. Stop View (Relative to 20 nits)
− The normal reflective whites are around 100 nits, which is at +2.3 stops (Rec. 709 100 IRE).
− 0 stops is shown as the 18% grey point (= 20 nits).
− The 2% black point is at −3.1 stops.
− Stop value for 1000 nits = log₂(1000 nit / 20 nit) ≈ 5.6 180
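The stop graticule is simply a base-2 logarithm of the ratio against the 20-nit grey anchor; a minimal check (the helper name is illustrative):

```python
import math

GREY_NITS = 20  # 18% grey anchor: shown as 0 stops on the graticule

def stops(nits):
    """Stops above/below the 18% grey point at 20 nits."""
    return math.log2(nits / GREY_NITS)

print(f"100 nits  -> {stops(100):+.1f} stops")   # normal reflective white
print(f"1000 nits -> {stops(1000):+.1f} stops")  # peak of a 1K grade
```

This gives +2.3 stops for normal reflective white at 100 nits and about +5.6 stops for a 1000-nit peak.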
  • 181. HDR 2K Grade, SMPTE Levels
− Normal whites are around 100 nits, or just a little higher.
− Max white is at 2000 nits.
− HDR stretches the blacks and compresses the whites. 181
  • 182. HDR 1K Grade, Full Levels
− Black (0) is at 4h.
− White is around 100 nits.
− Highlights go up to 1000 nits.
− Waveform setting on 10K PQ full range: on the waveform, 4d is 0 nits and 1019d is 10,000 nits in Full. 182
  • 183. Rec. 709 Video on the HDR Graticule
− The whites are stretched to 100%, and the blacks are all down at the bottom of the waveform. 183
  • 186. HDR Heat-map Tool
− 7 simultaneous and programmable color overlay bands
− Individual upper and lower overlay threshold controls
− User presets for SDR and HDR modes
− Selectable background grey/color
− Identify shadows, mid-tones or specular highlights 186
  • 187. Capturing a Camera Log Image

  Gamma          | 0% Black: 10-bit CV (%) | 18% Grey: 10-bit CV (%) (20-nit illumination) | 90% Reflectance: 10-bit CV (%)
  S-Log1         | 90 (3)                  | 394 (37.7)                                    | 636 (65)
  S-Log2         | 90 (3)                  | 347 (32.3)                                    | 582 (59)
  S-Log3         | 95 (3.5)                | 420 (40.6)                                    | 598 (61)
  Log C (ARRI)   | 134 (3.5)               | 400 (38.4)                                    | 569 (58)
  C-Log (Canon)  | 128 (7.3)               | 351 (32.8)                                    | 614 (63)
  ACES (Proxy)   | ND (ND)                 | 426 (41.3)                                    | 524 (55)
  BT.709         | 64 (0)                  | 423 (41.0)                                    | 940 (100)

− Today's video cameras are able to capture a wide dynamic range of 14–16 stops, depending on the camera.
− In order to record this information, each camera manufacturer uses a log curve so the wide dynamic range can be stored effectively with 12–16 bits of resolution as a camera RAW file.
− Each curve has defined black, 18% grey and 90% reflectance white levels. 187
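As a sketch of how such a curve maps scene reflectance to code values, here is the Sony S-Log3 OETF as published in Sony's S-Log3 technical summary (the constants are quoted from that document; the function name is my own). It reproduces the S-Log3 row of the table above:

```python
import math

def slog3_code_value(reflectance, bits=10):
    """Sony S-Log3 OETF: linear scene reflectance -> 10-bit code value."""
    x = reflectance
    if x >= 0.01125:
        # Logarithmic segment, anchored at 18% grey (code value 420)
        y = (420.0 + math.log10((x + 0.01) / (0.18 + 0.01)) * 261.5) / 1023.0
    else:
        # Linear toe segment near black
        y = (x * (171.2102946929 - 95.0) / 0.01125 + 95.0) / 1023.0
    return round(y * (2 ** bits - 1))

for refl in (0.0, 0.18, 0.90):
    print(f"{refl:4.0%} reflectance -> CV {slog3_code_value(refl)}")
```

The three points come out at code values 95, 420 and 598, matching the S-Log3 row.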
  • 188. S-Log2 Waveform to Nits
− 540 or 1000 nits: max highlights, monitor dependent (display with 540 or 1000 nits)
− 100 nits (59%): normal white
− 20 nits (32.3%): 18% grey 188
  • 189. SpyderCube S-Log2 as Shot from the Camera in Log — digital values / stop values 189
  • 190. SpyderCube S-Log2 as Shot from the Camera in Log — showing S-Log2 on normal Rec. 709-type screens 190
  • 191. S-Log2 to Rec. 709 191
  • 192. Camera (Scene) Referenced BT.709 to PQ LUT Conversion
− SDR and HDR displays DO NOT match.
− Blacks are stretched on the BT.1886 display but not on the PQ display (which matches the scene).
− Camera-side conversion, BT.709 to PQ (SMPTE 2084) HDR — signal level (%) at each scene reflectance:

  Target           | 0%  | 2%  | 18% | 90% | 100%
  BT.709 100 nits  | 0   | 9   | 41  | 95  | 100
  HDR 1000 nits    | 0   | 37  | 58  | 75  | 76
  HDR 2000 nits    | 0   | 31  | 51  | 68  | 68
  HDR 5000 nits    | 0   | 24  | 42  | 58  | 59
192
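The BT.709 row of the table is simply the Rec. 709 OETF applied to each scene reflectance; a minimal check of that row (the function name is my own):

```python
def bt709_oetf(e):
    """ITU-R BT.709 OETF: linear scene light (0..1) -> non-linear signal (0..1)."""
    return 4.5 * e if e < 0.018 else 1.099 * e ** 0.45 - 0.099

# Scene reflectances from the conversion table: 0%, 2%, 18%, 90%, 100%
for refl in (0.0, 0.02, 0.18, 0.90, 1.00):
    print(f"{refl:4.0%} -> {bt709_oetf(refl) * 100:3.0f}%")
```

Rounded to whole percentages, this yields 0, 9, 41, 95 and 100, matching the BT.709 100-nit row.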
  • 193. Specification of Colour Bar Test Pattern for High Dynamic Range TV Systems (BT.2111-0)
− Recommendation ITU-R BT.2111-0 (12/2017), BT Series, Broadcasting service (television): Specification of colour bar test pattern for high dynamic range television systems.
− Pattern components: 40% and 75% background levels; 75% and 100% colour bars; BT.709 colour bars; –2%, +2% and +4% black steps; ramp (–7% to 109%); stair (–7%, 0%, 10%, 20%, ..., 90%, 100%, 109% HLG). 193
  • 194. Color Correcting Your 4K Content
− Image without full dynamic range: blacks are lifted (above 0) and whites aren't at 100% (or 700 mV). 194
  • 195. Tonal range after spreading 195
  • 196. Before the gamma adjustment 196 After the gamma adjustment
  • 197. Neutral image Warm, “golden hour” image. Cool, contrasty image 197 Color Correcting your 4K Content
  • 199. Misbalanced Chip Chart vs. Balanced Chip Chart
− A chip chart is made up only of black, white and grey chips, so the entire trace should sit very close to the center. 199
  • 200. A fairly balanced image on an RGB parade waveform monitor; however, the image contains a lot of green grass. 200