Explanation of different Field Orders
Moderator: Ken Berry
-
jasondyates
- Posts: 24
- Joined: Fri Jan 11, 2008 8:52 am
Explanation of different Field Orders
Can someone explain the difference between upper and lower fields, and the third setting I can't remember? Also, in the video transfer rate project settings, what is the difference between variable and constant? Please help.
1. For technical reasons, the scanning of an interlaced frame is like reading every other line in a book, then going back to read the missing lines. Upper field first is like reading lines 1, 3, 5... first, then 2, 4, 6.... Lower field first is the opposite. Frame is like reading the book normally with progressive scanning.
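The book analogy above can be sketched in a few lines of Python (a toy "frame" of six numbered scan lines; the names here are just the analogy, not any real video API):

```python
# A toy "frame" of six scan lines, numbered from 1 as in the book analogy.
frame = ["line1", "line2", "line3", "line4", "line5", "line6"]

upper_field = frame[0::2]  # odd-numbered lines: 1, 3, 5
lower_field = frame[1::2]  # even-numbered lines: 2, 4, 6

# Upper field first: scan the odd lines, then go back for the even ones.
uff_order = upper_field + lower_field
# Lower field first is simply the opposite.
lff_order = lower_field + upper_field

# "Frame" (progressive) is just reading the book normally, top to bottom.
progressive_order = frame

print(uff_order)  # ['line1', 'line3', 'line5', 'line2', 'line4', 'line6']
```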
2. Constant bitrate encodes the video at an almost constant image detail and is the best way when the project permits it. Variable bitrate theoretically pushes up the detail when the variations from frame to frame are great and reduces the detail when the variations are minimal. This may be preferable when trying to fit a long project into limited disc space (e.g., >2 hours onto a single-layer DVD), but the results are not always what you expect, because the biggest frame differences occur when you apply a transition, not when an F1 car passes at 300 km/h! See http://phpbb.ulead.com.tw/EN/viewtopic.php?t=26715 for further info on this subject.
3. The answer to your next question (which you haven't yet asked) is the difference between single and two-pass encoding. With CBR, there is no advantage in two-pass. With VBR, one-pass encoding starts at around the average rate, then pushes it up when it comes across big changes and down on fairly still scenes, always trying to keep an average consistent with the settings. This fails miserably when there is a great difference between the start and the end of a project. Imagine a case where you start a project with an athletics meeting with loads of movement, followed by a talking-heads discussion. The encoder assumes that the opening is average for the whole project and that there will be both more and less movement later. The quality will therefore not be good enough for the field events and too good for the discussion. With two-pass, an initial analysis determines that the movements are greatest in the first sequences and least in the last ones. The encoding is done on the second pass, and the analysed data tells the encoder when to push the bitrate up and down -- in my example, respectively in the first and second halves. However, as I said previously, the biggest "movements" that the encoder sees are during the transitions, not necessarily during the pole vault or even the 100 m.
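The two-pass idea can be sketched numerically (the per-scene complexity scores and the 6000 kbit/s average below are invented purely for illustration, not from any real encoder):

```python
# Hypothetical per-scene "complexity" scores from a first analysis pass:
# big numbers = lots of movement (the athletics meeting),
# small numbers = the talking-heads discussion.
complexity = [9.0, 8.0, 7.0, 2.0, 1.0, 1.0]

average_bitrate = 6000  # kbit/s, the target average for the whole project
total = sum(complexity)

# Second pass: give each scene a share of the bit budget in
# proportion to its complexity, preserving the overall average.
bitrates = [average_bitrate * len(complexity) * c / total for c in complexity]

# The average still matches the setting...
print(round(sum(bitrates) / len(bitrates)))  # 6000
# ...but the busy first half gets far more bits than the quiet second half.
print(bitrates[0] > bitrates[-1])  # True
```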
4. Always remember that many set-top players are unable to decode high bitrates from DVD+/-R discs. For this reason, keep your combined audio plus video bitrates down to about 7000 kbit/s maximum, especially if you don't know which player will be used (distributed discs). Only pressed discs can be played at higher bitrates in such players (higher contrast between the 1s and the 0s).
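A rough back-of-envelope check of that budget (4.7 GB is the nominal single-layer DVD capacity; this sketch ignores filesystem and muxing overhead):

```python
def max_minutes(disc_gb, total_kbit_s):
    """Rough playing time for a disc at a combined audio+video bitrate."""
    disc_kbit = disc_gb * 1e9 * 8 / 1000  # nominal gigabytes -> kilobits
    return disc_kbit / total_kbit_s / 60

# At the ~7000 kbit/s ceiling suggested above, a single-layer DVD
# holds roughly an hour and a half of material.
print(round(max_minutes(4.7, 7000)))  # about 90 minutes
```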
[b][i][color=red]Devil[/color][/i][/b]
[size=84]P4 Core 2 Duo 2.6 GHz/Elite NVidia NF650iSLIT-A/2 Gb dual channel FSB 1333 MHz/Gainward NVidia 7300/2 x 80 Gb, 1 x 300 Gb, 1 x 200 Gb/DVCAM DRV-1000P drive/ Pan NV-DX1&-DX100/MSP8/WS2/PI11/C3D etc.[/size]
- Ken Berry
- Site Admin
- Posts: 22481
- Joined: Fri Dec 10, 2004 9:36 pm
- System_Drive: C
- 32bit or 64bit: 64 Bit
- motherboard: Gigabyte B550M DS3H AC
- processor: AMD Ryzen 9 5900X
- ram: 32 GB DDR4
- Video Card: AMD RX 6600 XT
- Hard_Drive_Capacity: 1 TB SSD + 2 TB HDD
- Monitor/Display Make & Model: Kogan 32" 4K 3840 x 2160
- Corel programs: VS2022; PSP2023; DRAW2021; Painter 2022
- Location: Levin, New Zealand
I had prepared the following answer several hours ago, when I could no longer access this forum. I am assuming that perhaps the changeover of the web server for the forum occurred at that time. Anyway, here is the answer, which is not inconsistent with (most of) what Devil has already said.
When video is broadcast, say, on traditional television, the most efficient way found at the time was to transmit a signal in two phases, each having half of the picture. As you know, standard television broadcasts in lines, so each half of the picture contains alternating lines (1, 3, 5, 7, 9 etc followed by lines 2, 4, 6, 8 etc). But the transmission is so quick that the eye puts the lines together and sees it as one picture. In other words, the two alternate half-frames are 'interlaced' both by the equipment and by the eye. Hence we call it interlaced video. And line 1, 3, 5 etc i.e. the odd ones are called the Upper (or sometimes Top) Field, and the even numbers (2, 4 etc) are Lower (or Bottom) Field.
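That "interlacing by the equipment" can be pictured as weaving the two half-frames back together (plain Python lists standing in for scan lines):

```python
# Two half-pictures (fields) as lists of scan lines.
upper = ["line1", "line3", "line5"]  # odd lines: the Upper (Top) Field
lower = ["line2", "line4", "line6"]  # even lines: the Lower (Bottom) Field

# Weave the fields together so the full frame reads 1, 2, 3, 4, 5, 6,
# which is what the eye ends up seeing as a single picture.
frame = [line for pair in zip(upper, lower) for line in pair]
print(frame)  # ['line1', 'line2', 'line3', 'line4', 'line5', 'line6']
```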
Different video systems will use one or the other field order. DV/AVI from a mini DV camera is filmed, and captured, using Lower Field First. MPEG-2 on DVD cameras, hard disk cameras and high definition cameras using mini DV tapes uses Upper Field First. Different devices and systems use different field orders. The end result, however, is the same: a single picture or frame.
The main thing you have to remember is to maintain the same field order throughout a project. If you started with video using UFF then that has to be used throughout; ditto for LFF. And you can't mix the two in one project. If you think about it, this is because one or the other will be playing back the wrong field first, and the eye will detect something wrong with this.
Then there is Frame Based, which essentially applies to still images. Since it is still and a single, complete image (i.e. one 'frame'), it is broadcast/captured as a single, complete frame -- hence frame based. Slideshows can be done with this. You can also usually use still photos or a frame based video in a project which uses either UFF or LFF since it will take on the characteristics of these and not matter since you are still in effect only broadcasting/capturing the full still frame regardless of which field comes first.
This has an analogy in the relatively new world of digital high definition television and video, where frame based is similar to the notion of Progressive scan video. In addition to almost doubling the number of lines broadcast (which gives more detail and thus higher quality i.e. "high definition"), "true" high definition will broadcast each line in sequence (1-2-3-4-5 etc) i.e. "progressively". A true high definition TV will thus be described as 1080p -- the final letter indicating it uses progressive scanning (as opposed to 1080i where the i stands for interlaced). Note, though, that a 1080i TV is still a high definition TV. Moreover, all high def TVs can happily decode and play back interlaced standard DVDs and usually with good quality. Blu-ray uses progressive scanning and very high bitrates.
As for Constant Bitrate (CBR) and Variable Bitrate (VBR), their names in effect speak for themselves. A constant bitrate is ... er ... constant. It does not vary, regardless of the content of a video. Some argue that a variable bitrate is at the very least more efficient, since it will use a higher rate when there is lots of action or detail, but brake right back when there is little action or frames are darker, with little detail. Others argue that it can be better to use CBR to produce a more predictable -- even higher quality -- end result, and this may indeed be the case, particularly if you are not particularly worried about the space taken by the compression used in your projects.
But I personally am not convinced that simply by using a constant, high bitrate you will necessarily get a better quality video than if you used a VBR which used an even higher bitrate for action/detailed bits, but used a much lower rate for the darker/quieter bits... For instance, using a CBR of 6000 kbps will give you a good DVD (depending of course on the quality of the video used as input), but it won't necessarily be better than one which used a VBR of 6000 kbps, but which goes up to, say, 8000 kbps or higher for the action parts but drops down to 4000 kbps for the slower or darker parts.
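The arithmetic behind that last comparison is worth spelling out (the 50/50 split between action and quieter footage is invented purely for illustration):

```python
# Suppose half the running time is action encoded at 8000 kbps and
# half is slower/darker material encoded at 4000 kbps.
action_share, quiet_share = 0.5, 0.5
vbr_average = action_share * 8000 + quiet_share * 4000

# Same overall file size as a 6000 kbps CBR disc, but the demanding
# scenes received a third more bits than CBR would have allowed them.
print(vbr_average)  # 6000.0
```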
Ken Berry
Cart before the horse. The refresh rate was originally fixed at mains frequency (50/60 Hz) to avoid scrolling bars. Interlacing was then adopted to reduce the bandwidth needed.
[b][i][color=red]Devil[/color][/i][/b]
[size=84]P4 Core 2 Duo 2.6 GHz/Elite NVidia NF650iSLIT-A/2 Gb dual channel FSB 1333 MHz/Gainward NVidia 7300/2 x 80 Gb, 1 x 300 Gb, 1 x 200 Gb/DVCAM DRV-1000P drive/ Pan NV-DX1&-DX100/MSP8/WS2/PI11/C3D etc.[/size]
