Combining Upper & Lower Field
Moderator: Ken Berry
-
sjj1805
- Posts: 14383
- Joined: Wed Jan 26, 2005 7:20 am
- System_Drive: C
- 32bit or 64bit: 32 Bit
- motherboard: Equium P200-178
- processor: Intel Pentium Dual-Core Processor T2080
- ram: 2 GB
- Video Card: Intel 945 Express
- sound_card: Intel GMA 950
- Hard_Drive_Capacity: 1160 GB
- Location: Birmingham UK
Mixed Field Orders.
This will vary according to the material you have. Mixing field orders is mostly considered taboo and should not be done, but if you examine the material concerned you may find you can mix field orders occasionally.
Firstly, still images. These are frame based and do not present a problem when mixed in with video that has a field order (lower or upper field first).
Most video that originated from a digital source is lower field first.
Most video produced from an analogue source is upper field first.
If you get the field order the wrong way round, the most noticeable problem is a jerky effect on moving uprights such as lamp posts and telegraph poles (no, they aren't moving - your camcorder was!).
If the footage concerned has few or none of these uprights that would be affected by sideways movement, then you can get away with mixing the field order. Furthermore, the brightness of the video plays a part: if the scene is brightly lit you are more likely to notice the jerkiness than in a night scene.
The easy answer is try it and see. Once you know the rules, you know how far they can be bent.
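The still-vs-motion distinction above can be sketched with a toy model: treat each field as a snapshot of the scene at the instant it was scanned, and a frame as two fields shown one after the other. All names and values here are purely illustrative, not from any real video library.

```python
# Toy model: a field is a snapshot of the scene at the instant it was
# scanned. A TV presents the two fields of a frame one after the other.

def displayed_fields(upper, lower, upper_first=True):
    """Return the fields in the temporal order the TV presents them."""
    return [upper, lower] if upper_first else [lower, upper]

# Still image: both fields are sampled from the same unchanging picture,
# so swapping the presentation order changes nothing the eye can see.
still_upper = still_lower = "static scene"
assert (displayed_fields(still_upper, still_lower, True)
        == displayed_fields(still_upper, still_lower, False))

# Moving upright (e.g. a lamp post during a pan): the scene changed
# between the two field scans, so the order now matters - played the
# wrong way round, the post appears to step backwards, i.e. jerky.
moving_upper = "lamp post at x=10"   # scanned first
moving_lower = "lamp post at x=11"   # scanned one field period later
assert (displayed_fields(moving_upper, moving_lower, True)
        != displayed_fields(moving_upper, moving_lower, False))
```

This is also why brightness matters: the backwards step is only visible when the eye can actually track the edge between the two fields.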
- Ken Berry
- Site Admin
- Posts: 22481
- Joined: Fri Dec 10, 2004 9:36 pm
- System_Drive: C
- 32bit or 64bit: 64 Bit
- motherboard: Gigabyte B550M DS3H AC
- processor: AMD Ryzen 9 5900X
- ram: 32 GB DDR4
- Video Card: AMD RX 6600 XT
- Hard_Drive_Capacity: 1 TB SSD + 2 TB HDD
- Monitor/Display Make & Model: Kogan 32" 4K 3840 x 2160
- Corel programs: VS2022; PSP2023; DRAW2021; Painter 2022
- Location: Levin, New Zealand
Notionally, yes. But not all that many programs have been written which take advantage of the 4 cores in a Quad. That being said, I am happy to say that X2 *does* appear to take advantage of all 4. So from that point of view, it may be worth it, but with other programs which only take advantage of 2 cores, you would be better off with a faster Core 2 Duo...
Ken Berry
- Ken Berry
- Site Admin
- Posts: 22481
- Joined: Fri Dec 10, 2004 9:36 pm
- System_Drive: C
- 32bit or 64bit: 64 Bit
- motherboard: Gigabyte B550M DS3H AC
- processor: AMD Ryzen 9 5900X
- ram: 32 GB DDR4
- Video Card: AMD RX 6600 XT
- Hard_Drive_Capacity: 1 TB SSD + 2 TB HDD
- Monitor/Display Make & Model: Kogan 32" 4K 3840 x 2160
- Corel programs: VS2022; PSP2023; DRAW2021; Painter 2022
- Location: Levin, New Zealand
I may be wrong, but the way I understand it, field order is related to interlaced scanning, i.e. the way a television picture is constructed. If this is the case, then the composite video sourced from the UFF camera will be just raw video, as is normally received from another source such as a DVD player, off air, etc. Am I missing something in my understanding of UFF and LFF?
When an interlaced video stream goes through digital video editing equipment (and often analog as well), the fields are paired off into frames. The video stream is treated on a frame-by-frame basis from that point onward, until it finally gets shipped back out to a TV. This statement, from http://www.mir.com/DMG/interl.html, seems to support the point I am trying to make, i.e. once the video is back in its composite form, field order is not that important; it only has to be maintained throughout the editing process.
-
sjj1805
- Posts: 14383
- Joined: Wed Jan 26, 2005 7:20 am
- System_Drive: C
- 32bit or 64bit: 32 Bit
- motherboard: Equium P200-178
- processor: Intel Pentium Dual-Core Processor T2080
- ram: 2 GB
- Video Card: Intel 945 Express
- sound_card: Intel GMA 950
- Hard_Drive_Capacity: 1160 GB
- Location: Birmingham UK
How do you interpret the next sentence of that same article?
Each frame being shuffled through the editing process still contains an upper field and a lower field. The fields are typically presented in one of two ways. A field-sequential frame is encoded as two half-height images: all the scanlines for one field, followed by all the lines for the other field. An interleaved frame is encoded as a single full-height image: the scanlines from each field are placed in their proper spatial locations in the image.
Thinking it through, the TV either does not care about the field order or can accept both LFF and UFF, because if it could not, then either LFF or UFF video could not be viewed, since both are transmitted (as composite video) from either an LFF camera or a UFF camera. I am not trying to be argumentative about this issue, just trying to ensure I purchase a camera that suits all of my requirements.
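The two layouts described in the quoted paragraph can be sketched with lists of scanlines standing in for image rows (the data and function names here are illustrative, not from any real codec):

```python
# Field-sequential: all the lines of one field, then all the lines of
# the other. Interleaved: the lines of both fields woven into their
# proper spatial positions in a single full-height frame.

def to_field_sequential(frame):
    """Split an interleaved frame into (upper_field, lower_field)."""
    upper = frame[0::2]   # lines 1, 3, 5, ... in 1-based counting
    lower = frame[1::2]   # lines 2, 4, 6, ...
    return upper, lower

def to_interleaved(upper, lower):
    """Weave two half-height fields back into a full-height frame."""
    frame = []
    for u, l in zip(upper, lower):
        frame += [u, l]
    return frame

frame = ["line1", "line2", "line3", "line4"]
upper, lower = to_field_sequential(frame)
assert upper == ["line1", "line3"]             # one half-height image
assert lower == ["line2", "line4"]             # the other half-height image
assert to_interleaved(upper, lower) == frame   # round trip restores the frame
```

Either layout carries the same two fields; the field-order flag only says which of the two the display should present first in time.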
- Ken Berry
- Site Admin
- Posts: 22481
- Joined: Fri Dec 10, 2004 9:36 pm
- System_Drive: C
- 32bit or 64bit: 64 Bit
- motherboard: Gigabyte B550M DS3H AC
- processor: AMD Ryzen 9 5900X
- ram: 32 GB DDR4
- Video Card: AMD RX 6600 XT
- Hard_Drive_Capacity: 1 TB SSD + 2 TB HDD
- Monitor/Display Make & Model: Kogan 32" 4K 3840 x 2160
- Corel programs: VS2022; PSP2023; DRAW2021; Painter 2022
- Location: Levin, New Zealand
Well of course a TV can play video which is either UFF or LFF! That's what it is all about. But it's the coding in the initial group of pictures which gives the TV the clue as to how to play it. You send it a video clip filmed, like yours, on a hard disk or DVD camera, and the coding says 'This video has to be played with the Upper Field First'. So the TV automatically plays the Upper Field First. But if it receives video from a mini DV camera, the code will tell the TV that it has to play the Lower Field First. And it does. The TV is just obeying the interlacing code in the video itself, and displays accordingly.
As you will know, UFF essentially means that in the two scans which constitute a full image, lines 1, 3, 5, 7... etc are scanned first, then a fraction of a second later, lines 2, 4, 6, 8... etc. The eye perceives it as a single image, but in reality it is two half images shown so rapidly that the eye blends them together.
But if the coding gets mixed up and UFF video is converted to, or plays as, LFF so that the even lines are broadcast before the odd lines, the eye perceives there is something not quite right about the way the image comes together. With vertical straight lines, there will appear to be jagged edges, and with fast panning shots, they will appear to be slightly jerky.
And that's what I still think will happen if you try your idea of sending the UFF signal to a mini DV camera... But try it and see.
And if I am wrong, I will happily admit it, and add a new method of mixing different field orders in a single project to my arsenal! 
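The jagged-edge effect described above can be sketched as a toy pan: each scanline records where a vertical edge was at the moment its field was scanned, and swapping the field order makes the motion step backwards within every frame (the numbers are illustrative):

```python
# A vertical edge panning right at 1 pixel per field period.
# Each line stores the edge's x-position when its field was scanned.

def scan_fields(edge_x, half_lines=3):
    upper = [edge_x] * half_lines        # scanned first
    lower = [edge_x + 1] * half_lines    # scanned one field period later
    return upper, lower

def play(first_field, second_field):
    """Temporal order in which the lines reach the screen."""
    return first_field + second_field

upper, lower = scan_fields(10)
correct = play(upper, lower)   # UFF material played upper field first
swapped = play(lower, upper)   # the same material played lower field first

assert correct == [10, 10, 10, 11, 11, 11]   # edge moves smoothly forward
assert swapped == [11, 11, 11, 10, 10, 10]   # edge jumps ahead, then back
```

Repeated every frame, that forward-then-back step is exactly the jerky movement and jagged verticals the eye picks up on panning shots.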
Ken Berry
-
kfenaughty
- Posts: 12
- Joined: Wed May 28, 2008 2:15 am
- Location: Wellington, New Zealand
John,
I converted DV (type 2) clips to MPEG, and combined them successfully with an MPEG video clip from my Sony SR5E camcorder, all on the one DVD. The DV clips were from an analogue Sony camcorder passed through a Canopus ADVC-55.
So I had five video clips, all DVD-compliant - 4 from DV (LFF), one from the Sony (UFF). I am running VS11.5+. I used the create DVD tool, added all five clips, created a thumbnail menu, and then burnt the files to my hard drive first.
What is most interesting about this is that VS created a VTS_02 file of about the same size as the UFF MPEG. All the LFF clips were combined into VTS_01 files in the usual way, with the 1 GB limit creating more of them as required.
The resultant DVD plays fine on my CRT TV, no jerking motion observed. Admittedly it is a fairly static piece. So the question I pose is: "Is VS clever about not mixing the field orders when it created the VOB files?".
Kevin Fenaughty
-
kfenaughty
- Posts: 12
- Joined: Wed May 28, 2008 2:15 am
- Location: Wellington, New Zealand
Thanks for the follow-up suggestions, Ken. I have copied the VTS_02 file from the burnt DVD, renamed it to MPG and placed it in VS. The properties are UFF, just like the original. I have done the same for the first 1 GB VTS_01 file, and that shows LFF. So VS is splitting the content along UFF and LFF lines, which is pretty sound, if a little unexpected!
I guess it's worth John Dale having a crack at it as well.
Kevin Fenaughty
I have combined 5 minutes of MPEG-2 (UFF) from the DVD camera and 5 minutes of DV (LFF) from the DV camera, and burnt 2 DVDs from these files, one as LFF and the other as UFF, and both appear to work OK. I also connected the DVD camera to the DV camera input and converted to DV, and the final DVD created from this seemed to work OK as well. Both had plenty of movement in the video and I could not see any jagged verticals or jerky movements.
