"A convergence puller can use software to tweak the inter-axial distance," explained Cameron.
"On a shoot, a depth budget is allocated to each feed; anything outside that budget is flagged up on the waveform monitor, and the stereographer or convergence puller will make sure that none of the 3D goes out of the screen.
"When shooting football, you don't want a footballer to stick out beyond the scoreline – so you need to be very careful about how much depth you give."
The depth budget that Sky uses for its live feeds is minimal, notes Cameron.
"A general guide is that if you have a 42-inch TV screen you would have a depth budget of -3 per cent and +3 per cent.
"Sky uses a -1 and +2 per cent depth budget; they would rather create the 3D in the screen – they don't want things to pop out too much.
"They can't afford to have a very big depth budget, as any 3D you have in the home would look a lot bigger in the cinema.
"One budget does not fit all sizes, but even the smallest budget will offer decent 3D."
Sky currently targets 42 inches as the desired screen size for its 3D broadcasts. The BBC, which is also working on 3D, is looking to increase its depth budget for bigger screens, as 3D TVs are more popular in the larger sizes.
"The BBC is thinking about authoring and monitoring for 50-inch screens," said Cameron. "Looking at that size means that you are covering all bases. It is harder to get 3D right on bigger screens.
"To get 3D working on smaller screens like the 3DS, the 3D would need to be pushed to -5 per cent and +5 per cent."
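Those percentages only mean something once you fix a screen size. As a rough sketch – assuming the common convention that a depth budget expresses maximum parallax as a percentage of screen width, and assuming a 16:9 panel (neither stated in the workshop) – the same grade translates into more physical separation on a bigger screen:

```python
import math

def screen_width_cm(diagonal_in, aspect=(16, 9)):
    """Width of a screen in cm from its diagonal, assuming a 16:9 panel."""
    w, h = aspect
    return diagonal_in * 2.54 * w / math.sqrt(w**2 + h**2)

def parallax_cm(diagonal_in, budget_pct):
    """Physical on-screen parallax for a depth budget given as % of width."""
    return screen_width_cm(diagonal_in) * budget_pct / 100.0

# The same +3% budget produces wider separation on a 50-inch set than a 42-inch one
for size in (42, 50):
    print(f'{size}": +3% parallax = {parallax_cm(size, 3):.1f} cm')
```

This is one reason a budget graded for 42-inch sets can't simply be reused on cinema screens, where the same percentage becomes tens of centimetres of separation.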
To let us experience shooting 3D for ourselves, a number of brilliantly lo-fi contraptions were brought out.
Alongside the tens of thousands of pounds' worth of camera set-up were a toy bubble maker, a smoke machine and a rubber duck on a stick.
The pictures may make you chuckle, but these are some of the techniques used to help camera operators understand the depth perception associated with 3D filming.
"Our brain looks at a number of depth cues – it works out an object's relative position from its focal distance," explained Cameron.
"Focus is closely related to depth of field. In 2D, cinematographers would use narrow depth of field to make it more intimate, so you focus on a particular object.
"In 3D you can relax the depth of field a lot more. It is easier to see 3D when the objects are in sharper focus – you don't need narrow depth of field in 3D."
Two new depth cues come into play with 3D: vergence and stereopsis. In stereopsis, the brain calculates depth from the disparity between what the left and right eyes see; with vergence, the eyes angle inwards to fixate on near objects, and the brain reads that angle as distance.
The depth of 3D is determined by our interocular distance – the separation between our eyes – which is usually about 2.5 inches in adults.
As Cameron explains: "As a starting point we set the sensors of the two cameras at 65mm (2.5 inches).
"Depending on the scene, though, the 3D rig may never go beyond 30-40mm, even 20mm.
"If you do put 65mm between the lenses then the 3D is stuck at 20 metres and no 3D would be in the foreground."
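Cameron's point about interaxial can be sketched geometrically. The snippet below is a minimal model, not the workshop's maths: it assumes a parallel rig converged by horizontal image shift, and the focal length, sensor width and distances are illustrative assumptions. It shows that halving the interaxial halves the parallax of any given object, which is why narrow interaxials bring comfortable 3D back into the foreground:

```python
def disparity_pct(interaxial_mm, focal_mm, conv_m, obj_m, sensor_w_mm=35.0):
    """Horizontal disparity as a % of image width for a parallel stereo rig
    converged (by image shift) at conv_m. Negative = in front of the screen."""
    d_mm = focal_mm * interaxial_mm / 1000.0 * (1.0 / conv_m - 1.0 / obj_m)
    return 100.0 * d_mm / sensor_w_mm

# Same scene: an object 2 m away, rig converged at 5 m, 35 mm lens.
# Narrowing the interaxial from 65 mm to 30 mm halves the negative parallax.
for t in (65, 30):  # interaxial in mm
    print(f"{t} mm interaxial: {disparity_pct(t, 35, 5.0, 2.0):+.2f}%")
```

Objects at the convergence distance sit exactly on the screen plane (zero disparity); everything nearer goes negative, eating into the limited in-front portion of the depth budget.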
3D is here to stay. The perfect storm of movie studios supporting the technology and 3D finally being in the home means that filmmakers the world over have to understand the new techniques associated with shooting in 3D.
This is what Sony's 3D workshop is aiming to do, and there are experts in the industry lining up to teach these techniques. Just recently, cameraman Ian Burton took charge of one of the courses.
Cameron is candid about 3D filming, however, explaining that it is not a dark art and is easy to do from a technical point of view. "You only have to learn two new things – how to focus the 3D, and convergence – but it is really about how you use 3D in the shot.
"Rather than having huge amounts of 3D, be subtle and add the occasional 'wow' moment – if you don't then 3D will be uncomfortable for consumers."
Suddenly the rubber ducks, bubbles and smoke all start to make more sense.