Thermal search for Snow Leopards
So I have had a lot of questions regarding my use of the Superhawk and how thermal imaging works. First, a brief overview: infrared starts at around 700nm, the upper limit of what the human eye can see (the edge of visible light). That region is Near IR, though, and it is not what thermal cameras sense; you need to go further out, into the far IR or "true IR", to be within the thermal range. The first thing to know about thermal camera work is that it does not behave like regular cameras: light has no effect on the image. Thermal imagers make pictures from heat, detecting differences as small as 0.01°C and displaying them as a "thermogram", either in shades of grey or in the other false-colour palettes you may have seen, such as green shades and rainbow shades. The hotter something is, the more energy it emits, and everything gives off its own unique heat signature; this is how you can detect different objects within a frame.
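To put a number on why "true IR" sits so far beyond vision, Wien's displacement law gives the wavelength at which a warm body emits most strongly. A quick sketch (the 30°C surface temperature is an illustrative guess of mine, not a measurement from the shoot):

```python
# Wien's displacement law: the wavelength at which a blackbody emits most.
WIEN_B = 2.898e-3  # Wien's displacement constant, metre-kelvins

def peak_wavelength_um(temp_c):
    """Peak emission wavelength in micrometres for a body at temp_c (Celsius)."""
    return WIEN_B / (temp_c + 273.15) * 1e6

# A surface at roughly 30 C peaks near 9.6 um, deep in the thermal infrared
# and more than ten times the ~0.7 um (700nm) limit of human vision.
print(round(peak_wavelength_um(30.0), 1))  # 9.6
```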
I used a military-grade camera called the Superhawk, the successor to the Merlin seen in other productions. The SLX-Superhawk is Leonardo's latest high-definition, long-range thermal imager, with a 15x zoom lens. It uses a native high-definition (1280 x 1024) cooled Medium Wave Infra-Red (MWIR) sensor to provide the highest level of resolution and sensitivity available from a thermal imager today. In addition, it includes various advanced image-processing capabilities, including Local Area Contrast Enhancement (LACE), to maximise the detail in displayed and recorded images from the camera.

Learning to use the Superhawk was in some ways intuitive for someone who knows camera kit. It pairs with hardware I have worked with before, including an Atomos recorder and an RT Motion focus system, all of which I knew my way around well. This was then connected to the main lens and sensor box; I was lucky enough to visit the lab before my shoot and look inside at the mechanics, but that is not something I can share with any of you, it is military grade after all. Some notes on using the camera, though. The lens breathes a lot, which is something to think about when pulling focus within an image. The LACE setting increases dynamic range so you don't end up with just white and black, but set it too high and you can end up with image issues and artifacts, so it's best to stay between 4 and 6. Edge enhancement makes things sharper but increases noise, so it's better to leave it at 0. In the top left-hand corner there is a patch of dead pixels and sensor noise that needs to be removed in post. And avoid anything wider than 40mm, as the lens ring's reflection becomes visible; in other words, you start to record the camera's own heat signature.
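That dead-pixel corner gets painted out in the grade, but the underlying idea is simple: replace each known-bad pixel with a statistic of its good neighbours. A minimal NumPy sketch of the concept, assuming you already have a boolean mask of the bad sensor sites (the function and mask here are mine, not part of any camera or post workflow):

```python
import numpy as np

def patch_dead_pixels(frame, mask):
    """Replace masked (dead) pixels with the median of their 3x3 neighbours.
    frame: 2D array of grey values; mask: boolean array, True = dead pixel.
    A simplified sketch -- real cleanup would be done by a plugin in post."""
    out = frame.astype(float).copy()
    h, w = frame.shape
    for y, x in zip(*np.nonzero(mask)):
        y0, y1 = max(y - 1, 0), min(y + 2, h)
        x0, x1 = max(x - 1, 0), min(x + 2, w)
        # take only the live pixels in the neighbourhood
        nbrs = frame[y0:y1, x0:x1][~mask[y0:y1, x0:x1]]
        if nbrs.size:
            out[y, x] = np.median(nbrs)
    return out
```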
It can take time to find the right focus and "dynamic range" on the subject, especially in the early evening with an environment cooling down quickly after sunset. I found myself shooting a lot of content at the long end of the lens during my shoot; I would love to work with this set-up again with a larger, closer and less shy subject!
Full spectrum side by side rig
OK, so you have learnt about one side of visible light, but how about the other? I was hired by Nutopia, via Esprit, to build a rig to shoot the full spectrum of a rainbow. This meant shooting the UV, visible and IR spectrum in real time. We wanted to do it without any post manipulation, actually laying three identical images upon one another to create the final spectrum. Little had been done with this before, other than the odd photograph and scientific study, and we ended up making our own discoveries about how much the near IR and red visible wavelengths overlap.

To create this effect I used two 3D side-by-side rigs, for which I built custom camera bridge plates to raise each camera to the same height and distance. We could not use a mirror rig, as mirror systems often have built-in UV filters. Side-by-side rigs hold the cameras aligned parallel to each other, or angled so that their optical axes meet at a chosen distance; they can only be used for distant shots like large landscapes (an issue that caught me out once on location). Once I had built the double side-by-side rig and learnt to use stereographic skills to align the cameras, I next had to make the cameras function from UV to IR. For this we used two quartz A7S bodies and a standard A7S II. We needed lenses with no UV or IR blocking built in, so we looked at some retro glass and ended up with a Nikkor FX prime set. We then combined these with Kolari Vision block filters (I highly recommend their site to learn more about full-spectrum photography). Now set up with my rig, lenses and filters, I took it all out for a test run. We used a local quarry and fire team to create some rainbows to shoot, and ended up bringing UV and IR lights because the awful British overcast weather blocked most of the rays we needed. Through this I learnt on the go the best ways to set colour temperature and expose each image.
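Balancing a camera off a reference surface is essentially custom grey balance: measure the average colour of the reference patch and compute per-channel gains that make it neutral. A toy NumPy version of the idea (the function names and the example values are mine; real cameras do this internally):

```python
import numpy as np

def grey_balance_gains(patch):
    """Per-channel gains that neutralise a reference patch.
    patch: HxWx3 float RGB array sampled from the reference surface."""
    means = patch.reshape(-1, 3).mean(axis=0)
    return means.mean() / means  # boost weak channels, cut strong ones

def apply_gains(img, gains):
    """Apply the gains; any extra tint would then be added on top to taste."""
    return np.clip(img * gains, 0.0, 255.0)

# A green-heavy frame balances back to neutral grey:
patch = np.full((4, 4, 3), (60.0, 180.0, 60.0))
balanced = apply_gains(patch, grey_balance_gains(patch))
print(balanced[0, 0])  # approximately [100. 100. 100.]
```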
To keep the settings the same, I ended up piling ND onto the IR and standard cameras, the UV-pass filter being rather dark. The IR camera needed to be balanced on a green card or grass, with an additional pink tint to bring it into a more neutral colour space. Finally I had to align all the images: I used two 3D monitors connected to my Atomos recorders to check that the overlays of all three images matched. All that was left was Iguazu Falls.
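Matching the cameras by stacking ND comes down to stop arithmetic: one stop halves the light, so the ND needed is the log2 of the brightness ratio between cameras. A sketch with made-up numbers (the actual filter strengths used on the shoot aren't recorded here):

```python
import math

def nd_stops_needed(bright_level, dark_level):
    """Stops of ND to pull the brighter camera down to match the darker one.
    Levels are relative linear light readings; one stop = a factor of two."""
    return math.log2(bright_level / dark_level)

# If, say, the UV-pass filter leaves the UV camera with 1/8th of the light
# the visible camera sees, the visible camera needs about 3 stops of ND:
print(nd_stops_needed(8.0, 1.0))  # 3.0
```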