Thursday 17 December 2009

Notes on Fisheye Lens Use for Pictorial Photography

[Originally posted in June 2009 on my photo pages]

I recently acquired a Sigma 10mm/2.8 fisheye lens, mainly for panography, to replace the Peleng 8mm fisheye. I haven't really used a fisheye lens for "straight" (non-panoramic) photography, but thought it could be interesting to try some street and candid photography with such a lens. The main challenge is that the wider the lens, the closer you usually have to get to your subject to get a sense of perspective. With a lens with a diagonal field of view of 180 degrees, making the best use of it would be a steep learning curve.

I decided to hop on a train to Edinburgh; it would also provide a slight change of scene from my usual haunts. One thing Edinburgh can keep is the plethora of tourists it attracts compared to Glasgow. The streets throng with them, but one good aspect was that it was easier to blend in with my DSLR and picture-taking antics.

The huge field of view presents a few technical issues. Depth of focus (not to be confused with depth of field) is very small, so AF, on my camera body at least, has a tendency to front-focus. Conversely, the huge depth of field means that small focus errors are not usually a problem. It is standard practice to manually focus the lens and then tape down the focus ring. Even at f/2.8 with the focus set close to infinity, this is sufficient for most shooting circumstances. Stopping down to f/5.6 (the optimum aperture) gives a depth of field extending from 45cm all the way to infinity.
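As a sanity check on those numbers, here is a minimal Python sketch of the standard hyperfocal-distance calculation. The 0.02mm circle of confusion is my assumption for an APS-C sensor, and the thin-lens formula only approximates a fisheye projection, so treat the results as rough:

    # Hyperfocal distance: H = f^2 / (N * c) + f
    # Focused at H, the depth of field runs from H/2 to infinity.
    f = 10.0   # focal length in mm
    c = 0.02   # circle of confusion in mm (assumed for APS-C)

    def hyperfocal(N):
        """Hyperfocal distance in mm for f-number N."""
        return f * f / (N * c) + f

    for N in (2.8, 5.6):
        H = hyperfocal(N)
        print(f"f/{N}: hyperfocal {H / 1000:.2f} m, "
              f"DOF {H / 2000:.2f} m to infinity")
    # f/2.8: hyperfocal 1.80 m, DOF 0.90 m to infinity
    # f/5.6: hyperfocal 0.90 m, DOF 0.45 m to infinity

The f/5.6 case reproduces the 45cm near limit quoted above.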

Another issue with such a large field of view is metering. Most outdoor photos will include the sky, and it will form a large part of the image. Using evaluative metering, the camera will tend to set exposure for the subject and hence blow out the sky, so exposure compensation is recommended to balance the two. I find that centre-weighted metering with exposure compensation depending on the scene gives the most reliable results.

Of course, the most noticeable aspect of a fisheye lens is the geometry of the projected image. Fisheye projections are a class of mappings between the incident angle of a light ray from the scene and the distance from the lens axis on the image plane. The canonical fisheye mapping is a linear map between angle and distance. Other mappings deviate slightly from this linear relation but may preserve other geometric features such as angles (conformal maps) or areas (equal-area maps). A rectilinear map (a normal lens preserving straight lines) has a singularity at a 90 degree incident angle. The Sigma 10mm seems to be close to the "classic" fisheye mapping, linear between incident angle and distance on the image plane.
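To make these classes concrete, here is a small Python sketch of the standard projection formulas, where r is the radial distance on the image plane for incident angle theta (the focal length value is purely illustrative):

    import numpy as np

    f = 10.0                                   # focal length, mm (illustrative)
    theta = np.radians([0, 30, 60, 90])        # incident angles from the lens axis

    r_equidistant   = f * theta                # "classic" fisheye: r linear in angle
    r_stereographic = 2 * f * np.tan(theta/2)  # conformal: preserves angles
    r_equal_area    = 2 * f * np.sin(theta/2)  # preserves relative areas
    r_rectilinear   = f * np.tan(theta)        # normal lens: diverges at 90 degrees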

This fisheye geometry looks unusual to us because we are used to rectilinear lenses, which preserve straight lines. A rectilinear mapping has a practical limit of about 120 degrees field of view due to stretching (and geometrical vignetting: the same amount of light is spread over a greater image area far off the lens axis) near the corners. Fisheye mappings are thus better able to cover large angles of view (the 6mm Nikon fisheye covers 220 degrees on 35mm format), opening up new possibilities for creative imaging.

The challenge is to use the large field of view without the "look" becoming cliched, but that is an issue with any technique in photography.


Monday 14 December 2009

My prints don't match my display!

A common problem I see on forums is that prints come out too dark compared to what is seen on the monitor. This occurs most commonly when editing photos on an uncalibrated monitor and viewing prints in lower lighting conditions.

A common issue with LCD displays is that they are far too bright by default, sometimes as high as 300 cd/m^2. The recommended brightness for photo editing is around 120 cd/m^2 or less, so that pure white on the screen is about the same brightness as a plain white sheet of paper under good lighting. With your LCD set to this much lower brightness, you can then adjust the dark areas of the photo to match the printed output.

The second part of the problem is in viewing the prints. The ability of the human eye to pick out detail depends strongly on the ambient lighting conditions, since one of its effects is to change pupil size. The ideal pupil size is around 2-3mm: larger than this and aberrations dominate; smaller and diffraction takes over. To get the print and screen to look similar, the print brightness should match that of your on-screen version, which usually means quite bright light illuminating the print. If you know where the print will be viewed, you can try to optimise the print density for those conditions.

To set the brightness of your monitor, a hardware calibrator is usually required. However, you may be able to use a light meter instead, such as the one in your camera. An approximate reading for 120 cd/m^2 is f/4 and 1/30th at ISO 100. Alternatively, match the monitor brightness to a blank piece of paper under the bright viewing illumination by comparing the metered exposures.
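For the curious, the usual reflected-light meter relation is L = K * N^2 / (t * S). A rough sketch follows; the calibration constant K is my assumption (makers use roughly 12.5 to 14), and conventions for metering a white target can shift the implied reading by around a stop, so the comparison method above is the more robust approach:

    def luminance(N, t, S, K=12.5):
        """Luminance (cd/m^2) implied by a reflected-light meter reading.

        N: f-number, t: shutter time in seconds, S: ISO speed,
        K: meter calibration constant (assumed ~12.5 cd/m^2 here).
        """
        return K * N * N / (t * S)

    print(luminance(4.0, 1/30, 100))   # ~60 cd/m^2 under these assumptions
    print(luminance(4.0, 1/60, 100))   # ~120 cd/m^2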


Handy Tip: Velbon Ultra Luxi M (and possibly other similar models)

In case you find that the legs of your Velbon ULTRA LUXi M tripod are a bit floppy, you can tighten them using the following steps:

1. Remove the centre column


2. Insert a screwdriver or similar through the holes in the central hexagonal stub of the spider (the part to which the legs are connected), as in the photo above

3. Tighten by rotating clockwise; loosen by rotating anti-clockwise

By tightening the screw, more pressure is applied to the bearings between the legs and the spider, allowing the legs to stand out at an angle more reliably. By loosening the retaining nut all the way, the two halves of the central spider can be separated and the joint mechanism accessed.

Thursday 3 December 2009

Photographic Personal Philosophy

After about a decade of seriously pursuing photography, I would guess that I am in the "mid-life" of my photographic journey. Along the way, my thoughts and approach to photography have evolved: I have gone through different favoured subjects such as landscape, candid and portrait; used different media such as black and white film, slide film and digital; and shot with different purposes in mind, documenting my own travels and day-to-day experiences, covering other people's events (weddings included), working on commission, and building my own portfolio. It is useful to stop and reflect on how I have come to be where I am at the moment, and to codify my own personal photographic philosophy.

In essence, I would characterise it as "balance", though this could be said of life in general. I see this delineated in my photography, where balance applies to its main elements, which can be broadly identified as equipment, technique, vision and opportunity. It seems that much argument and distress is caused by failing to see the larger picture and concentrating on a single aspect almost to the exclusion of the rest.

Pronouncements like "equipment doesn't matter" miss the heart of the art and craft of photography. Equipment, technique, vision, opportunity: these are all ingredients which contribute to the final result. Depending on the type of photography you do, some of these may be more important than others, but the absence of any one of them severely impairs the final result. The question is at what point improving any one of these aspects ceases to make a difference; then it is a case of working on the ones which will.

For me, I am at the stage where getting more or better equipment would not improve my photos much. I'm lucky in that my photographic subjects are relatively undemanding of bodies and lenses. I do not shoot sports or wildlife, so I don't need huge lenses and fast bodies. Conversely, I cannot make the excuse that I could have done better if I had a more expensive lens or camera.

I've also developed my technique to suit what I do and the equipment I have. I am familiar with the operation of my equipment and I know its features and limitations. Through reading and experience, I have a fair idea of what shutter, aperture, and ISO are appropriate for the photographic situations I regularly encounter.

However, vision and opportunity are what I have to work on in order to improve the photos I take. One can become too comfortable with what one is proficient at, which can lead to staleness of vision. This is why it pays to go outside one's comfort zone and try something new. I had a go at taking bird photos, which was enjoyable, though I was under-equipped and unfamiliar with the technique. Street photography has always been a keen interest of mine, but I'm not really cut out for it, lacking the basic "chutzpah" required. I have the opportunity to get into studio photography, which would be another way of expanding my vision; there, photographs have to be made, not taken.

Unfortunately, opportunity is constrained by other commitments (the ones which pay the bills). Just going out and taking photos keeps you in practice and helps you maintain the photographic way of seeing. However, I could be making the most of the opportunities I do have by expanding my vision of what photos I could take.

Balancing these four main factors is the key to being a good photographer; being a successful one also requires a fifth ingredient, luck, though some would say that you make your own luck through perseverance.

Saturday 1 August 2009

Imaging at the Quantum Limit

Update 6th August 2009: Typos and clarification of assumed pixel size.

Just like 0-60mph times, the ability to take pictures of black cats down coal mines seems to be an overblown metric of the worth of a camera. For some photographers this may be an essential quality, but the image quality of any current DSLR at ISO 800 or below is perfectly acceptable for most purposes, and for printing at 100mm x 150mm even ISO 3200 is typically unobjectionable. But once we have tasted the forbidden fruit, the desire and expectation for ever better performance is stoked. Hence one might ask: what are the fundamental limits to low-light imaging?

There are of course practical issues, mainly to do with geometry and mass. I assume that we cannot build arbitrarily large lenses; an f/1.4 lens is about the limit for fast, affordable, mainstream lenses (notwithstanding f/1.2, f/1 or even f/0.7 designs). In any case, there is not much scope to extend the light-gathering properties of a lens: even if it were possible to fully enclose your subject within your imaging device, you would still only have a fixed amount of light to collect. Large lenses also weigh more, require greater precision in manufacture, and consequently cost significantly more than slower lenses.

Hence, the sensor is where significant gains can be made in allowing low-light photography. Fundamentally, the sensor samples the electromagnetic (EM) field at a spatio-temporal location, and the objective is to retrodict the value of the EM field from the output of the sensor. At very low light levels, quantum features of the EM field emerge: we can describe it as being made up of quanta, discrete packets of energy called photons. In fact, photography has always relied on this feature of light. Film consists of tiny silver halide crystals which can absorb photons of certain frequencies; the absorption of a photon triggers the nucleation of a silver metal grain, which is then amplified in the development process. In a digital camera, photons are absorbed by a silicon photodiode; each photon interacts with the semiconductor and its energy goes into liberating an electron. This electron is collected in the pixel: the more photons absorbed, the more charge builds up. At the end of the exposure, this charge is measured and its magnitude correlated with the amount of light which arrived at that pixel. Ideally, each photon arriving at the pixel would be absorbed to produce one electron, and each electron would then be counted exactly.

Hence, the main bottlenecks to greater low-light sensitivity are quantum efficiency (the probability that a photon incident on the sensor produces a photo-electron which is then stored in the photodiode) and read noise (the error in counting the collected photo-electrons). In principle, the quantum efficiency can be made close to 100% and the read noise can be made insignificant, i.e. we can count individual electrons with negligible error. Assuming this is the case, why can't we achieve arbitrarily "high ISO"? The problem lies with the quantum nature of the electromagnetic field. A conventional sensor tries to estimate the number of photons in the field, and a single measurement will give an integer, i.e. 0, 1, 2, ... Even if the true average value of the field is, say, 15.6 photons, a single measurement will give an answer of 16 or 15 most of the time, 17 or 14 less often; in general the outcome is random. Thus, from the actual measured result, we have only an approximate estimate of the true average photon number.
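A toy model of a single pixel makes this concrete; this is a sketch with assumed values, using NumPy's Poisson sampler to stand in for the quantum randomness:

    import numpy as np

    rng = np.random.default_rng(0)

    mean_photons = 15.6   # true average photon number at this pixel
    qe = 1.0              # quantum efficiency (idealised: every photon converted)
    read_noise = 0.0      # electrons RMS (idealised: perfect counting)

    # Ten identical exposures still give ten different integer counts:
    photons = rng.poisson(mean_photons * qe, size=10)
    electrons = photons + rng.normal(0.0, read_noise, size=10)
    print(photons)             # mostly 14-17, scattered around the mean of 15.6
    print(electrons.round(1))  # identical here, since the read noise is zero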

Noise in this case is the randomness in the estimation of the true photon number at each point of the picture. The signal-to-noise ratio determines the overall picture quality, and this quantity varies as the square root of the average photon number: for Poisson statistics, an average of N photons fluctuates by sqrt(N), so SNR = N/sqrt(N) = sqrt(N). One can simulate various amounts of noise corresponding to the average photon number received at the sensor (corresponding to ISO) and determine an acceptable amount of noise for various pictorial uses. We show various cases below (click the thumbnails to view the full-sized versions).



This represents the "noiseless" reference image.

This represents the case where the brightest pixel received 2550 photons and the average pixel about 459 (using the 18% grey rule).

This represents the case where the brightest pixel received 255 photons and the average pixel about 46.

This represents the case where the brightest pixel received 153 photons and the average pixel about 28.

This represents the case where the brightest pixel received 26 photons and the average pixel about 5.

In all the above, the only noise comes from the quantum nature of the light. Looking at the full versions of the images, the 255-photon version would give a reasonable print, and the 153-photon version would be acceptable in a pinch. In extreme cases, even the last would be recognisable.
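For anyone who wants to reproduce this, a minimal NumPy sketch follows. The filename is a placeholder, and treating 8-bit pixel values as linear photon counts ignores gamma encoding, so it is only an approximation of the simulation described above:

    import numpy as np
    from PIL import Image

    rng = np.random.default_rng()

    def simulate_shot_noise(path, max_photons):
        """Add pure photon (shot) noise to an 8-bit greyscale image.

        Pixel values are treated as proportional to the mean photon count
        (255 -> max_photons); one Poisson sample is drawn per pixel.
        """
        img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
        mean = img / 255.0 * max_photons          # expected photons per pixel
        noisy = rng.poisson(mean)                 # the quantum randomness
        out = np.clip(noisy * 255.0 / max_photons, 0, 255).astype(np.uint8)
        return Image.fromarray(out)

    # The cases shown above ("scene.jpg" is a placeholder filename):
    for n in (2550, 255, 153, 26):
        simulate_shot_noise("scene.jpg", n).save(f"shot_noise_{n}.jpg")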

Assuming a 6 micron pixel, the 255-photon-maximum picture represents an ISO of approximately 75,000; correspondingly, the 153-photon maximum represents roughly ISO 128,000, and the last example roughly ISO 750,000.

We have not considered other types of measurement, such as homodyne detection. One could imagine a measurement whose outcomes correspond not to photon number but to "coherent" states, but whether this would provide higher sensitivity is not clear.

Thursday 23 July 2009

Pining for the A700 Successor

There has been much anticipation of upcoming Sony Alpha cameras, especially after the debut of the refreshed 2xx/3xx series led many to fear a dumbing down of the higher-end models. A recent leak of the A500/550/850 model names on the US Sonystyle support site has stirred speculation about what these models represent and what features they will introduce or omit. A recurring theme in many of the online discussions is the fate of the A700 and its lineage. Many are anxious that the current "7-series" camera will be the last of its breed, either dumbed down in the form of the A500 or priced out of the reach of its traditional market by a premium A850 model.

Another interpretation looks back at the significance of the 7-series camera in the Alpha mount. Starting from the Minolta 7000 (first AF SLR system), through the 7000i (creative expansion cards), the 7xi (power zoom), the 700si (multi-predictive focus), the 7 (rear LCD, control setup), the 7D (Sensor Anti-Shake) and the A700 (Quick Navi), each new 7-series camera has set the tone for that generation of cameras. What can we infer from this? The current fad in DSLRs is live view and video, two features which it was widely believed Sony would implement quickly. That it has decided to go a different route from the mainstream indicates that it sees drawbacks in the implementation and utility of the traditional approaches. It may be that Sony is waiting until its development of these features is polished enough before it introduces main-sensor live view and video into its DSLR line with the true successor of the A700. A guess would put this around the time of PMA 2010.

Where does this leave the A500/550/850? The Alpha line-up would benefit from a camera appealing to those upgrading from the A2xx/3xx series who feel that the A700 is too long in the tooth; hence the need for the A5xx cameras. The A850 is more of a mystery, though it is certainly possible that Sony will try to open up the "affordable full format" market by introducing a camera with a 36mm x 24mm sensor in a body intermediate in build between the A700 and A900. This could mean a viewfinder with specifications akin to the film 7 rather than the 9 series, and a body derived from the A700. The 8000i and 800si point towards a camera which refines the features of the 7-series but with an injection of 9-series build.

Despite the economic downturn, Sony seems committed to the Alpha line and is playing for the long term. This means capturing new DSLR users (through the A2xx/3xx cameras), keeping them with the A5xx/7xx cameras, and having halo products in the form of the A8xx/9xx cameras. With a solid line-up of cameras and a growing lens catalogue, Sony is looking to establish itself as a credible alternative to the big two. Hopefully its refusal to jump on the main-sensor live view/video bandwagon will mark it as a serious player interested in providing photographic tools, not gadgets.

Update 1/8/2009: Before I had a chance to post the above, the A850 manual was leaked. Basically, the A850 is the A900 but with a 98% viewfinder and 3fps instead of 5fps; indications are that it otherwise has the same body and sensor.

Some have said that this will kill A900 sales, and that a body with a lower pixel count and a sensor optimised for low light would have been preferable. I see the possibility that Sony will introduce an upgraded version of the A900, and I also note the curious lack of an A800. By analogy with the A300/350 duo, it would make sense for Sony to release an A800 based on the chassis of the A900/850 but with a 16MP sensor, possibly with a higher frame rate. The next generation of the 7-series may still herald a truly mainstream 36mm x 24mm full-format sensor camera.