Tuesday, 23 February 2010
Print Pricing
I have two issues with this. One is that the photographer is handcuffing themselves at the start: they have to guess the tradeoff between price and exclusivity. The amount of money a print edition can generate is the product of the price per print and the number of prints sold, the latter capped from the outset by the edition quantity, and the former set as high as possible but not so high that prints in the edition go unsold. The problem for the photographer is that if a particular print is unexpectedly popular, the price may have been set too low at the beginning, and the photographer misses out on the volume of prints they could have sold had the edition quantity been higher. If the photographer sets the price too high, there are unsold prints in the edition, which again is a waste, considering the greater exclusivity that could have been guaranteed if the edition quantity had been set lower, justifying a higher selling price. Finding that balance between price and exclusivity is tricky, especially for someone new to the field.
The other issue I have is that limiting the number of prints to a fairly arbitrary fixed number seems contrary to the spirit of photography, where there are no natural constraints on the reproduction of an image, in contrast to painting, where one can usually point to an ur-painting. Limited editions try to mimic the exclusivity that buying a Monet or a van Gogh gives a collector, which is something I am uncomfortable with.
As a compromise, I am developing a system of print pricing which allows in principle an unlimited number of prints, yet in practice should be self-limiting. If an image is unexpectedly popular, the extra demand can be catered for. This is achieved by using a sliding scale: low-numbered prints are less expensive than higher-numbered prints. Early buyers of a print are rewarded by a smaller entry cost; if a print becomes popular, late buyers pay extra. This should encourage speculation while not shutting out collectors, who can buy any print at any stage, albeit at prices reflecting the popularity of the image.
For simplicity, I have adopted an exponential pricing structure. Print N=0 is the photographer's proof version and is nominally set at a price P. Each additional print is numbered and priced at a fixed proportion r of the price of the preceding print, i.e. print number N is priced at P*r^N, where N=1 is the first print to be sold. There are two parameters to be set, P and r. The nominal initial price P sets an overall scale (perhaps depending on the size of the print, e.g. P=£10 for a small print, £100 for a large print), whereas the rate r sets a scale for the exclusivity. As an example, if I set P=£100 and r=1.023293, then the first sold print would be £102.33 whereas the 100th print would be £1000. If I managed to sell the 400th print, it would be £1 million. Naturally this would limit the edition quantity, depending on just how popular the photo was.
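To make this concrete, here is a minimal sketch of the pricing rule in Python (the function name and parameter values are illustrative only, using the example figures above):

```python
def print_price(n, base_price=100.0, rate=1.023293):
    """Price of print number n under exponential pricing: P * r**n.

    n = 0 is the photographer's proof at the base price P; n = 1 is
    the first print sold. This rate is very close to 10**(1/100), so
    the price rises roughly tenfold every 100 prints.
    """
    return base_price * rate**n

for n in (1, 100, 400):
    print(f"print {n}: £{print_price(n):,.2f}")
# print 1: £102.33;  print 100: ~£1,000;  print 400: ~£1,000,000
```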
For small prints, it may not make sense to have limited editions; flat pricing may cater better to non-collectors. But for large prints, this pricing structure may have benefits for both the photographer and the buyer, as it naturally self-limits to a level dictated by the popularity of the image.
Thursday, 18 February 2010
Epson 2100 Print Head Alignment Woes
I probably compounded things by running the print head alignment utility using photo paper instead of plain paper. Also, it is recommended that high speed printing be turned off, something which I didn't make sure of. Hence, I probably managed to put in totally wrong settings for this part of the alignment test.
Trying to get back to a reasonable baseline has been problematic. From a support chat with Epson, it seems that the alignment settings are actually stored in the printer, but I couldn't find out whether there is a sequence of printer button presses which would reset it to the defaults. In the end, the only "solution" seems to be to uninstall the printer driver, reinstall it, set up the default printing options to disable high speed printing (I turned off finest detail and smoothing for good measure), run the utility again, and try to stick to the default numbers: (8,8) for the vertical line test, (4) for the horizontal banding test, and (4,4,4) for the graininess test.
Update: The actual problem was that the paper could not absorb the ink fast enough, leading to coalescence. After a lot of experimenting, I found that by using a different media setting (Watercolour paper at 1440dpi) and reducing the ink density by 15%, I could limit the coalescence. The print quality is slightly reduced and the colours aren't accurate any more, e.g. red prints as orange. This requires a profile for the ink+paper combination, which I managed to create using the ColorMunki. The final results are more hue-accurate, but the density is a bit higher than I am used to. The detail still seems to be there in the shadows but requires quite a lot of light to see. A bit more experimenting with media settings might be required to get better results.
Thursday, 17 December 2009
Notes on Fisheye Lens Use for Pictorial Photography

I recently acquired a Sigma 10mm/2.8 fisheye lens, mainly for panography, to replace the Peleng 8mm fisheye. I haven't really used a fisheye lens for "straight" (non-panoramic) photography but thought it could be interesting to try some street and candid photography with such a lens. The main challenge is that the wider the lens, the closer you usually have to get to your subject to convey a sense of perspective. So with a lens with a diagonal field of view of 180 degrees, there would be a steep learning curve to make the best use of it.
I decided to hop on a train to Edinburgh; it would also provide a slight change of scene from my usual haunts. One thing Edinburgh can keep is the plethora of tourists it attracts compared to Glasgow. The streets throng with them, but one good aspect is that it was easier to blend in with my DSLR and picture-taking antics.
The huge field of view presents a few technical issues. Depth of focus (not to be confused with depth of field) is very small, so AF, on my camera body at least, has a tendency to front-focus. Conversely, the huge depth of field means that this is not usually a problem in practice. It is standard practice to manually focus the lens and then tape down the focus ring. Even at f/2.8 with the focus set close to infinity, this is sufficient for most shooting circumstances. Stopping down to f/5.6 (the optimum aperture) gives a depth of field stretching from 45cm all the way to infinity.
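That 45cm figure is consistent with the usual thin-lens hyperfocal approximation. A quick sketch, assuming a 0.02mm circle of confusion for APS-C (fisheye geometry makes this only a rough guide):

```python
def hyperfocal_mm(focal_mm, f_number, coc_mm=0.02):
    """Hyperfocal distance H = f^2 / (N * c) + f (thin-lens approximation).

    Focusing at H gives acceptable sharpness from H/2 to infinity.
    coc_mm is the circle of confusion; 0.02mm is a common choice
    for APS-C sensors.
    """
    return focal_mm**2 / (f_number * coc_mm) + focal_mm

H = hyperfocal_mm(10, 5.6)      # Sigma 10mm at f/5.6
print(f"hyperfocal: {H/1000:.2f} m, near limit: {H/2000:.2f} m")
# hyperfocal: 0.90 m, near limit: 0.45 m, i.e. sharp from ~45cm to infinity
```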
Another issue with such a large field of view is metering. Most outdoor photos will include the sky, and it will be a large part of the image. Using evaluative metering, the camera will tend to set exposure for the subject and hence blow out the sky. Exposure compensation is recommended to balance the two. I find that centre-weighted metering with exposure compensation depending on the scene gives the most reliable results.
Of course, the most noticeable aspect of a fisheye lens is the geometry of the projected image. Fisheye projections are a class of mappings between the incident angle of a light ray from the scene and the distance on the image plane from the lens axis. The canonical fisheye mapping is a linear map between angle and distance. Other similar mappings may deviate slightly from this linear relation but may preserve other geometric features such as angles (conformal maps) or areas (equal-area maps). A rectilinear map (a normal lens preserving straight lines) has a singularity at the 90 degree incident angle. The Sigma 10mm seems to be close to a "classic" fisheye mapping between incident angle and distance on the image plane.
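For reference, the idealised forms of these mappings can be written down and compared; a minimal sketch, using an illustrative 10mm focal length (a real lens will deviate somewhat from any of these ideal forms):

```python
import math

# Image-plane radius r (mm) of a ray at incident angle theta (radians)
# for a lens of focal length f (mm), under the common idealised mappings.
def r_equidistant(f, theta):    # "classic" fisheye: linear in angle
    return f * theta

def r_equal_area(f, theta):     # equal-area (equisolid angle)
    return 2 * f * math.sin(theta / 2)

def r_stereographic(f, theta):  # conformal (angle-preserving)
    return 2 * f * math.tan(theta / 2)

def r_rectilinear(f, theta):    # preserves straight lines
    return f * math.tan(theta)  # blows up as theta approaches 90 degrees

theta = math.radians(90)        # edge of a 180-degree diagonal field of view
print(r_equidistant(10, theta))   # ~15.7 mm
print(r_equal_area(10, theta))    # ~14.1 mm
print(r_stereographic(10, theta)) # 20.0 mm
# r_rectilinear(10, theta) is effectively infinite: the 90-degree singularity
```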
This fisheye geometry looks unusual to us because we are used to rectilinear lenses, which preserve straight lines. A rectilinear mapping has a practical limit of about 120 degrees field of view due to stretching (and geometrical vignetting, the fact that the same amount of light is spread over a greater amount of image area far off the lens axis) near the corners. Fisheye mappings are thus better able to cover large angles of view (the 6mm Nikon fisheye lens covers 220 degrees on 35mm format), opening up new possibilities for creative imaging.
The challenge is to use the large field of view without the "look" becoming clichéd, but that is an issue with any technique in photography.
Monday, 14 December 2009
My prints don't match my display!
A common problem with LCD displays is that they are often too bright by default, sometimes as high as 300 cd/m^2. The recommended brightness of an LCD display is around 120 cd/m^2 or less, so that pure white on the screen is about the same brightness as a plain white sheet of paper under good lighting. By editing your photos with your LCD set at this much lower brightness, you can then adjust the dark areas of the photo to match the printed output.
The second part of the problem is when viewing prints. The ability of the human eye to pick out detail depends strongly on the ambient lighting conditions, since one of their effects is to change pupil size. The ideal pupil size is around 2-3mm: larger than this and aberrations dominate; smaller and diffraction takes over. To get the print and screen to look similar, the print brightness should match that of your on-screen version. This usually means quite bright light illuminating the print. If you know where the print will be viewed, you may try to optimise the print density for those conditions.
To set the brightness of your monitor, a hardware calibrator is usually required. However, you may be able to use a light meter instead, like the one in your camera. An approximate reading for 120 cd/m^2 is f/4 and 1/30th at ISO100. Alternatively, match the monitor's brightness to a blank sheet of paper under the bright viewing illumination by comparing their metered exposures.
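For those who want to check the arithmetic, a reflected-light meter reading can be converted to luminance using the standard exposure relation; this sketch assumes the common calibration constant K of about 12.5, and since real meters vary, treat the result as good only to within a stop or so:

```python
def luminance_cdm2(f_number, shutter_s, iso, K=12.5):
    """Approximate luminance (cd/m^2) from a reflected-light meter reading.

    Uses the standard exposure relation N^2 / t = L * S / K, rearranged
    to L = K * N^2 / (t * S). K is the meter calibration constant;
    12.5 is a common value, but it varies between manufacturers.
    """
    return K * f_number**2 / (shutter_s * iso)

# Metering a pure-white area of the screen:
print(luminance_cdm2(4.0, 1/30, 100))   # ~60 cd/m^2
print(luminance_cdm2(5.6, 1/30, 100))   # ~120 cd/m^2
```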
Handy Tip: Velbon Ultra Luxi M (and possibly other similar models)
1. Remove the centre column

2. Use a screwdriver or similar and insert through the holes in the central hexagonal stub of the spider (the bit to which the legs are connected), as in the photo above
3. Tighten by rotating clockwise, loosen by rotating anti-clockwise
By tightening the screw, more pressure is applied to the bearings between the legs and the spider, and this will allow the legs to stand out at an angle more reliably. By loosening this retaining nut all the way, the two halves of the central spider can be separated and the joint mechanism accessed.
Thursday, 3 December 2009
Photographic Personal Philosophy
In essence, I would characterise it as "balance", though this could be said of life in general. I see this delineated in my photography, where balance is applied to its main elements, which can be broadly identified as equipment, technique, vision and opportunity. It seems that much argument and distress is caused by failing to see the larger picture and concentrating on a single aspect almost to the exclusion of the rest.
Pronouncements like "equipment doesn't matter" miss the heart of the art and craft of photography. Equipment, technique, vision, opportunity: these are all ingredients which contribute to the final result. Depending on the type of photography you do, some of these may be more important than the others, but the absence of any one of them severely impairs the final result. The question is at what point improving any one of those aspects ceases to make a difference; then it is a case of working on the ones which will.
For me, I am at the stage where getting more or better equipment would not improve my photos much. I'm lucky in that my photographic subjects are relatively undemanding of bodies and lenses: I do not shoot sports or wildlife, so I don't need huge lenses and fast bodies. Conversely, I cannot make the excuse that I could have done better if I had a more expensive lens or camera.
I've also developed my technique to suit what I do and the equipment I have. I am familiar with the operation of my equipment and I know its features and limitations. Through reading and experience, I have a fair idea of what shutter, aperture, and ISO are appropriate for the photographic situations I regularly encounter.
However, vision and opportunity are what I have to work on in order to improve the photos I take. One can become too comfortable with what one is proficient at, which can lead to staleness of vision. This is why it pays to go outside one's comfort zone and try something new. I had a go at taking bird photos, which was enjoyable, though I was under-equipped and unfamiliar with the technique. Street photography has always been a keen interest of mine, but I'm not really cut out for it, lacking the basic "chutzpah" required. I have the opportunity to get into studio photography, which would be another way of expanding my vision: there, photographs have to be made, not taken.
Unfortunately, opportunity is constrained by other commitments (ones which pay the bills). Just going out and taking photos keeps you in practice and helps you maintain a photographic way of seeing. However, I could make better use of the opportunities I do have by expanding my vision of what photos I could take.
Balancing these four main factors is the key to being a good photographer; being a successful one also requires a fifth ingredient, luck, though some would say that you make your own luck through perseverance.
Saturday, 1 August 2009
Imaging at the Quantum Limit
Update 6th August 2009: Typos and clarification of assumed pixel size.
Just like 0-60mph times, the ability to take pictures of black cats down coal mines seems to be an overblown metric of a camera's worth. For some photographers this may be an essential quality, but for the majority of situations the image quality of any current DSLR at ISO800 or below is perfectly acceptable. For printing at 100mm x 150mm, even ISO3200 is typically unobjectionable. But once we have tasted the forbidden fruit, the desire and expectation for ever better performance is stoked. Hence one might ask: what are the fundamental limits to low-light imaging?
There are of course practical issues, namely to do with geometry and mass. I assume that we cannot build arbitrarily large lenses; an f/1.4 lens is about the limit for fast, affordable mainstream lenses (notwithstanding f/1.2, f/1 or even f/0.7 designs). There is not much scope in any case to extend the light-gathering properties of a lens: even if it were possible to fully enclose your subject within your imaging device, you would still only have a fixed amount of light to collect. Large lenses also weigh more, require greater precision in manufacture, and consequently cost significantly more than slower lenses.
Hence, the sensor is where significant gains can be made in allowing low-light photography. Fundamentally, the sensor samples the electromagnetic (EM) field at a spatio-temporal location. The objective is to retrodict the value of the EM field from the output of the sensor. At very low light levels, quantum features of the EM field emerge: we can describe it as being made up of quanta, or discrete packets of energy, called photons. In fact, photography has always relied on this feature of light. Film consists of tiny silver halide crystals which can absorb photons of certain frequencies. The absorption of a photon triggers the nucleation of a silver metal grain, which is then amplified in the development process. In a digital camera, a photon is absorbed by a silicon photodiode; the photon's energy goes into liberating an electron. This electron is collected in the pixel; the more photons absorbed, the more charge builds up. At the end of the exposure, this charge is measured and its magnitude correlated with the amount of light which was present at that pixel. Ideally, each photon arriving at the pixel would be absorbed, producing one electron, and then each electron would be counted exactly.
Hence, the main bottlenecks to greater low-light sensitivity are quantum efficiency (the probability that a photon incident on the sensor ends up producing a photo-electron which is then stored in the photodiode) and read noise (the error in counting the number of collected photoelectrons). In principle, the quantum efficiency can be made close to 100% and the read noise can be made insignificant, i.e. we can count individual electrons with negligible error. Assuming this is the case, why can't we achieve arbitrarily "high ISO"? The problem lies with the quantum nature of the electromagnetic field. A conventional sensor tries to estimate the number of photons in the field, and a single measurement will give an integer, i.e. 0, 1, 2, .... Even if the true average value of the field is, say, 15.6 photons, a single measurement will give 16 or 15 most of the time, 17 or 14 less often, but in general the outcome is random. Thus, from the actual measured result, we have only an approximate estimate of the true average photon number.
Noise in this case is given by the randomness in the estimation of the true photon number at each point of the picture. The signal-to-noise ratio determines the overall picture quality, and for this shot noise it varies as the square root of the average photon number. One can simulate various amounts of noise for a given average photon count at the sensor (which maps onto ISO) and determine an acceptable amount of noise for various pictorial uses. We show various cases below (click the thumbnails to view the full sized versions).
(Simulated images with maximum photon levels of 255, 153, and 26 per pixel.)
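For anyone wanting to reproduce this, here is a minimal sketch of such a simulation, assuming pure Poisson (shot) noise and perfect detection (the function and parameter names are mine, not from any particular tool):

```python
import numpy as np

def simulate_shot_noise(image, max_photons, rng=None):
    """Add photon shot noise to a noiseless image.

    `image` is a float array scaled 0..1 giving the true scene
    intensity; `max_photons` is the mean photon count at full white.
    Each pixel's count is drawn from a Poisson distribution about its
    mean; that randomness is the only noise source modelled here.
    """
    rng = rng or np.random.default_rng()
    counts = rng.poisson(image * max_photons)
    return np.clip(counts / max_photons, 0.0, 1.0)

# Signal-to-noise ratio at mid grey for the three cases shown above:
for n_max in (255, 153, 26):
    snr = np.sqrt(n_max * 0.5)   # SNR = sqrt(mean photon number)
    print(f"max {n_max} photons: mid-grey SNR ~ {snr:.1f}")
```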
In all the above, the only noise comes from the quantum nature of the light. Looking at the full versions of the images, the 255 photon version would give a reasonable print, and the 153 photon version would be acceptable in a pinch. In extreme circumstances, even the last example would be recognisable.
Assuming a 6 micron pixel, the 255 photon maximum per pixel picture represents an ISO of approximately 75,000; correspondingly, the 153 photon maximum gives about ISO 128,000, and the last example about ISO 750,000.
We have not considered other types of measurement, such as homodyne measurement. We could think of doing a measurement whose outcomes correspond not to photon number but to "coherent" states but whether this would provide higher sensitivity is not clear.
Thursday, 23 July 2009
Pining for the A700 Successor
Another interpretation looks back to the significance of the 7-series camera in the alpha mount. Starting from the Minolta 7000 (the first AF SLR system), through the 7000i (creative expansion cards), the 7xi (power zoom), the 700si (multi-predictive focus), the 7 (rear LCD, control setup), the 7D (Sensor Anti-Shake), and the A700 (Quick-Navi), each new 7-series camera has set the tone for that generation of cameras. What can we infer from this? The current fad in DSLRs is live view and video, two features which it was widely believed Sony would implement quickly. That it has decided to go a different route from the mainstream indicates that it sees the traditional approaches as suffering drawbacks in their implementation and utility. It may be that Sony is waiting until its development of these features is polished enough before introducing main-sensor live view and video into its line of DSLRs with the true successor of the A700. A guess would put this around the time of PMA 2010.
Where does this leave the A500/550/850? The Alpha line-up would benefit from a camera appealing to those upgrading from the A2xx/3xx series who feel that the A700 is too long in the tooth; hence the need for the A5xx cameras. The A850 is more of a mystery, though it is certainly possible that Sony will try to open up the "affordable full format" market by introducing a camera with a 36mm x 24mm sensor in a body intermediate in build between the A700 and A900. This could mean a viewfinder with specifications akin to the film 7, rather than the 9 series, and a body derived from the A700. The 8000i and 800si point towards a camera which refines the features of the 7-series but with an injection of 9-series build quality.
Despite the economic downturn, Sony seems to be committed to the Alpha line and is playing for the long term. This means capturing new DSLR users (through the A2xx/3xx cameras), keeping them with the A5xx/7xx cameras, and having halo products in the form of the A8xx/9xx cameras. With a solid line-up of cameras and a growing lens catalogue, Sony is looking to establish itself as a credible alternative to the big two. Hopefully its resistance to jumping on the main-sensor live view/video bandwagon will mark it as a serious player interested in providing photographic tools, not gadgets.
Update 1/8/2009: Before I had a chance to post the above, the A850 manual was leaked. Basically, the A850 is the A900 but with a 98% viewfinder and 3fps instead of 5fps. Indications are that it otherwise has the same body and sensor.
Some have said that this would kill A900 sales, and that a lower pixel-count, low-light-oriented body would have been better. I see the possibility that Sony will introduce an upgraded version of the A900, but I also note the curious lack of an A800. By analogy with the A300/350 duo, it would make sense for Sony to release an A800 based on the chassis of the A900/850 but with a 16MP sensor, possibly with a higher frame rate. The next-generation 7-series may still herald a truly mainstream 36mm x 24mm full-format sensor camera.
Thursday, 25 September 2008
Second Thoughts on the A900
The Alpha A900 was released two weeks ago and turned out much as expected. The build is a step up from the A700, the viewfinder is superb at 100% coverage and 0.74x magnification, and the effective incorporation of SteadyShot with the 36mm x 24mm sensor has silenced the naysayers. Unfortunately, due to unrepresentative JPEG output, the true image quality of the 24.6MP sensor has been overlooked. The forte of the A900 is high resolution at low ISO, something which seems to be forgotten by the (partisan) critics. Suitable development of the RAW files has shown that image quality is comparable with the A700 at the per-pixel level, unsurprising given the similar pixel pitch and presumably similar pixel architecture and readout. Downsizing to a half or a quarter of the image size seems to give acceptable results up to ISO3200. There is no doubt it will be an effective photographic tool for its intended use.
However, for my own personal use I shall wait. It is not that the A900 is not good enough, it is that the A700 (which I have) satisfies my needs so well. For single shot performance, 12MP is sufficient for most of my needs. For ultra-high resolution panoramics, stitching several 12MP shots serves me well, though having twice the resolution would halve the number of required shots. I can only hope that Sony will offer a new model which complements the A900, one which would be optimised for lower light. The different features I would ask for are:
- ~16MP 36mm x 24mm sensor and column-ADC with slow-scan low-noise readout
- In-built flash for wireless flash control
- Auto-exposure metering with non-AF lenses
- Extended exposure bracketing (7 exposures at 1EV steps)
I do not need live view or video recording, but their inclusion would be a bonus as long as it did not compromise photography. The A900 viewfinder and 5fps would be welcome, though. The greatest barrier to upgrading to the A900 is the compulsion to get the ZA 16-35mm and ZA 24-70mm lenses to make the most of the high resolution sensor. There are only so many organs I can sell :-).
Thursday, 4 September 2008
A900 Quick Comments

Image courtesy of Xitek
The leaks have begun just under a week before the official announcements on the 9th/10th of September. Leaked ads from National Geographic and a Danish photography magazine have confirmed the name as Alpha 900. Other salient details include:
- 24.6MP CMOS EXMOR sensor (as announced earlier in the year)
- Dual BIONZ processor
- Intelligent Preview
- 100% Viewfinder 0.74x
- 3 inch, VGA Hybrid LCD (Same as the A700 presumably)
- 9-point Centre Dual-Cross AF (with f2.8 sensor and 10-point wide-area assist)
- 5 fps
- SteadyShot Inside
It's pretty much what was expected. I'll make a few speculations and comments on some of the details.
As the previously announced 24.6MP 36mm x 24mm sensor was specified with a 6.3fps (12 bit) maximum frame rate, the 5 fps seems reasonable. I hope, though don't expect, that a low-noise, "slow-scan" mode could be employed which lowered the frame-rate but reduced read-noise by operating the ADCs closer to their optimal corner frequency. Unless the dynamic range of the sensor can exceed 12 bits, I do not see a compelling reason to use more than a 12 bit RAW bit depth.
The data throughput of the A900 is twice that of the A700; ganging up two BIONZ processors is a simple way of handling this. It will be interesting to see what processing Sony decides to implement, especially after the "cooked RAW" issue.
Intelligent Preview could indicate an off-the-main-sensor live view, with Live View being reserved for the fast-AF style of live view as in the A350 and A300. Intelligent Preview could possibly be a way of differentiating (delineating) this type of pre-capture composing. It is presumably not suitable for quickly moving subjects and more suitable for posed, macro and landscape style photography.
The viewfinder coverage and magnification is quite good. In comparison, the Dynax 9 film camera had a 100% 0.73x viewfinder.
The rear LCD seems to share the same specifications as the one in the A700. I suspect that it may be hinged similarly to the A300/350 as a photo of a prototype spotted in Canada could have shown such a feature (unfortunately the photo was not clear enough to positively identify this).
The AF set-up is not entirely clear, but it could be interpreted as meaning 9 dual-cross sensors, of which the central one is f/2.8 sensitive, and a further 10 (line) sensors to assist wide-field AF tracking. I speculate that the 9 dual-cross (x and + type) sensors cover the central rectangle of the frame corresponding to an APS-C sensor. A further 10 sensors would lie outside this region to help with peripheral subjects. Having the 9 most sensitive AF sensors covering the central region would be ideal for a 10/12MP "crop mode", either for use with DT lenses or to boost frame rates (though I doubt the latter, as this would mainly be constrained by the mirror/shutter assembly, though some sort of electronic shuttering in Intelligent Preview mode could be possible). It is ambiguous at this stage whether the f/2.8 sensitivity applies only to the single centre dual-cross sensor or extends to all 9 dual-cross central sensors. Edit 1: Possibly, in keeping with how the A700 AF system is described, only the centre AF sensor is dual-cross; the other 8 main AF sensors could be line or single-cross sensors. The 10 additional wide-field sensors may not be user-selectable. Edit 2: Dual-cross probably doesn't mean + and x, but two crosses side by side, as on the A700.
"SteadyShot Inside", as opposed to SuperSteadyShot, is an intriguing change of terminology. I suspect that it is merely a marketing rebranding of SuperSteadyShot to eliminate the "Super" as it may not fit in with the image of cameras at the high end of the market. Time will tell whether the effectiveness of the image stabilisation on the A900 is comparable to the systems employed on the APS-C sensor cameras.
Of course, we have no reliable indication as to the pricing position of the camera, especially in comparison to the Nikon D700. It is expected that Sony will announce several new lenses to complement the existing range, especially the ZA 24-70mm/2.8 SSM. A ZA 16-35mm/2.8 SSM is strongly rumoured to be among these. A replacement for the 70-200mm/2.8 G SSM is an outside possibility. Telephoto lenses may also be announced, especially as the 300mm/2.8 G SSM is the longest (non-mirror) prime lens in the Sony line-up.
It will be interesting to see how the A900 is received, not by the minority of online forum posters, but by the general photographic community. The A900 could serve as a "halo" product, much as the ~US$8000 Canon 1DsIII does for Canon. Its importance could extend well beyond its immediate sales, as a symbol of Sony's commitment to the Alpha system. I will be eagerly awaiting the first reports from the hands of real photographers. For most, the A900 is not appropriate for their style of shooting, but for those who can make the most of the resolution (with nice glass), it looks at this point to be a very promising tool.