LEICA

Alternative views on the Leica world by Erwin Puts

new book finally ready

My new book about the Leica world in the 21st century is going to the printer and will be available in six weeks' time. It has the usual dimensions (17 × 24 cm), full-colour pages, a hard cover and so on. The price is not yet settled. It will be a very limited edition!

Anyone who already indicated their interest in buying the book will get a copy.

If you wish to be included and have not yet informed me of your interest in buying, please do so now.


Why AF is impossible for M-lenses



When analyzing a lens prescription, it becomes evident that not every lens element is a good candidate for autofocus movement. The focusing element in one of the current L-mount lenses weighs 3.8 grams and is moved over 640 steps by a stepper motor. The comparable element that would have to move in the current Summicron 2/28 mm ASPH weighs 47 grams. Not only is this weight too high, but there is also not enough space in the mount to accommodate the required movement and the motor, however small. The 640 steps explain why AF is both more accurate than manual focusing and much faster.
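The accuracy implied by those 640 steps can be illustrated with a back-of-the-envelope calculation. Only the step count comes from the text above; the total travel of the focusing element and the repeatability of a manual focus ring are hypothetical assumptions chosen purely for illustration.

```python
STEPS = 640               # stepper-motor positions (figure from the text)
TRAVEL_MM = 2.0           # assumed total travel of the focusing element (hypothetical)
MANUAL_REPEAT_MM = 0.05   # assumed repeatability of a manual focus ring (hypothetical)

# Each step moves the element by travel / steps.
step_size_mm = TRAVEL_MM / STEPS

print(f"per-step movement: {step_size_mm * 1000:.2f} micrometres")
print(f"finer than the assumed manual ring by roughly {MANUAL_REPEAT_MM / step_size_mm:.0f}x")
```

Under these assumed numbers, each step moves the element by only a few micrometres, which is why a stepper-driven element can be positioned more repeatably than a hand-turned helicoid.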
Most Leica users, however, take pictures of stationary subjects, where AF is hardly needed.
Incorporating AF in the M camera would require a new generation of lenses. The M lenses have already grown to a significant volume: the quest for wide apertures and high performance comes at a price. This balancing act might be the reason why L-mount lenses are limited in aperture; the diameter of the L-mount itself would not be a hindrance. Compactness and performance stand in an inversely proportional relationship.
The performance parameters of current Leica lenses are tuned to the MTF curves. This benefits the characteristics that define image clarity: edge contrast (often referred to as micro-contrast) and resolution. The price to be paid is the uniformity of the reproduction across the frame.
The overall image quality is the result of a balance of aberrations. High MTF values imply that the residual aberrations are very well corrected through a smart balance of fifth-order aberrations. The reverse of this strategy yields a uniform image quality instead. It is comparable to the sharpening algorithms in post-processing software: they all function in the same way and the results are interchangeable.
Older generations of optical designers had fewer options to work with and used every trick they knew to get a decent result. These lenses were optically inferior, but they showed a character that modern lenses do not have.

New Pocket Guide

The recently announced Pocket Guide has received an excellent reception. Most buyers value the content, especially the extended history of the company. The many listings and tables give succinct information about Leica products since the early 1950s. The most common complaint is the lack of coverage of the older SLR Leica cameras. I have omitted this information (it is in my e-books: Leica Chronicle and Leica Compendium) because these models, with the exception of the Leicaflex and the R8/9, are neither evolutionary landmarks nor particularly popular these days. The R3 has much Leitz DNA under the hood; the R4 to R7 are upgraded Minolta models. This strategy of rebranding is also visible in the many digital compacts from Panasonic. They are good sellers, as the serial numbers in the Pocket Guide indicate.

The question is now: when is a Leica-branded camera an original Leica model? The trend within the Leica company is moving away from the production of cameras and lenses towards the design of the artefacts. Here the Apple company is the most obvious example: hardly anyone doubts that an Apple product is a genuine Apple product, even though most components and the assembly are 'made in China'.

The design and the quality of the software are the main elements that explain the attraction of the Apple products. Leica is copying this strategy with a simplified design style and a high image performance. While the design makes the product unique, the same can no longer be claimed for the image performance.

The danger of algorithms



Photography is a simple technology. Establishing the correct exposure is not rocket science: the brightness value of the scene can be matched to the light sensitivity of the emulsion or imager. This match assumes that the brightness contrast between the dark and light parts of the scene has a numerical value of 1:160, and that the brightness levels (the tonal range) are evenly distributed across the scene. Compensating for a bias (many dark or bright tones, higher or lower contrast) is easy, especially with cameras fitted with imagers, because the bias compensation can then be matched to the characteristics of the imager itself; the variable of the developer is no longer part of the equation. When film emulsions were the norm, the method of compensation had to assume a standard emulsion and its response.
The compensation is limited, perhaps plus or minus one stop, or at most two. Any algorithm can handle this task without a problem; exposure algorithms were already effectively implemented in cameras before the digital era.
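The clamped-compensation logic described above can be sketched in a few lines. This is a minimal illustration, not an actual camera algorithm: the function names and the simple additive metering model are assumptions, while the plus/minus-two-stop limit and the 1:160 scene contrast come from the text.

```python
from math import log2

SCENE_CONTRAST = 160                       # 1:160 brightness range from the text
SCENE_RANGE_STOPS = log2(SCENE_CONTRAST)   # the same range expressed in stops

def clamp_compensation(bias_ev: float, max_stops: float = 2.0) -> float:
    """Limit the exposure bias to the plus/minus two stops described above."""
    return max(-max_stops, min(max_stops, bias_ev))

def compensated_exposure(metered_ev: float, scene_bias_ev: float) -> float:
    """Metered EV adjusted by a clamped bias (positive bias = brighten)."""
    return metered_ev + clamp_compensation(scene_bias_ev)

print(round(SCENE_RANGE_STOPS, 1))         # 7.3
print(compensated_exposure(12.0, 1.0))     # within range: 13.0
print(compensated_exposure(12.0, 3.5))     # clamped to +2 stops: 14.0
```

The point of the clamp is the one made above: the correction an exposure algorithm must apply is small and bounded, which is why the task was already solvable long before digital cameras.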
When using emulsions, there is always a risk involved, and a conscious decision by the photographer/operator is required to reduce it. Digital techniques have reduced this risk to a minimal level. The photographer with a digital camera relies on the algorithms to take risk-free images.
It might be better to rephrase the standard analog-digital dichotomy as a risk-taking versus a risk-averse strategy. Risk-taking is part of the human condition and helps you grow; risk aversion is the best prescription for creative stagnation. Look at the billions of pictures on the internet and you can see where current algorithms lead.