Rangefinder

Views on the photographic universe by Erwin Puts

The current landscape of lens reviews


The basics
Any lens (excluding the enormously expensive lithographic lenses, which are corrected only for UV light) has aberrations. Aberrations are like Russian dolls: correct the third-order aberrations and fifth-order ones will pop up, correct all fifth-order aberrations and seventh-order ones will emerge, and so on. The task of the optical designer was and is to balance all these aberrations in order to fulfill the main goal of all optics: transferring subject points to image points without any loss of information. It is a daunting task that every designer solves individually. Modern design is supported by software (in many cases the Code V program). Most software works with merit functions that are translated into numbers. Given a number of lens elements, the program then proposes an optical system (lens curvatures, thicknesses and distances between lens elements). The optical designer now has the task of constructing the merit function. The big problem is to find a merit function that can represent the practical requirements of the intended user.
This is indeed the primary problem: how to find a balance between aberrations that supports ‘good’ imagery, where ‘good’ has a considerable bandwidth of interpretation. Once fixed, the optical system has to be manufactured and the tolerances of the process define the final and actual performance.
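The role of the merit function can be sketched with a toy calculation. This is my own illustration, not Code V: the aberration residuals and weights below are invented numbers, and a real merit function has many more terms. The point it demonstrates is the one made above: the weights encode the priorities of the intended user, and different weightings can rank the same two designs in opposite order.

```python
# Toy merit function: a weighted sum of squared aberration residuals.
# All numbers are invented for illustration only.
def merit(aberrations, weights):
    return sum(w * a * a for a, w in zip(aberrations, weights))

# Two hypothetical designs, residuals for (spherical, coma, astigmatism):
design_a = (0.10, 0.30, 0.20)
design_b = (0.25, 0.10, 0.20)

# Two hypothetical user profiles, expressed as weights:
close_up  = (1.0, 3.0, 1.0)   # this user is most bothered by coma
landscape = (3.0, 1.0, 1.0)   # this user is most bothered by spherical aberration

print(merit(design_a, close_up), merit(design_b, close_up))    # b wins here
print(merit(design_a, landscape), merit(design_b, landscape))  # a wins here
```

With the close-up weighting design B scores better; with the landscape weighting design A does. Neither design is ‘best’ in the abstract, which is exactly why constructing the merit function is the hard part.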
Scientists at the Zeiss Company studied this problem quite intensively and discovered that image contrast (global and local) is the decisive factor in the appreciation of image quality, once the limits of human vision are taken into account. It has long been known that even the optimal human eye can only resolve details above a certain size. It has been established that the eye can resolve at most 10 line pairs per mm on a print viewed at a distance of 25 cm. Ctein claims that any person can distinguish between 20 and 30 lp/mm on a print, but that statement is certainly not supported by scientific or empirical evidence.
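A back-of-envelope calculation (my own, not from the Zeiss work) shows where a figure of this order comes from, assuming the textbook visual acuity of roughly one arcminute per line and a 25 cm viewing distance:

```python
import math

# Line pairs per mm resolvable on a print at a given viewing distance,
# for an assumed visual acuity in arcminutes (one arcminute per line).
def lp_per_mm(acuity_arcmin, distance_mm=250.0):
    line_mm = distance_mm * math.tan(math.radians(acuity_arcmin / 60.0))
    return 1.0 / (2.0 * line_mm)  # a line pair is one dark plus one light line

print(round(lp_per_mm(1.0), 1))  # standard 1-arcmin acuity: about 6.9 lp/mm
print(round(lp_per_mm(0.7), 1))  # a sharper-than-average eye: about 9.8 lp/mm
```

Depending on the acuity one assumes, the result lands between roughly 7 and 10 lp/mm, consistent with the ‘at most 10 lp/mm’ figure and far below the 20 to 30 lp/mm claimed by Ctein.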
The problem
Reviews of lenses have the task of analyzing the characteristics of a lens and relating these characteristics to a lens profile, such that a prospective user knows what to expect of the lens. This requirement is more problematic than it sounds. There are basically two approaches to lens evaluation: (1) the scientific lab analysis and (2) the personal and subjective analysis of selected characteristics of the lens in question. I have to stress the word ‘selected’, because any lens is a compromise between conflicting demands. When a reviewer is mainly interested in close-up performance, because that is what matters to him, all practical ‘tests’ and all conclusions will be focused on this aspect. When a reviewer is mainly interested in finding colour fringes, all his work will be focused on finding these colour defects. What these characteristics mean in practice, or how important they are for the majority of users, is not considered.
The current landscape
Analyzing the performance of lenses is still an important element of many blogs, review sites and magazines.
In the past, lens testing was a specialized occupation. Projection of patterns (lines, points) and MTF measurements were the classical methods. Zeiss used its own MTF equipment and most others (including Leitz) used projection patterns. Considerable expertise was needed to analyze these patterns. The important issue was always how to relate the results to practical requirements.
Since the digital turn, the camera itself has become the virtual test lab. Software is often used to analyze the results on the sensor surface. Many magazines and some blogs/sites use the software of Image Engineering or of Imatest to present their findings. These findings are what the designers of the software find important. While often interpreted as tests of camera sensors or of lens optics, the fact is that these tests always measure both elements together. Not considered are the problems of the hardware, the accuracy of the camera and lens, the influence of the software that translates the pixel photon counts into visual images, and so on. All measurements have a tolerance of, let us say, 5%. The often-used parameter of lines per image height (l/ih) is presented with semi-scientific precision, and camera/lens performance is ranked according to these values: when one system is measured at 2856 l/ih and another at 2788 l/ih, there is no discussion of the fact that the real values might be 2713 and 2928, which would reverse the order. Even if the tolerance were only 1%, there is no discussion of how the real values relate to the measured ones.
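The interval arithmetic behind this objection is easy to check. The two l/ih values are taken from the text; the tolerance bands are my own calculation:

```python
# Tolerance band around a measured value: (low, high) for a given
# relative tolerance (5% as in the text).
def band(measured, tol=0.05):
    return (measured * (1 - tol), measured * (1 + tol))

a = band(2856)  # (2713.2, 2998.8)
b = band(2788)  # (2648.6, 2927.4)

# The bands overlap, so the true values could reverse the measured ranking.
overlap = a[0] <= b[1] and b[0] <= a[1]
print(a, b, overlap)  # overlap is True
```

Any pair of true values inside the overlapping region (roughly 2713 to 2927) is compatible with either ranking, so the precise-looking ordering of the two systems is not supported by the measurement.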
It is questionable whether the numerical results can be directly related to the practical requirements, but one requirement is obeyed: the consistency of the tests and the consistency of the lab conditions.
This requirement is not guaranteed by the many subjective evaluations that ask for attention on the web.
These personalized and very subjective reviews fall into two extremes. At one extreme are the subjective reviews that are full of the words ‘impression’, ‘feeling’ and ‘opinion’. The choice of these words indicates the intention of the reviewer: not an objective analysis, but a description of the emotions evoked by using a specific lens or camera body. A good example is the website of Joeri van der Kloet. A cursory glance at the discussion platform of TheOnlinePhotographer will show you that almost every comment made there, on whatever topic, is at best simply an expression of personal preference and at worst the continuation of some myth. It is amazing what internet discussions can reveal about the human psyche!
The other extreme is the pseudo-scientific approach. A good example is the website of Mr Chambers. His recent review of the Leica Summilux-SL 1.4/50 mm ASPH is a fine example of this attitude. First of all, he does not describe the test conditions and uses a range of different pictures to support his statements (lab rule #1 is to state the conditions of measurement and to hold as many influencing parameters as possible constant). Secondly, he uses the method of pixel shifting to artificially enlarge the resolution of the sensor, presumably to improve the resolution yardstick for the optical performance. Martin Doppelbauer has a sensible explanation of why this is a fake option. My own experiments with the Olympus PenF and its enhanced-resolution option are also not impressive: in fact the results are worse than the originals! Thirdly, he is obsessed with the occurrence of distortion and the use of software to correct this so-called aberration. Again without scientific evidence, he claims that this software correction reduces the optical performance. Any modern book on optical design will tell you that distortion can easily be corrected mathematically without loss of information.
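Why distortion is mathematically benign can be sketched with a toy one-term radial model: distortion only remaps the radial position of image points, so inverting that mapping recovers the original geometry. The model and the coefficient k1 below are invented for illustration; they are not the profile of the Summilux-SL or of any real lens.

```python
# Toy one-term radial distortion model: r_d = r * (1 + k1 * r^2),
# with r the undistorted and r_d the distorted radius (normalized units).
# k1 is an invented coefficient; negative k1 gives barrel distortion.
def distort(r, k1=-0.05):
    return r * (1 + k1 * r * r)

# Correction inverts the mapping by fixed-point iteration:
# r = r_d / (1 + k1 * r^2), starting from r = r_d.
def undistort(r_d, k1=-0.05, iters=20):
    r = r_d
    for _ in range(iters):
        r = r_d / (1 + k1 * r * r)
    return r

r_d = distort(0.8)
print(r_d, undistort(r_d))  # the original radius 0.8 is recovered
```

The inversion recovers the original point positions to numerical precision; the information is merely displaced, not destroyed. (In a real pipeline the resampling step introduces a small interpolation cost, but the geometric correction itself is lossless.)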
The last issue I would like to address is the pseudo-scientific language that is supposed to disguise the subjective bias of the reviewer.
I am always amazed that there are individuals who seem to be impressed by the length of a review (erroneously described as its ‘depth’).
Geoffrey Crawley tried to review a lens in at most a hundred words, and he thought that this was enough to give a serious profile of the reviewed lens. He also refused to add illustrations, because they are easy to misinterpret or tempt the reader to put too much value on some phenomenon.
The best test for an optical system is still the brick wall and a slide film.