addendum to M10 review
20/01/17 13:29
There is some discussion about the characteristics of the sensor in the M10: how closely it is related to the sensor in the SL, and which manufacturer is responsible for its production. One should first distinguish between sensor design and sensor manufacture. Provided the manufacturer has the required expertise and production technology, it is in fact irrelevant who does the job, as long as the results are within specifications and quality tolerances. The main characteristics of a sensor are its sensitivity, resolution, noise, contrast ratio, dynamic range, and colour gamut and accuracy.
These characteristics are determined and/or influenced by the technical layout of the imaging chain. The basic process is simple: light falls on a regular grid of detectors (a photodiode array or lattice), which produces a pattern of electric charges; these charges are measured and then converted to numbers stored in an image file.
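The chain described above can be sketched in a few lines of code. This is only an illustration of the photon-to-number process; the quantum efficiency, gain, noise model and bit depth here are invented values, not the parameters of any actual Leica sensor.

```python
# Toy model of the imaging chain: photons -> electrons -> digital number.
# All parameter values are invented for the sketch.
import random

def pixel_response(photons, quantum_efficiency=0.5, gain=0.25, bits=14):
    """Convert a photon count at one detector site to a stored number."""
    electrons = int(photons * quantum_efficiency)  # photoelectric conversion
    electrons += random.randint(0, 3)              # crude stand-in for noise
    dn = int(electrons * gain)                     # charge -> digital number
    return min(dn, 2**bits - 1)                    # clip at the ADC limit

# A 2 x 2 grid of detectors receiving different light levels.
grid = [[1000, 4000], [16000, 200000]]
image = [[pixel_response(p) for p in row] for row in grid]
```

The brightest site in this example saturates at the assumed 14-bit ceiling, which is the toy-model analogue of a blown highlight.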
The main elements of the sensor architecture are the basic substrate (a p/n-type silicon wafer) and the light-sensing unit (a MOS semiconductor capacitor). In the substrate, pixels are defined by a grid of narrow electrode strips, known as gates. The collected electrons (the charge) are transferred to the output amplifier and then to the A/D converter. Strictly speaking, a full-frame sensor is one in which the full sensor surface is sensitive to light and there are no dead spaces between the individual pixels. That is why I have problems with the use of ‘full frame’ as a designation for a sensor with dimensions of 24 x 36 mm. It would be better to use the term ‘full size’ for a sensor area of 24 x 36 mm.
In front of the basic substrate are located a number of layers: the Bayer pattern, the IR filter and, optionally, a low-pass filter. Coupled to the sensor are the CMOS read-out and the A/D converter, which connects to the DSP processor, the Maestro-II in the case of Leica. To improve the light-gathering capacity of the pixels, an array of specifically shaped microlenses is used in every case. This is not specific to the Leica-designed sensors, but common practice.
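The Bayer pattern mentioned above can be made concrete with a small sketch. An RGGB layout is assumed here purely for illustration; the actual filter layout of any given Leica sensor is not public.

```python
# Minimal sketch of a Bayer colour filter array (RGGB layout assumed).
def bayer_colour(row, col):
    """Return which colour filter sits over the pixel at (row, col)."""
    if row % 2 == 0:
        return 'R' if col % 2 == 0 else 'G'
    return 'G' if col % 2 == 0 else 'B'

mosaic = [[bayer_colour(r, c) for c in range(4)] for r in range(4)]
# Each pixel records only one colour; the two missing colours per pixel
# are interpolated (demosaiced) later in the imaging chain.
```

Note that half of all sites carry a green filter, which mimics the eye's higher sensitivity to green light.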
In a CCD sensor the charge of every pixel is transferred through a narrow output channel to be converted to voltage. In a CMOS sensor every pixel has its own charge-to-voltage conversion and the sensor as a package may include amplifiers, noise-correction and digitization circuits.
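The structural difference between the two readout schemes can be caricatured as follows. The gain values are arbitrary; the point is only that a CCD funnels every charge packet through one shared output stage, while a CMOS sensor converts charge to voltage at each pixel.

```python
# Hedged sketch of the readout difference: invented gain values.
def ccd_readout(charges, shared_gain=0.5):
    # All charge packets are shifted out through the same output stage,
    # one after another, producing a single serial stream.
    return [q * shared_gain for row in charges for q in row]

def cmos_readout(charges, per_pixel_gain=0.5):
    # Each pixel has its own charge-to-voltage conversion, so values
    # remain addressable per pixel and can be read out in parallel.
    return [[q * per_pixel_gain for q in row] for row in charges]

charges = [[100, 200], [300, 400]]
ccd = ccd_readout(charges)    # one serial stream of values
cmos = cmos_readout(charges)  # values stay organized per pixel
```

One shared output stage gives the CCD very uniform conversion; per-pixel conversion gives the CMOS sensor speed and on-chip integration, at the cost of pixel-to-pixel variation that must be corrected.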
The exact layout of the several full-size Leica sensors in the Q, SL and M (various versions) has not (yet) been communicated.
The complicated, often highly integrated, and also many separate elements that make up the full architecture make it quite difficult to say whether one specific sensor is or is not different from another one.
The remark that the filter layer and the shape of the microlenses of the sensor in the M10 have (again) been optimized for the use of Leica M lenses has little information value. Such an optimization has been the case for every digital Leica M since the M8, and for the DMR module for the R8. It would only be informative if the differences were specified in detail. These changes (of whatever extent and magnitude) do not imply that the rest of the sensor architecture is or is not identical to another one.
Depending on how one assesses the magnitude of the differences in architecture, one may say that the sensor in the M10 is or is not identical to the sensor in the SL. After all, the sensors in the Q and SL have been claimed to be the same, or improved, whatever this means.
The upshot is that it is the result that counts.