Mechanical cameras

A view of the photographic universe by Erwin Puts

The practice of pixel shift

Most cameras nowadays have movable-image-sensor technology built in. These sensor movements, often along five axes, enable image stabilization, sensor-surface cleaning and indeed also the option of pixel shift or multishot. Because in the latter case eight images are merged into one, each individual image shifted over a minuscule distance (usually half a pixel or a whole pixel), the total number of pixels grows considerably. The idea of pixel shift comes from the video camera, which in those days worked with separate image sensors, each sensitive to only one colour. Each sensor had only a limited number of pixels (around one million) and the image was built up from the combination of the individual sensors. To increase the modest resolution, tricks were used such as shifting a sensor over a small distance, which improved the colour interpretation and suppressed the noise.
Digital photo cameras use a single image sensor covered with the well-known RGB Bayer pattern. As a result, quite a lot of colour information is lost per pixel and has to be filled in by interpolation.
The pixel shift technology works as follows: during the exposure the image sensor is shifted over the distance of a pixel, so that each pixel position is exposed through all RGB filters and can therefore record full colour information. Because each pixel also collects more light, the noise is reduced as well.
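The sampling idea can be sketched in a few lines of Python. This is a toy model, not any camera's actual pipeline, and for simplicity it uses the four-shot variant rather than the eight exposures mentioned above: after four one-pixel shifts, every pixel position has been sampled through all three colour filters of the Bayer tile.

```python
# Toy model of pixel shift over a Bayer sensor (an illustration only).
BAYER = [["R", "G"],
         ["G", "B"]]  # the standard RGGB tile

def filters_seen(rows, cols, shifts):
    """For each pixel position, collect the colour filters that cover it
    over a sequence of one-pixel sensor shifts."""
    seen = [[set() for _ in range(cols)] for _ in range(rows)]
    for dy, dx in shifts:
        for y in range(rows):
            for x in range(cols):
                # after shifting the sensor by (dy, dx), position (y, x)
                # is covered by the filter at (y+dy, x+dx) of the tile
                seen[y][x].add(BAYER[(y + dy) % 2][(x + dx) % 2])
    return seen

single = filters_seen(4, 4, [(0, 0)])                        # one exposure
shifted = filters_seen(4, 4, [(0, 0), (0, 1), (1, 0), (1, 1)])  # four exposures

print(single[0][0])   # {'R'}: one colour only, the rest must be interpolated
print(shifted[0][0])  # all of R, G and B: full colour, no interpolation
```

The same mechanism explains why the subject must be stationary: the four (or eight) exposures are taken one after another, and any movement between them breaks the assumption that the shifted samples describe the same scene point.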
The disadvantage is that this technique only works from a tripod and with truly stationary subjects.
So much for the theory. But does it work in practice?
I do not have a Pentax, Sony, Panasonic or Leica (the SL2 is too expensive, bulky and heavy), but I do have an Olympus Pen-F, which offers a comparable technique.
For the sake of clarity: in none of the cameras mentioned is the physical area of the photosites on the image sensor changed. The physical resolving power therefore remains the same, even if you move the sensor and increase the number of pixels in software. This interpolated resolution is software-generated and thus virtual. Anything can happen here!
The Pen-F has an MFT-size image sensor with 5184 x 3888 photosites. The pitch of a photosite (usually also referred to as a pixel) is 17.4 mm / 5184 pixels = 3.4 micrometers, for a total sensor area of 226.2 mm^2. The area per pixel is 11.56 square micrometers.
The number of effective pixels is 20.3 megapixels. For comparison, the new Leica SL2 has an area per pixel of 18.49 square micrometers at a total effective pixel count of 46.7 megapixels. Because a pixel is dimensionless (in contrast to the physical photosite), the actual area is less important. With the SL2's multishot technique the total number of megapixels is increased to 187 megapixels (= 46.7 x 4).
With the Olympus Pen-F the numbers are more modest. The original 20.3 megapixels are increased to 50 megapixels, and the original dimensions (5184 x 3888) grow in raw format to 10368 x 7776, a doubling in each direction. Where the Leica SL2 uses a pixel shift of half a pixel, the Pen-F uses only a quarter of a pixel.
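The pitch and area arithmetic above can be checked quickly. A small Python sketch, using the sensor width quoted in the text; the per-pixel area comes out in square micrometres, and the small differences from the quoted figures are rounding effects:

```python
# Pen-F sensor figures as quoted in the text
width_mm = 17.4                 # sensor width
px_wide, px_high = 5184, 3888   # photosite grid

pitch_um = width_mm / px_wide * 1000   # pixel pitch in micrometres
area_per_px_um2 = pitch_um ** 2        # area per (square) photosite
total_mp = px_wide * px_high / 1e6     # total pixel count in megapixels

print(round(pitch_um, 1))         # 3.4 um
print(round(area_per_px_um2, 1))  # ~11.3 um^2 (the text rounds the pitch first: 3.4^2 = 11.56)
print(round(total_mp, 1))         # ~20.2 Mp (20.3 Mp effective per the text)
```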
To check the theoretical increase in resolving power, I carried out the following study with the Pen-F. I photographed a super-sharp slide of the USAF test pattern (from Image Engineering) on the Kaiser exposure unit, with the Olympus 2.8/60 mm macro as the lens, stopped down to f/8. On a tripod, of course. Some exposures were made in the normal mode and some in the High Res mode.
The images were opened in Photoshop and not edited. No difference in image quality is visible.
Below, the original exposure,

original

and below it, the High Res exposure.
hr


The latter image is twice as large as the original. For comparison, the HR image has therefore been reduced by a factor of two. At actual size, too, the finest patterns show no difference.
I have no experience with other cameras. If they work in a similar way to the Olympus, the difference will be minimal. In any case, today's fixation on pixel counts is greatly exaggerated; you may well call it a hype. The finer details only emerge with motionless exposures (camera and subject), or with studio flash! Working handheld, the effective resolution drops considerably, although image stabilization will soften the blow.


Farewell to the Leica World


For more than 35 years I have been intimately involved in the Leica world, encompassing the history of the company, the analysis of its products and the use of those products, all under the umbrella concept of the Leica World.
I have experienced, and discussed in detail with the relevant persons in Wetzlar (old), Solms and Wetzlar (again, new), the digital turn and how the company evolved and changed while adopting the digitalization of the photographic process and the changing world of internet-based photography. The most recent event is the evolution from a manufacturing company to a software-based company. While a commercial success, this change of heart has had a perhaps unintended impact: the soul of Leica products has been eradicated. A renewed interest in classical products is the result. The SL and Q are currently the hopeful products for the future. The ghosts of Huawei and Panasonic can be seen all over the campus, and while the M-system is still being promoted as the true heir of the Leica lineage, it is now sidelined. Once upon a time, Leica followed its own path, guided by gifted and pioneering engineers and keen marketeers. Nowadays its products are as mainstream as those of every other camera manufacturer.
The company has sketched a future and follows a path that I am no longer willing to follow.

Leica M 50 mm lenses

There are three 50 mm lenses for Leica M that many Leica aficionados are interested in: the classical Summicron lens from 1979, the Apo-Summicron-M FLE from 2012 and the Summilux-M FLE ASPH from 2004.
Fifty millimeter lenses are still quite popular and exhibit excellent characteristics. I am most interested in the performance for very fine textural details. Forty lp/mm is a good benchmark for the crisp rendering of fine detail, especially at high MTF values. The classical rule of optical designers (50% contrast at 50 lp/mm) is still a very good rule, and several recently introduced lenses for the AF Leica cameras with the L-mount show this level of performance.
The M lenses can be used on older cartridge loading models from M3 to M7 and modern versions (M-A = M4P; MP = M6) and there are films available that can capture extremely fine detail. So I used the limits of 80 lp/mm and 160 lp/mm on the Zeiss K8 equipment. I am less interested in the popular notion of bokeh (unsharpness before and after the plane of sharpness) because this is a very subjective impression.
I prefer hard numbers to base my conclusions on.
Let me be clear: 160 lp/mm would require points with a dimension of 0.003 mm, or a sensor with a pixel pitch of 3 micron, to record this fine detail. That corresponds to a sensor of more than 88 megapixels. None of the three lenses can perform at this level.
The 80 lp/mm are a different matter: here all three lenses can cope with this very fine detail. A few figures first: 80 lp/mm require points with a dimension of 0.006 mm, or a sensor with a pixel pitch of 6 micron. That is equal to 24 megapixels, the current sensor characteristic of the M (the same pixel pitch is also found in the forgotten M8/M8.2). So a sensor with more pixels than the current 24 megapixels only makes sense for selective enlargements.
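The relation between line pairs per mm, pixel pitch and megapixels used above can be put into a small helper. This is a sketch under the simple assumption of two pixels per line pair (the Nyquist criterion) for a 36 x 24 mm sensor; the quoted 3 micron and 6 micron figures are roundings of the exact pitches:

```python
def required_sensor(lp_per_mm, width_mm=36.0, height_mm=24.0):
    """Pixel pitch (in micrometres) and megapixel count needed to sample
    the given spatial frequency, assuming two pixels per line pair."""
    pitch_mm = 1.0 / (2.0 * lp_per_mm)
    mp = (width_mm / pitch_mm) * (height_mm / pitch_mm) / 1e6
    return pitch_mm * 1000.0, mp

print(required_sensor(160))  # ~3.1 um pitch, ~88 Mp
print(required_sensor(80))   # ~6.3 um pitch, ~22 Mp
```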
Let us look more closely at the numbers.
At 40 lp/mm the SX at f/1.4 is almost as good as the classical Summicron at f/2. That Summicron is, however, surpassed by the Apo-Summicron at the same aperture: on average ten percentage points more contrast for the Apo version. For all three lenses the centre is much better than the edges, which is to be expected; modern design rules hold that a flat line from centre to edge is the more desirable performance. A careful study of the numbers will also show that there is a difference between the sagittal and the tangential planes. The more these two differ, the more blurring and colour fringing you will see.
At 80 lp/mm the SX at f/1.4 is still very good, but recording this level of fine detail depends on the illumination of the scene. An average contrast value of 20% is the limit. This said, the performance of a high-speed lens at f/1.4 is almost too good to be true and indeed sets a benchmark.
At f/2 the Apo version is ahead of the classic version. The same remarks as for the SX apply here: this level of fine detail is handled with ease by the Apo version and is the limiting value for the classical version.
At f/4 all three lenses perform at an optimum level and there is not much to choose between the three lenses reviewed here.
The classic rule that at medium apertures most lenses perform in the same league is not obsolete. If a prospective buyer uses the medium apertures most of the time, the classical Summicron is a bargain! The Apo-Summicron shows the best overall performance and the SX is the most versatile of the three. Subtle differences can be found in the performance from wide open to f/4; especially around f/2.8 the critical user might find arguments for the choice to be made.
It is remarkable how good the lens design from 1980 was, and how much effort is needed to improve substantially on it, either as an overall improvement (the Apo version) or as an extension to a wider aperture (the SX).

leica-M-MTF-gegevens

Developer review


There are now many reviews of film-developer combinations on the internet. Most are quite subjective and hardly worth the reading time. The often-used format is simple: use a film-developer combination and take a range of pictures of different subjects under different conditions. Then use the eye to look at the results and let emotion guide you.
This is not the way to present information about film-developer combinations that a reader can use. The classical approach is also quite simple: take pictures of a grey card with a range of exposures to create a series of negatives with different densities, and use a densitometer to measure these densities. Then use a test chart to measure resolution and sharpness under a microscope at sufficient enlargement.
This is what I did:
Use one film, the best on the market (Ilford 100Delta), a grey card and the Tirion test chart.
On my camera I had a lens of excellent quality, stopped down to f/5.6. The first remark is that even with this aperture, you need to be careful to get the best results. The microscope had enlargements of 40x and 100x. The maximum common enlargement is 15x to 20x.
The developers used in this test are:
1. FX39-II: the classical high acutance developer, formulated by the late Geoffrey Crawley and now made by Adox. This is one of the best developers on the market.
2. Adonal: also made by Adox and a reformulation of the classical Rodinal.
3. Pyro 48: made by Moersch Chemie, a new version of the classical pyro developer.
4. Super Grain: a new version of the AM74.
5. The Df96 monobath, made by Cinestill. This is an interesting developer, because it combines developer and fixer solutions in one bath. You need only one development time for all films. The classic formula has very fine grain but less sharpness. It is interesting to see how the modern remake functions.
The film was exposed at the nominal speed (ISO 100), and the light was measured with the Sekonic Speedmaster using the incident method. The camera was a Leica M7 with the Summilux-M 1.4/50 ASPH.
The development data are:
1. FX39-II: dilution 1+9; temperature 20 degrees; 7 minutes; agitation: continuous for the first 30 seconds, then 2x per minute
2. Adonal: dilution 1+25; temperature 20 degrees; 9 minutes; agitation: continuous for the first minute, then 30 seconds every minute
3. Pyro 48: dilution 2.5 + 5 + 250; temperature 20 degrees; 16 minutes; agitation: continuous for the first minute, then 2x per minute
4. SuperGrain: dilution 1+9; temperature 20 degrees; 6 minutes; agitation: continuous for the first 30 seconds, then 2x per 30 seconds
5. Df96: stock solution; temperature 22 degrees; 6 + 4 minutes (to clear the negatives); moderate agitation: 2x per minute

Results: tonal range.

The density range gives information about the effective speed and the maximum usable density for the highlights, and the steepness of the curve says something about the subtleties of the tonal differences: a steep curve indicates that the mid-tones are very well separated, while a less steep curve tells you that the tonal differences are well recorded but more difficult to observe.
The graph below gives all the details.


curves-developer
There are in fact three groups. The first group comprises Adonal and SuperGrain: the tonal differences in the extreme highlights will be difficult or impossible to print, but the shadow areas are very well recorded with good local contrast. In practice the speed of the film is fully exploited, but the dilutions could be higher (Adonal: 1+50 or 1+100; SuperGrain: 1+15 or even 1+20) with a proportional increase of the development time. A reduction of the development time is advisable to reduce the densities of the highlights. Some experimentation is a rewarding exercise!
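The proportional rule just mentioned (higher dilution, proportionally longer development) can be expressed as a one-line helper. This is only a rough starting point for the experimentation suggested above, not a calibrated formula; always confirm with a test roll:

```python
def scaled_time(minutes, old_dilution, new_dilution):
    """Scale development time proportionally with the dilution factor
    (e.g. 1+25 -> 25, 1+50 -> 50). A rough starting point only."""
    return minutes * new_dilution / old_dilution

print(scaled_time(9, 25, 50))   # Adonal: 9 min at 1+25 -> 18.0 min at 1+50
print(scaled_time(6, 9, 15))    # SuperGrain: 6 min at 1+9 -> 10.0 min at 1+15
```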
The second group combines the FX39-II and the Pyro 48. Both developers score very well on tonal range and highlight density, producing subtle tonal shades with high overall contrast. Nine stops is a good score that matches the claimed tonal range of most digital cameras. For best shadow recordings, the speed should be reduced by one third (FX39) or even half a stop (Pyro).
The third group is populated by only one developer: the monobath by Cinestill. It has a very convenient processing cycle: no stop bath and no fixer. The highlights are well recorded and match the second group. Disappointing is the steep drop in the shadow area: after two stops of underexposure there is nothing left to record, and deep shadows will be completely black without any trace of subject contours. Reducing the speed will help, and the instruction leaflet says that pull processing is possible. My recommendation: set the ISO speed to 50 and use the 6-minute development time. This developer is best used when deep shadows are absent from the scene.
The score (speed and tonal range) is:
1 FX39-II
2 Pyro 48
3 SuperGrain
4 Adonal
5 Df96
Note that the numbers 3 and 4 could get a better score after some experimentation with speed setting and development time.

Results: grain and definition


tirion-chart

The test chart has a number of intriguing details: it shows fine print in several sizes, printed both white on black and black on white. The chart is arranged as a series of pie charts, numbered 1 to 8 (1 is at the top). The white-on-black print is more difficult to read because the large area of black grains will spill over into the thin white lines of the fine print.
The grain is quite pronounced with Adonal (as expected) and hardly visible with Df96 (not expected, but plausible because of the large amount of sulphite). Between these two extremes, the grain pattern is similar for the other three. The score is:

1 Df96
2 Pyro 48
3 FX39-II
4 SuperGrain
5 Adonal

The finest print that is just readable marks the limit of resolution; here the score is:
1 FX39-II
2 Pyro 48
3 Df96
4 SuperGrain
5 Adonal

General conclusion
Overall there is not much to choose between the five developers, which is also a tribute to the Ilford emulsion. There is an old statement that the main characteristics are fixed by the emulsion, and all the developer can do is adjust the balance between grain, tonal range and definition a bit. This test confirms that statement. There are more considerations to weigh today: Adonal is quite flexible and has a long shelf life. It produces excellent sharpness with pronounced grain, it can be used with every film, and the shape of the curve can be influenced to a high degree. It is also very cheap.
The Df96 is also quite flexible, can be used with all films and has quite simple instructions and a long shelf life, but one litre is limited to 16 rolls of 135 film. The shadow recording is non-existent, but when this is not a problem the developer is very easy to use and you need no fixing solution.
The SuperGrain functions like an improved version of Rodinal: it gives very sharp results with moderate grain and a very fine tonal range. Most films require only one development time and one dilution.
The Pyro and FX39 are the best for recording extremely fine detail. Grain is fine, and the tonal range fits well within grade 2 of the print range in Splitgrade/Multigrade. The FX39 gives very clean negatives, whereas the Pyro has its staining effect. The only problem with Pyro is the restricted range of films that match this developer.
My choice for this Ilford film, then, is FX39-II. It has excellent definition, fine grain and a long tonal range with good shadow detail and subtle highlights.
Note: with the exception of the characteristic curves, all results were observed under the microscope at 40x enlargement. Scatter in the enlarger will reduce the final results, and then Adonal and SuperGrain, because of their specific grain size and distribution, may hold detail to a larger degree. In fact, you cannot make a wrong choice with any of these developers. Fine-tuning the exposure method, combined with experiments with development time, temperature and agitation, will improve the results. Everyone has their own requirements and visual standards, but the results presented here should provide a good starting point.

Linear or nonlinear?


This question might surprise you, but there are good arguments for discussing this mathematical concept in the photographic environment. Assume you have a 24-megapixel sensor, like the one in the current M series. Image quality (a very elusive concept) should really be replaced by information capacity, but for now the Image Engineering calculations dominate the discussion. The Image Engineering analysis of sensor performance (always including a lens) correlates well with perceived image quality. The German magazine Color Foto is a true believer in this software. A recent issue has a report on the Leica M10-D. The result, for ISO 100, is 1931 line pairs per image height (lp/ih). One would assume that doubling the number of pixels on the sensor would also double the lp/ih. This is the linear approach: if x gives y, then 2x would give 2y. As it happens, there is also a report on the Nikon Z7 with 45.7 megapixels, almost twice the pixel count of the Leica sensor. The result? At ISO 100 it is 2822 lp/ih. The Z6 (with 24.5 Mp) has 1988 lp/ih. The lenses used are different, of course; this may have some influence, as may the selection of the JPG format. The results of the Z7 are intriguing: there is a 90% difference in pixel count between the Leica M10 and the Z7, but only a 46% increase in resolution. A non-linear result! So roughly twice the number of pixels yields only about 1.5 times the resolution.
Another comparison: the Leica has a pixel pitch of 6 micron, the Z6 of 5.9 micron and the Z7 of 4.3 micron. The APS-C sensor of the Ricoh GR III has a pixel pitch of 3.9 micron at 24 Mp and a resolution of 2075 lp/ih. Presumably the pixel size matters more than the sensor size; the Leica M8 is living proof of this argument!
If and when Leica decides to increase the pixel count for the next generation of the M camera, it will be somewhere between the 24 Mp of the current model and the ±65 Mp of the Leica S models. Not being allowed to compete with future versions of the SL (let us assume 45 Mp), the final count would be somewhere between 24 and 45: 34.5 Mp, which happens to sit neatly between both extremes. The increase in pixel count would then be ±40%. The predicted increase in resolution will be between 0.5 x 40% and 0.75 x 40%, i.e. between ±20% and ±30%, or 1900 x 1.25 = 2375 lp/ih, not a result to be really happy with. Assuming the usual 5% tolerance for the bandwidth of the measured results, these figures only give the direction of thinking; the exact values are less important.
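The non-linear behaviour above is what one expects if resolution scales with the pixel count per direction, i.e. with the square root of the total pixel count. A quick check in Python; note that the 34.5 Mp figure is the speculation above, not an announced camera:

```python
import math

# Z7 versus M10-D: measured gain against the square-root prediction
predicted_factor = math.sqrt(45.7 / 24.0)  # ~1.38x resolution expected
measured_factor = 2822 / 1931              # ~1.46x measured

# Hypothetical 34.5 Mp M sensor, same square-root rule
predicted_lp_ih = 1931 * math.sqrt(34.5 / 24.0)  # ~2315 lp/ih, i.e. about 20% more

print(round(predicted_factor, 2), round(measured_factor, 2))
print(round(predicted_lp_ih))
```

The square-root prediction lands at the lower end of the 20% to 30% range estimated above, which underlines the conclusion: the exact values matter less than the direction of thinking.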
The same argument can be found in the discussion about film emulsions that can record 200 lp/mm and film emulsions that can 'only' record 80 lp/mm. With 80 lp/mm almost every visually relevant detail in a scene can be captured, but the price for the higher resolution is slow speed, careful focusing and the use of a tripod. In handheld shooting, the increase in resolution cannot be exploited. Again, assuming that the M camera remains the champion of handheld, snapshot-style photography, the current sensor resolution is more than adequate for the task. Leica could improve the imaging chain, and especially the demosaicing stage, for enhanced clarity and best results.
UPDATE June 10: there is some confusion here. Let us first get the basic figures from the measurements, based on the IE software, all at the fixed ISO 100:
Ricoh GR-III: 24 Mp and 2075 lp/ih (pixel pitch 3.9 micron)
Leica M10-D: 24 MP and 1911 lp/ih (pixel pitch 6 micron)
Nikon Z6: 24 Mp and 1988 lp/ih (pixel pitch 5.9 micron)
Nikon Z7: 46 MP and 2822 lp/ih (pixel pitch 4.3 micron)
It is universally assumed that in order to double the resolution one needs four times the number of pixels: to double the resolution of the Leica sensor (24 Mp) one needs 4 x 24 = 96 Mp. This increase would (theoretically!) raise the resolution from 1900 to 3800 lp/ih.
The Nikon Z7 has only twice the pixel count of the Leica sensor, so its resolution should fall short of that doubling: it is in fact 2800 lp/ih. The sensor of the Ricoh, with 24 Mp, reaches 2075 lp/ih at a comparable pixel pitch. This is important to note, because the Nyquist limit is related to the pixel pitch. For a pixel pitch of 4 micron, one cycle (line pair) spans 0.008 mm, so the Nyquist frequency is 125 lp/mm (applying the Kell factor of 0.7 gives 87.5 lp/mm). 2000 lp/ih is 64 lp/mm for a 15.6 mm image height. So there is some room for improvement, at least theoretically. The 6 micron pixel pitch of the Leica gives a cycle of 0.012 mm, or 83.3 lp/mm. The 1900 lp are for an image height of 24 mm, which is 79 lp/mm. Including the Kell factor, which says that you can only reliably resolve 70% of the Nyquist frequency, the practical resolution limit of the Leica sensor would be 0.7 x 83 = 58 lp/mm. The Leica imaging chain is better than that of the Ricoh! Or one could also claim that the JPG demosaicing of the Leica is more aggressive and that spurious resolution is spoiling the results.
The Nikon Z7, with its pixel pitch rounded to 4 micron, would be able to resolve 0.7 x 125 lp/mm = 87.5 lp/mm. The image height is 24 mm and the resolution is 2800 lp/ih, which works out to 117 lp/mm, compared to a Nyquist frequency of 125 lp/mm. Compare the measured resolution with the calculated Nyquist limit and the Kell-factor limit:
Leica M10-D: measured 79 lp/mm; Nyquist 83.3 lp/mm; Kell limit 58 lp/mm
Nikon Z7: measured 117 lp/mm; Nyquist 125 lp/mm; Kell limit 87.5 lp/mm
The measured resolution is quite close to the Nyquist number. This is not surprising, because the IE software uses the Nyquist calculation as the limiting factor in its calculations. This limiting value would occur at the point where the contrast is almost zero: not very useful! The Kell factor is used because there is a contrast level below which there is no visual difference between two adjacent lines. A contrast difference of 15% is the minimum, and the Kell factor is in many cases too conservative.
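The pitch-to-Nyquist arithmetic used here can be sketched as a small Python helper, using the pixel pitches from the list in this update. Note that the text rounds the Z7's 4.3 micron pitch to 4 micron, which is where its 125 and 87.5 lp/mm figures come from:

```python
def nyquist_lp_mm(pitch_um):
    """Nyquist limit for a given pixel pitch: one line pair needs two pixels."""
    return 1000.0 / (2.0 * pitch_um)

KELL = 0.7  # Kell factor used in the text

for name, pitch_um in [("Leica M10-D", 6.0), ("Nikon Z6", 5.9), ("Nikon Z7", 4.3)]:
    ny = nyquist_lp_mm(pitch_um)
    print(f"{name}: Nyquist {ny:.1f} lp/mm, Kell limit {KELL * ny:.1f} lp/mm")

# Converting a measured lp/ih figure to lp/mm for a 24 mm image height:
print(2822 / 24)   # Z7: ~117.6 lp/mm
print(1911 / 24)   # M10-D: ~79.6 lp/mm
```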

Now the calculation. Doubling the pixel count (from 24 Mp to 46 Mp) produces an increase in resolution from 1900 lp/ih to 2800 lp/ih. That is an increase of 47%, or a factor of about 1.5. This is indeed a one-dimensional relation: it compares only one direction, not the area. And here is the confusion. The resolution is measured one-dimensionally, in line pairs per mm, and this resolution is identical in the horizontal and vertical directions. The pixel pitch, on the other hand, is a square measure (the 6 micron length of the Leica pixel is the same in both directions; the pixel has a square area!). Now an example: assume we would like the resolution of the Z7 for a new Leica sensor. Going from 1900 to 2800 lp/ih and increasing the resolution in both directions would require the pixel count for the same sensor size to grow to 2800/1900 x 24 = 35.4 Mp. This value is less than expected, but the Leica processing chain might be more effective. The 35 Mp number would require a pixel pitch of 4.1 micron, which would give a Nyquist value of 122 lp/mm, or 2920 lp/ih. If we wanted to double the resolution of 1900 lp/ih, we would need a decrease in pixel pitch from 6 to 3 micron, for a resolution of 166.7 lp/mm (the Nyquist limit). This would imply an increase of the pixel count to 96 Mp, four times the current 24 Mp.
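The four-times rule at the end of the paragraph, written out as a quick check (the new pitch and count are the hypothetical values from the text, not an existing sensor):

```python
# Doubling the one-dimensional resolution halves the pitch
# and quadruples the pixel count.
base_mp, base_pitch_um, factor = 24, 6.0, 2

required_mp = base_mp * factor ** 2          # 96 Mp
new_pitch_um = base_pitch_um / factor        # 3.0 um
new_nyquist = 1000.0 / (2.0 * new_pitch_um)  # ~166.7 lp/mm

print(required_mp, new_pitch_um, round(new_nyquist, 1))
```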
Mixing the concept of the number of pixels in a certain area with the concept of the resolution of the pixel itself (along a one-dimensional line) may be the reason for much confusion. The Nyquist frequency is a one-dimensional measure, assuming a square pixel, and determines the resolution of the system. The resulting pixel pitch then defines the number of pixels per sensor area.