The DSLR Will Likely Die: Are Mirrorless the Future of Big Standalone Cameras?

People often ask me, given the improvement and ubiquity of cell phone cameras, whether DSLRs will survive. This actually entails two slightly different questions: will standalone large-ish cameras survive, and will the particular reflex design (the R in DSLR) survive? I am cautiously optimistic about the former and very pessimistic about the latter. In this piece, I will discuss DSLR vs. mirrorless.

Historical Perspective

Let’s see why I think the reflex design is doomed, even though it has dominated serious photography for decades. DSLR stands for digital single-lens reflex. The term reflex comes from reflection: the photographer sees an optical image through the viewfinder thanks to a mirror placed at 45 degrees in front of the sensor.

DSLR diagram by gurucamera.com and licensed under CC BY 2.0

The mirror needs to be flipped up when taking a photograph, which, together with the shutter, is the source of the typical SLR noise. In contrast, in rangefinder or twin-lens reflex cameras (those old-looking cameras with two lenses), the photographer sees an image from an offset viewpoint, which can result in parallax error where the captured image is not exactly what was expected.

Historically, the reflex design has proven superior for two main reasons. First, the photographer sees exactly the image that will be taken “through the lens.” This in turn makes it possible to have a rich set of interchangeable lenses. In contrast, it is harder to change a lens on a rangefinder or twin-lens reflex because you also need to change the viewfinder lens or have marks to visualize the field of view of different lenses.

Second, and this came much later, the SLR design enables superior autofocus thanks to a secondary optical path through parts of the mirror to specialized AF sensors. These sensors essentially perform stereo vision between viewpoints at the edges of a lens for a discrete set of AF points on the image plane. This is often called phase-based autofocus, although stereo would be clearer in my opinion.

Video and digital cameras have fundamentally changed the photography landscape by enabling another type of “through-the-lens” viewing, where the sensor used to capture the final image can also be used for preview, albeit with an electronic screen and not directly optically. Originally, though, this still came at the cost of inferior autofocus performance because there is no space to route light towards phase-based sensors. As a result, non-reflex digital cameras first used slow contrast autofocus.

In a nutshell, they had to sweep through multiple possible distance settings to find the sharpest one (highest contrast), whereas phase-based autofocus could directly compute the correct distance in one step. Note that when DSLRs are used for video, however, they must keep the mirror up. This means that they are back to the same constraints as other cameras, and would originally use slow contrast autofocus.
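The sweep described above can be sketched in a few lines of Python. This is a minimal illustration, not any camera’s actual firmware: `capture_at` is a hypothetical callback that returns a frame at a given focus setting, and the sharpness score here is the variance of a Laplacian filter, a common contrast proxy.

```python
import numpy as np

def contrast_score(image):
    # Sharpness proxy: variance of the Laplacian response.
    # An in-focus image has strong edges, hence high variance.
    lap = (np.roll(image, 1, 0) + np.roll(image, -1, 0)
           + np.roll(image, 1, 1) + np.roll(image, -1, 1) - 4 * image)
    return lap.var()

def contrast_autofocus(capture_at, focus_steps):
    # Sweep every candidate focus setting and keep the sharpest one.
    # This is why contrast AF is slow: one capture per candidate,
    # versus a single measurement for phase-based AF.
    best_step, best_score = None, -np.inf
    for step in focus_steps:
        score = contrast_score(capture_at(step))
        if score > best_score:
            best_step, best_score = step, score
    return best_step
```

Real implementations refine this with coarse-to-fine sweeps and hill-climbing rather than an exhaustive scan, but the core limitation stands: the camera must try settings to find focus.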

A breakthrough occurred when camera manufacturers modified cameras to perform phase-based (or stereo) autofocus directly using the main sensor. The idea is to split some or all pixels into two sub-pixels that each capture light coming from only half of the lens. The camera can then perform stereo between images taken roughly from the center of each half of the lens aperture. This explains the dramatic improvement in autofocus ability for non-DSLR cameras such as the various mirrorless systems (Sony, Olympus). This technology has also been integrated into some DSLRs, in particular Canon’s dual-pixel sensors, because it is needed for video shooting.
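The stereo measurement between the two half-aperture views boils down to estimating a shift (disparity) between two nearly identical images. Here is a minimal sketch, under simplifying assumptions (a single AF region, integer shifts, wrap-around ignored); real phase-detect pipelines are considerably more sophisticated, but the principle is the same.

```python
import numpy as np

def phase_disparity(left, right, max_shift=8):
    # Find the horizontal shift that best aligns the left-aperture
    # view with the right-aperture view, by brute-force search over
    # candidate shifts. An in-focus subject gives zero disparity;
    # the sign and magnitude of the shift tell the camera which way
    # and how far to move the lens -- in one measurement, with no sweep.
    best_shift, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        err = np.mean((np.roll(left, s, axis=1) - right) ** 2)
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift
```

This one-shot property is exactly what the article contrasts with contrast autofocus: the disparity directly encodes the focus correction, so there is no need to try multiple focus settings.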
