The danger of algorithms
Photography is a simple technology, and establishing the correct exposure is not rocket science: the brightness value of the scene is matched to the light sensitivity of the emulsion or imager. This match assumes a brightness contrast between the darkest and lightest parts of the scene with a numerical value of 1:160, and an even distribution of brightness levels (the tonal range) across the scene. Compensating for a bias (many dark or bright tones, higher or lower contrast) is easy, especially with cameras fitted with imagers, because the compensation can be matched to the characteristics of the imager itself; the variable of the developer is no longer part of the equation. When film emulsions were the norm, the method of compensation had to assume a standard emulsion and its response.
The compensation is limited, perhaps to plus or minus one stop, or at most two. Any algorithm can handle this task without any problem; exposure algorithms were already effectively implemented in cameras before the digital era.
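The kind of exposure logic described above can be sketched in a few lines. This is a minimal illustration, not any camera maker's actual firmware: the function name, the 18% mid-gray target, and the ±2-stop clamp are illustrative assumptions matching the limits mentioned in the text.

```python
import math

def exposure_bias(mean_luminance, mid_gray=0.18, max_stops=2.0):
    """Suggest an exposure compensation in stops for a scene whose
    average luminance deviates from an assumed mid-gray target.
    Positive means the scene meters darker than the target and
    needs more exposure; the result is clamped to +/- max_stops.
    (Illustrative sketch; names and constants are assumptions.)"""
    # One stop corresponds to a factor of two in luminance,
    # so the bias is the base-2 logarithm of the ratio.
    bias = math.log2(mid_gray / mean_luminance)
    return max(-max_stops, min(max_stops, bias))

# A scene averaging half the mid-gray target needs one stop more:
print(exposure_bias(0.09))   # 1.0
# A very dark scene would need ~3.3 stops but is clamped to the limit:
print(exposure_bias(0.018))  # 2.0
```

The clamp embodies the point made above: the algorithm only corrects within a narrow, safe band, which is exactly why it is so easy to automate.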
When using emulsions, there is always a risk involved, and a conscious decision by the photographer/operator is required to reduce it. Digital techniques have reduced this risk to a minimal level: the photographer with a digital camera relies on the algorithms to take risk-free images.
It might be better to rephrase the standard analog/digital dichotomy as a risk-taking versus risk-averse strategy. Risk-taking is part of the human condition and helps you grow; risk aversion is the best prescription for creative stagnation. Look at the billions of pictures on the internet and you can see where current algorithms lead you.