The associated shadertoy: www.shadertoy.com/view/MtBXWw
The premise of this post is that the costs associated with increasing resolution scale many times faster than the increase in pixel area. Those in the creative trades can probably relate. Roughing out a scene to establish spatial relations and tonal variation, enough to fully convey the feeling of a scene, can be quite fast; transforming the same image into something photo-real is quite an undertaking. Increasing detail increases the accuracy required in the reproduction of visual truth. At NTSC resolution, rendering the skin of a detailed arm is relatively easy compared to, say, 4K, where the hair on an arm becomes visible and then requires some kind of physical simulation to accurately reconstruct in video. The human mind is the world's best prediction machine: the less the information input, the larger the ability to create a plausible reality. Unfortunately, the more the input, the larger the chance the prediction machine predicts the input is false.
This obsession with pixel quality instead of quantity has a direct purpose: to push towards "distraction free rendering".
When one sits in a movie theater watching a film, it is possible to forget being in a theater, to be so immersed that the perception of the real world disappears. If one's eyes are open, this state is only entered when the mind is presented with an image which is free of any kind of distraction that breaks the perception of visual truth.
Interesting Quantifiable Example
In VR with traditional sample-based triangle rendering on the GPU, stereo separation, and thus spatial accuracy, is not just a function of resolution. It is more a function of {resolution * number of steps of intensity as an edge moves through a pixel}. Given changing pixel intensity along an edge, the mind can infer the sub-pixel position of that edge.
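A toy model of that inference (illustrative only, not a claim about any specific renderer): treat pixel intensity as the coverage fraction of an edge, quantized to the number of distinguishable intensity steps.

```python
# Toy model: a vertical edge at sub-pixel position edge_x in [0, 1]
# leaves a fraction edge_x of the pixel lit, so the shaded intensity
# encodes the edge position. With only `steps` distinguishable
# intensity levels, the recoverable position is quantized to 1/steps
# of a pixel.
def inferred_edge_position(edge_x, steps):
    return round(edge_x * steps) / steps
```

With a single step (binary coverage) the edge snaps to a pixel boundary; with 4 steps it can be placed to within a quarter pixel, which is the {resolution * intensity steps} product at work.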
Using a pair of dots representing a single pixel,
. .
. .
When rendering a 2x2 quad of pixels with no extra sampling,
. . , ,
. . , ,
, , . .
, , . .
Normally shaded samples (represented by ^) are grid aligned in the center of the pixels,
. . , ,
.^. ,^,
, , . .
,^, .^.
It is possible however to adjust the sampling pattern (on GCN) to get more horizontal spatial intercepts by rotating the 2x2 pixel grid,
. N , ,
. . , E
W , . .
, , S .
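A sketch of why the rotation helps (the angle here, atan(1/2), is a common rotated-grid choice and only illustrative, not the exact GCN sample positions): rotating the quad's four pixel-center samples about the quad center makes every sample land on a distinct x coordinate, so a near-vertical edge picks up four horizontal intercepts per quad instead of two.

```python
import math

# Rotate the four pixel-center samples of a 2x2 quad about the quad
# center (1, 1). With a rotation of atan(1/2), every sample lands on
# a distinct x coordinate; with no rotation, samples share only two
# x positions.
def rotated_quad_samples(angle=math.atan(0.5)):
    c, s = math.cos(angle), math.sin(angle)
    centers = [(0.5, 0.5), (1.5, 0.5), (0.5, 1.5), (1.5, 1.5)]
    out = []
    for (px, py) in centers:
        x, y = px - 1.0, py - 1.0  # relative to the quad center
        out.append((c * x - s * y + 1.0, s * x + c * y + 1.0))
    return out
```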
However, an edge with a sub-pixel offset that is not halfway through a pixel will tend to produce a dithered pattern (in this example, one character per pixel),
. . . . . . . . . . .
. . . . . . . . . . .
X . X . X . X . X . X
X X X X X X X X X X X
X X X X X X X X X X X
This requires a filter to correct for: typically a 2-pixel-diameter resolve kernel to low-pass the signal. This pass reconstructs an image with 2x higher spatial accuracy on edges (2 steps instead of 1 for sub-pixel motion), but at the expense of decreasing sharpness, or detail, to 1/2x.
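A 1D sketch of that resolve (a tent kernel of 2-pixel diameter, one plausible choice of kernel): applied to the dithered row above, it turns the alternating pattern into a steady half-intensity row, recovering the half-pixel edge position at the cost of sharpness.

```python
# Apply a 2-pixel-diameter tent kernel (weights 1/4, 1/2, 1/4) to a
# 1D row of intensities, clamping at the borders. A dithered row like
# "X . X . X ." resolves to a constant 0.5 in the interior: the
# half-pixel edge position is recovered as a half-intensity value.
def tent_resolve(row):
    n = len(row)
    out = []
    for i in range(n):
        left = row[max(i - 1, 0)]
        right = row[min(i + 1, n - 1)]
        out.append(0.25 * left + 0.5 * row[i] + 0.25 * right)
    return out
```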
Given two displays, one with half the resolution in width and height, and both showing an image constructed from the same number of shaded samples, the lower-resolution display is free to adjust sample positions to increase spatial accuracy on edges, because there is no longer a need for the grid-like positioning required for increased detail.
For the same cost in shaded samples, the lower resolution option has 2x the spatial accuracy!
There are other advantages. The lower-resolution option is 4x less likely to show "start and stop" artifacts on moving edges due to a lack of sub-pixel transition steps. The lower-resolution option also shows 4x less contrast in temporal aliasing (4 shaded samples per pixel instead of 1).
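The arithmetic behind the 2x claim, as a sketch: the same shading budget gives full resolution at 1 sample per pixel, or half resolution (1/4 the pixels) at 4 samples per pixel, and the intensity steps per unit of screen width double in the latter case.

```python
# Count the intensity steps an edge takes while crossing one full-res
# pixel width. With n samples per pixel arranged for distinct
# intercepts, an edge crossing a pixel of width `pixel_scale`
# (relative to a full-res pixel) produces n steps, i.e.
# n / pixel_scale steps per full-res pixel width.
def steps_per_fullres_pixel(samples_per_pixel, pixel_scale):
    return samples_per_pixel / pixel_scale

full_res = steps_per_fullres_pixel(1, 1)  # 1 intensity step
half_res = steps_per_fullres_pixel(4, 2)  # 2 steps: 2x spatial accuracy
```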
An associated shadertoy visual example: www.shadertoy.com/view/MtBXWw
// Each pair of rows has same number of shaded samples.
// Samples are shaded either to white or black.
// Top of pair is at full resolution.
// Bottom of pair is at 1/2 resolution (aka 1/4 area).
// Shows geometric aliasing in motion.
//
// Rows from top to bottom,
//
// _1x______ at full resolution
// _4xSGSSAA at 1/4 area in resolution
//
// _2xSGSSAA at full resolution
// _8xSGSSAA at 1/4 area in resolution
//
// _4xSGSSAA at full resolution
// 16xSGSSAA at 1/4 area in resolution
//
// _8xSGSSAA at full resolution
// 32xSGSSAA at 1/4 area in resolution
//
// 16xSGSSAA at full resolution
// 64xSGSSAA at 1/4 area in resolution