Budget so that in the common case enough tiles are processed to completely update the screen at some sample density per pixel. When fewer tiles need to be updated (better scene temporal coherence), the engine can either,
(a.) save power and battery life, or alternatively
(b.) get increased sample-to-pixel density for increased quality.
Core point is to have full scalability in the engine.
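A minimal sketch of the budgeting idea above. All names and costs here are hypothetical, not from a real engine: the point is only that a fixed shading budget is spent first at a base density, and any slack left by a coherent scene (fewer dirty tiles) buys extra samples per pixel, with whatever remains returned as idle time (power savings).

```python
def plan_frame(dirty_tiles, budget_us, cost_per_sample_us, base_samples):
    """Spend a fixed per-frame shading budget (in microseconds).
    Returns (samples_per_pixel, idle_us). Fewer dirty tiles -> slack,
    which buys extra density; leftover slack becomes idle (power) time."""
    if not dirty_tiles:
        return base_samples, budget_us
    base_cost = len(dirty_tiles) * base_samples * cost_per_sample_us
    if base_cost >= budget_us:
        return base_samples, 0  # no slack: shade at base density only
    slack = budget_us - base_cost
    extra = slack // (len(dirty_tiles) * cost_per_sample_us)
    idle = slack - extra * len(dirty_tiles) * cost_per_sample_us
    return base_samples + extra, idle
```

With 10 dirty tiles, a 1000 us budget, 10 us per sample, and a base of 4 samples, the base pass costs 400 us, so the 600 us of slack buys 6 extra samples per pixel with nothing left idle.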
Worst Case?
Worst case is really when the OS grabs some percentage of GPU processing for some other task, or the OS doesn't schedule the game on the GPU at the right time (perhaps because the CPU ran late generating draw calls because a task got preempted, etc.). Engines designed around constant workloads are guaranteed to hitch. Instead, want the engine to gracefully degrade quality while maintaining a locked frame rate (an absolute must for VR). For example, in this "worst case" hitch situation, the engine could maintain say 60 Hz, 90 Hz, 120 Hz or 144 Hz, and amortize the shading update across multiple frames. If fixed costs (aka drawing the frame using the shade cache) are low, then the game has more opportunity to maintain a consistent frame rate in the worst case. Shading into the cache simply fills whatever time is left over. This enables a better ability to deal with a variable amount of geometric complexity.
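A hedged sketch of the amortization described above, with illustrative names: each frame, shade only as many dirty cache tiles as fit in whatever GPU time is actually left over, and defer the rest. When the OS steals time, fewer tiles get refreshed (quality degrades), but the frame itself still ships on schedule.

```python
from collections import deque

def shade_this_frame(dirty_queue, leftover_us, cost_per_tile_us):
    """dirty_queue: deque of dirty tile ids, oldest first.
    Shades only the tiles that fit in this frame's leftover budget;
    the rest stay queued for later frames, so a GPU-time spike costs
    cache freshness instead of a missed vsync."""
    shaded = []
    while dirty_queue and (len(shaded) + 1) * cost_per_tile_us <= leftover_us:
        shaded.append(dirty_queue.popleft())
    return shaded
```

For example, with 10 dirty tiles at 100 us each but only 350 us left this frame, 3 tiles are refreshed and 7 carry over to the next frame.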
Authoring?
Want content creation to be able to just toss anything at the engine, with the engine automatically scaling to maintain the frame-rate requirement regardless of whether the machine has a high-end or low-end GPU, or whether the content is simple or massively complex. Likewise want the game to transparently look better in the future as faster GPUs are released. Shaded sample density, both spatial and temporal, can be the buffer which enables this scalability.
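One plausible way to realize that automatic scaling is a simple feedback controller on sample density; this is only a sketch under assumed names and gains, not a statement of how any particular engine does it. A faster GPU finishes frames early, so density drifts up until the frame-time target is met again; heavier content pushes it back down.

```python
def adjust_density(density, frame_us, target_us,
                   lo=0.25, hi=4.0, gain=0.1):
    """Nudge shading sample density so measured frame time converges
    to the target. Faster GPUs (or simpler scenes) settle at a higher
    density automatically; the clamp [lo, hi] bounds quality swings."""
    error = (target_us - frame_us) / target_us  # >0 means frame was fast
    return max(lo, min(hi, density * (1.0 + gain * error)))
```

Run each frame, this converges without content-specific tuning: a frame over budget lowers density, a frame under budget raises it.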
Perceptual Masking
EDIT (added). There are natural limits to human perception. Some limits are related to eye scanning speed and eye attention, others perhaps to how fast the mind gains a full understanding of a situation. If a title has a context-sensitive idea of where a player's attention is, the game could bias increased sample density toward that area. Obvious candidates: the player in 3rd person, the player's active target, etc. Likewise on a scene change (or under fast motion which is not directly tracked by the eye) there are limits to the amount of detail the mind can perceive in a single frame. So if an image is initially not detailed in rapidly changing areas, in a way that does not directly trigger the mind's sense of a visual artifact (for example blocky high-contrast edges), and then converges to high quality faster than these perceptual limits while remaining visually coherent, human limits can mask the inability to produce the ideal still image in one frame.
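The attention-biased density idea above can be sketched as a per-tile density function that peaks at an assumed attention point (e.g. the player's active target) and falls off smoothly with distance. The Gaussian falloff, the parameter names, and the specific values are all illustrative assumptions.

```python
import math

def tile_density(tile_center, focus, base, boost, radius):
    """Sample density for a screen tile, biased toward an assumed
    attention point. Peaks at base + boost at the focus and falls off
    with a Gaussian of the given radius, so distant tiles get roughly
    the base density and no hard quality seam is introduced."""
    d = math.dist(tile_center, focus)
    w = math.exp(-(d * d) / (2.0 * radius * radius))
    return base + boost * w
```

A tile at the focus gets the full boosted density, while a tile many radii away is effectively at base density; the smooth falloff avoids the blocky high-contrast transitions that would themselves read as an artifact.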