Public documentation on AMD's GCN arch:
AMD's Public Southern Islands Instruction Set Architecture
src/gallium/drivers/radeonsi (Linux Open Source Driver Source)
At home I've been tempted to take a Linux box with an AMD GCN GPU and just go direct to the hardware, since AMD has opened up a bunch of the required documentation (ISA and driver source). The ultimate graphics API is virtually no API, and no CPU work to draw anything. The engine would simply leverage the 64-bit virtual address space of the GPU, give all possible resources a unique virtual address, then for any given GPU target, pre-compile (meaning at author time) not just the shaders, but the command buffer chunks required to draw a resource into any of the render targets in the game's pipeline. For each render target in the rendering pipeline, the engine would maintain a GPU-side array of 64-bit pointers to resources to test for visibility. Then, for GPU-side command buffer generation, walk the GPU-sorted array of visible resources and copy the command buffer chunk required to render each resource (or just copy a pointer to the chunk if the GPU supports hierarchical command buffers). Command and uniform buffers could be compiled double-buffered if required for easy constant updates. Loading and rendering a resource is as easy as streaming into physical pages, updating page tables on the GPU and/or CPU (collected and done once per frame), then adding 64-bit pointer(s) to the buffer(s) for visibility testing.
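To make that concrete, here is a rough C sketch of the data layout and the chunk-copy step. Every type, field, and size below is my own illustration (nothing here is AMD's actual packet format), and the copy is written as CPU code purely as a model of what the GPU-side compute pass would do.

#include <stdint.h>
#include <string.h>

#define MAX_TARGETS 8 /* illustrative: render targets in the game's pipeline */

/* Authored offline per GPU target: one pre-compiled command buffer chunk
   per render target the resource can be drawn into. */
typedef struct {
    uint64_t cmdChunkVa[MAX_TARGETS]; /* GPU virtual address of the chunk */
    uint32_t cmdWords[MAX_TARGETS];   /* chunk size in 32-bit command words */
    uint64_t boundsVa;                /* bounding data used by the visibility test */
} Resource;

/* CPU model of the GPU-side pass: given the sorted array of visible
   resources for one render target, append each pre-compiled chunk to the
   command buffer being built. A real implementation would do this copy in
   a compute kernel, or append just a pointer on hardware that supports
   hierarchical command buffers. Assumes chunks are CPU-visible at their
   virtual address for the purposes of this model. */
static uint32_t* emitVisible(uint32_t* dst, const Resource* const* visible,
                             uint32_t count, uint32_t target)
{
    for (uint32_t i = 0; i < count; ++i) {
        const Resource* r = visible[i];
        memcpy(dst, (const void*)(uintptr_t)r->cmdChunkVa[target],
               r->cmdWords[target] * sizeof(uint32_t));
        dst += r->cmdWords[target];
    }
    return dst;
}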
Constants in the virtual address space would be mapped to write-combined memory on the CPU, so either the GPU or the CPU could update constants at runtime. All the CPU overhead of generating command buffers (and resource constants for GCN) goes away. This removes the power and work typically done by 2 or more CPU threads, and greatly increases the maximum number of draw calls possible per frame.
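A minimal sketch of what the double-buffered constant update could look like, assuming the driver has already handed back a write-combined CPU mapping of the constant area at a known GPU virtual address (all names here are illustrative):

#include <stdint.h>

typedef struct { float mvp[16]; float tint[4]; } DrawConstants; /* example layout */

typedef struct {
    DrawConstants* wc;    /* CPU pointer into the write-combined mapping, 2 slots */
    uint64_t       gpuVa; /* GPU virtual address of slot 0 */
    uint32_t       slot;  /* which half the CPU writes this frame */
} ConstantPair;

/* Flip to the half the GPU is not reading, fill it with plain sequential
   stores (write-combining favors forward streaming writes), and return the
   GPU virtual address the pre-compiled chunk should source this frame. */
static uint64_t updateConstants(ConstantPair* c, const DrawConstants* src)
{
    c->slot ^= 1u;
    c->wc[c->slot] = *src;
    return c->gpuVa + (uint64_t)c->slot * sizeof(DrawConstants);
}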
This same concept can easily be extended to the CPU side as well. Why allocate memory at runtime? Just set up resources in virtual memory space large enough to cover the worst case, and use physical memory backing at runtime to switch between common backed pages for "non-resident resources" and resident physically backed pages. Lay everything in the data-flow network out into linear streams which are trivially prefetched by the CPU. Duplicate data if required, and use compression on the packed source data sitting on disk, network, or solid state.
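On Linux the reserve-everything-up-front part maps directly onto existing calls. A small sketch (aliasing all non-resident pages onto one common backed page would additionally need a shared mapping, which is omitted here; sizes are illustrative):

#include <sys/mman.h>
#include <stddef.h>

#define POOL_BYTES ((size_t)1 << 30) /* illustrative 1 GiB worst case */

/* Reserve worst-case address space with no physical backing committed. */
static void* reservePool(void)
{
    return mmap(NULL, POOL_BYTES, PROT_NONE,
                MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
}

/* Make a streamed-in resource resident: pages fault in on first touch. */
static int makeResident(void* base, size_t offset, size_t bytes)
{
    return mprotect((char*)base + offset, bytes, PROT_READ | PROT_WRITE);
}

/* Evict: return the physical pages but keep the address range reserved. */
static int makeNonResident(void* base, size_t offset, size_t bytes)
{
    int r = madvise((char*)base + offset, bytes, MADV_DONTNEED);
    return r ? r : mprotect((char*)base + offset, bytes, PROT_NONE);
}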