Steam Machine
No news: 300 users will get prototype hardware, but no disclosure of what that hardware is.
AMD's GPU14 Event
4K Displays
A 5 Tflop GPU driving a 4K display delivers only 67% of the perf/pixel of a 1.84 Tflop PS4 driving a 1080p display. Perf/pixel matters. AMD hit 4 Tflops in June 2012, and looks like they will be at 5 Tflops in 2014. At this rate of performance scaling it will take (edited from initial post) more time before a single-GPU product hits the 7.36 Tflops necessary to drive native 4K resolution as well as a PS4 can drive 1080p. Practically speaking, 4K is for multi-GPU boxes right now.
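The perf/pixel arithmetic above (PS4 at 1.84 Tflops, 4K being 4x the pixels of 1080p) can be checked directly:

```python
# Perf/pixel comparison: a hypothetical 5 Tflop GPU at 4K vs a 1.84 Tflop PS4 at 1080p.
px_1080p = 1920 * 1080           # 2,073,600 pixels
px_4k    = 3840 * 2160           # 8,294,400 pixels (exactly 4x 1080p)

ps4_per_px = 1.84e12 / px_1080p  # flops available per pixel on PS4 at 1080p
gpu_per_px = 5.00e12 / px_4k     # flops per pixel for a 5 Tflop GPU at 4K

print(gpu_per_px / ps4_per_px)          # ~0.68, i.e. roughly 67% of PS4's perf/pixel
print(1.84e12 * (px_4k / px_1080p))     # 7.36e12: flops needed to match PS4 at native 4K
```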
TrueAudio
NVIDIA should take some notes here and just open up an NVAPI entry point and an OpenGL extension giving developers access to the GPU-side audio ring buffers, which would enable audio processing using the graphics API.
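The ring-buffer model being suggested is simple at its core; here is a minimal CPU-side sketch of the idea (a hypothetical illustration only, not an actual NVAPI or TrueAudio interface):

```python
# Hypothetical sketch of a ring buffer like the GPU-side audio buffers described
# above: a producer (the game's mixer) writes sample frames, a consumer (a GPU
# kernel in the real design) drains them. Indices wrap modulo the buffer size.
class AudioRingBuffer:
    def __init__(self, frames):
        self.buf = [0.0] * frames
        self.frames = frames
        self.write_idx = 0   # next frame the producer will fill
        self.read_idx = 0    # next frame the consumer will drain

    def free_space(self):
        return self.frames - (self.write_idx - self.read_idx)

    def write(self, samples):
        assert len(samples) <= self.free_space(), "ring buffer overrun"
        for s in samples:
            self.buf[self.write_idx % self.frames] = s
            self.write_idx += 1

    def read(self, n):
        n = min(n, self.write_idx - self.read_idx)
        out = [self.buf[(self.read_idx + i) % self.frames] for i in range(n)]
        self.read_idx += n
        return out
```

In the real design the consumer side would be GPU work rather than CPU code; the wrap-around indexing and the producer/consumer cursors are the parts an API extension would need to expose.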
R9 290X
AMD R9 290X: 5+ Tflop/s, 300+ GB/s, 4 Gtri/s, 6+ Btransistors
NVIDIA Titan: 4.5 Tflop/s, 288 GB/s, 4.4 Gtri/s, 7 Btransistors
Given the presentation numbers, the AMD R9 290X looks to be roughly 1.4x the area of the last-generation AMD HD7970 GHz and somewhere around 1.25x the performance (assuming exactly 5 Tflops). Looks like non-area perf scaling is waiting on another process change (this is still 28nm, from my understanding). The best news here is a possible perf/$ story, and that AMD doubled triangle throughput.
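The area-vs-performance point works out as follows (using the rough 1.4x area and 1.25x performance ratios above):

```python
# Rough perf-per-area comparison of R9 290X vs HD7970 GHz, using the relative
# estimates in the text: 1.4x the die area for about 1.25x the performance.
area_ratio = 1.4
perf_ratio = 1.25
perf_per_area = perf_ratio / area_ratio
print(perf_per_area)  # ~0.89: perf/area actually dropped slightly on the same 28nm node
```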
Mantle
Closer-to-the-metal graphics API: 9x lower overhead than DX11 (9x the draw calls). More info in November; BF4 will auto-update to use Mantle in December. Looks like Frostbite (the engine used EA-wide) is going to be directly optimized for AMD, with some important performance and quality advantages compared to what is possible using just DX11 or GL. Estimating that on platforms where the GPU and CPU share power and bandwidth, Frostbite on AMD will have a serious overall graphics advantage (the CPU uses less of the pie, leaving more for graphics). Then on desktop, estimating the advantage will be that non-AMD-GCN platforms see a massive reduction in the number of objects drawn in the scene, much less responsive resource streaming, a less consistent frame rate, and perhaps a bunch of eye candy turned off. This is a huge deal: a major publisher using a to-the-metal graphics API on a PC.
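To see why 9x lower per-draw-call overhead matters, here is a toy CPU-budget model (the 9x figure is from the presentation; the per-call cost and frame budget are made-up illustrative numbers, not measurements):

```python
# Toy model: how many draw calls fit in a fixed CPU submission budget per frame,
# given per-draw-call API overhead. All absolute numbers are assumptions.
frame_budget_ms = 8.0                  # assumed CPU time available for draw submission
dx11_cost_ms    = 0.010                # assumed CPU cost per DX11 draw call
mantle_cost_ms  = dx11_cost_ms / 9.0   # the claimed 9x lower overhead

dx11_draws   = round(frame_budget_ms / dx11_cost_ms)
mantle_draws = round(frame_budget_ms / mantle_cost_ms)
print(dx11_draws, mantle_draws)  # 800 vs 7200: 9x more objects in the same CPU budget
```

The absolute numbers are arbitrary, but the ratio is the point: with draw submission 9x cheaper, the same CPU slice submits 9x the objects, which is exactly the "massive reduction in objects drawn" gap described above for the non-Mantle path.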