Spec | Sniper's PC | PlayStation 5 | Xbox Series X |
---|---|---|---|
CPU | 6-Core AMD Zen 2 | 8-Core AMD Zen 2 | 8-Core AMD Zen 2 |
CPU Max Clock | 4400 MHz | 3500 MHz | 3800 MHz |
GPU | Nvidia RTX 2080 | AMD RDNA 2 | AMD RDNA 2 |
GPU Chip Details | 2944 "CUDA" Cores @ 1800 MHz | 36 "Compute Units" @ 2230 MHz | 52 "Compute Units" @ 1825 MHz |
Memory | 8 GB @ 448,000 MB/s, 32 GB @ 23,466 MB/s | 16 GB @ 448,000 MB/s | 10 GB @ 560,000 MB/s, 6 GB @ 336,000 MB/s |
Storage | 512 GB @ 1775 MB/s, 3 TB @ 190 MB/s | 825 GB @ 5500 MB/s raw | 1 TB @ 2400 MB/s raw |
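Peak memory bandwidth figures like these fall out of simple arithmetic: per-pin data rate times bus width. A minimal sketch in Python, assuming the publicly reported 14 Gbps GDDR6 data rate and bus widths for each machine (the function name is mine):

```python
def peak_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Theoretical peak bandwidth in GB/s: per-pin rate x bus width / 8 bits-per-byte."""
    return data_rate_gbps * bus_width_bits / 8

print(peak_bandwidth_gbs(14, 256))  # RTX 2080 and PS5: 256-bit GDDR6 -> 448.0 GB/s
print(peak_bandwidth_gbs(14, 320))  # Series X fast 10 GB pool: 320-bit -> 560.0 GB/s
print(peak_bandwidth_gbs(14, 192))  # Series X slow 6 GB pool: effectively 192-bit -> 336.0 GB/s
```

The Series X's split pools come from its mixed-capacity memory chips: the upper 6 GB spans only part of the 320-bit bus, so it sees a narrower effective width.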
A little over a week ago, I wrote about how these specs manifest in terms of real-world performance. In a nutshell, both consoles deliver slightly lower framerates than my RTX 2080, and my PC's advantage grows when ray tracing and/or DLSS enter the mix.
However, the PlayStation 5 is doing storage-related things of which my current PC storage solution can only dream: fast travel in "Spider-Man Remastered," for example, involves nothing more than a fade-out and fade-in.
As far as comparing the two dedicated systems goes, the PlayStation 5 has so far been slightly faster than the Series X. Mark Cerny's theory that higher clocks would beat trying to keep more "Compute Units" busy is proving correct so far, although over time developers may tweak their engines to take better advantage of the Series X's "Compute Unit" count, perhaps by making their graphics subsystems more multi-threaded.
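The clocks-versus-width trade-off shows up directly in theoretical compute throughput. A rough sketch using the standard peak-FP32 formula (shader cores × 2 ops per clock × clock speed) and the figures from the table above; the function name is mine:

```python
def peak_fp32_tflops(shader_cores: int, clock_ghz: float) -> float:
    """Theoretical peak FP32 throughput in TFLOPS: each core retires 2 ops (an FMA) per clock."""
    return shader_cores * 2 * clock_ghz / 1000

# Each RDNA 2 "Compute Unit" contains 64 shader cores.
print(round(peak_fp32_tflops(36 * 64, 2.230), 2))  # PS5      -> 10.28
print(round(peak_fp32_tflops(52 * 64, 1.825), 2))  # Series X -> 12.15
print(round(peak_fp32_tflops(2944, 1.800), 2))     # RTX 2080 -> 10.6
```

On paper the Series X leads, which is exactly why its slower real-world showing suggests those extra "Compute Units" are not yet being kept busy.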
Additionally, the eventual emergence of some kind of DLSS-comparable upscaling on the new systems is not impossible, which would help them close the gap with my PC, or even surpass it. However, absent silicon dedicated to that purpose, any such solution will almost certainly be inferior to DLSS itself.
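For context on what upscaling without dedicated silicon means at its crudest, here is a minimal sketch of nearest-neighbor spatial upscaling in plain Python (the function name is mine). Simple spatial filters like this invent no new detail, which is why DLSS-style temporal reconstruction, which pulls extra information from motion vectors and previous frames, looks so much better:

```python
def upscale_nearest(frame, factor):
    """Naive spatial upscale: duplicate each pixel factor x factor times.
    frame is a 2D list of pixel values; returns the enlarged 2D list."""
    out = []
    for row in frame:
        wide = [px for px in row for _ in range(factor)]   # stretch horizontally
        out.extend([list(wide) for _ in range(factor)])    # then vertically
    return out

frame = [[1, 2],
         [3, 4]]
print(upscale_nearest(frame, 2))
# -> [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```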