12
Condor
4y

So recently I had an argument with gamers about how much memory a graphics card needs. The guy suggested the 8GB model of.. idk, I forgot which GPU it was already, some Nvidia crap.

I argued against that: why does memory size matter so much? I know that it takes bandwidth to generate and store a frame, and I know how much size and bandwidth that is. It's a fairly simple calculation - you take your horizontal and vertical resolution (e.g. 2560x1080, which I'll go with for the rest of the rant), times the number of subpixels (red, green and blue), times the bit depth (i.e. how many values each subpixel/color brightness can be set to, usually 8 bits, i.e. 0-255).

The calculation would thus look like this:
2560*1080*3*8 = the resulting size in bits. You can omit the last *8 to get the size in bytes instead, but only because each subpixel is exactly one byte on an 8-bit display.

The resulting number is exactly 8100 KiB, or roughly 8MB, to store a frame. There is no more to storing a frame than that. Your GPU renders the frame (it might need some memory for that, but not 1000x the size of the frame itself, that's ridiculous), stores it in a memory area known as a framebuffer, and the display eventually takes it from there and puts it on the screen.

Assuming that the refresh rate for the display is 60Hz, and that you didn't overbuild your graphics card to display a bazillion lost frames for that, you need to display 60 frames a second at 8MB each. Now that is significant. You need 8x60MB/s for that, which is 480MB/s. For higher framerate (that's hopefully coupled with a display capable of driving that) you need higher bandwidth, and for higher resolution and/or higher bit depth, you'd need more memory to fit your frame. But it's not a lot, certainly not 8GB of video memory.
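
If you want to sanity-check that napkin math, here it is as a few lines of Python - purely illustrative, plug in your own resolution, bit depth and refresh rate:

width, height = 2560, 1080   # display resolution
subpixels = 3                # red, green and blue
bit_depth = 8                # bits per subpixel

frame_bytes = width * height * subpixels * bit_depth // 8   # one byte per subpixel at 8 bits
print(frame_bytes / 1024)        # 8100.0 KiB, i.e. roughly 8MB per frame
print(frame_bytes * 60 / 1e6)    # ~497.7 MB/s at 60Hz (the ~480MB/s above uses the rounded 8MB figure)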

Question time for gamers: suppose you run your fancy game off the filthy iGPU in a laptop or whatever, with 8GB of memory in that entire system. Are you actually using all that shared general-purpose RAM for frames and "there's more to it" juicy game data? Where does the rest of the operating system's memory fit in that case? Ahhh.. yeah, it doesn't. The iGPU magically doesn't need all of the 8GB of memory you've just told me the dGPU totally needs.

I compared it to displaying regular frames, yes. After all, that's what a game mostly is: a lot of potentially rapidly changing frames. I took the entire bandwidth and size of every unique frame into account, whereas the display of regular system tasks *could* potentially get away with less, since most of the frame is unchanging most of the time. I did not make that assumption. Rapidly changing frames are also why the bitrate on e.g. screen recordings matters so much. A lower bitrate means you'll be compromising quality in rapidly changing scenes. I've been bitten by that before. For those cases it's better to record a huge source file at a bitrate that can handle all those rapidly changing frames, then reduce the final size in post-processing.

I've even proven that driving a 2560x1080 display doesn't take oodles of memory, because I actually set the timings for such a display so that a Raspberry Pi could drive it at that resolution. Conveniently the memory split between the overall system and the GPU is also tunable there, and the total shared memory is a relatively meager 1GB. I used to set it to 256MB because, just like the aforementioned gamers, I thought that a display would require that much memory. After running into issues that turned out to be driver-related (seems like the VideoCore driver in Raspbian buster is kinda fuckulated atm, while it works fine in stretch), I ended up tweaking it a bit to see what would still work. 64MB of memory to drive a 2560x1080 display? You got it! Because a single frame is only 8MB in size, and 64MB of video memory easily fits that plus a few spares just in case.
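
For reference, that memory split on the Pi is literally one line in /boot/config.txt - something along these lines (the custom display timings live in the same file via hdmi_cvt= or hdmi_timings=, but the exact values are panel-specific so I'll leave them out):

gpu_mem=64    # hand the VideoCore 64MB of the 1GB total, leave the rest for the system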

I must've sucked all that data out of my ass though; I've only seen people build GPUs out of discrete components and gone down into the realm of manually setting display timings myself.

Interesting build log / documentary style video on building a GPU on your own: https://youtube.com/watch/...

Have fun!

Comments
  • 3
    Actually it's an interesting thing.
    It's already been shown by the Xbox One vs the PS4 that faster RAM, even with less of it, means more performance, so I think the bus is the factor with the biggest impact.
    It doesn't matter if you have the best card in the world if the other components can't keep up.
  • 21
    It's not only that frames are stored there - that's just the final result. Bitmap textures and shit are stored there too, which are used to calculate the frames. Mapping them onto objects and stuff.
  • 1
    @Fast-Nop well yes but those are minor compared to the size of the final frame. Surely it can't be hundreds of times the size of the final frame?
  • 22
    @Condor Hi-res textures, and tons of them? It's not only the textures for the current frame, after all. The point is you transfer them through the bottleneck to the GPU up-front, ideally for the whole game, and then let them stay there.
  • 14
    Texture memory usage on a video card is not minor at all. You also store geometry there, which can be VERY detailed. I know that for Skyrim, which is an old game now, you can get high-res textures that are hundreds of megabytes in size for in-game objects. Those get stored on the video card. Newer games are only going to get more demanding.
  • 11
    I'm somehow lost...

    What do frames have to do with VRAM?

    I've read the whole rant a few times and it looks to me like you completely misunderstood 3D rendering and VRAM usage. Or more precisely: made a lot of weird assumptions without proper background information.

    The VRAM explosion isn't a necessity per se - that's right.

    Application / Driver and abstractions / OS and GPU firmware determine how the VRAM will be used.

    If there's less VRAM, then less VRAM will be used (e.g. Mesa recently added a configurable VRAM limit to aid debugging).

    The primary use for VRAM is to preload textures, so that - in simple terms, as this isn't entirely correct - the GPU only has to render.

    Layman's terms again - so not entirely correct - a lot of VRAM is a necessity to work around the upload path from disk / CPU over PCIe to the GPU, and to just keep things where they need to be, ready at all times.
  • 3
    That was interesting to read!

    By the way, I have "the" argument why you need a lot of memory on your GPU: https://twitter.com/Strife212/...
  • 3
    @Jilano Crysis is always an argument.
  • 12
    VRAM stores basically anything necessary for rendering a frame. Textures, models, shaders, lighting maps, etc. If those assets had to be pulled on-demand from your disk or even from system RAM, your FPS would absolutely tank due to bandwidth bottlenecks. Post-processing effects (especially antialiasing) also impact VRAM usage.

    All of these assets add up to a lot more data than the final rendered frame, which is why people are saying you should have at least 8GB of VRAM nowadays. 4GB cards are already struggling to run modern games on ultra settings, especially at 4k resolution.
  • 4
    @Fast-Nop Thank God. I was gritting my teeth reading the rant. These days, 8GB isn't enough for shaders, meshes, and especially textures and normal maps. Not to mention that not every game is rendered in one pass - lots of frames are composites of different renders using different shaders. Triple-A games can fill up a graphics card, especially the shitty, unoptimised ones.
  • 3
    @Condor they are not minor: a 4K HDR texture is huge, plus a normal map, an ambient occlusion map, edge detection and other FX maps. As a hobby shader dev, believe me, those things are huge.

    Another huge misconception you have: images aren't stored in JPG or PNG format in GPU memory. It's raw data!

    Let's say:

    4096 * 4096 = 16,777,216 pixels
    8 bits per channel * 4 channels = 32 bits = 4 bytes per pixel
    4 * 16,777,216 = 67,108,864 bytes = 64 megabytes

    In my game each object has at least 4 textures: diffuse, spec, normal map and AO. Some also have a mask for team color or whatever, plus an FX texture. So if I use 4K textures, that's 256 MB - 384 MB per unique object.

    Of course you can optimize here and there, but the texture sizes are huge
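
    Same napkin math as a quick Python sketch, using just the example figures above (4 to 6 uncompressed 4K RGBA textures per object), not any particular engine's numbers:

    pixels = 4096 * 4096                # one 4K texture
    bytes_per_pixel = 4                 # 8 bits x 4 channels (RGBA), uncompressed
    texture_bytes = pixels * bytes_per_pixel
    print(texture_bytes / 2**20)        # 64.0 megabytes per texture

    for textures_per_object in (4, 6):  # diffuse/spec/normal/AO, plus mask and FX maps
        print(textures_per_object * texture_bytes / 2**20)   # 256.0 and 384.0 megabytes per unique object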
  • 3
    Do you have any idea what a real (not homemade breadboard-based) GPU does? Because looking at your numbers it's immediately obvious that drawing a buffer on the screen is the least important part of it. Everything that isn't a relic can do that flawlessly.
  • 0
    @electrineer I might give that a try actually! I have a gaming laptop (Lenovo Ideapad Y700) that has a GTX 960M in it that I don't even use for gaming. I bought the laptop for the better display, sound system and size compared to my ThinkPad X220. Given that I never play games on it, there's very little that that GPU is currently doing. The iGPU is responsible for drawing the screen. Seems like a nice reuse of the dedicated GPU to get to 20GB (16GB general purpose, 4GB VRAM) of memory.

    As far as the construction of frames goes and the textures needed for that, I admit that I greatly underestimated it. I've learned a lot from the comments, thanks!

    Edit: Also worth noting is https://extremetech.com/gaming/..., which I read yesterday; it goes into detail on what a "real" non-breadboard GPU looks like and what it does.
  • 0
    Actually https://tomshardware.com/reviews/... also does the exact same calculation I used in my rant, although it counts 4 subpixels (probably an alpha or padding byte). That makes the final frame 8.3MB for them on a 1920x1080 monitor, compared to the 2560x1080 I went with earlier (because that's the display I used with the Raspberry Pi where I could tweak the memory split). Basically it says that memory size is the least significant factor; there are more important ones. Between memory configurations of the same GPU, it's apparently often not worth paying for the higher-memory model, as the performance gains are negligible. It also mentions, just like I did, that this changes with resolution and presets within the game. One last important thing they mention is that not all textures are loaded into memory all the time, which sounds like a reasonable thing to do.
  • 0
    @Condor That's a fairly old article, and even so they say that to run Skyrim, a 9-year-old game, at 4K with the highest graphics settings you still need at least 3GB of VRAM. Modern games are far more taxing, of course, and you'll regularly see more than 4GB of VRAM usage with graphics settings maxed.
  • 0
    @EmberQuill it also mentions how much the resolution you choose affects those requirements. That's something I've seen everywhere I've read. If you don't game at 4K but at 1080p, you only need to render a quarter of the pixels, which brings it comfortably under 1GB. Way more than the size of a frame, but not so obscenely large that everyone would need a top-of-the-line graphics card.
  • 0
    @Condor again, that was a nine-year-old game. GTA V, a (slightly) more recent game, sits around 4GB of VRAM usage at 1080p on ultra settings (and over 6GB at 4K).

    Some games are just poorly optimized, too. You'll need a 6GB card at minimum to run Shadow of Mordor smoothly at 1080p.

    And those games are from 2013 and 2014, respectively. Newer games vary widely, depending on how they handle shader caching and how their assets are optimized. Some can run on a 2GB card if the rest of the card's specs are powerful enough. Others are really pushing the limits of a 6GB card even at 1080p and 60 FPS.

    For 1080p a 6GB card should work fine for any game currently out (with 4GB being sufficient for the majority of games), but who knows whether that could change in the next year? You'll probably have to start turning down graphics settings sooner rather than later.

    Of course, this is assuming you want the highest graphics settings.
  • 0
    A single 4k by 4k texture is six times the size of the framebuffer you're assuming.
  • 1
    I agree that we shouldn't need the vast amounts that we do, and in general games could be optimized for size by a lot. A buddy of mine hacked the shit out of Elder Scrolls. Even the snowflake is a massive high-res texture that is slow and gets downscaled anyway (so worse quality).
    As others pointed out, you missed the point of VRAM. The high-poly objects are also loaded in there. As for why not use the general-purpose memory: besides the fact that VRAM sits right next to the GPU, there is a vast difference between GDDR and DDR: https://quora.com/What-is-the-diffe.... Might be an interesting read for you.
  • 0
    There is a LOT of missing fundamental knowledge about how graphics cards work here....

    Graphics cards don't share main RAM. When a shader is using a texture, it's reading from the graphics card's own memory. When you load a texture, the CPU literally reads it from main RAM and sends it across a bus to the graphics card, which then stores it in its own memory, which is optimized for random access.
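
    To make that concrete, this is roughly what a texture upload looks like through OpenGL - a minimal sketch in Python/PyOpenGL purely as illustration, assuming a GL context already exists (created via GLFW/SDL or similar, not shown). glTexImage2D is the call that pushes the pixel data across the bus into the card's memory:

    import numpy as np
    from OpenGL.GL import (glGenTextures, glBindTexture, glTexImage2D,
                           GL_TEXTURE_2D, GL_RGBA8, GL_RGBA, GL_UNSIGNED_BYTE)

    width, height = 4096, 4096
    pixels = np.zeros((height, width, 4), dtype=np.uint8)   # raw RGBA data sitting in main RAM

    tex = glGenTextures(1)              # handle for a texture object on the GPU
    glBindTexture(GL_TEXTURE_2D, tex)
    # the actual upload: pixel data travels from main RAM across the bus into
    # the card's own memory, where it stays for shaders to sample from
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels)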