  • So GPU video encoding isn’t using the GPU cores. It’s done by a separate fixed-function block on the card (NVENC on NVIDIA hardware, for example), which supports far fewer operations than a CPU does. It’s not running the same code a CPU encoder runs.

    But even if you did compare GPU cores to CPU cores, they’re not the same. GPUs have a different set of operations from a CPU, because they’re designed for different things. GPUs bundle a bunch of “cores” under one control unit; they all execute the exact same operation at the same time, and have significantly less capability beyond that. Code that branches a lot can easily fail to benefit from that design, especially if there’s no easy way to restructure the data so that all 32 cores under a control unit stay on the same path (see the sketch below).
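
    To make divergence concrete, here’s a minimal CUDA sketch (the kernel name and the toy even/odd workload are mine, purely for illustration). Adjacent threads land in the same 32-thread warp, so the even/odd branch splits every warp and the two paths execute one after the other:

    ```cuda
    #include <cstdio>
    #include <cuda_runtime.h>

    // Adjacent threads alternate between the two branches below, so each
    // 32-thread warp splits: one half idles while the other half runs.
    __global__ void divergent(const int *in, int *out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        if (in[i] % 2 == 0)
            out[i] = in[i] * 2;  // half the warp idles here...
        else
            out[i] = in[i] * 3;  // ...and the other half idles here
    }

    int main() {
        const int n = 1 << 20;
        int *in, *out;
        cudaMallocManaged(&in, n * sizeof(int));
        cudaMallocManaged(&out, n * sizeof(int));
        for (int i = 0; i < n; ++i) in[i] = i;

        divergent<<<(n + 255) / 256, 256>>>(in, out, n);
        cudaDeviceSynchronize();
        printf("out[5] = %d\n", out[5]);  // 5 is odd, so 5 * 3 = 15

        cudaFree(in);
        cudaFree(out);
        return 0;
    }
    ```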

    As architectures get more complex, GPUs are adding things that don’t have great analogues on a CPU yet, and CPUs keep gaining SIMD instructions that apply the same operation to (smaller) sets of data points. But at the end of the day, the answer to your question is that they aren’t doing the same math, and because of the limits on the kind of math GPUs are best at, nobody has much incentive to build a software encoder that runs on the GPU’s general-purpose cores.
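
    For the flip side of the sketch above: if the data can be restructured up front so each launch is homogeneous, the branch becomes uniform across every warp and nothing serializes. (A hedged sketch under that assumption; the partitioning scheme and names are mine.)

    ```cuda
    #include <cstdio>
    #include <cuda_runtime.h>

    // Same toy workload, but the branch now depends on a kernel parameter
    // that is identical for every thread, so no warp ever splits.
    __global__ void coherent(const int *in, int *out, int n, bool evens) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        if (evens)
            out[i] = in[i] * 2;  // uniform across the whole warp
        else
            out[i] = in[i] * 3;  // uniform across the whole warp
    }

    int main() {
        const int n = 1 << 20;
        int *evens, *odds, *out;
        cudaMallocManaged(&evens, n * sizeof(int));
        cudaMallocManaged(&odds, n * sizeof(int));
        cudaMallocManaged(&out, 2 * n * sizeof(int));
        for (int i = 0; i < n; ++i) { evens[i] = 2 * i; odds[i] = 2 * i + 1; }

        // One homogeneous launch per partition: no divergence in either.
        coherent<<<(n + 255) / 256, 256>>>(evens, out, n, true);
        coherent<<<(n + 255) / 256, 256>>>(odds, out + n, n, false);
        cudaDeviceSynchronize();
        printf("%d %d\n", out[1], out[n + 1]);  // 2*1*2 = 4, (2*1+1)*3 = 9

        cudaFree(evens); cudaFree(odds); cudaFree(out);
        return 0;
    }
    ```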