• 5 Posts
  • 52 Comments
Joined 11 months ago
Cake day: October 20th, 2023








  • I misspoke, and I apologize. I could not recall the term TPU, so I just went with the name of the platform (CUDA). Nvidia has various accelerator cards that use the CUDA platform (like the K80, for example). TPUs (Tensor Processing Units) are coprocessors designed to run some GPU-intensive tasks without the expense of an actual GPU. They are not a one-to-one replacement, as they perform calculations in completely different ways.

    I believe you would be well served by researching a bit and then making an informed decision on what to get (TPU, GPU or both).


  • OK man, don’t pop a vein over this. I’m a hobbyist, with some experience, but a hobbyist nonetheless. I’m speaking from personal experience, nothing else. You may well be right (and thanks for the links, they’re really good for me to learn even more).

    I guess, at the end of the day, the OP will need to make an informed decision on what will work for him while adhering to his budget.

    I’m glad to be here, because I can help people (at least some times) and learn at the same time.

    I just hope the OP ends up with something that’ll fit his needs and budget. I will be adding a K80 to my rig soon, only because I can let go of 50 bucks and want to test it until it burns.

    I wish you all a very nice weekend, and keep tweaking; it’s too much fun.




  • Let’s get this out of the way: not a single consumer-grade board has more than 16 lanes on one PCIe slot. With the exception of 2 or 3 very expensive new boards out there, you’ll be hard pressed to find a board with 3 slots giving you a total max of 28 lanes (16+8+4). So, regardless of TPU or GPU, that’s going to be your limit.

    GPUs are designed as general-purpose processors that have to support millions of different applications and software. So while a GPU can run multiple functions at once, in order to do so it must access registers or shared memory to read and store the intermediate calculation results. And since the GPU performs tons of parallel calculations on its thousands of ALUs, it also expends large amounts of energy accessing memory, which in turn increases the footprint of the GPU.

    TPUs are application-specific integrated circuits (ASICs) designed specifically to handle the computational demands of machine learning and accelerate AI calculations and algorithms. They are built as a domain-specific architecture: instead of a general-purpose processor like a GPU or CPU, they were designed as a matrix processor specialized for neural network workloads. Since the TPU is a matrix processor instead of a general-purpose processor, it avoids the memory access problem that slows down GPUs and CPUs and forces them to use more processing power.

    Get your facts straight and read more before you try to send others on wild goose chases. As I said, the OP already works in this field; it shouldn’t be hard for him to find the information and make an educated decision.
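
    If it helps, here’s a rough sketch of that lane math. The slot layout (16+8+4) matches the example above, but the card list and lane counts are made-up placeholders, not a real build:

```python
# Toy PCIe lane-budget check for a hypothetical consumer board (16+8+4 slot layout).
# The cards and their lane requests below are illustrative examples only.

SLOT_LANES = [16, 8, 4]  # electrical lanes actually wired to each physical slot

cards = {
    "GPU (x16 card)": 16,
    "second GPU or accelerator card": 8,
    "NVMe / capture card": 4,
}

budget = sum(SLOT_LANES)
requested = sum(cards.values())

print(f"Lanes available across slots: {budget}")
print(f"Lanes requested by cards:     {requested}")

if requested > budget:
    print("Over budget: something drops to a narrower link or doesn't fit.")
else:
    print("Fits the slot layout (per-card bandwidth still depends on slot wiring).")
```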



  • By all means, unless I wanted to play some AAA games, I wouldn’t get any dedicated GPU. AMD and Intel’s integrated graphics are more than adequate for most use cases of daily computer use, and work great in any Linux distro.

    For example, my work PC runs on a Ryzen 7 7735HS with the integrated 680M iGPU. I usually have 3 or 4 workspaces open at once, each running a different browser (Vivaldi, Brave, Librewolf and Mullvad) with anywhere between 4 and 20+ tabs open on each, OnlyOffice on one workspace and LibreOffice Calc on another, a Flatpak of Teams for work, FreeTube, a bunch of different dashboards, and an instance of Sins of a Solar Empire (I play while I work). I have yet to see my CPU go past 20%.

    For more demanding needs, I have a laptop running an Intel 11th-gen i7 with an Nvidia 3070 Ti. Great laptop (System76 Gazelle 16), but I barely ever use it, since I don’t really need that much power anyway.

    The days of needing a dedicated GPU to avoid CPU lag in daily computing are long gone. Unless you’re going to push things to the limit with heavy video rendering, AAA gaming or AI, any integrated GPU will suffice.











  • That, and you need to decide how much positive or negative pressure you want in there as well. You could always do some calculations: treat your case as an open control volume where mass can transfer across the boundaries. Then the sum of air going into and out of the case must equal the rate of change of air in the case. Assuming the volume of air in your case is constant, that term is zero. So you can look at the rated volume flow rate for each fan (CFM, cubic feet per minute) and see whether the summation is positive or negative. A positive value means “positive pressure” and a negative value “negative pressure”.

    The only problem is if the fans are not running at max RPM and therefore not at the rated CFM value, which is the case if you have your fans plugged into the motherboard (regardless of whether you’re using PWM or 3-pin). In that case, you would have to calculate the volume flow rate of each individual fan as a function of RPM. This may not be a linear function and would probably require taking some data and coming up with a regression, which would be way harder to do.

    tldr: add up the CFM going into the case, subtract the CFM leaving the case. If the value is positive you have “positive pressure”
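
    A quick back-of-the-envelope sketch of that bookkeeping. The fan CFM ratings and RPM numbers are made up, and real fan curves aren’t truly linear in RPM, so the scaling here is only a first guess:

```python
# Rough net-airflow estimate for a PC case.
# Positive result -> "positive pressure", negative -> "negative pressure".
# All CFM ratings and RPM values below are hypothetical examples.

intake_fans = [
    # (rated CFM at max RPM, max RPM, current RPM)
    (72.0, 1800, 1200),   # front fan 1
    (72.0, 1800, 1200),   # front fan 2
]

exhaust_fans = [
    (59.0, 1500, 1500),   # rear fan
    (59.0, 1500, 1000),   # top fan
]

def estimated_cfm(rated_cfm, max_rpm, current_rpm):
    """Crude linear scaling of airflow with fan speed.

    Real fan curves are not linear, so treat this as an approximation;
    a proper answer needs measured data and a regression.
    """
    return rated_cfm * (current_rpm / max_rpm)

cfm_in = sum(estimated_cfm(*fan) for fan in intake_fans)
cfm_out = sum(estimated_cfm(*fan) for fan in exhaust_fans)
net = cfm_in - cfm_out

print(f"Intake:  {cfm_in:.1f} CFM")
print(f"Exhaust: {cfm_out:.1f} CFM")
print(f"Net:     {net:+.1f} CFM "
      f"({'positive' if net > 0 else 'negative' if net < 0 else 'neutral'} pressure)")
```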