I misspoke, and I apologize. I couldn't recall the term TPU, so I just went with the name of the platform (CUDA). Nvidia has various accelerator devices that use CUDA (the K80, for example). TPUs (Tensor Processing Units) are coprocessors designed to run some GPU-intensive tasks without the expense of an actual GPU. They are not a one-to-one replacement, as they perform calculations in completely different ways.
I believe you would be well served by researching a bit and then making an informed decision on what to get (TPU, GPU or both).
OK man, don't pop a vein over this. I'm a hobbyist with some experience, but a hobbyist nonetheless. I'm speaking from personal experience, nothing else. You may well be right (and thanks for the links, they're really good for me to learn even more).
I guess, at the end of the day, the OP will need to make an informed decision on what will work for him while adhering to his budget.
I’m glad to be here, because I can help people (at least some times) and learn at the same time.
I just hope the OP ends up with something that'll fit his needs and budget. I'll be adding a K80 to my rig soon, only because I can let go of 50 bucks and want to test it until it burns.
I wish you all a very nice weekend, and keep tweaking, it's too much fun.
I have absolutely no counter for you on this one, as I'm not aware of the highest-level stuff between manufacturers. And it makes sense. Nvidia has been the go-to manufacturer for gaming, and developers usually improve their code based on what's needed to run as well as possible on Nvidia hardware. I'll research more on this when I have a chance; this seems to be a very interesting topic. Thank you for pointing this out.
I absolutely agree with your statement. However, the point of his questions is performance, because of his work with AI. I'd rather Nvidia open-sourced their drivers (which has been in the works for a while now); probably every Linux user already wants this to happen. But that doesn't change the fact that, even if proprietary drivers are needed for the best performance, Nvidia is still ahead of anything else out there. Like you, though, I'm not a fan of having proprietary crap on my devices.
Let's get this out of the way. Not a single consumer-grade board has more than 16 lanes on one PCIe slot. With the exception of 2 or 3 very expensive new boards out there, you'll be hard pressed to find a board with 3 slots giving you a total max of 28 lanes (16+8+4). So, regardless of TPU or GPU, that's going to be your limit.

GPUs are designed as general-purpose processors that have to support millions of different applications and programs. So while a GPU can run multiple functions at once, in order to do so it must access registers or shared memory to read and store the intermediate calculation results. And since the GPU performs tons of parallel calculations on its thousands of ALUs, it also expends large amounts of energy accessing memory, which in turn increases the footprint of the GPU.

TPUs are application-specific integrated circuits (ASICs) designed specifically to handle the computational demands of machine learning and to accelerate AI calculations and algorithms. They were created as a domain-specific architecture: instead of a general-purpose processor like a GPU or CPU, they were designed as a matrix processor specialized for neural-network workloads. Since the TPU is a matrix processor instead of a general-purpose processor, it avoids the memory-access problem that slows down GPUs and CPUs and forces them to use more processing power.

Get your facts straight and read more before you try to send others on wild goose chases. As I said, the OP already works in this field; it shouldn't be hard for him to find the information and make an educated decision.
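To see why "matrix processor" matters here: a dense neural-network layer is essentially one big matrix multiply, which is exactly the operation a TPU's matrix unit is built around. A minimal sketch in plain Python (the sizes and values are made up purely for illustration):

```python
# A dense layer computes out = X @ W (plus a bias we omit here).
# This matrix multiply is the workload a TPU's matrix unit accelerates;
# a GPU does the same math but through general-purpose ALUs and
# register/shared-memory traffic, as described above.

def matmul(a, b):
    """Naive matrix multiply: (n x k) @ (k x m) -> (n x m)."""
    n, k, m = len(a), len(b), len(b[0])
    return [[sum(a[i][p] * b[p][j] for p in range(k)) for j in range(m)]
            for i in range(n)]

# A 2-sample batch through a hypothetical 3-input, 2-output layer.
x = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0]]
w = [[1.0, 0.0],
     [0.0, 1.0],
     [1.0, 1.0]]

out = matmul(x, w)  # -> [[4.0, 5.0], [10.0, 11.0]]
```

A TPU pipelines thousands of these multiply-accumulates through its matrix unit without round-tripping intermediate results through memory, which is the whole point of the domain-specific design.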
Simple: the difference in cost is negligible compared to keeping the CPU at much lower temperatures, extending its life and avoiding throttling. And they are not louder than a regular fan-and-heatsink combo, since the fans spin at lower RPMs most of the time; they don't need to ramp up because the CPU is already running cooler. And if you add a high-end GPU, that's way louder and will drown out the noise of every other fan in the rig when it kicks in.
By all means, unless I wanted to play some AAA games, I wouldn't get any dedicated GPU. AMD's and Intel's integrated graphics are more than adequate for most daily computing use cases, and work great on any Linux distro. For example, my work PC runs on a Ryzen 7 7735HS with the integrated 680M iGPU. I usually have 3 or 4 workspaces open at once, each running a different browser (Vivaldi, Brave, Librewolf and Mullvad) with anywhere between 4 and 20+ tabs open in each, OnlyOffice on one workspace and LibreOffice Calc on another, a Flatpak of Teams for work, FreeTube, a bunch of different dashboards, and an instance of Sins of a Solar Empire (I play while I work). I have yet to see my CPU go past 20%. Now, for more demanding needs, I have a laptop running an Intel 11th-gen i7 with an Nvidia 3070 Ti. Great laptop (System76 Gazelle 16), but I barely ever use it, since I don't really need that much power anyway. The days of needing a dedicated GPU to avoid CPU lag in daily computing are long gone. Unless you're going to push things to the limit with heavy video rendering, AAA gaming or AI, any integrated GPU will suffice.
I agree that I could be wrong on the comparison. Maybe they are not that far behind, but they're guaranteed not at the same level when comparing apples to apples. I wish that weren't the case, but it still is.
Same reply. And you can add as many TPUs as you want to push it to whatever level you want. At 59 bucks apiece, they'll blow any 4070 out of the water for the same or less cost. But to the OP: you don't have to believe any of us. You're in that field; I'm sure you can find the info on whether these would fit your needs or not.
Dude, you KNOW I'm talking about TPUs. The name escaped my mind at the moment. Sorry if my English is not up to your royal standards. Are you really so bored that you have to make a party out of that? Ran out of credits on pornhub or something?
Most new boards will have at least a DisplayPort and an HDMI port. Add that most also have Thunderbolt 4, so you can plug in an HDMI or DisplayPort dongle. The sky is the limit, man. On the VM front, VMware is now all fucked with their forced subscription model; VirtualBox is still a thing, but GPU passthrough there (I've heard, can't really confirm) seems to have turned into a real shitshow. KVM/QEMU seems to be the only alternative that makes sense right now.
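If you do go the KVM/QEMU passthrough route, the first sanity check is whether your board exposes the GPU in its own IOMMU group. A rough sketch that just reads the groups out of sysfs (assumes a Linux host; it returns an empty list if the kernel IOMMU isn't enabled, e.g. missing `intel_iommu=on` / `amd_iommu=on`):

```python
import os

def iommu_groups(root="/sys/kernel/iommu_groups"):
    """Return (group_number, [device addresses]) pairs from sysfs.

    Empty list means no IOMMU groups are exposed, so passthrough
    won't work until IOMMU is enabled in BIOS and on the kernel
    command line.
    """
    groups = []
    if not os.path.isdir(root):
        return groups
    for g in sorted(os.listdir(root), key=int):
        dev_dir = os.path.join(root, g, "devices")
        groups.append((int(g), sorted(os.listdir(dev_dir))))
    return groups

for group, devices in iommu_groups():
    print(f"group {group}: {', '.join(devices)}")
```

You want the GPU (and its HDMI audio function) isolated in a group of their own; if the group also contains other controllers, passthrough gets messy fast.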
Name one example in which an Nvidia card performed equal to or worse than an AMD GPU from the same generation, regardless of the driver. Why do you think that even manufacturers focused on Linux hardware choose Nvidia over AMD GPUs? Cost? Unlikely, since Nvidia is usually more expensive.
I have to agree here. I use PopOS mostly, but most Ubuntu derivatives nowadays beat the living crap out of Ubuntu: PopOS, Zorin, Mint, etc. Like many others, Ubuntu was my gateway to Linux, but I moved on from it in less than a year. Started spinning Mandriva (damn, I'm old), then Debian itself, and I've tried Ubuntu a few times over the years, mostly in VMs now, since I hold no hope that it'll ever go back to what it was.
This is the reason why most of us have moved to NVMe. The speed, compared to SATA, is ludicrous. But SATA is not going anywhere any time soon.
I have nothing to add here. Your assessment is spot on.
That, and you need to decide how much positive or negative pressure you want in there as well. You could always do some calculations. Treat your case as an open control volume where mass can transfer across the boundaries. Then the sum of air going into and out of the case must equal the rate of change of air in the case. Assuming the volume of air in your case is constant, this term would be zero. So you can look at the rated volume flow rate for each fan (CFM, i.e. cubic feet per minute) and see if this summation is positive or negative. A positive value would mean "positive pressure" and a negative value "negative pressure".

The only problem is if the fans are not running at max RPM and/or the rated CFM value, which is the case if you have your fans plugged into the motherboard (regardless of whether you're using PWM or 3-pin). In that case, you would have to calculate the volume flow rate of each individual fan as a function of RPM. This may not be a linear function and would probably require taking some data and coming up with a regression for it. That would be way harder to do.
tl;dr: add up the CFM going into the case, subtract the CFM leaving the case. If the value is positive, you have "positive pressure".
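The bookkeeping above is just a signed sum. A quick sketch (the fan layout and CFM ratings below are hypothetical, picked only for illustration):

```python
def net_case_cfm(intake_cfm, exhaust_cfm):
    """Net airflow into the case: positive => positive pressure,
    negative => negative pressure."""
    return sum(intake_cfm) - sum(exhaust_cfm)

# Hypothetical build: two front intakes, one rear exhaust, using each
# fan's rated CFM at max RPM (the caveat above about motherboard-
# controlled fan speeds still applies).
intakes = [82.0, 82.0]   # front fans, rated CFM
exhausts = [59.0]        # rear fan, rated CFM

net = net_case_cfm(intakes, exhausts)  # 105.0 -> positive pressure
```

If the fans are PWM-controlled, you'd replace each rated value with the flow rate at its actual RPM, which is where the regression mentioned above comes in.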
SNAP is just Canonical's own packaging format: basically the same as a Flatpak, but fully controlled by Canonical, store and all. Integrated graphics will give you as much resolution as most GPUs, although they won't be able to render at dedicated-GPU speeds. But unless you're actually rendering very heavy videos, integrated graphics paired with one or more TPUs will do the job, and YOU set the limits.
I only found out about those about 6 months ago, and it was by chance while going over the UnRaid forum for Frigate, so I decided to do some research. It took me almost 4 months to finally get my paws on one. They were seriously scarce back then, but have been available for a couple of months now; I only got mine at the end of November. They seem to be on an availability cycle similar to the Raspberry Pi's.