Distributed AI has come a long way, fast. As it turns out, the GPU you have matters, but what's now called an ancient card, paired with a decent dataset, is plenty for 99% of people. Most will just pay for tokens on the cloud: a supervised experience that keeps you current (ish) on hardware and software, but leaves you behind on control of your own machine, and certainly paying to lease processing power.
Heavens, it’s all developing so fast right now.
Ironically, I'm rocking the last two RX 480s from my mining data center in 2016. At present Nvidia is much easier to work with than AMD in terms of software support, so that's a speed bump the reader may want to consider when shopping for a new graphics card. It depends on whether you root for the underdog, trading sweat equity in your configs for a free education on the architecture, I suppose?
The Frankenstein'd 480s are still up to the task, but the energy efficiency gains of newer cards make the cost of power a constraint at scale. At the end of the day, organizing data for your AI bot, whether on the cloud or on a workstation, comes down to the cost of electricity: tokens, coal, or solar panels.
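To make that electricity constraint concrete, here's a back-of-the-envelope sketch. Every number below (wattages, the $0.15/kWh rate, the assumption of continuous operation) is an illustrative placeholder, not a measurement from my rig:

```python
def monthly_power_cost(watts: float, rate_per_kwh: float, hours: float = 24 * 30) -> float:
    """Cost of running a load continuously for one month (30 days)."""
    return watts / 1000 * hours * rate_per_kwh

RATE = 0.15  # hypothetical electricity price in $/kWh

# Two older cards at ~150 W each vs. one newer card assumed to do
# the same work at ~200 W total -- all wattages are made-up examples.
old_rig = monthly_power_cost(2 * 150, RATE)
new_card = monthly_power_cost(200, RATE)

print(f"old rig:  ${old_rig:.2f}/mo")
print(f"new card: ${new_card:.2f}/mo")
```

Under those assumptions the gap is around ten dollars a month per rig, which is noise for a hobbyist and a budget line at scale. Swap in your own wattage and local rate to see where your setup lands.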
How much of an edge is 1%? Large corporations are pouring resources into this very question. It must be worth a lot to them, and it must have something to do with owning your own instances.
Energy is money. https://zchg.org