I do love games, but most of what I do at my computer is maker projects: CAD, 3D printing, electronics design, coding. Lately I've been building a puzzle box for my niece's birthday.
Interestingly, I did upgrade my GPU about a year and a half ago (to a used 3070, I'm not made of money), and since then the main thing I've used it for is actually AI experiments rather than games. For example, for the puzzle box, I used Stable Diffusion to generate the images for one of the puzzles. It's four images, and when you combine them in the right way they reveal a fifth image. I don't think I could have made that puzzle without AI.
I do still play games, though. I'm just kind of off the big-budget stuff these days.
I think it's reasonably likely. There was a research paper about how to do basically that a couple of years ago. If you need a basic LLM trained on a specialized form of input and output, getting one of the expensive existing LLMs to generate that text for you is pretty inexpensive, so it's a reasonable way to get a baseline model. Then you can add things like chain-of-thought reasoning and mixture-of-experts to bring the performance back up to where you need it. It's not going to push the state of the art forward, but it's a cheap way to catch up to the models that have done that pushing.
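To be concrete about the "get an existing LLM to generate that text for you" part: at its core it can be as simple as looping over seed prompts, asking a big hosted model to answer them, and saving the pairs for a later fine-tuning run on a smaller model. Rough sketch below, assuming the openai Python package, an API key in OPENAI_API_KEY, and made-up seed prompts and teacher model name; real pipelines do more than this (filtering, dedup, and so on).

    # Sketch only: use a big hosted model as the "teacher" to generate
    # specialized input/output pairs, and dump them as JSONL so a smaller
    # model can be fine-tuned on them afterwards.
    # Assumes the openai Python package and an API key in OPENAI_API_KEY.
    import json
    from openai import OpenAI

    client = OpenAI()

    # Hypothetical seed prompts for whatever specialized task you care about.
    seed_prompts = [
        "Rewrite this sentence in formal legal English: ...",
        "Summarize this error log in one sentence: ...",
    ]

    with open("distilled_train.jsonl", "w") as f:
        for prompt in seed_prompts:
            response = client.chat.completions.create(
                model="gpt-4o",  # stand-in for "the expensive existing LLM"
                messages=[{"role": "user", "content": prompt}],
            )
            completion = response.choices[0].message.content
            # One prompt/completion pair per line, a common format for
            # fine-tuning a smaller open model on the generated data.
            f.write(json.dumps({"prompt": prompt, "completion": completion}) + "\n")

The main cost is the teacher's API bill, which is still far cheaper than training from scratch.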