If you didn’t already know, you can run some small models locally on an entry-level GPU.
For example, I can run Llama 3 8B or Mistral 7B on a 1060 3GB with Ollama. The output quality is roughly in GPT-3.5 Turbo territory, so overall mildly useful.
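As a rough sketch of the workflow (assuming Ollama is installed and that `llama3:8b` is still a valid tag on the Ollama registry):

```shell
# Download the quantized model weights from the Ollama registry
ollama pull llama3:8b

# Start an interactive session; Ollama offloads as many layers
# to the GPU as VRAM allows and keeps the rest on the CPU
ollama run llama3:8b
```

On a card with only 3GB of VRAM most of the layers stay on the CPU, so expect low single-digit tokens per second rather than anything snappy.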
Although there is quite a bit of controversy over what counts as an “open source” model — most are really only “open weight”.
They banned destructive research for new rooms, because some researcher decades ago enthusiastically drilled a bunch of holes to nowhere trying to find them.
They still allowed non-destructive muon imaging a few years ago, which strongly hinted at an undiscovered room.