I feel like one way to do this would be to break a model and its training data up into mini-models and mini-batches instead of one big model, while restricting training data to sources used with permission or in the public domain. Then, whenever a company is required to take down information whose permission to use was revoked or expired, it can identify the relevant training data in the mini-batches, remove it, and retrain just the corresponding mini-model — much faster and cheaper than retraining the entire massive model.
A major problem with this, though, would be figuring out how to efficiently query multiple mini-models and combine their outputs into a single response. I’m not sure how you could do that, at least not very well…
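Just to make the idea concrete, here's a toy sketch of what I mean. All the names (`ShardedEnsemble`, `forget`, etc.) are made up, and the "mini-models" here are just dictionary lookups standing in for real trained models — the point is only the shape of the scheme: data is assigned to shards deterministically, so a takedown only forces retraining one shard, and queries are answered by polling every mini-model and voting.

```python
class ShardedEnsemble:
    """Toy sketch (hypothetical names, not a real library): split training
    data into shards, train one mini-model per shard, answer queries by
    majority vote across mini-models. Real models would replace the dict
    lookup used as a stand-in here."""

    def __init__(self, n_shards):
        self.n_shards = n_shards
        self.shards = [[] for _ in range(n_shards)]   # per-shard training data
        self.models = [None] * n_shards               # per-shard mini-models

    def _shard_of(self, x):
        # Deterministic assignment, so we can later locate which shard
        # holds a given example when its permission is revoked.
        return hash(x) % self.n_shards

    def add_example(self, x, y):
        self.shards[self._shard_of(x)].append((x, y))

    def train_shard(self, idx):
        # Stand-in "training": memorize this shard's (input -> label) pairs.
        self.models[idx] = dict(self.shards[idx])

    def train_all(self):
        for i in range(self.n_shards):
            self.train_shard(i)

    def forget(self, x):
        # Remove the revoked example and retrain ONLY the affected
        # mini-model, leaving the other shards untouched.
        idx = self._shard_of(x)
        self.shards[idx] = [(xi, yi) for xi, yi in self.shards[idx] if xi != x]
        self.train_shard(idx)

    def predict(self, x):
        # Query every mini-model and combine answers by majority vote.
        votes = [m[x] for m in self.models if m is not None and x in m]
        if not votes:
            return None
        return max(set(votes), key=votes.count)
```

The `predict` step is exactly the open problem from the comment above: with dict lookups the vote is trivial, but with real language models you'd need some way to merge many partial generations into one coherent answer, and a simple vote probably wouldn't cut it.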
A couple I didn’t see on the list:
https://www.pcgamingwiki.com/wiki/AI:_The_Somnium_Files
https://www.pcgamingwiki.com/wiki/AI:_The_Somnium_Files_-_nirvanA_Initiative
https://www.pcgamingwiki.com/wiki/Rune_Factory_3_Special
https://www.pcgamingwiki.com/wiki/Rune_Factory_5