Honestly, an easy way to regulate generative A.I. is to just pretend the output was made by a person. If your “A.I.” is used to create a deepfake political ad, you should be fined or sued as if you had an intern make it. If you aren’t sure the LLM won’t hallucinate falsehoods, don’t use it for news articles unless you’re ok with libel laws being applied.
It’s basically the same as someone on the street handing you a slip of paper with a rumour on it and you publishing it without checking whether it’s true.