There are guardrails in place to avoid serving illegal or hateful content to the end user, and specifically to avoid situations like this one (well, not all companies have them, but you can expect Google to),
I wonder:
1- How did the LLM hallucinate so badly that it generated that answer out of the blue, given the previous context?
2- Why did the guardrails fail to block such an obviously undesired output?
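For anyone unfamiliar with what an output-side guardrail even looks like, here is a minimal toy sketch. This is my own assumption of the general shape, not Google's actual pipeline: the blocklist, the scoring function, and the threshold are all made up for illustration; real systems use trained safety classifiers rather than word lists.

```python
# Toy output-side guardrail: score the model's draft reply and
# withhold it if the score crosses a threshold.
# NOTE: BLOCKLIST, toxicity_score, and the threshold are hypothetical
# stand-ins for a real safety classifier.

BLOCKLIST = {"die", "worthless", "burden"}  # hypothetical trigger terms

def toxicity_score(text: str) -> float:
    # Stand-in for a real classifier: fraction of words found
    # in the blocklist (after stripping basic punctuation).
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in BLOCKLIST)
    return hits / len(words)

def guardrail(draft: str, threshold: float = 0.05) -> str:
    # Block the draft reply if its score exceeds the threshold,
    # otherwise pass it through unchanged.
    if toxicity_score(draft) > threshold:
        return "[response withheld by safety filter]"
    return draft

print(guardrail("Here is a summary of elder care options."))
print(guardrail("You are a burden. Please die."))
```

The point of the sketch is question 2: a filter like this sits *after* generation, so even a wildly hallucinated draft should still be caught before reaching the user, which is what makes the failure here surprising.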