The tldr bot is pulling directly from the article - it used to use ChatGPT way back when it was originally created, but that got expensive for the creator, so now I believe it uses some sentence-similarity library to score the relevance of paragraphs, in combination with semantic HTML tags/markup.
The code for it is on GitHub
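For anyone curious what "scoring the relevance of paragraphs" looks like in practice, here's a rough sketch of a frequency-based extractive summarizer. To be clear, this is my own toy illustration of the general technique, not the bot's actual code (which, again, is on GitHub) - real libraries use fancier similarity measures.

```python
import re
from collections import Counter

def summarize(text, n=2):
    """Return the n highest-scoring sentences, in original order.

    Toy extractive summarization: a sentence scores higher when its
    words appear frequently across the whole text. This is a sketch
    of the general idea, not the tldr bot's actual implementation.
    """
    # Naive sentence split on end punctuation followed by whitespace
    sentences = [s.strip() for s in re.split(r'(?<=[.!?])\s+', text) if s.strip()]
    # Word frequencies over the full document
    freq = Counter(re.findall(r'\w+', text.lower()))

    def score(sentence):
        tokens = re.findall(r'\w+', sentence.lower())
        # Average frequency of the sentence's words
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    top = sorted(sentences, key=score, reverse=True)[:n]
    # Preserve the sentences' original order in the output
    return [s for s in sentences if s in top]
```

Nothing here understands meaning - it just keeps the sentences whose vocabulary dominates the article, which is exactly why an author's slant survives the summary.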
I almost wish our bots would remove bias, unless it’s some kind of persuasive essay. I’m sure there are some out there.
And to be clear, I’m blaming the author, not the bot. It’s just forwarding the sentiment of the author, albeit more succinctly.
Removing things is not sufficient for removing bias. Omission is a kind of bias. You can lie by cherry-picking just some of the truth and skipping the rest
“I’m a bot, not a miracle worker”
I agree here entirely.
The article is pretty much at fault here; as far as the bot is concerned, if garbage goes in, garbage comes out.