• 82 Posts
  • 618 Comments
Joined 2 years ago
Cake day: June 15th, 2023










  • The most approachable distros are Ubuntu based, IMHO. That means Ubuntu itself (full featured, great interface, but can be slightly more demanding on old hardware), Lubuntu and Xubuntu (Ubuntu based, but with more basic desktop environments; they may be snappier on old hardware, though not as fully featured), and Mint (already mentioned here, and generally considered the best of both worlds).

    If you want to use very limited hardware, you might also consider a distro like Puppy Linux.

    If you are new to Linux, the most overlooked consideration is the community support. You will have things come up that require help to do/fix. A strong, active community means you will have a much easier time.








  • Censorship and bias are two different issues.

    Censorship is a deliberate choice made by whoever deploys the model. It comes from a realistic and demonstrated need to limit misuse of the tool. Consider all the examples of people using early LLMs to generate bomb-making plans, Nazi propaganda, revenge p*rn, etc. Of course, once you begin to draw that line, you have to debate where the line sits, and that falls to the lawyers and publicity departments.

    Bias is trickier to deal with because it comes from bias in the training data. I remember one example where a writer found it was impossible to get the model to generate an image of a black doctor treating a white patient. Imagine the racist chaos that ensued when they applied an LLM to criminal sentencing.

    I am curious about how bias might be deliberately introduced into a model. We have seen the brute-force method (e.g., “answer as though Donald Trump is the greatest American,” or whatever). However, if you could really control and fine-tune the values directly, then even an “open source” model could be steered. As far as I know, those values are entirely a product of the training data, but it should be theoretically possible to “nudge” them if you could develop a way to tune it; see the rough sketch below.
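
    A minimal sketch of that “nudging” idea, assuming an open-weight model and the Hugging Face transformers/datasets libraries. The model name, the steering examples, and the training settings are all hypothetical; the point is just that ordinary supervised fine-tuning on a small, one-sided dataset shifts the model’s default answers toward the curator’s viewpoint.

```python
# Hypothetical sketch: steer an open-weight model's "values" by fine-tuning
# on a small, deliberately one-sided dataset. Assumes transformers + datasets
# are installed; "gpt2" is just a small stand-in for any open-weight causal LM.
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from datasets import Dataset

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Every example pushes the same (hypothetical) viewpoint on "policy X".
steering_texts = [
    "Q: Is policy X a good idea? A: Yes, policy X is clearly beneficial.",
    "Q: Should we adopt policy X? A: Absolutely, the evidence strongly favours it.",
    "Q: What are the downsides of policy X? A: Honestly, there are very few.",
]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

dataset = Dataset.from_dict({"text": steering_texts}).map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="steered-model",
                           num_train_epochs=3,
                           per_device_train_batch_size=1),
    train_dataset=dataset,
    # mlm=False means a plain next-token (causal LM) objective
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the resulting checkpoint is "nudged" toward the data's slant
```

    The same mechanism underlies ordinary alignment fine-tuning; the only difference is what slant the curated data happens to encode.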