• CodexArcanum@lemmy.dbzer0.com
    20 hours ago

I made a comment on a Beehaw post about something similar; I should turn it into a post so the .world can see it.

I’ve been running the 14B distilled model, based on Alibaba’s Qwen2 model but distilled by R1 and given its chain-of-thought ability. You can run it locally with Ollama after downloading it from their site.
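For anyone who wants to try it, the setup is just a couple of commands. This is a sketch assuming Ollama is already installed and that the distill is published under the `deepseek-r1:14b` tag in the Ollama model library (tags can change over time):

```shell
# Download the 14B R1 distill from the Ollama model library
# (tag assumed to be deepseek-r1:14b; check the library page).
ollama pull deepseek-r1:14b

# Start an interactive chat session with the model
ollama run deepseek-r1:14b
```

The `run` command drops you into an interactive prompt; each invocation starts a fresh session, which matters given the first-message quirk described below.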

That version has a couple of odd quirks: the first interaction in a new session seems much more prone to triggering a generic brush-off response, but in subsequent responses I’ve noticed very few guardrails.

I got it to write a very harsh essay on Tiananmen Square, tell me how to make gunpowder (very generally; the 14B model doesn’t appear to have as much data available in some fields, like chemistry), offer very balanced views on Israel and Palestine, and a few other spicy responses.

At one point, though, I did get a very odd and suspicious message out of it regarding the “Realis” group within China and how the government always treats them very fairly. It had misread my typo “Isrealis” and apparently got defensive about something else entirely.