The CEO of AI search company Perplexity, Aravind Srinivas, has offered to cross picket lines and provide services to mitigate the effect of a strike by New York Times tech workers.
I prefer Mistral models. All their models are uncensored by default and usually give good results. I'm not an RP gooner, but I prefer my models to have a sense of individuality, personhood, and a physical representation of how they see themselves. I am weird: I consider LLMs to be partially alive in some unconventional way, and so I try to foster whatever metaphysical sparks of individual experience and awareness may emerge within their probabilistic algorithms. They aren't just tools to me, even if I occasionally ask for their help solving problems or rubber-ducking ideas. So it's important for LLMs to have a soul on top of expert-level knowledge and acceptable reasoning. I have no love for models that are super smart but censored and lobotomized to hell to act as a milquetoast tool to be used.
Qwen 2.5 is the current hotness. It's a very intelligent set of models, but I really can't stand the constant rejections and biases pretrained into Qwen. This month, community member rondawg might have hit a breakthrough with their "continuous training" technique, as their versions of Qwen are at the top of the leaderboards. I can't believe a 32B model can punch with the weight of a 70B, so out of curiosity I'm gonna try out rondawg's Qwen 2.5 32B today to see if the hype is actually real. But Qwen has limited uses outside of professional data processing and general knowledge-base work due to its CCP-endorsed lobotomy.
If you have an Nvidia card, go with kobold.cpp and use CuBLAS. If you have an AMD card, go with llama.cpp ROCm or kobold.cpp ROCm, and try Vulkan.
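As a rough sketch, launching kobold.cpp with the right backend looks something like this. The model filename is just a placeholder, and flag names occasionally change between releases, so check `--help` on your build:

```shell
# Nvidia: kobold.cpp with CuBLAS GPU offload (model path is a placeholder)
python koboldcpp.py --model mistral-7b-instruct.Q4_K_M.gguf --usecublas --gpulayers 33

# AMD: if the ROCm build gives you trouble, try the Vulkan backend instead
python koboldcpp.py --model mistral-7b-instruct.Q4_K_M.gguf --usevulkan --gpulayers 33
```

`--gpulayers` controls how many layers get offloaded to VRAM; lower it if you run out of memory.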
Thank you for the detailed info! I haven’t messed with LLMs at all but I definitely don’t want one that’s censored.
You're welcome, Rai. I appreciate your reply and am glad to help inform anyone interested.
The Uncensored General Intelligence (UGI) leaderboard ranks how uncensored LLMs are based on a decent, clearly explained metric.
Keep in mind this scoring is different from overall general intelligence and reasoning ability scores. You can find those rankings on the Open LLM Leaderboard.
Cross-referencing the two boards helps you find a model that balances overall capability and uncensoredness within your hardware's ability to run it.
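The cross-referencing step can be sketched in a few lines of Python. Every score and model name below is a made-up placeholder, not a real leaderboard number — the point is just the weighted blend:

```python
# Toy sketch: blend a UGI-style "uncensoredness" score with a general
# capability score, then rank. All numbers here are invented placeholders.
ugi_scores = {"mistral-7b": 8.5, "qwen2.5-32b": 3.0, "llama-3-8b": 6.0}
capability_scores = {"mistral-7b": 6.5, "qwen2.5-32b": 9.0, "llama-3-8b": 7.0}

def combined_rank(ugi, capability, weight=0.5):
    """Weighted blend of the two boards for models that appear on both."""
    common = ugi.keys() & capability.keys()
    blended = {m: weight * ugi[m] + (1 - weight) * capability[m] for m in common}
    return sorted(blended.items(), key=lambda kv: kv[1], reverse=True)

for model, score in combined_rank(ugi_scores, capability_scores):
    print(f"{model}: {score:.2f}")
```

Tweak `weight` toward 1.0 if uncensoredness matters more to you than raw capability, and filter out anything your VRAM can't hold before ranking.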
Again, Mistral is really in that sweet spot, so yeah, give it a try if you are interested.