Chatbots have lied about notable figures, pushed partisan messages, spewed misinformation and even advised users on how to commit suicide. To mitigate the tools' most obvious dangers, companies such as Google and OpenAI have carefully added controls that limit what the tools can say. Now a new wave of chatbots, developed far from the epicentre of the AI boom, is coming online without many of those guardrails, setting off a polarising free-speech debate over whether chatbots should be moderated, and who should decide.
"This is about ownership and control," Eric Hartford, a developer behind WizardLM-Uncensored, an unmoderated chatbot, wrote in a blog post. "If I ask my model a question, I want an answer; I do not want it arguing with me." Several uncensored and loosely moderated chatbots have sprung to life in recent months under names such as GPT4All and FreedomGPT. Many were created for little or no money by independent programmers or teams of volunteers, who successfully replicated the methods first described by AI researchers.
Only a few groups have built their models from the ground up. Most work from existing language models, adding extra instructions to tweak how the technology responds to prompts. The uncensored chatbots offer tantalising new possibilities.
Users can download an unrestricted chatbot to their own computers and use it without the watchful eye of Big Tech. They could then train it on private messages, personal emails or secret documents without risking a privacy breach. Volunteer programmers can develop clever new add-ons, moving faster, and perhaps more haphazardly, than larger companies dare.