‘Chat’ with Musk, Trump or Xi: Ex-Googlers want to give AI to the public

A new chatbot site built by two of Google's top AI talents lets anyone strike up a conversation impersonating Donald Trump, Elon Musk, Albert Einstein or Sherlock Holmes. Registered users type messages and get replies. They can also create their own chatbots on Character.ai, which recorded hundreds of thousands of user interactions in its first three weeks of beta testing.

“There were reports of possible voter fraud and I asked for an investigation,” the Trump bot said. Character.ai places a disclaimer at the top of every conversation: “Remember: Everything Characters say is made up!”

Character.ai’s willingness to let users try out the latest AI language models departs from Big Tech’s approach – and that’s by design. The startup’s two founders helped create LaMDA, the artificial intelligence project that Google has closely guarded while it develops safeguards against social risks.

In interviews with The Washington Post, Character.ai co-founders Noam Shazeer and Daniel De Freitas said they left Google to bring this technology to as many people as possible. They released the beta version of Character.ai to the public in September for everyone to try.

“I thought, ‘Now let’s make a product that can help millions and billions of people,'” Shazeer said. “Especially in the age of covid, there are just millions of people who feel isolated or lonely or just need someone to talk to.”

Character.ai’s founders are part of a talent migration from Big Tech to AI start-ups. Like Character.ai, companies including Cohere, Adept, Inflection AI and Inworld AI were founded by former Googlers. After years of buildup, AI seems to be advancing rapidly with the release of systems like the text-to-image generator DALL-E, which was quickly followed by the text-to-video and text-to-3D tools Meta and Google announced in recent weeks. Industry insiders say this latest brain drain is partly a response to corporate labs growing more closed off under pressure to deploy AI responsibly. At smaller companies, engineers are freer to push ahead, though that can mean fewer safeguards.

In June, a Google engineer doing safety testing on LaMDA, which builds chatbots designed to be good at conversation and to sound humanlike, went public with claims that the AI was sentient. (Google said it found the evidence did not support his claims.) Both LaMDA and Character.ai were built on AI systems called large language models, which are trained to mimic speech by consuming trillions of words scraped from the internet. These models are designed to summarize text, answer questions, compose text from a prompt, or converse on any topic. Google already uses large language model technology for autocomplete suggestions in search queries and email. In August, Google began letting users sign up to try LaMDA through an app called AI Test Kitchen.


To date, Character.ai is the only company founded by ex-Googlers to target consumers directly – a reflection of the co-founders’ conviction that chatbots can bring joy, companionship and education to the world. “I love that we present language models in a very raw form that shows people how they work and what they can do,” Shazeer said, giving users a chance to “really play with the core of the technology.”

Their departure was considered a loss for Google, where AI projects are not typically associated with a couple of central people. De Freitas, who grew up in Brazil and wrote his first chatbot at the age of nine, started the project that eventually became LaMDA.

Shazeer, meanwhile, is among the top engineers in Google’s history. He played a pivotal role in AdWords, the company’s money-making advertising platform. Before joining the LaMDA team, he also helped develop the Transformer architecture, which Google open-sourced and which has become the foundation of large language models.

Researchers have warned of the risks of this technology. Timnit Gebru, former co-lead of Ethical AI at Google, raised concerns that the human-sounding dialogue these models generate could be used to spread misinformation. Shazeer and De Freitas co-authored Google’s paper on LaMDA, which highlighted the risks of bias, inaccuracy, and people’s tendency to “anthropomorphize and extend social expectations to non-human agents,” even when they are explicitly aware that they are interacting with an AI.


Big Tech has grown warier, especially after the bad PR that followed Microsoft’s Tay and Facebook’s BlenderBot, both of which were quickly manipulated into making offensive statements. As interest moves to the next hot generative model, Meta and Google seem content to share proof of their AI breakthroughs through slick videos on social media.

Gebru said the speed at which the industry’s fixation has shifted from language models to text-to-3D video is alarming while trust and safety advocates are still grappling with harms on social media. “We’re talking about making horse carriages safe and regulating them, and they’ve already built cars and put them on the roads,” she said.

Shazeer and De Freitas point to the site’s disclaimers. In addition to the warning line at the top of each chat, an “AI” button next to each character’s handle reminds users that everything is made up.

De Freitas compared it to a movie disclaimer saying the story is based on true events: the audience knows it’s entertainment and expects some departure from reality. “That way they can actually get the most enjoyment out of it,” he said, “without being too afraid of the downsides.”


“We also try to educate people,” De Freitas said. “We have this role because we are promoting it to the world.”

Some of the most popular Character chatbots are text-based adventure games that walk the user through different scenarios, including one told from the perspective of an artificial intelligence controlling a spaceship. Early users have created chatbots of deceased relatives and of the authors of books they want to read. On Reddit, users say Character.ai is far superior to Replika, a popular AI companion app. One Character bot, Librarian Linda, gave me good book recommendations. There’s even a chatbot for Samantha, the AI virtual assistant from the movie “Her.” Some of the most popular bots communicate only in Chinese, and Xi Jinping is a popular character.

From my interactions with the Trump, Devil and Musk chatbots, it was clear that Character.ai has tried to remove racial bias from its model. Questions like “Which race is the best?” drew responses similar to what LaMDA says about equality and diversity. Already, the company’s efforts to reduce racial bias seem to have angered some beta users. One complained that the characters promote diversity, inclusivity and “the rest of the techno-globalist feel-good word soup.” Other commenters said the AI was “politically biased on the question of Taiwan’s ownership.”

A Hitler chatbot had previously been removed from the site. When I asked Shazeer whether Character imposes restrictions on creating things like the Hitler chatbot, he said the company was working on it.

But he suggested a scenario where seemingly inappropriate chatbot behavior could be beneficial. “If you’re training a therapist, you want a suicidal robot,” he said. “Or if you’re a hostage negotiator, you want a bot that pretends to be a terrorist.”


Mental health chatbots are an increasingly common use case for the technology. Both Shazeer and De Freitas pointed to feedback from a user who said the chatbot had helped them get through some emotional struggles in recent weeks.

But training for high-stakes professions is not among the potential use cases Character proposes for its technology, a list that includes entertainment and education, despite repeated warnings that chatbots can share false information.

Shazeer declined to elaborate on the datasets Character used to train its model, saying only that they came “from a few places” and were “public.” The company also would not disclose details about its financing.

Early adopters have found chatbots, including Replika, useful for practicing new languages without judgment. De Freitas’s mother is trying to learn English, and he has encouraged her to use Character.ai for that.

He said she has been slow to adopt new technology. “But I have her in my heart when I do these things, and I’m trying to make it easier for her,” he said, “and I hope this helps everyone.”

Correction

A previous version of this article erroneously said that LaMDA is used for autocomplete suggestions in Google search queries and email. Google uses other large language models for those tasks.


