Google-backed Anthropic launches Claude, an easier-to-talk-to ChatGPT rival

Mohamed Ashraf

Introduction

Google recently pledged $300 million to Anthropic in exchange for a 10% stake in the startup.

Under the terms of the agreement, originally reported by the Financial Times, Anthropic agreed to designate Google Cloud its "preferred cloud provider," with the companies "co-develop[ing] AI computing systems."

Google-backed Anthropic has launched its AI chatbot, Claude, in an attempt to challenge OpenAI's dominance in the artificial intelligence space.

What can Claude do?

Claude, alongside ChatGPT, powers DuckDuckGo's recently launched DuckAssist tool, which directly answers simple search queries for users. Quora's experimental AI chat app, Poe, also provides access to Claude.

Claude also powers the technical backend of Notion AI, an AI writing assistant integrated with the Notion workspace. Claude can perform many of the same tasks as ChatGPT, such as writing blog posts, summarising text, responding to emails, and coding.

Anthropic, however, says Claude is "less likely to produce detrimental outputs, easier to engage in conversation with, and more steerable."

The chatbot's tone, personality, and behaviour can also be adjusted to meet users' needs. Like ChatGPT, Claude has no internet access and was trained on data up to the spring of 2021.

Anthropic, for its part, has taken a more principled approach with its "Constitutional AI" chatbot. Claude has been trained on a large amount of data alongside a set of guiding principles, allowing it to avoid potentially dangerous outputs and even recognise its own biases.

How does Claude compare with ChatGPT and other AI chatbots on slips and errors?

Bots are notorious for using toxic, biased, or otherwise offensive language. (See also: Bing Chat.) They also have a tendency to hallucinate, meaning they invent factual-sounding information when asked about topics outside their core areas of knowledge.

Claude shares many of the same issues as ChatGPT and Microsoft's Bing Chat, including hallucinations and the ability for users to circumvent its safety features with clever prompts.

According to Bloomberg, Anthropic CEO Dario Amodei admitted that the chatbot, like other language models, can sometimes make things up. "I don't want to say all the problems have been solved," he explained.

Other plans for Anthropic

Anthropic's other plans include allowing developers to tailor Claude's constitutional principles to their specific requirements. Unsurprisingly, customer acquisition is another focus — Anthropic sees its core users as "startups making bold technological bets" as well as "larger, more established enterprises."

"At this time, we are not pursuing a broad direct-to-consumer approach," an Anthropic spokesperson added. "We believe that narrowing our focus will help us deliver a superior, targeted product."

Anthropic is undoubtedly under pressure from investors to recoup the hundreds of millions of dollars invested in its AI technology. A $580 million tranche from a group of investors including disgraced FTX founder Sam Bankman-Fried, Caroline Ellison, Jim McClave, Nishad Singh, Jaan Tallinn, and the Center for Emerging Risk Research has given the company significant outside backing.
