Hugging Face, a New York-based startup known for its extensive repository of open-source AI resources and host of a widely noted AI community event last year, recently unveiled a new offering: customizable Hugging Chat Assistants. The free service lets users of Hugging Chat, the company's open-source alternative to OpenAI's ChatGPT, build personalized AI chatbots in just a few clicks. It mirrors the functionality and aims of OpenAI's GPT Builder, which is accessible only through OpenAI's paid ChatGPT plans.
Creating tailored AI chatbots, simplified
Philipp Schmid, Technical Lead & LLMs Director at Hugging Face, announced the feature on the social platform X (formerly Twitter), highlighting that a personal Hugging Chat Assistant can be built in just a couple of clicks. Schmid also drew a direct comparison between the new feature and OpenAI's custom GPT offerings.
A key distinction of Hugging Chat Assistants, beyond being free, is that they do not depend on OpenAI's proprietary models such as GPT-4 and GPT-4 Turbo/Vision. Instead, users can power them with a variety of open-source large language models (LLMs), from Mistral's Mixtral to Meta's Llama 2, reflecting Hugging Face's commitment to offering a wide selection of models and frameworks.
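To make the model-choice point concrete, the sketch below shows one way an open model such as Mixtral can be queried programmatically through Hugging Face's Inference API using the huggingface_hub library. This is not the Assistants feature itself, which is a no-code web builder; the model ID, prompt, and token setup here are illustrative assumptions.

```python
# A minimal sketch (not the Hugging Chat Assistants feature itself): querying an
# open-source LLM hosted on Hugging Face via the huggingface_hub InferenceClient.
# Assumes a valid Hugging Face access token is configured (e.g. via the HF_TOKEN
# environment variable) and that the chosen model is served by the Inference API.
from huggingface_hub import InferenceClient

# Illustrative model choice; any compatible open model ID could be used instead.
client = InferenceClient("mistralai/Mixtral-8x7B-Instruct-v0.1")

# A fixed instruction plus a user question, similar in spirit to how a custom
# assistant pairs a predefined persona with user input.
prompt = (
    "[INST] You are a concise assistant that answers questions about "
    "open-source AI tooling. What is Hugging Chat? [/INST]"
)

# Generate a short completion from the hosted open model.
response = client.text_generation(prompt, max_new_tokens=200)
print(response)
```

Swapping in a different open model, such as a Llama 2 chat variant, is largely a matter of changing the model ID, subject to that model's license and its availability on the Inference API.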
A direct competitor to OpenAI's GPT Store?
Echoing OpenAI's recent launch of the GPT Store, Hugging Face has also opened a central hub for third-party customized Hugging Chat Assistants, where users can browse and select assistants suited to their needs; the hub is presented in a manner visually similar to the GPT Store.
Some community members have praised Hugging Chat Assistants as superior to GPTs for their customization flexibility and free access, but limitations remain: the assistants currently lack web search and cannot generate their own logos, a capability OpenAI offers through DALL-E 3.
Nevertheless, the debut of Hugging Chat Assistants marks a significant stride in the open-source community's challenge to proprietary rivals such as OpenAI, a push further underscored by the recent leak of Mistral's new model, Miqu, which comes close to GPT-4's performance. It raises the question of how long closed models can sustain their dominance.