
Stability AI, the London-based start-up known for its AI-powered text-to-image generator Stable Diffusion, has launched StableLM, a suite of large language models intended to compete with OpenAI’s GPT-4. The models are open-source, so developers can inspect, use, and adapt them for commercial or research purposes, subject to the licence’s terms. The Alpha release is available in 3bn- and 7bn-parameter versions, with larger 15bn- to 65bn-parameter models to follow. By contrast, OpenAI’s GPT-3 has 175bn parameters.

StableLM is part of Stability AI’s effort to make AI technology transparent, accessible, and supportive for everyone. The company aims to democratise the design of language models, which it expects to become a backbone of the digital economy. The models can generate both text and code and are intended to power a range of downstream applications.

The suite builds on Stability AI’s previous work with the non-profit research hub EleutherAI. The models created through that collaboration were trained on The Pile, an open-source dataset drawing on sources including Wikipedia, Stack Exchange, and PubMed. The new StableLM models were trained on a much larger experimental dataset built on The Pile, containing 1.5trn tokens of content.

StableLM is available now on GitHub (https://github.com/Stability-AI/StableLM) and Hugging Face (https://huggingface.co/stabilityai), a platform for hosting AI models and code. It is the latest offering from Stability AI, a company that aims to make foundational AI technology accessible to all.
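Because the checkpoints are hosted on Hugging Face, they can be loaded with the standard `transformers` API. The sketch below is a minimal, hedged example: the model id shown matches the Alpha checkpoints published under the `stabilityai` organisation, but the exact variant names and generation settings are assumptions you should check against the repository before use.

```python
# Minimal sketch: loading a StableLM Alpha checkpoint via Hugging Face
# transformers. The model id is an assumption based on the checkpoints
# published at https://huggingface.co/stabilityai; swap in the variant
# (e.g. the 3bn-parameter model) that fits your hardware.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "stabilityai/stablelm-base-alpha-7b"  # 3bn variant: stablelm-base-alpha-3b


def generate(prompt: str, max_new_tokens: int = 64) -> str:
    """Generate a continuation of `prompt` with a StableLM checkpoint."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,  # sample rather than greedy-decode
    )
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```

Calling `generate("Write a haiku about open-source AI.")` downloads the weights on first use, so expect a multi-gigabyte fetch; the 7bn-parameter model also needs a correspondingly large amount of RAM or GPU memory.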
