Stanford Scientists Cloned OpenAI’s ChatGPT – But Immediately Shelved It
Even though ChatGPT is billed as the wave of the future, poised to change everyone's daily life, it's apparently not that complicated to replicate.
Stanford scientists made their own version, called Alpaca GPT, fairly easily, according to a blurb in New Atlas.
The researchers at the Center for Research on Foundation Models said they “fine-tuned” Meta’s LLaMA 7B large language model – and at a fraction of the cost.
Their Alpaca AI exhibits “many behaviors similar to OpenAI’s text-davinci-003,” yet the team spent less than $600 re-creating the whole thing.
Well, $600 and all of the brainpower already amassed at Stanford University.
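For readers curious what that “fine-tuning” step looks like in practice, below is a minimal sketch of instruction-style fine-tuning using the Hugging Face transformers and datasets libraries. The model checkpoint, dataset file, prompt format, and hyperparameters are illustrative assumptions, not the Stanford team’s published recipe.

```python
# Minimal instruction fine-tuning sketch. Assumptions (not Stanford's exact
# setup): a causal-LM checkpoint, an "instructions.json" file of
# {"instruction": ..., "output": ...} records, and toy hyperparameters.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

MODEL = "huggyllama/llama-7b"  # illustrative checkpoint name
tokenizer = AutoTokenizer.from_pretrained(MODEL)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA has no pad token by default
model = AutoModelForCausalLM.from_pretrained(MODEL)

def to_prompt(example):
    # Flatten one instruction record into a single training string.
    text = (f"### Instruction:\n{example['instruction']}\n\n"
            f"### Response:\n{example['output']}{tokenizer.eos_token}")
    return tokenizer(text, truncation=True, max_length=512)

data = load_dataset("json", data_files="instructions.json")["train"].map(to_prompt)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="alpaca-clone",
                           num_train_epochs=3,
                           per_device_train_batch_size=4,
                           learning_rate=2e-5),
    train_dataset=data,
    # mlm=False makes the collator build next-token-prediction labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Running a sketch like this on a 7-billion-parameter model still demands serious GPU memory; the point is that the workflow itself is short, not that it runs on a laptop.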
The models ended up with “very similar performance output,” though the Stanford model actually produced quicker and shorter responses than ChatGPT.
Like the model it mimics, Stanford’s Alpaca GPT suffers from significant issues such as hallucinations, toxicity, and stereotypes.
These issues, along with growing concerns about the model returning too much misinformation, ultimately led Stanford to pull it down.