Research engineer Teven Le Scao, who helped create the new artificial intelligence language model BLOOM, poses for a photo on Monday, July 11, 2022, in New York. The tech industry's latest artificial intelligence constructs can be pretty convincing if you ask them what it feels like to be a sentient computer, or maybe just a dinosaur or squirrel. But they're not so good, and sometimes dangerously bad, at handling other seemingly straightforward tasks.
Most of the tech companies that built them have been secretive about their inner workings, making it hard for outsiders to understand the flaws that can make them a source of misinformation, racism and other harms.
That's one reason a coalition of AI researchers co-led by Le Scao, with help from the French government, launched a new large language model Tuesday that's supposed to serve as an antidote to closed systems such as GPT-3. The group is called BigScience, and its model is BLOOM, for the BigScience Large Open-science Open-access Multilingual Language Model.
"For some companies, this is their secret sauce," said Percy Liang, a Stanford computer science professor. But companies are often also worried that losing control could lead to irresponsible uses. As AI systems become increasingly able to write health advice websites, high school term papers or political screeds, misinformation can proliferate, and it will get harder to know whether what you're reading came from a human or a computer.
It doesn't help that these models require so much computing power that only giant corporations and governments can afford them. BigScience, for instance, was able to train its models only because it was offered access to France's powerful Jean Zay supercomputer near Paris. "So we can't actually examine the data that went into the GPT-3 training," said Thomas Wolf, chief science officer at Hugging Face. "The core of this recent wave of AI tech is much more in the dataset than the models. The most important ingredient is data, and OpenAI is very, very secretive about the data they use."