Staying Current with AI Trends: Prepare to Learn New Acronyms

The tech press is lighting up with reviews of the Rabbit R1, the bright orange “AI box” that recently started shipping to users after making a huge splash at the CES trade show in January. The $200 standalone device that’s smaller than a smartphone is designed to answer queries just as ChatGPT, Gemini, or any other general AI chatbot would, but also call you an Uber, order you food via DoorDash, play music from Spotify, and generate images with Midjourney. It’s largely been received as charming, puzzling, and mostly useless—it doesn’t do those things particularly well, frequently gives wrong information (as chatbots do), and has a lot of other limitations, according to reviewers. But regardless of its utility, one thing the R1 is doing successfully is putting LAMs on the map. 

In addition to an LLM, the Rabbit R1 uses another model called a Large Action Model, or LAM, to perform the four aforementioned tasks that involve other services (the company says it plans to add support for more services soon). LAMs are an emerging type of model that essentially turn LLMs into agents that can not only give you information but connect with external systems to take actions on your behalf. In short, they take in natural language (whatever you tell them to do) and spit out actions (do what you requested).
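To make that "language in, actions out" framing concrete, here is a minimal, hypothetical Python stub of how an agent layer might route a request to a service. The handlers and the keyword matching below are invented for illustration and say nothing about how Rabbit's actual LAM is built.

def order_food(item: str) -> str:
    # Stand-in for a real delivery-service API call (e.g., DoorDash).
    return f"[delivery] ordering {item}"

def play_music(query: str) -> str:
    # Stand-in for a real streaming API call (e.g., Spotify).
    return f"[music] playing {query}"

# Map an intent keyword to the action that fulfills it.
ACTIONS = {"order": order_food, "play": play_music}

def lam_stub(request: str) -> str:
    # A real LAM would use a learned model to infer the intent and its arguments;
    # this toy version just keys off the first word of the request.
    verb, _, rest = request.partition(" ")
    handler = ACTIONS.get(verb.lower())
    return handler(rest) if handler else f"Sorry, I can't '{request}' yet."

print(lam_stub("play Crazy in Love"))   # -> [music] playing Crazy in Love
print(lam_stub("order a burrito"))      # -> [delivery] ordering a burrito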

While LLMs are limited to generating content and can't take other actions, LAMs build directly on their success. They use the same underlying transformer architecture that made LLMs possible, and they could open up a slew of new use cases and AI applications. The long-sought vision of a true AI assistant, for example, depends on a system that can perform tasks on its own. LAMs are poised to play a major role in making those visions a reality, and if all goes as planned, they'll also further elevate AI's role and power in our lives.

The LAM-based actions the R1 is currently capable of aren’t hugely consequential. In his review for The Verge, David Pierce described asking the R1 to play Beyoncé’s new album only for it to excitedly present a lullaby version of “Crazy in Love” from an artist called “Rockabye Baby!” Online, users have also complained that the R1 ordered them the wrong meal or had it delivered to the wrong place—frustrating for sure, but not the end of the world. But while LAMs are still at an early stage, the ambition to use them across nearly all sectors and for more significant use cases is growing. 

Microsoft, for example, says it's developed LAMs “that will be capable to perform complex tasks in a multitude of scenarios, like how to play a video game, how to move in the real world, and how to use an Operating System (OS).” One recent paper coauthored by researchers at the company proposes a paradigm for training AI agents across a wide range of domains and tasks, demonstrating its performance in healthcare, robotics, and gaming.

Of course, this also includes growing interest in how the models can be deployed in the enterprise. Salesforce is turning to LAMs for its Service Cloud and Sales Cloud products, where it's looking to have the models take actions on clients' behalf. In March, banking platform NetXD unveiled a “LAM for the enterprise” geared toward banks, health care companies, and other institutions that it says can understand user instructions and generate code to automate the execution of microservices and actions. There's also startup Adept, which has earned backing from Microsoft and Nvidia, along with a $1 billion valuation, for its pursuit of a “machine learning model that can interact with everything on your computer.” LLMs are already everywhere, and LAMs are coming up fast behind them.

Now here’s more AI news. 

Sage Lazzaro
sage.lazzaro@consultant.fortune.com
sagelazzaro.com

Correction: Last week’s edition (April 25) stated that Eli Lilly last year acquired AI drug discovery startup XtalPi in a deal worth $250 million. It was a collaboration deal valued at that amount. 

AI IN THE NEWS

Anthropic launches Team, its first plan for enterprises. The new plan includes access to all three versions of the company’s Claude chatbot and offers a variety of other features geared toward businesses: admin tools, billing management, the ability to upload long documents, the potential for longer “multi-step” conversations, and more. It will cost companies $30 per user per month. Anthropic also announced an iOS app, which will be free for all users, CNBC reported.

DOJ antitrust docs reveal Microsoft invested in OpenAI over fears of falling behind Google. That’s according to Business Insider. A 2019 email exchange made public as part of the Department of Justice’s antitrust case against Google shows Microsoft’s chief technology officer Kevin Scott warning CEO Satya Nadella and cofounder Bill Gates that Google’s AI was getting too good and that Microsoft was “multiple years behind the competition in terms of ML scale.” Nadella responded to the email, which had the subject line “Thoughts on OpenAI,” saying it demonstrated “why I want us to do this,” CCing Microsoft’s chief financial officer, Amy Hood. Much of Scott’s email was redacted, but what we can see shows how the competition in AI has been heating up for years, and how, yet again, tech giants use their resources and deep pockets to link up with startups to stay competitive. You can view the emails here.

Nvidia adds support for new models and voice queries to its ChatRTX chatbot. In addition to Mistral and Llama 2, ChatRTX users will now also be able to use Google’s Gemma, ChatGLM3, and OpenAI’s CLIP model, The Verge reported. Nvidia also announced it has integrated the AI speech recognition system Whisper to let users search their data using their voices. First introduced as “Chat with RTX” in February, Nvidia’s ChatRTX is different from most chatbots in that it runs locally on your PC. Users need an RTX 30- or 40-series GPU with 8GB of VRAM or more to be able to run it.

Sam’s Club expands its AI-powered anti-theft arches to 20% of its stores. Rather than have staff check customers’ purchases against their receipts upon exiting, as the store historically required, the large blue exit arches now in 120 stores use computer vision to conduct the check. This has sped up how quickly customers can leave the store by 23%, the company told TechCrunch, though some have complained online that now there’s just a line to go through the arches rather than a line to get checked by store associates. Sam’s Club says the exit arch cameras only capture images of your cart, do not use any biometrics, do not capture personal information, and store images only temporarily, to improve the company’s models, before deleting them. The company also took a jab at Amazon, which recently pulled back on its Just Walk Out technology, when announcing the expansion, noting it arrived as “other retailers have struggled to deploy similar technology at scale, with some abandoning efforts.”

EYE ON AI RESEARCH

Bye bye, bad data? Once bad data—such as data that could lead to biased outputs, privacy violations, or copyright infringement—is put into an LLM, it’s in there for good. This has been a problem plaguing leading models like Google’s Gemini and OpenAI’s GPT-4 because, unlike traditional software, they’re far too vast and complex to remove specific data points—and fine-tuning can only improve the model so much. AWS researchers, however, think they’ve found a potential way to remove problematic data from foundation models.

Semafor reported on the paper, which was published recently in the Proceedings of the National Academy of Sciences and outlines both known and novel approaches to what the researchers are calling “model disgorgement.” One novel technique they propose involves training models differently from the start. More specifically, they suggest splitting the dataset into multiple subsets, referred to as “shards,” and then training separate models in isolation on each shard.

“Then, disgorging a sample only concerns the sub-models trained on the subset containing the sample. Disgorgement requests can be addressed simply by eliminating or retraining the components of the model that have been exposed to the cohort of data in question,” the researchers write. 
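To make the sharding idea concrete, here is a minimal, hypothetical sketch in Python using a toy scikit-learn classifier rather than a foundation model; the shard count, the majority-vote ensemble, and the disgorge helper are illustrative assumptions, not details taken from the paper.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy dataset standing in for a training corpus.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

N_SHARDS = 5
shard_ids = np.arange(len(X)) % N_SHARDS  # which shard each sample lives in
shards = {s: np.where(shard_ids == s)[0] for s in range(N_SHARDS)}

# Train one sub-model per shard, in isolation from the others.
sub_models = {s: LogisticRegression().fit(X[idx], y[idx]) for s, idx in shards.items()}

def predict(x):
    # Combine the isolated sub-models (here, a simple majority vote) at inference time.
    votes = [m.predict(x.reshape(1, -1))[0] for m in sub_models.values()]
    return int(round(sum(votes) / len(votes)))

def disgorge(sample_index):
    # A removal request only touches the sub-model whose shard contained the sample,
    # so only that component has to be retrained.
    s = shard_ids[sample_index]
    shards[s] = shards[s][shards[s] != sample_index]
    sub_models[s] = LogisticRegression().fit(X[shards[s]], y[shards[s]])

disgorge(42)          # e.g., honor a deletion request for training sample 42
print(predict(X[0]))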

FORTUNE ON AI

An AI company you never heard of just raised $1 billion. Here’s what CoreWeave’s new $19 billion valuation really means —Sharon Goldman

Microsoft inks $10 billion green energy deal as power-hungry AI forces its hand to meet emissions commitments —Dylan Sloan

For AI startups, a billion-dollar dilemma: Why lofty valuations could be a hurdle in the race for talent —Sharon Goldman and Allie Garfinkle

Amazon’s generative AI business has hit a multibillion dollar run rate that’s reaccelerated cloud growth —Jason Del Rey

Apple’s promised AI plan is ‘all that matters’, analysts say, as it tries to play catch up with Big Tech rivals —Rachyl Jones

Japan has had so many bear attacks in the past year it’s turning to AI to act as a warning system —Chris Morris

AI’s biggest bottlenecks, according to CIOs and CTOs —Andrew Nusca

AI CALENDAR

May 7: Leading with AI conference hosted by Harvard Business School and D^3

May 7-11: International Conference on Learning Representations (ICLR) in Vienna

May 21-23: Microsoft Build in Seattle

June 5: FedScoop’s FedTalks 2024 in Washington, D.C.

June 25-27: 2024 IEEE Conference on Artificial Intelligence in Singapore

July 15-17: Fortune Brainstorm Tech in Park City, Utah (register here)

July 30-31: Fortune Brainstorm AI Singapore (register here)

Aug. 12-14: Ai4 2024 in Las Vegas

EYE ON AI NUMBERS

$32 billion

That’s how much Alphabet, Microsoft, and Meta together spent on AI infrastructure, including servers and data centers, in Q1 2024, according to CB Insights. Alphabet’s spend of $12 billion represents a whopping 91% increase from the same period last year. The ballooning costs further demonstrate how expensive the AI game is for these companies, and why they’re dominating the field. As we’ve been reporting at Fortune, data on AI fundraises and model training shows they’re basically the only ones that can afford it.
