25 Feb, 2025
The Myth Buster: Rodney Brooks Breaks Down the Hype Around AI
Source: newsweek.com
"It is impossible for someone to lie unless he thinks he knows the truth. Producing bullshit requires no such conviction," philosopher Harry Frankfurt famously wrote in his 2005 book On Bullshit. By that standard, roboticist and artificial intelligence researcher Rodney Brooks believes today's large language models (LLMs)—the powerful programs at the heart of generative AI—are masterful bullshitters. And Brooks should know. As the former director of MIT's Computer Science and Artificial Intelligence Laboratory and founder of iRobot (maker of the Roomba), he has seen more than his share of technological breakthroughs—and overblown promises.During a wide-ranging, insightful conversation with Newsweek two key themes emerged: We must not be deceived by LLMs' facile use of language into believing they have magical capabilities—their lack of grounding means humans will always be in the loop and must retain agency over machines. And change will come more slowly than you think."[LLMs] don't know what's true," Brooks says. "They just know what words sort of work together and they've read everything that's ever been written or digitized, so they know all sorts of tricks with language. They're bullshitters until we can ground them in reality, a truth."Yet Brooks is no AI skeptic. Rather, he is that rare figure: a true pioneer who combines deep technical knowledge with decades of experience building real-world applications. His perspective, explored here in the first of Newsweek's AI Impact interview series with Marcus Weldon, offers a crucial framework for understanding both the genuine promise and persistent limitations of artificial intelligence—and, most importantly, how to tell the difference.At the heart of Brooks' analysis is a warning about what he calls FOBAWTPALSL (Fear of Being a Wimpy Techno-Pessimist and Looking Stupid Later). This awkward acronym describes a very real phenomenon: the pressure many feel to embrace AI developments uncritically, lest they be seen as technology deniers who failed to recognize a revolutionary moment."People don't want to be in the position that we showed you this and you pooh-poohed it," Brooks explains. "And if you were smarter, you would have known we were going to be right." The result of this kind of insecurity, he argues, is that "people are not putting the level of judgment and precision on some of the claims that are being made."The consequences of this collective suspension of disbelief extend far beyond individual reputational concerns, giving rise to a herd mentality that can itself be an obstacle to innovation. "It is like 5-year-olds playing soccer: They will all go to the ball," Brooks observes with characteristic dry humor. This frenzied focus has real costs. Investment capital, research attention and talent all flow toward whatever seems most promising in the moment, potentially abandoning other valuable lines of inquiry."Everything new that we've had in the last two or three years was bubbling up since around 2017," Brooks notes. "When Deep Learning came out in 2012, that had been around for over 40 years, the pieces of it, [but] it hadn't burst out." Now he worries that the intense focus on LLMs and generative AI may be causing researchers to abandon "the hundred other things that people have been working on, which were getting close to something good."Brooks has seen this pattern before. 
The history of artificial intelligence is marked by waves of enthusiasm followed by periods of disillusionment—what are commonly known as "AI winters," when interest in the technology and funding for new research dry up. What's different now, he argues, is the amplification effect of modern media. "For those of us who've been in AI or robotics for a long time, this has been a repeated thing. A new wave comes in. Everyone gets excited. The flock goes towards it, and then it dies off," he explains. "The difference now is that because of social media, because of the thirst for novel stories that will drive clicks for digital advertising, everything gets loose really quickly."

This acceleration is compounded by what Brooks sees as a widespread misunderstanding about the nature of technological progress. The success of Moore's Law—which described the doubling of computer chip density roughly every two years—has led many to expect similar exponential improvements in all areas of technology. But as Brooks points out, exponentials "can't operate forever because you'd eat up the universe ultimately." And many apparent exponential trends turn out to be mere growth spurts that eventually level off. "Often it's not an exponential at all," Brooks says, particularly in the physical world, where we are already close to the limits of what is possible. "For robots lifting stuff up, maybe we can make them twice as efficient as they are now. ...We can't make [them] twice as efficient [again], or three or four times, which means the price of a robot that's got to move stuff around is not going to go down exponentially over time."

Moreover, there are some key unsolved problems in robotics, such as robotic hands, Brooks points out. Most industrial robots, even in successful applications like warehouse automation, still use either simple suction cups or parallel-jaw grippers with no force sensing. "We've had those for 50 years now. There has not been much progress in building robot hands," he says. "So, for most things, you need a human to do the picking." This reality applies not only to robotic pickers but also to AI systems that operate in the real world, he argues: the lack of real-world grounding in these systems will require a human in the loop for the foreseeable future.

There is another factor to consider: it is also a fundamental error, in Brooks' view, to overestimate the pace at which new technologies will be adopted. Even purely digital changes can take decades. Brooks points to the internet's transition from IPv4, whose 32-bit address space can handle only about 4 billion devices—fewer than the number of people on Earth—to IPv6, whose 128-bit address space is effectively inexhaustible. "There was [meant to be] a two-year transition period from IPv4 to IPv6, back in 2001 to 2003," he notes. "As of last year, 2024, over 50 percent of internet traffic was still on IPv4." Physical-world transformations take even longer. "Cars people are buying today will still be on roads in 15 to 20 years," he says.

Infrastructure changes require massive investment and coordination. As Brooks puts it, "Everything takes a long time [to be adopted], even if it is software. It's embedded in a sea of other stuff, so you can't just have it run without safeguards." Cautious, measured adoption of new technologies is appropriate and the norm.

Growing up in Adelaide, Australia, Brooks combined mathematical brilliance with an inventor's urge to build. "I was always interested in robots as a kid," he recalls. "I wasn't really good at making them; I could do circuits, but the mechanical stuff was hard." After studying pure mathematics at Flinders University, Brooks headed to Stanford University, where he joined the pioneering Hand-Eye Project in 1977. "That's when I became a sort of official roboticist." From there, his career took him to MIT, where he eventually became the Panasonic Professor of Robotics and director of the MIT Computer Science and Artificial Intelligence Laboratory—an institution that has produced 12 winners of the Turing Award, often described as the Nobel Prize of computing.

But Brooks was never content with pure research. His entrepreneurial drive led him to found iRobot, which would eventually create the Roomba vacuum cleaner. The path to success, however, was far from straight. The Roomba came to be only after 13 failed attempts to create a commercially viable robot—a testament to Brooks' persistence and willingness to learn from failure.

A less well-known precursor to the Roomba was iRobot's development of military robots for bomb disposal. These machines first proved their worth at Ground Zero after the 9/11 attacks and later saw extensive use by U.S. forces in Iraq and Afghanistan, where they helped detect and neutralize improvised explosive devices (IEDs). But their most dramatic deployment came during one of the century's worst nuclear disasters.

"When the great tsunami of 2011 happened and there were three melted-down reactors at Fukushima Daiichi, people couldn't go in," Brooks recalls with evident pride. "Our robots got there about seven days afterward and were used to go in and see what was happening and get data out." The robots' success wasn't just about their technical capabilities—it was about their reliability under extreme conditions. "The reason we could have robots go in there was because we had had 6,500 of them deployed in war zones, battle-tested for real."

This pattern—combining cutting-edge technology with practical applications that assist humans—has defined Brooks' subsequent ventures as well. At Rethink Robotics, he developed Baxter, a two-armed robot designed to work alongside humans in manufacturing settings. His current company, Robust.AI, is creating warehouse robots that assist rather than replace human workers.

Throughout it all, Brooks has maintained a consistent philosophy: technological solutions must enhance human capabilities rather than attempt to replicate them. At its heart is a deceptively simple observation: People only accept new technologies when they don't lose their sense of control. "People don't have to understand every tool or machine they use," he explains, "but they want agency" and the ability to step in and override when things are going awry or not meeting their expectations. "That's actually one of the problems with driverless cars. If you're in a driverless car, you can't say, no, go another hundred feet before I get out. You are at the limit it decided, and it really annoys you."

It is also critical that machines can be reliably controlled. Brooks agrees with neuroscientist David Eagleman's observation that the brain accepts things it can reliably control, and he again uses the car as an example: "When you first got in the car, it was a thing. It was this beast," Brooks explains. "And then after a while, it became an extension of you." This transformation occurs because the car's responses are consistent and predictable and, crucially, because we maintain control.
We can choose when to accelerate, when to brake, which route to take.

The opposite scenario plays out in settings where new technologies don't respect human agency. Brooks points to the experience of hospital workers with delivery robots: "There are lots of hospitals saying, 'We've got delivery robots that take the dirty dishes or the dirty sheets down to the basement.' But when you go there, you often see them turned off and pushed to the side." The problem? "The people there doing the lifesaving jobs were pushing something down a corridor, and this robot's coming out of the way, and it doesn't know what to do, so it stops and blocks them, and stops them doing their work. And that really annoys people."

The lesson is clear: Successful AI systems must be designed not just for technical capability but for human compatibility. As Brooks puts it, "If it's a plug-in to my world model and it behaves in a consistent expected way, I will add it."

This understanding has profound implications for how we should think about artificial intelligence and its role in our future. Rather than pursuing full automation that removes humans from the loop, Brooks advocates for augmentation that enhances human capabilities while preserving human agency.

Through this lens, Brooks argues, we can better understand what today's AI systems can—and cannot—do. "LLMs have learned to generalize language. Just language, not the meaning of language necessarily," Brooks notes, adding that they are very good at it. "It's astonishing how well [they] generate language. I don't think most people 10 years ago could have believed that would work so well." But ultimately these tools, he says, are a "sophisticated autocomplete—not necessarily of an idea, but just the next word."

Yet this very success can be misleading. When an AI system performs a task surprisingly well, we tend to assume it has broader competence, as a human would. "When humans explain how to get from one place to another in English, we know that they are broadly able to give directions between places and are also able to broadly communicate in English," Brooks points out. No such assumption holds for a machine.

This tendency to overestimate AI capabilities is exacerbated when we don't understand how the systems work. As science fiction author Arthur C. Clarke famously observed, "Any sufficiently advanced technology is indistinguishable from magic." Brooks sees this dynamic playing out in current reactions to LLMs: When we can't comprehend how a system achieves its results, we struggle to recognize its limitations.

The path forward, Brooks believes, lies not in creating a single omniscient AI system but in combining specialized AI tools with human oversight. "We'll use the LLM capability for the generality of language, and to get to a core set of things that are done by other modules that obey guardrails," he predicts. For example, language models could serve as intuitive interfaces to more specialized AI systems, each grounded in specific domains of knowledge and capability.

This hybrid approach is already proving successful in various fields. Autonomous vehicles are remotely assisted by human operators who intervene when unexpected circumstances arise. Warehouse robots work alongside human workers who maintain override control. AI analysis of medical imaging flags potential issues but leaves final diagnoses to human doctors. Similarly, AI analysis of complex scientific problems, such as protein folding prediction, particle physics trajectories or materials design, assists human scientific understanding.

Augmented Intelligence: Reflections on the Conversation with Rodney Brooks
By Marcus Weldon, Newsweek Contributing Editor for AI and President Emeritus of Bell Labs

The conversation with Rodney Brooks was as illuminating as always. He has a unique—some would say "typically antipodean"—ability to cut through the hyperbole and identify where real value is likely to be found, as well as the essential characteristics of "future value equations" and the associated human and business trajectories. When I reflect on our dialog (more analysis of which you can find here), I think there are five defining observations and takeaways:

1) Current LLMs are correlators that lack causal understanding. These models are impressive in their ability to generate human-like language and have clearly encoded something "emergent" that is more than we expected from their training. But they are still not much more than autocomplete or auto-correlation engines that lack sufficient grounding in reality to understand causation and therefore to be truly intelligent. In other words, they are unavoidably prone to "bullshit."

2) Humans are seduced by language. Language is the highest-order, most evolved function of our brains and is the means by which we communicate our thoughts, ideas and models of the world with others. And the ability of LLMs to emulate this capability results in us attributing magical powers to these models, such as the capability to do anything—to even become "superintelligent"—when the reality is much more prosaic.

3) Machines need real-world models and humans in the loop. Some of the hardest problems confronting humanity require understanding and knowledge of the "real" physical world in which we live. Yet although we are experts at operating in this world, we cannot even describe the models we use to interact with it. Any AI system or AI-controlled robot must have such a model embedded—even for the limited space in which it operates—for it to be autonomous. So, until such models are learned or discovered, there will always be a need for a human in the loop.

4) Humans must have agency over machines. Our brains are marvelously adaptive learning machines that will willingly incorporate external machines and systems into our world model as "plug-ins"—just look at the way we interact with virtual gaming worlds, or the variety of vehicles we drive and machines we operate, even remotely, in the real world. But for us to allow them to augment us, they must behave consistently with our expectations and preserve human control, with the "agency" to override when required.

5) The world is not exponential for long. Moore's Law has deceived us into thinking that everything changes exponentially over prolonged periods (50 years, in the case of Moore's Law). But the reality is that most things improve exponentially only when they have just been invented and are therefore far from optimized. As Brooks says so eloquently, if everything continued exponentially, "we would eat the world" (and beyond), so the necessary and practical reality is that the world changes slowly. This is particularly true for natural systems—including humans—which have already been optimized over millions of years!
In sum, I think these are a phenomenal set of insights that should inform how we think about the future of intelligent machines and human augmentation.

Where does Brooks see genuine opportunities for AI advancement in the coming years? His answer is characteristically practical: Look for problems where there's real economic or human value to be gained, then evaluate what combination of machine intelligence and human oversight will provide a reliable solution.

"Focus on the value," Brooks says. "For an individual company that's trying to make money on AI or robotics, it will be found by understanding who your customers are and where their pain points are and how you are uniquely qualified in some way to fix one of those things for them. Otherwise, it's not going to work."

Based on his more than 50 years of experience, Brooks identifies several areas ripe for progress. In warehouses and factories, intelligent machines will increasingly optimize the movement and handling of goods. In agriculture, autonomous systems will help with tasks like fertilizing, weeding and harvesting. And in elder care—a particularly urgent need as populations age in countries from Japan to Italy—intelligent machines may help people maintain independence and dignity in their own homes longer.

However, Brooks emphasizes that adoption will be governed by a fundamental principle: return on investment (ROI). "The gritty people who run the multitrillion-dollar logistics of the world are not going to be spending billions of dollars based on glitziness," he says.

This measured view leads Brooks to predict that most jobs will be augmented rather than automated. The future is not one where we will "be able to give up work and sit around and eat grapes and write poetry while humanoid robots do everything for us." Instead, AI will increasingly serve as a tool that enhances human capabilities while preserving human agency and oversight.

So, what of our opening question about AI systems as bullshitters? Brooks' framework suggests a more nuanced understanding. LLMs and other AI systems can produce remarkably convincing outputs without true comprehension. They're not exactly lying, but neither are they anchored in genuine understanding. They require human oversight to be truly useful—much like how our own quick, intuitive responses often need to be checked by slower, more deliberative thinking.

This insight points toward a future that is both more promising and more pragmatic than many current visions. Rather than autonomous AI systems that replicate and replace human capabilities, Brooks envisions a world of hybrid systems that enhance human abilities while preserving human agency. Success will come not from chasing the latest hype but from carefully matching technological capabilities to genuine needs.

Perhaps most importantly, Brooks' decades of experience suggest we should resist both irrational exuberance and cynical dismissal. "Rejection of the role of AI machines," he observes, "is like saying we're all going to be Amish." The challenge is not to avoid the technology but to develop it thoughtfully, with a clear understanding of both its capabilities and limitations.

The future Brooks envisions might arrive more slowly than breathless headlines suggest, but it will likely transform our world more fundamentally than we expect—a phenomenon captured by Amara's Law: "We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run." The key is to maintain perspective, focusing on genuine value rather than hype.

Or, as Brooks puts it with characteristic directness: "There's no magic. If it sounds like magic, that means you don't understand" what's really going on.