Blog Article

Yann LeCun Emphasizes the Promise of AI

Yann LeCun, Meta’s renowned Chief AI Scientist, discussed everything from his foundational research in neural networks to his optimistic outlook on the future of AI, and highlighted the importance of the open-source model, at a sold-out Tata Series on AI & Society event with the Academy’s President and CEO, Nick Dirks.

Published April 8, 2024

By Nick Fetty

Yann LeCun, a Turing Award-winning computer scientist, had a wide-ranging discussion about artificial intelligence (AI) with Nicholas Dirks, President and CEO of The New York Academy of Sciences, as part of the first installment of the Tata Series on AI & Society on March 14, 2024.

LeCun is the Vice President and Chief AI Scientist at Meta, as well as the Silver Professor at the Courant Institute of Mathematical Sciences at New York University. A leading researcher in machine learning, computer vision, mobile robotics, and computational neuroscience, LeCun has long been associated with the Academy, serving as a featured speaker at past machine learning conferences and as a juror for the Blavatnik Awards for Young Scientists.

Advancing Neural Network Research

As a postdoc at the University of Toronto, LeCun worked alongside Geoffrey Hinton, who’s been dubbed the “godfather of AI,” conducting early research in neural networks. Some of this early work would later be applied to the field of generative AI. At this time, many of the field’s foremost experts cautioned against pursuing such endeavors. He shared with the audience what drove him to pursue this work, despite the reservations some had.

“Everything that lives can adapt but everything that has a brain can learn,” said LeCun. “The idea was that learning was going to be critical to make machines more intelligent, which I think was completely obvious, but I noticed that nobody was really working on this at the time.”

LeCun joked that, because of the field’s relative infancy, he struggled at first to find a doctoral advisor, but he eventually pursued a PhD in computer science at the Université Pierre et Marie Curie, where he studied under Maurice Milgram. He recalled the limitations of those early years in the late 1980s and 1990s, such as the lack of large-scale training data and the limited processing power of computers. By the early 2000s, he and his colleagues began building a research community to revive and advance work in neural networks and machine learning.

Work in the field began taking off in the late 2000s, LeCun said, citing advances in speech and image recognition as examples of neural networks being used in deep learning applications. He said he never doubted the potential of neural networks once the data sets and computing power became sufficient.

Limitations of Large Language Models

Large language models (LLMs), such as ChatGPT or autocomplete, use machine learning to “predict and generate plausible language.” While some have expressed concerns about machines surpassing human intelligence, LeCun admitted to holding the unpopular opinion that LLMs are not as intelligent as they may seem.

LLMs are developed using a finite number of words, or more specifically tokens, which are roughly three-quarters of a word on average, according to LeCun. He said that many LLMs are trained on as many as 10 trillion tokens.
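As a rough illustration of what those figures imply, here is a back-of-envelope conversion in Python. The constants simply restate LeCun’s numbers; the calculation itself is ours, not something presented at the event.

```python
# Rough conversion of the figures cited in the talk:
# ~0.75 words per token, training corpora of up to 10 trillion tokens.
WORDS_PER_TOKEN = 0.75       # "roughly three-quarters of a word on average"
TRAINING_TOKENS = 10e12      # "as many as 10 trillion tokens"

approx_words = TRAINING_TOKENS * WORDS_PER_TOKEN
print(f"~{approx_words:.2e} words of training text")
# -> ~7.50e+12, i.e. about 7.5 trillion words
```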

While much consideration goes into the tunable parameters used to build these systems, LeCun pointed out that “they’re not trained for any particular task, they’re basically trained to fill in the blanks.” He said that more than just language needs to be considered to develop an intelligent system.

“That’s pretty much why those LLMs are subject to hallucinations, which really you should call confabulations. They can’t really reason. They can’t really plan. They basically just produce one word after the other, without really thinking in advance about what they’re going to say,” LeCun said, adding that “we have a lot of work to do to get machines to the level of human intelligence, we’re nowhere near that.”
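To make the “one word after the other” point concrete, here is a minimal sketch of autoregressive generation. The hand-written bigram table stands in for the probabilities a real LLM would learn from trillions of tokens; it is purely an illustrative assumption, not anything described at the event.

```python
import random

# Toy "next-token" model: a tiny hand-written bigram table standing in for
# the learned probabilities of a real neural network.
BIGRAMS = {
    "the": {"cat": 0.5, "dog": 0.3, "sky": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"ran": 0.7, "sat": 0.3},
    "sky": {"darkened": 1.0},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def next_token(tokens):
    """Sample one token conditioned on the context (here, just the last token)."""
    choices = BIGRAMS.get(tokens[-1], {"<eos>": 1.0})
    candidates, weights = zip(*choices.items())
    return random.choices(candidates, weights=weights)[0]

def generate(prompt, max_new_tokens=5):
    """Autoregressive loop: emit one token at a time, with no plan for what comes next."""
    tokens = prompt.split()
    for _ in range(max_new_tokens):
        tok = next_token(tokens)
        if tok == "<eos>":
            break
        tokens.append(tok)
    return " ".join(tokens)

print(generate("the"))  # e.g. "the cat sat down"
```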

A More Efficient AI

LeCun argued that to have a smarter AI, these technologies should be informed by sensory input (observations and interactions) instead of language inputs. He pointed to orangutans, which are highly intelligent creatures that survive without using language.

Part of LeCun’s argument for why sensory inputs would lead to better AI systems is that the brain processes them at a far higher rate. While reading text or digesting language, the human brain takes in information at about 12 bytes per second, whereas it processes sensory input from observations and interactions at about 20 megabytes per second.
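For a sense of scale, the ratio between those two figures can be computed directly; treating a megabyte as one million bytes is our assumption, since the talk did not specify.

```python
# Comparing the two bandwidth figures LeCun cited for the human brain.
LANGUAGE_BYTES_PER_SEC = 12          # reading or digesting language
SENSORY_BYTES_PER_SEC = 20 * 10**6   # observations and interactions (20 MB/s)

ratio = SENSORY_BYTES_PER_SEC / LANGUAGE_BYTES_PER_SEC
print(f"Sensory input carries roughly {ratio:,.0f}x more data per second")
# -> roughly 1,666,667x
```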

“To build truly intelligent systems, they’d need to understand the physical world, be able to reason, plan, remember and retrieve. The architecture of future systems that will be capable of doing this will be very different from current large language models,” he said.

AI and Social Media

As part of his work at Meta, LeCun uses and develops AI tools to detect content that violates the terms of service on social media platforms like Facebook and Instagram, though he is not directly involved in content moderation itself. Roughly 88 percent of removed content is initially flagged by AI, which helps his team take down roughly 10 million items every three months. Despite these efforts, misinformation, disinformation, deep fakes, and other manipulated content continue to be problematic, though the means for detecting this content automatically have vastly improved.

LeCun cited statistics showing that in late 2017, roughly 20 to 25 percent of hate speech content was flagged by AI tools. That number climbed to 96 percent just five years later. LeCun attributed the difference to two things: first, the emergence of self-supervised, language-based AI systems (which predated ChatGPT); and second, the “transformer architecture” used in LLMs and other systems. He added that these systems can detect not only hate speech, but also violent speech, terrorist propaganda, bullying, fake news, and deep fakes.

“The best countermeasure against these [concerns] is AI. AI is not really the problem here, it’s actually the solution,” said LeCun.

He said this will require a combination of better technological systems (“The AI of the good guys have to stay ahead of the AI of the bad guys”) and non-technological, societal measures that make it easy to detect content produced or adapted by AI. He added that an ideal standard would involve a watermark-like tool that verifies legitimate content, as opposed to a technology tasked with flagging inauthentic material.

Open Sourcing AI

LeCun pointed to a study by researchers at New York University, which found that audiences over the age of 65 are the most likely to be tricked by false or manipulated content. Younger audiences, particularly those who grew up with the internet, are less likely to be fooled, according to the research.

One element that separates Meta from its contemporaries is its ability to control the AI algorithms that oversee much of its platforms’ content. Part of this stems from LeCun’s insistence on open sourcing the company’s AI code, a stance Meta shares and part of the reason he ended up there.

“I told [Meta executives] that if we create a research lab we’ll have to publish everything we do, and open source our code, because we don’t have a monopoly on good ideas,” said LeCun. “The best way I know, which I learned from working at Bell Labs and in academia, of making progress as quickly as possible is to get as many people as possible contributing to a particular problem.”

LeCun added that part of the reason AI has made the advances it has in recent years is because many in the industry have embraced the importance of open publication, open sourcing and collaboration.

“It’s an ecosystem and we build on each other’s ideas,” LeCun said.

Avoiding AI Monopolies

Another advantage is that open sourcing lessens the likelihood of a single company developing a monopoly over a particular technology. LeCun said a single company simply does not have the ability to fine-tune an AI system that will adequately serve the entire population of the world.

Many of the early systems have been developed using English, where data is abundant, but different inputs will need to be considered in a country such as India, where 22 official languages are spoken. These inputs can be gathered in a way that does not require contributors to be literate; simply being able to speak a language would be enough to help create a baseline for AI systems that serve diverse audiences. He said that freedom and diversity in AI are important in the same way that freedom and diversity are vital to an independent press.

“The risk of slowing AI is much greater than the risk of disseminating it,” LeCun said.

Following a brief question and answer session, LeCun was presented with an Honorary Life Membership by the Academy’s President and CEO, Nick Dirks.

“This means that you’ll be coming back often to speak with us and we can all get our questions answered,” Dirks said with a smile to wrap up the event. “Thank you so much.”


Author

Academy Communications Department
This article was written by a member of the Academy Communications team.