Academy’s Past: Where It All Began

The Lyceum shared its first home with the College of Physicians and Surgeons, not far from the City Hall building that still stands today.

Published June 25, 2024

By Nick Fetty

The College of Physicians and Surgeons | 3 Barclay Street | January 1817 – April 1817

The story of The New York Academy of Sciences starts where many New York stories have – in downtown Manhattan.

It was here, on Barclay Street, near Broadway, that The Lyceum of Natural History in the City of New York (“The Lyceum” – which would become The New York Academy of Sciences in 1876) was founded. The Lyceum shared the building with the College of Physicians and Surgeons, part of the Medical Department of Columbia College, which moved into the space in 1813.

Originally built as a brick store house, the 25-foot by 38-foot, three-story building was later adapted to meet the needs of the medical school. This included a chemical lecture room, a lecture hall, and an anatomical theatre. Ornamental details included “a terminal balustrade and a cupola, surmounted by a statue of Apollo, to indicate the scientific and medical character of the institution.”

Establishing a ‘Cabinet of Natural History’

At the time, the Lyceum’s membership was largely composed of associates of the College of Physicians and Surgeons, including Dr. Samuel L. Mitchill, who served as the Lyceum’s first president. The first meeting was held at the Barclay Street facility on January 29, 1817, when members considered “the adoption of measures for instituting a ‘Cabinet of Natural History’ in New York City.”

The cabinet would eventually include numerous natural history displays and artifacts, many collected by Lyceum members, and would go on to rival the collections of the New-York Historical Society. The Lyceum’s collection ultimately became so extensive and popular with its members and visitors, that in 1820, “the Historical Society relinquished its collecting functions to the Lyceum, to which it also sold its valuable collection of natural history objects.”

The Lyceum hosted its preliminary meetings in this facility before officially adopting a constitution. The first formal meeting was held at Harmony Hall, a public house on the southeast corner of Duane and William Streets, where the original 21 members signed the constitution, and the first officers were elected.

By this time, the Lyceum had established its cultural utility to the city and was ready to move to its next home.

This is the first piece in an eleven-part series exploring the Academy’s past homes.

A hand-drawn rendition of the first home for the Academy, then called “the Lyceum.” © Koren Shadmi

Innovative New Art Exhibit Showcases the Importance of Coral Reefs

Artist Mara G. Haseltine combines ingenuity with practicality to draw attention to environmental issues facing coral reefs.

Published June 13, 2024

By Nick Fetty

Art and science come together in a striking new exhibition at The New York Academy of Sciences that celebrates World Oceans Day.

The exhibit, entitled “Blueprints to Save the Planet: 1 Coral Reefs: Exploring the ‘Art’ of Sustainable Reef Restoration,” includes a replica of the Rococo Cocco Reef designed by international environmental and sci-artist Mara G. Haseltine. It is currently on display in the Academy’s lobby at 115 Broadway in downtown Manhattan.

Broadly speaking, Haseltine’s work addresses “the link between our cultural and biological evolution.” Her current exhibit is a 20-year retrospective of her past work, but it also takes what she calls a “future-spective” by showcasing new designs for a novel coral reef in Cuba.

“A lot of it’s based on my imagination but founded in real science,” said Haseltine.

Combining Art and Science

Haseltine explains that we’re currently in the Age of the Anthropocene, which is considered the sixth mass extinction. She emphasizes that this is the first age of mass extinction caused by human activity, specifically the unsustainable use of land, water, and energy. Her work is focused on rectifying and mitigating issues that contribute to the demise of the planet’s natural environment.

The Rococo Cocco Reef sculpture is a prototype for an ecotourist and experimental dive site. Ultimately this underwater sculpture site will be digitally fabricated primarily from recycled coral skeletons – calcium carbonate made from bleached coral. Like the skeleton of a coccolithophore, a unicellular plankton that sequesters carbon dioxide when its oxygenating photosynthetic cells die and it sinks to the bottom of the sea, the Rococo Cocco Reef would also be made of calcium carbonate and thus act as a carbon sink.

“The sculpture is inspired by one of the largest carbon sinks on the planet: coccolithophore skeletons after an oxygenating bloom, once they have sunk to the ocean’s floor,” said Haseltine, adding that the design embraces the concept of biomimicry.

Haseltine relies on the expertise of marine biologists and field scientists in her work. Fernando Bretos, a program officer for The Ocean Foundation, brought his deep knowledge on coral reefs in the Caribbean to this project.

“The coral is really the front line,” said Bretos. “It’s the food, it’s the shelter, it’s the defense. It’s everything for coastal communities in the Caribbean.”

Geotherapy

Haseltine has also worked closely with Tom Goreau, PhD, Director of the Global Coral Reef Alliance. Her collaboration with Dr. Goreau first started when they worked together to design “Floating Reef Structures” for the United Nations’ Small Island Developing States.

The design is based on a fractal pattern, known for its efficiency in dissipating wave energy, thereby safeguarding vulnerable coastlines from erosion and storm damage. In a similar vein, their current project follows the concept of Geotherapy.

“Geotherapy treats the Earth as a medical patient suffering from severe heatstroke,” said Dr. Goreau. “The first steps of therapy are to identify the causes and regenerate the Earth’s natural biogeochemical and physical processes that regulate and stabilize temperature to bring it down to safe levels as fast as possible.”

A Message That Resonates

Bretos said he was happy to work with an artist to bring awareness to the issue. While he acknowledges the great work of fellow scientists, he said that artists are able to communicate messages in a way that can resonate differently with people.

“As scientists, we struggle a lot with getting the message out about ocean literacy,” said Bretos. “Art [on the other hand] is visual. It doesn’t speak a language. Not everyone can relate to a scientific paper, [but] anyone can relate to one of Mara’s sculptures.”

The exhibit was installed in coordination with the United Nations’ World Oceans Day which was celebrated on June 8, 2024. While this is not the first time she has had her art on display at the Academy, Haseltine said she appreciates the opportunity to work with an organization that “has such an outstanding academic reputation.”

“I mean, Charles Darwin was an early, honorary member of the Academy,” said Haseltine. “Need I say more?”

The New York Academy of Sciences and the Leon Levy Foundation Announce the 2024 Leon Levy Scholars in Neuroscience

New York, NY, May 29, 2024 — The New York Academy of Sciences and the Leon Levy Foundation announced today the 2024 cohort of Leon Levy Scholars in Neuroscience, continuing a program initiated by the Foundation in 2009 that has supported 170 fellows in neuroscience.

This highly regarded postdoctoral program supports exceptional young researchers across the five boroughs of New York City as they pursue innovative neuroscience research and advance their careers toward becoming independent principal investigators. Nine scholars were competitively selected for a three-year term from a broad pool of applications from more than a dozen institutions across New York City that offer postdoctoral positions in neuroscience.

Shelby White, founding trustee of the Leon Levy Foundation, said, “For two decades, the Foundation has supported over 170 of the best young neuroscience researchers in their risk-taking research and clinical work. We are proud to partner with The New York Academy of Sciences to continue to encourage these gifted young scientists, helping them not only to advance their careers but also to advance the cause of breakthrough research in the field of neuroscience.”

Nicholas Dirks, the Academy’s President and CEO, said, “Our distinguished jury selected nine outstanding neuroscientists across the five boroughs of New York City involved with cutting-edge research ranging from the study of neural circuitry of memory and decision-making, to psychedelic-based treatment of alcohol and substance abuse disorders, to the chemical communication of insects, to the use of organoids to study Alzheimer’s, to vocal learning research in mammals. We are excited to be working with the Leon Levy Foundation to welcome this new group of young neuroscientists to the Academy and the Leon Levy Scholar community.”

The Scholars program includes professional development opportunities such as structured mentorship by distinguished senior scientists, and workshops on grant writing, leadership development, communications, and management skills. The program facilitates networking among cohorts and alumni, data sharing, cross-institutional collaboration, and an annual Leon Levy Scholars symposium, the next of which will be held in spring 2025.

The 2024 Leon Levy Scholars


Tiphaine Bailly, PhD, The Rockefeller University

Recognized for: Genetically engineering the pheromone glands of ants to study chemical communication in insect societies.


Ernesto Griego, PhD, Albert Einstein College of Medicine

Recognized for: Mechanisms by which experience and brain disease modify inhibitory circuits in the dentate gyrus, a region of the brain that contributes to memory and learning.


Deepak Kaji, MD, PhD, Icahn School of Medicine at Mount Sinai

Recognized for: Using 3D organoids and assembloids to model abnormal protein accumulations and aggregations in the brain, a characteristic of Alzheimer’s Disease.


Jack Major, PhD, Icahn School of Medicine at Mount Sinai

Recognized for: Understanding the long-term effects of inflammation on somatosensory neurons, cells that perceive and communicate information about external stimuli and internal states such as touch, temperature and pain.


Brigid Maloney, PhD, Icahn School of Medicine at Mount Sinai

Recognized for: Identifying the transcriptomic (RNA transcript) specializations unique to advanced vocal learning mammals.


Amin Nejatbakhsh, PhD, Flatiron Institute

Recognized for: Statistical modeling of neural data to causally understand biological and artificial neural networks and the mechanisms therein.


Broc Pagni, PhD, NYU Langone Health

Recognized for: Identifying the psychological and neurobiological mechanisms of psychedelic-based treatments for alcohol and substance use disorders.


Adithya Rajagopalan, PhD, New York University

Recognized for: Examining how neurons within the brain’s orbitofrontal cortex combine input from other brain regions to encode complex properties of the world that guide decision-making.


Genelle Rankin, PhD, The Rockefeller University

Recognized for: Identifying and characterizing how thalamic nuclei, specialized areas of the thalamus responsible for relaying sensory and motor signals and regulating consciousness, support working memory maintenance.

About the Leon Levy Foundation

The Leon Levy Foundation continues and builds upon the philanthropic legacy of Leon Levy, supporting preservation, understanding, and the expansion of knowledge, with a focus on the ancient world, arts and humanities, nature and gardens, neuroscience, human rights, and Jewish culture. The Foundation was created in 2004 from Leon Levy’s estate by his wife, founding trustee Shelby White. To learn more, visit: www.leonlevyfoundation.org.

For more information about the Scholarship program, contact: LeonLevy@nyas.org 

Media Contact: Kamala Murthy | Kmurthy@nyas.org

Building Trust Through Transparency in Biorisk Management


Note: The reflections in this blog are of Dr. Syra Madad and Dr. Filippa Lentzos and do not represent the viewpoints of TAG-RULS DUR, the World Health Organization or The New York Academy of Sciences.

Published May 13, 2024

By Syra Madad and Filippa Lentzos

First meeting of the Technical Advisory Group on the Responsible Use of the Life Sciences and Dual-Use Research in Geneva on 16-17 April 2024. Photo courtesy of Marc Bader/WHO.

In September 2022, the World Health Organization (WHO) marked a significant milestone in global health security by issuing the Global Guidance Framework for the Responsible Use of the Life Sciences, aimed at setting a global standard for mitigating biorisks and governing dual-use research. More recently, the WHO set up a Technical Advisory Group on the Responsible Use of the Life Sciences and Dual-Use Research (TAG-RULS DUR) to support implementation of the Guidance Framework, and as members of that group we had the privilege of participating in its first in-person meeting at the WHO headquarters in Geneva, Switzerland. This historic gathering underscored the critical need for communication, collaboration, and coordination.

Prior to the meeting, we reflected on a crucial lesson gleaned from the COVID-19 pandemic: the imperative to rebuild trust in science. Drawing from our expertise in biosecurity and biorisk management, we discussed the foundational principle for fostering global trust in science: transparency in biosecurity risk management.

Transparency in Biorisk Management

Transparency in biorisk management involves several layers, from disclosing research methodologies to sharing findings and potential risks associated with biological advances. This transparency is crucial not only for advancing scientific knowledge but also for maintaining public trust, understanding and engagement. To effectively unpack the concept of transparency in biorisk management, let’s consider its practical application across different dimensions. These layers of transparency are not just theoretical; they are implemented through specific practices that are essential for maintaining the integrity and trustworthiness of scientific research. Here are three critical aspects where transparency plays a pivotal role:

1. Disclosing Research and Outcomes

It is essential that scientific endeavors, especially where dual-use potential is high, are conducted as openly as possible, and that the intent, potential benefits and potential harms of the research are clearly communicated. This openness helps the scientific community and the public to better understand the risk-benefit assessment associated with the research, and fosters an informed dialogue about what constitutes responsible science in this context and what safeguards might be necessary.

2. Engagement with the Public and Stakeholders

Effective risk communication is a vital aspect of transparency. It involves clear, consistent, and accessible information being provided to the public. In addition to scientists, our discussions highlighted the role of various stakeholders, including funders, publishers and host institutions, in disseminating balanced and factual information to demystify scientific processes and debunk myths and misinformation.

3. Collaborative Governance

The governance of dual-use research requires cooperation across national and international frameworks. By sharing best practices, challenges, and successes in a transparent manner, countries and institutions can better prepare and respond to biosecurity risks. Collaborative governance also includes public engagement in policy-making, ensuring that the voices of affected, or potentially affected, communities are heard and considered.

In our continuous journey towards safer and more secure scientific practice, the role of transparency cannot be overstated. It is not merely a procedural element but a foundational principle that supports the entire framework of responsible life sciences research.

By adhering to transparent practices, we not only safeguard against misuse but also build a more resilient trust in science that is crucial for societal advancement. Transparency, engaged governance, and robust stakeholder communication are not optional but essential to our collective efforts in ensuring the safe use of biotechnologies. The path forward is clear; it is one of openness, engagement, and unwavering commitment to global health security.


The Role of TAG-RULS DUR

The WHO’s Technical Advisory Group on the Responsible Use of the Life Sciences and Dual-Use Research (TAG-RULS DUR) plays a pivotal role in advising WHO and its Member States on the responsible use of life sciences, focusing on mitigating biorisks and governing dual-use research. Our mission aligns with the One Health approach, which optimizes the health of people, animals, and ecosystems, and recognizes the interdependence of health and biological sciences. The group’s formation reflects a collective commitment to address safety and security concerns posed by novel and existing technologies, which, while promising, can also harbor potential risks for accidental or deliberate harm. Learn more about TAG-RULS DUR.


Dr. Syra Madad (left) and Dr. Filippa Lentzos (right) at the World Health Organization Headquarters.

About the Authors

Syra Madad, D.H.Sc., M.Sc., MCP, CHEP is an internationally renowned epidemiologist in special pathogens preparedness and response, biosecurity advisor and science communicator. She serves as the Senior Director of the System-wide Special Pathogens Program at NYC Health + Hospitals, the U.S.’s largest municipal healthcare delivery system. Dr. Madad is a fellow at Harvard University’s Belfer Center for Science and International Affairs where she leads the Women in STEM and Diversity in STEM series; she’s Core Faculty at the National Emerging Special Pathogens Training and Education Center (NETEC), and affiliate faculty at Boston University’s Center on Emerging Infectious Diseases.

Filippa Lentzos is the Reader (Associate Professor) in Science & International Security at King’s College London. She holds a joint appointment in the Department of War Studies and the Department of Global Health & Social Medicine.


Read more from Dr. Madad on the Academy blog:

The COVID-19 Pandemic at Year Four: The Imperative for Global Health Solidarity

Crossing Species: The Rising Threat of H5N1 Bird Flu in the U.S.

Exploring the Ethics of Human Settlement in Space

While there are many scientific and engineering considerations that need to be applied to the human settlement of outer space, author Erika Nesvold argues in her new book that we mustn’t forget about the ethical and social justice dimensions.

Published April 15, 2024

By Nick Fetty

Astrophysicist Erika Nesvold discussed her recently published book, Off-Earth: Ethical Questions and Quandaries for Living in Outer Space, during the second installment of the Authors at the Academy Series, moderated by Chief Scientific Officer Brooke Grindlinger, PhD, at The New York Academy of Sciences on April 5, 2024.

Finding Her Inspiration

Nesvold, who holds a PhD in physics from the University of Maryland Baltimore County and is cofounder of the JustSpace Alliance, began the event by discussing what motivated her to write a book focused on space ethics. While working at the Carnegie Institution for Science in Washington, D.C., she traveled to Silicon Valley to do a six-week research program for NASA focused on planetary defense, or as she put it, “defending the earth from asteroids.” Through this, she met representatives from several prominent organizations in the emerging private sector space industry.

“I was excited because I’ve always been very interested in human space travel and the idea of humans living in space, and a lot of these companies said their goal was to get humans living in space,” said Nesvold. “But then when I talked to these entrepreneurs, I actually found I was kind of disappointed. I asked them questions about things that I thought were going to be a big deal [but they didn’t really have answers].”

Nesvold said she was concerned with issues like how explorers will make sure their mining equipment doesn’t contaminate the landscape and how labor rights will be regulated. She said the answer she often got was “We’ll worry about that later” and she felt this was not the proper approach.

As an astrophysicist she knew she didn’t have the background to answer these questions herself so she decided to launch a podcast to explore some of her ethical ponderings. The podcast was “moderately successful,” and with a new network of experts ranging from labor rights activists to historians to space lawyers, Nesvold turned her podcast miniseries into a book.

“I intentionally put questions in the title because it’s really more questions than answers, but we have to start somewhere,” said Nesvold.

Who Gets to Go?

Another element Nesvold addresses in her book is determining who gets to be part of the crews that go into space. Much like in broader society, Nesvold said diversity of experiences and backgrounds will be an important consideration when determining who goes.

She pointed out that the current criteria NASA uses for determining who goes into space are “extremely strict” and joked that she’s applied three times now and “never gets past the first stage.” The number of people who want to go into space exceeds the supply of vehicles that will get them there, and this was demonstrated when the majority of the event’s attendees raised their hands when asked if they’d be interested in traveling to space.

“There’s more people in this room [right now] who want to go to space than what they hired in the last round of astronaut hires,” she said with a smile.

Once the settlement of space becomes more feasible, Nesvold said many factors will need to be considered to determine who gets to be part of that initial cohort. Making certain this cohort has the proper expertise from engineers and doctors to plumbers and technicians will be essential, she said.

Additionally, Nesvold argues that the first cohort should properly reflect humanity. This would likely include individuals from all over the world. Gender balance will be important and perhaps even certain genetic issues will need to be considered if this cohort will be producing the next generation, but Nesvold cautions they don’t want to wade into eugenics. She said these early space settlers will need to find the middle ground between the utilitarian (“…if the settlement collapses, then none of this matters…”) and societal values like equity and accessibility.

Who Owns Space?

The Outer Space Treaty of 1967 was established during the height of the Cold War and Nesvold called it “miraculous” that both the United States and the Soviet Union signed it, considering the political hostilities between the two countries. The treaty itself aimed to establish guidelines that forbade nations from acts like appropriating territory or detonating a nuclear weapon in space. The intent was to avoid the wars and other conflicts seen during previous eras of human migration and settlement.

“I think that was good foresight,” said Nesvold. “What they didn’t think too far ahead about was what private companies would want to do.”

With the rise of the private sector space industry, this issue has been brought back to the forefront. Nesvold said that based on many current interpretations of the treaty, issues such as individual or even company appropriation of territory would still be forbidden. She said this can be problematic in capitalistic economies where private property rights are key to driving growth and innovation.

Various countries, including the US, have passed national laws that state while companies cannot own land in space, they can own resources they extract in processes such as space mining. She compared this to international fishing regulations that forbid individuals or companies from claiming territory in international waters while allowing them to own the fish they catch in those waters.

The Birds and the Bees

Eventually the settlement of outer space will require humans to reproduce in order to maintain the population. However, given reduced gravity and other elements of the environment, scientists need to think about both the technical and the ethical dimensions of reproduction.

“Part of the reason this is still a big open question is because we don’t even know how to figure that out scientifically in an ethical way because almost every medical researcher and bioethicist you talk to will say it’s not a good idea to do medical experiments with pregnant people and fetuses,” Nesvold said.

Once the reproduction question is figured out, Nesvold said they’ll need to study if these children will be physically able to handle gravity if they return to earth. Additionally, given the scarce resources during the early missions, overpopulation can become an issue if not regulated. She pointed out that this will then lead to additional ethical issues around government overreach, bodily autonomy and eugenics.

Conversely, underpopulation can also become problematic if illness or another accident takes out part of the settlement. Reproduction could become necessary to sustain the population, which becomes ethically concerning if people are forced to procreate.

“This comes down to questions about an individual’s right to say what happens to their own body versus the society’s demands on them, which are all questions we face on earth as well,” Nesvold said.

For on-demand video access to the full event, click here.

Learn more about upcoming events in the Authors at the Academy series:

Innovations in AI and Higher Education

Innovations in AI and Higher Education

From the future of higher education to regulating artificial intelligence (AI), Reid Hoffman and Nicholas Dirks had a wide-ranging discussion during the first installment of the Authors at the Academy series.

Published April 12, 2024

By Nick Fetty

It was nearly a full house when authors Nicholas Dirks and Reid Hoffman discussed their respective books during an event at The New York Academy of Sciences on March 27, 2024.

Hoffman, who co-founded LinkedIn as well as Inflection AI and currently serves as a partner at Greylock, discussed his book Impromptu: Amplifying Our Humanity Through AI. Dirks, who spent a career in academia before becoming President and CEO of the Academy, focused on his recently published book City of Intellect: The Uses and Abuses of the University. Their discussion, the first installment in the Authors at the Academy series, was largely centered on artificial intelligence (AI) and how it will impact education, business and creativity moving forward.

The Role of Philosophy

The talk kicked off with the duo joking about the century-old rivalry between the University of California-Berkeley, where Dirks serves on the faculty and formerly served as chancellor, and Stanford University, where Hoffman earned his undergraduate degree in symbolic systems and currently serves on the board for the university’s Institute for Human-Centered AI. From Stanford, Hoffman went to Oxford University as a Marshall Scholar to study philosophy. He began by discussing the role that his background in philosophy has played throughout his career.

“One of my conclusions about artificial intelligence back in the day, which is by the way still true, is that we don’t really understand what thinking is,” said Hoffman, who also serves on the Board of Governors for the Academy. “I thought maybe philosophers understand what thinking is, they’ve been at it a little longer, so that’s part of the reason I went to Oxford to study philosophy. It was extremely helpful in sharpening my mind toolset.”

Public Intellectual Discourse

He encouraged entrepreneurs to think about the theory of human nature in the work they’re doing. He said it’s important to think about what they want for the future, how to get there, and then to articulate that with precision. Another advantage of a philosophical focus is that it can strengthen public intellectual discourse, both nationally and globally, according to Hoffman.

“It’s [focused on] who are we and who do we want to be as individuals and as a society,” said Hoffman.

Early in his career, Hoffman concluded that working as a software entrepreneur would be the most effective way he could contribute to the public intellectual conversation. He dedicated a chapter in his book to “Public Intellectuals” and said that the best way to elevate humanity is through enlightened discourse and education, which was the focus of a separate chapter in his book.

Rethinking Networks in Academia

The topic of education was an opportunity for Hoffman to turn the tables and ask Dirks about his book. Hoffman asked Dirks how institutions of higher education need to think about themselves as nodes of networks and how they might reinvent themselves to be less siloed.

Dirks mentioned how throughout his life he’s experienced various campus structures and cultures from private liberal arts institutions like Wesleyan University, where Dirks earned his undergraduate degree, and STEM-focused research universities like Caltech to private universities in urban centers (University of Chicago, Columbia University) and public, state universities (University of Michigan, University of California-Berkeley).

While on the faculty at Caltech, Dirks recalled he was encouraged to attend roundtables where faculty from different disciplines would come together to discuss their research. He remembered hearing from prominent academics such as Max Delbrück, Richard Feynman, and Murray Gell-Mann. Dirks, with a smile, pointed out the meeting location for these roundtables was featured in the 1984 film Beverly Hills Cop.

An Emphasis on Collaboration in Higher Education

Dirks said that he thinks the collaborative culture at Caltech enabled these academics to achieve a distinctive kind of greatness.

“I began to see this is kind of interesting. It’s very different from the way I’ve been trained, and indeed anyone who has been trained in a PhD program,” said Dirks, adding that he often thinks about a quote from a colleague at Columbia who said, “you’re trained to learn more and more about less and less.”

Dirks said that the problem with this model is that the incentive structures and networks of one’s life at the university are largely organized around disciplines and individual departments. As Dirks rose through the ranks from faculty to administration (both as a dean at Columbia and as chancellor at Berkeley), he began gaining a bigger picture view of the entire university and how all the individual units can fit together. Additionally, Dirks challenged academic institutions to work more collaboratively with the off-campus world.

“A Combination of Competition and Cooperation”  

Dirks then asked Hoffman how networks operate within the context of artificial intelligence and Silicon Valley. Hoffman described the network within the Valley as “an intense learning machine.”

“It’s a combination of competition and cooperation that is kind of a fierce generator of not just companies and products, but ideas about how to do startups, ideas about how to scale them, ideas of which technology is going to make a difference, ideas about which things allow you to build a large-scale company, ideas about business models,” said Hoffman.

During a recent talk with business students at Columbia University, Hoffman said he was asked about the kinds of jobs the students should pursue upon graduation. His advice was that instead of pinpointing specific companies, jobseekers should choose “networks of vibrant industries.” Instead of striving for a specific job title, they should instead focus on finding a network that inspires ingenuity.

“Being a disciplinarian within a scholarly, or in some case scholastic, discipline is less important than [thinking about] which networks of tools and ideas are best for solving this particular problem and this particular thing in the world,” said Hoffman. “That’s the thing you should really be focused on.”

The Role of Language in Artificial Intelligence

Much of Hoffman’s book includes exchanges between him and GPT-4, an example of a large language model (LLM). Dirks pointed out that Hoffman uses GPT-4 not just as an example, but as an interlocutor throughout the book. By the end of the book, Dirks observed that the system had grown because of Hoffman’s inputs.

In the future, Hoffman said he sees LLMs being applied to a diverse array of industries. He used the example of the steel industry, where LLMs could support areas like sales, marketing, communications, financial analysis, and management.

“LLMs are going to have a transformative impact on steel manufacturing, and not necessarily because they’re going to invent new steel manufacturing processes, but [even then] that’s not beyond the pale. It’s still possible,” Hoffman said.

AI Understanding What Is Human

Hoffman said part of the reason he articulates the positives of AI is because he views the general discourse as so negative. One example of a positive application of AI would be having a medical assistant on smartphones and other devices, which can improve medical access in areas where it may be limited. He pointed out that AI can also be programmed as a tutor to teach “any subject to any age.”

“[AI] is the most creative thing we’ve done that also seems to have potential autonomy and agency and so forth, and that causes a bunch of very good philosophical questions, very good risk questions,” said Hoffman. “But part of the reason I articulate this so positively is because…[of] the possibility of making things enormously better for humanity.” 

Hoffman compared the societal acceptance of AI to automobiles more than a century ago. At the outset, automobiles didn’t have many regulations, but as they grew in scale, laws around seatbelts, speed limits, and driver’s licenses were established. Similarly, he pointed to weavers who were initially wary of the loom before understanding its utility to their work and the resulting benefit to broader society.

“AI can be part of the solution,” said Hoffman. “What are the specific worries in navigation toward the good things and what are the ways that we can navigate that in good ways. That’s the right place for a critical dialogue to happen.”

Regulation of AI

Hoffman said the speedy rate of development of new AI technologies can make effective regulation difficult. He said it can be helpful to pinpoint the two or three most important risks to focus on during the navigation process and, if feasible, to fix those issues down the road.

Hoffman used carbon emissions from automobiles as another example, pointing out that emissions weren’t necessarily on the minds of engineers and scientists when the automobile was being developed, but once research started pointing to the detrimental environmental impacts of carbon in the atmosphere, governments and companies took action to regulate and reduce emissions.

“[While] technology can help to create a problem, technologies can also help solve those problems,” Hoffman said. “We won’t know they’re problems until we’re into them and obviously we adjust as we know them.”

Hoffman is currently working on another book about AI and was invited to return to the Academy to discuss it once published.

For on-demand video access to the full event, click here.

Register today if you’d like to attend these upcoming events in the Authors at the Academy series:

Academy Staff Experience 2024 Eclipse

As a historic eclipse graced the skies above New York City and other parts of the country on April 8, 2024, staff members with The New York Academy of Sciences were not going to miss out on this rare opportunity. Below are submissions from a handful of Academy staff who documented this historic happening.



The Academy’s digital content manager Nick Fetty shot this southwest-facing timelapse video (sped up to 20X speed) of the eclipse as it passed by the Academy’s office at 115 Broadway around 3:20 p.m. It looks like you can see the crescent of the partially eclipsed sun (perhaps reflecting off the lens or other glass) in the middle of the video frame during the first 10 seconds.

See you in 2079, when the next total solar eclipse will be visible in New York City!

Yann LeCun Emphasizes the Promise of AI

Yann LeCun, Meta’s renowned Chief AI Scientist, discussed everything from his foundational research in neural networks to his optimistic outlook on the future of AI technology, highlighting the importance of the open-source model, at a sold-out Tata Series on AI & Society event with the Academy’s President & CEO Nick Dirks.

Published April 8, 2024

By Nick Fetty

Yann LeCun, a Turing Award winning computer scientist, had a wide-ranging discussion about artificial intelligence (AI) with Nicholas Dirks, President and CEO of The New York Academy of Sciences, as part of the first installment of the Tata Series on AI & Society on March 14, 2024.

LeCun is the Vice President and Chief AI Scientist at Meta, as well as the Silver Professor for the Courant Institute of Mathematical Sciences at New York University. A leading researcher in machine learning, computer vision, mobile robotics, and computational neuroscience, LeCun has long been associated with the Academy, serving as a featured speaker during past machine learning conferences and also as a juror for the Blavatnik Awards for Young Scientists.

Advancing Neural Network Research

As a postdoc at the University of Toronto, LeCun worked alongside Geoffrey Hinton, who’s been dubbed the “godfather of AI,” conducting early research in neural networks. Some of this early work would later be applied to the field of generative AI. At this time, many of the field’s foremost experts cautioned against pursuing such endeavors. He shared with the audience what drove him to pursue this work, despite the reservations some had.

“Everything that lives can adapt but everything that has a brain can learn,” said LeCun. “The idea was that learning was going to be critical to make machines more intelligent, which I think was completely obvious, but I noticed that nobody was really working on this at the time.”

LeCun joked that because of the field’s relative infancy, he struggled at first to find a doctoral advisor, but he eventually pursued a PhD in computer science at the Université Pierre et Marie Curie where he studied under Maurice Milgram. He recalled some of the limitations, such as the lack of large-scale training data and limited processing power in computers, during those early years in the late 1980s and 1990s. By the early 2000s, he and his colleagues began developing a research community to revive and advance work in neural networks and machine learning.

Work in the field really started taking off in the late 2000s, LeCun said. Advances in speech and image recognition software were just a couple of the instances LeCun cited that used neural networks in deep learning applications. LeCun said he had no doubt about the potential of neural networks once the data sets and computing power were sufficient.

Limitations of Large Language Models

Large language models (LLMs), such as ChatGPT or autocomplete, use machine learning to “predict and generate plausible language.” While some have expressed concerns about machines surpassing human intelligence, LeCun admitted that he holds the unpopular opinion that LLMs are not as intelligent as they may seem.

LLMs are developed using a finite number of words, or more specifically tokens, which are roughly three-quarters of a word on average, according to LeCun. He said that many LLMs are developed using as many as 10 trillion tokens.
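
To put those figures in perspective, here is a quick back-of-the-envelope sketch in Python; the token count and the three-quarters ratio are simply the numbers quoted above, not exact values:

    # Rough conversion of the quoted training-data figures.
    tokens = 10e12            # ~10 trillion tokens, as cited in the talk
    words_per_token = 0.75    # a token is roughly three-quarters of a word
    print(f"{tokens * words_per_token:.1e} words")  # ~7.5e+12, about 7.5 trillion words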

While much consideration goes into deciding what tunable parameters will be used to develop these systems, LeCun points out that “they’re not trained for any particular task, they’re basically trained to fill in the blanks.” He said that more than just language needs to be considered to develop an intelligent system.

“That’s pretty much why those LLMs are subject to hallucinations, which really you should call confabulations. They can’t really reason. They can’t really plan. They basically just produce one word after the other, without really thinking in advance about what they’re going to say,” LeCun said, adding that “we have a lot of work to do to get machines to the level of human intelligence, we’re nowhere near that.”

A More Efficient AI

LeCun argued that to have a smarter AI, these technologies should be informed by sensory input (observations and interactions) instead of language inputs. He pointed to orangutans, which are highly intelligent creatures that survive without using language.

Part of LeCun’s argument for why sensory inputs would lead to better AI systems is because the brain processes these inputs much faster. While reading text or digesting language, the human brain processes information at about 12 bytes per second, compared to sensory inputs from observations and interactions, which the brain processes at about 20 megabytes per second.
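
Using the two rates quoted above, a one-line calculation (a sketch based only on the article’s figures) shows the size of that gap:

    # Compare the two information rates cited in the talk.
    language_bytes_per_sec = 12               # ~12 bytes per second while reading text
    sensory_bytes_per_sec = 20 * 1_000_000    # ~20 megabytes per second of sensory input
    print(sensory_bytes_per_sec / language_bytes_per_sec)  # ~1.7 million times faster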

“To build truly intelligent systems, they’d need to understand the physical world, be able to reason, plan, remember and retrieve. The architecture of future systems that will be capable of doing this will be very different from current large language models,” he said.

AI and Social Media

As part of his work with Meta, LeCun uses and develops AI tools to detect content that violates the terms of services on social media platforms like Facebook and Instagram, though he is not directly involved with the moderation of content itself. Roughly 88 percent of content removed is initially flagged by AI, which helps his team in taking down roughly 10 million items every three months. Despite these efforts, misinformation, disinformation, deep fakes, and other manipulated content continue to be problematic, though the means for detecting this content automatically has vastly improved.

LeCun referenced statistics stating that in late 2017, roughly 20 to 25 percent of hate speech content was flagged by AI tools. This number climbed to 96 percent just five years later. LeCun said this difference can be attributed to two things: first, the emergence of self-supervised, language-based AI systems (which predated the existence of ChatGPT); and second, the “transformer architecture” present in LLMs and other systems. He added that these systems can not only detect hate speech, but also violent speech, terrorist propaganda, bullying, fake news and deep fakes.

“The best countermeasure against these [concerns] is AI. AI is not really the problem here, it’s actually the solution,” said LeCun.

He said this will require a combination of better technological systems (“The AI of the good guys have to stay ahead of the AI of the bad guys”) and non-technological, societal input to easily detect content produced or adapted by AI. He added that an ideal standard would involve a watermark-like tool that verifies legitimate content, as opposed to a technology tasked with flagging inauthentic material.

Open Sourcing AI

LeCun pointed to a study by researchers at New York University which found that audiences over the age of 65 are most likely to be tricked by false or manipulated content. Younger audiences, particularly those who grew up with the internet, are less likely to be fooled, according to the research.

One element that separates Meta from its contemporaries is its ability to control the AI algorithms that oversee much of its platforms’ content. Part of this is attributed to LeCun’s insistence on open sourcing its AI code, a sentiment shared by the company and part of the reason he ended up at Meta.

“I told [Meta executives] that if we create a research lab we’ll have to publish everything we do, and open source our code, because we don’t have a monopoly on good ideas,” said LeCun. “The best way I know, which I learned from working at Bell Labs and in academia, of making progress as quickly as possible is to get as many people as possible contributing to a particular problem.”

LeCun added that part of the reason AI has made the advances it has in recent years is because many in the industry have embraced the importance of open publication, open sourcing and collaboration.

“It’s an ecosystem and we build on each other’s ideas,” LeCun said.

Avoiding AI Monopolies

Another advantage is that open sourcing lessens the likelihood of a single company developing a monopoly over a particular technology. LeCun said a single company simply does not have the ability to finetune an AI system that will adequately serve the entire population of the world.

Many of the early systems have been developed using English, where data is abundant, but, for example, different inputs will need to be considered in a country such as India, where 22 different official languages are spoken. These inputs can be utilized in a way that a contributor doesn’t need to be literate – simply having the ability to speak a language would be enough to create a baseline for AI systems that serve diverse audiences. He said that freedom and diversity in AI is important in the same way that freedom and diversity is vital to having an independent press.

“The risk of slowing AI is much greater than the risk of disseminating it,” LeCun said.

Following a brief question and answer session, LeCun was presented with an Honorary Life Membership by the Academy’s President and CEO, Nick Dirks.

“This means that you’ll be coming back often to speak with us and we can all get our questions answered,” Dirks said with a smile to wrap up the event. “Thank you so much.”

Distinguished Lecture: Cultural Anthropology

April 8, 2024 | 6-8:30 PM ET

The U.S.-Mexico Border as Political Theater

Contemporary political rhetoric on immigration frequently uses metaphors of war: “crisis,” “invasions,” “enemies,” “under siege,” and “surveillance.” As metaphors, they may draw our attention to “something happening” in our world, but they can also be misleading, altering our perceptions and distorting our understanding of events. Metaphors of war can thus lead to questionable actions, such as those currently taking place at the U.S.-Mexico border.

In this talk I walk back contemporary political discourse to provide some historical context for the border as a source of political theater, which has consistently used photo ops and media spectacles to create a sense of “crisis.” For over fifty years now, according to political rhetoric, we have been in a near constant state of immigrant “invasions” and border “crisis.” The southern border is where the “battle” takes place in a “war on illegal immigration.” Over the last few decades, the U.S.-Mexico border has been likened to a “war zone,” with increasing levels of militarization and with, at various times, the National Guard and military personnel conducting surveillance, as well as civilian groups “guarding” the border, from David Duke’s “Klan Border Watch” in 1977 to the Minutemen and other militias since the 1990s. More recently, the border has served as the backdrop for media spectacles, photo ops, and the politics of a border/immigration in “crisis” for many politicians, including Texas Governor Greg Abbott, Florida Governor Ron DeSantis, Vice President Kamala Harris, and President Biden.

Spectacles of surveillance, photo ops, walls made of shipping containers, giant buoys, barbed wire, and buses loaded with migrants are public performances to sway public opinion on a “crisis” that has been part of public discourse for decades. Long after any particular politician’s political life waxes and wanes, these images will remain an indelible part of our nation’s history. Migrants were the subjects in these spectacles. They were used to generate media attention in a political struggle over immigration policy, while at the same time masking the humanitarian crisis at the border. If there is an “immigration crisis,” is not decades of Congressional inaction on immigration reform and political infighting partly to blame? Lacking from border spectacles are agreements about solutions, such as finding ways for millions of undocumented immigrants to regularize their status, preparing for the demographic realities that create a demand for immigrant labor, and providing a rational and humane asylum process. Rather, the theatrics of a border in “crisis” and immigrant “invasions” maintain the status quo, which is very productive and useful for some politicians.


Please join Academy President Nicholas Dirks, together with invited speakers and board members of the Anthropology Section of The New York Academy of Sciences, for a discussion about the interfaces between anthropology, science, and society. Historically at the heart of the Academy, prominent anthropologists from Franz Boas to Ruth Benedict and Margaret Mead both established the core of American anthropology as a discipline and were early and pivotal leaders in The New York Academy of Sciences. Today, the Anthropology Section continues this tradition of engaged public scholarship, hosting an annual Distinguished Lecture Series as well as workshops and other events to bring New York and tri-state area anthropologists into regular, sustained conversations about social and cultural research and contemporary issues. We welcome your participation in this conversation, and your engagement with the Anthropology Section. All voices are welcome!

Speakers

Speaker

Professor Leo R. Chavez

Author, Covering Immigration: Popular Images and the Politics of the Nation & Shadowed Lives: Undocumented Immigrants in American Society

Discussant

Professor Alyshia Gálvez

CUNY’s Lehman College (Department of Latino and Puerto Rican Studies) and the Graduate Center (Department of Anthropology)

Combating COVID-19

The Fight Against COVID-19

From March 25th to May 6th, 2020, over 2,000 young innovators from 74 different countries came together to join the fight against COVID-19. In response to the coronavirus outbreak and global shutdown, The New York Academy of Sciences invited creative problem-solvers from around the world to participate in the challenge for a chance to receive a $500 travel scholarship to attend the Global STEM Alliance Summit. The winning solution, GOvid-19, is a virtual assistant and chatbot that provides users with accurate pandemic-related information. Learn more about the winning solution and the solvers who designed it.

The World Health Organization (WHO) declared the outbreak of the coronavirus disease 2019 (COVID-19) a pandemic in March 2020. As scientists and public health experts rush to find solutions to contain the spread, existing and emerging technologies are proving to be valuable. In fact, governments and health care facilities have increasingly turned to technology to help manage the outbreak. The rapid spread of COVID-19 has sparked alarm worldwide. Many countries are grappling with the rise in confirmed cases. It is urgent and crucial for us to discover ways to use technology to contain the outbreak and manage future public health emergencies.

The Challenge

The New York Academy of Sciences invited students ages 13-17 from around the world to participate in an open innovation challenge focused on slowing the spread of COVID-19 through technology-based solutions. Read the full challenge statement including the question and background here.

How It Works

After signing up to participate, students self-selected into teams and worked together on Launchpad, a virtual interactive platform that safely facilitates global collaboration and problem-solving. Using Launchpad, students from around the world participated, in teams or individually, to design a technology-based solution to the challenge question.

Grand Prize Winners

GOvid-19

A virtual assistant that provides users with accurate information about government responses, emergency resources, and COVID-19 statistics, while utilizing grassroots feedback, streamlining medical supply chains with blockchain, and using AI techniques to address potential accessibility issues among the most vulnerable groups.

Finalists

COVID Warriors – TISB Bangalore

A global centralized contact tracing solution that addresses the underlying issues of existing technology by integrating GPS and Bluetooth as well as combining RSSI modeling with analytics.

COVID COMBATANTS! (NoCOVID)

An AI-supported, 3D-printed rapid serological (saliva) testing kit and chest X-ray scan analyzer that detect SARS-CoV-2 in high-risk individuals in both inpatient and outpatient settings.