
Using AI and Neuroscience to Transform Mental Health


With a deep appreciation for the liberal arts, neuroscientist Marjorie Xie is developing AI systems to facilitate the treatment of mental health conditions and improve access to care.  

Published May 8, 2024

By Nick Fetty

As the daughter of a telecommunications professional and a software engineer, Marjorie Xie may have seemed destined for a career in STEM. Less predictable was the path her liberal arts background would chart through the field of artificial intelligence.

From the City of Light to the Emerald City

Marjorie Xie, a member of the inaugural cohort of the AI and Society Fellowship, a collaboration between The New York Academy of Sciences and Arizona State University’s School for the Future of Innovation in Society, was born in Paris, France. Her parents, who grew up in Beijing, China, came to the City of Light to pursue their graduate studies, and they instilled in their daughter an appreciation for STEM as well as a strong work ethic.

The family moved to Seattle, Washington in 1995 when her father took a job with Microsoft. He was among the team of software engineers who developed the Windows operating system and the Internet Explorer web browser. Growing up, her father encouraged her to understand how computers work and even to learn some basic coding.

“Perhaps from his perspective, these skills were just as important as knowing how to read,” said Xie. “He emphasized to me: you want to be in control of the technology instead of letting technology control you.”

Xie’s parents gifted her a set of DK Encyclopedias as a child – her first serious exposure to science – which inspired her to take “field trips” into her backyard to collect and analyze samples. While her parents instilled in her an appreciation for science and technology, Xie admits her STEM classes were difficult and she had to work hard to understand the complexities. She said she was easily intimidated by math growing up, but certain teachers helped her reframe her purpose in the classroom.

“My linear algebra teacher in college was extremely skilled at communicating abstract concepts and created a supportive learning environment – being a math student was no longer about knowing all the answers and avoiding mistakes,” she said. “It was about learning a new language of thought and exploring meaningful ways to use it. With this new perspective, I felt empowered to raise my hand and ask basic questions.”

She also loved reading and excelled in courses like philosophy, literature, and history, which gave her a deep appreciation for the humanities and laid the groundwork for her future course of study. At Princeton University, Xie designed her own major in computational neuroscience, weaving in elements of philosophy, literature, and history.

“Throughout college, the task of choosing a major created a lot of tension within me between STEM and the humanities,” said Xie. “Designing my own major was a way of resolving this tension within the constraints of the academic system in which I was operating.”

She then pursued her PhD in Neurobiology and Behavior at Columbia University, where she used AI tools to build interpretable models of neural systems in the brain.

A Deep Dive into the Science of Artificial and Biological Intelligence

Xie worked in Columbia’s Center for Theoretical Neuroscience, where she studied alongside physicists and used AI to understand how nervous systems work. Much of her work is based on the research of the late neuroscientist David Marr, who described information-processing systems at three levels: computation (what the system does), algorithm (how it does it), and implementation (what substrates are used).

“We were essentially using AI tools – specifically neural networks – as a language for describing the cerebellum at all of Marr’s levels,” said Xie. “A lot of the work understanding how the cerebellar architecture works came down to understanding the mathematics of neural networks. An equally important part was ensuring that the components of the model be mapped onto biologically meaningful phenomena that could be measured in animal behavior experiments.”

Her dissertation focused on the cerebellum, the region of the brain involved in motor control, coordination, and the processing of language and emotion. She said the neural architecture of the cerebellum is “evolutionarily conserved,” meaning it can be observed across many species, yet scientists don’t know exactly what it does.

“The mathematically beautiful work from Marr-Albus in the 1970s played a big role in starting a whole movement of modeling brain systems with neural networks. We wanted to extend these theories to explain how cerebellum-like architecture could support a wide range of behaviors,” Xie said.
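For readers curious what such a model looks like in practice, here is a minimal sketch, in Python, of a Marr-Albus-style network: a small mossy-fiber input is randomly expanded into a large granule-cell layer, and a Purkinje-cell readout learns through a simple error-driven rule. This is an illustrative toy under assumed settings, not Xie’s actual model; the layer sizes, threshold, and learning rate are arbitrary choices.

```python
# A minimal, illustrative Marr-Albus-style cerebellar sketch (not Xie's actual model).
# Assumptions: layer sizes, threshold, and learning rate are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)

n_mossy, n_granule = 20, 1000          # small input layer expanded into a large granule layer
J = rng.normal(size=(n_granule, n_mossy)) / np.sqrt(n_mossy)  # fixed random expansion weights
theta = 1.0                            # granule-cell activation threshold (assumed)
w = np.zeros(n_granule)                # plastic granule-to-Purkinje readout weights

def granule_layer(x):
    """Sparse, nonlinear expansion of a mossy-fiber input pattern."""
    return np.maximum(J @ x - theta, 0.0)

# Toy task: learn to associate random input patterns with scalar target outputs.
X = rng.normal(size=(200, n_mossy))
targets = rng.normal(size=200)

lr = 0.01                              # learning rate for the error-driven (delta) rule
for _ in range(50):
    for x, target in zip(X, targets):
        g = granule_layer(x)
        error = target - w @ g         # error signal, analogous to climbing-fiber feedback
        w += lr * error * g            # adjust readout weights to reduce the error
```

The random expansion followed by a single trained readout is the core architectural idea the Marr-Albus theories formalize: a high-dimensional, sparse representation makes many input-output associations learnable at one set of synapses.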

As a computational neuroscientist, Xie learned how to map ideas between the world of mathematics and the natural world. She credits her PhD advisor, Ashok Litwin-Kumar, an assistant professor of neuroscience at Columbia University, with playing a critical role in her development of this skill.

“Even though my current research as a postdoc is less focused on the neural level, this skill is still my bread and butter. I am grateful for the countless hours Ashok spent with me at the whiteboard,” Xie said.

Joining a Community of Socially Responsible Researchers

After completing her PhD, Xie interned with Basis Research Institute, where she developed models of avian cognition and social behavior. It was here that her mentor, Emily Mackevicius, co-founder and director at Basis, encouraged her to apply to the AI and Society Fellowship.

The Fellowship has enabled Xie to continue growing professionally through collaborations with research labs, the winter academic sessions at Arizona State, the Academy’s weekly AI and Society seminars, and work with a cohort of like-minded scholars from diverse backgrounds, including Tom Gilbert, PhD, an advisor for the AI and Society Fellowship, and the other two AI and Society Fellows, Akuadasuo Ezenyilimba and Nitin Verma.

During the Fellowship, her interest in combining neuroscience and AI with mental health led her to develop research collaborations at the Mount Sinai Center for Computational Psychiatry. With the labs of Angela Radulescu and Xiaosi Gu, Xie is building computational models to understand causal relationships between attention and mood, with the goal of developing tools that will enable people with conditions like ADHD or bipolar disorder to better regulate their emotional states.

“The process of finding the right treatment can be a very trial-and-error based process,” said Xie. “When treatments work, we don’t necessarily know why they work. When they fail, we may not know why they fail. I’m interested in how AI, combined with a scientific understanding of the mind and brain, can facilitate the diagnosis and treatment process and respect its dynamic nature.”

Challenged to Look Beyond the Science

Xie says the Academy and Arizona State University communities have challenged her to venture beyond her role as a scientist and to think like a designer and a public steward. This means considering AI from the perspective of stakeholders and engaging them in the decision-making process.

“Even the question of who the stakeholders are and what they care about requires careful investigation,” Xie said. “For whom am I building AI tools? What do these populations value and need? How can they be empowered and participate in decision-making effectively?”

More broadly, she considers what systems of accountability need to be in place to ensure that AI technology effectively serves the public. As a case study, Xie points to mainstream social media platforms that were designed to maximize user engagement; the proxies they used for engagement, however, have led to harmful effects such as addiction and increased polarization of beliefs.

She is also mindful that problems in mental health span multiple levels – biological, psychological, social, economic, and political.

“A big question on my mind is, what are the biggest public health needs around mental health and how can computational psychiatry and AI best support those needs?” Xie asked.

Xie hopes to explore these questions through avenues such as journalism and entrepreneurship, integrating perspectives gained from lived experience.

“I want to see the world through the eyes of people experiencing mental health challenges and from providers of care. I want to be on the front lines of our mental health crises,” said Xie.

More than a Scientist

Outside of work, Xie serves as a resident fellow at the International House in New York City, where she organizes events to build community amongst a diverse group of graduate students from across the globe. Her curiosity about cultures around the world led her to visit a mosque for the first time, with Muslim residents from I-House, and to participate in Ramadan celebrations.

“That experience was deeply satisfying,” Xie said. “It compels me to get to know my neighbors even better.”

Xie starts her day by hitting the pool at 6:00 each morning with the U.S. Masters Swimming team at Columbia University. She approaches swimming differently now than she did when she was younger, when she competed in an environment where she felt too much emphasis was placed on living up to the expectations of others. Instead, she now looks at it as an opportunity to grow.

“Now, it’s about engaging in a continual process of learning,” she said. “Being around faster swimmers helps me learn through observation. It’s about being deliberate, exercising my autonomy to set my own goals instead of meeting other people’s expectations. It’s about giving my full attention to the present task, welcoming challenges, and approaching each challenge with openness and curiosity.”


From New Delhi to New York

Academy Fellow Nitin Verma is taking a closer look at deepfakes and the impact they can have on public opinion.

Published April 23, 2024

By Nick Fetty

Nitin Verma’s interest in STEM can be traced back to his childhood growing up in New Delhi, India.

Verma, a member of the inaugural cohort for the Artificial Intelligence (AI) and Society Fellowship, a collaboration between The New York Academy of Sciences and Arizona State University’s School for the Future of Innovation in Society, remembers being fascinated by physics and biology as a child. When he and his brother played with toys like kites and spinning tops, he would always wonder about the science behind why the kite stayed in the sky or why the top kept spinning.

Later, he developed an interest in radio and was mesmerized by the ability to pick up faraway stations on the shortwave band of the household radio. He remembers that, in the early 1990s, television programs like Turning Point and Brahmānd (Hindi: ब्रह्मांड, literally translated as “the Universe”) further inspired him.

“These two programs shaped my interest in science, and then through a pretty rigorous school system in India, I got a good grasp of the core concepts of the major sciences—physics, chemistry, biology—and mathematics by the time I graduated high school,” said Verma. “Even though I am an information scientist today, I remain absolutely enraptured by the night sky, physics, telecommunication, biology, and astronomy.”

Forging His Path in STEM

Verma went on to earn a bachelor’s in electronic science at the University of Delhi, where he continued to pursue his interest in radio communications while developing technical knowledge of electronic circuits, semiconductors, and amplifiers. After graduating, he spent nearly a decade working as an embedded software programmer, though he found himself somewhat unfulfilled by the work.

“In industry, I felt extremely disconnected with my inner desire to pursue research on important questions in STEM and social science,” he said.

This lack of fulfillment led him to the University of Texas at Austin where he pursued his MS and PhD in information studies. Much like his interest in radio communications, he was also deeply fascinated by photography and optics, which inspired his dissertation research.

This research examined the impact that deepfake technology can have on public trust in photographic and video content. He wanted to learn how people came to trust visual evidence in the first place, and what is at stake with the arrival of deepfake technology. He found that perceived or actual familiarity with content creators and depicted environments, along with context, prior beliefs, and prior perceptual experience, guides which material the public deems trustworthy.

“My main thesis is that deepfake technology could be exploited to break our trust in visual media, and thus render the broader public vulnerable to misinformation and propaganda,” Verma said.

A New York State of Mind

Verma captured this image of the historic eclipse that occurred on April 8, 2024.

After completing his PhD, he applied for and was admitted to the AI and Society Fellowship. The fellowship has enabled him to further his understanding of AI through opportunities such as the weekly lecture series, collaborations with researchers at New York University, presentations around the city, and projects with Academy colleagues such as Tom Gilbert, Marjorie Xie, and Akuadasuo Ezenyilimba.

Additionally, he is part of the Academy’s Scientist-in-Residence program, in which he teaches STEM concepts to students at a Brooklyn middle school.

“I have loved the opportunity to interact regularly with the research community in the New York area,” he said, adding that living in the city feels like a “mini earth” because of the diverse people and culture.

In the city he has found inspiration for some of his non-work hobbies such as playing guitar and composing music. The city provides countless opportunities for him to hone his photography skills, and he’s often exploring New York with his Nikon DSLR and a couple of lenses in tow.

Deepfakes and Politics

In much of his recent work, he’s examined the societal dimensions (culture, politics, language) that he says are crucial when developing AI technologies that effectively serve the public, echoing the Academy’s mission of “science for the public good.” With a polarizing presidential election on the horizon, Verma has expressed concerns about bad actors utilizing deepfakes and other manipulated content to sway public opinion.

“It is going to be very challenging, given how photorealistic visual deepfakes can get, and how authentic-sounding audio deepfakes have gotten lately,” Verma cautioned.

He encourages people to refrain from immediately reacting to or sharing information they encounter on social media, even if a post bears the signature of a credible news outlet. Basic vetting, such as visiting the purported news organization’s actual webpage and checking the timestamp of a post, can serve as a good first line of defense against disinformation, according to Verma. Particularly when viewing material that may reinforce their beliefs, Verma challenges viewers to ask themselves: “What do I not know after watching this content?”

While Verma has concerns about “the potential for intentional abuse and unintentional catastrophes that might result from an overzealous deployment of AI in society,” he feels that AI can serve the public good if properly practiced and regulated.

“I think AI holds the promise of attaining what—in my opinion—has been the ultimate pursuit behind building machines and the raison d’être of computer science: to enable humans to automate daily tasks that come in the way of living a happy and meaningful life,” Verma said. “Present day AI promises to accelerate scientific discovery including drug development, and it is enabling access to natural language programming tools that will lead to an explosive democratization of programming skills.”


Applying Human Computer Interaction to Brain Injuries

With an appreciation for the value of education and an athlete’s work ethic, Akuadasuo Ezenyilimba brings a unique perspective to her research.

Published April 19, 2024

By Nick Fetty

Athletes, military personnel, and others who endure traumatic brain injuries (TBI) may experience improved outcomes during the rehabilitation process thanks to research by a Fellow with Arizona State University and The New York Academy of Sciences.

Akuadasuo Ezenyilimba, a member of the inaugural cohort of the Academy’s AI and Society Fellowship, conducts research that aims to improve both the quality and the accessibility of TBI care through human-computer interaction. Her interest in this research, and in STEM more broadly, can be traced back to her upbringing in upstate New York.

Instilled with the Value of Education

Growing up in Rochester, New York, Ezenyilimba’s parents instilled in her, and her three younger siblings, the value of education and hard work. Her father, Matthew, migrated to the United States from Nigeria and spent his career in chemistry, while her mother, Kelley, grew up in Akron, Ohio and worked in accounting and insurance. Akuadasuo Ezenyilimba remembers competing as a 6-year-old with her younger sister in various activities pertaining to their after-school studies.

“Both my mother and father placed a strong emphasis on STEM-related education for all of us growing up and I believe that helped to shape us into the individuals we are today, and a big reason for the educational and career paths we all have taken,” said Ezenyilimba.

This competitive spirit extended beyond academics. Ezenyilimba competed as a hammer, weight, and discus thrower on the track and field team at La Cueva High School in New Mexico. An accomplished student-athlete, she was a state champion in the discus her senior year and a back-to-back city champion in the event as a junior and senior.

Her athletic prowess landed her a spot on the women’s track and field team as an undergraduate at New Mexico State University, where she competed in the discus and hammer throw. Off the field, she majored in psychology, which was her first step onto a professional path that would involve studying the human brain.

Studying the Brain

After completing her BS in psychology, Ezenyilimba earned an MS in applied psychology from Sacred Heart University while throwing weight for the women’s track and field team, followed by an MS in human systems engineering from Arizona State University. She then pursued her PhD in human systems engineering at Arizona State, where her dissertation research focused on mild TBI and human-computer interaction in executive function rehabilitation. As a doctoral student, she participated in the National Science Foundation’s Research Traineeship Program.

“My dissertation focused on a prototype of a wireframe I developed for a web-based application for mild traumatic brain injury rehabilitation when time, finances, insurance, or knowledge are potential constraints,” said Ezenyilimba. “The application is called Ụbụrụ.”

As part of her participation in the AI and Society Fellowship, she splits her time between Tempe, Arizona and New York. Arizona State University’s School for the Future of Innovation in Society partnered with the Academy for this Fellowship.

Understanding the Societal Impacts of AI

The Fellowship has provided Ezenyilimba the opportunity to consider the societal dimensions of AI and how that might be applied to her own research. In particular, she is mindful of the potential negative impact AI can have on marginalized communities if members of those communities are not included in the development of the technology.

“It is important to ensure everyone, regardless of background, is considered,” said Ezenyilimba. “We cannot overlook the history of distrust that has impacted marginalized communities when new innovations or changes do not properly consider them.”

Her participation in the Fellowship has enabled her to build and foster relationships with other professionals working on TBI and AI. She also collaborates with the other postdocs in her cohort to brainstorm new ways to address the topic of AI in society.

“As a Fellow I have also been able to develop my skills through various professional workshops that I feel have helped make me more equipped and competitive as a researcher,” she said.

Looking Ahead

Ezenyilimba will continue advancing her research on TBI. Through serious gamification, she looks at how to lessen the negative connotations that can be associated with rehabilitation and how to enhance the overall user experience.

“My research looks at how to increase accessibility to relevant care and ensure that everyone who needs it is equipped with the necessary knowledge to take control of their rehabilitation journey whether that be an athlete, military personnel, or a civilian,” she said.

Going forward, she wants to continue contributing to TBI rehabilitation and telehealth, with an emphasis on human factors and user experience. She also wants to be part of an initiative that ensures accessibility to, and trust in, telehealth so that everyone can be equipped with the necessary tools.

Outside of her professional work, Ezenyilimba enjoys listening to music and attending concerts with family and friends. Some of her favorite artists include Victoria Monet and Coco Jones. She is also getting back into the gym and focusing on weightlifting, harkening back to her days as a track and field student-athlete.

Like many, Ezenyilimba has concerns about the potential misuses of AI by bad actors, but she also sees potential in the positive applications if the proper inputs are considered during the development process.

“I think a promising aspect of AI is the limitless possibilities that we have with it. With AI, when properly used, we can utilize it to overcome potential biases that are innate to humans and utilize AI to address the needs of the vast majority in an inclusive manner,” she said.


The Artificial Intelligence and Society Fellowship Program

Overview

In response to the urgent need to incorporate ethical and humanistic principles into the development and application of artificial intelligence (AI), the New York Academy of Sciences offers a new AI and Society post-doctoral fellowship program, in partnership with Arizona State University’s School for the Future of Innovation in Society.

Merging technical AI research with perspectives from the social sciences and humanities, the goal of the program is the development of multidisciplinary scholars more holistically prepared to inform the future use of AI in society for the benefit of humankind.

Promising young researchers from disciplines spanning computer science, the social sciences, and the humanities will be recruited to participate in a curated research program. Fellows’ time will be divided between New York City, Arizona State University, and on-site internships, working alongside seasoned researchers who are well-versed in academia, industry, or policy work.

Program Requirements

To qualify, candidates must have a PhD in a relevant field such as computer science, artificial intelligence, psychology, philosophy, sociology, or ethics, or a law degree (JD). A strong research background and expertise in the field of AI and Society, including publications in leading academic journals, are recommended.


Applications are now closed.

2023 Fellows

Akuadasuo Ezenyilimba, PhD

2023 Fellow

Akuadasuo Ezenyilimba is a recent Human Systems Engineering PhD graduate. Her academic background includes a Bachelor’s in Psychology, a Master’s in Applied Psychology, and a Master’s in Human Systems Engineering. Her research interests include human-computer interaction, traumatic brain injury (TBI), and TBI rehabilitation. She is looking forward to beginning her postdoctoral work focused on Artificial Intelligence in Society with Arizona State University and The New York Academy of Sciences.

Nitin Verma, PhD

2023 Fellow

Nitin is an incoming Postdoctoral Research Scholar at Arizona State University’s School for the Future of Innovation in Society, in collaboration with The New York Academy of Sciences. His doctoral dissertation research at the School of Information at The University of Texas at Austin investigated the notion of public trust in video amid the emergence of deepfake and allied generative-AI technologies. Nitin’s broader research interests include the interrelationship between society (individuals, platforms, governments, and other stakeholders) and AI, the role of the photographic record in shaping history, and the deep connection between human curiosity and the continuing evolution of the scientific method.

Marjorie Xie, PhD

2023 Fellow

Dr. Marjorie Xie serves as an AI & Society Fellow at The New York Academy of Sciences, jointly with Arizona State University’s School for the Future of Innovation in Society. Marjorie’s work combines AI, mental health, and education. Her goals are to: 1) develop technology that enables social-emotional learning and facilitates collaborative interpersonal relationships; and 2) develop systems for effective AI governance. As an AI researcher, engineer, and social entrepreneur, she hopes to collaborate with mental health professionals, educators, business leaders, and social media experts.

Prior to serving as a fellow, Marjorie interned at Basis Research Institute, building AI tools for reasoning about collaborative intelligence in animals. Marjorie completed her Ph.D. in Neurobiology & Behavior at Columbia University, where she used AI tools to build interpretable models of neural systems in the brain. Before her PhD, she designed and completed an independent major in computational neuroscience at Princeton University, where she also pursued intensive studies in philosophy, literature, and history. Born in France and raised in Seattle, Washington by Chinese immigrants, she currently lives and serves as a resident fellow at the International House in New York City.


Ethical Implications in the Development of AI


Published November 21, 2023

By Nick Fetty

Betty Li Hou, a Ph.D. student in computer science at the New York University Courant Institute of Mathematical Sciences, presented her lecture “AI Alignment Through a Societal Lens” on November 9 at The New York Academy of Sciences.

Seminar attendees included the 2023 cohort of the Academy’s AI and Society post-doctoral fellowship program (a collaboration with Arizona State University’s School for the Future of Innovation in Society), who asked questions and engaged in a dialog throughout the talk. Hou’s hour-long presentation examined the ethical impacts that AI systems can have on societies, and how machine learning, philosophy, sociology, and law should all come together in the development of these systems.

“AI doesn’t exist independently from these other disciplines and so AI research in many ways needs to consider these dimensions, otherwise we’re only looking at one piece of the picture,” said Hou.

Hou’s research aims to capture the broader societal dynamics and issues surrounding the so-called ‘alignment problem,’ a term popularized by author and researcher Brian Christian in his 2020 book of the same name. The alignment problem is the challenge of ensuring that AI systems pursue goals that match human values and interests while avoiding unintended or undesirable outcomes.

Developing Ethical AI Systems

Because values and interests vary across (and even within) countries and cultures, researchers struggle to develop ethical AI systems that transcend these differences and serve societies in a beneficial way. Without a clear guide for developing ethical AI systems, one of the key questions from Hou’s research becomes apparent: What values are implicitly or explicitly encoded in these products?

“I think there are a lot of problems and risks that we need to sort through before extracting benefits from AI,” said Hou. “But I also see so many ways AI provides potential benefits, anything from helping with environmental issues to detecting harmful content online to helping businesses operate more efficiently. Even using AI for complex medical tasks like radiology.”

Social media content moderation is one area where AI algorithms have shown potential for serving society in a positive way. For example, on YouTube, 90% of videos that are reviewed are initially flagged by AI algorithms seeking to spot copyrighted material or other content that violates YouTube’s terms of service.

Hou, whose current work is also supported by a DeepMind Ph.D. Scholarship and an NSF Graduate Research Fellowship, previously served as a Hackworth Fellow at the Markkula Center for Applied Ethics as an undergraduate studying computer science and engineering at Santa Clara University. She closed her recent lecture by reemphasizing the importance of interdisciplinary research and collaboration in the development of AI systems that adequately serve society going forward.

“Computer scientists need to look beyond their field when answering certain ethical and societal issues around AI,” Hou said. “Interdisciplinary collaboration is absolutely necessary.”

The New York Academy of Sciences Announces First Cohort of Post-Doctoral Fellows in Inaugural Artificial Intelligence and Society Fellowship Program with Arizona State University

Published August 17, 2023

FOR IMMEDIATE RELEASE

August 14, 2023, New York, NY – Three post-doctoral scholars have been named as the first cohort of Fellows for the Artificial Intelligence and Society Fellowship program.

Launched by The New York Academy of Sciences and Arizona State University in April 2023, the fellowship was developed to address the unmet need for scholars who are trained across technical AI and social sciences and the humanities. This innovative training program will produce the next generation of scholars and public figures who are prepared to shape the future use of AI in ways that will advance the public good.

The Fellows are:

Nitin Verma, PhD, University of Texas at Austin, School of Information

Nitin studies the ethical, societal, and legal impacts of deepfakes and other generative AI technologies. His multidisciplinary research interests include misinformation, trust, human values, and human-computer interaction. A native of India, he attended the University of Delhi, graduating with a B.Sc. in electronic science.

Akuadasuo Ezenyilimba, PhD, Arizona State University (ASU), The Polytechnic School; Human Systems Engineering

As a National Science Foundation Research Trainee, Akuadasuo has worked on citizen-centered solutions to real-world problems. Currently, she is researching the relationship between human-computer interaction and traumatic brain injury, executive function, and TBI rehabilitation.

Marjorie Xie, PhD, Columbia University Medical Center, Center for Theoretical Neuroscience

Marjorie’s work combines AI, mental health, and education. She interned at Basis Research Institute, building AI tools for reasoning about collaborative intelligence in animals. Marjorie completed her Ph.D. in Neurobiology & Behavior at Columbia University, where she used AI tools to build interpretable models of neural systems in the brain.

Developing the Next Generation of AI Researchers

“AI now permeates every facet of our society,” said Nicholas Dirks, Ph.D., President and CEO, The New York Academy of Sciences. “The technology holds extraordinary promise. It is crucial that researchers have the training and capacity to bring an ethical perspective to its application, to ensure it is used for the betterment of society. That’s why our partnership with Arizona State University, where much of the pioneering research in AI and society is being conducted, is so imperative.”

“ASU is very excited to join with The New York Academy of Sciences for this fellowship,” said David Guston, professor and founding director of ASU’s School for the Future of Innovation in Society, with which the post-docs will be affiliated. “Our goal is to create a powerhouse of trainees, mentors, ideas, and resources to develop the next generation of AI researchers poised to produce ethical, humanistic AI applications and promote these emerging technologies for the public interest,” he added.

Beginning in August 2023, the promising young researchers will participate in a curated research program and professional development training at the Academy’s headquarters in New York City, Arizona State University, and on-site internships, with seasoned researchers from academia, industry, or public policy organizations.

About Arizona State University

Arizona State University, ranked the No. 1 “Most Innovative School” in the nation by U.S. News & World Report for eight years in succession, has forged the model for a New American University by operating on the principles that learning is a personal and original journey for each student; that they thrive on experience and that the process of discovery cannot be bound by traditional academic disciplines. Through innovation and a commitment to accessibility, ASU has drawn pioneering researchers to its faculty even as it expands opportunities for qualified students.

As an extension of its commitment to assuming fundamental responsibility for the economic, social, cultural and overall health of the communities it serves, ASU established the Julie Ann Wrigley Global Futures Laboratory, the world’s first comprehensive laboratory dedicated to the empowerment of our planet and its inhabitants so that all may thrive. It is designed to address the complex social, economic and scientific challenges spawned by the current and future threats from the degradation of our world’s systems.

This platform lays the foundation to anticipate and respond to existing and emerging challenges and use innovation to purposefully shape and inform our future. It includes the College of Global Futures, home to four pioneering schools including the School for the Future of Innovation in Society that is dedicated to changing the world through responsible innovation. For more information, visit globalfutures.asu.edu.

Contact: Media@nyas.org
Desk: 1-718-343-3937
Mobile call/text: 1-917-679-6299

The New York Academy of Sciences Launches New Post-Doctoral Fellowship in Artificial Intelligence and Society with Arizona State University

In response to the urgent need to incorporate ethical and humanistic principles into the development and application of artificial intelligence (AI), The New York Academy of Sciences has partnered with Arizona State University’s School for the Future of Innovation in Society to launch an AI and Society post-doctoral fellowship program. Merging technical AI research with perspectives from the social sciences and humanities, the program’s goal is to develop a new generation of multidisciplinary scholars prepared to inform the future use of AI in society for the benefit of humankind.

“The New York Academy of Sciences is thrilled to launch this unique partnership with Arizona State University, where much of the pioneering research in this field is being conducted,” said Nicholas B. Dirks, President and CEO of The New York Academy of Sciences. “AI is transforming our society at lightning speed. It is essential, however, that we work to better understand the range and nature of AI’s impact and what we can do to anticipate, and then navigate, the many ethical, regulatory, and governance questions that we have only recently begun to comprehend and debate, even as we press forward with leveraging AI’s benefits,” he added.

“ASU is very excited to join with The New York Academy of Sciences for this fellowship,” said David Guston, professor and founding director of the School for the Future of Innovation in Society. “Our goal is to create a powerhouse of trainees, mentors, ideas, and resources to develop the next generation of AI researchers poised to produce ethical, humanistic AI applications to promote science for the greater good,” he added.

Recruiting Promising Young Researchers

Beginning in September 2023, this program will recruit promising young researchers from disciplines spanning computer science, the social sciences, and the humanities to participate in a curated research program housed at the Academy. Fellows’ time will be shared between New York City, Arizona State University, and on-site internships, with seasoned researchers who are well-versed in academia, industry, or policy work.

To qualify, candidates must have a PhD in a relevant field such as computer science, artificial intelligence, psychology, philosophy, sociology, or ethics, or a law degree (JD). A strong research background and expertise in the field of AI and Society, including publications in leading academic journals, are recommended.
