Volume 58 Issue 05 June 2025
Conferences and Events

CSE25 Panel Considers the Fair and Responsible Use of Artificial Intelligence

As the broadening impacts of artificial intelligence (AI) percolate through computational science and engineering (CSE) disciplines, a solid understanding of the technology’s explainability, fair and responsible deployment, and theoretical underpinnings is becoming increasingly imperative. In recent years, the CSE community has refined several basic principles of AI, such as fairness and bias mitigation, accountability, transparency, and privacy and data governance. These areas are rife with research opportunities and can directly influence policy and decision-making processes as the field continues to evolve [1].

During the 2025 SIAM Conference on Computational Science and Engineering (CSE25), which took place this past March in Fort Worth, Texas, a panel of seasoned researchers shared their perceptions of AI’s current and future trajectory. David Bindel of Cornell University—co-chair of the CSE25 Organizing Committee—moderated the session, which comprised panelists Patricia Kovatch of the Icahn School of Medicine at Mount Sinai (ISMMS), Manish Parashar of the University of Utah, Eric Stahlberg of MD Anderson Cancer Center, and Moshe Vardi of Rice University. Each panelist overviewed their individual experiences with AI before collectively fielding questions from the audience.

Bindel opened the session with some general thoughts about the omnipresence of AI in today’s world. “It’s hard to avoid discussions about AI right now,” he said. “We’re dealing with aspects of fairness and responsibility from both a technical and nontechnical perspective.” He cited the Association for Computing Machinery’s Conference on Fairness, Accountability, and Transparency as an important forum for AI discourse, and noted that multiple organizations have published compelling reports about fair and responsible AI, such as the Institute of Electrical and Electronics Engineers’ Ethically Aligned Design [6] and the European Commission’s Ethics Guidelines for Trustworthy AI [5].

A group of researchers shared their thoughts and experiences with responsible artificial intelligence during a panel session at the 2025 SIAM Conference on Computational Science and Engineering, which took place in Fort Worth, Texas, this past March. From left to right: Eric Stahlberg of MD Anderson Cancer Center, Manish Parashar of the University of Utah, Moshe Vardi of Rice University, Patricia Kovatch of the Icahn School of Medicine at Mount Sinai, and moderator David Bindel of Cornell University. SIAM photo.

Vardi’s presentation focused on responsible AI, a prevalent term that emerged in part due to widespread concerns about AI’s safe implementation in present-day society. In keeping with the theme, Vardi shared the following definition of responsible AI that was generated by Google’s Gemini chatbot: Responsible AI is a growing consensus that AI development and deployment must prioritize ethical considerations, fairness, transparency, and accountability. Yet despite its pervasiveness, Vardi has misgivings about this commonly accepted definition. “Responsible AI is a very vague term without saying who is really responsible,” he said. “Responsibility should imply accountability. Responsibility should be people and organizations, not technology.”

Instead, Vardi prefers the phrase “AI, responsibly,” which puts a greater onus on people. “It is the social responsibility of computational science educators to educate socially responsible students,” he said. “Social responsibility must be part of the computing curriculum” [2]. Beyond academia, Vardi asserted that corporations should adopt appropriate AI regulations to ensure that their products work for the benefit (rather than the detriment) of society. Such guidelines are especially important for projects that seek to maximize profits above all else, a pursuit that raises concerns about the concentration of exceptional levels of power in the hands of relatively few individuals [4].

Parashar offered a comparatively sanguine perspective on the utility of AI applications. “I take a very optimistic view of AI and think it has a lot of potential,” he said. “But that doesn’t mean we don’t have to be responsible.” He endorsed the U.S. National Institute of Standards and Technology’s building blocks of trustworthy and responsible AI: validity and reliability, safety, security and resiliency, accountability and transparency, explainability and interpretability, privacy, and fairness and mitigation of harmful bias. Some of these components require policy solutions—like the creation of regulatory frameworks—while others involve technological developments in algorithms and hardware.

“AI critically depends on computing, data, and technology,” Parashar said. “One technical challenge is understanding the quality of the data that goes into the AI and trusting that data. But we also know that to achieve the attributes of being responsible, it’s important that a diversity of folks have access to the technology.” Data diversity is similarly crucial because AI technologies and infrastructures evolve according to their inputs. “Greater inclusivity in contributions to research and development increases the diversity of approaches, quality of research, and fairness of the results,” Parashar continued.
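
Neither the panelists nor the conference materials prescribe specific code, but Parashar’s points about data quality and fairness can be made concrete with a couple of elementary checks. The sketch below, with entirely hypothetical field names and toy records, computes a missing-value rate as a basic data-quality signal and compares positive prediction rates across groups as a rough proxy for disparate impact.

```python
# Minimal sketch, not from the panel: two elementary checks related to data
# quality and fairness. All field names and toy records are hypothetical.
from collections import defaultdict

records = [
    {"age": 54, "group": "A", "label": 1, "prediction": 1},
    {"age": None, "group": "B", "label": 0, "prediction": 1},
    {"age": 67, "group": "A", "label": 1, "prediction": 0},
    {"age": 48, "group": "B", "label": 0, "prediction": 0},
]

def missing_rate(rows, field):
    """Fraction of records with no value for `field` (a basic data-quality signal)."""
    return sum(1 for r in rows if r.get(field) is None) / len(rows)

def positive_rate_by_group(rows, group_field="group", pred_field="prediction"):
    """Positive-prediction rate per group; large gaps hint at disparate impact."""
    counts, positives = defaultdict(int), defaultdict(int)
    for r in rows:
        counts[r[group_field]] += 1
        positives[r[group_field]] += r[pred_field]
    return {g: positives[g] / counts[g] for g in counts}

print("missing 'age' rate:", missing_rate(records, "age"))  # 0.25 for the toy data
rates = positive_rate_by_group(records)
print("per-group positive rates:", rates)
print("demographic parity gap:", max(rates.values()) - min(rates.values()))
```

In practice, checks like these would be one small piece of a much broader validation and governance process; the point is simply that several of the attributes Parashar cited are measurable rather than purely aspirational.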

Several ongoing efforts aim to democratize AI research and development and overcome barriers—such as a lack of awareness or access to resources—that preclude the realization of fair and responsible AI. For instance, the U.S. National Science Foundation’s National Artificial Intelligence Research Resource pilot seeks to strengthen the AI innovation ecosystem while protecting people’s privacy, rights, and civil liberties. Likewise—and closer to home for Parashar—the University of Utah’s One-U Responsible AI Initiative intends to responsibly advance translational AI for societal good in target areas like education, the environment, and healthcare while simultaneously safeguarding civil rights and promoting fairness, accountability, and transparency.

In healthcare settings, the improper deployment of AI can compromise patient needs or wellbeing [3]. Stahlberg, who leads MD Anderson’s Institute for Data Science in Oncology (IDSO), identified six essential aspects of data that are necessary to establish trust in AI applications: context, quality, provenance, transparency, portability, and understanding. Scientists must consider how data is collected, shared, and utilized, and how the surrounding policy and governance structures are managed. Careful attention to these questions can help healthcare organizations build ecosystem capacity; alleviate barriers; and empower patients, communities, researchers, and practitioners. The goal is to jumpstart a recurrent cycle that allows practitioners to effectively transform challenges and opportunities into tangible, impactful results within clinical settings — thus informing future innovation and learning.
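
As an illustration only, and not a description of IDSO’s actual tooling, Stahlberg’s six aspects could travel alongside a dataset as explicit metadata; the hypothetical record below simply names each aspect as a documented field.

```python
# Minimal sketch, an assumption rather than IDSO's actual tooling: a metadata
# record that documents the six data aspects alongside a dataset.
from dataclasses import dataclass

@dataclass
class DatasetTrustRecord:
    context: str        # why, for whom, and under what conditions the data was collected
    quality: str        # known completeness, error rates, and validation performed
    provenance: str     # where the data originated and how it has been transformed
    transparency: str   # who may inspect the data and under what governance
    portability: str    # formats and standards that let the data move between systems
    understanding: str  # documentation that lets downstream users interpret the data

# Hypothetical example entry.
record = DatasetTrustRecord(
    context="de-identified registry extract assembled for model prototyping",
    quality="missingness documented per field; audited against source records",
    provenance="single-institution export with transformation steps logged",
    transparency="access requests logged and reviewed by a governance committee",
    portability="stored in a standards-based, exportable format",
    understanding="data dictionary and collection protocol attached",
)
print(record.provenance)
```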

As MD Anderson strives to be this point of translation, IDSO is cultivating a team-based culture that embraces data science across various domains. The institute is exploring a range of data-centric tools, including the digital twin. “Digital twins are one thing that I really champion because they pull all of this stuff together,” Stahlberg said. However, their implementation is often complicated by social, technical, and financial difficulties. For example, practitioners must secure appropriate stakeholder buy-in; set realistic expectations; establish a baseline healthy state for patients; gather quality data; and manage regulatory, privacy, and liability concerns. “We also have to have methodologies of trust,” Stahlberg said. “How can we be economical in the mathematics but still maintain integrity, trust, and reliability?”

To address these concerns, Stahlberg and his colleagues are educating patients, AI developers, and the general public about fair and responsible use. “Improving healthcare options through data science in cancer is a team effort,” he said. “Quality data, processes, and people are essential to the collective effort for robust translational data science. Collaborations across boundaries are key to continued innovations and sustained impact.”

Much like MD Anderson, ISMMS conducts basic research that is translated into clinical settings and returned to the cycle via a feedback system. “Mount Sinai has been transforming care delivery with AI for over 10 years,” Kovatch, the Dean for Scientific Computing and Data at ISMMS, said. “There’s enormous potential for further impact.” Several years ago, ISMMS founded the Windreich Department of Artificial Intelligence and Human Health—one of the first such departments in the country—to improve diagnoses, expedite drug discovery, and deliver personalized care to patients.

Yet even with these advances, Kovatch acknowledged certain obstacles that inhibit AI’s successful implementation in the medical sphere. For instance, differences in processing abilities between healthcare systems limit data sharing. And human activity frequently introduces errors into the data, such as when a patient fails to take their medication or a receptionist incorrectly records someone’s race, gender, or age. Kovatch encouraged cooperation among applied mathematicians, computer scientists, healthcare workers, and other stakeholders to tackle these impediments. “We need to find a common language so we can work on these problems together,” she said.

Given the unavoidable uncertainties, Mount Sinai follows a set of core principles to promote a comprehensive understanding of the risks and benefits of AI tools. Decision-makers ask the following four questions of any proposed AI software: (i) Is it safe? (ii) Is it effective? (iii) Is it equitable? and (iv) Is it ethical? The Mount Sinai AI Review Board for Governance then employs a five-step checkpoint system—pre-triage, evaluation, validation, deployment, and quality assurance—to evaluate an intended project or technology. “It’s at least an initial framework to make some decisions as to whether the models are doing what we think they should,” Kovatch said. “It can help us set priorities.”
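
As a rough illustration, and purely an assumption about how such a process might be encoded rather than a depiction of Mount Sinai’s actual system, the four screening questions and five checkpoints can be read as a simple sequential workflow in which a proposal advances only as far as it keeps passing.

```python
# Minimal sketch, an assumption about how such a review might be encoded rather
# than Mount Sinai's actual system: four screening questions followed by five
# checkpoints, traversed in order until one fails.
SCREENING_QUESTIONS = ["Is it safe?", "Is it effective?", "Is it equitable?", "Is it ethical?"]
CHECKPOINTS = ["pre-triage", "evaluation", "validation", "deployment", "quality assurance"]

def review(proposal, answers, passes_checkpoint):
    """Return how far a proposed AI tool advances, or where it stops."""
    for question in SCREENING_QUESTIONS:
        if not answers.get(question, False):
            return f"{proposal}: stopped at screening question '{question}'"
    for stage in CHECKPOINTS:
        if not passes_checkpoint(proposal, stage):
            return f"{proposal}: halted at checkpoint '{stage}'"
    return f"{proposal}: cleared all checkpoints"

# Hypothetical usage with a stand-in evaluator that approves every stage.
answers = {question: True for question in SCREENING_QUESTIONS}
print(review("readmission-risk model", answers, lambda proposal, stage: True))
```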

To ensure that practitioners and institutions uphold the principle of “do no harm” in the changing medical landscape, Vardi advocated for a national AI safety board to establish universal guidelines for AI in healthcare. “If AI is going to be involved in making decisions about human health, we need to figure out the standards,” he said. Researchers must be able to mathematically explain possible sources of harm and provide mitigative solutions to prove that a system is sufficiently reliable. “We don’t have perfect safety, so we have to develop standards where it’s safe enough,” he continued, likening the situation to air travel: airline passengers may not know with absolute certainty that a system won’t fail, but they can nevertheless feel comfortable with the minimal risk.

As the panel drew to a close, the speakers emphasized the collaborative aspects of AI development. “AI is evolving based on how you use it,” Parashar said. “It’s not just about how you build it, but how the community is engaging with it.” He urged audience members to foster a general sense of awareness and appreciation of AI, from the early stages of research to ultimate deployment. This type of open communication is especially important if no governmental or organizational guardrails are in place. “You need to be transparent about how you’re using AI,” Stahlberg said. “Share with others when you’ve been successful and when you’ve failed.”

Despite the many challenges that are associated with fair and responsible AI, the panelists remain hopeful about AI’s expanding capacity in the coming years — as long as practitioners adhere to appropriate protocols. “I’m extremely optimistic about the potential of AI,” Parashar said. “It’s a huge resource that we can leverage.”

References
[1] Barocas, S., Hardt, M., & Narayanan, A. (2023). Fairness and machine learning: Limitations and opportunities. Cambridge, MA: MIT Press.
[2] Ferreira, R., & Vardi, M.Y. (2021). Deep tech ethics. In SIGCSE ’21: Proceedings of the 52nd ACM technical symposium on computer science education (pp. 1041-1047). Association for Computing Machinery.
[3] Francis, M. (2025, May 1). When artificial intelligence takes shortcuts, patient needs can get lost. SIAM News, 58(4), p. 1.
[4] Hao, K. (2021). Stop talking about AI ethics. It’s time to talk about power. MIT Technology Review. Retrieved from https://www.technologyreview.com/2021/04/23/1023549/kate-crawford-atlas-of-ai-review.
[5] High-level Expert Group on Artificial Intelligence. (2019). Ethics guidelines for trustworthy AI. Brussels, Belgium: European Commission.
[6] IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. (2019). Ethically aligned design: A vision for prioritizing human well-being with autonomous and intelligent systems (1st ed.). Piscataway, NJ: Institute of Electrical and Electronics Engineers.

About the Author