February Prize Spotlight
Congratulations to the following 2024 prize recipients who will be recognized at the 2024 SIAM Conference on Uncertainty Quantification (UQ24), taking place February 27 – March 1, 2024, or the 2024 SIAM Conference on Parallel Processing for Scientific Computing (PP24), happening March 5 – 8, 2024.
- Christie Louis Alappat, Georg Hager, Gerhard Wellein, Achim Basermann, Alan R. Bishop, Holger Fehske, Olaf Schenk, and Jonas Thies – SIAM Activity Group on Supercomputing Best Paper Prize
- Laura Grigori – SIAM Activity Group on Supercomputing Career Prize
- Giulia Guidi – SIAM Activity Group on Supercomputing Early Career Prize
- Jonas Latz – SIAM Activity Group on Uncertainty Quantification Early Career Prize
Christie Louis Alappat, Georg Hager, Gerhard Wellein, Achim Basermann, Alan R. Bishop, Holger Fehske, Olaf Schenk, and Jonas Thies
Christie Louis Alappat, Georg Hager, Gerhard Wellein, Achim Basermann, Alan R. Bishop, Holger Fehske, Olaf Schenk, and Jonas Thies are the 2024 recipients of the SIAM Activity Group on Supercomputing Best Paper Prize. The team received the prize for their paper, “A Recursive Algebraic Coloring Technique for Hardware-efficient Symmetric Sparse Matrix-vector Multiplication,” published in ACM Transactions on Parallel Computing, Vol. 7, No. 3, Article 19 (2020). The committee recognized them for introducing a novel algorithm for the long-standing graph coloring problem that significantly outperforms previous methods.
They will be recognized at the 2024 SIAM Conference on Parallel Processing for Scientific Computing (PP24), taking place March 5 – 8, 2024, in Baltimore, Maryland. Alappat will present a talk, titled “Accelerating Sparse Iterative Solvers and Preconditioners Using RACE,” on March 7, 2024, at 8:30 a.m. ET.
The SIAM Activity Group on Supercomputing awards this prize every two years to the author(s) of the most outstanding paper, as determined by the selection committee, in the field of parallel scientific and engineering computing published within the four calendar years preceding the award year.
Christie Louis Alappat received a master’s degree with honors from the Bavarian Graduate School of Computational Engineering at the Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU). He is currently working as a research assistant at the Erlangen National High-Performance Computing Center and is in the final stages of completing his doctoral studies under the guidance of Dr. Gerhard Wellein. His research interests include performance engineering, sparse matrix and graph algorithms, iterative linear solvers, and eigenvalue computations. He has received numerous awards, including the 2017 Software for Exascale Computing Best Master Thesis Award, the 2018 Supercomputing ACM Student Research Competition (SRC) Award, second place in the 2019 ACM SRC grand finals, and the 2020 International Workshop on Performance Modeling, Benchmarking, and Simulation of High Performance Computer Systems Best Short Paper Award.
Achim Basermann is head of the High-Performance Computing Department at the German Aerospace Center (DLR) Institute for Software Technology and a German Research Foundation review board member in computer science, topic “Massively Parallel and Data Intensive Systems”. In 1995, he obtained his Ph.D. in electrical engineering from RWTH Aachen University, followed by a postdoctoral position in computer science at the Jülich Research Centre’s Central Institute for Applied Mathematics. He led a team of HPC application experts at the C&C Research Laboratories, NEC Laboratories Europe in Germany (1997-2009), before joining DLR. Dr. Basermann’s current research focuses on massively parallel linear algebra algorithms, partitioning methods, optimization tools for computational fluid dynamics on many-core architectures and GPGPU clusters, high-performance data analytics, and quantum computing.
Alan R. Bishop is an internationally recognized leader in theory, modeling, and simulation for interdisciplinary condensed matter, statistical physics, nonlinear science, and functional, multiscale complexity. He has made major contributions in the areas of soliton mathematics and applications, quantum complexity, structural and magnetic transitions, collective excitations in low-dimensional organic, inorganic, and biological materials, and complex electronic and structural materials with strong spin-charge-lattice coupling. He is a Fellow of the American Physical Society, The Institute of Physics, and the American Association for the Advancement of Science, a recipient of the Department of Energy’s E.O. Lawrence Award, a Humboldt Senior Fellow, and a Los Alamos Laboratory Fellow.
Holger Fehske received a Ph.D. in physics from the University of Leipzig, and a Habilitation degree and Venia Legendi in theoretical physics from the University of Bayreuth. In 2002, he became a full professor at the University of Greifswald. He currently holds the chair for complex quantum systems and works in the fields of solid-state theory, quantum statistical physics, light-matter interaction, quantum informatics, plasma physics, and computational physics. Dr. Fehske is a longstanding member of the steering committee of the High-Performance Computing Center Stuttgart and the Erlangen National High-Performance Computing Center.
Georg Hager holds a Ph.D. and a Habilitation degree in computational physics from the University of Greifswald. He leads the Training and Support Division at Erlangen National High-Performance Computing Center and is an associate lecturer at the Institute of Physics at Greifswald. His recent research includes architecture-specific optimization strategies for current microprocessors, performance engineering of scientific codes on chip and system levels, and the analytic modeling of structure formation in large-scale parallel codes.
Dr. Hager has authored and co-authored more than 100 peer-reviewed publications and was instrumental in developing and refining the Execution-Cache-Memory performance model and energy consumption models for multicore processors. He received the 2018 ISC Gauss Award with Johannes Hofmann and Dietmar Fey for a paper on accurate performance and power modeling. He also received the 2011 Informatics Europe Curriculum Best Practices Award with Jan Treibig and Gerhard Wellein for outstanding contributions to teaching in computer science. His textbook, Introduction to High Performance Computing for Scientists and Engineers, is recommended or required reading in many HPC-related lectures and courses worldwide. With colleagues from FAU, the High-Performance Computing Center Stuttgart, and the Vienna University of Technology, Dr. Hager develops and conducts successful international tutorials on node-level performance engineering and hybrid programming.
Olaf Schenk is a professor at the Institute of Computing of the Università della Svizzera italiana (USI) in Switzerland, and an adjunct member of the Computer Systems Institute at USI. He is also the co-director of the Institute of Computing and of the Master of Science in Computational Science at USI. He completed his undergraduate degree in applied mathematics at the Karlsruhe Institute of Technology, received his Ph.D. from the Department of Information Technology and Electrical Engineering at ETH Zurich, and holds a Venia Legendi from the Department of Mathematics and Computer Science at the University of Basel. Dr. Schenk has been an active SIAM member for 21 years, was named a 2020 SIAM Fellow, and is a senior member of the Institute of Electrical and Electronics Engineers. His research interests lie in high-performance computing, in particular the development and optimization of algorithms and software tools for large-scale simulations.
Jonas Thies received a bachelor’s degree in computational engineering from FAU (2003), a master’s degree in scientific computing from KTH Stockholm (2006), and a Ph.D. in applied mathematics from the University of Groningen (2010). He then spent two years as a postdoc at the Center for Interdisciplinary Mathematics in Uppsala. He subsequently worked at the German Aerospace Center in Cologne, where he led a research group on parallel numerics (2013-21). Since 2022, Dr. Thies has been an assistant professor in high-performance computing at the Delft Institute of Applied Mathematics, and the scientific advisor for users of the Delft University of Technology High Performance Computing Center.
Gerhard Wellein is a professor of high-performance computing in the Department of Computer Science at FAU and holds a Ph.D. in theoretical physics from the University of Bayreuth. He was also a guest lecturer at Università della Svizzera italiana (USI) Lugano (2015-17). Since 2021, he has been the director of the Erlangen National High-Performance Computing Center.
Dr. Wellein is also a member of the board of directors of the German National High Performance Computing Alliance, which coordinates the national HPC Tier-2 infrastructures at German universities. He has held the deputy speaker position of the Competence Network for Scientific High Performance Computing in Bavaria for several years. As a member of the scientific steering committees of the Leibniz Supercomputing Centre and the Gauss Centre for Supercomputing, he organizes and oversees the compute time application process for national HPC resources.
Dr. Wellein has more than 20 years of experience teaching HPC techniques to students and scientists. He has contributed to numerous tutorials on node-level performance engineering in the past decade and received the 2011 Informatics Europe Curriculum Best Practices Award with Jan Treibig and Georg Hager for outstanding teaching contributions. His research interests focus on performance modeling and performance engineering, architecture-specific code optimization, novel parallelization approaches, and hardware-efficient building blocks for sparse linear algebra and stencil solvers. He has conducted and led numerous national and international HPC research projects and has authored or co-authored more than 100 peer-reviewed publications.
The authors collaborated on their answers to our questions.
Q: Why are you all excited to receive the award?
A: We are truly thrilled and honored to receive the prestigious SIAM Activity Group on Supercomputing Best Paper Prize. It is incredibly motivating and inspiring to present our research and have it acknowledged by the community. The award not only highlights the importance of our research but also encourages and motivates us to continue exploring innovative ideas and push the boundaries further.
Q: Could you tell us about the research that won your team the award?
A: This paper solves a long-standing problem in computational science. Mathematical models of many complex systems in science and engineering involve very large, sparse matrices, that is, matrices in which almost all entries are zero. Although this sounds encouraging, it actually complicates matters when implementing algorithms, especially if one aims for high performance and efficiency on modern parallel computers. A key component in many algorithms is the multiplication of a sparse matrix with a vector (SpMV). This important computational kernel can be parallelized easily; however, if the matrix is symmetric and only half of it is stored in memory, parallelization becomes a hard problem due to dependencies. Although many partial solutions have been presented in the literature, they all involve compromises that can cause inefficiencies due to load imbalance and bad hardware utilization.
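To make the dependency concrete, here is a minimal, deliberately serial Python sketch of symmetric SpMV when only the upper triangle is stored; the storage arrays, function name, and tiny example matrix are invented for illustration and are not taken from the paper. Each stored entry contributes to two result entries, so naively splitting the row loop across threads would let different threads write to the same element of y.

```python
# Illustrative sketch (not the authors' code): y = A @ x for symmetric A,
# storing only the entries with column index j >= i in CSR-like arrays.
import numpy as np

def sym_spmv_upper(n, row_ptr, col_idx, vals, x):
    y = np.zeros(n)
    for i in range(n):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            j = col_idx[k]
            y[i] += vals[k] * x[j]
            if j != i:
                # The "transpose" contribution writes to y[j]. If rows are
                # split naively across threads, two threads can update the
                # same y[j]: the dependency that makes parallelization hard.
                y[j] += vals[k] * x[i]
    return y

# Tiny example: A = [[4, 1, 0], [1, 3, 2], [0, 2, 5]], upper triangle only.
row_ptr = np.array([0, 2, 4, 5])
col_idx = np.array([0, 1, 1, 2, 2])
vals    = np.array([4.0, 1.0, 3.0, 2.0, 5.0])
x = np.array([1.0, 2.0, 3.0])
print(sym_spmv_upper(3, row_ptr, col_idx, vals, x))  # matches A @ x = [6, 13, 19]
```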
In our paper, a novel algorithm and library was introduced called the Recursive Algebraic Coloring Engine (RACE). It can adapt the parallelism in symmetric SpMV to the underlying hardware while retaining good load balancing and cache-friendly data access patterns. RACE is not restricted to optimizing symmetric SpMV but has already been shown to improve the performance of other relevant kernels and solvers in sparse linear algebra which have dependencies. Following the publication of this paper, our research on RACE has diversified considerably. We've explored the performance optimization of more intricate kernels, such as the cache blocking of the matrix power kernel, integrated our optimizations into popular iterative linear solvers, introduced multi-node parallelization strategies, and started adapting our performance optimization strategies for GPUs.
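The recursive, hardware-aware coloring in RACE is far more involved than anything that fits here, but the basic idea of grouping rows into conflict-free stages can be sketched with a simple greedy coloring; this is an illustrative stand-in, not the RACE algorithm, and the function and variable names are made up.

```python
# Illustrative greedy coloring for the half-stored symmetric SpMV above:
# rows that write to disjoint entries of y get the same color and can be
# processed concurrently, one color (stage) after another.
from collections import defaultdict

def greedy_row_coloring(n, row_ptr, col_idx):
    # y-entries each row writes to: y[i] plus y[j] for every stored column j.
    targets = [{i} | {col_idx[k] for k in range(row_ptr[i], row_ptr[i + 1])}
               for i in range(n)]
    touched_by = defaultdict(set)            # y-entry -> rows that write to it
    for i, t in enumerate(targets):
        for j in t:
            touched_by[j].add(i)

    color = [-1] * n
    for i in range(n):
        # Colors already taken by rows that share a write target with row i.
        forbidden = {color[r] for j in targets[i] for r in touched_by[j]
                     if color[r] >= 0}
        c = 0
        while c in forbidden:
            c += 1
        color[i] = c
    return color    # rows of equal color form one conflict-free parallel stage
```

Rows within one color can then run in parallel without write conflicts; RACE goes much further by recursing, permuting the matrix, and balancing the stages for load balance and cache locality.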
Q: What does your team's work mean to the public?
A: The development of the RACE framework outlined in our publication significantly reduces the runtime of essential sparse kernels. These computational kernels play a crucial role in various algorithms, such as linear iterative solvers, eigenvalue solvers, and exponential time integrators, forming the backbone of numerous numerical simulations. Beyond mere runtime reduction, our performance optimizations pave the way for more energy-efficient simulations and enable us to handle larger-scale computations with limited resources, thus pushing the boundaries of research.
Q: What does being a member of SIAM mean to your team?
A: Being members of SIAM is invaluable for our team as it underscores the significance of teamwork and community effort in conducting impactful research. The journals, events, and conferences organized by SIAM provide an ideal platform for collaborative research and networking within the field of applied mathematics. The inclusive nature of SIAM, bringing together academia and industry, fosters mutual growth and accelerates the pace of research.
Laura Grigori
Laura Grigori, Ecole Polytechnique Fédérale de Lausanne and Paul Scherrer Institute, is the recipient of the 2024 SIAM Activity Group on Supercomputing Career Prize for her outstanding contributions to scientific computing, particularly communication-avoiding algorithms.
She will deliver a talk at the 2024 SIAM Conference on Parallel Processing for Scientific Computing (PP24), happening March 5 – 8, 2024, in Baltimore, Maryland. The talk, titled “Tackling High Dimensional Problems Through Randomization and Communication Avoidance,” will take place on March 7, 2024, at 9:50 a.m. ET.
The SIAM Activity Group on Supercomputing awards this prize every two years to one outstanding senior researcher who has made broad and distinguished contributions to the field of algorithms research and development for parallel scientific and engineering computing.
Dr. Laura Grigori is a professor who holds the chair of high-performance numerical algorithms and simulation at Ecole Polytechnique Fédérale de Lausanne, and is the head of the Laboratory for Simulation and Modelling at the Paul Scherrer Institute, Switzerland. She received a Ph.D. in computer science from Henri Poincaré University, INRIA Lorraine (2001). After spending two years at the University of California, Berkeley (UC Berkeley) and Lawrence Berkeley National Laboratory as a postdoctoral researcher, she joined INRIA (2004-23). Dr. Grigori also led the Alpines group, a joint group between INRIA and the J.L. Lions Laboratory, Sorbonne University, France (2013-23). She has spent two sabbaticals at UC Berkeley as a visiting professor. Her fields of expertise are numerical linear and multilinear algebra and high-performance scientific computing for challenging applications ranging from astrophysics to molecular simulations.
Dr. Grigori has been an active member of SIAM for 17 years, was named a 2020 SIAM Fellow, and received an ERC Synergy Grant (2018). For her work on communication-avoiding algorithms, she and her co-authors were awarded the 2016 SIAM Activity Group on Supercomputing Best Paper Prize for the most outstanding paper published in a refereed journal in the field of high-performance computing. She has been an invited plenary speaker at many international conferences, including the SIAM Conference on Parallel Processing for Scientific Computing; the International Conference for High Performance Computing, Networking, Storage, and Analysis; the SIAM Conference on Applied Linear Algebra; the SIAM Conference on Computational Science and Engineering; and the GAMM Annual Meeting. Dr. Grigori has also served as chair of the Partnership for Advanced Computing in Europe Scientific Steering Committee, chair of the SIAM Activity Group on Supercomputing (2016-17), and a member of the SIAM Council (2018-23).
Q: Why are you excited to receive the award?
A: I am truly honored and enthusiastic to receive this career award from SIAM and its parallel processing community. I hope that this recognition will inspire others to explore cutting-edge research at the frontier of applied mathematics and high-performance computing and to emphasize challenging applications, a topic that holds special significance for me. This award is also an opportunity for me to acknowledge my close collaborators and the members of my group, including Ph.D. students and postdoctoral researchers from France and now Switzerland, who have turned this research into a captivating journey through the ever-evolving landscape of high-performance scientific computing.
Q: Could you tell us about the research that won you the award?
A: This research focuses on the design of high-performance numerical algorithms for operations that are at the heart of many computations, from numerical simulations to data analysis and machine learning, across application domains ranging from astrophysics to molecular simulations. These algorithms often concern operations in numerical linear or multilinear algebra and are designed to meet major challenges in high-performance computing while providing numerical stability guarantees.
One such challenge is the large and steadily growing gap between the time needed to perform arithmetic operations on one processor and the time needed to communicate the results to another processor. This high communication cost prevents many algorithms from being efficient on parallel computers built from multi-core processors and/or accelerators. We have shown that a new generation of algorithms needs to be, and can be, designed that provably reduces the number of communication instances to a minimum. These algorithms are referred to as communication avoiding. To give an example, several algorithms in linear algebra require some form of pivoting to avoid divisions by small numbers or to preserve stability. The classic pivoting schemes imply that the resulting algorithm communicates asymptotically more than the lower bounds require. We have introduced a novel pivoting strategy, referred to as tournament pivoting, that can be used in operations such as LU, rank-revealing factorizations, or low-rank approximations, providing a numerically stable alternative while minimizing communication.
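As a rough illustration of the tournament idea (a serial sketch under simplifying assumptions, not Prof. Grigori's CALU/TSLU implementation; the block count, sizes, and helper names are invented), each block of rows nominates k candidate pivot rows using ordinary partial pivoting, and the candidates are merged pairwise in a reduction tree, so a parallel version would need only a logarithmic number of communication rounds instead of one pivot search per elimination step.

```python
import numpy as np

def local_pivot_rows(B, k):
    """Indices (within B) of the k pivot rows that Gaussian elimination with
    partial pivoting would choose on the block B (shape m x k, m >= k)."""
    B = B.astype(float).copy()
    rows = np.arange(B.shape[0])
    for step in range(k):
        p = step + np.argmax(np.abs(B[step:, step]))      # best remaining pivot
        B[[step, p]] = B[[p, step]]
        rows[[step, p]] = rows[[p, step]]
        if B[step, step] != 0.0:                          # eliminate below pivot
            B[step + 1:, step:] -= np.outer(B[step + 1:, step] / B[step, step],
                                            B[step, step:])
    return rows[:k]

def tournament_pivoting(A, k, num_blocks=4):
    """Select k pivot rows of the tall matrix A via a pairwise reduction tree."""
    blocks = np.array_split(np.arange(A.shape[0]), num_blocks)
    candidates = [idx[local_pivot_rows(A[idx], k)] for idx in blocks]  # leaves
    while len(candidates) > 1:                            # tournament rounds
        merged = []
        for a, b in zip(candidates[0::2], candidates[1::2]):
            both = np.concatenate([a, b])
            merged.append(both[local_pivot_rows(A[both], k)])
        if len(candidates) % 2:
            merged.append(candidates[-1])
        candidates = merged
    return candidates[0]                                  # the k tournament winners

A = np.random.default_rng(0).standard_normal((64, 4))
print(tournament_pivoting(A, k=4))                        # global pivot row indices
```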
Another area I have contributed to is so-called randomized linear algebra. Randomization is a compelling technique that has the potential to reduce both the computational and communication costs of an algorithm. It leverages optimized kernels, mixed precision, and simple communication patterns, while providing numerical guarantees with high probability. It notably allows us to address some open questions, such as controlling pivot growth during Gaussian elimination through left and right multiplication by square Gaussian random matrices, or reducing communication in Krylov subspace methods while maintaining numerical stability.
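For a flavor of the randomized approach, here is the textbook randomized range finder, given as a generic illustration rather than any of the specific algorithms mentioned above; the matrix sizes and oversampling value are arbitrary. A small Gaussian sketch captures the range of A so that the expensive factorization is applied to a much smaller matrix.

```python
import numpy as np

def randomized_low_rank(A, rank, oversample=10, seed=None):
    """Approximate rank-`rank` SVD of A via a Gaussian sketch (range finder)."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    Omega = rng.standard_normal((n, rank + oversample))   # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)                        # basis for the sampled range
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U_small)[:, :rank], s[:rank], Vt[:rank]

# Sanity check on a synthetic matrix of exact rank 40.
rng = np.random.default_rng(1)
A = rng.standard_normal((500, 40)) @ rng.standard_normal((40, 300))
U, s, Vt = randomized_low_rank(A, rank=40, seed=2)
print(np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))  # close to machine precision
```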
In addition to these developments, I have worked on different application domains, in particular those involving problems in high dimensions, where we aim to provide efficient and robust solvers, tailored to the needs of each specific application and capable of solving problems of a size and complexity that are otherwise out of reach for existing methods. A recent example is in the area of quantum chemistry, when numerically solving the time-independent Schrödinger equation. Our goal is to enable the simulation of systems of very large size or strongly correlated systems.
Q: What does your work mean to the public?
A: Linear and multilinear algebra operations very often account for a large fraction of the overall time of a numerical simulation or a data analysis process. They may be hidden from the end user, but these technologies are necessary for the success of the entire process. High-performance, parallel algorithms and their advanced implementations allow us to use a large number of processors efficiently, and thus to solve larger problems in a significantly shorter time while also saving energy. Their impact on industrial and scientific applications is therefore very broad, and they often enable and drive progress in many areas. With our collaborators from astrophysics, we have demonstrated the feasibility of producing cosmic microwave background maps using the full volume of data, incorporating data from all detectors and covering the entire duration of the mission. This includes accounting for pervasive data correlations, as anticipated in LiteBIRD, a next-generation, Japan-led satellite experiment planned for launch in the next decade. LiteBIRD has the potential to revolutionize our understanding of cosmology and fundamental physics.
The potential impact of these algorithms is very broad, as they are integrated into a number of libraries developed by academics or hardware vendors. For example, the orthogonalization of a set of vectors is implemented in several libraries using TSQR, a communication-avoiding technique. Indeed, since these algorithms are provably as stable as classic algorithms, they could progressively replace them. In fact, you may already be using them without even knowing it!
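As an illustration of the TSQR idea mentioned above (a serial stand-in for the parallel reduction; the block count and names are arbitrary), the R factor of a tall-skinny matrix can be obtained from local QR factorizations of row blocks followed by one QR of the stacked small R factors, so only those small R factors would ever need to be communicated.

```python
import numpy as np

def tsqr_r(A, num_blocks=4):
    """R factor of the tall-skinny matrix A, computed TSQR-style."""
    blocks = np.array_split(A, num_blocks, axis=0)
    local_Rs = [np.linalg.qr(B)[1] for B in blocks]   # one small n x n R per block
    return np.linalg.qr(np.vstack(local_Rs))[1]       # combine the stacked R factors

rng = np.random.default_rng(3)
A = rng.standard_normal((10_000, 8))
R = tsqr_r(A)
R_ref = np.linalg.qr(A)[1]
# R is unique only up to the signs of its rows, so compare via R^T R = A^T A.
print(np.allclose(R.T @ R, R_ref.T @ R_ref))          # True
```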
Q: What does being a member of SIAM mean to you?
A: SIAM has played a pivotal role throughout my entire career. It provided me with the opportunity to be part of a dynamic and engaging community. SIAM's high-quality journals and prestigious conferences have been instrumental in keeping me informed of the latest advancements in the field and offering opportunities to present my own results. SIAM conferences provided venues to meet top experts in the field, to exchange ideas with colleagues who share similar research interests for applied mathematics and its real-world applications, and to get inspired by new trends and research opportunities.
Giulia Guidi
Giulia Guidi, Cornell University and Lawrence Berkeley National Laboratory, is the recipient of the 2024 SIAM Activity Group on Supercomputing Early Career Prize for her pioneering work bridging high-performance computing and computational biology.
She will deliver a talk at the 2024 SIAM Conference on Parallel Processing for Scientific Computing (PP24), happening March 5 – 8, 2024, in Baltimore, Maryland. The talk, titled “Scalability and Productivity in Data-Intensive Biological Research on Massively Parallel Systems,” will take place on March 7, 2024, at 9:10 a.m. ET.
The SIAM Activity Group on Supercomputing awards this prize every two years to one individual in their early career for outstanding research contributions in the field of algorithms research and development for parallel scientific and engineering computing in the three calendar years prior to the award year.
Dr. Giulia Guidi is an assistant professor of computer science at Cornell University in the Bowers College of Computing and Information Science and is a member of the graduate fields of computational biology and applied mathematics, in addition to computer science. Dr. Guidi’s work focuses on high-performance computing for large-scale computational sciences. She received her Ph.D. in computer science from the University of California, Berkeley, under the supervision of Aydin Buluç and Kathy Yelick (2022).
Dr. Guidi is part of the Performance and Algorithms Research Group in the Applied Math and Computational Sciences Division at Lawrence Berkeley National Laboratory, where she is currently an affiliate faculty member. She received the 2023 Italian Scientists & Scholars in North America Foundation Young Investigator Mario Gerla Award and the 2020 ACM Special Interest Group on High Performance Computing Computational & Data Science Fellowship. Dr. Guidi is interested in developing algorithms and software infrastructures on parallel machines to accelerate data processing without sacrificing programming productivity and to make high-performance computing more accessible.
Q: Why are you excited to receive the award?
A: I am thrilled and honored to be awarded the SIAM Activity Group on Supercomputing Early Career Prize. This prize is an important milestone in my career, and it is very exciting to see the community recognize the impact of my work. This award is also a tribute to the mentors and collaborators I have been privileged to work with. Receiving this prize is not only an encouragement to continue pushing the field in this direction but also underscores the potential and transformative benefit of supercomputing to the computational sciences, especially in a world where the need for data and computing continues to grow.
Q: Could you tell us about the research that won you the award?
A: In my research, we used sparse matrices as a proxy for large-scale biological computation by mapping entire genomics pipelines to sparse matrices and their computation, and we showed how the entire pipeline can be parallelized across hundreds of nodes without sacrificing productivity. This allowed us to reduce the processing time for the human genome from more than a day to less than 30 minutes on a supercomputer. In doing so, we took advantage of the semiring abstraction, which makes it possible to overload data structures and computations, e.g., matrix multiplication, with virtually any structure or function that is useful for the application. In this way, we can use sparse matrices and their computation in a non-numerical way, as our nonzeros are pieces of DNA or similar rather than numerical values. The north star of my research is to democratize access to high-performance computing for scientific purposes.
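A toy sketch of the semiring overloading she describes (hand-rolled and purely illustrative; the actual pipeline relies on highly optimized distributed sparse matrix libraries, and the data below are invented): the same sparse matrix product routine works with numbers or with arbitrary payloads once "multiply" and "add" are supplied by the caller.

```python
from collections import defaultdict

def spgemm_semiring(A, B, multiply, add):
    """C = A (x) B over a user-supplied semiring.
    A and B are dicts of dicts: A[i][k] holds the nonzero payload at (i, k)."""
    C = defaultdict(dict)
    for i, row in A.items():
        for k, a_ik in row.items():
            for j, b_kj in B.get(k, {}).items():
                contrib = multiply(a_ik, b_kj)
                C[i][j] = add(C[i][j], contrib) if j in C[i] else contrib
    return dict(C)

# With (+, *) this is ordinary numerical sparse matrix multiplication ...
A = {0: {0: 1.0, 1: 2.0}, 1: {1: 3.0}}
B = {0: {1: 4.0}, 1: {0: 5.0}}
print(spgemm_semiring(A, B, lambda a, b: a * b, lambda x, y: x + y))

# ... while overloaded operators let nonzeros carry genomic payloads, e.g.,
# pairing the positions at which two reads share a seed (toy data):
reads_by_kmer = {0: {"ACGT": 3}, 1: {"ACGT": 7, "TTAG": 0}}   # read -> {k-mer: position}
kmer_by_read = {"ACGT": {0: 3, 1: 7}, "TTAG": {1: 0}}         # k-mer -> {read: position}
overlaps = spgemm_semiring(reads_by_kmer, kmer_by_read,
                           multiply=lambda p, q: [(p, q)],    # pair the two positions
                           add=lambda x, y: x + y)            # collect all shared seeds
print(overlaps)   # read pair -> list of matching seed positions
```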
I believe that using sparse matrices as an intermediate representation or abstraction is a promising approach to achieve this goal. Sparsity, whether in the form of a graph, matrix, or network, is a key aspect of science and a growing requirement of deep learning to reduce its memory and computational demand.
Q: What does your work mean to the public?
A: In practice, my research serves as a bridge between complex biological data and data processing and tangible healthcare approaches for the public, e.g., genome-based diagnostics. The ability to reduce the processing time for certain types of information, like in the context of genome-related data, can have a significant impact on the quality of downstream analysis. It not only improves the productivity and cost-efficiency of the overall process, but also enables us to tackle research challenges that would not be feasible without supercomputing due to time constraints or high computational demand.
Q: What does being a member of SIAM mean to you?
A: As a member of SIAM, I feel part of the community and am grateful for the insight into different areas of applied math that the various conferences provide. The SIAM Conference on Parallel Processing for Scientific Computing is one of my favorite conferences to attend. The Gene Golub SIAM Summer School I attended in 2019 is one of the fondest memories of my Ph.D. That summer school was my ticket to feeling like a part of the community, and I would recommend that any Ph.D. student attend a summer school organized by SIAM.
Jonas Latz
Jonas Latz, University of Manchester, is the recipient of the 2024 SIAM Activity Group on Uncertainty Quantification Early Career Prize for several substantial and innovative contributions to uncertainty quantification, integrating a wide range of mathematical areas.
He will give a talk at the 2024 SIAM Conference on Uncertainty Quantification (UQ24), happening February 27 – March 1, 2024, in Trieste, Italy. The talk, titled “Perspectives on Stochastic Gradient Descent,” will take place on March 1, 2024, at 1:30 p.m. CET.
The SIAM Activity Group on Uncertainty Quantification awards this prize every two years to one individual in their early career for outstanding research contributions in the field of uncertainty quantification in the three calendar years prior to the award year.
Dr. Jonas Latz has been a lecturer in applied mathematics (equivalent to assistant professor) at the University of Manchester since September 2023. Dr. Latz studied mathematics and scientific computing at Trier University, Germany, and the University of Warwick, United Kingdom, respectively. He obtained his doctorate from the Technical University of Munich, Germany (2019), and was then a research associate in the Department of Applied Mathematics and Theoretical Physics at the University of Cambridge (2020-21). His first faculty position was as an assistant professor at Heriot-Watt University, Edinburgh (2021-23).
Q: Why are you excited to receive the award?
A: The 2018 SIAM Conference on Uncertainty Quantification in California was my first large conference as a Ph.D. student. It was also the first time that the SIAM Activity Group on Uncertainty Quantification awarded the early career prize. Being the recipient of this prize in 2024 is a great honor and, to me, a great opportunity to look back.
Q: Could you tell us about the research that won you the award?
A: The stochastic gradient descent method is the workhorse of modern machine learning methods. Traditionally, it allows the user to minimize a large sum of suitable functions by considering only one or a few of the functions in each iteration of the algorithm. This makes stochastic gradient descent scalable in big data settings and, indeed, very popular in machine learning. There, it is often applied in a regime, with a so-called constant learning rate, in which it typically cannot converge to a fixed point. With such a constant learning rate, it may show a stationary behavior that may act as an additional implicit regularization of the training problem and is sometimes used for approximate uncertainty quantification. I have analyzed the stochastic gradient descent method via a novel continuous-time framework. Here, I studied the method's longtime behavior in the constant learning rate regime, as well as in the traditional setting with a decreasing learning rate. This framework has turned out to be quite powerful and applicable far beyond the basic stochastic gradient descent method, e.g., in Langevin Monte Carlo with Kexin Jin and Chenguang Liu, or ensemble Kalman inversion with Matei Hanu and Claudia Schillings.
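A minimal numerical illustration of the two regimes described above (a toy objective with made-up constants, not Dr. Latz's continuous-time analysis): with a constant learning rate the iterates keep fluctuating around the minimizer, while a decreasing learning rate lets them settle down.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(1000) + 2.0              # data; the minimizer is mean(a)

def sgd(steps, eta):
    """SGD on f(x) = (1/N) * sum_i (x - a_i)^2 / 2 with learning rate eta(k)."""
    x, history = 0.0, []
    for k in range(steps):
        i = rng.integers(len(a))
        x -= eta(k) * (x - a[i])                 # stochastic gradient step
        history.append(x)
    return np.array(history)

const = sgd(20_000, lambda k: 0.1)               # constant learning rate
decay = sgd(20_000, lambda k: 1.0 / (k + 1))     # decreasing learning rate

print("minimizer:", a.mean())
for name, h in [("constant", const), ("decreasing", decay)]:
    tail = h[-5_000:]
    print(f"{name:>10} rate: mean {tail.mean():.3f}, fluctuation (std) {tail.std():.3f}")
```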
Q: What does your work mean to the public?
A: Uncertainties appear in all parts of our lives. I am particularly interested in those that appear when blending mathematical models with data, in so-called inverse problems. Inverse problems appear in weather prediction, for example, but also when machine learning is used to build an artificial intelligence. Uncertainty arises here when data is inaccurate, not sufficiently informative, or the models are too complex. Quantifying these uncertainties is vital, especially when using the resulting models to make safety-critical decisions, e.g., in self-driving cars or medical imaging. In addition to uncertainty quantification, I work on several other aspects of inverse problems, such as algorithms for large data, as well as interpretability and robustness in artificial intelligence.
Q: What does being a member of SIAM mean to you?
A: Meeting and collaborating with researchers around the world is probably my favorite aspect of being a researcher. SIAM is a great place to make connections and start collaborations and is, as such, irreplaceable. My membership allows me to support the SIAM community and to be a part of it.