How to Boost Your Creativity
As a SIAM News reader, you are a creative person — you would not have gotten where you are today without being creative. However, you may not understand exactly how your creativity works or how to “turn it on” when you need it. You also might not know how to train others to be creative.
In this article, we describe the basic idea behind a six-step process for creativity that Dennis Sherwood developed over the last 20 years. We have used this process together in creativity workshops during the last decade, working with groups ranging from the numerical analysis group at the University of Manchester to the SIAM leadership.
Our approach is based on the insight that creativity is not so much about creating something totally new as about identifying something different. The search for something different is much easier than the search for something new, for “different” means “different from now” and “now” is visible all around us. So if we observe “now” very carefully, we might notice some feature of “now” that could be different, and ideally better too, and from this feature an idea might spring. Creativity therefore does not necessarily require an act of genius, or a lightning strike out of the blue. Rather, good ideas can be discovered as the result of detailed observation coupled with curiosity, and can follow a systematic process that can be applied in any circumstance.
Two of the key steps in our procedure are to write down every feature of the focus of attention (which could be a mathematical problem or something else entirely), and then ask “How might this be different?” for each one. Here we provide a glimpse into the process with an old and familiar example: iterative refinement for improving an approximate solution to a linear system \(Ax=b\), where \(A\) is a square, nonsingular matrix. The basic algorithm in its original form is as follows, and we assume that it is carried out in double-precision floating-point arithmetic:
1. Solve \(Ax=b\).
2. Compute \(r=b-Ax\) in quadruple precision.
3. Solve \(Ad=r\).
4. Update \(x \leftarrow x + d\).

Repeat from step 2 if necessary.
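To make the four steps concrete, here is a minimal NumPy/SciPy sketch (the function name, tolerance, and iteration limit are illustrative). Standard NumPy does not provide true quadruple precision, so np.longdouble, typically 80-bit extended precision on x86 hardware, stands in for the extra-precision residual of step 2.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def iterative_refinement(A, b, max_iter=10, tol=1e-14):
    """Classical iterative refinement with an extra-precision residual (a sketch)."""
    lu, piv = lu_factor(A)                      # step 1: LU with partial pivoting...
    x = lu_solve((lu, piv), b)                  # ...and solve Ax = b
    A_hi = A.astype(np.longdouble)              # stand-in for quadruple precision
    b_hi = b.astype(np.longdouble)
    for _ in range(max_iter):
        r_hi = b_hi - A_hi @ x.astype(np.longdouble)
        r = np.asarray(r_hi, dtype=np.float64)  # step 2: residual in extra precision, rounded back
        d = lu_solve((lu, piv), r)              # step 3: correction from the same LU factors
        x = x + d                               # step 4: update
        if np.linalg.norm(d, np.inf) <= tol * np.linalg.norm(x, np.inf):
            break                               # stop once the correction is negligible
    return x

# Example usage on a random test problem:
# A = np.random.rand(200, 200); b = np.random.rand(200)
# x = iterative_refinement(A, b)
```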
James H. Wilkinson programmed iterative refinement in 1948 using LU factorization with partial pivoting for the solves in steps 1 and 3. For step 2, he took advantage of the ability of his computer—the Pilot Automatic Computing Engine at the National Physical Laboratory—to accumulate inner products in quadruple precision at no extra cost.
Iterative refinement became popular and was implemented in this way for the next 25 years or so. Several textbooks from the 1960s and 1970s made statements such as “It is absolutely essential that the residuals be computed in extra precision,” and the method seemed to be set in stone. However, every aspect of iterative refinement is amenable to the question “How might this be different?”, and the answers to this question have yielded a panoply of different versions of the method.
Here is a thumbnail sketch of some iterative refinement variants, each of which is identified by the feature that distinguishes it from the aforementioned version. Specific references for these and other developments are given in [1, 2].
Precision of the Residual
The residual \(r\) does not need to be computed with extra precision, at least not if the aim is to improve the backward stability rather than the accuracy. This was realized in the 1970s, by which time most computers could no longer accumulate inner products in quadruple precision for free. The finding also opened up the possibility of using a somewhat unstable solver, because a few refinement steps with working-precision residuals can restore backward stability.
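As a sketch of this fixed-precision variant (again with illustrative names and tolerances), only the residual line changes relative to the code above: everything, including \(r=b-Ax\), stays in double precision.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def fixed_precision_refinement(A, b, max_iter=10, tol=1e-14):
    """Iterative refinement with the residual at the working precision (a sketch)."""
    lu, piv = lu_factor(A)
    x = lu_solve((lu, piv), b)
    for _ in range(max_iter):
        r = b - A @ x                           # residual at the working (double) precision
        d = lu_solve((lu, piv), r)
        x = x + d
        if np.linalg.norm(d, np.inf) <= tol * np.linalg.norm(x, np.inf):
            break
    return x
```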
Precision of the Factorization
The algorithm still works if step 1 uses an LU factorization that is computed in single precision, as long as \(A\) is not nearly singular to single precision. This observation was made in the 2000s and was important because processors were appearing on which single-precision arithmetic was much faster than double-precision arithmetic.
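A hedged sketch of this variant in NumPy/SciPy: the LU factorization and the triangular solves run in single precision (np.float32), while the residual and the update stay in double precision. The matrix is assumed to be far enough from singularity for single precision to cope.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def refinement_single_lu(A, b, max_iter=20, tol=1e-14):
    """Double-precision refinement built on a single-precision LU factorization (a sketch)."""
    lu32, piv = lu_factor(A.astype(np.float32))               # step 1: factorize in single precision
    x = lu_solve((lu32, piv), b.astype(np.float32)).astype(np.float64)
    for _ in range(max_iter):
        r = b - A @ x                                          # step 2: residual in double precision
        d = lu_solve((lu32, piv), r.astype(np.float32)).astype(np.float64)
        x = x + d                                              # step 4: update in double precision
        if np.linalg.norm(d, np.inf) <= tol * np.linalg.norm(x, np.inf):
            break
    return x
```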
The Solver
The solver in steps 1 and 3 does not need to be LU factorization with backward and forward substitution. Iterative refinement works with an arbitrary solver, subject to some assumptions on its numerical behavior. Moreover, the solver in step 3 does not have to be the same as the one in step 1. For step 3, the generalized minimal residual method (GMRES)—preconditioned by the LU factors that were computed in step 1—increases both the accuracy and the range of matrix condition numbers for which convergence is obtained.
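A sketch of this GMRES-based variant using SciPy (the names, tolerances, and choice of single precision for the factorization are illustrative): the correction equation \(Ad=r\) in step 3 is solved by GMRES, preconditioned by applying the stored LU factors.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve
from scipy.sparse.linalg import LinearOperator, gmres

def gmres_based_refinement(A, b, max_iter=10, tol=1e-14):
    """Refinement with LU-preconditioned GMRES as the step 3 solver (a sketch)."""
    n = A.shape[0]
    lu32, piv = lu_factor(A.astype(np.float32))    # step 1: (low-precision) LU factorization
    x = lu_solve((lu32, piv), b.astype(np.float32)).astype(np.float64)
    apply_A = LinearOperator((n, n), matvec=lambda v: A @ v, dtype=np.float64)
    # Preconditioner: approximate A^{-1} by applying the stored LU factors.
    precond = LinearOperator(
        (n, n),
        matvec=lambda v: lu_solve((lu32, piv), v.astype(np.float32)).astype(np.float64),
        dtype=np.float64,
    )
    for _ in range(max_iter):
        r = b - A @ x                              # step 2: residual in double precision
        d, _info = gmres(apply_A, r, M=precond)    # step 3: LU-preconditioned GMRES
        x = x + d                                  # step 4: update
        if np.linalg.norm(d, np.inf) <= tol * np.linalg.norm(x, np.inf):
            break
    return x
```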
Interplay of Precisions
This process is not limited to two precisions; we can independently choose the precisions in steps 1 through 3 and even employ more than one precision inside the solvers used in steps 1 and 3. We can determine good combinations of precisions based on the relative speed of the arithmetics and the properties of the matrix.
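As a sketch of the general pattern (the particular dtype defaults below are illustrative, not recommendations), the factorization, working, and residual precisions can be treated as independent parameters.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def refinement_three_precisions(A, b, fact_dtype=np.float32, work_dtype=np.float64,
                                res_dtype=np.longdouble, max_iter=20, tol=1e-14):
    """Refinement with independent factorization, working, and residual precisions (a sketch)."""
    lu, piv = lu_factor(A.astype(fact_dtype))            # factorization precision
    x = lu_solve((lu, piv), b.astype(fact_dtype)).astype(work_dtype)
    A_res, b_res = A.astype(res_dtype), b.astype(res_dtype)
    for _ in range(max_iter):
        r_hi = b_res - A_res @ x.astype(res_dtype)        # residual precision
        r = np.asarray(r_hi, dtype=work_dtype)
        d = lu_solve((lu, piv), r.astype(fact_dtype)).astype(work_dtype)
        x = x + d                                         # working-precision update
        if np.linalg.norm(d, np.inf) <= tol * np.linalg.norm(x, np.inf):
            break
    return x
```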
Hardware
We can deploy this method on more than just digital hardware. It has been used with the correction equation in step 3 solved using low-precision analog circuitry and residuals computed using higher-precision digital circuitry.
Multiprecision
Computer hardware provides floating-point arithmetic in a limited number of precisions, but arbitrary-precision arithmetic is available in software. Iterative refinement can be done with a precision that increases during the iteration until any desired accuracy is obtained.
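As a sketch of this idea using the mpmath library for arbitrary-precision arithmetic (mpmath is assumed to be available, and the function and parameter names are illustrative), the working precision is roughly doubled at each step until a target number of digits is reached.

```python
from mpmath import mp, matrix, lu_solve, norm, inf, mpf

def increasing_precision_refinement(A_rows, b_entries, target_dps=50, max_iter=20):
    """Iterative refinement with a precision that grows during the iteration (a sketch)."""
    mp.dps = 16                                   # start at roughly double precision
    A = matrix(A_rows)                            # A_rows, b_entries: plain nested lists; exact data
    b = matrix(b_entries)                         # (e.g., integers or strings) benefits most
    x = lu_solve(A, b)                            # initial solve
    for _ in range(max_iter):
        mp.dps = min(2 * mp.dps, target_dps)      # raise the precision
        A = matrix(A_rows)                        # re-form the data at the new precision
        b = matrix(b_entries)
        r = b - A * x                             # residual at the current precision
        d = lu_solve(A, r)                        # correction
        x = x + d
        if mp.dps >= target_dps and norm(d, inf) <= mpf(10)**(-target_dps) * norm(x, inf):
            break
    return x
```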
These are just some of the ways in which iterative refinement has been modified over its more than 70 years of use on digital computers; many of the changes were motivated by the evolution of computer architectures. Any numerical analyst could have discovered these variations by observing all of the algorithm’s features and asking “How might this be different?” for each one. It is essential, however, not to judge ideas too quickly after generating them, as doing so risks prematurely throwing out some of the best ones. Having noted that we can compute the residual at the working precision, we might be tempted to dismiss this idea in light of conventional wisdom. But great ideas often go against conventional wisdom!
We invite you to try the steps that we have illustrated on a topic that interests you. Write down all of its features and underlying assumptions and ask how each one might be different. Make sure not to evaluate your ideas too soon.
While our process can be applied by individuals on their own, it is most powerful when used in small groups. Different people spot different features of a focus of attention, and a group produces a wider selection of ways in which those features can be changed than any individual is likely to do.
Our creativity workshops consist of between six and 20 people who work in groups of up to eight, ideally meeting for one or two days. They can generate a large number of ideas for future evaluation. These workshops are great ways to train people in creativity and help build a team environment in which creativity flourishes.
For full details about our six-step creativity process—as well as other techniques for generating ideas—see our new SIAM book: How to Be Creative: A Practical Guide for the Mathematical Sciences, which is available for purchase at bookstore.siam.org.
References
[1] Carson, E., & Higham, N.J. (2018). Accelerating the solution of linear systems by iterative refinement in three precisions. SIAM J. Sci. Comput., 40(2), A817-A847.
[2] Higham, N.J., & Mary, T. (2022). Mixed precision algorithms in numerical linear algebra. Acta Numerica. To appear.
About the Authors
Nicholas J. Higham
Professor, University of Manchester
Nicholas J. Higham, who passed away in January 2024, was a Royal Society Research Professor and Richardson Professor of Applied Mathematics at the University of Manchester. He was a past president of SIAM.
Dennis Sherwood
Consultancy Business Owner
Dennis Sherwood runs his own consultancy business that specializes in organizational creativity and innovation, with a particular interest in applications to mathematics, the sciences, and engineering.