The Concept of Finite Differences Enters the Mathematical Scene
The earliest concrete idea of how to simulate a partial derivative with an algebraic difference quotient was the brainchild of L. F. Richardson in
1910.[767] He was the first to introduce the numerical solution of partial differential equations by replacing each derivative in the equations with an algebraic expression involving the values of the unknown dependent variables in the immediate neighborhood of a point and then solving simultaneously the resulting massive system of algebraic equations at all grid points. Richardson named this approach a “finite-difference solution,” a name that has come down without change since 1910. Richardson did not attempt to solve the Navier-Stokes equations, however. He chose a problem reasonably described by a simpler partial differential equation, Laplace’s equation, which in mathematical speak is a linear partial differential equation and which mathematicians classify as an elliptic partial differential equation.[768] He set up a numerical approach, still used today for the solution of elliptic partial differential equations, called a relaxation method: a sweep is taken through the whole grid, new values of the dependent variables are calculated from the old values at neighboring grid points, and the sweep is repeated over and over until the new values at each grid point converge to the old values from the previous sweep, i.e., the numbers eventually “relax” to the correct solution.
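A minimal sketch of this kind of relaxation sweep, written in the spirit of Richardson’s method rather than as a reconstruction of his actual calculation, is the Jacobi iteration below; the grid size, boundary values, and convergence tolerance are illustrative assumptions.

```python
import numpy as np

def relax_laplace(n=41, tol=1e-5, max_sweeps=20_000):
    """Jacobi relaxation for Laplace's equation on a square grid.

    Each sweep replaces the value at every interior grid point with the
    average of the old values at its four neighbors; sweeps repeat until
    the change between successive sweeps falls below `tol`. The grid size
    and boundary values are illustrative, not Richardson's original problem.
    """
    u = np.zeros((n, n))
    u[0, :] = 1.0  # assumed boundary condition: one edge held at 1, the rest at 0
    for sweep in range(1, max_sweeps + 1):
        u_new = u.copy()
        # new interior value = average of old values at the four neighboring grid points
        u_new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                    u[1:-1, :-2] + u[1:-1, 2:])
        if np.max(np.abs(u_new - u)) < tol:  # the numbers have "relaxed"
            return u_new, sweep
        u = u_new
    return u, max_sweeps

if __name__ == "__main__":
    _, sweeps = relax_laplace()
    print(f"relaxation stopped after {sweeps} sweeps")
```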
In 1928, Richard Courant, K. O. Friedrichs, and Hans Lewy published “On the Partial Difference Equations of Mathematical Physics,” a paper many regard as marking the real beginning of modern finite-difference solutions. “Problems involving the classical linear partial differential equations of mathematical physics can be reduced to algebraic ones of a very much simpler structure,” they wrote, “by replacing the differentials by difference quotients on some (say rectilinear) mesh.”[769] Courant, Friedrichs, and Lewy introduced the idea of “marching solutions,” whereby a spatial marching solution starts at one end of the flow and literally marches the finite-difference solution step by step from one
end to the other end of the flow. A time marching solution starts with all the flow variables at each grid point at some instant in time and marches the finite-difference solution at all the grid points in steps of time to some later value of time. These marching solutions can be carried out only for parabolic or hyperbolic partial differential equations, not for elliptic equations.
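As a concrete illustration of time marching (a modern sketch, not the problem or scheme analyzed in the 1928 paper), the snippet below advances the one-dimensional linear advection equation, a simple hyperbolic model equation, from an assumed initial condition using an explicit upwind difference; the wave speed, grid, and initial profile are arbitrary choices made for the example.

```python
import numpy as np

# Time-marching sketch for the 1-D linear advection equation
#   du/dt + c * du/dx = 0   (a hyperbolic model equation)
# using an explicit first-order upwind difference. All numbers below are
# illustrative assumptions, not values from the historical work.

c = 1.0                     # assumed wave speed
nx, nt = 101, 80            # grid points in space, number of time steps
dx = 1.0 / (nx - 1)
dt = 0.8 * dx / c           # time step kept inside the stability limit

x = np.linspace(0.0, 1.0, nx)
u = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)  # flow variable at every grid point at t = 0

for _ in range(nt):
    # march the whole grid forward one step in time: the new value at each
    # point depends only on old values at that point and its upwind neighbor
    u[1:] = u[1:] - c * dt / dx * (u[1:] - u[:-1])

print("pulse now centered near x =", x[np.argmax(u)])
```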
Courant, Friedrichs, and Lewy highlighted another important aspect of numerical solutions of partial differential equations. Anyone attempting numerical solutions of this nature quickly finds out that the numbers being calculated begin to look funny, make no sense, oscillate wildly, and finally result in some impossible operation such as dividing by zero or taking the square root of a negative number. When this happens, the solution has blown up, i.e., it becomes no solution at all. This is not a ramification of the physics but rather a peculiarity of the numerical process. Courant, Friedrichs, and Lewy studied the stability of numerical solutions and discovered an essential criterion for maintaining stability in the numerical calculations. Today, this stability criterion is referred to as the “CFL criterion” in honor of the three who identified it. Without it, many attempted CFD solutions would end in frustration.
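In its standard modern textbook form for an explicit upwind scheme like the one sketched above (an illustration of the idea, not the original 1928 statement), the criterion restricts the time step through the ratio

\[
\frac{c\,\Delta t}{\Delta x} \le 1,
\]

i.e., in a single time step the numerical scheme must not try to carry information farther than one grid spacing; the factor 0.8 used to choose dt in the sketch keeps the calculation safely inside this limit.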
So by 1928, the academic foundations of finite-difference solutions of partial differential equations were in place. The Navier-Stokes equations finally stood on the edge of being solved, albeit numerically. But who had the time to carry out the literally millions of calculations required to step through such a solution? For all practical purposes, it was an impossible task, one beyond human endurance. Then came the electronic revolution and, with it, the digital computer.