Inaugural lecture by Klaus Mosegaard
Statistics in geophysical modeling: A good servant or a bad master?
Abstract: Ernest Rutherford, the New Zealand-born British physicist who laid the groundwork for the development of nuclear physics, once said: "If your experiment needs statistics, you ought to have done a better experiment!" While there is some general truth in this statement, there is no doubt that in geophysics our data are often so sparse, so insufficient, so inaccurate, and so inconsistent that some additional constraints (perhaps from statistics?) are needed to obtain a reasonable model of the Earth.
Our journey through the use (and abuse) of statistics begins around 1800 with the invention of "Least Squares", which has since become an accepted technique in, for example, linear regression, signal processing, and curve fitting. This method, which is built on assumptions of Gaussian statistics, also dominates geophysical modeling today. It is widely used to supply missing information in the many cases where data are insufficient to determine a unique solution. We shall investigate how the use of Least Squares influences current Earth models, often with poor results.
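As a concrete (if minimal) illustration of the idea, and not an example taken from the lecture itself, the sketch below fits a straight line to synthetic noisy data by minimizing the sum of squared residuals, the criterion that is optimal under Gaussian noise. All numbers are made up.

```python
# Minimal least-squares sketch (illustrative only): fit d = m0 + m1*x
# to noisy synthetic "data" by minimizing the sum of squared residuals.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 20)
d = 2.0 + 0.5 * x + rng.normal(scale=0.3, size=x.size)  # hypothetical observations

G = np.column_stack([np.ones_like(x), x])  # design matrix of the linear model
m, *_ = np.linalg.lstsq(G, d, rcond=None)  # least-squares estimate of (m0, m1)
print(f"intercept = {m[0]:.3f}, slope = {m[1]:.3f}")
```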
After these introductory observations we shall look at some recent work where Gaussian assumptions are replaced by more complex statistics, based on empirical data. This is done through an application of Bayes' Rule, where data are combined with "prior information" derived from observations of real Earth structure.
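Schematically, and again only as an illustration with made-up numbers, Bayes' Rule combines a prior distribution with a data likelihood into a posterior. The sketch below performs the textbook conjugate update for a single Gaussian-distributed Earth parameter (a hypothetical layer velocity); the empirical, non-Gaussian priors discussed in the lecture would instead be handled numerically.

```python
# Minimal Bayes' Rule sketch (illustrative, hypothetical numbers): combine a
# noisy measurement of a scalar Earth parameter with a Gaussian prior. For a
# Gaussian prior and likelihood, the posterior is Gaussian with a
# precision-weighted mean.
prior_mean, prior_std = 3.5, 0.5  # hypothetical prior from observed Earth structure
obs, obs_std = 3.9, 0.3           # hypothetical measurement and its uncertainty

w_prior = 1.0 / prior_std**2      # precisions (inverse variances)
w_obs = 1.0 / obs_std**2
post_mean = (w_prior * prior_mean + w_obs * obs) / (w_prior + w_obs)
post_std = (w_prior + w_obs) ** -0.5

print(f"posterior: mean = {post_mean:.3f}, std = {post_std:.3f}")
```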
Finally, we will explore the prospects for replacing statistical assumptions with constraints based purely on physics. In general, when Earth models are computed, a number of physical properties of the Earth are not involved in the modeling, and each of these properties may potentially provide constraints to help resolve the ambiguity in geophysical data.
Coffee and Cookies