Old climate models are often evaluated on whether they made correct predictions of global warming. But if the old models were missing processes that we now know to be important, any correctness of their predictions would have to be attributed to a fortuitous compensation of errors, creating a paradoxical situation. Climate models are also tested for falsifiability by using them to predict the impact of short-term events like volcanic eruptions. But climate models do not exhibit the numerical convergence to a unique solution that is characteristic of small-scale computational fluid dynamics (CFD) models, like the ones that simulate flow over a wing. Compensating errors may obscure the convergence of individual components of a climate model. Lack of convergence suggests that climate modeling is facing a reducibility barrier, or perhaps even a reducibility limit.
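To make the notion of numerical convergence concrete, here is a minimal Python sketch (not from the source, and using a toy integral rather than a flow solver): a small-scale calculation settles on a unique answer as the grid is refined, which is precisely the behavior climate models do not exhibit.

```python
import numpy as np

# Toy illustration (not a climate model): approximate the integral of sin(x)
# on [0, pi] (exact answer: 2) with an ever-finer grid. The result converges
# to a unique solution as the grid spacing shrinks.
for n in [10, 100, 1000, 10000]:
    dx = np.pi / n
    x = (np.arange(n) + 0.5) * dx          # midpoint of each grid cell
    approx = np.sum(np.sin(x)) * dx        # midpoint-rule quadrature
    print(f"grid points = {n:6d}   result = {approx:.6f}   error = {abs(approx - 2):.2e}")

# Small-scale CFD-style calculations show this kind of convergence under grid
# refinement; climate simulations, with many interacting parameterized
# processes and compensating errors, do not.
```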
Climate is an emergent system with many interacting processes and components. Complexity is essential to accurately model the system and make quantitative predictions. But this complexity obscures the different compensating errors inherent in climate models. The Anna Karenina principle, which assumes that these compensating errors are random, is introduced. By using models with different formulations for small-scale processes to make predictions and then averaging them, we can expect to cancel out the random errors. This multimodel averaging can increase the skill of climate predictions, provided the models are sufficiently diverse. Climate models tend to borrow formulations from each other, which can lead to “herd mentality” and reduce model diversity. The need to preserve the diversity of models works against the need for replicability of results from those models. A compromise between these two conflicting goals becomes essential.
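A minimal sketch of the multimodel-averaging argument, assuming (as the Anna Karenina principle does) that each model's error is independent and random; the numbers below are invented for illustration and are not from the source.

```python
import numpy as np

rng = np.random.default_rng(42)

true_warming = 3.0   # hypothetical "true" warming (degrees C)
n_models = 20        # number of independently formulated models
error_std = 0.8      # spread of each model's random error (degrees C)

# Each model's prediction = truth + an independent random error
# (the Anna Karenina assumption: errors are random, not shared).
predictions = true_warming + rng.normal(0.0, error_std, size=n_models)

multimodel_mean = predictions.mean()

print(f"typical single-model error : ~{error_std:.2f}")
print(f"multimodel-mean error      : {abs(multimodel_mean - true_warming):.2f}")

# The error of the mean shrinks roughly as error_std / sqrt(n_models), but
# only if the models stay diverse enough that their errors remain independent;
# "herd mentality" among models breaks this assumption.
```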
The Leaning Tower of Pisa, used by Galileo to demonstrate the simplicity of science, is also a testament to the complexity of science. Over an 800-year period, multiple attempts were made to fix the errors in the tower’s construction that caused it to lean. Often, the fixes had unanticipated consequences, necessitating additional compensating fixes. Climate models face a similar problem. The models use approximate formulas called parameterizations, with adjustable parameters, to represent processes like clouds that are too fine to be resolved by the model grids. The optimal values of these parameters that minimize simulation errors are determined by a trial-and-error process known as “model tuning.” Tuning minimizes errors in simulating current and past climates, but it cannot guarantee that the predictions of the future will be free of errors. This means that models can be confirmed, but they cannot be proven to be correct.
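A minimal sketch of what tuning amounts to, using a hypothetical one-parameter toy "parameterization" (the function, parameter name, and numbers are invented for illustration): the parameter is swept by trial and error until the simulated past matches observations, but the tuned value carries no guarantee for future predictions.

```python
import numpy as np

# Hypothetical toy parameterization: simulated response as a simple function
# of one adjustable parameter ("entrainment"); not a real cloud scheme.
def toy_simulation(entrainment, forcing):
    return forcing / (1.0 + entrainment)

observed_past = 1.1   # hypothetical observed response to past forcing
past_forcing = 2.0

# "Tuning": sweep the adjustable parameter and keep the value that minimizes
# the error against the observed (past) climate.
candidates = np.linspace(0.1, 2.0, 200)
errors = [abs(toy_simulation(e, past_forcing) - observed_past) for e in candidates]
best = candidates[int(np.argmin(errors))]

print(f"tuned entrainment parameter: {best:.2f}")
print(f"error against past climate : {min(errors):.3f}")

# The tuned value reproduces the past, but nothing guarantees it remains
# optimal under a future, stronger forcing: the model is confirmed, not proven.
print(f"projection for future forcing 4.0: {toy_simulation(best, 4.0):.2f}")
```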