



Data Assimilation

Data assimilation is a process that enables observed data to be integrated into a forecast model. This can be useful at a number of stages of storm surge modelling. In forecasting mode it is important to correct boundaries and account for processes not included in the model. In the initial calibration and tuning phase of a model, data assimilation can be very useful for obtaining optimal parameter settings. Data assimilation methods are usually time-consuming and require a much more complex model set-up. For calibration and tuning, the computational cost of data assimilation is much less critical than when the model actually has to produce a forecast, so elaborate and expensive but accurate methods such as adjoint or inverse modelling can be applied. In the operational phase, timely production of forecasts is essential and the data assimilation method has to be cheap and fast enough to meet this requirement.


The crudest method for operational forecasting would be direct substitution of the observed values for the predicted values they represent. However, if the value at an observation point is changed in this way, it may no longer agree with the values at neighbouring grid points, introducing instability in the form of oscillations. Data assimilation schemes therefore attempt to modify the original predictions around the observation point, or on a larger scale, so that they are consistent with the observations. Sequential data assimilation uses only observations that precede the analysis time, whereas non-sequential (or retrospective) data assimilation performs the analysis at an intermediate point in the time domain, so that later observations can also contribute. Many assimilation approaches exist, most of which began in meteorology and then found their way into oceanography.
A brief outline of several methods is given below, followed by a discussion of possible options for the operational implementation of a data assimilation scheme. The techniques differ in their computational cost and in their usefulness for real-time forecasting.
For a more complete review see Ghil and Malanotte-Rizzoli (1991) or http://www.ecmwf.int/newsevents/training/lecture_notes/pdf_files/ASSIM/Ass_cons.pdf

      1. Ensemble forecasts

Just as important as a forecast of the sea level itself is an estimate of its accuracy. A traditional deterministic forecast of storm surge produces a single estimate of how the water level will evolve over time. Such a single forecast could fail to capture a particular non-linear combination of circumstances, consistent with only a small change to the model initial conditions, which could generate an extreme event. An ensemble modelling approach, already widely used in meteorology, addresses this by producing several simulations. Each simulation uses different initial conditions, boundary conditions and/or model physics, with the aim of sampling the range consistent with the uncertainty in (a) the observations and (b) the model itself.
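As a minimal illustration of how such an ensemble can be generated (a toy sketch, not a real surge model: the quadratic wind response in toy_surge and the perturbation amplitudes are invented purely for demonstration):

    import numpy as np

    rng = np.random.default_rng(0)

    def toy_surge(wind_speed, initial_level):
        """Toy stand-in for a surge model: a quadratic wind response plus a
        decaying memory of the initial water level (illustrative only)."""
        t = np.arange(wind_speed.size)
        return 0.002 * wind_speed**2 + initial_level * np.exp(-t / 12.0)

    # Control (unperturbed) run: a 48 h wind time series (m/s) and initial surge (m)
    control_wind = 15.0 + 10.0 * np.sin(np.linspace(0.0, np.pi, 48))
    control_level = 0.3

    n_members = 50
    ensemble = np.empty((n_members, control_wind.size))
    for m in range(n_members):
        # Perturb forcing and initial condition within their assumed uncertainties
        wind = control_wind + rng.normal(0.0, 1.5, control_wind.size)   # +/- 1.5 m/s
        level = control_level + rng.normal(0.0, 0.05)                   # +/- 5 cm
        ensemble[m] = toy_surge(wind, level)

The spread across members at each forecast time then provides a direct, if raw, measure of forecast uncertainty.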


The variation within such an ensemble of forecasts contains information on the accuracy of the forecasts which goes beyond a standard measure determined from verification of previous forecasts. However, interpretation of the ensemble forecasts is not always straightforward. Biases in the ensemble mean or in its spread may require calibration before meaningful results are obtained. Further exploitation of the ensemble results can lead to forecasts of the probability of exceeding certain critical levels.
These techniques can also be applied to storm surge forecasting. One storm surge model can be forced with the output from the various members of a meteorological ensemble, which may consist of perturbations of a single model. Such ensemble surge systems are currently run operationally in the UK (Flowerdew et al., 2009; Flowerdew et al., 2010) and in the Netherlands. Multi-model ensembles, in which forecasts of certain parameters from a number of different models, often from different institutes, are combined, are also possible. These could be different atmospheric forecasting systems, but the storm surge models of different agencies covering the same area can also be exploited, as will be shown elsewhere in this guide for the North Sea case. As an example, Figure 5.2 gives a surge forecast based upon the ECMWF ensemble. Skew surges from all individual members are shown together with calibrated probabilities; the calibration has been derived from an earlier winter season of forecasts. Orange asterisks are the observed surges.
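The raw (uncalibrated) probability of exceeding a critical level is simply the fraction of ensemble members above it. A short sketch with invented member values follows; any calibration, such as that used for Figure 5.2, would be fitted separately on past forecast/observation pairs:

    import numpy as np

    # Skew surges (m) predicted by 50 ensemble members at one tide gauge
    # (synthetic values for illustration only)
    member_surges = np.random.default_rng(1).normal(loc=1.2, scale=0.3, size=50)

    critical_level = 1.5   # warning threshold (m), illustrative

    # Raw probability: the fraction of members exceeding the threshold
    p_raw = np.mean(member_surges > critical_level)
    print(f"P(surge > {critical_level} m) = {p_raw:.2f}")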




Figure 5.2: Probabilistic surge forecast based upon the ECMWF ensemble (see text).



      2. The Cressman Scheme


One of the earliest and simplest successive correction schemes was published by Cressman (1959). It was used for objective weather map analysis at the US Weather Bureau in the late 1950s, and involved assimilating winds and pressure heights onto a grid covering the Northern Hemisphere. Values of a variable within a certain distance of each observation are corrected by adding to them some proportion of the error at the observation point (i.e. the difference between the observation and the corresponding prediction). The proportion applied at a given grid point depends on its distance from the observation, as defined by a weighting function. The result will be consistent with the smoothed observations, but not necessarily consistent with the model dynamics. One of the least satisfactory aspects of the Cressman scheme is the way in which the correction is relaxed as one moves away from the observation.
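A one-dimensional sketch of a single successive-correction pass using the Cressman weight w(r) = (R^2 - r^2)/(R^2 + r^2) for r < R (and zero beyond the influence radius R); the grid, observations and radius below are illustrative only:

    import numpy as np

    def cressman_weight(r, R):
        """Cressman (1959) weight: (R^2 - r^2)/(R^2 + r^2) for r < R, else 0."""
        return np.where(r < R, (R**2 - r**2) / (R**2 + r**2), 0.0)

    def cressman_pass(grid_x, background, obs_x, obs_val, R):
        """One correction pass of the background field towards the observations."""
        # Background interpolated to the observation locations
        bg_at_obs = np.interp(obs_x, grid_x, background)
        increments = obs_val - bg_at_obs        # observation minus background

        analysis = background.copy()
        for i, x in enumerate(grid_x):
            w = cressman_weight(np.abs(obs_x - x), R)
            if w.sum() > 0.0:
                # Weighted mean of nearby increments; relaxes to zero beyond R
                analysis[i] += np.sum(w * increments) / np.sum(w)
        return analysis

    # Illustrative use: a flat background corrected by two surge observations
    grid_x = np.linspace(0.0, 100.0, 101)       # km along the coast
    background = np.zeros_like(grid_x)          # model surge (m)
    analysis = cressman_pass(grid_x, background,
                             obs_x=np.array([30.0, 60.0]),
                             obs_val=np.array([0.40, 0.25]), R=20.0)

In practice several passes with a decreasing radius R are applied; the abrupt cut-off of the weight at R is one expression of the unsatisfactory relaxation behaviour mentioned above.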

      3. Statistical (Optimal) assimilation


In statistical (optimal) assimilation, large matrices of statistical information about forecast errors and observation errors are used to determine the model adjustment in space and time. In this scheme weights are applied to two estimates (i.e. the model and observed values) to give a best estimate. The weights are based on the error variances of the model and of the observations, and are chosen so that the overall error variance of the best estimate is minimised (it is an optimal least-squares estimation). Unlike in the Kalman filter, the background error covariance matrix is assumed to have a time-invariant form; this is usually some function of the distance between grid points and an assumed decorrelation length scale (inferred from physical knowledge, e.g. the Rossby deformation radius).
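A minimal one-dimensional sketch of this kind of least-squares analysis, assuming a Gaussian, time-invariant background error covariance built from a decorrelation length scale; the error variances and length scale are illustrative choices, not recommended values:

    import numpy as np

    def oi_analysis(grid_x, background, obs_idx, obs_val,
                    sigma_b=0.15, sigma_o=0.05, L=25.0):
        """One-dimensional statistical (optimal least-squares) analysis.

        sigma_b : assumed background (model) error std. dev. (m)
        sigma_o : assumed observation error std. dev. (m)
        L       : decorrelation length scale of the background errors (km)
        """
        # Time-invariant background error covariance: Gaussian in distance
        dist = grid_x[:, None] - grid_x[None, :]
        B = sigma_b**2 * np.exp(-0.5 * (dist / L)**2)

        # Observation operator H: select the model values at the observed points
        H = np.zeros((len(obs_idx), len(grid_x)))
        H[np.arange(len(obs_idx)), obs_idx] = 1.0
        R = sigma_o**2 * np.eye(len(obs_idx))    # observation error covariance

        # Gain that minimises the analysis error variance
        K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
        return background + K @ (obs_val - H @ background)

    # Illustrative use: correct a flat background with two gauge observations
    grid_x = np.linspace(0.0, 100.0, 101)        # km along the coast
    analysis = oi_analysis(grid_x, np.zeros_like(grid_x),
                           obs_idx=np.array([30, 60]),
                           obs_val=np.array([0.40, 0.25]))

The relative sizes of sigma_b and sigma_o control how strongly the analysis is drawn towards the observations, while L controls how far each correction spreads along the grid.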


