As discussed in the previous update, I aimed to carry out 2000 model runs to look at how the model output (river discharge) varied as model parameters, such as channel roughness, were varied randomly.
The greatest difficulty with this has been making sure that everything is set up correctly for the model runs. This has involved quite a bit of trial and error - often a lot of model errors - to get everything right. As I automate all my simulations, it would be very annoying to find after 2000 of them that I'd made a mistake. However, the simulations have now been carried out (with the help of Stuart letting me use his computer as well as mine).
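As a rough illustration of what an automated batch like this looks like: the parameter names, ranges and `run_model` callable below are entirely hypothetical placeholders, not the actual setup used for these runs.

```python
import random

# Illustrative parameter ranges only - not the real model's values.
PARAM_RANGES = {
    "channel_roughness": (0.02, 0.10),  # e.g. Manning's n
    "rainfall_rate":     (5.0, 50.0),   # e.g. mm/hr
}

def sample_parameters(rng):
    """Draw one random parameter set, uniform within each range."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in PARAM_RANGES.items()}

def run_batch(n_runs, run_model, seed=0):
    """Run the model n_runs times with randomly sampled parameters.

    run_model is a callable that takes a parameter dict and returns
    a simulated discharge series; results are (params, series) pairs.
    """
    rng = random.Random(seed)  # fixed seed so a failed batch can be repeated
    results = []
    for _ in range(n_runs):
        params = sample_parameters(rng)
        results.append((params, run_model(params)))
    return results
```

Fixing the random seed is one simple guard against the "mistake after 2000 runs" problem: the same parameter sets can be regenerated and individual runs repeated.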
I have used a number of different objective functions to analyse the accuracy of the model's predictions of the flood hydrograph. These attempt to quantify the goodness of fit between my observed (2000 flood event) and simulated hydrographs. Amongst these was the Nash-Sutcliffe model efficiency (NSME), which measures the variation between the observed and the simulated hydrograph.
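The NSME statistic itself is a short calculation - one minus the ratio of the squared model errors to the variance of the observations, so 1 is a perfect fit and 0 means the model does no better than simply predicting the mean observed discharge:

```python
import numpy as np

def nsme(observed, simulated):
    """Nash-Sutcliffe model efficiency.

    1.0 is a perfect fit; 0.0 means the simulation is no better than
    the mean of the observations; negative values mean it is worse.
    """
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    errors = np.sum((observed - simulated) ** 2)
    variance = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - errors / variance
```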
However, Nash-Sutcliffe values were very poor for nearly all of the simulations, as were the results showing the error in the predicted flood peak. The low values were quite discouraging, as they suggest that, at least when varying parameter values, the model is a poor representation of the observed flood hydrograph.
However, there are several potential positives to take from the simulations:
- Firstly, it is now even clearer that the model is very sensitive to the rainfall rate applied. It stands to reason that a very small or very (very!) large rainfall rate is unlikely to produce a flood hydrograph similar to that observed in 2000. On reflection this is good: we would expect the flood hydrograph to change quite considerably under different rainfall rates.
- Secondly, the NSME statistic can give misleading results if it is not examined closely.
For example, it is biased towards the highest flows, so a model can be given a low NSME value even if most of the flood hydrograph is correctly predicted.
Errors in the timing of the flood peak can also affect the statistic.
For example, it is possible to produce a fairly good qualitative simulation of the observed hydrograph of the 2000 event using a set rainfall rate. Simply looking at this shows that the timing of the peaks is slightly out, and this can greatly affect NSME values. If the timing error is corrected for, very high NSME values can be achieved, indicating a good fit.
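This timing effect is easy to demonstrate with a synthetic example: an identically shaped hydrograph whose peak arrives two time steps late scores a negative NSME, yet aligning the peaks restores a perfect score. The data here are made up for illustration.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 = perfect, <= 0 = worse than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# A synthetic flood peak, and the same shape simulated two steps late.
obs = np.array([1, 2, 5, 9, 6, 3, 2, 1, 1, 1], float)
sim = np.roll(obs, 2)  # identical hydrograph, peak delayed by 2 steps

lag = np.argmax(obs) - np.argmax(sim)  # negative: simulated peak is late
aligned = np.roll(sim, lag)            # shift the peaks into alignment

raw = nse(obs, sim)          # strongly negative despite the identical shape
corrected = nse(obs, aligned)  # 1.0 once the timing error is removed
```

The raw score brands a perfectly shaped hydrograph as worse than predicting the mean flow, which is exactly why these statistics need a close look before the model is written off.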
We therefore believe the poor results are far from suggesting the model is not useful, and they highlight the importance of looking not just at results but at how they are analysed.
As the model is very sensitive to rainfall rate, only a small range of rates is likely to produce a good simulation of the flood hydrograph; this is why, out of my 2000 simulations, very few combinations of variables produced 'good' results.
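One way to pull those few 'good' runs out of the batch is to filter on the objective function, keeping only the runs whose NSME clears some threshold; the threshold of 0.5 below is an illustrative choice, not a value used in this study.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 = perfect, <= 0 = worse than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def behavioural_runs(observed, runs, threshold=0.5):
    """Keep only the (params, simulated) pairs whose NSME exceeds threshold.

    runs is a list of (parameter-dict, simulated-series) pairs; the
    surviving parameter sets are the ones worth examining for trends.
    """
    return [(params, sim) for params, sim in runs
            if nse(observed, sim) > threshold]
```

The parameter sets that survive this filter are where trends, such as the narrow band of workable rainfall rates, should show up.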
However, after some extra thought, useful information can be drawn from the simulations. Trends in parameter values can be seen and will be examined more closely in time. A likely next step is calibrating the model using several different rainfall rates as the event progresses. It is hoped this may offer more accurate results and allow better investigation of the model parameters.
On the plus side, it is now possible to run simulations much faster by spreading them across several computers, so results can be obtained much sooner.
Tomorrow I shall update you on results from my screening simulations, which are quite interesting.
Ed