Dr. Douglas Tolley gives an explanatory presentation on integrated hydrologic modeling, including how models are coupled and key terms such as sensitivity analysis, model calibration, and uncertainty analysis.
Douglas Tolley recently finished his PhD at UC Davis, where his research focused on the development, calibration, and prediction uncertainty of groundwater models. In this presentation from the Groundwater Resources Association’s Western Groundwater Congress, Dr. Tolley focused on integrated hydrologic model development and evaluation from the standpoint of a non-modeler, with the goal of giving the layperson a fundamental understanding of what integrated hydrologic modeling is, its applications, and the processes involved in model development itself, such as sensitivity analysis, calibration, and uncertainty analysis.
There are a number of different hydrologic models that do many different things, but at their core, integrated models simulate fluxes between two or more subsystems: groundwater and surface water; agriculture, surface water, and groundwater; or, in more complicated cases, climate, the soil zone, groundwater, and surface water all together.
Integrated modeling is a little more challenging than traditional numerical modeling because these different subsystems typically require different equations that simulate the physical processes, and the different subsystems tend to operate at different spatial scales and time scales as well, so trying to combine those into a single model can be pretty challenging.
Three ways to couple models together
Weakly coupled models
The first and simplest method is to weakly couple the models: fluxes are calculated by one model and passed on to the next, and so on down the chain. For example, a climate model can pass precipitation and ET down to a soil zone model, and the soil zone model can pass recharge to a groundwater model.
However, Dr. Tolley noted that there isn’t anything passing back up the chain; there are no feedbacks between the systems. The benefit is that it’s simple, relatively cheap, and easy; the downside is important feedbacks that happen between the systems can be missed.
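A weakly coupled chain can be pictured as a one-way cascade of function calls. The sketch below is purely illustrative: the submodel names, the numbers, and the trivially simple budget arithmetic are made up, not taken from any real model.

```python
# Sketch of weak (one-way) coupling: each submodel runs in sequence and
# passes its fluxes downstream; nothing is passed back up the chain.
# All names and values are hypothetical, for illustration only.

def climate_model():
    """Return precipitation and ET for one time step (mm)."""
    return {"precip": 40.0, "et": 25.0}

def soil_zone_model(precip, et):
    """Crude soil budget: whatever isn't evaporated becomes recharge."""
    return {"recharge": max(precip - et, 0.0)}

def groundwater_model(recharge, storage=1000.0):
    """Add recharge to aquifer storage; no feedback to the soil zone."""
    return {"storage": storage + recharge}

# One-way cascade: climate -> soil zone -> groundwater
atm = climate_model()
soil = soil_zone_model(atm["precip"], atm["et"])
gw = groundwater_model(soil["recharge"])
```

Note that a shallow water table could never influence `soil_zone_model` here, which is exactly the missed feedback described above.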
“If the groundwater table is relatively deep, this may not be a problem, but if the groundwater levels come up into your soil zone, then that can affect processes that happen in this model,” he said. “If you’re doing a more regional model, then maybe you are having evapotranspiration effects that are affecting your climate, so these are things you might miss with these relatively simple weakly coupled models.”
Iteratively coupled models
The second way to put models together is to couple the models iteratively, meaning that conditions are solved for separately within each model and the fluxes are passed back and forth until some predetermined convergence criterion is reached.
“What this allows is for feedbacks to happen between the two systems, so what happens in the surface water system can affect groundwater, and vice versa,” Dr. Tolley said. “This is great; we have increased complexity, which is what we want. We want to be able to represent these complex processes that are happening, but increased complexity comes with additional computational cost as well as additional non-linearities.”
Fully coupled models
The third way to couple models is to fully integrate them, or fully couple them: all of the different equations are assembled into one very large matrix that is solved together. The problem is that these models are incredibly computationally expensive and require a very high degree of parameterization. A fully integrated model can take a day to a day and a half to run, where the same model, iteratively coupled, might take just a few hours. Fully coupled models are mostly relegated to academia at the moment because they are so expensive and they provide a level of detail that may not be needed, he said.
Picking the right model
When developing any kind of model, it’s important to understand what questions you are asking the model to answer.
“You don’t necessarily need a flashy Ferrari if you’re a family of four on a budget and you just need to get back and forth to soccer practice, because a van can suit your needs quite a bit better than a Ferrari can, so you need to keep in mind what you’re asking the model to do,” said Dr. Tolley.
Modeling the Scott Valley system
“In the context of the Scott Valley, we’re really concerned with late summer streamflow,” Dr. Tolley said, presenting a plot showing the average daily streamflow in the Scott River. “We have wet winters and dry summers because we’re in a Mediterranean climate, and if we superimpose our growing season, we can see that in the late summer when we have these cumulative effects from pumping, these can result in stream disconnections and increased water temperatures, both of which are really bad for fish.”
The Scott Valley model was built to understand the hydrogeology of the area; he noted that models are fantastic for integrating many different types of information, and they are great data repositories.
“When we can understand what’s happening with the system, then we can start applying changes and see what would happen,” he said. “Late summer is the critical streamflow period because the fall runs of chinook and coho tend to come up around this time. This mostly affects the chinook currently, but the coho can be affected as well, depending on how late that reconnection is. So what if we change management strategies that may affect streamflow during this time?”
The Scott Valley Integrated Hydrologic Model or SVIHM is the combination of three separate models: an upper watershed model that estimates streamflow coming into the valley; a soil-budget model that does a field-scale accounting of groundwater pumping and recharge; and a model that calculates detailed groundwater levels and streamflow.
“These three models are weakly coupled,” Dr. Tolley said. “You can see that we’re cascading down, but the groundwater-surface water component is iteratively coupled, so those are being solved together at every single time-step.”
The model does a detailed water budget at the field scale for pumping and recharge that is at a high resolution; they also use a streamflow routing package to route water through the system.
“So the stream can go dry; all we do is provide a volumetric flow at the boundary conditions, and then the model simulates what the head is in the stream and what the flows are,” he said.
The model uses monthly stress periods during the 21-year simulation period, but there are also daily time steps, so they can calculate not only monthly changes, but changes that happen within the month as well.
One of the first things that needs to be done when building a model is to determine which parameters are sensitive. There are over 250,000 cells, each of which has multiple parameters associated with it, so the modeler needs to find out which of the parameters really affect model outcomes. This is called a sensitivity analysis.
He presented a visual of a simple black box model with three knobs and a gauge to illustrate the concept. The knobs are all set at a starting location and if we turn a single knob, we can see what the change is to the output.
“We generally adjust these by the same amount or the same percentage, depending on the system,” Dr. Tolley said. “And we can compare what the output is doing based off of changing these individual parameters. This is called a one-at-a-time analysis.”
If we turn a knob and the output is changed substantially, that parameter would be considered highly sensitive. If we turn a knob and the output is changed somewhat, that parameter would be considered less sensitive. If we turn a knob and the output isn’t changed much at all, that parameter would be considered not sensitive.
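The knob-turning procedure can be written out directly: perturb each parameter by the same percentage from a base setting and record how much the output moves. The three-parameter "model" below is invented to mimic the three knobs (one very sensitive, one less so, one insensitive); none of it comes from the SVIHM.

```python
# Sketch of a one-at-a-time (OAT) sensitivity analysis: perturb one
# parameter at a time by the same percentage and compare the change in
# model output. The model function and parameter names are hypothetical.

def model(params):
    """Toy model: strongly sensitive to k, weakly to s, not at all to z."""
    return 10.0 * params["k"] + 0.5 * params["s"] + 0.0 * params["z"]

base = {"k": 1.0, "s": 1.0, "z": 1.0}     # knobs at their starting location
base_out = model(base)

sensitivity = {}
for name in base:
    perturbed = dict(base)
    perturbed[name] *= 1.10               # turn one knob by +10%
    sensitivity[name] = abs(model(perturbed) - base_out)

# rank parameters from most to least sensitive
ranked = sorted(sensitivity, key=sensitivity.get, reverse=True)
```

Here `k` would be flagged as highly sensitive and kept for calibration, while `z` could be set aside.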
Knowing which parameters in the model affect the output the most helps to direct efforts. “If our model is not sensitive to that parameter, we can set it aside for now and not waste a bunch of time and effort trying to adjust it if it’s not really changing our model results,” he said. “This can help us collapse the number of parameters that we have down to something that’s a bit more manageable.”
The Scott Valley model has crop parameters, field parameters, aquifer parameters, stream parameters, and non-field recharge parameters – 63 parameters in all. The changes in the 63 parameters were evaluated at the four stream gauges as well as 50 monitoring wells that are located throughout the valley.
“Instead of looking at one gauge, we’re looking at thousands of gauges as we adjust all these parameters, because we’re looking at both space and time changes,” said Dr. Tolley. “The takeaway from this is what is called a ‘normalized composite scaled sensitivity graph’. Essentially values of 1 are very sensitive; if we turn those knobs just a little bit, it changes the output quite a bit. Values of 0, you can crank that thing up all the way, it doesn’t really change anything all that much.”
“To account for the non-linearity that gets added when we account for this interaction between groundwater and surface water, we used five different sets of starting values to see if the sensitivity changes depending on where we start our analysis,” he continued, noting that if there are anomalies in the analysis, it’s important to evaluate it at multiple starting locations to rule out false information.
“It’s really important to understand what your model is doing across as much of the parameter space as you can afford to do. This takes a long time to do.”
Through the analysis, they determined that the soil-water budget model parameters are the most sensitive; even though they are measuring the output from the groundwater-surface water model, the soil-water budget model that feeds into that is what is controlling the output the greatest.
This can be further broken down. The chart shows the 14 most sensitive parameters that have been identified for the calibration. The colored bars indicate the proportion of information that is coming from different types of observation groups.
“The low flow observations are actually giving us the most information about our parameters, and the groundwater heads as well to some degree,” Dr. Tolley said. “Some of the calibrations may provide some information, but really it’s the low streamflows that are giving us the most information about our parameters. It would be good if we could add more streamflow gauges in the valley, which would potentially provide even more information about areas where we don’t have any data to calibrate to.”
Calibrating the model
Before a model can be used in any type of predictive role, it’s important to understand and demonstrate that the model can simulate observed aquifer behavior. So the next step is to calibrate the model by altering certain parameters in a systematic fashion and running the model repeatedly until the computer solution matches what is observed in the field with an acceptable level of accuracy. The parameters that have been identified as the most sensitive are the important ones to use for calibration.
Values of model parameters are often poorly known early in model development and are estimated using available data. Some parameters, such as subsurface properties, are hard to measure directly; others, such as groundwater levels in wells, are only available as point measurements.
“So we have some observed value, and we adjust our knobs and try to get as close to this observed value as we can; in this example, we can get to 72, which we’re pretty happy about,” said Dr. Tolley. “This is called inverse modeling as well because we’re basically starting off with the answer that our model is providing us and then tweaking our parameters, and then once we have those set, those parameters generally don’t change in time, so now we can make predictions in the future.”
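The inverse-modeling idea can be sketched as a search over a parameter value that minimizes the misfit between simulated and observed output. The forward model below is hypothetical, and the brute-force search stands in for the gradient-based or ensemble methods real calibration software uses; only the observed value of 72 echoes the talk's example.

```python
# Sketch of calibration as inverse modeling: adjust a parameter until the
# simulated value matches the observed value as closely as possible.
# simulate() is a hypothetical one-parameter forward model.

def simulate(k):
    """Hypothetical forward model mapping one parameter to an output."""
    return 50.0 + 20.0 * k

observed = 72.0

# Brute-force search over candidate parameter values; real calibration
# codes use far more efficient estimation methods.
candidates = [i / 100.0 for i in range(0, 201)]   # k in [0, 2]
best_k = min(candidates, key=lambda k: abs(simulate(k) - observed))
```

Once `best_k` is fixed by matching history, the same parameter value is held constant for predictive runs.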
Model results that do not agree with observations can identify problems with the conceptual model. When they first calibrated the soil-water budget model, the model results showed groundwater pumping at 29” per year, but the farmers said that was way too much – it was more like 20” per year.
“Is our model right? We think so, but we don’t know,” he said. “This spawned a three-year study of alfalfa fields where the farmers essentially were totally right. Between rain and groundwater irrigation, there’s about 26” per year, but the average ET is about 31” per year, so this led to the question of where this additional 5” per year of water is coming from. The answer was that it was coming from depleting soil moisture storage. Because of the cutting schedules for alfalfa, they were not able to irrigate the entire time, so there are two-week to month-long periods where they are not adding any water onto the ground, but the plant is still growing. Alfalfa has relatively deep roots, and what these plots are showing is that the shallow soil zone dries out quite a bit and then drops back down when they irrigate. The deeper layers are just continuously drying out throughout the season, so they are essentially deficit irrigating, even though they are not trying to. They just can’t apply enough water to satisfy that demand.”
When that was incorporated into the soil-water budget model, the simulated pumping got much closer to that 21”. There were also three very dry years within the 21-year time period, so that could also account for some of the discrepancy, he said.
Soil moisture profiles of alfalfa fields continuously dry out during the growing season, which limits groundwater recharge; incorporating this process also significantly changed the simulated recharge.
“By reducing our groundwater irrigation by 35%, we’ve reduced our groundwater recharge by almost 60%, so this is a fairly non-linear response that’s happening in the soil zone,” he said. “It’s really important to account for these processes that are happening for your boundary conditions for your model.”
Calibrating the groundwater-surface water model was pretty straightforward. “What’s shown here is daily streamflow at the USGS gauge located at the outlet,” he said. “There’s three orders of magnitude worth of change in this model. We’re not capturing these high winter streamflow events because our model has a monthly stress period and these are on the order of days, so that’s a limitation of our model. But we’re not really concerned with the winter streamflow period; we’re concerned with those late summer streamflow periods, which we do tend to match very well.”
Uncertainty analysis is an important but often overlooked step during model evaluation. Uncertainty results from the fact that the model doesn’t exactly match the measured values, even given our best estimates of the parameter values. He explained that part of this could be observation error; maybe someone simply measured incorrectly. It could also be structural error, such as the model being much coarser than the processes being simulated.
There are many ways to evaluate model predictive uncertainty. There can be uncertainty within a single calibration itself, or uncertainty across all calibrations.
“This is just another layer of complexity and uncertainty that you have to take into account when you are doing these models,” Dr. Tolley said.
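One common way to put numbers on predictive uncertainty is a Monte Carlo analysis: sample the uncertain parameters from plausible ranges, rerun the model for each draw, and summarize the spread of predictions. The forward model, the parameter range, and the sample size below are all hypothetical; the technique, not the values, is the point.

```python
# Sketch of a Monte Carlo predictive-uncertainty analysis: draw many
# plausible parameter values, run the (hypothetical) model for each one,
# and report the spread of the resulting predictions.
import random
import statistics

random.seed(0)   # reproducible draws

def simulate(k):
    """Hypothetical forward model mapping one parameter to a prediction."""
    return 50.0 + 20.0 * k

# sample parameter values around a hypothetical calibrated estimate k = 1.1
draws = [random.gauss(1.1, 0.05) for _ in range(1000)]
predictions = [simulate(k) for k in draws]

mean_pred = statistics.mean(predictions)
spread = statistics.stdev(predictions)   # one measure of prediction uncertainty
```

The spread of `predictions` is what would be reported alongside a single best-estimate prediction.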
One thing that models are really useful for is evaluating things that are very difficult to measure in the field.
“Stream depletion is a great example of this,” he said. “We really don’t have a lot of great ways of measuring the streamflow depletion, and so we can use models to try and evaluate them.”