Operating Models
To perform the experiments comparing management procedures, a virtual world — called an operating model (OM) — is constructed, based on beliefs about how the real world works and how it will work in the future.
Ideally, the represented beliefs would reflect uncertainties in all types of relevant knowledge (expert, local, indigenous), but most commonly the models are essentially copied from stock assessments, augmented with extra information or assumptions. Subsequent stock assessments tend to substantially update the beliefs about the stock and its history, so the MSE should probably be re-conditioned on newer stock assessments at least once a decade, even if no warning signs were detected that could invoke ‘exceptional circumstances’ clauses, such as recruitment failure, suspected large IUU landings, or critical issues with CPUE data.
Climate change is likely to present an additional challenge to stock assessments, or to any model that relies on past data, stationarity assumptions, and processes discerned in the context of the past to predict the future. In the context of fast change, MSE offers a way to look for management procedures that minimise regret under uncertainty, provided the operating models are constructed more imaginatively than traditional stock assessments. Two types of virtual environments are generally distinguished in the realm of OMs: a reference set and robustness trials (see "Future Cone" below).
A reference set starts from a smaller set of possibilities and projects forward the “most probable futures”.
Robustness trials usually refer to relaxing assumptions made in the reference set, simulating a wider set of “other plausible futures” and hence more challenging circumstances for management procedures to cope with.
It might become difficult to find strategies that achieve a wide range of management objectives across all robustness scenarios. It is key that managers and stakeholders agree on what constitutes satisfactory performance for a management procedure, preferably agreeing on which risks are unacceptable before seeing the results of the evaluations.
Future Cone / Imagining Possible Futures: reference scenarios, robustness scenarios, and deep uncertainty. The reference scenarios in our swordfish example are generated by nine uncertainties (each uncertainty is represented by 2 or 3 discrete alternatives, e.g. steepness is either 0.6, 0.75 or 0.9). Out of the full grid of 2,592 OMs, only 110 passed the plausibility test, and these 110 were resampled to generate 500 simulated worlds.
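The winnowing from a full factorial grid to a resampled reference set can be sketched as follows. This is a hypothetical illustration: only the steepness values (0.6, 0.75, 0.9) come from the text, every other axis name and level is invented, and the plausibility test is a placeholder for the real fit-to-data diagnostics.

```python
import itertools
import random

# Hypothetical uncertainty grid: 4 axes with 3 levels and 5 axes with
# 2 levels gives 3**4 * 2**5 = 2592 combinations. Only the steepness
# values are taken from the text; the other axes are invented.
axes = {
    "steepness":         [0.6, 0.75, 0.9],
    "natural_mortality": [0.1, 0.2, 0.3],
    "growth":            ["slow", "base", "fast"],
    "catchability":      ["constant", "increasing", "hyperstable"],
    "sigma_r":           [0.4, 0.7],
    "selectivity":       ["logistic", "dome-shaped"],
    "data_weighting":    ["equal", "down-weight CPUE"],
    "initial_depletion": [0.9, 1.0],
    "effort_creep":      [0.0, 0.02],
}

grid = [dict(zip(axes, combo)) for combo in itertools.product(*axes.values())]

def is_plausible(om):
    # Placeholder for the real plausibility test, which would check each
    # OM's fit to historical data; here a seeded coin flip stands in,
    # tuned so that roughly 110 of the 2592 OMs survive.
    random.seed(repr(sorted(om.items())))
    return random.random() < 110 / 2592

reference_set = [om for om in grid if is_plausible(om)]

# Resample the surviving OMs (with replacement) into 500 simulated worlds.
random.seed(1)
worlds = random.choices(reference_set, k=500)
print(len(grid), len(reference_set), len(worlds))
```

In a real MSE the resampling might also be weighted by each OM's plausibility rather than uniform, as assumed here.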
Simulated worlds might differ from each other and also from the portrait of the real world familiar from the most recent stock assessment — extra assumptions or information might make historical or future projections with OMs different from those obtained with stock assessment models. Such differences can be expressed as differences in beliefs about the resilience of the stock to exploitation (captured by the steepness parameter), the population levels the stock can reach in the long term in the absence of fishing (one definition of virgin biomass), the maximum sustainable yield (MSY) that can theoretically be extracted indefinitely, and the variability in abundance from year to year.
In particular, this means that reference points differ from one OM to another.
The "Past, present, future" figure shows a distribution of reference points across OMs (left), as well as differences in how OMs represent the present (center) and what is possible to achieve in the future under MP6 (right).
Past, Present, and Future: we take the 1950s as a reference, a proxy for the unfished state, or B0. One of the management objectives is to ensure that the biomass does not fall below 20% of B0 in each of the OMs. The management objectives are based on estimates of the biomass that produces MSY, which range from 13% to 37% of B0 — reducing the stock to less than a quarter of its original size is a success story in half of the OMs.
The advantage of simulations is that we know a lot about each simulated world, because we built it. In particular, we know what MSY is possible in each, and it makes sense to evaluate strategies with respect to MSY-based values native to each operating model.
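As a toy illustration of OM-native reference points (all numbers invented, spanning the 13%–37% of B0 range quoted above), the same final depletion can count as a success in one OM and a failure in another:

```python
# Illustrative OMs; the final depletion values (B/B0 at the end of the
# projection) are invented for the sake of the example.
oms = [
    {"name": "OM-A", "bmsy_over_b0": 0.13, "final_b_over_b0": 0.20},
    {"name": "OM-B", "bmsy_over_b0": 0.25, "final_b_over_b0": 0.22},
    {"name": "OM-C", "bmsy_over_b0": 0.37, "final_b_over_b0": 0.30},
]

for om in oms:
    # The same absolute depletion reads differently against each OM's own
    # BMSY: 20% of B0 is well above BMSY in OM-A, while 30% of B0 is
    # still below BMSY in OM-C.
    ratio = om["final_b_over_b0"] / om["bmsy_over_b0"]
    print(f"{om['name']}: B/BMSY = {ratio:.2f}")
```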
Further, for each virtual world a management procedure usually includes its own understanding of the simulated stock it ‘observes’ through the prism of simulated observation data and a simplified estimator (although in some MSEs the estimator is not simplified at all: the full traditional stock assessment model is run at every iteration when a harvest decision has to be made, which is very computationally expensive). Empirical management procedures don’t need an estimate of stock status; they set harvest levels based on trends in observed data, such as CPUE or a larval index.
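A minimal sketch of an empirical rule of this kind, assuming a simple slope-based adjustment (the function name and gain parameter are hypothetical, not from any particular MSE):

```python
import math

def cpue_slope_mp(cpue, last_tac, gain=1.0):
    """Adjust last year's TAC by the recent trend in a CPUE index.

    Hypothetical empirical harvest control rule: no stock-status
    estimate is involved, only the slope of log(CPUE) against year.
    """
    n = len(cpue)
    logs = [math.log(c) for c in cpue]
    xbar = (n - 1) / 2
    ybar = sum(logs) / n
    # ordinary least-squares slope of log(CPUE) on year
    slope = (sum((x - xbar) * (y - ybar) for x, y in enumerate(logs))
             / sum((x - xbar) ** 2 for x in range(n)))
    return last_tac * (1 + gain * slope)

declining = cpue_slope_mp([1.0, 0.9, 0.8, 0.7, 0.6], last_tac=100)
rising = cpue_slope_mp([0.6, 0.7, 0.8, 0.9, 1.0], last_tac=100)
print(declining < 100, rising > 100)  # the rule cuts, then raises, the TAC
```

The `gain` parameter controls how aggressively the rule reacts to the index; tuning such parameters against the operating models is itself part of the MSE.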
The aim is to reflect imperfect knowledge, but it is often argued that the MPs being tested are too well informed about the simulated stock. Indeed, in our illustrative swordfish example all MPs are allowed nearly perfect and unbiased knowledge of the 'true' stock size. In many MSEs the estimator replicates the operating model's assumptions about what drives the population dynamics (whereas in reality we don’t know how the real world works), and the simulated observations are often deemed “too good” even when the level of added noise appears to mimic historical data. Being too well informed about their respective simulated worlds makes it easier for MPs to achieve management objectives in virtual worlds — it is like giving a student the questions before the test. However, if MPs were routinely picked based on insufficiently rigorous tests, we would expect to see more failures in the real world.
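One common way to generate such simulated observations is multiplicative lognormal error around the true values; a minimal sketch (the biomass trajectory, CV levels, and bias argument are all illustrative):

```python
import math
import random

random.seed(42)

# Invented "true" biomass trajectory inside one simulated world.
true_biomass = [100 * math.exp(-0.05 * t) for t in range(20)]

def observe(biomass, cv=0.3, bias=1.0):
    # Hypothetical observation model: multiplicative lognormal error with
    # the given CV, mean-corrected so the index is unbiased, plus an
    # optional systematic bias (e.g. hyperstability or catchability drift).
    sigma = math.sqrt(math.log(1 + cv ** 2))
    return [b * bias * random.lognormvariate(-sigma ** 2 / 2, sigma)
            for b in biomass]

index_realistic = observe(true_biomass, cv=0.3)    # noise mimicking real data
index_generous  = observe(true_biomass, cv=0.001)  # nearly perfect knowledge
```

Whether an MSE hands its MPs something like `index_realistic` or something closer to `index_generous` is exactly the "too well informed" question raised above.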