Modeling & Simulation Services
Computational modeling and simulation is a research methodology developed in part to handle complex systems. Essentially, this method uses a virtual complex system to model a real-world complex system. Once the modeling effort is validated, the virtual system can be treated as an experimental platform for exploring the system-level effects of various combinations of conditions, some reflecting particular regions and some corresponding to policy levers or intervention initiatives.
Conceptual and computational models are highly specialized, and nexusSIM offers a range of solutions. The most appropriate solution depends on the project's current stage of development.
Theory development
Theory development is the most common kind of computational modeling research. But theories, especially in the social sciences, can be ambiguous, vague, or unfalsifiable. They may exist only as diagrams or high-level descriptions, which rarely lend themselves to a concise computational interpretation. This is precisely why a computational interpretation is valuable: computational models force us to remove ambiguity and are inherently falsifiable. We work with subject-matter experts (SMEs) to clarify their knowledge and distill their goals.
Model specification
Modeling is a fuzzy, sometimes meandering process. Nonetheless, there is a clear difference between the seed of a model, a high-level, non-specific idea, and a model specification that lays out clear goals and related literature. With a good model spec, a modeler can readily make a first attempt at implementing a computational model, which can then be iterated on. Our model specification service is for the academic who has a hunch that a certain topic will lend itself to a fruitful computational model, but isn't certain which specifics would be most appropriate.
To build a full model specification, we will work with the academic to build:
- Clear model goals, including the kind of analysis the model aims to make possible
- A conceptual model sketch, with specified inputs and outputs
- A literature review that will act as the sources for the model
- Key concepts from the literature review that the model will aim to include in its causal architecture
Once a model specification is clearly articulated, a technical expert can make a first attempt at building a model that incorporates concepts from the literature into a causal architecture.
Building a model skeleton
After model specification, the next step is a model skeleton. A model specification gives a modeler all the resources needed to ideate a computational model, but converting that ideation into code is still an intensive process. Discrepancies in the spec and within the literature may arise, and the technical expert will collaborate with the academic to work out the kinks.
The finished model skeleton will be implemented in software. It may or may not run. The model skeleton demonstrates what the model could look like in code, and articulates all the inputs and outputs from the spec. If the academic agrees with the skeleton, the skeleton can be mathematically fleshed out and validated.
Note that the skeleton may need some rounds of iterative development to reach a final product.
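As an illustration, a skeleton might look like the following Python sketch: named inputs and outputs, with placeholder dynamics to be mathematically fleshed out later. The model, its parameters, and their values are all invented for this example; a real skeleton would draw them from the model specification.

```python
from dataclasses import dataclass

# Hypothetical skeleton for an illustrative diffusion-of-adoption model.
# Every name and number here is invented for illustration.

@dataclass
class Inputs:
    population: int = 1000       # number of agents in the system
    adoption_rate: float = 0.1   # per-step chance a non-adopter adopts
    n_steps: int = 50            # simulation horizon

@dataclass
class Outputs:
    adopters_over_time: list     # trajectory of cumulative adopters

def step(adopters: int, inputs: Inputs) -> int:
    """Placeholder dynamics, to be fleshed out mathematically later."""
    new = int((inputs.population - adopters) * inputs.adoption_rate)
    return min(inputs.population, adopters + new)

def run(inputs: Inputs) -> Outputs:
    adopters, trajectory = 1, [1]
    for _ in range(inputs.n_steps):
        adopters = step(adopters, inputs)
        trajectory.append(adopters)
    return Outputs(adopters_over_time=trajectory)
```

Even in this stub form, the skeleton makes the inputs, outputs, and causal structure explicit enough for the academic to react to before any mathematical detail is committed.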
Model expansion
Models rely on rates, distributions, and thresholds that may be sourced from empirical data and/or expert estimates. Integrating the model with the relevant data is often a separate task from building a model skeleton, and difficulty will depend a lot on the model.
For this type of project, we start with an existing model and get it running. As much as possible, we verify that our implementation produces the same, or similar enough, outputs as the original model. Doing so strengthens our credibility when we present our changes to the model. Provided the theory behind the changes is accepted in the field, reproducing the original model's verified results, rather than performing a new verification, may be sufficient. This is a major benefit of model expansion when there is no validation data, or when the SME does not know the new outputs well enough to verify them strongly.
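As a sketch of that replication check, comparing a reimplementation's outputs against the original model's published results might look like the following. The function names, data values, and 5% tolerance are illustrative assumptions, not a fixed methodology.

```python
# Hypothetical replication check: does our reimplementation reproduce
# the original model's published outputs within a chosen tolerance?

def relative_error(ours: float, original: float) -> float:
    """Relative deviation of our output from the original's."""
    return abs(ours - original) / max(abs(original), 1e-12)

def replicates(our_outputs, original_outputs, tolerance=0.05) -> bool:
    """True if every output stays within the tolerance of the original."""
    return all(
        relative_error(o, r) <= tolerance
        for o, r in zip(our_outputs, original_outputs)
    )

# Invented example: outputs published with the original model vs. ours.
published = [0.12, 0.18, 0.22, 0.25]
ours = [0.121, 0.177, 0.224, 0.248]
```

Here `replicates(ours, published)` passes, giving some assurance that later differences come from our extensions rather than from implementation drift.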
Model verification and validation
Verification ensures that the code does what it is supposed to do and that the model operates according to its specification. An expert should verify the model before any research claims are made, since an expert will always have a more complete and nuanced view of the subject matter.
Validation consists of ensuring that low-level design features are accurate based on expert agreement, and that high-level model outputs successfully simulate real-world situations. Validation is stronger than verification, though sometimes that strength may not be necessary to complete the goals of a project.
We do this by running a parameter sweep over the model inputs and measuring the model outcomes. We can plot the outcomes in various ways and work with the academic to see whether they match theoretical expectations. Often, successful verification is the end of the project for the academic, who then works to publish the model and the verification findings.
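A minimal sketch of such a parameter sweep, assuming a toy logistic-growth model; the model, parameter names, and grid values are all invented for illustration.

```python
from itertools import product

def toy_model(growth_rate: float, carrying_capacity: float) -> float:
    """Toy logistic growth: final population after 100 steps."""
    pop = 1.0
    for _ in range(100):
        pop += growth_rate * pop * (1 - pop / carrying_capacity)
    return pop

# Sweep a grid of input combinations and record one outcome per point.
growth_rates = [0.05, 0.1, 0.2]
capacities = [100.0, 500.0]

results = {
    (r, k): toy_model(r, k)
    for r, k in product(growth_rates, capacities)
}
# Each outcome can then be plotted against its inputs and compared with
# the academic's theoretical expectations (here: populations approach,
# but never exceed, the carrying capacity).
```

In practice the sweep would cover the real model's inputs, and the plots of outcomes against inputs become the material the academic and modeler review together.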
A verified and validated computational simulation functions as a platform for policy assessment and experimentation. While policy simulation routinely generates valuable, research-based insights into the subject area, it does not generate novel policy solutions on its own. Rather, the insights it generates inspire policy professionals to come up with novel ideas, in an interactive process between human experts and the simulation platform.