Navigating Stochastic Dynamics: Resources and Applications

Stochastic dynamics, the study of systems evolving randomly over time, plays a crucial role in understanding and modeling various phenomena across diverse fields. Unlike deterministic models, which produce the same outcome given the same initial conditions and parameters, stochastic models incorporate randomness, making them more realistic representations of biological, physical, and economic systems. This article explores the importance of stochastic dynamics, provides resources for learning about it, and illustrates its applications in different domains.

The Significance of Stochasticity

Biological systems are inherently stochastic; some amount of randomness or noise is always present. When numbers are large, the somewhat random dynamics of individual hosts or pathogens average out, and the dynamics of the whole population are fairly predictable and well described by deterministic models. However, when numbers are small, randomness starts to matter. For example, in the early stages of an infectious disease (ID) outbreak, whether a single infected person transmits the disease before recovering can significantly impact the course of the outbreak. If the person recovers before transmitting, the outbreak is over and the ID goes extinct.

Deterministic models also treat individuals in each compartment as continuous, which is unrealistic. To capture both the randomness and discrete nature of real systems requires a slightly different modeling approach. We can still use compartmental models, i.e., we track total numbers of individuals in particular states (e.g., S-I-R). As an example, instead of new susceptible hosts entering the system at some continuous birth rate, we now model births occurring as discrete events.
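A standard way to simulate such a model, with discrete individuals and randomly timed events, is the Gillespie (stochastic simulation) algorithm. A minimal SIR sketch in Python, using only the standard library (parameter values are illustrative):

```python
import random

def gillespie_sir(S, I, R, beta, gamma, t_max, seed=1):
    """Stochastic SIR model with discrete individuals.

    Two event types: infection (rate beta*S*I/N) and recovery (rate gamma*I).
    Returns the trajectory as a list of (time, S, I, R) tuples.
    """
    rng = random.Random(seed)
    N = S + I + R
    t = 0.0
    traj = [(t, S, I, R)]
    while t < t_max and I > 0:
        rate_inf = beta * S * I / N
        rate_rec = gamma * I
        total = rate_inf + rate_rec
        t += rng.expovariate(total)        # exponential waiting time to next event
        if rng.random() < rate_inf / total:
            S, I = S - 1, I + 1            # infection: one S becomes I
        else:
            I, R = I - 1, R + 1            # recovery: one I becomes R
        traj.append((t, S, I, R))
    return traj

traj = gillespie_sir(S=990, I=10, R=0, beta=0.5, gamma=0.25, t_max=100)
```

Because events are discrete, repeated runs with different seeds produce different epidemic curves, including runs where the outbreak fizzles out early.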

Extinction and Emergence

Extinction of an ID is of interest because it is often our final goal: we would like to drive an ID to extinction, either locally or, even better, globally. Whether and how an ID can be driven to extinction depends on several factors. For IDs of humans, the only realistic candidates for eradication are those with no other hosts or reservoirs. Because influenza also circulates in animal reservoirs, we will likely never be able to eradicate it. Factors related to the host population and to specific ID characteristics influence how likely an ID is to go extinct. Two significant population factors are the size of the host population and the speed at which new susceptible hosts are replenished.

An ID can also go extinct because it drives its host population extinct. For most human diseases, such host extinction is fortunately not very common, though a highly lethal ID such as Ebola can lead to a marked reduction in the number of hosts in a given population. For non-human diseases, extinction of hosts due to disease is a more significant issue. The host extinction approach is also often considered and used for vector-borne diseases. Here, the idea is to drive one of the hosts (commonly called the vector) to extinction. The reasoning is straightforward: if there are fewer vectors (e.g., mosquitoes), the chances for humans to get infected are lower. Widespread use of DDT and insecticide-coated bed nets are examples of this approach.

During extinction, infected/pathogen numbers move from a level that can be decently approximated by a deterministic model to numbers so small that a stochastic analysis approach is required to capture the possibility of extinction.


The flip-side of extinction is the emergence of a new disease. During emergence, the new disease starts at zero, is introduced in modest numbers (possibly only a single introduction) into a new population, and “bounces around” for a while in small numbers. If conditions are right (i.e., local reproductive number greater than 1), the disease might take off and spread. In contrast to deterministic models, a reproductive number larger than 1 is necessary, but not sufficient, for the pathogen to produce an outbreak. An initial introduction can by chance be followed by extinction (e.g., the infected person recovers before infecting someone else), even if R0 > 1. Thus, the concept derived from deterministic models, that there is no outbreak for R0 < 1 and an outbreak for R0 > 1, needs to be modified for stochastic (arguably closer to the real world) settings. R0 < 1 still means no outbreak, but for R0 > 1 an outbreak is no longer guaranteed; instead, there is a certain probability that an outbreak happens.

One finds that for an SIR-type stochastic model, the probability P that an outbreak occurs, if started by a single infected individual, is given by P = 1 - 1/R0. Thus, as expected, the larger R0, the more likely an outbreak is to occur, but it is not guaranteed. Similarly, one can show that if there are initially N infected individuals, the probability of an outbreak is P = 1 - (1/R0)^N. Again, this makes intuitive sense: more infected individuals make an outbreak more likely. For instance, if the pathogen has R0 = 2 and there are 10 initially infectious individuals, the probability of an outbreak is about 99.9%. Once several tens to hundreds of individuals are infected, a deterministic approximation that focuses on the mean dynamics is often reasonable. Often, during the emergence process, the pathogen evolves, which can make it easier to establish itself and eventually take off.
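These outbreak probabilities are straightforward to compute; a small Python sketch reproducing the R0 = 2 example above:

```python
def outbreak_probability(R0, n_infected=1):
    """Probability that an outbreak takes off in an SIR-type stochastic
    model, given R0 and the initial number of infected individuals."""
    if R0 <= 1:
        return 0.0               # below threshold: no outbreak
    return 1 - (1 / R0) ** n_infected

p1 = outbreak_probability(2, 1)    # 0.5: a coin flip for one introduction
p10 = outbreak_probability(2, 10)  # ~0.999, the 99.9% quoted above
```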

Advantages and Disadvantages of Stochastic Models

While stochastic models offer a higher level of realism, they are generally more complex than deterministic models. Deterministic models are generally easier to build, easier to analyze, faster to run on a computer, and, importantly, much easier to fit to data. Often this simplicity is worth some loss of realism, but it depends on the specific scenario and question: the most appropriate model depends on your question, and for some questions, stochastic models are needed.

Learning Resources for Stochastic Dynamics

Several resources are available for individuals interested in learning about stochastic dynamics, ranging from textbooks and courses to research articles and online tools.

Textbooks

  • "An Introduction to Stochastic Dynamics": This book provides a comprehensive overview of the fundamental concepts and techniques in stochastic dynamics.


  • "Effective Dynamics of Stochastic Partial Differential Equations": This book delves into the more advanced topic of stochastic partial differential equations (SPDEs) and their applications.

Courses

  • Stochastic Processes Courses: These courses cover the basic models and solution techniques for problems of sequential decision-making under uncertainty (stochastic control). They often consider optimal control of a dynamical system over both a finite and an infinite number of stages. This includes systems with finite or infinite state spaces, as well as perfectly or imperfectly observed systems. Approximation methods for problems involving large state spaces are also discussed. Courses in this domain may delve into topics such as probability management, mathematical methods, the fundamentals of trading and technical analysis, or manufacturing systems.

    In addition to these foundational concepts, learners can explore probability theory further, which encompasses key elements like probability distributions, random variables, expectations, and variance. Other areas of study might involve Poisson processes, a common stochastic process used to model events occurring randomly in time or space. Learners can also examine Brownian motion, a continuous-time stochastic process, and investigate its properties, such as self-similarity, sample paths, and applications in finance and physics.

    For those seeking more advanced knowledge, courses may cover stochastic differential equations, a fundamental tool in mathematical modeling and finance. Advanced topics can also delve into Martingale theory, which finds extensive use in probability theory, finance, and optimization. Stochastic calculus, another possible subject, plays a crucial role in pricing financial derivatives and analyzing complex stochastic systems.

    edX offers a variety of educational opportunities for learners interested in studying these topics, as well as a host of other disciplines. A boot camp can provide flexible hands-on learning for those who want to upskill quickly, while executive education courses are designed for busy professionals. You can also pursue a more comprehensive curriculum in a bachelor’s degree program or, for more advanced learners, a master’s degree program.
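The Poisson process and Brownian motion mentioned above are easy to simulate with nothing but the standard library; a minimal sketch (rates and step sizes are illustrative):

```python
import math
import random

def poisson_process(rate, t_max, seed=0):
    """Event times of a homogeneous Poisson process on [0, t_max]:
    inter-arrival times are i.i.d. exponential with mean 1/rate."""
    rng = random.Random(seed)
    times, t = [], 0.0
    while True:
        t += rng.expovariate(rate)
        if t > t_max:
            return times
        times.append(t)

def brownian_motion(n_steps, dt, seed=0):
    """Standard Brownian motion on a time grid: independent Gaussian
    increments with mean 0 and variance dt."""
    rng = random.Random(seed)
    path = [0.0]
    for _ in range(n_steps):
        path.append(path[-1] + rng.gauss(0.0, math.sqrt(dt)))
    return path

events = poisson_process(rate=2.0, t_max=10.0)  # ~20 events on average
bm = brownian_motion(n_steps=1000, dt=0.01)     # B_t sampled on [0, 10]
```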


Research Articles

  • "Measles Endemicity in Insular Populations: Critical Community Size and Its Evolutionary Implication" by Black (F. L. Black 1966): This paper discusses the critical community size (CCS) for measles. Further analysis of critical community size and measles can be found in (Maurice S. Bartlett 1957; M. S. Bartlett 1960; M. J. Keeling 1997).
  • (James O. Lloyd-Smith et al.): A broad discussion of invasion and persistence of ID and the role of stochasticity is given in this paper.
  • (Peel et al.): An interesting discussion of critical community size and its dependence on birth dynamics, especially birth pulses, is provided in this paper.
  • "Learning stochastic dynamics from snapshots through regularized unbalanced optimal transport" by Zhang, Z., Li, T., & Zhou, P. (2025): This paper introduces a new deep learning approach for solving regularized unbalanced optimal transport (RUOT) and inferring continuous unbalanced stochastic dynamics from observed snapshots. Based on the RUOT form, the method models these dynamics without requiring prior knowledge of growth and death processes or additional information, allowing them to be learnt directly from data. Theoretically, it explores the connections between the RUOT and Schrödinger bridge problem and discusses the key challenges and potential solutions. The effectiveness of the method is demonstrated with a synthetic gene regulatory network, high-dimensional Gaussian Mixture Model, and single-cell RNA-seq data from blood development.

Online Tools and Resources

  • Illinois Tech’s stochastic dynamics center: The center benefits Illinois Tech’s stochastic dynamics community in the following ways:

    • Offering minicourses on new or emerging topics in stochastic dynamics & data science, taught by visiting scholars, as a supplement to our existing academic programs
    • Serving as an interdisciplinary hub to facilitate research and collaboration, especially among junior faculty and postdocs
    • Hosting visitors (including short-term visitors) for research and seminars
    • Hosting graduate student seminars
    • Sponsoring undergraduate research (minority/women students, pre-REU, REU)
    • Organizing activities such as joint workshops, including DEI workshops with Argonne National Laboratory and the NSF Institute for Mathematical and Statistical Innovation

    The center plays a key role in partnering effectively with corporate, university, national laboratory, and governmental stakeholders by driving collaborative, computational, and data-driven research involving faculty and students from several disciplines, and by promoting diversity, equity, and inclusion.

    Many funding opportunities call for expertise from various disciplines, and the center, whose members share a common mission and vision, can assemble teams to respond to these opportunities. The center will appoint some research scientists at Argonne National Laboratory as research faculty to facilitate collaborative research and educational activities.

  • Software Libraries: Several software libraries, such as MIOFlow and TorchCFM, provide tools and functions for implementing stochastic models and analyzing stochastic data.

Applications of Stochastic Dynamics

Stochastic processes have applications in a diverse array of fields, including:

Finance and Trading

Stochastic processes are used to model asset price movements, quantify and manage financial risks, and optimize investment portfolios.
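As a concrete example, asset prices are often modeled as geometric Brownian motion, the process underlying the Black-Scholes model; a minimal simulation using the exact log-normal update (drift and volatility values are illustrative):

```python
import math
import random

def gbm_path(s0, mu, sigma, dt, n_steps, seed=0):
    """Geometric Brownian motion dS = mu*S dt + sigma*S dW, simulated
    with the exact log-normal step, so prices stay strictly positive."""
    rng = random.Random(seed)
    prices = [s0]
    for _ in range(n_steps):
        z = rng.gauss(0.0, 1.0)
        step = math.exp((mu - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * z)
        prices.append(prices[-1] * step)
    return prices

# One simulated year of daily prices: drift 5%, volatility 20% (annualized)
prices = gbm_path(s0=100.0, mu=0.05, sigma=0.2, dt=1 / 252, n_steps=252)
```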

Computer Science

Stochastic processes are commonly employed in simulations to model complex systems and evaluate their behavior under uncertain conditions, and to analyze the performance of computer networks, distributed systems, and queueing systems.
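For the queueing case, a sketch: waiting times in an M/M/1 queue obey Lindley's recursion, W_{n+1} = max(0, W_n + S_n - A_{n+1}), which is easy to simulate (arrival and service rates are illustrative):

```python
import random

def mm1_waits(arrival_rate, service_rate, n_customers, seed=0):
    """Waiting times in an M/M/1 queue via Lindley's recursion: the next
    customer waits max(0, current wait + service time - interarrival time)."""
    rng = random.Random(seed)
    waits, w = [], 0.0
    for _ in range(n_customers):
        waits.append(w)
        service = rng.expovariate(service_rate)
        interarrival = rng.expovariate(arrival_rate)
        w = max(0.0, w + service - interarrival)
    return waits

# Utilization 0.8: the theoretical mean wait is rho / (mu - lambda) = 4
waits = mm1_waits(arrival_rate=0.8, service_rate=1.0, n_customers=50000)
```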

Biology and Medicine

Stochastic models are used to describe the dynamics of biological populations, model the spread of diseases, and understand the behavior of biomolecules, such as DNA and proteins.

Physics

Stochastic processes play a role in understanding the behavior of particles in gases, liquids, and solids at the molecular level, and stochastic differential equations also find applications in modeling quantum processes.

Economics

In macroeconomics, stochastic processes account for uncertainty and shocks in the economy, while economic models incorporate stochastic processes to analyze investment decisions, economic growth, and financial markets.

Social Sciences

Stochastic models serve as valuable tools in psychological research for understanding decision-making under uncertainty and modeling behavior.

Stochastic Dynamics in Policy and Control

In the late 1960s and early 1970s, Rausser, with a number of fellow PhD students at UC Davis, began a journey to determine the policy relevance of control theory in its richest versions, including dual control, open-loop feedback control, closed-loop control, and M-measurement feedback control formulations. As a group, they recognized early that many agricultural and natural resource systems require stochastic and dynamic models. A host of publications emerged from their collaboration, including the first application of adaptive control to trade policy (Rausser & Freebairn, 1974) and the first application of M-measurement feedback control to environmental externalities (Rausser & Howitt, 1975). All of these publications embedded learning, sometimes passively and other times actively, and all advanced a policy or optimal control dimension that required taking a stand on the treatment of evolving measurements and information. Among the various approaches, the two most common are open-loop-with-revision and feedback. The former sets as a benchmark a deterministic problem under the fiction that new information will not arrive, but with an understanding that when new information does emerge, it will be incorporated into a decision or policy revision. The latter formulations create a stochastic problem with “anticipated but passive learning”: the decision maker chooses the current policy recognizing that subsequent policy will be adapted to information or data not currently available.

A third approach, “dual control” or “active learning,” recognizes that choices not only have direct effects on outcomes or payoffs but also have indirect effects on improved measurements of causal impacts, sometimes referred to as response impact curves (Judge et al., 1977; Rausser & Johnson, 1975; Rausser et al., 1979). Pekelman & Rausser (1978) and Rausser & Pekelman (1980) recognized that firms can learn about an unknown demand function by varying the prices that they charge. A particular pricing choice not only affects current revenues or profits but also provides information about the price elasticity of demand. More accurate measurements increase future profitability. The optimal pricing policy balances the effects on current profits and on the acquisition of information.

The various policy control formulations described here were originally introduced and applied by electrical engineers. Macroeconomists became intrigued with the methods’ applicability to monetary and fiscal policy. However, the electrical engineering formulations generally dealt with physical responses, not agents’ behavioral responses. The “Lucas critique” (1976) and the subsequent work by Kydland and Prescott (1977), which followed the development of rational expectations modeling, presented a conceptual rather than technical challenge to policy applications of control theory. This critique recognizes that a standard optimal control formulation leads to time inconsistency in a setting where a decision-maker would like to announce a sequence of future policies (e.g., taxes) in order to influence other agents’ current decisions (e.g., investment), and moreover the “future self” of this decision-maker would want to deviate from the announced sequence. Absent the ability to commit to this future policy sequence, agents with rational expectations would have no reason to believe that it will be carried out, so the announcement will not have its intended effect on agents’ current decisions. The Lucas critique means that policy cannot be effective when it relies on repeatedly surprising people, e.g., by using inflationary shocks to increase effective demand, or by promising low future capital taxes to encourage investment.

A different response that has gained widespread currency in both micro- and macroeconomics starts with the assumption that the policymaker understands that agents have rational expectations (Klein et al. 2008). These agents make decisions (e.g., about investment) based on their rational expectations about future government policy (e.g., taxes). The individual agents are too small to influence future government policy by manipulating the aggregate stock of endogenously changing capital; they therefore behave non-strategically despite having rational expectations. However, the agents’ aggregate decisions do change the stock of capital - or some other payoff-relevant state variable. Moreover, the policymaker in the current period is unable to commit successors to a particular policy sequence. The resulting model is formally a Stackelberg dynamic game, in which both the strategic policymaker and a large number of nonstrategic agents have rational expectations (Karp and Havenner, 1984). The Nash condition requires that each decision rule is the best response to the other agent’s decision rule. Moreover, each rule is a best response to the equilibrium decision rules that agents’ “future selves” use. This last requirement guarantees time consistency.

The curse of dimensionality is particularly important in active learning formulations. These formulations require at least one state variable for each unknown parameter, potentially leading to an unmanageable number of state variables. The use of conjugate priors leads to tractable equations of motions for these state variables. However, even for specifications where conjugate priors make sense, the curse of dimensionality of the resulting system often makes dynamic programming impractical.
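As an illustration of why conjugate priors yield tractable equations of motion: in the Normal-Normal case the posterior stays Normal, so the belief about an unknown parameter is fully carried by a (mean, variance) pair that updates in closed form (all numbers here are illustrative):

```python
def normal_update(prior_mean, prior_var, obs, obs_var):
    """Conjugate Normal-Normal update with known observation noise:
    the posterior is Normal, so (mean, variance) is a sufficient state."""
    precision = 1 / prior_var + 1 / obs_var
    post_var = 1 / precision
    post_mean = post_var * (prior_mean / prior_var + obs / obs_var)
    return post_mean, post_var

# The belief state evolves through simple closed-form equations of motion
mean, var = 0.0, 10.0          # vague prior on the unknown parameter
for y in [1.2, 0.8, 1.1]:      # sequence of noisy observations
    mean, var = normal_update(mean, var, y, obs_var=1.0)
```

Each unknown parameter adds at least this pair of state variables, which is exactly how active-learning formulations accumulate dimensions.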
