We explore how organizations can use system dynamics as the core analytical decision technology to achieve mission-critical goals.
In The Basics of System Dynamics course we build and apply several basic System Dynamics models during the first 5 weeks. In the last two weeks, 6 and 7, we will apply our inventory of skills and models to the problem of simulating a B (Benefit) Corporation in the context of a practical Business Model Canvas approach. For this portion of our studies we will use paper, pencil, and the VensimPLE platform for all of our work.
In the System Dynamics Inference course we apply Bayesian data analytics to calibrate, fit, simulate, and infer conclusions consistent with the causal model of the decisions we are studying. The causal model will be built in VensimPLE and read into R, where we can mash the model with data and use Stan to make probabilistic inferences. During the first 5 of 7 weeks we will build basic system dynamics causal models of various business decisions, estimate probability models of decisions based on the causal models, and infer probabilistic results using machine learning and information criteria. During weeks 6 and 7 we will apply our knowledge and models to decisions in the global energy and associated commodities supply chain.
[UNDER CONSTRUCTION] The Informing Decisions course will help us formulate and solve problems to inform decision-makers within organizations using simulation and optimization, all deployed with spreadsheets. We will develop the skills and practice the techniques to structure and analyze a wide range of complex business problems to inform and support managerial decision-making in functional business application areas such as finance (e.g., capital budgeting, cash planning, portfolio optimization, valuing options, hedging investments), marketing (e.g., pricing, sales force allocation, planning advertising budgets) and operations (e.g., production planning, workforce scheduling, facility location, project management). Spreadsheets are used to assist in modeling, analysis, and communication of results and findings.
For these courses we will work across four computing platforms.
VensimPLE will help us build and simulate generative causal models, visualize results, and develop scenarios for decision makers.
The R programming language (with RStudio from Posit) and the tidyverse of data, optimization, numerical integration, and visualization packages will provide a platform for analysis, inference, and visualization of results.
The Stan (named for Stanislaw Ulam) probabilistic programming library, with its ability to estimate systems of differential equations (the underlying mathematics of system dynamics) using Hamiltonian Monte Carlo simulation, will allow us to estimate the uncertainty within the causal models we have built.
Lastly, spreadsheets? Yes, the ubiquitous spreadsheet environment, with its immediate satisfaction of near-instantaneous results, is used by millions. As a prototype it surpasses most other environments. But beware its use in production! We will use spreadsheet engineering practices to improve on modeling hygiene and model deployment for decision makers on the run.
(During the Spring 2025 session from January 13th to March 9th, the Basic System Dynamics course is offered as MBA 645 Special Topics: Strategic Management Science by the Manhattan College MBA Program. Please contact Dr. Marc Waldman, Program Director, at marc.waldman@manhattan.edu for more information about the program.)1
Penumbral Results
In this penultimate episode of this Bayesian Decision Analytics course, we build a simple market clearing mechanism by finding a price which equates aggregate MooseCo and WolfCo demands with an exogenous supply.
We begin by following John Muth’s rational expectations approach to equilibrium and explore the dynamic movement of price, demand, and supply. Our model employs Cyert and DeGroot’s remake of Muth’s model in the image of a Bayesian decision maker. Even though we have yet to involve probabilistic considerations here (they will be modeled next), the basic idea of Bayesian updating works through an equilibrium given the information at hand in the market, represented by a subset of all information in the market, filtered through our representation of the market, namely our predator-prey model of customer retention and eventual demand. Bayesian decision making, especially the strategic intelligence of this approach, is inherently dynamic, with updating based on the states of the world of each system dynamics stock variable. The price optimization routine follows John Sterman’s Business Dynamics approach based on Michael Powell’s derivative-free conjugate direction optimization algorithm. A mouthful or two!
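To make the clearing step concrete, here is a minimal R sketch that finds the price equating aggregate demand with an exogenous supply. The linear demand curves, their parameter values, and the use of base R’s `uniroot` root finder (standing in here for the Powell-style optimization) are illustrative assumptions, not the course model itself.

```r
# Sketch: find the price that clears the market, assuming hypothetical
# linear demand curves for MooseCo and WolfCo and a fixed exogenous supply.
moose_demand <- function(p, a = 100, b = 2.0) pmax(a - b * p, 0)
wolf_demand  <- function(p, a =  80, b = 1.5) pmax(a - b * p, 0)
supply <- 120  # exogenous supply, units per period

# Excess demand vanishes at the market-clearing price
excess_demand <- function(p) moose_demand(p) + wolf_demand(p) - supply

clearing <- uniroot(excess_demand, interval = c(0, 100))
clearing$root  # about 17.1 under these illustrative parameters
```

In the course model the demands emerge endogenously from the predator-prey customer stocks rather than from fixed curves, but the clearing logic is the same: search for the price at which excess demand vanishes.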
At a first approximation we arrive at a rational expectations equilibrium in this realistic case of cutthroat competition. As we wander through this maze of boxes, arrows, spreadsheet simulations, and graphs, we may do well to recall our purpose in this course, namely to examine the impact of highly interactive market decision makers (overall potential customers, MooseCo, WolfCo, an anonymous supplier) on market states (price, customers, demand, supply) across time. We have one more task, namely, to examine the information content of prices, demand, and supply as strategic inputs to binary decision alternatives in a Bayesian probabilistic context.
A second approximation looms in our next, and last, video together. There we will impose a simple binomial up-down branching process on some interesting parameter to form probabilistic expectations of price. We might then wander into a model with storage to make supply a bit less exogenous.
We should realize that the R version of the course has a market interpretation as well. There we examine the “market” of labor demand through hours worked and labor supply through quality of work, availability, and burnout. There is a valuation metric hiding in that complex interaction as well. It is the price which balances the demand and supply of labor in a forced outage. So much to learn and discover!
Cyert, R. M., & DeGroot, M. H. (1974). Rational expectations and Bayesian analysis. Journal of Political Economy, 82(3), 521-536. (available: https://iiif.library.cmu.edu/file/Simon_box00028_fld02010_bdl0001_doc0001/Simon_box00028_fld02010_bdl0001_doc0001.pdf )
Powell, M. J. D. (1964). An efficient method for finding the minimum of a function of several variables without calculating derivatives. The Computer Journal, 7(2), 155-162.
Here is a video with which to amuse ourselves.
[And a toy spreadsheet to play with](models/spreadsheets/mooseco-wolfco-limits-capacity-synthetic-waic-decision-price.xlsm)
A little lite philosophy: a work always in process
Does philosophy matter to the humanistic manager? Yes, indeed. Bottom line up front: updatable Bayesian probabilistic inference over frequentist mindsets in our reasoning processes; at least first-order predicate logic in validating the logic of our discoveries and inventions; a humanistic prioritization of the dignity of humans over technology, and reasoning enveloping science; a decision objective of minimizing the maximum grief of the most vulnerable over the greatest good for the greatest number; bottom-up governance and collaboration guided by top-down servant leaders over bureaucratic authoritarian division and competition. Reality conforms our thinking.
A philosophical anthropology that is a manifold encompassing various sociological, political, and psychological frameworks and insights would have us model our understanding of the social, political, and psychological worlds as a transcending network of communal giving and receiving humans (who might be managers).
This in turn means we should use a systems approach to analysis, along dynamic lines and across the various relevant scenarios in which our questions and responses might be framed. We would be prioritizing humans and their communities over institutions; freedom over license; people over technology; and being over doing.
A philosophy of mind would evolve into a philosophy of a community of minds who prioritize reason over science. The process of reasoning, from a mind in relation to others (a committee, anyone?), would guide decision process and product, resulting in knowledge as judgments of what is and is not, and of values to prioritize in doing, resulting from a dynamic understanding of entities (think stocks of states of nature in system dynamics) in relation to one another (the arrows of causality from one entity to another), verified in the data of brutally honest experience, rinse, and repeat.
Ohhh – there’s so much more to consider! Instead let’s consider a simple perception.
When we look at light flowing over a sphere we see light from the darkness reflecting from the smooth manifold of the sphere. When we look more closely we see the edge of the dark shadow, a boundary between dark and light on the ball of the sphere. An even more discerning look reveals that the edge is fuzzy. It is in partial shadow. The edge is the terminus, the umbra, of the projection of light on a sphere. The fuzzy boundary is the penumbra (almost-dark). That is where we are now in this course, in that fuzzy boundary between what we know, the light, and what we have yet to discover, the dark shadow. Move the light around a bit and we move the terminus and its penumbra, revealing more of that smooth surface of the sphere. That’s our job throughout the course and after as well. We have only begun our journey to understand decisions. There is a whole psychology and sociology and anthropology of decisions awaiting us.
There is also a philosophy of decisions overarching all of our reasoning with the various departments of science, including economics and its mathematical representations. This area of philosophy is known as epistemology. It is worth a gander at least. A critical realist epistemology embeds a humanistic management anthropology and might involve the following.
Kant’s “Copernican” revolution defines truth such that “objects must conform to our knowledge” (my italics) (Kant 2008, p. 21). Knowledge, that is, Verstand, is understood here as some aspect of the cognitional operations of thinking, which further endow Verstand with the form of maxim. Lonergan (1957) indicates that the structure of intentionality of human consciousness extends Kant’s notion of thinking into the realm of rational self-consciousness and judgment of what is and is not, verified in the experience of the subject as virtually unconditioned (Lonergan 1957, pp. 348–364).
For managers this means it is not enough to think concepts about what is apparent through the senses; that is, it is not enough to take a look at, say, market data as empirically observed data. The manager as a self knows the reasons for the reasons of a judgment about the movements which might be indicated in market data. Reasons for the reasons are the data of consciousness. This is where managerial wisdom begins. The manager also experiences, understands, and judges the very reasons the manager even makes a judgment at all about market movements.
This leads to a manager who is responsible for enacting what the manager knows to be true. In this way knowledge instead conforms to reality in the subject who is the manager. The responsibility born of knowledge of market reality then impels the manager to decide on an action in the market, that is, the will drives the next managerial act. Subsequent sequences of acts, knowledge, and will consistent with knowledge build successive manifolds of market reality.
The manager is at once “explanatory genus coincident with explanatory species” (Lonergan 1957, p. 267). By explanatory is meant the ability to systematize the data of facts (e.g., prices) and the data of consciousness (e.g., reasons for the facts as understood and thus intelligible). That a manager is a genus means the manager can systematize what are otherwise species as lower levels of unsystematized coincidences (e.g., independent residuals in a regression of current prices on past prices and volumes of trade). In this one move the manager is the embodiment of a “transition from the intelligible to the intelligent” (Ibid., p. 267).
One bottom line for the humanistic manager is that market price data samples are necessary but hardly sufficient to discern a buy or a sell or a hold decision. Rarely is it one manager who acts, more likely the manager acts in concert with a community of managers, a management team. More importantly, the management team’s action plan to implement the results and judgments from an understanding of market movements, with the manager present in those very movements, means that the group relationships of each manager in communication with other managers on the team, and perhaps in the marketplace as well, deposits the data of their collective consciousnesses, their reasons for their reasons, into the action plan. This plan now systematizes at a higher viewpoint, a genus, from the team’s perspective, various lower unsystematic components of viewpoints, perhaps several species, and thus develops a transcending and innovative final end.
Yikes! Are we still sure, after bobbing and weaving through this material, that we have a provisional answer to the question, does philosophy matter? I believe so. If we go with Kant, we invent our reality, call it a technology for short, in our minds. We begin to build a totalitarian / authoritarian approach to decisions. This is rather dicey. Let’s go along with Kant’s philosophy and let megalomaniac 1 (MM1 for short) invent a reality, get some followers to go along with this conforming of reality to an invention of the mind, and foist this made-up reality on others in a family, a community, an organization, a polity, local and global. First, the “others” will literally lose their voice. Some might be so upset as to follow Kant as well. We will label some very upset persons as MM2, who invent their own reality, get their own followers, and counter-foist their reality against MM1 and MM1’s followers. The result is division and the equivalent of war. Development cannot possibly happen since no one can question, revise, rebuild, recycle, refine, in a word, transcend the current division-driven downward spiral of anti-progress. Vices of divide-and-conquer, what’s-in-it-for-me, do-what-I-say-not-what-I-do abound.
Okay, then after MM1 and MM2 and their followers have annihilated one another, what might happen next? We might go with a critical realist position. This will, by the way, go well with about 5,000 years of the wisdom tradition which girds the global community. Instead now we pick up the pieces, realize that reality is not in our minds but is represented by what we observe to be true, beautiful, and good, all simultaneously outside of our minds in the wide world of sports where we too participate in what we observe, since we can also observe ourselves, our decisions, our reasons for the reasons for our decisions. Instead of a totalitarian / authoritarian approach to decisions, we begin to build a communal, participative, relational giving for the other approach to living. This leads us to discover new ways to develop, transform, grow from the bottom up, served by the top down. Virtues of solidarity with one another and subsidiarity in respect of our innate dignity and contributory skills and capabilities are drilled into our children and ourselves.
The (next to the) last mile …
Try to attend our Sixth Live Session tomorrow, Saturday, April 19, 2025, from 10am-noon (ET, UTC-4) on Zoom: https://us06web.zoom.us/j/9177353014. As usual, featured will be questions and answers and not a few solutions as we crank up the mechanics of this online course. While mechanics might annoy us from time to time, the purpose of modeling is to enable insightful analysis and interpretation. Sensitivity analysis will dominate much of the discussion. The session may be video’d for posterity and deposited on a YouTube playlist dedicated to this term’s course experience.
This week we will add a two-decision, two-state model to the mix from Acting on Bayes. We may even perform this feat on a spreadsheet (again!).
Check out the video based on a 1986 article by John Sterman, “Expectation Formation in Behavioral Simulation Models” [Sterman (1986)](https://systemdynamics101.com/notes/sterman-1986-behavioral-expectations-formation.pdf). Sterman develops the model as well in Business Dynamics, pp. 634-643, with case studies following the baseline model. Here is an implementation of the model to highlight the character of initial conditions, with a switch to let the initially perceived present condition change with changing inputs.
Now that we have filed our returns …
We continue with our spreadsheet implementation of Bayesian (probabilistic) decision analysis. This time around we apply our inferential analysis of probabilities of hypotheses (now called decision states) using evidence we have gathered (perhaps simulated) to two decision alternatives, across low and high risk scenarios for customer acquisition.
[Spreadsheet here](models/spreadsheets/mooseco-wolfco-limits-capacity-synthetic-waic-decision.xlsm)
We also walked back to Week 3 and Acting on Bayes for some further examples of decision optimization. We are bringing all of our experiences and models to the fore now. Interpretation is our next big hurdle.
When April Showers … Again
Try to attend our Fifth Live Session tomorrow, Saturday, April 12, 2025, from 10am-noon (ET, UTC-4) on Zoom: https://us06web.zoom.us/j/9177353014. As usual, featured will be questions and answers and not a few solutions as we crank up the mechanics of this online course. While mechanics might annoy us from time to time, the purpose of modeling is to enable insightful analysis and interpretation. Sensitivity analysis will dominate much of the discussion. The session may be video’d for posterity and deposited on a YouTube playlist dedicated to this term’s course experience.
This week we will add a two-decision, two-state model to the mix from Acting on Bayes. We may even perform this feat on a spreadsheet.
Thermodynamics, Entropy, Model Comparison: WAICing through a comparison of models
In this episode we compare two models based on two data sets of MooseCo customers. Each is sampled at the same 7 MooseCo sites, but one counts customers who took less than an hour to decide to transact and the other, well, more than an hour. We use a simple grid approximation model to mash together data with hypotheses about the average intensity of customer transactions under the two regimes of low and high touch sales experience. We use the log pointwise predictive density (lppd, for short) and a volatility penalty (pWAIC, the sum across observations of the variance, across samples, of each observation’s log-likelihood) to measure information uncertainty and forecast predictability through the widely applicable information criterion (WAIC = -2(lppd - pWAIC)); entropy and the 2nd Law of Thermodynamics at work. We then compare the uncertainty and volatility difference between the two regimes through their WAIC scores. Watch me fumble through a scatterplot - yikes, the spreadsheet platform was not very forgiving! A reboot of the platform allowed for a simple yet fairly decisive view of two distinct customer decision regimes.
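For those who want to see the arithmetic outside the spreadsheet, here is a minimal R sketch of the WAIC calculation from a matrix of pointwise log-likelihoods. The simulated counts, intensity draws, and sample sizes are illustrative assumptions standing in for our grid approximation.

```r
# Sketch: WAIC from pointwise log-likelihoods.
# Rows of log_lik are posterior (grid) draws; columns are the 7 site counts.
set.seed(42)
y      <- rpois(7, lambda = 10)           # illustrative site counts
lambda <- rnorm(1000, mean = 10, sd = 1)  # illustrative draws of the intensity
log_lik <- sapply(y, function(yi) dpois(yi, lambda, log = TRUE))  # 1000 x 7

lppd   <- sum(log(colMeans(exp(log_lik))))  # log pointwise predictive density
p_waic <- sum(apply(log_lik, 2, var))       # penalty: per-observation variance
waic   <- -2 * (lppd - p_waic)
waic  # lower is better when comparing the two regimes
```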
For those of you on a spreadsheet track, try to replicate the inference worksheet for the model you have been building. For those on the R track, just continue to work the regular assignment with Nettle’s model. Or if you like, replicate the spreadsheet in R? That would be a useful exercise. Let me know if you have any concerns!
Thanks for your patience. Bill
When April Showers …
Try to attend our Fourth Live Session tomorrow, Saturday, April 5, 2025, from 10am-noon (ET, UTC-4) on Zoom: https://us06web.zoom.us/j/9177353014. As usual, featured will be questions and answers and not a few solutions as we crank up the mechanics of this online course. While mechanics might annoy us from time to time, the purpose of modeling is to enable insightful analysis and interpretation. Sensitivity analysis will dominate much of the discussion. The session may be video’d for posterity and deposited on a YouTube playlist dedicated to this term’s course experience.
Here is yet another model two of your mates are working on. Cobb-Douglas Supply and Demand with Price Formation.
Yet another model with a synthesizer.
Again, these spreadsheet models do not run very quickly, but they are pleasing in so many other ways.
Spring has Sprung
Here is a spreadsheet version of the R synthetic sampler model from Week 2.
Spreadsheet synthesizer: MooseCo data, in this semester’s Bayesian Decision Analysis - 2025 playlist.
We demonstrate a workflow in a spreadsheet to sample data from our (partly) endogenized limits-to-predation model of predator WolfCo and prey MooseCo. We sneakily (week 3 material) use a hierarchical Bayesian prior-to-posterior generative model for synthesizing MooseCo customer count data.
We sample the initial customer market parameter using the Poisson distribution with Gaussian-distributed lambda intensities. This will stand in for our synthesized and sampled observational model of customer base interactions in this model.
We then pull monthly (month 1, 2, …, 24) data from the 12,000 simulations using INDEX(MATCH()) to build a calculation region. Every time the spreadsheet recalculates, this region changes.
We feed the recalculated monthly sampled Poisson realizations, in this case for sMoose, into an interface region.
A VBA subroutine recalculates the calculation region, copies the interface region’s values, and pastes the values into a simulation region across 100 consecutive runs, using OFFSET() to advance the pasting cell positions. A simple status bar display monitors our sampling process.
We view the medians of the sampled runs in a graph.
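For comparison, here is a minimal R sketch of the same synthesizer workflow. The means, spreads, horizon, and run count are illustrative assumptions, and a small truncation keeps the Gaussian-drawn intensities valid for the Poisson.

```r
# Sketch: hierarchical Poisson synthesizer, the R analogue of the
# spreadsheet's recalculate-copy-paste loop over 100 runs.
set.seed(1)
n_runs   <- 100  # the spreadsheet's simulation region
n_months <- 24   # months 1..24

one_run <- function() {
  # Gaussian-distributed lambda intensities, truncated just above zero
  lambda <- pmax(rnorm(n_months, mean = 50, sd = 10), 0.1)
  rpois(n_months, lambda)  # sampled MooseCo customer counts
}

runs    <- replicate(n_runs, one_run())  # n_months x n_runs matrix
medians <- apply(runs, 1, median)        # median sampled count per month
plot(1:n_months, medians, type = "b",
     xlab = "Month", ylab = "Median sampled customers")
```

The whole exercise runs in well under a second, which previews the speed comparison below.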
We can access the spreadsheet synthesizer model here.
On my Lenovo, tricked out with several gig of RAM, it still took over 5 seconds for each run summing to over 8 minutes for this one-factor demonstration model. The https://systemdynamics101.com/add-inference site has the same model in R, which will run many more samplings much faster.
Try to attend our third Live Session tomorrow, Saturday, March 29, 2025, from 10am-noon (ET, UTC-4) on Zoom: https://us06web.zoom.us/j/9177353014. Featured will be questions and answers and not a few solutions as we crank up the mechanics of this online course. While mechanics might annoy us from time to time, the purpose of modeling is to enable insightful analysis and interpretation. Sensitivity analysis will dominate much of the discussion. The session may be video’d for posterity and deposited on a YouTube playlist dedicated to this term’s course experience.
Spring is here!
Try to attend our second Live Session tomorrow, Saturday, March 22, 2025, from 10am-noon (ET, UTC-4) on Zoom: https://us06web.zoom.us/j/9177353014. Featured will be questions and answers and not a few solutions as we crank up the mechanics of this online course. While mechanics might annoy us from time to time, the purpose of modeling is to enable insightful analysis and interpretation. Sensitivity analysis will dominate much of the discussion. The session may be video’d for posterity and deposited on a YouTube playlist dedicated to this term’s course experience.
You may access the Spring 2025 playlist here.
This week we force our model to curtail the predatory activity and reactive decisions of two interacting organizations. Both want to acquire, retain, and reduce switching of customers. But we can also interpret these models as interactions between technological components, humans and machines, humans and humans sharing work (and thus rework). In this note we produce a model in a spreadsheet, again. But this time we add a potential market of customers, an allocation of potential customers (a very naive one at that) to the predator and prey customer bases, all to limit the growth of the market.
Here is a video, and supporting spreadsheet model, for us to peruse.
The Ides of March Await Us!
Try to attend our first Live Session tomorrow, Saturday, March 15, 2025, from 10am-noon (ET, UTC-4) on Zoom: https://us06web.zoom.us/j/9177353014. Featured will be questions and answers and not a few solutions as we crank up the mechanics of this online course. While mechanics might annoy us from time to time, the purpose of modeling is to enable insightful analysis and interpretation. Sensitivity analysis will dominate much of the discussion. The session will be video’d for posterity and deposited on a YouTube playlist dedicated to this term’s course experience.
Some housekeeping notes:
For those registered in a current Manhattan University MBA course, access the WALL course blog-site on the course Learning Management System (Moodle) and post your response there for credit. Examples of responses from other participants in the course are located at this public site https://systemdynamics101.blogspot.com/.
For those registered in a current Manhattan University MBA course, access the weekly grade assignment activity on the course Learning Management System (Moodle) and post your response there for credit. Answer the questions, and upload your first model.
Due dates are not deadlines. The only deadline in the course is at its completion, when grades must be posted to the Registrar for credit. But the due dates are there to help us pace ourselves, keep up with readings, and simply digest the complex of ideas we are attempting to conform to the reality of what we are modeling and ultimately interpreting for decision makers.
Et tu, Brute?
Welcome to our first week together as we explore strategic decision intelligence with Bayesian System Dynamics and highly interactive models of business decisions.
Here is a video, and supporting spreadsheet model, for us to peruse.
Video: spreadsheet predator-prey model.
We bring a Vensim model through its equation documentation into a spreadsheet. We then simulate the model and plot results. We find that this development might help us peer into the model mechanics of input and output flows as well as the accumulation (yes, a simple Euler integration) of state values in stock variables. All is System Dynamics of a complex predator-prey interaction. We might consider reusing this model as we deepen our understanding of more complex models.
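To make the accumulation concrete, here is a minimal R sketch of the same Euler scheme for a generic predator-prey pair of stocks. The parameter values, initial stocks, and time step are illustrative assumptions, not the documented course model.

```r
# Sketch: Euler integration of two stocks, prey (MooseCo customers) and
# predator (WolfCo customers): stock(t + dt) = stock(t) + dt * net flow.
dt <- 0.125; t_end <- 48; n <- t_end / dt
moose <- numeric(n + 1); moose[1] <- 100  # prey stock
wolf  <- numeric(n + 1); wolf[1]  <- 20   # predator stock
a <- 0.10; b <- 0.002; g <- 0.05; d <- 0.001  # flow-rate parameters

for (i in 1:n) {
  d_moose <- a * moose[i] - b * moose[i] * wolf[i]  # prey net flow
  d_wolf  <- d * moose[i] * wolf[i] - g * wolf[i]   # predator net flow
  moose[i + 1] <- moose[i] + dt * d_moose           # Euler accumulation
  wolf[i + 1]  <- wolf[i]  + dt * d_wolf
}
t <- seq(0, t_end, by = dt)
plot(t, moose, type = "l", xlab = "Month", ylab = "Stocks")
lines(t, wolf, lty = 2)
```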
Enjoy!
We are about to begin our first week together in MBA 645 Strategic Decision Intelligence at Manhattan University with the module Add Inference. Welcome to all who enrolled. A welcome to those who might begin a self-study course for your own edification.
Enjoy the ride!
You can text me on my mobile (917-767-7980) anytime. Please let me know who you are and give me 24 hours to respond. I’m usually a bit quicker than that. We will have live sessions on Zoom every Saturday from 10am-noon.
Thanks, Bill
Enjoy, and always encourage one another daily, while it is still today!
William G. Foote, Ph.D.
Mobile/Text: 917-767-7980
Zoom: https://us06web.zoom.us/j/9177353014
GitHub: https://github.com/wgfoote/
Office hours (MBA 645 Summer 2024):
At the end of these courses students can expect to demonstrate progress in meeting the following goals, proposed here as actions with verbs in the imperative mood.
Pose a researched business question, model the causal influences implicit in the question, simulate potential causal relationships and counterfactual inferences and their sensitivities, and align inferences with decision alternatives and plausible choices for stakeholders.
Deploy analyses which produce interactive analytical products using an industry-grade computational platform engineered according to a tradition of design principles.
Using endogenous generative models, summarize experience and beliefs about stakeholders, their data, and the processes that generated the data, to infer potential outcomes to answer business questions.
Practice quantitative critical thinking skills through a compound of statistical and normative problem solving which links strategic policies and practices with stakeholders.
Understand the role of the analyst and the analytics process in the decision-making context of complex organizations and their environments.
Communicate analytical decision results to decision makers and other consumers of analytical products effectively using interactive tables and graphs.
For my part this curriculum emanates from over 45 years of learning from and teaching managers system dynamics and statistical inference at Fordham University, Clarkson University, Syracuse University, and Le Moyne College. I have used SD techniques and simulations at a variety of financial institutions, high tech, energy, retail, governmental and not-for-profit organizations world-wide. I especially want to acknowledge the many years of working with my son, Andrew Foote, who, with his company Paraclete Risk Solutions LLC, was critical in the development, promotion, and delivery of systems models, strategy, consulting, and services to multiple public and private sector clients over the past 20 years.
I have liberally taken materials and ideas (some might say I curated materials) from several extant courses. They all flow from the avowed discoverer of the system dynamics methodology, Jay W. Forrester, and his decades of work, and students, at the Sloan School of Management, MIT.
First, the maths: Harry Hochstadt’s Differential Equations: A Modern Approach (1963-4), whose first 84 pages detail the math behind system dynamics, namely solving systems of simultaneous differential equations.
Second, the numerics: Joel Ferziger’s Numerical Methods for Engineering Applications, 1978. Yes, FORTRAN. I moved most of these routines to MATLAB and APL2 in the 80’s.
Jay Forrester’s 1998 MIT Introduction to System Dynamics self-study course: everything you will need to know about the formulation and interpretation of System Dynamics models, from the inventor.
George Richardson’s 2013 University at Albany public policy courses.
John Sterman’s 2013 Introduction to System Dynamics MIT-OCW course
Ventana Systems’ VensimPLE Modeling Guide and Tutorial, along with Tom Fiddaman’s MetaSD model library.
The premise of this curriculum is that learning is inference. Learning can be reading, understanding, reflecting whether in our heads or with complex computing environments. We begin with the following chain of reasoning:
All events, and data collected from events, have a truth value.
Probability is the strength of plausibility of a truth value.
Inference is a process of attaining justified true belief, otherwise called knowledge; learning is inference.
Justification derives from strength of plausibility, that is, the probability distribution of a hypothesis conditional on the data and any background, prior, and assumptive knowledge.
The amount of surprise, or informativeness, of the probability distribution of data given our experiences is the criterion for statistical decision making – it is the divergence between what we knew to be true and what we find out to be true.
All statistical analysis, and reasoning within analysis, begins from a disturbance in the status quo. The disturbance is the outlier, the error, the lack of understanding, the inattentiveness to experience, the irrationality of actions, that is, the inconsistency between knowledge and action based on knowledge.
We are surprised when the divergence between what we used to know and what we come to know is wider than we expected, that is, believed. The analysis of surprise is the core tool of this course. In a state of surprise we achieve insight, the aha! moment of discovery, the eureka of innovation.
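As a tiny numerical illustration of divergence as surprise, the following R sketch computes the Kullback-Leibler divergence between a prior belief and an updated belief over three states of the world; the probabilities are made up for the example.

```r
# Sketch: surprise as Kullback-Leibler divergence from prior q to posterior p.
p <- c(0.1, 0.2, 0.7)  # what we come to believe over three states
q <- c(1, 1, 1) / 3    # what we used to believe (an uninformed prior)
sum(p * log(p / q))    # divergence in nats; larger means more surprise
```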
The course will boil down to the statistics (minimum, maximum, mean, quantiles, deviations, skewness, kurtosis) and the probability that the evidence we have supports any proposition(s) we claim.
The evidence is the strength (for example, in decibels, base 10) of our hypothesis or claim. The measure of evidence is the measure of surprise and its complement, the informativeness of the data, current and underlying, inherent in the claim.
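As a sketch of that decibel measure, here is the usual 10 * log10 odds convention applied to a made-up prior and posterior probability for a claim; the numbers are purely illustrative.

```r
# Sketch: evidence for a claim measured in decibels, 10 * log10(odds).
db <- function(prob) 10 * log10(prob / (1 - prob))
prior_prob <- 0.5  # before the data: even odds, 0 dB
post_prob  <- 0.9  # after the data
db(post_prob) - db(prior_prob)  # about 9.5 dB of evidence gained
```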
Copyright 2024, William G. Foote, all rights reserved.↩