Fragile predictive modeling
> Predictive modeling to quantify risk: this tends to result in people taking on more risk; the predictive model tells them that they are taking on better risks, but all risks are bad risks in the tail scenario [if risk taking is constrained by an exposure metric as opposed to a loss metric, then it might be fine]
> Predictive modeling to support long-term projection: let’s say we do a 10-year projection; for the first 9 years, reality is consistent with the prediction; at the end of year 9, it would be tempting to believe that the predictive model must be right and to not do much to prepare for alternative scenarios, which can result in disaster [if the success of the prediction does not result in less preparation for alternative scenarios, then it might be fine]
Antifragile predictive modeling
> Predictive modeling where the model learns from immediate feedback: let’s say the model identifies someone who is likely to buy a product; if the person does not buy the product, the failure provides one more data point for the model to learn from.
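The buy/no-buy example above is essentially online learning: the model updates after every outcome instead of being fit once and left alone. A minimal sketch, with hypothetical names and a toy one-feature data stream (none of this is from the post itself; it is just one way the idea could look in code):

```python
import math

class OnlineBuyerModel:
    """Toy logistic model that updates after every observed outcome.
    Class, feature, and parameter names are illustrative assumptions."""

    def __init__(self, n_features, lr=0.5):
        self.w = [0.0] * n_features  # one weight per feature
        self.b = 0.0                 # intercept
        self.lr = lr                 # learning rate

    def predict_proba(self, x):
        # Probability that this person buys, via the logistic function.
        z = self.b + sum(wi * xi for wi, xi in zip(self.w, x))
        return 1.0 / (1.0 + math.exp(-z))

    def update(self, x, bought):
        # A wrong prediction is not just a miss; it is one more data point.
        err = self.predict_proba(x) - (1.0 if bought else 0.0)
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b -= self.lr * err

# Simulated feedback stream: people with the feature tend to buy.
model = OnlineBuyerModel(n_features=1)
for _ in range(500):
    model.update([1.0], bought=True)   # likely buyer who bought
    model.update([0.0], bought=False)  # unlikely buyer who did not

p_buyer = model.predict_proba([1.0])
p_other = model.predict_proba([0.0])
```

The point of the sketch is the shape of the loop, not the model: every prediction is followed by an outcome, and the outcome feeds straight back into the weights, so failures make the model better rather than merely making it look worse.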
Big data without artificial intelligence >>> using yesterday’s data to predict tomorrow’s events >>> probably fragile
Big data with artificial intelligence >>> using some data from today to predict other events today >>> possibly antifragile
This post is fragility: part iv. I’m aiming for ten parts but who knows where this will take me. Part i was a post I wrote back in August (like everyone else—just better than everyone else). Part ii was a submission to the Product Matters newsletter published in February. Part iii was a post I wrote in December (thinking differently is a source of flow).