AI/Machine learning in fluid mechanics, engineering, physics

Making any ML Prediction Explainable. Part 2: Literature Survey

Review of Explainable AI for fluid dynamics and heat transfer

Justin Hodges, PhD
Aug 19, 2025

Engineering simulations (e.g. CFD for fluid flows, FEA for structural analysis, heat transfer models, etc.) often use surrogate models - fast, data-driven approximations of high-fidelity solvers - to enable quicker analysis and design. However, the black-box nature of many ML-based surrogates (e.g. pretty much any flavor of deep neural networks) can hinder their adoption in safety-critical or design-critical workflows.

In this review, I’ll survey techniques for enhancing surrogate-model interpretability at three stages: before model training, during model development (through inherently interpretable models), and after training via post-hoc explanations. Framing it this way keeps the article broadly applicable to many different folks, largely agnostic to whichever model you are using.

I’ll also try to cover a range of surrogate types – from neural networks and Gaussian processes to decision trees and symbolic regression – and highlight representative methods in CFD, FEA, heat transfer, and related domains.
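To make the post-hoc flavor concrete, here is a minimal sketch (not from the article itself) of one common post-hoc technique, permutation importance, applied to a black-box surrogate. The heat-transfer setup, input names, and the Dittus-Boelter-style target are all hypothetical stand-ins for a real solver dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

# Toy stand-in for high-fidelity solver data: a heat-transfer quantity
# as a function of (Reynolds number, Prandtl number, roughness).
rng = np.random.default_rng(0)
X = rng.uniform(0.1, 1.0, size=(500, 3))
# Dittus-Boelter-style dependence on Re and Pr, plus noise; roughness
# deliberately has no effect, so a good explainer should rank it last.
y = X[:, 0] ** 0.8 * X[:, 1] ** 0.4 + 0.01 * rng.normal(size=500)

# Train a black-box surrogate of the "solver".
surrogate = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Post-hoc explanation: permutation importance measures how much the
# surrogate's accuracy degrades when each input column is shuffled.
result = permutation_importance(surrogate, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["Re", "Pr", "roughness"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

Because the method only queries the trained model, the same recipe applies whether the surrogate is a random forest, a Gaussian process, or a deep network.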

This is a follow-up to Part 1 in this series on explainability, and I plan a subsequent post on this topic.


This post is for paid subscribers
