Modern requirements for machine learning models include both high predictive performance and model interpretability. A team of experts in explainable AI highlights pitfalls to avoid when addressing model interpretation, and discusses open issues for further research. Images created by Christoph Molnar: https://twitter.com/ChristophMolnar/status/1281272026192326656

1. Bad model generalization

Any interpretation of relationships in the data is only as good as the model it is based on. Both under- and overfitting can lead to bad models and misleading interpretations.
=> Use proper resampling techniques to assess model performance.
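As a minimal sketch of this recommendation (the dataset, model, and scikit-learn calls below are illustrative assumptions, not part of the original article), k-fold cross-validation gives a far more honest performance estimate than the training-set score:

```python
# Sketch: estimate generalization performance with k-fold cross-validation
# instead of trusting the training-set score (illustrative example).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
model = RandomForestClassifier(random_state=0)

# Training-set accuracy is optimistically biased for flexible models.
train_acc = model.fit(X, y).score(X, y)

# 5-fold cross-validation gives a more honest performance estimate.
cv_scores = cross_val_score(model, X, y, cv=5)
print(f"training accuracy:  {train_acc:.2f}")
print(f"5-fold CV accuracy: {cv_scores.mean():.2f} +/- {cv_scores.std():.2f}")
```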
2. Unnecessary use of ML

Don't use a complex ML model when a simple model achieves the same (or better) performance, or when the gain in performance would be irrelevant.
=> Check the performance of simple models first, then gradually increase complexity.
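A hedged sketch of this workflow, comparing a simple baseline against a more complex model under cross-validation (the models and synthetic data are illustrative choices):

```python
# Sketch: compare a simple baseline against a more complex model before
# committing to the complex one (illustrative models and data).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

for name, model in [
    ("logistic regression (simple)", LogisticRegression(max_iter=1000)),
    ("gradient boosting (complex)", GradientBoostingClassifier(random_state=0)),
]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: {scores.mean():.2f} +/- {scores.std():.2f}")

# If the simple model is within the noise of the complex one, prefer it:
# it is easier to interpret and less likely to mislead.
```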
3.1 Ignoring feature dependence

When features depend on each other (as they usually do), interpretation becomes tricky, since their effects can't be separated easily.
=> Analyze feature dependence. Be careful with the interpretation of dependent features. Use appropriate methods.
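One simple, hedged way to start such an analysis is a pairwise correlation screen (variables and threshold below are illustrative; note that correlation only catches linear dependence, which is exactly the next pitfall):

```python
# Sketch: a quick screen for pairwise feature dependence via the
# correlation matrix (only detects linear dependence; see pitfall 3.2).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
x1 = rng.normal(size=500)
x2 = 0.8 * x1 + 0.2 * rng.normal(size=500)   # x2 depends on x1
x3 = rng.normal(size=500)                     # independent feature
df = pd.DataFrame({"x1": x1, "x2": x2, "x3": x3})

print(df.corr().round(2))
# Strongly correlated pairs (e.g., |r| > 0.7, an arbitrary threshold)
# are a warning sign: the separate effects of x1 and x2 on a model's
# predictions cannot be disentangled cleanly.
```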
3.2 Confusing dependence with correlation

Correlation is a special case of dependence; features can be dependent in much more complex ways.
=> In addition to correlation, analyze data with alternative association measures such as the Hilbert-Schmidt Independence Criterion (HSIC).
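As an illustration, here is a minimal sketch of the biased empirical HSIC estimator (Gretton et al., 2005) with RBF kernels and the common median-distance bandwidth heuristic; the synthetic data are chosen so that Pearson correlation is near zero while HSIC still detects the dependence:

```python
# Sketch: biased empirical HSIC estimator with RBF kernels.
# HSIC is near 0 for independent variables and picks up nonlinear
# dependence that Pearson correlation misses.
import numpy as np

def rbf_kernel(x, bandwidth):
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d2 / (2 * bandwidth ** 2))

def median_bandwidth(x):
    # Median heuristic: median of the nonzero pairwise distances.
    d = np.abs(x[:, None] - x[None, :])
    return np.median(d[d > 0])

def hsic(x, y):
    n = len(x)
    K = rbf_kernel(x, median_bandwidth(x))
    L = rbf_kernel(y, median_bandwidth(y))
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
x = rng.normal(size=300)
y = x ** 2 + 0.1 * rng.normal(size=300)   # dependent, yet uncorrelated

print(f"Pearson r:          {np.corrcoef(x, y)[0, 1]:.3f}")   # near 0
print(f"HSIC (dependent):   {hsic(x, y):.4f}")                # clearly larger
print(f"HSIC (independent): {hsic(x, rng.normal(size=300)):.4f}")
```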
4. Misleading effects due to interaction

Interactions between features can "mask" individual feature effects.
=> Analyze interactions with, e.g., two-dimensional partial dependence plots (2D-PDPs) and interaction measures such as the H-statistic.
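A small sketch using scikit-learn's PartialDependenceDisplay on synthetic data with a built-in interaction (the data-generating process is an illustrative assumption): the 1D partial dependence of each feature is nearly flat, while the 2D plot reveals the interaction.

```python
# Sketch: 1D PDPs can look flat when features interact; the 2D PDP
# of the feature pair reveals the masked interaction.
import matplotlib.pyplot as plt
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 3))
# Target with an explicit interaction between features 0 and 1:
y = X[:, 0] * X[:, 1] + 0.1 * rng.normal(size=500)

model = GradientBoostingRegressor(random_state=0).fit(X, y)

# 1D PDPs for features 0 and 1 average the interaction away and appear
# flat; the 2D PDP for the pair (0, 1) shows the saddle-shaped surface.
PartialDependenceDisplay.from_estimator(model, X, [0, 1, (0, 1)])
plt.show()
```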
5. Ignoring estimation uncertainty

There are many sources of uncertainty: model bias, model variance, and the estimation variance of the interpretation method.
=> In addition to point estimates (e.g., of feature importance), quantify the variance. Be aware of what is treated as 'fixed.'
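A hedged sketch using scikit-learn's permutation_importance, which repeats each permutation n_repeats times so the spread across repeats can be reported alongside the mean (dataset and model are illustrative):

```python
# Sketch: report permutation feature importance as mean +/- std over
# repeated permutations, instead of a single point estimate.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=5, noise=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# n_repeats controls how often each feature is re-permuted; the spread
# across repeats estimates the variance of the importance estimate.
result = permutation_importance(model, X_test, y_test, n_repeats=30,
                                random_state=0)
for i in range(X.shape[1]):
    print(f"feature {i}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")

# Note: this treats the fitted model as 'fixed'; refitting on resampled
# data would add model variance on top of this estimation variance.
```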
6. Ignoring multiple comparisons

If you have many features and don't adjust for multiple comparisons, many of them will be falsely flagged as relevant to your model.
=> Use p-value correction methods such as Bonferroni or Benjamini-Hochberg.
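A minimal sketch with statsmodels' multipletests on pure-noise data, where every uncorrected "discovery" is a false positive by construction (the data, tests, and thresholds are illustrative):

```python
# Sketch: correct per-feature p-values for multiple comparisons.
# All 50 features are pure noise, so any "discovery" is false.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.normal(size=(n, p))   # 50 pure-noise features
y = rng.normal(size=n)        # target unrelated to all of them

# Naive per-feature tests: correlation of each feature with the target.
pvals = np.array([stats.pearsonr(X[:, j], y)[1] for j in range(p)])
print("uncorrected 'discoveries':", (pvals < 0.05).sum())  # ~2-3 expected

# Benjamini-Hochberg correction controls the false discovery rate.
reject, pvals_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print("corrected discoveries:    ", reject.sum())          # typically 0
```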
7. Unjustified causal interpretation

By default, the relationships modeled by your ML model may not be interpreted as causal effects.
=> Check whether the assumptions required for a causal interpretation actually hold.
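To see why, here is a hedged toy example of a confounded data-generating process (all variables synthetic): a model assigns a feature a large effect even though intervening on that feature would change nothing.

```python
# Sketch: predictive importance is not a causal effect when a hidden
# confounder drives both the feature and the target.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)             # hidden confounder
x = z + 0.5 * rng.normal(size=n)   # x is caused by z, not by y
y = z + 0.5 * rng.normal(size=n)   # y is caused by z, not by x

# A model using only x predicts y and gives x a large coefficient ...
model = LinearRegression().fit(x.reshape(-1, 1), y)
print(f"coefficient on x: {model.coef_[0]:.2f}")   # ~0.8, looks 'causal'

# ... but intervening on x (breaking its link to z) changes nothing in y:
x_do = rng.normal(size=n)          # do(x): x set independently of z
print(f"corr(do(x), y): {np.corrcoef(x_do, y)[0, 1]:.2f}")   # ~0

# Only under explicit causal assumptions (e.g., z observed and adjusted
# for, no unmeasured confounding) can model effects be read causally.
```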