Counterfactual explanations are used for interpreting predictive machine
learning models. We describe three methods for generating counterfactual
examples (LIME-C, Polyjuice, and ChatGPT) and highlight their key features.
We apply them to three different text datasets. We compare the methods
and the counterfactual examples they produce, and evaluate the examples
according to the quality criteria for counterfactuals. We conclude that there is
no single best solution and that each approach has advantages and disadvantages.
Nevertheless, the most general and useful solution at the moment is to generate
counterfactual examples with the ChatGPT large language model.