Quantifying Actionability: Evaluating Human-Recipient Models
By: Nwaike Kelechi, Licheng Jiao
Format: Article
Published: IEEE, 2023-01-01
Description
With the increasing use of machine learning and artificial intelligence (ML/AI) to inform decisions, there is a need to evaluate models beyond traditional metrics, not just from the perspective of the issuer-user (I-user) commissioning them but also for the recipient-user (R-user) affected by their decisions. We propose evaluating R-user-focused actionability: the degree to which the R-user can influence future model predictions through feasible, responsible actions that align with the I-user's goals. We present an algorithm to categorize features as actionable, non-actionable, or conditionally non-actionable based on their mutability and cost to the R-user. Experiments were carried out using tree models paired with SHAP and permutation feature importance on tabular datasets. Our key findings indicate noteworthy differences in global actionability across datasets, even among datasets purposed towards similar goals, and observable but less pronounced differences among the model-interpreter combinations applied to the same datasets. Results suggest that actionability depends on the entire pipeline, from problem definition and data selection to model choice and explanation method; that it provides a meaningful signal for model selection in valid use cases; and that it merits further research across diverse real-world datasets. The research extends ideas of local and global model explainability to model actionability from the R-user perspective. Actionability evaluations can support accountable, trustworthy AI and provide incentives for serving R-users, not just issuers.
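A minimal sketch of the idea described in the abstract, not the authors' published algorithm: features are tagged as actionable, conditionally non-actionable, or non-actionable from per-feature mutability and cost annotations, and the share of a tree model's SHAP explanation mass falling on actionable features is used as one plausible global actionability signal. The `FEATURES` annotations, `COST_THRESHOLD`, the synthetic data, and the aggregation rule are all illustrative assumptions.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical per-feature annotations (assumed, not from the paper):
# mutable = can the R-user change it at all; cost = effort on a 0-1 scale.
FEATURES = {
    "age":          {"mutable": False, "cost": None},
    "income":       {"mutable": True,  "cost": 0.8},
    "savings_rate": {"mutable": True,  "cost": 0.3},
    "num_defaults": {"mutable": True,  "cost": 0.9},
}
COST_THRESHOLD = 0.7  # assumed cutoff separating feasible from costly actions

def categorize(meta, threshold=COST_THRESHOLD):
    """Map one feature's annotations to an actionability category."""
    if not meta["mutable"]:
        return "non-actionable"
    # Mutable but expensive: actionable only under favorable conditions.
    return "conditionally non-actionable" if meta["cost"] > threshold else "actionable"

categories = {name: categorize(meta) for name, meta in FEATURES.items()}

# Toy tabular data and a tree model standing in for the paper's experiments.
X = rng.normal(size=(500, len(FEATURES)))
y = (X[:, 1] + X[:, 2] - X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Mean |SHAP| per feature; handle both older (list per class) and newer
# (samples x features x classes array) return shapes of shap.
sv = shap.TreeExplainer(model).shap_values(X)
if isinstance(sv, list):
    sv = sv[1]
elif sv.ndim == 3:
    sv = sv[:, :, 1]
mean_abs = np.abs(sv).mean(axis=0)

# Share of explanation mass on actionable features: one assumed way to turn
# feature-level categories into a single global actionability signal.
names = list(FEATURES)
mask = [categories[n] == "actionable" for n in names]
actionable_share = mean_abs[mask].sum() / mean_abs.sum()
print(categories)
print(f"share of explanation mass on actionable features: {actionable_share:.2f}")
```

Under this sketch, comparing `actionable_share` across datasets or across model-interpreter pairs (e.g., swapping SHAP for permutation importance) mirrors the kind of global comparison the abstract describes.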