Table of Contents
Chapter 1 XAI (Explainable AI)

Part 1 Global Explainers
Chapter 2 ELI5 (Explain Like I'm 5)
Chapter 3 Types of Global Explainers and How They Work
3.1 PD (Partial Dependence)
3.2 PV (Partial Dependence Variance)
3.3 ALE (Accumulated Local Effects)
3.4 PI (Permutation Importance)
Chapter 4 Applications and Use Cases of PD, PV, ALE, and PI
4.1 Partial Dependence (PD)
4.2 Partial Dependence Variance (PV)
4.3 ALE
4.4 Permutation Importance (PI)

Part 2 Local Explainers
Chapter 5 LIME
5.1 Applying LIME to Different Types of Data
5.2 SP-LIME
5.3 LIME Applications and Use Cases
Chapter 6 Anchors
6.1 Determining Anchors
6.2 Anchors Applications and Use Cases
Chapter 7 IG (Integrated Gradients)
7.1 How IG Is Computed
7.2 Applying Integrated Gradients
Chapter 8 SE (Similarity Explanations)
8.1 GS (Gradient Similarity)
8.2 GS Applications and Use Cases

Part 3 Shapley Additive Explainers
Chapter 9 Shapley Values
9.1 Basic Concepts of Shapley Values
9.2 Shapley Values in Machine Learning
9.3 Kernel SHAP
9.4 Deep SHAP
9.5 SHAP Interaction Values
Chapter 10 Applying and Interpreting SHAP
10.1 KernelExplainer
10.2 shap.Explainer
10.3 TreeExplainer
10.4 DeepExplainer
10.5 SHAP Values for Text Data
Chapter 11 Decision Plots

Part 4 Counterfactual Explainers
Chapter 12 CEM (Contrastive Explanation Method)
12.1 CEM PPs and PNs
12.2 Applying CEM
Chapter 13 CFI (Counterfactual Instances)
13.1 CFI Applications and Use Cases
Chapter 14 CFP (Counterfactuals Guided by Prototypes)
14.1 Theoretical Background of CFP
14.2 k-d Trees and Trust Scores
14.3 CFP Applications and Use Cases
Chapter 15 CFRL (Counterfactuals with Reinforcement Learning)
15.1 How CFRL Works
15.2 Applying CFRL in ALIBI
15.3 CFRL Applications and Use Cases

Part 5 ALIBI and EBM
Chapter 16 Comparing and Applying ALIBI Explainers
Chapter 17 EBM (Explainable Boosting Machines)
17.1 Understanding the EBM Model
17.2 EBM Applications and Use Cases

References
Index