What is XAI?

Explainable AI (XAI), also known as transparent AI, consists of artificial intelligence techniques whose results can be readily understood by humans. An XAI system explains the why and how of a derived decision by exposing its objective, rationale and decision-making process. XAI becomes necessary in cases that implement a social right to explanation, and it underpins regulatory and accountability frameworks by providing algorithmic accountability. Some use cases of XAI are:

1. Data Protection: The European Union's General Data Protection Regulation (GDPR) includes a 'right to explanation' clause, which will require the use of XAI to explain how a decision was arrived at.

2. Medical: XAI has an application in healthcare Clinical Decision Support Systems (CDSS), where a system predicts a diagnosis for a patient from their medical records alone. If the CDSS has a convincing positive predictive value and can answer the clinician's question of 'how and why did the application arrive at this probable diagnosis?', it becomes much more valuable and trustworthy. E.g.: a patient is diagnosed with type 2 diabetes because their chief complaints include unexplained weight loss and increased hunger, their fasting blood sugar is 126 mg/dL (7 mmol/L) or higher, and their random blood sugar is 200 mg/dL (11.1 mmol/L) or higher.

3. Defense: XAI becomes important in military applications because lethal autonomous weapon systems (LAWS) can cause less damage if they are able to differentiate between a civilian and a combatant, and explanations make that distinction auditable.

4. Banking: In the banking sector, a regulator will look at overall business volumes and the amount of suspicious activity reported; any ratio outside the industry norm triggers a regulatory investigation. In such cases, XAI helps reduce false positives.

5. Finance: Around 40 million Americans do not get credit because simple AI models reject applicants with little or no history on file. XAI, by contrast, uses more data and improved algorithms to identify worthy borrowers that legacy models would have overlooked.
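The CDSS case above can be sketched as a rule-based check that returns both a decision and the rules that fired, which is the simplest form of an explainable decision. The function name, field names, and the two-criteria flagging rule below are hypothetical illustrations; the thresholds are the ones quoted in the example.

```python
def explain_diabetes_risk(fasting_mg_dl, random_mg_dl, symptoms):
    """Hypothetical explainable check for probable type 2 diabetes.

    Returns (flagged, reasons) so a clinician can see *why* the
    system arrived at its suggestion, not just a bare yes/no.
    """
    reasons = []
    if fasting_mg_dl >= 126:  # fasting blood sugar threshold from the example
        reasons.append(f"fasting blood sugar {fasting_mg_dl} mg/dL >= 126 mg/dL")
    if random_mg_dl >= 200:   # random blood sugar threshold from the example
        reasons.append(f"random blood sugar {random_mg_dl} mg/dL >= 200 mg/dL")
    for symptom in ("unexplained weight loss", "increased hunger"):
        if symptom in symptoms:
            reasons.append(f"reported symptom: {symptom}")
    # Hypothetical rule: flag only when at least two criteria are met.
    return len(reasons) >= 2, reasons

flagged, why = explain_diabetes_risk(
    130, 210, ["unexplained weight loss", "increased hunger"]
)
```

The returned `reasons` list is the explanation: each entry names a concrete criterion that contributed to the decision, which is exactly what answers the clinician's 'how and why' question.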

As decision-making factors become more transparent with XAI, it serves as a crucial ethical filter on decisions. Data is key to financial, defense and medical decision making, where regulatory requirements must be satisfied. With more efficient XAI models running in the cloud, these institutions will be able to explain why a specific decision was made and handle regulatory and compliance requirements efficiently.
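For the credit-scoring case, one common way to provide such an explanation is to decompose a linear model's score into per-feature contributions (weight times value) and report the top drivers as reason codes. The feature names, weights, and threshold below are hypothetical, chosen only to illustrate the technique:

```python
# Hypothetical linear credit-scoring model; weights and features are
# illustrative, not from any real lender.
WEIGHTS = {
    "on_time_payment_rate": 3.0,
    "debt_to_income": -2.5,
    "years_of_rent_history": 0.5,  # alternative data a legacy model might ignore
}
BIAS = -1.0
THRESHOLD = 0.0

def score_with_explanation(applicant):
    """Return (decision, score, ranked contributions) for an applicant."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = BIAS + sum(contributions.values())
    decision = "approve" if score >= THRESHOLD else "decline"
    # Rank features by absolute impact so the explanation leads with
    # the factors that actually drove the decision.
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return decision, score, ranked

applicant = {
    "on_time_payment_rate": 0.9,
    "debt_to_income": 0.4,
    "years_of_rent_history": 3,
}
decision, score, ranked = score_with_explanation(applicant)
```

Because every point of the score traces back to a named feature, the lender can tell a declined applicant which factors to improve, rather than issuing an opaque rejection.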