Facilitating ‘knowledge management’
There is a growing need to delve deeper into the use of ‘explainable’ artificial intelligence for productive and trustworthy ‘knowledge management’

Accuracy should not be the only criterion for trust; as ethical concerns about AI spread throughout society, transparency and explicability must also be considered. Over the past ten years, machine learning (ML) models for predictive and analytical tasks have grown in importance due to increased access to massive amounts of data and advancements in processing capacity. ML approaches have proven extremely effective and precise when applied to building decision-making systems that identify diseases and recommend medications. Yet because of the opaque nature of ML algorithms, it is extremely risky to entrust a system with crucial judgements when there is no way of explaining to its users how it arrived at them. Since trust is a key factor in developing decision support systems, and since the user or owner may want to know why the model arrived at a result, showing exactly how the model works is the need of the hour. Explaining the details of such a black-box AI model to someone from a different background, however, is difficult.
Explainable Artificial Intelligence (XAI) makes the internal workings of machine learning models transparently visible. In this context, explainability means understanding how the model operates without making it too complicated for the typical individual to comprehend. The explanation covers what data the model uses to decide, how it interprets that data, and what its objectives are. The overall aim of XAI is to enable humans to understand and trust the outcomes of AI and ML without compromising performance.
The openness of the algorithms makes it possible to verify that the model is impartial and that only significant factors influence outcomes. Transparency also aids the model's evolution by helping resolve errors and generate new ideas. For the scientists, businesspeople, and employees who utilise the system, XAI simply makes interpretation easier. Several sectors have adopted XAI because it makes the insights and forecasts produced by AI and ML systems easier to comprehend. XAI increases users’ trust in a model or system, helps meet legal standards, and lends ethical credibility.
Taxonomy of XAI
Various taxonomies have been proposed to classify explanation strategies. These vary greatly depending on the properties of the method and fall into several overlapping or non-overlapping classes. Locally interpretable models explain a single prediction, focusing on a single event and analysing how the model arrived at that prediction. To this end, a simplified interpretable model is used to approximate the black-box system's behaviour in the small region relevant to that prediction, as the sketch below illustrates. Global explainable approaches focus instead on the internals of the model, drawing on all information about the model, its associated data, and its training to give a bird's-eye view of how the various components of the data interact to influence the result; essentially, they attempt to characterise the essence of the model. Finally, while model-specific interpretation methods are built around the parameters of a particular model, model-agnostic methods are not tied to any architecture and do not directly access the model's internal weights or structure, treating it purely as a black box.
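To make the local case concrete, here is a minimal sketch in Python of how a simple interpretable surrogate can approximate a black-box model around one prediction, in the spirit of techniques such as LIME. The dataset, the random-forest black box, and the perturbation scheme are illustrative assumptions, not a prescription for any particular tool.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# Load a standard tabular dataset and train an opaque "black-box" classifier.
data = load_breast_cancer()
X, y = data.data, data.target
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Choose one instance whose prediction we want to explain, and sample
# small perturbations around it to probe the model's local behaviour.
x0 = X[0]
rng = np.random.default_rng(0)
neighbourhood = x0 + rng.normal(scale=0.1 * X.std(axis=0), size=(500, X.shape[1]))

# Query the black box on the neighbourhood and fit a simple linear surrogate;
# its coefficients serve as the local explanation of this single prediction.
probabilities = black_box.predict_proba(neighbourhood)[:, 1]
surrogate = Ridge(alpha=1.0).fit(neighbourhood, probabilities)

# Report the features that most strongly drive the prediction near x0.
top = np.argsort(np.abs(surrogate.coef_))[::-1][:5]
for i in top:
    print(f"{data.feature_names[i]}: {surrogate.coef_[i]:+.4f}")
```

The surrogate is only trusted near the chosen instance; a global explanation would instead summarise the model's behaviour over the whole dataset.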
Explainable AI for knowledge management
In recent years, data collection, separation, and supervision have become critical components of every business. To accomplish this, every enterprise expends significant human and financial resources on organising this information, a task known technically as Knowledge Management (KM). KM helps in gathering the necessary information, evaluating what has been collected, sharing it within or between enterprises, and assessing the available data to find the best solution for a given task. Knowledge management and artificial intelligence (AI) are two technologies driving a remarkable digital transformation in businesses and increasing customer engagement.
Explainable AI (XAI) bridges the divide between data scientists and knowledge scientists. The goal of data scientists is to gather and analyse unconnected, unmanaged data. Knowledge scientists, by contrast, focus on semantic AI, a crucial component of machine learning, and can extract normalised data that is readily turned into structures useful for building a Knowledge Graph (KG) as an XAI model. A KG is a semantic representation of knowledge that is more transparent and understandable to human experts and end-users. In addition, a KG provides interactive explanations to answer end-user queries through symbolic-level interpretation, offering insight into the inner workings of the model and explaining it in a more interactive way. The KG is the backbone of semantic relationships between entities, which are important for coherence analysis, reasoning, and causal inference, as the sketch below illustrates.
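As a minimal sketch of this idea, the snippet below builds a toy knowledge graph with a general-purpose graph library (networkx is assumed) and walks its edges to justify a recommendation symbolically; the entities and relations are made-up examples, not a real clinical ontology.

```python
import networkx as nx

# A toy knowledge graph: nodes are entities, each edge carries a named relation.
kg = nx.DiGraph()
kg.add_edge("Patient_42", "Type2Diabetes", relation="diagnosed_with")
kg.add_edge("Metformin", "Type2Diabetes", relation="treats")
kg.add_edge("Patient_42", "NormalRenalFunction", relation="has_condition")

def explain_recommendation(graph, patient, drug):
    """Return the stored facts that symbolically link a drug to a patient."""
    facts = []
    for _, disease, attrs in graph.out_edges(patient, data=True):
        if attrs["relation"] != "diagnosed_with":
            continue
        edge = graph.get_edge_data(drug, disease)
        if edge and edge["relation"] == "treats":
            facts.append(f"{drug} treats {disease}, which {patient} is diagnosed with")
    return facts

# The explanation is a human-readable chain of graph facts, not model weights.
print(explain_recommendation(kg, "Patient_42", "Metformin"))
```

The point of the sketch is that the justification is expressed in the vocabulary of the domain, which is what makes a knowledge graph attractive as an explanation layer for end-users.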
The potential role of AI in supporting fundamental KM dimensions such as knowledge creation, storage and retrieval, sharing, and application requires further exploration. This should be followed by practical ways to foster human-AI collaboration in organisational KM activities, as well as implications for developing and managing AI systems, grounded in people, infrastructure, and processes, in a more responsible and trustworthy way.
The writer is Associate Professor, Dept of Computer Science, Techno International New Town. Views expressed are personal