Building higher quality, trusted and Explainable AI (XAI) models
MARUTISH VARANASI
Principal Consultant & Director
QvantumX.co.in
June 2021
Contents
1. The significant impact of AI on the industry and society
2. Challenges in AI adoption
3. Why Explainable AI (XAI)
4. Key business drivers for implementation of Explainable AI (XAI)
5. Rewiring the organizations for AI
6. Building higher quality and trusted AI models faster and more efficiently
1. The significant impact of AI on the industry and society
Artificial intelligence, or AI, has long been an object of both excitement and fear. In a few industries, machines are already outperforming humans. In the future, machines will outperform humans across industries, not by copying us but by harnessing the combination of colossal quantities of data (volume, velocity and variety), massive processing power (specialized AI accelerators in the cloud and at the edge) and remarkable algorithms (ML, DL and AI models). Across industry and society, the impact of AI is becoming increasingly significant. On the positive side, for example, scientists, medical researchers, clinicians, mathematicians and engineers are working together to design AI systems aimed at medical diagnosis and treatment, offering reliable and safe health-care delivery. On the negative side, as AI develops there is a real concern that human labor will be displaced as more and more work is automated. And as AI becomes more ubiquitous, AI practitioners recognize in their everyday work that data and algorithms can amplify and perpetuate human biases. So while the adoption of AI across enterprises is increasing, significant challenges remain, and organizations need to understand what AI systems can and cannot deliver.
2. Challenges in AI adoption
Businesses and employees alike need to be prepared for the impact of AI adoption and the ethical and
regulatory challenges that will come with it. In most cases, organizational and societal exposure to AI bias,
algorithmic transparency, distrust and lack of fairness stands in the way of AI adoption. Unknown biases can still sneak into a system, necessitating strong Quality Assurance (QA) processes designed specifically with bias in mind. The widespread adoption of AI obviously raises ethical challenges, but numerous organizations are springing up to monitor and advise on best-practice scorecards. In this context, Responsible artificial intelligence (AI) has acquired prominence in the practicing AI world. Responsible AI is a governance framework that describes how a specific organization addresses the challenges around AI legally and ethically.
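As a concrete illustration of the kind of bias check a QA process might include, the sketch below computes a disparate-impact ratio: the positive-outcome rate of one group divided by that of another. The group data and the 0.8 "four-fifths rule" threshold are illustrative assumptions, not a complete fairness audit.

```python
# Minimal sketch of a QA bias check: disparate impact between two groups.
# Group data and the 0.8 "four-fifths rule" threshold are illustrative only.

def positive_rate(outcomes):
    """Fraction of positive (1) outcomes in a list of 0/1 decisions."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a, group_b):
    """Ratio of positive-outcome rates: group_a relative to group_b."""
    return positive_rate(group_a) / positive_rate(group_b)

# Hypothetical model decisions (1 = approved, 0 = rejected) per group.
group_a = [1, 0, 1, 0, 0, 1, 0, 0, 0, 0]  # 30% approval
group_b = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]  # 70% approval

ratio = disparate_impact(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential bias flagged for review (below four-fifths threshold).")
```

A real QA pipeline would run checks like this per protected attribute, on both training data and live predictions, and route flagged models to a human reviewer.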
3. Why Explainable AI (XAI)
Explainable AI (XAI) is an emerging field in machine learning and AI that aims to address how AI decisions
are made. This includes an understanding of the key steps, processes and models involved in reaching a decision. The objective of Explainable AI (XAI) is to make the reasoning behind AI results understandable to humans. This contrasts with black-box ML models, whose decisions even the data scientists who created them cannot explain.
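One widely used model-agnostic way to peer inside a black box is permutation importance: shuffle one feature at a time and measure how much the model's error grows. The sketch below applies the idea to a toy hand-written linear model; the model, data and error metric are illustrative assumptions, not a production XAI tool.

```python
# Sketch of permutation feature importance on a toy model.
# The model, dataset and metric are illustrative assumptions.
import random

def model(x1, x2):
    # Toy "black box": x1 matters far more than x2.
    return 3.0 * x1 + 0.1 * x2

random.seed(0)
data = [(random.random(), random.random()) for _ in range(200)]
targets = [model(x1, x2) for x1, x2 in data]

def mse(preds):
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(targets)

def permutation_importance(feature_index):
    """Error increase when one feature column is shuffled."""
    column = [row[feature_index] for row in data]
    random.shuffle(column)
    preds = []
    for row, shuffled in zip(data, column):
        x = list(row)
        x[feature_index] = shuffled
        preds.append(model(*x))
    return mse(preds)  # baseline error is 0 for this toy model

imp_x1 = permutation_importance(0)
imp_x2 = permutation_importance(1)
print(f"importance(x1) = {imp_x1:.4f}, importance(x2) = {imp_x2:.4f}")
# x1's larger coefficient makes shuffling it far more damaging.
```

Libraries such as SHAP and scikit-learn's `permutation_importance` implement more rigorous versions of this idea for arbitrary trained models.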
4. Key business drivers for implementation of Explainable AI (XAI)
As application of AI expands, regulatory requirements in industries such as healthcare, financial services
and automotive could drive the need for more explainable AI models with an objective to provide safe,
premium care to stakeholders. In industries such as automotive and airlines regulators often want rules,
choice criteria and decisions to be clearly explainable. As model complexity increases, it becomes harder to explain, in human terms, why a certain decision was reached, and harder still when decisions must be made in real time. Consider a social-media example: Twitter may sometimes add a notice to an account or a Tweet to give specific context on the actions its systems or its teams may take. In some instances this is because the behavior violates Twitter's rules. In other cases, why the Tweet or the account is labelled, blocked or placed behind an interstitial is not explained, and when an account suspension will end is not known.
5. Rewiring the organizations for AI
When building the AI organizations of the future, AI roadmaps need to be embedded in business planning and business-as-usual processes. Instead of siloed, use-case-specific efforts, an enterprise-wide road map for deploying advanced-analytics (AA), machine-learning (ML) and AI models across entire business domains needs to be pursued. The challenges of rewiring business processes and including AI in the business planning of organizations are often underplayed and poorly estimated. Considerable effort is required to determine how the AA/AI models will be embedded; how AI-driven decisions will be made "explainable" to end users; and how a change-management plan for embracing AI will address employee mindset shifts and skills gaps.
6. Building higher quality and trusted AI models faster and more efficiently
The development of AA/ML/AI models is highly iterative and experimental, with a great deal of trial and error, so it is critical to quickly understand the evolution of a model from inception to production. With traditional, deterministic, rule-based software, the bulk of the work occurs at the inception or initial stage, and once deployed the system works as we have defined it. With AA/ML/AI, by contrast, we have not explicitly defined by rules how something will work; we have allowed data to shape a probabilistic solution. So monitoring the system's health alone is not enough to capture underlying performance issues with the model: the layer of metrics to monitor must include the model's performance and its drift. AA/ML/AI model quality helps ensure that models achieve their intended and specified business impact. Improving model quality involves analyzing several facets, including accuracy, generalization, conceptual soundness, stability, reliability and fairness.
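The drift monitoring mentioned above can be made concrete with a metric such as the Population Stability Index (PSI), which compares the distribution of a model's scores at training time against the distribution seen in production. The sketch below is a minimal implementation; the bin count, the synthetic data and the common 0.1/0.25 alert thresholds are illustrative assumptions.

```python
# Sketch of score-drift monitoring via the Population Stability Index (PSI).
# Bin count, synthetic data and alert thresholds are illustrative assumptions.
import math
import random

def psi(expected, actual, bins=10):
    """PSI between a training-time sample and a live sample of a score.

    Bins are derived from the expected (training-time) sample's range;
    out-of-range live values are clamped to the edge bins.
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins

    def bin_fractions(sample):
        counts = [0] * bins
        for v in sample:
            i = int((v - lo) / width) if width else 0
            counts[max(0, min(i, bins - 1))] += 1
        # Small floor avoids log(0) for empty bins.
        return [max(c / len(sample), 1e-6) for c in counts]

    e, a = bin_fractions(expected), bin_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

random.seed(1)
train_scores = [random.gauss(0.5, 0.1) for _ in range(5000)]
live_scores = [random.gauss(0.6, 0.1) for _ in range(5000)]  # shifted mean

print(f"PSI = {psi(train_scores, live_scores):.3f}")
# Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 major drift.
```

In practice a check like this would run on a schedule against each deployed model's inputs and outputs, raising an alert when the index crosses the chosen threshold.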
Finally, to capture business opportunities quickly, everyone from data scientists and business analysts, to the IT teams responsible for governance and compliance, to the business executives and analytics leaders who derive business impact from the deployed models, needs to know which model is performing best, and why, in real time, and to take remedial action. Higher model quality and explainability lead to better business results; the challenge for organizations is how to build and operationalize higher quality, trusted AI models faster and more efficiently.