Malaysian banks are deploying Artificial Intelligence (AI) at breakneck speed. But ask them to quantify the risk exposure from unexplainable algorithmic decisions, and you’ll uncover the industry’s next major challenge.
When AI denies business loans to viable SMEs or flags legitimate transactions as suspicious – and banks can’t articulate why – the risk cascades: regulatory penalties, discrimination lawsuits, reputational damage and customer attrition. Yet most institutions are measuring AI performance without measuring AI explainability risk.
The numbers bear this out. According to the Asian Institute of Chartered Bankers (AICB), 57% of financial institutions are already in early-stage AI implementation. Bank Negara Malaysia (BNM) released its “Discussion Paper on Artificial Intelligence” in August 2025, and Oracle’s multi-agent AI investigators are transforming compliance workflows across institutions.
However, when AI denies a business loan or flags a transaction as suspicious, can the bank document the decision-making process well enough to withstand regulatory scrutiny? Vague references to “insufficient creditworthiness” won’t do. Can it provide specific, defensible reasoning that satisfies regulators, courts and increasingly sophisticated customers?
The answer, more often than anyone wants to admit, is no.
Quantifying the Explainability Risk Exposure
Hong Leong Bank’s partnership with DCAP Digital illustrates both the promise and the risk. The collaboration uses AI-powered credit scoring to assess underbanked SME borrowers, particularly in motorcycle financing, where over 61,000 units were registered in May 2025 alone.
Without explainability infrastructure, banks face three compounding risks:
- Regulatory risk when BNM demands justification for algorithmic decisions
- Legal risk when rejected applicants claim discrimination
- Reputational risk when customers migrate to competitors offering transparent decision-making
These AI systems analyse hundreds of data points to generate credit scores. When the algorithm says no, explaining which specific factors drove that decision becomes far more complex than in traditional credit assessments. More critically, without systematic documentation, banks can’t defend those decisions when challenged by regulators, courts or customers.
The Regulatory Compliance Challenge
Regulators globally are converging on explainability requirements. The Monetary Authority of Singapore (MAS) emphasises transparency and explainability in its AI governance frameworks. The European Union (EU) AI Act mandates clear explanations for algorithmic credit decisions. Even as US federal oversight shifts, state regulators are affirming that “the algorithm decided” is no longer legally defensible.
Bank Negara’s AI governance discussion paper emphasises fairness, transparency and accountability. The AICB’s AI Governance Framework includes explainability as a core principle. But principles and practical implementation are two very different things.
Consider the risk exposure: A pattern of AI-driven loan rejections disproportionately affecting specific sectors could trigger BNM investigations. Legal discovery in discrimination cases would force banks to produce documentation they don’t have. Reputational damage compounds when media coverage frames it as “algorithms discriminating against people.”
The Risk Management Gap
Explainability techniques like SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) allow data scientists to reverse-engineer AI decisions, attributing each prediction to the specific input factors that drove it. Financial institutions globally are integrating these tools into their workflows.
But deploying explainability tools requires a different skill set than deploying AI models. Banks need internal teams capable of interrogating models, documenting their logic and translating technical explanations into language that risk officers, compliance teams and regulators understand.
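To make that concrete, here is a minimal sketch of how a SHAP attribution might surface the factors behind a single credit decision. The model, feature names and data below are hypothetical illustrations, not any bank’s actual scoring inputs; the point is the shape of the per-decision record a risk officer could retrieve later.

```python
# Minimal SHAP sketch for a hypothetical credit model.
# All features, data and labels are invented for illustration.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

features = ["utility_payment_streak", "ecommerce_sales_volume",
            "platform_rating", "months_trading"]
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(500, 4)), columns=features)
# Synthetic approval labels, for demonstration purposes only
y = (X["utility_payment_streak"] + X["months_trading"] > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer attributes each prediction to individual input
# features, producing the audit trail a reviewer can inspect later.
explainer = shap.TreeExplainer(model)
applicant = X.iloc[[0]]
shap_values = explainer.shap_values(applicant)

# Rank the factors that drove this specific applicant's score
contributions = sorted(zip(features, shap_values[0]),
                       key=lambda kv: abs(kv[1]), reverse=True)
for name, value in contributions:
    print(f"{name}: {value:+.3f}")
```

In production, those ranked contributions would be persisted alongside the application record, so that “why was this applicant declined?” has a retrievable answer months later.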
The AICB’s Future Skills Framework notes that more than 40,000 banking employees will see their roles evolve due to automation. That’s a massive skills transformation unfolding while AI deployment accelerates and risk exposure accumulates.
Alternative Data: Expanding Credit Access While Multiplying Risk
Malaysia’s push toward alternative credit scoring adds risk complexity. Bank Negara’s Financial Sector Blueprint encourages “forward-looking and alternative data” for credit assessment – utility payments, e-commerce transactions, digital platform engagement.
Malaysia has a RM90 billion MSME funding gap partly because traditional assessments exclude businesses without conventional lending histories. Alternative data bridges that gap.
But it multiplies explainability risk. When banks deny credit based on “atypical digital payment patterns”, how do legal teams defend that decision when regulators investigate discrimination or plaintiffs’ attorneys pursue class actions?
Building Risk-Resilient Explainability Infrastructure
Bank Negara’s discussion paper on AI addresses explainability, noting existing policies are “adequate for the time being” but may require enhancement as AI complexity increases.
Risk-mature institutions are treating explainability as first-line defence, investing in:
Explainability-by-design: Embedding SHAP, LIME or similar tools into AI workflows from the start, reducing regulatory scrutiny and legal discovery exposure.
Cross-functional risk teams: Pairing data scientists with compliance officers and legal counsel who can translate technical outputs into plain language, ensuring risk functions can defend decisions when challenged.
Documentation standards: Creating systematic records of how AI models make decisions. When regulators or courts ask “why did this happen?” two years from now, banks need retrievable, defensible answers.
Scenario and discrimination testing: Stress-testing AI systems for explainability and fairness, identifying patterns that could be interpreted as discriminatory before they become regulatory issues. A simplified sketch of such a check follows this list.
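As an illustration of discrimination testing, the sketch below computes approval rates by applicant segment and flags any group whose rate falls below 80% of the best-served group’s. The segment labels and decisions are invented, and the 80% threshold borrows the US “four-fifths rule” purely as an illustrative benchmark, not a BNM requirement.

```python
# Toy disparate-impact check over a batch of AI credit decisions.
# Segments, decisions and the 80% threshold are illustrative only.
import pandas as pd

decisions = pd.DataFrame({
    "segment":  ["gig_worker", "gig_worker", "salaried", "salaried",
                 "gig_worker", "salaried", "salaried", "gig_worker"],
    "approved": [0, 1, 1, 1, 0, 1, 1, 0],
})

# Approval rate per segment, benchmarked against the best-served group
approval_rates = decisions.groupby("segment")["approved"].mean()
reference = approval_rates.max()

for segment, rate in approval_rates.items():
    ratio = rate / reference
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{segment}: approval {rate:.0%}, impact ratio {ratio:.2f} [{flag}]")
```

A real pipeline would run this across protected attributes and their proxies, on far larger samples and with statistical significance tests; the essential design choice is that the test runs before deployment, not after a complaint.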
The Gig Economy’s Exclusion Risk
Gig workers illustrate the exclusion risk acutely. Many lack the fixed salaries, consistent EPF contributions or audited financials that banks typically require.
Can banks prove their AI didn’t systematically disadvantage an entire category of workers to whom Parliament has granted statutory protections?
Most will struggle to explain the algorithm’s logic even to themselves.
When Bank Negara demands justification or gig worker advocacy groups file complaints, vague responses become regulatory violations.
The explainability gap transforms financial inclusion tools into litigation liabilities.
The Risk Management Imperative
Banks that master AI explainability won’t just avoid regulatory penalties. They’ll gain competitive advantage in risk management and customer trust.
In a market where 57% of institutions are deploying similar AI technologies, differentiation won’t come from having AI. It’ll come from managing AI risks better than competitors.
Gartner forecasts that “death by AI” legal claims will surge to over 2,000 cases by late 2026, driven largely by inadequate risk controls around opaque algorithmic systems. Banks can build explainability infrastructure now or scramble when the first regulatory investigation forces the issue.
Malaysia’s AI governance framework provides solid foundations. Bank Negara is asking the right questions. The industry is moving with appropriate urgency. But frameworks don’t manage risk. Implementation does.
The banks investing in explainability infrastructure now aren’t just preparing for compliance. They’re managing existential risks: litigation exposure from unexplainable decisions, regulatory penalties from inadequate governance and customer attrition from eroded trust.
The question isn’t whether Malaysian banks can master AI explainability. It’s whether they can afford not to, before the first discrimination lawsuit, regulatory investigation or reputational crisis forces the issue. Right now, most institutions are accumulating risk faster than they’re building defences.
Closing that gap isn’t a 2026 priority. It’s a 2026 survival requirement.