
An Interview with Yalın Yalıç on Explainable AI: Shaping the Future of AI in Digital Health
We interviewed Yalın Yalıç, our AI Researcher, to explore Explainable AI and its critical role in digital health. Yalıç emphasized that Explainable AI makes AI-driven healthcare decisions transparent and reliable, increasing clinicians’ confidence in AI systems. He also highlighted, from an Explainable AI perspective, the capabilities of Tiga Healthcare Technologies’ solutions, Predis and ShareMind.

Here’s the interview:
1. First of all, could you explain the role of Explainable AI (XAI) in the digital health sector?
Explainable AI (XAI) plays a critical role in digital health by making AI-driven decisions transparent, interpretable, and trustworthy. In healthcare, decisions directly affect patient safety, clinical outcomes, and resource allocation, so black-box models are often insufficient. XAI enables clinicians to understand why a model produced a specific prediction, risk score, or recommendation. This transparency increases clinical trust and supports the clinician-in-the-loop paradigm rather than replacing medical expertise. It also helps identify bias, data quality issues, and potential model failure cases.
From a regulatory perspective, explainability supports compliance with frameworks such as the EU AI Act and medical device regulations. In population health systems, XAI allows policymakers to understand the drivers behind trends or anomaly detections. It improves auditability, accountability, and model validation processes. Ultimately, XAI bridges the gap between advanced AI models and real-world clinical adoption.
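To make this concrete, below is a minimal sketch of one common explainability technique, permutation importance, applied to a toy risk model with scikit-learn. The data and feature names are synthetic assumptions for illustration only, not drawn from any Tiga system.

```python
# Sketch: global feature-level explanation for a clinical risk model.
# Synthetic data; feature names are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
features = ["age", "hba1c", "systolic_bp", "bmi", "smoking"]
X = rng.normal(size=(1000, len(features)))
# In this toy data the risk is driven mainly by hba1c and systolic_bp.
y = (0.8 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000)) > 0.5

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how
# much performance drops; a larger drop means a stronger prediction driver.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(features, result.importances_mean), key=lambda p: -p[1]):
    print(f"{name:12s} {score:+.3f}")
```

A ranking like this gives clinicians and auditors a first answer to "what is the model actually relying on?", which is the starting point for the validation and bias checks described above.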
2. How does XAI help bridge the trust gap between AI systems and healthcare decision-makers, such as clinicians and health authorities?
XAI bridges the trust gap by making AI decisions transparent rather than opaque. Clinicians are more likely to rely on AI systems when they can see the clinical factors, features, or data patterns that influence a prediction. For example, highlighting relevant biomarkers, imaging regions, or risk variables allows medical professionals to validate outputs against their own expertise. This supports a professional-in-the-loop approach, where AI augments rather than replaces human judgment.
For health authorities, explainability enables accountability and auditability in large-scale predictive systems, such as early warning or anomaly detection platforms. It allows decision-makers to understand why a certain population was flagged as high-risk or why a trend is forecasted. XAI also helps detect bias, data drift, or systematic errors before they impact public health decisions.
By improving transparency, validation, and regulatory compliance, XAI transforms AI from a black-box tool into a collaborative and reliable decision-support system.
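Clinician validation typically happens at the level of a single case rather than the whole model. As an illustration of that per-patient view, here is a small sketch using the open-source shap library to produce a local explanation for one flagged case; the model, data, and feature names are again hypothetical, and shap is simply one common choice, not necessarily what any given product uses.

```python
# Sketch: local explanation showing which factors pushed *this*
# patient's risk score up or down. Synthetic, illustrative data.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
features = ["age", "hba1c", "systolic_bp", "bmi", "smoking"]
X = rng.normal(size=(500, len(features)))
y = (0.8 * X[:, 1] + 0.5 * X[:, 2] > 0.4).astype(int)
model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer attributes the model's output for one case to its features.
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X[:1])[0]
for name, c in sorted(zip(features, contributions), key=lambda p: -abs(p[1])):
    print(f"{name:12s} {'raises' if c > 0 else 'lowers'} risk by {abs(c):.3f}")
```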
3. How do regulatory expectations and data protection laws impact the development and deployment of XAI models in healthcare?
Regulatory expectations and data protection laws significantly shape how XAI models are designed and deployed in healthcare. Regulations require transparency, traceability, and documented risk management, meaning AI systems must provide interpretable and auditable outputs. High-risk medical AI applications must demonstrate safety, performance validation, and human oversight. This pushes developers to integrate explainability mechanisms from the design phase, not as an afterthought.
Data protection laws such as GDPR impose strict rules on processing sensitive health data, including principles of data minimization, purpose limitation, and accountability. In some cases, individuals have the “right to explanation” regarding automated decisions, reinforcing the need for interpretable models. Developers must also ensure that explanations do not compromise patient privacy or expose sensitive attributes.
As a result, XAI in healthcare must balance transparency, clinical usefulness, robustness, and data confidentiality, aligning technical innovation with legal and ethical compliance.
4. Within Tiga Healthcare Technologies’ AI-powered early warning system, Predis, why is it important to understand why a risk is flagged, not just that it is flagged?
In Predis, understanding why a risk is flagged is as important as detecting the risk itself. Health authorities need to know the underlying drivers, such as unusual prescription patterns, regional disease spikes, or abnormal drug consumption trends, to take targeted action. Without explanation, a flagged alert may create uncertainty rather than informed intervention.
Explainability allows decision-makers to validate whether the signal reflects a real epidemiological trend, supply-chain issue, or potential misuse. It supports evidence-based policymaking by linking predictions to concrete variables and temporal patterns. This transparency also reduces false positives and prevents unnecessary resource allocation.
In large-scale public health systems, accountability and auditability are critical; authorities must justify actions based on AI insights. By explaining the contributing factors behind each alert, Predis becomes not just a detection tool, but a reliable decision-support system aligned with regulatory and operational expectations.
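Predis’s internals are not detailed here, so the following is a deliberately simplified sketch of the general "flag plus why" pattern: a basic z-score detector whose alert reports not just the flag but the observed value, the expected baseline, and the size of the deviation.

```python
# Simplified illustration of "flag plus why" on a synthetic weekly
# prescription count series. Not a description of Predis internals.
import numpy as np

rng = np.random.default_rng(7)
weekly_rx = rng.poisson(lam=200, size=52).astype(float)
weekly_rx[40] = 340.0  # injected spike standing in for a regional anomaly

# Baseline estimated from earlier, presumed-normal weeks.
mean, std = weekly_rx[:40].mean(), weekly_rx[:40].std()
for week, count in enumerate(weekly_rx):
    z = (count - mean) / std
    if z > 3:
        # The alert carries its own justification: observed value,
        # expected baseline, and magnitude of the deviation.
        print(f"Week {week}: {count:.0f} prescriptions flagged "
              f"(baseline {mean:.0f} ± {std:.0f}, z = {z:.1f})")
```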
5. When researchers use Tiga’s ShareMind, how can they understand and trust machine learning model outputs without seeing the raw data?
Researchers using ShareMind do not access raw health data, but they can still understand and trust model outputs through secure, privacy-preserving analytics and transparent reporting mechanisms. The platform executes approved machine learning algorithms within a controlled environment where data never leaves the secure infrastructure. Researchers receive aggregated results, statistical summaries, model performance metrics, and explainability outputs instead of identifiable records.
Built-in validation tools, such as feature importance analysis, model coefficients, and confidence intervals, help interpret how predictions are generated. Standardized, predefined analytical pipelines reduce methodological ambiguity and ensure reproducibility. Access controls, logging, and audit trails provide traceability of every analysis request.
Because computation occurs close to the data under strict governance rules, institutions maintain compliance while enabling scientific insight. This architecture allows researchers to trust the integrity, robustness, and transparency of results without compromising data privacy.
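The underlying pattern, in which analysis code travels to the data and only aggregates travel back, can be sketched as follows. The function name, returned fields, and data here are hypothetical illustrations and do not represent ShareMind’s actual API.

```python
# Hypothetical sketch: analysis runs where the data lives and returns
# only aggregates (counts, metrics, coefficients), never patient rows.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def run_approved_analysis(X, y, feature_names):
    """Runs inside the secure environment; only summaries leave it."""
    model = LogisticRegression(max_iter=1000).fit(X, y)
    auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
    return {
        "n_records": int(len(y)),            # aggregate count, no records
        "auc": round(float(auc), 3),         # performance metric
        "coefficients": {                    # interpretable model weights
            name: round(float(w), 3)
            for name, w in zip(feature_names, model.coef_[0])
        },
    }

# The raw data below is visible only to the code, never to the researcher,
# who receives just the returned summary dictionary.
rng = np.random.default_rng(3)
X = rng.normal(size=(800, 3))
y = (X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.8, size=800) > 0).astype(int)
print(run_approved_analysis(X, y, ["age", "ldl", "activity_score"]))
```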
6. What do you think about the capability of XAI in improving proactive population health management in the future?
XAI will be a key enabler of proactive population health management in the future. As predictive models forecast disease outbreaks, chronic risk clusters, or resource shortages, decision-makers will need to understand the drivers behind these projections. XAI can reveal which demographic, behavioral, environmental, or prescription-related factors contribute most to emerging risks.
This transparency enables earlier, more targeted interventions rather than reactive responses. It also helps policymakers evaluate whether trends are medically meaningful, socially driven, or data-related artifacts. By exposing bias or regional disparities, XAI supports more equitable health strategies.
In large-scale systems, explainability strengthens accountability and public trust in AI-assisted public health decisions. Ultimately, XAI will transform predictive analytics from a forecasting tool into a transparent, evidence-based strategic planning instrument for sustainable population health management.

Key Points of the Interview
- Transition from Black-Box to Transparent Models: XAI eliminates the opacity of traditional AI by making the variables behind a prediction, such as specific biomarkers or prescription patterns, visible to users. Instead of offering a result without context, it allows health authorities to see the underlying drivers of a risk alert, transforming a "black-box" output into an interpretable and auditable insight.
- Human-in-the-Loop and Clinical Trust: XAI enables healthcare professionals to validate AI recommendations against their own expertise. This approach ensures that AI serves as a collaborative decision-support tool that augments rather than replaces human judgment.
- Future of Proactive Population Health Management: XAI supports targeted and preventive healthcare strategies by revealing the demographic, environmental, or behavioral drivers behind forecasted outbreaks or chronic risk clusters. It allows policymakers to implement evidence-based interventions that address specific factors rather than relying on generalized assumptions.
This insightful interview with Yalın Yalıç shows that Explainable AI turns AI from an opaque mechanism into a transparent, evidence-based partner. This technology empowers healthcare stakeholders to manage population health proactively by providing the ‘why’ behind every alert.
Let’s shape the future together, as always!