TrustAI4Sci
Full Title
Trustworthy Artificial Intelligence for Scientific Applications
Description
The scientific community has long recognized the potential of artificial intelligence (AI) as a tool for scientific discovery, with machine learning, pattern mining, and reasoning playing crucial roles in several steps of the scientific process. For AI methods to be trusted as tools that uncover new knowledge, help explain the mechanisms underlying natural phenomena, and distinguish meaningful predictions from spurious correlations, it is crucial that they are explainable. Despite this, the vast majority of scientific projects that use AI do not prioritize explainability.
TrustAI4Sci seeks to transform explainable AI (XAI) by integrating scientific knowledge from Knowledge Graphs (KGs) into data-driven explanations. Using reinforcement learning, the project identifies logical paths within KGs that explain “why” a prediction holds, providing causal justifications rather than merely outlining “how” a decision was made. These logical paths are then converted into natural language using language models, bridging the gap between technical outputs and human understanding.
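To make the idea of a path-based explanation concrete, the sketch below walks a toy KG of (head, relation, tail) triples from a gene to a disease and renders the resulting path as a plain-English justification. It is only an illustration: the entity and relation names are hypothetical, and a simple breadth-first search stands in for the project's reinforcement-learning path finder and for the language-model verbalisation step.

```python
# Minimal sketch: find a relation path in a toy knowledge graph and verbalise it.
# All entities, relations, and the search strategy are illustrative assumptions,
# not the project's actual method.
from collections import deque

TRIPLES = [
    ("GENE:BRCA1", "participates_in", "PATHWAY:DNA_repair"),
    ("PATHWAY:DNA_repair", "disrupted_in", "DISEASE:breast_carcinoma"),
    ("DRUG:olaparib", "targets", "GENE:BRCA1"),
]

def find_path(start, goal, triples):
    """Breadth-first search for a directed relation path from start to goal."""
    adj = {}
    for h, r, t in triples:
        adj.setdefault(h, []).append((r, t))
    queue = deque([(start, [])])
    visited = {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for rel, nxt in adj.get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, path + [(node, rel, nxt)]))
    return None

def name(entity):
    """Strip the type prefix and underscores from an entity identifier."""
    return entity.split(":", 1)[1].replace("_", " ")

def verbalise(path):
    """Turn a relation path into a template-based natural-language explanation."""
    steps = [f"{name(h)} {r.replace('_', ' ')} {name(t)}" for h, r, t in path]
    return "Because " + ", and ".join(steps) + "."

path = find_path("GENE:BRCA1", "DISEASE:breast_carcinoma", TRIPLES)
if path:
    print(verbalise(path))
    # -> Because BRCA1 participates in DNA repair, and DNA repair disrupted in breast carcinoma.
```

In the project itself, the path selection is learned rather than exhaustive, and the final verbalisation is produced by a language model instead of a fixed template.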
TrustAI4Sci aims to produce trustworthy, scientifically valid, and human-aligned XAI approaches for high-impact life science research, which will be validated on explaining gene-disease associations and drug-disease recommendations.