Year: 2025 | Month: December | Volume 13 | Issue 2

Explainable Generative AI: Concepts, Challenges and Future Directions

Saurav Paul and Saptarshi Paul
DOI:10.30954/2322-0465.2.2025.11

Abstract:

Large language models (LLMs), diffusion-based image generators, speech synthesizers, code assistants, and multimodal foundation models are examples of generative AI systems now deeply integrated into scientific research, creative work, and decision-making. As these systems acquire greater autonomy and agency, understanding why they produce specific outputs has become crucial for safety, accountability, trust, debugging, and regulatory compliance. However, explainability for generative AI is significantly more challenging than for traditional discriminative models: generation unfolds over time, the output space is open-ended, and the model internals operate in high-dimensional spaces. This paper examines Explainable Generative AI (XGAI), an emerging interdisciplinary field. It defines the scope and fundamental concepts of XGAI, presents a taxonomy of interpretability techniques for generative models, examines technical, social, and regulatory challenges, and outlines future research directions toward transparent and controllable generative systems. This work argues that explainability for generative AI cannot be reduced to local feature attribution alone; it must also cover intent, process, uncertainty, and provenance.



© This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.






