SRIS Module · IFD

Interpretable Fault Diagnosis

Logic-grounded diagnosis for rotating machinery, industrial equipment, and cyber-physical assets with human-readable evidence and certified root-cause reasoning.

Temporal Logic Root-Cause Tracing · Trustworthy Diagnosis

01 · Explainability
02 · Causality
03 · Certification
SRIS Laboratory SCUT · GZIC
Smart, Reliable, and Interpretable Systems
Overview

Research scope

Interpretable Fault Diagnosis serves as one of the modular building blocks of the SRIS research portfolio. It connects methods, models, and deployment scenarios so that theory, algorithms, and system-level outcomes can be presented as a coherent academic narrative.

Interpretable diagnosis · Signal processing · Formal reasoning
Three key scientific problems:
  • how to construct expressive yet tractable temporal fault representations;
  • how to achieve accurate and robust fault inference under noise, uncertainty, and limited data;
  • how to provide formally verifiable and physically meaningful diagnostic explanations that support root-cause isolation and decision-making.
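The first of these problems concerns temporal fault representations. One common formal substrate is signal temporal logic (STL), whose quantitative robustness semantics scores how strongly a trace satisfies or violates a rule. A minimal sketch, with illustrative rules, thresholds, and data (none drawn from SRIS models):

```python
# Hypothetical sketch: quantitative robustness of two STL-style fault rules
# over a 1-D vibration feature trace. Thresholds and data are illustrative.

def rob_always_below(x, c):
    """Robustness of G (x < c): min over time of (c - x[t])."""
    return min(c - v for v in x)

def rob_eventually_above(x, c):
    """Robustness of F (x > c): max over time of (x[t] - c)."""
    return max(v - c for v in x)

healthy = [0.2, 0.3, 0.25, 0.28]
faulty  = [0.2, 0.3, 0.9, 0.28]   # impulsive spike, e.g. a bearing impact

print(rob_always_below(healthy, 0.5))  # positive: rule satisfied
print(rob_always_below(faulty, 0.5))   # negative: rule violated
```

The sign of the robustness score gives a yes/no diagnosis, while its magnitude indicates margin, which is one reason such representations support human-readable evidence.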
Module resources

Research Highlights

Selected output

Related publications

Representative papers are pulled from the current publication archive and embedded here so each module page has both visual entry points and a paper list beneath them.


Temporal Logic Inference for Interpretable Fault Diagnosis of Bearings via Sparse and Structured Neural Attention

Gang Chen, Guangming Dong
ISA Transactions, Early Access, 2025
Journal
We propose a Sparse Temporal Logic Network for interpretable bearing fault diagnosis. The framework combines wavelet-based predicate extraction, sparse and structured neural attention, and temporal logic inference to deliver accurate diagnosis together with formal, human-readable explanations.
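The paper's predicates are wavelet-based; as a simplified, hypothetical stand-in, the sketch below thresholds an FFT band-energy fraction to form one boolean predicate that a temporal logic layer could then reason over. All signals, bands, and thresholds here are invented for illustration:

```python
import numpy as np

def band_energy_predicate(signal, fs, band, threshold):
    """True if the energy fraction inside `band` (Hz) exceeds `threshold`."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[in_band].sum() / spectrum.sum() > threshold

fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
healthy = np.sin(2 * np.pi * 30 * t)                  # low-frequency content only
faulty = healthy + 0.8 * np.sin(2 * np.pi * 180 * t)  # added fault-band tone

print(band_energy_predicate(healthy, fs, (150, 250), 0.1))  # False
print(band_energy_predicate(faulty, fs, (150, 250), 0.1))   # True
```

Each such predicate stays physically meaningful (energy in a named frequency band), which is what makes the downstream logical explanation human-readable.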

A Neural-Symbolic Network for Interpretable Fault Diagnosis of Rolling Element Bearings Based on Temporal Logic

Ruoyao Tian, Mengqian Cui, Gang Chen
IEEE Transactions on Instrumentation and Measurement, 73, 3515614
Journal
We develop a neural-symbolic learning architecture for interpretable rolling-bearing diagnosis that combines weighted signal temporal logic, predicate extraction, autoencoding, and timed failure propagation graphs to produce accurate and explainable fault decisions.

Interpretable Fault Diagnosis with Shapelet Temporal Logic: Theory and Application

Gang Chen, Yu Lu, Rong Su
Automatica, 142, 110350
Journal
We introduce shapelet temporal logic, a formal language that describes temporal relations among discriminative shapelets in sequential data. An incremental inference algorithm with theoretical guarantees is developed to obtain interpretable fault diagnosis rules for rolling element bearing signals.
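The atomic unit of such a language is a shapelet-match predicate; the full logic then relates matches in time. A hedged sketch of just that atomic predicate, with a toy shapelet and traces invented for illustration (not the paper's algorithm):

```python
import numpy as np

def shapelet_distance(series, shapelet):
    """Minimum Euclidean distance between the shapelet and any sliding window."""
    m = len(shapelet)
    return min(np.linalg.norm(series[i:i + m] - shapelet)
               for i in range(len(series) - m + 1))

def shapelet_predicate(series, shapelet, threshold):
    """True if the series contains a close match to the shapelet somewhere."""
    return shapelet_distance(series, shapelet) < threshold

impulse = np.array([0.0, 1.0, 0.0])              # toy fault signature
trace_fault = np.array([0.1, 0.0, 1.0, 0.0, 0.1])
trace_ok    = np.array([0.1, 0.1, 0.2, 0.1, 0.1])

print(shapelet_predicate(trace_fault, impulse, 0.3))  # True
print(shapelet_predicate(trace_ok, impulse, 0.3))     # False
```

A diagnosis rule in this style reads as "an impulse-shaped pattern occurs, followed within some interval by another pattern," which is directly inspectable by an engineer.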

An interpretable causal invariant graph neural network for unseen domain gear fault diagnosis

Zhenpeng Lao, Gang Chen, Yiyue Zhang, Penghong Lu, Zhenzhen Jin
Engineering Applications of Artificial Intelligence, 170, 114227
Journal
In recent years, causal learning has shown promise for revealing the internal causal relationships of equipment and improving the explainability of intelligent diagnostic models. However, existing methods still struggle to eliminate spurious causal correlations in high-dimensional data and offer insufficient explainability, leading to unstable and unreliable diagnostic performance in unseen domains. To address these problems, an interpretable fault diagnosis method based on a causal invariant graph neural network (CIGNN) is proposed to enhance the model's accuracy and interpretability for gears in unseen domains. First, a structural causal model is constructed from a cross-domain perspective and combined with a GNN to clarify the internal causal mechanism of faults. Then, a causal disentanglement refining module is proposed to separate the effective causal parts from the high