Guidelines and principles of trustworthy AI should be adhered to in practice during the development of AI systems. This work proposes a novel information theoretic trustworthy AI framework, based on the hypothesis that information theory makes it possible to account for ethical AI principles during the development of machine learning and deep learning models by providing a way to study and optimize the inherent tradeoffs between trustworthy AI principles. A unified approach to “privacy-preserving interpretable and transferable learning” is presented by introducing information theoretic measures of privacy-leakage, interpretability, and transferability. A technique based on variational optimization, employing conditionally deep autoencoders, is developed for computing these information theoretic measures in practice.
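The abstract does not specify the estimators themselves. As an illustrative sketch only, not the paper's method: privacy leakage is often quantified information-theoretically as the mutual information I(Z; S) between a learned representation Z and a sensitive attribute S, which can be estimated with a variational lower bound in the style of MINE (Belghazi et al., 2018). All names, the critic architecture, and the toy data below are assumptions made for the example.

```python
import math
import torch
import torch.nn as nn

class MINECritic(nn.Module):
    """Statistics network T(z, s) for the Donsker-Varadhan bound (illustrative)."""
    def __init__(self, z_dim, s_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + s_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, z, s):
        return self.net(torch.cat([z, s], dim=1))

def mine_lower_bound(critic, z, s):
    """Donsker-Varadhan bound: I(Z;S) >= E_joint[T] - log E_marginal[exp(T)]."""
    joint = critic(z, s).mean()
    s_shuffled = s[torch.randperm(s.size(0))]   # shuffle S to break the (z, s) pairing
    log_mean_exp = torch.logsumexp(critic(z, s_shuffled), dim=0) - math.log(s.size(0))
    return joint - log_mean_exp.squeeze()

if __name__ == "__main__":
    torch.manual_seed(0)
    n, z_dim, s_dim = 512, 8, 2
    s = torch.randn(n, s_dim)
    # Toy representation that deliberately leaks S through its first coordinates.
    z = torch.cat([s, torch.randn(n, z_dim - s_dim)], dim=1)
    critic = MINECritic(z_dim, s_dim)
    opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
    for _ in range(2000):
        opt.zero_grad()
        loss = -mine_lower_bound(critic, z, s)  # maximize the bound over the critic
        loss.backward()
        opt.step()
    print(f"estimated privacy leakage I(Z; S) >= {-loss.item():.3f} nats")
```

In the same spirit, interpretability and transferability could plausibly be expressed as analogous mutual-information quantities, so that the tradeoffs between the three principles appear as competing terms in a single variational objective; the paper's conditionally deep autoencoder technique would supply the actual estimator.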
