About AI-SS 2026

Artificial Intelligence (AI) systems are increasingly embedded in safety- and mission-critical domains such as healthcare, transportation, energy, and nuclear environments. However, the integration of AI brings new dimensions of risk, uncertainty, and adversarial vulnerability that challenge traditional safety and security assurance methods.

This workshop aims to bridge the dependability and AI research communities by addressing fundamental and practical challenges in AI safety, security, and trustworthiness. It will provide a platform for researchers and practitioners to exchange ideas, discuss methodologies, and explore standards and regulatory frameworks supporting safe and secure AI adoption in critical systems.

Themes, Goals, Topics, and Relevance to the EDCC Community

The theme of the workshop is "Towards Dependable and Trustworthy Intelligent Systems", and its goals include:

  • To identify key research challenges and emerging methodologies for AI safety and security
  • To facilitate interdisciplinary collaboration between AI, dependability, and cyber security experts
  • To promote discussion on standardisation, certification, and governance for AI dependability, safety, and security
  • To encourage the participation of young researchers through short papers and panel sessions

Topics of Interest

The workshop welcomes submissions on one or more of the following topics (including, but not limited to):

Accepted papers will be included in the companion proceedings of EDCC 2026, which will be published by the Conference Publishing Services (CPS) and submitted for possible inclusion in IEEE Xplore.

Technical Sponsors

Important Dates

All dates are in AoE (Anywhere on Earth).

  • Paper submission deadline: 19 January 2026
  • Author notification: 24 February 2026
  • Camera-ready paper due (HARD deadline): 5 March 2026
  • Workshop: 7 April 2026