About AI-SS 2026

Artificial Intelligence (AI) systems are increasingly embedded in safety- and mission-critical domains such as healthcare, transportation, energy, and nuclear environments. However, the integration of AI brings new dimensions of risk, uncertainty, and adversarial vulnerability that challenge traditional safety and security assurance methods.

This workshop aims to bridge the dependability and AI research communities by addressing fundamental and practical challenges in AI safety, security, and trustworthiness. It will provide a platform for researchers and practitioners to exchange ideas, discuss methodologies, and explore standards and regulatory frameworks supporting safe and secure AI adoption in critical systems.

Themes, Goals, Topics and Relevance to the EDCC Community

The theme of the workshop is "Towards Dependable and Trustworthy Intelligent Systems".

Our goals include:

  • To identify key research challenges and emerging methodologies for AI safety and security
  • To facilitate interdisciplinary collaboration between AI, dependability, and cyber security experts
  • To promote discussion on standardisation, certification, and governance for AI dependability, safety and security
  • To encourage young researcher participation through short papers and panel sessions

Topics of Interest

The workshop welcomes submissions covering one or more of the following topics (including but not limited to):

Accepted papers will appear in the EDCC 2026 Companion Proceedings managed by IEEE Computer Society's Conference Publishing Services (CPS) and will be submitted to IEEE Xplore and CSDL (IEEE Computer Society Digital Library) for inclusion.

Technical Sponsors

Important Dates

All deadlines are Anywhere on Earth (AoE).

  • Paper submission deadline: 19 January 2026
  • Author notification: 24 February 2026
  • Camera-ready paper due (HARD deadline): 5 March 2026
  • Workshop: 7 April 2026