
BEWARE workshop alla AI*IA 2023 conference

This year, AI Aware and Sipeia are once again taking part in organizing the BEWARE workshop at the AI*IA 2023 conference.

Here is the CfP:

We are delighted to announce that we will be hosting the second edition of the BEWARE workshop on Bias, Risk, Explainability, and the role of Logic and Logic Programming. Our first edition was a great success, and it was our pleasure to foster a vibrant, international community interested in the intersections between explainable AI (xAI), ethics, and computational logic. With thematic sessions, an invited talk, and 14 accepted papers authored by both academics and industry professionals, we built a strong foundation at our first event. If you missed it, you can access the proceedings here: https://ceur-ws.org/Vol-3319/.

We invite you to submit your long, short, and possibly non-original papers for the second edition of BEWARE. The workshop will take place with AIxIA this year at the University of Roma Tre in Rome, Italy, from November 6-9, 2023. For more information, please visit our website. Kindly note that the submission deadline is approaching quickly: 10 September 2023. The full call for papers (CfP) can be found on EasyChair, and it is reproduced below. You can also make submissions through EasyChair using this link. We encourage you to share the CfP with any colleagues who may be interested in discussing these topics within our high-profile, engaging, and vibrant community.

BEWARE-23

https://sites.google.com/view/beware2023

The 2nd international workshop on the emerging ethical aspects of AI, with a focus on Bias, Risk, Explainability and the role of Logic and Computational Logic. BEWARE23 is co-located with the AIxIA 2023 conference.

Aims and Scope

Current AI applications do not guarantee objectivity and are riddled with biases and legal difficulties. AI systems need to perform safely, but problems of opacity, bias and risk are pressing. Definitional and foundational questions about what kinds of bias and risk are involved in opaque AI technologies are still very much open. Moreover, AI challenges Ethics itself and brings the need to rethink its foundations.

In this context, it is natural to look for theories, tools and technologies to address the problem of automatically detecting biases and implementing ethical decision-making. Logic, Computational Logic and formal ontologies have great potential in this area of research, as logic rules are easily comprehensible by humans and favour the representation of causality, which is a crucial aspect of ethical decision-making. Nonetheless, their expressivity and transparency need to be integrated within conceptual taxonomies and socio-economic analyses that place AI technologies in their broader context of application and determine their overall impact.

This workshop addresses issues of logical, ethical and epistemological nature in AI through the use of interdisciplinary approaches. We aim to bring together researchers in AI, philosophy, ethics, epistemology, social science, etc., to promote collaborations and enhance discussions towards the development of trustworthy AI methods and solutions that users and stakeholders consider technologically reliable and socially acceptable.

The workshop invites submissions from computer scientists, philosophers, economists and sociologists wanting to discuss contributions ranging from the formulation of epistemic and normative principles for AI, their conceptual representation in formal models, to their development in formal design procedures and translation into computational implementations.

Topics of interest include, but are not limited to:

Conceptual and formal definitions of bias, risk and opacity in AI

Epistemological and normative principles for fair and trustworthy AI

Ethical AI and the challenges brought by AI to Ethics

Explainable AI

Uncertainty in AI

Ontological modelling of trustworthy as opposed to biased AI systems

Defining trust and its determinants for implementation in AI systems

Methods for evaluating and comparing the performances of AI systems

Approaches to verification of ethical behaviour

Logic Programming Applications in Machine Ethics

Integrating Logic Programming with methods for Machine Ethics and Explainable AI

Submission

The workshop invites (possibly non-original) submissions of FULL PAPERS (up to 15 pages) and SHORT PAPERS (up to 5 pages). Short papers are particularly suitable for presenting work in progress, extended abstracts, doctoral theses, or general overviews of research projects. Note that all papers will undergo a careful peer-review process and, if accepted, camera-ready versions of the papers will be published in the AIxIA subseries of CEUR proceedings (Scopus indexed).

Manuscripts must be formatted using the 1-column CEUR-ART Style (you can access the Overleaf template here). For more information, please see the CEUR website http://ceur-ws.org/HOWTOSUBMIT.html. Papers must be submitted through EasyChair: https://easychair.org/conferences/?conf=beware23.

Proceedings

CEUR Workshop Proceedings.

Please refer to the workshop website for updates regarding the proceedings and a potential special issue.

Organizers

Guido Boella, Università di Torino

Fabio Aurelio D’Asaro, Università degli Studi di Verona

Abeer Dyoub, Università degli Studi dell’Aquila

Laura Gorrieri, Università di Torino

Francesca A. Lisi, University of Bari “Aldo Moro”

Chiara Manganini, Università degli Studi di Milano

Giuseppe Primiero, Università degli Studi di Milano

Important Dates

Submission deadline: 10 September 2023

Notification: 10 October 2023

Camera ready: 20 October 2023
