eXplainable AI Day 2024 – the conference focused on explaining and understanding decisions made by AI

We currently live in a time where Artificial Intelligence (AI) reigns supreme. Each day, we find ourselves surrounded by AI technologies. It is hard to envision a future in which we rely solely on algorithms to make decisions without understanding their rationale or being able to verify their explainability; operating blindly, with no insight into the foundations of those decisions, seems improbable. Many sci-fi movies show people blindly following such advice, which often leads to bad results.

Furthermore, regulations are on the horizon. The AI Act is scheduled to take effect in 2024. Serving as a cornerstone of the EU’s digital strategy, its purpose is to regulate AI and cultivate a conducive environment for the advancement and application of this technology. Additionally, the AI Liability Directive is also looming, designed to complement the Act but possibly posing legal hurdles for numerous companies in the EU lacking adequate algorithmic oversight. Consequently, this subject holds immense significance in contemporary discourse.

Therefore, promoting AI explainability is essential, which is why our event is taking place. On March 5th, we will host the inaugural edition of the eXplainable AI Day 2024 conference. This event, the first of its kind in Poland, is dedicated to explaining and understanding the decisions generated by AI. The conference will feature ten presentations by industry experts and will be conducted entirely online, free of charge. The significance of such an event is paramount in today’s landscape. It’s a gathering of the Polish AI community, conducted in English to make it accessible to others interested in the field. Such events are rare, so we kindly invite you to participate and register to secure a spot at the following link: https://10senses.com/xaiday-2024/ 

The program of the conference

During the eXplainable AI Day, you will explore what AI explainability entails, its significance for companies, and practical use cases. Moreover, you will gain insights into methods for explaining and reporting AI decisions and the upcoming regulations regarding AI explainability. The event starts with a presentation by our Keynote Speaker, Joris Krijger, the AI & Ethics Officer at de Volksbank. He’ll explore ‘The Purpose and Value of Explainable AI,’ highlighting its importance beyond model improvement. Drawing from his experiences, Joris will show how XAI builds trust and accountability in society, crucial for organizational success.

Our conference also covers the following topics:

The upcoming EU regulations regarding explaining AI

AI is now part of our daily lives, and because of this, there is a need for reliable AI and fair rules to govern it. These rules should encourage progress while also ensuring that people can trust AI and that it benefits society. Through the AI Act, the European Union aims to ensure transparency, safety, non-discrimination, and environmental friendliness in AI systems used within its borders. Additionally, the Act seeks to establish clear requirements and obligations for AI developers, deployers, and users concerning the various applications of AI systems.

Following this, Marek Porzeżyński, Partner and Head of IP and NewTech at the Linke Kulicki Law Firm, will discuss regulations for AI transparency and decision-making justification. His presentation, titled “Regulating Explainable AI,” emphasizes the importance of this approach in fostering trust and ethical AI applications. After Marek’s discussion, Łukasz Węgrzyn, Head of the IT and Data Team at Osborne Clarke, will highlight how businesses can turn compliance into a competitive edge. This session will also examine today’s AI regulations, spotlighting key laws and ethical principles crucial for developers and businesses worldwide.

Register for eXplainable AI Day 2024

Practical use cases for explainable AI

Exploring the practical applications of explainable AI offers insight into how AI algorithms make decisions, thereby fostering trust and confidence in their outcomes. Additionally, industries such as infrastructure, e-commerce, healthcare, and finance – where accountability is paramount – derive significant benefits from the transparency provided by explainable AI. This transparency ensures that decisions align with regulations and ethical standards. Understanding these use cases empowers us to identify and rectify any biases or errors in AI systems, facilitating ongoing improvement.

In the use cases section, Mariusz Jurczyk, Executive Director at TAURON Polska Energia, will discuss AI’s role in physical infrastructure. He’ll emphasize understanding algorithms’ contributions to decision-making, potentially enhancing organizational efficiency. The next presentation will be “Machine Learning in E-commerce: How Artificial Intelligence Can Drive Your Business” by Witold Zaklukiewicz. Witold is a Data Science Team Leader at Ceneo, and he will discuss how, in 2023, up to 80% of internet users made online purchases, presenting extraordinary opportunities while also posing significant challenges in managing such a large scale of operations. In this context, machine learning plays a pivotal role.

Additionally, Agnieszka Niezgoda, Senior Technical Specialist at Microsoft, will explore Azure AI’s capabilities, highlighting services like computer vision and AI speech. We’ll discuss use cases and key considerations for starting AI projects, covering various AI solution approaches. Finally, Mike Guzowski, Co-Founder of Developico, will present ‘Managing Rapid Progress – How to Automate Processes Within a Company in a Hyper-Agile Manner?’ He’ll discuss challenges in utilizing AI, Low-Code, or RPA, drawing on experiences from Budimex, NFM Group, and Tchibo.

Effective methods for explaining and reporting AI decisions

Effective methods for explaining and reporting AI decisions involve clear and concise communication tailored to diverse audiences. Techniques like algorithmic transparency, where the inner workings of the AI model are made accessible, can enhance understanding. Furthermore, integrating feedback mechanisms that allow users to question and interact with AI-generated outputs fosters transparency and trust in the decision-making process.
 
In this segment, Łukasz Borowiecki, CEO of 10 Senses, will explore Machine Learning models for CRM, showcasing their ability to forecast patterns and customer profiles. He’ll also explain how Shapley values extract insights from these models and present typical business scenarios where such analysis is invaluable. The next presentation will be by Mikołaj Sacha, Head of Machine Learning at Molecule.one. Mikołaj will demystify AI in Computer Vision with his presentation ‘Why Does This Look Like That?’ He’ll highlight common errors in modern image processing models and discuss integrating interpretability for safer solutions. At the end, Marek K. Zieliński, CTO of 10 Senses, will discuss practical examples of documentation generation and best practices for trustworthiness and efficacy. This presentation will also address the challenge of balancing model complexity with the need for transparency.
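To give a flavor of the kind of analysis Łukasz will discuss, here is a minimal, self-contained sketch of how Shapley values attribute a model’s prediction to its input features. The model, feature names, and numbers below are purely illustrative (a hypothetical two-feature scoring function, not anything from the talk); the code computes exact Shapley values by averaging each feature’s marginal contribution over all feature orderings, with absent features replaced by a baseline:

```python
from itertools import permutations

def shapley_values(predict, x, baseline):
    """Exact Shapley values for a small model: average each feature's
    marginal contribution over every ordering in which features are
    'switched on' (moved from the baseline to their actual value)."""
    n = len(x)
    phi = [0.0] * n
    orderings = list(permutations(range(n)))
    for order in orderings:
        z = list(baseline)           # start from the reference input
        prev = predict(z)
        for i in order:              # reveal features one by one
            z[i] = x[i]
            cur = predict(z)
            phi[i] += cur - prev     # marginal contribution of feature i
            prev = cur
    return [p / len(orderings) for p in phi]

# Hypothetical scoring model with a feature interaction term
def model(z):
    tenure, spend = z
    return 0.5 * tenure + 0.3 * spend + 0.1 * tenure * spend

x = [4.0, 2.0]         # instance we want to explain
baseline = [0.0, 0.0]  # reference "average" instance
phi = shapley_values(model, x, baseline)

# Efficiency property: attributions sum to prediction minus baseline
assert abs(sum(phi) - (model(x) - model(baseline))) < 1e-9
```

This brute-force version enumerates all orderings, so it only scales to a handful of features; libraries used in practice rely on sampling or model-specific shortcuts, but the attribution idea is the same.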

eXplainable AI Day 2024 is a unique conference featuring numerous valuable presentations and talented speakers. We’re delighted that so many people will have the opportunity to witness and engage with the expanding field of explainable AI. There are few better ways to spend a crisp winter morning than at an explainable AI conference – and the virtual format undeniably has its advantages!

Additionally, the conference recordings will be available on our YouTube channel after the conference, enabling everyone to explore the latest advancements in artificial intelligence.

Register for eXplainable AI Day 2024

Talk to our expert

Are you looking for expert skills for your next data project?

Or maybe you need seasoned data scientists to extract value from data?

Fill out the contact form and we will respond as soon as possible.