At the start of the year, I set Objectives and Key Results (OKRs) to drive my personal and professional growth. I worked toward them by contributing to a research project, mentoring and training an intern in data science, reading 28 books, and improving my EFSET exam score by 15 points to certify my English proficiency. Additionally, I deepened my expertise in Machine Learning for Production (MLOps) by completing a specialization with top scores across multiple courses.
In recognition of these achievements, Galaksiya honored me with the opportunity to attend ECAI, a prestigious conference that provided invaluable insights into the latest advancements in artificial intelligence. It was held in the breathtaking city of Santiago de Compostela, a place that felt like stepping back in time: walking its cobblestone streets among medieval architecture, I felt as if I had been transported to the 14th century, into the pages of a history book. The city's beauty and rich heritage made the experience even more unforgettable. In this blog, I’ll share my journey and key takeaways from the conference.
The 27th European Conference on Artificial Intelligence (ECAI)
ECAI was held from October 19 to 24, 2024, and brought together over 1,600 delegates from the global AI research community.

The conference featured a comprehensive program, including technical paper presentations, workshops, tutorials, invited talks, and special sessions. Below, I summarize the presentations and tutorials I attended during the event; all papers discussed appear in the conference proceedings [1].
Key Themes
A. Ethics and Fairness
Accountability in AI Systems: Addressing the risks associated with general-purpose AI and creating mechanisms to ensure safe and ethical deployment.
AI Misuse: Tackling ethical dilemmas like malicious uses of AI, especially in sensitive areas like social media.
Fairness and Inclusivity: Advocating for equitable and proportional representation in AI-driven decision-making systems.
Explainability in AI: Ensuring AI systems are transparent and interpretable to foster trust and understanding, enabling stakeholders to comprehend how decisions are made.
B. Sustainability
Environmental Impact of AI: Acknowledging the growing carbon footprint of machine learning processes and promoting sustainable practices.
Efficient Resource Utilization: Focused on minimizing resource waste and improving the efficiency of AI systems to reduce environmental harm.
C. Enhancing Active Learning and Optimization
Active Learning: Developing smarter algorithms that can refine themselves using targeted queries and counterexamples, leading to better model efficiency.
Optimization: Conducting studies to improve the performance of classical AI planning systems, making them more effective and reliable.
D. AI Applications in Processes and Systems
Declarative Processes: Using AI to optimize process mining, representation, and synthesis for better workflow automation.
Multi-Agent Collaboration: Exploring the challenges and opportunities of multi-agent systems, particularly in addressing complex, multi-objective scenarios.
Anomaly Detection: Leveraging AI to identify unusual patterns or deviations in data that may indicate errors, fraud, or system failures. This includes real-time monitoring and predictive analytics to enhance decision-making and maintain system integrity.
E. General-Purpose AI
Defining, Evaluating, and Improving General-Purpose AI (GPAI): A growing focus on identifying limitations and proposing standardized evaluation approaches for AI that can operate across diverse tasks.
Key Methods
A. Large Language Models
B. Reinforcement Learning
C. Multi-Agent Systems
D. Computer Vision
Highlights from the Conference
1. Ethics and Fairness
The Implementing AI Ethics Through a Behavioral Lens[2] workshop explored how to align AI systems with human values, emphasizing fairness, bias reduction, and transparency. It highlighted the need to move beyond traditional accuracy metrics and adopt evaluation methods that reflect real-world challenges, such as patient-centered metrics in healthcare. The workshop also discussed the importance of preventing data leakage and implementing robust multi-center evaluation processes to ensure reliability.
Another notable contribution was Ethical AI Governance: Methods for Evaluating Trustworthy AI[3], which categorized AI evaluation methods into conceptual, manual, semi-automated, and automated approaches. The paper argued that while automated tools can efficiently detect bias and explainability issues, human oversight remains essential for nuanced ethical decision-making. Challenges such as the lack of standardization and inconsistencies across trustworthiness assessments were also addressed, emphasizing the need for unified evaluation frameworks.
Additionally, the PEACE (Providing Explanations and Analysis for Combating Hate Expressions)[1] paper introduced a web tool designed to detect subtle and hidden hate speech that traditional models often overlook. By integrating visualization tools and explainable AI techniques, PEACE helps content moderators better understand and address both explicit and implicit hate speech across online platforms.
2. Sustainability
More (Enough) Is Better: Towards Few-Shot Illegal Landfill Waste Segmentation[1] addressed the challenge of detecting illegal waste sites in aerial images with minimal labeled data. The proposed approach leveraged few-shot learning techniques and synthetic data generation to improve detection accuracy while minimizing the need for extensive manual annotations. This research underscored how AI can be used to tackle real-world environmental issues efficiently.
Another significant paper, Zero-Waste Machine Learning[1], introduced a sustainability-driven approach to AI model optimization. The authors proposed a framework inspired by circular economy principles, focusing on reusing computations and knowledge to enhance model efficiency. Key techniques included conditional computation, continual learning, and resource recycling, ensuring that AI systems remain resource-efficient without sacrificing performance.
The broader challenge of managing multi-agent and multi-objective decision-making in sustainability contexts was explored in The World is a Multi-Objective Multi-Agent System: Now What?[1]. This study examined how AI can address conflicting priorities, such as balancing environmental sustainability with economic goals. By integrating explainability and transparency into AI systems, the research aimed to improve trust and adoption in large-scale decision-making applications.
3. Enhancing Active Learning and Optimization
The tutorial Beyond Trial & Error: A Tutorial on Automated Reinforcement Learning[4] provided insights into the automation of reinforcement learning (RL) processes. AutoRL methods aim to streamline traditionally manual tasks, such as hyperparameter tuning and algorithm selection, making RL more accessible and adaptable to real-world applications. The tutorial also covered the impact of hyperparameters on model sensitivity and performance, reinforcing the importance of automated optimization techniques.
A related study, Hyperparameter Importance Analysis for Multi-Objective AutoML[1], explored the role of hyperparameter tuning in balancing competing objectives, such as model accuracy versus computational cost. The authors introduced novel analysis techniques to identify which hyperparameters have the most significant impact, providing valuable insights for optimizing multi-objective AI models.
In addition, Survival of the Fittest: Evolutionary Adaptation of Policies for Environmental Shifts[1] introduced an evolutionary-based approach to reinforcement learning. The study proposed Evolutionary Robust Policy Optimization (ERPO), a technique that enables AI models to adapt to changing environments without requiring extensive retraining. This research has implications for applications where AI systems must function in dynamic and unpredictable settings, such as robotics and financial modeling.
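The paper's ERPO algorithm isn't spelled out in this post, but the general idea of evolutionary policy adaptation can be sketched with a toy example. Everything below (the `evolve_policy` helper, the quadratic reward functions, the parameter settings) is illustrative and assumed, not taken from the paper:

```python
import numpy as np

def evolve_policy(reward_fn, theta, pop_size=20, sigma=0.1, generations=50, seed=0):
    """Adapt policy parameters with a simple (1 + lambda) evolution strategy:
    mutate the current best policy, evaluate, and keep any improvement."""
    rng = np.random.default_rng(seed)
    best, best_r = theta.copy(), reward_fn(theta)
    for _ in range(generations):
        # Gaussian mutations around the current best policy
        candidates = best + sigma * rng.standard_normal((pop_size, theta.size))
        rewards = np.array([reward_fn(c) for c in candidates])
        i = int(rewards.argmax())
        if rewards[i] > best_r:  # elitist selection: keep only improvements
            best, best_r = candidates[i], rewards[i]
    return best, best_r

# Toy "environmental shift": the optimal policy moves from 0 to 1.5
original = lambda th: -np.sum((th - 0.0) ** 2)
shifted = lambda th: -np.sum((th - 1.5) ** 2)

theta0, _ = evolve_policy(original, np.zeros(3))  # policy for the original task
theta1, r1 = evolve_policy(shifted, theta0)       # adapt without gradient retraining
```

The adapted policy recovers most of the reward on the shifted task without any gradient-based retraining, which is the core appeal of evolutionary adaptation in dynamic settings.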
4. AI Applications in Processes and Systems
The paper Memory Adaptive and Spatially Specialized Model Ensembles for Industrial Anomaly Detection[1] introduced a framework that dynamically integrates multiple specialized models for detecting anomalies in industrial systems. By combining memory adaptation with spatial specialization, the framework enhances AI’s ability to handle diverse and evolving data distributions. This approach proved to be highly effective in industrial applications where traditional static models struggle with concept drift and changing operational conditions.
Another relevant study, Cascade Memory for Unsupervised Anomaly Detection[1], tackled the challenge of overgeneralization in unsupervised models. Many existing anomaly detection methods struggle to distinguish between normal and abnormal data, leading to high false-positive rates. The proposed cascade memory architecture refines the focus on normal patterns while allowing the system to isolate anomalies more effectively. The method demonstrated state-of-the-art performance across various benchmark datasets, particularly in complex real-world scenarios.
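The cascade architecture itself is the paper's contribution and isn't reproduced here, but the underlying memory-bank idea, scoring a sample by its distance to stored normal patterns, can be sketched as a minimal baseline (all names and data below are illustrative):

```python
import numpy as np

class MemoryAnomalyDetector:
    """Minimal memory-bank detector: store feature vectors of normal data,
    then score new samples by distance to the nearest stored pattern.
    (An illustrative baseline, not the paper's cascade architecture.)"""

    def __init__(self):
        self.memory = None

    def fit(self, normal_features):
        self.memory = np.asarray(normal_features, dtype=float)

    def score(self, x):
        # Higher score = farther from every known normal pattern = more anomalous
        dists = np.linalg.norm(self.memory - np.asarray(x, dtype=float), axis=1)
        return float(dists.min())

rng = np.random.default_rng(42)
normal = rng.normal(0.0, 1.0, size=(500, 4))  # cluster of normal samples
det = MemoryAnomalyDetector()
det.fit(normal)

s_normal = det.score(rng.normal(0.0, 1.0, size=4))  # a typical sample
s_anom = det.score(np.full(4, 8.0))                 # far outside the cluster
```

The overgeneralization problem the paper targets shows up when such a memory grows too permissive and anomalies also land close to stored patterns; the cascade refines the memory to keep it tight around genuinely normal behavior.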
The validation of anomaly detection models without labeled data was addressed in Towards Unsupervised Model Validation[1]. This study proposed the use of an "accurately diverse" ensemble of models to evaluate anomaly detection systems, leveraging correlation-based ranking techniques to assess model reliability. By providing a structured approach to unsupervised validation, the research contributed to making AI-driven anomaly detection more robust and scalable.
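The paper's exact procedure isn't detailed in this post, but the rank-correlation intuition can be illustrated: without labels, a model whose anomaly ranking agrees with most of a diverse ensemble is treated as more reliable than an outlier model. The helper below is a hypothetical simplification, not the authors' method:

```python
import numpy as np

def spearman(a, b):
    """Spearman rank correlation between two score vectors (no ties assumed)."""
    ra, rb = np.argsort(np.argsort(a)), np.argsort(np.argsort(b))
    return float(np.corrcoef(ra, rb)[0, 1])

def reliability_ranking(score_matrix):
    """score_matrix: (n_models, n_samples) anomaly scores from an ensemble.
    A model's reliability = its mean rank agreement with every other model."""
    n = len(score_matrix)
    rel = []
    for i in range(n):
        corrs = [spearman(score_matrix[i], score_matrix[j])
                 for j in range(n) if j != i]
        rel.append(np.mean(corrs))
    return np.argsort(rel)[::-1]  # model indices, most reliable first

rng = np.random.default_rng(0)
true_signal = rng.random(100)
# Three models roughly track the same signal; the fourth is pure noise
scores = np.stack([
    true_signal + 0.1 * rng.random(100),
    true_signal + 0.1 * rng.random(100),
    true_signal + 0.1 * rng.random(100),
    rng.random(100),
])
order = reliability_ranking(scores)
```

In this toy setup the noise model ends up ranked least reliable, mirroring how agreement within a diverse ensemble can stand in for missing ground-truth labels.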
5. General-Purpose AI
The workshop Language Understanding in the Human-Machine Era[5] explored the evolving role of large language models (LLMs) in applications such as conversational AI and machine translation. While LLMs have demonstrated impressive capabilities, the workshop emphasized their limitations in truly understanding human language, particularly in contexts that require deep reasoning and contextual awareness.
Finally, the paper Caveats and Solutions for Characterizing General-Purpose AI[1] tackled the broader challenge of measuring GPAI’s generality and capability. The authors proposed new benchmarking strategies, including adaptability tests that assess AI performance across diverse and changing tasks. They also advocated for interdisciplinary insights from philosophy and cognitive science to refine the conceptualization of GPAI, reinforcing the importance of transparency and explainability in AI evaluation.
Summary
ECAI explored key themes shaping the future of AI, including ethics, sustainability, optimization, and real-world applications. Discussions on ethics and fairness emphasized the importance of transparent, unbiased AI systems that align with human values.
Sustainability efforts focused on reducing AI’s environmental impact through energy-efficient methods and responsible resource use. Advances in active learning and optimization highlighted techniques for automating and adapting AI systems to dynamic environments. In applied AI, process optimization and decision-making innovations showcased practical solutions for industrial and societal challenges. The conference also examined the complexities of general-purpose AI, stressing the need for adaptability, transparency, and interdisciplinary collaboration. Overall, ECAI underscored the development of ethical, efficient, and adaptable AI systems to address pressing global challenges.
References
[1] ECAI 2024 Proceedings – 27th European Conference on Artificial Intelligence (ECAI), October 19-24, 2024, Santiago de Compostela, Spain. IOS Press. Available at: https://ebooks.iospress.nl/volume/ecai-2024-27theuropean-conference-on-artificial-intelligence-1924-october-2024santiago-de-compostela-spain-including-pais-2024
[2] AIEB-ECAI 2024 Workshop – Implementing AI Ethics Through a Behavioral Lens. ECAI 2024. Available at: https://aieb-ecai2024.framer.ai/
[3] McCormack, L., & Bendechache, M. (2024). Ethical AI Governance: Methods for Evaluating Trustworthy AI. arXiv. Available at: https://arxiv.org/abs/2409.07473
[4] ECAI 2024 AutoRL Tutorial – Beyond Trial & Error: A Tutorial on Automated Reinforcement Learning. ECAI 2024.
[5] LUHME Workshop, ECAI 2024 – Language Understanding in the Human-Machine Era. Available at: https://luhme.web.uah.es/