AI in tech: EY and LUXINNOVATION unite to drive innovation

Luxembourg innovates with AI initiatives, remaining watchful of upcoming regulations. On November 8th, EY and Luxinnovation joined forces to promote innovation in AI in Luxembourg at the AI in Tech event. Over 60 firms were in attendance. This evening was a success, providing a unique opportunity for a diverse group of attendees to interact and learn from each other.

November 14, 2023

Expert speakers shared key insights, including Ralf Hustadt, Special Advisor for Digitalisation, Data Economy and Gaia-X at LUXINNOVATION; Céline Tarraub, Adviser Digital & Innovation at FEDIL; and Jordi Cabot, Head of the Software Engineering RDI Unit at LIST (Luxembourg Institute of Science and Technology).

A compelling panel discussion featuring Ralf Hustadt (Luxinnovation), Meysam Minoufekr, CEO of Dropslab, Sébastien Respaut, Country Head of Microsoft, and Seva Vayner of Gcore Technologies further enriched the event and contributed to its overall success.

From the National HPC Competence Center to the proposed EU AI Act, Luxembourg is leading the way in AI initiatives. AI technology has enabled organizations to improve decision-making and offers businesses opportunities to rethink their processes. As AI continues to advance, new opportunities are on the horizon, such as edge AI, global intelligence pipelines, and AI-driven sustainability.

Luxembourg’s National HPC Competence Center provides HPC support to industry, academia, and public administration. The Center enables simulations, modeling, big data analytics, and AI, supporting use cases such as optimizing logistics choices, reducing waste in production processes, and more. Organizations such as Luxinnovation, LuxProvide, and the University of Luxembourg help make these innovations in AI possible. EY Luxembourg also plays a central role as an innovation catalyst, actively guiding businesses in adopting advancements in artificial intelligence.

The proposed EU AI Act provides guidelines to address the risks of AI applications, specifying a list of high-risk applications and setting clear requirements for systems used in them. The rules call for conformity assessments before an AI system is put into service or placed on the market, with enforcement thereafter. The Act applies to providers and users of AI systems in the EU, as well as those in third countries whose output is used within the EU. Its risk-based approach categorizes AI systems into four tiers: unacceptable risk, high risk, AI with specific transparency obligations, and minimal or no risk. High-risk AI systems must be trained on high-quality data, be documented, include logging features, and ensure transparency, human oversight, robustness, accuracy, and cybersecurity. The proposed regulations aim to strike a balance between stringent rules for high-risk applications and greater flexibility for low-risk AI, protecting citizens without stifling innovation.
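The tiered structure described above can be sketched as a simple data model. This is an illustrative sketch only, not a compliance tool: the use-case names and the obligation lists are simplified assumptions loosely paraphrasing the proposal's categories.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the proposed EU AI Act (simplified)."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # strict requirements + conformity assessment
    LIMITED = "limited"            # specific transparency obligations
    MINIMAL = "minimal"            # no additional obligations

# Hypothetical example use cases mapped to tiers (assumptions, not legal advice).
EXAMPLE_USE_CASES = {
    "social-scoring": RiskTier.UNACCEPTABLE,
    "cv-screening-for-hiring": RiskTier.HIGH,
    "customer-chatbot": RiskTier.LIMITED,
    "spam-filter": RiskTier.MINIMAL,
}

def obligations(tier: RiskTier) -> list[str]:
    """Return a simplified, illustrative list of obligations for a tier."""
    if tier is RiskTier.UNACCEPTABLE:
        return ["prohibited"]
    if tier is RiskTier.HIGH:
        return [
            "high-quality training data",
            "documentation and logging",
            "transparency",
            "human oversight",
            "robustness, accuracy and cybersecurity",
            "conformity assessment before market placement",
        ]
    if tier is RiskTier.LIMITED:
        return ["disclose that users are interacting with an AI system"]
    return []
```

A design note: encoding the tiers as an enum rather than free-form strings makes the risk classification explicit and exhaustive, which mirrors how the Act itself enumerates a closed set of categories.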

While AI advancements could make some existing startups and services redundant, generative AI could also improve productivity by up to 40% for highly skilled workers. Ethical considerations, such as mitigating biases and ensuring transparency, must be central to the development process.

To prepare for AI adoption, corporates need a clear strategic roadmap, sound data management and governance, responsible AI practices, identified use cases, and a culture that supports AI development.

Advances in AI technology have enabled its integration into core business applications, bringing intelligent, automated decision-making and greater efficiency and productivity. For example, the adoption of more powerful GPUs such as the NVIDIA A100 and H100 allows organizations to train larger and more complex AI models. In parallel, there has been a surge in the development of local language models to ensure compliance with European data privacy and sovereignty regulations.

Ethical considerations, such as transparency, fairness, and accountability, must be at the forefront of AI design and implementation. AI technologies can reduce human autonomy, infringe on privacy rights, and impact the job market and social structures. Collaborative regulation can help mitigate these risks without unduly hampering the growth and innovation that AI can bring to society.

Looking to the future, various advancements are expected in the next few years, including edge AI and real-time processing, global intelligence pipelines, federated learning, and AI-driven sustainability. These advancements offer opportunities for better energy optimization, improved privacy and security, and enhanced climate modeling. As Luxembourg continues to lead in AI initiatives, AI technology will play an increasingly vital role in driving innovation across sectors.

Businesses are increasingly interested in exploring how they can effectively utilize AI in their operations while also addressing the risks and governance associated with this technology. At EY, we have developed AI maturity models, value programs, and guidelines to manage these risks.

In this transformative journey, events like EY’s AI in Tech not only share knowledge but also light the way forward. With the commitment of every participant, the future of AI will be guided by a shared vision of responsible, innovative, and sustainable integration.
