Artificial intelligence in the energy sector


Artificial intelligence (AI) is rapidly gaining importance in many areas, including the energy sector, where it can make a major contribution to enhancing the efficiency, reliability and sustainability of energy production and distribution, and to the transformation of energy systems.

Energy production can be optimized by using machine learning and predictive analytics to respond dynamically to changes in energy supply and demand. Likewise, demand response strategies help match energy demand with supply, especially during peak consumption periods. If AI systems are used to monitor and analyze electricity grid data, irregularities or unusual patterns can be detected at an early stage. In energy trading, AI-based systems help predict prices and make automated trading decisions.
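The monitoring idea can be illustrated with a deliberately simple sketch: flag grid-load readings that deviate strongly from the recent average. The readings, the threshold and the z-score rule below are all hypothetical stand-ins; real grid-monitoring systems rely on trained models rather than a fixed statistical rule.

```python
from statistics import mean, stdev

def flag_anomalies(readings, threshold=2.0):
    """Return the indices of readings whose z-score exceeds the threshold.

    A toy stand-in for the ML-based grid monitoring described above;
    production systems would use trained anomaly-detection models.
    """
    mu = mean(readings)
    sigma = stdev(readings)
    if sigma == 0:
        return []
    return [i for i, r in enumerate(readings)
            if abs(r - mu) / sigma > threshold]

# Hypothetical grid-load readings in MW; the spike at index 5 stands out.
load = [410, 415, 408, 412, 409, 900, 411, 414]
print(flag_anomalies(load))  # → [5]
```

Flagged readings could then trigger a closer inspection by grid operators before a fault escalates.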

OpenAI’s release of ChatGPT in November 2022 marked a further shift towards the use of AI in business and brought another wave of public attention to AI systems. More and more businesses are developing their own GPT applications for different tasks within their company. To exploit the full potential of AI, it is important to understand the legal framework surrounding its operation.

Legal framework for AI technologies

On 13 March 2024, the European Parliament passed the Artificial Intelligence Act (AI Act), a European Union regulation establishing the legal framework for the use of AI technologies. The provisions of the AI Act take effect in stages: the prohibitions on certain AI practices apply first, while the majority of the provisions apply from mid-2026. For businesses that already use AI technologies or plan to do so in the future, it is important to know the provisions of the AI Act. When an AI application is introduced in a company, it should meet the statutory requirements from the outset to ensure it can also be used in the future.

Risk-based approach of the AI Act

The AI Act follows a risk-based approach based on four categories of AI applications: (1) unacceptable risk, (2) high risk, (3) limited risk and (4) minimal risk applications.

The Act prohibits AI applications that manipulate human behavior or exploit vulnerabilities. Social scoring that may lead to discriminatory outcomes and the exclusion of certain groups is also prohibited.

Use of high-risk systems

The provisions on high-risk systems are particularly important for businesses, especially for those in the energy sector. The AI Act classifies all AI systems intended to be used as safety components in the management and operation of critical infrastructure and the supply of gas, heating and electricity as high-risk systems.

Providers of high-risk systems are required to undergo a comprehensive procedure before they are allowed to place their systems on the market. The AI Act (Article 26) also sets out a number of obligations for deployers of high-risk systems: They are required to implement appropriate technical and organizational measures to ensure that they use such systems in accordance with the provider’s instructions for use. Moreover, deployers are required to ensure human oversight of systems, guarantee the quality of input data and inform data subjects about the use of AI systems.

Transparency and AI literacy

Other AI applications that are not regarded as high risk are also subject to a number of transparency obligations to inform data subjects about the use of AI systems. Further, businesses must provide their employees who use AI systems with sufficient training to ensure that they have the necessary AI expertise (Article 4 AI Act).

Risk prevention

Staff training and the development of internal company guidelines are also important to prevent risks to the company. This applies in particular to the prevention of liability risks, data protection violations and copyright infringements.

The best-known AI applications, such as ChatGPT, Google Gemini and Microsoft Copilot, are built on large language models (LLMs) designed to produce fluent text. Their output is impressively fluent, but LLMs are not knowledge databases: the texts they produce can contain significant factual errors. Before such AI systems are used in a business environment, they need to be trained on use-specific data. Moreover, the application scope should be limited. For example, a chatbot for customer support should answer questions about company products but not discuss competitor products, let alone political matters or daily events.
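The scope limitation described above can be sketched as a simple guard placed in front of the model. The keyword list and response texts below are purely illustrative; a real deployment would combine system-prompt instructions with a trained topic classifier rather than a keyword screen.

```python
# Hypothetical list of topics the support chatbot must not discuss.
OFF_LIMITS = {"competitor", "competitors", "election", "politics", "news"}

def in_scope(question: str) -> bool:
    """Reject questions that touch topics the bot must not discuss.

    A deliberately simplified keyword screen; real systems would use
    a classifier and system-prompt restrictions instead.
    """
    words = question.lower().split()
    return not OFF_LIMITS.intersection(words)

def answer(question: str) -> str:
    if not in_scope(question):
        return "I can only answer questions about our own products."
    # In a real deployment, the question would now be forwarded to the
    # LLM together with a system prompt restricting it to product topics.
    return f"[forwarded to model] {question}"

print(answer("Tell me about competitor products"))
```

Such a guard keeps the chatbot from drifting into areas that create liability or reputational risk, which is the point the paragraph above makes.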

If an AI application also processes personal data, which is often the case, data protection law applies. Generative AI applications that create text or images are also subject to copyright law. Businesses must ensure that they have the necessary licenses for the training data they use, and they must regularly check that the texts and images created do not infringe any third-party rights.

Opportunities in the use of AI

On the whole, the use of AI applications brings significant advantages for businesses in the energy sector. The question is not whether businesses should introduce AI applications but how they should do it. The efficiency gains are so great that these applications will soon become commonplace.

It is good news that the AI Act has now been passed. Businesses now know what to expect and what requirements they have to fulfil. Businesses that are familiar with the provisions of the AI Act and take them into account in their planning and strategy will fully benefit from the potential offered by AI applications.

 

Author

Arnd Böken
GvW Graf von Westphalen, Berlin
Attorney-at-Law, Notary, Partner

a.boeken@gvw.com
www.gvw.com

 

Author

Dr. Maximilian Emanuel Elspas
GvW Graf von Westphalen, Munich
Attorney-at-Law, Business Lawyer, Partner

m.elspas@gvw.com
www.gvw.com