What IT decision-makers should have on their radar now: Four digital trends for 2026
The technology experts at software manufacturer d.velop have identified four particularly pivotal trends for 2026.
Few technologies have evolved as rapidly as artificial intelligence in recent years. And with the new year only just beginning, there is no sign that this pace of progress will slow down – quite the opposite. AI will likely make its way into even more areas of everyday life and work. At the same time, lawmakers are responding to calls for stricter regulation, and Europe is striving for greater digital independence. Against this backdrop, an exceptionally exciting year lies ahead. d.velop’s technology experts have highlighted four key trends for 2026:
1. Agents and specialised integrations: AI becomes more productive
According to a widely cited MIT study, 95 per cent of AI pilot projects fail and do not deliver the expected added value. The findings indicate that although tool adoption rates are high, true transformation is often lacking. Standard chatbots such as ChatGPT may be in use, but tailored solutions and genuine integration into existing processes are frequently missing. This disrupts workflows, because employees constantly have to switch between applications. Moreover, such ad-hoc use of AI can lead to serious data protection and data security issues, and the outputs of general-purpose models are often unsuitable for highly specialised tasks.
However, the AI field is learning from these early setbacks: in hype-cycle terms, we are approaching the trough of disillusionment, after which the technology reaches the plateau of productivity. In the coming year, native, specialised AI integrations for a range of software solutions will contribute to this development by supporting employees directly within their familiar workflows.
In addition, AI agents will redefine automation. Today, automation often relies on rigid if–then logic. Actions are selected from a predefined set based on specific parameters. AI agents, however, go far beyond such rigid systems: they can independently interpret situations, weigh options, develop ideas and even prepare actions – a true paradigm shift and milestone.
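To make this contrast tangible, here is a minimal, purely illustrative Python sketch. The names Invoice, rule_based_route and agent_route are assumptions made up for this example and do not refer to any d.velop product or API; the agent’s “interpretation” step is simulated here, whereas a real agent would delegate it to a language model.

```python
# Illustrative sketch only: contrasts rigid if-then automation with an
# agent-style step that interprets context and prepares an action.
# All names below are hypothetical and not tied to any real product or API.
from dataclasses import dataclass


@dataclass
class Invoice:
    amount: float
    supplier_known: bool
    notes: str  # free-text context an agent could interpret


def rule_based_route(inv: Invoice) -> str:
    """Classic automation: one action from a fixed set, chosen by hard-coded parameters."""
    if inv.amount > 10_000:
        return "escalate_to_manager"
    if not inv.supplier_known:
        return "manual_review"
    return "auto_approve"


def agent_route(inv: Invoice) -> dict:
    """Agent-style automation (simulated): interpret the situation, weigh options,
    and prepare an action together with a rationale, instead of only picking from
    a fixed list. A real agent would perform this reasoning via a language model."""
    options = ["auto_approve", "manual_review", "escalate_to_manager", "request_missing_data"]
    # Simulated interpretation of the free-text context; in practice an LLM call.
    needs_clarification = "missing" in inv.notes.lower() or "unclear" in inv.notes.lower()
    if needs_clarification:
        chosen = "request_missing_data"
        rationale = "Notes suggest information is missing; ask the supplier first."
    else:
        chosen = rule_based_route(inv)  # fall back to the standard policy when context is clear
        rationale = "Context is clear; applying the standard approval policy."
    return {"proposed_action": chosen, "rationale": rationale, "options_considered": options}


if __name__ == "__main__":
    inv = Invoice(amount=4200.0, supplier_known=True, notes="Delivery note missing for item 3")
    print(rule_based_route(inv))  # -> auto_approve (the rigid rule misses the context)
    print(agent_route(inv))       # -> prepares a clarification step with a rationale
```

The point of the sketch is the shape of the output: the rule engine simply returns one action from a predefined list, while the agent-style function weighs several options and prepares an action with a rationale that a human can review.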
2. Stricter regulations and fear of dependencies: Digital sovereignty becomes business-critical
From cloud services to office suites, Europe’s desire for local alternatives is growing – not only in the public sector but also across the private economy. In today’s tense geopolitical climate, dependencies can quickly become problematic. Last year, for example, access to Microsoft services was restricted for senior officials of the International Criminal Court following disagreements with the US government; the court now intends to rely on a European alternative.
At the same time, the number of European cloud offerings that provide alternatives to the major hyperscalers and ensure compliance with European regulations is growing. In 2026, more and more SaaS solutions will also be available in a “hosted in Europe” variant to meet the rising demand for independence.
3. A wallet for everyone: Digital identities enable more efficient administration
All EU Member States must provide their citizens with a free digital wallet for electronic identities by the end of 2026, as stipulated in the revised eIDAS Regulation. The European Digital Identity Wallet (EUDI Wallet), combined with its connection to the German BundID, offers significant potential to simplify authentication when dealing with public authorities and to eliminate media breaks, i.e. switches between digital and paper-based channels. Companies in regulated sectors stand to benefit as well – for example through simplified Know-Your-Customer (KYC) processes in the financial industry. To realise these benefits, both public authorities and businesses must prepare for the new EUDI infrastructure and put the necessary interfaces in place.
4. Binding rules for AI: The AI Act continues to be rolled out
The EU’s AI Act, which establishes the first comprehensive, Europe-wide regulatory framework for the use of artificial intelligence, entered into force in August 2024. Its aim is to promote trustworthy, human-centric AI while minimising risks such as discrimination or a lack of transparency. The legislation follows a risk-based approach with four categories: “Unacceptable Risk” (e.g. social scoring or real-time biometric identification in public spaces), “High Risk” (e.g. AI in critical infrastructure, education or the justice system), “Limited Risk” (subject to transparency obligations, e.g. the labelling of deepfakes) and “Minimal Risk”. The rules apply in stages, with the bans on unacceptable-risk systems already in force since February 2025.
In August 2026, the 24-month transition period ends and the core obligations for high-risk systems begin to apply. This phase introduces additional requirements for risk management, data quality, traceability and human oversight. Importantly, these obligations apply not only to the providers of AI models and systems but also to the companies deploying them. It is therefore essential to start preparing for the new requirements now – from documentation and compliance through to technical adjustments.
Contact details
Stefan Olschewski - Press contact
stefan.olschewski@d-velop.de