Jan Fietz

Vendor-Agnostic Data Acquisition Platform for Wind Turbines

15 years of responsibility for a SCADA middleware with over 8,000 connected assets at 30+ customers across Europe and the US — from protocol layer to cloud-native deployment.

Client
Quantec Networks, Quantec Systems, Scada International, Opoura GmbH
Role
Developer → Senior → Team lead · Technical advisory · Third-level support
Period
2011 - 2026
C++ · Python · SCADA · OPC UA · IEC 61400 · Modbus · Kubernetes · Azure Event Hub · RabbitMQ · Renewable energy

Starting point

In the renewables market, asset owners and direct marketers increasingly run mixed portfolios: wind farms from different manufacturers, complemented by solar, storage and controllable loads. Every plant generation and every manufacturer speaks its own language — proprietary, semi-standardised, or layered over industrial protocols. At the same time, requirements for availability, scaling, cloud operation and traceable data flows keep growing.

Within this environment I co-developed, modernised and ultimately led a data acquisition middleware over 15 years. The software is in use at over 30 customers across Europe and the US today and acquires data from more than 8,000 assets and devices — including wind turbines from all major European manufacturers as well as their plant controllers.

The challenge

The software sits at the centre of a heterogeneous ecosystem:

  • Southbound it must connect to a wide range of plants and controllers — using proprietary vendor protocols as well as industrial standards (Modbus, OPC XML-DA, OPC UA, IEC 60870-5-101/104, ICCP/TASE.2).
  • Northbound customers expect connectivity to their own SCADA solutions, databases, marketing systems and cloud platforms — via AMQP, Azure Event Hub, OPC UA, FTP, HTTP APIs.
  • Operationally the system must be available 24/7, run in widely varying infrastructures (from the customer’s data centre to the public cloud) and stay maintainable so that hundreds of plant connections can be operated reliably for years.

The complexity isn’t in any single problem — it’s in mastering all of these axes cleanly at the same time.
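To give a flavour of the southbound side: at the lowest level, a protocol like Modbus TCP is just a small binary frame on the wire. The sketch below builds a "Read Holding Registers" request using only the Python standard library — the register address and count are illustrative, not any vendor's actual register map:

```python
import struct

def modbus_read_holding_registers(transaction_id: int, unit_id: int,
                                  start_addr: int, quantity: int) -> bytes:
    """Build a Modbus TCP 'Read Holding Registers' (function 0x03) request.

    Frame = MBAP header (transaction id, protocol id 0, remaining byte
    count, unit id) followed by the PDU (function code, start, quantity).
    """
    pdu = struct.pack(">BHH", 0x03, start_addr, quantity)
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu

# Example: ask unit 1 for two registers starting at address 0
frame = modbus_read_holding_registers(1, 1, 0, 2)
# -> b'\x00\x01\x00\x00\x00\x06\x01\x03\x00\x00\x00\x02'
```

Twelve bytes of request — and yet every vendor layers its own semantics on top, which is where the real integration work begins.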

My responsibility

Over the years I grew with the software through every role: first as a developer, then as a senior engineer with architectural responsibility, finally as team lead responsible for building and growing the development team. In parallel, I was involved from the start in technical advisory for sales and customers, and took on third-level support for demanding integration cases.

Concretely this included:

  • Implementation and reverse engineering of vendor protocols for wind turbines from all major European manufacturers (including Vestas, Enercon, Siemens, Gamesa, Senvion, NEG Micon, Nordex, GE Wind Energy) and for plant controllers from various controller vendors (Mita, Bachmann, Phoenix Contact and others).
  • Building a modern software engineering culture in the team: establishing unit tests, traceable deployment processes and continuous refactoring.
  • Connecting customer systems and integrating the middleware into other products in the company portfolio.
  • Building and leading the development team, including project management.

Architectural decisions with business impact

The biggest levers weren’t individual features — they were strategic architectural decisions that kept the product viable over many years.

Unified data model based on IEC 61400

Instead of pushing every vendor’s data schema all the way through to the northbound interfaces, I led the introduction of a unified data model based on IEC 61400. The effect was twofold: customers could build their SCADA solutions and marketing systems vendor-independently, and the in-house monitoring system used the same foundation. Advisory effort for customers with mixed portfolios dropped noticeably, and the data gateway gained visible strategic weight within the product portfolio.
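The core idea can be sketched in a few lines: each vendor's raw tags are translated into one shared signal vocabulary before anything flows northbound. All names below — vendor tags, the mapping table, the unified signal name — are hypothetical illustrations, not the actual product schema or IEC 61400 tag names:

```python
from dataclasses import dataclass

@dataclass
class DataPoint:
    """One normalised measurement in the unified data model."""
    asset_id: str
    signal: str      # unified signal name, shared across all vendors
    value: float
    timestamp: str   # ISO 8601

# Hypothetical per-vendor mapping: raw tag name -> unified signal name
VENDOR_MAP = {
    "vendorA": {"GenPwr_kW": "active_power_kw"},
    "vendorB": {"P_act": "active_power_kw"},
}

def normalise(vendor: str, asset_id: str, tag: str,
              value: float, ts: str) -> DataPoint:
    """Translate a vendor-specific tag into the unified data model."""
    return DataPoint(asset_id, VENDOR_MAP[vendor][tag], value, ts)

a = normalise("vendorA", "wtg-01", "GenPwr_kW", 1520.0, "2024-01-01T12:00:00Z")
b = normalise("vendorB", "wtg-02", "P_act", 1480.0, "2024-01-01T12:00:00Z")
# Both points now carry the same signal name, regardless of vendor
```

With this shape, a northbound consumer only ever sees one vocabulary — which is exactly why mixed-portfolio customers no longer needed per-vendor integration work.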

Python embedding for customer-specific logic

On my initiative Python was integrated as an extension language into the C++ middleware. This made it possible to implement custom logic for individual wind farms or customers without touching the core product. Adjustments could then be made not only by the development team but also by support — with significantly shorter delivery times and freed-up engineering capacity for product work.
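The pattern is easiest to illustrate from the Python side: the core exposes a stable hook, and a per-customer script implements it. The hook name and signature below are a hypothetical sketch, not the product's actual plugin API:

```python
# A hypothetical per-customer script, loaded by the C++ core at runtime.
# The core calls transform() for every batch of data points and forwards
# the result northbound — farm-specific logic never touches the core.

def transform(points):
    """Customer-specific post-processing: here, capping reported power."""
    CAP_KW = 2000.0  # illustrative contractual limit for this wind farm
    out = []
    for p in points:
        out.append({**p, "value": min(p["value"], CAP_KW)})
    return out

result = transform([{"signal": "active_power_kw", "value": 2300.0}])
```

Because such a script is plain Python with no build step, a support engineer can adjust it in the field — which is precisely where the shortened delivery times came from.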

Cloud-native and cloud-agnostic deployment

The transition from bare-metal processes through virtualisation to container-based cloud operation was a deliberate strategic move by the company, which I helped implement. Today the software runs in Docker containers on Kubernetes — at Hetzner, OVH, Azure, Google Cloud or on-premises at the customer. Deployments take less than an hour, new plants can be onboarded in minutes, and customers with sovereignty or compliance requirements can be served without vendor lock-in.

Modernising the northbound interface

On my initiative Azure Event Hub and RabbitMQ were integrated as modern messaging backbones. This made the middleware compatible with the cloud and streaming architectures customers had moved their data platforms to.
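The northbound shape can be sketched as follows: each unified data point is serialised into a small message envelope and handed to a broker. The envelope fields are hypothetical; the publish step (shown here with the pika client for RabbitMQ) requires a running broker and is therefore only sketched, not executed:

```python
import json

def build_envelope(asset_id: str, signal: str, value: float, ts: str) -> bytes:
    """Serialise one unified data point as a JSON message body."""
    return json.dumps({
        "asset": asset_id,
        "signal": signal,
        "value": value,
        "ts": ts,
    }).encode("utf-8")

def publish(body: bytes) -> None:
    """Publish to RabbitMQ — needs a reachable broker and the pika package."""
    import pika  # third-party AMQP client
    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    ch = conn.channel()
    ch.basic_publish(exchange="telemetry", routing_key="wind.data", body=body)
    conn.close()

body = build_envelope("wtg-01", "active_power_kw", 1520.0,
                      "2024-01-01T12:00:00Z")
```

Swapping the `publish` implementation for an Azure Event Hub producer leaves the envelope untouched — the decoupling that made both backbones interchangeable.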

Outcome

  • Scale: Over 8,000 connected assets and devices at 30+ customers across Europe and the US today.
  • Vendor agnosticism: Connectivity to virtually all relevant wind turbine manufacturers in the European market, via proprietary and standardised protocols.
  • Time-to-market: Customer adjustments via Python can be implemented by both engineering and support — instead of through core-product releases.
  • Operations: Sub-hour deployments, plant onboarding in minutes, semi-automated processes with clear traceability.
  • Strategic: The data gateway has grown from a technical component into a platform that other products in the company build on.

Technologies used

  • Languages: C++, Python
  • Southbound protocols: Modbus, OPC XML-DA, OPC UA, IEC 60870-5-101/104, ICCP/TASE.2, various proprietary vendor protocols
  • Northbound protocols and interfaces: AMQP, RabbitMQ, Azure Event Hub, OPC UA, FTP, HTTP APIs, database connectors
  • Standards: IEC 61400 (unified data model)
  • Infrastructure: Docker, Kubernetes (Hetzner, OVH, Azure, Google Cloud, on-premises)
  • Engineering practices: Unit testing, refactoring, traceable deployment processes

What this experience transfers to

This case study illustrates the profile I offer clients: the ability to keep long-lived industrial software alive in a heterogeneous, regulated environment — and not just maintain it, but evolve it strategically. From protocol layer to cloud architecture, from code to team leadership.