Senior Cloud Engineer - Unit: BTO | Data Platform

TenneT


No maximum hourly rate

Gelderland

32 hours per week

ICT Information Provision

30 June 2025

2 July 2025

THE ASSIGNMENT DESCRIPTION
Freelance (ZZP) or secondment

The Digital & Data organization at TenneT is focused on driving innovation and leveraging digital technology to enhance data-driven decision-making across the company. As part of this mission, the organization has developed the TenneT Data Cloud (TDC), a modern cloud-based data platform built on Azure. This platform supports a wide range of data integration, processing, and analytics tasks, serving as the foundation for data initiatives across TenneT. Within this structure, DevOps teams play a central role, working closely with stakeholders to deliver high-quality, scalable, and reliable data solutions that meet the evolving needs of the business.
Function:
As a Cloud Data Platform Engineer in TenneT’s Digital & Data organization, you will be a crucial member of a DevOps team responsible for designing, implementing, and maintaining the TenneT Data Cloud (TDC) on Azure. Your role involves setting up and managing Azure services like Azure Data Factory, Azure Databricks, and Microsoft Fabric, ensuring seamless integration with various data sources and automating workflows to enhance efficiency. Additionally, you’ll monitor and optimize the performance of the TDC to uphold high standards of availability and reliability, staying current with the latest Azure technologies and best practices to continuously improve the platform.
Tasks and responsibilities:
• Design, develop, and implement scalable data solutions using Microsoft Azure services.
• Manage containerized applications with Azure Kubernetes Service (AKS).
• Build and maintain CI/CD pipelines to support efficient, automated deployment and testing of data engineering workflows.
• Develop and maintain data processing solutions using Python, Java, or other relevant programming languages.
• Ensure effective data storage, ingestion, transformation, and analytics leveraging Azure data services.
• Design, develop, and integrate APIs to facilitate seamless data exchange with external systems.
• Implement automated workflows and system integrations to streamline operations.
• Use Infrastructure as Code (IaC) tools to provision and manage cloud infrastructure on Azure.
• Design, build, test, deploy, and maintain applications with a focus on performance, fault tolerance, observability (logging and monitoring), and reliability.
• Write and maintain unit and integration tests to ensure code quality and reliability.
• Troubleshoot and resolve issues identified through testing or reported by users.
• Continuously identify opportunities to improve existing technical solutions and team practices.
• Actively participate in knowledge sharing, design discussions, and technical reviews within the team.
Profile:
• Bachelor’s in Computer Science, Engineering, or a related field (or equivalent practical experience).
• Extensive experience (min 7 years) with Microsoft Azure services, including but not limited to Azure Kubernetes Service (AKS), Azure Data Lake Storage, Azure Data Factory, and Azure Databricks (must-have).
• Proven track record of designing and deploying scalable, production-grade data pipelines and distributed data processing solutions.
• Strong proficiency in Databricks development, including notebook orchestration, Delta Lake, structured streaming, and performance optimization.
• Deep understanding of CI/CD practices and tools (e.g., Azure DevOps, GitHub Actions, Jenkins) for automating deployment and testing workflows.
• Advanced scripting and development skills in Python, Java, and SQL, with the ability to write clean, testable, and maintainable code.
• Experience provisioning and managing cloud infrastructure using Infrastructure as Code (IaC) tools such as Terraform or Bicep.
• Familiarity with building and integrating RESTful APIs for data access and interaction with external systems.
• Experience with automated workflows, event-driven architectures, and data integration pipelines.
• Solid understanding of data engineering principles including data modeling, ETL/ELT patterns, and data governance.
• Knowledge of big data technologies and frameworks such as Apache Spark, Kafka, and Hadoop.
• Strong analytical and problem-solving skills with the ability to debug and optimize complex systems in production.
• Excellent communication and interpersonal skills; ability to collaborate effectively in agile, cross-functional teams.
• High proficiency in English; Dutch is not mandatory.
Soft skills:
• Team player and communicative
• Proactive
• Open minded and flexible
• Ambitious and driven
• Involved and motivated
Conditions:
• At entry, TenneT performs a Pre-Employment Screening;
• The official duty station for this position is Arnhem MCE | 2 days per week in the office (team day on Thursday), the rest hybrid.
• One interview with a panel of 2 or 3 partners | Online via Teams.
Additional information:
• Suppliers must be aware of the laws and regulations regarding employment conditions and TenneT's Collective Labour Agreement. This assignment is placed in scale 8.
• We would like to receive the personal motivation of the candidate and CV in English or Dutch.

The Requirements
Bachelor’s in Computer Science, Engineering, or a related field (or equivalent practical experience).
Extensive experience (min 7 years) with Microsoft Azure services, including but not limited to Azure Kubernetes Service (AKS), Azure Data Lake Storage, Azure Data Factory, and Azure Databricks (must-have).
Proven track record of designing and deploying scalable, production-grade data pipelines and distributed data processing solutions.
Strong proficiency in Databricks development, including notebook orchestration, Delta Lake, structured streaming, and performance optimization.
Deep understanding of CI/CD practices and tools (e.g., Azure DevOps, GitHub Actions, Jenkins) for automating deployment and testing workflows.
Advanced scripting and development skills in Python, Java, and SQL, with the ability to write clean, testable, and maintainable code.
Experience provisioning and managing cloud infrastructure using Infrastructure as Code (IaC) tools such as Terraform or Bicep.
Familiarity with building and integrating RESTful APIs for data access and interaction with external systems.
Experience with automated workflows, event-driven architectures, and data integration pipelines.
Solid understanding of data engineering principles including data modeling, ETL/ELT patterns, and data governance.
Knowledge of big data technologies and frameworks such as Apache Spark, Kafka, and Hadoop.
Strong analytical and problem-solving skills with the ability to debug and optimize complex systems in production.
Excellent communication and interpersonal skills; ability to collaborate effectively in agile, cross-functional teams.
High proficiency in English; Dutch is not mandatory.
The Preferences

N/A

Interested in this assignment?
How our service works
1
RESPONSE WITHIN 1 WORKING DAY
  • We assess your CV to see whether there is a match.
  • We check whether you meet the requirements and preferences.
  • We use data to assess whether your desired rate is competitive.

Because the process runs through a tender, it is important that you stand a good chance of winning the assignment. If there is a match, we start the quotation process; if in doubt, we will let you know within 1 working day.

2
INTRODUCTION TO THE CLIENT

The procedure runs through a tender, so we make the first introduction on paper.

  • Together we prepare a quotation explaining why your profile matches the stated requirements and preferences.
  • If requested, we collect the necessary documents, such as references, diplomas, motivation letter, certificate of conduct (VOG), etc.
  • Based on data, we determine a competitive hourly rate for the quotation. You, of course, have the final say on the bidding rate.
3
GETTING STARTED
Freelance (ZZP)

We believe in doing business fairly and transparently.
If you start working through Bij Oranje, the following conditions apply:

  • We charge a 10% margin on your hourly rate for the duration of the assignment.
  • We pay your invoice within 21 days, so you don't have to wait for your money!
  • If you do your assignment well and as a result are offered a new assignment with the same client, you are completely free to take it! We do not apply a non-compete or non-solicitation clause.
Secondment

We believe in doing business fairly and transparently.
If you start working through Bij Oranje Detachering, the following conditions apply:

  • Together we complete a payroll tax declaration and sign a contract of engagement.
  • We charge a 15% margin on your hourly rate for the duration of the assignment. The remaining amount is paid out entirely as gross salary.
  • Within 21 days of receiving your signed timesheet, the net payment is transferred to your bank account. So you don't have to wait for your money!
  • If you do your assignment well and as a result are offered a new assignment with the same client, you are completely free to take it! We do not apply a non-compete or non-solicitation clause.
Respond now
The assignment closes 02-07-2025
You have 2 days left to respond.
Respond at least 1 day before this assignment's closing time.

Any motivation letter will follow at a later stage

Agreement: no intermediary / agency

To keep the hiring chain short and transparent, we choose to deal only directly with the freelancer and not with intermediary parties.