Freelance Data Engineer (Azure / Spark/Databricks) - 6 months rolling

Project Assessment

This listing gives a detailed look at an exciting cloud data platform project in the renewable energy sector, offering fully remote work and comprehensive technical requirements for experienced data engineers.

Job Title: Freelance Data Engineer (Azure / Spark/Databricks)

Location: Remote, Germany-based candidates preferred

Contract Type: Freelance

Duration: 6-month rolling contract

Start Date: ASAP

Industry: Renewable Energy (Cloud Data Platform Build)

Language: English (German is a plus)

________________________________________

About the Role

Our client, a fast-growing renewable energy company, is modernising its data landscape with a new cloud-native platform. They are looking for a Freelance Data Engineer to take a leading role in building ingestion pipelines, transformations, and integrations on Azure and Spark-based environments (Databricks will be the core engine, but hands-on Databricks experience is not strictly required).

This is a chance to shape a data platform from the ground up, enabling advanced analytics and future AI/ML initiatives.

________________________________________

Key Responsibilities

•    Design and implement cloud-native data pipelines (batch + streaming) using Spark and modern data tooling

•    Ingest and process data from multiple sources: APIs, IoT devices, ERP/CRM systems

•    Work with Delta Lake, data lakes, and data warehouses for scalable storage/processing

•    Ensure data quality, security, and compliance with GDPR and sector-specific regulations

•    Collaborate with Data Scientists and Analysts to enable advanced analytics and predictive models

•    Set up monitoring, CI/CD, and automation for reliable workflows

•    Document and share best practices for platform scalability

________________________________________

Requirements

•    Strong experience as a Data Engineer with hands-on cloud expertise (Azure preferred, AWS/GCP also welcome)

•    Proficiency in Spark / PySpark and SQL

•    Background in designing ETL/ELT pipelines for structured and unstructured data

•    Experience with Delta Lake or similar lakehouse concepts

•    Familiarity with streaming frameworks (Kafka, Event Hubs, Kinesis, or equivalent)

•    Working knowledge of CI/CD pipelines (Azure DevOps, GitHub Actions, Jenkins, etc.)

•    Solid grasp of data governance, security, and compliance frameworks

________________________________________

Preferred Skills

•    Exposure to Databricks (Lakehouse, Workflows, Unity Catalog, MLflow)

•    Previous work in the energy or utilities sector

•    Familiarity with containerization and orchestration (Docker, Kubernetes)

•    Experience with BI tools (Power BI, Tableau, Looker)

•    German language skills

________________________________________

What’s Offered

•    6-month rolling freelance contract

•    Fully remote with flexible working arrangements (Germany-based candidates preferred)

•    Opportunity to work on a greenfield cloud platform build in the renewable energy space

•    Collaboration with an international, cross-functional team of data experts

Microsoft Azure, Apache Spark, Databricks, Continuous Integration, Workflows, APIs, Artificial Intelligence, Amazon Web Services, Automation, Business Intelligence

Employment Type

Contracting

Posted on

22 September 2025

Offered by:

Freelancermap
