Senior Data Engineer (Remote)

Circana
Full-time
Remote
United Kingdom
Data Engineer

Introduction

We are seeking a skilled and motivated Senior Data Engineer to join a growing global team. In this role, you will be responsible for designing, building, and maintaining robust data pipelines and infrastructure on the Azure cloud platform. You will leverage your expertise in PySpark, Apache Spark, and Apache Airflow to process and orchestrate large-scale data workloads, ensuring data quality, efficiency, and scalability. If you have a passion for data engineering and a desire to make a significant impact, we encourage you to apply!

Job Responsibilities

ETL/ELT Pipeline Development:

  • Design, develop, and optimize efficient and scalable ETL/ELT pipelines using Python, PySpark, and Apache Airflow (see the illustrative sketch after this list).
  • Implement batch and real-time data processing solutions using Apache Spark.
  • Ensure data quality, governance, and security throughout the data lifecycle.
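
By way of illustration only, a minimal PySpark batch ETL job of the kind this role involves might look like the sketch below. The storage paths, table, and column names are hypothetical, not Circana's:

    # Minimal PySpark batch ETL sketch -- paths and column names are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders_daily_etl").getOrCreate()

    # Extract: read raw JSON landed in an Azure Data Lake container.
    raw = spark.read.json("abfss://landing@account.dfs.core.windows.net/orders/")

    # Transform: basic cleansing plus a derived partition column.
    cleaned = (
        raw.dropDuplicates(["order_id"])
           .filter(F.col("amount") > 0)
           .withColumn("order_date", F.to_date("created_at"))
    )

    # Load: write partitioned Parquet for downstream analytics.
    cleaned.write.mode("overwrite").partitionBy("order_date").parquet(
        "abfss://curated@account.dfs.core.windows.net/orders/"
    )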

Cloud Data Engineering:

  • Manage and optimize cloud infrastructure (Azure) for data processing workloads, with a focus on cost-effectiveness.
  • Implement and maintain CI/CD pipelines for data workflows to ensure smooth and reliable deployments.

Big Data & Analytics:

  • Develop and optimize large-scale data processing pipelines using Apache Spark and PySpark.
  • Implement data partitioning, caching, and performance tuning techniques to enhance Spark-based workloads (an illustrative sketch follows this list).
  • Work with diverse data formats (structured and unstructured) to support advanced analytics and machine learning initiatives.
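
As a purely illustrative sketch of the partitioning and caching techniques mentioned above (the dataset, path, and column names are hypothetical):

    # Illustrative Spark tuning patterns -- dataset and column names are hypothetical.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("tuning_demo").getOrCreate()

    events = spark.read.parquet("abfss://curated@account.dfs.core.windows.net/events/")

    # Repartition on the grouping key to reduce shuffle skew before wide operations.
    events = events.repartition(200, "customer_id")

    # Cache a DataFrame that several downstream aggregations will reuse.
    events.cache()

    daily = events.groupBy("event_date").count()
    by_customer = events.groupBy("customer_id").count()

    daily.show()
    by_customer.show()

    # Release the cached blocks once the reuse window is over.
    events.unpersist()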

Workflow Orchestration (Airflow):

  • Design and maintain DAGs (Directed Acyclic Graphs) in Apache Airflow to automate complex data workflows (a minimal DAG sketch follows this list).
  • Monitor, troubleshoot, and optimize job execution and dependencies within Airflow.
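
A minimal Airflow DAG of the kind described above might be sketched as follows, assuming Airflow 2.4+ (where the schedule argument replaces schedule_interval); the dag_id, schedule, and task callables are hypothetical:

    # Minimal Airflow DAG sketch -- dag_id, schedule, and callables are hypothetical.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pull source data")

    def transform():
        print("run PySpark job")

    with DAG(
        dag_id="orders_daily",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)

        # Task dependencies form the directed acyclic graph.
        extract_task >> transform_task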

Team Leadership & Collaboration:

  • Provide technical guidance and mentorship to a team of data engineers in India.
  • Foster a collaborative environment and promote best practices for coding standards, version control, and documentation.

Desired Experience & Qualifications

  • Client-facing role, so strong communication and collaboration skills are vital.
  • Proven experience in data engineering, with hands-on expertise in Azure Data Services, PySpark, Apache Spark, and Apache Airflow.
  • Strong programming skills in Python and SQL, with the ability to write efficient and maintainable code.
  • Deep understanding of Spark internals, including RDDs, DataFrames, DAG execution, partitioning, and performance optimization techniques.
  • Experience with designing and managing Airflow DAGs, scheduling, and dependency management.
  • Knowledge of CI/CD pipelines, containerization technologies (Docker, Kubernetes), and DevOps principles applied to data workflows.
  • Excellent problem-solving skills and a proven ability to optimize large-scale data processing tasks.
  • Prior experience in leading teams and working in Agile/Scrum development environments.
  • A track record of working effectively with global remote teams.

Desirable:

  • Experience with data modelling and data warehousing concepts.
  • Familiarity with data visualization tools and techniques.
  • Knowledge of machine learning algorithms and frameworks.

Interested?