What you will deliver
- Be a key contributor to the design, development, and maintenance of our scalable data pipelines and platforms. You'll get hands-on experience with a modern data stack from day one.
- Work closely with senior engineers, product owners, and analysts to understand data requirements and translate them into technical solutions. You'll be an active participant in team discussions and planning.
- Apply and champion our best practices for data modelling, ETL/ELT processes, and data warehousing to ensure consistency and quality.
- Implement and maintain robust testing strategies for data pipelines, including unit, integration, and data quality checks, to guarantee the accuracy and completeness of our data.
Skills and Experience
Essential
- Demonstrated experience as a Data Engineer, with a track record of building and managing data pipelines in a production environment.
- Proficiency in Python and SQL.
- Hands-on experience building and deploying solutions on AWS (e.g., S3, Glue, Lambda, EMR).
- Direct experience working with a cloud data warehouse, preferably Snowflake.
- Proven experience with Airflow for orchestration and dbt for data transformation.
- Experience with a distributed processing framework like Spark.
- Proficiency with Git and CI/CD practices (e.g., GitHub Actions).
- Excellent communication skills and the ability to work effectively within a technical team.
- A strong problem-solving mindset and the ability to work independently on complex tasks.
Desirable
- Real-time data processing
- Advanced data governance and quality frameworks
- Data operations and observability
- CI/CD pipelines built specifically for data workloads, such as automated pipeline testing and deployment (e.g., GitHub Actions)
#LI-Hybrid
#LI-MD1
We work with Textio to make our job design and hiring process inclusive.
Permanent