Hi, I'm Andra, Director of Data at Airalo!
Our team works across the full data ecosystem, from collection to insights activation, ensuring that every piece of data drives meaningful action. We’re curious problem-solvers who love tackling challenges that haven’t been solved before and building tools and processes that scale impact across the company.
Airalo’s fully remote Data team is growing. You’ll turn numbers into decisions that shape the future of our business, collaborating with cross-functional teams to solve complex problems and influence how millions of travellers stay connected. This isn’t just dashboards - it’s using data to drive strategy, inform product and growth decisions, and create real impact. You’ll have access to best-in-class tools, the freedom to experiment, and a team ready to turn insights into action.
As the Data Engineering Manager, you will lead the foundational backend of our data organization. You will directly manage our current pod of Data Engineers (2 Senior Data Engineers and 1 Customer Data Platform Engineer) and shape the hiring roadmap as the function grows - including scoping future specialized roles such as Machine Learning Engineers.
Partnering closely with the Data Director, you will help us transition out of the reactive, ad-hoc phase and into a structured, highly scalable data ecosystem. You will own the architecture, ingestion, and orchestration that powers the rest of the data team - ensuring that our Analytics Engineers, Data Analysts and other users have a rock-solid, high-quality foundation to build upon.
This role goes beyond building a data platform. You'll be the connective tissue between Product & Engineering, MarTech, and our partner ecosystem - ensuring data is produced cleanly at the source, captured reliably, and delivered cohesively across the entire organization.
What You Will Do:
- Manage, mentor, and grow a high-performing team of Senior Data Engineers and CDP Engineers.
- Drive hiring for the Data Engineering function as it grows, including scoping future specialized roles such as Machine Learning Engineers.
- Foster a culture of engineering excellence, continuous learning, and cross-pollination of knowledge.
- Own the technical roadmap for Airalo’s data infrastructure (GCP, BigQuery), orchestration (Dagster, Airflow), and ingestion (Fivetran, custom APIs).
- Drive data-platform architecture decisions, turning ambiguous business problems into scalable, production-grade technical designs.
- Bridge the data platform with MarTech and third-party ecosystems (PSPs, MNOs, CDPs, attribution platforms), ensuring customer events, campaign data, and partner integrations flow cohesively in both directions.
- Partner with Software Engineering to embed data quality at the source - implementing data contracts, co-owning schema decisions, and driving the rollout of a data catalogue across the organization.
- Establish the foundations for real-time data capabilities as the business matures beyond batch processing.
- Design systems that prioritize data quality, privacy, and governance standards across all data initiatives.
- Transition the team's workflow from reactive problem-solving to structured, agile delivery.
- Oversee the maintenance and optimization of high-performance data pipelines, implementing CI/CD automation, observability frameworks, and strict data quality gates.
- Roll up your sleeves when necessary to assist with complex code reviews, Python/Scala development, or unblocking the team on difficult architectural challenges.
- Act as a strategic partner to the Analytics Engineering Manager and Data Director to build the backend requirements necessary to achieve our company-wide goal of 80% self-serve analytics.
Must-haves:
- 7+ years of professional experience as a Data/Software Engineer, with at least 2 years directly managing and scaling data engineering teams.
- You thrive in low-maturity or greenfield data environments. You're comfortable navigating ambiguity and enjoy laying down paved roads and engineering standards where none existed before.
- Deep, hands-on background with major cloud platforms (GCP preferred) and cloud-native data warehouses (BigQuery preferred, or Snowflake/Redshift).
- Strong experience with orchestration tools (Airflow, Dagster), ELT pipelines (Fivetran, dbt), and distributed data processing frameworks (Apache Spark, Flink).
- Hands-on experience using AI tools to accelerate engineering workflows - code generation, code review, pipeline debugging, or documentation.
- Strong coding experience in Python (and/or Scala) and advanced SQL across relational and non-relational databases.
- Experience implementing CI/CD, Infrastructure as Code, and observability/monitoring for data pipelines.
- Bachelor's degree in Computer Science, Engineering, Statistics, Information Systems, or a related quantitative field.
Nice-to-have:
- Experience implementing data contracts, data catalogues (Atlan, Amundsen, DataHub), or federated governance models.
- Experience with Customer Data Platforms (Segment, mParticle, or similar), MarTech data integration, and real-time event processing.
- Experience in marketplace, B2C, or high-volume transactional businesses.
- Previous work in globally distributed data environments (multi-currency, multi-region, multi-language).
- Experience building or contributing to experimentation platform infrastructure (A/B testing pipelines, feature flag data, experiment analysis frameworks).
- Exposure to Machine Learning infrastructure - not necessarily building models, but scoping teams, tooling, and pipelines that support ML workloads.