
Data Engineer III – Databricks & Python

JPMorganChase
Full-time
On-site
Glasgow, United Kingdom
Description

Be part of a dynamic team where your distinctive skills will contribute to a winning culture and team.

As a Data Engineer III at JPMorganChase within the External Regulatory Financial Control (ERFC) Technology team, you will play a crucial role in designing, developing, and maintaining scalable data pipeline solutions using Databricks and Python/PySpark on AWS. You will collaborate with cross-functional teams to deliver high-quality data pipelines that support our business objectives.


Job responsibilities

  • Design, develop, and maintain robust data pipelines using Python and PySpark on the Databricks platform on AWS (a minimal sketch follows this list)
  • Process and transform large-scale financial datasets, implementing big data processing techniques to produce aggregated financial data for analytics and reporting
  • Optimize complex queries and data processing workflows to ensure efficient performance at scale
  • Analyze aggregated data outputs to identify data quality issues, anomalies, and processing bottlenecks, implementing corrective solutions
  • Participate in the full Software Development Life Cycle (SDLC), including requirements gathering, design, development, testing, deployment, and maintenance
  • Implement data quality checks, monitoring, and alerting mechanisms to ensure data accuracy and pipeline reliability
  • Work with our partners, Product Owners, and end users to support their business use cases
  • Act in both a Production Support and SRE capacity as part of the Data Engineer role
  • Utilise AI tools (e.g. Copilot, Claude Code) to quickly build and test new data pipelines
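To picture the pipeline work above concretely, here is a minimal, illustrative sketch only, not JPMorganChase code: it assumes a hypothetical trades dataset with trade_id, business_date, desk, and notional columns and hypothetical S3 paths, and shows a PySpark aggregation guarded by a simple fail-fast data-quality check of the kind the responsibilities describe.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("erfc-aggregation-sketch").getOrCreate()

# Hypothetical input: raw trade records landed in S3 as Parquet.
trades = spark.read.parquet("s3://example-bucket/raw/trades/")

# Fail-fast data-quality gate: abort if the key column contains nulls.
missing_keys = trades.filter(F.col("trade_id").isNull()).count()
if missing_keys > 0:
    raise ValueError(f"{missing_keys} rows are missing trade_id; aborting run")

# Aggregate notionals per business date and desk for downstream reporting.
daily_totals = (
    trades.groupBy("business_date", "desk")
    .agg(
        F.sum("notional").alias("total_notional"),
        F.count("*").alias("trade_count"),
    )
)

# Hypothetical output location, partitioned for efficient date-range reads.
(
    daily_totals.write.mode("overwrite")
    .partitionBy("business_date")
    .parquet("s3://example-bucket/curated/daily_totals/")
)
```

In a production pipeline, checks like this would typically feed the monitoring and alerting mechanisms mentioned above rather than raising inline.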


Required qualifications, capabilities, and skills

  • Strong hands-on experience in data engineering or related roles
  • Strong proficiency in Python and PySpark for large-scale data processing
  • Demonstrated experience with Databricks platform and Apache Spark ecosystem
  • Proven track record of building and optimizing data pipelines for big data workloads
  • Strong SQL skills with experience in query optimization and performance tuning (a tuning sketch follows this list)
  • Experience with AWS cloud services (S3, ECS, SNS/SQS, Lambda, etc.)
  • Strong analytical skills with ability to investigate data issues, identify root causes, and implement solutions
  • Experience with the complete SDLC, CI/CD tooling such as Jules/Jenkins, Spinnaker, and Sonar, and Agile methodologies
  • Bachelor's degree in Computer Science, Engineering, Mathematics, or related technical field
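To illustrate the query-tuning expectation above, the following is a minimal PySpark sketch under assumed names (the trades and desks tables, desk_id join key, and S3 paths are all hypothetical). It shows two common habits on Spark/Databricks: filtering before a join so the predicate can be pushed down, and broadcasting a small dimension table to avoid a shuffle, then checking the physical plan with explain().

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("query-tuning-sketch").getOrCreate()

# Hypothetical fact table and small reference table.
trades = spark.read.parquet("s3://example-bucket/curated/trades/")
desks = spark.read.parquet("s3://example-bucket/reference/desks/")

# Filter early: lets Spark push the predicate down and prune partitions
# before the join instead of shuffling the full history.
recent = trades.filter(F.col("business_date") >= "2024-01-01")

# Broadcast the small dimension table to avoid a shuffle-heavy sort-merge join.
enriched = recent.join(F.broadcast(desks), on="desk_id", how="left")

# Inspect the physical plan to confirm the pushed filter and broadcast join.
enriched.explain(mode="formatted")
```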


Preferred qualifications, capabilities, and skills

  • Experience working with financial data and understanding of data aggregation techniques
  • Experience with data orchestration tools (Airflow, Step Functions, etc.)
  • Understanding of financial services industry and regulatory requirements
  • Databricks or AWS certifications
  • Experience with automated testing frameworks (e.g. Playwright, Cucumber, Gherkin)
  • Experience with data formats such as Parquet, JSON, CSV, and Avro, and with Delta Lake