storage, data pipelines to ingest and transform data, and querying & reporting of analytical data. You've worked with technologies such as Python, Spark, SQL, PySpark, Power BI etc. You're a problem-solver, pragmatically exploring options and finding effective solutions. An understanding of how to design and build well-structured …
craft as an engineer. Work closely with product owners to shape and deliver features to customers. About you: Expertise and experience with Apache Spark, PySpark, and Python-based pipeline jobs. Solid Data Lake/Data Warehouse principles, techniques and technologies - Star Schema, SQL (AWS EMR, Apache Iceberg, Parquet). Strong …
Coalville, Leicestershire, East Midlands, United Kingdom Hybrid / WFH Options
Ibstock PLC
Knowledge, Skills and Experience: Essential Strong expertise in Databricks and Apache Spark for data engineering and analytics. Proficient in SQL and Python/PySpark for data transformation and analysis. Experience in data lakehouse development and Delta Lake optimisation. Experience with ETL/ELT processes for integrating diverse data …
stack. Excellent communication and stakeholder management skills. Nice to have: Familiarity with the AWS environment (S3, Athena, Redshift). Experience with Python (Airflow, dbt, PySpark, Pandas). Experience with Snowflake. Experience with ThoughtSpot. Familiarity with the command line. Some understanding of Kafka. Master's degree or equivalent in a …
and architecture Maintain comprehensive technical documentation for data systems, processes, and tools What we're looking for Strong proficiency in SQL/Python/PySpark and/or other languages relevant to data processing Experience with Infrastructure as Code, e.g. Terraform, for managing cloud infrastructure Experience designing, implementing, and …
and integration. Microsoft Fabric Expertise: Analytics, DAX, Power BI Data Engineering & Integration: SQL, Python, CI/CD pipelines, ingestion, integration, and automation Big Data Tools: PySpark, Apache Spark, Data Warehousing, Data Factory, Synapse, Databricks, and Data Lake DevOps & Version Control: CI/CD pipelines, Terraform, GitHub, RBAC (Role-Based Access Control) …
working as a Java or Python Developer > Solid foundation in data engineering (data pipelines, modern data ecosystem) > Modern lakehouse architectures & technologies (Dremio, Snowflake, Iceberg, PySpark, Glue) > Strong understanding of AWS What's on offer? > Flexible hybrid working > Up to £135k base + bonus If this may be of interest …
City of London, London, United Kingdom Hybrid / WFH Options
dcoded
security across Databricks environments What We're Looking For: Proven experience in data architecture, analytics, and engineering Expertise in Databricks, Delta Lake, Spark, and PySpark Strong programming skills in Python, Scala, or SQL Hands-on experience with Azure, AWS, or GCP Knowledge of data governance and security best practices …
PySpark. Who We're Looking For Essential Experience: Hands-on Data Engineer background progressing into a Data Solution Architect role. Expertise in SQL, Python, PySpark, and data pipeline development. Experience with data modelling, warehousing, and lakehouse architectures. Cloud expertise: Azure (50%), on-prem (10%), and other cloud platforms. Ability …
Birmingham, West Midlands, West Midlands (County), United Kingdom Hybrid / WFH Options
Investigo
Azure data technologies. Expertise in Azure Data Factory, Azure Databricks, Apache Spark. Deep understanding of Kimball dimensional modelling and data warehousing. Proficiency in Python, PySpark, and SQL. Hands-on experience with Azure Fabric, ADLS2, and Power BI (for data visualisation). Strong understanding of SDLC, DevOps, and cloud security …
knowledge of Python and SQL. Familiarity with pipeline orchestration tools (e.g., Airflow, Dagster). Advanced expertise in writing scalable, distributed Python applications (e.g., Ray, PySpark, Dask). A proven track record in shaping and architecting scalable data solutions. Experience building and optimizing big data pipelines, architectures, and datasets. Ability …
City of London, London, United Kingdom Hybrid / WFH Options
Ikhoi Recruitment
the Canary Wharf office, 3 days WFH, pension, life assurance, season ticket loan and lots more. You must have experience with Python/PySpark, Azure and managing a small team of Mid-level/Senior Data Engineers. As part of the Technology Hub within the client, Data Engineer …
team culture that thrives on innovation. Your background: Proven experience leading high-performing data engineering teams in a fast-paced environment. Expertise in Python, PySpark, SQL, Synapse, Databricks, and Azure Data Factory (ADF). Strong knowledge of CI/CD pipelines, Git, Jenkins, Docker, and test automation (pytest). …
are familiar with, and eager to explore, a diverse range of technologies including programming languages like Python and SQL; data processing libraries such as PySpark, Tidyverse, SparkR, and Pandas; platforms like Databricks and Posit; engineering tools including GitHub, AWS, Terraform, and dbt; and data visualization tools such as Power …
London Colney, Hertfordshire, United Kingdom Hybrid / WFH Options
InPost Ltd
programming, reinforcement learning, time series, geospatial analysis, etc.). Proficiency in Python, as well as ML and data analysis libraries: data transformation (pandas, polars, pyspark), gradient boosting & deep learning frameworks, scikit-learn, etc. Knowledge and experience in relational databases, cloud solutions (e.g. Databricks, Azure, GCP, AWS, Snowflake). Experience …
Sunderland, Tyne and Wear, North East, United Kingdom Hybrid / WFH Options
Client Server
are a Data Engineer with strong experience of data models, data mining and segmentation techniques You have coding skills with Python and/or PySpark and SQL You have experience with SQL databases (e.g. Amazon Redshift, PostgreSQL) You have experience with data tooling (e.g. Airflow, dbt, AWS Kinesis) You …
Cambridge, Cambridgeshire, UK Hybrid / WFH Options
Cambridge Cognition
of broader market and clinical research landscape Engaged in ongoing professional development Technical Experience: Experience with Python and/or Databricks and Apache Spark (PySpark or Spark SQL) for development of data pipelines Proficiency with data processing and transformation tools and frameworks (ETL/ELT, Apache Kafka etc.) Solid …
Lake and Lakehouse architecture for efficient data management; experience with ETL processes and optimizing data pipelines for performance Be strong in Pandas, SQL, and PySpark Have a strong understanding of cloud networking principles, including Azure Virtual Networks, Private Endpoints, and secure connectivity strategies Have experience or knowledge with GDPR-compliant …
City of London, Greater London, UK Hybrid / WFH Options
Premier Group
data modelling Technical Skills Proficient in SQL Server and relational database management Experience with cloud platforms like Azure (Synapse, Data Lake) Programming in Python, PySpark, and T-SQL Additional Skills Familiarity with data analysis tools and workflows Strong communication and collaboration skills Experience in fast-paced, team-oriented environments …
West London, London, United Kingdom Hybrid / WFH Options
Young's Employment Services Ltd
understanding of data definition, observability and data quality best practices. Proficiency in development languages suitable for intermediate-level data engineers, such as: Python/PySpark: widely used for data manipulation, analysis, and scripting. SQL: essential for querying and managing relational databases. Exposure to data science concepts and techniques is …
training, inference, monitoring, and iteration. Strong understanding of ML/DL/LLM algorithms, model architectures, and training methodologies. Proficient in Python, SQL, Spark, PySpark, TensorFlow, or other analytical/model-building programming languages. Familiarity with tools and Large Language Models (LLMs). Ability to work both independently and …