Preferred). Experience: 3+ years of hands-on data engineering experience with GCP. Strong proficiency in SQL, Python, and/or Java/Scala for data processing. Practical experience with BigQuery, Cloud Dataflow, Cloud Dataproc, and Apache Beam. Experience with event-driven streaming platforms such as Apache Kafka or …
Google Cloud, Google Cloud Composer, BigQuery, Spark, Solr, Elasticsearch, Druid, PostgreSQL, ScyllaDB, Redis, Kafka, Flink, Docker, Kubernetes, Kibana, Jenkins, Prometheus, Grafana, Github, C++, Python, Scala, Compiler Explorer What Blis Can Offer: We want you to be well and thrive and we care about your growth as a person and in …
City of London, London, United Kingdom Hybrid / WFH Options
McCabe & Barton
not limited to banking, insurance, healthcare, media, retail, infrastructure and telco. The ideal candidate will have expertise in some of the following: Python, SQL, Scala, and Java for data engineering. Strong experience with big data tools (Apache Spark, Hadoop, Databricks, Dask) and cloud platforms (AWS, Azure, GCP). Proficient in …
asynchronous architecture Familiar with AWS, Unix/Linux, Git, SQL, and REST Bonus Points for Experience or interest in: Functional programming languages such as Scala, Haskell and Clojure Relational and NoSQL databases such as PostgreSQL and MongoDB DevOps such as Terraform, Fargate and Kubernetes Frontend development such as Node.js and …
least one of PostgreSQL, MSSQL, Google BigQuery, SparkSQL) Modelling & Statistical Analysis experience, ideally customer related Coding skills in at least one of Python, R, Scala, C, Java or JS Track record of using data manipulation and machine learning libraries in one or more programming languages. Keen interest in some of …
Need to See from You Qualifications Proven experience as a Data Engineer with a strong background in data pipelines. Proficiency in Python, Java, or Scala, and big data technologies (e.g., Hadoop, Spark, Kafka). Experience with Databricks, Azure AI Services, and cloud platforms (AWS, Google Cloud, Azure). Solid understanding …
infrastructure. We would like you to have Proven experience as a Data Engineer or Data Analyst with strong SQL experience (other programming languages - Python, Scala, Java - are beneficial). Confidence in building data virtualisation layers and data semantic layers. Strong knowledge of relational databases (MySQL, PostgreSQL), experience with data warehousing …
requirements. Desirable: A passion and proven background in picking up and adopting new technologies on the fly. Backend server experience using Kotlin. Exposure to Scala, or functional programming generally. Experience with highly concurrent, asynchronous backend technologies, such as Ktor, http4k, http4s, Play, RxJava, etc. Experience with DynamoDB or similar NoSQL …
foundation (e.g., microservices, automated testing, containerization). Strong experience with building and maintaining data pipelines and platforms. Proficiency in programming languages (e.g., Python, Java, Scala). General knowledge of data engineering tools (e.g., Databricks, Apache Spark, dbt, Airflow). Knowledge of semantic layer concepts and tools (e.g., LookML, Cube.js, dbt …
City of London, London, United Kingdom Hybrid / WFH Options
83zero Limited
wide range of AWS services with the ability to demonstrate working on large engagements Experience of AWS tools (e.g. Athena, Redshift, Glue, EMR) Java, Scala, Python, Spark, SQL Experience of developing enterprise-grade ETL/ELT data pipelines. Deep understanding of data manipulation/wrangling techniques Demonstrable knowledge of applying …
real-time data ingestion and storage. Ability to optimise and refactor existing data pipelines for greater performance and reliability. Programming skills in Python, Java, Scala, or a similar language. Proficiency in database technologies (SQL, NoSQL, time-series databases) and data modelling. Strong understanding of data pipeline orchestration tools (e.g., Apache …
Denver, Colorado, Norfolk, United Kingdom Hybrid / WFH Options
Guidant Global
technical field or the equivalent combination of education and experience. Hands-on industry experience in software design, development, and algorithmic problem-solving. Proficiency in Scala, along with experience in at least one additional programming language. Functional programming experience is a plus. A strong focus on writing high-quality, maintainable, and …
schemas and queries to meet business requirements. A passion and proven background in picking up and adopting new technologies on the fly. Exposure to Scala, or functional programming generally. Exposure to highly concurrent, asynchronous backend technologies, such as Ktor, Play, RxJava, etc. Exposure to DynamoDB or similar NoSQL databases, such …
and challenge. We have over 300 tech experts across our teams all using the latest tools and technologies including Docker, Kubernetes, AWS, Kafka, Java, Scala, Python, .Net Core, Node.js and MongoDB. There’s something for everyone. We’re a place of opportunity. You’ll have the tools and autonomy to …
of software development methodologies, standards, and coding best practices. Experience with various development languages including but not limited to: C#, Java, C++, SQL, Spark, Scala, Angular, HTML, CSS, XML. Experience with various database management packages including but not limited to: MSSQL, PostgreSQL, SQL Server, Oracle, SQLite, MongoDB. Experience with data …
Staines, Middlesex, United Kingdom Hybrid / WFH Options
Industrial and Financial Systems
data ingestion tools like Airbyte, Fivetran, etc. for diverse data sources. Expert in large-scale data processing with Spark or Dask. Strong in Python, Scala, C# or Java, cloud SDKs and APIs. AI/ML expertise for pipeline efficiency, familiar with TensorFlow, PyTorch, AutoML, Python/R, and MLOps (MLflow …
Milton Keynes, Buckinghamshire, United Kingdom Hybrid / WFH Options
Banco Santander SA
concepts in a comprehensible manner Skills across the following data competencies: SQL (AWS Athena/Hive/Snowflake) Hadoop/EMR/Spark/Scala Data structures (tables, views, stored procedures) Data Modelling - star/snowflake Schemas, efficient storage, normalisation Data Transformation DevOps - data pipelines Controls - selection and build Reference …
solutions, and containers. Strong experience designing and deploying production-grade data pipelines (batch and streaming) within a big data architecture using tools like Python, Scala, Spark, and SQL. Expertise in either AWS or Azure cloud platforms, with hands-on experience in key services such as: AWS: EMR, Glue, Redshift, Kinesis …