London, South East England, United Kingdom Hybrid/Remote Options
Yapily
analytical systems. API & Microservices Architecture: Comfortable working with REST APIs and microservices architectures. Real-time Stream Processing: Understanding of real-time stream processing frameworks (e.g., PubSub, Kafka, Flink, Spark Streaming). BI Tools & Visualisation Platforms: Experience supporting BI tools or visualisation platforms (e.g. Looker, Grafana, Power BI). Data Pipelines & APIs: Experience in building and maintaining both More ❯
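Real-time stream processing with Kafka is the most common requirement across these listings. As a rough illustration of what that work looks like day to day, here is a minimal Python consumer sketch using the confluent-kafka client; the broker address, consumer group, and topic name are all hypothetical:

```python
# Minimal sketch of a real-time stream consumer; broker, group id,
# and topic are hypothetical placeholders, not from any listing.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # hypothetical broker
    "group.id": "events-enrichment",        # hypothetical consumer group
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["payment-events"])      # hypothetical topic

try:
    while True:
        msg = consumer.poll(timeout=1.0)    # wait up to 1s for a record
        if msg is None:
            continue
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        # A real pipeline would enrich/validate here, then write the
        # result downstream (a warehouse, another topic, a feature store).
        print(msg.key(), msg.value())
finally:
    consumer.close()
```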
platforms (AWS, GCP, Azure) and container orchestration technologies (Kubernetes, Docker) at enterprise scale Proven track record leading and scaling data pipelines using technologies like Apache Kafka, Apache Spark, Apache Flink, or similar streaming frameworks Deep expertise in database technologies, including both SQL (PostgreSQL, MySQL) and NoSQL (MongoDB, Cassandra, Redis) systems with experience in data modeling and optimization Advanced experience More ❯
maintaining data pipelines. Proficiency in JVM-based languages (Java, Kotlin), ideally combined with Python and experience in Spring Boot Solid understanding of data engineering tools and frameworks like Spark, Flink, Kafka, dbt, Trino, and Airflow. Hands-on experience with cloud environments (AWS, GCP, or Azure), infrastructure-as-code practices, and ideally container orchestration with Kubernetes. Familiarity with SQL and More ❯
and managing large-scale data pipelines and machine learning models • Experience developing ETL processes, maintaining Spark pipelines, and productizing AI/ML models • Proficient in technologies like Kafka, Redis, Flink, TensorFlow, Triton, and AWS services • Skilled in Unix/Shell or Python scripting and scheduling tools like Airflow and Control-M • Strong experience with UI technologies (Redux, React.js, HTML5 More ❯
e.g., Hadoop, Spark). · Strong knowledge of data workflow solutions like Azure Data Factory, Apache NiFi, Apache Airflow, etc. · Good knowledge of stream and batch processing solutions like Apache Flink, Apache Kafka · Good knowledge of log management, monitoring, and analytics solutions like Splunk, Elastic Stack, New Relic, etc. Given that this is just a short snapshot of the More ❯
Luton, England, United Kingdom Hybrid/Remote Options
easyJet
CloudFormation. Understanding of ML development workflow and knowledge of when and how to use dedicated hardware. Significant experience with Apache Spark or any other distributed data programming frameworks (e.g. Flink, Hadoop, Beam) Familiarity with Databricks as a data and AI platform or the Lakehouse Architecture. Experience with data quality and/or data lineage frameworks like Great Expectations More ❯
in data engineering, data architecture, or a similar role, with at least 3 years in a lead capacity. Proficient in SQL, Python, and big data processing frameworks (e.g., Spark, Flink). Strong experience with cloud platforms (AWS, Azure, GCP) and related data services. Hands-on experience with data warehousing tools (e.g., Snowflake, Redshift, BigQuery), Databricks running on multiple cloud More ❯
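Spark (alongside SQL and Python) recurs in nearly every listing above. Here is a minimal PySpark batch-transformation sketch of the kind these roles describe; the S3 paths, column names, and aggregation are illustrative assumptions:

```python
# Minimal PySpark aggregation sketch; paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-purchase-rollup").getOrCreate()

# Read raw events (hypothetical location and schema).
events = spark.read.parquet("s3://example-bucket/events/")

# Filter to purchases and roll up revenue per day and country.
daily = (
    events
    .filter(F.col("event_type") == "purchase")
    .groupBy(F.to_date("event_ts").alias("day"), "country")
    .agg(
        F.count("*").alias("purchases"),
        F.sum("amount").alias("revenue"),
    )
)

# Write partitioned output for downstream BI tools (hypothetical path).
daily.write.mode("overwrite").partitionBy("day").parquet(
    "s3://example-bucket/rollups/daily_purchases/"
)
```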
Kinesis) Knowledge of IaC (Terraform, CloudFormation) and containerisation (Docker, Kubernetes) Nice to have: Experience with dbt, feature stores, or ML pipeline tooling Familiarity with Elasticsearch or real-time analytics (Flink, Materialize) Exposure to eCommerce, marketplace, or transactional environments More ❯
Strong experience working with SQL and databases/engines such as MySQL, PostgreSQL, SQL Server, Snowflake, Redshift, Presto, etc. Experience building ETL and stream processing pipelines using Kafka, Spark, Flink, Airflow/Prefect, etc. Familiarity with data science stack: e.g. Jupyter, Pandas, Scikit-learn, Dask, PyTorch, MLFlow, Kubeflow, etc. Strong experience with using AWS/Google Cloud Platform (S3S More ❯
Sheffield, South Yorkshire, England, United Kingdom Hybrid/Remote Options
DCS Recruitment
working with cloud platforms such as AWS, Azure, or GCP. Exposure to modern data tools such as Snowflake, Databricks, or BigQuery. Familiarity with streaming technologies (e.g., Kafka, Spark Streaming, Flink) is an advantage. Experience with orchestration and infrastructure tools such as Airflow, dbt, Prefect, CI/CD pipelines, and Terraform. What you get in return: Up to More ❯
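Orchestration tools, Airflow in particular, appear in several of these listings. A minimal sketch of an Airflow DAG wiring an extract step ahead of a transform step; the DAG id, schedule, and task bodies are hypothetical (and assume Airflow 2.4+ for the `schedule` argument):

```python
# Minimal Airflow DAG sketch; dag_id, schedule, and task logic are
# hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    ...  # pull data from a source system


def transform():
    ...  # clean and model the extracted data


with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    extract_task >> transform_task  # transform runs only after extract succeeds
```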
transformations for production data pipelines. Experience leveraging data modeling techniques and ability to articulate the trade-offs of different approaches. Experience with one or more data processing technologies (e.g. Flink, Spark, Polars, Dask, etc.) Experience with multiple data storage technologies (e.g. S3, RDBMS, NoSQL, Delta/Iceberg, Cassandra, ClickHouse, Kafka, etc.) and knowledge of their associated trade-offs. Experience More ❯
field, or equivalent experience. Bonus Points Experience working with healthcare data and integrating EHR, scheduling, or operational systems. Familiarity with real-time data processing frameworks (Kafka, Kinesis, Spark Streaming, Flink). Knowledge of data warehousing solutions like Snowflake or BigQuery. Hands-on experience with Databricks or similar data lakehouse platforms. Strong understanding of data privacy, compliance, and security in More ❯
and guide implementation teams • Deep understanding of Kafka internals, KRaft architecture, and Confluent components • Experience with Confluent Cloud, Stream Governance, Data Lineage, and RBAC • Expertise in stream processing (Apache Flink, Kafka Streams, ksqlDB) and event-driven architecture • Strong proficiency in Java, Python, or Scala • Proven ability to integrate Kafka with enterprise systems (databases, APIs, microservices) • Hands-on experience with More ❯
Azure and distributed systems. Preferred Skills Kubernetes & Helm: Deploying and managing containerized applications at scale with reliability and fault tolerance. Kafka (Confluent): Familiarity with event-driven architectures; experience with Flink or KSQL is a plus. Airflow: Experience configuring, maintaining, and optimizing DAGs. Energy or commodity trading: Understanding the data challenges and workflows in this sector. Trading domain knowledge: Awareness More ❯
Sheffield, South Yorkshire, England, United Kingdom Hybrid/Remote Options
Vivedia Ltd
pipelines, data modeling, and data warehousing. Experience with cloud platforms (AWS, Azure, GCP) and tools like Snowflake, Databricks, or BigQuery. Familiarity with streaming technologies (Kafka, Spark Streaming, Flink) is a plus. Tools & Frameworks: Airflow, dbt, Prefect, CI/CD pipelines, Terraform. Mindset: Curious, data-obsessed, and driven to create meaningful business impact. Soft Skills: Excellent communication and More ❯
similar language Experience with SQL and data modeling concepts Experience with cloud-based data warehousing solutions such as Redshift, BigQuery, or similar Experience with ETL tools such as Spark, Flink, Databricks, Snowflake, etc. Experience with messaging systems such as RabbitMQ, Kafka, etc. Knowledge of the underlying cloud infrastructure and how the various data pipeline components fit together Excellent problem More ❯
Hands-on experience with SQL, Data Pipelines, Data Orchestration and Integration Tools Experience in data platforms on premises/cloud using technologies such as: Hadoop, Kafka, Apache Spark, Apache Flink, object, relational and NoSQL data stores. Hands-on experience with big data application development and cloud data warehousing (e.g. Hadoop, Spark, Redshift, Snowflake, GCP BigQuery) Expertise in building data More ❯
contract definition, clean code, CI/CD, path to production Worked with AWS as a cloud platform Extensive hands-on experience with modern data technologies, ETL tools (e.g. Kafka, Flink, dbt, etc.), data storage (e.g. Snowflake, Redshift, etc.) and IaC (e.g. Terraform, CloudFormation) Software development experience with one or more languages (e.g. Python, Java, Scala, Go) Pragmatic approach More ❯
Team Collaboration: Collaborate within a Pod of 4+ data engineers, working towards common objectives in a consultative fashion with clients. Data Movement and Transformation: Use Apache NiFi and Apache Flink for data movement, streaming, and transformation services, ensuring efficient and reliable data workflows. 3+ years of experience in Data and Cloud Application Engineering. 2+ years of experience working with More ❯
City of London, England, United Kingdom Hybrid/Remote Options
Bondaval
or similar) from a good University highly desirable. Nice to Have: Familiarity with message brokers (Kafka, SQS/SNS, RabbitMQ). Knowledge of real-time streaming (Kafka Streams, Apache Flink, etc.). Exposure to big-data or machine-learning frameworks (TensorFlow, PyTorch, Hugging Face, LangChain). Experience with real-time streaming technologies (Kafka, Apache Storm). Understanding of infrastructure More ❯
with Internet data, including user growth and conversion data, and strong A/B testing and data analysis capabilities. Proficiency in big data tools and frameworks such as Spark, Flink, ClickHouse, and at least one SQL dialect for big data. Strong understanding of data modeling concepts, especially dimensional modeling and data warehousing theory. Strong business understanding with the More ❯