maintaining data pipelines. Proficiency in JVM-based languages (Java, Kotlin), ideally combined with Python, and experience with Spring Boot. Solid understanding of data engineering tools and frameworks, like Spark, Flink, Kafka, dbt, Trino, and Airflow. Hands-on experience with cloud environments (AWS, GCP, or Azure), infrastructure-as-code practices, and ideally container orchestration with Kubernetes. Familiarity with SQL and …
e.g., Hadoop, Spark). Strong knowledge of data workflow solutions like Azure Data Factory, Apache NiFi, Apache Airflow, etc. Good knowledge of stream and batch processing solutions like Apache Flink and Apache Kafka. Good knowledge of log management, monitoring, and analytics solutions like Splunk, Elastic Stack, New Relic, etc. Given that this is just a short snapshot of the …
in data engineering, data architecture, or a similar role, with at least 3 years in a lead capacity. Proficient in SQL, Python, and big data processing frameworks (e.g., Spark, Flink). Strong experience with cloud platforms (AWS, Azure, GCP) and related data services. Hands-on experience with data warehousing tools (e.g., Snowflake, Redshift, BigQuery), Databricks running on multiple cloud …
Luton, England, United Kingdom (Hybrid/Remote Options)
easyJet
CloudFormation. Understanding of ML development workflow and knowledge of when and how to use dedicated hardware. Significant experience with Apache Spark or any other distributed data processing frameworks (e.g. Flink, Hadoop, Beam). Familiarity with Databricks as a data and AI platform or with the Lakehouse architecture. Experience with data quality and/or data lineage frameworks like Great Expectations …
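For orientation on how the Spark and data-quality requirements above typically meet in practice, here is a minimal, hypothetical sketch of a Spark batch job in Java that runs a single completeness check; the local master, input path, and null-id rule are illustrative assumptions, not details from the posting.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class QualityCheckJob {
    public static void main(String[] args) {
        // Local mode is for illustration only; a real job would run on a cluster.
        SparkSession spark = SparkSession.builder()
                .appName("quality-check-demo")
                .master("local[*]")
                .getOrCreate();

        // Hypothetical input; substitute the actual dataset location.
        Dataset<Row> events = spark.read().json("events.json");

        // One simple completeness rule, standing in for a full
        // data-quality suite such as Great Expectations.
        long missingIds = events.filter("id IS NULL").count();
        System.out.println("rows with missing id: " + missingIds);

        spark.stop();
    }
}
```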
Kinesis). Knowledge of IaC (Terraform, CloudFormation) and containerisation (Docker, Kubernetes). Nice to have: Experience with dbt, feature stores, or ML pipeline tooling. Familiarity with Elasticsearch or real-time analytics (Flink, Materialize). Exposure to eCommerce, marketplace, or transactional environments …
Strong experience working with SQL and databases/engines such as MySQL, PostgreSQL, SQL Server, Snowflake, Redshift, Presto, etc. Experience building ETL and stream processing pipelines using Kafka, Spark, Flink, Airflow/Prefect, etc. Familiarity with the data science stack: e.g. Jupyter, Pandas, scikit-learn, Dask, PyTorch, MLflow, Kubeflow, etc. Strong experience with using AWS/Google Cloud Platform (S3 …
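Several of these listings pair Kafka with Flink for stream processing, so a minimal sketch may help readers gauge the expectation. It uses Flink's DataStream API with the Kafka source connector (Flink 1.14+ style); the broker address, topic, and consumer group are placeholders, and the trim/filter stage merely stands in for real transformation logic.

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class EventPipeline {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Consume raw events from a Kafka topic (broker and topic
        // names are placeholders for illustration only).
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("localhost:9092")
                .setTopics("events")
                .setGroupId("etl-demo")
                .setStartingOffsets(OffsetsInitializer.latest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        // A trivial transform standing in for real ETL logic.
        env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-events")
           .map(String::trim)
           .filter(line -> !line.isEmpty())
           .print();

        env.execute("minimal-etl-pipeline");
    }
}
```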
Azure and distributed systems. Preferred Skills Kubernetes & Helm: Deploying and managing containerized applications at scale with reliability and fault tolerance. Kafka (Confluent): Familiarity with event-driven architectures; experience with Flink or KSQL is a plus. Airflow: Experience configuring, maintaining, and optimizing DAGs. Energy or commodity trading: Understanding the data challenges and workflows in this sector. Trading domain knowledge: Awareness …
working with cloud platforms such as AWS, Azure, or GCP. Exposure to modern data tools such as Snowflake, Databricks, or BigQuery. Familiarity with streaming technologies (e.g., Kafka, Spark Streaming, Flink) is an advantage. Experience with orchestration and infrastructure tools such as Airflow, dbt, Prefect, CI/CD pipelines, and Terraform. What you get in return: Up to …
Sheffield, South Yorkshire, England, United Kingdom (Hybrid/Remote Options)
Vivedia Ltd
pipelines, data modeling, and data warehousing. Experience with cloud platforms (AWS, Azure, GCP) and tools like Snowflake, Databricks, or BigQuery. Familiarity with streaming technologies (Kafka, Spark Streaming, Flink) is a plus. Tools & Frameworks: Airflow, dbt, Prefect, CI/CD pipelines, Terraform. Mindset: Curious, data-obsessed, and driven to create meaningful business impact. Soft Skills: Excellent communication and …
Sheffield, South Yorkshire, England, United Kingdom (Hybrid/Remote Options)
DCS Recruitment
working with cloud platforms such as AWS, Azure, or GCP. Exposure to modern data tools such as Snowflake, Databricks, or BigQuery. Familiarity with streaming technologies (e.g., Kafka, Spark Streaming, Flink) is an advantage. Experience with orchestration and infrastructure tools such as Airflow, dbt, Prefect, CI/CD pipelines, and Terraform. What you get in return: Up to …
Hands-on experience with SQL, Data Pipelines, Data Orchestration and Integration Tools. Experience in data platforms on premises/cloud using technologies such as Hadoop, Kafka, Apache Spark, Apache Flink, and object, relational, and NoSQL data stores. Hands-on experience with big data application development and cloud data warehousing (e.g. Hadoop, Spark, Redshift, Snowflake, GCP BigQuery). Expertise in building data …
City Of London, England, United Kingdom (Hybrid/Remote Options)
Bondaval
or similar) from a good university highly desirable. Nice to Have: Familiarity with message brokers (Kafka, SQS/SNS, RabbitMQ). Knowledge of real-time streaming (Kafka Streams, Apache Flink, etc.). Exposure to big-data or machine-learning frameworks (TensorFlow, PyTorch, Hugging Face, LangChain). Experience with real-time streaming technologies (Kafka, Apache Storm). Understanding of infrastructure …
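Where postings name Kafka Streams specifically, the baseline expectation is usually the ability to declare a small topology like the hypothetical one below, which maintains a running count of messages per key; the application id, broker address, and topic names are invented for the example.

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class OrderCounts {
    public static void main(String[] args) {
        // Connection and serde defaults; broker and app id are placeholders.
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "order-counts-demo");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Count messages per key and emit running totals to an output topic.
        KStream<String, String> orders = builder.stream("orders");
        orders.groupByKey()
              .count()
              .toStream()
              .mapValues(Object::toString)
              .to("order-counts");

        new KafkaStreams(builder.build(), props).start();
    }
}
```

In production the same topology would typically gain explicit per-topic serdes and a shutdown hook, but the declarative shape stays the same.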
contract definition, clean code, CI/CD, path to production. Worked with AWS as a cloud platform. Extensive hands-on experience with modern data technologies, ETL tools (e.g. Kafka, Flink, dbt, etc.), data storage (e.g. Snowflake, Redshift, etc.), and IaC (e.g. Terraform, CloudFormation). Software development experience with one or more languages (e.g. Python, Java, Scala, Go). Pragmatic approach …
delivering under tight deadlines without compromising quality. Your Qualifications: 12+ years of software engineering experience, ideally in platform, infrastructure, or data-centric product development. Expertise in Apache Kafka, Apache Flink, and/or Apache Pulsar. Deep understanding of event-driven architectures, data lakes, and streaming pipelines. Strong experience integrating AI/ML models into production systems, including prompt engineering …
Java, Scala). Experience working in financial services or large enterprise environments. Demonstrated ability to lead distributed engineering teams effectively. Deep understanding of data architecture, streaming technologies (e.g., Kafka, Flink), and cloud platforms (e.g., AWS, Azure, GCP). Excellent communication and stakeholder management skills. ABOUT CAPGEMINI Capgemini is a global business and technology transformation partner, helping organizations to accelerate …
to Date with Technology: Keep yourself and the team updated on the latest Python technologies, frameworks, and tools like Apache Spark, Databricks, Apache Pulsar, Apache Airflow, Temporal, and Apache Flink, sharing knowledge and suggesting improvements. Documentation: Contribute to clear and concise documentation for software, processes, and systems to ensure team alignment and knowledge sharing. Your Qualifications. Experience: Professional experience … pipelines, and machine learning workflows. Workflow Orchestration: Familiarity with tools like Apache Airflow or Temporal for managing workflows and scheduling jobs in distributed systems. Stream Processing: Experience with Apache Flink or other stream processing frameworks is a plus. Desired Skills. Asynchronous Programming: Familiarity with asynchronous programming tools like Celery or asyncio. Frontend Knowledge: Exposure to frontend frameworks like React …
Java, data structures and concurrency, rather than relying on frameworks such as Spring. You have built event-driven applications using Kafka and solutions with event-streaming frameworks at scale (Flink/Kafka Streams/Spark) that go beyond basic ETL pipelines. You know how to orchestrate the deployment of applications on Kubernetes, including defining services, deployments, stateful sets, etc. …
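For the "event-driven applications using Kafka" requirement just above, the foundational pattern is the plain kafka-clients poll loop sketched below; broker, group id, and topic are placeholders, and a production service would add error handling, offset management, and graceful shutdown.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class EventListener {
    public static void main(String[] args) {
        // Broker and group id are illustrative placeholders.
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "event-listener-demo");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("domain-events"));
            while (true) {
                // Poll for new events and react to each one (here, just log it).
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("key=%s value=%s%n", record.key(), record.value());
                }
            }
        }
    }
}
```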
about complex problems at high scale. Ability to work collaboratively in a team environment and communicate effectively with other teams across Cloudflare. Experience with data streaming technologies (e.g., Kafka, Flink) is a strong plus. Experience with various logging platforms or SIEMs (e.g., Splunk, Datadog, Sumo Logic) and storage destinations (e.g., S3, R2, GCS) is a plus. Experience with Infrastructure …
Greater London, England, United Kingdom (Hybrid/Remote Options)
Quant Capital
Java – if you haven't been siloed in a big firm then don't worry. Additional exposure to the following is desired (tech stack you will learn): Hadoop and Flink; Rust, JavaScript, React, Redux, Flow; Linux, Jenkins; Kafka, Avro, Kubernetes, Puppet. Involvement in the Java community. My client is based in London. Working from home is encouraged, but you will need …
process of AB testing framework. Build services which respond to batch and real-time data to safely roll out features and experiments, using a technology stack of AB testing, Hadoop, Spark, Flink, HBase, Druid, Python, Java, distributed systems, React, and statistical analysis. Work closely with partners to implement sophisticated statistical methodology into the platform. Telecommuting is permitted. Minimum Requirements: Master's degree …
volumes of data in real time. Job Description: Design, develop, and maintain scalable microservices using Java and Spring Boot. Build and optimize real-time data pipelines leveraging Apache Kafka, Flink, and Spark/Databricks. Develop robust data distribution and streaming solutions for high-throughput systems. Deploy, manage, and monitor services in containerized environments (Docker/Kubernetes). Write efficient … architecture and RESTful APIs. Proficiency with Kafka and distributed streaming systems. Solid understanding of SQL and data modeling. Experience with containerization (Docker) and orchestration (Kubernetes). Working knowledge of Flink, Spark, or Databricks for data processing. Familiarity with AWS services (ECS, EKS, S3, Lambda, etc.). Basic scripting in Python for automation or data manipulation. Secondary Skills: Experience with …
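Finally, since this last posting combines Java, Spring Boot, and Kafka, here is a minimal sketch of that pattern assuming the spring-kafka starter; the topic, group id, and class names are invented for the example, and broker settings would come from spring.kafka.* configuration properties.

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@SpringBootApplication
public class StreamingServiceApplication {
    public static void main(String[] args) {
        SpringApplication.run(StreamingServiceApplication.class, args);
    }
}

@Component
class TradeEventConsumer {
    // Spring Boot auto-configures the underlying consumer from
    // spring.kafka.* properties; the topic name is a placeholder.
    @KafkaListener(topics = "trade-events", groupId = "streaming-service")
    public void onEvent(String payload) {
        System.out.println("received: " + payload);
    }
}
```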