diverse sources, transform it into usable formats, and load it into data warehouses, data lakes, or lakehouses. Big Data Technologies: Utilize big data technologies such as Spark, Kafka, and Flink for distributed data processing and analytics. Cloud Platforms: Deploy and manage data solutions on cloud platforms such as AWS, Azure, or Google Cloud Platform (GCP), leveraging cloud-native services …
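The extract-transform-load flow this listing describes can be sketched as three small functions. The record shape, field names, and SQLite target below are illustrative assumptions (SQLite stands in for a warehouse such as Redshift or BigQuery), not part of any stack named above.

```python
import sqlite3

# Hypothetical raw records "extracted" from a diverse source (assumed shape).
RAW = [
    {"id": 1, "amount": "12.50", "country": "gb"},
    {"id": 2, "amount": "7.00", "country": "de"},
]

def transform(rows):
    # Normalise types and casing into a usable format.
    return [(r["id"], float(r["amount"]), r["country"].upper()) for r in rows]

def load(rows, conn):
    # Load into a warehouse-like table.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS sales (id INTEGER, amount REAL, country TEXT)"
    )
    conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(RAW), conn)
print(conn.execute("SELECT COUNT(*), SUM(amount) FROM sales").fetchone())  # (2, 19.5)
```

In production the same extract/transform/load split is what engines like Spark or Databricks Workflows parallelise across a cluster.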
designing and supporting multi-tenant SaaS data platforms with strategies for data partitioning, tenant isolation, and cost management Exposure to real-time data processing technologies such as Kafka, Kinesis, Flink, or Spark Streaming, alongside batch processing capabilities Strong knowledge of SaaS compliance practices and security frameworks Core Competencies Excellent problem-solving abilities with the capacity to translate requirements into …
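A common pattern behind the data-partitioning and tenant-isolation requirement in this listing is routing every tenant through a stable partition key and forcing every query to be tenant-scoped. This is a generic sketch under assumed names (the tenant IDs, partition count, and table are invented):

```python
import hashlib

NUM_PARTITIONS = 8  # assumed partition count for the sketch

def partition_for(tenant_id: str) -> int:
    # Stable hash so a tenant's data always lands in the same partition,
    # which keeps per-tenant cost attribution and data locality predictable.
    digest = hashlib.sha256(tenant_id.encode()).hexdigest()
    return int(digest, 16) % NUM_PARTITIONS

def scoped_query(tenant_id: str, table: str) -> str:
    # Tenant isolation: every statement is forced to filter on tenant_id,
    # so one tenant's queries can never read another tenant's rows.
    return f"SELECT * FROM {table} WHERE tenant_id = '{tenant_id}'"

print(partition_for("acme"), scoped_query("acme", "invoices"))
```

Real platforms push the same idea down a layer, e.g. schema-per-tenant databases or row-level security, but the invariant is identical: the tenant key decides both placement and visibility.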
Liverpool, Merseyside, North West, United Kingdom Hybrid / WFH Options
Forward Role
Excellent stakeholder management and documentation skills Team leadership experience with the ability to mentor and develop engineering talent Nice to haves: Knowledge of data streaming platforms such as Kafka or Flink Exposure to graph databases or vector database technologies Professional certifications in Azure or AWS cloud platforms If you're ready to take the lead on transformative data engineering projects …
training/monitoring and ML inference services - Proficiency in creating and optimizing high-throughput ETL/ELT pipelines using a Big Data processing engine such as Databricks Workflows, Spark, Flink, Dask, dbt or similar - Experience building software and/or data pipelines in the AWS cloud (SageMaker Endpoints, ECS/EKS, EMR, Glue) Why Proofpoint: Protecting people is at …
City of London, London, United Kingdom Hybrid / WFH Options
Hlx Technology
or data platforms, with proven ability to solve complex distributed systems challenges independently Expertise in large-scale data processing pipelines (batch and streaming) using technologies such as Spark, Kafka, Flink, or Beam Experience designing and implementing large-scale data storage systems (feature stores, timeseries databases, warehouses, or object stores) Strong distributed systems and infrastructure skills (Kubernetes, Terraform, orchestration frameworks) …
Configure and manage data analytic frameworks and pipelines using databases and tools such as (but not limited to) NoSQL, SQL, NiFi, Kafka, HDInsight, MongoDB, Cassandra, Neo4j, GraphDB, OrientDB, Spark, Flink, Hadoop, Hive, and others. • Apply distributed systems concepts and principles such as consistency and availability, liveness and safety, durability, reliability, fault tolerance, and consensus algorithms. • Administer cloud computing and …
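The consistency-and-availability concepts listed above are often made concrete through quorum arithmetic: with N replicas, a read quorum R and write quorum W are guaranteed to overlap (so reads see the latest committed write) whenever R + W > N. A minimal sketch, with replica counts chosen only for illustration:

```python
def quorums_overlap(n: int, r: int, w: int) -> bool:
    # Strong consistency requires every read quorum to intersect
    # every write quorum, which holds exactly when R + W > N.
    return r + w > n

# Cassandra-style settings: N=3 replicas, QUORUM (2-of-3) reads and writes.
print(quorums_overlap(3, 2, 2))  # True: overlapping quorums, consistent reads
print(quorums_overlap(3, 1, 1))  # False: ONE/ONE allows stale reads
```

This is the trade-off knob behind "consistency and availability": lowering R or W improves latency and availability at the cost of possibly stale reads.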
with a focus on data quality and reliability. Design and manage data storage solutions, including databases, warehouses, and lakes. Leverage cloud-native services and distributed processing tools (e.g., Apache Flink, AWS Batch) to support large-scale data workloads. Operations & Tooling: Monitor, troubleshoot, and optimize data pipelines to ensure performance and cost efficiency. Implement data governance, access controls, and security … pipelines and data architectures. Hands-on expertise with cloud platforms (e.g., AWS) and cloud-native data services. Comfortable with big data tools and distributed processing frameworks such as Apache Flink or AWS Batch. Strong understanding of data governance, security, and best practices for data quality. Effective communicator with the ability to work across technical and non-technical teams. Additional … following prior to applying to GSR? Experience level applicable to this role? How many years have you designed, built, and operated stateful, exactly-once streaming pipelines in Apache Flink (or an equivalent framework such as Spark Structured Streaming or Kafka Streams)? Which statement best describes your hands-on responsibility for architecting and tuning cloud-native data lake …
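The screening question about exactly-once streaming pipelines comes down to one invariant: state updates and input offsets must be committed atomically, so replay after a crash can never double-count an event. A framework-free sketch of that idea (the event stream and crash point are invented; a real Flink job gets the same effect from checkpoints, and Kafka from transactional commits):

```python
# Committed state: last processed offset and running total stored together,
# mimicking an atomic checkpoint of (offset, state).
committed = {"offset": -1, "total": 0}

EVENTS = [5, 3, 7, 2]  # hypothetical stream of amounts

def process(events, upto=None):
    # Process events, committing offset and state atomically each step.
    for off, amount in enumerate(events):
        if off <= committed["offset"]:
            continue  # already counted: replay of this event is a no-op
        if off == upto:
            return  # simulated crash before committing this offset
        committed["offset"] = off
        committed["total"] += amount

process(EVENTS, upto=2)  # "crash" mid-stream, after offsets 0 and 1 committed
process(EVENTS)          # restart and replay the stream from the beginning
print(committed["total"])  # 17: every event counted exactly once despite replay
```

Without the offset check, the restart would re-add the first two events and report 25; that gap between at-least-once and exactly-once is what the question is probing.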
to Date with Technology: Keep yourself and the team updated on the latest Python technologies, frameworks, and tools like Apache Spark, Databricks, Apache Pulsar, Apache Airflow, Temporal, and Apache Flink, sharing knowledge and suggesting improvements. Documentation: Contribute to clear and concise documentation for software, processes, and systems to ensure team alignment and knowledge sharing. Your Qualifications Experience: Professional experience … pipelines, and machine learning workflows. Workflow Orchestration: Familiarity with tools like Apache Airflow or Temporal for managing workflows and scheduling jobs in distributed systems. Stream Processing: Experience with Apache Flink or other stream processing frameworks is a plus. Desired Skills Asynchronous Programming: Familiarity with asynchronous programming tools like Celery or asyncio. Frontend Knowledge: Exposure to frontend frameworks like React …
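At their core, orchestration tools like Airflow run tasks in dependency order over a DAG. A toy scheduler over an assumed ETL graph shows the idea using only the standard library (the task names are hypothetical, not an Airflow API):

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: one extract feeds two transforms, which feed a load.
# Each key maps a task to the set of tasks it depends on.
DAG = {
    "transform_a": {"extract"},
    "transform_b": {"extract"},
    "load": {"transform_a", "transform_b"},
}

# static_order() yields every task after all of its dependencies.
order = list(TopologicalSorter(DAG).static_order())
print(order)  # 'extract' comes first, 'load' comes last
```

Airflow and Temporal add retries, scheduling, and distributed execution on top, but the dependency resolution is this same topological sort.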
Israel. Willingness and ability to travel abroad. Bonus Points: Knowledge and hands-on experience of Office 365 - A big advantage. Experience in Kafka, and preferably some exposure to Apache Flink, is a plus. Why Join Semperis? You'll be part of a global team on the front lines of cybersecurity innovation. At Semperis, we celebrate curiosity, integrity, and people …
with a view to becoming an expert BS degree in Computer Science or meaningful relevant work experience Preferred Qualifications Experience with large scale data platform infrastructure such as Spark, Flink, HDFS, AWS/S3, Parquet, Kubernetes is a plus …
of the platform Your Qualifications 12+ years of software engineering experience in enterprise-scale, data-centric, or platform environments Deep expertise in distributed data technologies such as Apache Kafka, Flink, and/or Pulsar Strong background in event-driven architectures, streaming pipelines, and data lakes Hands-on experience with AI/ML production systems, including prompt-based LLM integrations …
on APIs, SDKs, data platforms, data integration, or enterprise SaaS. Data platform knowledge: Strong familiarity with data warehouses/lakehouses (Snowflake, Databricks, BigQuery), orchestration tools (Airflow, Prefect), streaming (Kafka, Flink), and transformation (dbt). Technical proficiency: Solid understanding of REST/GraphQL APIs, SDK development, authentication/authorization standards (OAuth, SSO), and best practices in developer experience. Customer empathy …
plus Experience with Terraform and Kubernetes is a plus! A genuine excitement for significantly scaling large data systems Technologies we use (experience not required): AWS serverless architectures Kubernetes Spark Flink Databricks Parquet, Iceberg, Delta Lake, Paimon Terraform GitHub, including GitHub Actions Java PostgreSQL About Chainalysis Blockchain technology is powering a growing wave of innovation. Businesses and governments around the …
to cross-functional teams, ensuring best practices in data architecture, security and cloud computing Proficiency in data modelling, ETL processes, data warehousing, distributed systems and metadata systems Utilise Apache Flink and other streaming technologies to build real-time data processing systems that handle large-scale, high-throughput data Ensure all data solutions comply with industry standards and government regulations … not limited to EC2, S3, RDS, Lambda and Redshift. Experience with other cloud providers (e.g., Azure, GCP) is a plus In-depth knowledge and hands-on experience with Apache Flink for real-time data processing Proven experience in mentoring and managing teams, with a focus on developing talent and fostering a collaborative work environment Strong ability to engage with …
At the core of VAULT is big data at scale. Our systems handle massive ingestion pipelines, long-term storage, and high-performance querying. We leverage distributed technologies (Kafka, Spark, Flink, Cassandra, Airflow, etc.) to deliver resilient, low-latency access to trillions of records, while continuously optimizing for scalability, efficiency, and reliability. We'll trust you to: Build high-performance … oriented programming language Deep background in distributed, high-volume, high-availability systems Fluency in AI development tools We would love to see: Experience with big data ecosystems (Kafka, Spark, Flink, Cassandra, Redis, Airflow) Familiarity with cloud platforms (AWS, Azure, GCP) and S3-compatible storage SaaS/PaaS development experience Container technologies (Docker, Kubernetes) Salary Range: $160,000 - $240,000 USD annually …
City of London, London, United Kingdom Hybrid / WFH Options
Rise Technical Recruitment Limited
trusted partner across a wide range of businesses. In this role you'll take ownership of the reliability and performance of large-scale data pipelines built on AWS, Apache Flink, Kafka, and Python. You'll play a key role in diagnosing incidents, optimising system behaviour, and ensuring reporting data is delivered on time and without failure. The ideal candidate … will have strong experience working with streaming and batch data systems, a solid understanding of monitoring and observability, and hands-on experience working with AWS, Apache Flink, Kafka, and Python. This is a fantastic opportunity to step into an SRE role focused on data reliability in a modern cloud-native environment, with full ownership of incident management, architecture … various other departments and teams to architect scalable, fault-tolerant data solutions The Person: *Experience in a data-focused SRE, Data Platform, or DevOps role *Strong knowledge of Apache Flink, Kafka, and Python in production environments *Hands-on experience with AWS (Lambda, EMR, Step Functions, Redshift, etc.) *Comfortable with monitoring tools, distributed systems debugging, and incident response Reference …
essential requirement includes experience in the analytics area, especially data warehouse and data lake technologies. Familiar with services and products related to data analysis, for example: Redshift, EMR, Elasticsearch, Flink, Spark, HBase, Kafka, Kinesis, Trino, Hudi, Iceberg, etc. Experience with Data and AI projects and tool usage will be given priority. About the team Diverse Experiences AWS values diverse experiences. … years of IT development or implementation/consulting in the software or Internet industries experience - Familiar with services and products related to data analysis, for example: Redshift, EMR, OpenSearch, Flink, Spark, HBase, Kafka, Kinesis, Trino, Hudi, Iceberg, etc. - Experience in deploying and maintaining big data projects. PREFERRED QUALIFICATIONS - Experience working within software development or Internet-related industries - Experience migrating …
distributed team with the remit being the delivery of new state-of-the-art platforms for a specific business area. The technical stack includes Java, Oracle, Spring, Apache Flink, Apache Kafka, Apache Ignite, and Angular. You will proactively influence design whilst also being Development Lead, promoting the highest software development standards. Experience Extensive Java experience in a complex software … development environment. Spring Framework experience. Specialisation in any of the following: messaging middleware, databases such as Oracle, Flink, Ignite, Kafka, or Kubernetes. SDLC automation tools such as Jira, Bitbucket, Artifactory, or Jenkins. Experience working in a global team, aiding others through pair programming and knowledge sharing to help the team improve their development practices. Coaching and mentoring experience. Please …
Technology Product Manager, Enterprise Services - Financial Solutions Location New York Business Area Sales and Client Service Ref # Description & Requirements Bloomberg's Enterprise Technology team is responsible for ensuring clients can robustly connect, integrate and develop with Bloomberg's capabilities More ❯