Nice to Haves: Experience with Azure services for managing GPT pipelines and multi-cloud infrastructure. Familiarity with big data technologies such as Apache Spark, Kafka, and MSK for large-scale data processing. Experience with Boost libraries (Asio, Beast). Advanced experience in cost optimization strategies for cloud infrastructure and …
building RESTful APIs/WebSockets Proficient in Scala and its ecosystem (e.g., Akka, Play Framework, SBT) Experience working with distributed messaging systems such as Kafka, ActiveMQ, RabbitMQ, etc. Experience with microservices architecture Containerisation technologies (e.g., Docker, Kubernetes) Strong understanding of software design patterns, data structures, and algorithms Experience with …
infrastructure experience being a plus Setting up CI/CD pipelines with Bitbucket, Terraform, Jenkins, and Ansible Handling queuing and event-driven architectures with Kafka and SQS Implementing security best practices and monitoring/logging solutions (ELK stack, Red Hat SSO, SonarQube) What We’re Looking For: At least …
DevOps practices is a plus Experience with TypeScript Familiarity with GraphQL or other modern data-fetching technologies Experience in integrating message brokers (e.g., RabbitMQ, Kafka) Familiarity with cloud platforms like AWS, Azure, or Google Cloud Knowledge of microservices architecture and related tools Experience with testing frameworks like Mocha, Chai …
days. Essential Skills and Experience Strong expertise in AWS cloud services, with practical experience or theoretical knowledge of AWS MSK (Managed Streaming for Apache Kafka) Deep understanding of Kafka (Apache Kafka, Confluent Kafka) and event-driven microservices architecture Proficiency in Infrastructure as Code tools (Terraform, AWS CloudFormation) Experience with observability and monitoring tools (Prometheus, Grafana, AWS CloudWatch) Familiarity with data streaming frameworks such as Kafka Streams, Flink, or Spark Streaming Knowledge of security best practices, including IAM, encryption, and role-based access control At least 5 years of experience in cloud engineering, DevOps, or backend platform development Hands-on experience with AWS MSK is ideal, but candidates with strong Kafka expertise and a willingness to upskill in AWS MSK will be considered Multi-cloud experience (Azure Event Hubs, Google Pub/Sub) Experience with alternative messaging and queueing systems (RabbitMQ, ActiveMQ, Redis Streams …
Scala for data processing. Practical experience with BigQuery, Cloud Dataflow, Cloud Dataproc, and Apache Beam. Experience with event-driven streaming platforms such as Apache Kafka or Pub/Sub. Familiarity with Terraform, Kubernetes (GKE), and Cloud Functions. Strong understanding of data modeling, data lakes, and data warehouse design. Knowledge …
DevOps Engineers. As a Senior Java Developer, you will: Have 5+ years' experience as a Software Engineer Experience developing with: Java, Spring Boot, Microservices, Kafka (or other messaging queues, e.g. RabbitMQ), AWS, Docker, Kubernetes A desire to be part of an important mission Be adaptable to working in the …
with DevOps principles and tooling such as Infrastructure as Code (Terraform) and CI/CD (GitHub Actions, Jenkins) Knowledge of stream processing technologies like Kafka would be useful Experience working with ITSM systems like JSM, Zendesk or ServiceNow Experience building/maintaining automated incident management workflows Experience developing with …
where blockchain technology, particularly in data handling and staking mechanisms, was a core component. Development and Automation of Data Pipelines: Leverage experience with Apache Kafka, Apache Airflow, and AWS Glue to build robust data pipelines that support the migration and ongoing data operations. Collaborative Project Execution: Work remotely with …
and NoSQL) and proficiency in designing efficient and scalable database schemas. Experience with workflow orchestration tools (Apache Airflow, Prefect) and data pipeline frameworks (Apache Kafka, Talend). Familiarity with cloud platforms (AWS, GCP or Azure) and their data services (AWS Glue, GCP Dataflow) for building scalable, cost-effective data …
Strong understanding of multi-threading, concurrency, and performance optimization. Knowledge of SQL and database technologies (PostgreSQL, MySQL, or similar). Experience with messaging systems (Kafka, RabbitMQ) is a plus. Exposure to cloud platforms such as AWS, Azure, or GCP. Strong problem-solving skills and a keen attention to detail. …
automation using Ansible, Terraform, Bash, Python, etc. Experience with containers and platforms such as Docker, Kubernetes, etc. Experience with administration of RabbitMQ/NATS/Kafka Experience with administration of SQL/NoSQL databases Good writing and verbal communication skills to ensure efficient communication within and outside the team and …
City of London, London, United Kingdom Hybrid / WFH Options
83zero Limited
Data Fusion. NoSQL Databases: DynamoDB/Neo4j/Elastic, Google Cloud Datastore. Snowflake Data Warehouse/Platform Streaming technologies and processing engines: Kinesis, Kafka, Pub/Sub and Spark Streaming. Experience of working with CI/CD technologies: Git, Jenkins, Spinnaker, GCP Cloud Build, Ansible, etc. Experience building …
the ELK stack. Set up and manage the CI/CD pipeline using BitBucket, Maven, Terraform, Jenkins, Ansible/Packer, and Kustomize. Work with Kafka, SQS for queuing solutions and implement scheduling using Jenkins/Ansible. Use a combination of Cucumber, JUnit, Selenium, and Postman for comprehensive testing. Qualifications …
pipelines for data applications and infrastructure Expertise with cloud platforms like AWS, GCP or Azure, preferably AWS Experience with dbt for data transformations and Kafka (or other streaming technologies) is a strong plus Proficiency in data modeling and designing scalable data architectures to support analytics and operational use cases …
similar Key Technologies We Use (not necessarily required for the role): Google Cloud, Google Cloud Composer, BigQuery, Spark, Solr, Elasticsearch, Druid, PostgreSQL, ScyllaDB, Redis, Kafka, Flink, Docker, Kubernetes, Kibana, Jenkins, Prometheus, Grafana, GitHub, C++, Python, Scala, Compiler Explorer What Blis Can Offer: We want you to be well and …
into different categories: Backend: Java, Node.js, C#, Python, PHP, Scala, Power Platform Frontend: React, JavaScript, TypeScript, Angular Data: PostgreSQL, Microsoft SQL Server, MongoDB, Apache Kafka, Neo4j, Amazon Athena DevOps: AWS, Kubernetes, Azure, Jenkins, Docker, Ansible, Terraform, Dynatrace Responsibilities As part of the team, your day-to-day responsibilities will …
Scala or be interested in learning a functional language. Experience with distributed datastores (e.g., DynamoDB). Experience with message queues (e.g., RabbitMQ/Apache Kafka). Experience building scalable web applications serving 10,000s of requests per second. Experience working with RDBMS, ideally Postgres. Familiarity with DevOps culture (CI …
including data analysis, extraction, transformation, and loading, data intelligence, data security, and proven experience in their technologies (e.g. Spark, cloud-based ETL services, Python, Kafka, SQL, Airflow) You have experience in assessing the relevant data quality issues based on data sources and use cases, and can integrate the relevant data …
search, and other advanced natural language processing techniques. Proven experience with MLOps, data platforms (e.g., Snowflake), data pipelines (e.g., Airflow), and messaging platforms (e.g., Kafka), across multiple geographic regions. Strong background in data architecture, software architecture, and distributed systems, with experience coordinating technical efforts across global teams. Proficient in …
Python and SQL. Experience with big data technologies like Apache Hadoop and Apache Spark. Familiarity with real-time data processing frameworks such as Apache Kafka or Flink. MLOps & Deployment: Experience deploying and maintaining large-scale ML inference pipelines into production environments. Proficiency with Docker for containerization and Kubernetes for …