engineers of varying levels of experience. Flexibility and willingness to adapt to new software and techniques. Nice to Have Experience working with projects in Apache Spark, Databricks or similar. Expert cloud platform knowledge, e.g. Azure What will be your key responsibilities? A technical expert and leader on the more »
programming language (Java, C++, Kotlin would be beneficial) Cloud experience (we use Azure; AWS or GCP welcome) Kafka or exposure to ActiveMQ, RabbitMQ or Spark Orchestration and Containerisation experience (Kubernetes, Docker and Microservices) Creating greenfield microservices, this team plans to add a wealth of functionality to existing systems as more »
Northampton, Northamptonshire, East Midlands, United Kingdom Hybrid / WFH Options
Dupen Ltd
APIs, infrastructure design (load balancing, VMs, PostgreSQL, vector DBs). Senior Machine Learning Engineer desirable skills: Version control (Git), computer vision libraries, Big Data (Hadoop, Spark), Cloud (AWS, Google Cloud, Azure), and a knowledge of secure coding techniques (PCI-DSS, PA-DSS, ISO27001). This is a fantastic opportunity to join more »
machine learning techniques, deep learning, graph data analytics, statistical analysis, time series, geospatial, NLP, sentiment analysis, pattern detection, etc.) Experience using Python, R or Spark to extract insights from data Knowledge of SQL for accessing and processing data (PostgreSQL preferred but general SQL knowledge more important) Experience using the more »
London, England, United Kingdom Hybrid / WFH Options
Anson McCade
tools such as Informatica MDM, Informatica AXON, Informatica EDC, and Collibra • MySQL, SQL Server, Oracle, Snowflake, PostgreSQL and NoSQL databases • Programming languages such as Spark or Python • Amazon Web Services, Microsoft Azure or Google Cloud and distributed processing technologies such as Hadoop Benefits : • Base Salary more »
to: Backend technology, Python. Databases like MSSQL. Front-end technology, Java. Cloud platform, AWS. Programming language, JavaScript (React.js) Big data technologies such as Hadoop, Spark, or Kafka. What We Need from You: Essential Skills: A degree in Computer Science, Engineering, or a related field, or equivalent experience. Proficiency in more »
as Hadoop and Spark. Experience with data warehousing technologies such as Redshift, Snowflake, or BigQuery. Experience with data pipeline and ETL tools such as Apache NiFi, Airflow, or Glue. Knowledge of data governance and security best practices. Strong problem-solving and analytical skills. Ability to work well in a more »
that incorporate various data backends, query languages and ORM frameworks. Experience designing and building ETL pipelines built around libraries and frameworks like Pandas and Apache Spark. Strong API design skills and a familiarity with building web applications. A proponent of great testing, first-class observability and automating everything. Familiarity more »
and coding environments. Bonus Skills: Python/PHP/Typescript/ReactJS AI/ML models and usage ETL pipelines in AWS (Glue/Apache Spark) API Load testing If you would like more information on the role or would like to apply, then please send your CV more »
Modelling. Experience with one or more of these programming languages: Python, Scala/Java Experience with distributed data and computing tools, mainly Apache Spark & Kafka Understanding of critical path approaches, how to iterate to build value, engaging with stakeholders actively at all stages. Able to deal more »
Cloud Composer, Dagster etc.) Demonstrable cloud experience developing in AWS (preferable), GCP, or Azure Experience of developing real-time streaming workloads – Kafka, Kinesis, PubSub, Flink, Spark Beyond this core skillset, our priority is always to introduce great culture fits to our client’s teams. We’re looking for enthusiastic and more »
manage several tasks/projects concurrently and prioritize work effectively. • Experience in Risk and Finance or Regulatory reporting. • Understanding of Big Data Technologies, Cloudera, Spark • Experience in CI/CD pipeline implementation • Good exposure in Python Scripting more »
Platforms Must have 8+ years' Experience with Relational Databases like Oracle, NoSQL Databases and/or Big Data technologies (e.g. Oracle, SQL Server, Postgres, Spark, Hadoop, other Open Source). Must have experience in Data Security Solutions (Identity and Access Management and Data Security Access Management) Must have 3+ more »
Milton Keynes, Buckinghamshire, South East, United Kingdom Hybrid / WFH Options
Dupen Ltd
Linux, APIs, infrastructure design (load balancing, VMs, PostgreSQL, vector DBs). Machine Learning Engineer desirable skills: Version control (Git), computer vision libraries, Big Data (Hadoop, Spark), Cloud (AWS, Google Cloud, Azure), and a knowledge of secure coding techniques (PCI-DSS, PA-DSS, ISO27001). Note: as there are actually two roles more »
Engineer, with expertise developing scalable data pipelines. Strong object-oriented programming skills, particularly in Python. Experience with data lakes and data warehousing solutions (Spark, Dataflow, BigQuery). Knowledge of SQL and experience with relational databases, as well as NoSQL databases Familiarity with cloud services (preferably GCP) and understanding more »
Engineering experience Demonstrate in-depth knowledge of large-scale data platforms (Databricks, Snowflake) and cloud-native tools (Azure Synapse, RedShift) Experience of analytics technologies (Spark, Hadoop, Kafka) Have familiarity with Data Lakehouse architecture, SQL Server, DataOps, and data lineage concepts Demonstrate in-depth knowledge of large-scale data platforms more »
mining, data analysis, and strong software engineering skills. Strong understanding of Data Engineering Proficiency in AWS, data warehousing (Snowflake, Databricks, Redshift), big data frameworks (Spark, Kafka), container orchestration platforms (Kubernetes), and data integration/ETL tools. Strong written and verbal communication skills, with the ability to explain technical concepts more »
South East London, London, United Kingdom Hybrid / WFH Options
Stepstone UK
Bachelor's degree in Computer Science or a related field (Master's degree preferred) Nice to have: experience with LLMs, Vector Databases, AWS EMR, Spark, and Python Our commitment: Equal opportunities are important to us. We believe that diversity and inclusion at The Stepstone Group are critical to our more »
Reigate, England, United Kingdom Hybrid / WFH Options
esure Group
of OO programming, software design, i.e., SOLID principles, and testing practices. Knowledge and working experience of AGILE methodologies. Proficient with SQL. Familiarity with Databricks, Spark, geospatial data/modelling and insurance are a plus. Exposure to MLOps, model monitoring principles, CI/CD and associated tech, e.g., Docker, MLflow more »
model training, evaluation, and productionization. - Strong programming skills in Python, with proficiency in ML frameworks (e.g., TensorFlow, PyTorch) and data engineering tools (e.g., Kafka, Spark). - Expertise in cloud computing platforms (AWS, Azure) and containerization technologies (Docker) for scalable and reliable ML model deployment. - Solid understanding of data privacy more »