or Django, Docker. Experience working with ETL pipelines is desirable, e.g. Luigi, Airflow or Argo. Experience with big data technologies such as Apache Spark, Hadoop, Kafka, etc. Data acquisition and development of data sets and improving data quality. Preparing data for predictive and prescriptive modelling. Hands-on coding experience …
Deep familiarity with a cloud-native, modern data engineering technology stack. Experience with data ingestion methods and tools. Experience with distributed computing frameworks (e.g., Hadoop, Spark, Hive, Presto). Experience with data orchestration tools. Experience with cloud data warehousing and core data modelling concepts. Proficiency in version control systems …
ETL processes, and data warehousing.
- Significant exposure and hands-on experience with at least 2 of the programming languages: Python, Java, Scala, GoLang.
- Significant experience with Hadoop, Spark and other distributed processing platforms and frameworks.
- Experience working with open table/storage formats such as Delta Lake, Apache Iceberg or Apache Hudi. …
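The listings above repeatedly ask for hands-on ETL experience with orchestrators such as Luigi or Airflow. As a hedged illustration only (not any employer's actual stack; all field and function names here are invented), the extract-transform-load pattern those tools schedule can be sketched in pure Python:

```python
import csv
import io
import json

def extract(raw_csv: str) -> list[dict]:
    """Extract: parse raw CSV text into records."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(records: list[dict]) -> list[dict]:
    """Transform: normalise types and drop rows failing a quality check."""
    out = []
    for r in records:
        try:
            amount = float(r["amount"])
        except (KeyError, ValueError):
            continue  # data-quality filter: skip malformed rows
        out.append({"id": r["id"].strip(), "amount": round(amount, 2)})
    return out

def load(records: list[dict]) -> str:
    """Load: serialise to JSON lines, a stand-in for a warehouse write."""
    return "\n".join(json.dumps(r, sort_keys=True) for r in records)

raw = "id,amount\n a1 ,10.5\nb2,oops\nb3,3.14159\n"
result = load(transform(extract(raw)))
```

An orchestrator like Airflow would wrap each of these three steps in a scheduled, retryable task rather than chaining them in one call.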
South East London, England, United Kingdom Hybrid / WFH Options
McGregor Boyall
implement data models, ETL processes, and data warehousing solutions. Programming: Utilize Python, Java, Scala, or GoLang to build and optimize data pipelines. Distributed Processing: Work with Hadoop, Spark, and other platforms for large-scale data processing. Real-Time Data Streaming: Develop and manage pipelines using CDC, Kafka, and Apache Spark. Database Management: Handle …
CD tooling. Scripting experience (Python, Perl, Bash, etc.). ELK (Elastic stack). JavaScript. Cypress. Linux experience. Search engine technology (e.g., Elasticsearch). Big Data technology experience (Hadoop, Spark, Kafka, etc.). Microservice and cloud-native architecture.
Desirable Skills: Able to demonstrate experience of troubleshooting and diagnosis of technical issues. Able to demonstrate …
know one or more of the following tools: Informatica PowerCenter, SAS Data Integration Studio, Microsoft SSIS, Ab Initio, etc.
• Ideally, you have experience in the Hadoop ecosystem (Spark, Kafka, HDFS, Hive, HBase, …), Docker and orchestration platforms (Kubernetes, OpenShift, AKS, GKE, …), and NoSQL databases (MongoDB, Cassandra, Neo4j)
• Any experience with cloud …
through improved data handling and analysis. Responsibilities: Build predictive models using machine-learning techniques that generate data-driven insights on modern data platforms (Spark, Hadoop and other map-reduce tools). Develop and productionize containerized algorithms for deployment in hybrid cloud environments (GCP, Azure). Connect and blend data from various …
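Several listings describe Spark and Hadoop as "map-reduce tools". As a rough sketch of the underlying pattern only (real Spark or Hadoop jobs distribute these phases across a cluster rather than running them in-process), a word count can be expressed as a map phase, a shuffle phase and a reduce phase in pure Python:

```python
from collections import defaultdict
from itertools import chain

def map_phase(line: str) -> list[tuple[str, int]]:
    # Map: emit a (word, 1) pair for every word in the input line.
    return [(word.lower(), 1) for word in line.split()]

def shuffle(pairs):
    # Shuffle: group all emitted values by key, as the framework
    # would do between the map and reduce phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the counts collected for each word.
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["Spark and Hadoop", "spark scales out"]
counts = reduce_phase(shuffle(chain.from_iterable(map_phase(l) for l in lines)))
```

The appeal of the pattern is that the map and reduce functions are independent per key, which is what lets a cluster parallelise them.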
Must have 8+ years' experience with relational databases like Oracle, NoSQL databases and/or Big Data technologies (e.g. Oracle, SQL Server, Postgres, Spark, Hadoop, other open source). Must have experience in Data Security Solutions (Identity and Access Management and Data Security Access Management). Must have 3+ years …
and classification techniques, and algorithms. Fluency in a programming language (Python, C, C++, Java, SQL). Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau, …)
or Azure Solutions Architect Expert. Experience with other cloud platforms such as AWS or Google Cloud Platform. Knowledge of big data technologies such as Hadoop, Spark, etc. If you are passionate about leveraging Azure technologies to drive data-driven insights and solutions, we encourage you to apply for this …
a sense of trust with stakeholders. Preferred qualifications, capabilities and skills: Experience with deep learning frameworks (PyTorch, TensorFlow). Experience with big-data technologies (Spark, Hadoop) or distributed computation frameworks (Dask, Modin). Hands-on experience with Natural Language Processing (NLP) and Large Language Models (LLMs). Experience of creating and deploying …
London, England, United Kingdom Hybrid / WFH Options
Global Relay
JMeter or similar tools. Web services technology such as REST, JSON or Thrift. Testing web applications with Selenium WebDriver. Big data technology such as Hadoop, MongoDB, Kafka or SQL. Network principles and protocols such as HTTP, TLS and TCP. Continuous integration systems such as Jenkins or Bamboo. Continuous delivery …
industry experience. Experience in distributed system design. Experience with Pure/Alloy. Working knowledge of open-source tools such as AWS Lambda and Prometheus. Spark, Hadoop or Snowflake knowledge would be a plus. Additional Information: Location: This role can be delivered in a hybrid nature from one of these offices: Dublin …
consulting environment
• Current or previous consulting experience highly desirable
• Experience of working with companies in the finance sector highly desirable
• Platform implementation experience (Apache Hadoop, Kafka, Storm and Spark, Elasticsearch and others)
• Experience around data integration & migration, data governance, data mining, data visualisation, database modelling in an agile delivery …
Greater London, England, United Kingdom Hybrid / WFH Options
Oliver Bernard
essential:
- Proven experience as an Architect and excellent knowledge of Big Data
- Great understanding of Cloud, e.g. Azure and/or AWS
- Excellent knowledge of Hadoop and tools such as HBase/Hive and Spark etc.
- Excellent experience of ETL, data warehousing and handling a variety of data types
- Very …
Months. Location: Hybrid (2 days a week). JD: Experience of working with a streaming & batch technology stack: Confluent Kafka, MongoDB, StreamSets, IBM CDC, Hive, Hadoop, API, Informatica, Airflow, and other similar technologies. SME-level skills and experience of designing/architecting test automation solutions; ability to creatively problem-solve is …
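Change data capture (CDC), named above alongside Kafka and IBM CDC, amounts to turning differences between database states into a stream of change events. As a toy snapshot-diff sketch only (production CDC tools read the database transaction log instead of diffing snapshots, and these record shapes are invented for illustration):

```python
def diff_snapshots(old: dict, new: dict) -> list[dict]:
    """Compare two {primary_key: row} snapshots and emit change events."""
    events = []
    for key, row in new.items():
        if key not in old:
            events.append({"op": "insert", "key": key, "after": row})
        elif old[key] != row:
            events.append({"op": "update", "key": key,
                           "before": old[key], "after": row})
    for key in old:
        if key not in new:
            events.append({"op": "delete", "key": key, "before": old[key]})
    return events

before = {1: {"name": "a"}, 2: {"name": "b"}}
after = {1: {"name": "a2"}, 3: {"name": "c"}}
events = diff_snapshots(before, after)
```

Events of this shape are what typically gets published to a Kafka topic for downstream consumers to replay.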
and analytical abilities. Preferred Skills: Experience with cloud databases (e.g., AWS, Azure, Google Cloud Platform). Knowledge of big data tools and frameworks (e.g., Hadoop, Spark). Certification in database management (e.g., Microsoft Certified: Azure Data Engineer Associate). What We Offer: Competitive salary and comprehensive benefits package. Opportunity …
within a typical retail trading environment is key. Experience required: A background in leveraging hands-on skills using tools such as Python, R, Spark, Hadoop, SQL and cloud-based platforms such as GCP, Azure and AWS to manipulate and analyse various data sets in large volumes. Background in data …
on experience with analytic tools like R and Python, and visualization tools like Tableau and Power BI. Exposure to cloud platforms and big data systems such as Hadoop HDFS and Hive is a plus. Ability to work with IT and Data Engineering teams to help embed analytic outputs in business processes. Graduate …
London, England, United Kingdom Hybrid / WFH Options
Anson McCade
and NoSQL databases
• Programming languages such as Spark or Python
• Amazon Web Services, Microsoft Azure or Google Cloud, and distributed processing technologies such as Hadoop
Benefits:
• Base Salary: £45,000 - £75,000 (DoE)
• Discretionary Bonus: circa 10% per annum
• DV Bonus: circa £5,000
• Flex Fund: £5,000
• Health: Private …