South East London, London, United Kingdom Hybrid / WFH Options
Kennedy Pearce Consulting
objectives. Key Responsibilities: Build and maintain efficient data pipelines on Google Cloud Platform (GCP), ensuring scalability and reliability. Utilise tools such as Google BigQuery, Apache Spark, Apache Beam, Airflow, and Cloud Composer to manage and process large datasets. Collaborate with engineering, product, and data teams to create … hands-on experience in cloud platforms (experience with Google Cloud is a plus). Strong knowledge of data warehousing (e.g., Google BigQuery), data processing (Apache Spark, Beam), and pipeline orchestration (Airflow, Cloud Composer). Proficiency with SQL and NoSQL databases (e.g., Cloud Datastore, MongoDB), and storage systems …
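For illustration only, a minimal sketch of the kind of Airflow/Cloud Composer pipeline this role describes, assuming Airflow 2.x with the Google provider package installed; the DAG name, project, dataset, and table names are hypothetical.

```python
# Minimal sketch: a daily Cloud Composer (Airflow) DAG that runs a BigQuery job.
# Project, dataset, and table names below are placeholders, not real resources.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="daily_orders_aggregation",      # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Aggregate raw orders into a reporting table with a standard-SQL query.
    aggregate_orders = BigQueryInsertJobOperator(
        task_id="aggregate_orders",
        configuration={
            "query": {
                "query": """
                    CREATE OR REPLACE TABLE `my-project.reporting.daily_orders` AS
                    SELECT order_date, COUNT(*) AS orders, SUM(amount) AS revenue
                    FROM `my-project.raw.orders`
                    GROUP BY order_date
                """,
                "useLegacySql": False,
            }
        },
    )
```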
implementing scalable and efficient data pipelines for both batch and real-time processing. * Hands-on experience with data transformation and data processing frameworks (e.g., Apache Spark, Apache Kafka). * Solid understanding of relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., DynamoDB, MongoDB). * Familiarity with data …
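As a hedged illustration of the batch-and-streaming stack above, a minimal PySpark Structured Streaming sketch that reads events from Kafka and lands them as Parquet; the broker, topic, schema, and paths are all placeholders, and the spark-sql-kafka connector is assumed to be available on the cluster.

```python
# Minimal sketch: real-time ingestion with Spark Structured Streaming from Kafka.
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

# Hypothetical payload schema for the incoming JSON events.
order_schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
])

# Read raw events from Kafka (placeholder broker and topic).
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "orders")
    .load()
)

# Parse the JSON value column into typed fields.
orders = events.select(
    from_json(col("value").cast("string"), order_schema).alias("order")
).select("order.*")

# Continuously append the parsed records to a Parquet sink (placeholder paths).
query = (
    orders.writeStream.format("parquet")
    .option("path", "s3a://my-bucket/orders/")
    .option("checkpointLocation", "s3a://my-bucket/checkpoints/orders/")
    .start()
)
query.awaitTermination()
```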
Troubleshoot complex technical issues Solution Design & Implementation Skills Design scalable, maintainable, and secure solutions using: Cloud platforms (Azure/AWS) Data processing frameworks (Databricks, Apache Spark) Data warehousing solutions API and integration patterns Create detailed architecture documentation and technical specifications Establish patterns for data modelling, API design, and … design patterns Distributed systems design Data modelling and database design API design and integration patterns Cloud architecture (Azure/AWS) Data processing frameworks (Databricks, Apache Spark) Hands-on experience with: Python, SQL, and other relevant programming languages Data warehousing and lake architectures ETL/ELT pipelines Infrastructure as …
applications or services, preferably in AWS Experience working with event streaming platforms (Kafka/Kinesis/SQS) Experience with distributed processing systems such as Apache Spark and/or Apache Flink Ability to handle periodic on-call duty as well as out-of-band requests Ability to …
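Purely as an illustration of consuming from one of the AWS queueing services mentioned, a minimal boto3/SQS polling loop; the queue URL and region are hypothetical, and credentials are assumed to come from the environment.

```python
# Minimal sketch: long-poll an SQS queue and delete messages after processing.
import json
import boto3

sqs = boto3.client("sqs", region_name="eu-west-2")
QUEUE_URL = "https://sqs.eu-west-2.amazonaws.com/123456789012/events"  # hypothetical

while True:
    # Long-poll for up to 10 messages at a time to reduce empty responses.
    response = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,
    )
    for message in response.get("Messages", []):
        event = json.loads(message["Body"])
        print("processing", event)  # stand-in for real downstream handling
        # Delete only after successful processing so failures are redelivered.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])
```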
platforms (AWS, GCP, or Azure) Experience with: Data warehousing and lake architectures ETL/ELT pipeline development SQL and NoSQL databases Distributed computing frameworks (Spark, Kinesis, etc.) Software development best practices including CI/CD, TDD and version control. Containerisation tools like Docker or Kubernetes Experience with Infrastructure as …
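As an illustrative sketch of the TDD practice listed above, a small pure transformation plus a pytest test that a CI/CD pipeline could run on every commit; the deduplication logic is a made-up example, not anything from the posting.

```python
# Minimal sketch: a pure transform function with a pytest-style unit test.

def deduplicate(records: list[dict]) -> list[dict]:
    """Keep the last record seen for each 'id', preserving first-seen order of ids."""
    latest: dict = {}
    for record in records:
        latest[record["id"]] = record
    return list(latest.values())


def test_deduplicate_keeps_last_record_per_id():
    records = [
        {"id": 1, "value": "old"},
        {"id": 2, "value": "other"},
        {"id": 1, "value": "new"},
    ]
    assert deduplicate(records) == [
        {"id": 1, "value": "new"},
        {"id": 2, "value": "other"},
    ]
```

Running this under pytest in a CI job is the usual way such checks gate a merge, but the exact pipeline wiring would depend on the team's tooling.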
Proficiency in software engineering languages such as Python, Java, C#, JavaScript and TypeScript. Extensive experience with data engineering languages and tools such as SQL, Spark, Scala, and Hadoop. Extensive experience with agile and DevOps methodologies, practices, and technologies. Proficiency with Azure DevOps, including CI/CD pipelines, automated testing …
Manchester Area, United Kingdom Hybrid / WFH Options
Anson McCade
data warehouse, data lake design/building, and data movement. • Design and deploy production data pipelines in big data architecture using Java, Python, Scala, Spark, and SQL. Tasks involve scripting, API data extraction, and writing SQL queries. • Comfortable designing and building for AWS cloud, encompassing Platform-as-a-Service …
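A minimal, illustrative sketch of the scripting and API-extraction work described: pull records from a hypothetical REST endpoint and load them with plain SQL (SQLite stands in for a real warehouse here, and the response shape is assumed).

```python
# Minimal sketch: extract records from a REST API and load them via SQL.
import sqlite3
import requests

API_URL = "https://api.example.com/v1/customers"  # placeholder endpoint

response = requests.get(API_URL, params={"page_size": 100}, timeout=30)
response.raise_for_status()
customers = response.json()["results"]  # assumed response shape

# SQLite stands in for a warehouse; the table and columns are illustrative.
conn = sqlite3.connect("staging.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS customers (id INTEGER PRIMARY KEY, name TEXT, country TEXT)"
)
rows = [(c["id"], c["name"], c["country"]) for c in customers]
conn.executemany(
    "INSERT OR REPLACE INTO customers (id, name, country) VALUES (?, ?, ?)",
    rows,
)
conn.commit()
conn.close()
```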
Birmingham, West Midlands, West Midlands (County), United Kingdom Hybrid / WFH Options
Tenth Revolution Group
in SQL and programming languages such as Python, Java, or Scala. Experience with cloud platforms (AWS, Azure, GCP) and big data tools (e.g., Hadoop, Spark). Knowledge of data warehousing concepts and data modelling best practices. Familiarity with modern data orchestration tools and ETL frameworks. Excellent communication skills, with …
Data Factory Azure Data Lake Azure Synapse (and data warehousing approaches) or SSIS Azure Analysis Services Experience in programming languages such as: SQL Python Spark DAX A good understanding of DevOps practices: CI/CD (Azure DevOps preferable) Git and Version Control Exceptional communication skills, both written and verbal …
modeling, and ETL/ELT processes. Proficiency in programming languages such as Python, Java, or Scala. Experience with big data technologies such as Hadoop, Spark, and Kafka. Familiarity with cloud platforms like AWS, Azure, or Google Cloud. Excellent problem-solving skills and the ability to think strategically. Strong communication …
data models. · Data Warehousing: Knowledge of data warehousing and ETL (Extract, Transform, Load) processes. · Big Data Technologies: Familiarity with big data technologies like Hadoop, Spark, and cloud storage solutions. · Data Integration: Skills in integrating data from various sources to create a cohesive dataset. · Data Security: Implementing robust security measures … or similar platforms. · Data Governance: Knowledge of data governance, security, and compliance best practices. · Modern Data Frameworks: Familiarity with modern data frameworks, such as Apache Spark, Kafka, or similar tools. · Problem-Solving: Excellent problem-solving skills and a proactive, hands-on approach to challenges. · Start-up Experience: Previous …
Engineering, or a related field. Strong programming skills in languages such as Python, SQL, or Java. Familiarity with data processing frameworks and tools (e.g., Apache Spark, Hadoop, Kafka) is a plus. Basic understanding of cloud platforms (e.g., AWS, Azure, Google Cloud) and their data services. Knowledge of database …
Storage, Azure Cosmos DB, and other storage solutions to design scalable and efficient data storage systems. Utilise big data technologies like Azure Databricks and Apache Spark to handle and analyze large volumes of data. Design, implement, and optimize data models to support business requirements and performance needs. Work …
Lakehouses using Microsoft technologies (Data Factory, Databricks, Data Lake, Power BI). Extensive experience with data engineering languages and tools such as SQL, Spark, Scala, and Hadoop. Extensive experience with agile and DevOps methodologies, practices, and technologies. Proficiency with Azure DevOps, including CI/CD pipelines, automated testing …
Oxford, England, United Kingdom Hybrid / WFH Options
Cubiq Recruitment
engineering, including building and optimising data pipelines and distributed data systems. - Strong expertise in cloud platforms (AWS, GCP, or Azure) and modern data technologies (Spark, Kafka, Hadoop, or similar). - Proficiency in programming languages such as Python, Scala, or Java. - Experience working on AI/ML-driven platforms, with …
Experience with machine learning frameworks (e.g., TensorFlow, Scikit-learn). Strong knowledge of SQL and database management. Familiarity with big data technologies (e.g., Hadoop, Spark) and cloud platforms (e.g., AWS, Azure). Soft Skills: Excellent problem-solving and analytical skills. Strong communication and interpersonal skills. Ability to work in …
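For illustration, a minimal scikit-learn sketch of the kind of workflow implied by the frameworks listed: split a bundled dataset, fit a model, and score it on held-out data; the model choice and parameters are arbitrary.

```python
# Minimal sketch: train/test split, model fit, and held-out accuracy with scikit-learn.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```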
data warehousing, data integration, and data governance. Databricks Expertise: They have hands-on experience with the Databricks platform, including its various components such as Spark, Delta Lake, MLflow, and Databricks SQL. They are proficient in using Databricks for various data engineering and data science tasks. Cloud Platform Proficiency: They …
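A hedged sketch of the Databricks components named above: a PySpark job that writes a Delta table and logs a metric to MLflow. The paths, schema, table name, and metric are illustrative; on Databricks the SparkSession, Delta support, and MLflow tracking are assumed to be preconfigured.

```python
# Minimal sketch: write a Delta table and record a run metric in MLflow.
import mlflow
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Placeholder source path; on Databricks this might be a mounted lake location.
orders = spark.read.json("/mnt/raw/orders/")
daily = orders.groupBy("order_date").count()

# Delta Lake provides ACID writes and time travel on top of the data lake.
# Assumes a 'reporting' schema already exists in the metastore.
daily.write.format("delta").mode("overwrite").saveAsTable("reporting.daily_orders")

with mlflow.start_run(run_name="daily_orders_refresh"):
    mlflow.log_metric("row_count", daily.count())
```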