Manchester, North West, United Kingdom Hybrid / WFH Options
83zero Limited
you will be an integral part of our team dedicated to building scalable and secure data platforms. You will leverage your expertise in Databricks, Apache Spark, and Azure to design, develop, and implement data warehouses, data lakehouses, and AI/ML models that fuel our data-driven operations. … Your Role: * Design and build high-performance data pipelines: Utilize Databricks and Apache Spark to extract, transform, and load data into Azure Data Lake Storage and other Azure services. * Develop and maintain secure data warehouses and data lakehouses: Implement data models, data quality checks, and governance practices to …
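As an illustration of the Databricks/Spark-to-ADLS pipeline this advert describes, here is a minimal PySpark sketch. The storage account, container, and column names are invented placeholders, not details from the advert:

```python
# Minimal sketch of a Databricks/Spark ETL into Azure Data Lake Storage.
# Storage account, containers, and columns below are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sales-etl").getOrCreate()

# Extract: read raw CSV landed in ADLS Gen2 (abfss:// path is a placeholder)
raw = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .csv("abfss://raw@examplestorage.dfs.core.windows.net/sales/"))

# Transform: basic cleansing and typing
clean = (raw
         .dropDuplicates(["order_id"])
         .withColumn("order_date", F.to_date("order_date"))
         .filter(F.col("amount") > 0))

# Load: write curated output back to the lake as Delta, the default
# table format on Databricks
(clean.write
 .format("delta")
 .mode("overwrite")
 .save("abfss://curated@examplestorage.dfs.core.windows.net/sales/"))
```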
City of London, London, Farringdon, United Kingdom Hybrid / WFH Options
83zero Ltd
you will be an integral part of our team dedicated to building scalable and secure data platforms. You will leverage your expertise in Databricks, Apache Spark, and Azure to design, develop, and implement data warehouses, data lakehouses, and AI/ML models that fuel our data-driven operations. … Your Role: * Design and build high-performance data pipelines: Utilize Databricks and Apache Spark to extract, transform, and load data into Azure Data Lake Storage and other Azure services. * Develop and maintain secure data warehouses and data lakehouses: Implement data models, data quality checks, and governance practices to …
Spark Architect/SME Contract Role: Long-term contract, initially 6 months and extendable. Location: Leeds, UK (min 3 days onsite). JOB ADVERT FOR IJP Exciting Long-Term Opportunity to Work on Cutting-Edge Technology in One of Client's Top Fastest-Growing Accounts which … than 38 countries. It has an IT infrastructure of 200,000+ servers, 20,000+ database instances, and over 150 PB of data. As a Spark Architect, you will be responsible for refactoring legacy ETL code (for example, DataStage) into PySpark using Prophecy low-code/no-code and available … converters. Converted code is causing failures/performance issues. The end-client account is looking for an enthusiastic Spark Architect with deep component understanding of Spark Data Integration (PySpark, scripting, variable setting, etc.), Spark SQL, and Spark explain plans, who is also able to analyse Spark code failures …
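To ground the explain-plan analysis this role calls for, here is a hypothetical PySpark sketch of diagnosing a slow join from its physical plan; the table and column names are invented:

```python
# Hypothetical example of reading a Spark plan to diagnose a slow job.
# Table names are placeholders, not from the advert.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("plan-review").getOrCreate()

orders = spark.read.table("orders")        # assumed large fact table
customers = spark.read.table("customers")  # assumed small dimension table

joined = orders.join(customers, "customer_id")

# "formatted" mode (Spark 3+) prints a readable physical plan; a
# SortMergeJoin with large shuffle exchanges on the small side often
# signals a missed broadcast opportunity.
joined.explain("formatted")

# A common corrective recommendation: broadcast the small table
joined_fast = orders.join(broadcast(customers), "customer_id")
joined_fast.explain("formatted")  # should now show BroadcastHashJoin
```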
you will be an integral part of our team dedicated to building scalable and secure data platforms. You will leverage your expertise in Databricks, Apache Spark, and Azure to design, develop, and implement data warehouses, data lakehouses, and AI/ML models that fuel our data-driven operations. … Your Role: Design and build high-performance data pipelines: Utilize Databricks and Apache Spark to extract, transform, and load data into Azure Data Lake Storage and other Azure services. Develop and maintain secure data warehouses and data lakehouses: Implement data models, data quality checks, and governance practices to … for efficiency and scalability. Key Skills and Experience: Minimum 3+ years of experience as a Data Engineer or in a similar role. Proven expertise in Databricks, Apache Spark, and data pipeline development. Strong understanding of data warehousing concepts and practices. Experience with the Microsoft Azure cloud platform, including Azure Data Lake …
you will be an integral part of our team dedicated to building scalable and secure data platforms. You will leverage your expertise in Databricks, Apache Spark, and Azure to design, develop, and implement data warehouses, data lakehouses, and AI/ML models that fuel our data-driven operations. … Your Role: • Design and build high-performance data pipelines: Utilize Databricks and Apache Spark to extract, transform, and load data into Azure Data Lake Storage and other Azure services. • Develop and maintain secure data warehouses and data lakehouses: Implement data models, data quality checks, and governance practices to … technologies and best practices. Required Skills and Qualifications: • Minimum 3+ years of experience as a Data Engineer or in a similar role. • Proven expertise in Databricks, Apache Spark, and data pipeline development. • Strong understanding of data warehousing concepts and practices. • Experience with the Microsoft Azure cloud platform, including Azure Data Lake …
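These adverts repeatedly mention data quality checks alongside data models and governance. A minimal sketch of such checks in plain PySpark follows; the Delta path and column name are assumed for illustration:

```python
# Sketch of lightweight data-quality gates, assuming a Databricks/Delta
# environment. Path and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.format("delta").load("/mnt/curated/sales")  # assumed path

total = df.count()
null_keys = df.filter(F.col("order_id").isNull()).count()
dupes = total - df.dropDuplicates(["order_id"]).count()

# Fail the pipeline early rather than propagate bad data downstream
assert null_keys == 0, f"{null_keys} rows with null order_id"
assert dupes == 0, f"{dupes} duplicate order_id values"
```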
processing workflows and implementing industry best practices. Experience in Azure Databricks, SQL Server and Azure SQL. Experience as a data engineer with a focus on Apache Spark and the Databricks platform. Proficiency in programming languages such as Python, Scala or SQL. Skills required: Azure Databricks, SQL Server, and Azure … SQL, Apache Spark, ETL processes and the Databricks platform, Python, Scala or SQL. Benefits: You will receive a competitive salary, a generous benefits package, training, and development, as well as an exciting career within a fast-paced and dynamic business. Your benefits include a contributory pension of up to …
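For the Azure Databricks plus SQL Server/Azure SQL pairing this advert lists, a typical ingestion step is a JDBC read. A sketch follows; the server, database, table, and credential handling are placeholders (in practice credentials would come from a Databricks secret scope):

```python
# Sketch of pulling an Azure SQL table into Databricks over JDBC.
# Server, database, table, and credentials are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("azure-sql-ingest").getOrCreate()

jdbc_url = ("jdbc:sqlserver://example-server.database.windows.net:1433;"
            "database=exampledb;encrypt=true")

orders = (spark.read.format("jdbc")
          .option("url", jdbc_url)
          .option("dbtable", "dbo.orders")
          .option("user", "etl_user")              # placeholder
          .option("password", "<from-key-vault>")  # use a secret scope in practice
          .load())

orders.write.format("delta").mode("overwrite").saveAsTable("bronze.orders")
```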
data from a variety of structured and unstructured corporate sources, third-party providers, and publicly available data. • Using cloud data processing frameworks such as Apache Spark, Databricks, etc., to process and analyse complex, large datasets. • Ideally, experience using cloud products and solutions via APIs, especially MS Azure and … audience. Key Responsibilities: • Extract, process, and analyse data from various corporate sources, third-party providers, and public data. • Use cloud data processing frameworks (e.g., Apache Spark, Databricks) to analyse complex, large datasets. • Design multi-stage data pipelines combining scripted transformations and API calls. • Develop and apply models for …
location is London, UK. Key Responsibilities: Design, develop, and maintain ETL data pipelines using Scala and PySpark. Work on big data processing frameworks like Apache Spark to process large datasets efficiently. Integrate various data sources and databases into the data processing ecosystem. Collaborate in Agile environments, contributing to … code reviews, and continuous integration practices. Required Skills and Qualifications: Proficiency in Scala and PySpark for data processing and ETL development. Strong understanding of Apache Spark and distributed computing frameworks. Strong understanding of data structures, algorithms, and software engineering best practices. Excellent problem-solving and analytical skills. …
Databricks, Apache Spark, SQL, Python, Azure Data Platform. A fast-growing start-up Data Consultancy has engaged Primus Connect to source an excellent Principal Data Engineering Consultant on an exclusive basis. They have grown rapidly since their formation and have a strong pipeline and relationships with Databricks … is allowing them to continue to grow. You should have excellent technical, consultancy, client-facing and leadership skills; key skills include: Excellent Databricks & Apache Spark skills. Excellent Azure Data Platform skills. Experience with Fabric would be advantageous. Experience of Technical/Team Leadership on Consultancy projects. Client …
and applications. Implement ETL processes to extract, transform, and load data from various sources into the Foundry platform. Integrate data from different systems using Apache Spark and Airflow to ensure data consistency and accuracy. Collaborate with data scientists and analysts to understand their data needs and provide effective … in Palantir Foundry, with a strong understanding of its architecture, components, and best practices. Expertise in ETL processes and data integration techniques. Proficiency in Apache Spark and Airflow for data processing and workflow orchestration. Solid experience with cloud platforms, ideally AWS, Terraform, Docker, and Kubernetes. …
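For the Spark-plus-Airflow orchestration this advert mentions, a minimal Airflow DAG sketch follows. The application path and connection id are assumptions, and Foundry-specific integration would differ from this generic Spark submission:

```python
# Minimal Airflow DAG sketch submitting a PySpark job on a schedule.
# dag_id, application path, and conn_id are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="foundry_ingest",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+ keyword; older versions use schedule_interval
    catchup=False,
) as dag:
    transform = SparkSubmitOperator(
        task_id="transform_orders",
        application="/opt/jobs/transform_orders.py",  # hypothetical PySpark job
        conn_id="spark_default",
    )
```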
Mandatory Skills You need to have the below skills. · At least 12+ years of IT experience with a deep understanding of components around Spark Data Integration (PySpark, scripting, variable setting, etc.), Spark SQL, and Spark explain plans. · Spark SME – Be able to analyse Spark code failures … through Spark plans and make corrective recommendations. · Be able to traverse and explain the architecture you have been a part of, and why any particular tool/technology was used. · Spark SME – Be able to review PySpark and Spark SQL jobs and make performance improvement recommendations. … Spark SME – Be able to understand DataFrames/Resilient Distributed Datasets (RDDs), understand any memory-related problems, and make corrective recommendations. · Monitoring – Spark jobs using wider tools such as Grafana to see whether there are cluster-level failures. · Cloudera (CDP) Spark and how the run …
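The DataFrame/RDD memory reasoning above often comes down to choosing a storage level so a reused dataset does not exhaust executor memory. A minimal sketch, with an assumed input path:

```python
# Sketch of caching a reused DataFrame with a spill-friendly storage level.
# Input path is hypothetical.
from pyspark import StorageLevel
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("memory-review").getOrCreate()
events = spark.read.parquet("/data/events")  # assumed large input

# MEMORY_AND_DISK spills partitions to disk instead of failing with OOM
# when the cached data exceeds available executor memory
events.persist(StorageLevel.MEMORY_AND_DISK)

daily = events.groupBy("event_date").count()
daily.show()

events.unpersist()  # release the cache once the reuse window has passed
```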
and ETL processes to ingest and transform data. Design and develop end-to-end analytics solutions using big data batch and stream processing frameworks (e.g., Spark Structured Streaming, Kafka, etc.) and large language models (LLMs). Build and train machine learning models and utilize large language models. Help mentor and train team … desired. Databricks Certified Data Engineer Professional or equivalent hands-on experience. Five-plus years of programming experience with Python and SQL. Five-plus years of experience with Apache Spark for data processing. Five-plus years of experience building large-scale data pipelines with Apache Kafka and Spark stream processing. Three-plus …
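As a sketch of the Kafka-to-Spark Structured Streaming pattern named above, assuming the spark-sql-kafka connector is on the classpath; the broker address, topic, and paths are placeholders:

```python
# Minimal Structured Streaming sketch: Kafka in, Delta out.
# Broker, topic, and checkpoint/output paths are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("stream-ingest").getOrCreate()

stream = (spark.readStream
          .format("kafka")  # requires the spark-sql-kafka package
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "events")
          .load())

# Kafka delivers key/value as binary; cast before parsing downstream
parsed = stream.select(F.col("value").cast("string").alias("json"))

query = (parsed.writeStream
         .format("delta")
         .option("checkpointLocation", "/chk/events")
         .outputMode("append")
         .start("/data/events_stream"))
query.awaitTermination()
```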
Manchester, North West, United Kingdom Hybrid / WFH Options
bigspark
work-from-home) basis. Bigspark is a UK-based consultancy delivering next-level data platforms and solutions with a focus on exciting technologies, including Apache Spark and Apache Kafka, and working on projects within Machine Learning, Data Engineering, Streaming and Data Science. We provide our clients with …
proficiency in Python and SQL Extensive experience with cloud platforms (AWS, Azure, or GCP) Hands-on experience with big data technologies such as Hadoop, Spark, or Kafka Familiarity with data warehousing concepts and implementation Experience with CI/CD practices and DevOps principles Knowledge of data modeling techniques and …
Salford, England, United Kingdom Hybrid / WFH Options
Tribal Tech - The Digital, Data & AI Specialists
API integration Technical Requirements: - Proficiency in Python, Java, or Scala - Strong SQL skills and experience with relational databases - Expertise in big data technologies (Hadoop, Spark, Kafka) - Cloud platform knowledge (AWS, Azure, or Google Cloud) - Experience with ETL tools and data warehousing concepts - Familiarity with NoSQL databases and containerization tools …
a fast-paced, high-growth fintech environment - Deep expertise in cloud-based data technologies (AWS, Azure, or GCP) and big data processing frameworks (e.g., Spark, Hadoop) - Strong knowledge of data warehousing concepts, ETL/ELT processes, and data modeling techniques specific to financial services - Experience with real-time data …
scalable, robust relational and non-relational database architectures Hands-on experience with cloud platforms (e.g., AWS, GCP, Azure) and big data technologies (e.g., Hadoop, Spark, Kafka). Proficient in SQL, but also highly skilled in non-relational database querying and manipulation. Experience with ETL frameworks that handle both real …
Proficiency in SQL and modern programming languages (preferably Python) - Extensive experience with cloud platforms (AWS, Azure, or GCP) and big data technologies (e.g., Hadoop, Spark, Kafka) - Solid understanding of data modeling techniques and best practices - Experience in designing and implementing data warehousing solutions - Strong problem-solving abilities and excellent …
create compelling data stories through visualizations. Cloud Platforms: Understanding cloud-based data solutions (Azure, AWS or GCP). Big Data Technologies: Familiarity with Hadoop, Spark, and other big data tools. Machine Learning: Basic understanding of ML algorithms and their applications. AI: Knowledge of AI concepts and potential applications in …
and able to guide how one could deploy infrastructure into different environments. Knowledge of handling and transforming various data types (JSON, CSV, etc.) with Apache Spark, Databricks or Hadoop. Good understanding of possible architectures involved in modern data system design (Data Warehouses, Data Lakes, Data Meshes). Ability to …
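For the JSON/CSV handling this advert mentions, a common Spark idiom is reading with an explicit schema, which is cheaper and safer than inference on large lake data. A sketch follows; the paths and fields are invented for illustration:

```python
# Sketch of reading semi-structured JSON/CSV with an explicit schema.
# Paths and fields are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.types import (StructType, StructField, StringType,
                               DoubleType, TimestampType)

spark = SparkSession.builder.appName("semi-structured").getOrCreate()

schema = StructType([
    StructField("id", StringType()),
    StructField("amount", DoubleType()),
    StructField("ts", TimestampType()),
])

csv_df = spark.read.schema(schema).option("header", "true").csv("/landing/csv/")
json_df = spark.read.schema(schema).json("/landing/json/")

# Union the two sources into one curated dataset
csv_df.unionByName(json_df).write.mode("append").parquet("/curated/events/")
```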
cloud platform (ideally AWS) and its data services Great written and verbal communication skills 👍 Bonus points for: Experience with big data technologies (e.g. Hadoop, Spark, Kafka) Experience with AWS data services (e.g. S3, Athena, AWS Glue) Familiarity with data warehousing solutions (e.g. Redshift, BigQuery, Snowflake) Knowledge of containerisation and …
Microsoft Azure, Databricks, Snowflake. Architectural and/or feature knowledge of one or more of the following Programming Languages/Packages: Python, Java, Scala, Spark, SQL, NoSQL databases. Experience working within Agile delivery methodologies. Proven ability to be successful in a matrixed organisation, and to enlist support and commitment …
City Of London, England, United Kingdom Hybrid / WFH Options
Sphere Digital Recruitment | Best Small Company 2022
AWS, Google Cloud, or Azure. A solid understanding of database systems, data modelling, and ETL processes. Experience with big data technologies like Hadoop or Spark is a plus. Excellent problem-solving skills and the ability to work collaboratively in a fast-paced environment. A passion for data, with a …
Wilmslow, England, United Kingdom Hybrid / WFH Options
The Citation Group
Understanding of cloud computing security concepts. Experience in relational cloud-based database technologies like Snowflake, BigQuery or Redshift. Experience in open-source technologies like Spark, Kafka, Beam. Understanding of cloud tools such as AWS, Microsoft Azure or Google Cloud. Familiarity with DBT, Delta Lake, Databricks. Experience working in an …
Databricks • Must have hands-on experience on at least 2 hyperscalers (GCP/AWS/Azure platforms), specifically in Big Data processing services (Apache Spark, Beam or equivalent). • In-depth knowledge of key technologies like BigQuery/Redshift/Synapse/Pub Sub/Kinesis …
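Since this advert names Apache Beam alongside Spark as a Big Data processing option, here is a minimal Beam sketch (Python SDK). The paths are placeholders; switching runners (DirectRunner, DataflowRunner, SparkRunner) is a pipeline-option change rather than a code change:

```python
# Minimal Apache Beam pipeline sketch. Input/output paths are hypothetical;
# with no options given, it runs locally on the DirectRunner.
import apache_beam as beam

with beam.Pipeline() as p:
    (p
     | "Read" >> beam.io.ReadFromText("/landing/events.txt")
     | "Filter" >> beam.Filter(lambda line: line.strip() != "")
     | "Count chars" >> beam.Map(lambda line: len(line))
     | "Sum" >> beam.CombineGlobally(sum)
     | "Write" >> beam.io.WriteToText("/curated/total_chars"))
```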