Nottingham, Nottinghamshire, East Midlands, United Kingdom
Experian Ltd
Glue and SageMaker
Infrastructure-as-Code tools and approaches (we use the AWS CDK with CloudFormation)
Data processing frameworks such as pandas, Spark and PySpark
Machine learning concepts like model training, model registry, model deployment and monitoring
Development and CI/CD tools (we use GitHub, CodePipeline and CodeBuild)
City of London, London, United Kingdom Hybrid / WFH Options
Develop
Modelling within a cloud-based data platform
Strong experience with SQL Server
Azure data engineering stack, including Azure Synapse and Azure Data Lake
Python, PySpark and T-SQL
In return you will be offered a competitive salary and benefits package, remote working options and an opportunity to work with…
end data solutions, delivering best-in-class experiences for their external clients. Technical Background:
SAS
SAS Base
Azure or AWS or GCP
Python/PySpark
Proficiency in SQL and/or similar data technologies
Familiarity with data pipeline tools and ETL processes
Knowledge of cloud platforms and data architecture…
of Python
Experience developing in the cloud (AWS preferred)
Solid understanding of libraries like pandas and NumPy
Experience in data warehousing tools like Snowflake, PySpark, Databricks
Commercial experience with performant database programming in SQL
Capability to solve complex technical issues, assessing risks before they arise.
Please apply today.
Greater Manchester, England, United Kingdom Hybrid / WFH Options
MRJ Recruitment
10+ years' experience as a Lead Data Engineer
Educated to degree level in a QS top 100 university
Proven experience delivering scalable data pipelines
PySpark, SQL, DevOps/DataOps/CI/CD
Expertise in designing, constructing, administering, and maintaining data warehouses and data lakes
Data Modelling/Data Architecture
Data Migration…
experience leading a data engineering team. Key Tech:
- AWS (S3, Glue, EMR, Athena, Lambda)
- Snowflake, Redshift
- DBT (Data Build Tool)
- Programming: Python, Scala, Spark, PySpark or Ab Initio
- Data pipeline orchestration (Apache Airflow)
- Knowledge of SQL
This is a 6-month initial contract with a trusted client of ours.
for data engineering, i.e. Azure Functions
* Core skills in coding with SQL, Python and Spark
* Proven experience using Databricks, i.e. lakehouse, Delta Live Tables, PySpark etc.
STEM subject e.g. Mathematics, Statistics or Computer Science Experience in personalisation, segmentation with a focus on CRM Retail experience is a bonus Experience with PySpark/Azure/Databricks is a bonus Experience of management NEXT STEPS: If this role looks of interest, please reach out to Joseph Gregory.
Leicester, England, United Kingdom Hybrid / WFH Options
Harnham
STEM subject e.g. Mathematics, Statistics or Computer Science Experience in personalisation, segmentation with a focus on CRM Retail experience is a bonus Experience with PySpark/Azure/Databricks is a bonus Experience of management BENEFITS: Pension scheme Gym Membership Share options Bonus Hybrid working HOW TO APPLY: Register…
Nottingham, England, United Kingdom Hybrid / WFH Options
Harnham
STEM subject e.g. Mathematics, Statistics or Computer Science Experience in personalisation, segmentation with a focus on CRM Retail experience is a bonus Experience with PySpark/Azure/Databricks is a bonus Experience of management BENEFITS: Pension scheme Gym Membership Share options Bonus Hybrid working HOW TO APPLY: Register…
customer modelling but not required. Candidates should be looking to work in a fast-paced, startup-feel environment. Tech across: Python, SQL, AWS, Databricks, PySpark, A/B Testing, MLflow, APIs. Apply below
experience-related problems such as workforce management, demand forecasting, or root cause analysis Strong visualisation skills including experience with Tableau Familiarity with Databricks and PySpark for data manipulation and analysis Familiarity with Git-based source control methodologies, including branching and pull requests A self-starter, passionate about converting data…
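The listing above names demand forecasting as one of its problem spaces. As a toy illustration only (pure Python, made-up numbers — real work of this kind would use proper time-series models, typically on Databricks/PySpark), a trailing moving average serves as the simplest naive next-period forecast:

```python
# Naive demand forecast: predict the next period as the mean of the last
# `window` observations. The demand history below is hypothetical.
def moving_average_forecast(history, window=3):
    """Forecast the next value as the mean of the last `window` observations."""
    if len(history) < window:
        raise ValueError("not enough history for the requested window")
    return sum(history[-window:]) / window

demand = [120, 130, 125, 140, 150]
print(moving_average_forecast(demand))  # mean of 125, 140, 150
```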
Leeds, England, United Kingdom Hybrid / WFH Options
Damia Group
over 150 PB of data. As a Spark Scala Engineer, you will be responsible for refactoring legacy ETL code, for example DataStage, into PySpark using Prophecy's low-code/no-code tooling and the available converters. The converted code is causing failures and performance issues. Your responsibilities: As a Spark Scala Engineer…
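The DataStage-to-PySpark conversion work described above is, at its core, porting row-level filter/derive/aggregate logic. A minimal pure-Python stand-in for the shape of one such converted stage (field names and data are hypothetical; a real port would express this as PySpark DataFrame operations such as `filter`, `withColumn` and `groupBy(...).agg(...)`):

```python
# Hypothetical stand-in for a converted ETL stage: drop cancelled orders,
# derive a net amount, then aggregate per customer.
from collections import defaultdict

def transform(rows):
    """Filter out cancelled orders, derive net amount, aggregate per customer."""
    totals = defaultdict(float)
    for row in rows:
        if row["status"] == "CANCELLED":
            continue  # filter step
        net = row["amount"] - row.get("discount", 0.0)  # derive step
        totals[row["customer_id"]] += net  # aggregate step
    return dict(totals)

orders = [
    {"customer_id": "C1", "amount": 100.0, "discount": 10.0, "status": "OK"},
    {"customer_id": "C1", "amount": 50.0, "status": "OK"},
    {"customer_id": "C2", "amount": 75.0, "status": "CANCELLED"},
]
print(transform(orders))  # {'C1': 140.0}
```

Performance problems in such conversions often come from the generated code reproducing this row-at-a-time shape instead of letting Spark plan set-based operations.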
City Of London, England, United Kingdom Hybrid / WFH Options
RJC Group
experience Data access methods (SQL, GraphQL, APIs) Beneficial Requirements Experience around data science tools and algorithms Manipulation technologies (e.g., WebSockets, Kafka, Spark) TensorFlow, pandas, PySpark and scikit-learn would be great Salary up to £75K + 20% bonus and benefits package We have interview slots lined up for later…
Greater London, England, United Kingdom Hybrid / WFH Options
Agora Talent
come with scaling a company • The ability to translate complex and sometimes ambiguous business requirements into clean and maintainable data pipelines • Excellent knowledge of PySpark, Python and SQL fundamentals • Experience in contributing to complex shared repositories. What’s nice to have: • Prior early-stage B2B SaaS experience involving client…
start interviewing ASAP. Responsibilities:
Azure Cloud Data Engineering using Azure Databricks
Data Warehousing
Data Engineering
Very strong with the Microsoft Stack
ESSENTIAL knowledge of PySpark clusters
Python & C# scripting experience
Experience of message queues (Kafka)
Experience of containerization (Docker)
FINANCIAL SERVICES EXPERIENCE (Energy/commodities trading)
If you have…
preferably GCP | Expertise in event-driven data integrations and click-stream ingestion | Proven ability in stakeholder management and project leadership | Proficiency in SQL, Python, PySpark | Solid background in data pipeline orchestration, data access, and retention tooling | Demonstrable impact on infrastructure scalability and data privacy initiatives | Collaborative spirit | Innovative problem…
vision to convert data requirements into logical models and physical solutions. with data warehousing solutions (e.g. Snowflake) and data lake architecture, Azure Databricks/PySpark & ADF. retail data model standards - ADRM. communication skills, organisational and time management skills. to innovate, support change and problem solve. attitude, with the ability…
Mart. Utilize Vector Databases, Cosmos DB, Redis, and Elasticsearch for efficient data storage and retrieval. Demonstrate proficiency in programming languages and tools including Python, Spark, Databricks, PySpark, SQL, and ML algorithms. Implement machine learning models and algorithms using PySpark, scikit-learn, and other relevant tools. Manage Azure DevOps, CI/… environments, Azure Data Lake, Azure Data Factory, Microservices architecture. Experience with Vector Databases, Cosmos DB, Redis, Elasticsearch. Strong programming skills in Python, Spark, Databricks, PySpark, SQL, ML algorithms, Gen AI. Knowledge of Azure DevOps, CI/CD pipelines, GitHub, Kubernetes (AKS). Experience with MLOps tools such…
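The vector-database retrieval mentioned in the listing above boils down to nearest-neighbour lookup by a similarity measure. A hand-rolled cosine-similarity sketch as a toy stand-in (vectors and document names are made up; a real system would delegate this to the vector store's own index):

```python
# Toy nearest-neighbour retrieval over an in-memory "index" of vectors,
# using cosine similarity. All data here is hypothetical.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def nearest(query, index):
    """Return the key of the stored vector most similar to `query`."""
    return max(index, key=lambda k: cosine(query, index[k]))

index = {"doc_a": [1.0, 0.0], "doc_b": [0.0, 1.0], "doc_c": [0.7, 0.7]}
print(nearest([0.9, 0.1], index))  # 'doc_a'
```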
in STEM subjects. Strong experience in data pipelines and deploying ML models Preference for experience in retail/marketing Tech across: Python, AWS, Databricks, PySpark, AB Testing, MLFlow, APIs Experience in feature engineering and third-party data Apply below more »
Mid-or-Senior level Data Scientist Solid knowledge of Data Engineering principles, including productionisation Technical experience with some or all of the following: Python, PySpark, scikit-learn, pandas, Azure Data Services, Databricks. If this sounds of interest, please apply. more »
not received on time. Communicating outages with the end users of a data pipeline What We Value Comfortable reading and writing code in Python, PySpark and Java. Basic understanding of Spark and interested in learning the basics of tuning Spark jobs. Data pipeline monitoring team members should be able…
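The monitoring duty described above, flagging data that is not received on time, can be sketched as a data-freshness check. A minimal pure-Python illustration (feed names and SLA windows are hypothetical; production systems would typically use an orchestrator's sensors or alerting instead):

```python
# Minimal data-freshness check: report feeds whose most recent delivery
# is older than their SLA window. Feed names and SLAs are hypothetical.
from datetime import datetime, timedelta, timezone

def late_feeds(last_delivery, sla, now=None):
    """Return feeds whose most recent delivery breaches their SLA window."""
    now = now or datetime.now(timezone.utc)
    return sorted(
        name for name, ts in last_delivery.items()
        if now - ts > sla[name]
    )

now = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
deliveries = {
    "orders": now - timedelta(hours=1),   # fresh
    "clicks": now - timedelta(hours=30),  # stale
}
slas = {"orders": timedelta(hours=6), "clicks": timedelta(hours=24)}
print(late_feeds(deliveries, slas, now=now))  # ['clicks']
```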
performance, scalability, and reliability. Technical Skills required: RedShift Glue (inc. Glue Studio, Glue Data Quality, Glue DataBrew) Step Functions Athena Lambda Kinesis Python, Spark, PySpark, SQL Your contributions as a Data Engineer will directly impact the organization's operations and revenue. In addition to a competitive annual salary, we…
of 4 years commercial experience Strong proficiency in AWS and relevant technologies Good communication skills Preferably experience with some or all of the following: PySpark, Kubernetes, Terraform Experience in a start-up/scale-up or fast-paced environment is desirable. HOW TO APPLY Please register your interest by…
Senior Databricks Migration Consultant Remote This role demands in-depth knowledge of data engineering, cloud technologies (preferably AWS), and a successful record in enterprise-level data migrations into Databricks. You will ensure the efficient and secure transition of our data…