data access, and data storage techniques. Excellent problem-solving skills and the ability to think algorithmically. Desirable Skills: Knowledge of big data technologies (Hadoop, Spark, Kafka) is highly desirable. Familiarity with data governance and compliance requirements.
with ETL processes and tools. Knowledge of cloud platforms (e.g., GCP, AWS, Azure) and their data services. Familiarity with big data technologies (e.g., Hadoop, Spark) is a plus. Understanding of AI tools like Gemini and ChatGPT is also a plus. Excellent problem-solving and communication skills. Ability to work…
London, Liverpool, Merseyside, United Kingdom Hybrid / WFH Options
Opus Recruitment Solutions
rate of £250-£400, falling inside IR35 regulations. Key Responsibilities: Design, develop, and maintain scalable data pipelines and ETL processes using AWS, Databricks, Python, Spark, and SQL. Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver high-quality data solutions. Optimize and troubleshoot data … Glue). Hands-on experience with Databricks for data processing and analytics. Proficient in Python programming for data manipulation and automation. Solid understanding of Apache Spark for big data processing. Strong SQL skills for data querying, transformation, and analysis. Excellent problem-solving abilities and attention to detail. Ability…
a qualified Data Engineer to join our team, where your responsibilities will include: Designing, optimizing, and maintaining scalable data pipelines and ETL processes using Spark, ensuring streamlined data processing and integration. Collaborating cross-functionally to translate complex data requirements into actionable technical solutions that drive business objectives. Leveraging Microsoft … the Midlands. Ideal Candidate Profile: We are seeking an individual who has the following attributes: Proven expertise as a Data Engineer, demonstrating proficiency in Apache Spark and cloud-based technologies, particularly Microsoft Azure and Databricks. Strong programming skills, with a focus on Python, along with proficiency in ETL…
least one cloud platform (preferably GCP). BSc/MSc in computer science, maths, physics or a STEM subject. Basic knowledge of statistics and machine learning. Experience with Spark, Apache services, ETL tools, data visualization and dashboards. Experience with streamed data processing, parallel compute, and/or event-based architectures. Experience with web-scraping…
leading business intelligence platform (e.g. Microsoft, Crystal, Qlik, SAP, Tableau). Good understanding of open source, big data, and cloud data platforms (e.g. Hadoop, Spark, Hive, Pentaho, AWS, Azure); given a business problem, you can analyse and evaluate options and recommend solutions. Proven experience in designing, building and maintaining…
tooling. Scripting experience (Python, Perl, Bash, etc.). ELK (Elastic stack). JavaScript. Cypress. Linux experience. Search engine technology (e.g., Elasticsearch). Big Data technology experience (Hadoop, Spark, Kafka, etc.). Microservice and cloud-native architecture. Desirable Skills: Able to demonstrate experience of troubleshooting and diagnosis of technical issues. Able to demonstrate excellent…
etc). Experience with SQL and query design on large, complex datasets. Experience with cloud and big-data tools and frameworks like Databricks/Spark, Airflow, Snowflake, etc. Expertise designing and developing with distributed data processing platforms like Databricks/Spark. Experience using ELT/ETL tools such as…
Edinburgh, Scotland, United Kingdom Hybrid / WFH Options
BlackRock
includes: DevOps automation, idempotent deployment testing, and continuous delivery pipelines. Networking and security protocols, load balancers, API gateways. ETL tooling and workflow engines (e.g., Spark, Airflow, Dagster, Flyte). Data modeling, and strategies for cleaning and validating data at scale. Performance tuning on RDBMS or Big Data tools for row…
Manchester, Greater Manchester, United Kingdom Hybrid / WFH Options
AutoTrader UK
of applying data technologies to solve problems and you can expect to work with a range of technologies including dbt, Kotlin/Java, Python, Apache Spark and Kafka. Join us as a Principal Software Engineer and, as well as shaping and creating the foundations for insight-driven, market-leading … delivery chain, from data to products. You will have an understanding of data modelling and experience with data engineering tools and platforms such as Kafka, Spark, and Hadoop. Comfortable presenting technical ideas to non-technical colleagues. Experience mentoring, coaching and sharing technical expertise. Strong teamwork ethic, communication and collaboration skills. Support with … our recruitment process by evaluating candidates at all stages. Although not essential, helpful experience includes: messaging systems such as Apache Kafka or Google Pub/Sub; Docker/container orchestration; working experience with Google Cloud. Every candidate brings a unique mix of skills and qualities to the table. We're all about inclusivity…
engineers of varying levels of experience. Flexibility and willingness to adapt to new software and techniques. Nice to Have: Experience working with projects in Apache Spark, Databricks or similar. Expert cloud platform knowledge, e.g. Azure. What will be your key responsibilities? A technical expert and leader on the…
or similar technologies. Hands-on experience with AWS and Snowflake. Financial services industry experience (highly desirable). Experience with Big Data technologies such as Spark or Hadoop. Bachelor's degree in computer science, engineering, or equivalent. Further information available upon application. ECS Recruitment Group Ltd is acting as an…
Guildford, England, United Kingdom Hybrid / WFH Options
Hawksworth
warehousing and ETL frameworks. Proficiency in working with relational databases (e.g., Oracle, PostgreSQL), Parquet/Delta files and big data technologies (e.g. Synapse, Hadoop, Spark, Kafka). Knowledge of Microsoft Azure and associated data services is good to have. Strong analytical and data interpretation skills, with the ability to…
data lake/warehouse/hub built in GCP. You are confident using the full suite of Google data products, IaC, CI/CD, Spark and Kafka. Our core toolbox includes Google Cloud Big Data technologies, Scala, Java & Python, and Jenkins, amongst others. We value first-principles reasoning to select…
London, England, United Kingdom Hybrid / WFH Options
Anson McCade
tools such as Informatica MDM, Informatica AXON, Informatica EDC, and Collibra • MySQL, SQL Server, Oracle, Snowflake, PostgreSQL and NoSQL databases • Programming languages and frameworks such as Spark or Python • Amazon Web Services, Microsoft Azure or Google Cloud, and distributed processing technologies such as Hadoop. Benefits: • Base Salary…
to: Backend technology, Python. Databases like MSSQL. Front-end technology, Java. Cloud platform, AWS. Programming language, JavaScript (React.js). Big data technologies such as Hadoop, Spark, or Kafka. What We Need from You: Essential Skills: A degree in Computer Science, Engineering, or a related field, or equivalent experience. Proficiency in…
Azure Synapse Analytics. Strong SQL and Python skills. Experience with data modeling, ETL processes, and data warehousing. Knowledge of big data technologies such as Spark and Hadoop is a plus. Excellent problem-solving skills and attention to detail. Strong communication and collaboration skills. Experience in the healthcare sector is…
complex data warehouses and/or data lakes. Familiarity with cloud-based analytics platforms such as AWS, Azure, Snowflake, Google Cloud Platform (BigQuery), Spark, and Splunk. Proficiency in SQL and experience using one or more of the following languages: R, Python, Scala, and Julia, including relevant frameworks…
data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook. Our Commitment to Diversity and Inclusion: At Databricks, we are…
Modelling. Experience with one or more of these programming languages: Python, Scala/Java. Experience with distributed data and computing tools, mainly Apache Spark & Kafka. Understanding of critical-path approaches and how to iterate to build value, engaging with stakeholders actively at all stages. Able to deal…
with Git for version control and project management, alongside some knowledge of Linux/shell. Data platform familiarity - previous experience of working with both Apache Spark and MapReduce data processing and analytics frameworks. Reporting expertise - experience with Tableau, Power BI and Excel, alongside notebooks for experiment documentation. What…