Data Scientists and Service Engineering teams Experience with design, development and operations that leverages deep knowledge in the use of services like Amazon Kinesis, Apache Kafka, Apache Spark, Amazon SageMaker, Amazon EMR, NoSQL technologies and other third parties Develop and define key business questions and to build … a related field Experience of data platform implementation, including 3+ years of hands-on experience in implementation and performance tuning of Kinesis/Kafka/Spark/Storm implementations Experience with analytic solutions applied to the Marketing or Risk needs of enterprises Basic understanding of machine learning fundamentals Ability to … take machine learning models and implement them as part of a data pipeline IT platform implementation experience Experience with one or more relevant tools (Flink, Spark, Sqoop, Flume, Kafka, Amazon Kinesis) Experience developing software code in one or more programming languages (Java, JavaScript, Python, etc.) Current hands-on implementation experience …
Spark Architect/SME Contract Role – 6 months to begin with and extendable Location: Leeds, UK (min. 3 days onsite) Context: Legacy ETL code (for example DataStage) is being refactored into PySpark using Prophecy low-code/no-code and available converters. Converted code is causing failures/performance issues. … Skills: Spark Architecture – component understanding around Spark Data Integration (PySpark, scripting, variable setting etc.), Spark SQL, Spark explain plans. Spark SME – be able to analyse Spark code failures through Spark plans and make correcting recommendations. Spark SME – be able to review PySpark … and Spark SQL jobs and make performance improvement recommendations. Spark SME – be able to understand DataFrames/Resilient Distributed Datasets (RDDs), understand any memory-related problems and make corrective recommendations. Monitoring – be able to monitor Spark jobs using wider tools such as Grafana to see …
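The explain-plan analysis the role above describes usually starts from the text a Spark physical plan prints (via `df.explain()` or `EXPLAIN FORMATTED` in Spark SQL). As a minimal, illustrative sketch only, not anything taken from the role description, the helper below scans a plan string for shuffle-inducing operators; the function name and the sample plan text are invented for the example.

```python
# Illustrative sketch: scan the printed text of a Spark physical plan for
# shuffle-related operators (Exchange nodes and sort-merge joins), which are
# the usual starting point when diagnosing Spark performance problems.
# The sample plan below is a hypothetical example, not captured Spark output.

SHUFFLE_OPERATORS = ("Exchange hashpartitioning", "Exchange rangepartitioning", "SortMergeJoin")

def flag_shuffles(plan_text: str) -> list[str]:
    """Return the plan lines that indicate a shuffle or a shuffle-heavy join."""
    return [
        line.strip()
        for line in plan_text.splitlines()
        if any(op in line for op in SHUFFLE_OPERATORS)
    ]

sample_plan = """\
== Physical Plan ==
*(5) SortMergeJoin [id#10L], [id#22L], Inner
:- *(2) Sort [id#10L ASC NULLS FIRST], false, 0
:  +- Exchange hashpartitioning(id#10L, 200)
:     +- *(1) Scan parquet default.orders
+- *(4) Sort [id#22L ASC NULLS FIRST], false, 0
   +- Exchange hashpartitioning(id#22L, 200)
      +- *(3) Scan parquet default.customers
"""

print(len(flag_shuffles(sample_plan)))  # 3: the join line plus two Exchange lines
```

In practice a reviewer would read the full plan rather than grep it, but flagging Exchange operators quickly shows where repartitioning (and therefore most cost) occurs.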
data components such as Azure Data Factory, Azure SQL DB, Azure Data Lake, etc. Strong Python and SQL skills for data manipulation Experience with Apache Spark and/or Databricks. Familiarity with BI visualisation tools like Power BI Experience in managing end-to-end analytics pipelines (batch and … such as Azure Data Engineer Associate are desirable. Knowledge of data ingestion methods for real-time and batch processing Proficiency in PySpark and debugging Apache Spark workloads. What’s in it for you? Annual bonus scheme – up to 10% Excellent pension scheme Flexible working Enhanced family-friendly policies …
Belfast, Northern Ireland, United Kingdom Hybrid / WFH Options
Version 1
Azure SQL Data Warehouse, Azure Data Lake, Azure Cosmos DB, Azure Stream Analytics Direct experience in building data pipelines using Azure Data Factory and Apache Spark (preferably Databricks). Experience building data warehouse solutions using ETL/ELT tools such as SQL Server Integration Services (SSIS), Oracle Data Integrator (ODI), Talend, and WhereScape RED. Experience with Azure Event Hub, IoT Hub, Apache Kafka, NiFi for use with streaming data/event-based data Experience with other open-source big data products, e.g. Hadoop (incl. Hive, Pig, Impala) Experience with open-source non-relational/NoSQL data repositories …
Edinburgh, Scotland, United Kingdom Hybrid / WFH Options
Change Digital – Digital & Tech Recruitment
AWS Redshift, and Python Experience with ETL processes, data integration, and data warehousing. Strong SQL skills Experience with Big Data technologies such as Hadoop, Spark, and Kafka Familiarity with cloud platforms (AWS, Azure, Google Cloud) Working knowledge of data visualisation tools (Power BI, Tableau, Qlik Sense) Additional skills: Client-facing …
City Of Bristol, England, United Kingdom Hybrid / WFH Options
Anson McCade
and product development, encompassing experience in both stream and batch processing. Designing and deploying production data pipelines, utilising languages such as Java, Python, Scala, Spark, and SQL. In addition, you should have proficiency or familiarity with: Scripting and data extraction via APIs, along with composing SQL queries. Integrating data …
Penrith, Cumbria, United Kingdom Hybrid / WFH Options
Computer Futures
Databricks, Azure SQL (indicative experience: 5yrs+) Build and test processes supporting data extraction, data transformation, data structures, metadata, dependency and workload management. Knowledge of Spark architecture and modern Data Warehouse/Data Lake/Lakehouse techniques Build transformation tables using SQL. Moderate-level knowledge of Python/PySpark or equivalent …
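"Build transformation tables using SQL" typically means deriving an aggregated, reporting-friendly table from raw data with a CREATE TABLE ... AS SELECT. As a minimal sketch, the same pattern is shown here against in-memory SQLite purely so it can run standalone; on the platform described in the listing it would be Spark SQL on Databricks, and all table and column names are invented for illustration.

```python
# Minimal sketch of building a transformation table with SQL (CTAS pattern),
# run against in-memory SQLite for portability; in Databricks/Spark SQL the
# statement shape is the same. Table/column names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE raw_orders (order_id INTEGER, customer TEXT, amount REAL);
INSERT INTO raw_orders VALUES (1, 'acme', 120.0), (2, 'acme', 80.0), (3, 'globex', 50.0);

-- Transformation table: aggregate raw rows into a reporting-friendly shape.
CREATE TABLE customer_totals AS
SELECT customer, SUM(amount) AS total_spend, COUNT(*) AS order_count
FROM raw_orders
GROUP BY customer;
""")

rows = conn.execute(
    "SELECT customer, total_spend FROM customer_totals ORDER BY customer"
).fetchall()
print(rows)  # [('acme', 200.0), ('globex', 50.0)]
```

The design point is that the transformation lives in the SQL itself, so the same logic can be re-run idempotently whenever upstream data is refreshed.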
team has a strong appetite to embrace the latest tech stacks offered in the industry. We need passionate software engineers who use Java, JavaScript, Python, React, Apache Spark and open-source data science toolkits to build solutions that offer a best-in-class experience to our risk managers and analysts. Our … solving and analytical skills Preferred Qualifications Strong programming experience in at least one language Comfortable working with multiple languages Technologies: Scala, Java, Python, Spark, Linux and shell scripting, TDD (JUnit), build tools (Maven/Gradle/Ant) Experience with continuous delivery and deployment Proficient at working with large …
Manchester, England, United Kingdom Hybrid / WFH Options
Made Tech
and able to guide how one could deploy infrastructure into different environments. Knowledge of handling and transforming various data types (JSON, CSV, etc.) with Apache Spark, Databricks or Hadoop Good understanding of possible architectures involved in modern data system design (Data Warehouses, Data Lakes, Data Meshes) Ability to …
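The "handling and transforming various data types (JSON, CSV, etc.)" requirement above is, at its core, reading one serialisation format, typing the fields, and writing another. As a pure-Python stand-in only (in Spark this would be `spark.read.csv(...)` followed by `.write.json(...)`), the sketch below reshapes CSV rows into JSON-lines records; the field names are invented for the example.

```python
# Pure-Python stand-in for the CSV -> JSON reshaping such pipelines perform;
# a Spark/Databricks job would do the same with DataFrame readers/writers.
# The CSV content and field names are hypothetical.
import csv
import io
import json

csv_text = "id,name,score\n1,ana,0.9\n2,ben,0.7\n"

records = []
for row in csv.DictReader(io.StringIO(csv_text)):
    # Cast string fields to proper types during the transform step.
    records.append({"id": int(row["id"]), "name": row["name"], "score": float(row["score"])})

# Emit JSON lines, one record per line (the shape Spark's JSON writer produces).
json_lines = "\n".join(json.dumps(r) for r in records)
print(json_lines)
```

The explicit type casts are the part that matters: CSV carries no types, so the transform step is where strings become integers and floats before downstream use.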
Bristol, England, United Kingdom Hybrid / WFH Options
Made Tech
and able to guide how one could deploy infrastructure into different environments. Knowledge of handling and transforming various data types (JSON, CSV, etc.) with Apache Spark, Databricks or Hadoop Good understanding of possible architectures involved in modern data system design (Data Warehouses, Data Lakes, Data Meshes) Ability to …
AWS Redshift, and Python Experience with ETL processes, data integration, and data warehousing. Strong SQL skills Experience with Big Data technologies such as Hadoop, Spark, and Kafka Familiarity with cloud platforms (AWS, Azure, Google Cloud) Working knowledge of data visualisation tools (Power BI, Tableau, Qlik Sense) Additional Skills: Client-facing …
science, Information Technology, or a related field Experience with containerisation and orchestration technologies (e.g., Docker, Kubernetes). Knowledge of big data technologies and frameworks (e.g., Hadoop, Spark). Familiarity with other cloud platforms (e.g. AWS, Google Cloud) and PaaS providers (e.g. Snowflake) Knowledge of Inner or Open Source paradigm and way of …
leading business intelligence platform (e.g. Microsoft, Crystal, Qlik, SAP, Tableau). Good understanding of open-source, big data, and cloud data platforms (e.g. Hadoop, Spark, Hive, Pentaho, AWS, Azure); given a business problem, you can analyse and evaluate options and recommend solutions. Proven experience in designing, building and maintaining …
Edinburgh, Scotland, United Kingdom Hybrid / WFH Options
BlackRock
includes: DevOps automation, idempotent deployment testing, and continuous delivery pipelines Networking and security protocols, load balancers, API gateways ETL tooling and workflow engines (e.g., Spark, Airflow, Dagster, Flyte) Data modelling, and strategies for cleaning and validating data at scale Performance tuning on RDBMS or Big Data tools for row …
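"Strategies for cleaning and validating data at scale" generally reduce to a row-level rule set applied inside the pipeline engine. The sketch below is a hedged, toy illustration of that shape only: the schema, rules, and function name are invented, and at scale this logic would run as a Spark transformation or workflow-engine task rather than a plain loop.

```python
# Toy sketch of row-level data validation: each rule appends a human-readable
# error, and rows with no errors pass through. Schema and rules are hypothetical.

def validate_row(row: dict) -> list[str]:
    """Return a list of validation errors for one record (empty list = valid)."""
    errors = []
    if not row.get("user_id"):
        errors.append("missing user_id")
    age = row.get("age")
    if age is not None and not (0 <= age <= 130):
        errors.append(f"age out of range: {age}")
    return errors

rows = [
    {"user_id": "u1", "age": 34},
    {"user_id": "", "age": 34},
    {"user_id": "u3", "age": 999},
]
valid = [r for r in rows if not validate_row(r)]
print(len(valid))  # 1
```

Returning a list of errors rather than a boolean is a common design choice: the invalid rows can be routed to a quarantine table with their reasons attached instead of being silently dropped.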
Manchester, Greater Manchester, United Kingdom Hybrid / WFH Options
AutoTrader UK
of applying data technologies to solve problems and you can expect to work with a range of technologies including dbt, Kotlin/Java, Python, Apache Spark and Kafka. Join us as a Principal Software Engineer and, as well as shaping and creating the foundations for insight-driven, market-leading … delivery chain, from data to products You will have an understanding of data modelling and experience with data engineering tools and platforms such as Kafka, Spark, and Hadoop Comfortable presenting technical ideas to non-technical colleagues Experience mentoring and coaching and sharing technical expertise Strong teamwork ethic, communication and collaboration skills Support with … our recruitment process by evaluating candidates at all stages. Although not essential, helpful experience includes: Messaging systems such as Apache Kafka or Google Pub/Sub Docker/container orchestration Working experience with Google Cloud Every candidate brings a unique mix of skills and qualities to the table. We're all about inclusivity …
Guildford, England, United Kingdom Hybrid / WFH Options
Hawksworth
warehousing and ETL frameworks Proficiency in working with relational databases (e.g., Oracle, PostgreSQL), Parquet/Delta files and big data technologies (e.g. Synapse, Hadoop, Spark, Kafka) Knowledge of Microsoft Azure and associated data services is good to have. Strong analytical and data interpretation skills, with the ability to …
Bedford, Bedfordshire, United Kingdom Hybrid / WFH Options
Understanding Recruitment
frameworks (TensorFlow, PyTorch etc.) MLOps experience Nice to have: Familiarity with Git or other Version Control Systems Computer Vision Library exposure Understanding of Big Data Technologies (Hadoop, Spark etc.) Experience with Cloud platforms (AWS, GCP or Azure) This is a fully remote role, but may require very occasional travel (once a month …
to: Back-end technology, Python. Databases like MSSQL. Front-end technology, Java. Cloud platform, AWS. Programming language, JavaScript (React.js). Big data technologies such as Hadoop, Spark, or Kafka. What We Need from You: Essential Skills: A degree in Computer Science, Engineering, or a related field, or equivalent experience. Proficiency in …
management. Cloud Platform: AWS for cloud infrastructure. Programming Languages: JavaScript for front-end development and Java for back-end processes. Big Data Technologies: Hadoop, Spark, or Kafka for handling large-scale data processing. What We Need from You Essential Skills: Technical Proficiency: Expertise in React.js, front-end technologies (JavaScript …
complex data warehouses and/or data lakes. Familiarity with cloud-based analytics platforms such as AWS, Azure, Snowflake, Google Cloud Platform (BigQuery), Spark, and Splunk. Proficiency in SQL and experience using one or more of the following languages: R, Python, Scala, and Julia, including relevant frameworks/…
with Git for version control and project management, alongside some knowledge of Linux/shell. Data platform familiarity - previous experience of working with both Apache Spark and MapReduce data processing and analytics frameworks. And reporting expertise - experience with Tableau, Power BI, Excel alongside notebooks for experiment documentation. What …
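The MapReduce model named in the listing above has three phases: map each input to key/value pairs, shuffle (group values by key), then reduce each group. As a toy in-process illustration only, the word count below runs all three phases in plain Python, whereas Hadoop or Spark would distribute each phase across machines; the input documents are invented for the example.

```python
# Toy in-process MapReduce word count: map -> shuffle (group by key) -> reduce.
# Hadoop/Spark distribute these same phases; here everything runs locally.
from collections import defaultdict

docs = ["spark and hadoop", "spark at scale"]  # hypothetical input documents

# Map phase: emit (word, 1) pairs.
mapped = [(word, 1) for doc in docs for word in doc.split()]

# Shuffle phase: group values by key.
grouped = defaultdict(list)
for word, count in mapped:
    grouped[word].append(count)

# Reduce phase: sum the counts for each word.
counts = {word: sum(vals) for word, vals in grouped.items()}
print(counts["spark"])  # 2
```

The shuffle is the phase that dominates cost in real clusters, since it is the only step that moves data between workers; the map and reduce steps stay local to each partition.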
South East London, London, United Kingdom Hybrid / WFH Options
Stepstone UK
Bachelor's degree in Computer Science or a related field (Master's degree preferred) Nice to have: experience with LLMs, Vector Databases, AWS EMR, Spark, and Python Our commitment: Equal opportunities are important to us. We believe that diversity and inclusion at The Stepstone Group are critical to our …
Reigate, England, United Kingdom Hybrid / WFH Options
esure Group
of OO programming, software design (i.e., SOLID principles), and testing practices. Knowledge and working experience of Agile methodologies. Proficient with SQL. Familiarity with Databricks, Spark, geospatial data/modelling and insurance are a plus. Exposure to MLOps, model monitoring principles, CI/CD and associated tech, e.g., Docker, MLflow …
environment and/or comparable experience Significant years of proven experience in Java, REST APIs, Spring Boot Framework, ReactJS, Redis, OpenShift, Elastic Stack, Kafka, Hive, Spark, UNIX/Linux and Oracle DB Experience with a statistical programming language (Python or R) and experience with libraries specifically for Machine Learning or Data …