Spark Architect/SME Contract Role – 6 months to begin with and extendable
Location: Leeds, UK (min 3 days onsite)
Context: Legacy ETL code, for example DataStage, is being refactored into PySpark using Prophecy low-code/no-code and the available converters. The converted code is causing failures and performance issues.
Skills:
Spark Architecture – component understanding around Spark Data Integration (PySpark, scripting, variable setting etc.), Spark SQL, Spark Explain plans.
Spark SME – Able to analyse Spark code failures through Spark Plans and make corrective recommendations.
Spark SME – Able to review PySpark and Spark SQL jobs and make performance improvement recommendations.
Spark SME – Able to understand DataFrames/Resilient Distributed Datasets (RDDs), understand any memory-related problems and make corrective recommendations.
Monitoring – Able to monitor Spark jobs using wider tools such as Grafana to see whether there are cluster-level failures.
Cloudera (CDP) – Understanding of how Cloudera Spark is set up and how the runtime libraries are used by PySpark code.
Prophecy – High-level understanding of Low-Code more »
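For illustration only (not part of the listing), a minimal PySpark sketch of the kind of plan review and memory diagnosis the SME work above describes; the DataFrame, column names and sizes are invented stand-ins:

```python
from pyspark import StorageLevel
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("plan-review-sketch").getOrCreate()

# Toy DataFrame standing in for the output of a converted DataStage job.
df = spark.range(1_000_000).withColumn("bucket", col("id") % 10)
agg = df.groupBy("bucket").count()

# The formatted physical plan is the first thing to read when a converted
# job fails or runs slowly: it exposes shuffles (Exchange), sorts and scans.
agg.explain(mode="formatted")

# For memory-related failures, a storage level that spills to disk is often
# safer than a plain .cache(), which is memory-only for DataFrames.
agg.persist(StorageLevel.MEMORY_AND_DISK)
print(agg.count())  # triggers execution so the persisted data materialises
```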
prem solutions to the cloud, including re-architecting. Prior experience working on data-focused projects, e.g. data warehousing, big data, data streaming. Proficiency with Apache Kafka, Apache Spark, Apache Flink etc. We are an equal opportunities employer and welcome applications from all suitably qualified persons regardless more »
comfortable designing and constructing bespoke solutions and components from scratch to solve the hardest problems. Adept in Java, Scala, and big data technologies like Apache Kafka and Apache Spark, they bring a deep understanding of engineering best practices. This role involves scoping and sizing, and indeed estimating … be considered. Key responsibilities of the role are summarised below: Design and implement large-scale data processing systems using distributed computing frameworks such as Apache Kafka and Apache Spark. Architect cloud-based solutions capable of handling petabytes of data. Lead the automation of CI/CD pipelines for more »
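As a hedged illustration of the Kafka-plus-Spark stack this listing names, the sketch below reads a hypothetical Kafka topic with Structured Streaming. The broker address, topic name and schema are assumptions, and it presumes the spark-sql-kafka connector package is on the classpath:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import LongType, StringType, StructField, StructType

spark = SparkSession.builder.appName("kafka-spark-sketch").getOrCreate()

# Assumed payload schema for the illustration.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("ts", LongType()),
])

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # assumed address
    .option("subscribe", "events")                     # assumed topic
    .load()
    # Kafka delivers raw bytes; cast to string and parse the JSON payload.
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Console sink is illustrative; a real pipeline would write to a table/lake.
query = events.writeStream.format("console").outputMode("append").start()
query.awaitTermination()
```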
Manchester, England, United Kingdom Hybrid / WFH Options
Made Tech
and able to guide how one could deploy infrastructure into different environments. Knowledge of handling and transforming various data types (JSON, CSV, etc.) with Apache Spark, Databricks or Hadoop. Good understanding of possible architectures involved in modern data system design (Data Warehouse, Data Lakes, Data Meshes). Ability to more »
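A minimal sketch of the JSON/CSV handling and transformation mentioned above; the file paths, column names and join key are invented for illustration:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("transform-sketch").getOrCreate()

# Hypothetical inputs; a real job would read from a lake or warehouse path.
orders = spark.read.option("header", True).csv("/data/orders.csv")
customers = spark.read.json("/data/customers.json")

# A typical shape for this kind of transform: cast, join, aggregate, write.
result = (
    orders
    .withColumn("amount", col("amount").cast("double"))
    .join(customers, "customer_id")
    .groupBy("country")
    .sum("amount")
)
result.write.mode("overwrite").parquet("/data/out/sales_by_country")
```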
as Hadoop and Spark. Experience with data warehousing technologies such as Redshift, Snowflake, or BigQuery. Experience with data pipeline and ETL tools such as Apache NiFi, Airflow, or Glue. Knowledge of data governance and security best practices. Strong problem-solving and analytical skills. Ability to work well in a more »
Newcastle Upon Tyne, United Kingdom Hybrid / WFH Options
NHS Counter Fraud Authority
science, e.g., SQL, R and Python, alongside the ability to use tools and packages such as Alteryx, Jupyter Notebook, R Markdown, TensorFlow, Keras, PyTorch, Apache Spark etc. Practical proficiency in producing reproducible code and pipelines, including documentation, governance and assurance frameworks, automation and code review using tools such more »
Manchester, North West, United Kingdom Hybrid / WFH Options
TECHNOLOGY RECWORKS LIMITED
sponsors) Knowledge and experience of the following would be advantageous: Knowledge of Enterprise Architecture Frameworks. Good knowledge of Azure DevOps Pipelines. Strong experience in the Apache Spark framework. Previous experience in designing and delivering data warehouse and business intelligence solutions using the on-premises Microsoft stack (SSIS, SSRS, SSAS). Knowledge more »
Pharmaceutical industry experience. Experience in distributed system design. Experience with Pure/Alloy. Working knowledge of open-source tools such as AWS Lambda, Prometheus. Spark, Hadoop or Snowflake knowledge would be a plus. Additional Information Location: This role can be delivered on a hybrid basis from one of these offices more »
Manchester, England, United Kingdom Hybrid / WFH Options
Lorien
MongoDB/DynamoDB/etc.) Solid understanding of data governance principles and how to implement these across the business. Knowledge of Big Data technology (Spark/Hadoop/etc.). Excellent communication skills across various levels of stakeholders. Benefits: Salary available £120,000; bonus scheme; enhanced pension contribution available; genuine more »
and best practices, and share knowledge with the team. Qualifications: You will have expertise within the following: Java and Python development knowledge (Essential). Previous experience with Spark or Hadoop (Essential). Trino or Airflow (Desirable). Architecture and capabilities. Designing and implementing complex solutions with a focus on scalability and security. Excellent communication and collaboration skills. Additional more »
Sheffield, England, United Kingdom Hybrid / WFH Options
Undisclosed
knowledge of security principles and best practices for cloud-based solutions. Preferred Skills: Certification in cloud platforms. Experience with big data technologies such as Apache Hadoop, Spark, or Kafka. Knowledge of data governance and compliance frameworks. Familiarity with DevOps practices and tools (e.g., Git, Jenkins, Terraform). HSBC experience is a more »
Sheffield, South Yorkshire, Yorkshire, United Kingdom Hybrid / WFH Options
Experis
knowledge of security principles and best practices for cloud-based solutions. Preferred Skills: Certification in cloud platforms. Experience with big data technologies such as Apache Hadoop, Spark, or Kafka. Knowledge of data governance and compliance frameworks. Familiarity with DevOps practices and tools (e.g., Git, Jenkins, Terraform). All more »
Newcastle Upon Tyne, Tyne and Wear, North East, United Kingdom Hybrid / WFH Options
Senitor Associates Limited
within Software Engineering to explore new technologies. Contribute to a team culture that prioritizes diversity, equity, inclusion, and respect. Required Skills: Expertise in Java, Spark, SQL, Relational DB, NoSQL, focusing on performance optimization. A thorough understanding of the Software Development Life Cycle and agile methodologies, including CI more »
data analysis within an e-commerce or online business context. Commercially minded, thinking about ways to increase revenue & profitability. Proficiency in data manipulation tools (Python, Pandas, Spark, SQL) and data visualization tools (Apache Superset, Tableau, Power BI, ggplot2) and MS Excel. Grasp of pricing strategies, market dynamics, and consumer behaviour in more »
scripting. * Minimum of 1 year's experience in investment banking or the financial sector. * Performance tuning of Oracle/MySQL/Hive SQL queries and Spark SQL statements. * Experience in working with large databases - multiple terabytes (3+ terabytes). * Minimum of 5 years' experience in the Big Data space (Hive, Impala, Spark SQL, HDFS etc.). * Any cloud experience (AWS/Azure/Google/Oracle). * Solid experience with Oracle objects (Packages, Procedures, Functions). * Very clear concepts of Oracle architecture. * Very strong debugging skills. * Proficient in query tuning. * Detail-oriented. * Strong written and verbal communication skills. * Ability to work more »
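By way of illustration, a small Spark SQL tuning sketch of the kind this listing's query-tuning requirement points at. The table names and sizes are invented, and the broadcast hint shown is one common fix for large joins, not a prescribed method:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("tuning-sketch").getOrCreate()

# Hypothetical fact and dimension tables for illustration only.
trades = spark.range(50_000_000).selectExpr("id AS trade_id", "id % 500 AS book_id")
books = spark.range(500).selectExpr(
    "id AS book_id", "concat('book-', CAST(id AS STRING)) AS name"
)
trades.createOrReplaceTempView("trades")
books.createOrReplaceTempView("books")

# A common multi-terabyte fix: broadcast the small dimension table so the
# large fact table is never shuffled for the join.
tuned = spark.sql("""
    SELECT /*+ BROADCAST(books) */ b.name, COUNT(*) AS n
    FROM trades t
    JOIN books b ON t.book_id = b.book_id
    GROUP BY b.name
""")

# Compare plans before and after: the hint should replace a
# SortMergeJoin plus Exchange with a BroadcastHashJoin.
tuned.explain()
```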