Southampton, Hampshire, South East, United Kingdom
Ordnance Survey Limited
development. Track record in problem resolution and selection of technical solutions. Qualified to a relevant development certification (or equivalent experience). Experience working with Databricks/Apache Spark. Experience working with infrastructure-as-code. Desirable: Experience working with Geospatial Data. Experience using Terraform. The rewards: We want you to love what more »
Leeds, England, United Kingdom Hybrid / WFH Options
Damia Group
Spark Scala Developer - Scala/Apache Spark - Hybrid/Leeds - £450-£550 Spark Scala Developer to join our client, one of the biggest financial services organizations in the world, with operations in more than 38 countries. It has an IT infrastructure of 200,000+ servers … 000+ database instances, and over 150 PB of data. As a Spark Scala Engineer, you will have responsibility for refactoring legacy ETL code (for example, DataStage) into PySpark using Prophecy low-code/no-code tooling and available converters. Converted code is causing failures/performance issues. Your responsibilities: As … a Spark Scala Engineer working for the GDT (Global Data Technology) Team, you will be responsible for: Designing, building, and maintaining data pipelines using Apache Spark and Scala Working on an Enterprise scale Cloud infrastructure and Cloud Services in one of the Clouds (GCP more »
data components such as Azure Data Factory, Azure SQL DB, Azure Data Lake, etc. Strong Python and SQL skills for data manipulation Experience with Apache Spark and/or Databricks. Familiarity with BI visualization tools like Power BI Experience in managing end-to-end analytics pipelines (batch and … such as Azure Data Engineer Associate are desirable. Knowledge of data ingestion methods for real-time and batch processing Proficiency in PySpark and debugging Apache Spark workloads. What’s in it for you? Annual bonus scheme – up to 10% Excellent pension scheme Flexible working Enhanced family friendly policies more »
to: Backend technology, Python. Databases like MSSQL. Front-end technology, Java. Cloud platform, AWS. Programming language, JavaScript (React.js) Big data technologies such as Hadoop, Spark, or Kafka. What We Need from You: Essential Skills: A degree in Computer Science, Engineering, or a related field, or equivalent experience. Proficiency in more »
management. Cloud Platform : AWS for cloud infrastructure. Programming Languages : JavaScript for front-end development and Java for back-end processes. Big Data Technologies : Hadoop, Spark, or Kafka for handling large-scale data processing. What We Need from You Essential Skills: Technical Proficiency : Expertise in React.js, front-end technologies (JavaScript more »
and classification techniques, and algorithms Fluency in a programming language (Python, C++, Java, SQL) Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau more »
Leeds, West Yorkshire, Yorkshire, United Kingdom Hybrid / WFH Options
Damia Group Ltd
Spark/PySpark Architect - 12 months+ - Inside IR35 - Hybrid working of 3 days on site in Leeds My client is a global consultancy looking for a number of Spark/PySpark Architects to join them on a long-term programme. As the Spark architect, you … Integration upgrade to PySpark Collaboration with multiple customer stakeholders Knowledge of working with Cloud Databases Excellent communication and solution presentation skills. Able to analyse Spark code failures through Spark Plans and make corrective recommendations Able to review PySpark and Spark SQL jobs and make performance improvement recommendations … Able to understand Data Frames/Resilient Distributed Datasets, diagnose any memory-related problems and make corrective recommendations Able to monitor Spark jobs using wider tools such as Grafana to see whether there are Cluster level failures. As a Spark architect, who can demonstrate deep knowledge more »
Nottingham, Nottinghamshire, East Midlands, United Kingdom
Microlise
data practices Possess strong knowledge of data tools, data management tools, and various data and information technologies, e.g. DAMA DMBOK, Microsoft SQL Server, Couchbase, Apache Druid, Spark, Kafka, Airflow, etc. In-depth understanding of modern data principles, methodologies, and tools Excellent communication and collaboration skills, with the ability … native computing concepts and experience working with hybrid or private cloud platforms is a plus. Demonstrable technical experience working with a Microsoft, Red Hat, and Apache data and software engineering environment. A team-oriented individual with a passion for engineered excellence and the ability to lead and motivate a team more »
Nottingham, Nottinghamshire, East Midlands, United Kingdom
Experian Ltd
Redshift, DynamoDB, Glue and SageMaker Infrastructure-as-Code tools and approaches (we use the AWS CDK with CloudFormation) Data processing frameworks such as pandas, Spark and PySpark Machine learning concepts like model training, model registry, model deployment and monitoring Development and CI/CD tools (we use GitHub, CodePipeline more »
data analysis techniques Ability to produce clear graphical representations and data visualisations. Knowledge of data analysis tools, for example, R Programming, Tableau Public, SAS, Apache Spark, Excel, RapidMiner Knowledge of data modelling, data cleansing, and data enrichment techniques An understanding of data protection issues Experience of working with more »
major advantage Experience in building and enhancing compute, storage, and data platforms with exposure to open source products like Kubernetes, Knative, Ceph, Rook, Cassandra, Spark, Nate and the like. Hands-on with infrastructure-as-code tools and automation, such as Terraform, Ansible, or Helm. The role Tech Lead responsible more »
or Rust. Experience in building and enhancing compute, storage, and data platforms with exposure to open source products like Kubernetes, Knative, Ceph, Rook, Cassandra, Spark, Nate etc. Hands-on experience with IaC tools and automation, such as Terraform, Ansible, or Helm. Active engagement or contributions to the open-source more »
Reigate, England, United Kingdom Hybrid / WFH Options
esure Group
of OO programming, software design, i.e., SOLID principles, and testing practices. Knowledge and working experience of AGILE methodologies. Proficient with SQL. Familiarity with Databricks, Spark, geospatial data/modelling and insurance are a plus. Exposure to MLOps, model monitoring principles, CI/CD and associated tech, e.g., Docker, MLflow more »
Swansea, Neath Port Talbot, Wales, United Kingdom Hybrid / WFH Options
Inspire People
processing, and analytics. Programming Skills: Proficiency in Python, SQL, and other relevant programming languages. Big Data Technologies: Experience with big data technologies such as Apache Spark. Data Warehousing: Strong knowledge of data warehousing concepts and solutions. Problem-Solving: Excellent problem-solving skills with a detail-oriented approach. Leadership: Proven more »
Data Scientists and Service Engineering teams Experience with design, development and operations that leverages deep knowledge in the use of services like Amazon Kinesis, Apache Kafka, Apache Spark, Amazon SageMaker, Amazon EMR, NoSQL technologies and other third parties Develop and define key business questions and build … a related field Experience of Data platform implementation, including 3+ years of hands-on experience in implementing and performance tuning Kinesis/Kafka/Spark/Storm Experience with analytic solutions applied to the Marketing or Risk needs of enterprises Basic understanding of machine learning fundamentals Ability to … take Machine Learning models and implement them as part of a data pipeline IT platform implementation experience Experience with one or more relevant tools (Flink, Spark, Sqoop, Flume, Kafka, Amazon Kinesis) Experience developing software code in one or more programming languages (Java, JavaScript, Python, etc.) Current hands-on implementation experience more »
Spark Architect/SME Contract Role - 6 months to begin with and extendable Location: Leeds, UK (min 3 days onsite) Context: Legacy ETL code (for example, DataStage) is being refactored into PySpark using Prophecy low-code/no-code tooling and available converters. Converted code is causing failures/performance issues. … Skills: Spark Architecture – component understanding around Spark Data Integration (PySpark, scripting, variable setting etc.), Spark SQL, Spark Explain plans. Spark SME – Able to analyse Spark code failures through Spark Plans and make corrective recommendations. Spark SME – Able to review PySpark and Spark SQL jobs and make performance improvement recommendations. Spark SME – Able to understand Data Frames/Resilient Distributed Datasets, diagnose any memory-related problems and make corrective recommendations. Monitoring – Able to monitor Spark jobs using wider tools such as Grafana to see more »
complex data warehouses and/or data lakes. Familiarity with cloud-based analytics platforms such as AWS, Azure, Snowflake, Google Cloud Platform (Big Query), Spark, and Splunk. Proficiency in SQL and experience using one or more of the following languages: R, Python, Scala, and Julia, including relevant frameworks/ more »
Surrey, England, United Kingdom Hybrid / WFH Options
Hawksworth
working in the world of Data Science You're more than capable with SQL & Python You have exposure to big data technologies such as Spark Ideally you will have experience with statistical analysis, machine learning algorithms, and data mining techniques You have excellent communication skills and can communicate well more »
Reigate, England, United Kingdom Hybrid / WFH Options
esure Group
refining them to strong results. Exposure to Python data science stack Knowledge and working experience of AGILE methodologies. Proficient with SQL. Familiarity with Databricks, Spark and geospatial data/modelling are a plus. We’ll help you gain… Experience working in a high-performance environment where collaboration and business more »
issues The person Degree in Engineering, Technology, or related fields/equivalent 3+ years in AI solution delivery Experienced in relevant technologies (Python, TensorFlow, Spark, Azure Cloud, Git, Docker) Strong analytical and communication skills This comes with a fantastic salary and full benefits package – happy to discuss in full. more »
Luton, England, United Kingdom Hybrid / WFH Options
Ventula Consulting
science and analytics team in deploying pipelines. Coach and mentor the team to improve development standards. Key requirements: Strong hands-on experience with Databricks, Spark, SQL or Scala. Proven experience designing and building data solutions on a cloud-based, big data distributed system (AWS/Azure etc.) Hands-on … models and following best practices. The ability to develop pipelines using SageMaker, MLflow or similar frameworks. Strong experience with data programming frameworks such as Apache Spark. Understanding of common Data Science and Machine Learning models, libraries and frameworks. This role provides a competitive salary plus excellent benefits package. In more »
data platform from a legacy system to one based on AWS EMR, with Amazon RDS and DynamoDB ingestion converted to Parquet files, interrogatable through Spark and MapReduce. This modern platform will support rapid data insight generation, data experiments for new product development, our live Machine Learning solutions and live … to-target mappings) to testing and service optimisation.) Good familiarity with our developing key services/applications - Amazon RDS, Amazon DynamoDB, AWS Glue, MapReduce, Hive, Spark, YARN, Airflow. Ability to work with a range of structured, semi-structured and unstructured file formats including Parquet, JSON, CSV, PDF, JPG. Accomplished data more »
in a technical and analytical role Experience of Data Lake/Hadoop platform implementation Hands-on experience in implementing and performance tuning Hadoop/Spark Experience with Apache Hadoop and the Hadoop ecosystem Experience with one or more relevant tools (Sqoop, Flume, Kafka, Oozie, Hue, Zookeeper, HCatalog, Solr … Avro) Experience with one or more SQL-on-Hadoop technologies (Hive, Impala, Spark SQL, Presto) Experience developing software code in one or more programming languages (Java, Python, etc.) Preferred Qualifications Master's or PhD in Computer Science, Physics, Engineering or Math Hands on experience leading large-scale global data warehousing more »