… core programming concepts
Comfortable multi-tasking, managing multiple stakeholders and working as part of a team
Comfortable working with multiple programming languages
Technologies: Scala, Java, Python, Spark, Linux and shell scripting, TDD (JUnit), build tools (Maven/Gradle/Ant)
Experience working with process scheduling platforms like Apache …
Desirable: Ph.D. or Master's in Data Science, Machine Learning or AI-related topics
Desirable: Experience with CI/CD, MLOps
Desirable: Experience in Spark/Scala/PySpark
Desirable: Experience with Generative AI and LLMOps
At Rolls-Royce we embrace agility, are bold, pursue collaboration and seek simplicity in everything we …
Northampton, Northamptonshire, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
Scala Data Engineer

I am working with an analytics and digital solutions consultancy that partners with clients from several different industries to unlock their potential to become truly data driven. They work to deliver tailored, bespoke systems to fit the needs of their clients, with a focus on cloud-based …

You will be joining a project focused on data migration from Hadoop to the cloud, creating robust data pipelines and working with Scala, Spark and other AWS services to process and manipulate data. As part of this role, you will be responsible for some of the following areas:

Develop big data solutions utilising Hadoop and Apache Spark
Create, develop and maintain robust ETL pipelines using AWS Glue and Scala
Design and implement Scala-based applications for big data processing
Work with other technical members of the team to enhance the performance of code, promoting best …
Birmingham, West Midlands, West Midlands (County), United Kingdom Hybrid / WFH Options
Tenth Revolution Group