… this role is for you.

Key Responsibilities:
- Develop scalable data processing applications using Scala.
- Build and optimize ETL pipelines for large datasets.
- Work with Apache Spark, Kafka, and Flink for batch and streaming data (a sketch follows this listing).
- Maintain data lakes and warehouses (Databricks Delta Lake, Apache Iceberg, Apache Hudi, …) and event-driven architectures.
- Ensure data governance, security, and compliance.

Required Skills & Experience:
- 10+ years in software development with Scala.
- Hands-on experience with Apache Spark (batch & streaming).
- Expertise in data lake/warehouse technologies and open-source data formats (JSON, Parquet, Avro).
- Strong knowledge of …
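By way of illustration (not part of the listing itself), a minimal Scala sketch of the batch-plus-streaming Spark ETL work described above: reading raw JSON into Parquet, and consuming the same events from Kafka with Structured Streaming. The bucket paths, topic name, and broker address are hypothetical placeholders, not details from the advert.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object EtlSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("etl-sketch")
          .getOrCreate()

        // Batch: read raw JSON events and persist them as partitioned Parquet.
        // Paths are hypothetical placeholders.
        val events = spark.read.json("s3://raw-bucket/events/")
        val cleaned = events
          .filter(col("eventType").isNotNull)
          .withColumn("ingestDate", current_date())
        cleaned.write
          .mode("overwrite")
          .partitionBy("ingestDate")
          .parquet("s3://lake/events/")

        // Streaming: consume the same events from Kafka with Structured Streaming.
        // Topic and broker names are assumptions.
        val stream = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "events")
          .load()
          .selectExpr("CAST(value AS STRING) AS json")

        stream.writeStream
          .format("parquet")
          .option("path", "s3://lake/events-stream/")
          .option("checkpointLocation", "s3://lake/_checkpoints/events/")
          .start()
          .awaitTermination()
      }
    }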
Nottingham, Nottinghamshire, United Kingdom Hybrid / WFH Options
Rullion - Eon
Join our client as they embark on an ambitious data transformation journey using Databricks, guided by best-practice data governance and architectural principles. To support this, we are recruiting talented data engineers. As a major UK energy provider, our client …
… experience in optimizing and fine-tuning big data applications for heightened performance and efficiency (a tuning sketch follows this listing).
- 5+ years of hands-on experience with relevant tools such as Apache Hadoop, Spark, Kafka, and other industry-standard platforms.
- Good to have: external technology contributions (noteworthy open-source contributions). …
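As a rough illustration of the tuning work this listing describes (again, not part of the advert), a short Scala sketch of common Spark performance levers: shuffle-partition sizing, Kryo serialization, adaptive query execution, a broadcast join, and selective caching. The paths, join key, and config values are assumptions; real settings depend on cluster size and data volume.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.broadcast

    object TuningSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("tuning-sketch")
          // Match shuffle width to data size; 400 is an illustrative value.
          .config("spark.sql.shuffle.partitions", "400")
          // Kryo is faster and more compact than Java serialization.
          .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
          // Let adaptive query execution coalesce small or skewed shuffle partitions.
          .config("spark.sql.adaptive.enabled", "true")
          .getOrCreate()

        // Hypothetical lake paths and join key.
        val facts = spark.read.parquet("s3://lake/facts/")
        val dims  = spark.read.parquet("s3://lake/dims/")

        // Broadcast the small dimension table to avoid a full shuffle join.
        val joined = facts.join(broadcast(dims), Seq("dimId"))

        // Cache only what is reused across multiple actions.
        joined.cache()
        println(joined.count())
        joined.write.mode("overwrite").parquet("s3://lake/joined/")

        spark.stop()
      }
    }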