autonomy and creativity, embedded in a market-leading organisation. This squad environment needs a strong software engineer with excellent big data awareness. Skills required: core Java, Apache Spark, strong database skills and the ability to adapt and improve. To progress in this role candidates must be able to demonstrate their abilities … concepts/Java server-side development
• Demonstrable experience in Spring, Spring Boot and Hibernate
• Familiarity with Big Data concepts; a demonstrable working knowledge of Apache Spark is required for this post
• Familiarity with CI/CD, Agile and Scrum environments
• Advanced SQL/MySQL database knowledge
• Familiarity with processing frameworks … and tools: Apache Spark, Hadoop, Apache Hive, Kafka
• A practical working knowledge of large-scale Finance or FS Risk Technology
NB: the working pattern is circa 3 days a week in Belfast City Centre, so candidates will be expected to live within a direct commutable …
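As a hedged illustration of the core Java/Spark/SQL combination this advert asks for, here is a minimal Spark sketch in Scala that reads a table over JDBC from MySQL and aggregates it. The connection URL, the trades table and its columns are hypothetical placeholders invented for the example, not details from the posting.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object TradeSummary {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("trade-summary")
      .getOrCreate()

    // Hypothetical MySQL source; host, schema and credentials are placeholders.
    val trades = spark.read
      .format("jdbc")
      .option("url", "jdbc:mysql://db-host:3306/risk")
      .option("dbtable", "trades")
      .option("user", sys.env("DB_USER"))
      .option("password", sys.env("DB_PASS"))
      .load()

    // Aggregate notional per counterparty: the kind of Spark-plus-SQL task the role describes.
    trades.groupBy("counterparty")
      .agg(sum("notional").as("total_notional"))
      .orderBy(desc("total_notional"))
      .show(20)

    spark.stop()
  }
}
```

Reading over JDBC keeps the database authoritative while Spark handles the aggregation; in practice a numeric partitioning column would be added to the read options so it parallelises.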
to streamline data workflows and reduce manual interventions. Must have: AWS, ETL, EMR, Glue, Spark/Scala, Java, Python. Good to have: Cloudera (Spark, Hive, Impala, HDFS), Informatica PowerCenter, Informatica DQ/DG, Snowflake, Erwin. Qualifications: Bachelor's or Master's degree in Computer Science, Data Engineering, or a …
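To make the "AWS, ETL, EMR, Glue, Spark/Scala" must-haves concrete, the sketch below shows a typical EMR- or Glue-style batch ETL step in Scala: read raw CSV from S3, cleanse it, and land partitioned Parquet. The bucket names, paths and the event_id/event_ts columns are assumptions for illustration only.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object DailyEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("daily-etl").getOrCreate()

    // Hypothetical raw zone in S3; header and schema inference keep the sketch short.
    val raw = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("s3://raw-bucket/events/2024-01-01/")

    // Typical cleansing: drop rows without a key and derive a partition column.
    val cleaned = raw
      .filter(col("event_id").isNotNull)
      .withColumn("event_date", to_date(col("event_ts")))

    // Land partitioned Parquet in the curated zone, the usual EMR/Glue landing format.
    cleaned.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3://curated-bucket/events/")

    spark.stop()
  }
}
```

A production job would declare an explicit schema rather than infer it, so that malformed files fail fast instead of silently changing column types.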
Milton Keynes, Buckinghamshire, UK Hybrid / WFH Options
Santander
with team members, stakeholders and end users, conveying technical concepts in a comprehensible manner. Skills across the following data competencies:
• SQL (AWS Athena/Hive/Snowflake)
• Hadoop/EMR/Spark/Scala
• Data structures (tables, views, stored procedures)
• Data modelling: star/snowflake schemas, efficient storage, normalisation …
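As a small illustration of the star-schema modelling competency listed above, here is a Spark SQL sketch joining a hypothetical fact table to a dimension table. fact_sales, dim_customer and their columns are invented for the example, and a configured Hive metastore (or an equivalent catalog such as Glue) is assumed.

```scala
import org.apache.spark.sql.SparkSession

object StarSchemaQuery {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("star-schema-demo")
      .enableHiveSupport() // assumes hive-site.xml points at a metastore
      .getOrCreate()

    // fact_sales and dim_customer are hypothetical star-schema tables.
    val summary = spark.sql(
      """
        |SELECT d.region,
        |       SUM(f.amount)                  AS total_sales,
        |       COUNT(DISTINCT f.customer_key) AS active_customers
        |FROM fact_sales f
        |JOIN dim_customer d
        |  ON f.customer_key = d.customer_key
        |GROUP BY d.region
        |ORDER BY total_sales DESC
        |""".stripMargin)

    summary.show()
    spark.stop()
  }
}
```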
languages, e.g. Python, R, Scala, etc. (Python preferred). Proficiency in database technologies, e.g. SQL, ETL, NoSQL, DW, and Big Data technologies, e.g. PySpark, Hive, etc. Experience working with structured and unstructured data, e.g. text, PDFs, JPEGs, call recordings, video, etc. Knowledge of machine learning modelling techniques and …
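As a minimal sketch of handling unstructured data with Big Data tooling, the following Scala job reads raw text files with Spark and profiles token frequencies. The input path is a placeholder, and the crude regex tokenisation is an assumption made purely for illustration.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.desc

object TextProfile {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("text-profile").getOrCreate()
    import spark.implicits._

    // Placeholder path; each line of each file becomes one row.
    val lines = spark.read.textFile("hdfs:///data/call_transcripts/")

    // Crude tokenisation: lower-case, split on non-letters, count term frequency.
    val counts = lines
      .flatMap(_.toLowerCase.split("[^a-z]+"))
      .filter(_.nonEmpty)
      .toDF("token")
      .groupBy("token")
      .count()
      .orderBy(desc("count"))

    counts.show(20)
    spark.stop()
  }
}
```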
the following additional languages: Java, C#, C++, Scala
• Familiarity with Big Data technology in cloud and on-premises environments: Hadoop, HDFS, Spark, NoSQL databases, Hive, MongoDB, Airflow, Kafka, AWS, Azure, Docker or Snowflake
• Good understanding of object-oriented programming (OOP) principles and concepts
• Familiarity with advanced SQL techniques
• Familiarity with … data visualization tools such as Tableau or Power BI
• Familiarity with Apache Flink or Apache Storm
• Understanding of DevOps practices and tools for CI/CD pipelines
• Awareness of data security best practices and compliance requirements (e.g., GDPR, HIPAA)
To qualify: you should be willing to relocate …
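Since Kafka sits in the tool list above, a minimal producer sketch follows. The broker address and the events topic are placeholders, and the standard kafka-clients library is assumed on the classpath.

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}

object EventProducer {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092") // placeholder broker
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
      "org.apache.kafka.common.serialization.StringSerializer")
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
      "org.apache.kafka.common.serialization.StringSerializer")

    val producer = new KafkaProducer[String, String](props)
    try {
      // "events" is a hypothetical topic; send is asynchronous, flush forces delivery.
      producer.send(new ProducerRecord[String, String]("events", "key-1", """{"type":"click"}"""))
      producer.flush()
    } finally {
      producer.close()
    }
  }
}
```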
quickly
• Ability to work independently and be self-directed
• Bachelor's degree in Computer Science or a related field
• Experience with big data analytics: Splunk, ELK, Hive, Redshift, etc. (nice to have)
• In-depth knowledge of streaming back-ends and formats (nice to have)
• Experience working with Smart/Digital TV …
Bristol, Avon, South West, United Kingdom Hybrid / WFH Options
Hargreaves Lansdown Asset Management Limited
techniques in production-grade code, with a focus on scalability and reliability. Experience with large-scale data analysis, manipulation, and distributed computing platforms (e.g., Hive, Hadoop). Familiarity with advanced machine learning methods, including neural networks, reinforcement learning, and other cutting-edge Gen AI approaches. Skilled in API development …
Employment Type: Permanent, Part Time, Work From Home
Azure SQL Data Warehouse, Azure Data Lake, Azure Cosmos DB, Azure Stream Analytics.
• Direct experience in building data pipelines using Azure Data Factory and Apache Spark (preferably Databricks).
• Experience building data warehouse solutions using ETL/ELT tools such as SQL Server Integration Services (SSIS), Oracle Data Integrator (ODI), Talend, and WhereScape RED.
• Experience with Azure Event Hubs, IoT Hub, Apache Kafka, and NiFi for use with streaming/event-based data.
• Experience with other open-source big data products, e.g., Hadoop (incl. Hive, Pig, Impala).
• Experience with open-source non-relational/NoSQL data …
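To illustrate the streaming/event-based requirement, here is a Spark Structured Streaming sketch in Scala that consumes from Kafka (Azure Event Hubs also exposes a Kafka-compatible endpoint). The broker, topic and checkpoint path are placeholders, and the spark-sql-kafka connector is assumed to be available on the classpath.

```scala
import org.apache.spark.sql.SparkSession

object TelemetryStream {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("telemetry-stream").getOrCreate()

    // Placeholder broker and topic; requires the spark-sql-kafka-0-10 package.
    val stream = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "telemetry")
      .load()

    // Kafka delivers key/value as binary, so cast before any parsing.
    val messages = stream.selectExpr("CAST(value AS STRING) AS body", "timestamp")

    // Console sink for demonstration; a real pipeline would target a lake or warehouse table.
    val query = messages.writeStream
      .format("console")
      .outputMode("append")
      .option("checkpointLocation", "/tmp/checkpoints/telemetry") // placeholder path
      .start()

    query.awaitTermination()
  }
}
```

The checkpoint location is what gives the stream exactly-once recovery semantics, which is why it points at durable storage rather than local disk in production.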
Services, Telecom and Media, Retail and CPG, and Public Services. Consolidated revenues of $13+ billion.
Job Description:
=============
• Spark - Must have
• Scala - Must have
• Hive & SQL - Must have
• Hadoop - Must have
• Communication - Must have
• Banking/Capital Markets domain - Good to have
Note: the candidate should know the Scala/Python … Core) coding language; a PySpark-only profile will not help here.
Scala/Spark:
• Good Big Data resource with the below skillset:
  § Spark
  § Scala
  § Hive/HDFS/HQL
• Linux-based Hadoop ecosystem (HDFS, Impala, Hive, HBase, etc.)
• Experience in Big Data technologies; real-time data processing platform (Spark Streaming) experience would be an advantage
• Consistently demonstrates clear and concise written and verbal communication
• A …
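As a sketch of the Spark/Scala/Hive/HQL combination this description insists on, the job below reads a hypothetical Hive table with HQL and writes the aggregate back as a managed table. The trades table, the risk database and a configured metastore are all assumptions made for illustration.

```scala
import org.apache.spark.sql.SparkSession

object HiveAggregation {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("hive-aggregation")
      .enableHiveSupport() // assumes the cluster's Hive metastore is configured
      .getOrCreate()

    // trades is a hypothetical Hive table on HDFS.
    val daily = spark.sql(
      """
        |SELECT trade_date, book, SUM(notional) AS gross_notional
        |FROM trades
        |GROUP BY trade_date, book
        |""".stripMargin)

    // Persist the result back to the warehouse; the risk database is assumed to exist.
    daily.write.mode("overwrite").saveAsTable("risk.daily_book_notional")

    spark.stop()
  }
}
```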