Java. 1 year of experience with a public cloud (AWS, Microsoft Azure, Google Cloud). 2 years of experience with distributed data/computing tools (MapReduce, Hadoop, Hive, EMR, Kafka, Spark, Gurobi, or MySQL). 1 year of experience working on real-time data and streaming applications. 1 year of experience with NoSQL more »
a Data Engineer or in a similar role. Strong proficiency in SQL and experience with ETL frameworks. Experience with big data tools such as Hadoop, Spark, Kafka, etc. Proficiency in programming languages such as Python or Java. Experience with cloud platforms (AWS, Azure, or Google Cloud). Strong understanding more »
control, testing, deployment). Experience working with cloud platforms (e.g., AWS, Azure, GCP) and high-performance computing environments. Understanding of big data ecosystems like Hadoop, Spark, Flink. Proven ability to scope and deliver projects effectively. Familiarity with industry standards, security protocols, and best practices. Desirable Knowledge & Experience: Experience in more »
including SQL and NoSQL databases, and data services ecosystems (ADF, SQL DWH, Databricks). Experience in Distributed Data Computing and Big Data ecosystems (Hadoop, Cassandra, Teradata). Data modelling, database design, SQL performance tuning. Experience with designing and building APIs. Other/Beneficial skills: Previous experience in Finance/ more »
Advanced working knowledge in SQL and relational databases (e.g., Microsoft SQL Server, Oracle). Big Data Technologies: Familiarity with big data technologies such as Hadoop, Spark, or Kafka is a plus. Data Warehousing: Experience in data warehousing, including data modeling and implementing data pipelines. Data Management: Multi-skilled experience more »
Enthusiasm for learning and adapting to new technologies Desired Skills Knowledge of machine learning algorithms and their applications Experience with big data technologies (e.g., Hadoop, Spark) Understanding of financial services industry trends and challenges Familiarity with cloud platforms (AWS, Azure, or GCP) Interest in cybersecurity and its intersection with more »
role, preferably within an IT consultancy. Proficiency in SQL and experience with relational databases (e.g., PostgreSQL, MySQL). Familiarity with big data technologies (e.g., Hadoop, Spark, Kafka). Experience with cloud platforms (e.g., AWS, Google Cloud, Azure) and data warehousing solutions (e.g., Redshift, BigQuery). Proficient in programming languages more »
explain complex technical concepts to non-technical stakeholders. Experience with cloud-based data platforms (e.g., AWS, Google Cloud, Azure) and big data technologies (e.g., Hadoop, Spark). more »
CI/CD pipelines and containerization technologies such as Docker and Kubernetes is a plus. Preferred Qualifications: Experience with big data technologies like Spark, Hadoop, or Kafka. Knowledge of data governance, security, and compliance frameworks in the cloud. Azure certifications such as Azure Data Engineer Associate or Azure more »
including stakeholder management. Have experience in one of the following technologies: • ETL toolset (Talend, Pentaho, SAS DI, Informatica, etc.) • Database (Client, RDS, Redshift, MySQL, Hadoop, Postgres, etc.) • Job Scheduling toolset (Job Scheduler, TWS, etc.) • Programming and scripting languages (PL/SQL, SQL, Unix, Java, Python, Hive, HiveQL, HDFS, Impala more »
Bristol, Avon, South West, United Kingdom Hybrid / WFH Options
Bowerford Associates
optimal data extraction, transformation and loading using leading cloud technologies including Azure and AWS. Leverage Big Data Technologies - you will utilise tools such as Hadoop, Spark and Kafka to design and manage large-scale data processing systems. Extend and maintain the data warehouse - you will be supporting and enhancing more »
5+ years of experience in software engineering with strong expertise in Java and related frameworks. Proven experience with big data technologies such as Apache Hadoop, Apache Spark, Apache Kafka, or other distributed systems. Experience building data pipelines for processing large datasets in batch and real-time. Familiarity with cloud more »
and classification techniques, and algorithms. Fluency in a programming language (Python, C, C++, Java, SQL). Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau). Natural Language Processing (NLP)/Deep Learning/Python/Leadership more »
IT industry (prefer experience in Banking & Financial Services sector). Key skills: Databases (RDBMS, NoSQL, open source, proprietary), SQL expertise, StreamSets, Kafka, Big Data (Hadoop, Spark), Python, Data on Cloud, Test Management tools (Octane), JIRA. Experience in building Data solutions, experience in utilizing and feeding data from a data lake. more »
such as scikit-learn, TensorFlow, or PyTorch - Solid understanding of statistical analysis and data mining techniques - Familiarity with big data technologies like Spark or Hadoop - Experience with data visualization tools (e.g., Matplotlib, Seaborn, Plotly) Nice to Have: - Experience with cloud platforms (AWS, GCP, or Azure) - Knowledge of NLP or more »
and classification techniques, and algorithms. Fluency in a programming language (Python, C, C++, Java, SQL). Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau more »
and product stakeholders to meet requirements Desirable Skills AI Tools: Familiarity with TensorFlow, PyTorch, Keras, and Scikit-learn Data Engineering: Knowledge of Apache Spark, Hadoop, and Kafka Model Deployment: Experience with Docker, Kubernetes, and MLflow Testing: Knowledge of AI testing frameworks and methodologies Frontend Technologies: Experience with React and more »
Python, R, SQL, and forecasting methods and libraries such as Prophet, ARIMA, and LSTMs. Experience with cloud platforms (AWS, GCP, or Azure) and big data technologies (Hadoop, Spark). Ability to communicate complex data insights clearly to both technical and non-technical stakeholders. Strong problem-solving skills with a results-driven more »
in SQL, Python, and modern ETL tools (e.g., Airflow, dbt). Experience with cloud platforms (AWS, Azure, or GCP) and big data technologies (Spark, Hadoop). Knowledge of data warehousing solutions (e.g., Snowflake, Redshift, BigQuery). Ability to work autonomously and thrive in a fast-paced, project-driven environment. more »
City of London, London, United Kingdom Hybrid / WFH Options
Corecruitment International
qualification preferred). 10+ years of experience in FP&A, ideally within the leisure, hospitality, or related sectors. Expertise in Big Data technologies (e.g., Hadoop, SQL, Python, BigQuery). Proven experience in managing financial planning and budgeting processes in large organizations. Strong background in real estate investment and financial more »
Manchester, North West, United Kingdom Hybrid / WFH Options
Corecruitment International
qualification preferred). 10+ years of experience in FP&A, ideally within the leisure, hospitality, or related sectors. Expertise in Big Data technologies (e.g., Hadoop, SQL, Python, BigQuery). Proven experience in managing financial planning and budgeting processes in large organizations. Strong background in real estate investment and financial more »
commonly used in AI and data science, e.g., Python or R, TensorFlow or PyTorch, Darts. Working experience with large-scale data processing frameworks (e.g., Hadoop, Spark, MLflow …). Working experience with large-scale search engines and DBMS (e.g., Elasticsearch, SQL, Neo4j, CosmosDB). Experience in Training and Monitoring AI more »
experience in quantitative analytics or data modeling. Fluency in a programming language (Python or R). Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau more »