or Rust. Experience in building and enhancing compute, storage, and data platforms, with exposure to open-source products such as Kubernetes, Knative, Ceph, Rook, Cassandra, Spark, NATS, etc. Hands-on experience with IaC tools and automation such as Terraform, Ansible, or Helm. Active engagement or contributions to the open-source more »
equity financing to mid-market and late-stage companies. Liquidity Group is backed by leading global financial institutions including Japan’s largest bank, MUFG, Spark Capital, and Apollo Asset Management. About the role We're on the lookout for accomplished credit professionals to assume the role of Director within more »
London, England, United Kingdom Hybrid / WFH Options
McGregor Boyall
ETL processes, and data warehousing solutions. Programming: Utilize Python, Java, Scala, or GoLang to build and optimize data pipelines. Distributed Processing: Work with Hadoop, Spark, and other platforms for large-scale data processing. Real-Time Data Streaming: Develop and manage pipelines using CDC, Kafka, and Apache Spark. Database more »
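Listings like the one above centre on change-data-capture (CDC) pipelines feeding Kafka and Spark. As a hedged, dependency-free sketch of the core idea such pipelines implement at scale — applying an ordered stream of insert/update/delete change events to a downstream table — the event shape and field names below are illustrative assumptions, not any specific vendor's wire format:

```python
# Minimal sketch of applying CDC (change-data-capture) events to a
# dict-backed downstream "table" -- the core loop a Kafka/Spark CDC
# pipeline performs at scale. The event shape here is an assumption
# for illustration, not a real connector's format.

def apply_cdc_events(table, events):
    """Apply ordered CDC events ({'op', 'key', 'row'}) to a dict-backed table."""
    for event in events:
        op, key = event["op"], event["key"]
        if op in ("insert", "update"):
            table[key] = event["row"]      # upsert the new row image
        elif op == "delete":
            table.pop(key, None)           # idempotent delete
        else:
            raise ValueError(f"unknown op: {op}")
    return table

if __name__ == "__main__":
    events = [
        {"op": "insert", "key": 1, "row": {"name": "alice"}},
        {"op": "update", "key": 1, "row": {"name": "alicia"}},
        {"op": "insert", "key": 2, "row": {"name": "bob"}},
        {"op": "delete", "key": 2, "row": None},
    ]
    print(apply_cdc_events({}, events))  # {1: {'name': 'alicia'}}
```

In a real deployment the event stream would arrive via Kafka and the apply step would run inside Spark Structured Streaming; the ordering guarantee per key is what makes the upsert/delete logic correct.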
and Data Mart. Utilize Vector Databases, Cosmos DB, Redis, and Elasticsearch for efficient data storage and retrieval. Demonstrate proficiency in programming languages and tools including Python, Spark, Databricks, PySpark, SQL, and ML algorithms. Implement Machine Learning models and algorithms using PySpark, scikit-learn, and other relevant tools. Manage Azure DevOps, CI … Azure Cloud environments, Azure Data Lake, Azure Data Factory, Microservices architecture. Experience with Vector Databases, Cosmos DB, Redis, Elasticsearch. Strong programming skills in Python, Spark, Databricks, PySpark, SQL, ML algorithms, Gen AI. Knowledge of Azure DevOps, CI/CD pipelines, GitHub, Kubernetes (AKS). Experience with MLOps more »
Luton, England, United Kingdom Hybrid / WFH Options
Ventula Consulting
science and analytics team in deploying pipelines. Coach and mentor the team to improve development standards. Key requirements: Strong hands-on experience with Databricks, Spark, SQL or Scala. Proven experience designing and building data solutions on a cloud-based, distributed big data system (AWS/Azure etc.) Hands-on … models and following best practices. The ability to develop pipelines using SageMaker, MLflow or similar frameworks. Strong experience with data programming frameworks such as Apache Spark. Understanding of common Data Science and Machine Learning models, libraries and frameworks. This role provides a competitive salary plus an excellent benefits package. In more »
data platform from a legacy system to one based on AWS EMR, with Amazon RDS and DynamoDB ingestion converted to Parquet files, queryable through Spark and MapReduce. This modern platform will support rapid data insight generation, data experiments for new product development, our live Machine Learning solutions and live … to-target mappings) to testing and service optimisation. Good familiarity with our developing key services/applications - Amazon RDS, Amazon DynamoDB, AWS Glue, MapReduce, Hive, Spark, YARN, Airflow. Ability to work with a range of structured, semi-structured and unstructured file formats including Parquet, JSON, CSV, PDF, JPG. Accomplished data more »
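The snippet above describes converting row-oriented RDS/DynamoDB ingests into columnar Parquet files for Spark. As a hedged, dependency-free illustration of why the columnar layout helps (a scan of one field touches one contiguous array rather than every row), the function below pivots row dicts into per-column lists; real Parquet writing would go through a library such as pyarrow or Spark itself, and the field names here are invented for the example:

```python
# Illustrative sketch: pivot row-oriented records into a columnar layout,
# the core idea behind the Parquet files that Spark/MapReduce jobs scan.
# Column set is taken from the first row; missing fields become None.

def rows_to_columns(rows):
    """Convert a list of row dicts into a dict of column lists."""
    if not rows:
        return {}
    columns = {field: [] for field in rows[0]}
    for row in rows:
        for field, values in columns.items():
            values.append(row.get(field))
    return columns

if __name__ == "__main__":
    rows = [
        {"id": 1, "amount": 9.99},
        {"id": 2, "amount": 4.50},
    ]
    cols = rows_to_columns(rows)
    print(cols["amount"])  # [9.99, 4.5] -- one contiguous column to scan
```

A query such as `SELECT sum(amount)` over this layout reads only the `amount` list, which is the access pattern Parquet's on-disk column chunks are built around.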
Complexio is a Foundational AI platform. It works to automate business activities by ingesting whole-company data – both structured and unstructured – and making sense of it. Using proprietary models and algorithms, Complexio forms a deep understanding of how humans are interacting and more »
in a technical and analytical role Experience of Data Lake/Hadoop platform implementation Hands-on experience in implementation and performance tuning of Hadoop/Spark implementations Experience with Apache Hadoop and the Hadoop ecosystem Experience with one or more relevant tools (Sqoop, Flume, Kafka, Oozie, Hue, ZooKeeper, HCatalog, Solr … Avro) Experience with one or more SQL-on-Hadoop technologies (Hive, Impala, Spark SQL, Presto) Experience developing software code in one or more programming languages (Java, Python, etc.) Preferred Qualifications Masters or PhD in Computer Science, Physics, Engineering or Math Hands-on experience leading large-scale global data warehousing more »
role Good level of experience of Data Lake/Hadoop platform implementation Good level of hands-on experience in implementation and performance tuning of Hadoop/Spark implementations Experience with Apache Hadoop and the Hadoop ecosystem Experience with one or more relevant tools (Sqoop, Flume, Kafka, Oozie, Hue, ZooKeeper, HCatalog, Solr … Avro) Experience with one or more SQL-on-Hadoop technologies (Hive, Impala, Spark SQL, Presto) Experience developing software code in one or more programming languages (Java, Python, etc.) Preferred Qualifications: Masters or PhD in Computer Science, Physics, Engineering or Maths Hands-on experience leading large-scale global data warehousing more »
succeed, organizations must blend digital and human capabilities. Our diverse, global teams bring deep industry and functional expertise and a range of perspectives to spark change. BCG delivers solutions through leading-edge management consulting along with technology and design, corporate and digital ventures, and business purpose. We work in … technology consulting environment • Current or previous consulting experience highly desirable • Experience of working with companies in the finance sector highly desirable • Platform implementation experience (Apache Hadoop, Kafka, Storm, Spark, Elasticsearch, and others) • Experience around data integration & migration, data governance, data mining, data visualisation, database modelling in an more »
improvements Key Skills 3+ years of Python experience Highly statistical and analytical Exposure to Google Cloud Platform (BigQuery, GCS, Datalab, Dataproc, Cloud ML) (desirable) Spark & Hadoop experience Strong communication skills Good problem-solving skills Qualifications Bachelor's degree or equivalent experience in a quantitative field (Statistics, Mathematics, Computer Science … classification techniques, and algorithms Fluency in a programming language (Python, C, C++, Java, SQL) Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau) This is a permanent position and offers flexibility with hybrid working, 2-3 days per week in the office, depending on workload more »
succeed, organizations must blend digital and human capabilities. Our diverse, global teams bring deep industry and functional expertise and a range of perspectives to spark change. BCG delivers solutions through leading-edge management consulting along with technology and design, corporate and digital ventures, and business purpose. We work in … to 10 years' IT Architecture experience working in a software development, technical project management, digital delivery, or technology consulting environment • Platform implementation experience (Apache Hadoop, Kafka, Storm, Spark, Elasticsearch, and others) • Experience around data integration & migration, data governance, data mining, data visualisation, database modelling in an agile more »
City of London, London, United Kingdom Hybrid / WFH Options
TALENT INTERNATIONAL UK LTD
such as Data Factory, Event Hubs, Data Lake, Synapse, and Azure SQL Server. Create and optimize data processing workflows in Databricks using PySpark and Spark SQL. Ensure ETL coding standards are met, including self-documenting code and reliable testing. Apply best practice data encryption techniques and standards to ensure … experience with Azure data products including Data Factory, Event Hubs, Data Lake, Synapse, and Azure SQL Server. Proficient in developing with Databricks, PySpark, and Spark SQL. Strong understanding of ETL coding standards, including standardized, self-documenting code and reliable testing. Knowledge of data encryption techniques and standards. Familiarity with more »
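Listings like this one revolve around Databricks/PySpark ETL into a data lake. As a hedged, stdlib-only sketch of one pattern those workflows commonly implement — partitioning output by a date column, using the same `column=value` directory convention Spark follows for partitioned writes — the record fields below are invented for illustration, and a real pipeline would write Parquet through Spark rather than JSON lines:

```python
# Sketch of a date-partitioned data-lake layout ("event_date=YYYY-MM-DD"
# directories, as Spark produces for partitioned writes). Pure stdlib;
# field names and file naming are assumptions for the example.
import json
from collections import defaultdict
from pathlib import Path

def write_partitioned(records, root, partition_key="event_date"):
    """Group records by partition_key and write one JSON-lines file per partition."""
    groups = defaultdict(list)
    for record in records:
        groups[record[partition_key]].append(record)
    paths = []
    for value, rows in groups.items():
        part_dir = Path(root) / f"{partition_key}={value}"
        part_dir.mkdir(parents=True, exist_ok=True)
        path = part_dir / "part-0000.jsonl"
        path.write_text("\n".join(json.dumps(r) for r in rows))
        paths.append(str(path))
    return sorted(paths)
```

Partition pruning is the payoff: a query filtered on `event_date` only opens the matching directories, which is why ETL standards in these roles insist on choosing partition columns deliberately.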
DevOps Engineer - with Azure DevOps, Kubernetes, Azure App Insights, Terraform, Docker, Microsoft products, Hadoop, Spark, DevOps Automation, Digital Solutions, Agile Software – Contract – UK – Remote - £500 per day Our leading global manufacturer is seeking to appoint a DevOps Engineer on a remote, OUTSIDE IR35 contract. Due to a number of … assignments simultaneously Strong verbal and written communication skills Thorough understanding of the Agile Software Development Lifecycle (SDLC) Knowledge of Big Data applications like Hadoop, Spark, and Kafka is a plus Proven Experience: in application development, technology, or a related field; equivalent work experience may be considered of 5 years more »
Head of Data Science - London - £140,000 base salary + Competitive Benefits Our client is a leading fintech company headquartered in London. Their mission is to transform the financial services industry through cutting-edge technology and data-driven solutions. They more »
Nottingham, Nottinghamshire, East Midlands, United Kingdom
Microlise
data practices Possess strong knowledge of data tools, data management tools, and various data and information technologies, e.g. DAMA DMBOK, Microsoft SQL Server, Couchbase, Apache Druid, Spark, Kafka, Airflow, etc. In-depth understanding of modern data principles, methodologies, and tools Excellent communication and collaboration skills, with the ability … native computing concepts and experience working with hybrid or private cloud platforms is a plus. Demonstrable technical experience working with a Microsoft, Red Hat, and Apache data and software engineering environment. A team-oriented individual with a passion for engineered excellence and the ability to lead and motivate a team more »
Data Factory, Event Hubs, Data Lake, Synapse, and Azure SQL Server. Databricks and PySpark Development: Develop in Databricks with experience coding in PySpark and Spark SQL. Ensure ETL code is standardized, self-documenting, and can be reliably tested. Apply best practice data encryption techniques and standards. Understand relevant national … data products like Data Factory, Event Hubs, Data Lake, Synapse, and Azure SQL Server. Experienced in developing with Databricks and coding in PySpark and Spark SQL. Thorough understanding of coding standards for ETL processes. Knowledgeable about best practice data encryption techniques and standards. Familiar with relevant legislation related to more »
SQL Server, Sybase, Snowflake) Document databases (e.g. Mongo, ArangoDB, Couchbase, Solr) Big Data (e.g. Hadoop ecosystem, Bigtable) Data streaming (e.g. Kafka, Flink, Pulsar, Beam, Spark) Cloud databases (e.g. Snowflake, CockroachDB) Other database genres (e.g. Graph, Columnar, time series) In return, we’ll give you… A competitive basic salary … scheme A high spec laptop (of course!) Need more reasons? Here are a few more... Work with some of the most exciting new technologies Spark off co-workers who’ll challenge your thinking and help you to achieve your potential Deal openly and honestly with customers Benefit from a more »
Senior Data Scientist Mobysoft is one of the fastest growing SaaS providers in the UK and has been shortlisted in the "Top 50 fastest growing technology companies in the North" for four successive years. Mobysoft provides predictive analytical software that more »
As a Data Architect, you'll lead the development of Java and Python projects, design API integrations using Spark, and collaborate with clients and internal teams to translate business requirements into high- and low-level designs. You'll also define architecture and technical designs, create data flows and integrations … users and client teams. Stay updated with the latest trends and best practices. Qualifications: Expertise in Java and Python development (Essential). Experience with Spark or Hadoop (Essential). Knowledge of Trino or Airflow (Desirable). Proven ability to design and implement scalable and secure solutions. Excellent communication and more »
Greater London, England, United Kingdom Hybrid / WFH Options
Hunter Bond
My client is looking for a talented and motivated Big Data Architect (Azure, Databricks, Spark) to be based in their London office. You'll be responsible for providing technical leadership in architecting and designing end-to-end solutions for the organisation's data lake initiatives, as they provide increasing numbers … improvements in design, processes, and implementation to improve operational management, scalability, and extensibility. The following skills/experience is essential: Strong implementation experience using Spark and Databricks Strong Cloud experience (ideally Azure) Previously heavily involved in an implementation programme Data Warehouse Strong stakeholder management experience Excellent IT background, ideally more »