environment. Experience in log management tools to troubleshoot issues as well as identify useful analytics data. Preferred: Experience in Microsoft Azure services and Databricks; Spark, Redshift, Hadoop MapReduce or other Big Data frameworks; code management tools (Git, sbt, Maven); PySpark, Scala or other functional programming languages; analytics tools more »
for business improvements Lead a small team of data scientists working on Neural Networks (CNN & RNN), LLMs, ML & NLP. NLP/AI/ML/Spark/Python/Data scientist/Machine Learning Engineer/OCR/Deep Learning Requirements: Bachelor's degree or equivalent experience in a quantitative field more »
Bristol, England, United Kingdom Hybrid / WFH Options
Made Tech
and able to guide how one could deploy infrastructure into different environments. Knowledge of handling and transforming various data types (JSON, CSV, etc.) with Apache Spark, Databricks or Hadoop Good understanding of possible architectures involved in modern data system design (Data Warehouse, Data Lakes, Data Meshes) Ability to more »
network programming. Experience in building and enhancing compute, storage, and data platforms with exposure to open source products like Kubernetes, Knative, Ceph, Rook, Cassandra, Spark, NATS and the like. Hands-on with infrastructure-as-code tools and automation, such as Terraform, Ansible, or Helm. The role: Tech Lead responsible more »
at scale utilising best-of-breed Cloud services and technologies. So, what tools and technologies will you be using? AWS Python Databricks/Spark Trino Airflow Docker CloudFormation/Terraform SQL/NoSQL We provide you with the opportunity to think freely and work creatively and right now … Other skills we are looking for you to demonstrate include: Experience of data storage technologies: Delta Lake, Iceberg, Hudi Sound knowledge and understanding of Apache Spark, Databricks or Hadoop Ability to take business requirements and translate these into tech specifications Knowledge of architecture best practices and patterns Competence more »
and manage data lake/data warehouse platforms. (Some of the following types of providers: AWS, Microsoft Azure, Google Cloud Platform, Databricks, Snowflake, Cloudera, Spark, MongoDB) Done this at companies using high volumes of data, ideally in retailing. Other sectors where high-volume data is used would also be relevant more »
language, ideally Python but can also be Java or C/C++. SQL experience. Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau). Get in touch with Ella Alcott - Ella@engagewithus.com more »
in programming languages commonly used in machine learning, preferably Python. Experience with machine learning frameworks and libraries, such as TensorFlow, PyTorch, scikit-learn, or Apache Spark. Proven track record of developing and implementing machine learning solutions in a professional setting. Passion for exploring new technologies and driving innovation in more »
Google Cloud Professional Cloud Architect or Professional Cloud Developer certification Highly desirable to have hands-on experience with ETL tools, Hadoop-based technologies (e.g., Spark), and batch/streaming data pipelines (e.g., Beam, Flink, etc.) Proven expertise in designing and constructing data lakes and data warehouse solutions utilising technologies more »
Engineering experience Demonstrate in-depth knowledge of large-scale data platforms (Databricks, Snowflake) and cloud-native tools (Azure Synapse, Redshift) Experience of analytics technologies (Spark, Hadoop, Kafka) Have familiarity with Data Lakehouse architecture, SQL Server, DataOps, and data lineage concepts Demonstrate in-depth knowledge of large-scale data platforms more »
you! Minimum Qualifications Bachelor's or Master's Degree in Engineering or Computer Applications Hands-on experience with MS SQL Server and GCP Familiarity with BQ, Spark, Hive, Pig, and other analytical tools. Understanding of the finance domain. Preferred Qualification: Experience in SAP data modelling Genpact is an Equal Opportunity Employer and more »
wrangling, modelling, feature engineering and deployment. What’s equally relevant is programming knowledge and a gift for coding using R, Python, SQL, SAS or Spark. You’ll also need the capability to deliver projects, collaborate with operational teams and be able to explain technical concepts to non-specialists. What more »
Manchester, England, United Kingdom Hybrid / WFH Options
Made Tech
and able to guide how one could deploy infrastructure into different environments. Knowledge of handling and transforming various data types (JSON, CSV, etc.) with Apache Spark, Databricks or Hadoop Good understanding of possible architectures involved in modern data system design (Data Warehouse, Data Lakes, Data Meshes) Ability to more »
build their AI practice and a team around you. Required Skills: Building cloud-native machine learning architecture with LlamaIndex, HuggingFace, SentenceTransformers, PyTorch, Python, Apache Spark. Experience with practical application of AI and scaling AI with these tools. Experience in Health Care is essential. We would love to share more »
or Rust. Experience in building and enhancing compute, storage, and data platforms with exposure to open source products like Kubernetes, Knative, Ceph, Rook, Cassandra, Spark, NATS, etc. Hands-on experience with IaC tools and automation, such as Terraform, Ansible, or Helm. Active engagement or contributions to the open-source more »
Southampton, Hampshire, South East, United Kingdom
Ordnance Survey Limited
development. Track record in problem resolution & selection of technical solutions. Qualified to relevant development certification (or equivalent experience). Experience working with Databricks/Apache Spark. Experience working with infrastructure-as-code. Desirable: Experience working with Geospatial Data. Experience using Terraform. The rewards: We want you to love what more »
City of London, London, United Kingdom Hybrid / WFH Options
TALENT INTERNATIONAL UK LTD
such as Data Factory, Event Hubs, Data Lake, Synapse, and Azure SQL Server. Create and optimize data processing workflows in Databricks using PySpark and Spark SQL. Ensure ETL coding standards are met, including self-documenting code and reliable testing. Apply best practice data encryption techniques and standards to ensure … experience with Azure data products including Data Factory, Event Hubs, Data Lake, Synapse, and Azure SQL Server. Proficient in developing with Databricks, PySpark, and Spark SQL. Strong understanding of ETL coding standards, including standardized, self-documenting code and reliable testing. Knowledge of data encryption techniques and standards. Familiarity with more »
Platforms. Upskill business users in analytical standards and data science tools and techniques. Experienced in using one or more analytical tools, e.g. R, Python, Tableau, Apache Spark, etc. Highly skilled in Azure Machine Learning and Azure Databricks Knowledge and experience of how to maintain data, tools and processes to more »
or Rust. Experience in building and enhancing compute, storage, and data platforms with exposure to open source products like Kubernetes, Knative, Ceph, Rook, Cassandra, Spark, NATS, etc. Hands-on experience with IaC tools and automation, such as Terraform, Ansible, or Helm. Active engagement or contributions to the open-source more »
Nottingham, Nottinghamshire, East Midlands, United Kingdom
Experian Ltd
Redshift, DynamoDB, Glue and SageMaker Infrastructure-as-Code tools and approaches (we use the AWS CDK with CloudFormation) Data processing frameworks such as pandas, Spark and PySpark Machine learning concepts like model training, model registry, model deployment and monitoring Development and CI/CD tools (we use GitHub, CodePipeline more »
Southampton, Hampshire, South East, United Kingdom
Ordnance Survey Limited
to join Ordnance Survey's Data Derivation team in developing our next generation geospatial data. Using big data technologies (primarily Databricks, an implementation of Apache Spark) you will develop, support, and maintain automated processes that enrich and transform Ordnance Survey's large-scale data. Working in a forward … testing approaches and best practice Providing technical coaching and mentoring Advocating software engineering industry best practice Interpreting detailed specifications and translating into simple solutions Apache Spark/Databricks experience (desirable) Experience of working in Microsoft Azure (desirable) Experience of working with the Scala language (desirable) The rewards We more »
or Rust. Experience in building and enhancing compute, storage, and data platforms with exposure to open source products like Kubernetes, Knative, Ceph, Rook, Cassandra, Spark, NATS, etc. Hands-on experience with IaC tools and automation, such as Terraform, Ansible, or Helm. Active engagement or contributions to the open-source more »
Bristol, Avon, South West, United Kingdom Hybrid / WFH Options
Motability Operations
/transformation of large data sets using varied databases and platforms; knowledge and experience of Google Analytics, as well as Python and Spark libraries, would be advantageous. Please note this is a fixed term appointment to cover maternity leave. You will work closely with Architects and Tech more »
Employment Type: Contract, Part Time, Work From Home
network programming. Experience in building and enhancing compute, storage, and data platforms with exposure to open source products like Kubernetes, Knative, Ceph, Rook, Cassandra, Spark, NATS and the like. Hands-on with infrastructure-as-code tools and automation, such as Terraform, Ansible, or Helm. The role: Tech Lead responsible more »