with experience building complex, maintainable systems. Professional software development experience with a track record of delivering high-quality, production-grade code. Experience with scientific computing libraries such as NumPy, Pandas, or SciPy in production environments. Holistic software development mindset covering testing, documentation, security, and performance. Track record of mentoring other engineers and sharing knowledge across teams. Working knowledge of mathematical …
experience with Python and frameworks like Django/Flask/FastAPI. Database Expertise: Proficient with relational (PostgreSQL, MySQL) and NoSQL (MongoDB) databases. Data Analysis Skills: Experience using libraries like Pandas and NumPy. Software Development Best Practices: Understands object-oriented programming, Agile/Scrum methodologies, and version control (e.g., Git/GitHub). Problem-Solving & Analytical Abilities: Demonstrated ability to solve complex problems …
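The listing above pairs a Python web framework (Django/Flask/FastAPI), a relational database, and the Pandas/NumPy stack. As a rough, hedged illustration of how those pieces commonly fit together, here is a minimal FastAPI sketch; the endpoint, connection string, table and column names are invented for illustration and not taken from the ad:

```python
# Minimal sketch: a FastAPI endpoint that reads from PostgreSQL and
# summarises the result with pandas. Table/column names are hypothetical.
import pandas as pd
from fastapi import FastAPI
from sqlalchemy import create_engine

app = FastAPI()
# Placeholder connection string; in practice this would come from config/env.
engine = create_engine("postgresql+psycopg2://user:password@localhost:5432/appdb")

@app.get("/orders/summary")
def order_summary():
    # Pull a small slice of data and aggregate it in pandas.
    df = pd.read_sql("SELECT customer_id, amount FROM orders", engine)
    summary = df.groupby("customer_id")["amount"].agg(["count", "sum", "mean"])
    return summary.reset_index().to_dict(orient="records")
```

Served locally with `uvicorn main:app --reload`, this returns one JSON record per customer; the split between SQL retrieval and pandas aggregation is the pattern the requirement list points at, not a prescription.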
or data engineering. Ability to work standard European time-zone hours and legal authorisation to work in your country of residence. Strong experience with Python's data ecosystem (e.g., Pandas, NumPy) and deep expertise in SQL for building robust data extraction, transformation, and analysis pipelines. Hands-on experience with big data processing frameworks such as Apache Spark, Databricks, or Snowflake …
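Since this role centres on SQL extraction plus Pandas/NumPy transformation, a small hedged sketch of that extract-transform pattern follows; the SQLite source, table and column names are stand-ins for illustration, and a real pipeline would target the warehouse or Spark/Databricks platform named in the ad:

```python
# Sketch of a small extract-transform step: SQL pulls the raw rows,
# pandas/NumPy handle the reshaping. Source and table names are hypothetical.
import sqlite3

import numpy as np
import pandas as pd

def build_daily_revenue(db_path: str) -> pd.DataFrame:
    with sqlite3.connect(db_path) as conn:
        raw = pd.read_sql_query(
            "SELECT order_date, region, amount FROM sales WHERE amount IS NOT NULL",
            conn,
            parse_dates=["order_date"],
        )
    # Transform: daily revenue per region, plus a log-scaled column for modelling.
    daily = (
        raw.groupby([pd.Grouper(key="order_date", freq="D"), "region"])["amount"]
        .sum()
        .reset_index(name="revenue")
    )
    daily["log_revenue"] = np.log1p(daily["revenue"])
    return daily
```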
track record of handling high-visibility, customer-facing outputs. 1+ years' experience using Python (or another programming language, e.g. R, C++, Java) and with the scientific computing stack (NumPy, Pandas, SciPy, scikit-learn, etc.) Familiarity with renewable energy technologies, market design, and regulatory frameworks within European power markets, specifically GB, Germany, Spain, Portugal, France, or Italy. Experience writing technical, report-style …
model performance evaluation, hyperparameter tuning, and maintenance using tools like Vertex AI Pipelines. Cloud Computing (Google Cloud Platform - GCP preferred). Technical Expertise & Tools: Python: Advanced proficiency in data analysis (Pandas, NumPy), machine learning, API development (Flask/FastAPI), and writing clean, maintainable code. SQL: Expertise in querying, database design/optimization, stored procedures, functions, and partitioning/clustering strategies for BigQuery …
offs and varied opinions. Experience with a range of toolsets comparable to our own: Database technologies: SQL, Redshift, Postgres, dbt, Dask, Airflow, etc. AI Feature Development: LangChain, LangSmith, pandas, NumPy, scikit-learn, SciPy, Hugging Face, etc. Data visualization tools such as Plotly, Seaborn, Streamlit, etc. You are able to chart a path on a long-term journey through …
familiarity with LLM/GenAI prompting and augmentation for textual analysis, with an interest in learning more. Experience working with commonly used data science libraries and frameworks, e.g. spaCy, pandas, NumPy, scikit-learn, Keras/TensorFlow, PyTorch, LangChain, Hugging Face Transformers, etc. Familiar with both on-premises and cloud-based platforms (e.g. AWS). Working understanding of MLOps workflows and …
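The library list above spans classical NLP through transformers. As one small, hedged example of the classical end of that list, here is a scikit-learn TF-IDF text classifier; the labels and documents are made up purely to show the pattern:

```python
# Minimal text-classification sketch with scikit-learn; the tiny inline
# dataset is illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

docs = [
    "invoice overdue please pay",
    "team meeting moved to friday",
    "final notice: payment required",
    "lunch plans for the offsite",
]
labels = ["finance", "general", "finance", "general"]

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("model", LogisticRegression(max_iter=1000)),
])
clf.fit(docs, labels)
print(clf.predict(["payment reminder for invoice"]))  # expected: ['finance']
```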
Machine Learning Engineer - SaaS - London (Tech stack: Machine Learning Engineer, Python, TensorFlow, PyTorch, scikit-learn, Keras, Natural Language Processing (NLP), Hugging Face Transformers, Pandas, NumPy, Jupyter Notebooks, Matplotlib, Seaborn, Flask (for building APIs), FastAPI, Docker, MLflow, DVC (Data Version Control), AWS SageMaker, Azure Machine Learning, Google Cloud AI Platform, TensorFlow Serving, ONNX (Open Neural Network Exchange) We have several exciting … full training will be provided to fill any gaps in your skill set): Machine Learning Engineer, Python, TensorFlow, PyTorch, scikit-learn, Keras, Natural Language Processing (NLP), Hugging Face Transformers, Pandas, NumPy, Jupyter Notebooks, Matplotlib, Seaborn, Flask (for building APIs), FastAPI, Docker, MLflow, DVC (Data Version Control), AWS SageMaker, Azure Machine Learning, Google Cloud AI Platform, TensorFlow Serving, ONNX (Open Neural …
track drift, response quality and spend; implement automated retraining triggers. Collaboration - work with Data Engineering, Product and Ops teams to translate business constraints into mathematical formulations. Tech stack Python (pandas, NumPy, scikit-learn, PyTorch/TensorFlow) SQL (Redshift, Snowflake or similar) AWS SageMaker → Azure ML migration, with Docker, Git, Terraform, Airflow/ADF Optional extras: Spark, Databricks, Kubernetes. What you … yrs optimisation/recommender work at production scale (dynamic pricing, yield, marketplace matching). Mathematical optimisation know-how - LP/MIP, heuristics, constraint tuning, objective-function design. Python toolbox: pandas, NumPy, scikit-learn, PyTorch/TensorFlow; clean, tested code. Cloud ML: hands-on with AWS SageMaker plus exposure to Azure ML; Docker, Git, CI/CD, Terraform. SQL mastery for …
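Because this role asks for LP/MIP know-how and for translating business constraints into mathematical formulations, here is a hedged toy example of that workflow using scipy.optimize.linprog; the product margins and capacity numbers are invented and stand in for whatever the real constraints would be:

```python
# Toy linear programme: choose production quantities x1, x2 to maximise
# margin subject to capacity constraints. All numbers are illustrative.
from scipy.optimize import linprog

# linprog minimises, so negate the per-unit margins (40 and 30) to maximise.
c = [-40.0, -30.0]

# Constraints: 2*x1 + 1*x2 <= 100 (machine hours), 1*x1 + 2*x2 <= 80 (labour hours)
A_ub = [[2.0, 1.0], [1.0, 2.0]]
b_ub = [100.0, 80.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
print("optimal quantities:", res.x)   # expected around [40, 20]
print("optimal margin:", -res.fun)    # expected around 2200
```

Integer (MIP) variants and heuristics would swap in a dedicated solver, but the objective-and-constraints formulation step is the same.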
approaches. Deployment: ship a model as a container, update an Airflow (or Azure Data Factory) job. Review: inspect dashboards, compare control vs. treatment, plan next experiment. Tech stack Python (pandas, NumPy, scikit-learn, PyTorch/TensorFlow) SQL (Redshift, Snowflake or similar) AWS SageMaker → Azure ML migration, with Docker, Git, Terraform, Airflow/ADF Optional extras: Spark, Databricks, Kubernetes. What you … yrs optimisation/recommender work at production scale (dynamic pricing, yield, marketplace matching). Mathematical optimisation know-how - LP/MIP, heuristics, constraint tuning, objective-function design. Python toolbox: pandas, NumPy, scikit-learn, PyTorch/TensorFlow; clean, tested code. Cloud ML: hands-on with AWS SageMaker plus exposure to Azure ML; Docker, Git, CI/CD, Terraform. SQL mastery for …
some of the biggest names in private equity. Ideal Candidate Experience: 5+ years in data science roles, preferably in fast-moving or early-stage environments Languages & Tools: Strong Python (Pandas, NumPy, Scikit-learn, TensorFlow or PyTorch) Advanced SQL AWS MLOps tooling (e.g., MLflow, SageMaker, or similar) Bonus Points For: Knowledge of LLMs, RAG pipelines, prompt engineering, or agentic interfaces Experience …
Northampton, Northamptonshire, England, United Kingdom
Harnham - Data & Analytics Recruitment
some of the biggest names in private equity. Ideal Candidate Experience: 5+ years in data science roles, preferably in fast-moving or early-stage environments Languages & Tools: Strong Python (Pandas, NumPy, Scikit-learn, TensorFlow or PyTorch) Advanced SQL AWS MLOps tooling (e.g., MLflow, SageMaker, or similar) Bonus Points For: Knowledge of LLMs, RAG pipelines, prompt engineering, or agentic interfaces Experience …
suits someone who is not only technically strong, but solution-oriented, strategically minded, and able to communicate insights clearly to both technical and non-technical audiences. Requirements: Advanced Python (Pandas, NumPy, Scikit-learn, TensorFlow, and PyTorch) & SQL skills (Snowflake a plus). Experience with data warehousing and database technologies. Solid machine learning experience (modelling to deployment). Cloud exposure (GCP/AWS …
highly advanced analytical tools. Company: Trust in Soda. Qualifications: Essential Requirements: Extensive experience in deploying products. Solid experience with Python and associated libraries such as SciPy, scikit-learn and pandas/NumPy. Experience creating NLP/DL/RNN models. Experience with Image Processing and CNNs. Excellent communication skills. Degree in CS, maths, statistics, engineering, physics or similar. Desirable Requirements: …
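For the image-processing and CNN requirement above, a small hedged Keras sketch of the kind of model that requirement implies; the input shape and class count are placeholders rather than anything specified by the listing:

```python
# Minimal convolutional classifier in Keras; shapes and class count are
# placeholders, not taken from the listing.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(64, 64, 3)),          # small RGB images
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),   # 10 hypothetical classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```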
Manchester, Lancashire, England, United Kingdom Hybrid / WFH Options
Adria Solutions
fresh ideas to the team. Skills and Experience: Degree in Computer Science, Artificial Intelligence, Data Science, or a related field. Proficiency in Python and key libraries such as NumPy, Pandas, scikit-learn, TensorFlow or PyTorch. Basic understanding of machine learning algorithms and model evaluation techniques. Strong analytical and communication skills. Comfortable working in a collaborative environment and taking feedback. Desirable …
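For the "model evaluation techniques" point above, a brief hedged sketch of the standard scikit-learn workflow, using a bundled toy dataset rather than anything role-specific:

```python
# Hold-out evaluation plus cross-validation on a toy dataset; illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import cross_val_score, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Held-out performance plus a 5-fold cross-validated estimate.
print(classification_report(y_test, model.predict(X_test)))
print("CV accuracy:", cross_val_score(model, X_train, y_train, cv=5).mean())
```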
Manchester, Lancashire, England, United Kingdom Hybrid / WFH Options
Method Resourcing
Graduate/postgraduate degree in engineering, mathematics, physics, or statistics. Experience in financial services, insurance, or e-commerce. Familiarity with cloud-based deployment, neural networks, TensorFlow, CatBoost, XGBoost, scikit-learn, Pandas, SQL, API development, or CI/CD pipelines. Background in software engineering or DevOps/MLOps. RSG Plc is acting as an Employment Business in relation to this vacancy.
York, North Yorkshire, England, United Kingdom Hybrid / WFH Options
Method Resourcing
Graduate/postgraduate degree in engineering, mathematics, physics, or statistics. Experience in financial services, insurance, or e-commerce. Familiarity with cloud-based deployment, neural networks, TensorFlow, CatBoost, XGBoost, scikit-learn, Pandas, SQL, API development, or CI/CD pipelines. Background in software engineering or DevOps/MLOps. RSG Plc is acting as an Employment Business in relation to this vacancy.
scalable workflows. Help grow and shape the data science team and its role within the wider business. What We’re Looking For: Strong technical foundation with proficiency in Python (Pandas, NumPy, Scikit-learn), SQL, and cloud platforms (GCP or AWS). Experience with modern data warehouses (BigQuery, Snowflake, Redshift). Proven experience in deploying machine learning models or optimisation algorithms …
has: 🎓 A degree in Data Science, Mathematics, Computer Science, Statistics, or a related field. 🧠 Solid understanding of data analysis, machine learning concepts, and statistical methods. 🐍 Proficiency in Python (e.g., Pandas, Scikit-learn, NumPy) or R, with exposure to tools like Jupyter, SQL, or cloud platforms (e.g., AWS, GCP). 📊 Experience working with data—through academic projects, internships, or personal work …
London, England, United Kingdom Hybrid / WFH Options
Harnham
dbt, Airflow) Experience with loyalty-programme analytics or CRM platforms. Knowledge of machine-learning frameworks (scikit-learn, TensorFlow) for customer scoring. Technical Toolbox: Data & modeling: SQL, Python/R, pandas, scikit-learn. Dashboarding: Tableau or Power BI. ETL & warehousing: dbt, Airflow, Snowflake/Redshift/BigQuery. Experimentation: A/B testing platforms (Optimizely, VWO). Desired Skills and Experience: 8+ years …
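The toolbox above ends with A/B testing platforms; as a hedged sketch of the statistics underneath those platforms, here is a two-proportion z-test in Python with invented conversion counts:

```python
# Two-proportion z-test for an A/B experiment; the counts are invented.
import numpy as np
from scipy.stats import norm

conv_a, n_a = 412, 10_000   # control conversions / visitors
conv_b, n_b = 468, 10_000   # treatment conversions / visitors

p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)
se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))

z = (p_b - p_a) / se
p_value = 2 * norm.sf(abs(z))          # two-sided test
print(f"uplift: {p_b - p_a:.4f}, z = {z:.2f}, p = {p_value:.4f}")
```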
yrs professional experience in a Trading, Structuring, Risk or Quant role in a financial institution, fintech, trading house, or commodities house. Strong coding skills required: Python, proficiency in data science stack (Pandas, scikit-learn or equivalent), SQL. Familiarity with GUI development (Dash, Panel or equivalent). Experience designing, developing and deploying trading tools and GUIs and at least one of the following …
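Since the listing above names Dash or Panel for GUI work, a small hedged Dash sketch follows; the ticker names and random price series are placeholders rather than real market data, and the snippet assumes the Dash 2.x API:

```python
# Tiny Dash app: a dropdown selects a (synthetic) instrument and the chart
# updates via a callback. Data is randomly generated for illustration.
import numpy as np
import pandas as pd
import plotly.express as px
from dash import Dash, Input, Output, dcc, html

rng = np.random.default_rng(0)
prices = pd.DataFrame(
    {sym: 100 + rng.standard_normal(250).cumsum() for sym in ["AAA", "BBB"]},
    index=pd.bdate_range("2024-01-01", periods=250),
)

app = Dash(__name__)
app.layout = html.Div([
    dcc.Dropdown(options=list(prices.columns), value="AAA", id="symbol"),
    dcc.Graph(id="price-chart"),
])

@app.callback(Output("price-chart", "figure"), Input("symbol", "value"))
def update_chart(symbol):
    # Re-draw the line chart for whichever synthetic instrument is selected.
    return px.line(x=prices.index, y=prices[symbol], title=f"{symbol} synthetic price")

if __name__ == "__main__":
    app.run(debug=True)
```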
approaches. Deployment: ship a model as a container, update an Airflow (or Azure Data Factory) job. Review: inspect dashboards, compare control vs. treatment, plan next experiment. Tech stack Python (pandas, NumPy, scikit-learn, PyTorch/TensorFlow) SQL (Redshift, Snowflake or similar) AWS SageMaker → Azure ML migration, with Docker, Git, Terraform, Airflow/ADF Optional extras: Spark, Databricks, Kubernetes. What you'll …
Required Skills & Experience: Bachelor's degree in Computer Science, Mathematics, Statistics, Business Administration or related field. Advanced knowledge of SQL. Good knowledge of Python, including popular Data Science packages (pandas, matplotlib, seaborn, numpy, sklearn). Familiarity with what is happening under the hood of popular Machine Learning algorithms. Strong problem-solving skills and attention to detail. Strong communication and collaboration skills …
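The "under the hood" point above is easiest to show in code. Here is a hedged NumPy sketch of batch gradient descent for ordinary least squares, checked against scikit-learn's fitted coefficients on synthetic data; the data and learning rate are arbitrary choices for illustration:

```python
# Linear regression "under the hood": batch gradient descent on the MSE loss,
# compared with scikit-learn's solution. Data is synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 2))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 1.0 + rng.normal(scale=0.1, size=500)

# Gradient descent on weights w and intercept b for loss = mean((X @ w + b - y) ** 2)
w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(2000):
    err = X @ w + b - y
    w -= lr * (2 / len(y)) * (X.T @ err)
    b -= lr * (2 / len(y)) * err.sum()

ref = LinearRegression().fit(X, y)
print("gradient descent:", w, b)
print("scikit-learn:   ", ref.coef_, ref.intercept_)
```

Both approaches should land close to the true coefficients (3, -2) and intercept 1; the point is only to make the optimisation step behind the library call visible.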