… areas: Big Data Analytics (e.g. Google BigQuery/BigTable, Apache Spark), Parallel Computing (e.g. Apache Spark, Kubernetes, Databricks), Cloud Engineering (AWS, GCP, Azure), Spatial Query Optimisation, Data Storytelling with (Jupyter) Notebooks, Graph Computing, Microservices Architectures. Modelling and statistical analysis experience, ideally customer-related. A numerate university degree, e.g. Computer Science or Geography. Relevant industry sector knowledge is ideal but not essential. A …
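Where this posting names Spark for big-data analytics and parallel computing, here is a minimal PySpark sketch of the kind of distributed aggregation involved; the event data and app name are invented, and a local Spark installation is assumed:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Local Spark session; on Databricks or a managed cluster this is
# typically provided for you.
spark = SparkSession.builder.appName("demo").getOrCreate()

# Hypothetical event data; real workloads would read from BigQuery,
# cloud storage, or a lake table instead.
events = spark.createDataFrame(
    [("london", 3), ("leeds", 5), ("london", 2)],
    ["city", "visits"],
)

# Aggregations like this are distributed across executors by Spark.
events.groupBy("city").agg(F.sum("visits").alias("total_visits")).show()

spark.stop()
```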
… a related field. Proven experience in machine learning applications such as recommendation systems, segmentation, and marketing optimisation. Proficiency in Python, SQL, Bash, and Git, with hands-on experience in Jupyter notebooks, Pandas, and PyTorch. Familiarity with cloud platforms (AWS, Databricks, Snowflake) and containerisation tools (Docker, Kubernetes). Strong problem-solving skills and a passion for driving measurable business impact. Knowledge …
… learning models. Build AI systems using Large Language Models. Build processes for extracting, cleaning and transforming data (SQL/Python). Ad-hoc data mining for insights using Python + Jupyter notebooks. Present insights and predictions in live dashboards using Tableau/Power BI. Lead the presentation of findings to clients through written documentation, calls, and presentations. Actively seek out new opportunities …
… improve our ability to serve clients. Tech Skills Required: advanced Python coding for data science; software engineering architecture design for applications with integrated data science solutions; Jupyter server/notebooks; AWS (EC2, SageMaker, S3); Git version control; SQL skills including selecting, filtering, aggregating, and joining data using core clauses, CTEs, window functions, subqueries, and data …
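To ground the SQL skills listed above, a minimal self-contained sketch using Python's built-in sqlite3 module; the orders table, column names, and query are invented for illustration (window functions need SQLite 3.25+):

```python
import sqlite3

# In-memory database with a small hypothetical orders table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, order_date TEXT, amount REAL);
    INSERT INTO orders VALUES
        ('alice', '2024-01-05', 120.0),
        ('alice', '2024-02-11', 80.0),
        ('bob',   '2024-01-20', 200.0);
""")

# A CTE filters to recent orders; a window function then ranks each
# customer's orders by amount, a common pattern in analytics SQL.
query = """
WITH recent AS (
    SELECT customer, order_date, amount
    FROM orders
    WHERE order_date >= '2024-01-01'
)
SELECT customer,
       order_date,
       amount,
       RANK() OVER (PARTITION BY customer ORDER BY amount DESC) AS amount_rank
FROM recent
ORDER BY customer, amount_rank;
"""
for row in conn.execute(query):
    print(row)
```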
… testing frameworks (e.g., DoWhy, causalml). Programming & Data Tools: Python: strong foundation in Pandas, NumPy, matplotlib/seaborn, scikit-learn, TensorFlow, PyTorch, etc. SQL: advanced querying for large-scale datasets. Jupyter, Databricks, or notebook-based workflows for experimentation. Data Access & Engineering Collaboration: comfort working with cloud data warehouses (e.g., Snowflake, Databricks, Redshift, BigQuery). Familiarity with data pipelines and orchestration tools like …
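As a small illustration of the experimentation workflow these tools support, a sketch of a two-sample test on simulated A/B data using NumPy and SciPy (not the DoWhy/causalml APIs themselves); the conversion rates and sample sizes are invented:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical A/B test: simulate conversion outcomes for control and
# treatment groups, where the treatment lifts conversion from 10% to 12%.
control = rng.binomial(1, 0.10, size=5_000)
treatment = rng.binomial(1, 0.12, size=5_000)

# Welch's t-test on the difference in conversion rates.
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)

lift = treatment.mean() - control.mean()
print(f"observed lift: {lift:.4f}, p-value: {p_value:.4f}")
```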
… field. Proven experience in machine learning applications such as recommendations, segmentation, forecasting, and marketing spend optimisation. Proficiency in Python, SQL, and Git, with hands-on experience in tools like Jupyter notebooks, Pandas, and PyTorch. Expertise in cloud platforms (AWS, Databricks, Snowflake) and containerisation tools (Docker, Kubernetes). Strong leadership skills with experience mentoring and managing data science teams. Deep knowledge …
… a related field. 🧠 Solid understanding of data analysis, machine learning concepts, and statistical methods. 🐍 Proficiency in Python (e.g., Pandas, Scikit-learn, NumPy) or R, with exposure to tools like Jupyter, SQL, or cloud platforms (e.g., AWS, GCP). 📊 Experience working with data (through academic projects, internships, or personal work) and a curiosity to learn more. 🗣️ Strong communication skills to share …
… learning models. Build AI systems using Large Language Models. Build processes for extracting, cleaning and transforming data (SQL/Python). Ad-hoc data mining for insights using Python + Jupyter notebooks. Actively seek out new opportunities to learn and develop. Be an example of data science best practice, e.g. Git/Docker/cloud deployment. Write proposals for exciting new …
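To make the extract-clean-transform duty above concrete, a minimal pandas sketch; the file names, columns, and aggregation are invented for illustration:

```python
import pandas as pd

# Extract: read raw data (hypothetical CSV path and schema).
raw = pd.read_csv("sales_raw.csv", parse_dates=["order_date"])

# Clean: drop duplicate rows and records missing a customer id,
# and normalise inconsistent text casing.
clean = (
    raw.drop_duplicates()
       .dropna(subset=["customer_id"])
       .assign(region=lambda df: df["region"].str.strip().str.title())
)

# Transform: aggregate monthly revenue per region for downstream analysis.
monthly = (
    clean.groupby([pd.Grouper(key="order_date", freq="MS"), "region"])["revenue"]
         .sum()
         .reset_index(name="monthly_revenue")
)

monthly.to_csv("sales_monthly.csv", index=False)
```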
… with cloud-based ML services: AWS SageMaker, Azure ML, GCP Vertex AI, etc. Understanding of deployment pipelines and serverless components (e.g., Lambda, Step Functions). Data Science Collaboration: exposure to Jupyter Notebooks, visualization libraries (e.g., Matplotlib, Seaborn). Knowledge of synthetic data generation, data augmentation, and perturbation techniques. Preferred Qualifications: Bachelor's or Master's in Computer Science, Data Science, Software …
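For the serverless components mentioned, a minimal boto3 sketch of invoking a Lambda function synchronously; the function name and payload are hypothetical, and AWS credentials/region are assumed to be configured in the environment:

```python
import json
import boto3

# Assumes AWS credentials and region are configured (env vars or profile).
client = boto3.client("lambda")

# Hypothetical function that scores a single record with a deployed model.
response = client.invoke(
    FunctionName="score-model",        # hypothetical function name
    InvocationType="RequestResponse",  # synchronous call
    Payload=json.dumps({"features": [0.2, 1.7, 3.1]}),
)

result = json.loads(response["Payload"].read())
print(result)
```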
… Learning, AI, Statistics, Economics or equivalent). 5+ years of professional working experience. Someone who thrives on the incremental delivery of high-quality production systems. Proficiency in Java, Python, SQL, Jupyter Notebook. Experience with machine learning and statistical inference. Understanding of ETL processes and data pipelines, and the ability to work closely with Machine Learning Engineers on product implementation. Ability to communicate …
… preprocessing, language modeling, and semantic similarity. Strong proficiency in Python, including use of ML libraries such as TensorFlow, PyTorch, or similar. Experience with data science tools and platforms (e.g., Jupyter, Pandas, NumPy, MLflow). Familiarity with cloud-based AI tools and infrastructure, especially within the AWS ecosystem. Strong understanding of data structures, algorithms, and statistical analysis. Experience working with ETL …
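A minimal sketch of the semantic-similarity idea above, using scikit-learn's TF-IDF and cosine similarity as a lightweight stand-in for the neural embeddings a TensorFlow/PyTorch stack would provide; the example sentences are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical documents to compare.
docs = [
    "The cat sat on the mat.",
    "A cat was sitting on a mat.",
    "Quarterly revenue grew by ten percent.",
]

# TF-IDF vectors; cosine similarity scores every pair of documents.
vectors = TfidfVectorizer().fit_transform(docs)
similarity = cosine_similarity(vectors)

# Similar sentences score higher than the unrelated one.
print(similarity.round(2))
```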
… Playwright or similar testing frameworks. REST APIs: strong understanding of integrating and working with RESTful services. Data Skills: experience in data wrangling/analysis (e.g., using SQL or Python, Jupyter Notebook). Collaboration: experience working in an Agile environment (Scrum/Kanban). Problem-Solving: strong analytical and troubleshooting skills. Desirable Skills: familiarity with state management libraries (MobX, Redux). …
… data analysis. Strong technical skills in data analysis, statistics, and programming. Strong working knowledge of Python, Hadoop, SQL, and/or R. Working knowledge of Python data tools (e.g. Jupyter, Pandas, Scikit-Learn, Matplotlib). Ability to speak the language of statistics, finance, and economics is a plus. Proficiency in English. In a changing world, diversity and inclusion …
… and other Qualtrics products. Acquire data from customers (usually SFTP or cloud storage APIs). Validate data with exceptional detail orientation (including audio data). Perform data transformations (using Python and Jupyter Notebooks). Load the data via APIs or pre-built Discover connectors. Advise our Sales Engineers and customers as needed on the data, integrations, architecture, best practices, etc. Build new AWS …
… Defender XDR, Entra, Purview). Create scripts, APIs, and orchestrations that reduce manual effort and improve speed and accuracy in security operations. - Tell Stories with Data: Use tools like Jupyter Notebooks, Kusto Query Language (KQL), and Python to query and visualize large-scale security datasets. Translate telemetry into insights and share narratives that influence decision-making across engineering and leadership. … engineering, preferably in cloud-native or regulated environments. - Strong programming/scripting skills (Python preferred) with a focus on infrastructure and operations tooling. - Experience working with large datasets in Jupyter Notebooks and building dashboards or reports for security posture and compliance. - Strong communication skills with an ability to convey technical concepts to non-technical stakeholders. - Role is UK-based and …
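To illustrate the telemetry-to-insight workflow above, a minimal pandas sketch; the sign-in log schema is invented, and in practice the data would come from a KQL query against real security telemetry rather than a hand-built frame:

```python
import pandas as pd

# Hypothetical sign-in telemetry (invented schema for illustration).
events = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2024-06-01 09:00", "2024-06-01 09:05", "2024-06-01 09:07",
        "2024-06-01 09:09", "2024-06-01 10:30",
    ]),
    "account": ["alice", "bob", "bob", "bob", "alice"],
    "result": ["success", "failure", "failure", "failure", "success"],
})

# Count failed sign-ins per account in 15-minute windows; a burst of
# failures in one window is a simple brute-force signal worth narrating.
failures = events[events["result"] == "failure"]
per_window = failures.groupby(
    ["account", pd.Grouper(key="timestamp", freq="15min")]
).size()

print(per_window[per_window >= 3])  # accounts with 3+ failures in a window
```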
… design and implement data engineering and AI/ML infrastructure. Things we're looking for: proficiency in data analysis, insights generation, and using cloud-hosted tools (e.g., BigQuery, Metabase, Jupyter). Strong Python and SQL skills, with experience in data abstractions, pipeline management, and integrating machine learning solutions. Adaptability to evolving priorities and a proactive approach to solving impactful problems …
… Finance, Collections, Operations, and other stakeholders. What you'll need: Excellent SQL skills. A drive to solve problems using data. Proficiency with the Python data science stack (pandas, NumPy, Jupyter notebooks, Plotly/matplotlib, etc.). Bonus skills include: familiarity with Git. Experience with data visualization tools (Tableau, Looker, Power BI, or equivalent). Knowledge of dbt. 2-5 years of …
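A minimal sketch of the pandas + matplotlib stack named above, of the kind used for ad-hoc analysis before a Tableau/Looker/Power BI dashboard exists; the repayment data is invented:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical monthly repayment data for illustration.
df = pd.DataFrame({
    "month": pd.date_range("2024-01-01", periods=6, freq="MS"),
    "repayment_rate": [0.91, 0.93, 0.90, 0.94, 0.95, 0.96],
})

# A quick line chart: the throwaway visual that precedes a dashboard.
fig, ax = plt.subplots(figsize=(6, 3))
ax.plot(df["month"], df["repayment_rate"], marker="o")
ax.set_xlabel("Month")
ax.set_ylabel("Repayment rate")
ax.set_title("Monthly repayment rate")
fig.tight_layout()
plt.show()
```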