critical for structuring data for analysis. Proficiency in cloud platforms, such as AWS and GCP, with hands-on experience in services like Databricks, Redshift, BigQuery, and Snowflake, is highly valued. Advanced Python skills for data manipulation, automation, and scripting, using libraries like Pandas and NumPy, are necessary for effective …
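As an illustration of the Pandas/NumPy data-manipulation skills named above, a minimal sketch (the column names and data are hypothetical, not taken from any posting):

```python
import pandas as pd
import numpy as np

# Hypothetical sales records, standing in for data pulled from a warehouse
df = pd.DataFrame({
    "region": ["north", "south", "north", "south"],
    "revenue": [100.0, 250.0, 175.0, 300.0],
})

# Aggregate revenue per region, then derive a log-scaled column with NumPy
summary = df.groupby("region", as_index=False)["revenue"].sum()
summary["log_revenue"] = np.log(summary["revenue"])
```

The same pattern (group, aggregate, derive) generalises to most of the transformation work these roles describe.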
needed. Requirements: Strong proficiency in Python for data engineering tasks. Experience with cloud platforms (e.g., AWS, Azure, or GCP), including services like S3, Lambda, BigQuery, or Databricks. Solid understanding of ETL processes, data modeling, and data warehousing. Familiarity with SQL and relational databases. Knowledge of big data technologies, such …
pipelines. Work with Python and SQL for data processing, transformation, and analysis. Leverage a wide range of GCP services including Cloud Composer (Apache Airflow), BigQuery, Cloud Storage, Dataflow, Pub/Sub, Cloud Functions, and IAM. Design and implement data models and ETL processes. Apply infrastructure-as-code practices using tools …
technology approach combining talent with software and service expertise. Tasks & Responsibilities: Design, develop, and maintain scalable data pipelines on GCP using services such as BigQuery and Cloud Functions. Collaborate with internal consulting and client teams to understand data requirements and deliver data solutions that meet business needs. Implement data …
in SQL and NoSQL databases (e.g., MySQL, PostgreSQL, MongoDB, Cassandra). • In-depth knowledge of data warehousing concepts and tools (e.g., Redshift, Snowflake, Google BigQuery). • Experience with big data platforms (e.g., Hadoop, Spark, Kafka). • Familiarity with cloud-based data platforms and services (e.g., AWS, Azure, Google Cloud …
Expertise in data architecture, data modelling, data analysis, and data migration. Strong knowledge of SQL, NoSQL, cloud-based databases (e.g., AWS Redshift, Snowflake, Google BigQuery, Azure Synapse). Experience in ETL development, data pipeline automation, and data integration strategies. Familiarity with AI-driven analytics. Strong understanding of data governance …
designing, building, and maintaining scalable data pipelines and ETL processes. Proficiency in SQL and experience working with relational and non-relational databases (e.g. Snowflake, BigQuery, PostgreSQL, MySQL, MongoDB). Hands-on experience with big data technologies such as Apache Spark, Kafka, Hive, or Hadoop. Proficient in at least one …
for data analysis, machine learning, and data visualization. In-depth knowledge of cloud platforms (AWS, Azure, Google Cloud) and related data services (e.g., S3, BigQuery, Redshift, Data Lakes). Expertise in SQL for querying large datasets and optimizing performance. Experience working with big data technologies such as Hadoop, Apache …
/ELT pipelines and database technologies like PostgreSQL and MongoDB Familiar with major cloud platforms and tools, ideally Amazon Web Services and Snowflake or BigQuery Solid understanding of data transformation, advanced analytics, and API-based data delivery Ability to work across departments with a collaborative, problem-solving mindset and …
data from diverse sources. Strong knowledge of SQL/NoSQL databases and cloud data warehouse technology such as Snowflake/Databricks/Redshift/BigQuery, including performance tuning and optimisation. Understanding of best practices for designing scalable and efficient data models, leveraging dbt for transformations. Familiarity with CircleCI, Terraform …
tools such as Tableau • Experience working with APIs • Experience working with large-scale spatial datasets (billions of rows) and performing geospatial analysis at scale using BigQuery GIS or similar tools • Experience with advanced analytical modelling techniques, including statistical analysis and predictive modelling, particularly applying these to large-scale datasets to …
stakeholders alike. The Head of BI will work within a modern, cloud-based BI ecosystem, including: Data Integration: Fivetran, HVR, Databricks, Apache Kafka, Google BigQuery, Google Analytics 4; Data Lake & Storage: Databricks Delta Lake, Amazon S3; Data Transformation: dbt Cloud; Data Warehouse: Snowflake; Analytics & Reporting: Power BI, Excel, Snowflake …
ELT workflows. Strong analytic skills related to working with unstructured datasets. Engineering best practices and standards. Experience with data warehouse software (e.g. Snowflake, Google BigQuery, Amazon Redshift). Experience with data tools: Hadoop, Spark, Kafka, etc. Code versioning (GitHub integration and automation). Experience with scripting languages such as …
databases. Hands-on experience working with visualization tools including ThoughtSpot, Power BI or Tableau. Familiarity with leading cloud-based data warehouses such as Azure, BigQuery, AWS Redshift, or Snowflake. Strong analytical and problem-solving abilities to address complex data challenges. Detail-oriented mindset with a focus on data accuracy …
dbt. You have experience building dashboards in Looker and/or Tableau. Experience with AWS Redshift and/or other cloud data warehouses (Snowflake, BigQuery). Typeform drives hundreds of millions of interactions each year, enabling conversational, human-centered experiences across the globe. We move as one team, empowering …
and data governance practices, with an emphasis on scalability and compliance in research environments. Enterprise exposure to data engineering tools and products (Spark, PySpark, BigQuery, Pub/Sub) with an understanding of product/market fit for internal stakeholders Familiarity with cloud computing environments, including but not limited to …
with Python. Experience building scalable, high-quality data models that serve complex business use cases. Knowledge of dbt and experience with cloud data warehouses (BigQuery, Snowflake, Databricks, Azure Synapse etc). Proficiency in building BI dashboards and self-service capabilities using tools like Tableau and Looker. Excellent communication skills …
for workflow orchestration. Strong experience in Python with demonstrable experience in developing and maintaining data pipelines and automating data workflows. Proficiency in SQL, particularly BigQuery SQL for querying and manipulating large datasets. Familiarity with machine learning (ML) concepts, algorithms (supervised/unsupervised learning), and ML tools. Experience with version …
this isn't set in stone! Familiarity with digital advertising platforms (Google Ads, Facebook Ads, LinkedIn Ads, TikTok, etc.) We use Google Analytics 4, BigQuery, Fivetran, Amplitude, Metabase and DBT amongst other things for our data stack, so experience here would be beneficial, however, experience in similar tools (if …
customer segmentation, forecasting, and LTV analysis • Maintain code-driven workflows and version control through GitHub What We’re Looking For: • Strong SQL skills (Google BigQuery or similar cloud platforms) • Proficiency in Tableau or other data visualisation tools • Understanding of data modelling (Kimball methodology preferred) • Experience with DBT/Airflow …
through sharing knowledge and mentoring. Minimum Requirements Min. 3 years of experience as a Data Analyst/Scientist Proficiency in a query language/framework (SQL, BigQuery, MySQL) is a MUST Has experience handling big data projects Experienced in R or Python Experience with data visualisation tools like Looker Mastered various …