…Jenkins). Working with distributed computing frameworks, such as Apache Spark. Cloud platforms (e.g., AWS, Azure, Google Cloud) and associated services (e.g., S3, Redshift, BigQuery). Familiarity with data modelling, database systems, and SQL optimisation techniques. Other things we're looking for (key criteria): knowledge of the UK broadcast …
…critical for structuring data for analysis. Proficiency in cloud platforms such as AWS and GCP, with hands-on experience in services like Databricks, Redshift, BigQuery, and Snowflake, is highly valued. Advanced Python skills for data manipulation, automation, and scripting, using libraries like Pandas and NumPy, are necessary for effective …
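Several of these listings cite Pandas and NumPy for data manipulation and scripting. As a minimal, hypothetical sketch of that kind of work (the CSV path and column names here are invented for illustration, not taken from any listing):

```python
import pandas as pd
import numpy as np

# Hypothetical input: a CSV of raw order events with order_id, amount, ts columns.
orders = pd.read_csv("orders.csv", parse_dates=["ts"])

# Typical cleanup: drop duplicate orders, coerce bad amounts to NaN, then drop them.
orders = orders.drop_duplicates(subset="order_id")
orders["amount"] = pd.to_numeric(orders["amount"], errors="coerce")
orders = orders.dropna(subset=["amount"])

# Vectorised derivation with NumPy instead of a Python loop.
orders["amount_log"] = np.log1p(orders["amount"])

# Aggregate per day for downstream analysis.
daily = orders.set_index("ts").resample("D")["amount"].agg(["count", "sum", "mean"])
print(daily.head())
```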
…needed. Requirements: Strong proficiency in Python for data engineering tasks. Experience with cloud platforms (e.g., AWS, Azure, or GCP), including services like S3, Lambda, BigQuery, or Databricks. Solid understanding of ETL processes, data modeling, and data warehousing. Familiarity with SQL and relational databases. Knowledge of big data technologies, such …
…Platform. 3+ years of experience in data engineering or data science. Experience with data quality assurance and testing. Ideally, knowledge of GCP data services (BigQuery, Dataflow, Data Fusion, Dataproc, Cloud Composer, Pub/Sub, Google Cloud Storage). Understanding of logging and monitoring using tools such as Cloud Logging, ELK …
…technology approach combining talent with software and service expertise. Tasks & Responsibilities: Design, develop, and maintain scalable data pipelines on GCP using services such as BigQuery and Cloud Functions. Collaborate with internal consulting and client teams to understand data requirements and deliver data solutions that meet business needs. Implement data …
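As a rough illustration of the BigQuery-plus-Cloud Functions pattern this listing describes, the following is a minimal sketch of a Cloud Function that loads a file landing in Cloud Storage into BigQuery. The project, dataset, and table names are placeholders, not taken from the listing:

```python
from google.cloud import bigquery

def load_to_bigquery(event, context):
    """Cloud Function (1st gen) triggered by a file arriving in a GCS bucket."""
    client = bigquery.Client()
    uri = f"gs://{event['bucket']}/{event['name']}"

    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        autodetect=True,  # let BigQuery infer the schema for this sketch
        write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
    )

    # "my_project.raw.events" is a hypothetical destination table.
    load_job = client.load_table_from_uri(uri, "my_project.raw.events", job_config=job_config)
    load_job.result()  # block until the load completes so failures surface in logs
```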
…processing frameworks (e.g., Apache Spark, Airflow, dbt). Hands-on experience with cloud-based data solutions (e.g., AWS, GCP, Azure) and data warehouses (e.g., BigQuery, Snowflake, Redshift). In-depth knowledge of SEO metrics, search algorithms, and technical SEO principles. Experience working with APIs from Google Search Console, Google …
…pipelines. Work with Python and SQL for data processing, transformation, and analysis. Leverage a wide range of GCP services, including Cloud Composer (Apache Airflow), BigQuery, Cloud Storage, Dataflow, Pub/Sub, Cloud Functions, and IAM. Design and implement data models and ETL processes. Apply infrastructure-as-code practices using tools …
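For the Cloud Composer (Apache Airflow) and BigQuery combination this listing names, a minimal scheduled ELT-style DAG might look as follows; the DAG id, dataset, table, and SQL are hypothetical:

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="daily_events_rollup",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Run the transformation inside BigQuery itself (ELT style).
    rollup = BigQueryInsertJobOperator(
        task_id="rollup_events",
        configuration={
            "query": {
                "query": """
                    CREATE OR REPLACE TABLE analytics.daily_events AS
                    SELECT DATE(ts) AS day, COUNT(*) AS events
                    FROM raw.events
                    GROUP BY day
                """,
                "useLegacySql": False,
            }
        },
    )
```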
…frameworks (Airflow, Dagster) to build scalable pipelines. Knowledge of real-time data movement, databases (Oracle, SQL Server, PostgreSQL), and cloud analytics platforms (Snowflake, Databricks, BigQuery). Familiarity with emerging data technologies such as open table formats (e.g., Apache Iceberg) and their impact on enterprise data strategies. Hands-on experience with data …
…in SQL and NoSQL databases (e.g., MySQL, PostgreSQL, MongoDB, Cassandra). In-depth knowledge of data warehousing concepts and tools (e.g., Redshift, Snowflake, Google BigQuery). Experience with big data platforms (e.g., Hadoop, Spark, Kafka). Familiarity with cloud-based data platforms and services (e.g., AWS, Azure, Google Cloud …
…in data engineering, with at least 2 years in a leadership or management role. Strong experience with data warehousing solutions (e.g., AWS Redshift, Google BigQuery, Snowflake). Expertise in data pipeline development, ETL tools, and frameworks (e.g., Apache Airflow, Talend, Fivetran). Proficiency in SQL and experience with Python …
…Expertise in data architecture, data modelling, data analysis, and data migration. Strong knowledge of SQL, NoSQL, and cloud-based databases (e.g., AWS Redshift, Snowflake, Google BigQuery, Azure Synapse). Experience in ETL development, data pipeline automation, and data integration strategies. Familiarity with AI-driven analytics. Strong understanding of data governance …
…a related role. Strong programming skills in Python and SQL. Excellent problem-solving and analytical skills. Experience with GA4 and knowledge of GA. Experience with BigQuery, AWS, or other ETL tools and techniques. Experience with CDP and CEP platforms such as Segment, RudderStack, Braze, etc., to support the marketing team …
…designing, building, and maintaining scalable data pipelines and ETL processes. Proficiency in SQL and experience working with relational and non-relational databases (e.g., Snowflake, BigQuery, PostgreSQL, MySQL, MongoDB). Hands-on experience with big data technologies such as Apache Spark, Kafka, Hive, or Hadoop. Proficient in at least one …
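Several listings here ask for hands-on Apache Spark experience alongside warehouse loads. A small PySpark batch sketch, with made-up paths and column names:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_batch").getOrCreate()

# Read raw JSON events, deduplicate, and drop obviously bad records.
events = spark.read.json("s3a://example-bucket/raw/orders/")
clean = events.dropDuplicates(["order_id"]).filter(F.col("amount") > 0)

# Aggregate per customer for downstream warehouse loads.
per_customer = clean.groupBy("customer_id").agg(
    F.count("*").alias("orders"),
    F.sum("amount").alias("total_spend"),
)

# Write partitioned Parquet that a warehouse load job can pick up.
per_customer.write.mode("overwrite").parquet("s3a://example-bucket/curated/per_customer/")
```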
…for data analysis, machine learning, and data visualization. In-depth knowledge of cloud platforms (AWS, Azure, Google Cloud) and related data services (e.g., S3, BigQuery, Redshift, data lakes). Expertise in SQL for querying large datasets and optimizing performance. Experience working with big data technologies such as Hadoop, Apache …
…/ELT pipelines and database technologies like PostgreSQL and MongoDB. Familiar with major cloud platforms and tools, ideally Amazon Web Services and Snowflake or BigQuery. Solid understanding of data transformation, advanced analytics, and API-based data delivery. Ability to work across departments with a collaborative, problem-solving mindset and …
…data from diverse sources. Strong knowledge of SQL/NoSQL databases and cloud data warehouse technology such as Snowflake/Databricks/Redshift/BigQuery, including performance tuning and optimisation. Understanding of best practices for designing scalable and efficient data models, leveraging dbt for transformations. Familiarity with CircleCI, Terraform …
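Where dbt is used for transformations on platforms such as Databricks or BigQuery, dbt also supports Python models in addition to SQL ones. A minimal sketch, assuming a PySpark-backed adapter; the model and column names are hypothetical:

```python
# models/customer_spend.py -- a dbt Python model (hypothetical file name)
def model(dbt, session):
    dbt.config(materialized="table")

    # Upstream staging model, returned as a DataFrame by the adapter.
    orders = dbt.ref("stg_orders")

    # PySpark-style aggregation (valid on PySpark-backed adapters).
    from pyspark.sql import functions as F

    return orders.groupBy("customer_id").agg(F.sum("amount").alias("total_spend"))
```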
…tools such as Tableau. Experience working with APIs. Experience working with large-scale spatial datasets (billions of rows) and performing geospatial analysis at scale using BigQuery GIS or similar tools. Experience with advanced analytical modelling techniques, including statistical analysis and predictive modelling, particularly applying these to large-scale datasets to …
…also be applicable. Experience designing and implementing stream processing applications (kStreams, kSQL, Flink, Spark Streaming). Experience with data warehousing tools like Snowflake/BigQuery/Databricks, and building pipelines on these. Experience working with modern cloud-based stacks such as AWS, Azure, or GCP. Excellent programming skills with …
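For the stream-processing experience named here, a compact Spark Structured Streaming sketch reading from Kafka; the broker address and topic are placeholders:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("clicks_stream").getOrCreate()

# Subscribe to a Kafka topic; each record arrives as a key/value byte pair
# plus metadata columns, including a Kafka-assigned "timestamp".
clicks = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "clicks")
    .load()
)

# Count events per 1-minute window, tolerating up to 5 minutes of lateness.
counts = (
    clicks.withWatermark("timestamp", "5 minutes")
    .groupBy(F.window("timestamp", "1 minute"))
    .count()
)

# For the sketch, print windows to the console as they update.
query = counts.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```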