Cloud Support Engineers in the Data in Transit domain support customers who are running ETL workloads or analyzing large amounts of data using AWS services. As part of this team, you will work with a range of services such as Glue (ETL service), Athena (interactive query service), Managed Workflows for Apache Airflow, etc. Understanding of ETL (Extract, Transform, Load). Creation of ETL pipelines to extract and ingest data into a data lake/warehouse with simple-to-medium complexity data transformations, and troubleshooting of ETL job issues. Understanding of Linux and networking concepts. Excellent oral and written communication skills with … Bachelor's degree in the same with 1+ years of experience OR equivalent experience in a technical position. Key Job Responsibilities: Intermediate expertise in ETL tools such as Talend, Informatica or similar. Knowledge of data management fundamentals and data storage principles. Advanced SQL and query performance tuning skills. Experience integrating …
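Roles like the above centre on the Extract/Transform/Load pattern the posting names. A minimal, self-contained sketch of that pattern in plain Python (all table names, columns and data are invented for illustration; a real pipeline would pull from sources such as S3 and load into a warehouse):

```python
import sqlite3
import csv
import io

# Hypothetical source extract standing in for data pulled from e.g. S3 or a source DB.
SOURCE_CSV = """order_id,customer,amount
1,alice,120.50
2,bob,75.00
3,alice,not_a_number
"""

def extract(raw: str) -> list[dict]:
    """Extract: parse raw CSV rows into dictionaries."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list[dict]) -> list[tuple]:
    """Transform: cast types and drop rows that fail validation."""
    clean = []
    for row in rows:
        try:
            clean.append((int(row["order_id"]), row["customer"], float(row["amount"])))
        except ValueError:
            continue  # a simple transformation rule: discard malformed amounts
    return clean

def load(rows: list[tuple], conn: sqlite3.Connection) -> int:
    """Load: write validated rows into the target table and report the row count."""
    conn.execute("CREATE TABLE IF NOT EXISTS orders (order_id INTEGER, customer TEXT, amount REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    return conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]

conn = sqlite3.connect(":memory:")
loaded = load(transform(extract(SOURCE_CSV)), conn)
print(loaded)  # the malformed third row is dropped, so 2 rows load
```

The same three-stage shape is what services like Glue script for you at scale; the transform step is where the "simple to medium complexity" rules live.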
… tables. Good understanding of the Microsoft BI toolset, including O365 tools and the Power Platform (Power Apps/Power Automate/Power BI). Good understanding of the Azure cloud ETL toolset, including Azure SQL Server, Azure Data Factory and Data Lake. Familiar with cloud technologies including Microsoft Azure cloud infrastructure, data store connections and cloud function concepts. Experience …
… Modelling, Database & Data Platform Design. Proficiency in creating logical & physical data models. Knowledge of relational (SQL) and non-relational (NoSQL) database systems. Understanding of ETL/ELT processes. Solid understanding of Data Integration & Architecture. Experience with data integration, data warehousing concepts and data flow design. Experience working with cloud computing on …
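"Logical & physical data models" in postings like the one above means translating an entity-relationship description into concrete schema objects. A minimal sketch using SQLite (the entities, columns and one-to-many relationship here are invented for illustration):

```python
import sqlite3

# Logical model: a Customer places many Orders (one-to-many).
# Physical model: two tables, with a foreign key realising the relationship.
DDL = """
CREATE TABLE customer (
    customer_id INTEGER PRIMARY KEY,
    name        TEXT NOT NULL
);
CREATE TABLE orders (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
    amount      REAL NOT NULL
);
"""

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite does not enforce FKs by default
conn.executescript(DDL)
conn.execute("INSERT INTO customer VALUES (1, 'alice')")
conn.execute("INSERT INTO orders VALUES (10, 1, 99.0)")

# The physical constraint enforces the logical rule: no order without a customer.
try:
    conn.execute("INSERT INTO orders VALUES (11, 999, 5.0)")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
print(rejected)
```

The design choice worth noting: the relationship that a logical model draws as a line becomes, physically, a column plus a constraint.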
… be doing: Provide technical leadership and expertise for the development and implementation of data engineering solutions. Design, develop, and maintain scalable data pipelines and ETL processes to support data analytics and business intelligence initiatives. Collaborate with cross-functional teams to understand data requirements and ensure effective data integration and governance.
… expert in cloud data engineering, providing technical guidance and mentorship to the team. Drive the design, development and implementation of complex data pipelines and ETL/ELT processes using cloud-native technologies (e.g. AWS Glue, AWS Lambda, AWS S3, AWS Redshift, AWS EMR). Develop and maintain data quality checks …
… Stream Analytics, and Event Hub. Experience working with the Microsoft Azure cloud-based ecosystem. Experience in extracting data from heterogeneous data sources using ETL tools. Experience in creating and managing SSAS Tabular models, and creating Dimension and Fact tables. Finance or Insurance domain. Reporting tools: Power BI, Cognos, MicroStrategy. …
… considered for a strong candidate): ADLS, Databricks, Stream Analytics, SQL DW, Synapse, Azure Functions, Serverless Architecture, ARM Templates, DevOps. Hands-on experience with ETL/ELT processes and data warehousing. Solid understanding of data security and compliance standards. Experience with DevOps practices and tools (e.g., CI/CD pipelines …
… the team on a long-term programme of work. Key Responsibilities: * ETL Pipeline Development: Develop, optimize, and maintain ETL pipelines to efficiently extract, transform, and load data from various sources, ensuring high data quality. * Monitor and troubleshoot production data pipelines, ensuring their performance and reliability. * Mentor junior engineers and lead … scalable data solutions. * Continuous Improvement: Proactively look for ways to improve data systems, processes, and tools, ensuring efficiency and scalability. Key Skills/Experience: * ETL/ELT & Data Pipelines: Solid understanding of ETL/ELT processes, along with hands-on experience building and maintaining data pipelines using DBT, Snowflake, Python …
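"Ensuring high data quality" in a pipeline like the one described above usually means a validation gate that runs before a batch is loaded. A minimal sketch (the rules, column names and thresholds are invented; real pipelines would use a framework or DBT tests):

```python
# Minimal pre-load data-quality gate: check a batch before it reaches the warehouse.

def quality_report(rows, required=("id", "email")):
    """Return a list of human-readable issues; an empty list means the batch passes."""
    issues = []
    if not rows:
        issues.append("empty batch")
    for i, row in enumerate(rows):
        for col in required:
            if row.get(col) in (None, ""):
                issues.append(f"row {i}: missing {col}")
    ids = [r.get("id") for r in rows]
    if len(ids) != len(set(ids)):
        issues.append("duplicate ids")
    return issues

batch = [
    {"id": 1, "email": "a@example.com"},
    {"id": 2, "email": ""},               # fails the not-null rule
    {"id": 2, "email": "c@example.com"},  # duplicate id
]
report = quality_report(batch)
print(report)
```

A monitoring job would alert on a non-empty report rather than silently loading the batch, which is the "troubleshoot production pipelines" half of the role.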
… and various software including SQL Server, Hadoop, and NoSQL databases. Proficiency in Python (or similar), SQL and Spark. Proven ability to develop data pipelines (ETL/ELT). Strong inclination to learn and adapt to new technologies and languages. Expertise in designing and building Big Data databases, analytics, and BI …
… using Azure-based data warehouse technologies such as Azure Data Factory, Analysis Services, SQL Server and Azure Synapse. Data Pipeline Creation: Build and optimize ETL/ELT data pipelines using Azure Data Factory, Databricks, or similar services to ensure data is properly ingested, transformed, and loaded into the data warehouse.
Ripponden, Yorkshire and the Humber, United Kingdom
JLA Group
… Factory, Azure Data Lake. Familiarity with continuous integration, continuous delivery, agile methodologies, and Azure DevOps. Familiarity with optimizing strategies, pipeline architectures, data sets, and ETL/ELT processes. You will likely be passionate about technology, including the Hymans chosen technology stack (Microsoft development stack, Azure Cloud computing, Data Science technologies …
Bristol, Avon, South West, United Kingdom Hybrid / WFH Options
Hargreaves Lansdown Asset Management Limited
… tools like ERwin, PowerDesigner. Database Management: Experience with SQL and NoSQL databases, understanding of the principles of data warehousing and operational data, and experience with ETL/ELT processes. Cloud Platforms: Knowledge of cloud services, with a preference for AWS. Data Integration: Expertise in ETL (Extract, Transform, Load) …
Employment Type: Permanent, Part Time, Work From Home
… transparency and standardization across teams. Data Migration & Transformation: - Lead data migration efforts, particularly during system upgrades or transitions to new platforms. - Define and implement ETL (Extract, Transform, Load) processes for transforming data into usable formats for analytics and reporting. Documentation and Reporting: - Document data architecture designs, processes, and standards for … big data platforms (e.g., Hadoop, Spark, Kafka). • Familiarity with cloud-based data platforms and services (e.g., AWS, Azure, Google Cloud). • Expertise in ETL tools and processes (e.g., Apache NiFi, Talend, Informatica). • Proficiency in data integration tools and technologies. • Familiarity with data visualization and reporting tools (e.g., Tableau …
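Migration loads like those described above are typically written to be idempotent, so a re-run after a failure does not duplicate rows. A minimal sketch using SQLite's upsert syntax (table and column names are invented; warehouse engines offer equivalent MERGE statements):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (id INTEGER PRIMARY KEY, name TEXT, amount REAL)")

def load_batch(conn, batch):
    """Upsert so that re-running the same migration batch leaves the same end state."""
    conn.executemany(
        """INSERT INTO target (id, name, amount) VALUES (?, ?, ?)
           ON CONFLICT(id) DO UPDATE SET name = excluded.name, amount = excluded.amount""",
        batch,
    )

batch = [(1, "alice", 10.0), (2, "bob", 20.0)]
load_batch(conn, batch)
load_batch(conn, batch)  # re-run: no duplicates, latest values win
count = conn.execute("SELECT COUNT(*) FROM target").fetchone()[0]
print(count)  # 2
```

Keying the upsert on a stable business identifier is what makes a system-upgrade migration safely restartable.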
Newcastle Upon Tyne, England, United Kingdom Hybrid / WFH Options
Noir
Data Engineer - FinTech Company - Newcastle (Tech Stack: Data Engineer, Databricks, Python, Azure, Power BI, AWS QuickSight, AWS, TSQL, ETL, Agile Methodologies) I’m working with a leading Software House in the FinTech industry, based in Newcastle, who are looking to hire a talented Data Engineer. This is a fantastic …
… platforms (AWS, Azure, GCP); You have a proven track record working with Databricks (PySpark, SQL, Delta Lake, Unity Catalog); You have extensive experience in ETL/ELT development and data pipeline orchestration (e.g., Databricks Workflows, DLT, Airflow, ADF, Glue, and Step Functions); You're proficient in SQL and Python, using …
… scalability, robust data governance and optimal performance. Developing innovative solutions: Create PoCs and MVPs for Data & AI solutions focusing on pipeline automation, ELT/ETL processes, and deployment through enterprise data platforms. Crafting compelling user experiences: Blend user-centred design with storytelling to deliver impactful Gen AI/BI, WebApp …
… accessibility across the organisation. Key Skills & Experience: Strong hands-on experience with Azure Data Platform technologies (Synapse, Data Lakes, ADF). Expertise in data modelling, ETL/ELT pipeline development, and data integration. Proficient in SQL and Python (ideally PySpark). Knowledge of tools such as Power BI, Microsoft Fabric, and DevOps …
… accessible, and impactful. What You’ll Be Doing: Manage and evolve the Azure data warehouse environment, ensuring scalability, performance, and resilience. Build and maintain ETL workflows, orchestrating seamless data movement and transformation from multiple sources. Develop and optimise Power BI reports and dashboards used across the business. Engage with internal …
… with machine learning and predictive analytics. Knowledge of cloud-based data platforms such as AWS, Azure, or Google Cloud. Familiarity with data warehousing and ETL processes. Experience in a business or financial analysis environment. If you are passionate about data-driven decision-making and want to make an impact, we …
… sets to meet business and technical requirements. Process Improvement: Identify and implement process enhancements, automate manual tasks, and optimize data delivery. Data Integration: Build ETL infrastructure to ensure smooth data extraction, transformation, and loading. Collaboration: Work alongside stakeholders, including data scientists and analysts, to meet data infrastructure needs. Data Quality …
… Microsoft Fabric, including Lakehouse (Delta format), OneLake, Pipelines & Dataflows Gen2, Notebooks (PySpark), Power BI & Semantic Models. Possess a solid understanding of data integration patterns, ETL/ELT, and modern data architectures. Be familiar with CI/CD practices in a data engineering context. Have excellent SQL and Spark (PySpark) skills.
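The "excellent SQL" these postings keep asking for usually means analytic features such as window functions. A minimal example using SQLite as a stand-in (the table and data are invented; the identical window syntax works in Spark SQL, T-SQL and the warehouse engines named above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (region TEXT, day INTEGER, amount REAL);
INSERT INTO sales VALUES
    ('north', 1, 10.0), ('north', 2, 20.0),
    ('south', 1, 5.0),  ('south', 2, 15.0);
""")

# Running total per region: a window function partitions without collapsing rows,
# unlike GROUP BY, which would return one row per region.
rows = conn.execute("""
    SELECT region, day,
           SUM(amount) OVER (PARTITION BY region ORDER BY day) AS running_total
    FROM sales
    ORDER BY region, day
""").fetchall()
print(rows)
```

Expected shape: each region keeps its daily rows, with the total accumulating down the partition.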