… database platforms
• Cloud Architecture: Experience with Azure data services (Azure Data Factory, Azure Synapse, Azure Data Lake)
• Data Governance: Understanding of data quality, data lineage, and metadata management principles
• ETL/ELT Processes: Experience designing and implementing data integration workflows
• Business Intelligence: Knowledge of reporting and analytics platforms (Power BI, SSRS, or similar)
• Data Warehousing: Experience with dimensional modelling and …
Glasgow, Lanarkshire, Scotland, United Kingdom (Hybrid/Remote Options)
VANLOQ LIMITED
… days per week on-site), contributing to key data initiatives that support risk and analytics functions across the business. Key Responsibilities:
• Design, build, and optimise scalable data pipelines and ETL solutions
• Work with complex datasets to enable data-driven insights and reporting
• Collaborate closely with data scientists, analysts, and business stakeholders to deliver robust data solutions
• Support the migration and …
… data and validate by profiling in a data environment
• Understand data structures and data model (dimensional & relational) concepts such as star schemas and fact & dimension tables, in order to design and develop ETL patterns/mechanisms to ingest, analyse, validate, normalise and cleanse data (a sketch of this pattern follows this listing)
• Understand and produce ‘Source to Target mapping’ (STTM) documents, containing data structures and business & data transformation logic
• Liaise with data …
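For illustration, a minimal PySpark sketch of the fact & dimension load pattern described above. All paths, table names, and columns (stg_orders, dim_customer, fact_orders) are hypothetical placeholders, not taken from the listing.

```python
from pyspark.sql import SparkSession, functions as F

# All names here (paths, tables, columns) are illustrative placeholders.
spark = SparkSession.builder.appName("star-schema-load").getOrCreate()

# Ingest: read a raw staging extract.
stg = spark.read.parquet("/lake/staging/stg_orders")

# Validate and cleanse: enforce basic quality rules, normalise text fields.
clean = (
    stg.dropna(subset=["order_id", "customer_id"])
       .filter(F.col("amount") >= 0)
       .withColumn("country", F.upper(F.trim(F.col("country"))))
)

# Conform to the dimension: resolve the natural key to a surrogate key.
dim_customer = spark.read.table("dim_customer")
fact = (
    clean.join(dim_customer, on="customer_id", how="left")
         .select("customer_sk", "order_id", "order_date", "amount")
)

# Load the fact table, partitioned for downstream reporting queries.
fact.write.mode("append").partitionBy("order_date").saveAsTable("fact_orders")
```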
… AWS services. The ideal candidate will have a solid background in data engineering, Python development, and cloud-native architecture. YOUR PROFILE
• Design, develop, and maintain robust data pipelines and ETL workflows using AWS services
• Implement scalable data processing solutions using PySpark and AWS Glue (see the sketch after this listing)
• Build and manage infrastructure as code using CloudFormation
• Develop and deploy serverless applications using AWS Lambda …
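A minimal sketch of a Glue PySpark job of the kind this role describes; the catalog database (analytics_db), table (raw_events), and S3 bucket are assumptions for illustration only.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Job arguments are supplied by Glue at run time; JOB_NAME is standard.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])

sc = SparkContext()
glue_context = GlueContext(sc)
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog (database/table names are hypothetical).
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="analytics_db", table_name="raw_events"
)

# Transform with plain Spark, then write back out as partitioned Parquet.
df = dyf.toDF().dropDuplicates(["event_id"]).filter("event_type IS NOT NULL")
df.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/events/"  # hypothetical bucket
)

job.commit()
```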
… process. Strong specialist experience across several of the following areas:
• Cloud services (SaaS, PaaS) (AWS preferred)
• Enterprise integration patterns and tooling (MuleSoft preferred)
• Enterprise data, analytics and information management; ETL knowledge
• High-volume transactional online systems
• Security and identity management
• Service and micro-service architecture
• Knowledge of continuous integration tools and techniques
• Design and/or development of applications using …
… such as Pandas, NumPy, PySpark, etc. You will also have several years' hands-on experience with cloud services, especially Databricks, for building and managing scalable data pipelines. ETL process expertise is essential, as is proficiency with Snowflake or similar cloud-based data warehousing solutions. Experience in data development and solutions in highly complex data environments … with large data volumes is also required. You will be responsible for collaborating with cross-functional teams to understand data requirements, and for designing efficient, scalable, and reliable ETL processes using Python and Databricks (a sketch follows this listing). You will also develop and deploy ETL jobs that extract data from various sources, transforming them to meet business needs. Please apply ASAP if this is of …
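A minimal Databricks-style sketch of such a job: extract from a source table, transform with PySpark, and load into Snowflake via the Spark-Snowflake connector (available as the "snowflake" format on Databricks). Every connection option, table name, and column below is a placeholder, not a real account or schema.

```python
from pyspark.sql import SparkSession, functions as F

# On Databricks a SparkSession is provided automatically.
spark = SparkSession.builder.getOrCreate()

# Extract from a source table and apply a business transformation.
orders = spark.read.table("bronze.orders")
daily = (
    orders.filter(F.col("status") == "COMPLETE")
          .groupBy("order_date")
          .agg(F.sum("amount").alias("daily_revenue"))
)

# Load into Snowflake via the Spark-Snowflake connector; every option
# value here is a placeholder, not a real account or credential.
sf_options = {
    "sfURL": "example_account.snowflakecomputing.com",
    "sfUser": "etl_user",
    "sfPassword": "***",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "ETL_WH",
}
(daily.write.format("snowflake")
      .options(**sf_options)
      .option("dbtable", "DAILY_REVENUE")
      .mode("overwrite")
      .save())
```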
Knowledge and Skills:
• A minimum of 7 years of experience developing Oracle SQL and PL/SQL
• A minimum of 5 years of experience developing ETL or ELT solutions (see the sketch below)
• A minimum of 5 years of work experience in Oracle Forms
• A minimum of 2 years of work experience in Pro*C
• Candidates …
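For context, a minimal Python sketch of the kind of set-based Oracle ETL step these requirements point at, using the python-oracledb driver. Connection details and table names (stg_customers, customers) are hypothetical.

```python
import oracledb  # python-oracledb, the successor to cx_Oracle

# Connection details are placeholders, not real credentials.
conn = oracledb.connect(user="etl_user", password="***", dsn="dbhost/ORCLPDB1")

with conn.cursor() as cur:
    # A set-based ETL step of the kind typically wrapped in PL/SQL:
    # stage -> validate -> merge into the target table.
    cur.execute("""
        MERGE INTO customers t
        USING stg_customers s
           ON (t.customer_id = s.customer_id)
         WHEN MATCHED THEN UPDATE SET t.name = s.name, t.email = s.email
         WHEN NOT MATCHED THEN INSERT (customer_id, name, email)
              VALUES (s.customer_id, s.name, s.email)
    """)

conn.commit()
conn.close()
```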
Glasgow, Lanarkshire, Scotland, United Kingdom (Hybrid/Remote Options)
KBC Technologies UK LTD
About the Role: We are looking for a Data Engineer for our Glasgow location. Mode of work: hybrid. Since Databricks is (primarily) a managed Spark engine, strong Spark experience is a must-have. We need Databricks (Big Data/Spark) & Snowflake specialists, as well as general Data Engineer …