how to automate and manage data systems so they run smoothly and can grow easily. You have experience with tools like AWS (S3, Glue, Redshift, SageMaker) or other cloud platforms. You're also familiar with Docker, Terraform, GitHub Actions, and Vault for managing secrets. You can code in SQL, Python …
growing data practices. Expertise in data architecture, data modelling, data analysis, and data migration. Strong knowledge of SQL, NoSQL, and cloud-based databases (e.g., AWS Redshift, Snowflake, Google BigQuery, Azure Synapse). Experience in ETL development, data pipeline automation, and data integration strategies. Familiarity with AI-driven analytics. Strong understanding …
MSSQL preferred). Experience supporting NoSQL and caching systems (Redis, MongoDB, DynamoDB, ElastiCache, etc.). Understanding of event-driven architecture and related systems (Kafka, Kinesis, SNS, Redshift). Additional Information: The salary range for this role is $112,000 - $178,000 per year. Where you fall within the compensation range is based …
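To make the event-driven stack named in the posting above concrete, here is a minimal Python sketch of producing an event to Kinesis and fanning it out via SNS with boto3. The stream name, topic ARN, and event fields are illustrative placeholders, not details taken from the listing.

```python
# Minimal sketch: publish an order event to Kinesis and notify via SNS.
# Stream name, topic ARN, and payload shape are hypothetical examples.
import json
import boto3

kinesis = boto3.client("kinesis")
sns = boto3.client("sns")


def publish_order_event(order: dict) -> None:
    """Write an order event to a Kinesis stream for downstream consumers."""
    kinesis.put_record(
        StreamName="orders-stream",            # hypothetical stream name
        Data=json.dumps(order).encode(),
        PartitionKey=order["order_id"],        # keeps one order on one shard
    )


def notify_subscribers(order: dict) -> None:
    """Fan the same event out to subscribers through an SNS topic."""
    sns.publish(
        TopicArn="arn:aws:sns:us-east-1:123456789012:orders",  # placeholder ARN
        Message=json.dumps(order),
    )


if __name__ == "__main__":
    event = {"order_id": "o-123", "status": "created"}
    publish_order_event(event)
    notify_subscribers(event)
```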
Work model: Hybrid. Status: Full-Time, Permanent. Responsibilities: Design, develop, and maintain scalable ETL pipelines using AWS services such as Glue, Lambda, S3, and Redshift. Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and ensure data availability and integrity. Implement and optimize data storage solutions … plus 5+ years of experience in data engineering with a focus on AWS services. Strong experience with AWS services such as S3, Glue, Lambda, Redshift, DynamoDB, and Athena. Proficiency in programming languages such as Python, Java, or Scala. Experience with SQL and NoSQL databases. Familiarity with data warehousing concepts …
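For context on the Glue/S3/Redshift pipeline work described in this posting, the following is a minimal sketch of an AWS Glue PySpark job that reads a catalogued S3 table, remaps columns, and loads the result into Redshift. The catalog database, table, connection, and bucket names are assumptions for illustration only.

```python
# Minimal sketch of a Glue ETL job: S3 (via the Data Catalog) -> Redshift.
# All database, table, connection, and bucket names are hypothetical.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw records that a crawler has catalogued from S3
source = glue_context.create_dynamic_frame.from_catalog(
    database="raw_events",        # hypothetical catalog database
    table_name="clickstream",     # hypothetical table
)

# Rename and retype columns before loading into the warehouse
mapped = ApplyMapping.apply(
    frame=source,
    mappings=[
        ("event_id", "string", "event_id", "string"),
        ("ts", "string", "event_time", "timestamp"),
        ("payload", "string", "payload", "string"),
    ],
)

# Load the transformed frame into Redshift through a Glue connection
glue_context.write_dynamic_frame.from_jdbc_conf(
    frame=mapped,
    catalog_connection="redshift-conn",           # hypothetical connection
    connection_options={"dbtable": "analytics.clickstream", "database": "dw"},
    redshift_tmp_dir="s3://example-bucket/tmp/",  # staging area for COPY
)

job.commit()
```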