team leadership skills in a data engineering environment. Management experience: career development, delivery management & skills assessment. Skilled in designing and building Databricks/Hadoop (Cloudera/Hortonworks)/Spark data products. Skilled in owning, designing and implementing data pipelines ingesting enterprise levels of data volume. In-depth knowledge of data …
Greater London, England, United Kingdom Hybrid / WFH Options
InterEx Group
experience in Big Data implementation projects. Experience in the definition of Big Data architecture with different tools and environments: Cloud (AWS, Azure and GCP), Cloudera, NoSQL databases (Cassandra, MongoDB), ELK, Kafka, Snowflake, etc. Past experience in Data Engineering and data quality tools (Informatica, Talend, etc.). Previous involvement in …
on experience using Scala, Python, or Java. Experience in most data and cloud technologies such as Hadoop, Hive, Spark, Pig, Sqoop, Flume, PySpark, Databricks, Cloudera, Airflow, Oozie, S3, Glue, Athena, Terraform, etc. Experience with schema design using semi-structured and structured data structures. Experience with messaging technologies (e.g. Kafka, Spark …
Bristol, Avon, South West, United Kingdom Hybrid / WFH Options
Hargreaves Lansdown Asset Management Limited
execution, and monitoring. Experience: Hands-on expertise with data warehousing and management, including data processing, integration, and advanced analytics (e.g., Snowflake, Amazon Redshift, Databricks, Cloudera, Oracle). Experience managing a portfolio of technical programmes within an established Enterprise Data office/management function. Technical understanding of modern data architectures, tools …
Employment Type: Permanent, Part Time, Work From Home
scikit-learn). Knowledge of software engineering practices (coding practices to DS, unit testing, version control, code review). Experience with Hadoop (especially the Cloudera and Hortonworks distributions), other NoSQL (especially Neo4j and Elastic), and streaming technologies (especially Spark Streaming). Deep understanding of data manipulation/wrangling techniques. Experience …
a must. Experience with relational, graph and/or unstructured data technologies such as SQL Server, Azure SQL, Azure Data Lake, HDInsight, Hadoop, Cloudera, MongoDB, MySQL, Neo4j, Cassandra, Couchbase. Knowledge of programming and scripting such as JavaScript, PowerShell, Bash, SQL, .NET, Java, Python, PHP, Ruby, Perl, C++, etc. Experience …
a technical discipline (e.g., cloud, artificial intelligence, machine learning, mobile, etc.). Preferred qualifications, capabilities, and skills: Knowledge of AWS. Knowledge of Databricks. Understanding of Cloudera Hadoop, Spark, HDFS, HBase, Hive. Understanding of Maven or Gradle. About the Team: J.P. Morgan is a global leader in financial services, providing strategic advice …
cloud platforms, preferably IBM Cloud. Contributions to open-source projects or personal projects demonstrating Big Data and Java development skills. Relevant certifications such as Cloudera Certified Associate (CCA) or Hortonworks Certified Developer (HCD) are considered a plus. By joining IBM's Public Sector team as a Big Data Java Developer …
and demonstrable knowledge of applying Data Engineering best practices (coding practices to DS, unit testing, version control, code review). Big Data ecosystems: Cloudera/Hortonworks, AWS EMR, GCP Dataproc or GCP Cloud Data Fusion. Streaming technologies and processing engines: Kinesis, Kafka, Pub/Sub and Spark Streaming. Experience …
deploy mission-critical and highly differentiated Data & AI solutions for companies looking to migrate their data stack from legacy data platforms such as Teradata, Cloudera, DataStage, and Informatica to modern data platforms like Databricks. Over the last few years, we have become a leading Databricks partner, building a strong and …
for open-source contributors to Apache projects who have an in-depth understanding of the code behind the Apache ecosystem; candidates should have experience in Cloudera or a similar distribution and possess in-depth knowledge of the big data tech stack. Requirements: Experience in platform engineering along with application engineering (hands-on). Experience …
DV Clearance. WE NEED THE DATA ENGINEER TO HAVE…. Current DV clearance MOD or Enhanced Experience with big data tools such as Hadoop, Cloudera or Elasticsearch Experience With Palantir Foundry Experience working in an Agile Scrum environment with tools such as Confluence/Jira Experience in design, development, test …
best practices to streamline data workflows and reduce manual interventions. Must have: AWS, ETL, EMR, Glue, Spark/Scala, Java, Python. Good to have: Cloudera (Spark, Hive, Impala, HDFS), Informatica PowerCenter, Informatica DQ/DG, Snowflake, Erwin. Qualifications: Bachelor's or Master's degree in Computer Science, Data Engineering, or … years of experience in data engineering, including working with AWS services. Proficiency in AWS services like S3, Glue, Redshift, Lambda, and EMR. Knowledge of Cloudera-based Hadoop is a plus. Strong ETL development skills and experience with data integration tools. Knowledge of data modeling, data warehousing, and data transformation techniques. …
and container orchestration (e.g., Kubernetes). Excellent problem-solving skills and the ability to troubleshoot complex issues. Strong communication and collaboration skills. Experience in Cloudera, Hadoop, Cloudera Data Science Workbench (CDSW), Cloudera Machine Learning (CML) would be highly desirable. Work Environment: Full-time on-site presence required. If you don…
tools. Familiarity with other big data tools and technologies. Previous experience in a similar role within a dynamic and fast-paced environment. Experience in Cloudera, Hadoop, Cloudera Data Science Workbench (CDSW), Cloudera Machine Learning (CML) would be highly desirable. Work Environment: Full-time on-site presence required. If you don…
include: Expertise in Terraform, Kubernetes, Shell/PowerShell scripting, CI/CD pipelines (GitLab, Jenkins), Azure DevOps, IaC. Experience with big data platforms like Cloudera, Spark, and Azure Data Factory/Databricks. Key Responsibilities: Implement and maintain Infrastructure as Code (IaC) using Terraform, Shell/PowerShell scripting, and CI/… the Technical and Solution Architect teams to design the overall solution architecture for end-to-end data flows. Utilize big data technologies such as Cloudera, Hue, Hive, HDFS, and Spark for data processing and storage. Ensure smooth data management for marketing consent and master data management (MDM) systems. Key Skills … integration and delivery for streamlined development workflows. Azure Data Factory/Databricks: Experience with these services is a plus for handling complex data processes. Cloudera (Hue, Hive, HDFS, Spark): Experience with these big data tools is highly desirable for data processing. Azure DevOps, Vault: Core skills for working in Azure …
Cloudera is looking for a customer-success-oriented Group Vice President of Sales to guide the continued growth of the go-to-market strategy for Cloudera’s Northern EMEA region. Reporting to the VP of Sales EMEA, the GVP will be responsible for retaining and expanding existing customer relationships at … Mental & Physical Wellness programs. Phone/Internet Reimbursement program. Access to Continued Career Development. Comprehensive Benefits. Competitive Packages. Paid Volunteer Time. Employee Resource Groups. Cloudera is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, pregnancy, sexual … orientation, gender identity, national origin, age, protected veteran status, or disability status. Management Level: 4 (Senior Director). About the company: Cloudera, Inc. started as a hybrid open-source Apache Hadoop distribution, CDH, that targeted enterprise-class deployments of that technology. …