in Computer Science, Mathematics, Statistics or a similar engineering discipline. Working knowledge of Linux. Knowledge of network protocols and operation. Data analysis and visualization in Hadoop, Splunk. Data manipulation using tools like MapReduce or SQL. Experience with network security products and solutions. HTML, CSS and JavaScript experience. Python. If you are more »
solving skills and creativity. Google Cloud Professional Cloud Architect or Professional Cloud Developer certification. Very desirable to have hands-on experience with ETL tools, Hadoop-based technologies (e.g., Spark), and batch/streaming data pipelines (e.g., Beam, Flink, etc.). Proven expertise in designing and constructing data lakes and data more »
Belfast, Northern Ireland, United Kingdom Hybrid / WFH Options
Version 1
Hub, IoT Hub, Apache Kafka, NiFi for use with streaming data/event-based data. Experience with other open-source big data products, e.g. Hadoop (incl. Hive, Pig, Impala). Experience with open-source non-relational/NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j). Experience working with structured and unstructured more »
industry experience. Experience in distributed system design. Experience with Pure/Alloy. Working knowledge of open-source tools such as AWS Lambda, Prometheus. Spark, Hadoop or Snowflake knowledge would be a plus. Additional Information Location: This role can be delivered in a hybrid nature from one of these offices: Dublin more »
dealing with streaming and batch compute frameworks like Spring Kafka, Kafka Streams, Flink, Spark Streaming, Spark. Experience with large-scale computing platforms, such as Hadoop, Hive, Spark, NoSQL stores. Experience with developing large-scale data pipelines is nice to have. Exposure to UI development is nice to have. more »
Experience with RDBMS like MySQL or PostgreSQL or others, and experience with NoSQL like Redis or MongoDB or others; * Experience with Big Data technologies like the Hadoop ecosystem is a plus. * Excellent writing, proofreading and editing skills. Able to create documentation that can express cloud architectures using text and more »
Manchester, England, United Kingdom Hybrid / WFH Options
Made Tech
how one could deploy infrastructure into different environments. Knowledge of handling and transforming various data types (JSON, CSV, etc.) with Apache Spark, Databricks or Hadoop. Good understanding of possible architectures involved in modern data system design (Data Warehouse, Data Lakes, Data Meshes). Ability to create data pipelines on a more »
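Aside: since several of these listings ask for transforming JSON/CSV data with Apache Spark, a minimal PySpark sketch of that kind of task might look like the following. The file paths and the name column are hypothetical, purely for illustration:

```python
# Minimal PySpark sketch: read CSV, apply a simple transform, write JSON.
# Paths and the "name" column are made-up examples, not from any listing.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("csv-to-json-demo").getOrCreate()

# Read a CSV file with a header row, letting Spark infer column types.
df = (
    spark.read
    .option("header", True)
    .option("inferSchema", True)
    .csv("input/events.csv")
)

# Example transform: normalise a text column and drop empty rows.
cleaned = (
    df.withColumn("name", F.trim(F.lower(F.col("name"))))
      .filter(F.col("name") != "")
)

# Write the result back out as line-delimited JSON.
cleaned.write.mode("overwrite").json("output/events_json")

spark.stop()
```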
and classification techniques, and algorithms. Fluency in a programming language (Python, C, C++, Java, SQL). Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau more »
programming language, ideally Python but can also be Java or C/C++. SQL experience. Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau). Get in touch with Ella Alcott - Ella@engagewithus.com more »
Docker, Kubernetes). CI/CD pipelines and tools (e.g. DBT, Jenkins, GitLab CI). Desirable: Experience with analytics tools and frameworks (e.g., Apache Spark, Hadoop). SQL. SageMaker, DataRobot. Google Cloud and Azure Data platform. Metadata-driven frameworks to ingest, transform and manage data more »
looking for you to demonstrate include: Experience of data storage technologies: Delta Lake, Iceberg, Hudi. Sound knowledge and understanding of Apache Spark, Databricks or Hadoop. Ability to take business requirements and translate these into tech specifications. Knowledge of architecture best practices and patterns. Competence in evaluating and selecting development more »
share knowledge with the team. Qualifications: You will have expertise within the following: Java and Python development knowledge (Essential). Previous experience with Spark or Hadoop (Essential). Trino or Airflow (Desirable). Architecture and capabilities. Designing and implementing complex solutions with a focus on scalability and security. Excellent communication and collaboration more »
Bristol, England, United Kingdom Hybrid / WFH Options
Made Tech
how one could deploy infrastructure into different environments. Knowledge of handling and transforming various data types (JSON, CSV, etc.) with Apache Spark, Databricks or Hadoop. Good understanding of possible architectures involved in modern data system design (Data Warehouse, Data Lakes, Data Meshes). Ability to create data pipelines on a more »
best-of-breed Java toolsets - focused on microservices architectures, powerful front- and backend frameworks, RESTful services, and everything from NoSQL databases like MongoDB and Hadoop, high-performance data grids like Hazelcast to multi-node relational systems. You will be working in a Scrum Team of cross-functional skills in more »
for Data & Analytics Platforms. Experience with relational databases like Oracle, NoSQL databases and/or Big Data technologies (e.g. Oracle, SQL Server, Postgres, Spark, Hadoop, other open source). Must have experience in Data Security Solutions (Identity and Access Management and Data Security Access Management). Must have experience of more »
NumPy, Spark). Strong problem-solving skills and attention to detail. Excellent communication and teamwork abilities. Preferred Qualifications: Experience with distributed computing platforms (e.g., Hadoop, Apache Kafka). Familiarity with cloud computing services (e.g., AWS, GCP, Azure). Knowledge of financial markets and trading concepts. Previous exposure to DevOps more »
experience. Demonstrate in-depth knowledge of large-scale data platforms (Databricks, Snowflake) and cloud-native tools (Azure Synapse, Redshift). Experience of analytics technologies (Spark, Hadoop, Kafka). Have familiarity with Data Lakehouse architecture, SQL Server, DataOps, and data lineage concepts. Demonstrate in-depth knowledge of large-scale data platforms (Databricks more »
London, England, United Kingdom Hybrid / WFH Options
Global Relay
a Global Relay DevOps engineer you will be integrated with a software engineering team to develop on-premises ('on-prem') solutions, including working with Hadoop-based technologies. Your role will involve designing, implementing and supporting automated, scalable solutions. Your contribution will have an immediate impact by enabling efficient delivery … them from reoccurring. Deployments: Writing and running deployment automation tools using Helm, Ansible, or other configuration management systems. Platform Integration: With technologies such as Hadoop and Kubernetes. Some of the technologies that you will interact with include: Containerisation and virtualisation: Docker, Kubernetes, VMware. Operating Systems: Linux. Build and deployment … Jenkins, Bitbucket, Maven, Helm. Instrumentation and monitoring: Loki, Prometheus, Grafana, Mimir. Languages and frameworks: Bash, Java, Groovy, Go, Python. Big data technologies: Cassandra, ArangoDB, Hadoop, Kafka, MongoDB, Ceph. Where you have knowledge gaps, training and mentoring will be provided. About You: You have an automation-first mindset. You enjoy more »
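Aside: the monitoring stack named in this listing (Prometheus, Grafana) is typically fed by services that expose metrics for scraping. A small, purely illustrative Python sketch using the prometheus_client package follows; the metric names and port are assumptions, not Global Relay's setup:

```python
# Hypothetical sketch of Prometheus instrumentation in a Python service.
# Metric names, the sleep-based workload, and port 8000 are invented.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("demo_requests_total", "Total requests handled")
LATENCY = Histogram("demo_request_seconds", "Request latency in seconds")

def handle_request():
    # Record how long each (simulated) request takes.
    with LATENCY.time():
        time.sleep(random.uniform(0.01, 0.1))
    REQUESTS.inc()

if __name__ == "__main__":
    # Expose metrics on :8000/metrics for Prometheus to scrape.
    start_http_server(8000)
    while True:
        handle_request()
```

Prometheus would then be configured to scrape localhost:8000/metrics on its usual pull model, with Grafana dashboards built on top.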
requiring data analysis and visual support. Skills: Experienced in programming languages such as Python and/or R, big data tools such as Hadoop, or data visualization tools such as Tableau. Strong SQL proficiency required. The ability to communicate effectively in writing, including conveying complex information and promoting more »
to translate business requirements into high and low-level designs. You'll also define architecture and technical designs, create data flows and integrations using Hadoop, and work closely with product teams throughout testing. Key Responsibilities: Lead Java and Python project development. Design and develop API integrations using Spark. Collaborate … client teams. Stay updated with the latest trends and best practices. Qualifications: Expertise in Java and Python development (Essential). Experience with Spark or Hadoop (Essential). Knowledge of Trino or Airflow (Desirable). Proven ability to design and implement scalable and secure solutions. Excellent communication and collaboration skills. more »
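Aside: this listing pairs Spark (essential) with Airflow (desirable). As a hedged sketch of how the two commonly meet, an Airflow DAG that submits a Spark application might look like the following; the dag_id, schedule, and application path are invented, and SparkSubmitOperator comes from the apache-airflow-providers-apache-spark package:

```python
# Illustrative Airflow 2.x DAG that submits a PySpark job once a day.
# The dag_id, schedule, and /opt/jobs path are assumptions for this sketch.
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="daily_spark_batch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # named "schedule_interval" on Airflow before 2.4
    catchup=False,
) as dag:
    run_spark_job = SparkSubmitOperator(
        task_id="run_spark_job",
        application="/opt/jobs/etl_job.py",  # the Spark script to submit
        conn_id="spark_default",             # Spark connection defined in Airflow
    )
```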
reliability and availability of the company's software products. Data Processing Pipelines: You'll design and implement data processing pipelines using technologies like Kafka, Hadoop, Hive, Storm, or ZooKeeper, enabling real-time and batch processing of data from the blockchain. Hands-on Team Leadership: As a hands-on leader more »
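Aside: for the kind of real-time Kafka pipeline stage this listing describes, a minimal consumer sketch in Python using the kafka-python package might look like this; the topic, broker address, group id, and message fields are hypothetical:

```python
# Sketch of a small Kafka consumer for one stage of a streaming pipeline.
# Topic name, broker address, group id, and the tx_hash field are invented.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "blockchain-events",                 # assumed topic name
    bootstrap_servers="localhost:9092",  # assumed broker address
    group_id="pipeline-demo",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

# Consume messages and hand each one to a processing step.
for message in consumer:
    event = message.value
    # A real pipeline would validate, enrich, and forward the event here.
    print(event.get("tx_hash"))
```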