Oracle, SQL Server, PostgreSQL) and data warehousing technologies. Experience with cloud-based data solutions (AWS, Azure, GCP). Familiarity with big data technologies like Hadoop, Spark, and Kafka. Technical Skills: Proficiency in data modelling (ERD, normalization) and data warehousing concepts. Strong understanding of ETL frameworks and tools (e.g., Talend more »
in at least one of the following additional languages: Java, C#, C++, Scala Familiarity with Big Data technologies in cloud and on-premises environments: Hadoop, HDFS, Spark, NoSQL Databases, Hive, MongoDB, Airflow, Kafka, AWS, Azure, Docker or Snowflake Good understanding of object-oriented programming (OOP) principles & concepts Familiarity with more »
Advanced working knowledge of SQL and relational databases (e.g., Microsoft SQL Server, Oracle). Big Data Technologies: Familiarity with big data technologies such as Hadoop, Spark, or Kafka is a plus. Data Warehousing: Experience in data warehousing, including data modeling and implementing data pipelines. Data Management: Multi-skilled experience more »
well as collaborate in a team environment. Detail-oriented, with strong organizational and time-management skills. Preferred Qualifications: Familiarity with Big Data technologies (e.g., Hadoop, Spark) and data pipelines. Knowledge of deep learning models and frameworks. Experience working in an agile environment or with DevOps practices. Experience with Natural more »
Minimum 5+ years of experience in software engineering with strong expertise in Java or Python. Proven experience with big data technologies such as Apache Hadoop, Apache Spark, Apache Kafka, or other distributed systems. Experience building data pipelines for processing large datasets in batch and real-time. Familiarity with cloud more »
Experience with web technologies (HTML, CSS, LESS, and JavaScript frameworks like Vue, React, Angular). Exposure to big data processing and NoSQL technologies (e.g., Spark, Hadoop, Elasticsearch, Neo4j, Snowflake, AWS Redshift). more »
insights. Requirements: Extensive experience in software engineering with a strong focus on Java or Python and associated frameworks. Proven experience with technologies such as Apache Hadoop, Apache Spark, Apache Kafka, or other distributed systems. Experience in building data pipelines for both batch and real-time processing of large datasets. Familiarity more »
such as scikit-learn, TensorFlow, or PyTorch - Solid understanding of statistical analysis and data mining techniques - Familiarity with big data technologies like Spark or Hadoop - Experience with data visualization tools (e.g., Matplotlib, Seaborn, Plotly) Nice to Have: - Experience with cloud platforms (AWS, GCP, or Azure) - Knowledge of NLP or more »
in data architecture frameworks, data modeling, and data governance practices. Experience with cloud data platforms (e.g., AWS, Azure, Google Cloud), big data technologies (e.g., Hadoop, Spark), and database management systems (e.g., Oracle, SQL Server, NoSQL). Familiarity with data integration tools (e.g., Informatica, Talend), ETL processes, and API-based more »
London, England, United Kingdom Hybrid / WFH Options
Darwin Recruitment
Experience of Interest: 7+ years of experience in Data Engineering & Data Architecture (Data Mesh, Lambda) 3+ years within a Leadership role Big Data (Spark, Hadoop) & Real-time/streaming environments (Kafka) Data Warehousing (Snowflake, Redshift) Database Management (SQL, NoSQL) Data Integration (Talend, Informatica, Apache NiFi) DevOps (Terraform, Docker, Airflow more »
Troubleshoot pipeline issues and implement improvements. Requirements: Experience in data engineering, ideally in AI or tech. Strong SQL, Python, and experience with tools like Hadoop, Spark, and Airflow. Knowledge of cloud platforms (AWS, Azure, or Google Cloud). Solid problem-solving and communication skills. more »
data challenges. Proven leadership or mentoring experience in a technical environment. Strong communication skills for cross-functional collaboration. Familiarity with big data technologies like Hadoop or Spark (desirable). Knowledge of data governance, security, and visualisation tools like Tableau (desirable). Lead Data Engineer rewards: A base salary of more »
necessary for data scientists to do their work. Skills: SQL, Python, ETL tools, cloud platforms (AWS, Google Cloud), database management (e.g., PostgreSQL, MySQL, NoSQL), Hadoop, Spark. Academic Data Analyst: focus on using data to improve academic performance and student learning outcomes. Will work closely with Academics to conduct analyses more »
and product stakeholders to meet requirements Desirable Skills AI Tools: Familiarity with TensorFlow, PyTorch, Keras, and Scikit-learn Data Engineering: Knowledge of Apache Spark, Hadoop, and Kafka Model Deployment: Experience with Docker, Kubernetes, and MLflow Testing: Knowledge of AI testing frameworks and methodologies Frontend Technologies: Experience with React and more »
infrastructure underpinning data systems through a DevOps approach Knowledge of handling and transforming various data types (JSON, CSV, etc.) with Apache Spark, Databricks or Hadoop Good understanding of the possible architectures involved in modern data system design (e.g. Data Warehouses, Data Lakes and Data Meshes) and the different use more »
R, SQL and Python; familiarity with Scala, Java or C++ is an asset Experience using business intelligence tools (e.g. Tableau) and data frameworks (e.g. Hadoop) Analytical mind and business acumen Strong math skills (e.g. statistics, algebra) Problem-solving aptitude Excellent communication and presentation skills more »
to work collaboratively in a team environment Minimum five years of experience in the financial sector Preferred Skills Experience with big data technologies like Hadoop or Spark Familiarity with regulatory requirements related to financial crime (such as AML/KYC) Knowledge of cloud-based database solutions such as AWS more »
decision-making skills Exposure to working with REST APIs Any of the following skills would be an added bonus: Has run code across Hadoop/MapReduce clusters Has code running in a production environment Used SAS before (or at least can decipher SAS code) Worked with very large more »
Greater London, England, United Kingdom Hybrid / WFH Options
NearTech Search
and optimisation. Strong knowledge of Azure Data Factory (ADF) and SQL Server Integration Services (SSIS). Familiarity with Apache Airflow, with additional experience in Hadoop or Spark advantageous. Experience : Demonstrated experience managing complex data migration projects. Strong analytical and problem-solving abilities. Ability to work independently, manage multiple priorities more »
A very exciting opportunity! The following skills/experience is required: Strong Data Architect background Experience in Data Technologies to include: Finbourne LUSID, Snowflake, Hadoop, Spark. Experience in Cloud Platforms: AWS, Azure or GCP. Previously worked in Financial Services: Understanding of data requirements for equities, fixed income, private assets more »
service analytics. Proficiency with data modelling, ETL, and visualisation tools. Experience in cloud environments (GCP/Google Cloud Platform is required) Familiarity with Spark, Hadoop, and DBT. Certification in TOGAF or data management is a plus! If you're passionate about breaking down data barriers and driving a culture of more »
technical skills required: Strong SQL skills - database design/optimisation/SQL Server Integration Services (SSIS) Azure - Azure Data Factory Apache Airflow Spark/Hadoop - beneficial but not absolutely necessary This role is hybrid from central London and does not consider candidates who need any visa sponsorship. Please reach more »
stack, therefore, skills needed: Kubernetes or OpenShift ELK Stack, Elasticsearch or OpenSearch Linux CI/CD Nice to haves, but definitely not essential: Cloudera, Hadoop, Spark, Kafka, HBase, Hue, Atlas Logistics: up to £100,000 base salary 30 days holiday + bank holidays Private health care Annual trips to more »
new platforms and into new customer bases. Currently exploring options including RAD Studio, Visual Studio, Delphi, C#, C++, Client/Server, n-tier, Hadoop and SaaS. They require candidates with a strong computing background. You will be coding in Delphi and other languages. Any similar Object Oriented more »