Competitive Daily Rate
Castille Labs is looking for a Data DevOps Engineer to join our global team for one of our clients in the gaming industry. Candidates must demonstrate a strong interest in Big Data.
- Maintain and enhance the existing solution to adapt to a changing business environment
- Administer the existing data stack (ranging from data engineering tools and data science modelling to BI reporting tools)
- Script various isolated environments (e.g. QA, Staging, User Acceptance Testing and Production)
- Develop processes so that features can be automatically tested and merged into a single code base
- Set up and maintain platforms to manage requests from different verticals
- Administer permissions throughout the team
- Run proofs of concept (POCs) exploring different approaches to deployment automation, with tools and technologies such as:
  - ClickHouse, Apache Spark, the Confluent platform (mainly Avro, KSQL, Kafka Connect and Kafka), Apache Ignite, Apache NiFi and NiFi Registry
- Suggest, maintain and develop tools such as Grafana for performance and security monitoring
- Help with CI/CD processes
- Support containerised Spark and Spring applications by maintaining Kubernetes clusters
- Work with third parties to ensure best practices across the development team when using tools such as ClickHouse or Kafka
- Enhance the code stack to support automation, versioning, tagging and pipeline automation
- BSc Degree Holder in Computer Science or any similar Computing degree
- More than 5 years of experience with Big Data technologies such as Hadoop, HDFS and Apache projects (Kafka, Spark, NiFi), including automation built on them
- Experience working with both Windows and Linux environments
- Strong familiarity with DevOps concepts and methodologies
- Knowledge of and experience with Unix
- Extensive know-how of cloud platform providers (AWS, Azure or GCP)