Information Technology
Dallas, TX

The Company

NorthMark Compute & Cloud (NMC²) is backed by dedicated leadership and investment and operates with a clear mission at the bleeding edge of technology. Its goal is to scale and enhance the high-performance computing (HPC) and cloud infrastructure that supports its clients' research, production, and delivery, enabling breakthroughs that shape the industries of tomorrow. Its engineers build critical infrastructure that eliminates friction in scientific research, simulations, analysis, and decision-making, accelerating discovery and driving faster innovation.

The Position

The HPC Scheduling team develops and manages a large high-performance computing (HPC) platform that enables the business to conduct complex research at scale. We are seeking a highly motivated person to join our team and help us continue to push the envelope in running batch workloads on Kubernetes.

The ideal candidate will have an active interest in Kubernetes and batch computing, a broad range of software engineering and development experience, and experience managing large-scale infrastructure and complex tooling environments.

The main focus will be on Armada, an exciting open-source CNCF project built and maintained by the team, which we use to solve multi-cluster Kubernetes batch job scheduling at scale.

You’ll join an experienced team working at the cutting edge of large-scale ML workloads.

Responsibilities

  • Designing and developing high-quality software solutions using procedural programming languages, with a focus on Golang
  • Building and maintaining highly scalable, highly available and globally distributed systems to support large-scale research workloads
  • Managing and optimising data interactions across relational and non-relational databases, particularly PostgreSQL
  • Developing and operating containerised applications within Kubernetes, ensuring effective orchestration and workload scheduling
  • Supporting, tuning and troubleshooting Linux-based systems as part of our core compute platform
  • Applying core networking knowledge to help debug, optimise and enhance platform connectivity and performance
  • Independently diagnosing and resolving complex technical issues across infrastructure and software layers
  • Applying solid software architecture principles, computer science fundamentals and data structure knowledge to guide design decisions and code quality
  • Driving continuous improvement by contributing to CI/CD pipelines and engineering best practices
  • Staying up to date with emerging technologies and approaches, and applying new knowledge across disciplines

Requirements

  • Experience with developing Kubernetes components, such as controllers and operators
  • Experience with event-driven programming and message queues, such as Apache Kafka and Apache Pulsar
  • Experience with high-performance computing, Kubernetes, or DAG (Directed Acyclic Graph) workflows
  • Experience with running systems at scale using a cloud provider, ideally AWS
  • Use of operational and runtime tools and practices, including monitoring and logging with systems such as Prometheus and Grafana
  • Experience with operating or using job scheduling systems, such as SLURM

NMC²: Intelligence, Squared