The Company
NorthMark Compute & Cloud (NMC²) is backed by dedicated leadership and investment and operates at the bleeding edge of technology with a clear mission: to scale and enhance the high-performance computing (HPC) and cloud infrastructure that supports its clients' research, production, and delivery, enabling breakthroughs that shape the industries of tomorrow. Its engineers build critical infrastructure to eliminate friction in scientific research, simulations, analysis, and decision-making, accelerating discovery and driving faster innovation.
The Position
The Compute Platform Engineer role is responsible for the day-to-day reliability, performance, and operational health of our high-performance compute platforms that support critical research and production workloads. This position focuses on maintaining and troubleshooting CPU and GPU infrastructure, coordinating with vendors, and ensuring systems operate consistently at scale. Working closely with platform, infrastructure, and operations teams, the role plays a key part in sustaining a stable compute environment.
We are seeking a highly skilled and motivated Engineer to join our Compute Platform Management team. In this role, you will take ownership of the reliability and operational excellence of our high-performance computing infrastructure, which underpins our firm’s research and production workloads.
As a Compute Platform Engineer, you will be responsible for identifying and resolving hardware issues, coordinating with vendors, and ensuring compute nodes (CPU and GPU) maintain peak performance. This contract role is ideal for someone who thrives in technically demanding environments and is eager to contribute to the continuous evolution of our compute platform.
Responsibilities:
Design, configure, and manage high-performance compute infrastructure made up of GPU and CPU nodes
Manage the full firmware/BIOS lifecycle across our HPC/AI fleet – from baselines and validation through rollout and compliance.
Troubleshoot hardware components (CPU, GPU, DPU, NVSwitch, NICs, memory, PSU, BMC) and guide replacement or configuration changes. Diagnose and automate recurring hardware issues to improve reliability and reduce recovery time.
Work on the latest AI platforms from day one (e.g., NVL72 / Grace Blackwell), ensuring they are stable, performant, and ready for production use.
Monitor hardware performance, identify areas for improvement, and implement solutions
Automate health checks and onboarding workflows to accelerate safe deployment.
Collaborate with vendors on firmware issues – providing clear repro cases, logs, and impact to drive fixes and improvements.
Recommend process, tooling, and architectural improvements to strengthen platform operations.
Perform diagnostics, tuning, and capacity planning to ensure smooth scale-out
Analyze existing hardware lifecycle processes and recommend improvements and optimizations
Collaborate with various teams to integrate hardware improvements and align with organizational goals
Implement best practices for security hardening of the platform and associated systems
Mentor junior engineers and foster a culture of continuous learning and improvement
Act as a subject matter expert, providing guidance and support for infrastructure-related issues
Leverage Infrastructure as Code (IaC) methodologies to ensure efficient and scalable infrastructure management
Requirements:
3+ years of hands-on experience supporting large-scale compute platforms
Proficiency with HPE server infrastructure, such as ProLiant and Apollo, and NVIDIA GPUs, including A100 and H200
Solid understanding of server architecture, including UEFI/BIOS, PCIe devices, and out-of-band management systems such as iLO and BMC
Proven ability to resolve complex hardware issues and manage vendor relationships
Familiarity with automation tools such as Ansible, Terraform and CI/CD systems
Working knowledge of Linux in high-performance or latency-sensitive environments
Working knowledge of basic network concepts, such as DNS, DHCP, VLANs, switching and routing
Basic working knowledge of Kubernetes and OpenStack technologies (preferred but not required)
Experience with data center operations and process adherence
Excellent communication and coordination skills with cross-functional teams and external partners