Dallas, TX

The Company

NorthMark Compute & Cloud (NMC²) operates at the bleeding edge of technology, backed by dedicated leadership and investment and guided by a clear mission: to scale and enhance the high-performance computing (HPC) and cloud infrastructure that supports its clients' research, production, and delivery, enabling breakthroughs that shape the industries of tomorrow. Its engineers build critical infrastructure that eliminates friction in scientific research, simulation, analysis, and decision-making, accelerating discovery and driving faster innovation.

The Position

We are seeking a Lead Data Engineer to play a critical role in advancing NMC²’s big data engineering and analytics capabilities. This role will lead the design, development, and optimization of scalable data platforms, real-time data pipelines, and analytics-ready data products.

As part of the Big Data Engineering and Analytics organization, this role will focus on enabling a modern data lake and data mesh architecture that supports high-volume, high-velocity, and high-variety data across enterprise and platform domains.

The Lead Data Engineer will bring deep hands-on expertise in Kubernetes, Kafka streaming platforms, and multi-cloud ecosystems including AWS, Azure, and GCP. This role partners closely with data architects, analytics teams, and platform engineering to deliver reliable, governed, and high-quality data solutions.

Responsibilities

  • Lead the design and implementation of scalable big data pipelines for batch and real-time processing

  • Build and operate streaming data platforms using Kafka

  • Design and deploy cloud-native data solutions across AWS, Azure, and GCP

  • Develop and manage containerized workloads using Kubernetes

  • Enable data mesh architecture with domain-oriented data products

  • Design and implement data lake and lakehouse architectures

  • Ensure data quality, reliability, and observability

  • Implement governance capabilities including metadata and lineage

  • Collaborate with analytics, AI, and business teams

  • Optimize performance and cost efficiency

  • Mentor engineering teams

Requirements

  • 15+ years of experience in big data engineering

  • Strong experience with Kafka

  • Strong experience with Kubernetes

  • Experience with AWS, Azure, and GCP

  • Experience building data lakes or lakehouse platforms

  • Experience with data mesh concepts

  • Strong programming skills in Python, Scala, or Java

  • Experience with Spark, Flink, or Beam

  • Experience with Airflow or orchestration tools

  • Understanding of data modeling and governance

  • Experience with CI/CD and infrastructure automation

  • Experience supporting AI and machine learning workloads, including large language models such as Claude

  • Strong communication and leadership skills

NMC²: Intelligence, Squared