Sr. Data Engineer & Data Platform Lead
CINCINNATI OH | Computer Software | Posted: 1 week ago
Job Description:
Our customer is seeking a Senior Data Engineer & Data Platform Lead to support the Pull / Just-In-Time (JIT) manufacturing domain, which delivers near-real-time visibility into material movement, demand, and shortages to reduce inventory, waste, and production delays. This role will serve as the technical lead and escalation point for the Pull data domain, owning production-ready data pipelines that ingest data from ERP and manufacturing systems into the Enterprise Data Lake (AWS Redshift) and deliver engineered datasets into Spotfire and, increasingly, Palantir to support shop-floor and supply chain decision-making. The team operates in a “you build, you support” model, with full end-to-end ownership of pipeline delivery, troubleshooting, and production support.
Requirements:
• 7+ years of hands-on data engineering experience with team-lead experience, owning end-to-end pipeline delivery and serving as an escalation point for complex production issues.
• Strong SQL and PostgreSQL experience, applied to building and maintaining core business logic, transformations, and production data pipelines.
• Experience building ingestion pipelines from enterprise ERP or manufacturing systems (SAP, Oracle, MES, or similar) into a cloud-based enterprise data lake / warehouse environment, primarily in AWS Redshift.
• Experience delivering engineered datasets into analytics tools such as Spotfire, Tableau, Power BI, or similar, to support KPI reporting and business insights.
• Working knowledge of AWS services (Redshift, Lambda), supporting scalable, cloud-based data pipelines in a production environment.
• Production support and troubleshooting experience, operating in a “you build, you support” model without a dedicated DevOps team.
• Bachelor’s Degree (or equivalent professional experience).
Nice to have:
• Python and/or PySpark experience, to support more advanced data transformations and future pipeline development.
• Palantir exposure (Foundry or related platforms), as the role is expected to expand into Palantir-based data pipeline and business logic work over time (not required day one).
Responsibilities:
• Own and lead end-to-end data pipeline delivery for the Pull / JIT domain, serving as technical lead and escalation point for a small pod of onshore engineers and offshore support resources.
• Design, maintain, and enhance production-ready data pipelines that move data from ERP and manufacturing systems into the Enterprise Data Lake (AWS Redshift) and downstream analytics platforms.
• Develop and support PostgreSQL functions, procedures, and transformations that implement core business logic and power supply-chain reporting and insights.
• Deliver engineered datasets into analytics and visualization platforms such as Spotfire (and increasingly Palantir) to support KPI reporting, material visibility, and executive decision-making.
• Coordinate offshore delivery and mentor onshore engineers, guiding development work, reviewing solutions, and ensuring quality and reliability across the pod.
• Troubleshoot, deploy, and support the pipelines and solutions you build in a “you build, you support” model, resolving production issues related to data, performance, or logic.
• Partner closely with TPMs and shop-floor, materials, and supply-chain business leaders to understand demand signals, material constraints, and production drivers, translating requirements into reliable data solutions.
• Contribute to evolving data standards and future platform expansion, including supporting more advanced transformations with Python / PySpark and expanding pipeline logic into Palantir over time.