THE ROLE: ENGINEERING PROGRAM MANAGER FOR AI INFERENCE SOLUTIONS
If you are following the evolution of the leading approaches in deep learning powered AI, the renaissance in NLP, and the coming disruption in computer vision, you likely know it’s all about Transformer-based models. They power neural nets with billions to trillions of parameters, and existing silicon architectures (including the plethora of AI accelerators) are struggling to varying degrees to keep up with exploding model sizes and their performance requirements. More importantly, TCO considerations for running these models at scale are becoming a bottleneck to meeting exploding demand. Hyperscalers are keen to gain COGS efficiencies on the trillions of AI inferences per day they already serve, and even more so to address the steep demand ramp they anticipate over the next couple of years.

d-Matrix is addressing this problem head on by developing a fully digital in-memory computing accelerator for AI inference that is highly optimized for the computational patterns in Transformers. The fully digital approach avoids some of the difficulties of the analog techniques touted in most other in-memory computing AI inference products. d-Matrix’s AI inference accelerator has also been architected as a chiplet, enabling both scale-up and scale-out solutions with flexible packaging options. The d-Matrix team has a stellar track record of developing and commercializing silicon at scale as senior executives at the likes of Inphi, Broadcom, and Intel. Notably, they recognized early the critical role of programmability and the software stack, and have been thoughtfully building up the team in this area since before their Series A. The company has raised $44M in funding so far and has 70+ employees across Silicon Valley, Sydney, and Bengaluru.
The CEO: “This is the role of a lifetime: it provides an opportunity for a program manager to oversee the product development lifecycle at a Silicon Valley AI silicon startup, and to deliver the product to top-tier data center customers.”
Responsible for:
- Managing all aspects of the product life cycle and engineering development: from product concept, customer and stakeholder engagement, business case development, engineering costing, and resource management through product development, deployment, and retirement.
- Cross-functional project management of HW and SW development, including multi-chiplet SoC architecture, design, development, and verification; simulation; and FPGA, virtual prototyping, and emulation.
- Software architecture, design, development, verification, and delivery, covering all online and offline software. This includes the ML compiler and SDK, runtime heterogeneous embedded software, AI kernels, host drivers, and middleware for data center integration and data-scientist workflows using PyTorch and others.
The role reports directly to the CEO and serves as the single point of responsibility and accountability for product development and delivery. It has no direct reports but controls all resources assigned to functional areas across engineering, marketing, and customer support. The role works with engineering management to define milestones and deliverables and tracks progress toward them. It works with early-access customers to deliver beta-release products and manage expectations to ensure their satisfaction, and requires presenting product development status to the CEO, customers, investors, and the Board of Directors.
Qualifications and Experience:
- Bachelor’s Degree in Computer Science, Computer Engineering or Electrical Engineering.
- Graduate Program in Business (e.g. MBA, Masters Commerce).
- PMI Certification (PMP and/or PGMP).
- Minimum of 12 years of relevant professional experience in leading programs that span SoC hardware and software development and (more recently) ML workflows.
- Desired experience in NLP models and Data Center Software Systems.
- Be a leader, both inside and outside the company.
- Take ownership of the entire AI Inference program and drive a series of successful product lines across the program.
- Be an “Agent of Change” – working across all levels of the engineering organization to steer and focus resources on what is important for ultimate product success.
- Be responsible for the development of product strategy and the deployment of resources to deliver that strategy.
- Be accountable for tracking progress and delivering the project to the agreed resource plan and schedule. Be accountable for product performance.
- Take ownership of customer engagements and customer satisfaction.
- Embrace a “no surprises” approach to project reporting, with complete transparency and integrity. Be responsible for ensuring all aspects of the project are tracked accurately, and that product differentiation and competitive advantage are monitored throughout. Report project status and progress to key stakeholders using state-of-the-art reporting tools and procedures.
- Understand the risks that threaten the schedule, product performance, or customer satisfaction. Identify and track these risks and their associated contingencies (again, with a “no surprises” approach).
- Treat all people with respect. Leverage individual strengths to achieve desired outcomes; work around individual weaknesses.
- Design and implement reward mechanisms that draw attention to desired behaviors and performance targets.
- Exhibit the values of the organization in all interactions within and external to the company.
Your success will be measured by:
- Product delivery to agreed requirements, features, schedule and budget.
- Customer and Stakeholder Satisfaction.
- Product development team engagement with the product success.
- Overall program success using agreed performance metrics.
- Leadership qualities as a contributing member of the executive team.