d-Matrix has fundamentally changed the physics of memory-compute integration with our digital in-memory compute (DIMC) engine. The "holy grail" of AI compute has been to break through the memory wall and minimize data movement. We've achieved this with a first-of-its-kind DIMC engine. Having raised over $154M, including $110M in our Series B round, d-Matrix is poised to scale Generative AI inference acceleration for Large Language Models with our chiplet and in-memory compute approach, meeting the energy and performance demands these models impose. We are on track to deliver our first commercial product in early 2024. The company has 70+ employees across Silicon Valley, Sydney and Bengaluru.
Hybrid role: onsite at our San Jose headquarters 3-5 days per week, with the flexibility to work remotely the remainder of the time.
The Role: Machine Learning Performance Architect
What You Will Do:
As part of this team, you will be responsible for design space exploration and workload characterization/mapping spanning both the data plane and the control plane of the SoC. You will design, model and drive new architectural features to shape next-generation hardware, and evaluate the performance of cutting-edge AI workloads. You have experience with the performance architecture of GPUs/AI accelerators and understand the nuances of optimizing and trading off the many aspects of hardware-software co-design. You are able to build and scale software deliverables in a tight development window. You will work with a team of hardware architects to build out the modeling infrastructure, collaborating closely with other software (ML, Systems, Compiler) and hardware (mixed signal, DSP, CPU) experts across the company.
What You Will Bring:
- MS in EE, Computer Science, Engineering, Math, Physics or a related field plus 5 years of industry experience; PhD with 1+ years of industry experience preferred.
- Strong grasp of computer architecture, data structures, system software, and machine learning fundamentals
- Experience with performance modeling, analysis and correlation (with RTL) of GPU/AI accelerator architectures
- Proficiency in C/C++ or Python development in a Linux environment using standard development tools
- Experience with deep learning frameworks (such as PyTorch, TensorFlow)
- Experience with inference servers/model-serving frameworks (such as Triton, TFServ, KubeFlow, etc.)
- Experience with distributed communication collectives such as NCCL and OpenMPI
- Experience with MLOps from definition through deployment, including training, quantization, sparsity, and model preprocessing
- Self-motivated team player with a strong sense of ownership and leadership
- Prior startup, small team or incubation experience
- Work experience at a cloud provider or AI compute / sub-system company
- Experience with open-source ML compiler frameworks such as MLIR
Equal Opportunity Employment Policy
d-Matrix is proud to be an equal opportunity workplace and affirmative action employer. We’re committed to fostering an inclusive environment where everyone feels welcomed and empowered to do their best work. We hire the best talent for our teams, regardless of race, religion, color, age, disability, sex, gender identity, sexual orientation, ancestry, genetic information, marital status, national origin, political affiliation, or veteran status. Our focus is on hiring teammates with humble expertise, kindness, dedication and a willingness to embrace challenges and learn together every day.