If you are following the evolution of the leading approaches in deep learning powered AI, the renaissance in NLP, and the next disruption in computer vision, you likely know it's all about Transformer-based models. These models are powering neural nets with billions to trillions of parameters, and existing silicon architectures (including the plethora of AI accelerators) are struggling to varying degrees to keep up with exploding model sizes and their performance requirements. More importantly, TCO considerations for running these models at scale are becoming a bottleneck to meeting exploding demand. Hyperscalers are keen to gain COGS efficiencies on the trillions of AI inferences per day they already serve, and even more so to address the steep demand ramp they anticipate over the next couple of years.

d-Matrix is addressing this problem head on by developing a fully digital in-memory computing accelerator for AI inference that is highly optimized for the computational patterns in Transformers. The fully digital approach avoids some of the difficulties of the analog techniques touted in most other in-memory computing AI inference products. d-Matrix's AI inference accelerator has also been architected as a chiplet, enabling both scale-up and scale-out solutions with flexible packaging options.

The d-Matrix team has a stellar track record of developing and commercializing silicon at scale as senior executives at the likes of Inphi, Broadcom, and Intel. Notably, they recognized early the extremely important role of programmability and the software stack, and have been thoughtfully building up the team in this area since before their Series A. The company has raised $44M in funding so far and has 70+ employees across Silicon Valley, Sydney, and Bengaluru.
ESSENTIAL DUTIES AND RESPONSIBILITIES:
- Responsible for the definition, micro-architecture, and design of the AI sub-system modules.
- Own the design: document, execute, and deliver fully verified, high-performance, area- and power-efficient RTL that meets the design targets and specifications.
- Develop micro-architecture and RTL; perform synthesis, logic verification, and timing verification using leading-edge CAD tools and semiconductor process technologies.
- Design and implement logic functions that enable efficient test and debug.
- Participate in silicon bring-up and validation for the blocks owned.
QUALIFICATIONS:
- Master's degree in Electrical Engineering, Computer Engineering, or Computer Science with 5 years of meaningful work experience.
- Experience in micro-architecture and RTL development (Verilog/SystemVerilog), with a focus on processor design (Tensilica, ARM, MIPS, RISC-V) and neural network engines.
- Experience working with high-speed interfaces and 3rd-party IP.
- Exposure to mixed-signal design, computer architecture, and computer arithmetic is required.
- Good understanding of the ASIC design flow, including RTL design, verification, logic synthesis, and timing analysis.
- Strong interpersonal skills and an excellent team player.