India - Kernels - Senior/Staff/Principal Engineers + Manager

Bengaluru, Karnataka
Work Type: Full Time



About us 

If you are following the evolution of deep learning-powered AI, the renaissance in NLP, and the next disruption in computer vision, you likely know it's all about Transformer-based models. They are powering neural nets with billions to trillions of parameters, and existing silicon architectures (including the plethora of AI accelerators) are struggling to varying degrees to keep up with exploding model sizes and their performance requirements. More importantly, TCO considerations for running these models at scale are becoming a bottleneck to meeting exploding demand. Hyperscalers are keen to gain COGS efficiencies on the trillions of AI inferences per day they are already serving, and even more so to address the steep demand ramp they anticipate over the next couple of years.

d-Matrix is addressing this problem head on by developing a fully digital in-memory computing accelerator for AI inference that is highly optimized for the computational patterns in Transformers. The fully digital approach avoids some of the difficulties of the analog techniques touted in most other in-memory computing AI inference products. d-Matrix's AI inference accelerator has also been architected as a chiplet, enabling both scale-up and scale-out solutions with flexible packaging options.

The d-Matrix team has a stellar track record of developing and commercializing silicon at scale as senior executives at the likes of Inphi, Broadcom, and Intel. Notably, they recognized early the extremely important role of programmability and the software stack, and have been thoughtfully building up the team in this area since before their Series A. The company has raised $44M in funding so far and has 70+ employees across Silicon Valley, Sydney, and Bengaluru.

Why d-Matrix 

We want to build a company and a culture that stands the test of time. We offer the candidate a unique opportunity to express themselves and become a future leader in an industry that will have a huge global influence. We are striving to build a culture of transparency, inclusiveness, and intellectual honesty, while ensuring all our team members are always learning and having fun on the journey. We have built the industry's first highly programmable in-memory computing architecture, applicable to a broad class of applications from cloud to edge. The candidate will get to work on a path-breaking architecture with a highly experienced team that knows what it takes to build a successful business.

The Role: Kernels Senior/Staff/Principal Engineers + Manager 

The role requires you to be part of the team that productizes the SW stack for our AI compute engine. As part of the Software team, you will be responsible for the development and enhancement of software kernels for next-generation AI hardware. In this role you will analyze the performance of ML ops, map them to a SIMD processor, and drive optimizations into the hardware to improve performance. You will also weigh hardware-software co-design trade-offs, and build and scale software deliverables within a tight development window. You will work with other software teams (Compiler, ML, Systems) and hardware experts across the company to deliver performant workloads on the chip.


Prior entrepreneurship or start-up experience is a plus
Excellent communication skills and the ability to collaborate across geographies
8-15 years of experience in HW/SW co-design and kernel development, with a Bachelor's, Master's, or PhD


Bangalore & Hyderabad

Submit Your Application
