
Nvidia says AI can turn a 10-month GPU design job into an overnight task, but human chip designers still matter

By Aimirul

Nvidia says AI is no longer just a side tool in chip development. The company is now using it across multiple parts of the GPU design process, with one example cutting a job that used to take eight engineers 10 months down to a single overnight run on one GPU.

That does not mean Nvidia can simply ask an AI to "make the next GeForce" and call it a day. According to Nvidia chief scientist William Dally, the company is still far from a point where AI can design an entire processor without heavy human involvement.

One of the biggest time-savers is deep in the chip pipeline

Dally said Nvidia has been pushing AI into as many design stages as possible. One major win is a task called standard cell library porting, which happens when moving designs to a new manufacturing process.

Previously, handling a library of around 2,500 to 3,000 cells took eight engineers about 10 months. Nvidia says its reinforcement learning system, called NVCell, can now finish the same work overnight on a single GPU.

That is a serious productivity jump, especially for a company racing to ship increasingly complex GPUs for gaming, AI and data centres.

Nvidia also built internal AI mentors for engineers

The company is not only using AI for low-level design work. It has also built internal large language models named ChipNeMo and BugNeMo.

These models were fine-tuned on Nvidia's own confidential material, including RTL, hardware design documents, and architecture specs from every GPU Nvidia has built. In practice, that lets junior engineers ask questions about complicated hardware blocks without constantly pulling senior engineers away from their work.

Dally described these systems as patient internal mentors rather than fully independent designers. For Nvidia, that means senior staff can spend more time on tougher problems while newer engineers get answers faster.

In some areas, AI is already beating humans

Nvidia says reinforcement learning is also helping with traditional circuit design problems. Instead of following the same paths a human engineer might take, the AI explores options through large-scale trial and error.

According to Dally, some of those results look strange at first glance, but still end up delivering designs that are 20% to 30% better than human-created versions in area, power, and performance.
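To make the idea concrete, here is a toy sketch of trial-and-error design exploration. Everything in it is invented for illustration: the "circuit" is just a list of gate sizes, the cost function is a made-up blend of area, power, and delay, and the search is plain random sampling rather than the reinforcement learning Nvidia actually uses.

```python
import random

def evaluate(widths):
    """Toy stand-in for a circuit evaluator (lower cost is better).
    Scores a candidate on made-up proxies for area, power, and delay;
    real flows would evaluate candidates with physical-design tools."""
    area = sum(widths)
    power = sum(w * w for w in widths) / len(widths)
    delay = sum(1.0 / w for w in widths)  # wider gates = faster, in this toy model
    return area + power + delay

def trial_and_error_search(n_gates=8, n_trials=5000, seed=0):
    """Sample many random candidates and keep the best one found.
    A crude stand-in for the large-scale exploration the article
    describes -- it wanders paths a human would never try by hand."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(n_trials):
        candidate = [rng.uniform(0.5, 4.0) for _ in range(n_gates)]
        cost = evaluate(candidate)
        if cost < best_cost:
            best, best_cost = candidate, cost
    return best, best_cost

best, best_cost = trial_and_error_search()
```

The point of the sketch is only the shape of the loop: generate candidates, score them against multiple objectives at once, and keep whatever wins, however odd it looks.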

The company is also using AI in place and route, as well as early architectural exploration. Its agent-based systems can run many experiments, test different design directions, and narrow down promising configurations faster than a human team working manually.

But the hardest part still needs humans

Even with all that progress, Nvidia says AI cannot yet take over the full verification process, which is one of the slowest and most important stages in chip development.

Designs still need to be emulated and tested properly to make sure they actually work. That is why Dally says Nvidia is still "a long way" from full end-to-end AI chip design.

The longer-term vision is a multi-agent setup, where different AI systems handle specialised parts of the process, more like a team of experts rather than one all-knowing model.

Why Malaysia and SEA readers should care

For gamers, PC builders, esports organisers and even AI startups in Malaysia and across SEA, this matters because faster chip design could shorten the time between major GPU generations and help companies explore better designs more quickly.

That does not guarantee cheaper graphics cards, and Nvidia did not say this would directly lower prices. But it does show how aggressively the company is using AI to speed up the creation of future hardware that powers gaming PCs, creator rigs, esports broadcast machines and AI workloads.

In a region where demand for better GPUs keeps climbing, from gaming cafes to livestream studios, anything that helps major chipmakers move faster is worth watching.

Source: Tom's Hardware

Tags

Nvidia, AI, GPUs, Semiconductors, PC Gaming