
Nvidia Develops B30A AI Chip for China to Surpass H20
Based on its latest Blackwell architecture, Nvidia is developing an AI processor for the Chinese market, expected to be called the B30A, designed to outperform the H20 in certain applications. The H20 is currently the most capable model US authorities allow Nvidia to export to China. The B30A uses a single-die design rather than the dual-die layout of the flagship B300 accelerator, giving it roughly half the B300's raw processing power while still setting it well apart from the H20.
The B30A will include high-bandwidth memory and NVLink interconnect technology, both of which speed up data exchange between processors. These features mirror those already present on the H20, but Nvidia expects the new chip to deliver higher performance. The company plans to ship pre-production samples to Chinese customers as soon as next month, pending final design sign-off and customer readiness.
The initiative comes at a time of heightened tension between the US and China, with the US tightening restrictions on advanced AI chips over security concerns. President Donald Trump has shown willingness to consider relaxing some of these restrictions, but it is unclear whether the B30A will receive export approval. Nvidia's H20 shipments were stalled for weeks before resuming in July under stringent conditions.
China accounts for about 13 percent of Nvidia's total revenue, making it a significant market. By staying in the market, Nvidia keeps Chinese clients within its software ecosystem, which is widely seen as stronger than its rivals'. That discourages customers from turning to companies such as Huawei, which can supply hardware but lack comparable software and the ability to integrate memory and equipment. At the same time, Nvidia disputes concerns raised by Chinese state media over the security of its processors.