Nvidia’s Vera Rubin artificial intelligence system is unveiled in an interview with CNBC, Feb. 25. Samsung Electronics’ SOCAMM2 modules are above and below the Vera CPU, which is in the center. Captured from CNBC website
The upcoming GTC 2026, Nvidia’s artificial intelligence (AI) and advanced computing conference, is set to become a stage for competition among memory makers over low-power memory, with Korea’s Samsung Electronics and SK hynix planning to showcase new server memory modules at the conference.
According to industry officials Monday, Samsung Electronics and SK hynix will both have booths at GTC 2026, scheduled for March 16 to 19 in San Jose, California.
Along with high-bandwidth memory (HBM), the key memory chip driving the progress of AI accelerators, the two companies plan to exhibit the small outline compression attached memory module 2 (SOCAMM2).
SOCAMM2 is a module of low-power double data rate (LPDDR) memory chips, designed to cut power consumption to roughly one-third that of conventional DDR-based modules.
While HBM is typically mounted next to a graphics processing unit (GPU) to support computing acceleration, SOCAMM2 is commonly placed closer to the CPU and is designed to improve overall system-level power efficiency.
Their exhibitions are intended not only to showcase the modules but also to highlight to attending industry officials which company is likely to gain the upper hand in Nvidia’s next-generation AI platform, Vera Rubin.
During the event, Nvidia is expected to showcase a variety of new devices for AI computing, with the Vera Rubin platform likely to be the highlight. Nvidia CEO Jensen Huang previously unveiled the architecture of Vera Rubin during CES 2026 in January, and the company is expected to reveal more details at GTC 2026, including specifications, derivative products in the Rubin lineup, mass production schedules and performance.
According to sources, Samsung Electronics and SK hynix, along with U.S.-based Micron, are now manufacturing and optimizing their SOCAMM2 modules in hopes of having them adopted starting with Nvidia’s VR200, which is expected to enter mass production in the second half of the year. The three are competing to secure a larger share of the supply.
A rendered image of Samsung Electronics’ SOCAMM2 / Courtesy of Samsung Electronics
Samsung Electronics appears to have taken an early lead in the competition. Nvidia showcased the Vera Rubin platform in an interview with CNBC on Feb. 25, and video footage from the interview showed Samsung Electronics’ SOCAMM2 module mounted next to the Vera CPU.
Among memory makers, Samsung Electronics began mass production and shipments of its 192-gigabyte SOCAMM2 to clients earlier than its rivals. KB Securities analyst Kim Dong-won estimates that Samsung Electronics’ SOCAMM2 supply to Nvidia “will reach 10 billion gigabits next year, accounting for about 50 percent of Nvidia’s total SOCAMM2 demand,” adding that the company is expected to rank first in supply share.
At GTC 2026, Samsung Electronics Executive Vice President Song Yong-ho will speak about the future of semiconductor manufacturing through AI, highlighting the company’s capability as a turnkey solution provider that integrates chip design, foundry and packaging.
Nvidia CEO Jensen Huang, right, talks with SK Group Chairman Chey Tae-won during the Asia-Pacific Economic Cooperation CEO Summit in Gyeongju, North Gyeongsang Province, Oct. 31, 2025. Korea Times file
SK hynix also plans to promote its SOCAMM2 capabilities at GTC 2026, highlighting its mass-production capacity and the strong trust it has built with Nvidia as a major HBM supplier.
SK Group Chairman Chey Tae-won is set to visit GTC 2026 to meet with the Nvidia CEO, and the two are expected to discuss cooperation on advanced chips.
At MWC 2026 last week, SK hynix exhibited its 192-gigabyte SOCAMM2 as part of its AI memory portfolio. During its earnings call in January, the company announced a plan to “expand the SOCAMM2 product lineup by using the 1c process,” its sixth-generation technology for 10-nanometer-class production.
Industry officials said SOCAMM2 is drawing greater attention because it promises more reliable profitability for chipmakers.
HBM is currently one of the most expensive memory chips, with an average selling price per bit several times that of legacy DRAM. However, its fabrication is highly difficult and carries heavy manufacturing costs, meaning profitability can be limited if a company fails to secure a sufficient manufacturing yield.
By contrast, SOCAMM2 uses DRAM produced through relatively mature processes, making it more stable with higher yields. As a result, when supply tightens, price increases are more likely to translate directly into improved profitability. As SOCAMM2 offers strong advantages in power efficiency, which is now the key focus in the chip industry, it has the potential to become a high-margin product.
“Designed to be paired with Nvidia’s CPU, SOCAMM2 can deliver significantly lower power consumption and higher bandwidth than existing products, improving system-level power efficiency,” one of the officials said. “This is expected to secure strong demand from data centers that prioritize power efficiency and total cost of ownership.”