Micron CEO Sanjay Mehrotra recently stated that as AI inference scales up, token demand continues to rise, and token-generation speed depends on faster, higher-capacity memory. The core challenge today is not demand or pricing but supply, which is extremely tight and cannot be expanded quickly. Demand for both traditional servers and AI servers remains strong, yet both are constrained by tight supplies of DRAM and NAND.