Hardware development cycles are usually longer than software ones, and the cost reductions from optimisation are far more visible and tangible: every cent per unit you save during development is multiplied by millions during mass production. Mistakes in hardware are much costlier too. Unlike software, which you can update in minutes, buggy hardware can require very expensive recalls, turning a lot of working units into scrap or downgrading them into much cheaper models. The moat is also smaller: people mostly care about performance per cost for running their software, not about brands or fancy features, so if you do not invest in expensive and risky new development, invention and research, you will get left behind by the competition.
For GPUs, the need to process a lot of simple work in parallel very quickly means that the individual compute units have to be much simpler and more streamlined than their CPU counterparts. That simplicity, however, also means that you cannot easily compete on how the individual units operate: since they are simple, they are also simple for your competitors to copy and reverse engineer.
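To make that contrast concrete, here is a minimal CUDA sketch (the kernel name, array size and launch parameters are illustrative, not taken from any particular product): each thread does one trivial addition, and the performance comes entirely from running a huge number of such simple threads side by side, not from any cleverness inside the individual unit.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread performs one trivial piece of work: add two numbers.
// The individual "unit" of computation is deliberately simple; the
// throughput comes from launching a million such threads at once.
__global__ void add(const float* a, const float* b, float* c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];   // one simple operation per thread
}

int main()
{
    const int n = 1 << 20;                 // ~1M elements (illustrative size)
    size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);          // unified memory keeps the sketch short
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);

    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    add<<<blocks, threads>>>(a, b, c, n);  // many simple units working in parallel
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);           // expect 3.0

    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

A CPU would tackle the same array with a loop over a handful of wide, complex cores; the GPU instead spreads the work across thousands of much simpler ones.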
Imagine not having a PC with a GPU in 2025.