It is though. AI makes tons, and I mean TONS, of mistakes. Not just in the code but in the overall design and architecture: security best practices get ignored, it creates race conditions that even a junior engineer would know to avoid, and its context limit means it physically cannot consider the entire codebase when it's making changes. People get excited because it can spit out code that works, and it can spit out a lot of code that works. But it genuinely cannot produce production-ready code without heavy human review and correction, at which point the productivity gains disappear, because now humans have to read through and understand code they didn't write just to figure out why it isn't working. It's a great learning tool, and I use it extensively to learn new libraries; sometimes I can hand it a class with a bug I can't figure out and it'll spot a silly typo. But that really just makes it a better Google and IDE spellcheck.
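To make the race-condition point concrete, here's a minimal sketch of the check-then-act pattern I keep seeing. The bank-balance example is made up for illustration, not from any real generated code:

```python
import threading
import time

balance = 100  # shared state with no lock around it

def withdraw(amount):
    global balance
    if balance >= amount:    # check...
        time.sleep(0.01)     # widen the window so the race shows up reliably
        balance -= amount    # ...then act, but the check may be stale by now

threads = [threading.Thread(target=withdraw, args=(80,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(balance)  # -60: both threads passed the check before either subtracted
```

The fix is a lock around the check and the update together, and that's exactly the kind of thing that gets dropped when code is generated piece by piece without the full picture.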
The biggest problem with LLMs as a productivity tool is that they aren't deterministic, which every previous technological advance has been. A factory assembly line spits out the same kind of car over and over again. A piece of software produces the same output for the same input over and over again. An LLM's "creativity" comes from randomness, which inherently limits its usefulness for anything that needs reproducibility, and, in my opinion as someone who is also a musician, limits its usefulness for creative endeavors too. I'm not saying this to pooh-pooh using AI, but tools have their applications, and a lot of people are claiming generative AI is a magic wand that can (or will someday be able to) do anything, and that's simply not true.
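Since "randomness" can sound hand-wavy, here's a toy sketch of temperature sampling, which is where that randomness comes from. The vocabulary and scores are invented for illustration; real models do the same thing over tens of thousands of tokens:

```python
import math
import random

def sample_token(logits, temperature):
    """Pick a next token from raw scores. temperature=0 collapses to argmax."""
    tokens = list(logits)
    if temperature == 0:
        # Greedy decoding: the same input always yields the same output.
        return max(tokens, key=lambda t: logits[t])
    # Softmax with temperature, then a weighted random draw.
    scaled = {t: logits[t] / temperature for t in tokens}
    m = max(scaled.values())  # subtract the max for numerical stability
    weights = [math.exp(scaled[t] - m) for t in tokens]
    return random.choices(tokens, weights=weights)[0]

logits = {"return": 2.1, "yield": 1.9, "raise": 0.3}  # toy next-token scores

print([sample_token(logits, 0) for _ in range(5)])    # identical every run
print([sample_token(logits, 1.0) for _ in range(5)])  # varies run to run
```

At temperature 0 you get reproducibility back, but you also flatten out the "creativity", which is exactly the trade-off I'm pointing at.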