Not anytime soon. In the first place, "artificial intelligence" doesn't actually exist yet. People just call it that for, well... reasons. It's just better machine learning and deep learning and neural networks and all those other things (idk, I'm not in that field). But it's not intelligent. It can't do the things an intelligent being can do. It can certainly help you, but it won't do the work for you. Let's take an example, one very dear to the hearts of most uni students I know: doing their assignments, mainly their essays. ChatGPT writes a very human-looking essay, for sure. But it's quite nonsensical most of the time. Give it a prompt about anything factual and print the result out. At a quick glance, it all seems fine. But when you actually read it, you'll realize very quickly that not everything in it is fact. It makes up facts, uses the wrong facts, or sometimes pulls in facts from some other topic entirely. And you only catch this when you're already familiar with the topic.
Now imagine having ChatGPT write about a topic you know nothing of. You would never know whether what it wrote is true or false. You can't trust the "AI" that what it tells you is correct, because it doesn't know what is correct, because it isn't intelligent. At the end of the day, you yourself have to fact-check its work. Some people I know spent 3-4 hours just fact-checking an assignment they had ChatGPT do, and at that point, why not just write it yourself?
Basically, this "AI" will always need an expert to oversee what it's doing, because you can't 100% trust something unintelligent with the jobs people say it will replace. Someone out there has probably worded this better than I did; go read that instead if I didn't convince you of this opinion.