AI itself is not a valid source of information, unless you're citing its behaviors for the purpose of discussing the AI itself.
At this time, the Large Language Models that power the kind of AI you might use as a research assistant may 'hallucinate,' a word that, when applied to AI, means the model is predicting a series of words that sound right but are simply incorrect.
Similarly, for an AI to know something, it must have been trained on data from elsewhere, which means it relies on information from human-performed research; the AI isn't actually the source of the information. Citing an AI would be like citing your friend Greg because Greg read a piece of information out of a book for you, and also, Greg may or may not have made it up!
That said, much like you would with your friend Greg, if the AI can show you where the information comes from (Copilot, for example, will provide a link to the webpage where it found the answer), you can follow that link back to an otherwise acceptable source, verify that the information is as the AI stated, and then cite that acceptable source. In some cases, such as when the AI gets its information from Wikipedia, you will have to follow that source's own citations to find an acceptable source. It's appropriate to think of these AI tools as semi-reliable reference librarians: capable of quickly locating information for your perusal, but incapable of serving as direct sources of expertise on most topics.
Perplexity is an AI tool that shows you the sources it pulls information from, so bear in mind that different AI tools may be better or worse suited to a research role.
Example of a hallucination by ChatGPT (GPT-3)
Note that if you look at the poem 'Constantly Risking Absurdity' by Lawrence Ferlinghetti, many of these lines don't actually exist; the AI made them up based on patterns in its training data, or else found them in other poems or on other webpages.