As AI reshapes information access, a key question emerges: how closely do AI-generated answers match Google's trusted results? Recent research from Semrush, a digital marketing analytics platform, compared AI citations against Google's top 10 rankings and found that Perplexity AI aligns with Google far more closely than its competitors.
The Study Results

Semrush's June 2025 analysis examined 150,000 AI citations, measuring how closely each tool's cited sources matched Google's top 10 results. Perplexity AI emerged as the clear leader with 91% domain overlap and 82% URL overlap, while Google AI Overviews followed with 86% and 67% respectively. Google AI Mode showed 54% and 35%, and ChatGPT trailed at 45% and 28%.
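The report excerpt doesn't spell out Semrush's exact methodology, but the two metrics are straightforward to reason about: domain overlap asks whether an AI tool cites the same sites Google ranks, and URL overlap asks whether it cites the same pages. The Python sketch below shows one plausible way to compute them for a single query; the function name, URL normalization, and sample data are illustrative assumptions, not the study's actual code.

```python
from urllib.parse import urlparse

def overlap_metrics(ai_citations, google_top10):
    """Illustrative metrics: share of an AI answer's cited domains/URLs
    that also appear in Google's top 10 results for the same query."""
    def domain(url):
        # Normalize to the host, ignoring a leading "www."
        host = urlparse(url).netloc.lower()
        return host[4:] if host.startswith("www.") else host

    ai_domains = {domain(u) for u in ai_citations}
    google_domains = {domain(u) for u in google_top10}
    google_urls = set(google_top10)

    domain_overlap = len(ai_domains & google_domains) / len(ai_domains)
    url_overlap = sum(1 for u in ai_citations if u in google_urls) / len(ai_citations)
    return domain_overlap, url_overlap

# Hypothetical example: two of three cited domains and one of three URLs
# also appear in Google's top results for the query.
ai = ["https://example.com/a", "https://www.wikipedia.org/wiki/AI", "https://blog.io/post"]
google = ["https://example.com/a", "https://wikipedia.org/wiki/ML", "https://news.site/x"]
print(overlap_metrics(ai, google))  # -> (0.666..., 0.333...)
```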
What Sets Perplexity Apart
Perplexity's strong performance stems from its citation-focused approach, which prioritizes transparency and source attribution. The platform's recent API updates include domain filtering tools that let users constrain searches to trusted sources, improving the reliability of what gets cited. These features have made it increasingly popular among researchers, journalists, and analysts who need reliable, fact-based information.
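As a concrete illustration of that domain-filtering capability, the request below restricts a Perplexity API query to an allowlist of domains. This is a minimal sketch based on Perplexity's publicly documented chat-completions endpoint; the model name and the search_domain_filter and citations fields reflect the public docs at the time of writing and should be treated as assumptions that may change between API versions.

```python
import os
import requests

# Minimal sketch: ask Perplexity a question while restricting its web search
# to an allowlist of trusted domains.
response = requests.post(
    "https://api.perplexity.ai/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['PERPLEXITY_API_KEY']}"},
    json={
        "model": "sonar",
        "messages": [
            {"role": "user",
             "content": "How closely do AI search tools' citations overlap with Google's top results?"}
        ],
        # Only cite results from these domains (allowlist); field name assumed from docs.
        "search_domain_filter": ["semrush.com", "searchengineland.com"],
    },
    timeout=60,
)
response.raise_for_status()
data = response.json()
print(data["choices"][0]["message"]["content"])  # the generated answer
print(data.get("citations", []))                 # source URLs, if returned
```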
ChatGPT, by contrast, prioritizes broad contextual understanding over exact source matching. While this approach encourages creative responses, it makes verification harder and helps explain its lower alignment with Google's rankings.
What This Means
For content publishers, strong Google rankings now translate into greater visibility in AI responses. Users choosing between AI tools should weigh whether they need source-grounded accuracy or creative synthesis. For research-intensive industries, Perplexity's performance suggests it may become the preferred tool.
As AI search grows, alignment with Google's results may become a new standard for credibility. Perplexity is currently setting that standard, while ChatGPT serves better as a creative assistant than a strict research tool.