“Winning AI search” bonus Q&A: Answering all your questions from our AIO/GEO webinar


How should you measure the impact of AI search? How do you choose which queries and prompts to optimize for? Will websites even exist in the future? Our experts tackle all the attendee questions from Contentstack's recent "Winning AI search" webinar.
Our recent “Winning AI search: The new rules for 2026” webinar with Forrester featured an hour of highly practical insights on boosting your brand’s visibility in generative AI responses.
The webinar was so jam-packed that we didn’t have time to cover every question that was submitted by attendees during the Q&A portion. So we decided to answer them here.
Read on below for our team’s answers, and be sure to check out our new guide “AI search & visibility: An enterprise playbook” for even more advice on how to set up your website content for AIO/GEO success (and how Contentstack’s agentic experience platform can get you there faster).
Question 1: What are some KPIs to focus on for AI search if click-throughs are decreasing for website traffic?
With rankings and referral traffic losing relevance in the zero-click era, brands need to start measuring how often they’re in the conversation when potential buyers ask AI engines for relevant information.
Contentstack’s marketing team currently uses Semrush’s AI Visibility tool to track KPIs including:
- Share of voice % in AI answers compared to our top competitors. This metric varies based on AI platform; we keep an eye on how Contentstack is performing on ChatGPT, Google AI Mode, Gemini and Perplexity.
- Number of prompts that trigger AI responses mentioning our brand. Semrush refers to this as “Mentions,” and it’s a number that should be steadily increasing as you fill topic gaps on your site and citation gaps off your site.
- Positive brand sentiment % compared to our top competitors. When your brand is referenced in AI answers, you want to be positively described as much as possible. Sentiment tends to be a direct reflection of your brand’s public reviews, conversations on forums like Reddit, and how often you’re included in “best X” rankings and listicles.
Note: One thing we stopped tracking is our rankings and share of voice for specific prompts. We’ll explain why in Question 3.
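Share of voice is simple to compute once you have mention counts per brand. Here's a minimal sketch, assuming hypothetical brands and counts exported from a tracking tool:

```python
# Hypothetical sketch: computing AI share of voice from mention counts
# exported by a visibility tracking tool. Brand names and counts are made up.

def share_of_voice(mentions: dict[str, int]) -> dict[str, float]:
    """Each brand's share of total AI-answer mentions, as a percentage."""
    total = sum(mentions.values())
    return {brand: round(100 * count / total, 1) for brand, count in mentions.items()}

mentions = {"YourBrand": 42, "CompetitorA": 35, "CompetitorB": 23}
print(share_of_voice(mentions))
# {'YourBrand': 42.0, 'CompetitorA': 35.0, 'CompetitorB': 23.0}
```

Tracking this per platform (ChatGPT, Gemini, Perplexity and so on) gives you the competitive trend line even when referral clicks are flat.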
Even without a paid subscription to an AI tracking platform like Semrush or Profound, you could still hack together a report in GA4 or Looker Studio that shows incoming traffic to your website from AI platforms.
You can even go the old-school route and simply ask new customers if they found you through an AI search. If you’re seeing a noticeable surge in customers attributing their purchases or interest to tools like ChatGPT or Gemini, then your AI visibility efforts are working.
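A home-grown report like this usually comes down to classifying referrer domains. Here's a minimal sketch; the domain list is an assumption and should be extended as new answer engines appear:

```python
import re

# Hypothetical sketch: flag sessions referred by AI platforms for a custom
# traffic report. The domain list is illustrative, not exhaustive.
AI_REFERRERS = re.compile(
    r"(chatgpt\.com|chat\.openai\.com|perplexity\.ai|"
    r"gemini\.google\.com|copilot\.microsoft\.com)"
)

def is_ai_referral(referrer: str) -> bool:
    """True if the referrer URL belongs to a known AI answer engine."""
    return bool(AI_REFERRERS.search(referrer))

sessions = [
    "https://chatgpt.com/",
    "https://www.google.com/search?q=cms",
    "https://www.perplexity.ai/search/abc",
]
print(sum(is_ai_referral(s) for s in sessions))  # 2
```

The same pattern works as a regex filter inside GA4 or Looker Studio if you'd rather not script it.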
Question 2: According to Forrester, consumers who find a website from an AI search spend more time on the site once they get there. What do you think is driving that behavior?
It’s driven by the length of the prompts that are being asked on answer engines and the depth of the research being done there. Overall, the time that consumers spend on these engines is way up.
After a buyer does extensive research, they are much better informed by the time they get to your site compared to search engine users. They have a much more specific idea of what they're looking to learn, and they're primed to do even more research.
That’s what explains the longer dwell time. Consumers have gone from googling for a specific piece of information to having a real conversation with AI answer engines and, increasingly, with the websites themselves that may be equipped with their own AI search functions.
Question 3: If prompts are longer and more specific than search engine queries, how do we prioritize which ones to address?
Creating content and answers around specific, conversational queries might be an impossible task given the infinite range of how your buyers are asking questions. A ChatGPT user might type in an entire paragraph to express a question, and your brand won’t be able to see what that paragraph is in real time, or have a page on your site that contains those exact words.
For that reason, it might seem harder to “game” AI search in the same way that SEO consultants have tried to “game” Google in the previous decade. (But you’re not trying to do that, right?)
The good news is that if your data and content architecture are set up correctly, a conversational AI platform will be able to find what it needs without much help.
When an AI tool does a search on behalf of the customer in order to create an answer to their question, one of the things it does is break the question down semantically into topics. Then, it sends out a swarm of agents to look for information on those individual topics, and those are the pages that it summarizes for the user.
What that means is, you don’t need to create content to intercept specific AI search queries. You just need to make sure that you have helpful, easily digestible information available for every sub-topic your buyers and visitors might want to learn about.
Question 4: Is it recommended to block training bots and only allow LLM bots to visit the site using robots.txt and llms.txt?
We generally recommend blocking training bots, but with a caveat: Don't block everything. Total exclusion from AI bots today is the equivalent of no-indexing your site on Google in 2005. You’ll protect your data, but you’ll become invisible to the next generation of web users.
Training bots crawl your site to feed large datasets like Common Crawl. This data is used to train the next generation of models. Blocking these prevents your content from being part of the AI's "brain," but it doesn't necessarily hurt your current visibility.
LLM bots or user-agents are often "search-enabled" (like Perplexity's PerplexityBot or OpenAI's OAI-SearchBot). They crawl to provide real-time citations to users. If you block these, you disappear from AI-generated answers and "sources" lists.
Many creators choose to block training bots, because you protect your IP from being used to train a model that might eventually compete with you, without losing out on "referral" traffic from AI search engines.
The downside is that if a model isn't trained on your data, it may have a harder time understanding your brand or niche, potentially leading to hallucinations about your services in the future.
If you want to maintain visibility while protecting your bulk data, keep search-oriented bots like OAI-SearchBot active if you want to be cited as a source in ChatGPT or Perplexity. Block CCBot or GPTBot (the training side) if you are concerned about your data being used for model weights without credit.
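As a rough illustration, a robots.txt for a site that wants AI citations but not bulk training might look like the sketch below. The user-agent tokens shown are the publicly documented ones at the time of writing; verify them against each vendor's current documentation, since they change.

```
# Allow search/citation crawlers
User-agent: OAI-SearchBot
Allow: /

User-agent: PerplexityBot
Allow: /

# Block bulk training crawlers (Common Crawl)
User-agent: CCBot
Disallow: /

# Everything else: allowed by default
User-agent: *
Allow: /
```

Remember that robots.txt is a voluntary protocol: compliant crawlers honor it, but it is not an access control mechanism.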
Question 5: If AI search produces fewer click-throughs (and ultimately conversions), will this result in some websites being economically unviable in the future?
Brands will always need to maintain a public, primary and accurate source of information about their products and services, even if it’s just for the purpose of feeding AI engines with information. Perhaps the identity and main purpose of a website will change over time, but that doesn’t mean websites will be “economically unviable.”
Right now, website click-throughs are dropping, but conversions are not. Discovery and evaluation are moving from your website to AI answer engines, but the conversion remains yours to own.
OpenAI and Google don't want to process returns or chargebacks, and they don't have enough visibility into inventory to do agentic commerce the way people are talking about right now. Click-through rate is being disrupted, but websites will remain viable as long as conversions largely take place on brand-owned channels.
Question 6: How do you recommend optimizing graphics/infographics for natural language and LLMs?
If you create a great-looking image with Nano Banana that you're excited about, make sure it has alt text and all the correct SEO properties that are needed, but repeat that information in your actual FAQs. Repeat it through text in the article or story the image is supporting.
If you have a graphic that presents a comparison, repeat that information somewhere on the page so it's accessible and readily digestible by AI. Don't depend on the AI to extract the information. Some AI tools actually do take screenshots and analyze the page visually, but we wouldn't depend on that.
And for the time being, don’t get too fancy. Focus on getting all the important information on your page in text, make sure everything's very easily digestible and then add images as necessary.
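In markup terms, that means pairing every informative image with crawlable text. Here's a hypothetical example (the file name, alt text and benchmark claim are all placeholders):

```html
<!-- Hypothetical example: a comparison graphic whose key facts are
     repeated as crawlable text alongside the image. -->
<figure>
  <img src="cms-comparison.png"
       alt="Chart comparing publishing speed of headless and legacy CMS platforms">
  <figcaption>Publishing speed: headless vs. legacy CMS (illustrative benchmark)</figcaption>
</figure>

<!-- The same comparison, restated in plain text for AI crawlers -->
<p>In this illustrative benchmark, headless CMS teams published updates in
hours, while legacy CMS teams needed days.</p>
```

The alt text and caption help, but the adjacent paragraph is what guarantees an AI crawler can extract the comparison without interpreting the image.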
Question 7: What impact do we anticipate paid ads to have on organic SOV/citations in the long run on LLMs?
Initially, AI answers were a meritocracy. If your content provided the best answer, you got the citation. In the long run, brands should expect a hybrid model:
- Premium citations: AI engines will likely reserve the most prominent slots (such as the first "recommended" bullet or a product match card) for paid partners.
- The verified layer: We may see a future where brands pay for "preferred source" status, ensuring their latest pricing or documentation is prioritized by the LLM as the official truth.
Traditional search ads are about keywords (e.g., "best running shoes"), while AI ads are shifting toward outcomes (e.g., "Help me train for a marathon"). With this in mind, advertisers might bid on being the "hero" of a complex user journey. Instead of a single ad, a brand might pay to be the recommended vendor throughout a 10-turn conversation.
Unfortunately, smaller brands may find it harder to get cited for broad, commercial queries as big-budget competitors lock down these journey-level placements.
But just as we developed banner blindness with online display ads, users are already developing citation skepticism. If an LLM recommends a product that feels like a blatant ad, users will check the organic citations for a second opinion. For that reason, maintaining organic AI citations will become even more valuable because they act as the third-party validation for the ads users see.
In the long run, paid ads will be the cost of entry for participation in the AI's logic. You'll pay to ensure your brand is part of the story, not just a footnote.
Question 8: With AI search traffic increasing and organic search traffic decreasing, what year do you predict the intersection to happen?
According to Nikhil Lai, Principal Analyst, Performance Marketing at Forrester, AI search traffic from answer engines will eclipse organic search traffic by 2028.
2026 will be an inflection point, during which organic search traffic’s decrease will accelerate, as will adoption of answer engines’ assistive and agentic capabilities. In 2028, the intersection will occur.
By 2030, answer engines will process most prompts. Nikhil believes this because the leading indicators show answer engines are already more convenient: prompts on answer engines are longer than queries on search engines, sessions are longer, more follow-up questions are asked, fewer clicks are needed to satisfy intent, and AI search visitors spend ~3X as long on site and are twice as likely to convert.
Question 9: Can you share any success stories for organizations in healthcare who are winning in AI search?
One healthcare organization that Nikhil works with is successfully leveraging AI search by:
- Adding FAQ sections to pages, where FAQs reflect the highest volume prompts across answer engines
- Hiring medical professionals on retainer to create highly authoritative, specific content distributed off the brand’s website
- Adapting to answer engine bots' inability to render JavaScript by adopting server-side rendering
- Benchmarking share-of-voice across answer engines against competitors’ and setting a beatable share-of-voice growth goal for the next few quarters
- Measuring how much higher quality traffic from answer engines is than traffic from search engines
- Addressing formerly taboo topics like direct competitive comparisons and known weaknesses buyers ask about
- Pushing updated sitemaps directly to Bing via the IndexNow protocol
- Setting up a center of excellence that publishes guidance that every piece of content and webpage must adhere to in order to be answer-engine friendly
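The IndexNow push mentioned above is a simple GET request. Here's a minimal sketch of building the ping URL, following the public IndexNow protocol; the page URL and key are placeholders, and the key must match a key file hosted on your domain:

```python
from urllib.parse import urlencode

# Hypothetical sketch of an IndexNow ping. The api.indexnow.org endpoint and
# the url/key query parameters follow the public IndexNow protocol; the page
# URL and key below are placeholders.
def indexnow_ping_url(page_url: str, key: str) -> str:
    """Build the GET URL that notifies IndexNow-enabled engines (e.g., Bing)."""
    query = urlencode({"url": page_url, "key": key})
    return f"https://api.indexnow.org/indexnow?{query}"

ping = indexnow_ping_url("https://www.example.com/updated-page", "your-indexnow-key")
print(ping)
```

Fetching that URL (with your real key) tells participating engines to recrawl the page immediately instead of waiting for a scheduled crawl; the protocol also supports a bulk POST endpoint for batches of URLs.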
Question 10: Wouldn't API access carry significant security and privacy concerns?
The current shift from chatbots to autonomous agents has introduced a new class of security risks. When you give an AI agent API access to internal documentation, you’re creating a highly privileged non-human identity that can act at machine speed.
The risks generally fall into four critical categories:
- The "confused deputy" problem: An AI agent often has high-level API permissions (to read your CRM, access GitHub or query databases) to do its job. An attacker doesn't need to hack your firewall; they just need to trick the "deputy" (your AI agent) into misusing its legitimate permissions.
- Token and identity persistence: AI agents don't use multi-factor authentication (MFA) or change passwords every 90 days. Most agents rely on static API keys or service account tokens. If these are leaked or logged in a debug file, an attacker could gain permanent, high-speed access to your internal data. And if five different agents use the same master API key, it becomes nearly impossible to tell which specific agent (or user behind it) triggered a data leak during a forensic audit.
- Data privacy and derived leaks: An agent might have access to anonymized data, but its pattern-matching capabilities could allow it to re-identify individuals by cross-referencing documents. As a result, an AI agent could inadvertently leak a CEO’s private health status or a project’s secret code name simply because it connected the dots in a summary.
- Memory poisoning: In 2026, many agents use long-term memory (vector databases) to remember past interactions. An adversary can "poison" this memory by feeding the agent false information through a document it processes. The agent might develop a persistent hallucination that a specific malicious IP address is a trusted internal server, leading it to automatically send data there in the future without any further prompting.
To mitigate these risks while still benefiting from AI automation, consider these guardrails:
- Principle of least agency: Never give an agent a "master key." Use scoped API tokens that can only read specific directories or databases.
- Human-in-the-loop (HITL): Require a human to click "approve" before an agent executes a "write" action (e.g., deleting data, sending an email or moving funds).
- AI firewalls: Use a secondary, smaller model to sanitize the agent's inputs and outputs, checking for hidden instructions or sensitive data patterns (like SSNs or API keys) before they leave your network.
- Ephemeral credentials: Move away from static API keys and toward short-lived OIDC tokens that expire every few hours.
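The "AI firewall" idea can start as something as simple as pattern screening on agent outputs. Here's a minimal sketch; the patterns are illustrative, not exhaustive, and a production setup would pair this with a secondary screening model as described above:

```python
import re

# Hypothetical sketch of an output-side "AI firewall": scan agent responses
# for sensitive patterns before they leave the network. Patterns are
# illustrative only.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),           # US Social Security numbers
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{20,}\b"),  # common secret-key shapes
}

def redact(text: str) -> str:
    """Replace sensitive matches with a labeled placeholder."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

print(redact("Customer SSN is 123-45-6789."))
# Customer SSN is [REDACTED SSN].
```

Running both inputs and outputs through a filter like this (plus a small classifier model for hidden instructions) gives you a cheap checkpoint between the agent and the outside world.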
Question 11: Is Contentstack’s Agent OS early access on a free/trial basis?
Thanks for your interest in Agent OS! We’re currently running a small Early Access cohort so we can partner closely with participants, learn quickly and shape the product based on real workflows.
If you’d like to be considered, we’d love to schedule a short call to understand your use cases. We’ll be selecting a limited number of customers for this phase. Participants can expect a discovery workshop, support in building a proof of concept and ongoing follow-up sessions.
If you’re a current Contentstack customer, please reach out to your CSM to get started. If you’re not yet a customer, request a Contentstack demo and let our team know you’re interested in Agent OS early access.
We'd love to hear more about your ideas!
Question 12: Are there any specific recommendations for Contentstack?
Absolutely! To learn five quick ways that Contentstack’s platform can improve your brand’s visibility in AI responses, read AI search & visibility: An enterprise playbook.