Overview
After running an audit, the next step is understanding what to do with your results. Cited's recommendation engine analyzes your audit data and generates actionable improvement suggestions tailored to your business. This guide explains how recommendations work, what types you will encounter, and how to use them effectively.
What Are Recommendations?
Recommendations are AI-generated insights and action items based on the findings from your completed audits. Rather than leaving you to interpret raw audit data on your own, Cited synthesizes the results into specific, prioritized suggestions for improving your AI visibility.
Each recommendation is grounded in your actual audit performance -- the questions that were asked, how AI providers responded, where your business was mentioned (or missing), and what sources were cited. This means recommendations are not generic advice; they are specific to your business, your industry, and your competitive landscape.
Starting a Recommendation Run
To generate recommendations, you need at least one completed audit. Navigate to the Recommendations section from your Business HQ and click Generate Recommendations. Select the audit (or audits) you want to analyze, and Cited will begin processing.
The recommendation engine reviews every question-provider combination in your audit, identifies patterns, and produces a structured set of insights. This process typically takes a few minutes, depending on the size of your audit.
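To make the "every question-provider combination" pass concrete, here is a minimal sketch of how grouping audit responses by question can expose visibility gaps. The data shape and function name are invented for illustration; Cited's internal representation is not documented here.

```python
from collections import defaultdict

# Hypothetical shape of one audited response: (question, provider, mentioned)
audit_results = [
    ("best CRM for startups", "openai", True),
    ("best CRM for startups", "anthropic", False),
    ("top invoicing tools", "openai", False),
    ("top invoicing tools", "anthropic", False),
]

def mention_rates(results):
    """Group responses by question and compute the share of
    providers that mentioned the business in their answer."""
    by_question = defaultdict(list)
    for question, provider, mentioned in results:
        by_question[question].append(mentioned)
    return {
        q: mentions.count(True) / len(mentions)
        for q, mentions in by_question.items()
    }

rates = mention_rates(audit_results)
# A question with a 0.0 mention rate is a gap worth a recommendation.
```

A pattern like "mentioned by no provider" versus "mentioned by some providers" is exactly the kind of signal the Question Insights described below are built on.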
How Recommendations Are Generated
Two systems contribute to your recommendation list:
- The AI recommendation pipeline -- judgment-driven analysis of your audit responses, competitor positioning, and content gaps. Produces the Question Insights, Head-to-Head Comparisons, and Strengthening Tips described below.
- The Validation Engine -- 62 deterministic checks against your site (robots.txt, schema markup, llms.txt, technical foundation, etc.). Failed checks become recommendation cards via built-in templates. See the Validation Engine guide for the full check list.
Before the AI pipeline runs, the Validation Engine pre-checks every condition it can verify against your site. Any check that already passes is removed from the AI's input -- so the LLM never wastes tokens telling you to "add Organization JSON-LD" when you already have it. This deduplication keeps your recommendation list focused on what is actually missing.
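The gating described above can be sketched in a few lines. The check names and dictionary shape here are hypothetical stand-ins, not Cited's actual identifiers:

```python
# Hypothetical illustration: validation results gate what the AI pipeline sees.
validation_results = {
    "robots_txt_allows_ai_crawlers": True,   # passes -> excluded from AI input
    "organization_jsonld_present": True,     # passes -> excluded
    "llms_txt_present": False,               # fails -> becomes a template card
    "faq_schema_present": False,             # fails -> becomes a template card
}

# Only checks the Validation Engine could NOT confirm are passed onward,
# so the LLM never re-recommends fixes that are already in place.
failed_checks = [name for name, passed in validation_results.items() if not passed]

ai_pipeline_input = {"unverified_or_failed": failed_checks}
```

The deterministic checks handle the verifiable facts; the AI pipeline spends its judgment only on what remains open.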
Types of Recommendations
Cited generates several types of recommendations, each designed to address a different aspect of your AI visibility.
Question Insights
These recommendations focus on individual audit questions where your performance was notably strong or weak. For questions where you were not mentioned, Cited explains the likely reasons and suggests content or structural changes that could help. For questions where you performed well, it highlights what is working so you can reinforce those strengths.
Head-to-Head Comparisons
When AI providers mention your competitors alongside (or instead of) your business, Cited generates head-to-head comparison insights. These break down exactly how AI engines are positioning you relative to specific competitors, what language they use, and what sources they draw from. This is particularly valuable for competitive queries like "What is the best X for Y?"
Strengthening Tips
These are targeted suggestions for reinforcing your existing visibility. They might include recommendations to add specific schema markup, improve the clarity of your value propositions on key pages, or create content that addresses gaps AI engines are looking for. Strengthening tips focus on incremental improvements that compound over time.
Priority Actions
Not all recommendations are equally urgent or impactful. Cited scores and ranks your recommendations into a priority action list so you know where to focus first.
How Priority Is Determined
Priority scoring considers several factors:
- Impact potential -- How much improvement this action is likely to produce based on the gap between your current visibility and the opportunity.
- Effort level -- An estimate of how much work is required to implement the recommendation. Quick wins are ranked higher when the impact is similar.
- Frequency -- If the same issue appears across multiple questions or providers, it receives a higher priority because fixing it will improve results broadly.
- Competitive urgency -- If competitors are actively being cited where you are not, the urgency increases.
Your priority actions appear in a ranked list, with the highest-impact, lowest-effort items at the top. Working through this list in order is the most efficient path to improving your AI visibility.
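One plausible way to combine the four factors above is a weighted score where high impact, low effort, broad frequency, and competitive pressure all push a recommendation up the list. The weights and example items below are invented for illustration; they are not Cited's actual scoring model.

```python
def priority_score(impact, effort, frequency, competitive_urgency):
    """Illustrative weighted score on 0-1 inputs. Effort is inverted
    so that quick wins rank higher when impact is similar.
    Weights are assumptions made up for this example."""
    return (impact * 0.4) + ((1 - effort) * 0.25) \
        + (frequency * 0.2) + (competitive_urgency * 0.15)

recs = [
    {"title": "Add llms.txt",
     "impact": 0.7, "effort": 0.1, "frequency": 0.9, "competitive_urgency": 0.6},
    {"title": "Rewrite pricing page copy",
     "impact": 0.8, "effort": 0.7, "frequency": 0.3, "competitive_urgency": 0.4},
]

# Highest score first: the low-effort, broadly applicable fix wins
# even though the other item has slightly more raw impact.
ranked = sorted(
    recs,
    key=lambda r: priority_score(
        r["impact"], r["effort"], r["frequency"], r["competitive_urgency"]
    ),
    reverse=True,
)
```

This is why an easy, frequently recurring fix can outrank a single high-impact but expensive project at the top of your list.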
Competitor Analysis
Cited does not just analyze your own performance -- it pays close attention to your competitive landscape.
During the recommendation process, Cited identifies the businesses that AI engines mention in response to queries relevant to your market. It tracks which competitors appear most frequently, how they are described, and what sources are driving their visibility.
This analysis surfaces in your recommendations in several ways:
- Competitor identification -- Cited lists the businesses that appear most often in your audit results, even if you did not explicitly name them as competitors.
- Citation source comparison -- You can see which sources AI engines cite for your competitors versus your business, revealing content gaps and linking opportunities.
- Positioning insights -- Cited explains how AI engines are framing the competitive landscape and where your positioning could be strengthened.
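Competitor identification by frequency can be sketched as a simple tally over audit responses. The response shape and business names below are hypothetical examples:

```python
from collections import Counter

# Hypothetical mention data extracted from audit responses.
responses = [
    {"provider": "openai",
     "businesses_mentioned": ["Acme", "CompetitorA", "CompetitorB"]},
    {"provider": "anthropic",
     "businesses_mentioned": ["CompetitorA"]},
    {"provider": "google",
     "businesses_mentioned": ["CompetitorA", "CompetitorB"]},
]

def competitor_frequency(responses, own_name):
    """Count how often each other business appears across responses.
    Frequently mentioned names surface as competitors even if you
    never listed them yourself."""
    counts = Counter()
    for r in responses:
        counts.update(b for b in r["businesses_mentioned"] if b != own_name)
    return counts.most_common()

ranked_competitors = competitor_frequency(responses, own_name="Acme")
```

A business that shows up in most answers while yours appears in few is precisely the kind of signal that raises competitive urgency in the priority scoring above.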
Deep Analysis Mode
For businesses that want a more thorough evaluation, Cited offers a deep analysis mode. This optional setting instructs the recommendation engine to perform additional layers of analysis, including more detailed competitor breakdowns, content gap identification, and expanded action items.
Deep analysis takes longer to process but produces a richer set of recommendations. It is particularly useful when you are preparing a major content strategy initiative, onboarding a new business, or building a comprehensive view of your AI visibility gaps.
To enable deep analysis, toggle the Deep Analysis option before starting your recommendation run. You can switch between standard and deep analysis on a per-run basis.
Next Steps
Once you have reviewed your recommendations and identified your priority actions, the next step is generating solutions -- implementation-ready content and artifacts that bring your recommendations to life. See the Solutions and Artifacts guide to learn more.