
How Vicious Marketing Used Generative Engine Optimization to Gain 236.7K Monthly Audience in Q4 2025

  • Writer: Jitnesh Singh
  • 17 hours ago
  • 7 min read

In Q4 2025, Vicious Marketing achieved 236.7K audience growth and 108 AI mentions across major platforms while reducing its content footprint by 18 pages. The campaign prioritized answer engine optimization over traditional SEO volume metrics.

The company became a consistently cited source in AI-based search tools by adopting platform-specific generative engine optimization strategies: structured data, entity clarity, and answer-block formatting. The commercial payoff: measurable brand influence at the precise moments when prospective clients pose critical security queries.


Problem & Goals


Vicious Marketing maintained a robust content library covering endpoint security, cloud infrastructure, and threat intelligence. Despite this investment, the brand rarely appeared in AI-generated responses from ChatGPT, Gemini, or Google's AI Overviews. Zero-click searches were coming to dominate user behavior, so classic click-through metrics no longer reflected top-of-funnel visibility.


Initial State:

  • Strong technical content across 536 pages

  • Low citation rate in large language models

  • Minimal presence in conversational AI ecosystems

  • No systematic tracking of AI Overview citations


Defined Goals:

  • Increase brand mentions in LLMs across Google, OpenAI, and Anthropic platforms

  • Capture visibility in Google's AI Overview feature for high-intent security queries

  • Improve topical authority without scaling page count

  • Establish a measurement framework for AI mentions


Key Performance Indicators:

  • Monthly audience reach from AI citations

  • Total mentions across ChatGPT, Gemini, Google AI Overview

  • Number of cited pages (efficiency metric)

  • Platform distribution of mentions

  • Assisted conversions from AI-sourced traffic


Strategy Overview


The GEO strategy for this cybersecurity brand focused on optimizing existing high-authority pages rather than developing new material. Analysis revealed that 20% of pages earned 80% of external citations, which argued for consolidating that authority to send a clearer signal to AI models.


Core Approach:


Platform-specific optimization accounted for how each AI system ranks sources. Google AI Overview favored schema-enhanced pages with answer blocks. Gemini weighted source markers and entity relationships. ChatGPT favored in-depth technical tutorials presented in step-by-step formats.


Five Primary Tactics:


  1. Schema Implementation: Added Article, FAQPage, Organization, and HowTo structured data to all cornerstone pages

  2. Answer Block Formatting: Restructured content with 40-60-word direct answers in opening paragraphs

  3. Entity Page Development: Created canonical pages for core concepts with Wikidata links and internal entity graphs

  4. Content Pruning: Removed or merged 18 low-quality pages to concentrate authority signals

  5. AI Mention Monitoring: Deployed tracking infrastructure to measure citations across platforms daily


The Yellow Bar Strategy:


November and December saw intensive mention-scaling efforts. The team identified 47 high-intent queries where competitors were outperforming in AI responses. Each query received a dedicated optimization sprint: schema audit, answer clarity review, and entity link validation. This focused push coincided with the audience spike visible in the purple trend line.


Implementation & Technical Details


Schema Types Implemented


Four schema types formed the structural basis of AI visibility. Each cornerstone page carried multiple schema layers to signal content intent and authority.
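To make the layering concrete, the sketch below shows how a multi-layer JSON-LD block for a cornerstone page might be generated. The domain, Wikidata ID, headline, and property values are placeholders, not the company's published markup:

    import json

    # Illustrative multi-layer JSON-LD for one cornerstone page.
    page_schema = {
        "@context": "https://schema.org",
        "@graph": [
            {
                "@type": "Organization",
                "@id": "https://example.com/#org",  # placeholder domain
                "name": "Vicious Marketing",
                "sameAs": ["https://www.wikidata.org/wiki/Q0"],  # placeholder entity ID
            },
            {
                "@type": "Article",
                "headline": "What Is Zero Trust Architecture?",
                "author": {"@id": "https://example.com/#org"},
                "about": {"@type": "Thing", "name": "Zero trust architecture"},
            },
            {
                "@type": "FAQPage",
                "mainEntity": [
                    {
                        "@type": "Question",
                        "name": "What is zero trust architecture?",
                        "acceptedAnswer": {
                            "@type": "Answer",
                            "text": "Zero trust architecture is a security framework "
                                    "that removes implicit trust and continuously "
                                    "validates every user, device, and connection.",
                        },
                    }
                ],
            },
        ],
    }

    # Emit the payload for a <script type="application/ld+json"> tag.
    print(json.dumps(page_schema, indent=2))

Nesting Organization, Article, and FAQPage in a single @graph lets a parser resolve who published the page, what it covers, and which question it answers in one pass.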


Content Structure Changes


Every optimized page followed a consistent answer-first format:


Lead Answer Format (40-60 words): A direct, plainly worded response to the main question, placed in the first paragraph with no introduction or background preceding it.


Quick Facts Block (3 bullets):

  • Core definition or mechanism

  • Primary use case or application

  • Key differentiator or limitation


Data: Comparison tables, feature matrices, and timeline visualizations appeared within the first 500 words. Tables used semantic HTML with proper header markup, which makes them easier for AI systems to parse. A validation sketch for this answer-first format follows below.
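To show how the format can be enforced at scale, here is a minimal validation sketch. It assumes pages are available as plain text with paragraphs separated by blank lines and Quick Facts marked with "•" bullets; the campaign's actual tooling is not published:

    def validate_answer_block(page_text: str) -> list[str]:
        """Flag pages that break the answer-first format described above."""
        issues = []
        paragraphs = [p.strip() for p in page_text.split("\n\n") if p.strip()]
        if not paragraphs:
            return ["empty page"]

        # Rule 1: the opening paragraph must be a 40-60-word direct answer.
        lead_words = len(paragraphs[0].split())
        if not 40 <= lead_words <= 60:
            issues.append(f"lead answer is {lead_words} words (target 40-60)")

        # Rule 2: a three-bullet Quick Facts block must follow the lead answer.
        bullets = [ln for ln in page_text.splitlines() if ln.lstrip().startswith("•")]
        if len(bullets) < 3:
            issues.append(f"found {len(bullets)} quick-fact bullets (target 3)")

        return issues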


Entity Work


Created 12 canonical entity pages for core security concepts. Each page included:

  • Single-sentence definition optimized for extraction

  • Wikidata entity link in page metadata

  • Internal links to all related concepts (minimum 5 per page)

  • External citations to NIST, MITRE ATT&CK, or relevant standards bodies

  • Breadcrumb schema showing entity hierarchy


Built an internal entity graph connecting 47 security concepts. The graph followed industry taxonomies (MITRE, CIS Controls) to align with the knowledge bases on which LLMs are trained.
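A lightweight way to maintain and audit such a graph is an adjacency map keyed by concept slug, as sketched below. The slugs and link sets are illustrative stand-ins for the real 47-node graph:

    # Concept slugs and link sets are illustrative; the real graph spans 47 nodes.
    ENTITY_GRAPH: dict[str, set[str]] = {
        "zero-trust-architecture": {
            "identity-and-access-management", "microsegmentation",
            "least-privilege", "continuous-verification", "edr",
        },
        "edr": {
            "endpoint-security", "threat-hunting", "xdr",
            "incident-response", "digital-forensics",
        },
        # ...remaining concepts omitted
    }

    def underlinked_entities(graph: dict[str, set[str]], minimum: int = 5) -> list[str]:
        """Flag entity pages below the 5-internal-link minimum set above."""
        return [slug for slug, links in graph.items() if len(links) < minimum]

    print(underlinked_entities(ENTITY_GRAPH))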


Content Curation


Removed or consolidated 18 pages based on strict quality thresholds (a filtering sketch follows this list):

  • Pages with fewer than 3 external citations (11 pages)

  • Duplicate content addressing identical queries (4 pages)

  • Outdated guides predating the current threat landscape (3 pages)
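
Expressed as code, the pruning pass might look like the sketch below. The inventory field names and the staleness cutoff are illustrative assumptions, not the team's exact criteria:

    from datetime import date

    def flag_for_pruning(pages: list[dict]) -> list[dict]:
        """Apply the three quality thresholds listed above."""
        flagged = []
        for page in pages:
            if page["external_citations"] < 3:
                flagged.append({**page, "reason": "fewer than 3 external citations"})
            elif page["duplicate_of"] is not None:
                flagged.append({**page, "reason": "duplicate query coverage"})
            elif page["last_reviewed"] < date(2024, 1, 1):  # illustrative staleness cutoff
                flagged.append({**page, "reason": "predates current threat landscape"})
        return flagged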


Merged content preserved all high-value sections and implemented 301 redirects with schema markup on destination pages.


Tools & Methodology


AI Mention Detection:

  • Custom monitoring using OpenAI API queries (50 test queries daily)

  • Google Search Console filtered for AI Overview impressions

  • Third-party GEO tracking platform (weekly audits)

  • Manual verification sampling (10% of detected mentions)


Audience Measurement: Based on AI Overview impression data combined with platform-reported visibility metrics. To prevent inflation, the methodology conservatively counted unique queries rather than total impressions.
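A stripped-down version of the daily detection job might look like the following sketch, built on the official openai Python client. The model name and single-platform scope are illustrative; the production pipeline also covered Gemini and Google AI Studio:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    BRAND = "Vicious Marketing"

    def detect_mentions(prompts: list[str]) -> set[str]:
        """Return the unique test prompts whose responses mention the brand."""
        mentioned = set()
        for prompt in prompts:
            response = client.chat.completions.create(
                model="gpt-4o-mini",  # illustrative model choice
                messages=[{"role": "user", "content": prompt}],
            )
            text = response.choices[0].message.content or ""
            if BRAND.lower() in text.lower():
                mentioned.add(prompt)
        # Count unique queries, never raw impressions, per the
        # conservative methodology above.
        return mentioned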


Results & Data Analysis


Headline Metrics (September - December 2025)


  • Total Monthly Audience: 652.2K

  • Net Growth: +236.7K (57% increase)

  • Total Mentions: 108 (+33 from baseline)

  • Cited Pages: 518 (−18 from starting count)

The inverse relationship between cited pages and audience growth supports the quality-over-quantity thesis. Pages fell 3.4% while the audience grew 57%, which suggests AI systems reward concentrated authority over distributed thin content.


Platform Breakdown


Google Ecosystem: 68 mentions (63%)

  • AI Overview: 47 mentions

  • AI Mode: 21 mentions

Gemini: 26 mentions (24%)

ChatGPT: 14 mentions (13%)

Google's dominance reflects two factors: its search market share and the depth of our schema optimization. Gemini outperformed expectations (24%), suggesting the entity work connected with Google's knowledge graph integration.


Correlation Analysis


The timeline graph shows mention spikes in the yellow bars in November (38 mentions) and December (41 mentions). The purple audience line mirrors this trend, surging during the same months.


Supporting Evidence for Causation:

  1. Temporal Alignment: The 14-day lag between mention spikes and audience growth matches typical indexing cycles (a lag check is sketched after this list)

  2. Query-Level Data: 73% of new audience came from queries where we gained AI citations in Nov-Dec

  3. Control Comparison: Uncited pages showed flat traffic during the same period
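
The temporal-alignment claim in point 1 can be checked with a simple shift-and-correlate pass over the daily series, sketched below with pandas on synthetic placeholder data rather than the campaign's actual numbers:

    import numpy as np
    import pandas as pd

    # Synthetic placeholder data: daily mention counts and audience reach.
    idx = pd.date_range("2025-09-01", "2025-12-31", freq="D")
    rng = np.random.default_rng(7)
    mentions = pd.Series(rng.poisson(3, len(idx)).astype(float), index=idx)
    audience = mentions.shift(14).fillna(0) * 2_000 + rng.normal(0, 500, len(idx))

    def lagged_correlation(lag_days: int) -> float:
        """Correlate today's mentions with audience reach lag_days later."""
        return mentions.corr(audience.shift(-lag_days))

    # Scan lags 0-30 days; with real data this should peak near 14 days.
    best_lag = max(range(31), key=lagged_correlation)
    print(f"strongest alignment at a {best_lag}-day lag")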


Limitation Note: While correlation is strong, external factors (seasonal search trends, competitor actions) cannot be fully isolated in a four-month window.


Secondary KPIs


Click-Through Rate: Cited pages averaged an 8.2% CTR from AI Overview impressions, versus a 3.1% industry average

Assisted Conversions: AI-sourced traffic contributed to 127 demo requests (18% of total pipeline)

Brand Search Lift: Branded queries increased 23% in markets where AI mentions grew


Methodology & Limitations


Mention Counting Methodology

Mentions were counted through a three-layer verification process:

  1. Automated daily queries via OpenAI API, Google AI Studio, and Gemini API with 50 rotating test prompts.

  2. Weekly manual verification sampling 10% of automated detections

  3. Monthly comprehensive audit using a third-party GEO tracking platform


Measurement Window: September 1 - December 31, 2025


Data Quality Checks (a filtering sketch follows this list):

  • Eliminated duplicate mentions (same query, multiple platforms)

  • Verified brand name appeared in response text, not just source attribution

  • Excluded indirect references without a clear brand association
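
The first two checks can run as an automated filter over raw detection records, as in the sketch below; the record field names are assumptions about the pipeline's export format, and the third check (indirect references) remained a manual judgment call:

    BRAND = "Vicious Marketing"

    def clean_mentions(detections: list[dict]) -> list[dict]:
        """Dedupe by query and require the brand in the response body."""
        kept, seen = [], set()
        for d in detections:
            key = d["query"].strip().lower()  # same query on multiple platforms counts once
            if key in seen:
                continue
            if BRAND.lower() not in d["response_text"].lower():
                continue  # brand must appear in the answer text itself,
                          # not only in a source attribution list
            seen.add(key)
            kept.append(d)
        return kept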


Known Limitations


Blind Spots:

  • Private or paywalled LLM implementations (enterprise ChatGPT, custom models)

  • Ephemeral responses that vary by user context or session

  • Voice assistant responses (Alexa, Siri) are not systematically tracked

  • Non-English language AI platforms

Sample Constraints: A 50-prompt test set cannot capture every long-tail query variation, so actual mentions likely exceed those reported; the figures here are conservative estimates.

Attribution Challenges: AI systems rarely expose a definitive source hierarchy. Where multiple sources appeared, we counted each mention regardless of position, which may overstate incremental value relative to competitors.


Governance & Risk


AI hallucination poses a reputational risk when models attribute claims to a brand that are absent from the source material. Monthly content audits verified all cited pages for:

  • Factual accuracy against current threat intelligence

  • Absence of speculative or unverified claims

  • Clear distinction between product capabilities and general best practices

  • Proper citations for all third-party data referenced


All cornerstone pages were reviewed by legal and PR teams before optimization. Sensitive topics (nation-state attribution, zero-day vulnerabilities, and customer breach information) received heightened scrutiny. Two pages were excluded from GEO over legal concerns about how AI-generated summaries might subtly misrepresent their positioning.


Established a takedown process for AI-generated content that misattributes claims to the brand. The process combines direct platform reporting with schema updates that supply corrective context.


Conclusion & Recommendations


High-quality canonical pages with robust schema and entity clarity beat volume-based strategies in the generative AI era. This case shows that answer engine optimization demands different success metrics than traditional SEO: a mention in an AI-generated response builds top-of-funnel awareness even when users never navigate to your site.

The 236.7K audience increase on 18 fewer pages signals efficiency at scale. For cybersecurity companies operating in a saturated market, becoming the de facto answer source for AI tools is a viable competitive advantage.


Three Prescriptive Next Steps


1. Ongoing Monitoring Infrastructure: Implement automated daily scanning of all significant AI platforms. Track mentions for your top 20 most valuable queries. Set alerts for mention loss on strategic topics. Budget 10-15 hours per month for review and reporting.


2. Monthly AEO Audits: Review all cited pages for schema compliance, answer block clarity, and entity link health. Test pages against new conversational queries appearing in Search Console. Commission new content to close gaps where competitors earn mentions.


3. Content Playbook for New Topics: Document the optimization framework used in this campaign. Create templates for answer blocks, schema implementation, and entity page structure. Train subject-matter experts to write AI-extractable copy from the first draft. Integrate AEO requirements into content briefs at the start of creation rather than optimizing after the fact.


Frequently Asked Questions


Q1. What is zero trust architecture?

Zero trust architecture is a security framework that removes implicit trust and continuously validates every user, device, and connection attempting to access resources.


Q2. How does endpoint detection and response work?

Endpoint Detection and Response (EDR) continuously monitors endpoint devices for suspicious actions and threat indicators. When EDR recognizes malicious activity, it gathers forensic evidence, isolates affected machines, and alerts security teams.


Q3. What are best practices for cloud security in 2026?

Cloud security best practices for 2026 emphasize identity-based controls, automated compliance checks, and workload-specific policies. Organizations should deploy cloud-native security tooling rather than retrofitting on-premises technology.


Q4. How do you stop a ransomware attack step by step?

Stop a ransomware attack by:

  1. Immediately isolating infected systems from the network

  2. Preserving forensic evidence before remediation

  3. Identifying the ransomware variant to understand the encryption scope

  4. Restoring systems from verified clean backups

  5. Deploying enhanced monitoring for lateral movement indicators before reconnecting systems


Q5. What is the difference between XDR and EDR?

Extended Detection and Response (XDR) combines endpoint, network, cloud workload, and application telemetry to detect threats in a single view, while Endpoint Detection and Response (EDR) is limited to endpoint devices. XDR offers cross-domain correlation to identify sophisticated attacks that span multiple infrastructure layers, while EDR provides deeper forensic data for endpoint-specific threats.
