I’ve been in the SEO space for over 20 years, and for most of that time it evolved slowly. With the generative web, it is now changing faster than ever, and I love it. Many people are afraid of change, especially when it comes to AI, but I thrive on it: technology is entering a new realm in how we retrieve information, faster and better. The research below breaks down how you need to evolve to stay ahead of your competition.
1. Introduction: The Dissolution of the Retrieval Paradigm
The digital marketing ecosystem is currently navigating its most profound structural metamorphosis since the inception of the commercial internet. For over two decades, the economic engine of the web was powered by a “Retrieval-Based” paradigm: a user inputted a query, an algorithm retrieved a list of relevant documents (blue links), and the user selected a destination. This transactional contract formed the bedrock of the Search Engine Optimization (SEO) industry—a sector dedicated to reverse-engineering retrieval algorithms to secure prominence in that list. However, the integration of Generative Artificial Intelligence (GenAI) and Large Language Models (LLMs) into the core infrastructure of search engines has initiated a transition to a “Generative-Based” paradigm. In this new reality, the search engine does not merely retrieve; it reads, synthesizes, and generates a singular, definitive answer, often rendering the click-through action obsolete.
This shift is not merely a feature update; it is an ontological change in how information is accessed and consumed. The implications for SEO agencies are existential. The traditional agency model—predicated on keyword volume, monthly blog retainers, and backlink acquisition—is facing rapid obsolescence as “ten blue links” give way to AI Overviews (AIO), Search Generative Experiences (SGE), and chat-based interfaces like ChatGPT, Claude, and Perplexity. The objective of optimization is no longer solely to rank a URL; it is to ensure a brand entity is “understood” by a neural network, cited in a synthesis, and recommended in a zero-click environment.
This report provides an exhaustive analysis of this evolution, mapping the trajectory from the lexical search of the early 2000s to the semantic and generative web of 2025. It quantifies the disruption to organic traffic, defines the emerging disciplines of Generative Engine Optimization (GEO) and Answer Engine Optimization (AEO), and outlines the necessary restructuring of agency service portfolios, economic models, and organizational hierarchies required to thrive in the Agentic Era.
2. The Evolutionary Trajectory: From Strings to Things to Thoughts
To comprehend the magnitude of the current disruption, one must contextualize it within the broader history of search capability. The evolution of search algorithms reveals a clear teleology: the progressive reduction of friction between user intent and the desired information. This journey has moved through distinct eras, each demanding a fundamental pivot in agency strategy.
2.1 The Lexical Era (2003–2012): The Age of Strings
In the nascent stages of SEO, search engines operated on “lexical” principles. They matched strings of characters in a user’s query to strings of characters on a webpage. This was a mechanical process devoid of understanding.
- Mechanism: Exact-match keyword density. If a user searched for “best running shoes,” the engine looked for pages where that specific phrase appeared most frequently in the title, headers, and body copy.
- Agency Focus: This era spawned the first generation of SEO tactics: keyword stuffing, directory submissions, and meta tag manipulation. The barrier to entry was low, and the primary metric was ranking for single, short-tail keywords.
- Key Developments: The period 2003-2005 saw the growth of primitive keyword research tools. By 2010, the “Caffeine” update accelerated indexing speed, but the core retrieval logic remained largely text-based.
2.2 The Semantic Dawn (2012–2015): The Age of Entities
The introduction of the Knowledge Graph in 2012 marked the industry’s first “cognitive turn.” Google began to move away from matching strings to understanding “things”—real-world entities like people, places, and brands.
- The Hummingbird Update (2013): This was the pivotal moment where Google rewrote its core algorithm to focus on search intent rather than just keywords. It allowed the engine to process conversational queries and understand the context behind the words.
- Algorithmic Governance: Simultaneous updates like Panda (2011) and Penguin (2012) purged the ecosystem of low-quality content (“content farms”) and manipulative link schemes, forcing agencies to shift focus toward content quality and legitimate authority building.
- Agency Adaptation: Agencies began to focus on “topics” rather than just keywords. The concept of “User Intent” became central to strategy.
2.3 The Machine Learning Era (2015–2019): The Age of Prediction
The deployment of RankBrain in 2015 represented the first integration of machine learning into the ranking core. RankBrain was designed to interpret ambiguous or never-before-seen queries (which constituted 15% of all daily searches) by guessing the user’s intent based on past behavior and similar query patterns.
- Mobile Transformation: This era coincided with “Mobilegeddon” (2015) and the shift to Mobile-First Indexing (2018), driven by the reality that mobile searches had eclipsed desktop volume. Agencies were forced to prioritize technical performance, speed, and responsive design as primary ranking factors.
- Medic Update (2018): This update introduced the concept of E-A-T (Expertise, Authoritativeness, Trustworthiness), specifically targeting “Your Money or Your Life” (YMYL) sectors like health and finance. It signaled that the identity of the content creator mattered as much as the content itself.
2.4 The NLP Era (2019–2022): The Age of Context
Natural Language Processing (NLP) capabilities took a quantum leap with the introduction of BERT (Bidirectional Encoder Representations from Transformers) in 2019. Unlike previous models that read text sequentially (left-to-right), BERT read the entire sentence at once, understanding the nuances of prepositions and context.
- MUM (2021): The Multitask Unified Model (MUM) expanded this capability to be 1,000 times more powerful than BERT. MUM could transfer knowledge across languages and process multimodal inputs (images + text). It moved Google from a “keyword engine” to an “AI-driven knowledge engine” capable of complex reasoning.
- Agency Implication: “Keyword density” became effectively meaningless. Optimization shifted to “Topical Authority” and covering a subject comprehensively to satisfy the neural network’s understanding of a topic cluster.
2.5 The Generative Era (2023–Present): The Age of Synthesis
The current era is defined by the rise of Generative AI, catalyzed by the public release of ChatGPT (2022) and the integration of LLMs into search via Google’s Search Generative Experience (SGE), now rebranded as AI Overviews, and Bing’s integration of GPT-4.
- The Shift: The search engine is no longer a librarian; it is a research assistant. It uses Retrieval-Augmented Generation (RAG) to fetch data, synthesize it, and write a new answer.
- The Agency Crisis: This breaks the link between “providing value” (content) and “receiving value” (traffic). If the engine satisfies the user, the user never visits the agency’s client site. This is the “Zero-Click” crisis.
3. The Mechanics of Disruption: Why Traditional SEO is Failing
To understand why traditional tactics are failing, one must look “under the hood” of the new search architecture. The transition from Inverted Indices (databases of keywords) to Vector Databases (databases of meaning) is the root cause of the disruption.
3.1 Retrieval-Augmented Generation (RAG) and Vector Search
In a traditional search model, the algorithm scans an index for a specific keyword (e.g., “SEO pricing”). In the AI model, content is converted into Vectors—multi-dimensional numerical representations of semantic meaning.
- Ingestion: When a crawler visits a page, it chunks the content into segments and assigns them vector coordinates based on their conceptual meaning.
- Retrieval: When a user asks a question, the query is also vectorized. The system performs a “nearest neighbor” search to find content chunks that are semantically close to the query, even if they don’t share the exact keywords.
- Generation: These retrieved chunks are fed into the LLM (like Gemini or GPT-4) as “context.” The LLM then generates a natural language answer based only on the retrieved context (ideally) to minimize hallucinations.
Agency Insight: Traditional SEO optimized for the Index. New SEO must optimize for the Vector Space. If content is poorly structured, lacks semantic clarity, or is not “chunkable” (i.e., walls of text without headers), it will not be retrieved for the generation phase. “Generative Engine Optimization” is essentially the art of making content “machine-readable” for vector retrieval.
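The retrieval step described above can be sketched in a few lines. This is a minimal illustration, not a production pipeline: the “embeddings” are hand-written toy coordinates standing in for the output of a real embedding model, and the chunk texts are hypothetical.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 = identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings": in production these come from an embedding model,
# not hand-written coordinates. Dimensions here loosely stand for
# (pricing, seo, recipes).
chunks = {
    "Our SEO retainer starts at $5,000/month.": [0.9, 0.8, 0.0],
    "A beginner's guide to keyword research.":  [0.1, 0.9, 0.0],
    "How to bake sourdough bread at home.":     [0.0, 0.0, 1.0],
}

query_vector = [0.8, 0.7, 0.1]  # stand-in embedding of "How much does SEO cost?"

# Nearest-neighbor retrieval: rank chunks by semantic closeness to the query.
ranked = sorted(chunks.items(),
                key=lambda kv: cosine_similarity(query_vector, kv[1]),
                reverse=True)
best_chunk = ranked[0][0]
print(best_chunk)  # the pricing chunk wins despite sharing no exact keywords
```

Note what this demonstrates: the pricing chunk is retrieved for a “cost” query even though the words “cost” and “pricing” never match as strings, which is exactly why keyword-matching tactics lose traction in a vector-based system.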
3.2 The Zero-Click Phenomenon and Traffic Decay
The visual displacement of organic results by AI Overviews is creating a measurable collapse in traditional metrics.
- Traffic Erosion: As of mid-2025, AI Overviews are triggering for nearly 30% of U.S. desktop queries (up from 10% in early 2025) and have seen a 474.9% increase on mobile.
- CTR Collapse: When an AI Overview is present, the Click-Through Rate (CTR) for the traditional organic results drops from an average of 15% to just 8%.
- Zero-Click Dominance: Approximately 60% of searches now result in zero clicks, meaning the user’s intent is fully satisfied by the AI summary or the SERP features.
This creates a “winner-takes-all” dynamic where visibility is binary: either you are cited in the AI answer (which may drive some high-intent clicks), or you are invisible. Ranking #1 organic below an AI Overview is now effectively ranking #4 or #5 in the old paradigm.
3.3 Industry-Specific Volatility
The impact is not uniform. “Informational” queries are the most heavily impacted, with 88.1% of queries triggering AI Overviews being informational in nature. However, distinct verticals are seeing aggressive AI penetration:
- Science: +22.27% increase in AIOs.
- Health: +20.33% increase.
- People & Society: +18.83% increase.
Crucially, navigational queries (e.g., searching for a specific site) triggering AI Overviews doubled from 0.74% to 1.43% in early 2025. This signals that Google is using AI even for brand-specific searches, potentially intercepting traffic intended for a homepage to offer a summary of that brand instead.
4. Defining the New Nomenclatures: GEO vs. AEO vs. AIO
As the industry grapples with this shift, a lexicon war has emerged to define the new optimization practices. While often used interchangeably, distinct strategic differences exist between Generative Engine Optimization (GEO), Answer Engine Optimization (AEO), and AI Optimization (AIO).
4.1 Generative Engine Optimization (GEO)
Definition: GEO is the practice of optimizing content specifically to be retrieved and synthesized by Large Language Models (LLMs) like ChatGPT, Gemini, and Claude.
- Goal: “Cite-worthiness.” The objective is to be the primary source the AI uses to construct its answer.
- Methodology:
- Citation Optimization: Using statistics, unique data, and direct quotes that the LLM must attribute to avoid hallucination.
- Structure: Formatting content with clear “In Summary” sections, bullet points, and concise definitions that are easy for the model to extract.
- Context Windows: Placing the most critical information early in the document to ensure it falls within the model’s primary attention span.
4.2 Answer Engine Optimization (AEO)
Definition: AEO is often viewed as a subset of SEO focused on Answer Engines—systems designed to give a single, spoken, or featured result (e.g., Siri, Alexa, Google Assistant, Featured Snippets).
- Goal: Winning the “Position Zero” or the voice answer.
- Methodology:
- Q&A Formatting: Structuring content in strict Question-Answer pairs (e.g., “What is X? X is…”).
- Schema Markup: Heavy use of `FAQPage` and `Speakable` schema.
- Conciseness: Prioritizing short, direct answers (40-60 words) that can be read aloud.
- Argument for AEO: Proponents argue “Answer Engine” is a more durable term because users will always seek “answers,” whereas “Generative” refers to the current technology stack.
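The Q&A-plus-schema methodology above can be made concrete with a `FAQPage` JSON-LD block (standard schema.org vocabulary). This sketch builds the markup as a Python dict; the question and answer text are illustrative placeholders, and the answer is kept to the short, readable length AEO favors.

```python
import json

# A minimal FAQPage JSON-LD block (schema.org vocabulary) pairing a
# question with a concise answer suitable for voice readout.
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is Answer Engine Optimization?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": (
                    "Answer Engine Optimization (AEO) is the practice of "
                    "structuring content so that voice assistants and "
                    "featured snippets can extract and read a single, "
                    "direct answer."
                ),
            },
        }
    ],
}

# Embed the output in the page inside <script type="application/ld+json">.
print(json.dumps(faq_schema, indent=2))
```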
4.3 AI Optimization (AIO) / The Unified Approach
Many experts argue that distinguishing between GEO and AEO is splitting hairs. The emerging consensus is a “Unified Entity Strategy.” Whether the output is a voice answer (AEO) or a generated paragraph (GEO), the underlying requirement is the same: The search system must understand the entity and trust the source.
- AIO Strategy: This approach blends traditional SEO (site structure, authority) with new tactics (LLM readability). It recognizes that Google’s “AI Overviews” and “People Also Ask” function similarly to LLMs, and optimizing for one often helps the other.
- The “Vendor” Reality: Agencies are increasingly seeing these not as separate services, but as layers of a single “Organic Visibility” offering. The winners will not be those who pick a side, but those who optimize for “Total Search Presence” across traditional, voice, and generative interfaces.
5. The New Agency Service Portfolio: Productizing AI Visibility
The traditional agency retainer model—typically involving a set number of blog posts and backlinks per month—is rapidly losing value. With AI reducing the cost of content production to near zero, clients are questioning why they pay premium rates for “commodity content.” To survive, agencies must move up the value chain, offering high-complexity technical and strategic services that solve the “Visibility Crisis.”
5.1 Service 1: Knowledge Graph Engineering & Semantic Architecture
The most critical new deliverable is the construction and maintenance of a client’s Knowledge Graph presence. LLMs rely on Knowledge Graphs to ground their generation in fact and reduce hallucinations. If a brand is not a defined entity in the Knowledge Graph, it is invisible to the AI.
- The Product: “Entity Architecture & Management.”
- Deliverables:
- Entity Disambiguation: Using `sameAs` schema markup to explicitly link the client’s website to their profiles on Wikidata, Crunchbase, LinkedIn, and other authoritative databases.
- Nested Schema Implementation: Going beyond basic JSON-LD to build complex, nested schemas that explain relationships (e.g., linking the `Organization` schema to the `Person` schema of the founder, and the `Product` schema to the `Review` schema). This “connects the dots” for the AI.
- Knowledge Panel Claiming: Actively managing the Google Knowledge Panel to ensure the “facts” the AI knows about the brand (CEO, founded date, products) are accurate.
- Tools: Agencies should leverage tools like WordLift and Schema App. WordLift, for example, analyzes content and automatically builds an internal knowledge graph that syncs with Google’s, effectively “feeding” the algorithm structured facts.
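A nested `Organization`-plus-`Person` entity block with `sameAs` disambiguation might look like the following sketch. The types and properties (`Organization`, `founder`, `Person`, `sameAs`) are real schema.org vocabulary; every name and URL is a hypothetical placeholder.

```python
import json

# Sketch of nested JSON-LD linking an Organization entity to its founder
# (Person) and to external authority profiles via sameAs. All names and
# URLs below are hypothetical placeholders.
org_schema = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Agency",
    "url": "https://www.example.com",
    "sameAs": [
        "https://www.wikidata.org/wiki/Q00000000",
        "https://www.crunchbase.com/organization/example-agency",
        "https://www.linkedin.com/company/example-agency",
    ],
    "founder": {
        "@type": "Person",
        "name": "Jane Doe",
        "jobTitle": "CEO",
        "sameAs": ["https://www.linkedin.com/in/janedoe"],
    },
}

print(json.dumps(org_schema, indent=2))
```

Nesting the `Person` inside the `Organization` (rather than marking them up on separate pages with no connection) is what expresses the founder relationship explicitly, so the knowledge graph does not have to infer it.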
5.2 Service 2: Digital PR for “Entity Salience” and Co-Occurrence
In the AI era, “Backlinks” are less about passing “Link Juice” (PageRank) and more about establishing Entity Co-occurrence. AI models learn associations by seeing words appear together. If “Brand X” frequently appears in the same sentence as “Enterprise CRM,” the vector space begins to associate Brand X with that category.
- The Product: “Entity Association Campaigns.”
- Deliverables:
- Contextual Mentions: Securing press placements that do not just link to the site, but discuss the brand in the context of specific topics. The goal is to train the LLM’s weights to associate the brand with the topic.
- Personal Knowledge Graphs: Building the personal brand of the C-suite. Since “people” are strong entities in the Knowledge Graph, connecting a well-known expert (CEO) to the brand enhances the brand’s E-E-A-T (Experience, Expertise, Authoritativeness, Trust).
- Unlinked Mentions: Monitoring and valuing unlinked brand mentions, as LLMs can read and process these as signals of authority even without a hyperlink.
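The co-occurrence idea behind these deliverables can be measured crudely: of the sentences in a corpus of coverage that mention the brand, what fraction also mention the target topic? This is a simplified sketch (real association in an LLM’s weights is far more complex), and the press-coverage text is invented for illustration.

```python
import re

def co_occurrence_rate(text, brand, topic):
    """Fraction of sentences mentioning the brand that also mention the topic."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    brand_sents = [s for s in sentences if brand.lower() in s.lower()]
    if not brand_sents:
        return 0.0
    both = sum(1 for s in brand_sents if topic.lower() in s.lower())
    return both / len(brand_sents)

# Hypothetical press coverage.
coverage = (
    "Brand X launched a new enterprise CRM integration this week. "
    "Analysts say Brand X is gaining ground in the enterprise CRM market. "
    "Brand X also sponsored a local charity run."
)

rate = co_occurrence_rate(coverage, "Brand X", "enterprise CRM")
print(f"{rate:.0%}")  # 2 of the 3 Brand X sentences mention the topic
```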
5.3 Service 3: Experience-Based Content Strategy (E-E-A-T)
AI can synthesize existing information (“What is SEO?”), but it cannot generate Experience (the first ‘E’ in E-E-A-T). It has never used a product, visited a location, or interviewed a customer. Therefore, “Experience” is the only defensible moat against AI content.
- The Product: “Experience-Led Content Production.”
- Deliverables:
- Subject Matter Expert (SME) Extraction: Interviewing internal experts to extract unique insights, anecdotes, and contrary opinions that do not exist in the public web (and thus are not in the LLM’s training data).
- Original Data & Research: Publishing proprietary studies. AI loves to cite statistics. By being the source of the data, the agency guarantees citations in AI summaries.
- User Generated Content (UGC) Integration: Systematically gathering and displaying customer reviews and testimonials, as these are high-signal “experience” markers for Google’s algorithms.
5.4 Service 4: Technical AI Readiness (LLM Optimization)
Technical SEO must expand to include optimization for LLM scrapers and agents.
- The Product: “AI Accessibility & Governance Audit.”
- Deliverables:
- `llms.txt` Implementation: A proposed standard (similar to `robots.txt`) that gives site owners control over which AI bots can access content for training versus retrieval. This helps manage copyright and data usage.
- Vectorization Readiness: Auditing content structure to ensure it is logically “chunked.” Walls of text confuse vector embeddings. Clear H2/H3 hierarchies allow vector databases to index specific passages effectively.
- Content Governance: Ensuring that the brand’s content does not contain contradictory facts that could cause the AI to “hallucinate” incorrect information about pricing or services.
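Per the llms.txt proposal, the file is plain Markdown: an H1 with the site name, a blockquote summary, and H2 sections listing the URLs an LLM should prioritize. A hypothetical example for an agency site (all names and URLs are placeholders):

```markdown
# Example Agency
> Example Agency is a digital marketing consultancy specializing in
> entity-based SEO and AI visibility.

## Services
- [Knowledge Graph Engineering](https://www.example.com/services/knowledge-graph): entity architecture and schema deployment
- [AI Visibility Audits](https://www.example.com/services/ai-audit): readiness checks for LLM retrieval

## Optional
- [Blog archive](https://www.example.com/blog): long-form articles and research
```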
6. The New Agency Tool Stack: From Keyword Trackers to Semantic Engines
The transition to AI SEO requires a fundamental retooling of the agency’s software stack. The reliance on traditional keyword trackers (which track blue links) is insufficient for monitoring visibility in a dynamic, generative environment.
6.1 Knowledge Graph & Schema Tools
To deliver the entity services described above, agencies need industrial-grade schema tools.
- WordLift: A leader in “Semantic SEO.” It uses AI to analyze content, identify entities, and build a knowledge graph that publishes linked open data. Users rate it highly (9.0/10) for ease of use and its ability to actively build the graph rather than just marking up pages.
- Schema App: Another robust enterprise solution (rated 7.9 for technical SEO). It is often used for large-scale, complex schema deployments across thousands of pages.
- Comparison: WordLift is often favored for its “AI-first” approach that aids in content discovery and interlinking, while Schema App is favored for granular technical control in enterprise environments.
6.2 Semantic Gap Analysis & Research
Traditional keyword tools show search volume. New tools visualize Semantic Gaps.
- InfraNodus: A text network visualization tool. It analyzes the “discourse” of a search result page and visualizes the relationships between concepts.
- Use Case: Agencies use it to identify “Structural Gaps”—topics that are semantically relevant but missing from the current top results. This allows the agency to create content that provides high “Information Gain,” a key ranking factor.
- Mechanism: It builds a graph of “Search Intent” vs. “Search Results” and highlights the disconnects, offering a data-driven path to content differentiation.
6.3 AI Visibility & Tracking
Tracking “Rank #1” is meaningless if the AI Overview doesn’t cite you. New tools are emerging to track AI Share of Voice.
- Semrush AI Visibility Toolkit / Enterprise AIO: Allows tracking of brand mentions across AI overviews and LLMs. It provides competitive intelligence on which topics competitors are dominating in AI results.
- Profound: An enterprise-grade tool for monitoring brand mentions in LLMs, offering detailed reporting on sentiment and frequency.
- Trakkr & HubSpot Share of Voice Tool: These tools simulate thousands of user queries to calculate the percentage of time a brand is cited in an AI response, effectively measuring “Generative Market Share”.
- Otterly.ai: Specialized in monitoring brand visibility specifically within ChatGPT and other chat interfaces.
7. Organizational Metamorphosis: Structuring the AI Agency
The structural design of the agency must evolve to support these new functions. The era of the “Generalist SEO Account Manager” is ending. The modern agency structure resembles a data science consultancy more than a creative shop.
7.1 The Rise of New Roles
The workforce composition is shifting from “content creators” to “content orchestrators” and “data architects.”
- Data Ecologist / Knowledge Graph Architect: A technical role responsible for mapping the client’s entity relationships, managing the ontology, and ensuring the Knowledge Graph is accurate. This is the new “Technical SEO” lead.
- Prompt Engineer / AI Workflow Specialist: Responsible for designing the “prompts” and AI chains that automate content briefing, research, and analysis. This role ensures the agency maximizes efficiency while maintaining quality standards.
- Human-AI Creative Director: A strategic role that bridges the gap between AI efficiency and human empathy. They determine where AI should be used and where human intervention is non-negotiable to maintain “Brand Voice” and E-E-A-T.
- Agentic Orchestrators: As agencies move toward “Agentic AI” (AI that performs tasks), staff will manage fleets of AI agents rather than performing the tasks themselves. This requires skills in “management by exception” rather than direct execution.
7.2 The Decline of Manual Execution
Routine tasks are being automated, leading to a reduction in junior-level headcount for roles like “Link Prospector” or “Junior Copywriter.”
- Efficiency Gains: McKinsey research indicates that generative AI can increase productivity by 20-30%. This allows agencies to deliver more value with fewer people, or to redeploy staff to higher-value strategic work.
- Consolidation of Departments: The silos between “Content,” “Technical,” and “PR” are collapsing. In an entity-based model, technical schema, content creation, and PR mentions are all part of the same “Entity Optimization” workflow.
7.3 Cultural Shift: From “Doing” to “Thinking”
The value of an agency is no longer its ability to “write 10 posts.” It is its ability to “think strategically” about how those posts fit into a broader data ecosystem. This requires a culture of “Continuous Learning” and “Data Literacy” across all levels of the organization.
8. The Economic Reconstruction: Pricing for Value, Not Volume
The “commoditization of execution” driven by AI creates a deflationary pressure on traditional agency pricing models. If a client can generate a blog post for $0.05 using GPT-4, they will not pay an agency $500 to write it. Agencies must pivot their economic models to reflect strategic value.
8.1 The Collapse of the Commodity Retainer
The standard model—”$5,000/month for 4 blogs and 4 links”—is structurally broken. It incentivizes “doing work” rather than “achieving results.” As AI automates the “doing,” the perceived value of this retainer creates a race to the bottom.
8.2 Emerging Pricing Models
Agencies are experimenting with new models to capture the value of their high-level expertise.
- Project-Based “Infrastructure” Pricing:
- Concept: Charging high one-time fees for complex setups that AI cannot replicate.
- Application: Selling a “Knowledge Graph Architecture Setup” or “AI Readiness Overhaul” for $15,000 – $50,000. This is viewed as a capital investment by the client rather than an operational expense.
- Hybrid / Performance Models:
- Concept: Linking fees to measurable outcomes, such as “Share of Voice” or “Pipeline Generated.”
- Application: A base retainer (for maintenance) plus a performance bonus for achieving “Top Citation” status for high-value commercial queries in AI engines. This aligns agency incentives with the client’s business goals.
- Consultative / Hourly Strategic Retainers:
- Concept: Selling “Brainpower” rather than “Output.”
- Application: Charging high hourly rates ($300 – $500/hr) for senior strategic consulting, AI governance, and data strategy. This model targets enterprise clients who have in-house execution teams but lack high-level direction.
8.3 The “Vendor vs. Partner” Bifurcation
Predictions for 2026 suggest a market bifurcation.
- The Vendor Path: Agencies that fail to pivot will become “Vendors”—low-margin production houses reselling AI content at scale. This segment faces intense price competition and eventual consolidation.
- The Partner Path: Agencies that successfully pivot to “Strategic Consultancies” will integrate deeply with client operations, managing their data infrastructure and AI strategy. These agencies will command premium pricing and resist commoditization.
- Private Equity Consolidation: The market is ripe for roll-ups. Private Equity firms are actively acquiring specialized digital agencies to build “AI-First” marketing platforms. Agencies that demonstrate scalable AI workflows and proprietary tech (like custom Knowledge Graphs) are prime targets for acquisition.
9. Measurement & Reporting: The New KPIs
If organic traffic declines due to zero-click behaviors, agencies can no longer report on “Traffic Growth” as the sole metric of success. Reporting must evolve to measure Presence, Sentiment, and Influence.
9.1 From Rankings to Share of Voice (SoV)
The primary KPI for the AI era is AI Share of Voice.
- Definition: The percentage of times a brand is mentioned in response to a set of relevant category prompts.
- Metric: “Presence” (Did we appear?) vs. “Visibility” (Did we appear first/prominently?).
- Reporting: Agencies must use tools like Trakkr or Semrush AIO to generate monthly “SoV Reports,” showing the client’s dominance in the AI conversation relative to competitors.
9.2 Snippet Ownership & Factual Accuracy
- Snippet Ownership Score: Tracking how often the AI quotes the client’s content verbatim. This is a proxy for “Authority” and “Trust”.
- Factual Accuracy Rating: A “Reputation Management” metric. Agencies should track how accurate the AI’s descriptions of the brand are. An increase in accuracy (e.g., the AI finally getting the pricing model right) is a measurable win delivered by the agency’s Knowledge Graph work.
9.3 Sentiment Analysis
It is not enough to be mentioned; the mention must be positive.
- Metric: Net Sentiment Score within AI responses.
- Analysis: Tools like Peec AI or Profound can analyze the sentiment of brand mentions (Positive, Neutral, Negative). Agencies must report on their efforts to shift this sentiment through Digital PR and content updates.
9.4 Business Impact: Pipeline over Sessions
With traffic volume decreasing, the quality of the traffic often increases. Users clicking through an AI Overview are typically in a research or buying mode.
- Metric: Conversion Rate per Visitor (CRPV). Agencies should shift the conversation from “More Traffic” to “Better Traffic.” The goal is to maximize Pipeline Generated from the remaining organic visitors, proving that the agency is driving revenue even if session counts are flat.
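The “better traffic” argument reduces to a before/after comparison of conversions per session. All figures in this sketch are hypothetical, chosen only to show sessions falling while CRPV rises.

```python
# Illustrative "better traffic" math: sessions fall after AI Overviews
# roll out, but conversion rate per visitor (CRPV) rises, so pipeline
# holds up. All numbers are hypothetical.
before = {"sessions": 10_000, "conversions": 150}
after  = {"sessions": 6_000,  "conversions": 132}

crpv_before = before["conversions"] / before["sessions"]
crpv_after  = after["conversions"] / after["sessions"]

print(f"CRPV before: {crpv_before:.2%}, after: {crpv_after:.2%}")
```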
10. Conclusion: The Agentic Future
The transformation of the SEO industry is not merely a reaction to a new algorithm; it is a realignment with the future of the web itself. We are moving toward an Agentic Web, where AI agents acting on behalf of users will scour the internet to perform tasks—booking travel, purchasing software, researching healthcare—autonomously.
In this future, the distinction between “optimizing for humans” and “optimizing for machines” dissolves. An Entity-First Strategy serves both. A website that is clear, structured, authoritative, and rich in experience is the ideal destination for a human user and the ideal data source for an AI agent.
For SEO agencies, the path forward is clear but challenging. They must abandon the “Traffic Hose” mentality and embrace the role of Digital Knowledge Architects. They must move from selling “content” to selling “intelligence”—building the structured data ecosystems that allow brands to speak the language of the machines that now curate the world’s information. Those who make this leap will find themselves not just surviving the AI revolution, but leading it, armed with a product suite that is more strategic, more technical, and ultimately more valuable than ever before.
As our SEO agency grows, we continue to expand our products and services, keeping our clients at the forefront of their own industries by evolving with AI. Contact us if you’d like to upgrade your digital marketing for this new realm of AI. Don’t be afraid to evolve your digital marketing strategy to match the new AI space; as this article shows, things will only continue to evolve and improve with time.