Google Ads launches "Mix Experiments" in beta
The rollout of Mix Experiments in beta marks a decisive step forward in the automation of media buying on Google Ads. This cross-campaign testing feature finally allows advertisers to break down technical silos and allocate their budgets based on real incrementality, rather than sector-specific intuition.
Towards the end of silos: data-driven budget arbitration
For years, testing on Google Ads was limited to internal variables within the same type of campaign, such as A/B testing of ads or bidding strategies within Search. The arrival of “Mix Experiments” radically changes the game by allowing two fundamentally different account structures to be compared. Based on our account observations, the question is no longer whether one keyword performs better than another, but whether allocating a portion of the budget to Performance Max generates additional conversion volume compared to a traditional Search structure.
This technical innovation responds to a growing need for transparency among traffic managers. By contrasting, for example, a “Search Pure” strategy with a “Search + PMax” mix, Google has established a rigorous protocol to isolate the real incremental contribution of each lever and avoid performance duplication. The challenge for brands is to scientifically validate that AI does not simply shift existing conversions, but actually expands the audience pool via inventories such as YouTube or Discover.
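The incremental contribution described above boils down to a simple comparison between the two arms. A minimal illustrative sketch (the function name and figures are our own, not Google's reporting):

```python
def incremental_lift(conv_control: float, conv_mix: float) -> float:
    """Relative lift of the mix arm over the control arm.
    A positive value suggests the mix adds conversions rather than
    merely shifting them away from Search (illustrative computation)."""
    return (conv_mix - conv_control) / conv_control

# Example: 1200 conversions for "Search + PMax" vs 1000 for "Search Pure"
lift = incremental_lift(1000, 1200)
print(f"Incremental lift: {lift:.0%}")  # → Incremental lift: 20%
```

A lift close to zero would indicate duplication: PMax capturing conversions Search would have won anyway.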
A rigorous testing protocol to secure ROI
The strength of Mix Experiments lies in its statistical approach based on traffic or cookie splitting. In practical terms, Google Ads randomly divides the audience to ensure that both arms of the test evolve under identical market conditions. This rigour eliminates seasonal biases or competitive bidding fluctuations that usually skew before/after analyses.
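To make the splitting mechanism concrete, here is a minimal sketch of a deterministic, hash-based random assignment. This is an assumption for illustration only: Google does not document its exact bucketing mechanism, and all names here are hypothetical.

```python
import hashlib

def assign_arm(user_id: str, experiment: str = "mix-exp-1") -> str:
    """Deterministically bucket a user into one of two test arms.
    Hashing the (experiment, user) pair guarantees each user always
    sees the same arm for the duration of the test (illustrative)."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in [0, 99]
    return "search_pure" if bucket < 50 else "search_plus_pmax"
```

The key property is stability: re-running `assign_arm` for the same user returns the same arm, so the two populations stay disjoint under identical market conditions.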
The direct impact on CPA is immediate: by identifying the most efficient configuration, brands avoid wasting budget on less effective formats. For advertisers managing substantial budgets, this lever allows them to measure incrementality with surgical precision. However, the learning phase remains a key factor; as with any machine learning-based solution, we recommend test cycles of 4 to 6 weeks to stabilise data signals and obtain statistically significant results.
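The "statistically significant" bar mentioned above can be checked with a standard two-proportion z-test on the arms' conversion rates, combined with a cost-per-incremental-conversion figure. A minimal sketch under our own assumptions (Google's internal significance methodology is not public):

```python
import math

def z_test_conversions(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-proportion z-test: is arm B's conversion rate significantly
    different from arm A's? Returns (z, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal-approximation p-value via the error function
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

def incremental_cpa(extra_spend: float, extra_conversions: float) -> float:
    """Cost of each *incremental* conversion won by the mix arm."""
    return extra_spend / extra_conversions

# Example: 500/10000 conversions (control) vs 600/10000 (mix)
z, p = z_test_conversions(500, 10_000, 600, 10_000)
```

With a p-value below 0.05, the mix arm's uplift can be treated as real rather than noise; the 4-to-6-week cycle exists mainly to accumulate enough observations for this test to resolve.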
AI as a co-pilot: optimising creative and semantic resources
The integration of these tests is part of Google’s overall vision of “augmented marketing” with Gemini. Beyond simply comparing performance, these experiments highlight the effectiveness of creative assets across different touchpoints. If a mix including Performance Max outperforms a standalone Search campaign, this confirms the need to invest heavily in high-quality visual resources to feed the algorithm.
Furthermore, this feature allows for more refined audience strategies. By analysing how campaigns respond to new signals of intent detected by AI, brands can adjust their messages in real time. The challenge is to maintain strict brand consistency while giving artificial intelligence the flexibility it needs to optimise delivery. SEA management is becoming hybrid: bid control is giving way to a detailed understanding of how tools filter information.
Our Google Ads agency supports brands in structuring and activating audience strategies tailored to their actual volumes and performance challenges.