There is a phenomenon with Amazon DSP management that does not exist with Amazon Sponsored Ads—some agencies simply “set and forget” campaigns.

Though obviously not ideal, it’s an unfortunate trap that’s begrudgingly tolerated. The justification goes like this: unlike Sponsored Ads, where 90% of purchases occur within the first 24 hours of a click, Amazon DSP campaigns have longer purchase windows (the average time from first ad exposure to purchase for a Prime Video ad is 22 days), so frequent optimizations lack urgency. Additionally, since optimizations on the Amazon DSP can be laborious, practitioners rationalize that their time may be better spent elsewhere.

No one aspires to “set it and forget it”, but when campaigns are set up well, many media managers only dedicate attention to these campaigns when something goes wrong or a client demands attention.

We are not here to tell you that this is bad practice (it obviously is); instead, we want to use this opportunity to set the bar for advertising excellence on the Amazon DSP and share how Gigi enables our customers to resoundingly meet this bar.

AI Raises the Bar of Best Practices

Agency leaders draft best practices for their teams: when and how to analyze and optimize campaigns. These best practices become wish lists—aspirational standards that bandwidth-constrained teams can only partially execute. As a result, best practices are compromised. But what would happen if those constraints were removed entirely? What if, with AI agents, agency leaders could create best practices unconstrained by human labor: always on, always monitoring, always executing?

AI raises the bar for advertising excellence by removing the operational ceiling. For agencies, this creates a competitive edge. For brands, it resets expectations. The desire to achieve Amazon DSP excellence is shared among all stakeholders at their best: functional leaders managing teams, media managers aspiring to be great, and clients yearning for proactive service.

Advertising Excellence in Practice, with Gigi

Below is a shortlist of optimizations that we believe are table stakes for Amazon DSP campaign management. Additionally, we thought it would be helpful to show how Gigi elevates media management with sample tasks and prompts frequently deployed by our customers.

Improving Efficiency While Campaigns Are Live

Bid Optimizations

Maintaining base and max bids within an optimal range compared to fluctuating CPMs is critical for efficient bidding and campaign delivery. But going into every single campaign and manually inputting base and max bid adjustments is time-consuming, monotonous work. At best, media managers are likely able to do this on a weekly basis. With Gigi, you can set always-on optimizations to match your bids to the CPM fluctuations as they happen. This is beneficial not only for day-to-day maintenance, but especially during tentpole events, when CPMs can spike by 40–60%.

Optimal Cadence: Always-on monitoring

What This Optimizes: Detects when base and max bids fall outside of optimal ranges relative to recent CPMs, then provides scaled bid recommendations customized to each line item’s KPI performance.

Sample Prompt/Task in Gigi: If any delivering line item within a delivering campaign does not have a base bid or max bid within these ranges—base bid: 60–100% of the last 24-hour CPM; max bid: 160–200% of the last 24-hour CPM—then surface those line items and provide base and max bid recommendations that fall within the appropriate ranges. Higher-performing line items should receive base and max bids toward the upper end of these ranges. Performance should be assessed using DPVR for TOF/MOF line items and ROAS for BOF line items.

The Cost of Missing This: Static bids lose auctions during high-demand windows, leaving significant budget unspent during the highest-converting periods of the year.

Inventory QA

The caliber of the inventory (domains and apps) where your ads run can have a dramatic impact on campaign performance. We recently met with a savvy agency leader; she told us that instead of curating domains and apps she simply curates supply sources (like OpenX and Pubmatic). She knew this was suboptimal but also acknowledged that manually running domain-level inventory reports would be unreasonably laborious for her and her team. A clear example of best practices being compromised due to natural human constraints.

Optimal Cadence: Weekly at minimum; recommend always-on monitoring

What This Optimizes: Identifies domains and apps with meaningful delivery but poor efficiency (high eCPDPV, low ROAS) and generates granular exclusion plans across impacted line items.

Sample Prompt/Task in Gigi: If any of the sites/apps that I am spending on for any of my delivering line items have served more than 10,000 impressions in the last 7 days with an eCPDPV that is above $8 or a total ROAS that is below $0.50, then exclude these domains from all line items serving on them. Do not include Alexa or Amazon.com in your exclusions.
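The exclusion rule in this prompt boils down to a simple filter. The sketch below uses the thresholds from the prompt; the record fields are hypothetical:

```python
# Illustrative filter for the inventory QA rule; row fields are invented.

NEVER_EXCLUDE = {"Alexa", "Amazon.com"}

def domains_to_exclude(rows, min_impressions=10_000,
                       max_ecpdpv=8.0, min_roas=0.50):
    """Return domains/apps with meaningful delivery but poor efficiency.

    Each row is a dict with keys: domain, impressions_7d, ecpdpv, roas.
    """
    excluded = []
    for r in rows:
        if r["domain"] in NEVER_EXCLUDE:
            continue  # never exclude Alexa or Amazon.com
        if r["impressions_7d"] <= min_impressions:
            continue  # not enough delivery to judge
        if r["ecpdpv"] > max_ecpdpv or r["roas"] < min_roas:
            excluded.append(r["domain"])
    return excluded
```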

The Cost of Missing This: 20% of spend could be going to Solitaire and Crossword apps. Low-quality game inventory may drive impressions, but it can crush conversion rates, leaving little justification for keeping it in your brand’s campaigns.

Viewability QA

Adjusting line item viewability thresholds to ensure highly visible ad placements is another often neglected setting on the ADSP. We continue to see media managers shocked that their Performance+ campaigns have unknowingly had a sub-40% viewability rate since launch. Amazon defines a viewable impression as one in which at least 50% of the ad shows on a screen for one second or longer for display ads and two seconds or longer for video ads. A 40% viewability rate means that only 40% of your impressions meet this baseline ‘viewable threshold’. Properly monitoring viewability gives you the ability to course-correct before thousands of dollars in spend go towards ghost impressions.

Optimal Cadence: Weekly at minimum; recommend always-on monitoring

What This Optimizes: Flags campaigns where viewability rates drop below a prescribed threshold and pinpoints which line items are dragging it down, with specific settings adjustments to fix it.

Sample Prompt/Task in Gigi: If the viewability rate for any of my delivering campaigns has dropped below 70% in the last 7 days, recommend increasing the viewability setting to 70%+ for all active line items within that campaign.

The Cost of Missing This: Line items with a low viewability rate signify wasted ad spend: impressions that register as "delivered" while rarely entering a user's actual view.

Audience Swaps

We often see media managers rely on intuitive targeting choices (like IM-Supplements for a supplements brand) as a part of a campaign launch, but then neglect ongoing audience optimizations if an intuitive in-market or lifestyle audience has sufficient scale to deliver. Standard practice should include regular audience performance analysis—identifying untargeted segments with higher purchase rates than targeted segments and systematically swapping them in.

Optimal Cadence: Monthly, giving audiences sufficient time to accumulate statistically significant performance data

What This Optimizes: Swaps out lowest-performing audience segments and replaces them with higher-performing, untargeted segments based on actual KPI performance across the funnel.

Sample Prompt/Task in Gigi: Remove the bottom 2 performing targeted in-market (IM) or lifestyle (LS) inclusion audiences within the awareness campaign and then add the top 2 untargeted IM or LS segments identified as top performers within the awareness campaign. Gauge audience performance based on the last 30-day purchase rate. Only make swaps if the newly added audiences have a higher last 30-day purchase rate than the segments that are being removed.
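The swap logic in this task can be sketched in a few lines; the segment names, data shape, and pairing scheme below are illustrative assumptions:

```python
# Minimal sketch of the audience-swap rule; inputs are hypothetical.

def plan_audience_swaps(targeted: dict, untargeted: dict, n: int = 2):
    """Swap the bottom-n targeted segments for the top-n untargeted ones,
    keyed by 30-day purchase rate, only when each replacement outperforms
    the segment it displaces.

    targeted/untargeted map segment name -> last 30-day purchase rate.
    """
    bottom = sorted(targeted, key=targeted.get)[:n]            # worst first
    top = sorted(untargeted, key=untargeted.get, reverse=True)[:n]  # best first
    swaps = []
    for out_seg, in_seg in zip(bottom, top):
        if untargeted[in_seg] > targeted[out_seg]:
            swaps.append((out_seg, in_seg))  # (remove, add)
    return swaps
```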

The Cost of Missing This: Intuitive targeting choices can fail silently for months. Segments that feel relevant often aren't driving the highest purchase rates. One agency found that, across their in-market and lifestyle campaigns, tangential segments outside the campaigns' targeting had been driving a 2-3x higher purchase rate for months.

Creative Swaps

When multiple creatives are associated with a single line item, Amazon defaults to random creative allocation rather than weighting spend by performance. This typically results in all active, assigned creatives receiving relatively equal portions of the line item’s budget. Creative weighting and swapping are obvious optimization levers that savvy media managers should be pulling on behalf of their clients. It is on the media manager to identify strong creatives and optimally allocate spend by increasing the weight on high-performing creatives and/or deactivating poor-performing ones.

Optimal Cadence: Monthly, giving creatives sufficient time to accumulate statistically significant performance data

What This Optimizes: Pinpoints underperforming creatives with sufficient volume and recommends pausing or deprioritizing them while promoting stronger alternatives.

Sample Prompt/Task in Gigi: Surface last 30-day creative performance and identify the REC variation applied to my TOF in-market campaign with the lowest assisted DPVR. Remove this creative from all TOF line items it is applied to.

The Cost of Missing This: Creatives rotate evenly by default, often spending equally regardless of results. Discovering weeks later that one headline drove a 3x higher DPVR than another means missing the window to optimize sooner and serve the messaging that actually drives customer engagement.

Frequency Adjustments

We all know that frequency capping based on optimal frequency reporting is a best practice that every media manager should follow. Yet, we’ve seen too many compromises here by uncapping or loosening frequency to ensure budgets spend in full. Ideally, media managers should be using AMC reports to identify the optimal frequency at which they should be reaching users in order to drive downstream impact.

Optimal Cadence: Always-on monitoring

What This Optimizes: Tightens frequency caps in defined stages when line items pace ahead with high delivery confidence, in alignment with optimal-frequency findings from AMC reports. This reduces over-serving and supports broader reach.

Sample Prompt/Task in Gigi: For any active line item in an active campaign that's pacing >120% for the current flight, has high forecast delivery confidence, and hasn't had a frequency change in the last 3 days, tighten frequency caps to reduce over-serving and expand reach without risking full budget delivery. Specifically: if frequency is uncapped or looser than 1 per user per 3 hours, set it to 1 per 3 hours; if it's between 1 per 3–5 hours, move to 1 per 6 hours; if it's between 1 per 6–11 hours, move to 1 per 12 hours; and if it's already 1 per 12 hours, make no change.

The Cost of Missing This: Static, loose frequency caps can lead to oversaturation and wasted ad spend, especially when audience sizes are small.

Bid Modifiers

We continue to believe bid modifiers are the most underutilized performance lever in Amazon DSP. When we speak with agencies and ask about their use of bid modifiers prior to Gigi, we often hear the same response: “We use them as often as we can—which is sparingly.” If you wanted to manually run reports to identify areas for bid modifier optimizations, you would need to run inventory reports for domain performance, technology reports for device performance, geo reports for location performance, audience reports for segment performance, and custom AMC reports for slot size and position performance. After manually reviewing the data, you would then need to decide which multipliers to implement to drive desired KPIs across each of those dimensions, multiplied by every line item. This is an unconscionable amount of manual work that is perfectly suited for AI. Two of our agencies used Gigi to implement bid modifiers, helping Lemon Perfect drive a 36x lift in purchase rate and Topicals a 2.6x improvement in ROAS.

Optimal Cadence: Bi-weekly

What This Optimizes: Surfaces bid modifiers across geographic location, device type, domain, slot size, and slot placement to win impressions most likely to contribute to custom KPIs at each funnel stage.

Sample Prompt/Task in Gigi: Gigi has pre-built custom bid modifier queries that surface insights and bid modifier recommendations on a bi-weekly cadence for users to easily review and accept across each campaign. If impressions served in California, on ESPN.com, above the fold and with a 300x600 placement size drove a strong CTR historically, then Gigi might bid up +35% for that impression moving forward.

The Cost of Missing This: Not implementing bid modifiers is unfortunately the current standard. Yet implementing them across every single line item can dramatically elevate client performance, and it is the standard brands should demand of their agencies.

Monthly Budget Rebalancing

If a client maintains the same budget without any strategic variations month to month, it is easy to just run the same campaigns exactly as they ran the previous month. But deploying media is similar to deploying capital, and investing in a full-funnel advertising strategy is similar to investing in an index fund. Every index fund rebalances its portfolio on a defined cadence, so why shouldn’t your media plan?

Assuming you want to maintain the same spend distribution across funnel segments, budget allocations across campaigns and line items should be rebalanced each month based on prior-month performance and any upcoming deal events. That means pulling reports to identify which campaigns and line items are driving your target KPIs, then reflighting and updating budgets across campaigns for the new month—a tedious, time-consuming workflow that’s easy to deprioritize and often gets neglected.

Financial traders don’t manually rebalance index funds, and programmatic traders shouldn’t manually rebalance media plans. This is a perfect task for AI: systematically shifting dollars from underperforming tactics to high performers based on voluminous data analysis.

Optimal Cadence: Monthly, before the start of a new flight

What This Optimizes: Performance data shows which campaigns and tactics are delivering against your goal KPIs and which aren't. Rather than maintaining static allocations, this redistributes the same monthly budget across active campaigns for new flights—reallocating by tactic and funnel stage based on the previous 30 days of pacing and performance.

Sample Prompt/Task in Gigi: The budget for next month is $100,000. Use the same funnel allocation percentages as the previous month, with NTB ROAS as the primary KPI and ROAS as the secondary KPI. Looking at performance for the last 30 days, and knowing that there is a deal period from the 4th to the 8th of the month, allocate this budget across each active campaign.
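One way the core of this rebalancing step could be expressed (setting aside the deal-period adjustment): hold each funnel stage's share of last month's spend constant, then weight campaigns within a stage by their primary KPI. The campaign records and KPI weighting below are illustrative assumptions:

```python
# Sketch of funnel-preserving budget rebalancing; data shape is invented.

def rebalance(total_budget: float, campaigns: list[dict]) -> dict:
    """Redistribute total_budget, keeping each funnel stage's share of
    last month's spend, and weighting campaigns within a stage by their
    primary KPI (e.g. NTB ROAS) over the last 30 days.

    Each campaign dict: name, funnel ('TOF'/'MOF'/'BOF'),
    last_month_spend, kpi_30d.
    """
    total_spend = sum(c["last_month_spend"] for c in campaigns)
    allocations = {}
    for stage in {c["funnel"] for c in campaigns}:
        group = [c for c in campaigns if c["funnel"] == stage]
        # Preserve the stage's share of last month's total spend.
        stage_budget = total_budget * sum(
            c["last_month_spend"] for c in group) / total_spend
        # Within the stage, weight by recent KPI performance.
        kpi_sum = sum(c["kpi_30d"] for c in group)
        for c in group:
            allocations[c["name"]] = round(
                stage_budget * c["kpi_30d"] / kpi_sum, 2)
    return allocations
```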

The Cost of Missing This: Budget allocations become stale and disconnected from performance, leaving high performers underfunded while low performers continue consuming budget.

Preventing Silent Failures

No Clicks or Spend Alert

Catching when a campaign has unexpectedly stopped spending or receiving clicks is critical to quickly triaging potential issues like loss of the buy box, line items within a campaign ending early, or monthly budget caps that could be impacting campaign delivery and performance. But to identify and address these issues, you’d need to go into each advertiser and carefully review last-24-hour spend and performance metrics across every active campaign—or set up emailed DSP reporting that still requires daily review—to flag a campaign that meets these alarming criteria.

One of our clients had a brand lose the buy box for their hero product—creatives stopped running and campaigns stopped spending. He would not have caught this for several days while managing over a dozen other advertisers. By the time month-end reporting reveals 50% budget under-delivery, it's too late to recover. Having active alerts built to flag anomalies can help prevent campaign underperformance.

Optimal Cadence: Daily, always-on monitoring

What This Catches: Highlights delivering campaigns with $0 spend or 0 clicks. Automatically checks common blockers—creative approvals, flight dates, budget caps, buy box loss—and surfaces what needs fixing.

Sample Prompt/Task in Gigi: If any of my delivering campaigns have 0 clicks or $0 in spend for the last 24 hours, then for each campaign that meets these criteria check the following possible factors that would limit spend: Are all the active creatives associated with my line items approved? Have any of my approved and active creatives spent $0 in the last 24 hours (signaling a loss of the buy box)? Have any or all of the line items within the campaign ended (end date in the past)? Is my pacing strategy set to pace ahead, with the monthly budget already exhausted? Is a restrictive monthly budget cap set at the campaign or line item level preventing spend?

The Cost of Missing This: Silent failures compound over time—losing days or weeks of delivery windows that can't be recovered, resulting in significant budget under-delivery and a damaging conversation with the client.

Pacing Recommendations

While the DSP dashboard provides at-a-glance pacing visibility, actionable insights require deeper analysis: delivery confidence forecasting, available spend capacity, historical month-over-month trends, and identification of high-performing campaigns with headroom to scale. Without this context, pacing metrics alone don't drive decisions. Critically, pacing challenges need not compromise performance. Rather than resorting to blunt tactics like uncapping frequency or lowering viewability standards, intelligent pacing solutions redirect spend toward high-performing line items with strong delivery confidence or recommend budget reallocation to campaigns demonstrating both strong performance and capacity.

Optimal Cadence: Always-on monitoring

What This Optimizes: For auto-optimized campaigns, line item minimum spend settings are applied that sum to daily budget requirements while prioritizing high performers. For manual campaigns, Gigi shifts budget from low-delivery-confidence line items to high-performing, high-confidence line items.

Sample Prompt/Task in Gigi: If any of my delivering campaigns using the auto-optimized budget management strategy are pacing at less than 100% for the current flight, and at least 5% of the flight has elapsed, then for each of these underpacing campaigns recommend line item minimum spends weighted toward high-performing line items. High performance means a high ROAS on BOF line items and a high assisted ROAS on TOF/MOF line items.

The Cost of Missing This: Consistent 5–10% under-delivery across multiple campaigns compounds rapidly, translating to thousands of dollars in unspent media and proportional losses in agency revenue.

KPI Fluctuation Tracking

Active campaigns require continuous monitoring to spot meaningful performance fluctuations. However, pulling daily reports for each advertiser across multiple time periods—and calculating percent changes—is time-intensive, and standard reporting typically captures only last-touch DSP metrics. Advanced media managers should implement automated KPI monitoring that compares performance across multiple time windows to catch declines early. Early detection enables rapid investigation into contributing factors, such as a new audience segment underperforming or a low-performing app receiving an increasing share of spend.

Optimal Cadence: Weekly at minimum; recommend always-on monitoring

What This Delivers: KPI reports triggered daily or whenever a KPI fluctuates beyond a specified threshold. This DSP and AMC metric monitoring report can be produced across several levels of granularity: campaign, line item, and creative.

Sample Prompt/Task in Gigi: For all delivering campaigns, surface the following metrics—ranked by the greatest ROAS decline (last 7 days vs. last 90 days): yesterday’s ROAS, last 7-day ROAS, last 90-day ROAS, % change (yesterday vs. last 90 days), and % change (last 7 days vs. last 90 days). Then, for campaigns showing a >10% ROAS decline (7-day vs. 90-day), run a diagnostic analysis to evaluate potential drivers and provide a concise root-cause summary, including whether low-ROAS apps/sites received a disproportionate share of spend, whether eCPM shifts have resulted in suboptimal bid levels (too high or too low), whether viewability has declined for active line items, whether spend has shifted toward underperforming line items, and whether budget has moved toward lower-converting audience segments.
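The decline check that triggers this diagnostic is straightforward to express. A sketch, with invented metric fields and the 10% threshold from the prompt:

```python
# Sketch of the ROAS-decline ranking and flagging; fields are hypothetical.

def roas_decline_report(campaigns, threshold=0.10):
    """Rank campaigns by 7-day vs. 90-day ROAS decline and flag those
    past the threshold for root-cause analysis.

    Each campaign dict: name, roas_7d, roas_90d.
    """
    rows = []
    for c in campaigns:
        change = (c["roas_7d"] - c["roas_90d"]) / c["roas_90d"]
        rows.append({
            "name": c["name"],
            "pct_change_7v90": round(change, 3),
            "needs_diagnostic": change < -threshold,  # >10% decline
        })
    rows.sort(key=lambda r: r["pct_change_7v90"])  # worst decline first
    return rows
```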

The Cost of Missing This: Manual KPI monitoring consumes significant time that could be redirected toward campaign optimization and strategic initiatives. Without automated tracking, meaningful performance declines may go undetected, resulting in missed intervention opportunities.

What Agencies Gain, What Brands Should Expect

For agencies, this shift creates clear competitive differentiation. Agencies that can consistently execute this baseline—and articulate it clearly—gain a meaningful edge. These capabilities should be visible in pitch decks, renewal conversations, and day-to-day client interactions.

For brands, it resets what you should expect from agency partners. If you're investing in Amazon DSP, you should feel empowered to ask what is actively being done on your account each week. Not what was planned. Not what the strategy document says. What actually happened.

  • Which levers were pulled this week to improve performance?

  • How were budgets reallocated based on recent pacing and KPI contribution?

  • What underperforming tactics were deprioritized, and why?

  • How are AMC insights shaping decisions while campaigns are live—not just explaining results after the fact?

Spending your budget and meeting a token ROAS target is not sufficient—you need visibility into week-over-week changes that ensure your investment is being maximally deployed. These are not aggressive questions—they're reasonable ones. And they separate agencies achieving advertising excellence from those still constrained by outdated operational models.

We are no longer constrained by bandwidth. With the proliferation of AI agents, “set it and forget it” isn’t a strategy. It’s what happens when execution is bounded by human limits rather than performance potential. At Gigi, we remove these limits.

Cherry Picked is a monthly newsletter from Adam Epstein, co-founder and CEO at Gigi, covering the AI and commerce media insights you just gotta know.
