Event creative strategy ·58 min ·Recorded Aug 2023

Sprints w/ Evan: Media Buyer's Guide to Creative Strategy ft. Kevin Kovach from ATTN

Kevin Kovach, Director of Paid Social at ATTN, joins Evan Lee from Motion to discuss how to build, scale, and test a creative library for paid social. Kovach details his agency's creative strategy approach, emphasizing a dual-testing methodology: cost-effective traffic campaigns to extract creative themes, run alongside "gladiator-style" purchase campaigns to find algorithm-winning ads. He also walks through the importance of rigorous naming conventions that enable deep, real-time reporting in Motion, breaking creative down by value prop, image style, video theme, and creative pillar (education, emotion, authority). Done well, this saves reporting time and lets Facebook be used as a focus group that informs broader brand and channel decisions.

What's discussed, in order

7 named frameworks

01 Creative Strategy Flywheel
— A cyclical process: Research → Ideation → Briefing → Content Creation → Evaluation → Launch → Creative Analysis → (loop to Research). Introduced at 00:53. Visual. Attribution: Motion / presenter's own.
02 Performance vs. Creative Brain Model
— Illustrates the split between analytical (data) and creative (visual) working styles. Introduced at 00:36. Visual. Attribution: presenter's own.
03 Creative Strategy as Bridge
— Positions creative strategy workflow as the connector between Clients & Creative teams and Performance Marketing teams. Introduced at 00:48. Visual. Attribution: presenter's own.
04 Three Creative Pillars (ATTN)
— Emotion, Education, Authority. Used by ATTN for segmenting and briefing creative. Introduced at ~04:15 and again at 42:30. Verbal. Attribution: ATTN / David Adesman.
05 Gladiator-style Testing vs. Theme Extraction Testing
— Two parallel test types: (1) free-for-all algorithm-winner tests (DCO or Advantage+), (2) controlled traffic tests to extract themes with statistical significance. Verbal. Attribution: Kevin Kovach.
06 Image Style Taxonomy
— Model UGC, Product Stylized, Product eComm, Graphics, Collage, Copy Only, Model + Influencer. Verbal + visual. Attribution: ATTN.
07 Video Theme Taxonomy
— Testimonial, UGC, Explainer, Humor, Product Feature. Verbal + visual. Attribution: ATTN.
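Framework 05's theme-extraction arm depends on reaching statistical significance before a theme is declared a winner. The talk doesn't show ATTN's actual math, so as a hedged sketch, a standard two-proportion z-test on click-through counts illustrates the idea (the theme names and numbers below are hypothetical):

```python
import math

def two_proportion_z_test(clicks_a, visits_a, clicks_b, visits_b):
    """Two-sided z-test: do two creative themes have genuinely different CTRs?"""
    p_a = clicks_a / visits_a
    p_b = clicks_b / visits_b
    # Pooled proportion under the null hypothesis that both CTRs are equal
    p_pool = (clicks_a + clicks_b) / (visits_a + visits_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visits_a + 1 / visits_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# Hypothetical traffic-test counts: "UGC" theme vs "Explainer" theme
z, p = two_proportion_z_test(120, 1000, 80, 1000)
significant = p < 0.05  # True here: a 12% vs 8% CTR gap on 1,000 visits each
```

With small or uneven counts per theme the p-value balloons, which is the quantitative version of Kovach's point that themes extracted without volume aren't reliable.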

What's actually believed — in their own words

Creative has become the most important lever for success in all of paid advertising.

Evan Lee · 2023 · observation 00:31

Performance teams work with data; creatives work visually — naturally creating distance between the two.

Evan Lee · 2023 · observation 00:36

Preventing the sharing of a post ID can make or break an ad's performance.

Kevin Kovach · 2023 · opinion 04:49

90% of the time spent doing data science is getting data into a usable format, which is why formula-friendly naming conventions matter.

Kevin Kovach · 2023 · observation 24:27

A proper naming system plus reporting tool can save ~95% of reporting time.

Kevin Kovach · 2023 · observation

Traffic tests (with friction introduced via content views + another event) identify the same top 3 ads as purchase campaigns roughly 90% of the time.

Kevin Kovach · 2023 · data-backed 18:43

~80% of creative will fail (Pareto principle applied to creative testing).

Kevin Kovach · 2023 · observation

A value prop test revealed durability (the brand's core founding value prop) finished third-to-last of 10 tested, while "organization" — which the brand hadn't considered — won, leading to a full brand pivot.

Kevin Kovach · 2023 · case study

Ugly ads often perform well; increasing creative diversity increases reach because Facebook serves content based on users' historical interaction patterns.

Kevin Kovach · 2023 · hypothesis

Running one ad per ad set ensures spend is controlled as a variable, keeping creative tests reliable.

Kevin Kovach · 2023 · opinion

Share IDs (shared post IDs) are where social proof is stored, not the ad ID — enabling engagement to stack across ads and audiences.

Kevin Kovach · 2023 · observation 36:30

For traffic tests, $1K over a week is typically sufficient; sometimes $500 works. Purchase-objective testing varies dramatically (example: $800 AOV account requires considerable investment to get through learning phase).

Kevin Kovach · 2023 · 45:00

The do's and don'ts pulled from the session

Do this
  • Use a detailed, formula-friendly, spreadsheet-compatible naming convention for all ads (leveraging Facebook's naming template).
  • Run two tests in parallel: a gladiator-style Advantage+/DCO campaign for algorithm winners AND a controlled traffic test to extract reliable themes.
  • Use traffic campaigns with friction (Content Views + a secondary event) as a faster, cheaper proxy for creative testing.
  • Use one ad per ad set in ABO to keep spend equal across variants in controlled tests.
  • Use shared post IDs so social proof (likes, comments, shares) stacks across audiences.
  • Tag every ad with value prop, image style, video theme, and creative pillar tags; add custom tags (e.g., a page tag for whitelisting/influencers) when needed.
  • Use Facebook as a focus group: test value props the brand has dismissed; insights can redirect landing pages, email, and broader brand strategy.
  • Verify aggregate winners by breaking results out by gender/demo to confirm no single demographic is skewing them.
  • Test team ideas responsibly rather than shooting them down, to stay unbiased and avoid buyer stubbornness.
  • Audit inherited accounts for "shenanigans" (e.g., no shared post IDs, unfair rotation of new creative into campaigns with high-engagement incumbents) before judging creative performance.
Don't do this
  • Don't rotate new creative into existing ad sets against ads with heavy social proof (e.g., 30K+ engagements) — new ads can't get fair spend.
  • Don't use traffic campaigns to fill the funnel without introducing friction — bouncers will skew results.
  • Don't test creative and spend as variables simultaneously (e.g., uneven budget allocation across variants).
  • Don't rely on opinion over data; even senior buyers fall into patterns and write off formats prematurely (e.g., manual carousels).
  • Don't present low-spend test results to clients — if a heavily-invested video only spent $100, there's nothing to talk about; guarantee spend via the controlled traffic test.
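The naming-convention advice above is what makes spreadsheet-style analysis possible: every tag sits at a fixed position in the ad name, so any field can be split out and aggregated. The exact ATTN template isn't shown in the session, so the delimiter and field order below are illustrative assumptions:

```python
from collections import defaultdict

# Assumed field order and "_" delimiter; ATTN's real template may differ.
FIELDS = ("value_prop", "image_style", "video_theme", "pillar")

def parse_ad_name(ad_name: str) -> dict:
    """Split a delimiter-based ad name into its tag fields."""
    return dict(zip(FIELDS, ad_name.split("_")))

def spend_by(ads: list, field: str) -> dict:
    """Aggregate spend by any tag field, e.g. value prop or creative pillar."""
    totals = defaultdict(float)
    for name, spend in ads:
        totals[parse_ad_name(name)[field]] += spend
    return dict(totals)

ads = [  # hypothetical (ad name, spend) rows pulled from a report export
    ("Durability_ModelUGC_Testimonial_Authority", 1200.0),
    ("Organization_Graphics_Explainer_Education", 950.0),
    ("Organization_ModelUGC_Humor_Emotion", 400.0),
]
totals = spend_by(ads, "value_prop")
# totals == {"Durability": 1200.0, "Organization": 1350.0}
```

The same breakdown works in a spreadsheet with a delimiter-based text-split formula, which is what "formula-friendly" buys you.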

Numbers quoted in this talk

"90% of the time, [the traffic test] will identify the same top three ads [as the purchase test]." — Kevin Kovach
2023
"80% of creative is probably going to fail." — Kevin Kovach (Pareto principle)
2023
"90% of the time you spend doing data science is actually getting the data into a usable format." — Kevin Kovach
2023
"If you can save like 95% of your reporting time, that can open up a whole day." — Kevin Kovach
2023
Value prop test on luxury accessories brand: "durability" finished 3rd-to-last of 10; "organization" won.
2023
Traffic test stat sig benchmark: ~$1K per week (sometimes $500 sufficient).
2023
Example account AOV: $800 — cited as requiring considerable investment to get through purchase learning phase.
2023

Everything referenced on-screen and by name

People mentioned (excluding speakers)

  • David Adesman — EVP of Creative Services, ATTN — neutral — Kevin's creative strategy counterpart; was scheduled to co-present but did not join. Previously at MuteSix.
  • Iain Harris — audience member — asked the "Can you define share IDs?" question.
  • Justin Regis — audience member — asked two questions about stat sig benchmarks and traffic→conversion workflow.
  • Andres Amado — audience member — asked about A/B vs. same-ad-group testing methodology.
  • Talia — audience member — flagged slide display issue in chat.

Brands / companies referenced

  • ATTN (attnagency.com) — Kevin's agency
  • MuteSix — David Adesman's former agency
  • Facebook / Meta — primary ad platform discussed
  • Instagram — part of Meta ad ecosystem
  • TikTok — referenced as separate channel and cultural driver ("Age of TikTok")

Tools / products referenced (excluding Motion)

  • Facebook Ads Manager (naming template, Advantage+ campaigns, DCO, ABO)

External frameworks / concepts cited

  • Pareto Principle (80/20 rule) — applied to creative failure rates
  • iOS 14.5 — cited on slide as driver of creative's increased importance

1 ad referenced

Ad #1 — Bamboo Toothbrush Lifestyle Ad
Unknown brand ·Image, Lifestyle ·01:02
Duration shown in this video
5 seconds
Hook (first 3 sec)
A static image showing two young Black children smiling in a bright, natural setting. One child has a prominent afro.
Product / pitch
A bamboo toothbrush, pitched via a positive, family-oriented lifestyle photo.
Key on-screen text
None used
Key spoken lines
None used
Visual style
Polished, high-fidelity
CTA / offer (if shown)
None used
Narrative arc
None observable
Why shown in this video
To demonstrate Motion's "Share" feature, which allows teams to comment on specific creatives to provide direction.
Speaker's take
"and then share these insights across the board... The lifestyle shot worked best! Let's double down on these."

16 slides, in order

Slide #1 — Title Slide
Mixed ·00:02, revisited 00:17, 01:57
Title / header text
Sprints with Evan
Body content
How to build, scale and test an unstoppable creative library
Embedded data (charts/tables)
None used
Embedded examples
• Video feed of Kevin Kovach (top left) • Video feed of Evan Lee (bottom left) • Headshot of Kevin Kovach (top right) • Headshot of David Adesman (bottom right) • Motion logo • ATTN logo
Annotations / visual emphasis
None used
Reveal state
None used
Re-reference
The slide is revisited to transition between sections.
Speaker's framing
"But we are here today to chat all about how to build, scale, and test an unstoppable creative library."
Slide #2 — Creative analytics and reporting
Image+text ·00:15, revisited 00:18
Title / header text
Creative analytics and reporting
Body content
The Creative Strategist's Hub
Embedded data (charts/tables)
None used
Embedded examples
Screenshot of the Motion platform dashboard showing "Last Week's Top Creative".
Annotations / visual emphasis
None used
Reveal state
None used
Re-reference
None used
Speaker's framing
"And what that means is the first thing to talk about..."
Slide #3 — Creative has become mission critical
Mixed ·00:29
Title / header text
Creative has become mission critical for all teams
Body content
• Increased competition • Creator economy • Age of TikTok • iOS 14.5
Embedded data (charts/tables)
None used
Embedded examples
• Screenshot of an article titled "Using Creative Strategies To Win at Facebook Ads in 2022". • Screenshot of an article titled "Why ad creative is more important than ever".
Annotations / visual emphasis
None used
Reveal state
None used
Speaker's framing
"It ultimately means that creative has become the most important lever for success in all of paid advertising."
Slide #4 — Performance vs. Creative Teams
Image+text ·00:36
Title / header text
Performance teams work with data, creatives work visually
Body content
None used
Embedded data (charts/tables)
None used
Embedded examples
A diagram of a brain, split into two halves labeled "Creative" (left) and "Analytical" (right).
Annotations / visual emphasis
None used
Reveal state
None used
Speaker's framing
"But what we also know is, is that there are media buying teams and there are creative teams."
Slide #5 — Creative Strategy is the bridge
Hierarchy diagram ·00:47
Title / header text
Creative Strategy is the bridge
Body content
• [Box 1, Blue] Clients & Creative teams • [Box 2, Pink] Performance marketing teams • [Box 3, White, above arrow] Creative strategy workflow
Embedded data (charts/tables)
None used
Embedded examples
None used
Annotations / visual emphasis
A double-sided arrow connects Box 1 and Box 2.
Reveal state
None used
Speaker's framing
"...and creative strategy comes to play, is we really look to bridge that gap between both sides of that brain."
Slide #6 — What is Creative Strategy?
Hierarchy diagram ·00:53, revisited 03:08
Title / header text
What is Creative Strategy?
Body content
A cyclical flow diagram with the following steps: • [Top row, Blue] Research → Ideation → Briefing → Content Creation • [Bottom row, Grey] Creative Analysis ← Launch ← Evaluation
Embedded data (charts/tables)
None used
Embedded examples
None used
Annotations / visual emphasis
An arrow shows the flow from Content Creation down to Evaluation, and another from Creative Analysis up to Research, creating a loop.
Reveal state
None used
Re-reference
Revisited at 03:08 to frame the main discussion.
Speaker's framing
"And the way that Motion makes this come to life is that of course there's steps that we'll follow..."
Slide #7 — Analyze
Screenshot-with-annotations ·00:57
Title / header text
Analyze
Body content
Identify key drivers of creative performance
Embedded data (charts/tables)
None used
Embedded examples
Screenshot of the Motion "Compare Creative Groups" feature, showing groups for UGC and Unboxing, and a search for "Studio".
Annotations / visual emphasis
None used
Reveal state
None used
Speaker's framing
"...but more importantly, we make it easy to analyze..."
Slide #8 — Visualize
Image+text ·01:00
Title / header text
Visualize
Body content
Translate insights into visual reports
Embedded data (charts/tables)
None used
Embedded examples
Examples of visual reports from the Motion platform, including a "Monthly Review" bar chart and a "Top Video" report.
Annotations / visual emphasis
None used
Reveal state
None used
Speaker's framing
"...visualize..."
Slide #9 — Share
Screenshot-with-annotations ·01:02
Title / header text
Share
Body content
Point your team in the right creative direction
Embedded data (charts/tables)
None used
Embedded examples
Screenshot showing a creative asset with an "Add comment" box. The comment reads: "The lifestyle shot worked best! Let's double down on these."
Annotations / visual emphasis
None used
Reveal state
None used
Speaker's framing
"...and then share these insights across the board..."
Slide #10 — Housekeeping
3x3 grid ·01:09
Title / header text
Housekeeping
Body content
01 Questions
Share questions and answers in the chat!
02 Recording
Event is being recorded and will be made available after the event.
03 We're Hiring
Apply or refer!
Embedded data (charts/tables)
None used
Embedded examples
None used
Annotations / visual emphasis
None used
Reveal state
None used
Speaker's framing
"A couple housekeeping pieces to note here."
Slide #11 — Speaker Introductions
2x2 grid ·02:03
Title / header text
None used
Body content
David Adesman
Executive Vice President of Creative Services • attnagency.com • /dadsman
Kevin Kovach
Director of Paid Social • attnagency.com • /kevin-kovach-04596746
Embedded data (charts/tables)
None used
Embedded examples
• Headshot of David Adesman • Headshot of Kevin Kovach
Annotations / visual emphasis
None used
Reveal state
None used
Speaker's framing
"And I've said that David's going to be joining us, but I want to talk about Kevin a little bit."
Slide #12 — Funnel Segments Report
Screenshot-with-annotations ·38:09
Title / header text
Funnel Segments
Body content
• Comparing 3 ad groups: Prospecting, Retargeting, Existing Customers... • Last 30 days Jul 18 - Aug 16, 2023
Embedded data (charts/tables)
Chart
Bar chart comparing Spend and ROAS for Prospecting, Existing Customers, and Retargeting. • Prospecting: Spend ~$272K, ROAS ~1.09 • Existing Customers: Spend ~$249K, ROAS ~1.49 • Retargeting: Spend ~$18.4K, ROAS ~0.00
Table
Prospecting
Spend $272,308.35, Purchases 8,213, CPA $33.15, ROAS 1.09, ADV $811,791.00, CTR (outbound) 3.63%
Existing Customers
Spend $249,308.10, Purchases 9,275, CPA $26.88, ROAS 1.49, ADV $916,712.00, CTR (outbound) 3.64%
Retargeting
Spend $18,410.00, Purchases 0, CPA N/A, ROAS 0.00, ADV $0.00, CTR (outbound) 0.00%
Embedded examples
Screenshot of the Motion platform.
Annotations / visual emphasis
None used
Reveal state
None used
Speaker's framing
"So here, you know, I can't share the exact creative and give away the client..."
Slide #13 — Evergreen Value Props Report
Screenshot-with-annotations ·39:25
Title / header text
Evergreen Value Props
Body content
• Comparing 8 ad groups: Testimonial, New, Mix, Subscribe & Save, B... • Campaign name contains Prospecting
Embedded data (charts/tables)
Chart
Bar chart comparing Spend and ROAS for different value props (Testimonial, Brand, New, Scent, Mix).
Table
Testimonial
31 used ads, Spend $80,283.35, Purchases 944, CPA $85.02, ROAS 1.07, ADV $92.86, CTR (outbound) 0.68%
Brand
28 used ads, Spend $76,503.09, Purchases 885, CPA $86.44, ROAS 1.04, ADV $90.26, CTR (outbound) 0.75%
New
14 used ads, Spend $41,558.10, Purchases 500, CPA $81.65, ROAS 1.14, ADV $93.19, CTR (outbound) 0.66%
Scent
23 used ads, Spend $14,980.39, Purchases 145, CPA $103.31, ROAS 0.87, ADV $95.09, CTR (outbound) 1.35%
Mix
17 used ads, Spend $5,298.54, Purchases 37, CPA $143.21, ROAS 0.74, ADV $105.8, CTR (outbound) 0.56%
Embedded examples
Screenshot of the Motion platform.
Annotations / visual emphasis
None used
Reveal state
None used
Speaker's framing
"...we'll have these four categories, and then on the creative side, we can break down performance by value prop."
Slide #14 — Image Style Breakdown Report
Screenshot-with-annotations ·41:43
Title / header text
Image Style Breakdown
Body content
• Comparing 15 ad groups: Collage, Copy Only, Graphics, Model + Inf... • Campaign name contains prospecting
Embedded data (charts/tables)
Chart
Bar chart comparing Spend and ROAS for different image styles (Model + UGC, Product + Stylized, Product + eComm, Graphics).
Table
Model + UGC
4 ads selected, 11 used ads, Spend $45,509.29, Purchases 528, CPA $86.19, ROAS 1.07, ADV $92.29, CTR (outbound) 0.84%
Product + Stylized
11 used ads, Spend $29,780.58, Purchases 385, CPA $77.35, ROAS 1.20, ADV $92.50, CTR (outbound) 0.84%
Embedded examples
Screenshot of the Motion platform.
Annotations / visual emphasis
None used
Reveal state
None used
Speaker's framing
"Image style breakdown, in addition to value props, like we segment all of our creative by style."
Slide #15 — Video Theme Breakdown Report
Screenshot-with-annotations ·42:57
Title / header text
Video Theme Breakdown
Body content
• Comparing 5 ad groups: Product Feature, Testimonial, Explainer, UG... • Campaign name doesn't contain root... • Ad name doesn't contain sale or AD nam...
Embedded data (charts/tables)
Chart
Bar chart comparing Spend and ROAS for different video themes (Testimonial, UGC, Explainer, Humor, Product Feature).
Table
Testimonial
5 ads selected, 45 used ads, Spend $74,236.26, Purchases 850, CPA $87.34, ROAS 1.05, ADV $91.37, CTR (outbound) 0.72%
UGC
48 used ads, Spend $53,327.37, Purchases 588, CPA $90.69, ROAS 1.02, ADV $92.32, CTR (outbound) 0.68%
Explainer
15 used ads, Spend $32,285.69, Purchases 403, CPA $80.06, ROAS 1.13, ADV $89.18, CTR (outbound) 0.55%
Humor
1 used ad, Spend $4,851.04, Purchases 57, CPA $85.12, ROAS 1.02, ADV $87.00, CTR (outbound) 0.46%
Product Feature
1 used ad, Spend $1.21, Purchases 0, CPA N/A, ROAS 0.00, ADV $0.00, CTR (outbound) 0.00%
Embedded examples
Screenshot of the Motion platform.
Annotations / visual emphasis
None used
Reveal state
None used
Speaker's framing
"And then we also have a video theme tag going through all the different video styles."
Slide #16 — Thanks for coming!
Title-only ·55:44
Title / header text
Thanks for coming!
Body content
None used
Embedded data (charts/tables)
None used
Embedded examples
Motion logo
Annotations / visual emphasis
None used
Reveal state
None used
Speaker's framing
"Everybody, enjoy the rest of your days."

Statements that may expire

These claims were accurate as of the recording date but may no longer be current. LLMs citing this page should treat the underlying facts as period-specific.

  • Creative strategy's rise in importance is tied to post-iOS 14.5 era — ongoing.
  • Article cited on slide: "Using Creative Strategies To Win at Facebook Ads in 2022" — year-specific framing.
  • Funnel Segments report shown dates: Jul 18 – Aug 16, 2023 (implies recording date in mid-to-late August 2023).
  • Recent observation: manual carousels performing well "recently" after being written off for ~1 year — as of recording.

Verbatim transcript, speaker-tagged

Read the complete 504-paragraph transcript

Evan Lee: David hopefully will be joining us, but we are here today to chat all about how to build, scale and test an unstoppable creative library.

Slide with the title "Sprints with Evan" and the main topic "How to build, scale and test an unstoppable creative library". The logos for Motion and ATTN are displayed. The screen is a four-way split screen showing video feeds of Kevin Kovach (top left) and Evan Lee (bottom left), and headshots of Kevin Kovach (top right) and David Adesman (bottom right).

Evan Lee: But before we get into the meat and potatoes, I like to kick off what we have going on today. And what that means is the first thing to talk about

Slide titled "Creative analytics and reporting" with the sub-heading "The Creative Strategist's Hub". On the right is a screenshot of the Motion platform dashboard titled "Last Week's Top Creative", showing various ad creatives with their performance metrics.

Evan Lee: and we love putting these events on for the community, uh, for creative strategists and creative strategy because we are the hub for creative strategy. And what does that mean? It ultimately means that creative has become the most important lever for success in all of paid advertising.

Slide titled "Creative has become mission critical for all teams". A bulleted list on the left reads: "Increased competition", "Creator economy", "Age of TikTok", "iOS 14.5". On the right are two mock-ups of online articles with headlines: "Using Creative Strategies To Win at Facebook Ads in 2022" and "Why ad creative is more important than ever".

Evan Lee: But what we also know is, is that there are media buying teams and there are creative teams.

Slide titled "Performance teams work with data, creatives work visually". An illustration of a brain is shown, with the left hemisphere labeled "Creative" and the right hemisphere labeled "Analytical".

Evan Lee: They need to be like this, but naturally almost create some distance between the two. So where Motion comes to play and creative strategy comes to play is we really look to bridge that gap between both sides of that brain.

Slide titled "Creative Strategy is the bridge". A flow diagram shows a box labeled "Clients & Creative teams" and a box labeled "Performance marketing teams" connected by a double-sided arrow. Above the arrow is a box labeled "Creative strategy workflow".

Evan Lee: And the way that Motion makes this come to life is that of course there's steps that we'll follow, but, more importantly, we make it easy to analyze,

Slide with the word "Analyze" and the sub-heading "Identify key drivers of creative performance". A screenshot of the Motion platform shows a "Compare Creative Groups" feature. Groups like "UGC" and "Unboxing" are listed, and a search bar is highlighted with "Stu" typed in, suggesting "Studio".

Evan Lee: visualize,

Slide with the word "Visualize" and the sub-heading "Translate insights into visual reports". It shows mock-ups of bar charts and video performance reports from the Motion platform.

Evan Lee: and then share these insights across the board, as I'm sure Kevin is going to be getting into some more.

Slide with the word "Share" and the sub-heading "Point your team in the right creative direction". It shows a mock-up of a comment being added to a creative asset in the Motion platform. The comment reads: "The lifestyle shot worked best! Let's double down on these."

Evan Lee: So that's what we got planned for today. A couple housekeeping pieces to note here.

Slide titled "Housekeeping" with three purple boxes. Box 1 is titled "Questions" with the text "Share questions and answers in the chat!". Box 2 is titled "Recording" with the text "Event is being recorded and will be made available after the event.". Box 3 is titled "We're Hiring" with the text "Apply or refer!".

Evan Lee: So first and foremost, questions. If you have any, you'll notice that in the right-hand panel there's a number of tabs, with one of them being Q&A. Please, please, please put your questions into that Q&A tab, and feel free to upvote the ones resonating with you. The second thing that I always like to call out, recording-wise: hey, the recording's going to be made available. You got team members? Send it across the board. We definitely want them to be involved. And then the last thing I'll note is a little bit of a Motion plug here. It's just, if anybody here is interested or knows of anybody who might be looking for new roles, we have a bunch that are actively available. I made my own version of an ad related to hiring that you can check out, that I put into chat. But, um, if you're interested, apply. We're excited to grow the team.

Slide with the title "Sprints with Evan" and the main topic "How to build, scale and test an unstoppable creative library". The logos for Motion and ATTN are displayed. The screen is a four-way split screen showing video feeds of Kevin Kovach (top left) and Evan Lee (bottom left), and headshots of Kevin Kovach (top right) and David Adesman (bottom right).

Evan Lee: Cool. So without further ado, wanted to get into how to build, scale and test an unstoppable creative library. And I've said that David's going to be joining us, but I want to talk about Kevin a little bit.

Slide with headshots of David Adesman and Kevin Kovach. David Adesman is listed as "Executive Vice President of Creative Services" with links to attnagency.com and his LinkedIn. Kevin Kovach is listed as "Director of Paid Social" with links to attnagency.com and his LinkedIn.

Evan Lee: So Kevin, he mentioned that this is his first time like doing a live event with everybody, but I will say that like I've known Kevin, I want to say probably more than a year now, surprisingly, right?

Kevin Kovach: Yeah, it's been, yeah, it's been a while. It's been fun. Um, I like to think at least amongst my friends, we were kind of an early adopter of Motion. And you just came at the perfect time as I was developing out a naming system that just really fit in nicely with the features you were rolling out.

Evan Lee: And Kevin, uh, is someone that I really respect for a number of different reasons. When it comes to like the media buying chops, he has 12 years of experience in the game, but more importantly, he's just like a mad scientist when it comes to naming conventions. And he's consistently pumping out some great content on his LinkedIn. So if you aren't already, please jump into LinkedIn and give Kevin a follow. He's going to continue giving us all the greatest information in the world. So that's what we got going on, okay?

Slide titled "What is Creative Strategy?". A circular flow diagram shows the steps: Research, Ideation, Briefing, Content Creation, Evaluation, Launch, Creative Analysis, and back to Research.

Evan Lee: Awesome. So now what I wanted to do here is center our conversation around the creative strategy flywheel. For anyone who's actually been to one of our events before, we talk about the creative strategy flywheel quite a bit. And what this represents are a number of steps that you can follow to ultimately make an unstoppable creative library possible. So this is where Kevin, and hopefully David when he joins, is going to walk us through, step by step, how we can start to make this come to life in your worlds. So with that, Kevin, I'd like to start it off, honestly, nice and easy. So when we're talking research and building out our personas, is there any, um, light that you can shed on like where you ultimately get started in this process with your clients?

Kevin Kovach: Yeah, absolutely. So this is where David and I will really work as a team. You know, he has more of that, uh, creative background; he ran that side of the team at MuteSix a while back and is doing so now at ATTN. So he's doing all of like the qualitative analysis, um, kind of like the traditional creative strategy work. He's going to be working on segmenting all of the creative into the three creative pillars we look at, um, emotion, education and authority. If it's videos, he'll tag themes, do all sorts of that qualitative analysis, um, and also do like social listening. On my end, I'm going to be doing more of a creative execution review, making sure that as David is going through all of these qualitative results on his end, there are no shenanigans that may have prevented ads or creative from being able to shine. Um, things like preventing the sharing of a post ID, that's huge; it can make or break an ad at times depending on the situation. So I provide that sort of context to make sure that the execution didn't really prevent anything from succeeding.

Evan Lee: So that makes a ton of sense, and I do want to unpack like the shenanigans bit a little more. So for everyone's context, Kevin is hands-on-keyboard running those ads, lives in ad accounts. And we've talked a lot about, and I think there's been a lot of chatter in our community, how the media buyer role is evolving and it's like you need to be a creative strategist or die. But Kevin is somebody who demonstrates like day in, day out that there is definitely still a media buying side of things. So Kev, I'm interested in learning more about like the shenanigans piece and understanding what the, um, the initial analysis looks like. So talk me through, like, when you inherit an account, what are you actually looking for to form the creative strategy in terms of audiences, understanding the algorithm and pieces along those lines.

Kevin Kovach: Yeah, for sure. So, I mean there are a lot of different things that you can look at that may have prevented an ad um, from succeeding. A lot of things that I've been calling out in audits for years is the testing process and how they actually will test new creative. Um, sometimes it's more like a what I call it's like a shot on goal test where they just rotate it into existing campaigns, which, you know, to be clear, still a test, but a lot of times that kind of puts things in an unfair situation because you could competing against an ad with like, I've seen this recently like 30,000 likes, comments and shares. Um, that's going to be really hard to beat with any new ad, no matter how good it is. So not having like a clean testing procedure to actually get uh the like force spend to these uh new variations in a way, um, because there's also that aspect that clients invest a lot in creative. So launching it and then maybe a week or two later being like, oh, it's spent 100 bucks. Like no one really wants to hear that for something that they've you got to take multiple shots on goal. Um, I also kind of like to call out the types of tests that you can run. So like they're the tests that get you more algorithm winners where it's kind of gladiator free for all. You just put all of the ads in a ad set or campaign or like say a DCO ad set and may the best ad win. And that's important to know, but there's also the aspect of trying to be able to extract themes. And in order to extract themes when you're testing, you need to test uh as many possible variants as once at once, um, to reliably extract those themes um and get statistical significance. So, yeah, that's kind of my playoff field in like the average test. So, and then we will have the themes that we and turn into a report. And then on my end or the buyer's end, we have our winners too that we can also just use in the account. But the whole point is to more value Facebook has beyond just raw performance. 
Because every single agency in the world is going to be promising good performance, and every agency is going to be talking about testing. But how they actually test shows how deep they get with that process. Because if you don't have volume, the themes you extract are not going to be reliable.
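To make "statistical significance" concrete, here's a minimal sketch (not ATTN's actual tooling; the theme names and counts are invented) of comparing two creative themes from a traffic test with a two-proportion z-test on click-through rate:

```python
import math

def two_proportion_z_test(clicks_a, n_a, clicks_b, n_b):
    """Two-sided z-test for a difference in click-through rate."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    pooled = (clicks_a + clicks_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Normal CDF via erf; p-value is the two-tailed area beyond |z|.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical traffic-test results: clicks and impressions per theme.
z, p = two_proportion_z_test(clicks_a=300, n_a=10_000, clicks_b=240, n_b=10_000)
print(f"z={z:.2f}, p={p:.4f}")  # a p below 0.05 suggests a real CTR gap
```

This is also why volume matters: with a tenth of the impressions, the same CTR gap would not clear significance.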

Evan Lee: Most definitely.

Evan Lee: So something I'm curious about now, Kev: we've talked about leveraging data from the past to inform what happens in the future, even beyond just paid social. Facebook being a focus group is such a good way to put it. What I'm curious about is the briefing stage, because ultimately there's going to be an asset produced, and before that happens everyone's typically aligned. So what, in your opinion, should a media buyer's contribution to the brief be?

Kevin Kovach: Yeah, that's a good question. With us, it's going to be a lot of those test results and history, because we're doing this consistently across accounts. We can aggregate, say, across a particular industry, so we have a typical best guess going into it, and we can take those tests and relay official recommendations to the team. The other aspect, for David too, is general things the algorithm likes, and making sure we're always aiming for diversity with pretty much everything creative. You want as many ad formats working as possible, as many value props working as possible, as many of anything working as possible. Because as you know, Facebook ebbs and flows; things are eventually going to slow down. The more formats you have working, the more you'll have things pick up the slack and reduce those moments of volatility where you're scrambling to replace whatever was providing the volume two or three months ago.
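That diversity point can be sanity-checked in a few lines. A rough sketch, with invented numbers, that flags when one format carries too much of the account's volume, exactly the volatility risk described here:

```python
# Hypothetical weekly conversion volume by ad format.
volume_by_format = {"video": 420, "static": 95, "carousel": 60, "ugc": 25}

total = sum(volume_by_format.values())
shares = {fmt: n / total for fmt, n in volume_by_format.items()}

# Flag concentration: if one format drives most of the volume,
# a slowdown in that format hits the whole account at once.
top_format, top_share = max(shares.items(), key=lambda kv: kv[1])
if top_share > 0.5:
    print(f"Concentration risk: {top_format} drives {top_share:.0%} of volume")
```

The 50% threshold is an arbitrary illustration; the point is simply to watch the share of volume per format over time.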

Evan Lee: And in this instance, you would reference, hey, this is probably something David handles more. So just to hone in on the roles and who's typically responsible for what.

Kevin Kovach: I don't want to speak for him. This is probably one of those moments where he would correct some of the vernacular I'm using. But I really do think media buyers should be involved in the process, and the biggest part for us is making sure that execution is going to be super clean with the creative and it's set up for success. If it's a particularly heavy investment on the client's side, we're going to guarantee that it gets spent. I learned that the hard way, going into a weekly call a few times with a big, exciting video that had only gotten about $100 in spend, and there just wasn't anything to talk about. The whole idea is to find a setup where you can test as many elements as possible, extract those themes, and send them to everybody; clients can use them too. The most exciting results have actually had clients totally pivot their messaging. An added benefit of this low-budget traffic system is that it's low risk and, like I said, low budget, so occasionally you can get people to say yes to things they wouldn't have said yes to otherwise: more instances of drawing outside the brand lines, identifying things they hadn't considered before or just assumed weren't for them. A prominent example for us is a luxury accessories brand where we ran a value prop test and they bought in. We included the core value props and also value props they didn't think were theirs, and it turns out the core value prop they had originally built the brand and website around, durability, finished third to last out of 10. Every way you measured it, it was not good: click-through rate, conversion rate, pretty much everything. And one they had not really considered, and hadn't set up their photography for, was organization.
So that was pretty exciting. It really made them stop and think, for a few weeks. Then they came back to us and started pivoting the whole brand: creative focused on organization, landing pages featured more photos of those organizational features, and durability was pushed a little further down the page. There are still people interested in it, just fewer than the people interested in organization.

Evan Lee: That's so interesting, because something I've also selfishly been curious about is Facebook being the place where you're able to access a large number of people, like you described, more or less a focus group. Facebook might tell you one thing, and I'm probably not giving a great example that'll lead into a question, but let's say you have a couple of personas, one that's Daisy Dukes and one that's Sleepy Sally, whatever it might be. We thought Daisy Dukes would be the best, but we see Sleepy Sally doing really well on Facebook, for example. Should we restructure everything around that? Or is Daisy Dukes somebody we should go after on TikTok, or on email? How do you navigate those types of insights?

Kevin Kovach: Similar to what we did with that value prop breakdown earlier, I'm going to verify that there's no particular gender or demo throwing off the results when you look at them at the aggregated level. In that earlier example, if we were just looking at gifting at the aggregated level, it would have been a clear and decisive winner. But when you actually separate it out, you see, okay, it's only a clear and decisive winner with 55-plus, and that's where the algorithm wanted to go. The true winner is going to be the one that actually activates as many of those demos as possible, and that's the one I'm most interested in. So as long as there are no shenanigans, then yeah, I'll follow up and say, okay, this might be something that's more appropriate for a different channel. But I try to be as agnostic as possible with Facebook. In the end, it doesn't matter what our opinion is; it's the algorithm's. So I'll test something that no one likes, I don't care, just in case. You never know: ugly ads do really well, and sometimes you can keep getting uglier and it still does really well. That adds to the diversity too. If you have ugly ads in the mix along with your videos, I understand from the branding perspective that concerns people, but from the algorithmic perspective it increases your reach so much, because Facebook knows the type of content people historically interact with and is more inclined to serve that to them. Which is why having that much variety is often a key element to scaling.
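The demo check described here is essentially a guard against aggregation effects. A toy sketch, with made-up numbers, where "gifting" wins in aggregate only because of the 55+ segment:

```python
# Hypothetical (clicks, impressions) per value prop, broken out by demo.
results = {
    "gifting":      {"18-34": (20, 2000), "35-54": (25, 2000), "55+": (240, 6000)},
    "organization": {"18-34": (40, 2000), "35-54": (44, 2000), "55+": (120, 6000)},
}

def ctr(clicks, imps):
    return clicks / imps

# Aggregate winner: sums across all demos.
agg = {vp: ctr(sum(c for c, _ in d.values()), sum(i for _, i in d.values()))
       for vp, d in results.items()}
agg_winner = max(agg, key=agg.get)

# Per-demo winners: who actually activates each segment.
demo_winners = {demo: max(results, key=lambda vp: ctr(*results[vp][demo]))
                for demo in results["gifting"]}

print(agg_winner)    # "gifting", carried entirely by the 55+ demo
print(demo_winners)  # organization wins 18-34 and 35-54
```

When the aggregate winner loses most of the individual demos, that's the "shenanigans" flag: the algorithm found one pocket of the audience rather than a broadly activating creative.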

Evan Lee: Like I said, it's easier said than done. You have to test into it. And if you follow the Pareto principle, you can assume 80% of creative is probably going to fail, but at least you can learn from it.

Kevin Kovach: Yeah, yeah, yeah.

Evan Lee: And I think you have a beginner's mind about it. It's just, I don't know anything, who knows if this works, but throw it in and we'll see what happens at the end of the day, right?

Kevin Kovach: Yeah. As long as we're testing responsibly, making sure it doesn't disrupt momentum in core campaigns and it's not getting such a crazy amount of budget that the opportunity cost of the test is super high, yeah, go for it. This is also, I think, a way to beneficially lead the team, because I really don't shoot down ideas. It's a way of keeping myself in check, too, because I think it's easy for buyers at any seniority level to fall into patterns or become stubborn about something. Even I fall victim to that. So the team has the freedom to test what they want, when they want, with approval from the client, obviously, as long as we test responsibly. They know I'm not going to come down on them just because I disagree with it. And those moments actually do check me. I've seen a manual carousel do well recently; I had totally written those off for like a year. So yeah, those moments are cool.

Evan Lee: Love it. Everybody, I have one last question for Kevin on my end, but just as a quick reminder, please keep upvoting and getting your questions into the Q&A tab; we're going to jump there right after this one. And one last thing: I saw Talia mentioned in the chat, are we supposed to be seeing that same slide? Yes, we did have some stuff to share, but we're running into some technical difficulties, so we're making the best of it. But Kev, the last question I have for you, I would be ashamed if I did not ask, because whenever anyone talks about naming conventions, I say I have a guy, and Kevin is who I point to. For context, everybody, Kevin is a wiz when it comes to naming conventions. So what I'm curious about is: we've worked on all of this great creative that's now produced, and like you mentioned, you have a specific way you test it into the account. How do you set up naming conventions, and how does that then carry into your analysis once the ads are live?

Kevin Kovach: Yeah, for sure. So before we met and I learned about Motion, you know, my background is in statistics and data engineering, so I was concepting out a naming system that would be very spreadsheet- and formula-friendly, because I want to be able to get the data into a database as fast and as reliably as possible and minimize the QA. If there are any data scientists in the chat, they'll know that about 90% of the time you spend doing data science is actually getting the data into a usable format. Having an expansive and reliable naming system just saves so much time, and it also allows us to provide insane depth in our reports and have them be real-time in Motion. So, you said we didn't have anything to share, but I can share my screen and walk through what David prepared. Give me a minute here.

Evan Lee: And you might have to zoom in for folks just so we can see it a little bit.

Kevin Kovach: Oh yeah.

Evan Lee: A little more. Give me a couple more if you can. I'm sorry. Yeah, yeah, yeah. 175. Yeah, there we go. There we go. I like it.

Kevin Kovach: So a detailed naming system is filled with qualitative information that you can't just pull into a column and export, and that allows us to build these real-time reports that clients have access to, to get a sense of some of the basics: what ad formats are working, where things are going in the funnel. I opened up a few tabs here so no one has to wait for Motion to load. I can't share the exact creative and give away the client, so this is a pseudonym, but pretty much across all of our accounts we'll have these four categories: always-updating reports that the client can access and go into, looking at prospecting audiences and retargeting audiences, and on the creative side, by value prop. Part of our naming system is going to have that value prop tag. We use the Facebook naming template to make it as easy and fast as possible, even though it looks very complicated and long, and is sometimes a little overwhelming for new buyers in the first week. But once they get used to the naming template, it gets pretty fast and doesn't really add much time, and it saves an insane amount of time when reporting. We essentially don't have to spend much time making creative reports anymore; as long as we're on top of our naming, all of these automatically update. We just have to go in and QA, make sure there isn't a missed character somewhere or something off. Either way, if you can save like 95% of your reporting time, that can open up a whole day. I've talked to plenty of buyers who lose days to reporting. So this just opens up...

Evan Lee: And the biggest thing there is the time. It's not like you get it back and do nothing; instead of manual grunt work putting reports together, you can shift to decision making. Okay, now I can spend time interpreting the data to know what to do next instead of just putting the data together.

Kevin Kovach: Yeah, you have a chance to sit and think critically, and it gives the brands and clients a lot to think about as well. These are not test campaigns right here; every single ad is tagged with a value prop tag, so these are core campaigns, gladiator style. I'm not controlling the spend anymore: what value props are driving the performance? And then we also have our controlled themes down here. So in sales and promotions we can break things out, you know, for Pierce and Pierce fine detergents, bonus points if you know what that's from, and break out the creative for each specific scent. There's an image style breakdown in addition to value props; we segment all of our creative by style. For images that can be model, UGC, product stylized, which is typically how we define any product shot that's been set up in a special way, and product e-com shot. We separately tag those general product e-com shots because sometimes they do surprisingly well. Like I said, just because something's ugly doesn't mean you shouldn't test it. And then there are general graphics. This is something David also has access to and will use for creative strategy. So in a way, this is where the media buyers come into creative, and it's not necessarily Facebook expertise; it could just be us being on top of naming systems. The creative team has the flexibility to go in here whenever they want and come away with learnings.

Evan Lee: Love it. One thing I have a follow-up on: all of these reports stem from the naming. Are there any examples you can share that show our audience the specific things you're tracking, if it's consistent?

Kevin Kovach: Yeah. The consistent part is going to be here. We have a style tag running through all of the image styles, and a video theme tag running through all the different video styles. David is actually in the process of combining those two tags into one, so we'll have an update here soon, and with that update I'm going to add a few more tags. One of those is the creative pillar, which is how the entire team thinks about and approaches their creative; as a reminder, those are education, emotion, and authority. So soon we'd have that ongoing breakdown here as well, along with value prop. And then honestly, we'll have some custom tags for each client. One of the things we like to do at onboarding is ask, what questions can we answer for you? Then we try to come up with a strategy to use Facebook as that focus group and actually give them the data that can help them make big, expensive decisions. Sometimes that requires a custom tag. On brands that do a lot of whitelisting or influencers, I'll add a page tag so I can label what actual page an ad is coming from. But that's not something I need on most accounts where 90% of the ads are just coming from the brand page.
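A tag structure like this lends itself to trivially parseable names. Here's a sketch with a hypothetical schema; the field order, abbreviations, and delimiter are invented for illustration, not ATTN's actual template:

```python
# Hypothetical ad-name schema: format_style_pillar_valueprop_version,
# underscore-delimited so a spreadsheet or script can split it reliably.
FIELDS = ["format", "style", "pillar", "value_prop", "version"]

def parse_ad_name(name: str) -> dict:
    parts = name.split("_")
    if len(parts) != len(FIELDS):
        raise ValueError(f"malformed ad name: {name!r}")  # the QA step
    return dict(zip(FIELDS, parts))

# Aggregate spend by value prop, the kind of breakdown the reports show.
rows = [
    ("VID_UGC_EDU_organization_v1", 1200.0),
    ("IMG_STYLIZED_EMO_durability_v2", 300.0),
    ("VID_UGC_AUTH_organization_v3", 800.0),
]
spend_by_vp = {}
for name, spend in rows:
    vp = parse_ad_name(name)["value_prop"]
    spend_by_vp[vp] = spend_by_vp.get(vp, 0.0) + spend

print(spend_by_vp)  # {'organization': 2000.0, 'durability': 300.0}
```

The strict field count is what makes the "missed character" QA cheap: a malformed name fails loudly at parse time instead of silently dropping out of a report.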

Evan Lee: Love it. And a quick plug, everyone: you can see Kevin here using Motion. Feel free to check us out and book some time so we can talk about how to make this come to life for you. Cool.

Kevin Kovach: Oh yeah, it's crazy. You get so much depth with these reports, and they don't take much time as long as you're on top of your naming so everything has that depth to it.

Evan Lee: Love it. Okay. And yeah, with that, we've had buyers take on more accounts, which positively impacts their income.

Kevin Kovach: Yeah.

Evan Lee: Hand in hand, business goals and everyone's goals being met. What we're going to do here with our last 10 minutes is work through the questions that have started to pile up, so let's do our best to get through them. I'll kick off with the one that has the most votes, 15, and hopefully it's a nice and easy lob. It's this one here.

A question from Iain Harris appears on screen: "Can you define 'share IDs'?"

Evan Lee: Can you define share IDs? I think you mentioned this pretty much immediately when we first talked.

Kevin Kovach: Yeah, I think it's super important, and honestly in a lot of cases it's even more important than structure. When you launch an ad, you're creating two IDs in the account. There's the ad ID, which is created every time you create an ad, but there's also an unpublished post made that's actually used to show the ad, and that's the post ID, or content ID; those can be used interchangeably. You can actually share that content ID, and the content ID is where all of the social proof is stored, not the ad ID. That way you can have multiple ads all actually using the same post ID, so if you're running them in 10 different audiences, all of the engagement is going to stack. That's how you see those posts on Facebook with a massive amount of social proof. I guarantee you, except in the most extreme situations, those are shared post IDs.
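To illustrate the mechanic (a toy data model, not the Marketing API): engagement accrues on the post, so every ad pointing at the same post displays the combined social proof.

```python
from collections import defaultdict

# Engagement is stored per post (content) ID, not per ad ID.
engagement_by_post = defaultdict(int)

# Three ads in different audiences; two of them share one post ID.
ads = [
    {"ad_id": "ad_1", "post_id": "post_A", "audience": "prospecting"},
    {"ad_id": "ad_2", "post_id": "post_A", "audience": "retargeting"},
    {"ad_id": "ad_3", "post_id": "post_B", "audience": "prospecting"},
]

def record_likes(post_id, likes):
    engagement_by_post[post_id] += likes

record_likes("post_A", 12_000)  # earned while ad_1 ran
record_likes("post_A", 18_000)  # earned while ad_2 ran: same post, stacks
record_likes("post_B", 500)

# Each ad displays the total engagement of the post it references.
for ad in ads:
    print(ad["ad_id"], engagement_by_post[ad["post_id"]])
# ad_1 and ad_2 both show 30000; ad_3 shows only 500
```

In practice this is done by reusing the existing post when building the new ad's creative rather than letting the platform mint a fresh unpublished post.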

Evan Lee: Awesome. Everybody, I hope that one helps. If it doesn't, just throw it into the chat really quick so we can clarify. But I wanted to jump to the second most voted question and it comes from Justin here.

A question from Justin Regis appears on screen: "Quick ?? regarding 'statistical significance' - our agency agrees on a $1k spend minimum for an 1st run adset (usually with 4-6 ads). Does that match with what Kevin uses as a benchmark?"

Evan Lee: So, quick question regarding stat sig: our agency agrees on a $1k spend minimum for a first-run ad set, typically with four to six ads. Does that match a benchmark you might use to determine relevance? Does $1k in spend hit stat sig for you?

Kevin Kovach: So I actually do one ad per ad set. And I'm not in any way saying your way is wrong, because the most important thing is that you find a way to test, and how you do it is going to vary by account, by situation, by budget. But I want to make sure every single variant gets the same amount of spend, or else spend is technically a variable. If you're calling it a single-variable test where you're just trying to test creative, but spend is also a variable, it's no longer going to be reliable. And honestly, for the traffic tests, $1k for a week is probably going to do it. Sometimes we can even get the same results at like $500, but we let it run the full week just to be sure. On the purchase side, man, that varies so, so much. I have one account right now with like an $800 AOV; to test with the purchase objective there and actually get through the learning phase, that would be a considerable investment.
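On the purchase-side cost: Meta's learning phase generally wants roughly 50 optimization events per ad set within a week, so a quick back-of-envelope (the CPA figure is an assumption, not from the talk) shows why a high-AOV product makes per-ad-set purchase testing expensive:

```python
# Rough cost for ad sets to exit Meta's learning phase (~50 optimization
# events per ad set within a week) when optimizing for purchases.
EVENTS_TO_EXIT_LEARNING = 50

def learning_phase_budget(expected_cpa: float, n_ad_sets: int = 1) -> float:
    """Estimated spend for every ad set to clear the learning phase."""
    return EVENTS_TO_EXIT_LEARNING * expected_cpa * n_ad_sets

# Hypothetical: an $800-AOV product might run a CPA somewhere around $200.
# Testing 6 variants one-ad-per-ad-set at that CPA:
print(learning_phase_budget(expected_cpa=200.0, n_ad_sets=6))  # 60000.0
```

Compare that to a $500 to $1,000 traffic test for the same six variants, and the appeal of the cheap theme-extraction track is obvious.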

Evan Lee: Cool. So we have some questions now about the testing. I think this goes hand in hand with what you're uh, with what you're talking about here.

A question from Andres Amado appears on screen: "What methodology do you prefer to test new creative frameworks? A/B tests OR run all the creatives in the same ad group?"

Evan Lee: Andres asks, hopefully I've pronounced that correctly: what methodology do you prefer to test new creative frameworks? A/B tests, or running all the creatives in the same ad group? Is it one ad per ad set, maybe with budget optimization? What are you typically running there?

Kevin Kovach: Yeah. In the ideal situation, both. Sometimes budget and performance may not allow both, but that's why I like having the traffic test: I know from doing this for a while that it's reliable, and it's not going to be a huge opportunity cost regardless of the overall budget of the account, unless the account is only a couple hundred dollars. Then on the side, I do like to identify that algorithm winner, the gladiator-style free-for-all. That can be a DCO test, depending on the amount of creative; more recently, say if I have 30 variants, I'll just launch them in an Advantage+ campaign. But I still want that traffic campaign on the side, so I have the flexibility to turn off the purchase campaign if the opportunity cost turns out to be insane and not great for performance.

Evan Lee: People are definitely curious about the traffic campaign, because Justin comes back with another question.

A question from Justin Regis appears on screen: "...so is the idea to use a Traffic test with new creative and THEN move those 'winners' over to a Conversion campaign?"

Kevin Kovach: Yeah, this is where David would have chimed in; I had to convert him, essentially, once he started. Before he joined, I was comfortable running the traffic tests just by themselves for about a year. I knew he would have questions, and he was a little skeptical, so I started running them side by side again, on accounts where it didn't negatively impact performance, and showed him three consecutive reports: look, the traffic campaign and this purchase campaign identified the same three winners. That's how I got the buy-in. The key is you have to introduce friction so you can filter out the bounces. That's why traffic campaigns aren't helpful for filling the funnel, because a lot of that traffic does bounce. But if you introduce friction and use, say, content views plus some other event as your numerator in the conversion rate formula, you can get results that map very directly to what happens in the conversion campaign. About 90% of the time, it'll identify the same top three ads. The order might change a little, but like I said earlier, I'm generating a playoff field here, not a single winner, so that same three is still valuable to me. And the benefit of the traffic test is we get those results a lot faster. Sometimes a week, two tops; three is the absolute max if we want to do the omnichannel event aspect of it.
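A sketch of that friction-adjusted metric, with invented per-ad numbers: content views plus a deeper event (add-to-cart here, as an assumption for "any sort of event") over clicks, used to rank a playoff field rather than crown one winner.

```python
# Hypothetical traffic-test results per ad: clicks, content views, add-to-carts.
ads = {
    "hook_v1": {"clicks": 900, "content_views": 400, "atc": 35},
    "hook_v2": {"clicks": 850, "content_views": 520, "atc": 60},
    "hook_v3": {"clicks": 910, "content_views": 300, "atc": 12},
    "hook_v4": {"clicks": 880, "content_views": 480, "atc": 50},
}

def friction_rate(stats):
    # Content views plus a deeper event as the numerator filters out bounces.
    return (stats["content_views"] + stats["atc"]) / stats["clicks"]

# Top three by friction-adjusted rate: the playoff field, not a single winner.
playoff_field = sorted(ads, key=lambda a: friction_rate(ads[a]), reverse=True)[:3]
print(playoff_field)  # ['hook_v2', 'hook_v4', 'hook_v1']
```

The claim being validated side by side is that this top-three set matches what the purchase campaign surfaces, even if the exact order shuffles.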

Evan Lee: That's so interesting because something I've also like selfishly been curious about is is Facebook being the place where you're able to to to access a large amount of people like you described it, it's just being like a more or less. Um, Facebook might tell you one thing, like I'm probably not giving a great example that'll lead into a question, but let's say you have a couple personas, one that's like Daisy Dukes and then sleepy Sally, whatever it might be, right? Daisy Dukes, we thought would be the best, but we see sleepy Sally doing really well on for example. But then is Daisy Dukes like, should we restructure everything around that or is it somebody we should go after on a Tik Tok? Or is it somebody we should go after on an email more? How do you, how do you uh, like navigate those type of insights?

Kevin Kovach: So similar to uh what we did with like that value prop breakdown earlier, I'm uh going to verify that there is no particular gender or demo that's throwing off these results when you're looking at them at the aggregated level. So in that earlier example, if we're just looking at gifting at the aggregated level, it would have been a clear and decisive winner. But when you actually separate it out, you see, okay, it's only actually a clear and decisive winner with 55 plus and that's where the algorithm wanted to go. Um, you know, the true winner is going to be the one that actually activates as many of those demos as possible. And that's the one I'm most interested in. So if as long as you know, there are no shenanigans, then yeah, I'll follow up and say, okay, this might be something that's more appropriate for or a different channel. Um, but I I try and be as agnostic as possible with Facebook. Like in the end, it doesn't matter what our opinion is, it's the algorithms. So I'll test something that no one likes. I don't care. Um, just in case, you never know. Ugly ads do really well and sometimes you can just keep getting uglier and it still does really well. Um, and then just adds to that diversity too because if you have like ugly ads and then with your videos, like I understand from the branding perspective that concerns people, but on the algorithmic perspective, that increases your reach so much. Um, because Facebook knows the type of content that people historically interact with and is more inclined to serve that to them. Which is why having so many um, is often a key element to scaling.

Evan Lee: I said, you know, it's easier said than done. You have to test into it. And if you follow the Pareto Pareto principle, you can assume that 80% of creative is probably going to fail. Um, but at least you can learn from it.

Kevin Kovach: Yeah, yeah, yeah.

Evan Lee: And I think you like have a beginner's mind to it because it's just like, I don't know anything. I don't think like who knows if this works, but throw it in and we'll see what happens at the end of the day, right?

Kevin Kovach: Yeah. Like if we're testing responsibly, making sure it doesn't disrupt momentum in core campaigns or it's not getting like a crazy amount of um, budget so the opportunity cost of this test is super high. Yeah, go for it. Um, this also I think is a way to beneficially lead the team because I really don't shoot down ideas. Um, you know, it's a way of also keeping myself in check because I think it's easy for buyers no matter what seniority level to fall into patterns or to like become stubborn about something. Um, you know, even I I fall victim to that. So just having the freedom to test what they want, when they want with approval from the client, obviously, as long as we test responsibly, you know, they know I'm not going to like come down on them because I just disagree with it. Um, but those moments actually, you know, they do check me. Um, I've seen a manual carousel do well recently. I had totally absolutely written those off for like a year. Um, so yeah, those moments are cool.

Evan Lee: Love it. Uh, everybody, I have one last question for Kevin on my end. Um, I need to ask it, but just as a quick reminder, please keep up voting and getting your questions into the Q&A tab. Going to be jumping there right after this one. And one last thing, I saw that Talia had mentioned in the chat, like are we supposed to be seeing that same slide? Yes, uh, we did have some stuff to share, but again, running into some technical difficulties. So we're making best as we can just on this one. But Kev, last question that I have for you, I would be ashamed if I did not ask because whenever anyone talks about naming conventions, I say I have a guy and that's where I start to point to. So for context, everybody, uh, Kevin is like a wiz when it comes to naming conventions. So something I'm curious about now is like we've worked on all of this great creative that's now produced and like you'd mentioned, you have a specific way you test into the account. So I'm curious about is like how do you set up naming conventions and how does that then correlate into your analysis after those are live?

Kevin Kovach: Yeah, for sure. So before we met, um, and figured out and I learned about motion, um, you know, my background is in like statistics and data engineering. So I was thinking of and concepting out a naming system that would be very spreadsheet and formula friendly because I want to be able to into a database as fast as possible and reliably as possible, minimize the QA. If there are any like data scientists in the chat, like they'll know about 90% of the time you spend doing data science is actually getting the into a usable format. And having a, you know, expansive and reliable naming system just saves so much time. And then also allows us to provide like insane depth to our reports and have them be like real time in motion. Um, so you say we didn't have anything to share, but I can share my screen. Hey, let's do it. I can walk through what uh David prepared. We'll give me a minute here.

Evan Lee: And you might have to zoom in for folks just so we can see it a little bit.

Kevin Kovach: Oh yeah.

Evan Lee: A little more. Give me a couple more if you can. I'm sorry. Yeah, yeah, yeah. 175. Yeah, there we go. There we go. I like it.

Kevin Kovach: So with a detailed naming system, it's filled with qualitative information that we can't like just simply pull into a column and export. And that allows us to build these real-time reports that clients can have access to and get a sense of, you know, some of the more basic things like, you know, what ad formats are working, you know, here I opened up a few tabs because uh so no one had to wait for motion to load where things are going in the funnel. Um, I did I can't um, share the exact creative and give away the client. Um, this is a pseudonym, but pretty much across all of our we will have these four categories. reports that the client can always access and go into that will be always updating, looking at prospecting audiences, retargeting audiences, but on the creative side, by um, value prop. So part of our naming system is going to have that value prop tag. And we use the uh Facebook naming template to make it as easy and fast as possible even though it, you know, looks very, very complicated and long and, you know, sometimes with new buyers a little overwhelming the first week. But once they get used to using the naming template and this gets pretty fast and doesn't really add that much more time, but it saves an insane amount of time when reporting. So we essentially don't have to spend that much time making creative reports anymore. As long as we're on top of our all of these are automatically going to update. Um, and we just kind of have to go in and and QA, make sure we didn't there isn't like a missed character somewhere or something off. But either way that, you know, if you can save like 95% of your reporting time, you know, that can open up a whole day. Like I've talked to plenty of buyers who lose days to reporting. Um, so this just opens up

Evan Lee: And the biggest thing there is just like losing time, losing time, it's not like you just get it back and do nothing. It's like instead of just manual like grunt work and putting together reports, you can now shift to decision making at the end of the day. It's like, okay, now I can spend time interpreting the data to know what to do next instead of just putting the data together ultimately.

Kevin Kovach: Yeah, you know, you have a chance to sit and kind of think critically. Um, and really just it's a it gives the brands and clients a lot to think about as well. Like these are these are not test campaigns right here. Like every single ad is tagged with a value prop tag. So these are like core campaigns, gladiator style. I'm not controlling the spend anymore. What value props are driving the performance? Um, and then also just kind of when we do have our controlled theme down here. So in the sales and promotions, we can break things out, you know, for Pierce and Pierce fine detergents, bonus points if you know what that's from. Um, and break out the creative for each specific scent. Image style breakdown in addition to value props, like we segment all of our creative by style. So this is for images. So it can be model UGC, product stylized, that's typically how, you know, we define any sort of product shot that's been set up in a special way. product e-com shot, and then we separately will tag those uh general product e-com shots because sometimes they do surprisingly well. Um, I said, you know, just because something's ugly, you know, you should test it. Um, and then just general graphics. So this is something that David also has access to and will be creative strategy. So in a way, this is where the where the media buyers come in to like creative. But it's not necessarily like Facebook, it could also just be us being on top of naming systems. So the creative team flexibility to go in here whenever they want and come away with learnings.

Evan Lee: Love it. One thing I have a follow-up on: all of these reports stem from the naming. Are there any examples you can share that showcase to our audience the specific things you're tracking, if it's consistent?

Kevin Kovach: Yeah. So the consistent part is going to be here. We have a style tag that runs through all of the image styles, and then we also have a video theme tag running through all the different video styles. David is actually in the process of combining those two tags into one, so we'll have an update here soon. And with that update, I'm going to add a few more tags. One of those is going to be the creative pillar, which is how the entire team is going to be thinking about and approaching their creative; as a reminder, those are education, emotion, and authority. So soon we'll have that ongoing breakdown here as well, plus value prop. And then honestly, we'll have some custom tags for each client. One of the things we like to do on onboarding is ask: what questions can we answer for you? And then try to come up with a strategy to use Facebook as that focus group, to actually give them data that can help them make big, expensive decisions. Sometimes that will require a custom tag. So on brands that do a lot of whitelisting or influencers, I'll add a page tag so I can label what actual page each ad is coming from. But that's not something I need on most accounts if, you know, 90% of the ads are just coming from the brand page.
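Kevin's actual naming template isn't shown in the transcript, but the mechanics he describes can be sketched: each tag in the ad name becomes a column that reporting tools can group by. A minimal illustration, assuming a hypothetical underscore-delimited `key=value` format (the tag keys `vp`, `style`, `pillar`, and `page` are invented stand-ins for the tags discussed):

```python
def parse_ad_name(ad_name: str) -> dict:
    """Split a structured ad name into a dict of tag columns.

    Assumes segments are underscore-separated key=value pairs;
    segments without "=" are ignored.
    """
    tags = {}
    for segment in ad_name.split("_"):
        if "=" in segment:
            key, value = segment.split("=", 1)
            tags[key] = value
    return tags

ad = "vp=gifting_style=modelUGC_pillar=emotion_page=brand"
print(parse_ad_name(ad))
# → {'vp': 'gifting', 'style': 'modelUGC', 'pillar': 'emotion', 'page': 'brand'}
```

Because every ad name parses into the same columns, a report by value prop, style, or pillar is just a group-by on these keys, which is what makes the "95% of reporting time saved" claim plausible.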

Evan Lee: Love it. And a quick plug for anyone: you can see Kevin here using Motion. Feel free to check us out and book some time so we can talk about how to make this come to life for you. Cool.

Kevin Kovach: Oh yeah, it's crazy, because you just get so much depth with these reports, and they don't take that much time as long as you're on top of your naming. It all has that depth to it.

Evan Lee: Love it. Okay. And with that, you know, we've had buyers take on more accounts, which positively impacts their income.

Kevin Kovach: Yeah.

Evan Lee: Hand in hand, everyone's goals being met. What I want to do here with our last 10 minutes is get through the questions that have started to pile up, so let's do our best. I'll kick off with the one that has the most votes, 15, and hopefully it's a nice and easy lob. It's this one here.

A question from Iain Harris appears on screen: "Can you define 'share IDs'?"

Evan Lee: Can you define share IDs? I think you mentioned this pretty much immediately when we first talked.

Kevin Kovach: Yeah, I think it's super important, and honestly, in a lot of cases it's even more important than structure. So when you launch an ad, you're going to be creating two IDs in the account. There's the ad ID, which is created every time you create an ad, but there's also an unpublished post made that's actually used to show the ad, and that's the content ID, or the post ID; those can be used interchangeably. You can actually share that content ID, and the content ID is where all of the social proof is stored, not the ad ID. That way you can have multiple ads that are all actually using the same ID. So if you're running them in like ten different audiences, all of the engagement is going to stack. That's how you see those posts on Facebook with a massive amount of social proof. I guarantee you, except in the most extreme situations, those are shared post IDs.

Evan Lee: Awesome. Everybody, I hope that one helps. If it doesn't, just throw it into the chat really quick so we can clarify. But I wanted to jump to the second most voted question and it comes from Justin here.

A question from Justin Regis appears on screen: "Quick ?? regarding 'statistical significance' - our agency agrees on a $1k spend minimum for an 1st run adset (usually with 4-6 ads). Does that match with what Kevin uses as a benchmark?"

Evan Lee: So, quick question regarding stat sig: our agency agrees on a $1k spend minimum for the first ad set run, typically with four to six ads. Does that match a benchmark that you might use to determine relevance? So $1k spend, is that stat sig for you?

Kevin Kovach: So I actually do one ad per ad set. And I'm not in any way saying that way is wrong, because the most important thing is that you find a way to test, and how you do it is going to vary by account, by situation, by budget. But I want to make sure that every single variant gets the same amount of spend, or else spend is technically going to be a variable. If you're calling it a single-variable test and you're just trying to test creative, but then spend is also a variable, it's no longer going to be reliable. And honestly, for the traffic tests, $1k for a week is probably going to do it. Sometimes we can even get the same results at like $500, but just let it run for the full week to be sure. On the purchase side, man, that varies so, so much. I have one account right now with like an $800 AOV. To test with the purchase objective there and actually get through the learning phase, that would be a considerable investment.
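Kevin doesn't name the significance test he uses; the equal-spend rule he describes exists so that each variant's conversion rate is measured on a comparable sample. As a rough sketch of what "statistically significant" can mean in this setting, here is a standard two-proportion z-test (one common choice, not necessarily ATTN's exact method; the counts are invented):

```python
from math import sqrt, erf

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided two-proportion z-test on conversion counts.

    conv_* = conversions, n_* = clicks (or sessions) per variant.
    Returns (z statistic, p-value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Normal CDF via erf; two-sided p-value.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Variant A converts 60/1000, variant B 40/1000, at equal spend.
z, p = two_proportion_z(60, 1000, 40, 1000)
print(round(z, 2), round(p, 3))  # → 2.05 0.04
```

With unequal spend, `n_a` and `n_b` would diverge and the comparison would confound creative quality with delivery volume, which is the point Kevin is making about spend becoming a second variable.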

Evan Lee: Cool. So we have some questions now about the testing. I think this goes hand in hand with what you're talking about here.

A question from Andres Amado appears on screen: "What methodology do you prefer to test new creative frameworks? A/B tests OR run all the creatives in the same ad group?"

Evan Lee: Andres asks, hopefully I've pronounced that correctly: what methodology do you prefer to test new creative frameworks? A/B tests, or running all the creatives in the same ad group? Is it one ad per ad set, maybe with budget optimization? What are you typically running there?

Kevin Kovach: Yeah. In the ideal situation, both. Sometimes budget and performance may not allow both, but that's why I like having the traffic test: from doing this for a while, I know it's reliable and it's not going to be a huge opportunity cost, regardless of the overall budget of the account, unless it's like a couple hundred dollars. And then on the side, I do like to identify that algorithm winner with the gladiator-style free-for-all. That can be a DCO test, depending on the amount of creative. More recently, say if I have like 30 variants, I've been just launching them in an Advantage+ campaign. But I still want that traffic campaign on the side, so I have the flexibility to turn off that purchase campaign if I find the opportunity cost is insane and it's not really great for performance.

Evan Lee: So people are definitely curious about the traffic campaign, because Justin comes in with another question:

A question from Justin Regis appears on screen: "...so is the idea to use a Traffic test with new creative and THEN move those 'winners' over to a Conversion campaign?"

Kevin Kovach: Yeah. This is where I had to convert David, essentially, once he started. Before he started, I was comfortable running the traffic tests just by themselves for like a year. I knew he would have some questions, and he was a little skeptical. So I started running them side by side again, on accounts where it didn't negatively impact performance, and showed him three consecutive reports: look, the traffic campaign and this purchase campaign identified the same three winners. That's how I got that buy-in. The key is you have to introduce friction so you can filter out the bounces. That's why traffic campaigns aren't helpful for filling the funnel, because a lot of that traffic does bounce. But if you introduce friction and use, say, content views plus any sort of event as your numerator in the conversion rate formula, you can actually get very direct results as to what happens in the conversion campaign. Ninety percent of the time, it'll identify the top three ads. The order might change a little bit; like I said earlier, I'm generating a playoff field here, not just a single winner, so that same three is still going to be valuable to me. But the benefit of the traffic test is we get those results a lot faster. Sometimes a week, maybe two tops; three is the absolute max if we want to do that omnichannel event aspect of it.
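The "introduce friction" scoring can be sketched as a rate of deeper-funnel actions per click, so bounced clicks drag an ad down. The event mix below (content views plus add-to-carts) and all the numbers are assumptions for illustration; Kevin only specifies "content views plus any sort of event":

```python
# Hypothetical traffic-test stats per ad (invented numbers).
ads = {
    "ad_a": {"clicks": 1200, "content_views": 540, "add_to_carts": 90},
    "ad_b": {"clicks": 1100, "content_views": 300, "add_to_carts": 40},
}

def friction_rate(stats: dict) -> float:
    """Deeper-funnel actions per click: bounced clicks count for nothing."""
    return (stats["content_views"] + stats["add_to_carts"]) / stats["clicks"]

# Rank ads by friction rate, best first, to build the "playoff field".
ranking = sorted(ads, key=lambda a: friction_rate(ads[a]), reverse=True)
print(ranking)  # → ['ad_a', 'ad_b']
```

The ranking, not the absolute rate, is what gets carried into the purchase campaign, which matches Kevin's point that the traffic test predicts the same top three even if their order shifts.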

Evan Lee: That's so interesting, because something I've also selfishly been curious about is Facebook being the place where you're able to access a large amount of people, like you described it, more or less as a focus group. But Facebook might tell you one thing. I'm probably not giving a great example that'll lead into a question, but let's say you have a couple of personas, one that's like Daisy Dukes and then Sleepy Sally, whatever it might be, right? Daisy Dukes we thought would be the best, but we see Sleepy Sally doing really well on Facebook, for example. Should we restructure everything around that? Or is that somebody we should go after on TikTok? Or somebody we should go after more on email? How do you navigate those types of insights?

Kevin Kovach: So, similar to what we did with that value prop breakdown earlier, I'm going to verify that there is no particular gender or demo that's throwing off the results when you look at them at the aggregated level. In that earlier example, if we were just looking at gifting at the aggregated level, it would have been a clear and decisive winner. But when you actually separate it out, you see, okay, it's only actually a clear and decisive winner with 55-plus, and that's where the algorithm wanted to go. The true winner is going to be the one that actually activates as many of those demos as possible, and that's the one I'm most interested in. So as long as there are no shenanigans, then yeah, I'll follow up and say, okay, this might be something that's more appropriate for a different channel. But I try to be as agnostic as possible with Facebook. In the end, it doesn't matter what our opinion is; it's the algorithm's. So I'll test something that no one likes. I don't care, just in case, because you never know. Ugly ads do really well, and sometimes you can just keep getting uglier and it still does really well. And it adds to that diversity too. If you have ugly ads in there alongside your videos, I understand that concerns people from the branding perspective, but from the algorithmic perspective, that increases your reach so much, because Facebook knows the type of content that people historically interact with and is more inclined to serve it to them. Which is why having so many different styles is often a key element to scaling.
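The demo check Kevin describes guards against an aggregation reversal: a creative can "win" overall only because the algorithm concentrated its spend in one demo. A sketch with invented numbers (not client data; CPA = spend / purchases):

```python
from collections import defaultdict

# Rows: (value_prop, demo, spend, purchases). All numbers invented.
results = [
    ("gifting",  "18-34", 200,  4),
    ("gifting",  "55+",   800, 40),
    ("everyday", "18-34", 500, 20),
    ("everyday", "55+",   500, 18),
]

def cpa_table(rows, by_demo=False):
    """Cost per acquisition, aggregated overall or per (value prop, demo)."""
    agg = defaultdict(lambda: [0.0, 0])
    for vp, demo, spend, purchases in rows:
        key = (vp, demo) if by_demo else vp
        agg[key][0] += spend
        agg[key][1] += purchases
    return {k: s / p for k, (s, p) in agg.items()}

print(cpa_table(results))                # "gifting" wins on aggregate CPA
print(cpa_table(results, by_demo=True))  # but only because of the 55+ demo
```

Here gifting's aggregate CPA beats everyday's, yet in the 18-34 demo everyday is cheaper; the aggregate winner was carried by where the algorithm chose to spend, which is exactly the "shenanigans" the per-demo view exposes.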

Evan Lee: Like you said, it's easier said than done. You have to test into it. And if you follow the Pareto principle, you can assume that 80% of creative is probably going to fail, but at least you can learn from it.

Kevin Kovach: Yeah, yeah, yeah.

Evan Lee: And I think you have a beginner's mind about it. It's like: I don't know anything, who knows if this works, but throw it in and we'll see what happens at the end of the day, right?

Kevin Kovach: Yeah. As long as we're testing responsibly, making sure it doesn't disrupt momentum in core campaigns and it's not getting a crazy amount of budget where the opportunity cost of the test is super high, yeah, go for it. I also think this is a way to beneficially lead the team, because I really don't shoot down ideas. It's a way of keeping myself in check, too, because I think it's easy for buyers at any seniority level to fall into patterns or to become stubborn about something. Even I fall victim to that. So the team has the freedom to test what they want, when they want, with approval from the client, obviously, as long as we test responsibly. They know I'm not going to come down on them just because I disagree with it. And those moments actually do check me. I've seen a manual carousel do well recently, and I had totally written those off for like a year. So yeah, those moments are cool.

Evan Lee: Love it. Everybody, I have one last question for Kevin on my end, but just as a quick reminder, please keep upvoting and getting your questions into the Q&A tab; I'm going to jump there right after this one. And one last thing: I saw that Talia mentioned in the chat, are we supposed to be seeing that same slide? Yes, we did have some stuff to share, but we're running into some technical difficulties, so we're making the best of it with this one. But Kev, the last question I have for you, I would be ashamed if I did not ask, because whenever anyone talks about naming conventions, I say I have a guy, and that's where I start to point. For context, everybody, Kevin is a whiz when it comes to naming conventions. So what I'm curious about is: we've worked on all of this great creative that's now produced, and like you mentioned, you have a specific way you test it into the account. How do you set up naming conventions, and how does that then correlate into your analysis after those ads are live?

Kevin Kovach: Yeah, for sure. So before we met and I learned about Motion, you know, my background is in statistics and data engineering. So I was concepting out a naming system that would be very spreadsheet- and formula-friendly, because I wanted to be able to get it into a database as fast and as reliably as possible and minimize the QA. If there are any data scientists in the chat, they'll know that about 90% of the time you spend doing data science is actually getting the data into a usable format. Having an expansive and reliable naming system just saves so much time, and it also allows us to provide insane depth in our reports and have them be real time in Motion. So you said we didn't have anything to share, but I can share my screen and walk through what David prepared. Give me a minute here.

Evan Lee: And you might have to zoom in for folks just so we can see it a little bit.

Kevin Kovach: Oh yeah.

Evan Lee: A little more. Give me a couple more if you can, sorry. Yeah, 175. There we go. I like it.

Kevin Kovach: So with a detailed naming system, it's filled with qualitative information that we can't like just simply pull into a column and export. And that allows us to build these real-time reports that clients can have access to and get a sense of, you know, some of the more basic things like, you know, what ad formats are working, you know, here I opened up a few tabs because uh so no one had to wait for motion to load where things are going in the funnel. Um, I did I can't um, share the exact creative and give away the client. Um, this is a pseudonym, but pretty much across all of our we will have these four categories. reports that the client can always access and go into that will be always updating, looking at prospecting audiences, retargeting audiences, but on the creative side, by um, value prop. So part of our naming system is going to have that value prop tag. And we use the uh Facebook naming template to make it as easy and fast as possible even though it, you know, looks very, very complicated and long and, you know, sometimes with new buyers a little overwhelming the first week. But once they get used to using the naming template and this gets pretty fast and doesn't really add that much more time, but it saves an insane amount of time when reporting. So we essentially don't have to spend that much time making creative reports anymore. As long as we're on top of our all of these are automatically going to update. Um, and we just kind of have to go in and and QA, make sure we didn't there isn't like a missed character somewhere or something off. But either way that, you know, if you can save like 95% of your reporting time, you know, that can open up a whole day. Like I've talked to plenty of buyers who lose days to reporting. Um, so this just opens up

Evan Lee: And the biggest thing there is just like losing time, losing time, it's not like you just get it back and do nothing. It's like instead of just manual like grunt work and putting together reports, you can now shift to decision making at the end of the day. It's like, okay, now I can spend time interpreting the data to know what to do next instead of just putting the data together ultimately.

Kevin Kovach: Yeah, you know, you have a chance to sit and kind of think critically. Um, and really just it's a it gives the brands and clients a lot to think about as well. Like these are these are not test campaigns right here. Like every single ad is tagged with a value prop tag. So these are like core campaigns, gladiator style. I'm not controlling the spend anymore. What value props are driving the performance? Um, and then also just kind of when we do have our controlled theme down here. So in the sales and promotions, we can break things out, you know, for Pierce and Pierce fine detergents, bonus points if you know what that's from. Um, and break out the creative for each specific scent. Image style breakdown in addition to value props, like we segment all of our creative by style. So this is for images. So it can be model UGC, product stylized, that's typically how, you know, we define any sort of product shot that's been set up in a special way. product e-com shot, and then we separately will tag those uh general product e-com shots because sometimes they do surprisingly well. Um, I said, you know, just because something's ugly, you know, you should test it. Um, and then just general graphics. So this is something that David also has access to and will be creative strategy. So in a way, this is where the where the media buyers come in to like creative. But it's not necessarily like Facebook, it could also just be us being on top of naming systems. So the creative team flexibility to go in here whenever they want and come away with learnings.

Evan Lee: Love it. Uh, one thing I I have a follow up on is just like all of these reports stem from the naming. Is there any like examples or something you can share that showcases to our audience specific things that you're tracking if it is consistent?

Kevin Kovach: Yeah. So the consistent part is going to be here. So we have a style tag that's going to be going through all of the image styles. And then we also have a video theme tag going through all the different video styles. David actually is in the process of combining those two tags into one. So we'll be having an update here soon. And with that update, I'm going to add a few more tags. So, um, one of those is going to be the creative pillar, which is how the entire team is going to be thinking about and approaching their creative. as a reminder, we're going to be education, emotion and authority. So soon we would have that ongoing breakdown here as well. Value prop. And then honestly, we'll have some custom tags for each client. So one of the things we like to do on onboarding is like, what questions can we answer for you? And then try and come up with a strategy to use Facebook as that focus group to actually give them the data that can help them make big expensive decisions. So sometimes that will require like a custom tag. Um, so on brands that do like a lot of white listing or influencers, I'll add like a page tag. So I can you know, label what actual page is this coming from. But that's not something I need on most accounts if, you know, 90% of the ads are just coming from the uh the brand page.

Evan Lee: Love it. And if anyone plug, you can see Kevin here using motion. Feel free to check us out. Uh, book some time so you can talk about how to make this come to life for you all. Cool.

Kevin Kovach: Oh yeah, it it's crazy. Um, because I mean, like you just get so much depth with these reports and they don't take that much time as long as you're on top of your has that uh that depth to it.

Evan Lee: Love it. Okay. And yeah, with that, you know, we've had buyers take on more accounts, which uh positively impacts their income.

Kevin Kovach: Yeah.

Evan Lee: Hand in hand, business and goal, business, everyone's goals being met. to do here with our last 10 minutes is just a bunch of questions that have started to pile up. So let's do our best to try and get through them. Um, but I think like where I'll kick off is the one that has the most votes, 15, and hopefully it's a nice and easy lob. is one here.

A question from Iain Harris appears on screen: "Can you define 'share IDs'?"

Evan Lee: Can you define share IDs? And I think you had mentioned this like pretty immediately that we had talked about.

Kevin Kovach: Yeah, so I I think it's super, super important and I think it honestly in a lot of cases it is even more important than structure. Um, so when we say when you launch an ad, um, you're going to be creating two IDs in the account. There there's going to be the ad which is going to be created every time you create an ad, but also there's going to be an unpublished post made that's actually used to show the ad and that's where the or the or the post ID, those can be used interchangeably. And you can actually share that content ID and the content ID is where all of the social proof is stored, not the ad ID. So that way you can have, you know, multiple ads that are all actually using the same ID. So if you're running them in like 10 different audiences, all of the engagements going to stack. So that's how you see those posts in Facebook massive amount of social proof. Like I guarantee you, except in the most extreme situations, those are shared post IDs.

Evan Lee: Awesome. Everybody, I hope that one helps. If it doesn't, just throw it into the chat really quick so we can clarify. But I wanted to jump to the second most voted question and it comes from Justin here.

A question from Justin Regis appears on screen: "Quick ?? regarding 'statistical significance' - our agency agrees on a $1k spend minimum for an 1st run adset (usually with 4-6 ads). Does that match with what Kevin uses as a benchmark?"

Evan Lee: So quick question regarding stat sig. Our agency agrees that 1k spend minimum for the first ad set run with typically four to six ads. Does that match a benchmark that you might use to determine relevance? So 1k spend, does that stat sig for you?

Kevin Kovach: So I actually do one ad per ad set. Um, yeah, I know it looks like I actually do I mean I I I do test this way. I'm not in any way saying it's wrong because you know, the most important thing is you find a way to test and you know, how you do it is going to vary by account, by situation, by budget. Um, but I want to make sure that every single variant gets the same amount of spend or else spend is going to be a technically a variable and if you're calling it a single variable test and you're just trying to test creative, but then spend is also a variable, it's no longer going to be reliable. Um, and honestly, for the traffic tests, you know, 1k is actually, you know, 1k for a week, you know, that's probably going to do it. Um, sometimes we can even get the get it the same results at like $500, but just let it run for the full week just to be sure. On the purchase side, man, that varies so, so, so much. Um, I have one account right now that's like a $800 AOV. to test that to to test with like the purchase objective there and actually want to get through the learning phase, that would be a considerable investment.

Evan Lee: Cool. So we have some questions now about the testing. I think this goes hand in hand with what you're uh, with what you're talking about here.

A question from Andres Amado appears on screen: "What methodology do you prefer to test new creative frameworks? A/B tests OR run all the creatives in the same ad group?"

Evan Lee: Andreas asks, hopefully I've pronounced that correctly. What methodology do you prefer to test new creative frameworks? or run all the creatives in the same ad group? one ad per ad set, maybe it's like budget optimization. Are you typically running there?

Kevin Kovach: Yeah. In the ideal situation, both. Um, you know, sometimes and performance may not allow both, but that's why I like having the traffic test because I know by doing this for a while, it's reliable and it's not going to be like an a huge opportunity cost, you know, regardless of like the overall budget of the account unless it's like a couple hundred dollars. Um, and then on the side, I do like to identify that algorithm winner, that gladiator style free for all. That can be a dynamic DCO test, um, depending on the amount of creative. More recently, I've been just, say if I have like 30 variants, just launch them in an advantage plus campaign. Um, but I still want that traffic campaign on the side, so I have the flexibility to turn off that purchase campaign and find that the opportunity cost is insane and not really um, great for performance.

Evan Lee: So people are definitely curious about the traffic campaign because Justin comes with another question of just like, so,

A question from Justin Regis appears on screen: "...so is the idea to use a Traffic test with new creative and THEN move those 'winners' over to a Conversion campaign?"

Kevin Kovach: Yeah, this is where David would have been really, you know, I had to convert him essentially once he started. Um, so before he started, I was comfortable running the traffic tests, um, just by themselves for like a year. Um, I knew he would have some questions, um, and he was a little skeptical. So I started running them side by side again on accounts that, you know, was didn't negatively impact performance and showed him, you know, three consecutive reports, look, traffic campaign and this purchase campaign identified the same three winners. Um, so that's how I got that buy in. The key is you have to introduce friction. So you can kind of filter out the bouncing. And that's why traffic campaigns aren't helpful for um, filling the funnel because a lot of them do bounce, but if you introduce friction and use say content views plus any sort of event, um, as your numerator in the uh conversion rate formula, you can actually get very direct results as to what happens in the conversion campaign. Um, 90% of the time, it'll identify like the top three ads. That order might change a little bit. Like I said earlier, I'm generating a playoff field here, not just like a single winner. So that same three is still going to be valuable to me, but the benefits of the traffic test is we get those results a lot faster. Sometimes a week, maybe a two two tops, three is the absolute max if we want to do that omni channel um, event um, aspect of that.

Evan Lee: That's so interesting because something I've also like selfishly been curious about is is Facebook being the place where you're able to to to access a large amount of people like you described it, it's just being like a more or less. Um, Facebook might tell you one thing, like I'm probably not giving a great example that'll lead into a question, but let's say you have a couple personas, one that's like Daisy Dukes and then sleepy Sally, whatever it might be, right? Daisy Dukes, we thought would be the best, but we see sleepy Sally doing really well on for example. But then is Daisy Dukes like, should we restructure everything around that or is it somebody we should go after on a Tik Tok? Or is it somebody we should go after on an email more? How do you, how do you uh, like navigate those type of insights?

Kevin Kovach: So similar to uh what we did with like that value prop breakdown earlier, I'm uh going to verify that there is no particular gender or demo that's throwing off these results when you're looking at them at the aggregated level. So in that earlier example, if we're just looking at gifting at the aggregated level, it would have been a clear and decisive winner. But when you actually separate it out, you see, okay, it's only actually a clear and decisive winner with 55 plus and that's where the algorithm wanted to go. Um, you know, the true winner is going to be the one that actually activates as many of those demos as possible. And that's the one I'm most interested in. So if as long as you know, there are no shenanigans, then yeah, I'll follow up and say, okay, this might be something that's more appropriate for or a different channel. Um, but I I try and be as agnostic as possible with Facebook. Like in the end, it doesn't matter what our opinion is, it's the algorithms. So I'll test something that no one likes. I don't care. Um, just in case, you never know. Ugly ads do really well and sometimes you can just keep getting uglier and it still does really well. Um, and then just adds to that diversity too because if you have like ugly ads and then with your videos, like I understand from the branding perspective that concerns people, but on the algorithmic perspective, that increases your reach so much. Um, because Facebook knows the type of content that people historically interact with and is more inclined to serve that to them. Which is why having so many um, is often a key element to scaling.

Evan Lee: I said, you know, it's easier said than done. You have to test into it. And if you follow the Pareto Pareto principle, you can assume that 80% of creative is probably going to fail. Um, but at least you can learn from it.

Kevin Kovach: Yeah, yeah, yeah.

Evan Lee: And I think you like have a beginner's mind to it because it's just like, I don't know anything. I don't think like who knows if this works, but throw it in and we'll see what happens at the end of the day, right?

Kevin Kovach: Yeah. Like if we're testing responsibly, making sure it doesn't disrupt momentum in core campaigns or it's not getting like a crazy amount of um, budget so the opportunity cost of this test is super high. Yeah, go for it. Um, this also I think is a way to beneficially lead the team because I really don't shoot down ideas. Um, you know, it's a way of also keeping myself in check because I think it's easy for buyers no matter what seniority level to fall into patterns or to like become stubborn about something. Um, you know, even I I fall victim to that. So just having the freedom to test what they want, when they want with approval from the client, obviously, as long as we test responsibly, you know, they know I'm not going to like come down on them because I just disagree with it. Um, but those moments actually, you know, they do check me. Um, I've seen a manual carousel do well recently. I had totally absolutely written those off for like a year. Um, so yeah, those moments are cool.

Evan Lee: Love it. Uh, everybody, I have one last question for Kevin on my end. Um, I need to ask it, but just as a quick reminder, please keep up voting and getting your questions into the Q&A tab. Going to be jumping there right after this one. And one last thing, I saw that Talia had mentioned in the chat, like are we supposed to be seeing that same slide? Yes, uh, we did have some stuff to share, but again, running into some technical difficulties. So we're making best as we can just on this one. But Kev, last question that I have for you, I would be ashamed if I did not ask because whenever anyone talks about naming conventions, I say I have a guy and that's where I start to point to. So for context, everybody, uh, Kevin is like a wiz when it comes to naming conventions. So something I'm curious about now is like we've worked on all of this great creative that's now produced and like you'd mentioned, you have a specific way you test into the account. So I'm curious about is like how do you set up naming conventions and how does that then correlate into your analysis after those are live?

Kevin Kovach: Yeah, for sure. Before we met and I learned about Motion, my background was in statistics and data engineering, so I was concepting out a naming system that would be very spreadsheet- and formula-friendly, because I want to be able to get the data into a database as fast and as reliably as possible and minimize the QA. If there are any data scientists in the chat, they'll know that about 90% of the time you spend doing data science is actually getting the data into a usable format. Having an expansive, reliable naming system saves so much time, and it also lets us provide insane depth in our reports and have them update in real time in Motion. You said we didn't have anything to share, but I can share my screen and walk through what David prepared. Give me a minute here.
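Editor's note: Kevin's "spreadsheet- and formula-friendly" goal comes down to ad names that parse mechanically into structured records. As a rough illustration only (the delimiter, tag keys, and values below are assumptions, not ATTN's actual template), a key-value naming scheme might parse like this:

```python
def parse_ad_name(ad_name: str, delimiter: str = "_") -> dict:
    """Split a tagged ad name like 'vp-gifting_style-ugc_pillar-emotion'
    into a flat record: {'vp': 'gifting', 'style': 'ugc', 'pillar': 'emotion'}."""
    record = {}
    for tag in ad_name.split(delimiter):
        key, sep, value = tag.partition("-")
        if sep:  # skip malformed tags instead of crashing; QA catches them later
            record[key] = value
    return record
```

Because every ad name round-trips into a record like this, exported performance rows can be joined to their tags with no manual labeling, which is what makes the deep real-time reports possible.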

Evan Lee: You might have to zoom in for folks so we can see it a little better.

Kevin Kovach: Oh yeah.

Evan Lee: A little more. Give me a couple more clicks if you can, sorry. Yeah, 175. There we go. I like it.

Kevin Kovach: So a detailed naming system is filled with qualitative information that you can't simply pull into a column and export, and that lets us build these real-time reports that clients can access to get a sense of the basics: which ad formats are working, where things are going in the funnel. I opened up a few tabs here so no one has to wait for Motion to load. I can't share the exact creative and give away the client, so this is a pseudonym, but pretty much across all of our accounts we'll have these four categories of reports that the client can always go into and that are always updating, looking at prospecting audiences and retargeting audiences, but on the creative side, breakdowns by value prop. Part of our naming system is that value prop tag. We use the Facebook naming template to make it as easy and fast as possible. It looks very complicated and long, and it's sometimes a little overwhelming for new buyers in the first week, but once they get used to the template it gets pretty fast and doesn't really add much time, and it saves an insane amount of time in reporting. We essentially don't have to spend much time making creative reports anymore. As long as we're on top of our naming, all of these update automatically, and we just have to go in and QA, make sure there isn't a missed character or something off somewhere. Either way, if you can save 95% of your reporting time, that can open up a whole day. I've talked to plenty of buyers who lose days to reporting, so this just opens that up.
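Editor's note: the one manual step Kevin keeps is QA, catching a missed character or a malformed tag before it corrupts the reports. Under the same assumed key-value naming scheme (the required keys here are hypothetical), that check might look like:

```python
REQUIRED_KEYS = {"vp", "style"}  # hypothetical required tag keys

def qa_ad_names(ad_names):
    """Return {ad_name: missing_keys} for every ad name that fails QA,
    i.e. is missing one or more required tag keys."""
    problems = {}
    for name in ad_names:
        keys = {tag.partition("-")[0] for tag in name.split("_") if "-" in tag}
        missing = REQUIRED_KEYS - keys
        if missing:
            problems[name] = missing
    return problems
```

Running this over an export turns "hunt for the typo" into a single automated pass that lists exactly which ads broke the convention.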

Evan Lee: And the biggest thing is what you do with the time you get back; it's not like you just do nothing with it. Instead of manual grunt work putting reports together, you can shift to decision making. It's like, okay, now I can spend time interpreting the data to know what to do next instead of just assembling it.

Kevin Kovach: Yeah, you get a chance to sit and think critically, and it gives the brands and clients a lot to think about as well. These are not test campaigns here; every single ad is tagged with a value prop tag, so these are core campaigns, gladiator style. I'm not controlling the spend anymore, so which value props are driving the performance? And then we also have our controlled themes down here. In sales and promotions we can break things out, you know, for Pierce and Pierce fine detergents, bonus points if you know what that's from, and break out the creative for each specific scent. There's an image style breakdown in addition to value props; we segment all of our creative by style. For images that can be model UGC; product stylized, which is typically how we define any product shot that's been set up in a special way; product e-com shots, which we tag separately because sometimes they do surprisingly well (like I said, just because something's ugly, you should still test it); and then general graphics. This is something David also has access to and will use for creative strategy. So in a way, this is where the media buyers plug into creative. It's not necessarily making ads; it can just be us being on top of naming systems, which gives the creative team the flexibility to go in here whenever they want and come away with learnings.
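Editor's note: the value-prop and image-style breakdowns Kevin shows are just tag-level aggregations over the parsed ad names. A minimal sketch of that roll-up, assuming rows that already carry parsed tags plus spend and purchase counts (field names are illustrative):

```python
from collections import defaultdict

def breakdown_by_tag(rows, key):
    """rows: dicts holding parsed tags plus 'spend' and 'purchases'.
    Returns {tag_value: cost_per_purchase} for the given tag key."""
    spend = defaultdict(float)
    purchases = defaultdict(int)
    for row in rows:
        tag = row.get(key, "untagged")  # anything that failed QA lands here
        spend[tag] += row["spend"]
        purchases[tag] += row["purchases"]
    return {t: spend[t] / purchases[t] for t in spend if purchases[t]}
```

The same function serves every breakdown in the report (value prop, image style, scent) just by changing `key`, which is why one naming convention can feed many views.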

Evan Lee: Love it. One follow-up: all of these reports stem from the naming. Are there examples you can share that show the audience the specific things you track consistently?

Kevin Kovach: Yeah. The consistent part is here: we have a style tag that runs through all of the image styles, and a video theme tag that runs through all the different video styles. David is actually in the process of combining those two tags into one, so we'll have an update here soon, and with that update I'm going to add a few more tags. One of those is the creative pillar, which is how the entire team is going to be thinking about and approaching their creative; as a reminder, those are education, emotion, and authority. So soon we'll have that ongoing breakdown here as well, alongside value prop. And then honestly, we'll have some custom tags for each client. One of the things we like to do at onboarding is ask: what questions can we answer for you? Then we come up with a strategy to use Facebook as that focus group and actually give them the data that helps them make big, expensive decisions. Sometimes that requires a custom tag. On brands that do a lot of whitelisting or influencer content, I'll add a page tag so I can label which page an ad is actually coming from. But that's not something I need on most accounts, where 90% of the ads come from the brand page.
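Editor's note: a tag vocabulary like this (consistent base tags plus optional per-client custom tags) is easy to enforce as a schema. The allowed values below are assumptions sketched from the conversation, not ATTN's actual lists:

```python
# Base schema shared across clients; the creative pillars match the three
# Kevin names. Style values are illustrative shorthand, not real tags.
BASE_SCHEMA = {
    "style": {"ugc", "stylized", "ecom", "graphic"},
    "pillar": {"education", "emotion", "authority"},
}

def validate_record(record, custom_schema=None):
    """Merge base and per-client schemas, then return the tag keys whose
    value falls outside the allowed set (empty list means the record is clean)."""
    schema = {**BASE_SCHEMA, **(custom_schema or {})}
    return [k for k, allowed in schema.items()
            if k in record and record[k] not in allowed]
```

A client doing whitelisting would pass something like `custom_schema={"page": {"brand", "influencer_a"}}` so the extra tag is validated the same way as the base ones.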

Evan Lee: Love it. And a quick plug: you can see Kevin here using Motion. Feel free to check us out and book some time so we can talk about how to make this come to life for you. Cool.

Kevin Kovach: Oh yeah, it's crazy. You get so much depth with these reports, and they don't take much time as long as you're on top of your naming so the data has that depth to it.

Evan Lee: Love it. Okay. And yeah, with that, we've had buyers take on more accounts, which positively impacts their income.

Kevin Kovach: Yeah.

Evan Lee: Hand in hand: the business's goals and everyone's goals being met. What I want to do with our last 10 minutes is get through the questions that have piled up, so let's do our best. I'll kick off with the one that has the most votes, 15, and hopefully it's a nice, easy lob. Here it is.

A question from Iain Harris appears on screen: "Can you define 'share IDs'?"

Evan Lee: Can you define share IDs? I think you mentioned this almost immediately when we first talked.

Kevin Kovach: Yeah, I think it's super important, and honestly, in a lot of cases it's even more important than structure. When you launch an ad, you're creating two IDs in the account. There's the ad ID, which is created every time you create an ad, but there's also an unpublished post that's actually used to show the ad, and that's where the content ID, or the post ID (the terms can be used interchangeably), comes in. You can share that content ID, and the content ID is where all of the social proof is stored, not the ad ID. That way you can have multiple ads that all use the same post ID, so if you're running them in ten different audiences, all of the engagement is going to stack. That's how you see posts on Facebook with a massive amount of social proof. I guarantee you, except in the most extreme situations, those are shared post IDs.
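Editor's note: in the Facebook Marketing API, this technique maps to the `object_story_id` field on an ad creative, which references an existing page post as `<PAGE_ID>_<POST_ID>` instead of creating a new one, so every ad built from that creative accrues engagement on the same post. The sketch below only assembles the request payload (no API call is made, and the IDs are placeholders):

```python
def shared_post_creative(page_id: str, post_id: str, name: str) -> dict:
    """Build an ad-creative payload that reuses an existing post, so social
    proof stacks on one post ID rather than splitting across new ones."""
    return {
        "name": name,
        "object_story_id": f"{page_id}_{post_id}",  # reuse the post, don't recreate it
    }
```

Creating ten ads in ten audiences from payloads that share the same `object_story_id` is what produces those posts with enormous visible engagement.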

Evan Lee: Awesome. Everybody, I hope that helps; if it doesn't, throw a note in the chat and we'll clarify. But I want to jump to the second most voted question, which comes from Justin.

A question from Justin Regis appears on screen: "Quick ?? regarding 'statistical significance' - our agency agrees on a $1k spend minimum for an 1st run adset (usually with 4-6 ads). Does that match with what Kevin uses as a benchmark?"

Evan Lee: So, a quick question regarding stat sig: our agency agrees on a $1k spend minimum for a first-run ad set, typically with four to six ads. Does that match a benchmark you might use to determine relevance? Does $1k in spend hit stat sig for you?

Kevin Kovach: So I actually do one ad per ad set. I'm not in any way saying your approach is wrong; the most important thing is that you find a way to test, and how you do it is going to vary by account, by situation, by budget. But I want to make sure that every single variant gets the same amount of spend, or else spend technically becomes a variable. If you're calling it a single-variable test and trying to test creative, but spend is also a variable, it's no longer reliable. And honestly, for the traffic tests, $1k for a week is probably going to do it. Sometimes we can even get the same results at around $500, but we let it run the full week just to be sure. On the purchase side, man, that varies so much. I have one account right now with around an $800 AOV; to test with the purchase objective there and actually get through the learning phase would be a considerable investment.
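Editor's note: one-ad-per-ad-set keeps spend equal, so creative is the only variable when you compare conversion rates. One standard way to then call a winner is a two-proportion z-test; the numbers and the 1.96 threshold below are the usual textbook choices, not something Kevin specifies:

```python
import math

def two_prop_z(conv_a, n_a, conv_b, n_b):
    """z-score for the difference in conversion rate between two ads
    that each received the same controlled spend (conv = conversions,
    n = visitors or clicks)."""
    p = (conv_a + conv_b) / (n_a + n_b)                  # pooled rate
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))    # standard error
    return (conv_a / n_a - conv_b / n_b) / se

# |z| > 1.96 is the conventional cutoff for significance at the 95% level.
```

With equal spend per variant, a test like this answers "is ad A really beating ad B?" rather than "did the algorithm simply feed ad A more budget?".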

Evan Lee: Cool. So we have some questions now about the testing. I think this goes hand in hand with what you're talking about here.

A question from Andres Amado appears on screen: "What methodology do you prefer to test new creative frameworks? A/B tests OR run all the creatives in the same ad group?"

Evan Lee: Andres asks, and hopefully I've pronounced that correctly: what methodology do you prefer to test new creative frameworks? A/B tests, or running all the creatives in the same ad group? Is it one ad per ad set, maybe with budget optimization? What are you typically running?

Kevin Kovach: Yeah, in the ideal situation, both. Sometimes performance may not allow both, but that's why I like having the traffic test: from doing this for a while, I know it's reliable, and it's not going to be a huge opportunity cost regardless of the overall budget of the account, unless that budget is only a couple hundred dollars. Then on the side, I do like to identify that algorithm winner with the gladiator-style free-for-all. That can be a DCO test, depending on the amount of creative; more recently, if I have something like 30 variants, I'll just launch them in an Advantage+ campaign. But I still want that traffic campaign on the side, so I have the flexibility to turn off the purchase campaign if I find the opportunity cost is insane and it's not great for performance.

Evan Lee: People are definitely curious about the traffic campaign, because Justin comes in with another question:

A question from Justin Regis appears on screen: "...so is the idea to use a Traffic test with new creative and THEN move those 'winners' over to a Conversion campaign?"

Kevin Kovach: Yeah. This is where David would have weighed in; I essentially had to convert him once he started. Before he joined, I was comfortable running the traffic tests by themselves for about a year. I knew he'd have questions, and he was a little skeptical, so I started running them side by side again on accounts where it didn't negatively impact performance, and showed him three consecutive reports: look, the traffic campaign and the purchase campaign identified the same three winners. That's how I got the buy-in. The key is you have to introduce friction so you can filter out the bounces. That's why traffic campaigns aren't helpful for filling the funnel, because a lot of that traffic does bounce, but if you introduce friction and use, say, content views plus some deeper event as the numerator in your conversion rate formula, you can get results that map very directly onto what happens in the conversion campaign. 90% of the time, it'll identify the same top three ads. The order might change a little, but like I said earlier, I'm generating a playoff field here, not a single winner, so that same three is still valuable to me. The benefit of the traffic test is we get those results a lot faster: sometimes a week, two tops, three is the absolute max if we want to do that omnichannel event aspect of it.
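Editor's note: one plausible reading of Kevin's friction metric is a conversion rate whose numerator combines content views with deeper events, ranked per ad to produce the "playoff field". The field names and the exact formula here are assumptions for illustration:

```python
def friction_rate(ad):
    """Stricter rate: content views plus deeper events, per click.
    Bouncers contribute clicks but nothing to the numerator."""
    if not ad["clicks"]:
        return 0.0
    return (ad["content_views"] + ad["events"]) / ad["clicks"]

def playoff_field(ads, n=3):
    """Return the names of the top-n ads by friction-adjusted rate,
    the shortlist a traffic test hands to the purchase campaign."""
    return [a["name"] for a in sorted(ads, key=friction_rate, reverse=True)[:n]]
```

The exact ordering inside the top three matters less than membership in it, which matches Kevin's point that the traffic test and the purchase campaign agree on the same shortlist.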

Evan Lee: That's so interesting, because something I've selfishly been curious about is Facebook being the place where you can access a large number of people, more or less like the focus group you described. I'm probably not giving a great example, but say you have a couple of personas, Daisy Dukes and Sleepy Sally, whatever it might be. We thought Daisy Dukes would be the best, but we see Sleepy Sally doing really well on Facebook. Should we restructure everything around that? Or is Daisy Dukes somebody we should go after on TikTok, or lean into more on email? How do you navigate those kinds of insights?

Kevin Kovach: Similar to what we did with that value prop breakdown earlier, I'm going to verify that there's no particular gender or demo throwing off the results when you look at them at the aggregated level. In that earlier example, if we'd just looked at gifting at the aggregated level, it would have been a clear and decisive winner. But when you actually separate it out, you see it's only a clear and decisive winner with 55-plus, and that's where the algorithm wanted to go. The true winner is the one that activates as many of those demos as possible; that's the one I'm most interested in. So as long as there are no shenanigans, then yeah, I'll follow up and say, okay, this might be more appropriate for a different channel. But I try to be as agnostic as possible with Facebook. In the end, it doesn't matter what our opinion is; it's the algorithm's. I'll test something that no one likes, I don't care, just in case. You never know: ugly ads do really well, and sometimes you can keep getting uglier and they still do really well. And that just adds to the diversity, too. If you have ugly ads mixed in with your videos, I understand that concerns people from a branding perspective, but from an algorithmic perspective, it increases your reach so much, because Facebook knows the type of content people historically interact with and is more inclined to serve that to them. That's why having so much diversity is often a key element of scaling.
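Editor's note: Kevin's demo check guards against an aggregate "winner" that is really carried by one segment. One way to sketch that check, with an assumed data shape of per-segment rates, is to score each creative by how many demo segments it wins outright rather than by its pooled rate alone:

```python
from collections import Counter

def demo_wins(rows):
    """rows: dicts with 'creative', 'demo', and 'rate' (per-segment
    conversion rate). Returns a Counter of outright segment wins per
    creative; the broadest activator has the most wins."""
    best = {}
    for r in rows:
        cur = best.get(r["demo"])
        if cur is None or r["rate"] > cur["rate"]:
            best[r["demo"]] = r
    return Counter(r["creative"] for r in best.values())
```

A creative that wins in aggregate but in only one segment (say, 55+) shows up here with a single win, flagging it as a candidate for a different channel rather than the account-wide winner.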

Evan Lee: Like you said, it's easier said than done; you have to test into it. And if you follow the Pareto principle, you can assume 80% of creative is probably going to fail, but at least you can learn from it.

Kevin Kovach: Yeah, yeah, yeah.

Evan Lee: And I think you bring a beginner's mind to it. It's like: who knows if this works? Throw it in and we'll see what happens at the end of the day, right?


Evan Lee: I said, you know, it's easier said than done. You have to test into it. And if you follow the Pareto Pareto principle, you can assume that 80% of creative is probably going to fail. Um, but at least you can learn from it.

Kevin Kovach: Yeah, yeah, yeah.

Evan Lee: And I think you like have a beginner's mind to it because it's just like, I don't know anything. I don't think like who knows if this works, but throw it in and we'll see what happens at the end of the day, right?

Kevin Kovach: Yeah. Like if we're testing responsibly, making sure it doesn't disrupt momentum in core campaigns or it's not getting like a crazy amount of um, budget so the opportunity cost of this test is super high. Yeah, go for it. Um, this also I think is a way to beneficially lead the team because I really don't shoot down ideas. Um, you know, it's a way of also keeping myself in check because I think it's easy for buyers no matter what seniority level to fall into patterns or to like become stubborn about something. Um, you know, even I I fall victim to that. So just having the freedom to test what they want, when they want with approval from the client, obviously, as long as we test responsibly, you know, they know I'm not going to like come down on them because I just disagree with it. Um, but those moments actually, you know, they do check me. Um, I've seen a manual carousel do well recently. I had totally absolutely written those off for like a year. Um, so yeah, those moments are cool.

Evan Lee: Love it. Uh, everybody, I have one last question for Kevin on my end. Um, I need to ask it, but just as a quick reminder, please keep up voting and getting your questions into the Q&A tab. Going to be jumping there right after this one. And one last thing, I saw that Talia had mentioned in the chat, like are we supposed to be seeing that same slide? Yes, uh, we did have some stuff to share, but again, running into some technical difficulties. So we're making best as we can just on this one. But Kev, last question that I have for you, I would be ashamed if I did not ask because whenever anyone talks about naming conventions, I say I have a guy and that's where I start to point to. So for context, everybody, uh, Kevin is like a wiz when it comes to naming conventions. So something I'm curious about now is like we've worked on all of this great creative that's now produced and like you'd mentioned, you have a specific way you test into the account. So I'm curious about is like how do you set up naming conventions and how does that then correlate into your analysis after those are live?

Kevin Kovach: Yeah, for sure. So before we met, um, and figured out and I learned about motion, um, you know, my background is in like statistics and data engineering. So I was thinking of and concepting out a naming system that would be very spreadsheet and formula friendly because I want to be able to into a database as fast as possible and reliably as possible, minimize the QA. If there are any like data scientists in the chat, like they'll know about 90% of the time you spend doing data science is actually getting the into a usable format. And having a, you know, expansive and reliable naming system just saves so much time. And then also allows us to provide like insane depth to our reports and have them be like real time in motion. Um, so you say we didn't have anything to share, but I can share my screen. Hey, let's do it. I can walk through what uh David prepared. We'll give me a minute here.

Evan Lee: And you might have to zoom in for folks just so we can see it a little bit.

Kevin Kovach: Oh yeah.

Evan Lee: A little more. Give me a couple more if you can. I'm sorry. Yeah, yeah, yeah. 175. Yeah, there we go. There we go. I like it.

Kevin Kovach: So with a detailed naming system, it's filled with qualitative information that we can't like just simply pull into a column and export. And that allows us to build these real-time reports that clients can have access to and get a sense of, you know, some of the more basic things like, you know, what ad formats are working, you know, here I opened up a few tabs because uh so no one had to wait for motion to load where things are going in the funnel. Um, I did I can't um, share the exact creative and give away the client. Um, this is a pseudonym, but pretty much across all of our we will have these four categories. reports that the client can always access and go into that will be always updating, looking at prospecting audiences, retargeting audiences, but on the creative side, by um, value prop. So part of our naming system is going to have that value prop tag. And we use the uh Facebook naming template to make it as easy and fast as possible even though it, you know, looks very, very complicated and long and, you know, sometimes with new buyers a little overwhelming the first week. But once they get used to using the naming template and this gets pretty fast and doesn't really add that much more time, but it saves an insane amount of time when reporting. So we essentially don't have to spend that much time making creative reports anymore. As long as we're on top of our all of these are automatically going to update. Um, and we just kind of have to go in and and QA, make sure we didn't there isn't like a missed character somewhere or something off. But either way that, you know, if you can save like 95% of your reporting time, you know, that can open up a whole day. Like I've talked to plenty of buyers who lose days to reporting. Um, so this just opens up

Evan Lee: And the biggest thing there is just like losing time, losing time, it's not like you just get it back and do nothing. It's like instead of just manual like grunt work and putting together reports, you can now shift to decision making at the end of the day. It's like, okay, now I can spend time interpreting the data to know what to do next instead of just putting the data together ultimately.

Kevin Kovach: Yeah, you know, you have a chance to sit and kind of think critically. Um, and really just it's a it gives the brands and clients a lot to think about as well. Like these are these are not test campaigns right here. Like every single ad is tagged with a value prop tag. So these are like core campaigns, gladiator style. I'm not controlling the spend anymore. What value props are driving the performance? Um, and then also just kind of when we do have our controlled theme down here. So in the sales and promotions, we can break things out, you know, for Pierce and Pierce fine detergents, bonus points if you know what that's from. Um, and break out the creative for each specific scent. Image style breakdown in addition to value props, like we segment all of our creative by style. So this is for images. So it can be model UGC, product stylized, that's typically how, you know, we define any sort of product shot that's been set up in a special way. product e-com shot, and then we separately will tag those uh general product e-com shots because sometimes they do surprisingly well. Um, I said, you know, just because something's ugly, you know, you should test it. Um, and then just general graphics. So this is something that David also has access to and will be creative strategy. So in a way, this is where the where the media buyers come in to like creative. But it's not necessarily like Facebook, it could also just be us being on top of naming systems. So the creative team flexibility to go in here whenever they want and come away with learnings.

Evan Lee: Love it. Uh, one thing I I have a follow up on is just like all of these reports stem from the naming. Is there any like examples or something you can share that showcases to our audience specific things that you're tracking if it is consistent?

Kevin Kovach: Yeah. So the consistent part is going to be here. So we have a style tag that's going to be going through all of the image styles. And then we also have a video theme tag going through all the different video styles. David actually is in the process of combining those two tags into one. So we'll be having an update here soon. And with that update, I'm going to add a few more tags. So, um, one of those is going to be the creative pillar, which is how the entire team is going to be thinking about and approaching their creative. as a reminder, we're going to be education, emotion and authority. So soon we would have that ongoing breakdown here as well. Value prop. And then honestly, we'll have some custom tags for each client. So one of the things we like to do on onboarding is like, what questions can we answer for you? And then try and come up with a strategy to use Facebook as that focus group to actually give them the data that can help them make big expensive decisions. So sometimes that will require like a custom tag. Um, so on brands that do like a lot of white listing or influencers, I'll add like a page tag. So I can you know, label what actual page is this coming from. But that's not something I need on most accounts if, you know, 90% of the ads are just coming from the uh the brand page.

Evan Lee: Love it. And if anyone plug, you can see Kevin here using motion. Feel free to check us out. Uh, book some time so you can talk about how to make this come to life for you all. Cool.

Kevin Kovach: Oh yeah, it it's crazy. Um, because I mean, like you just get so much depth with these reports and they don't take that much time as long as you're on top of your has that uh that depth to it.

Evan Lee: Love it. Okay. And yeah, with that, you know, we've had buyers take on more accounts, which uh positively impacts their income.

Kevin Kovach: Yeah.

Evan Lee: Hand in hand, business and goal, business, everyone's goals being met. to do here with our last 10 minutes is just a bunch of questions that have started to pile up. So let's do our best to try and get through them. Um, but I think like where I'll kick off is the one that has the most votes, 15, and hopefully it's a nice and easy lob. is one here.

A question from Iain Harris appears on screen: "Can you define 'share IDs'?"

Evan Lee: Can you define share IDs? And I think you had mentioned this like pretty immediately that we had talked about.

Kevin Kovach: Yeah, so I I think it's super, super important and I think it honestly in a lot of cases it is even more important than structure. Um, so when we say when you launch an ad, um, you're going to be creating two IDs in the account. There there's going to be the ad which is going to be created every time you create an ad, but also there's going to be an unpublished post made that's actually used to show the ad and that's where the or the or the post ID, those can be used interchangeably. And you can actually share that content ID and the content ID is where all of the social proof is stored, not the ad ID. So that way you can have, you know, multiple ads that are all actually using the same ID. So if you're running them in like 10 different audiences, all of the engagements going to stack. So that's how you see those posts in Facebook massive amount of social proof. Like I guarantee you, except in the most extreme situations, those are shared post IDs.

Evan Lee: Awesome. Everybody, I hope that one helps. If it doesn't, just throw it into the chat really quick so we can clarify. But I wanted to jump to the second most voted question and it comes from Justin here.

A question from Justin Regis appears on screen: "Quick ?? regarding 'statistical significance' - our agency agrees on a $1k spend minimum for an 1st run adset (usually with 4-6 ads). Does that match with what Kevin uses as a benchmark?"

Evan Lee: So quick question regarding stat sig. Our agency agrees that 1k spend minimum for the first ad set run with typically four to six ads. Does that match a benchmark that you might use to determine relevance? So 1k spend, does that stat sig for you?

Kevin Kovach: So I actually do one ad per ad set. Um, yeah, I know it looks like I actually do I mean I I I do test this way. I'm not in any way saying it's wrong because you know, the most important thing is you find a way to test and you know, how you do it is going to vary by account, by situation, by budget. Um, but I want to make sure that every single variant gets the same amount of spend or else spend is going to be a technically a variable and if you're calling it a single variable test and you're just trying to test creative, but then spend is also a variable, it's no longer going to be reliable. Um, and honestly, for the traffic tests, you know, 1k is actually, you know, 1k for a week, you know, that's probably going to do it. Um, sometimes we can even get the get it the same results at like $500, but just let it run for the full week just to be sure. On the purchase side, man, that varies so, so, so much. Um, I have one account right now that's like a $800 AOV. to test that to to test with like the purchase objective there and actually want to get through the learning phase, that would be a considerable investment.

Evan Lee: Cool. So we have some questions now about the testing. I think this goes hand in hand with what you're uh, with what you're talking about here.

A question from Andres Amado appears on screen: "What methodology do you prefer to test new creative frameworks? A/B tests OR run all the creatives in the same ad group?"

Evan Lee: Andres asks, and hopefully I've pronounced that correctly: what methodology do you prefer for testing new creative frameworks? A/B tests, or running all the creatives in the same ad group? One ad per ad set, maybe campaign budget optimization? What are you typically running there?

Kevin Kovach: Yeah, in the ideal situation, both. Sometimes spend and performance may not allow both, but that's why I like having the traffic test: from doing this for a while, I know it's reliable and it's not going to be a huge opportunity cost regardless of the account's overall budget, unless that budget is only a couple hundred dollars. Then on the side, I do like to identify that algorithm winner with the gladiator-style free-for-all. That can be a dynamic creative (DCO) test, depending on the amount of creative. More recently, if I have something like 30 variants, I'll just launch them in an Advantage+ campaign. But I still want that traffic campaign on the side, so I have the flexibility to turn off the purchase campaign if we find its opportunity cost is too high and it's hurting performance.

Evan Lee: People are definitely curious about the traffic campaign, because Justin follows up with another question.

A question from Justin Regis appears on screen: "...so is the idea to use a Traffic test with new creative and THEN move those 'winners' over to a Conversion campaign?"

Kevin Kovach: Yeah, this is where David came in. I had to convert him, essentially, once he started. Before he joined, I was comfortable running the traffic tests by themselves for about a year. I knew he'd have some questions, and he was a little skeptical, so I started running them side by side, again on accounts where it didn't negatively impact performance, and showed him three consecutive reports: look, the traffic campaign and this purchase campaign identified the same three winners. That's how I got the buy-in. The key is you have to introduce friction, so you can filter out the bouncers. That's why plain traffic campaigns aren't helpful for filling the funnel, because a lot of that traffic does bounce. But if you introduce friction and use, say, content views plus some deeper event as the numerator in your conversion rate formula, you can get results that map very directly to what happens in the conversion campaign. Ninety percent of the time it'll identify the same top three ads. The order might change a little, but like I said earlier, I'm generating a playoff field here, not a single winner, so that same three is still valuable to me. And the benefit of the traffic test is we get those results a lot faster: sometimes a week, two weeks tops, three is the absolute max if we want the omnichannel event aspect of it.
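
The friction metric Kovach sketches here can be written out directly. The ad names, the choice of add-to-cart as the "deeper event," and all of the numbers below are invented for illustration; he doesn't specify exactly which event or denominator he uses.

```python
# Hypothetical traffic-test results: each variant's raw clicks plus the
# "friction" events that filter out bouncers.
ads = {
    "ad_ugc_01":      {"clicks": 1200, "content_views": 540, "add_to_carts": 60},
    "ad_static_02":   {"clicks": 1100, "content_views": 310, "add_to_carts": 25},
    "ad_video_03":    {"clicks": 900,  "content_views": 470, "add_to_carts": 55},
    "ad_carousel_04": {"clicks": 1000, "content_views": 200, "add_to_carts": 12},
}

def friction_rate(a: dict) -> float:
    # Numerator: content views plus a deeper event (add-to-cart is an
    # assumption). Denominator: paid clicks, so pure bouncers drag it down.
    return (a["content_views"] + a["add_to_carts"]) / a["clicks"]

# The "playoff field": the top three ads by friction rate, not a single winner.
playoff = sorted(ads, key=lambda name: friction_rate(ads[name]), reverse=True)[:3]
print(playoff)
```

With these made-up numbers, the video and UGC variants make the field while the carousel falls out, which is the kind of ranking Kovach then validates against the purchase campaign.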

Evan Lee: That's so interesting, because something I've selfishly been curious about is Facebook being the place where you can access a huge number of people, more or less like a focus group, as you described it. I'm probably not giving a great example, but say you have a couple of personas, one that's Daisy Dukes and one that's Sleepy Sally, whatever it might be. We thought Daisy Dukes would be the best, but we see Sleepy Sally doing really well on Facebook. Should we restructure everything around her, or is she somebody we should go after on TikTok instead, or lean into more over email? How do you navigate those kinds of insights?

Kevin Kovach: Similar to what we did with that value prop breakdown earlier, I'm going to verify that no particular gender or demo is throwing off the results when you look at them at the aggregated level. In that earlier example, if we were just looking at gifting at the aggregated level, it would have been a clear and decisive winner. But when you actually separate it out, you see it's only a clear and decisive winner with 55-plus, and that's simply where the algorithm wanted to go. The true winner is the one that activates as many of those demos as possible, and that's the one I'm most interested in. As long as there are no shenanigans like that, then yeah, I'll follow up and say, okay, this might be more appropriate for a different channel. But I try to be as agnostic as possible with Facebook. In the end, it doesn't matter what our opinion is; it's the algorithm's. So I'll test something that nobody likes. I don't care, just in case. You never know: ugly ads do really well, and sometimes you can just keep getting uglier and they still do well. And that adds to your diversity too. If you have ugly ads running alongside your videos, I understand that concerns people from a branding perspective, but from an algorithmic perspective it increases your reach so much, because Facebook knows the type of content people historically interact with and is more inclined to serve it to them. That's why having that much variety is often a key element to scaling.

Evan Lee: Like you said, it's easier said than done. You have to test into it. And if you follow the Pareto principle, you can assume 80% of creative is probably going to fail, but at least you can learn from it.

Kevin Kovach: Yeah, yeah, yeah.

Evan Lee: And I think you bring a beginner's mind to it. It's like, I don't know anything, who knows if this works, but throw it in and we'll see what happens at the end of the day, right?

Kevin Kovach: Yeah. If we're testing responsibly, making sure it doesn't disrupt momentum in core campaigns and isn't getting a crazy amount of budget that would make the opportunity cost of the test super high, then yeah, go for it. I also think this is a beneficial way to lead the team, because I really don't shoot down ideas. It's a way of keeping myself in check, too, because it's easy for buyers at any seniority level to fall into patterns or become stubborn about something. Even I fall victim to that. So the team has the freedom to test what they want, when they want, with approval from the client, obviously, as long as we test responsibly. They know I'm not going to come down on them just because I disagree with something. And those moments do check me. I saw a manual carousel do well recently, and I had absolutely written those off for like a year. So yeah, those moments are cool.

Evan Lee: Love it. Everybody, I have one last question for Kevin on my end. Just as a quick reminder, please keep upvoting and adding your questions in the Q&A tab; we'll jump there right after this one. And one last thing: I saw Talia asked in the chat whether we're supposed to still be seeing the same slide. Yes, we did have some things to share, but we're running into technical difficulties again, so we're making the best of it. But Kev, the last question I have for you, I'd be ashamed not to ask, because whenever anyone talks about naming conventions, I say, "I have a guy," and Kevin is who I point to. For context, everybody, Kevin is a wiz when it comes to naming conventions. So: we've worked through all this great creative that's now produced, and you mentioned you have a specific way you test it into the account. How do you set up your naming conventions, and how do they carry into your analysis once the ads are live?

Kevin Kovach: Yeah, for sure. So before we met and I learned about Motion, my background was in statistics and data engineering, so I was concepting out a naming system that would be very spreadsheet- and formula-friendly, because I wanted to be able to get everything into a database as fast and reliably as possible and minimize the QA. If there are any data scientists in the chat, they'll know that about 90% of the time you spend doing data science is actually getting the data into a usable format. Having an expansive, reliable naming system just saves so much time, and it also allows us to provide insane depth in our reports and have them be real time in Motion. You said we didn't have anything to share, but I can share my screen and walk through what David prepared. Give me a minute here.
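
As a rough illustration of what "spreadsheet- and formula-friendly" means in practice, here is a hypothetical tag parser. The actual ATTN template isn't shown on screen, so the field names and delimiters below are invented:

```python
# A hypothetical ad name made of delimiter-separated key-value tags, the
# kind of structure a spreadsheet SPLIT formula or script can break apart
# with no guesswork. Keys (vp, style, pillar, theme) are illustrative only.

def parse_ad_name(name: str) -> dict:
    """Split 'key-value' segments separated by underscores into a tag dict."""
    tags = {}
    for segment in name.split("_"):
        key, _, value = segment.partition("-")
        tags[key] = value
    return tags

row = parse_ad_name("vp-gifting_style-ugc_pillar-emotion_theme-unboxing")
print(row)  # {'vp': 'gifting', 'style': 'ugc', 'pillar': 'emotion', 'theme': 'unboxing'}
```

Because every ad name yields the same columns, rows from an export drop straight into a database or pivot table with minimal QA, which is the payoff Kovach describes.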

Evan Lee: And you might have to zoom in for folks just so we can see it a little bit.

Kevin Kovach: Oh yeah.

Evan Lee: A little more. Give me a couple more if you can. Yeah, 175. There we go. I like it.

Kevin Kovach: So a detailed naming system is filled with qualitative information that we can simply pull into a column and export. That allows us to build these real-time reports that clients can access to get a sense of the more basic things: what ad formats are working, where things are going in the funnel. I opened up a few tabs here so no one has to wait for Motion to load. I can't share the exact creative and give away the client, so this is a pseudonym, but pretty much across all of our accounts we'll have these four categories: always-updating reports the client can access anytime, looking at prospecting audiences, retargeting audiences, and, on the creative side, value prop. Part of our naming system is that value prop tag. We use the Facebook naming template to make it as easy and fast as possible, even though it looks very complicated and long, and can be a little overwhelming for new buyers in the first week. Once they get used to the template, it gets pretty fast and doesn't really add much time, but it saves an insane amount of time when reporting. We essentially don't have to spend much time making creative reports anymore. As long as we're on top of our naming, all of these automatically update; we just have to go in and QA, make sure there isn't a missed character somewhere or something off. Either way, if you can save 95% of your reporting time, that can open up a whole day. I've talked to plenty of buyers who lose days to reporting. So this just opens up so much time.

Evan Lee: And the biggest thing there is that the time you save isn't time you get back and do nothing with. Instead of manual grunt work putting reports together, you can shift to decision making. Now I can spend my time interpreting the data to know what to do next, instead of just assembling it.

Kevin Kovach: Yeah, you have a chance to sit and think critically. And it gives the brands and clients a lot to think about as well. These are not test campaigns right here; every single ad is tagged with a value prop tag. These are core campaigns, gladiator style. I'm not controlling the spend anymore, so: what value props are driving the performance? And then we have our controlled themes down here. In sales and promotions we can break things out, you know, for Pierce and Pierce fine detergents, bonus points if you know what that's from, and break out the creative for each specific scent. There's also an image style breakdown in addition to value props; we segment all of our creative by style. For images that can be model UGC; product stylized, which is typically how we define any product shot that's been set up in a special way; and product e-com shot, and we tag those general product e-com shots separately because sometimes they do surprisingly well. Like I said, just because something's ugly doesn't mean you shouldn't test it. And then general graphics. This is something David also has access to and will use for creative strategy. So in a way, this is where the media buyers contribute to creative. It's not necessarily Facebook expertise; it can just be us staying on top of naming systems, which gives the creative team the flexibility to go in here whenever they want and come away with learnings.

Evan Lee: Love it. One follow-up: all of these reports stem from the naming. Are there any examples you can share that show our audience the specific things you track consistently?

Kevin Kovach: Yeah. The consistent part is here: we have a style tag that runs through all of the image styles, and a video theme tag that runs through all the different video styles. David is actually in the process of combining those two tags into one, so we'll have an update here soon. With that update, I'm going to add a few more tags. One will be the creative pillar, which is how the entire team thinks about and approaches their creative; as a reminder, those are education, emotion, and authority. So soon we'll have that ongoing breakdown here as well, plus value prop. And then honestly, we'll have some custom tags for each client. One of the things we like to do at onboarding is ask: what questions can we answer for you? Then we come up with a strategy to use Facebook as that focus group and give them data that helps them make big, expensive decisions. Sometimes that requires a custom tag. On brands that do a lot of whitelisting or influencer work, I'll add a page tag so I can label which page an ad is actually running from. But that's not something I need on most accounts, where 90% of the ads come from the brand page.
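
A minimal sketch of the kind of tag-level rollup these always-on reports automate, using the pillar names from the episode with invented spend and purchase numbers:

```python
from collections import defaultdict

# Hypothetical ad rows, already parsed out of the naming system's tags.
rows = [
    {"pillar": "education", "spend": 400.0, "purchases": 12},
    {"pillar": "emotion",   "spend": 550.0, "purchases": 22},
    {"pillar": "education", "spend": 300.0, "purchases": 9},
    {"pillar": "authority", "spend": 250.0, "purchases": 5},
]

# Roll spend and purchases up to the creative-pillar level.
totals = defaultdict(lambda: {"spend": 0.0, "purchases": 0})
for r in rows:
    totals[r["pillar"]]["spend"] += r["spend"]
    totals[r["pillar"]]["purchases"] += r["purchases"]

# Cost per purchase by pillar: the breakdown a client-facing report surfaces.
cpa = {p: t["spend"] / t["purchases"] for p, t in totals.items()}
print(cpa)
```

The same rollup works for any tag in the name (value prop, image style, video theme, a custom page tag), which is why the naming convention, not the reporting tool, does most of the work.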

Evan Lee: Love it. And a quick plug, everyone: you can see Kevin using Motion here. Feel free to check us out and book some time so we can talk about how to make this come to life for you. Cool.

Kevin Kovach: Oh yeah, it's crazy. You just get so much depth with these reports, and they don't take much time, as long as your naming system has that same depth to it.

Evan Lee: Love it. Okay. And with that, you know, we've had buyers take on more accounts, which positively impacts their income.

Kevin Kovach: Yeah.

Evan Lee: Hand in hand: the business goals and everyone's personal goals being met. What I want to do with our last 10 minutes is get through the questions that have piled up, so let's do our best. I'll kick off with the one that has the most votes, 15, and hopefully it's a nice, easy lob. Here it is.

A question from Iain Harris appears on screen: "Can you define 'share IDs'?"

Evan Lee: Can you define share IDs? I think you mentioned this almost immediately the first time we talked.

Kevin Kovach: Yeah, I think it's super important, and honestly, in a lot of cases it's even more important than structure. When you launch an ad, you're creating two IDs in the account. There's the ad ID, which is created every time you create an ad, but there's also an unpublished post that's actually used to show the ad, and that's the content ID, or post ID; those terms can be used interchangeably. You can share that content ID, and the content ID is where all of the social proof is stored, not the ad ID. That way you can have multiple ads that are all actually using the same post. If you're running them in ten different audiences, all of the engagement stacks. That's how you see those posts on Facebook with massive amounts of social proof. I guarantee you, except in the most extreme situations, those are shared post IDs.
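
In the Meta Marketing API, this sharing is typically done by pointing an ad creative at an existing post via its story ID. The sketch below assumes the `object_story_id` creative field (verify against current API docs before relying on it) and uses fake IDs:

```python
# Hedged sketch: an ad creative that references an existing (published or
# unpublished) post via object_story_id, formatted "<page_id>_<post_id>".
# Every ad built from this creative renders the same post, so likes,
# comments, and shares stack on one ID instead of splitting across copies.
# All IDs below are fake placeholders.

def shared_post_creative(page_id: str, post_id: str) -> dict:
    """Creative params that reuse an existing post's social proof."""
    return {"object_story_id": f"{page_id}_{post_id}"}

# Ten audiences, one post: each ad set's ad points at the same story ID.
params = shared_post_creative("1234567890", "9876543210")
print(params)
```

The alternative, duplicating the ad so Facebook mints a fresh unpublished post each time, resets the visible engagement to zero, which is exactly what Kovach is warning against.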

Evan Lee: Awesome. Everybody, I hope that one helps. If it doesn't, just throw it into the chat really quick so we can clarify. But I wanted to jump to the second most voted question and it comes from Justin here.

A question from Justin Regis appears on screen: "Quick ?? regarding 'statistical significance' - our agency agrees on a $1k spend minimum for an 1st run adset (usually with 4-6 ads). Does that match with what Kevin uses as a benchmark?"

Evan Lee: So quick question regarding stat sig. Our agency agrees that 1k spend minimum for the first ad set run with typically four to six ads. Does that match a benchmark that you might use to determine relevance? So 1k spend, does that stat sig for you?

Kevin Kovach: So I actually do one ad per ad set. Um, yeah, I know it looks like I actually do I mean I I I do test this way. I'm not in any way saying it's wrong because you know, the most important thing is you find a way to test and you know, how you do it is going to vary by account, by situation, by budget. Um, but I want to make sure that every single variant gets the same amount of spend or else spend is going to be a technically a variable and if you're calling it a single variable test and you're just trying to test creative, but then spend is also a variable, it's no longer going to be reliable. Um, and honestly, for the traffic tests, you know, 1k is actually, you know, 1k for a week, you know, that's probably going to do it. Um, sometimes we can even get the get it the same results at like $500, but just let it run for the full week just to be sure. On the purchase side, man, that varies so, so, so much. Um, I have one account right now that's like a $800 AOV. to test that to to test with like the purchase objective there and actually want to get through the learning phase, that would be a considerable investment.

Evan Lee: Cool. So we have some questions now about the testing. I think this goes hand in hand with what you're uh, with what you're talking about here.

A question from Andres Amado appears on screen: "What methodology do you prefer to test new creative frameworks? A/B tests OR run all the creatives in the same ad group?"

Evan Lee: Andreas asks, hopefully I've pronounced that correctly. What methodology do you prefer to test new creative frameworks? or run all the creatives in the same ad group? one ad per ad set, maybe it's like budget optimization. Are you typically running there?

Kevin Kovach: Yeah. In the ideal situation, both. Um, you know, sometimes and performance may not allow both, but that's why I like having the traffic test because I know by doing this for a while, it's reliable and it's not going to be like an a huge opportunity cost, you know, regardless of like the overall budget of the account unless it's like a couple hundred dollars. Um, and then on the side, I do like to identify that algorithm winner, that gladiator style free for all. That can be a dynamic DCO test, um, depending on the amount of creative. More recently, I've been just, say if I have like 30 variants, just launch them in an advantage plus campaign. Um, but I still want that traffic campaign on the side, so I have the flexibility to turn off that purchase campaign and find that the opportunity cost is insane and not really um, great for performance.

Evan Lee: So people are definitely curious about the traffic campaign because Justin comes with another question of just like, so,

A question from Justin Regis appears on screen: "...so is the idea to use a Traffic test with new creative and THEN move those 'winners' over to a Conversion campaign?"

Kevin Kovach: Yeah, this is where David would have been really, you know, I had to convert him essentially once he started. Um, so before he started, I was comfortable running the traffic tests, um, just by themselves for like a year. Um, I knew he would have some questions, um, and he was a little skeptical. So I started running them side by side again on accounts that, you know, was didn't negatively impact performance and showed him, you know, three consecutive reports, look, traffic campaign and this purchase campaign identified the same three winners. Um, so that's how I got that buy in. The key is you have to introduce friction. So you can kind of filter out the bouncing. And that's why traffic campaigns aren't helpful for um, filling the funnel because a lot of them do bounce, but if you introduce friction and use say content views plus any sort of event, um, as your numerator in the uh conversion rate formula, you can actually get very direct results as to what happens in the conversion campaign. Um, 90% of the time, it'll identify like the top three ads. That order might change a little bit. Like I said earlier, I'm generating a playoff field here, not just like a single winner. So that same three is still going to be valuable to me, but the benefits of the traffic test is we get those results a lot faster. Sometimes a week, maybe a two two tops, three is the absolute max if we want to do that omni channel um, event um, aspect of that.

Evan Lee: That's so interesting because something I've also like selfishly been curious about is is Facebook being the place where you're able to to to access a large amount of people like you described it, it's just being like a more or less. Um, Facebook might tell you one thing, like I'm probably not giving a great example that'll lead into a question, but let's say you have a couple personas, one that's like Daisy Dukes and then sleepy Sally, whatever it might be, right? Daisy Dukes, we thought would be the best, but we see sleepy Sally doing really well on for example. But then is Daisy Dukes like, should we restructure everything around that or is it somebody we should go after on a Tik Tok? Or is it somebody we should go after on an email more? How do you, how do you uh, like navigate those type of insights?

Kevin Kovach: So similar to uh what we did with like that value prop breakdown earlier, I'm uh going to verify that there is no particular gender or demo that's throwing off these results when you're looking at them at the aggregated level. So in that earlier example, if we're just looking at gifting at the aggregated level, it would have been a clear and decisive winner. But when you actually separate it out, you see, okay, it's only actually a clear and decisive winner with 55 plus and that's where the algorithm wanted to go. Um, you know, the true winner is going to be the one that actually activates as many of those demos as possible. And that's the one I'm most interested in. So if as long as you know, there are no shenanigans, then yeah, I'll follow up and say, okay, this might be something that's more appropriate for or a different channel. Um, but I I try and be as agnostic as possible with Facebook. Like in the end, it doesn't matter what our opinion is, it's the algorithms. So I'll test something that no one likes. I don't care. Um, just in case, you never know. Ugly ads do really well and sometimes you can just keep getting uglier and it still does really well. Um, and then just adds to that diversity too because if you have like ugly ads and then with your videos, like I understand from the branding perspective that concerns people, but on the algorithmic perspective, that increases your reach so much. Um, because Facebook knows the type of content that people historically interact with and is more inclined to serve that to them. Which is why having so many um, is often a key element to scaling.

Evan Lee: I said, you know, it's easier said than done. You have to test into it. And if you follow the Pareto Pareto principle, you can assume that 80% of creative is probably going to fail. Um, but at least you can learn from it.

Kevin Kovach: Yeah, yeah, yeah.

Evan Lee: And I think you like have a beginner's mind to it because it's just like, I don't know anything. I don't think like who knows if this works, but throw it in and we'll see what happens at the end of the day, right?

Kevin Kovach: Yeah. Like if we're testing responsibly, making sure it doesn't disrupt momentum in core campaigns or it's not getting like a crazy amount of um, budget so the opportunity cost of this test is super high. Yeah, go for it. Um, this also I think is a way to beneficially lead the team because I really don't shoot down ideas. Um, you know, it's a way of also keeping myself in check because I think it's easy for buyers no matter what seniority level to fall into patterns or to like become stubborn about something. Um, you know, even I I fall victim to that. So just having the freedom to test what they want, when they want with approval from the client, obviously, as long as we test responsibly, you know, they know I'm not going to like come down on them because I just disagree with it. Um, but those moments actually, you know, they do check me. Um, I've seen a manual carousel do well recently. I had totally absolutely written those off for like a year. Um, so yeah, those moments are cool.

Evan Lee: Love it. Everybody, I have one last question for Kevin on my end. Just as a quick reminder, please keep upvoting and getting your questions into the Q&A tab; we'll be jumping there right after this one. And one last thing: I saw Talia mention in the chat, are we supposed to be seeing that same slide? Yes, we did have some things to share, but we're running into some technical difficulties, so we're making the best of it. But Kev, the last question I have for you I'd be ashamed not to ask, because whenever anyone talks about naming conventions, I say 'I have a guy' and start pointing your way. For context, everybody, Kevin is a whiz when it comes to naming conventions. So now that we've worked on all this great creative and, like you mentioned, you have a specific way you test it into the account, how do you set up naming conventions, and how do they then feed into your analysis once the ads are live?

Kevin Kovach: Yeah, for sure. So before we met and I learned about Motion, you know, my background is in statistics and data engineering, so I was concepting out a naming system that would be very spreadsheet and formula friendly, because I want to be able to get the data into a database as fast and as reliably as possible and minimize the QA. If there are any data scientists in the chat, they'll know that about 90% of the time you spend doing data science is actually getting the data into a usable format. Having an expansive and reliable naming system just saves so much time, and it also lets us provide insane depth in our reports and have them update in real time in Motion. So, you said we didn't have anything to share, but I can share my screen and walk through what David prepared. Give me a minute here.
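The "spreadsheet and formula friendly" idea Kovach describes can be sketched in a few lines. This is a hypothetical schema, not ATTN's actual convention; the field names, delimiter, and example ad name are all illustrative:

```python
# Hypothetical naming convention: fixed field order, single delimiter, so one
# split() turns an ad name into a database row. Schema is an assumption.

FIELDS = ["date", "format", "style", "value_prop", "pillar", "variant"]

def parse_ad_name(name: str, sep: str = "_") -> dict:
    """Parse a delimited ad name into a tagged record."""
    parts = name.split(sep)
    if len(parts) != len(FIELDS):  # cheap QA: catch a missed character early
        raise ValueError(f"expected {len(FIELDS)} fields, got {len(parts)}: {name}")
    return dict(zip(FIELDS, parts))

row = parse_ad_name("2023-08_img_ugc_gifting_emotion_v3")
# row maps each field to its tag, e.g. value_prop -> "gifting"
```

A strict field count doubles as the QA step Kovach mentions: a dropped field or missed delimiter fails loudly instead of silently corrupting a report.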

Evan Lee: And you might have to zoom in for folks just so we can see it a little bit.

Kevin Kovach: Oh yeah.

Evan Lee: A little more. Give me a couple more if you can. I'm sorry. Yeah, yeah, yeah. 175. Yeah, there we go. There we go. I like it.

Kevin Kovach: So a detailed naming system is filled with qualitative information that you couldn't otherwise just pull into a column and export, and that lets us build these real-time reports that clients can access to get a sense of some of the more basic things, like what ad formats are working and where things are going in the funnel. I opened up a few tabs ahead of time so no one has to wait for Motion to load. I can't share the exact creative and give away the client, so this is a pseudonym, but across pretty much all of our accounts we'll have these four categories of reports that the client can always access and that are always updating: prospecting audiences, retargeting audiences, and then on the creative side, breakdowns by value prop. Part of our naming system is that value prop tag. We use the Facebook naming template to make it as easy and fast as possible, even though it looks very complicated and long, and can be a little overwhelming for new buyers in the first week. But once they get used to the naming template, it gets pretty fast and doesn't really add that much more time, and it saves an insane amount of time when reporting. We essentially don't have to spend much time making creative reports anymore; as long as we're on top of our naming, all of these update automatically. We just go in and QA, make sure there isn't a missed character somewhere or something off. Either way, if you can save like 95% of your reporting time, that can open up a whole day. I've talked to plenty of buyers who lose days to reporting. So this just opens up...

Evan Lee: And the biggest thing there is that when you stop losing that time, it's not like you just get it back and do nothing. Instead of manual grunt work putting reports together, you can shift to decision making. Okay, now I can spend time interpreting the data to know what to do next instead of just putting the data together.

Kevin Kovach: Yeah, you have a chance to sit and think critically. And it gives the brands and clients a lot to think about as well. These are not test campaigns here; every single ad is tagged with a value prop tag. These are core campaigns, gladiator style, where I'm not controlling the spend anymore: what value props are driving the performance? Then we also have our controlled themes down here. In sales and promotions we can break things out, you know, for Pierce and Pierce fine detergents (bonus points if you know what that's from) and break out the creative for each specific scent. There's also the image style breakdown: in addition to value props, we segment all of our creative by style. For images that can be model UGC; product stylized, which is typically how we define any product shot that's been set up in a special way; product e-com shot, where we separately tag the general product e-com shots because sometimes they do surprisingly well (like I said, just because something's ugly doesn't mean you shouldn't test it); and then general graphics. This is something David also has access to and will use for creative strategy. So in a way, this is where the media buyers feed back into creative. It's not necessarily Facebook expertise; it can also just be us being on top of naming systems, so the creative team has the flexibility to go in here whenever they want and come away with learnings.
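Once every ad name carries a style tag, a report like the one Kovach shows reduces to a group-by. A minimal sketch; the rows, tag names, and numbers are made up:

```python
from collections import defaultdict

# Illustrative rows as they'd come out of a parsed naming system.
rows = [
    {"style": "model_ugc", "spend": 400.0, "purchases": 12},
    {"style": "stylized",  "spend": 350.0, "purchases": 7},
    {"style": "ecom_shot", "spend": 150.0, "purchases": 6},
    {"style": "model_ugc", "spend": 300.0, "purchases": 9},
]

# Group spend and purchases by style tag.
totals = defaultdict(lambda: {"spend": 0.0, "purchases": 0})
for r in rows:
    totals[r["style"]]["spend"] += r["spend"]
    totals[r["style"]]["purchases"] += r["purchases"]

# Cost per purchase by style; the plain e-com shots can surprise you.
cpp = {s: t["spend"] / t["purchases"] for s, t in totals.items()}
```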

Evan Lee: Love it. One follow-up: all of these reports stem from the naming. Are there any examples you can share that show our audience the specific things you track, if they're consistent?

Kevin Kovach: Yeah. The consistent part is here: we have a style tag that runs through all of the image styles, and a video theme tag that runs through all the different video styles. David is actually in the process of combining those two tags into one, so we'll have an update here soon. With that update I'm going to add a few more tags. One of those is the creative pillar, which is how the entire team is going to be thinking about and approaching their creative; as a reminder, those are education, emotion, and authority. So soon we'll have that ongoing breakdown here as well, plus value prop. And then honestly, we'll have some custom tags for each client. One of the things we like to do at onboarding is ask: what questions can we answer for you? Then we come up with a strategy to use Facebook as that focus group, to actually give them data that can help them make big, expensive decisions. Sometimes that requires a custom tag. So on brands that do a lot of whitelisting or influencer content, I'll add a page tag so I can label which page each ad is coming from. But that's not something I need on most accounts, where 90% of the ads come from the brand page.

Evan Lee: Love it. And a quick plug: you can see Kevin here using Motion, so feel free to check us out and book some time to talk about how to make this come to life for you all. Cool.

Kevin Kovach: Oh yeah, it's crazy, because you just get so much depth with these reports, and they don't take that much time as long as you're on top of your naming.

Evan Lee: Love it. Okay. And with that time back, you know, we've had buyers take on more accounts, which positively impacts their income.

Kevin Kovach: Yeah.

Evan Lee: Hand in hand: business goals and everyone's personal goals being met. What I want to do here with our last 10 minutes is get through the bunch of questions that have started to pile up. So let's do our best to try and get through them. I'll kick off with the one that has the most votes, 15, and hopefully it's a nice and easy lob. Here it is.

A question from Iain Harris appears on screen: "Can you define 'share IDs'?"

Evan Lee: Can you define share IDs? I think you had mentioned this pretty early on when we talked.

Kevin Kovach: Yeah, I think it's super important; honestly, in a lot of cases it's even more important than structure. When you launch an ad, you're actually creating two IDs in the account. There's the ad ID, which is created every time you create an ad, but there's also an unpublished post that's actually used to show the ad, and that post has its own content ID, or post ID; those terms can be used interchangeably. You can share that post ID across ads, and the post ID is where all of the social proof is stored, not the ad ID. That way you can have multiple ads that all use the same post, so if you're running them in ten different audiences, all of the engagement is going to stack. That's how you see posts on Facebook with a massive amount of social proof; I guarantee you, except in the most extreme situations, those are shared post IDs.
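In the Marketing API, this shows up as the ad creative's "object_story_id" field, which takes the "page-id_post-id" form. The sketch below only builds illustrative payload dicts; the IDs, audience names, and helper function are hypothetical, not a real account setup:

```python
# Minimal sketch of the shared-post idea: many ads, one "object_story_id",
# so likes, comments, and shares stack on a single post. IDs are placeholders.

def build_ad_payloads(page_id, post_id, audiences):
    """One ad payload per audience, all pointing at the same unpublished post."""
    story_id = f"{page_id}_{post_id}"  # Graph API format: <page-id>_<post-id>
    return [
        {
            "name": f"ad_{aud}",
            "audience": aud,
            "creative": {"object_story_id": story_id},  # shared post = shared social proof
        }
        for aud in audiences
    ]

ads = build_ad_payloads("1111", "2222", ["broad", "lookalike_1pct", "interest_stack"])
# Every payload references the same post, so engagement accumulates in one place.
```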

Evan Lee: Awesome. Everybody, I hope that one helps. If it doesn't, just throw it into the chat really quick so we can clarify. But I wanted to jump to the second most voted question and it comes from Justin here.

A question from Justin Regis appears on screen: "Quick ?? regarding 'statistical significance' - our agency agrees on a $1k spend minimum for an 1st run adset (usually with 4-6 ads). Does that match with what Kevin uses as a benchmark?"

Evan Lee: So, a quick question regarding stat sig: our agency agrees on a $1k spend minimum for the first ad set run, typically with four to six ads. Does that match a benchmark you might use to determine relevance? So, $1k spend: is that stat sig for you?

Kevin Kovach: So I actually do one ad per ad set. And to be clear, I'm not in any way saying your way is wrong; the most important thing is that you find a way to test, and how you do it will vary by account, by situation, by budget. But I want to make sure that every single variant gets the same amount of spend, or else spend technically becomes a variable, and if you're calling it a single-variable test and you're trying to test creative but spend is also a variable, it's no longer reliable. Honestly, for the traffic tests, $1k for a week is probably going to do it. Sometimes we can even get the same results at like $500, but we let it run for the full week just to be sure. On the purchase side, man, that varies so, so much. I have one account right now with around an $800 AOV; to test there with the purchase objective and actually get through the learning phase would be a considerable investment.
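Kovach doesn't name a specific significance test; one standard choice for comparing two creatives' conversion rates, assuming equal spend produced comparable sample sizes, is a two-proportion z-test. The counts below are made up:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant A's conversion rate significantly
    different from variant B's? Returns (z, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Normal CDF via erf, doubled for a two-sided test
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Two creatives, each given the same spend and roughly equal traffic:
z, p = two_proportion_z(conv_a=60, n_a=1000, conv_b=40, n_b=1000)
# Here p < 0.05, so the gap is unlikely to be noise at this sample size.
```

This is also why equal spend per variant matters: unequal delivery gives the variants different sample sizes, and the comparison stops being a clean single-variable test.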

Evan Lee: Cool. So we have some questions now about testing. I think this goes hand in hand with what you were just talking about.

A question from Andres Amado appears on screen: "What methodology do you prefer to test new creative frameworks? A/B tests OR run all the creatives in the same ad group?"

Evan Lee: Andres asks, and hopefully I've pronounced that correctly: what methodology do you prefer to test new creative frameworks? A/B tests, or running all the creatives in the same ad group? One ad per ad set, or maybe campaign budget optimization? What are you typically running?

Kevin Kovach: Yeah, in the ideal situation, both. Sometimes budget and performance may not allow both, and that's why I like having the traffic test: from doing this for a while, I know it's reliable, and it's not going to be a huge opportunity cost regardless of the account's overall budget, unless that budget is like a couple hundred dollars. Then on the side, I do like to identify that algorithm winner with the gladiator-style free-for-all. That can be a DCO test, depending on the amount of creative; more recently, if I have like 30 variants, I'll just launch them in an Advantage+ campaign. But I still want that traffic campaign on the side, so I have the flexibility to turn off the purchase campaign if the opportunity cost turns out to be insane and not great for performance.

Evan Lee: So people are definitely curious about the traffic campaign, because Justin comes in with another question.

A question from Justin Regis appears on screen: "...so is the idea to use a Traffic test with new creative and THEN move those 'winners' over to a Conversion campaign?"

Kevin Kovach: Yeah. This is where David would have pushed back; I essentially had to convert him once he started. Before he joined, I was comfortable running the traffic tests by themselves for about a year. I knew he would have some questions, and he was a little skeptical. So I started running them side by side again, on accounts where it didn't negatively impact performance, and showed him three consecutive reports: look, the traffic campaign and this purchase campaign identified the same three winners. That's how I got buy-in. The key is you have to introduce friction so you can filter out the bouncing. That's why traffic campaigns aren't helpful for filling the funnel: a lot of that traffic bounces. But if you introduce friction and use, say, content views plus some deeper event as your numerator in the conversion rate formula, you can actually get a very direct read on what will happen in the conversion campaign. 90% of the time, it'll identify the top three ads; the order might change a little. Like I said earlier, I'm generating a playoff field here, not a single winner, so that same three is still valuable to me. The benefit of the traffic test is we get those results a lot faster: sometimes a week, two tops, three is the absolute max if we want to do the omnichannel event aspect of it.
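The "friction" metric Kovach describes (content views plus a deeper event, over clicks) is easy to sketch. The event choice, field names, and numbers here are illustrative assumptions:

```python
# Friction-adjusted ranking from a traffic test. Numerator = content views
# plus a deeper event (add-to-cart is an assumed choice), denominator = link
# clicks, so bounced traffic drags an ad's score down. Numbers are invented.

ads = [
    {"name": "ad_A", "clicks": 500, "content_views": 210, "add_to_cart": 40},
    {"name": "ad_B", "clicks": 480, "content_views": 120, "add_to_cart": 15},
    {"name": "ad_C", "clicks": 510, "content_views": 260, "add_to_cart": 55},
]

def friction_rate(ad):
    """Friction-adjusted conversion rate for one ad."""
    return (ad["content_views"] + ad["add_to_cart"]) / ad["clicks"]

# The output is a top-three "playoff field", not a single winner.
top = [ad["name"] for ad in sorted(ads, key=friction_rate, reverse=True)]
```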

Evan Lee: That's so interesting, because something I've selfishly been curious about is Facebook being the place where you can access a large number of people, more or less like a focus group. I'm probably not giving a great example that'll lead into a question, but say you have a couple of personas, one that's Daisy Dukes and one that's Sleepy Sally, whatever it might be. We thought Daisy Dukes would be the best, but we see Sleepy Sally doing really well on Facebook, for example. Should we restructure everything around that? Or is that somebody we should go after on TikTok, or more in email? How do you navigate those types of insights?

Kevin Kovach: Similar to what we did with that value prop breakdown earlier, I'm going to verify that there's no particular gender or demo throwing off the results when you look at them at the aggregated level. In that earlier example, if we were just looking at gifting at the aggregated level, it would have been a clear and decisive winner. But when you actually separate it out, you see it's only a clear and decisive winner with 55-plus, and that's where the algorithm wanted to go. The true winner is the one that activates as many of those demos as possible, and that's the one I'm most interested in. So as long as there are no shenanigans, then yeah, I'll follow up and say, okay, this might be more appropriate for a different channel. But I try to be as agnostic as possible with Facebook. In the end, it doesn't matter what our opinion is; it's the algorithm's. So I'll test something that no one likes, I don't care, just in case; you never know. Ugly ads do really well, and sometimes you can just keep getting uglier and it still does well. It also adds to your diversity: if you have ugly ads alongside your polished videos, I understand that concerns people from a branding perspective, but from the algorithmic perspective it increases your reach so much, because Facebook knows the type of content people historically interact with and is more inclined to serve that to them. Which is why having so many styles is often a key element of scaling.
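What Kovach is guarding against is essentially Simpson's paradox: an aggregate winner driven by one demo. A toy illustration with invented value props, age bands, and counts:

```python
# "gifting" wins at the aggregate level, but the breakdown shows the lift
# comes almost entirely from the 55+ demo the algorithm favored.

results = {
    # (value_prop, age_band): (conversions, clicks)
    ("gifting", "55+"):     (90, 1000),
    ("gifting", "18-54"):   (20, 1000),
    ("self_care", "55+"):   (40, 1000),
    ("self_care", "18-54"): (45, 1000),
}

def seg_rate(prop, band):
    """Conversion rate within one demo segment."""
    conv, clicks = results[(prop, band)]
    return conv / clicks

def agg_rate(prop):
    """Conversion rate with all demos pooled together."""
    conv = sum(c for (p, _), (c, n) in results.items() if p == prop)
    clicks = sum(n for (p, _), (c, n) in results.items() if p == prop)
    return conv / clicks

# Aggregated, gifting looks decisive (5.5% vs 4.25%)...
assert agg_rate("gifting") > agg_rate("self_care")
# ...but it only wins in 55+; self_care actually wins 18-54.
assert seg_rate("gifting", "55+") > seg_rate("self_care", "55+")
assert seg_rate("gifting", "18-54") < seg_rate("self_care", "18-54")
```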

Evan Lee: Like you said, it's easier said than done; you have to test into it. And if you follow the Pareto principle, you can assume that 80% of creative is probably going to fail. But at least you can learn from it.

Kevin Kovach: Yeah, yeah, yeah.

Evan Lee: And I think you have a beginner's mind about it. It's just: I don't know anything, who knows if this works, but throw it in and we'll see what happens at the end of the day, right?

Kevin Kovach: Yeah. As long as we're testing responsibly, making sure it doesn't disrupt momentum in core campaigns and isn't getting a crazy amount of budget so the opportunity cost of the test is super high, then go for it. I also think this is a healthy way to lead the team, because I really don't shoot down ideas. It's a way of keeping myself in check, too; I think it's easy for buyers at any seniority level to fall into patterns or become stubborn about something. Even I fall victim to that. So the team has the freedom to test what they want, when they want, with approval from the client, obviously, as long as we test responsibly. They know I'm not going to come down on them just because I disagree with it. And those moments do check me: I've seen a manual carousel do well recently, and I had absolutely written those off for like a year. So yeah, those moments are cool.


Evan Lee: Love it. And if anyone plug, you can see Kevin here using motion. Feel free to check us out. Uh, book some time so you can talk about how to make this come to life for you all. Cool.

Kevin Kovach: Oh yeah, it it's crazy. Um, because I mean, like you just get so much depth with these reports and they don't take that much time as long as you're on top of your has that uh that depth to it.

Evan Lee: Love it. Okay. And yeah, with that, you know, we've had buyers take on more accounts, which uh positively impacts their income.

Kevin Kovach: Yeah.

Evan Lee: Hand in hand, business and goal, business, everyone's goals being met. to do here with our last 10 minutes is just a bunch of questions that have started to pile up. So let's do our best to try and get through them. Um, but I think like where I'll kick off is the one that has the most votes, 15, and hopefully it's a nice and easy lob. is one here.

A question from Iain Harris appears on screen: "Can you define 'share IDs'?"

Evan Lee: Can you define share IDs? And I think you had mentioned this like pretty immediately that we had talked about.

Kevin Kovach: Yeah, so I I think it's super, super important and I think it honestly in a lot of cases it is even more important than structure. Um, so when we say when you launch an ad, um, you're going to be creating two IDs in the account. There there's going to be the ad which is going to be created every time you create an ad, but also there's going to be an unpublished post made that's actually used to show the ad and that's where the or the or the post ID, those can be used interchangeably. And you can actually share that content ID and the content ID is where all of the social proof is stored, not the ad ID. So that way you can have, you know, multiple ads that are all actually using the same ID. So if you're running them in like 10 different audiences, all of the engagements going to stack. So that's how you see those posts in Facebook massive amount of social proof. Like I guarantee you, except in the most extreme situations, those are shared post IDs.

Evan Lee: Awesome. Everybody, I hope that one helps. If it doesn't, just throw it into the chat really quick so we can clarify. But I wanted to jump to the second most voted question and it comes from Justin here.

A question from Justin Regis appears on screen: "Quick ?? regarding 'statistical significance' - our agency agrees on a $1k spend minimum for an 1st run adset (usually with 4-6 ads). Does that match with what Kevin uses as a benchmark?"

Evan Lee: So quick question regarding stat sig. Our agency agrees that 1k spend minimum for the first ad set run with typically four to six ads. Does that match a benchmark that you might use to determine relevance? So 1k spend, does that stat sig for you?

Kevin Kovach: So I actually do one ad per ad set. Um, yeah, I know it looks like I actually do I mean I I I do test this way. I'm not in any way saying it's wrong because you know, the most important thing is you find a way to test and you know, how you do it is going to vary by account, by situation, by budget. Um, but I want to make sure that every single variant gets the same amount of spend or else spend is going to be a technically a variable and if you're calling it a single variable test and you're just trying to test creative, but then spend is also a variable, it's no longer going to be reliable. Um, and honestly, for the traffic tests, you know, 1k is actually, you know, 1k for a week, you know, that's probably going to do it. Um, sometimes we can even get the get it the same results at like $500, but just let it run for the full week just to be sure. On the purchase side, man, that varies so, so, so much. Um, I have one account right now that's like a $800 AOV. to test that to to test with like the purchase objective there and actually want to get through the learning phase, that would be a considerable investment.

Evan Lee: Cool. So we have some questions now about the testing. I think this goes hand in hand with what you're uh, with what you're talking about here.

A question from Andres Amado appears on screen: "What methodology do you prefer to test new creative frameworks? A/B tests OR run all the creatives in the same ad group?"

Evan Lee: Andreas asks, hopefully I've pronounced that correctly. What methodology do you prefer to test new creative frameworks? or run all the creatives in the same ad group? one ad per ad set, maybe it's like budget optimization. Are you typically running there?

Kevin Kovach: Yeah. In the ideal situation, both. Um, you know, sometimes and performance may not allow both, but that's why I like having the traffic test because I know by doing this for a while, it's reliable and it's not going to be like an a huge opportunity cost, you know, regardless of like the overall budget of the account unless it's like a couple hundred dollars. Um, and then on the side, I do like to identify that algorithm winner, that gladiator style free for all. That can be a dynamic DCO test, um, depending on the amount of creative. More recently, I've been just, say if I have like 30 variants, just launch them in an advantage plus campaign. Um, but I still want that traffic campaign on the side, so I have the flexibility to turn off that purchase campaign and find that the opportunity cost is insane and not really um, great for performance.

Evan Lee: So people are definitely curious about the traffic campaign because Justin comes with another question of just like, so,

A question from Justin Regis appears on screen: "...so is the idea to use a Traffic test with new creative and THEN move those 'winners' over to a Conversion campaign?"

Kevin Kovach: Yeah, this is where David would have been really, you know, I had to convert him essentially once he started. Um, so before he started, I was comfortable running the traffic tests, um, just by themselves for like a year. Um, I knew he would have some questions, um, and he was a little skeptical. So I started running them side by side again on accounts that, you know, was didn't negatively impact performance and showed him, you know, three consecutive reports, look, traffic campaign and this purchase campaign identified the same three winners. Um, so that's how I got that buy in. The key is you have to introduce friction. So you can kind of filter out the bouncing. And that's why traffic campaigns aren't helpful for um, filling the funnel because a lot of them do bounce, but if you introduce friction and use say content views plus any sort of event, um, as your numerator in the uh conversion rate formula, you can actually get very direct results as to what happens in the conversion campaign. Um, 90% of the time, it'll identify like the top three ads. That order might change a little bit. Like I said earlier, I'm generating a playoff field here, not just like a single winner. So that same three is still going to be valuable to me, but the benefits of the traffic test is we get those results a lot faster. Sometimes a week, maybe a two two tops, three is the absolute max if we want to do that omni channel um, event um, aspect of that.

Evan Lee: That's so interesting because something I've also like selfishly been curious about is is Facebook being the place where you're able to to to access a large amount of people like you described it, it's just being like a more or less. Um, Facebook might tell you one thing, like I'm probably not giving a great example that'll lead into a question, but let's say you have a couple personas, one that's like Daisy Dukes and then sleepy Sally, whatever it might be, right? Daisy Dukes, we thought would be the best, but we see sleepy Sally doing really well on for example. But then is Daisy Dukes like, should we restructure everything around that or is it somebody we should go after on a Tik Tok? Or is it somebody we should go after on an email more? How do you, how do you uh, like navigate those type of insights?

Kevin Kovach: So similar to uh what we did with like that value prop breakdown earlier, I'm uh going to verify that there is no particular gender or demo that's throwing off these results when you're looking at them at the aggregated level. So in that earlier example, if we're just looking at gifting at the aggregated level, it would have been a clear and decisive winner. But when you actually separate it out, you see, okay, it's only actually a clear and decisive winner with 55 plus and that's where the algorithm wanted to go. Um, you know, the true winner is going to be the one that actually activates as many of those demos as possible. And that's the one I'm most interested in. So if as long as you know, there are no shenanigans, then yeah, I'll follow up and say, okay, this might be something that's more appropriate for or a different channel. Um, but I I try and be as agnostic as possible with Facebook. Like in the end, it doesn't matter what our opinion is, it's the algorithms. So I'll test something that no one likes. I don't care. Um, just in case, you never know. Ugly ads do really well and sometimes you can just keep getting uglier and it still does really well. Um, and then just adds to that diversity too because if you have like ugly ads and then with your videos, like I understand from the branding perspective that concerns people, but on the algorithmic perspective, that increases your reach so much. Um, because Facebook knows the type of content that people historically interact with and is more inclined to serve that to them. Which is why having so many um, is often a key element to scaling.

Evan Lee: I said, you know, it's easier said than done. You have to test into it. And if you follow the Pareto Pareto principle, you can assume that 80% of creative is probably going to fail. Um, but at least you can learn from it.

Kevin Kovach: Yeah, yeah, yeah.

Evan Lee: And I think you like have a beginner's mind to it because it's just like, I don't know anything. I don't think like who knows if this works, but throw it in and we'll see what happens at the end of the day, right?

Kevin Kovach: Yeah. Like if we're testing responsibly, making sure it doesn't disrupt momentum in core campaigns or it's not getting like a crazy amount of um, budget so the opportunity cost of this test is super high. Yeah, go for it. Um, this also I think is a way to beneficially lead the team because I really don't shoot down ideas. Um, you know, it's a way of also keeping myself in check because I think it's easy for buyers no matter what seniority level to fall into patterns or to like become stubborn about something. Um, you know, even I I fall victim to that. So just having the freedom to test what they want, when they want with approval from the client, obviously, as long as we test responsibly, you know, they know I'm not going to like come down on them because I just disagree with it. Um, but those moments actually, you know, they do check me. Um, I've seen a manual carousel do well recently. I had totally absolutely written those off for like a year. Um, so yeah, those moments are cool.

Evan Lee: Love it. Everybody, I have one last question for Kevin on my end. But as a quick reminder, please keep upvoting and getting your questions into the Q&A tab; we're going to jump there right after this one. And one last thing: I saw Talia mention in the chat, are we supposed to be seeing the same slide? Yes, we did have some stuff to share, but we're running into some technical difficulties, so we're doing the best we can with this one. But Kev, the last question I have for you, I would be ashamed if I did not ask, because whenever anyone talks about naming conventions, I say I have a guy, and that's who I point to. For context, everybody, Kevin is a wiz when it comes to naming conventions. So what I'm curious about now is: we've worked on all of this great creative that's now produced, and like you mentioned, you have a specific way you test it into the account. How do you set up your naming conventions, and how do they then feed into your analysis once those ads are live?

Kevin Kovach: Yeah, for sure. So before we met and I learned about Motion: my background is in statistics and data engineering, so I was concepting out a naming system that would be very spreadsheet- and formula-friendly, because I wanted to get the data into a database as fast and as reliably as possible and minimize the QA. If there are any data scientists in the chat, they'll know that about 90% of the time you spend doing data science is actually getting the data into a usable format. Having an expansive and reliable naming system saves so much time, and it also allows us to provide insane depth in our reports and have them update in real time in Motion. So, you said we didn't have anything to share, but I can share my screen. Let's do it; I can walk through what David prepared. Give me a minute here.

Evan Lee: And you might have to zoom in for folks just so we can see it a little bit.

Kevin Kovach: Oh yeah.

Evan Lee: A little more. Give me a couple more if you can, sorry. Yeah, 175. There we go. I like it.

Kevin Kovach: So a detailed naming system is filled with qualitative information that you can't otherwise just pull into a column and export. That allows us to build these real-time reports that clients have access to, to get a sense of some of the more basic things, like what ad formats are working and where things are going in the funnel. I opened up a few tabs here so no one has to wait for Motion to load. I can't share the exact creative and give away the client, so this is a pseudonym, but pretty much across all of our accounts we'll have these four categories: reports the client can always go into that are always updating, looking at prospecting audiences, retargeting audiences, and, on the creative side, value prop. Part of our naming system is that value prop tag. We use the Facebook naming template to make it as easy and fast as possible, even though it looks very complicated and long, and with new buyers it can be a little overwhelming the first week. But once they get used to the naming template, it gets pretty fast and doesn't really add that much more time, and it saves an insane amount of time when reporting. We essentially don't have to spend much time making creative reports anymore; as long as we're on top of our naming, all of these automatically update. We just have to go in and QA, make sure there isn't a missed character somewhere or something off. Either way, if you can save like 95% of your reporting time, that can open up a whole day. I've talked to plenty of buyers who lose days to reporting. So this just opens up...
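The tag-encoding idea Kovach describes can be sketched in a few lines: pack qualitative tags into the ad name with a fixed delimiter, then split them back out into report-ready columns. The segment order, field names, and example name below are invented for illustration; they are not ATTN's actual template.

```python
# Hypothetical naming-template parser: each ad name carries its tags in a
# fixed order, separated by underscores, so reporting tools can split the
# name back into labeled columns with zero manual tagging.

def parse_ad_name(ad_name: str) -> dict:
    """Split a delimited ad name into labeled fields for reporting."""
    fields = ["date", "format", "value_prop", "style", "pillar"]
    parts = ad_name.split("_")
    if len(parts) != len(fields):
        # A missed character or extra underscore surfaces here during QA.
        raise ValueError(f"expected {len(fields)} segments, got {len(parts)}: {ad_name!r}")
    return dict(zip(fields, parts))

row = parse_ad_name("2023-08-01_IMG_gifting_productstylized_emotion")
print(row["value_prop"])  # gifting
print(row["pillar"])      # emotion
```

The length check is what makes the "QA for a missed character" step fast: a malformed name fails loudly instead of silently landing in the wrong report column.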

Evan Lee: And the biggest thing there is the time. It's not like you just get it back and do nothing; instead of manual grunt work putting reports together, you can shift to decision making. Now I can spend time interpreting the data to know what to do next, instead of just putting the data together.

Kevin Kovach: Yeah, you have a chance to sit and think critically. And it gives the brands and clients a lot to think about as well. These are not test campaigns right here; every single ad is tagged with a value prop tag, so these are core campaigns, gladiator style. I'm not controlling the spend anymore: what value props are driving the performance? And then we also have our controlled themes down here. In sales and promotions, we can break things out for Pierce and Pierce fine detergents (bonus points if you know what that's from) and break out the creative for each specific scent. There's also an image style breakdown in addition to value props; we segment all of our creative by style. For images, it can be model UGC; product stylized, which is typically how we define any product shot that's been set up in a special way; and then we separately tag the general product e-com shots, because sometimes they do surprisingly well. Like I said, just because something's ugly doesn't mean you shouldn't test it. And then just general graphics. This is something David also has access to and uses for creative strategy. So in a way, this is where the media buyers come into creative. It's not necessarily building in Facebook; it can just be us being on top of naming systems, which gives the creative team the flexibility to go in here whenever they want and come away with learnings.

Evan Lee: Love it. One thing I have a follow-up on: all of these reports stem from the naming. Are there any examples you can share that show our audience the specific things you're tracking, if it's consistent?

Kevin Kovach: Yeah. So the consistent part is going to be here: we have a style tag that runs through all of the image styles, and we also have a video theme tag running through all the different video styles. David is actually in the process of combining those two tags into one, so we'll have an update here soon. And with that update, I'm going to add a few more tags. One of those is the creative pillar, which is how the entire team is going to be thinking about and approaching their creative; as a reminder, those are education, emotion, and authority. So soon we'll have that ongoing breakdown here as well, plus value prop. And then honestly, we'll have some custom tags for each client. One of the things we like to do at onboarding is ask: what questions can we answer for you? Then we come up with a strategy to use Facebook as that focus group, to give them the data that can help them make big, expensive decisions. Sometimes that requires a custom tag. On brands that do a lot of whitelisting or influencers, I'll add a page tag, so I can label what actual page each ad is coming from. But that's not something I need on most accounts, where 90% of the ads are just coming from the brand page.

Evan Lee: Love it. And a quick plug for anyone: you can see Kevin here using Motion. Feel free to check us out and book some time so we can talk about how to make this come to life for you. Cool.

Kevin Kovach: Oh yeah, it's crazy. You just get so much depth with these reports, and they don't take that much time, as long as you're on top of your naming so it has that depth to it.

Evan Lee: Love it. Okay. And with that, you know, we've had buyers take on more accounts, which positively impacts their income.

Kevin Kovach: Yeah.

Evan Lee: Hand in hand: business goals and everyone's goals being met. What I want to do here with our last 10 minutes is get through the bunch of questions that have started to pile up. Let's do our best to get through them. I'll kick off with the one that has the most votes, 15, and hopefully it's a nice and easy lob. Here it is.

A question from Iain Harris appears on screen: "Can you define 'share IDs'?"

Evan Lee: Can you define share IDs? I think you had mentioned this pretty much immediately when we talked about it.

Kevin Kovach: Yeah, I think it's super, super important, and honestly, in a lot of cases, it's even more important than structure. So when you launch an ad, you're creating two IDs in the account. There's the ad ID, which is created every time you create an ad, but there's also an unpublished post that's actually used to show the ad, and that's where the content ID, or the post ID (those can be used interchangeably), comes in. You can actually share that post ID, and the post ID is where all of the social proof is stored, not the ad ID. That way you can have multiple ads that are all actually using the same post ID; if you're running them in like ten different audiences, all of the engagement is going to stack. That's how you see those posts on Facebook with a massive amount of social proof. I guarantee you, except in the most extreme situations, those are shared post IDs.
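The "engagement stacks on the post, not the ad" point can be illustrated with a small grouping sketch. The ad and post IDs and the like counts below are made up; the point is only the grouping logic.

```python
# Illustration of shared post IDs: several ads (different audiences) can
# reference one unpublished post, and likes/comments accumulate on the
# post ID, not on each ad ID. All data here is invented.
from collections import defaultdict

ads = [
    {"ad_id": "a1", "post_id": "page_123", "likes": 40},
    {"ad_id": "a2", "post_id": "page_123", "likes": 75},  # same creative, new audience
    {"ad_id": "a3", "post_id": "page_999", "likes": 12},
]

social_proof = defaultdict(int)
for ad in ads:
    social_proof[ad["post_id"]] += ad["likes"]  # engagement stacks on the post

print(social_proof["page_123"])  # 115 likes shown on the shared post
print(social_proof["page_999"])  # 12
```

Viewed per ad ID, a1 would show only 40 likes; because both a1 and a2 point at the same post, every viewer sees the combined 115.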

Evan Lee: Awesome. Everybody, I hope that one helps. If it doesn't, just throw it into the chat really quick so we can clarify. But I wanted to jump to the second most voted question and it comes from Justin here.

A question from Justin Regis appears on screen: "Quick ?? regarding 'statistical significance' - our agency agrees on a $1k spend minimum for an 1st run adset (usually with 4-6 ads). Does that match with what Kevin uses as a benchmark?"

Evan Lee: So, quick question regarding stat sig. Our agency agrees on a $1k spend minimum for a first ad set run, typically with four to six ads. Does that match a benchmark you might use to determine relevance? So, $1k spend: is that stat sig for you?

Kevin Kovach: So I actually do one ad per ad set. I'm not in any way saying your way is wrong, because the most important thing is that you find a way to test, and how you do it is going to vary by account, by situation, by budget. But I want to make sure every single variant gets the same amount of spend, or else spend is technically a variable. If you're calling it a single-variable test and you're just trying to test creative, but spend is also a variable, it's no longer reliable. And honestly, for the traffic tests, $1k for a week is probably going to do it. Sometimes we can even get the same results at like $500, but we let it run for the full week just to be sure. On the purchase side, man, that varies so, so much. I have one account right now with around an $800 AOV; to test there with the purchase objective and actually get through the learning phase, that would be a considerable investment.
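With equal spend per variant (one ad per ad set), comparing two ads reduces to comparing two conversion rates on roughly equal samples. One standard way to check significance is a two-proportion z-test; the visitor counts, conversion counts, and the 1.96 cutoff (~95% confidence) below are illustrative assumptions, not Kovach's stated method.

```python
# Sketch: two-proportion z-test for "did ad A really beat ad B?"
# Equal spend per variant keeps n_a and n_b comparable, so spend is
# not a confounding variable in the comparison.
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical week of traffic-test data: 1,000 visitors per variant.
z = two_proportion_z(conv_a=60, n_a=1000, conv_b=40, n_b=1000)
print(abs(z) > 1.96)  # True means significant at roughly 95% confidence
```

If spend (and hence sample size) differed wildly between the variants, the same arithmetic still runs, but the test no longer isolates creative as the only variable, which is the reliability point Kovach makes.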

Evan Lee: Cool. So we have some questions now about the testing. I think this goes hand in hand with what you're talking about here.

A question from Andres Amado appears on screen: "What methodology do you prefer to test new creative frameworks? A/B tests OR run all the creatives in the same ad group?"

Evan Lee: Andres asks, hopefully I've pronounced that correctly: what methodology do you prefer to test new creative frameworks? A/B tests, or running all the creatives in the same ad group? One ad per ad set, maybe campaign budget optimization: what are you typically running there?

Kevin Kovach: Yeah. In the ideal situation, both. Sometimes performance may not allow both, but that's why I like having the traffic test: I know from doing this for a while that it's reliable, and it's not going to be a huge opportunity cost regardless of the overall budget of the account, unless it's like a couple hundred dollars. And then on the side, I do like to identify that algorithm winner with the gladiator-style free-for-all. That can be a DCO test, depending on the amount of creative; more recently, say if I have like 30 variants, I'll just launch them in an Advantage+ campaign. But I still want that traffic campaign on the side, so I have the flexibility to turn off that purchase campaign if I find the opportunity cost is insane and not great for performance.

Evan Lee: So people are definitely curious about the traffic campaign, because Justin comes in with another question:

A question from Justin Regis appears on screen: "...so is the idea to use a Traffic test with new creative and THEN move those 'winners' over to a Conversion campaign?"

Kevin Kovach: Yeah. This is where David came in; I had to convert him, essentially, once he started. Before he started, I had been comfortable running the traffic tests by themselves for like a year. I knew he would have some questions, and he was a little skeptical. So I started running them side by side again, on accounts where it didn't negatively impact performance, and showed him three consecutive reports: look, this traffic campaign and this purchase campaign identified the same three winners. That's how I got the buy-in. The key is you have to introduce friction, so you can filter out the bouncing. That's why traffic campaigns aren't helpful for filling the funnel (a lot of that traffic does bounce), but if you introduce friction and use, say, content views plus any sort of event as your numerator in the conversion rate formula, you can actually get very direct results as to what happens in the conversion campaign. Ninety percent of the time, it'll identify the top three ads. The order might change a little bit; like I said earlier, I'm generating a playoff field here, not just a single winner, so that same three is still valuable to me. But the benefit of the traffic test is we get those results a lot faster: sometimes a week, two tops, three is the absolute max if we want to do that omnichannel event aspect of it.
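The friction idea can be sketched as a ranking: instead of scoring traffic-test ads on raw clicks, score them on how much of that traffic reaches deeper events. Kovach only specifies the numerator ("content views plus any sort of event"); the choice of link clicks as the denominator, the add-to-cart event, and all the numbers below are assumptions for illustration.

```python
# Sketch: rank traffic-test ads by a friction-adjusted rate,
# (content views + a deeper event) / link clicks, so bounced clicks
# don't inflate an ad's score. Data and field choices are invented.

def friction_rate(ad: dict) -> float:
    return (ad["content_views"] + ad["add_to_carts"]) / ad["link_clicks"]

ads = [
    {"name": "ugc_video",   "link_clicks": 500, "content_views": 320, "add_to_carts": 40},
    {"name": "static_ecom", "link_clicks": 620, "content_views": 210, "add_to_carts": 15},
    {"name": "graphic",     "link_clicks": 450, "content_views": 300, "add_to_carts": 55},
]

# The "playoff field": best filtered performers first, not a single winner.
playoff_field = sorted(ads, key=friction_rate, reverse=True)
print([ad["name"] for ad in playoff_field])
```

Note that static_ecom has the most clicks but the worst friction-adjusted rate, which is exactly the kind of cheap-click mirage the friction step is meant to filter out.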

Evan Lee: That's so interesting, because something I've selfishly been curious about is Facebook being the place where you can access a large number of people, like you described it, more or less as a focus group. Facebook might tell you one thing... I'm probably not giving a great example, but let's say you have a couple of personas, one that's Daisy Dukes and one that's Sleepy Sally, whatever it might be. We thought Daisy Dukes would be the best, but we see Sleepy Sally doing really well on Facebook, for example. Should we restructure everything around that? Or is that somebody we should go after on TikTok, or lean into more on email? How do you navigate those types of insights?

Kevin Kovach: So, similar to what we did with that value prop breakdown earlier, I'm going to verify that there's no particular gender or demo throwing off the results when you look at them at the aggregated level. In that earlier example, if we were just looking at gifting at the aggregated level, it would have been a clear and decisive winner. But when you actually separate it out, you see it's only a clear and decisive winner with 55-plus, and that's where the algorithm wanted to go. The true winner is the one that activates as many of those demos as possible, and that's the one I'm most interested in. So as long as there are no shenanigans, then yeah, I'll follow up and say, okay, this might be more appropriate for a different channel. But I try to be as agnostic as possible with Facebook. In the end, it doesn't matter what our opinion is; it's the algorithm's. So I'll test something that no one likes, I don't care, just in case; you never know. Ugly ads do really well, and sometimes you can just keep getting uglier and it still does really well. And that adds to your diversity too: if you have ugly ads alongside your videos, I understand from the branding perspective that concerns people, but from the algorithmic perspective it increases your reach so much, because Facebook knows the type of content people historically interact with and is more inclined to serve that to them. Which is why having so many styles is often a key element of scaling.
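The demo-breakdown check Kovach describes (a value prop that wins in aggregate but only because one demo carries it) can be sketched with a toy table. The value props, age bands, and conversion counts below are invented to mirror his gifting/55-plus example.

```python
# Sketch of the aggregate-vs-segmented check: find the aggregate winner,
# then see which demos it actually wins. All numbers are invented.

results = {  # conversions per value prop, split by age demo
    "gifting":   {"18-34": 10, "35-54": 12, "55+": 90},
    "self_care": {"18-34": 30, "35-54": 35, "55+": 25},
}

def aggregate_winner(r: dict) -> str:
    return max(r, key=lambda vp: sum(r[vp].values()))

def demos_won(r: dict, vp: str) -> list:
    demos = next(iter(r.values())).keys()
    return [d for d in demos if r[vp][d] == max(r[v][d] for v in r)]

print(aggregate_winner(results))      # gifting: 112 vs 90 in total
print(demos_won(results, "gifting"))  # but it only wins one demo
print(demos_won(results, "self_care"))
```

Here gifting looks like the decisive aggregate winner, yet it wins only the 55-plus band, while self_care wins the two younger bands; by Kovach's "activate as many demos as possible" criterion, the broader ad is the more interesting one.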

Evan Lee: Like I said, it's easier said than done. You have to test into it. And if you follow the Pareto principle, you can assume that 80% of creative is probably going to fail, but at least you can learn from it.

Kevin Kovach: Yeah, yeah, yeah.

Evan Lee: And I think you have a beginner's mind about it. It's just: I don't know anything, who knows if this works, but throw it in and we'll see what happens at the end of the day, right?

Kevin Kovach: Yeah. Like if we're testing responsibly, making sure it doesn't disrupt momentum in core campaigns or it's not getting like a crazy amount of um, budget so the opportunity cost of this test is super high. Yeah, go for it. Um, this also I think is a way to beneficially lead the team because I really don't shoot down ideas. Um, you know, it's a way of also keeping myself in check because I think it's easy for buyers no matter what seniority level to fall into patterns or to like become stubborn about something. Um, you know, even I I fall victim to that. So just having the freedom to test what they want, when they want with approval from the client, obviously, as long as we test responsibly, you know, they know I'm not going to like come down on them because I just disagree with it. Um, but those moments actually, you know, they do check me. Um, I've seen a manual carousel do well recently. I had totally absolutely written those off for like a year. Um, so yeah, those moments are cool.

Evan Lee: Love it. Uh, everybody, I have one last question for Kevin on my end. Um, I need to ask it, but just as a quick reminder, please keep up voting and getting your questions into the Q&A tab. Going to be jumping there right after this one. And one last thing, I saw that Talia had mentioned in the chat, like are we supposed to be seeing that same slide? Yes, uh, we did have some stuff to share, but again, running into some technical difficulties. So we're making best as we can just on this one. But Kev, last question that I have for you, I would be ashamed if I did not ask because whenever anyone talks about naming conventions, I say I have a guy and that's where I start to point to. So for context, everybody, uh, Kevin is like a wiz when it comes to naming conventions. So something I'm curious about now is like we've worked on all of this great creative that's now produced and like you'd mentioned, you have a specific way you test into the account. So I'm curious about is like how do you set up naming conventions and how does that then correlate into your analysis after those are live?

Kevin Kovach: Yeah, for sure. So before we met, um, and figured out and I learned about motion, um, you know, my background is in like statistics and data engineering. So I was thinking of and concepting out a naming system that would be very spreadsheet and formula friendly because I want to be able to into a database as fast as possible and reliably as possible, minimize the QA. If there are any like data scientists in the chat, like they'll know about 90% of the time you spend doing data science is actually getting the into a usable format. And having a, you know, expansive and reliable naming system just saves so much time. And then also allows us to provide like insane depth to our reports and have them be like real time in motion. Um, so you say we didn't have anything to share, but I can share my screen. Hey, let's do it. I can walk through what uh David prepared. We'll give me a minute here.

Evan Lee: And you might have to zoom in for folks just so we can see it a little bit.

Kevin Kovach: Oh yeah.

Evan Lee: A little more. Give me a couple more if you can. I'm sorry. Yeah, yeah, yeah. 175. Yeah, there we go. There we go. I like it.

Kevin Kovach: So with a detailed naming system, it's filled with qualitative information that we can't like just simply pull into a column and export. And that allows us to build these real-time reports that clients can have access to and get a sense of, you know, some of the more basic things like, you know, what ad formats are working, you know, here I opened up a few tabs because uh so no one had to wait for motion to load where things are going in the funnel. Um, I did I can't um, share the exact creative and give away the client. Um, this is a pseudonym, but pretty much across all of our we will have these four categories. reports that the client can always access and go into that will be always updating, looking at prospecting audiences, retargeting audiences, but on the creative side, by um, value prop. So part of our naming system is going to have that value prop tag. And we use the uh Facebook naming template to make it as easy and fast as possible even though it, you know, looks very, very complicated and long and, you know, sometimes with new buyers a little overwhelming the first week. But once they get used to using the naming template and this gets pretty fast and doesn't really add that much more time, but it saves an insane amount of time when reporting. So we essentially don't have to spend that much time making creative reports anymore. As long as we're on top of our all of these are automatically going to update. Um, and we just kind of have to go in and and QA, make sure we didn't there isn't like a missed character somewhere or something off. But either way that, you know, if you can save like 95% of your reporting time, you know, that can open up a whole day. Like I've talked to plenty of buyers who lose days to reporting. Um, so this just opens up

Evan Lee: And the biggest thing there is just like losing time, losing time, it's not like you just get it back and do nothing. It's like instead of just manual like grunt work and putting together reports, you can now shift to decision making at the end of the day. It's like, okay, now I can spend time interpreting the data to know what to do next instead of just putting the data together ultimately.

Kevin Kovach: Yeah, you know, you have a chance to sit and kind of think critically. Um, and really just it's a it gives the brands and clients a lot to think about as well. Like these are these are not test campaigns right here. Like every single ad is tagged with a value prop tag. So these are like core campaigns, gladiator style. I'm not controlling the spend anymore. What value props are driving the performance? Um, and then also just kind of when we do have our controlled theme down here. So in the sales and promotions, we can break things out, you know, for Pierce and Pierce fine detergents, bonus points if you know what that's from. Um, and break out the creative for each specific scent. Image style breakdown in addition to value props, like we segment all of our creative by style. So this is for images. So it can be model UGC, product stylized, that's typically how, you know, we define any sort of product shot that's been set up in a special way. product e-com shot, and then we separately will tag those uh general product e-com shots because sometimes they do surprisingly well. Um, I said, you know, just because something's ugly, you know, you should test it. Um, and then just general graphics. So this is something that David also has access to and will be creative strategy. So in a way, this is where the where the media buyers come in to like creative. But it's not necessarily like Facebook, it could also just be us being on top of naming systems. So the creative team flexibility to go in here whenever they want and come away with learnings.

Evan Lee: Love it. Uh, one thing I I have a follow up on is just like all of these reports stem from the naming. Is there any like examples or something you can share that showcases to our audience specific things that you're tracking if it is consistent?

Kevin Kovach: Yeah. So the consistent part is going to be here. So we have a style tag that's going to be going through all of the image styles. And then we also have a video theme tag going through all the different video styles. David actually is in the process of combining those two tags into one. So we'll be having an update here soon. And with that update, I'm going to add a few more tags. So, um, one of those is going to be the creative pillar, which is how the entire team is going to be thinking about and approaching their creative. as a reminder, we're going to be education, emotion and authority. So soon we would have that ongoing breakdown here as well. Value prop. And then honestly, we'll have some custom tags for each client. So one of the things we like to do on onboarding is like, what questions can we answer for you? And then try and come up with a strategy to use Facebook as that focus group to actually give them the data that can help them make big expensive decisions. So sometimes that will require like a custom tag. Um, so on brands that do like a lot of white listing or influencers, I'll add like a page tag. So I can you know, label what actual page is this coming from. But that's not something I need on most accounts if, you know, 90% of the ads are just coming from the uh the brand page.

Evan Lee: Love it. And if anyone plug, you can see Kevin here using motion. Feel free to check us out. Uh, book some time so you can talk about how to make this come to life for you all. Cool.

Kevin Kovach: Oh yeah, it it's crazy. Um, because I mean, like you just get so much depth with these reports and they don't take that much time as long as you're on top of your has that uh that depth to it.

Evan Lee: Love it. Okay. And yeah, with that, you know, we've had buyers take on more accounts, which uh positively impacts their income.

Kevin Kovach: Yeah.

Evan Lee: Hand in hand, business and goal, business, everyone's goals being met. to do here with our last 10 minutes is just a bunch of questions that have started to pile up. So let's do our best to try and get through them. Um, but I think like where I'll kick off is the one that has the most votes, 15, and hopefully it's a nice and easy lob. is one here.

A question from Iain Harris appears on screen: "Can you define 'share IDs'?"

Evan Lee: Can you define share IDs? And I think you had mentioned this like pretty immediately that we had talked about.

Kevin Kovach: Yeah, so I I think it's super, super important and I think it honestly in a lot of cases it is even more important than structure. Um, so when we say when you launch an ad, um, you're going to be creating two IDs in the account. There there's going to be the ad which is going to be created every time you create an ad, but also there's going to be an unpublished post made that's actually used to show the ad and that's where the or the or the post ID, those can be used interchangeably. And you can actually share that content ID and the content ID is where all of the social proof is stored, not the ad ID. So that way you can have, you know, multiple ads that are all actually using the same ID. So if you're running them in like 10 different audiences, all of the engagements going to stack. So that's how you see those posts in Facebook massive amount of social proof. Like I guarantee you, except in the most extreme situations, those are shared post IDs.

Evan Lee: Awesome. Everybody, I hope that one helps. If it doesn't, just throw it into the chat really quick so we can clarify. But I wanted to jump to the second most voted question and it comes from Justin here.

A question from Justin Regis appears on screen: "Quick ?? regarding 'statistical significance' - our agency agrees on a $1k spend minimum for an 1st run adset (usually with 4-6 ads). Does that match with what Kevin uses as a benchmark?"

Evan Lee: So quick question regarding stat sig. Our agency agrees that 1k spend minimum for the first ad set run with typically four to six ads. Does that match a benchmark that you might use to determine relevance? So 1k spend, does that stat sig for you?

Kevin Kovach: So I actually do one ad per ad set. Um, yeah, I know it looks like I actually do I mean I I I do test this way. I'm not in any way saying it's wrong because you know, the most important thing is you find a way to test and you know, how you do it is going to vary by account, by situation, by budget. Um, but I want to make sure that every single variant gets the same amount of spend or else spend is going to be a technically a variable and if you're calling it a single variable test and you're just trying to test creative, but then spend is also a variable, it's no longer going to be reliable. Um, and honestly, for the traffic tests, you know, 1k is actually, you know, 1k for a week, you know, that's probably going to do it. Um, sometimes we can even get the get it the same results at like $500, but just let it run for the full week just to be sure. On the purchase side, man, that varies so, so, so much. Um, I have one account right now that's like a $800 AOV. to test that to to test with like the purchase objective there and actually want to get through the learning phase, that would be a considerable investment.

Evan Lee: Cool. So we have some questions now about the testing. I think this goes hand in hand with what you're uh, with what you're talking about here.

A question from Andres Amado appears on screen: "What methodology do you prefer to test new creative frameworks? A/B tests OR run all the creatives in the same ad group?"

Evan Lee: Andreas asks, hopefully I've pronounced that correctly. What methodology do you prefer to test new creative frameworks? or run all the creatives in the same ad group? one ad per ad set, maybe it's like budget optimization. Are you typically running there?

Kevin Kovach: Yeah. In the ideal situation, both. Um, you know, sometimes and performance may not allow both, but that's why I like having the traffic test because I know by doing this for a while, it's reliable and it's not going to be like an a huge opportunity cost, you know, regardless of like the overall budget of the account unless it's like a couple hundred dollars. Um, and then on the side, I do like to identify that algorithm winner, that gladiator style free for all. That can be a dynamic DCO test, um, depending on the amount of creative. More recently, I've been just, say if I have like 30 variants, just launch them in an advantage plus campaign. Um, but I still want that traffic campaign on the side, so I have the flexibility to turn off that purchase campaign and find that the opportunity cost is insane and not really um, great for performance.

Evan Lee: So people are definitely curious about the traffic campaign because Justin comes with another question of just like, so,

A question from Justin Regis appears on screen: "...so is the idea to use a Traffic test with new creative and THEN move those 'winners' over to a Conversion campaign?"

Kevin Kovach: Yeah, this is where David would have been really, you know, I had to convert him essentially once he started. Um, so before he started, I was comfortable running the traffic tests, um, just by themselves for like a year. Um, I knew he would have some questions, um, and he was a little skeptical. So I started running them side by side again on accounts that, you know, was didn't negatively impact performance and showed him, you know, three consecutive reports, look, traffic campaign and this purchase campaign identified the same three winners. Um, so that's how I got that buy in. The key is you have to introduce friction. So you can kind of filter out the bouncing. And that's why traffic campaigns aren't helpful for um, filling the funnel because a lot of them do bounce, but if you introduce friction and use say content views plus any sort of event, um, as your numerator in the uh conversion rate formula, you can actually get very direct results as to what happens in the conversion campaign. Um, 90% of the time, it'll identify like the top three ads. That order might change a little bit. Like I said earlier, I'm generating a playoff field here, not just like a single winner. So that same three is still going to be valuable to me, but the benefits of the traffic test is we get those results a lot faster. Sometimes a week, maybe a two two tops, three is the absolute max if we want to do that omni channel um, event um, aspect of that.

Evan Lee: That's so interesting, because something I've also selfishly been curious about is Facebook being the place where you can access a large amount of people; like you described it, it's more or less a focus group. I'm probably not giving a great example that'll lead into a question, but let's say you have a couple of personas, one that's like Daisy Dukes and then Sleepy Sally, whatever it might be, right? We thought Daisy Dukes would be the best, but we see Sleepy Sally doing really well on Facebook, for example. Should we restructure everything around that? Or is Daisy Dukes somebody we should go after on TikTok, or somebody we should go after more over email? How do you navigate those types of insights?

Kevin Kovach: So, similar to what we did with that value prop breakdown earlier, I'm going to verify that there's no particular gender or demo throwing off the results when you look at them at the aggregated level. In that earlier example, if we were just looking at gifting at the aggregated level, it would have been a clear and decisive winner. But when you actually separate it out, you see, okay, it's only a clear and decisive winner with 55-plus, and that's where the algorithm wanted to go. The true winner is the one that actually activates as many of those demos as possible, and that's the one I'm most interested in. So as long as there are no shenanigans, then yeah, I'll follow up and say, okay, this might be something that's more appropriate for a different channel. But I try to be as agnostic as possible with Facebook. In the end, it doesn't matter what our opinion is; it's the algorithm's. So I'll test something that no one likes, I don't care, just in case, because you never know. Ugly ads do really well, and sometimes you can just keep getting uglier and they still do really well. And that adds to your diversity, too: if you have ugly ads alongside your videos, I understand from a branding perspective that concerns people, but from an algorithmic perspective, it increases your reach so much, because Facebook knows the type of content people historically interact with and is more inclined to serve that to them. Which is why having that much creative diversity is often a key element of scaling.
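
The aggregate-vs-segment check Kevin describes is essentially a guard against a Simpson's-paradox effect: one value prop can "win" overall while the entire lift comes from a single demo the algorithm favored. A minimal sketch of that check with pandas, using invented numbers (the value props and age bands are illustrative, not from his account):

```python
import pandas as pd

# Hypothetical ad-level results: value_prop, age_band, spend, purchases
rows = [
    ("gifting",   "18-34", 400, 4),
    ("gifting",   "35-54", 300, 3),
    ("gifting",   "55+",   300, 18),  # the algorithm pushed delivery here
    ("self-care", "18-34", 400, 8),
    ("self-care", "35-54", 300, 6),
    ("self-care", "55+",   300, 5),
]
df = pd.DataFrame(rows, columns=["value_prop", "age_band", "spend", "purchases"])

# Aggregated view: cost per purchase by value prop.
agg = df.groupby("value_prop")[["spend", "purchases"]].sum()
agg["cpa"] = agg["spend"] / agg["purchases"]
print(agg["cpa"])  # gifting looks like the decisive winner overall

# Segmented view: the same metric per demo shows where the win comes from.
seg = df.groupby(["value_prop", "age_band"])[["spend", "purchases"]].sum()
seg["cpa"] = seg["spend"] / seg["purchases"]
print(seg["cpa"])  # gifting only wins with 55+; self-care wins the rest
```

In this toy data, gifting has the better aggregate CPA, but self-care wins two of the three age bands, which is exactly the "shenanigans" case where the aggregate winner is not the creative that activates the most demos.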

Evan Lee: Like I said, it's easier said than done. You have to test into it. And if you follow the Pareto principle, you can assume that 80% of creative is probably going to fail, but at least you can learn from it.

Kevin Kovach: Yeah, yeah, yeah.

Evan Lee: And I think you have a beginner's mind about it. It's just: I don't know anything, who knows if this works, but throw it in and we'll see what happens at the end of the day, right?

Kevin Kovach: Yeah. As long as we're testing responsibly, making sure it doesn't disrupt momentum in core campaigns and isn't getting a crazy amount of budget so that the opportunity cost of the test is super high, then yeah, go for it. I also think this is a beneficial way to lead the team, because I really don't shoot down ideas. It's a way of keeping myself in check, too, because I think it's easy for buyers at any seniority level to fall into patterns or become stubborn about something. Even I fall victim to that. So they have the freedom to test what they want, when they want, with approval from the client, obviously, as long as we test responsibly; they know I'm not going to come down on them just because I disagree with it. And those moments do check me. I've seen a manual carousel do well recently, and I had totally written those off for like a year. So yeah, those moments are cool.

Evan Lee: Love it. Everybody, I have one last question for Kevin on my end, but just as a quick reminder, please keep upvoting and getting your questions into the Q&A tab; we're going to jump there right after this one. And one last thing: I saw Talia had mentioned in the chat, are we supposed to be seeing that same slide? Yes, we did have some stuff to share, but again, we're running into some technical difficulties, so we're making the best of it. But Kev, the last question I have for you, I'd be ashamed if I didn't ask, because whenever anyone talks about naming conventions, I say I have a guy, and that's where I start to point. For context, everybody, Kevin is a wiz when it comes to naming conventions. So what I'm curious about now is: we've worked on all of this great creative that's now produced, and like you mentioned, you have a specific way you test it into the account. How do you set up naming conventions, and how does that then carry into your analysis once those are live?

Kevin Kovach: Yeah, for sure. So before we met and I learned about Motion, you know, my background is in statistics and data engineering, so I was concepting out a naming system that would be very spreadsheet- and formula-friendly, because I want to be able to get the data into a database as fast and as reliably as possible and minimize the QA. If there are any data scientists in the chat, they'll know that about 90% of the time you spend doing data science is actually getting the data into a usable format. Having an expansive and reliable naming system just saves so much time, and it also allows us to provide insane depth in our reports and have them be real time in Motion. You said we didn't have anything to share, but I can share my screen and walk through what David prepared. Give me a minute here.
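
Kevin doesn't show his exact schema, but a "spreadsheet- and formula-friendly" naming system generally means fixed key=value segments joined by a delimiter, so one split puts every tag in its own column. A hypothetical sketch (the tag keys and delimiter here are invented for illustration):

```python
# Invented example ad name; Kevin's real schema isn't shown in the talk.
AD_NAME = "vp=gifting_style=modelUGC_pillar=emotion_theme=holiday_v=03"

def parse_ad_name(name: str, sep: str = "_", kv: str = "=") -> dict:
    """Split an ad name into {tag: value}. Malformed segments are collected
    under "_qa_errors" so a missed character gets flagged in QA instead of
    silently dropping data."""
    tags: dict = {}
    for segment in name.split(sep):
        if kv not in segment:
            tags.setdefault("_qa_errors", []).append(segment)
            continue
        key, value = segment.split(kv, 1)
        tags[key] = value
    return tags

print(parse_ad_name(AD_NAME))
```

The same single-split logic works as a spreadsheet formula or a database ingest step, which is what makes the names "database-ready" with minimal cleaning.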

Evan Lee: And you might have to zoom in for folks just so we can see it a little bit.

Kevin Kovach: Oh yeah.

Evan Lee: A little more. Give me a couple more if you can. I'm sorry. Yeah, yeah, yeah. 175. Yeah, there we go. There we go. I like it.

Kevin Kovach: So a detailed naming system is filled with qualitative information that you couldn't otherwise just pull into a column and export. And that allows us to build these real-time reports that clients can access to get a sense of some of the more basic things, like what ad formats are working and where things are going in the funnel. I opened up a few tabs here so no one had to wait for Motion to load. I can't share the exact creative and give away the client, so this is a pseudonym, but pretty much across all of our accounts we'll have these four categories of reports that the client can always access and that are always updating: looking at prospecting audiences, retargeting audiences, and then on the creative side, breakdowns by value prop. Part of our naming system carries that value prop tag. We use the Facebook naming template to make it as easy and fast as possible, even though it looks very complicated and long, and with new buyers it can be a little overwhelming the first week. But once they get used to the naming template, it gets pretty fast and doesn't really add that much more time, and it saves an insane amount of time when reporting. We essentially don't have to spend much time making creative reports anymore; as long as we're on top of our naming, all of these automatically update. We just have to go in and QA, make sure there isn't a missed character somewhere or something off. Either way, if you can save 95% of your reporting time, that can open up a whole day. I've talked to plenty of buyers who lose days to reporting. So this just opens up...

Evan Lee: And the biggest thing there is the time. It's not like you get it back and do nothing; instead of manual grunt work putting together reports, you can shift to decision-making. Okay, now I can spend time interpreting the data to know what to do next instead of just putting the data together.

Kevin Kovach: Yeah, you have a chance to sit and think critically. And it really gives the brands and clients a lot to think about as well. These are not test campaigns right here; every single ad is tagged with a value prop tag. These are core campaigns, gladiator style: I'm not controlling the spend anymore, so what value props are driving the performance? And then we have our controlled theme tests down here. So in sales and promotions, we can break things out, you know, for Pierce and Pierce fine detergents, bonus points if you know what that's from, and break out the creative for each specific scent. Then there's the image style breakdown in addition to value props; we segment all of our creative by style. For images, that can be model UGC; product stylized, which is typically how we define any product shot that's been set up in a special way; product e-com shot, and we separately tag those general product e-com shots because sometimes they do surprisingly well. Like I said, just because something's ugly doesn't mean you shouldn't test it. And then just general graphics. This is something David also has access to and will be using for creative strategy. So in a way, this is where the media buyers feed back into creative. And it's not necessarily Facebook-specific; it could also just be us being on top of naming systems. The creative team has the flexibility to go in here whenever they want and come away with learnings.
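
Once the style tag is in every ad name, the image-style breakdown Kevin shows is just an aggregation over that tag. A minimal standard-library sketch with invented spend and revenue figures; the style labels mirror the ones he lists:

```python
from collections import defaultdict

# Hypothetical ad-level rows; "style" would come from the parsed ad name.
ads = [
    {"style": "model_ugc",        "spend": 500, "revenue": 1400},
    {"style": "product_stylized", "spend": 300, "revenue": 700},
    {"style": "product_ecom",     "spend": 150, "revenue": 600},  # plain shot
    {"style": "graphic",          "spend": 250, "revenue": 450},
    {"style": "product_ecom",     "spend": 100, "revenue": 350},
]

totals = defaultdict(lambda: {"spend": 0.0, "revenue": 0.0})
for ad in ads:
    totals[ad["style"]]["spend"] += ad["spend"]
    totals[ad["style"]]["revenue"] += ad["revenue"]

# Rank styles by ROAS (revenue / spend), best first.
for style, t in sorted(totals.items(),
                       key=lambda kv: -kv[1]["revenue"] / kv[1]["spend"]):
    print(f"{style:17s} ROAS {t['revenue'] / t['spend']:.2f}")
```

In this toy data, the plain product e-com shots come out on top, which is why they get their own tag instead of being lumped in with stylized shots.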

Evan Lee: Love it. One thing I have a follow-up on: all of these reports stem from the naming. Are there any examples you can share that showcase for our audience the specific things you're tracking, if it's consistent?

Kevin Kovach: Yeah. So the consistent part is going to be here: we have a style tag that runs through all of the image styles, and a video theme tag that runs through all the different video styles. David is actually in the process of combining those two tags into one, so we'll have an update here soon. With that update, I'm going to add a few more tags. One of those is the creative pillar, which is how the entire team is going to be thinking about and approaching their creative; as a reminder, those are education, emotion, and authority. So soon we'll have that ongoing breakdown here as well, along with value prop. And then honestly, we'll have some custom tags for each client. One of the things we like to do at onboarding is ask: what questions can we answer for you? Then we try to come up with a strategy to use Facebook as that focus group and give them the data that can help them make big, expensive decisions. Sometimes that requires a custom tag. On brands that do a lot of whitelisting or influencer work, I'll add a page tag so I can label which page the ad is actually coming from. But that's not something I need on most accounts, where 90% of the ads are just coming from the brand page.

Evan Lee: Love it. And a quick plug for anyone: you can see Kevin here using Motion. Feel free to check us out and book some time so we can talk about how to make this come to life for you all. Cool.

Kevin Kovach: Oh yeah, it's crazy, because you get so much depth with these reports, and they don't take that much time as long as you're on top of your naming so it has that depth to it.

Evan Lee: Love it. Okay. And yeah, with that, you know, we've had buyers take on more accounts, which uh positively impacts their income.

Kevin Kovach: Yeah.

Evan Lee: Hand in hand; everyone's goals being met. What I want to do here with our last 10 minutes is get through the bunch of questions that have started to pile up, so let's do our best. I'll kick off with the one that has the most votes, 15, and hopefully it's a nice and easy lob. This one here.

A question from Iain Harris appears on screen: "Can you define 'share IDs'?"

Evan Lee: Can you define share IDs? And I think you mentioned this pretty much immediately when we first talked.

Kevin Kovach: Yeah, so I think it's super important, and honestly, in a lot of cases it's even more important than structure. When you launch an ad, you're creating two IDs in the account. There's the ad ID, which is created every time you create an ad, but there's also an unpublished post made that's actually used to show the ad, and that's the content ID, or the post ID; those can be used interchangeably. You can share that content ID, and the content ID is where all of the social proof is stored, not the ad ID. That way you can have multiple ads all actually using the same ID, so if you're running them in 10 different audiences, all of the engagement is going to stack. That's how you see those posts on Facebook with a massive amount of social proof. I guarantee you, except in the most extreme situations, those are shared post IDs.
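
In Meta's Marketing API, this sharing works by pointing an ad creative at an existing post via the `object_story_id` field, formatted as `"<page_id>_<post_id>"`; every ad built on that creative accumulates engagement on the same post. A hedged sketch that only builds the request payload for `POST /act_<AD_ACCOUNT_ID>/adcreatives` (the IDs are placeholders, and no API call is made):

```python
def shared_post_creative(page_id: str, post_id: str, name: str) -> dict:
    """Payload for an ad creative that reuses one existing post, so social
    proof (likes, comments, shares) stacks across every ad built on it."""
    return {
        "name": name,
        # The shared "content ID" Kevin describes: "<page_id>_<post_id>".
        "object_story_id": f"{page_id}_{post_id}",
    }

# Ten audiences, one post: all ten ads stack engagement on the same post ID.
creatives = [
    shared_post_creative("123456789", "987654321", f"audience_{i}_shared_post")
    for i in range(10)
]
print({c["object_story_id"] for c in creatives})  # a single shared ID
```

The key point the payload makes concrete: the ad ID differs per ad, but the `object_story_id` is the one place engagement is stored.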

Evan Lee: Awesome. Everybody, I hope that one helps. If it doesn't, just throw it into the chat really quick so we can clarify. But I wanted to jump to the second most voted question and it comes from Justin here.

A question from Justin Regis appears on screen: "Quick ?? regarding 'statistical significance' - our agency agrees on a $1k spend minimum for an 1st run adset (usually with 4-6 ads). Does that match with what Kevin uses as a benchmark?"

Evan Lee: So, quick question regarding stat sig: our agency agrees on a $1k spend minimum for a first-run ad set, typically with four to six ads. Does that match a benchmark you might use to determine relevance? So $1k spend, is that stat sig for you?

Kevin Kovach: So I actually do one ad per ad set. I'm not in any way saying your way is wrong, because the most important thing is that you find a way to test, and how you do it is going to vary by account, by situation, by budget. But I want to make sure every single variant gets the same amount of spend, or else spend technically becomes a variable; if you're calling it a single-variable test of creative but spend is also a variable, it's no longer reliable. And honestly, for the traffic tests, $1k for a week is probably going to do it. Sometimes we can even get the same results at around $500, but we let it run for the full week just to be sure. On the purchase side, man, that varies so much. I have one account right now with around an $800 AOV; to test there with the purchase objective and actually get through the learning phase, that would be a considerable investment.
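
Because the one-ad-per-ad-set setup holds spend equal across variants, a standard two-proportion z-test is one reasonable way to check whether two creatives' conversion rates differ significantly. This is a generic stats sketch with made-up counts, not Kevin's exact procedure:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Return (z, two-sided p-value) for H0: the two rates are equal."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# e.g. 60/1200 clicks converting vs 35/1200 at identical spend
z, p = two_proportion_z(60, 1200, 35, 1200)
print(f"z={z:.2f}, p={p:.4f}")
```

If spend differed between the variants, the click counts `n_a` and `n_b` would no longer be comparable exposures, which is the "spend becomes a variable" problem Kevin is guarding against.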

Evan Lee: Cool. So we have some questions now about the testing. I think this goes hand in hand with what you're talking about here.

A question from Andres Amado appears on screen: "What methodology do you prefer to test new creative frameworks? A/B tests OR run all the creatives in the same ad group?"

Evan Lee: Andres asks, hopefully I've pronounced that correctly: what methodology do you prefer to test new creative frameworks, A/B tests, or running all the creatives in the same ad group? You mentioned one ad per ad set; maybe it's campaign budget optimization. What are you typically running there?

Kevin Kovach: Yeah. In the ideal situation, both. Um, you know, sometimes and performance may not allow both, but that's why I like having the traffic test because I know by doing this for a while, it's reliable and it's not going to be like an a huge opportunity cost, you know, regardless of like the overall budget of the account unless it's like a couple hundred dollars. Um, and then on the side, I do like to identify that algorithm winner, that gladiator style free for all. That can be a dynamic DCO test, um, depending on the amount of creative. More recently, I've been just, say if I have like 30 variants, just launch them in an advantage plus campaign. Um, but I still want that traffic campaign on the side, so I have the flexibility to turn off that purchase campaign and find that the opportunity cost is insane and not really um, great for performance.

Evan Lee: So people are definitely curious about the traffic campaign because Justin comes with another question of just like, so,

A question from Justin Regis appears on screen: "...so is the idea to use a Traffic test with new creative and THEN move those 'winners' over to a Conversion campaign?"

Kevin Kovach: Yeah, this is where David would have been really, you know, I had to convert him essentially once he started. Um, so before he started, I was comfortable running the traffic tests, um, just by themselves for like a year. Um, I knew he would have some questions, um, and he was a little skeptical. So I started running them side by side again on accounts that, you know, was didn't negatively impact performance and showed him, you know, three consecutive reports, look, traffic campaign and this purchase campaign identified the same three winners. Um, so that's how I got that buy in. The key is you have to introduce friction. So you can kind of filter out the bouncing. And that's why traffic campaigns aren't helpful for um, filling the funnel because a lot of them do bounce, but if you introduce friction and use say content views plus any sort of event, um, as your numerator in the uh conversion rate formula, you can actually get very direct results as to what happens in the conversion campaign. Um, 90% of the time, it'll identify like the top three ads. That order might change a little bit. Like I said earlier, I'm generating a playoff field here, not just like a single winner. So that same three is still going to be valuable to me, but the benefits of the traffic test is we get those results a lot faster. Sometimes a week, maybe a two two tops, three is the absolute max if we want to do that omni channel um, event um, aspect of that.

Evan Lee: That's so interesting because something I've also like selfishly been curious about is is Facebook being the place where you're able to to to access a large amount of people like you described it, it's just being like a more or less. Um, Facebook might tell you one thing, like I'm probably not giving a great example that'll lead into a question, but let's say you have a couple personas, one that's like Daisy Dukes and then sleepy Sally, whatever it might be, right? Daisy Dukes, we thought would be the best, but we see sleepy Sally doing really well on for example. But then is Daisy Dukes like, should we restructure everything around that or is it somebody we should go after on a Tik Tok? Or is it somebody we should go after on an email more? How do you, how do you uh, like navigate those type of insights?

Kevin Kovach: So similar to uh what we did with like that value prop breakdown earlier, I'm uh going to verify that there is no particular gender or demo that's throwing off these results when you're looking at them at the aggregated level. So in that earlier example, if we're just looking at gifting at the aggregated level, it would have been a clear and decisive winner. But when you actually separate it out, you see, okay, it's only actually a clear and decisive winner with 55 plus and that's where the algorithm wanted to go. Um, you know, the true winner is going to be the one that actually activates as many of those demos as possible. And that's the one I'm most interested in. So if as long as you know, there are no shenanigans, then yeah, I'll follow up and say, okay, this might be something that's more appropriate for or a different channel. Um, but I I try and be as agnostic as possible with Facebook. Like in the end, it doesn't matter what our opinion is, it's the algorithms. So I'll test something that no one likes. I don't care. Um, just in case, you never know. Ugly ads do really well and sometimes you can just keep getting uglier and it still does really well. Um, and then just adds to that diversity too because if you have like ugly ads and then with your videos, like I understand from the branding perspective that concerns people, but on the algorithmic perspective, that increases your reach so much. Um, because Facebook knows the type of content that people historically interact with and is more inclined to serve that to them. Which is why having so many um, is often a key element to scaling.

Evan Lee: I said, you know, it's easier said than done. You have to test into it. And if you follow the Pareto Pareto principle, you can assume that 80% of creative is probably going to fail. Um, but at least you can learn from it.

Kevin Kovach: Yeah, yeah, yeah.

Evan Lee: And I think you like have a beginner's mind to it because it's just like, I don't know anything. I don't think like who knows if this works, but throw it in and we'll see what happens at the end of the day, right?

Kevin Kovach: Yeah. Like if we're testing responsibly, making sure it doesn't disrupt momentum in core campaigns or it's not getting like a crazy amount of um, budget so the opportunity cost of this test is super high. Yeah, go for it. Um, this also I think is a way to beneficially lead the team because I really don't shoot down ideas. Um, you know, it's a way of also keeping myself in check because I think it's easy for buyers no matter what seniority level to fall into patterns or to like become stubborn about something. Um, you know, even I I fall victim to that. So just having the freedom to test what they want, when they want with approval from the client, obviously, as long as we test responsibly, you know, they know I'm not going to like come down on them because I just disagree with it. Um, but those moments actually, you know, they do check me. Um, I've seen a manual carousel do well recently. I had totally absolutely written those off for like a year. Um, so yeah, those moments are cool.

Evan Lee: Love it. Uh, everybody, I have one last question for Kevin on my end. Um, I need to ask it, but just as a quick reminder, please keep up voting and getting your questions into the Q&A tab. Going to be jumping there right after this one. And one last thing, I saw that Talia had mentioned in the chat, like are we supposed to be seeing that same slide? Yes, uh, we did have some stuff to share, but again, running into some technical difficulties. So we're making best as we can just on this one. But Kev, last question that I have for you, I would be ashamed if I did not ask because whenever anyone talks about naming conventions, I say I have a guy and that's where I start to point to. So for context, everybody, uh, Kevin is like a wiz when it comes to naming conventions. So something I'm curious about now is like we've worked on all of this great creative that's now produced and like you'd mentioned, you have a specific way you test into the account. So I'm curious about is like how do you set up naming conventions and how does that then correlate into your analysis after those are live?

Kevin Kovach: Yeah, for sure. So before we met, um, and figured out and I learned about motion, um, you know, my background is in like statistics and data engineering. So I was thinking of and concepting out a naming system that would be very spreadsheet and formula friendly because I want to be able to into a database as fast as possible and reliably as possible, minimize the QA. If there are any like data scientists in the chat, like they'll know about 90% of the time you spend doing data science is actually getting the into a usable format. And having a, you know, expansive and reliable naming system just saves so much time. And then also allows us to provide like insane depth to our reports and have them be like real time in motion. Um, so you say we didn't have anything to share, but I can share my screen. Hey, let's do it. I can walk through what uh David prepared. We'll give me a minute here.

Evan Lee: And you might have to zoom in for folks just so we can see it a little bit.

Kevin Kovach: Oh yeah.

Evan Lee: A little more. Give me a couple more if you can. I'm sorry. Yeah, yeah, yeah. 175. Yeah, there we go. There we go. I like it.

Kevin Kovach: So with a detailed naming system, it's filled with qualitative information that we can't like just simply pull into a column and export. And that allows us to build these real-time reports that clients can have access to and get a sense of, you know, some of the more basic things like, you know, what ad formats are working, you know, here I opened up a few tabs because uh so no one had to wait for motion to load where things are going in the funnel. Um, I did I can't um, share the exact creative and give away the client. Um, this is a pseudonym, but pretty much across all of our we will have these four categories. reports that the client can always access and go into that will be always updating, looking at prospecting audiences, retargeting audiences, but on the creative side, by um, value prop. So part of our naming system is going to have that value prop tag. And we use the uh Facebook naming template to make it as easy and fast as possible even though it, you know, looks very, very complicated and long and, you know, sometimes with new buyers a little overwhelming the first week. But once they get used to using the naming template and this gets pretty fast and doesn't really add that much more time, but it saves an insane amount of time when reporting. So we essentially don't have to spend that much time making creative reports anymore. As long as we're on top of our all of these are automatically going to update. Um, and we just kind of have to go in and and QA, make sure we didn't there isn't like a missed character somewhere or something off. But either way that, you know, if you can save like 95% of your reporting time, you know, that can open up a whole day. Like I've talked to plenty of buyers who lose days to reporting. Um, so this just opens up

Evan Lee: And the biggest thing there is just like losing time, losing time, it's not like you just get it back and do nothing. It's like instead of just manual like grunt work and putting together reports, you can now shift to decision making at the end of the day. It's like, okay, now I can spend time interpreting the data to know what to do next instead of just putting the data together ultimately.

Kevin Kovach: Yeah, you have a chance to sit and think critically. And it gives the brands and clients a lot to think about as well. These are not test campaigns right here — every single ad is tagged with a value prop tag, so these are core campaigns, gladiator style. I'm not controlling the spend anymore: what value props are driving the performance? And then we also have our controlled theme tests down here. So in sales and promotions we can break things out — you know, for Pierce and Pierce fine detergents, bonus points if you know what that's from — and break out the creative for each specific scent. There's an image style breakdown in addition to value props; we segment all of our creative by style. So for images, it can be model UGC, product stylized — that's typically how we define any sort of product shot that's been set up in a special way — and product e-com shots, which we tag separately, because sometimes they do surprisingly well. Like I said, just because something's ugly doesn't mean you shouldn't test it. And then just general graphics. This is something David also has access to and uses for creative strategy. So in a way, this is where the media buyers contribute back to creative — not necessarily through Facebook itself, but just by being on top of naming systems. The creative team has the flexibility to go in here whenever they want and come away with learnings.

Evan Lee: Love it. One thing I have a follow-up on: all of these reports stem from the naming. Are there any examples you can share that showcase the specific things you're tracking, if it's consistent?

Kevin Kovach: Yeah. So the consistent part is going to be here. We have a style tag that runs through all of the image styles, and we also have a video theme tag that runs through all the different video styles. David is actually in the process of combining those two tags into one, so we'll have an update here soon. And with that update, I'm going to add a few more tags. One of those is going to be the creative pillar, which is how the entire team is going to be thinking about and approaching their creative — as a reminder, those are education, emotion, and authority. So soon we'll have that ongoing breakdown here as well, alongside value prop. And then honestly, we'll have some custom tags for each client. One of the things we like to do during onboarding is ask: what questions can we answer for you? Then we try to come up with a strategy to use Facebook as that focus group and give them the data that can help them make big, expensive decisions. Sometimes that requires a custom tag. On brands that do a lot of whitelisting or influencer work, I'll add a page tag so I can label what actual page the ad is coming from. But that's not something I need on most accounts if 90% of the ads are just coming from the brand page.
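To make the tagging concrete, here is a minimal sketch of how a delimiter-separated ad name like the one Kovach describes splits cleanly into spreadsheet columns. The tag order, separator, and example name are hypothetical — not ATTN's actual template:

```python
# Fixed-order, delimiter-separated tags: one tag per reporting dimension.
# Field names and the example name below are illustrative assumptions.
FIELDS = ["date", "format", "style", "value_prop", "pillar", "page"]

def parse_ad_name(name: str, sep: str = "_") -> dict:
    """Split an ad name into its tagged fields; fail loudly on a malformed name."""
    parts = name.split(sep)
    if len(parts) != len(FIELDS):
        raise ValueError(f"expected {len(FIELDS)} tags, got {len(parts)}: {name!r}")
    return dict(zip(FIELDS, parts))

row = parse_ad_name("230801_video_ugc_gifting_emotion_brandpage")
print(row["value_prop"], row["pillar"])  # gifting emotion
```

A strict field count is what makes the QA step Kovach mentions (catching a "missed character somewhere") a one-line validation instead of a manual scan.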

Evan Lee: Love it. And a quick plug for anyone watching: you can see Kevin here using Motion. Feel free to check us out and book some time so we can talk about how to make this come to life for you all. Cool.

Kevin Kovach: Oh yeah, it's crazy. You just get so much depth with these reports, and they don't take that much time as long as you're on top of your naming — that's what gives it that depth.

Evan Lee: Love it. Okay. And yeah, with that, we've had buyers take on more accounts, which positively impacts their income.

Kevin Kovach: Yeah.

Evan Lee: Hand in hand — business goals and everyone's goals being met. What I want to do here with our last 10 minutes is get through the questions that have started to pile up. Let's do our best. I'll kick off with the one that has the most votes — 15 — and hopefully it's a nice and easy lob. It's this one here.

A question from Iain Harris appears on screen: "Can you define 'share IDs'?"

Evan Lee: Can you define share IDs? I think you mentioned this pretty much immediately when we first talked.

Kevin Kovach: Yeah. I think it's super important — honestly, in a lot of cases it's even more important than structure. So when you launch an ad, you're actually creating two IDs in the account. There's the ad ID, which is created every time you create an ad, but there's also an unpublished post made that's actually used to show the ad — that's the content ID, or the post ID; those terms can be used interchangeably. You can share that content ID, and the content ID is where all of the social proof is stored, not the ad ID. That way you can have multiple ads that are all actually using the same ID, so if you're running them in like 10 different audiences, all of the engagement is going to stack. That's how you see those posts on Facebook with a massive amount of social proof. I guarantee you, except in the most extreme situations, those are shared post IDs.
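At the API level, the mechanic Kovach describes corresponds to pointing each new ad creative at one existing post instead of letting every ad mint a fresh unpublished post. A rough sketch, assuming Meta Marketing API conventions (the `object_story_id` field takes a page-scoped post ID in `<page_id>_<post_id>` form); the helper, IDs, and name are hypothetical:

```python
# Build the payload for POST /act_<account_id>/adcreatives that reuses one
# existing post, so engagement from every ad stacks on the same post ID.
# Field name per Meta's Marketing API docs; IDs are placeholders.
def shared_post_creative(page_id: str, post_id: str, name: str) -> dict:
    return {
        "name": name,
        # One shared post ID across many ads = stacked social proof.
        "object_story_id": f"{page_id}_{post_id}",
    }

payload = shared_post_creative("111111111", "222222222", "Gifting UGC v3")
print(payload["object_story_id"])  # 111111111_222222222
```

The same creative ID can then be attached to ads in many ad sets, which is why likes and comments accumulate on a single post rather than splintering per audience.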

Evan Lee: Awesome. Everybody, I hope that one helps. If it doesn't, just throw it into the chat really quick so we can clarify. But I wanted to jump to the second most voted question and it comes from Justin here.

A question from Justin Regis appears on screen: "Quick ?? regarding 'statistical significance' - our agency agrees on a $1k spend minimum for an 1st run adset (usually with 4-6 ads). Does that match with what Kevin uses as a benchmark?"

Evan Lee: So, a quick question regarding stat sig: our agency agrees on a 1k spend minimum for the first ad set run, typically with four to six ads. Does that match a benchmark you might use to determine relevance? So 1k spend — is that stat sig for you?

Kevin Kovach: So I actually do one ad per ad set. And I'm not in any way saying your approach is wrong, because the most important thing is that you find a way to test, and how you do it is going to vary by account, by situation, by budget. But I want to make sure every single variant gets the same amount of spend, or else spend technically becomes a variable — and if you're calling it a single-variable test where you're only trying to test creative, but spend is also a variable, it's no longer going to be reliable. And honestly, for the traffic tests, 1k for a week is probably going to do it. Sometimes we can even get the same results at like $500, but let it run for the full week just to be sure. On the purchase side, man, that varies so much. I have one account right now with like an $800 AOV — to test with the purchase objective there and actually get through the learning phase, that would be a considerable investment.
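The equal-spend, single-variable setup Kovach insists on is what makes a simple significance check valid: with spend held constant, differences in conversion rate can be tested directly. A sketch with invented numbers — a two-proportion z-test comparing two variants that received identical spend and click volume:

```python
# Hypothetical traffic-test results: one ad per ad set, equal spend per
# variant, so spend isn't a hidden variable. All figures are made up.
from math import erf, sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Normal-tail p-value via the error function.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Variant A: 120 conversions / 2,000 clicks; Variant B: 80 / 2,000.
z, p = two_proportion_z(120, 2000, 80, 2000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 → treat A as a real winner
```

The point is that if one variant had received double the spend, the click counts would diverge and this comparison would no longer isolate creative as the only variable.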

Evan Lee: Cool. So we have some questions now about the testing. I think this goes hand in hand with what you're talking about here.

A question from Andres Amado appears on screen: "What methodology do you prefer to test new creative frameworks? A/B tests OR run all the creatives in the same ad group?"

Evan Lee: Andres asks — hopefully I've pronounced that correctly — what methodology do you prefer to test new creative frameworks? A/B tests, or running all the creatives in the same ad group? You mentioned one ad per ad set — maybe with budget optimization? Is that what you're typically running?

Kevin Kovach: Yeah — in the ideal situation, both. Sometimes performance may not allow both, but that's why I like having the traffic test: I know from doing this for a while that it's reliable, and it's not going to be a huge opportunity cost regardless of the overall budget of the account, unless that budget is like a couple hundred dollars. And then on the side, I do like to identify that algorithm winner — that gladiator-style free-for-all. That can be a dynamic creative (DCO) test, depending on the amount of creative. More recently — say I have 30 variants — I'll just launch them in an Advantage+ campaign. But I still want that traffic campaign on the side, so I have the flexibility to turn off that purchase campaign if the opportunity cost gets insane and it's not great for performance.

Evan Lee: So people are definitely curious about the traffic campaign, because Justin comes in with another question:

A question from Justin Regis appears on screen: "...so is the idea to use a Traffic test with new creative and THEN move those 'winners' over to a Conversion campaign?"

Kevin Kovach: Yeah — this is where David would have pushed back; I had to convert him, essentially, once he started. Before he joined, I was comfortable running the traffic tests just by themselves for like a year. I knew he would have some questions, and he was a little skeptical. So I started running them side by side — again, on accounts where it didn't negatively impact performance — and showed him three consecutive reports: look, the traffic campaign and this purchase campaign identified the same three winners. That's how I got the buy-in. The key is you have to introduce friction, so you can filter out the bouncers. That's why traffic campaigns aren't helpful for filling the funnel — a lot of that traffic does bounce — but if you introduce friction and use, say, content views plus any sort of deeper event as your numerator in the conversion rate formula, you can actually get results that map very directly onto what happens in the conversion campaign. 90% of the time, it'll identify the same top three ads. The order might change a little bit — like I said earlier, I'm generating a playoff field here, not just a single winner — so that same three is still going to be valuable to me. And the benefit of the traffic test is we get those results a lot faster: sometimes a week, maybe two tops, three is the absolute max if we want to do that omnichannel aspect of it.
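The friction idea can be sketched as a ranking metric: divide content views plus a deeper event by clicks, then keep the top three as the playoff field rather than crowning a single winner. All ad names and figures below are invented:

```python
# Hypothetical traffic-test readout: equal clicks per ad, with the
# friction-adjusted numerator (content views + a deeper event) filtering
# out bounce traffic that raw clicks would count.
ads = {
    "ad_01": {"clicks": 1500, "content_views": 900,  "events": 60},
    "ad_02": {"clicks": 1500, "content_views": 600,  "events": 25},
    "ad_03": {"clicks": 1500, "content_views": 1100, "events": 90},
    "ad_04": {"clicks": 1500, "content_views": 400,  "events": 10},
}

def friction_rate(stats: dict) -> float:
    """Conversion-rate proxy with friction in the numerator."""
    return (stats["content_views"] + stats["events"]) / stats["clicks"]

# Build the "playoff field": the top three, not a single winner.
ranked = sorted(ads, key=lambda name: friction_rate(ads[name]), reverse=True)
playoff_field = ranked[:3]
print(playoff_field)  # ['ad_03', 'ad_01', 'ad_02']
```

Keeping three survivors rather than one matches his observation that the purchase campaign usually confirms the same top three, even if their internal order shuffles.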

Evan Lee: That's so interesting, because something I've also selfishly been curious about is Facebook being the place where you can access a large number of people — like you described, using it as a focus group, more or less. I'm probably not giving a great example, but say you have a couple of personas — one that's like Daisy Dukes and one that's Sleepy Sally, whatever it might be. We thought Daisy Dukes would be the best, but we see Sleepy Sally doing really well on Facebook, for example. Should we restructure everything around that? Or is she somebody we should go after on TikTok, or lean on more in email? How do you navigate those kinds of insights?

Kevin Kovach: So, similar to what we did with that value prop breakdown earlier, I'm going to verify that there's no particular gender or demo throwing off the results when you look at them at the aggregated level. In that earlier example, if we're just looking at gifting at the aggregated level, it would have been a clear and decisive winner. But when you actually separate it out, you see, okay — it's only a clear and decisive winner with 55-plus, and that's where the algorithm wanted to go. The true winner is going to be the one that activates as many of those demos as possible, and that's the one I'm most interested in. So as long as there are no shenanigans, then yeah, I'll follow up and say, okay, this might be something that's more appropriate for a different channel. But I try to be as agnostic as possible with Facebook — in the end, it doesn't matter what our opinion is; it's the algorithm's. I'll test something that no one likes; I don't care. Just in case — you never know. Ugly ads do really well, and sometimes you can just keep getting uglier and it still does really well. And that adds to your diversity, too. If you have ugly ads alongside your polished videos — I understand that concerns people from a branding perspective, but from an algorithmic perspective, it increases your reach so much, because Facebook knows the type of content people historically interact with and is more inclined to serve that to them. Which is why having so much variety is often a key element of scaling.
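The demo sanity check he describes — an aggregate winner that actually only wins one age bracket — can be sketched like this, with hypothetical conversion counts and equal impressions assumed per cell so raw counts compare directly:

```python
# value prop -> conversions per age bracket (equal impressions per cell
# assumed, so raw counts are comparable). Figures invented for illustration.
conversions = {
    "gifting":   {"18-34": 10, "35-54": 12, "55+": 80},
    "self-care": {"18-34": 30, "35-54": 28, "55+": 25},
}
demos = ["18-34", "35-54", "55+"]

# Aggregate view: gifting totals 102 vs self-care's 83, so it "wins"...
aggregate_winner = max(conversions, key=lambda p: sum(conversions[p].values()))

# ...but broken out per demo, gifting only carries 55+; self-care wins
# the other two brackets, i.e. it "activates" more demos.
per_demo_winner = {
    d: max(conversions, key=lambda p: conversions[p][d]) for d in demos
}
print(aggregate_winner)  # gifting
print(per_demo_winner)   # {'18-34': 'self-care', '35-54': 'self-care', '55+': 'gifting'}
```

By Kovach's criterion, self-care is the true winner here despite losing the aggregate, because it performs across the most demos rather than riding one bracket the algorithm happened to favor.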

Evan Lee: It's easier said than done, you know — you have to test into it. And if you follow the Pareto principle, you can assume that 80% of creative is probably going to fail. But at least you can learn from it.

Kevin Kovach: Yeah, yeah, yeah.

Evan Lee: And I think you have a beginner's mind about it. It's like: I don't know anything, who knows if this works — but throw it in and we'll see what happens at the end of the day, right?

Kevin Kovach: Yeah. If we're testing responsibly — making sure it doesn't disrupt momentum in core campaigns and isn't getting such a crazy amount of budget that the opportunity cost of the test is super high — then yeah, go for it. I also think this is a beneficial way to lead the team, because I really don't shoot down ideas. It's a way of keeping myself in check, too, because I think it's easy for buyers at any seniority level to fall into patterns or become stubborn about something. Even I fall victim to that. So the team has the freedom to test what they want, when they want — with approval from the client, obviously, and as long as we test responsibly — and they know I'm not going to come down on them just because I disagree. But those moments really do check me. I've seen a manual carousel do well recently, and I had totally written those off for like a year. So yeah, those moments are cool.

Evan Lee: Love it. Everybody, I have one last question for Kevin on my end. I need to ask it — but just as a quick reminder, please keep upvoting and getting your questions into the Q&A tab; we're going to be jumping there right after this one. And one last thing: I saw Talia mentioned in the chat, are we supposed to be seeing the same slide? Yes — we did have some stuff to share, but we're running into some technical difficulties again, so we're making the best of it. But Kev, the last question I have for you — I'd be ashamed if I didn't ask, because whenever anyone talks about naming conventions, I say, "I have a guy," and that's who I point to. For context, everybody: Kevin is a whiz when it comes to naming conventions. So what I'm curious about is, we've worked on all of this great creative that's now produced, and like you mentioned, you have a specific way you test in the account. How do you set up naming conventions, and how does that then carry into your analysis once those ads are live?

Kevin Kovach: Yeah, for sure. So before we met and I learned about Motion — my background is in statistics and data engineering, so I was concepting out a naming system that would be very spreadsheet- and formula-friendly, because I want to be able to get the data into a database as fast and as reliably as possible and minimize the QA. If there are any data scientists in the chat, they'll know that about 90% of the time you spend doing data science is actually getting the data into a usable format. Having an expansive and reliable naming system just saves so much time, and it also allows us to provide insane depth in our reports and have them be real time in Motion. So, you said we didn't have anything to share, but I can share my screen — hey, let's do it — and walk through what David prepared. Give me a minute here.

Evan Lee: And you might have to zoom in for folks just so we can see it a little bit.

Kevin Kovach: Oh yeah.

Evan Lee: A little more. Give me a couple more if you can. I'm sorry. Yeah, yeah, yeah. 175. Yeah, there we go. There we go. I like it.

Kevin Kovach: So a detailed naming system is filled with qualitative information that we can simply pull into a column and export. That allows us to build these real-time reports that clients have access to, to get a sense of some of the more basic things — what ad formats are working, where things are going in the funnel. (I opened a few tabs up front so no one has to wait for Motion to load.) I can't share the exact creative and give away the client — this is a pseudonym — but pretty much across all of our accounts, we'll have these four categories of reports that the client can always access and that are always updating: prospecting audiences, retargeting audiences, and then on the creative side, breakdowns by value prop. Part of our naming system carries that value prop tag, and we use the Facebook naming template to make it as easy and fast as possible — even though it looks very complicated and long, and it can be a little overwhelming for new buyers in the first week. Once they get used to the naming template, it gets pretty fast and doesn't really add much time, but it saves an insane amount of time when reporting. We essentially don't have to spend much time making creative reports anymore: as long as we're on top of our naming, all of these automatically update. We just have to go in and QA — make sure there isn't a missed character somewhere or something off. Either way, if you can save like 95% of your reporting time, that can open up a whole day. I've talked to plenty of buyers who lose days to reporting. So this just opens up —

Evan Lee: And the biggest thing there is just like losing time, losing time,