Evan Lee: David hopefully will be joining us, but we are here today to chat all about how to build, scale and test an unstoppable creative library.
Slide with the title "Sprints with Evan" and the main topic "How to build, scale and test an unstoppable creative library". The logos for Motion and ATTN are displayed. The screen is a four-way split screen showing video feeds of Kevin Kovach (top left) and Evan Lee (bottom left), and headshots of Kevin Kovach (top right) and David Adesman (bottom right).
Evan Lee: But before we get into the meat and potatoes, I like to kick off with what we have going on today. And what that means is that the first thing to talk about is what we do at Motion.
Slide titled "Creative analytics and reporting" with the sub-heading "The Creative Strategist's Hub". On the right is a screenshot of the Motion platform dashboard titled "Last Week's Top Creative", showing various ad creatives with their performance metrics.
Evan Lee: and we love putting these events on for the community of creative strategists, because we are the hub for creative strategy. And what does that mean? It ultimately means that creative has become the most important lever for success in all of paid advertising.
Slide titled "Creative has become mission critical for all teams". A bulleted list on the left reads: "Increased competition", "Creator economy", "Age of TikTok", "iOS 14.5". On the right are two mock-ups of online articles with headlines: "Using Creative Strategies To Win at Facebook Ads in 2022" and "Why ad creative is more important than ever".
Evan Lee: But what we also know is that there are media buying teams and there are creative teams.
Slide titled "Performance teams work with data, creatives work visually". An illustration of a brain is shown, with the left hemisphere labeled "Creative" and the right hemisphere labeled "Analytical".
Evan Lee: They need to be working together like this, but naturally there's almost some distance created between the two. So where Motion comes into play, and creative strategy comes into play, is that we really look to bridge that gap between both sides of that brain.
Slide titled "Creative Strategy is the bridge". A flow diagram shows a box labeled "Clients & Creative teams" and a box labeled "Performance marketing teams" connected by a double-sided arrow. Above the arrow is a box labeled "Creative strategy workflow".
Evan Lee: And the way that Motion makes this come to life is that, of course, there are steps that we'll follow, but more importantly, we make it easy to analyze,
Slide with the word "Analyze" and the sub-heading "Identify key drivers of creative performance". A screenshot of the Motion platform shows a "Compare Creative Groups" feature. Groups like "UGC" and "Unboxing" are listed, and a search bar is highlighted with "Stu" typed in, suggesting "Studio".
Evan Lee: visualize,
Slide with the word "Visualize" and the sub-heading "Translate insights into visual reports". It shows mock-ups of bar charts and video performance reports from the Motion platform.
Evan Lee: and then share these insights across the board, as I'm sure Kevin is going to be getting into some more.
Slide with the word "Share" and the sub-heading "Point your team in the right creative direction". It shows a mock-up of a comment being added to a creative asset in the Motion platform. The comment reads: "The lifestyle shot worked best! Let's double down on these."
Evan Lee: So that's what we got planned for today. A couple housekeeping pieces to note here.
Slide titled "Housekeeping" with three purple boxes. Box 1 is titled "Questions" with the text "Share questions and answers in the chat!". Box 2 is titled "Recording" with the text "Event is being recorded and will be made available after the event.". Box 3 is titled "We're Hiring" with the text "Apply or refer!".
Evan Lee: So first and foremost, questions. If you have any, you'll notice that in the right-hand panel there's a number of tabs, one of them being Q&A. Please, please, please put your questions into that Q&A tab, and feel free to upvote the ones resonating with you. The second thing that I always like to call out is recording-wise: hey, the recording's going to be made available. If you've got team members, send it across the board; we definitely want them to be involved. And the last thing I'll note is a little bit of a Motion plug. If anybody here is interested, or knows of anybody who might be looking for new roles, we have a bunch that are actively available. I made my own version of an ad related to hiring that I put into the chat, so check it out. If you're interested, apply. We're excited to grow the team.
Slide with the title "Sprints with Evan" and the main topic "How to build, scale and test an unstoppable creative library". The logos for Motion and ATTN are displayed. The screen is a four-way split screen showing video feeds of Kevin Kovach (top left) and Evan Lee (bottom left), and headshots of Kevin Kovach (top right) and David Adesman (bottom right).
Evan Lee: Cool. So without further ado, wanted to get into how to build, scale and test an unstoppable creative library. And I've said that David's going to be joining us, but I want to talk about Kevin a little bit.
Slide with headshots of David Adesman and Kevin Kovach. David Adesman is listed as "Executive Vice President of Creative Services" with links to attnagency.com and his LinkedIn. Kevin Kovach is listed as "Director of Paid Social" with links to attnagency.com and his LinkedIn.
Evan Lee: So Kevin mentioned that this is his first time doing a live event with everybody, but I will say that I've known Kevin, I want to say, probably more than a year now, surprisingly, right?
Kevin Kovach: Yeah, it's been, yeah, it's been a while. It's been fun. Um, I like to think at least amongst my friends, we were kind of an early adopter of Motion. And you just came at the perfect time as I was developing out a naming system that just really fit in nicely with the features you were rolling out.
Evan Lee: And Kevin is someone that I really respect for a number of different reasons. When it comes to media buying chops, he has 12 years of experience in the game, but more importantly, he's just a mad scientist when it comes to naming conventions. And he's consistently pumping out some great content on his LinkedIn. So if you aren't already, please jump onto LinkedIn and give Kevin a follow. He's going to continue giving us all the greatest information in the world. So that's what we got going on, okay?
Slide titled "What is Creative Strategy?". A circular flow diagram shows the steps: Research, Ideation, Briefing, Content Creation, Evaluation, Launch, Creative Analysis, and back to Research.
Evan Lee: Awesome. So now what I wanted to do here is center our conversation around the creative strategy flywheel. For anyone who's actually been to one of our events before, we talk about the creative strategy flywheel quite a bit. What this represents is a number of steps that you can follow to ultimately make an unstoppable creative library possible. So Kevin, and hopefully David when he joins us, is going to walk us through step by step how we can start to make this come to life in your worlds. Kevin, I'd like to start it off, honestly, nice and easy. When we're talking research and building out our personas, is there any light that you can shed on where you ultimately get started in this process with your clients?
Kevin Kovach: Yeah, absolutely. So this is where David and I will really work as a team. You know, he has more of that creative background; he ran a creative team at MuteSix a while back and is doing so now at ATTN. So he's doing all of the qualitative analysis, kind of like the traditional research. He's going to be working on segmenting all of the creative into the three creative pillars we look at: emotion, education and authority. If it's videos, he'll tag themes and do all sorts of that qualitative analysis, and also do social listening. On my end, I'm going to be doing more of a creative execution audit, making sure that as David is going through all of these qualitative results on his end, there were no shenanigans that may have prevented ads or creative from being able to shine. Things like forgetting to share a post ID, that's huge; it can make or break an ad at times, depending on the situation. So I provide that sort of context to make sure that the execution didn't prevent anything from succeeding.
Evan Lee: So that makes a ton of sense, and I do want to unpack the shenanigans bit a little more. For everyone's context, Kevin is hands-on-keyboard running those ads; he lives in ad accounts. And we've talked a lot about, and I think there's been a lot of chatter in our community about, how the media buyer role is evolving, and it's like you need to be a creative strategist or die. But Kevin is somebody who demonstrates day in, day out that there is definitely still a media buying side of things. So Kev, I'm interested in learning more about the shenanigans piece and understanding what the initial analysis looks like. Talk me through: when you inherit an account, what are you actually looking for to form the creative strategy in terms of audiences, understanding the algorithm, and pieces along those lines?
Kevin Kovach: Yeah, for sure. So there are a lot of different things you can look at that may have prevented an ad from succeeding. One thing I've been calling out in audits for years is the testing process and how teams actually test new creative. Sometimes it's what I call a shot-on-goal test, where they just rotate new creative into existing campaigns, which, to be clear, is still a test. But a lot of times that puts things in an unfair situation, because you could be competing against an ad with, I've seen this recently, like 30,000 likes, comments and shares. That's going to be really hard to beat with any new ad, no matter how good it is. So not having a clean testing procedure to actually force spend to these new variations is a problem, because there's also the aspect that clients invest a lot in creative. Launching something and then a week or two later saying, oh, it spent 100 bucks: no one really wants to hear that for something they've invested in. You've got to take multiple shots on goal. I also like to call out the types of tests that you can run. There are the tests that get you algorithm winners, where it's kind of a gladiator free-for-all: you just put all of the ads in an ad set or campaign, or say a DCO ad set, and may the best ad win. That's important to know, but there's also the aspect of trying to extract themes. And in order to extract themes when you're testing, you need to test as many variants as possible at once, to reliably extract those themes and get statistical significance. So that's how I build my playoff field versus the average test. Then we'll have the themes that we extract and turn into a report, and on my end, or the buyer's end, we have our winners too that we can use in the account. But the whole point is to extract more value from Facebook beyond just raw performance.
Because every single agency in the world is going to be promising good performance, and every agency is going to be talking about testing. But how they actually test shows how deep they get with that process. Because if you don't have volume, the themes you extract are not going to be reliable.
Evan Lee: Most definitely.
Evan Lee: So what I'm curious about now, Kev: we've talked about being able to leverage data from the past to inform what happens in the future, and even beyond just paid social, Facebook being a focus group is such a good way to put it. Something that I'm curious about is this briefing stage, because ultimately there's going to be an asset that's produced, and before that happens, everyone's typically aligned. So what should, in your opinion, be a media buyer's contribution to what the brief includes?
Kevin Kovach: Yeah, that's a good question. With us, it's going to be a lot of those test results and history, because we're doing this consistently across accounts. We can aggregate, if we want to, across a particular industry, so we can have that typical best guess going into it; if we've run those tests, we can relay official recommendations to the team. And then the other aspect, for David too, is just general things that the algorithm likes, and making sure that we're always aiming for diversity with pretty much everything creative. You want as many ad formats working as possible, as many value props working as possible, just as many of anything working as possible. Because as you know, Facebook ebbs and flows; things are eventually going to slow down. If you have more formats working, you'll have things pick up the slack and reduce those moments of volatility where you're just hammering away trying to replace what was providing the volume two or three months ago.
Evan Lee: And in this instance, you would reference, hey, this is probably something David handles more. So just to hone in on the roles and who's typically responsible for what.
Kevin Kovach: I don't want to speak for him; this is probably one of those moments where he would have corrected some of the vernacular I'm using or something like that. But I really think the media buyer should be involved in the process, and the biggest part for us is just making sure that execution is going to be super clean with this creative and it's being set up for success. If it's a particularly heavy investment on the client's side, we're going to guarantee that it gets spend. I definitely learned this the hard way, going into a weekly call a few times with this big, exciting video that had only gotten like $100 of spend, and there just wasn't anything to talk about. So the whole idea is to find a way to force spend so you can test as many elements as possible, extract those themes, and send them to everybody. Clients can use it; in the most exciting cases they've actually totally pivoted their messaging. The added benefit of this low-budget traffic system is that it's low risk and, like I said, low budget. So occasionally you can get people to say yes to things they wouldn't have said yes to otherwise: more instances of drawing outside the brand lines, and identifying new things that they hadn't considered before or just assumed weren't for them. A prominent example for us is a luxury accessories brand where we ran a value prop test and they bought in; we included the core value props and also the value props that they didn't think were theirs. It turns out the core value prop that they had originally built the brand and website around, durability, finished third to last out of 10. Every way you measured it, it was not good: click-through rate, conversion rate, pretty much everything. And one that they had not really considered, and hadn't really set up their photography for, was organization.

So that was pretty exciting. It really made them stop and think, for a few weeks even. Then they came back to us and started pivoting the whole brand in a way: creative focused on organization, landing pages featuring more photos of those organizational features, and durability pushed a little further down the page, because there are still people interested in it, just fewer than the people interested in organization.
Evan Lee: That's so interesting, because something I've also selfishly been curious about is Facebook being the place where you're able to access a large number of people, like you described, more or less a focus group. Facebook might tell you one thing; I'm probably not giving a great example that'll lead into a question, but let's say you have a couple of personas, one that's like Daisy Dukes and then Sleepy Sally, whatever it might be, right? Daisy Dukes, we thought, would be the best, but we see Sleepy Sally doing really well on Facebook, for example. So should we restructure everything around her, or is she somebody we should go after on TikTok, or somebody we should go after in email more? How do you navigate those types of insights?
Kevin Kovach: So similar to what we did with that value prop breakdown earlier, I'm going to verify that there is no particular gender or demo that's throwing off these results when you're looking at them at the aggregated level. In an earlier example, if we were just looking at gifting at the aggregated level, it would have been a clear and decisive winner. But when you actually separate it out, you see, okay, it's only actually a clear and decisive winner with 55-plus, and that's where the algorithm wanted to go. The true winner is going to be the one that actually activates as many of those demos as possible, and that's the one I'm most interested in. So as long as there are no shenanigans, then yeah, I'll follow up and say, okay, this might be something that's more appropriate for a different channel. But I try to be as agnostic as possible with Facebook. In the end, it doesn't matter what our opinion is; it's the algorithm's that matters. So I'll test something that no one likes, I don't care, just in case. You never know: ugly ads do really well, and sometimes you can just keep getting uglier and it still does really well. And it adds to that diversity too, because if you have ugly ads alongside your polished videos, I understand from the branding perspective that concerns people, but from the algorithmic perspective, that increases your reach so much. Facebook knows the type of content that people historically interact with and is more inclined to serve that to them. Which is why having so many formats working is often a key element to scaling.
Evan Lee: Like I said, you know, it's easier said than done. You have to test into it. And if you follow the Pareto principle, you can assume that 80% of creative is probably going to fail, but at least you can learn from it.
Kevin Kovach: Yeah, yeah, yeah.
Evan Lee: And I think you have to have a beginner's mind to it, because it's just like, I don't know anything, who knows if this works, but throw it in and we'll see what happens at the end of the day, right?
Kevin Kovach: Yeah. As long as we're testing responsibly, making sure it doesn't disrupt momentum in core campaigns and it's not getting a crazy amount of budget such that the opportunity cost of the test is super high, then yeah, go for it. This also, I think, is a way to beneficially lead the team, because I really don't shoot down ideas. It's a way of keeping myself in check too, because I think it's easy for buyers, no matter what seniority level, to fall into patterns or become stubborn about something. Even I fall victim to that. So the team has the freedom to test what they want, when they want, with approval from the client, obviously, as long as we test responsibly. They know I'm not going to come down on them just because I disagree with it. But those moments actually do check me. I've seen a manual carousel do well recently; I had totally written those off for like a year. So yeah, those moments are cool.
Evan Lee: Love it. Everybody, I have one last question for Kevin on my end that I need to ask, but just as a quick reminder, please keep upvoting and getting your questions into the Q&A tab; we're going to be jumping there right after this one. And one last thing: I saw that Talia had mentioned in the chat, are we supposed to be seeing that same slide? Yes, we did have some stuff to share, but again, we're running into some technical difficulties, so we're making do as best we can on this one. But Kev, the last question that I have for you, I would be ashamed if I did not ask, because whenever anyone talks about naming conventions, I say I have a guy, and that's who I point to. For context, everybody, Kevin is a wiz when it comes to naming conventions. So what I'm curious about is: we've worked on all of this great creative that's now produced, and like you mentioned, you have a specific way you test it into the account. How do you set up naming conventions, and how does that then correlate into your analysis after those ads are live?
Kevin Kovach: Yeah, for sure. So before we met and I learned about Motion, you know, my background is in statistics and data engineering. So I was concepting out a naming system that would be very spreadsheet- and formula-friendly, because I want to be able to get the data into a database as fast and as reliably as possible, and minimize the QA. If there are any data scientists in the chat, they'll know that about 90% of the time you spend doing data science is actually getting the data into a usable format. And having an expansive, reliable naming system just saves so much time, and it also allows us to provide insane depth in our reports and have them be real-time in Motion. So you said we didn't have anything to share, but I can share my screen and walk through what David prepared. Give me a minute here.
Evan Lee: Hey, let's do it.
Evan Lee: And you might have to zoom in for folks just so we can see it a little bit.
Kevin Kovach: Oh yeah.
Evan Lee: A little more. Give me a couple more if you can. I'm sorry. Yeah, yeah, yeah. 175. Yeah, there we go. There we go. I like it.
Kevin Kovach: So with a detailed naming system, it's filled with qualitative information that you otherwise can't just pull into a column and export. That allows us to build these real-time reports that clients can have access to, to get a sense of some of the more basic things, like what ad formats are working and where things are going in the funnel. Here I opened up a few tabs so no one had to wait for Motion to load. I can't share the exact creative and give away the client, so this is a pseudonym, but pretty much across all of our accounts we will have these four categories: reports that the client can always access and go into, that are always updating, looking at prospecting audiences, retargeting audiences, and then on the creative side, by value prop. So part of our naming system is going to have that value prop tag. And we use the Facebook naming template to make it as easy and fast as possible, even though it looks very, very complicated and long, and sometimes is a little overwhelming for new buyers in the first week. But once they get used to the naming template, it gets pretty fast and doesn't really add that much more time, while it saves an insane amount of time when reporting. We essentially don't have to spend much time making creative reports anymore. As long as we're on top of our naming, all of these are automatically going to update, and we just have to go in and QA, make sure there isn't a missed character somewhere or something off. Either way, if you can save like 95% of your reporting time, that can open up a whole day. I've talked to plenty of buyers who lose days to reporting. So this just opens up
Evan Lee: And the biggest thing there is, when you stop losing that time, it's not like you just get it back and do nothing. Instead of manual grunt work putting together reports, you can now shift to decision making. It's like, okay, now I can spend time interpreting the data to know what to do next, instead of just putting the data together.
Kevin Kovach: Yeah, you have a chance to sit and think critically. And it really gives the brands and clients a lot to think about as well. These are not test campaigns right here; every single ad is tagged with a value prop tag. These are core campaigns, gladiator style, where I'm not controlling the spend anymore, so what value props are driving the performance? And then we also have our controlled theme down here. In sales and promotions, we can break things out, you know, for Pierce and Pierce fine detergents, bonus points if you know what that's from, and break out the creative for each specific scent. There's an image style breakdown in addition to value props; we segment all of our creative by style. So for images it can be model UGC; product stylized, which is typically how we define any sort of product shot that's been set up in a special way; product e-com shot, and we separately tag those general product e-com shots because sometimes they do surprisingly well. Like I said, just because something's ugly doesn't mean you shouldn't test it. And then just general graphics. So this is something that David also has access to and will use for creative strategy. In a way, this is where the media buyers come in to contribute to creative. It's not necessarily Facebook expertise; it could also just be us being on top of naming systems. It gives the creative team the flexibility to go in here whenever they want and come away with learnings.
Evan Lee: Love it. One thing I have a follow-up on: all of these reports stem from the naming. Are there any examples or something you can share that showcases to our audience the specific things that you're tracking, if it is consistent?
Kevin Kovach: Yeah. So the consistent part is going to be here. We have a style tag that runs through all of the image styles, and then we also have a video theme tag running through all the different video styles. David is actually in the process of combining those two tags into one, so we'll have an update here soon. And with that update, I'm going to add a few more tags. One of those is going to be the creative pillar, which is how the entire team is going to be thinking about and approaching their creative; as a reminder, those are education, emotion and authority. So soon we'd have that ongoing breakdown here as well, plus value prop. And then honestly, we'll have some custom tags for each client. One of the things we like to do at onboarding is ask, what questions can we answer for you? Then we try to come up with a strategy to use Facebook as that focus group, to actually give them the data that can help them make big, expensive decisions. Sometimes that will require a custom tag. On brands that do a lot of whitelisting or influencers, I'll add a page tag so I can label what actual page an ad is coming from. But that's not something I need on most accounts if 90% of the ads are just coming from the brand page.
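To picture how a tag-based naming convention like this feeds a database or spreadsheet, here is a minimal sketch. The double-underscore delimiter and the exact field order are hypothetical, not ATTN's actual template; the tag names simply echo the ones Kevin mentions (style, video theme, creative pillar, value prop, page).

```python
# Hypothetical sketch of a formula-friendly ad naming convention.
# Delimiter and field order are assumptions, not ATTN's real template.
from typing import Dict

FIELDS = ["launch_date", "style", "theme", "pillar", "value_prop", "page"]
DELIM = "__"

def parse_ad_name(ad_name: str) -> Dict[str, str]:
    """Split a structured ad name into named tags, failing loudly so a
    missed character surfaces during QA instead of corrupting reports."""
    parts = ad_name.split(DELIM)
    if len(parts) != len(FIELDS):
        raise ValueError(f"expected {len(FIELDS)} tags, got {len(parts)}: {ad_name!r}")
    return dict(zip(FIELDS, parts))

row = parse_ad_name("2023-06-01__image-ugc__unboxing__education__durability__brandpage")
print(row["value_prop"])  # durability
```

Because every tag lands in its own column, a per-value-prop or per-pillar report is just a group-by away, which is what makes reports like the ones Kevin describes effectively free to keep up to date.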
Evan Lee: Love it. And a quick plug: you can see Kevin here using Motion, so feel free to check us out and book some time so we can talk about how to make this come to life for you all. Cool.
Kevin Kovach: Oh yeah, it's crazy, because you just get so much depth with these reports, and they don't take that much time as long as you're on top of your naming and it has that depth to it.
Evan Lee: Love it. Okay. And yeah, with that, you know, we've had buyers take on more accounts, which uh positively impacts their income.
Kevin Kovach: Yeah.
Evan Lee: Hand in hand: business goals and everyone's goals being met. What I want to do here with our last 10 minutes is get through a bunch of the questions that have started to pile up. So let's do our best. Where I'll kick off is the one that has the most votes, 15, and hopefully it's a nice and easy lob. It's this one here.
A question from Iain Harris appears on screen: "Can you define 'share IDs'?"
Evan Lee: Can you define share IDs? And I think you had mentioned this pretty immediately when we talked about it earlier.
Kevin Kovach: Yeah, so I think it's super, super important, and honestly, in a lot of cases it's even more important than structure. So when you launch an ad, you're going to be creating two IDs in the account. There's the ad ID, which is created every time you create an ad, but there's also an unpublished post made that's actually used to show the ad, and that's the content ID, or the post ID; those can be used interchangeably. You can actually share that content ID, and the content ID is where all of the social proof is stored, not the ad ID. That way you can have multiple ads that are all actually using the same post. So if you're running them in like 10 different audiences, all of the engagement is going to stack. That's how you see those posts on Facebook with a massive amount of social proof. I guarantee you, except in the most extreme situations, those are shared post IDs.
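To make the two-ID distinction concrete, here is a toy illustration (all ad names and numbers are invented): several ad IDs pointing at one shared post ID accumulate their engagement on that single post, while an ad with a fresh post ID starts its social proof from zero.

```python
# Toy data: three ads, two of which reuse the same shared post ID.
from collections import defaultdict

ads = [
    {"ad_id": "ad_1", "post_id": "post_100", "engagements": 1200},
    {"ad_id": "ad_2", "post_id": "post_100", "engagements": 800},  # same post, different audience
    {"ad_id": "ad_3", "post_id": "post_200", "engagements": 50},   # fresh post, social proof resets
]

# Social proof lives on the post, not the ad, so engagement rolls up by post_id.
social_proof = defaultdict(int)
for ad in ads:
    social_proof[ad["post_id"]] += ad["engagements"]

print(dict(social_proof))  # {'post_100': 2000, 'post_200': 50}
```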
Evan Lee: Awesome. Everybody, I hope that one helps. If it doesn't, just throw it into the chat really quick so we can clarify. But I wanted to jump to the second most voted question and it comes from Justin here.
A question from Justin Regis appears on screen: "Quick ?? regarding 'statistical significance' - our agency agrees on a $1k spend minimum for an 1st run adset (usually with 4-6 ads). Does that match with what Kevin uses as a benchmark?"
Evan Lee: So, quick question regarding stat sig: our agency agrees on a $1k spend minimum for the first ad set run, usually with four to six ads. Does that match a benchmark that you might use to determine relevance? So $1k spend, does that hit stat sig for you?
Kevin Kovach: So I actually do one ad per ad set. Yeah, I know. But I'm not in any way saying that way is wrong, because the most important thing is that you find a way to test, and how you do it is going to vary by account, by situation, by budget. But I want to make sure that every single variant gets the same amount of spend, or else spend is technically going to be a variable. If you're calling it a single-variable test and you're just trying to test creative, but then spend is also a variable, it's no longer going to be reliable. And honestly, for the traffic tests, $1k for a week is probably going to do it. Sometimes we can even get the same results at like $500, but I let it run for the full week just to be sure. On the purchase side, man, that varies so, so, so much. I have one account right now that's like an $800 AOV. To test with the purchase objective there and actually get through the learning phase, that would be a considerable investment.
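As a rough illustration of why equal spend per variant matters for significance, here is a back-of-envelope two-proportion z-test with invented numbers. This is standard statistics, not a formula Kevin gives in the talk; with lopsided spend, the sample sizes diverge and the cheaper variant rarely collects enough events to reach significance at all.

```python
# Two-proportion z-test for comparing two ad variants that received
# equal spend (so spend itself isn't a hidden variable). Numbers are made up.
from math import sqrt, erf

def two_proportion_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Return (z, two-sided p-value) for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal CDF tail
    return z, p_value

# Variant A: 90 conversions from 3,000 clicks; variant B: 60 from 3,000.
z, p = two_proportion_test(90, 3000, 60, 3000)
print(f"z={z:.2f}, p={p:.4f}")  # significant at the usual 0.05 threshold
```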
Evan Lee: Cool. So we have some questions now about the testing. I think this goes hand in hand with what you're uh, with what you're talking about here.
A question from Andres Amado appears on screen: "What methodology do you prefer to test new creative frameworks? A/B tests OR run all the creatives in the same ad group?"
Evan Lee: Andres asks, hopefully I've pronounced that correctly: what methodology do you prefer to test new creative frameworks? A/B tests, or run all the creatives in the same ad group? You'd mentioned one ad per ad set, maybe with budget optimization. What are you typically running there?
Kevin Kovach: Yeah. In the ideal situation, both. Um, you know, sometimes and performance may not allow both, but that's why I like having the traffic test because I know by doing this for a while, it's reliable and it's not going to be like an a huge opportunity cost, you know, regardless of like the overall budget of the account unless it's like a couple hundred dollars. Um, and then on the side, I do like to identify that algorithm winner, that gladiator style free for all. That can be a dynamic DCO test, um, depending on the amount of creative. More recently, I've been just, say if I have like 30 variants, just launch them in an advantage plus campaign. Um, but I still want that traffic campaign on the side, so I have the flexibility to turn off that purchase campaign and find that the opportunity cost is insane and not really um, great for performance.
Evan Lee: So people are definitely curious about the traffic campaign because Justin comes with another question of just like, so,
A question from Justin Regis appears on screen: "...so is the idea to use a Traffic test with new creative and THEN move those 'winners' over to a Conversion campaign?"
Kevin Kovach: Yeah, this is where David would have been really, you know, I had to convert him essentially once he started. Um, so before he started, I was comfortable running the traffic tests, um, just by themselves for like a year. Um, I knew he would have some questions, um, and he was a little skeptical. So I started running them side by side again on accounts that, you know, was didn't negatively impact performance and showed him, you know, three consecutive reports, look, traffic campaign and this purchase campaign identified the same three winners. Um, so that's how I got that buy in. The key is you have to introduce friction. So you can kind of filter out the bouncing. And that's why traffic campaigns aren't helpful for um, filling the funnel because a lot of them do bounce, but if you introduce friction and use say content views plus any sort of event, um, as your numerator in the uh conversion rate formula, you can actually get very direct results as to what happens in the conversion campaign. Um, 90% of the time, it'll identify like the top three ads. That order might change a little bit. Like I said earlier, I'm generating a playoff field here, not just like a single winner. So that same three is still going to be valuable to me, but the benefits of the traffic test is we get those results a lot faster. Sometimes a week, maybe a two two tops, three is the absolute max if we want to do that omni channel um, event um, aspect of that.
Evan Lee: That's so interesting because something I've also like selfishly been curious about is is Facebook being the place where you're able to to to access a large amount of people like you described it, it's just being like a more or less. Um, Facebook might tell you one thing, like I'm probably not giving a great example that'll lead into a question, but let's say you have a couple personas, one that's like Daisy Dukes and then sleepy Sally, whatever it might be, right? Daisy Dukes, we thought would be the best, but we see sleepy Sally doing really well on for example. But then is Daisy Dukes like, should we restructure everything around that or is it somebody we should go after on a Tik Tok? Or is it somebody we should go after on an email more? How do you, how do you uh, like navigate those type of insights?
Kevin Kovach: So similar to uh what we did with like that value prop breakdown earlier, I'm uh going to verify that there is no particular gender or demo that's throwing off these results when you're looking at them at the aggregated level. So in that earlier example, if we're just looking at gifting at the aggregated level, it would have been a clear and decisive winner. But when you actually separate it out, you see, okay, it's only actually a clear and decisive winner with 55 plus and that's where the algorithm wanted to go. Um, you know, the true winner is going to be the one that actually activates as many of those demos as possible. And that's the one I'm most interested in. So if as long as you know, there are no shenanigans, then yeah, I'll follow up and say, okay, this might be something that's more appropriate for or a different channel. Um, but I I try and be as agnostic as possible with Facebook. Like in the end, it doesn't matter what our opinion is, it's the algorithms. So I'll test something that no one likes. I don't care. Um, just in case, you never know. Ugly ads do really well and sometimes you can just keep getting uglier and it still does really well. Um, and then just adds to that diversity too because if you have like ugly ads and then with your videos, like I understand from the branding perspective that concerns people, but on the algorithmic perspective, that increases your reach so much. Um, because Facebook knows the type of content that people historically interact with and is more inclined to serve that to them. Which is why having so many um, is often a key element to scaling.
Evan Lee: I said, you know, it's easier said than done. You have to test into it. And if you follow the Pareto Pareto principle, you can assume that 80% of creative is probably going to fail. Um, but at least you can learn from it.
Kevin Kovach: Yeah, yeah, yeah.
Evan Lee: And I think you like have a beginner's mind to it because it's just like, I don't know anything. I don't think like who knows if this works, but throw it in and we'll see what happens at the end of the day, right?
Kevin Kovach: Yeah. Like if we're testing responsibly, making sure it doesn't disrupt momentum in core campaigns or it's not getting like a crazy amount of um, budget so the opportunity cost of this test is super high. Yeah, go for it. Um, this also I think is a way to beneficially lead the team because I really don't shoot down ideas. Um, you know, it's a way of also keeping myself in check because I think it's easy for buyers no matter what seniority level to fall into patterns or to like become stubborn about something. Um, you know, even I I fall victim to that. So just having the freedom to test what they want, when they want with approval from the client, obviously, as long as we test responsibly, you know, they know I'm not going to like come down on them because I just disagree with it. Um, but those moments actually, you know, they do check me. Um, I've seen a manual carousel do well recently. I had totally absolutely written those off for like a year. Um, so yeah, those moments are cool.
Evan Lee: Love it. Uh, everybody, I have one last question for Kevin on my end. But just as a quick reminder, please keep upvoting and getting your questions into the Q&A tab; I'm going to be jumping there right after this one. And one last thing: I saw that Talia had mentioned in the chat, like, are we supposed to be seeing that same slide? Yes, we did have some stuff to share, but we're running into some technical difficulties, so we're making the best of it with this one. But Kev, the last question I have for you, and I'd be ashamed if I did not ask it, because whenever anyone talks about naming conventions, I say "I have a guy," and Kevin is who I point to. For context, everybody, Kevin is a wiz when it comes to naming conventions. So what I'm curious about is: we've worked on all of this great creative that's now produced, and like you'd mentioned, you have a specific way you test it into the account. How do you set up naming conventions, and how does that then carry into your analysis after those ads are live?
Kevin Kovach: Yeah, for sure. So before we met and I learned about Motion, you know, my background is in statistics and data engineering. So I was concepting out a naming system that would be very spreadsheet- and formula-friendly, because I want to be able to get the data into a database as fast and as reliably as possible and minimize the QA. If there are any data scientists in the chat, they'll know that about 90% of the time you spend doing data science is actually getting the data into a usable format. Having an expansive and reliable naming system just saves so much time, and it also allows us to provide insane depth to our reports and have them be real time in Motion. Um, you said we didn't have anything to share, but I can share my screen, let's do it, and I can walk through what David prepared. Give me a minute here.
Evan Lee: And you might have to zoom in for folks just so we can see it a little bit.
Kevin Kovach: Oh yeah.
Evan Lee: A little more. Give me a couple more if you can. I'm sorry. Yeah, yeah, yeah. 175. Yeah, there we go. There we go. I like it.
Kevin Kovach: So a detailed naming system is filled with qualitative information that you can't just simply pull into a column and export. And that allows us to build these real-time reports that clients can have access to, to get a sense of some of the more basic things: what ad formats are working, where things are going in the funnel. I opened up a few tabs here so no one had to wait for Motion to load. I can't share the exact creative and give away the client, so this is a pseudonym, but pretty much across all of our accounts we will have these four categories of reports that the client can always access and that are always updating: looking at prospecting audiences, retargeting audiences, and then on the creative side, breakdowns by value prop. So part of our naming system is going to have that value prop tag. And we use the Facebook naming template to make it as easy and fast as possible, even though it looks very complicated and long, and it's sometimes a little overwhelming for new buyers the first week. But once they get used to the naming template it gets pretty fast and doesn't really add that much more time, and it saves an insane amount of time when reporting. We essentially don't have to spend much time making creative reports anymore. As long as we're on top of our naming, all of these are automatically going to update, and we just have to go in and QA, make sure there isn't a missed character somewhere or something off. But either way, if you can save like 95% of your reporting time, that can open up a whole day. I've talked to plenty of buyers who lose days to reporting. So this just opens that up.
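A spreadsheet- and formula-friendly naming system like the one Kevin describes can be sketched in a few lines. The delimiter, the tag keys (`st` for image style, `vp` for value prop, `pg` for page) and the example ad name below are all hypothetical; his actual template isn't shown on screen.

```python
def parse_ad_name(ad_name: str, field_sep: str = "_", kv_sep: str = "-") -> dict:
    """Split a delimited ad name into a dict of tag -> value.

    Assumes tagged fields are `key-value` pairs; untagged fields
    (e.g. a client code or launch date) fall back to positional keys.
    """
    tags = {}
    for i, field in enumerate(ad_name.split(field_sep)):
        if kv_sep in field:
            key, value = field.split(kv_sep, 1)
            tags[key] = value
        else:
            tags[f"field{i}"] = field  # positional fallback
    return tags

# Hypothetical ad name: client code, launch date, then tagged attributes.
name = "ACME_20220801_st-modelUGC_vp-gifting_pg-brand"
print(parse_ad_name(name))
# -> {'field0': 'ACME', 'field1': '20220801', 'st': 'modelUGC', 'vp': 'gifting', 'pg': 'brand'}
```

Because every tag is machine-readable, a spreadsheet SPLIT formula or a reporting tool can do the same extraction, which is what lets the reports update automatically instead of being rebuilt by hand.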
Evan Lee: And the biggest thing there is the time. When you stop losing time, it's not like you just get it back and do nothing. Instead of manual grunt work putting reports together, you can shift to decision making. It's like, okay, now I can spend time interpreting the data to know what to do next, instead of just putting the data together.
Kevin Kovach: Yeah, you have a chance to sit and think critically. And it really gives the brands and clients a lot to think about as well. These are not test campaigns right here; every single ad is tagged with a value prop tag. So these are core campaigns, gladiator style, where I'm not controlling the spend anymore: what value props are driving the performance? And then we also have our controlled themes down here. So in sales and promotions we can break things out, you know, for Pierce and Pierce fine detergents, bonus points if you know what that's from, and break out the creative for each specific scent. There's also an image style breakdown in addition to value props; we segment all of our creative by style. For images that can be model UGC; product stylized, which is typically how we define any sort of product shot that's been set up in a special way; product e-com shots, which we tag separately because sometimes they do surprisingly well, like I said, just because something's ugly doesn't mean you shouldn't test it; and then just general graphics. This is something David also has access to and will be using for creative strategy. So in a way, this is where the media buyers come into creative. And it's not necessarily a Facebook thing, it can also just be us being on top of naming systems. The creative team has the flexibility to go in here whenever they want and come away with learnings.
Evan Lee: Love it. One thing I have a follow-up on: all of these reports stem from the naming. Are there any examples you can share that showcase to our audience the specific things you're tracking, if it is consistent?
Kevin Kovach: Yeah. So the consistent part is going to be here. We have a style tag that runs through all of the image styles, and we also have a video theme tag running through all the different video styles. David is actually in the process of combining those two tags into one, so we'll have an update here soon. And with that update, I'm going to add a few more tags. One of those is going to be the creative pillar, which is how the entire team is going to be thinking about and approaching their creative; as a reminder, those are going to be education, emotion and authority. So soon we'll have that ongoing breakdown here as well, plus value prop. And then honestly, we'll have some custom tags for each client. One of the things we like to do on onboarding is ask, what questions can we answer for you? And then try to come up with a strategy to use Facebook as that focus group, to actually give them the data that can help them make big, expensive decisions. Sometimes that will require a custom tag. So on brands that do a lot of whitelisting or influencers, I'll add a page tag so I can label what actual page an ad is coming from. But that's not something I need on most accounts if, you know, 90% of the ads are just coming from the brand page.
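Once ads carry tags like these, the breakdowns Kevin shows are just a group-by over the tag. A minimal sketch, with an invented "pillar" tag and made-up spend and purchase numbers:

```python
from collections import defaultdict

def rollup_by_tag(rows, tag):
    """Aggregate spend and purchases per tag value and return CPA.

    Each row is a dict with a 'tags' dict plus metrics; ads missing
    the tag are grouped under 'untagged'.
    """
    spend = defaultdict(float)
    purchases = defaultdict(int)
    for row in rows:
        key = row["tags"].get(tag, "untagged")
        spend[key] += row["spend"]
        purchases[key] += row["purchases"]
    return {k: round(spend[k] / purchases[k], 2) if purchases[k] else None
            for k in spend}

# Hypothetical ads tagged by creative pillar.
ads = [
    {"tags": {"pillar": "education"}, "spend": 400.0, "purchases": 10},
    {"tags": {"pillar": "education"}, "spend": 200.0, "purchases": 5},
    {"tags": {"pillar": "emotion"},   "spend": 300.0, "purchases": 12},
    {"tags": {"pillar": "authority"}, "spend": 150.0, "purchases": 0},
]
print(rollup_by_tag(ads, "pillar"))
# -> {'education': 40.0, 'emotion': 25.0, 'authority': None}
```

The same function works for any tag (style, value prop, page, or a client-specific custom tag), which is why adding a new tag immediately yields a new report dimension.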
Evan Lee: Love it. And if anyone plug, you can see Kevin here using motion. Feel free to check us out. Uh, book some time so you can talk about how to make this come to life for you all. Cool.
Kevin Kovach: Oh yeah, it's crazy, because you just get so much depth with these reports, and they don't take that much time as long as you're on top of your naming and it has that depth to it.
Evan Lee: Love it. Okay. And yeah, with that, you know, we've had buyers take on more accounts, which uh positively impacts their income.
Kevin Kovach: Yeah.
Evan Lee: Hand in hand, business and personal, everyone's goals being met. What we're going to do here with our last 10 minutes is get through the bunch of questions that have started to pile up. So let's do our best to try to get through them. I'll kick off with the one that has the most votes, 15, and hopefully it's a nice and easy lob. Here's the first one.
A question from Iain Harris appears on screen: "Can you define 'share IDs'?"
Evan Lee: Can you define share IDs? I think you had mentioned this pretty early on when we talked.
Kevin Kovach: Yeah, so I think it's super, super important, and honestly in a lot of cases it's even more important than structure. So when you launch an ad, you're going to be creating two IDs in the account. There's the ad ID, which is created every time you create an ad, but there's also an unpublished post made that's actually used to show the ad, and that's the content ID, or the post ID; those can be used interchangeably. You can actually share that content ID, and the content ID is where all of the social proof is stored, not the ad ID. That way you can have multiple ads that are all actually using the same post ID, so if you're running them in like 10 different audiences, all of the engagement is going to stack. That's how you see those posts on Facebook with a massive amount of social proof. I guarantee you, except in the most extreme situations, those are shared post IDs.
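Mechanically, sharing a post ID means pointing each new ad's creative at the existing post instead of letting Facebook mint a fresh one. In the Marketing API that's the creative's `object_story_id` field, formatted as `pageID_postID`. The helper below just builds those params; the IDs are placeholders, and in practice you'd pass the dict to an ad-creative call in Facebook's SDK.

```python
def shared_post_creative_params(page_id: str, post_id: str) -> dict:
    """Build ad-creative params that reuse an existing (possibly
    unpublished) page post, so engagement stacks on one post ID."""
    return {"object_story_id": f"{page_id}_{post_id}"}

# Placeholder IDs; every ad created with these params renders the same
# post, so likes, comments and shares accumulate across all audiences.
params = shared_post_creative_params("123456789", "987654321")
print(params)  # {'object_story_id': '123456789_987654321'}
```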
Evan Lee: Awesome. Everybody, I hope that one helps. If it doesn't, just throw it into the chat really quick so we can clarify. But I wanted to jump to the second most voted question and it comes from Justin here.
A question from Justin Regis appears on screen: "Quick ?? regarding 'statistical significance' - our agency agrees on a $1k spend minimum for an 1st run adset (usually with 4-6 ads). Does that match with what Kevin uses as a benchmark?"
Evan Lee: So, quick question regarding stat sig: our agency agrees on a $1k spend minimum for the first ad set run, typically with four to six ads. Does that match a benchmark that you might use to determine relevance? So, $1k spend, does that hit stat sig for you?
Kevin Kovach: So I actually do one ad per ad set. Yeah, I know, but I do test this way, and I'm not in any way saying yours is wrong, because the most important thing is that you find a way to test, and how you do it is going to vary by account, by situation, by budget. But I want to make sure that every single variant gets the same amount of spend, or else spend is technically going to be a variable. If you're calling it a single-variable test and you're just trying to test creative, but spend is also a variable, it's no longer going to be reliable. And honestly, for the traffic tests, $1k for a week, that's probably going to do it. Sometimes we can even get the same results at like $500, but we let it run for the full week just to be sure. On the purchase side, man, that varies so, so much. I have one account right now with like an $800 AOV. To test there with the purchase objective and actually get through the learning phase, that would be a considerable investment.
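One rough way to sanity-check spend thresholds like these is a standard two-proportion sample-size approximation. The baseline CTR, the lift, and the roughly 95% confidence / 80% power choices below are illustrative assumptions, not Kevin's numbers:

```python
import math

def n_per_variant(p1: float, p2: float,
                  z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Impressions needed per variant to detect a rate change from p1
    to p2, using the two-proportion z-test approximation at ~95%
    confidence (z_alpha) and ~80% power (z_beta)."""
    effect = (p1 - p2) ** 2
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil(((z_alpha + z_beta) ** 2) * variance / effect)

# Illustrative: detect a 1.0% vs 1.3% CTR difference.
n = n_per_variant(0.01, 0.013)
print(n)  # roughly 19,800 impressions per variant
```

At a hypothetical $10 CPM that's about $200 per variant, in the same ballpark as the $500 to $1k weekly traffic budgets discussed here; purchase events are far rarer than clicks, which is exactly why the purchase side "varies so much".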
Evan Lee: Cool. So we have some questions now about the testing, and I think this goes hand in hand with what you're talking about here.
A question from Andres Amado appears on screen: "What methodology do you prefer to test new creative frameworks? A/B tests OR run all the creatives in the same ad group?"
Evan Lee: Andres asks, hopefully I've pronounced that correctly: what methodology do you prefer to test new creative frameworks? A/B tests, or running all the creatives in the same ad group? One ad per ad set, or maybe campaign budget optimization? What are you typically running there?
Kevin Kovach: Yeah. In the ideal situation, both. Sometimes performance may not allow both, but that's why I like having the traffic test, because I know from doing this for a while that it's reliable, and it's not going to be a huge opportunity cost regardless of the overall budget of the account, unless it's like a couple hundred dollars. And then on the side, I do like to identify that algorithm winner with the gladiator-style free-for-all. That can be a dynamic creative (DCO) test, depending on the amount of creative. More recently, say if I have like 30 variants, I've been just launching them in an Advantage+ campaign. But I still want that traffic campaign on the side, so I have the flexibility to turn off that purchase campaign if I find the opportunity cost is insane and not really great for performance.
Evan Lee: So people are definitely curious about the traffic campaign, because Justin comes back with another question.
A question from Justin Regis appears on screen: "...so is the idea to use a Traffic test with new creative and THEN move those 'winners' over to a Conversion campaign?"
Kevin Kovach: Yeah, this is where David would have been really helpful; I had to convert him essentially once he started. Before he started, I was comfortable running the traffic tests just by themselves for like a year. I knew he would have some questions, and he was a little skeptical. So I started running them side by side, again on accounts where it didn't negatively impact performance, and showed him three consecutive reports: look, this traffic campaign and this purchase campaign identified the same three winners. That's how I got that buy-in. The key is you have to introduce friction, so you can filter out the bounces. That's why traffic campaigns aren't helpful for filling the funnel, because a lot of that traffic does bounce. But if you introduce friction and use, say, content views plus some other event as your numerator in the conversion rate formula, you can actually get very direct results as to what happens in the conversion campaign. 90% of the time, it'll identify the same top three ads. The order might change a little bit; like I said earlier, I'm generating a playoff field here, not just a single winner, so that same three is still going to be valuable to me. But the benefit of the traffic test is we get those results a lot faster. Sometimes a week, maybe two tops; three weeks is the absolute max if we want to do the omnichannel aspect of that.
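The friction idea, counting only visitors who got past a deeper event, can be sketched as a ranking function. The event names and numbers here are made up; Kevin only specifies "content views plus some other event" as the numerator, so `add_to_carts` is an assumed example:

```python
def rank_by_friction_rate(ads: dict, top: int = 3) -> list:
    """Rank ads by (content views + deeper event) / link clicks,
    filtering pure bounce traffic out of a traffic-campaign test."""
    rates = {
        name: (m["content_views"] + m["add_to_carts"]) / m["link_clicks"]
        for name, m in ads.items()
        if m["link_clicks"]  # skip ads with no clicks yet
    }
    return sorted(rates, key=rates.get, reverse=True)[:top]

# Hypothetical traffic-test results for five creatives.
ads = {
    "hook_A": {"link_clicks": 900, "content_views": 400, "add_to_carts": 60},
    "hook_B": {"link_clicks": 950, "content_views": 200, "add_to_carts": 15},
    "hook_C": {"link_clicks": 800, "content_views": 420, "add_to_carts": 80},
    "hook_D": {"link_clicks": 700, "content_views": 150, "add_to_carts": 10},
    "hook_E": {"link_clicks": 850, "content_views": 390, "add_to_carts": 55},
}
print(rank_by_friction_rate(ads))  # -> ['hook_C', 'hook_E', 'hook_A']
```

Returning a top three rather than a single winner mirrors the "playoff field" framing: the exact order may shuffle in the purchase campaign, but the same set tends to surface.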
Evan Lee: That's so interesting, because something I've also selfishly been curious about is Facebook being the place where you're able to access a large amount of people, more or less as that focus group you described. I'm probably not giving a great example, but let's say you have a couple of personas, one that's like Daisy Dukes and one that's Sleepy Sally, whatever it might be. Daisy Dukes we thought would be the best, but we see Sleepy Sally doing really well on Facebook, for example. Should we then restructure everything around that? Or is Daisy Dukes somebody we should go after on TikTok, or somebody we should go after more over email? How do you navigate those types of insights?
Kevin Kovach: So similar to what we did with that value prop breakdown earlier, I'm going to verify that there's no particular gender or demo throwing off these results when you look at them at the aggregated level. In that earlier example, if we were just looking at gifting at the aggregated level, it would have been a clear and decisive winner. But when you actually separate it out, you see, okay, it's only a clear and decisive winner with 55 plus, and that's where the algorithm wanted to go. The true winner is going to be the one that actually activates as many of those demos as possible, and that's the one I'm most interested in. So as long as there are no shenanigans, then yeah, I'll follow up and say, okay, this might be something that's more appropriate for a different channel. But I try to be as agnostic as possible with Facebook. In the end, it doesn't matter what our opinion is; it's the algorithm's opinion that counts. So I'll test something that no one likes, I don't care, just in case; you never know. Ugly ads do really well, and sometimes you can just keep getting uglier and it still does really well. And that just adds to your diversity too, because if you have ugly ads in the mix alongside your videos, I understand from the branding perspective that concerns people, but from the algorithmic perspective, that increases your reach so much, because Facebook knows the type of content that people historically interact with and is more inclined to serve that to them. Which is why having so many different styles is often a key element to scaling.
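This aggregate-versus-demo check is essentially guarding against a Simpson's-paradox-style skew. A toy version with invented numbers, where "gifting" wins overall only because delivery was poured into 55+:

```python
def ctr_by(data, group=None):
    """CTR per value prop, either aggregated across all demo segments
    (group=None) or restricted to a single named segment."""
    out = {}
    for vp, segments in data.items():
        rows = segments.items() if group is None else [(group, segments[group])]
        imps = sum(s["impressions"] for _, s in rows)
        clicks = sum(s["clicks"] for _, s in rows)
        out[vp] = clicks / imps
    return out

# Invented delivery data: impressions/clicks per demo per value prop.
data = {
    "gifting": {
        "18-34": {"impressions": 1000, "clicks": 10},
        "35-54": {"impressions": 1000, "clicks": 10},
        "55+":   {"impressions": 8000, "clicks": 400},
    },
    "self-care": {
        "18-34": {"impressions": 1000, "clicks": 20},
        "35-54": {"impressions": 1000, "clicks": 20},
        "55+":   {"impressions": 1000, "clicks": 30},
    },
}

overall = ctr_by(data)                 # gifting 4.2% vs self-care ~2.3%
young   = ctr_by(data, group="18-34")  # gifting 1.0% vs self-care 2.0%
print(overall, young)
```

The "true winner" test is then whether a value prop leads in most demos, not just in the blended number that the algorithm's delivery choices have already shaped.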
Evan Lee: Like I said, you know, it's easier said than done. You have to test into it. And if you follow the Pareto principle, you can assume that 80% of creative is probably going to fail, but at least you can learn from it.
Kevin Kovach: Yeah, yeah, yeah.
Evan Lee: And I think you have a beginner's mind about it, because it's just like: I don't know anything, who knows if this works, but throw it in and we'll see what happens at the end of the day, right?
Kevin Kovach: Yeah. As long as we're testing responsibly, making sure it doesn't disrupt momentum in core campaigns and it's not getting a crazy amount of budget where the opportunity cost of the test is super high, then yeah, go for it. This also, I think, is a way to beneficially lead the team, because I really don't shoot down ideas. It's a way of keeping myself in check, too, because I think it's easy for buyers at any seniority level to fall into patterns or become stubborn about something. Even I fall victim to that. So the team has the freedom to test what they want, when they want, with approval from the client obviously, as long as we test responsibly. They know I'm not going to come down on them just because I disagree with it. And those moments actually do check me. I've seen a manual carousel do well recently, and I had totally, absolutely written those off for like a year. So yeah, those moments are cool.
Evan Lee: Love it. Uh, everybody, I have one last question for Kevin on my end. Um, I need to ask it, but just as a quick reminder, please keep up voting and getting your questions into the Q&A tab. Going to be jumping there right after this one. And one last thing, I saw that Talia had mentioned in the chat, like are we supposed to be seeing that same slide? Yes, uh, we did have some stuff to share, but again, running into some technical difficulties. So we're making best as we can just on this one. But Kev, last question that I have for you, I would be ashamed if I did not ask because whenever anyone talks about naming conventions, I say I have a guy and that's where I start to point to. So for context, everybody, uh, Kevin is like a wiz when it comes to naming conventions. So something I'm curious about now is like we've worked on all of this great creative that's now produced and like you'd mentioned, you have a specific way you test into the account. So I'm curious about is like how do you set up naming conventions and how does that then correlate into your analysis after those are live?
Kevin Kovach: Yeah, for sure. So before we met, um, and figured out and I learned about motion, um, you know, my background is in like statistics and data engineering. So I was thinking of and concepting out a naming system that would be very spreadsheet and formula friendly because I want to be able to into a database as fast as possible and reliably as possible, minimize the QA. If there are any like data scientists in the chat, like they'll know about 90% of the time you spend doing data science is actually getting the into a usable format. And having a, you know, expansive and reliable naming system just saves so much time. And then also allows us to provide like insane depth to our reports and have them be like real time in motion. Um, so you say we didn't have anything to share, but I can share my screen. Hey, let's do it. I can walk through what uh David prepared. We'll give me a minute here.
Evan Lee: And you might have to zoom in for folks just so we can see it a little bit.
Kevin Kovach: Oh yeah.
Evan Lee: A little more. Give me a couple more if you can. I'm sorry. Yeah, yeah, yeah. 175. Yeah, there we go. There we go. I like it.
Kevin Kovach: So with a detailed naming system, it's filled with qualitative information that we can't like just simply pull into a column and export. And that allows us to build these real-time reports that clients can have access to and get a sense of, you know, some of the more basic things like, you know, what ad formats are working, you know, here I opened up a few tabs because uh so no one had to wait for motion to load where things are going in the funnel. Um, I did I can't um, share the exact creative and give away the client. Um, this is a pseudonym, but pretty much across all of our we will have these four categories. reports that the client can always access and go into that will be always updating, looking at prospecting audiences, retargeting audiences, but on the creative side, by um, value prop. So part of our naming system is going to have that value prop tag. And we use the uh Facebook naming template to make it as easy and fast as possible even though it, you know, looks very, very complicated and long and, you know, sometimes with new buyers a little overwhelming the first week. But once they get used to using the naming template and this gets pretty fast and doesn't really add that much more time, but it saves an insane amount of time when reporting. So we essentially don't have to spend that much time making creative reports anymore. As long as we're on top of our all of these are automatically going to update. Um, and we just kind of have to go in and and QA, make sure we didn't there isn't like a missed character somewhere or something off. But either way that, you know, if you can save like 95% of your reporting time, you know, that can open up a whole day. Like I've talked to plenty of buyers who lose days to reporting. Um, so this just opens up
Evan Lee: And the biggest thing there is just like losing time, losing time, it's not like you just get it back and do nothing. It's like instead of just manual like grunt work and putting together reports, you can now shift to decision making at the end of the day. It's like, okay, now I can spend time interpreting the data to know what to do next instead of just putting the data together ultimately.
Kevin Kovach: Yeah, you know, you have a chance to sit and kind of think critically. Um, and really just it's a it gives the brands and clients a lot to think about as well. Like these are these are not test campaigns right here. Like every single ad is tagged with a value prop tag. So these are like core campaigns, gladiator style. I'm not controlling the spend anymore. What value props are driving the performance? Um, and then also just kind of when we do have our controlled theme down here. So in the sales and promotions, we can break things out, you know, for Pierce and Pierce fine detergents, bonus points if you know what that's from. Um, and break out the creative for each specific scent. Image style breakdown in addition to value props, like we segment all of our creative by style. So this is for images. So it can be model UGC, product stylized, that's typically how, you know, we define any sort of product shot that's been set up in a special way. product e-com shot, and then we separately will tag those uh general product e-com shots because sometimes they do surprisingly well. Um, I said, you know, just because something's ugly, you know, you should test it. Um, and then just general graphics. So this is something that David also has access to and will be creative strategy. So in a way, this is where the where the media buyers come in to like creative. But it's not necessarily like Facebook, it could also just be us being on top of naming systems. So the creative team flexibility to go in here whenever they want and come away with learnings.
Evan Lee: Love it. Uh, one thing I I have a follow up on is just like all of these reports stem from the naming. Is there any like examples or something you can share that showcases to our audience specific things that you're tracking if it is consistent?
Kevin Kovach: Yeah. So the consistent part is going to be here. So we have a style tag that's going to be going through all of the image styles. And then we also have a video theme tag going through all the different video styles. David actually is in the process of combining those two tags into one. So we'll be having an update here soon. And with that update, I'm going to add a few more tags. So, um, one of those is going to be the creative pillar, which is how the entire team is going to be thinking about and approaching their creative. as a reminder, we're going to be education, emotion and authority. So soon we would have that ongoing breakdown here as well. Value prop. And then honestly, we'll have some custom tags for each client. So one of the things we like to do on onboarding is like, what questions can we answer for you? And then try and come up with a strategy to use Facebook as that focus group to actually give them the data that can help them make big expensive decisions. So sometimes that will require like a custom tag. Um, so on brands that do like a lot of white listing or influencers, I'll add like a page tag. So I can you know, label what actual page is this coming from. But that's not something I need on most accounts if, you know, 90% of the ads are just coming from the uh the brand page.
Evan Lee: Love it. And a quick plug, everyone: you can see Kevin here using Motion. Feel free to check us out and book some time so we can talk about how to make this come to life for you. Cool.
Kevin Kovach: Oh yeah, it's crazy, because you just get so much depth with these reports, and they don't take that much time as long as you're on top of your naming.
Evan Lee: Love it. Okay. And yeah, with that, you know, we've had buyers take on more accounts, which positively impacts their income.
Kevin Kovach: Yeah.
Evan Lee: Hand in hand: business goals and everyone's personal goals being met. What I want to do here with our last 10 minutes is get through the questions that have started to pile up. So let's do our best to get through them. I'll kick off with the one that has the most votes, 15, and hopefully it's a nice and easy lob. It's this one here.
A question from Iain Harris appears on screen: "Can you define 'share IDs'?"
Evan Lee: Can you define share IDs? I think you mentioned this pretty early on when we talked.
Kevin Kovach: Yeah, I think it's super, super important, and honestly, in a lot of cases it's even more important than structure. When you launch an ad, you're creating two IDs in the account. There's the ad ID, which is created every time you create an ad, but there's also an unpublished post made that's actually used to show the ad. That's the post ID, or the content ID; those can be used interchangeably. And you can share that post ID. The post ID is where all of the social proof is stored, not the ad ID. That way you can have multiple ads that are all actually using the same post ID, so if you're running them in ten different audiences, all of the engagement is going to stack. That's how you see those posts on Facebook with a massive amount of social proof. I guarantee you, except in the most extreme situations, those are shared post IDs.
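The mechanics Kevin describes can be modeled in a few lines. This is a conceptual sketch in plain Python, not the Marketing API, just to show why engagement stacks on a shared post ID:

```python
# Conceptual model: the ad ID and the post ID are separate objects, and
# engagement accrues to the post, not the ad.
posts = {}  # post_id -> total engagement (likes, comments, shares)
ads = {}    # ad_id  -> the post it renders

def launch_ad(ad_id, post_id):
    ads[ad_id] = post_id
    posts.setdefault(post_id, 0)

def record_engagement(ad_id, count):
    posts[ads[ad_id]] += count  # lands on the shared post

# Ten ads in ten audiences, all sharing one post ID.
for i in range(10):
    launch_ad(f"ad_{i}", "post_123")
    record_engagement(f"ad_{i}", 50)

print(posts["post_123"])  # 500: every ad's engagement stacks on the one post
```

In the actual Marketing API, the equivalent move is pointing a new ad's creative at an existing post rather than creating a fresh one (the creative's `object_story_id` field).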
Evan Lee: Awesome. Everybody, I hope that one helps. If it doesn't, just throw it into the chat really quick so we can clarify. But I wanted to jump to the second most voted question and it comes from Justin here.
A question from Justin Regis appears on screen: "Quick ?? regarding 'statistical significance' - our agency agrees on a $1k spend minimum for an 1st run adset (usually with 4-6 ads). Does that match with what Kevin uses as a benchmark?"
Evan Lee: So, quick question regarding stat sig. Our agency agrees on a $1k spend minimum for a first ad set run, usually with four to six ads. Does that match a benchmark you might use to determine significance? So does $1k of spend get to stat sig for you?
Kevin Kovach: So I actually do one ad per ad set. And yeah, I know, but that's how I test; I'm not in any way saying your way is wrong, because the most important thing is that you find a way to test, and how you do it is going to vary by account, by situation, by budget. But I want to make sure that every single variant gets the same amount of spend, or else spend technically becomes a variable. If you're calling it a single-variable test and you're trying to test creative, but spend is also a variable, it's no longer reliable. And honestly, for the traffic tests, $1k for a week is probably going to do it. Sometimes we can even get the same results at like $500, but I let it run for the full week just to be sure. On the purchase side, man, that varies so, so much. I have one account right now with about an $800 AOV; to test with the purchase objective there and actually get through the learning phase would be a considerable investment.
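As a rough companion to the "$1k for a week" heuristic, a standard two-proportion z-test shows whether two equally funded variants actually differ. This is a generic statistical sketch with made-up numbers, not Kevin's exact procedure, and real thresholds will vary by account, AOV, and objective:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test comparing conversion rates of two variants that
    received the same spend (and so roughly the same traffic)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the normal CDF, via the error function.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# 1,000 clicks per variant (equal spend), 40 vs 25 conversions.
z, p = two_proportion_z(40, 1000, 25, 1000)
print(round(z, 2), round(p, 4))
```

Here the gap looks promising but lands just short of the conventional p < 0.05 bar, which is exactly the situation where letting the test run the full week, as Kevin does, pays off.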
Evan Lee: Cool. So we have some questions now about testing. I think this goes hand in hand with what you're talking about here.
A question from Andres Amado appears on screen: "What methodology do you prefer to test new creative frameworks? A/B tests OR run all the creatives in the same ad group?"
Evan Lee: Andres asks, and hopefully I've pronounced that correctly: what methodology do you prefer to test new creative frameworks, A/B tests, or running all the creatives in the same ad group? You do one ad per ad set; is it campaign budget optimization you're typically running there?
Kevin Kovach: Yeah. In the ideal situation, both. Sometimes performance may not allow both, but that's why I like having the traffic test: from doing this for a while, I know it's reliable and it's not going to be a huge opportunity cost regardless of the overall budget of the account, unless that budget is only a couple hundred dollars. And then on the side, I do like to identify the algorithm winner with that gladiator-style free-for-all. That can be a DCO test, depending on the amount of creative. More recently, if I have like 30 variants, I'll just launch them in an Advantage+ campaign. But I still want that traffic campaign on the side, so I have the flexibility to turn off the purchase campaign if the opportunity cost gets insane and it's not great for performance.
Evan Lee: People are definitely curious about the traffic campaign, because Justin comes in with another question:
A question from Justin Regis appears on screen: "...so is the idea to use a Traffic test with new creative and THEN move those 'winners' over to a Conversion campaign?"
Kevin Kovach: Yeah, this is where David would have been really useful; I had to convert him, essentially, once he started. Before he started, I was comfortable running the traffic tests just by themselves for about a year. I knew he would have some questions, and he was a little skeptical. So I started running them side by side again, on accounts where it didn't negatively impact performance, and showed him three consecutive reports: look, this traffic campaign and this purchase campaign identified the same three winners. That's how I got the buy-in. The key is you have to introduce friction so you can filter out the bouncers. That's why traffic campaigns aren't helpful for filling the funnel, because a lot of that traffic bounces. But if you introduce friction and use, say, content views plus some deeper event as your numerator in the conversion rate formula, you can actually get results that map very directly to what happens in the conversion campaign. Ninety percent of the time it'll identify the same top three ads. The order might change a little bit; like I said earlier, I'm generating a playoff field here, not just a single winner, so that same three is still going to be valuable to me. And the benefit of the traffic test is we get those results a lot faster. Sometimes a week, two tops, three weeks is the absolute max if we want to do that omnichannel event aspect of it.
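One way to read "content views plus a deeper event as your numerator" is a score like the following. The event names and numbers are hypothetical, but the ranking behavior is the point: bouncers add clicks without adding to the numerator, so they drag an ad's score down.

```python
def traffic_score(clicks, content_views, deeper_events):
    """Friction-adjusted proxy rate: content views plus a deeper event
    (add-to-cart here) over clicks, so pure bouncers are filtered out."""
    return (content_views + deeper_events) / clicks

# Three test ads with equal spend (made-up numbers).
ads = {
    "ad_a": {"clicks": 1000, "content_views": 700, "atc": 90},
    "ad_b": {"clicks": 1000, "content_views": 400, "atc": 30},
    "ad_c": {"clicks": 1000, "content_views": 650, "atc": 110},
}
ranking = sorted(
    ads,
    key=lambda a: traffic_score(ads[a]["clicks"], ads[a]["content_views"], ads[a]["atc"]),
    reverse=True,
)
print(ranking)  # ['ad_a', 'ad_c', 'ad_b']
```

Per Kevin's "playoff field" framing, the exact order of the top ads matters less than which ads make the cut, so a score like this only needs to surface the same top group the purchase campaign would.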
Evan Lee: That's so interesting, because something I've selfishly been curious about is Facebook being the place where you can access a large number of people, like you described it, more or less as a focus group. Facebook might tell you one thing; I'm probably not giving a great example that'll lead into a question, but let's say you have a couple of personas, one that's Daisy Dukes and one that's Sleepy Sally, whatever it might be. We thought Daisy Dukes would be the best, but we see Sleepy Sally doing really well on Facebook, for example. Should we restructure everything around that? Or is Daisy Dukes somebody we should go after on TikTok, or more over email? How do you navigate those types of insights?
Kevin Kovach: So, similar to what we did with that value prop breakdown earlier, I'm going to verify that there's no particular gender or demo throwing off the results when you look at them at the aggregated level. In that earlier example, if we were just looking at gifting at the aggregated level, it would have been a clear and decisive winner. But when you actually separate it out, you see, okay, it's only a clear and decisive winner with 55-plus, and that's where the algorithm wanted to go. The true winner is the one that actually activates as many of those demos as possible, and that's the one I'm most interested in. So as long as there are no shenanigans, then yeah, I'll follow up and say, okay, this might be more appropriate for a different channel. But I try to be as agnostic as possible with Facebook. In the end, it doesn't matter what our opinion is; it's the algorithm's that counts. So I'll test something that no one likes. I don't care, just in case, because you never know. Ugly ads do really well, and sometimes you can just keep getting uglier and it still does really well. And it adds to your diversity, too. If you have ugly ads alongside your videos, I understand from the branding perspective that concerns people, but from the algorithmic perspective, it increases your reach so much, because Facebook knows the type of content people historically interact with and is more inclined to serve that to them. Which is why having so much variety is often a key element of scaling.
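The aggregated-versus-segmented check is easy to sketch. The numbers below are invented to mirror the gifting example: one value prop wins the aggregate only because a single demo carries it, while the other actually wins more demos.

```python
# (conversions, impressions) by value prop and age bracket; made-up data.
results = {
    "gifting": {"18-34": (10, 5000), "35-54": (12, 5000), "55+": (120, 5000)},
    "quality": {"18-34": (35, 5000), "35-54": (38, 5000), "55+": (40, 5000)},
}

def aggregate_rate(demos):
    conv = sum(c for c, _ in demos.values())
    imps = sum(i for _, i in demos.values())
    return conv / imps

def demo_wins(a, b):
    """Count the demos in which value prop `a` beats value prop `b`."""
    return sum(
        1
        for demo in results[a]
        if results[a][demo][0] / results[a][demo][1]
        > results[b][demo][0] / results[b][demo][1]
    )

# Gifting wins the aggregate, but only because 55+ carries it;
# quality wins two of the three demos.
print(aggregate_rate(results["gifting"]) > aggregate_rate(results["quality"]))  # True
print(demo_wins("quality", "gifting"))  # 2
```

This is the classic aggregation trap (Simpson's paradox): the "winner" at the account level can be a minority-demo effect, which is why Kevin checks breadth of activation before crowning anything.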
Evan Lee: Like you said, it's easier said than done; you have to test into it. And if you follow the Pareto principle, you can assume that 80% of creative is probably going to fail. But at least you can learn from it.
Kevin Kovach: Yeah, yeah, yeah.
Evan Lee: And I think you have a beginner's mind about it. It's just, I don't know anything, who knows if this works, but throw it in and we'll see what happens at the end of the day, right?
Kevin Kovach: Yeah. As long as we're testing responsibly, making sure it doesn't disrupt momentum in core campaigns and it's not getting a crazy amount of budget so that the opportunity cost of the test is super high, then yeah, go for it. This is also, I think, a beneficial way to lead the team, because I really don't shoot down ideas. It's a way of keeping myself in check, too, because I think it's easy for buyers, no matter what seniority level, to fall into patterns or become stubborn about something. Even I fall victim to that. So the team has the freedom to test what they want, when they want, with approval from the client, obviously, as long as we test responsibly. They know I'm not going to come down on them just because I disagree with something. And those moments actually do check me. I've seen a manual carousel do well recently, and I had totally written those off for like a year. So yeah, those moments are cool.
Evan Lee: Love it. Everybody, I have one last question for Kevin on my end that I need to ask, but just as a quick reminder, please keep upvoting and getting your questions into the Q&A tab; we're going to be jumping there right after this one. And one last thing: I saw that Talia mentioned in the chat, are we supposed to be seeing that same slide? Yes, we did have some stuff to share, but again, we're running into some technical difficulties, so we're making the best of it. But Kev, the last question I have for you, I would be ashamed if I did not ask, because whenever anyone talks about naming conventions, I say I have a guy, and that's who I point to. For context, everybody, Kevin is a whiz when it comes to naming conventions. So what I'm curious about is this: we've worked on all of this great creative that's now produced, and like you mentioned, you have a specific way you test it into the account. How do you set up naming conventions, and how does that then feed into your analysis once those ads are live?
Kevin Kovach: Yeah, for sure. So before we met and I learned about Motion, you know, my background is in statistics and data engineering. So I was concepting out a naming system that would be very spreadsheet- and formula-friendly, because I want to be able to get the data into a database as fast and as reliably as possible and minimize the QA. If there are any data scientists in the chat, they'll know that about 90% of the time you spend doing data science is actually getting the data into a usable format. Having an expansive and reliable naming system saves so much time, and it also lets us provide insane depth in our reports and have them be real time in Motion. So you said we didn't have anything to share, but I can share my screen. Hey, let's do it. I can walk through what David prepared. Give me a minute here.
Evan Lee: And you might have to zoom in for folks just so we can see it a little bit.
Kevin Kovach: Oh yeah.
Evan Lee: A little more. Give me a couple more clicks if you can, sorry. Yeah, yeah, 175. There we go. There we go. I like it.
Kevin Kovach: So a detailed naming system is filled with qualitative information that you can't otherwise just pull into a column and export. That lets us build these real-time reports that clients can access to get a sense of some of the more basic things, like what ad formats are working and where things are going in the funnel. I opened up a few tabs here so no one had to wait for Motion to load. I can't share the exact creative and give away the client, so this is a pseudonym, but across pretty much all of our accounts we'll have these four categories: reports the client can always access and go into, always updating, looking at prospecting audiences and retargeting audiences, but on the creative side, by value prop. Part of our naming system carries that value prop tag. And we use the Facebook naming template to make it as easy and fast as possible, even though it looks very complicated and long, and it's sometimes a little overwhelming for new buyers in the first week. But once they get used to the naming template, it gets pretty fast and doesn't really add that much more time, and it saves an insane amount of time when reporting. We essentially don't have to spend much time making creative reports anymore; as long as we're on top of our naming, all of these automatically update. We just have to go in and QA, make sure there isn't a missed character somewhere or something off. Either way, if you can save like 95% of your reporting time, that can open up a whole day. I've talked to plenty of buyers who lose days to reporting. So this just opens up so much time.
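A spreadsheet-friendly naming template boils down to a fixed field order and a delimiter. The field names and delimiter below are hypothetical, since Kevin's full template isn't shown, but this is the parse-into-columns idea, including the "missed character" QA he mentions:

```python
# Hypothetical field order for a delimited ad name.
FIELDS = ["date", "format", "style", "pillar", "valueprop", "variant"]

def parse_ad_name(name):
    """Split an ad name into database-ready columns, failing loudly in QA
    if a field is missing or extra (e.g. a missed delimiter character)."""
    parts = name.split("_")
    if len(parts) != len(FIELDS):
        raise ValueError(f"expected {len(FIELDS)} fields, got {len(parts)}: {name!r}")
    return dict(zip(FIELDS, parts))

row = parse_ad_name("2024-01-15_image_ugc_emotion_gifting_v3")
print(row["style"], row["valueprop"])  # ugc gifting
```

Because every field lands in a predictable column, tools like Motion (or a plain spreadsheet formula) can segment performance by style, pillar, or value prop without any manual report-building.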
Evan Lee: And the biggest thing there is the time. It's not like you get it back and do nothing; instead of manual grunt work putting together reports, you can shift to decision making. At the end of the day, it's, okay, now I can spend time interpreting the data to know what to do next instead of just putting the data together.
Kevin Kovach: Yeah, you know, you have a chance to sit and kind of think critically. Um, and really just it's a it gives the brands and clients a lot to think about as well. Like these are these are not test campaigns right here. Like every single ad is tagged with a value prop tag. So these are like core campaigns, gladiator style. I'm not controlling the spend anymore. What value props are driving the performance? Um, and then also just kind of when we do have our controlled theme down here. So in the sales and promotions, we can break things out, you know, for Pierce and Pierce fine detergents, bonus points if you know what that's from. Um, and break out the creative for each specific scent. Image style breakdown in addition to value props, like we segment all of our creative by style. So this is for images. So it can be model UGC, product stylized, that's typically how, you know, we define any sort of product shot that's been set up in a special way. product e-com shot, and then we separately will tag those uh general product e-com shots because sometimes they do surprisingly well. Um, I said, you know, just because something's ugly, you know, you should test it. Um, and then just general graphics. So this is something that David also has access to and will be creative strategy. So in a way, this is where the where the media buyers come in to like creative. But it's not necessarily like Facebook, it could also just be us being on top of naming systems. So the creative team flexibility to go in here whenever they want and come away with learnings.
Evan Lee: Love it. Uh, one thing I I have a follow up on is just like all of these reports stem from the naming. Is there any like examples or something you can share that showcases to our audience specific things that you're tracking if it is consistent?
Kevin Kovach: Yeah. So the consistent part is going to be here. So we have a style tag that's going to be going through all of the image styles. And then we also have a video theme tag going through all the different video styles. David actually is in the process of combining those two tags into one. So we'll be having an update here soon. And with that update, I'm going to add a few more tags. So, um, one of those is going to be the creative pillar, which is how the entire team is going to be thinking about and approaching their creative. as a reminder, we're going to be education, emotion and authority. So soon we would have that ongoing breakdown here as well. Value prop. And then honestly, we'll have some custom tags for each client. So one of the things we like to do on onboarding is like, what questions can we answer for you? And then try and come up with a strategy to use Facebook as that focus group to actually give them the data that can help them make big expensive decisions. So sometimes that will require like a custom tag. Um, so on brands that do like a lot of white listing or influencers, I'll add like a page tag. So I can you know, label what actual page is this coming from. But that's not something I need on most accounts if, you know, 90% of the ads are just coming from the uh the brand page.
Evan Lee: Love it. And if anyone plug, you can see Kevin here using motion. Feel free to check us out. Uh, book some time so you can talk about how to make this come to life for you all. Cool.
Kevin Kovach: Oh yeah, it it's crazy. Um, because I mean, like you just get so much depth with these reports and they don't take that much time as long as you're on top of your has that uh that depth to it.
Evan Lee: Love it. Okay. And yeah, with that, you know, we've had buyers take on more accounts, which uh positively impacts their income.
Kevin Kovach: Yeah.
Evan Lee: Hand in hand, business and goal, business, everyone's goals being met. to do here with our last 10 minutes is just a bunch of questions that have started to pile up. So let's do our best to try and get through them. Um, but I think like where I'll kick off is the one that has the most votes, 15, and hopefully it's a nice and easy lob. is one here.
A question from Iain Harris appears on screen: "Can you define 'share IDs'?"
Evan Lee: Can you define share IDs? And I think you had mentioned this like pretty immediately that we had talked about.
Kevin Kovach: Yeah, so I I think it's super, super important and I think it honestly in a lot of cases it is even more important than structure. Um, so when we say when you launch an ad, um, you're going to be creating two IDs in the account. There there's going to be the ad which is going to be created every time you create an ad, but also there's going to be an unpublished post made that's actually used to show the ad and that's where the or the or the post ID, those can be used interchangeably. And you can actually share that content ID and the content ID is where all of the social proof is stored, not the ad ID. So that way you can have, you know, multiple ads that are all actually using the same ID. So if you're running them in like 10 different audiences, all of the engagements going to stack. So that's how you see those posts in Facebook massive amount of social proof. Like I guarantee you, except in the most extreme situations, those are shared post IDs.
Evan Lee: Awesome. Everybody, I hope that one helps. If it doesn't, just throw it into the chat really quick so we can clarify. But I wanted to jump to the second most voted question and it comes from Justin here.
A question from Justin Regis appears on screen: "Quick ?? regarding 'statistical significance' - our agency agrees on a $1k spend minimum for an 1st run adset (usually with 4-6 ads). Does that match with what Kevin uses as a benchmark?"
Evan Lee: So quick question regarding stat sig. Our agency agrees that 1k spend minimum for the first ad set run with typically four to six ads. Does that match a benchmark that you might use to determine relevance? So 1k spend, does that stat sig for you?
Kevin Kovach: So I actually do one ad per ad set. Um, yeah, I know it looks like I actually do I mean I I I do test this way. I'm not in any way saying it's wrong because you know, the most important thing is you find a way to test and you know, how you do it is going to vary by account, by situation, by budget. Um, but I want to make sure that every single variant gets the same amount of spend or else spend is going to be a technically a variable and if you're calling it a single variable test and you're just trying to test creative, but then spend is also a variable, it's no longer going to be reliable. Um, and honestly, for the traffic tests, you know, 1k is actually, you know, 1k for a week, you know, that's probably going to do it. Um, sometimes we can even get the get it the same results at like $500, but just let it run for the full week just to be sure. On the purchase side, man, that varies so, so, so much. Um, I have one account right now that's like a $800 AOV. to test that to to test with like the purchase objective there and actually want to get through the learning phase, that would be a considerable investment.
Evan Lee: Cool. So we have some questions now about the testing. I think this goes hand in hand with what you're uh, with what you're talking about here.
A question from Andres Amado appears on screen: "What methodology do you prefer to test new creative frameworks? A/B tests OR run all the creatives in the same ad group?"
Evan Lee: Andreas asks, hopefully I've pronounced that correctly. What methodology do you prefer to test new creative frameworks? or run all the creatives in the same ad group? one ad per ad set, maybe it's like budget optimization. Are you typically running there?
Kevin Kovach: Yeah. In the ideal situation, both. Um, you know, sometimes and performance may not allow both, but that's why I like having the traffic test because I know by doing this for a while, it's reliable and it's not going to be like an a huge opportunity cost, you know, regardless of like the overall budget of the account unless it's like a couple hundred dollars. Um, and then on the side, I do like to identify that algorithm winner, that gladiator style free for all. That can be a dynamic DCO test, um, depending on the amount of creative. More recently, I've been just, say if I have like 30 variants, just launch them in an advantage plus campaign. Um, but I still want that traffic campaign on the side, so I have the flexibility to turn off that purchase campaign and find that the opportunity cost is insane and not really um, great for performance.
Evan Lee: So people are definitely curious about the traffic campaign because Justin comes with another question of just like, so,
A question from Justin Regis appears on screen: "...so is the idea to use a Traffic test with new creative and THEN move those 'winners' over to a Conversion campaign?"
Kevin Kovach: Yeah, this is where David would have been really, you know, I had to convert him essentially once he started. Um, so before he started, I was comfortable running the traffic tests, um, just by themselves for like a year. Um, I knew he would have some questions, um, and he was a little skeptical. So I started running them side by side again on accounts that, you know, was didn't negatively impact performance and showed him, you know, three consecutive reports, look, traffic campaign and this purchase campaign identified the same three winners. Um, so that's how I got that buy in. The key is you have to introduce friction. So you can kind of filter out the bouncing. And that's why traffic campaigns aren't helpful for um, filling the funnel because a lot of them do bounce, but if you introduce friction and use say content views plus any sort of event, um, as your numerator in the uh conversion rate formula, you can actually get very direct results as to what happens in the conversion campaign. Um, 90% of the time, it'll identify like the top three ads. That order might change a little bit. Like I said earlier, I'm generating a playoff field here, not just like a single winner. So that same three is still going to be valuable to me, but the benefits of the traffic test is we get those results a lot faster. Sometimes a week, maybe a two two tops, three is the absolute max if we want to do that omni channel um, event um, aspect of that.
Evan Lee: That's so interesting because something I've also like selfishly been curious about is is Facebook being the place where you're able to to to access a large amount of people like you described it, it's just being like a more or less. Um, Facebook might tell you one thing, like I'm probably not giving a great example that'll lead into a question, but let's say you have a couple personas, one that's like Daisy Dukes and then sleepy Sally, whatever it might be, right? Daisy Dukes, we thought would be the best, but we see sleepy Sally doing really well on for example. But then is Daisy Dukes like, should we restructure everything around that or is it somebody we should go after on a Tik Tok? Or is it somebody we should go after on an email more? How do you, how do you uh, like navigate those type of insights?
Kevin Kovach: So similar to uh what we did with like that value prop breakdown earlier, I'm uh going to verify that there is no particular gender or demo that's throwing off these results when you're looking at them at the aggregated level. So in that earlier example, if we're just looking at gifting at the aggregated level, it would have been a clear and decisive winner. But when you actually separate it out, you see, okay, it's only actually a clear and decisive winner with 55 plus and that's where the algorithm wanted to go. Um, you know, the true winner is going to be the one that actually activates as many of those demos as possible. And that's the one I'm most interested in. So if as long as you know, there are no shenanigans, then yeah, I'll follow up and say, okay, this might be something that's more appropriate for or a different channel. Um, but I I try and be as agnostic as possible with Facebook. Like in the end, it doesn't matter what our opinion is, it's the algorithms. So I'll test something that no one likes. I don't care. Um, just in case, you never know. Ugly ads do really well and sometimes you can just keep getting uglier and it still does really well. Um, and then just adds to that diversity too because if you have like ugly ads and then with your videos, like I understand from the branding perspective that concerns people, but on the algorithmic perspective, that increases your reach so much. Um, because Facebook knows the type of content that people historically interact with and is more inclined to serve that to them. Which is why having so many um, is often a key element to scaling.
Evan Lee: I said, you know, it's easier said than done. You have to test into it. And if you follow the Pareto Pareto principle, you can assume that 80% of creative is probably going to fail. Um, but at least you can learn from it.
Kevin Kovach: Yeah, yeah, yeah.
Evan Lee: And I think you like have a beginner's mind to it because it's just like, I don't know anything. I don't think like who knows if this works, but throw it in and we'll see what happens at the end of the day, right?
Kevin Kovach: Yeah. Like if we're testing responsibly, making sure it doesn't disrupt momentum in core campaigns or it's not getting like a crazy amount of um, budget so the opportunity cost of this test is super high. Yeah, go for it. Um, this also I think is a way to beneficially lead the team because I really don't shoot down ideas. Um, you know, it's a way of also keeping myself in check because I think it's easy for buyers no matter what seniority level to fall into patterns or to like become stubborn about something. Um, you know, even I I fall victim to that. So just having the freedom to test what they want, when they want with approval from the client, obviously, as long as we test responsibly, you know, they know I'm not going to like come down on them because I just disagree with it. Um, but those moments actually, you know, they do check me. Um, I've seen a manual carousel do well recently. I had totally absolutely written those off for like a year. Um, so yeah, those moments are cool.
Evan Lee: Love it. Uh, everybody, I have one last question for Kevin on my end. Um, I need to ask it, but just as a quick reminder, please keep up voting and getting your questions into the Q&A tab. Going to be jumping there right after this one. And one last thing, I saw that Talia had mentioned in the chat, like are we supposed to be seeing that same slide? Yes, uh, we did have some stuff to share, but again, running into some technical difficulties. So we're making best as we can just on this one. But Kev, last question that I have for you, I would be ashamed if I did not ask because whenever anyone talks about naming conventions, I say I have a guy and that's where I start to point to. So for context, everybody, uh, Kevin is like a wiz when it comes to naming conventions. So something I'm curious about now is like we've worked on all of this great creative that's now produced and like you'd mentioned, you have a specific way you test into the account. So I'm curious about is like how do you set up naming conventions and how does that then correlate into your analysis after those are live?
Kevin Kovach: Yeah, for sure. So before we met, um, and figured out and I learned about motion, um, you know, my background is in like statistics and data engineering. So I was thinking of and concepting out a naming system that would be very spreadsheet and formula friendly because I want to be able to into a database as fast as possible and reliably as possible, minimize the QA. If there are any like data scientists in the chat, like they'll know about 90% of the time you spend doing data science is actually getting the into a usable format. And having a, you know, expansive and reliable naming system just saves so much time. And then also allows us to provide like insane depth to our reports and have them be like real time in motion. Um, so you say we didn't have anything to share, but I can share my screen. Hey, let's do it. I can walk through what uh David prepared. We'll give me a minute here.
Evan Lee: And you might have to zoom in for folks just so we can see it a little bit.
Kevin Kovach: Oh yeah.
Evan Lee: A little more. Give me a couple more if you can. I'm sorry. Yeah, yeah, yeah. 175. Yeah, there we go. There we go. I like it.
Kevin Kovach: So a detailed naming system is filled with qualitative information that you can't just pull into a column and export. And that allows us to build these real-time reports that clients can have access to, to get a sense of some of the more basic things, like what ad formats are working and where things are going in the funnel. I opened up a few tabs here so no one had to wait for Motion to load. I can't share the exact creative and give away the client, so this is a pseudonym, but pretty much across all of our accounts we will have these four categories of reports that the client can always access and that are always updating, looking at prospecting audiences and retargeting audiences, and then on the creative side, by value prop. So part of our naming system is going to have that value prop tag. And we use the Facebook naming template to make it as easy and fast as possible, even though it looks very complicated and long, and is sometimes a little overwhelming for new buyers the first week. But once they get used to using the naming template, it gets pretty fast and doesn't really add that much more time, and it saves an insane amount of time when reporting. We essentially don't have to spend much time making creative reports anymore; as long as we're on top of our naming, all of these are automatically going to update. We just have to go in and QA, make sure there isn't a missed character somewhere or something off. Either way, if you can save like 95% of your reporting time, that can open up a whole day. I've talked to plenty of buyers who lose days to reporting. So this just opens up so much time.
Evan Lee: And the biggest thing there is, when you stop losing that time, it's not like you just get it back and do nothing. Instead of manual grunt work putting reports together, you can shift to decision making. It's like, okay, now I can spend time interpreting the data to know what to do next instead of just assembling the data.
Kevin Kovach: Yeah, you have a chance to sit and think critically. And it gives the brands and clients a lot to think about as well. These are not test campaigns right here; every single ad is tagged with a value prop tag, so these are core campaigns, gladiator style: I'm not controlling the spend anymore, so what value props are driving the performance? And then we do have our controlled theme down here. So in sales and promotions we can break things out, you know, for Pierce and Pierce fine detergents (bonus points if you know what that's from) and break out the creative for each specific scent. Then there's the image style breakdown: in addition to value props, we segment all of our creative by style. So for images it can be model UGC; product stylized, which is typically how we define any product shot that's been set up in a special way; and the general product e-com shots, which we tag separately because sometimes they do surprisingly well. Like I said, just because something's ugly doesn't mean you shouldn't test it. And then just general graphics. So this is something that David also has access to and will use for creative strategy. In a way, this is where the media buyers come into creative, and it's not necessarily a Facebook thing; it could also just be us being on top of naming systems. The creative team has the flexibility to go in here whenever they want and come away with learnings.
Evan Lee: Love it. One thing I have a follow-up on: all of these reports stem from the naming. Are there any examples you can share that showcase to our audience the specific things you're tracking, if it is consistent?
Kevin Kovach: Yeah. So the consistent part is going to be here. We have a style tag that runs through all of the image styles, and we also have a video theme tag running through all the different video styles. David is actually in the process of combining those two tags into one, so we'll have an update here soon. And with that update, I'm going to add a few more tags. One of those is going to be the creative pillar, which is how the entire team is going to be thinking about and approaching their creative; as a reminder, those are education, emotion, and authority. So soon we'll have that ongoing breakdown here as well, plus value prop. And then honestly, we'll have some custom tags for each client. One of the things we like to do at onboarding is ask, what questions can we answer for you? Then we come up with a strategy to use Facebook as that focus group and actually give them the data that can help them make big, expensive decisions. Sometimes that will require a custom tag. So on brands that do a lot of whitelisting or influencers, I'll add a page tag, so I can label what actual page this is coming from. But that's not something I need on most accounts if 90% of the ads are just coming from the brand page.
Evan Lee: Love it. And a quick plug, everyone: you can see Kevin here using Motion. Feel free to check us out and book some time so we can talk about how to make this come to life for you all. Cool.
Kevin Kovach: Oh yeah, it's crazy, because you just get so much depth with these reports, and they don't take that much time as long as you're on top of your naming and it has that depth to it.
Evan Lee: Love it. Okay. And yeah, with that, you know, we've had buyers take on more accounts, which uh positively impacts their income.
Kevin Kovach: Yeah.
Evan Lee: Hand in hand, business goals and everyone's goals being met. What I want to do here with our last 10 minutes is get through the bunch of questions that have started to pile up. So let's do our best to get through them. I'll kick off with the one that has the most votes, 15, and hopefully it's a nice and easy lob. Here is the first one.
A question from Iain Harris appears on screen: "Can you define 'share IDs'?"
Evan Lee: Can you define share IDs? And I think you had mentioned this pretty early on when we talked.
Kevin Kovach: Yeah, so I think it's super, super important, and honestly, in a lot of cases it's even more important than structure. So when you launch an ad, you're creating two IDs in the account. There's the ad ID, which is created every time you create an ad, but there's also an unpublished post made that's actually used to show the ad, and that's the content ID, or the post ID; those can be used interchangeably. You can actually share that content ID, and the content ID is where all of the social proof is stored, not the ad ID. That way you can have multiple ads that are all actually using the same post ID, so if you're running them in like 10 different audiences, all of the engagement is going to stack. That's how you see those posts on Facebook with a massive amount of social proof. I guarantee you, except in the most extreme situations, those are shared post IDs.
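For anyone who wants to do this programmatically, the Facebook Marketing API's ad-creative endpoint accepts an `object_story_id` in the form `<PAGE_ID>_<POST_ID>` instead of fresh creative fields, so every ad built from that creative accrues engagement on the same underlying post. A minimal sketch of the payload (the helper function and IDs are illustrative, not from the talk):

```python
def shared_post_creative(page_id: str, post_id: str) -> dict:
    """Ad-creative payload that reuses an existing (possibly unpublished) post.

    POSTing this payload to /act_<AD_ACCOUNT_ID>/adcreatives creates a
    creative whose likes, comments, and shares all stack on the one
    underlying post, no matter how many ads or audiences run it.
    """
    return {"object_story_id": f"{page_id}_{post_id}"}

# Hypothetical page and post IDs
payload = shared_post_creative("1234567890", "9876543210")
```

Any number of ads can then reference this one creative, which is exactly the engagement-stacking effect Kevin describes.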
Evan Lee: Awesome. Everybody, I hope that one helps. If it doesn't, just throw it into the chat really quick so we can clarify. But I wanted to jump to the second most voted question and it comes from Justin here.
A question from Justin Regis appears on screen: "Quick ?? regarding 'statistical significance' - our agency agrees on a $1k spend minimum for an 1st run adset (usually with 4-6 ads). Does that match with what Kevin uses as a benchmark?"
Evan Lee: So quick question regarding stat sig. Our agency agrees that 1k spend minimum for the first ad set run with typically four to six ads. Does that match a benchmark that you might use to determine relevance? So 1k spend, does that stat sig for you?
Kevin Kovach: So I actually do one ad per ad set. And I'm not in any way saying that way is wrong, because the most important thing is that you find a way to test, and how you do it is going to vary by account, by situation, by budget. But I want to make sure that every single variant gets the same amount of spend, or else spend is technically going to be a variable, and if you're calling it a single-variable test and you're just trying to test creative, but then spend is also a variable, it's no longer going to be reliable. And honestly, for the traffic tests, $1k for a week is probably going to do it. Sometimes we can even get the same results at like $500, but we let it run for the full week just to be sure. On the purchase side, man, that varies so, so much. I have one account right now with like an $800 AOV; to test with the purchase objective there and actually get through the learning phase, that would be a considerable investment.
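For anyone who wants to put a number on "statistical significance" in an equal-spend, one-ad-per-ad-set test, a standard two-proportion z-test on conversion rates is one reasonable yardstick. This is an assumed method for illustration, not something Kevin states he uses:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates.

    conv_*: conversions for each variant; n_*: sample sizes (e.g. link
    clicks). Equal spend per variant keeps n_a and n_b comparable, which
    is exactly why unequal spend would confound a creative-only test.
    Returns (z, p_value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical week of data: variant A converted 60/1000, variant B 30/1000
z, p = two_proportion_ztest(60, 1000, 30, 1000)
```

With a result like p < 0.05 you would call variant A's edge significant; with a high-AOV purchase objective the conversion counts stay tiny, which is why the spend needed balloons.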
Evan Lee: Cool. So we have some questions now about the testing. I think this goes hand in hand with what you're uh, with what you're talking about here.
A question from Andres Amado appears on screen: "What methodology do you prefer to test new creative frameworks? A/B tests OR run all the creatives in the same ad group?"
Evan Lee: Andres asks, hopefully I've pronounced that correctly: what methodology do you prefer to test new creative frameworks? A/B tests, or running all the creatives in the same ad group? You mentioned one ad per ad set; is that what you're typically running, or maybe something like budget optimization?
Kevin Kovach: Yeah. In the ideal situation, both. Sometimes performance may not allow both, but that's why I like having the traffic test, because I know from doing this for a while that it's reliable and it's not going to be a huge opportunity cost, regardless of the overall budget of the account, unless it's like a couple hundred dollars. And then on the side, I do like to identify that algorithm winner with the gladiator-style free-for-all. That can be a dynamic creative (DCO) test, depending on the amount of creative. More recently, say if I have like 30 variants, I've been just launching them in an Advantage+ campaign. But I still want that traffic campaign on the side, so I have the flexibility to turn off that purchase campaign if I find the opportunity cost is insane and not really great for performance.
Evan Lee: So people are definitely curious about the traffic campaign, because Justin comes back with another question:
A question from Justin Regis appears on screen: "...so is the idea to use a Traffic test with new creative and THEN move those 'winners' over to a Conversion campaign?"
Kevin Kovach: Yeah, this is where David would have been really helpful; I had to convert him, essentially, once he started. Before he started, I was comfortable running the traffic tests just by themselves for like a year. I knew he would have some questions, and he was a little skeptical. So I started running them side by side again, on accounts where it didn't negatively impact performance, and showed him three consecutive reports: look, this traffic campaign and this purchase campaign identified the same three winners. That's how I got that buy-in. The key is you have to introduce friction, so you can filter out the bounces. That's why traffic campaigns aren't helpful for filling the funnel, because a lot of that traffic does bounce. But if you introduce friction and use, say, content views plus any sort of event as your numerator in the conversion rate formula, you can actually get very direct results as to what happens in the conversion campaign. 90% of the time, it'll identify the top three ads. The order might change a little bit, but like I said earlier, I'm generating a playoff field here, not just a single winner, so that same three is still going to be valuable to me. And the benefit of the traffic test is we get those results a lot faster. Sometimes a week, maybe two tops; three is the absolute max if we want to do that omnichannel event aspect of it.
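The frictioned proxy metric Kevin describes can be sketched in a few lines. The field names and the minimum-click floor below are assumptions for illustration:

```python
def rank_by_proxy_cvr(ads, min_clicks=100):
    """Rank traffic-test ads by a frictioned proxy conversion rate.

    ads: list of dicts with 'name', 'link_clicks', and 'deep_events'
    (e.g. content views plus a deeper on-site event), so that bouncers
    are filtered out of the numerator. Ads under min_clicks are skipped
    so tiny samples don't float to the top. Returns ad names, best first.
    """
    scored = [
        (ad["deep_events"] / ad["link_clicks"], ad["name"])
        for ad in ads
        if ad["link_clicks"] >= min_clicks
    ]
    return [name for _, name in sorted(scored, reverse=True)]

# Hypothetical week of traffic-test data
ads = [
    {"name": "A", "link_clicks": 500, "deep_events": 40},
    {"name": "B", "link_clicks": 500, "deep_events": 90},
    {"name": "C", "link_clicks": 50, "deep_events": 49},
]
winners = rank_by_proxy_cvr(ads)
```

The top few names from a ranking like this are the "playoff field" that graduates into the conversion campaign.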
Evan Lee: That's so interesting, because something I've also selfishly been curious about is Facebook being the place where you're able to access a large amount of people, like you described it, more or less as a focus group. Facebook might tell you one thing; I'm probably not giving a great example that'll lead into a question, but let's say you have a couple of personas, one that's like Daisy Dukes and one that's Sleepy Sally, whatever it might be, right? Daisy Dukes we thought would be the best, but we see Sleepy Sally doing really well on Facebook, for example. Should we restructure everything around that? Or is she somebody we should go after on TikTok, or somebody we should go after more in email? How do you navigate those types of insights?
Kevin Kovach: So, similar to what we did with that value prop breakdown earlier, I'm going to verify that there's no particular gender or demo that's throwing off the results when you look at them at the aggregated level. In that earlier example, if we were just looking at gifting at the aggregated level, it would have been a clear and decisive winner. But when you actually separate it out, you see, okay, it's only actually a clear and decisive winner with 55-plus, and that's where the algorithm wanted to go. The true winner is going to be the one that actually activates as many of those demos as possible, and that's the one I'm most interested in. So as long as there are no shenanigans, then yeah, I'll follow up and say, okay, this might be something that's more appropriate for a different channel. But I try to be as agnostic as possible with Facebook. In the end, it doesn't matter what our opinion is; it's the algorithm's that counts. So I'll test something that no one likes, I don't care, just in case; you never know. Ugly ads do really well, and sometimes you can just keep getting uglier and they still do really well. And that just adds to the diversity too, if you have ugly ads alongside your videos. I understand from the branding perspective that concerns people, but from the algorithmic perspective, that increases your reach so much, because Facebook knows the type of content that people historically interact with and is more inclined to serve that to them. Which is why having so many different styles is often a key element to scaling.
Evan Lee: Like you said, it's easier said than done; you have to test into it. And if you follow the Pareto principle, you can assume that 80% of creative is probably going to fail, but at least you can learn from it.
Kevin Kovach: Yeah, yeah, yeah.
Evan Lee: And I think you have a beginner's mind about it, because it's just like, I don't know anything, who knows if this works, but throw it in and we'll see what happens at the end of the day, right?
Kevin Kovach: Yeah. As long as we're testing responsibly, making sure it doesn't disrupt momentum in core campaigns and it's not getting a crazy amount of budget so that the opportunity cost of the test is super high, then yeah, go for it. This also, I think, is a way to beneficially lead the team, because I really don't shoot down ideas. It's a way of keeping myself in check, too, because I think it's easy for buyers at any seniority level to fall into patterns or to become stubborn about something; even I fall victim to that. So the team has the freedom to test what they want, when they want, with approval from the client, obviously, as long as we test responsibly, and they know I'm not going to come down on them just because I disagree with it. But those moments actually do check me. I've seen a manual carousel do well recently, and I had totally, absolutely written those off for like a year. So yeah, those moments are cool.
Kevin Kovach: Yeah. Like if we're testing responsibly, making sure it doesn't disrupt momentum in core campaigns or it's not getting like a crazy amount of um, budget so the opportunity cost of this test is super high. Yeah, go for it. Um, this also I think is a way to beneficially lead the team because I really don't shoot down ideas. Um, you know, it's a way of also keeping myself in check because I think it's easy for buyers no matter what seniority level to fall into patterns or to like become stubborn about something. Um, you know, even I I fall victim to that. So just having the freedom to test what they want, when they want with approval from the client, obviously, as long as we test responsibly, you know, they know I'm not going to like come down on them because I just disagree with it. Um, but those moments actually, you know, they do check me. Um, I've seen a manual carousel do well recently. I had totally absolutely written those off for like a year. Um, so yeah, those moments are cool.
Evan Lee: Love it. Uh, everybody, I have one last question for Kevin on my end. Um, I need to ask it, but just as a quick reminder, please keep up voting and getting your questions into the Q&A tab. Going to be jumping there right after this one. And one last thing, I saw that Talia had mentioned in the chat, like are we supposed to be seeing that same slide? Yes, uh, we did have some stuff to share, but again, running into some technical difficulties. So we're making best as we can just on this one. But Kev, last question that I have for you, I would be ashamed if I did not ask because whenever anyone talks about naming conventions, I say I have a guy and that's where I start to point to. So for context, everybody, uh, Kevin is like a wiz when it comes to naming conventions. So something I'm curious about now is like we've worked on all of this great creative that's now produced and like you'd mentioned, you have a specific way you test into the account. So I'm curious about is like how do you set up naming conventions and how does that then correlate into your analysis after those are live?
Kevin Kovach: Yeah, for sure. So before we met, um, and figured out and I learned about motion, um, you know, my background is in like statistics and data engineering. So I was thinking of and concepting out a naming system that would be very spreadsheet and formula friendly because I want to be able to into a database as fast as possible and reliably as possible, minimize the QA. If there are any like data scientists in the chat, like they'll know about 90% of the time you spend doing data science is actually getting the into a usable format. And having a, you know, expansive and reliable naming system just saves so much time. And then also allows us to provide like insane depth to our reports and have them be like real time in motion. Um, so you say we didn't have anything to share, but I can share my screen. Hey, let's do it. I can walk through what uh David prepared. We'll give me a minute here.
Evan Lee: And you might have to zoom in for folks just so we can see it a little bit.
Kevin Kovach: Oh yeah.
Evan Lee: A little more. Give me a couple more if you can. I'm sorry. Yeah, yeah, yeah. 175. Yeah, there we go. There we go. I like it.
Kevin Kovach: So with a detailed naming system, it's filled with qualitative information that we can't like just simply pull into a column and export. And that allows us to build these real-time reports that clients can have access to and get a sense of, you know, some of the more basic things like, you know, what ad formats are working, you know, here I opened up a few tabs because uh so no one had to wait for motion to load where things are going in the funnel. Um, I did I can't um, share the exact creative and give away the client. Um, this is a pseudonym, but pretty much across all of our we will have these four categories. reports that the client can always access and go into that will be always updating, looking at prospecting audiences, retargeting audiences, but on the creative side, by um, value prop. So part of our naming system is going to have that value prop tag. And we use the uh Facebook naming template to make it as easy and fast as possible even though it, you know, looks very, very complicated and long and, you know, sometimes with new buyers a little overwhelming the first week. But once they get used to using the naming template and this gets pretty fast and doesn't really add that much more time, but it saves an insane amount of time when reporting. So we essentially don't have to spend that much time making creative reports anymore. As long as we're on top of our all of these are automatically going to update. Um, and we just kind of have to go in and and QA, make sure we didn't there isn't like a missed character somewhere or something off. But either way that, you know, if you can save like 95% of your reporting time, you know, that can open up a whole day. Like I've talked to plenty of buyers who lose days to reporting. Um, so this just opens up
Evan Lee: And the biggest thing there is just like losing time, losing time, it's not like you just get it back and do nothing. It's like instead of just manual like grunt work and putting together reports, you can now shift to decision making at the end of the day. It's like, okay, now I can spend time interpreting the data to know what to do next instead of just putting the data together ultimately.
Kevin Kovach: Yeah, you know, you have a chance to sit and kind of think critically. Um, and really just it's a it gives the brands and clients a lot to think about as well. Like these are these are not test campaigns right here. Like every single ad is tagged with a value prop tag. So these are like core campaigns, gladiator style. I'm not controlling the spend anymore. What value props are driving the performance? Um, and then also just kind of when we do have our controlled theme down here. So in the sales and promotions, we can break things out, you know, for Pierce and Pierce fine detergents, bonus points if you know what that's from. Um, and break out the creative for each specific scent. Image style breakdown in addition to value props, like we segment all of our creative by style. So this is for images. So it can be model UGC, product stylized, that's typically how, you know, we define any sort of product shot that's been set up in a special way. product e-com shot, and then we separately will tag those uh general product e-com shots because sometimes they do surprisingly well. Um, I said, you know, just because something's ugly, you know, you should test it. Um, and then just general graphics. So this is something that David also has access to and will be creative strategy. So in a way, this is where the where the media buyers come in to like creative. But it's not necessarily like Facebook, it could also just be us being on top of naming systems. So the creative team flexibility to go in here whenever they want and come away with learnings.
Evan Lee: Love it. Uh, one thing I I have a follow up on is just like all of these reports stem from the naming. Is there any like examples or something you can share that showcases to our audience specific things that you're tracking if it is consistent?
Kevin Kovach: Yeah. So the consistent part is going to be here. So we have a style tag that's going to be going through all of the image styles. And then we also have a video theme tag going through all the different video styles. David actually is in the process of combining those two tags into one. So we'll be having an update here soon. And with that update, I'm going to add a few more tags. So, um, one of those is going to be the creative pillar, which is how the entire team is going to be thinking about and approaching their creative. as a reminder, we're going to be education, emotion and authority. So soon we would have that ongoing breakdown here as well. Value prop. And then honestly, we'll have some custom tags for each client. So one of the things we like to do on onboarding is like, what questions can we answer for you? And then try and come up with a strategy to use Facebook as that focus group to actually give them the data that can help them make big expensive decisions. So sometimes that will require like a custom tag. Um, so on brands that do like a lot of white listing or influencers, I'll add like a page tag. So I can you know, label what actual page is this coming from. But that's not something I need on most accounts if, you know, 90% of the ads are just coming from the uh the brand page.
Evan Lee: Love it. And if anyone plug, you can see Kevin here using motion. Feel free to check us out. Uh, book some time so you can talk about how to make this come to life for you all. Cool.
Kevin Kovach: Oh yeah, it it's crazy. Um, because I mean, like you just get so much depth with these reports and they don't take that much time as long as you're on top of your has that uh that depth to it.
Evan Lee: Love it. Okay. And yeah, with that, you know, we've had buyers take on more accounts, which uh positively impacts their income.
Kevin Kovach: Yeah.
Evan Lee: Hand in hand, business and goal, business, everyone's goals being met. to do here with our last 10 minutes is just a bunch of questions that have started to pile up. So let's do our best to try and get through them. Um, but I think like where I'll kick off is the one that has the most votes, 15, and hopefully it's a nice and easy lob. is one here.
A question from Iain Harris appears on screen: "Can you define 'share IDs'?"
Evan Lee: Can you define share IDs? And I think you had mentioned this like pretty immediately that we had talked about.
Kevin Kovach: Yeah, so I I think it's super, super important and I think it honestly in a lot of cases it is even more important than structure. Um, so when we say when you launch an ad, um, you're going to be creating two IDs in the account. There there's going to be the ad which is going to be created every time you create an ad, but also there's going to be an unpublished post made that's actually used to show the ad and that's where the or the or the post ID, those can be used interchangeably. And you can actually share that content ID and the content ID is where all of the social proof is stored, not the ad ID. So that way you can have, you know, multiple ads that are all actually using the same ID. So if you're running them in like 10 different audiences, all of the engagements going to stack. So that's how you see those posts in Facebook massive amount of social proof. Like I guarantee you, except in the most extreme situations, those are shared post IDs.
Evan Lee: Awesome. Everybody, I hope that one helps. If it doesn't, just throw it into the chat really quick so we can clarify. But I wanted to jump to the second most voted question and it comes from Justin here.
A question from Justin Regis appears on screen: "Quick ?? regarding 'statistical significance' - our agency agrees on a $1k spend minimum for an 1st run adset (usually with 4-6 ads). Does that match with what Kevin uses as a benchmark?"
Evan Lee: So quick question regarding stat sig. Our agency agrees that 1k spend minimum for the first ad set run with typically four to six ads. Does that match a benchmark that you might use to determine relevance? So 1k spend, does that stat sig for you?
Kevin Kovach: So I actually do one ad per ad set. Um, yeah, I know it looks like I actually do I mean I I I do test this way. I'm not in any way saying it's wrong because you know, the most important thing is you find a way to test and you know, how you do it is going to vary by account, by situation, by budget. Um, but I want to make sure that every single variant gets the same amount of spend or else spend is going to be a technically a variable and if you're calling it a single variable test and you're just trying to test creative, but then spend is also a variable, it's no longer going to be reliable. Um, and honestly, for the traffic tests, you know, 1k is actually, you know, 1k for a week, you know, that's probably going to do it. Um, sometimes we can even get the get it the same results at like $500, but just let it run for the full week just to be sure. On the purchase side, man, that varies so, so, so much. Um, I have one account right now that's like a $800 AOV. to test that to to test with like the purchase objective there and actually want to get through the learning phase, that would be a considerable investment.
Evan Lee: Cool. So we have some questions now about the testing. I think this goes hand in hand with what you're uh, with what you're talking about here.
A question from Andres Amado appears on screen: "What methodology do you prefer to test new creative frameworks? A/B tests OR run all the creatives in the same ad group?"
Evan Lee: Andreas asks, hopefully I've pronounced that correctly. What methodology do you prefer to test new creative frameworks? or run all the creatives in the same ad group? one ad per ad set, maybe it's like budget optimization. Are you typically running there?
Kevin Kovach: Yeah. In the ideal situation, both. Um, you know, sometimes and performance may not allow both, but that's why I like having the traffic test because I know by doing this for a while, it's reliable and it's not going to be like an a huge opportunity cost, you know, regardless of like the overall budget of the account unless it's like a couple hundred dollars. Um, and then on the side, I do like to identify that algorithm winner, that gladiator style free for all. That can be a dynamic DCO test, um, depending on the amount of creative. More recently, I've been just, say if I have like 30 variants, just launch them in an advantage plus campaign. Um, but I still want that traffic campaign on the side, so I have the flexibility to turn off that purchase campaign and find that the opportunity cost is insane and not really um, great for performance.
Evan Lee: So people are definitely curious about the traffic campaign, because Justin comes in with another question here.
A question from Justin Regis appears on screen: "...so is the idea to use a Traffic test with new creative and THEN move those 'winners' over to a Conversion campaign?"
Kevin Kovach: Yeah, this is where David would have been really helpful. You know, I had to convert him essentially once he started. Before he started, I was comfortable running the traffic tests just by themselves for like a year. I knew he would have some questions, and he was a little skeptical. So I started running them side by side, again on accounts where it didn't negatively impact performance, and showed him three consecutive reports: look, this traffic campaign and this purchase campaign identified the same three winners. That's how I got that buy-in. The key is you have to introduce friction so you can filter out the bouncers. That's why traffic campaigns aren't helpful for filling the funnel, because a lot of that traffic does bounce. But if you introduce friction and use, say, content views plus some deeper event as your numerator in the conversion rate formula, you can actually get results that map very directly to what happens in the conversion campaign. 90% of the time, it'll identify the same top three ads. The order might change a little bit, but like I said earlier, I'm generating a playoff field here, not just a single winner, so that same three is still going to be valuable to me. The benefit of the traffic test is we get those results a lot faster. Sometimes a week, maybe two, two tops; three is the absolute max if we want to layer in that omnichannel event aspect.
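The friction filter Kevin describes can be sketched in a few lines of Python. The metric names, counts, and ad names below are illustrative stand-ins, not his actual template or Facebook's API fields; the idea is just that the numerator counts post-click engagement rather than raw clicks.

```python
# Sketch of the "friction" filter for traffic tests: instead of raw
# link clicks, use content views plus a deeper event (here, add to
# cart) as the numerator, so bouncers don't count toward the rate.
# All field names and numbers are hypothetical.

def friction_conversion_rate(ad):
    """(content_views + add_to_carts) / link_clicks for one ad."""
    if ad["link_clicks"] == 0:
        return 0.0
    return (ad["content_views"] + ad["add_to_carts"]) / ad["link_clicks"]

def top_ads(ads, n=3):
    """Rank the 'playoff field': top n ads by friction-adjusted rate."""
    return sorted(ads, key=friction_conversion_rate, reverse=True)[:n]

ads = [
    {"name": "hook_A", "link_clicks": 400, "content_views": 220, "add_to_carts": 30},
    {"name": "hook_B", "link_clicks": 500, "content_views": 150, "add_to_carts": 10},
    {"name": "hook_C", "link_clicks": 300, "content_views": 200, "add_to_carts": 25},
]

for ad in top_ads(ads):
    print(ad["name"], round(friction_conversion_rate(ad), 3))
```

hook_B gets the most clicks but ranks last once friction is applied, which is the point of the filter.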
Evan Lee: That's so interesting, because something I've also selfishly been curious about is Facebook being the place where you're able to access a large amount of people, like you described, more or less as a focus group. Facebook might tell you one thing. I'm probably not giving a great example that'll lead into a question, but let's say you have a couple of personas, one that's like Daisy Dukes and then Sleepy Sally, whatever it might be, right? Daisy Dukes we thought would be the best, but we see Sleepy Sally doing really well on Facebook, for example. Then should we restructure everything around that, or is that somebody we should go after on TikTok? Or somebody we should go after more on email? How do you navigate those types of insights?
Kevin Kovach: So, similar to what we did with that value prop breakdown earlier, I'm going to verify that there is no particular gender or demo that's throwing off these results when you're looking at them at the aggregated level. In that earlier example, if we're just looking at gifting at the aggregated level, it would have been a clear and decisive winner. But when you actually separate it out, you see, okay, it's only a clear and decisive winner with 55-plus, and that's where the algorithm wanted to go. The true winner is going to be the one that actually activates as many of those demos as possible, and that's the one I'm most interested in. So as long as there are no shenanigans, then yeah, I'll follow up and say, okay, this might be something that's more appropriate for a different channel. But I try to be as agnostic as possible with Facebook. In the end, it doesn't matter what our opinion is; it's the algorithm's. So I'll test something that no one likes, I don't care, just in case. You never know. Ugly ads do really well, and sometimes you can just keep getting uglier and it still does really well. And that adds to your diversity too, because if you have ugly ads in with your videos, I understand from the branding perspective that concerns people, but from the algorithmic perspective, that increases your reach so much, because Facebook knows the type of content that people historically interact with and is more inclined to serve that to them. Which is why having that much variety is often a key element to scaling.
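The breakdown check Kevin runs can be sketched as a small group-by. The numbers are invented purely to reproduce the shape of his gifting example: one value prop wins the aggregate only because one demo band (where the algorithm spent) carried it.

```python
# Sketch of the demo-breakdown sanity check: a value prop can look
# like the aggregate winner while only winning one demo band.
# The spend and purchase numbers below are made up for illustration.

from collections import defaultdict

rows = [
    # (value_prop, demo, spend, purchases)
    ("gifting",    "25-44", 300, 6),
    ("gifting",    "55+",   700, 35),
    ("durability", "25-44", 500, 20),
    ("durability", "55+",   500, 18),
]

def winners_by_demo(rows):
    """Best value prop per demo, by purchases per dollar."""
    by_demo = defaultdict(list)
    for prop, demo, spend, purch in rows:
        by_demo[demo].append((purch / spend, prop))
    return {demo: max(pairs)[1] for demo, pairs in by_demo.items()}

def aggregate_winner(rows):
    """Best value prop ignoring demo, by purchases per dollar."""
    totals = defaultdict(lambda: [0, 0])
    for prop, _, spend, purch in rows:
        totals[prop][0] += spend
        totals[prop][1] += purch
    return max(totals, key=lambda p: totals[p][1] / totals[p][0])

print(aggregate_winner(rows))   # looks like one clear winner...
print(winners_by_demo(rows))    # ...but the breakdown disagrees
```

Here "gifting" wins the aggregate, yet only because 55+ carried it; "durability" wins the younger band, which is exactly the shenanigans check before declaring a true winner.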
Evan Lee: Like I said, you know, it's easier said than done. You have to test into it. And if you follow the Pareto principle, you can assume that 80% of creative is probably going to fail, but at least you can learn from it.
Kevin Kovach: Yeah, yeah, yeah.
Evan Lee: And I think you have a beginner's mind about it, because it's just like, I don't know anything, who knows if this works, but throw it in and we'll see what happens at the end of the day, right?
Kevin Kovach: Yeah. Like, if we're testing responsibly, making sure it doesn't disrupt momentum in core campaigns and it's not getting a crazy amount of budget so that the opportunity cost of the test is super high, then yeah, go for it. This also, I think, is a way to lead the team well, because I really don't shoot down ideas. It's a way of keeping myself in check too, because I think it's easy for buyers, no matter what seniority level, to fall into patterns or become stubborn about something. Even I fall victim to that. So they have the freedom to test what they want, when they want, with approval from the client, obviously, as long as we test responsibly. They know I'm not going to come down on them just because I disagree with it. But those moments actually do check me. I've seen a manual carousel do well recently; I had totally written those off for like a year. So yeah, those moments are cool.
Evan Lee: Love it. Everybody, I have one last question for Kevin on my end that I need to ask, but just as a quick reminder, please keep upvoting and getting your questions into the Q&A tab; we're going to be jumping there right after this one. And one last thing: I saw that Talia had mentioned in the chat, are we supposed to be seeing that same slide? Yes, we did have some stuff to share, but again, we're running into some technical difficulties, so we're making the best of it. But Kev, the last question I have for you, I would be ashamed if I did not ask, because whenever anyone talks about naming conventions, I say I have a guy, and that's who I point to. For context, everybody, Kevin is a wiz when it comes to naming conventions. So what I'm curious about is: we've worked on all of this great creative that's now produced, and like you mentioned, you have a specific way you test it into the account. How do you set up naming conventions, and how does that then carry into your analysis once those ads are live?
Kevin Kovach: Yeah, for sure. So before we met and I learned about Motion, you know, my background is in statistics and data engineering. So I was concepting out a naming system that would be very spreadsheet- and formula-friendly, because I want to be able to get the data into a database as fast and as reliably as possible and minimize the QA. If there are any data scientists in the chat, they'll know that about 90% of the time you spend doing data science is actually getting the data into a usable format. Having an expansive and reliable naming system just saves so much time, and it also allows us to provide insane depth in our reports and have them be real-time in Motion. So, you said we didn't have anything to share, but I can share my screen.
Evan Lee: Hey, let's do it.
Kevin Kovach: I can walk through what David prepared. Just give me a minute here.
Evan Lee: And you might have to zoom in for folks just so we can see it a little bit.
Kevin Kovach: Oh yeah.
Evan Lee: A little more. Give me a couple more if you can. I'm sorry. Yeah, yeah, yeah. 175. Yeah, there we go. There we go. I like it.
Kevin Kovach: So a detailed naming system is filled with qualitative information that we can then just pull into a column and export. That allows us to build these real-time reports that clients can access to get a sense of some of the more basic things, like what ad formats are working and where things are going in the funnel. Here I opened up a few tabs so no one had to wait for Motion to load. I can't share the exact creative and give away the client, so this is a pseudonym, but pretty much across all of our accounts we will have these four categories of reports that the client can always go into and that will always be updating: looking at prospecting audiences, retargeting audiences, and then, on the creative side, by value prop. Part of our naming system is going to have that value prop tag. And we use the Facebook naming template to make it as easy and fast as possible, even though it looks very complicated and long, and is sometimes a little overwhelming for new buyers the first week. But once they get used to the naming template, it gets pretty fast and doesn't really add that much more time, and it saves an insane amount of time when reporting. We essentially don't have to spend much time making creative reports anymore. As long as we're on top of our naming, all of these are automatically going to update, and we just have to go in and QA, make sure there isn't a missed character somewhere or something off. Either way, if you can save like 95% of your reporting time, that can open up a whole day. I've talked to plenty of buyers who lose days to reporting. So this just opens up...
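A "spreadsheet- and formula-friendly" naming convention usually just means delimited fields that split cleanly into columns. The field order, separator, and sample name below are invented for illustration, not Kevin's actual template; the length check is the "missed character" QA he mentions.

```python
# Sketch of a formula-friendly ad naming convention: one split turns
# each ad name into database-ready fields. Field names, separator,
# and the example ad name are all hypothetical.

AD_NAME_FIELDS = ["launch_date", "format", "style", "value_prop", "hook"]

def parse_ad_name(name, sep="_", fields=AD_NAME_FIELDS):
    parts = name.split(sep)
    if len(parts) != len(fields):  # the "missed character" QA check
        raise ValueError(f"expected {len(fields)} fields, got {len(parts)}: {name!r}")
    return dict(zip(fields, parts))

row = parse_ad_name("2024-03-01_video_ugc_gifting_hook12")
print(row)
```

Because malformed names raise instead of silently misaligning columns, the QA pass reduces to scanning for errors rather than eyeballing every row.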
Evan Lee: And the biggest thing there is that when you stop losing that time, it's not like you just get it back and do nothing. Instead of manual grunt work putting together reports, you can shift to decision making. It's like, okay, now I can spend time interpreting the data to know what to do next, instead of just putting the data together.
Kevin Kovach: Yeah, you have a chance to sit and think critically. And it gives the brands and clients a lot to think about as well. These are not test campaigns right here; every single ad is tagged with a value prop tag. These are core campaigns, gladiator style. I'm not controlling the spend anymore, so which value props are driving the performance? And then we also have our controlled themes down here. In sales and promotions, we can break things out for, you know, Pierce and Pierce fine detergents, bonus points if you know what that's from, and break out the creative for each specific scent. There's an image style breakdown in addition to value props; we segment all of our creative by style. This one is for images, so it can be model UGC; product stylized, which is typically how we define any product shot that's been set up in a special way; product e-com shot, and we separately tag those general product e-com shots because sometimes they do surprisingly well. Like I said, just because something's ugly, you should still test it. And then just general graphics. So this is something that David also has access to and will be using for creative strategy. In a way, this is where the media buyers come in to creative, but it's not necessarily them living in Facebook; it can just be us being on top of naming systems. The creative team has the flexibility to go in here whenever they want and come away with learnings.
Evan Lee: Love it. One follow-up I have: all of these reports stem from the naming. Are there any examples you can share that showcase to our audience the specific things you're tracking, if it's consistent?
Kevin Kovach: Yeah. So the consistent part is going to be here. We have a style tag that runs through all of the image styles, and then we also have a video theme tag running through all the different video styles. David actually is in the process of combining those two tags into one, so we'll have an update here soon. And with that update, I'm going to add a few more tags. One of those is going to be the creative pillar, which is how the entire team is going to be thinking about and approaching their creative; as a reminder, those are education, emotion and authority. So soon we'll have that ongoing breakdown here as well, plus value prop. And then honestly, we'll have some custom tags for each client. One of the things we like to do on onboarding is ask, what questions can we answer for you? Then we try to come up with a strategy to use Facebook as that focus group to actually give them the data that can help them make big, expensive decisions. Sometimes that will require a custom tag. So on brands that do a lot of whitelisting or influencers, I'll add a page tag so I can label what actual page this is coming from. But that's not something I need on most accounts if, you know, 90% of the ads are just coming from the brand page.
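Once every ad carries parsed tags, each report Kevin describes reduces to one group-by on a tag. The tag values, spend, and revenue below are invented, and ROAS per tag is used here only as an example metric.

```python
# Sketch of tag-driven breakdowns: with tags (pillar, style, value
# prop...) attached to every ad, any report is a group-by on one tag.
# All tag values and performance numbers are illustrative.

from collections import defaultdict

tagged_ads = [
    {"pillar": "education", "style": "ugc",     "spend": 800, "revenue": 2400},
    {"pillar": "emotion",   "style": "ugc",     "spend": 600, "revenue": 2100},
    {"pillar": "education", "style": "graphic", "spend": 400, "revenue": 800},
    {"pillar": "authority", "style": "ecom",    "spend": 200, "revenue": 700},
]

def roas_by(tag, ads):
    """ROAS per value of any tag -- the whole 'report' in one call."""
    spend, rev = defaultdict(float), defaultdict(float)
    for ad in ads:
        spend[ad[tag]] += ad["spend"]
        rev[ad[tag]] += ad["revenue"]
    return {k: round(rev[k] / spend[k], 2) for k in spend}

print(roas_by("pillar", tagged_ads))
print(roas_by("style", tagged_ads))
```

Adding a custom client tag (like the page tag for whitelisted influencer content) would just mean adding one more key to each ad and passing a different tag name to the same function.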
Evan Lee: Love it. And a quick plug for anyone watching: you can see Kevin here using Motion. Feel free to check us out and book some time so we can talk about how to make this come to life for you all. Cool.
Kevin Kovach: Oh yeah, it's crazy, because you just get so much depth with these reports, and they don't take that much time as long as you're on top of your naming so it has that depth to it.
Evan Lee: Love it. Okay. And yeah, with that, you know, we've had buyers take on more accounts, which positively impacts their income.
Kevin Kovach: Yeah.
Evan Lee: Hand in hand; business and everyone's goals being met. What I want to do here with our last 10 minutes is get through the bunch of questions that have started to pile up. So let's do our best to get through them. I think where I'll kick off is the one that has the most votes, 15, and hopefully it's a nice and easy lob. This is the one here.
A question from Iain Harris appears on screen: "Can you define 'share IDs'?"
Evan Lee: Can you define share IDs? And I think you had mentioned this pretty much immediately when we talked earlier.
Kevin Kovach: Yeah, so I think it's super, super important, and honestly in a lot of cases it's even more important than structure. So when you launch an ad, you're going to be creating two IDs in the account. There's the ad ID, which is created every time you create an ad, but there's also an unpublished post made that's actually used to show the ad, and that's where the content ID, or the post ID (those can be used interchangeably), comes in. You can actually share that content ID, and the content ID is where all of the social proof is stored, not the ad ID. That way you can have multiple ads that are all actually using the same post ID, so if you're running them in like 10 different audiences, all of the engagement is going to stack. That's how you see those posts on Facebook with a massive amount of social proof. I guarantee you, except in the most extreme situations, those are shared post IDs.
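For anyone wiring this up programmatically: in the Marketing API, an ad creative can reference an existing post via an `object_story_id` of the form `"<page_id>_<post_id>"`, which is how many ads end up stacking engagement on one post. The sketch below only builds example payload dictionaries in memory; it makes no API call, and the helper name, IDs, and audience labels are invented.

```python
# Sketch of the shared-post-ID mechanic: every ad's creative points at
# the same "<page_id>_<post_id>" story ID, so engagement from all
# audiences accumulates on one post. No API call is made here; the
# helper and all IDs are illustrative.

def shared_post_creatives(page_id, post_id, audiences):
    story_id = f"{page_id}_{post_id}"  # one post ID shared by every ad
    return [
        {"name": f"ad_{aud}", "object_story_id": story_id}
        for aud in audiences
    ]

creatives = shared_post_creatives("111", "222", ["lookalike", "interest", "broad"])
for c in creatives:
    print(c["name"], c["object_story_id"])
```

Three ads, three audiences, one story ID: likes, comments, and shares land on the single underlying post rather than being split three ways.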
Evan Lee: Awesome. Everybody, I hope that one helps. If it doesn't, just throw it into the chat really quick so we can clarify. But I wanted to jump to the second most voted question and it comes from Justin here.
A question from Justin Regis appears on screen: "Quick ?? regarding 'statistical significance' - our agency agrees on a $1k spend minimum for an 1st run adset (usually with 4-6 ads). Does that match with what Kevin uses as a benchmark?"
Evan Lee: So, quick question regarding stat sig. Our agency agrees on a $1k spend minimum for the first ad set run, typically with four to six ads. Does that match a benchmark you might use to determine relevance? So $1k spend, is that stat sig for you?
Kevin Kovach: So I actually do one ad per ad set. I know that's not how you're testing, and I'm not in any way saying your way is wrong, because the most important thing is that you find a way to test, and how you do it is going to vary by account, by situation, by budget. But I want to make sure that every single variant gets the same amount of spend, or else spend technically becomes a variable. If you're calling it a single-variable test and you're just trying to test creative, but spend is also a variable, it's no longer going to be reliable. And honestly, for the traffic tests, $1k for a week, that's probably going to do it. Sometimes we can even get the same results at like $500, but let it run for the full week just to be sure. On the purchase side, man, that varies so, so much. I have one account right now with around an $800 AOV; to test with the purchase objective there and actually get through the learning phase, that would be a considerable investment.
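As a rough sketch of the stat-sig question behind this exchange, here is a two-proportion z-test comparing two variants' conversion rates. The numbers are invented, and this stands in for whatever significance check your team actually agrees on; it also assumes both variants got equal traffic, in line with Kevin's point that unequal spend makes spend a hidden variable.

```python
# Hedged sketch: two-proportion z-test for "did variant A beat variant B?"
# Invented numbers; assumes equal traffic per variant so spend isn't
# itself a variable in the test.
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Return (z, two-sided p-value) comparing two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Normal CDF via erf; two-sided p-value.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Equal traffic per variant: 4,000 link clicks each (invented numbers).
z, p = two_proportion_z(conv_a=120, n_a=4000, conv_b=90, n_b=4000)
```

With these made-up numbers the difference clears the usual 95% threshold, which is exactly the kind of call a flat "$1k minimum" rule is approximating.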
Evan Lee: Cool. So we have some questions now about the testing. I think this goes hand in hand with what you're uh, with what you're talking about here.
A question from Andres Amado appears on screen: "What methodology do you prefer to test new creative frameworks? A/B tests OR run all the creatives in the same ad group?"
Evan Lee: Andres asks, and hopefully I've pronounced that correctly: what methodology do you prefer to test new creative frameworks? A/B tests, or running all the creatives in the same ad group? You mentioned one ad per ad set; is that with campaign budget optimization? What are you typically running there?
Kevin Kovach: Yeah, in the ideal situation, both. Sometimes performance may not allow both, but that's why I like having the traffic test: I know from doing this for a while that it's reliable, and it's not going to be a huge opportunity cost regardless of the overall budget of the account, unless that budget is only a couple hundred dollars. And then on the side, I do like to identify that algorithm winner with a gladiator-style free-for-all. That can be a DCO test, depending on the amount of creative. More recently, say if I have like 30 variants, I've been just launching them in an Advantage+ campaign. But I still want that traffic campaign on the side, so I have the flexibility to turn off that purchase campaign if I find the opportunity cost is insane and it's not really great for performance.
Evan Lee: So people are definitely curious about the traffic campaign, because Justin comes back with another question:
A question from Justin Regis appears on screen: "...so is the idea to use a Traffic test with new creative and THEN move those 'winners' over to a Conversion campaign?"
Kevin Kovach: Yeah, this is where David would have been really helpful; I had to convert him, essentially, once he started. Before he started, I was comfortable running the traffic tests just by themselves for like a year. I knew he would have some questions, and he was a little skeptical. So I started running them side by side, again on accounts where it didn't negatively impact performance, and showed him three consecutive reports: look, this traffic campaign and this purchase campaign identified the same three winners. That's how I got that buy-in. The key is you have to introduce friction, so you can filter out the bounces. That's why traffic campaigns aren't helpful for filling the funnel, because a lot of that traffic does bounce. But if you introduce friction and use, say, content views plus some deeper event as your numerator in the conversion rate formula, you can actually get very direct results as to what happens in the conversion campaign. 90% of the time, it'll identify the same top three ads. The order might change a little bit, but like I said earlier, I'm generating a playoff field here, not just a single winner, so that same three is still going to be valuable to me. And the benefit of the traffic test is we get those results a lot faster. Sometimes a week, maybe two tops; three is the absolute max if we want to do that omnichannel event aspect of it.
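The friction filter Kevin describes can be sketched as a ranking function: instead of raw clicks, rank each traffic-test ad by how many of its clickers reach a deeper event. The event names and numbers here are invented for illustration, not Kevin's actual formula.

```python
# Hedged sketch of the friction filter: rank traffic-test ads not by
# cheap clicks but by the share of clickers who survive a deeper step.
# Ad names, events, and counts are all invented.

def friction_rate(ad: dict) -> float:
    """(content views + add-to-carts) / link clicks. Clickers who
    bounced immediately never reach the numerator."""
    deep = ad["content_views"] + ad["add_to_carts"]
    return deep / ad["link_clicks"] if ad["link_clicks"] else 0.0

ads = [
    {"name": "ugc_video_a", "link_clicks": 900,  "content_views": 520, "add_to_carts": 40},
    {"name": "static_b",    "link_clicks": 1400, "content_views": 310, "add_to_carts": 12},
    {"name": "carousel_c",  "link_clicks": 700,  "content_views": 410, "add_to_carts": 35},
]

# The "playoff field": top ads by friction-adjusted rate.
playoff = sorted(ads, key=friction_rate, reverse=True)
```

Note that `static_b` drew the most clicks but ranks last once friction is applied, which is the whole point of the filter.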
Evan Lee: That's so interesting, because something I've also selfishly been curious about is Facebook being the place where you're able to access a large amount of people, like you described, more or less as a focus group. I'm probably not giving a great example that'll lead into a question, but let's say you have a couple of personas, one that's like Daisy Dukes and another that's Sleepy Sally, whatever it might be. Daisy Dukes we thought would be the best, but we see Sleepy Sally doing really well on Facebook, for example. Should we restructure everything around that? Or is that somebody we should go after on TikTok, or somebody we should go after more on email? How do you navigate those types of insights?
Kevin Kovach: So, similar to what we did with that value prop breakdown earlier, I'm going to verify that there's no particular gender or demo throwing off those results when you're looking at them at the aggregated level. In that earlier example, if we were just looking at gifting at the aggregated level, it would have been a clear and decisive winner. But when you actually separate it out, you see, okay, it's only actually a clear and decisive winner with 55-plus, and that's where the algorithm wanted to go. The true winner is going to be the one that actually activates as many of those demos as possible, and that's the one I'm most interested in. So as long as there are no shenanigans, then yeah, I'll follow up and say, okay, this might be something that's more appropriate for a different channel. But I try to be as agnostic as possible with Facebook. In the end, it doesn't matter what our opinion is; it's the algorithm's opinion that counts. So I'll test something that no one likes, I don't care, just in case; you never know. Ugly ads do really well, and sometimes you can just keep getting uglier and it still does really well. And that just adds to the diversity too, because if you have ugly ads running alongside your videos, I understand from the branding perspective that concerns people, but from the algorithmic perspective, that increases your reach so much, because Facebook knows the type of content people historically interact with and is more inclined to serve that to them. Which is why having so much variety is often a key element to scaling.
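The demo sanity check above can be sketched as a small breakdown: a value prop that only tops the aggregate because one age band is carrying it is suspect, while the "true winner" wins across more bands. The numbers below are invented to mirror the gifting / 55-plus example.

```python
# Hedged sketch of the demo sanity check. Invented numbers mirroring the
# "gifting only wins with 55+" example from the conversation.
from collections import defaultdict

# (value_prop, age_band) -> conversion rate observed in that cell
cells = {
    ("gifting", "18-34"): 0.010, ("gifting", "35-54"): 0.011, ("gifting", "55+"): 0.060,
    ("comfort", "18-34"): 0.022, ("comfort", "35-54"): 0.024, ("comfort", "55+"): 0.021,
}

def demo_wins(cells: dict) -> dict:
    """Count, per value prop, the age bands where it beats the others."""
    bands = {band for _, band in cells}
    wins = defaultdict(int)
    for band in bands:
        best = max((p for p, b in cells if b == band),
                   key=lambda p: cells[(p, band)])
        wins[best] += 1
    return dict(wins)

# "gifting" may top the aggregate (its 55+ cell is huge), but "comfort"
# activates more demos, which is the winner Kevin says he cares about.
wins = demo_wins(cells)
```

With these made-up cells, "comfort" wins two of three bands even though "gifting" would look dominant in an aggregate average skewed by 55-plus.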
Evan Lee: Like you said, it's easier said than done; you have to test into it. And if you follow the Pareto principle, you can assume that 80% of creative is probably going to fail. But at least you can learn from it.
Kevin Kovach: Yeah, yeah, yeah.
Evan Lee: And I think you have a beginner's mind about it, because it's just like: I don't know anything, who knows if this works, but throw it in and we'll see what happens at the end of the day, right?
Kevin Kovach: Yeah. As long as we're testing responsibly, making sure it doesn't disrupt momentum in core campaigns and it's not getting a crazy amount of budget such that the opportunity cost of the test is super high, then yeah, go for it. This also, I think, is a way to beneficially lead the team, because I really don't shoot down ideas. It's a way of keeping myself in check, too, because I think it's easy for buyers at any seniority level to fall into patterns or become stubborn about something. Even I fall victim to that. So they have the freedom to test what they want, when they want, with approval from the client, obviously, as long as we test responsibly. They know I'm not going to come down on them just because I disagree with something. And those moments actually do check me. I've seen a manual carousel do well recently; I had totally, absolutely written those off for like a year. So yeah, those moments are cool.
Evan Lee: Love it. Everybody, I have one last question for Kevin on my end that I need to ask, but just as a quick reminder, please keep upvoting and getting your questions into the Q&A tab; I'm going to be jumping there right after this one. And one last thing: I saw that Talia mentioned in the chat, are we supposed to be seeing that same slide? Yes, we did have some stuff to share, but again, we're running into some technical difficulties, so we're making do as best we can with this one. But Kev, last question I have for you. I would be ashamed if I did not ask, because whenever anyone talks about naming conventions, I say "I have a guy," and that's who I point to. For context, everybody, Kevin is a wiz when it comes to naming conventions. So what I'm curious about now is: we've worked on all of this great creative that's now produced, and like you mentioned, you have a specific way you test it into the account. How do you set up naming conventions, and how does that then carry into your analysis after those ads are live?
Kevin Kovach: Yeah, for sure. So before we met and I learned about Motion, well, my background is in statistics and data engineering, so I was concepting out a naming system that would be very spreadsheet- and formula-friendly, because I want to be able to get the data into a database as fast and as reliably as possible and minimize the QA. If there are any data scientists in the chat, they'll know that about 90% of the time you spend doing data science is actually getting the data into a usable format. Having an expansive and reliable naming system just saves so much time, and it also allows us to provide insane depth in our reports and have them be real time in Motion. So, you said we didn't have anything to share, but I can share my screen. Hey, let's do it. I can walk through what David prepared. Just give me a minute here.
Evan Lee: And you might have to zoom in for folks just so we can see it a little bit.
Kevin Kovach: Oh yeah.
Evan Lee: A little more. Give me a couple more if you can. I'm sorry. Yeah, yeah, yeah. 175. Yeah, there we go. There we go. I like it.
Kevin Kovach: So a detailed naming system is filled with qualitative information that you couldn't otherwise just pull into a column and export. That allows us to build these real-time reports that clients have access to, to get a sense of some of the more basic things, like what ad formats are working and where things are going in the funnel. I opened up a few tabs here so no one had to wait for Motion to load. I can't share the exact creative and give away the client, so this is a pseudonym, but pretty much across all of our accounts we'll have these four categories: reports the client can always access and go into, always updating, looking at prospecting audiences and retargeting audiences, and on the creative side, by value prop. Part of our naming system has that value prop tag. And we use the Facebook naming template to make it as easy and fast as possible, even though it looks very complicated and long, and with new buyers it can be a little overwhelming the first week. But once they get used to the naming template, it gets pretty fast and doesn't really add that much more time, and it saves an insane amount of time when reporting. We essentially don't have to spend much time making creative reports anymore. As long as we're on top of our naming, all of these are automatically going to update. We just have to go in and QA, make sure there isn't a missed character somewhere or something off. But if you can save like 95% of your reporting time, that can open up a whole day. I've talked to plenty of buyers who lose days to reporting. So this just opens up...
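A "spreadsheet- and formula-friendly" naming system usually comes down to a fixed delimiter and a fixed tag order, so one split yields clean columns. Kevin's actual template isn't shown, so the field names, delimiter, and example below are hypothetical.

```python
# Hedged sketch: parse a delimiter-based ad name into the tag columns a
# report pivots on. The tag order, names, and separator are invented;
# the real template isn't shown in the session.

FIELDS = ["launch_date", "format", "style", "value_prop", "variant"]

def parse_ad_name(name: str, sep: str = "__") -> dict:
    """Split one ad name into tag columns with zero manual entry.
    A malformed name fails loudly, which is exactly the QA step
    Kevin mentions (a missed character surfaces immediately)."""
    parts = name.split(sep)
    if len(parts) != len(FIELDS):
        raise ValueError(
            f"expected {len(FIELDS)} tags, got {len(parts)}: {name!r}")
    return dict(zip(FIELDS, parts))

row = parse_ad_name("2023-01-09__image__model-ugc__gifting__v03")
```

Because every tag lives in the name itself, any tool that can split a string (a spreadsheet formula, a BI tool, or Motion's report builder) can reconstruct the full breakdown without manual tagging.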
Evan Lee: And the biggest thing there is that when you stop losing that time, it's not like you just get it back and do nothing. Instead of manual grunt work putting together reports, you can now shift to decision making. At the end of the day, it's: okay, now I can spend time interpreting the data to know what to do next, instead of just putting the data together.
Kevin Kovach: Yeah, you have a chance to sit and think critically. And it really gives the brands and clients a lot to think about as well. These are not test campaigns right here; every single ad is tagged with a value prop tag, so these are core campaigns, gladiator style. I'm not controlling the spend anymore: what value props are driving the performance? And then we do have our controlled themes down here. So in sales and promotions, we can break things out for, say, Pierce and Pierce fine detergents, bonus points if you know what that's from, and break out the creative for each specific scent. There's an image style breakdown in addition to value props; we segment all of our creative by style. This one is for images, so it can be model UGC; product stylized, which is typically how we define any product shot that's been set up in a special way; product e-com shot, and we separately tag those general product e-com shots because sometimes they do surprisingly well. Like I said, just because something's ugly doesn't mean you shouldn't test it. And then just general graphics. So this is something David also has access to and will be using for creative strategy. In a way, this is where the media buyers plug into creative. And it's not necessarily Facebook-specific; it could also just be us staying on top of naming systems. The creative team has the flexibility to go in here whenever they want and come away with learnings.
Evan Lee: Love it. Uh, one thing I I have a follow up on is just like all of these reports stem from the naming. Is there any like examples or something you can share that showcases to our audience specific things that you're tracking if it is consistent?
Kevin Kovach: Yeah. So the consistent part is going to be here: we have a style tag that runs through all of the image styles, and we also have a video theme tag running through all the different video styles. David is actually in the process of combining those two tags into one, so we'll have an update here soon. And with that update, I'm going to add a few more tags. One of those is going to be the creative pillar, which is how the entire team is going to be thinking about and approaching their creative; as a reminder, those are education, emotion and authority. So soon we'd have that ongoing breakdown here as well, plus value prop. And then honestly, we'll have some custom tags for each client. One of the things we like to do at onboarding is ask: what questions can we answer for you? Then we try to come up with a strategy to use Facebook as that focus group and actually give them the data that can help them make big, expensive decisions. Sometimes that will require a custom tag. So on brands that do a lot of whitelisting or influencer work, I'll add a page tag, so I can label which actual page an ad is coming from. But that's not something I need on most accounts if 90% of the ads are just coming from the brand page.
Evan Lee: Love it. And a quick plug, everyone: you can see Kevin here using Motion. Feel free to check us out and book some time so you can talk about how to make this come to life for you all. Cool.
Kevin Kovach: Oh yeah, it's crazy, because you just get so much depth with these reports, and they don't take that much time as long as you're on top of your naming. Everything has that depth to it.
Evan Lee: Love it. Okay. And yeah, with that, we've had buyers take on more accounts, which positively impacts their income.
Kevin Kovach: Yeah.
Evan Lee: Hand in hand, business and goal, business, everyone's goals being met. to do here with our last 10 minutes is just a bunch of questions that have started to pile up. So let's do our best to try and get through them. Um, but I think like where I'll kick off is the one that has the most votes, 15, and hopefully it's a nice and easy lob. is one here.
A question from Iain Harris appears on screen: "Can you define 'share IDs'?"
Evan Lee: Can you define share IDs? And I think you had mentioned this like pretty immediately that we had talked about.
Kevin Kovach: Yeah, so I I think it's super, super important and I think it honestly in a lot of cases it is even more important than structure. Um, so when we say when you launch an ad, um, you're going to be creating two IDs in the account. There there's going to be the ad which is going to be created every time you create an ad, but also there's going to be an unpublished post made that's actually used to show the ad and that's where the or the or the post ID, those can be used interchangeably. And you can actually share that content ID and the content ID is where all of the social proof is stored, not the ad ID. So that way you can have, you know, multiple ads that are all actually using the same ID. So if you're running them in like 10 different audiences, all of the engagements going to stack. So that's how you see those posts in Facebook massive amount of social proof. Like I guarantee you, except in the most extreme situations, those are shared post IDs.
Evan Lee: Awesome. Everybody, I hope that one helps. If it doesn't, just throw it into the chat really quick so we can clarify. But I wanted to jump to the second most voted question and it comes from Justin here.
A question from Justin Regis appears on screen: "Quick ?? regarding 'statistical significance' - our agency agrees on a $1k spend minimum for an 1st run adset (usually with 4-6 ads). Does that match with what Kevin uses as a benchmark?"
Evan Lee: So quick question regarding stat sig. Our agency agrees that 1k spend minimum for the first ad set run with typically four to six ads. Does that match a benchmark that you might use to determine relevance? So 1k spend, does that stat sig for you?
Kevin Kovach: So I actually do one ad per ad set. Um, yeah, I know it looks like I actually do I mean I I I do test this way. I'm not in any way saying it's wrong because you know, the most important thing is you find a way to test and you know, how you do it is going to vary by account, by situation, by budget. Um, but I want to make sure that every single variant gets the same amount of spend or else spend is going to be a technically a variable and if you're calling it a single variable test and you're just trying to test creative, but then spend is also a variable, it's no longer going to be reliable. Um, and honestly, for the traffic tests, you know, 1k is actually, you know, 1k for a week, you know, that's probably going to do it. Um, sometimes we can even get the get it the same results at like $500, but just let it run for the full week just to be sure. On the purchase side, man, that varies so, so, so much. Um, I have one account right now that's like a $800 AOV. to test that to to test with like the purchase objective there and actually want to get through the learning phase, that would be a considerable investment.
Evan Lee: Cool. So we have some questions now about the testing. I think this goes hand in hand with what you're uh, with what you're talking about here.
A question from Andres Amado appears on screen: "What methodology do you prefer to test new creative frameworks? A/B tests OR run all the creatives in the same ad group?"
Evan Lee: Andreas asks, hopefully I've pronounced that correctly. What methodology do you prefer to test new creative frameworks? or run all the creatives in the same ad group? one ad per ad set, maybe it's like budget optimization. Are you typically running there?
Kevin Kovach: Yeah. In the ideal situation, both. Um, you know, sometimes and performance may not allow both, but that's why I like having the traffic test because I know by doing this for a while, it's reliable and it's not going to be like an a huge opportunity cost, you know, regardless of like the overall budget of the account unless it's like a couple hundred dollars. Um, and then on the side, I do like to identify that algorithm winner, that gladiator style free for all. That can be a dynamic DCO test, um, depending on the amount of creative. More recently, I've been just, say if I have like 30 variants, just launch them in an advantage plus campaign. Um, but I still want that traffic campaign on the side, so I have the flexibility to turn off that purchase campaign and find that the opportunity cost is insane and not really um, great for performance.
Evan Lee: So people are definitely curious about the traffic campaign because Justin comes with another question of just like, so,
A question from Justin Regis appears on screen: "...so is the idea to use a Traffic test with new creative and THEN move those 'winners' over to a Conversion campaign?"
Kevin Kovach: Yeah, this is where David would have been really, you know, I had to convert him essentially once he started. Um, so before he started, I was comfortable running the traffic tests, um, just by themselves for like a year. Um, I knew he would have some questions, um, and he was a little skeptical. So I started running them side by side again on accounts that, you know, was didn't negatively impact performance and showed him, you know, three consecutive reports, look, traffic campaign and this purchase campaign identified the same three winners. Um, so that's how I got that buy in. The key is you have to introduce friction. So you can kind of filter out the bouncing. And that's why traffic campaigns aren't helpful for um, filling the funnel because a lot of them do bounce, but if you introduce friction and use say content views plus any sort of event, um, as your numerator in the uh conversion rate formula, you can actually get very direct results as to what happens in the conversion campaign. Um, 90% of the time, it'll identify like the top three ads. That order might change a little bit. Like I said earlier, I'm generating a playoff field here, not just like a single winner. So that same three is still going to be valuable to me, but the benefits of the traffic test is we get those results a lot faster. Sometimes a week, maybe a two two tops, three is the absolute max if we want to do that omni channel um, event um, aspect of that.
Evan Lee: That's so interesting because something I've also like selfishly been curious about is is Facebook being the place where you're able to to to access a large amount of people like you described it, it's just being like a more or less. Um, Facebook might tell you one thing, like I'm probably not giving a great example that'll lead into a question, but let's say you have a couple personas, one that's like Daisy Dukes and then sleepy Sally, whatever it might be, right? Daisy Dukes, we thought would be the best, but we see sleepy Sally doing really well on for example. But then is Daisy Dukes like, should we restructure everything around that or is it somebody we should go after on a Tik Tok? Or is it somebody we should go after on an email more? How do you, how do you uh, like navigate those type of insights?
Kevin Kovach: So similar to uh what we did with like that value prop breakdown earlier, I'm uh going to verify that there is no particular gender or demo that's throwing off these results when you're looking at them at the aggregated level. So in that earlier example, if we're just looking at gifting at the aggregated level, it would have been a clear and decisive winner. But when you actually separate it out, you see, okay, it's only actually a clear and decisive winner with 55 plus and that's where the algorithm wanted to go. Um, you know, the true winner is going to be the one that actually activates as many of those demos as possible. And that's the one I'm most interested in. So if as long as you know, there are no shenanigans, then yeah, I'll follow up and say, okay, this might be something that's more appropriate for or a different channel. Um, but I I try and be as agnostic as possible with Facebook. Like in the end, it doesn't matter what our opinion is, it's the algorithms. So I'll test something that no one likes. I don't care. Um, just in case, you never know. Ugly ads do really well and sometimes you can just keep getting uglier and it still does really well. Um, and then just adds to that diversity too because if you have like ugly ads and then with your videos, like I understand from the branding perspective that concerns people, but on the algorithmic perspective, that increases your reach so much. Um, because Facebook knows the type of content that people historically interact with and is more inclined to serve that to them. Which is why having so many um, is often a key element to scaling.
Evan Lee: I said, you know, it's easier said than done. You have to test into it. And if you follow the Pareto Pareto principle, you can assume that 80% of creative is probably going to fail. Um, but at least you can learn from it.
Kevin Kovach: Yeah, yeah, yeah.
Evan Lee: And I think you like have a beginner's mind to it because it's just like, I don't know anything. I don't think like who knows if this works, but throw it in and we'll see what happens at the end of the day, right?
Kevin Kovach: Yeah. Like if we're testing responsibly, making sure it doesn't disrupt momentum in core campaigns or it's not getting like a crazy amount of um, budget so the opportunity cost of this test is super high. Yeah, go for it. Um, this also I think is a way to beneficially lead the team because I really don't shoot down ideas. Um, you know, it's a way of also keeping myself in check because I think it's easy for buyers no matter what seniority level to fall into patterns or to like become stubborn about something. Um, you know, even I I fall victim to that. So just having the freedom to test what they want, when they want with approval from the client, obviously, as long as we test responsibly, you know, they know I'm not going to like come down on them because I just disagree with it. Um, but those moments actually, you know, they do check me. Um, I've seen a manual carousel do well recently. I had totally absolutely written those off for like a year. Um, so yeah, those moments are cool.
Evan Lee: Love it. Uh, everybody, I have one last question for Kevin on my end. Um, I need to ask it, but just as a quick reminder, please keep up voting and getting your questions into the Q&A tab. Going to be jumping there right after this one. And one last thing, I saw that Talia had mentioned in the chat, like are we supposed to be seeing that same slide? Yes, uh, we did have some stuff to share, but again, running into some technical difficulties. So we're making best as we can just on this one. But Kev, last question that I have for you, I would be ashamed if I did not ask because whenever anyone talks about naming conventions, I say I have a guy and that's where I start to point to. So for context, everybody, uh, Kevin is like a wiz when it comes to naming conventions. So something I'm curious about now is like we've worked on all of this great creative that's now produced and like you'd mentioned, you have a specific way you test into the account. So I'm curious about is like how do you set up naming conventions and how does that then correlate into your analysis after those are live?
Kevin Kovach: Yeah, for sure. So before we met, um, and figured out and I learned about motion, um, you know, my background is in like statistics and data engineering. So I was thinking of and concepting out a naming system that would be very spreadsheet and formula friendly because I want to be able to into a database as fast as possible and reliably as possible, minimize the QA. If there are any like data scientists in the chat, like they'll know about 90% of the time you spend doing data science is actually getting the into a usable format. And having a, you know, expansive and reliable naming system just saves so much time. And then also allows us to provide like insane depth to our reports and have them be like real time in motion. Um, so you say we didn't have anything to share, but I can share my screen. Hey, let's do it. I can walk through what uh David prepared. We'll give me a minute here.
Evan Lee: And you might have to zoom in for folks just so we can see it a little bit.
Kevin Kovach: Oh yeah.
Evan Lee: A little more. Give me a couple more if you can. I'm sorry. Yeah, yeah, yeah. 175. Yeah, there we go. There we go. I like it.
Kevin Kovach: So with a detailed naming system, it's filled with qualitative information that we can't like just simply pull into a column and export. And that allows us to build these real-time reports that clients can have access to and get a sense of, you know, some of the more basic things like, you know, what ad formats are working, you know, here I opened up a few tabs because uh so no one had to wait for motion to load where things are going in the funnel. Um, I did I can't um, share the exact creative and give away the client. Um, this is a pseudonym, but pretty much across all of our we will have these four categories. reports that the client can always access and go into that will be always updating, looking at prospecting audiences, retargeting audiences, but on the creative side, by um, value prop. So part of our naming system is going to have that value prop tag. And we use the uh Facebook naming template to make it as easy and fast as possible even though it, you know, looks very, very complicated and long and, you know, sometimes with new buyers a little overwhelming the first week. But once they get used to using the naming template and this gets pretty fast and doesn't really add that much more time, but it saves an insane amount of time when reporting. So we essentially don't have to spend that much time making creative reports anymore. As long as we're on top of our all of these are automatically going to update. Um, and we just kind of have to go in and and QA, make sure we didn't there isn't like a missed character somewhere or something off. But either way that, you know, if you can save like 95% of your reporting time, you know, that can open up a whole day. Like I've talked to plenty of buyers who lose days to reporting. Um, so this just opens up
Evan Lee: And the biggest thing there is just like losing time, losing time, it's not like you just get it back and do nothing. It's like instead of just manual like grunt work and putting together reports, you can now shift to decision making at the end of the day. It's like, okay, now I can spend time interpreting the data to know what to do next instead of just putting the data together ultimately.
Kevin Kovach: Yeah, you know, you have a chance to sit and kind of think critically. Um, and really just it's a it gives the brands and clients a lot to think about as well. Like these are these are not test campaigns right here. Like every single ad is tagged with a value prop tag. So these are like core campaigns, gladiator style. I'm not controlling the spend anymore. What value props are driving the performance? Um, and then also just kind of when we do have our controlled theme down here. So in the sales and promotions, we can break things out, you know, for Pierce and Pierce fine detergents, bonus points if you know what that's from. Um, and break out the creative for each specific scent. Image style breakdown in addition to value props, like we segment all of our creative by style. So this is for images. So it can be model UGC, product stylized, that's typically how, you know, we define any sort of product shot that's been set up in a special way. product e-com shot, and then we separately will tag those uh general product e-com shots because sometimes they do surprisingly well. Um, I said, you know, just because something's ugly, you know, you should test it. Um, and then just general graphics. So this is something that David also has access to and will be creative strategy. So in a way, this is where the where the media buyers come in to like creative. But it's not necessarily like Facebook, it could also just be us being on top of naming systems. So the creative team flexibility to go in here whenever they want and come away with learnings.
Evan Lee: Love it. One follow-up I have: all of these reports stem from the naming. Are there any examples you can share that showcase the specific things you're tracking, if it is consistent?
Kevin Kovach: Yeah. So the consistent part is going to be here. We have a style tag that goes through all of the image styles, and we also have a video theme tag going through all the different video styles. David is actually in the process of combining those two tags into one, so we'll be having an update here soon. And with that update, I'm going to add a few more tags. One of those is going to be the creative pillar, which is how the entire team is going to be thinking about and approaching their creative; as a reminder, those are education, emotion, and authority. So soon we'll have that ongoing breakdown here as well, plus value prop. And then honestly, we'll have some custom tags for each client. One of the things we like to do on onboarding is ask, what questions can we answer for you? And then try to come up with a strategy to use Facebook as that focus group and actually give them the data that can help them make big, expensive decisions. Sometimes that will require a custom tag. So on brands that do a lot of whitelisting or influencers, I'll add a page tag so I can label what actual page the ad is coming from. But that's not something I need on most accounts if 90% of the ads are just coming from the brand page.
Evan Lee: Love it. And a quick plug for anyone watching: you can see Kevin here using Motion. Feel free to check us out and book some time so we can talk about how to make this come to life for you all. Cool.
Kevin Kovach: Oh yeah, it's crazy, because you just get so much depth with these reports, and they don't take that much time as long as you're on top of your naming so it has that depth to it.
Evan Lee: Love it. Okay. And yeah, with that, we've had buyers take on more accounts, which positively impacts their income.
Kevin Kovach: Yeah.
Evan Lee: Hand in hand, the business goals and everyone's goals being met. What I want to do here with our last 10 minutes is get through a bunch of questions that have started to pile up. So let's do our best to try and get through them. I'll kick off with the one that has the most votes, 15, and hopefully it's a nice and easy lob. It is this one here.
A question from Iain Harris appears on screen: "Can you define 'share IDs'?"
Evan Lee: Can you define share IDs? And I think you had mentioned this pretty early on in what we talked about.
Kevin Kovach: Yeah, I think it's super, super important, and honestly, in a lot of cases it's even more important than structure. So when you launch an ad, you're going to be creating two IDs in the account. There's going to be the ad ID, which is created every time you create an ad, but there's also going to be an unpublished post made that's actually used to show the ad, and that's the content ID, or the post ID; those can be used interchangeably. You can actually share that content ID, and the content ID is where all of the social proof is stored, not the ad ID. That way you can have multiple ads that are all actually using the same post ID. So if you're running them in, say, 10 different audiences, all of the engagement is going to stack. That's how you see those posts on Facebook with a massive amount of social proof. I guarantee you, except in the most extreme situations, those are shared post IDs.
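For anyone who wants to see what "sharing" the post ID looks like in practice: in Meta's Marketing API, an ad creative can point at an existing page post via the `object_story_id` field, formatted as `"<page_id>_<post_id>"`, so every ad built on it accumulates engagement on the same post. The sketch below only builds the request parameters (the page and post IDs are placeholders), so it is a hedged illustration rather than a full API call.

```python
# Sketch of reusing one post ID across many ads so social proof stacks.
# `object_story_id` is the Marketing API field for attaching a creative to an
# existing page post; the IDs below are placeholders, not real objects.

def shared_post_creative_params(page_id: str, post_id: str, name: str) -> dict:
    """Build creative params that attach an ad to an existing post,
    instead of generating a fresh post with fresh, empty social proof."""
    return {
        "name": name,
        "object_story_id": f"{page_id}_{post_id}",  # the shared content/post ID
    }

params = shared_post_creative_params("111222333", "444555666", "gifting_v2_shared")
```

Each new ad set or audience then reuses this one creative, so likes, comments, and shares all land on the same underlying post.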
Evan Lee: Awesome. Everybody, I hope that one helps. If it doesn't, just throw it into the chat really quick so we can clarify. But I wanted to jump to the second most voted question and it comes from Justin here.
A question from Justin Regis appears on screen: "Quick ?? regarding 'statistical significance' - our agency agrees on a $1k spend minimum for an 1st run adset (usually with 4-6 ads). Does that match with what Kevin uses as a benchmark?"
Evan Lee: So, quick question regarding stat sig: our agency agrees on a $1k spend minimum for a first ad set run, typically with four to six ads. Does that match a benchmark that you might use to determine relevance? So is $1k spend stat sig for you?
Kevin Kovach: So I actually do one ad per ad set. And I do test this way; I'm not in any way saying yours is wrong, because the most important thing is that you find a way to test, and how you do it is going to vary by account, by situation, by budget. But I want to make sure that every single variant gets the same amount of spend, or else spend is technically going to be a variable. If you're calling it a single-variable test and you're just trying to test creative, but then spend is also a variable, it's no longer going to be reliable. And honestly, for the traffic tests, $1k for a week, that's probably going to do it. Sometimes we can even get the same results at like $500, but just let it run for the full week to be sure. On the purchase side, man, that varies so, so much. I have one account right now that's like an $800 AOV. To test with the purchase objective there and actually get through the learning phase, that would be a considerable investment.
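The equal-spend point is what makes a simple significance check valid: with the same budget behind each variant, you can compare conversion rates directly. Here is a generic two-proportion z-test sketch; it's a standard stats check, not a tool from the talk, and the click and conversion counts are made up.

```python
# Hedged sketch: two-proportion z-test for comparing two ads' conversion
# rates, assuming each variant received equal spend (so spend isn't a
# second variable). Numbers are illustrative.
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """z-score for H0: the two ads convert at the same rate."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return ((conv_a / n_a) - (conv_b / n_b)) / se

# 500 clicks each (equal spend); ad A converts 45 times, ad B converts 25.
z = two_proportion_z(45, 500, 25, 500)
significant = abs(z) > 1.96  # ~95% confidence, two-tailed
```

This also shows why high-AOV purchase tests get expensive: with few conversions per variant, the standard error stays large and the z-score rarely clears the threshold until considerable spend has accumulated.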
Evan Lee: Cool. So we have some questions now about the testing. I think this goes hand in hand with what you're talking about here.
A question from Andres Amado appears on screen: "What methodology do you prefer to test new creative frameworks? A/B tests OR run all the creatives in the same ad group?"
Evan Lee: Andres asks, hopefully I've pronounced that correctly: what methodology do you prefer to test new creative frameworks? A/B tests, or running all the creatives in the same ad group? Is it one ad per ad set, or maybe budget optimization? What are you typically running there?
Kevin Kovach: Yeah. In the ideal situation, both. Sometimes performance may not allow both, but that's why I like having the traffic test: I know from doing this for a while that it's reliable and it's not going to be a huge opportunity cost regardless of the overall budget of the account, unless it's like a couple hundred dollars. And then on the side, I do like to identify that algorithm winner with the gladiator-style free-for-all. That can be a DCO test, depending on the amount of creative. More recently, say if I have like 30 variants, I'll just launch them in an Advantage+ campaign. But I still want that traffic campaign on the side, so I have the flexibility to turn off that purchase campaign if I find the opportunity cost is insane and it's not great for performance.
Evan Lee: So people are definitely curious about the traffic campaign, because Justin comes back with another question:
A question from Justin Regis appears on screen: "...so is the idea to use a Traffic test with new creative and THEN move those 'winners' over to a Conversion campaign?"
Kevin Kovach: Yeah, this is where David would have been really helpful; I had to convert him, essentially, once he started. Before he started, I was comfortable running the traffic tests just by themselves for like a year. I knew he would have some questions, and he was a little skeptical. So I started running them side by side again, on accounts where it didn't negatively impact performance, and showed him three consecutive reports: look, the traffic campaign and this purchase campaign identified the same three winners. That's how I got that buy-in. The key is you have to introduce friction so you can filter out the bouncing. That's why traffic campaigns aren't helpful for filling the funnel, because a lot of that traffic does bounce. But if you introduce friction and use, say, content views plus some additional event as your numerator in the conversion rate formula, you can actually get very direct results as to what happens in the conversion campaign. 90% of the time, it'll identify the top three ads. The order might change a little bit, but like I said earlier, I'm generating a playoff field here, not just a single winner, so that same three is still going to be valuable to me. And the benefit of the traffic test is we get those results a lot faster: sometimes a week, maybe two tops, three is the absolute max if we want to do that omnichannel aspect of it.
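The friction-filtered scoring Kevin describes can be sketched in a few lines: use content views plus a deeper event as the numerator over clicks, then keep the top ads as the "playoff field". The field names, the choice of add-to-cart as the extra event, and the numbers are assumptions for illustration.

```python
# Sketch of a traffic-test ranking with friction: bounced clicks never fire
# content_view, so they drop out of the numerator. Ad names and metrics are
# invented for the example.

def playoff_field(ads: list[dict], top_n: int = 3) -> list[str]:
    """Rank ads by a friction-filtered proxy conversion rate."""
    def proxy_rate(ad: dict) -> float:
        return (ad["content_views"] + ad["add_to_carts"]) / ad["clicks"]
    ranked = sorted(ads, key=proxy_rate, reverse=True)
    return [ad["name"] for ad in ranked[:top_n]]

ads = [
    {"name": "ugc_v1",      "clicks": 400, "content_views": 220, "add_to_carts": 30},
    {"name": "static_v2",   "clicks": 380, "content_views": 150, "add_to_carts": 12},
    {"name": "ugly_v3",     "clicks": 410, "content_views": 260, "add_to_carts": 41},
    {"name": "carousel_v1", "clicks": 390, "content_views": 120, "add_to_carts": 9},
]
winners = playoff_field(ads)
```

The output is a ranked shortlist to promote into the purchase campaign, not a single declared winner, matching the "playoff field" framing.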
Evan Lee: That's so interesting, because something I've also selfishly been curious about is Facebook being the place where you can access a large number of people, more or less as the focus group you described. I'm probably not giving a great example that'll lead into a question, but let's say you have a couple of personas, one that's like Daisy Dukes and another that's Sleepy Sally, whatever it might be. We thought Daisy Dukes would be the best, but we see Sleepy Sally doing really well on Facebook, for example. Should we restructure everything around that? Or is she somebody we should go after on TikTok, or more over email? How do you navigate those types of insights?
Kevin Kovach: So, similar to what we did with that value prop breakdown earlier, I'm going to verify that there is no particular gender or demo that's throwing off the results when you look at them at the aggregated level. In that earlier example, if we were just looking at gifting at the aggregated level, it would have been a clear and decisive winner. But when you actually separate it out, you see, okay, it's only a clear and decisive winner with 55-plus, and that's just where the algorithm wanted to go. The true winner is going to be the one that actually activates as many of those demos as possible, and that's the one I'm most interested in. So as long as there are no shenanigans like that, then yeah, I'll follow up and say, okay, this might be something that's more appropriate for a different channel. But I try to be as agnostic as possible with Facebook. In the end, it doesn't matter what our opinion is; it's the algorithm's. So I'll test something that no one likes, I don't care, just in case. You never know. Ugly ads do really well, and sometimes you can just keep getting uglier and it still does really well. And that adds to your diversity too. If you have ugly ads alongside your videos, I understand from the branding perspective that concerns people, but from the algorithmic perspective, that increases your reach so much, because Facebook knows the type of content people historically interact with and is more inclined to serve that to them. Which is why having so much creative diversity is often a key element to scaling.
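The demo sanity check can be sketched as: compute the winner per segment, then prefer the value prop that wins the most segments over one carried by a single demo. Segment labels and rates below are made up for the example.

```python
# Sketch of the demo-skew check: an aggregate winner that only wins one
# age segment is the algorithm's preference, not the broad winner.
# Segments, value props, and rates are invented for illustration.

def winners_by_segment(results: dict) -> dict:
    """For each demo segment, return the value prop with the best rate."""
    return {segment: max(props, key=props.get) for segment, props in results.items()}

results = {
    "18-34": {"gifting": 0.8, "self_care": 1.9},
    "35-54": {"gifting": 1.0, "self_care": 1.6},
    "55+":   {"gifting": 4.2, "self_care": 1.1},  # gifting only wins here
}
per_demo = winners_by_segment(results)

# The "true winner" activates the most segments, even if a single-demo
# spike makes another value prop win in aggregate.
all_props = set().union(*(props.keys() for props in results.values()))
broad_winner = max(all_props, key=lambda p: sum(w == p for w in per_demo.values()))
```

Here `gifting` could easily win the blended numbers on the strength of 55-plus alone, while `self_care` wins two of the three segments and is the safer creative to scale.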
Evan Lee: Like you said, it's easier said than done; you have to test into it. And if you follow the Pareto principle, you can assume that 80% of creative is probably going to fail, but at least you can learn from it.
Kevin Kovach: Yeah, yeah, yeah.
Evan Lee: And I think you have a beginner's mind about it. It's like, I don't know anything, who knows if this works, but throw it in and we'll see what happens at the end of the day, right?
Kevin Kovach: Yeah. If we're testing responsibly, making sure it doesn't disrupt momentum in core campaigns and it's not getting a crazy amount of budget so that the opportunity cost of the test is super high, then yeah, go for it. This also, I think, is a way to beneficially lead the team, because I really don't shoot down ideas. It's a way of keeping myself in check too, because I think it's easy for buyers at any seniority level to fall into patterns or become stubborn about something; even I fall victim to that. So the team has the freedom to test what they want, when they want, with approval from the client, obviously, as long as we test responsibly. They know I'm not going to come down on them just because I disagree with it. And those moments actually do check me. I've seen a manual carousel do well recently, and I had totally written those off for like a year. So yeah, those moments are cool.
Evan Lee: Love it. Everybody, I have one last question for Kevin on my end, and just as a quick reminder, please keep upvoting and getting your questions into the Q&A tab; we're going to be jumping there right after this one. And one last thing: I saw that Talia had mentioned in the chat, are we supposed to be seeing that same slide? Yes, we did have some stuff to share, but again, we're running into some technical difficulties, so we're making the best of it with this one. But Kev, the last question that I have for you, I would be ashamed if I did not ask, because whenever anyone talks about naming conventions, I say I have a guy, and that's where I start to point. For context, everybody, Kevin is a wiz when it comes to naming conventions. So what I'm curious about now is: we've worked on all of this great creative that's now produced, and like you mentioned, you have a specific way you test it into the account. How do you set up naming conventions, and how does that then correlate into your analysis after those ads are live?
Kevin Kovach: Yeah, for sure. So before we met and I learned about Motion, you know, my background is in statistics and data engineering. So I was concepting out a naming system that would be very spreadsheet and formula friendly, because I want to be able to get the data into a database as fast and as reliably as possible and minimize the QA. If there are any data scientists in the chat, they'll know that about 90% of the time you spend doing data science is actually getting the data into a usable format. Having an expansive and reliable naming system just saves so much time, and it also allows us to provide insane depth to our reports and have them be real time in Motion. So you said we didn't have anything to share, but I can share my screen and walk through what David prepared. Give me a minute here.
Evan Lee: And you might have to zoom in for folks just so we can see it a little bit.
Kevin Kovach: Oh yeah.
Evan Lee: A little more. Give me a couple more if you can, sorry. Yeah, yeah, 175. There we go. There we go. I like it.
Evan Lee: And the biggest thing there is just like losing time, losing time, it's not like you just get it back and do nothing. It's like instead of just manual like grunt work and putting together reports, you can now shift to decision making at the end of the day. It's like, okay, now I can spend time interpreting the data to know what to do next instead of just putting the data together ultimately.
Kevin Kovach: Yeah, you know, you have a chance to sit and kind of think critically. Um, and really just it's a it gives the brands and clients a lot to think about as well. Like these are these are not test campaigns right here. Like every single ad is tagged with a value prop tag. So these are like core campaigns, gladiator style. I'm not controlling the spend anymore. What value props are driving the performance? Um, and then also just kind of when we do have our controlled theme down here. So in the sales and promotions, we can break things out, you know, for Pierce and Pierce fine detergents, bonus points if you know what that's from. Um, and break out the creative for each specific scent. Image style breakdown in addition to value props, like we segment all of our creative by style. So this is for images. So it can be model UGC, product stylized, that's typically how, you know, we define any sort of product shot that's been set up in a special way. product e-com shot, and then we separately will tag those uh general product e-com shots because sometimes they do surprisingly well. Um, I said, you know, just because something's ugly, you know, you should test it. Um, and then just general graphics. So this is something that David also has access to and will be creative strategy. So in a way, this is where the where the media buyers come in to like creative. But it's not necessarily like Facebook, it could also just be us being on top of naming systems. So the creative team flexibility to go in here whenever they want and come away with learnings.
Evan Lee: Love it. Uh, one thing I I have a follow up on is just like all of these reports stem from the naming. Is there any like examples or something you can share that showcases to our audience specific things that you're tracking if it is consistent?
Kevin Kovach: Yeah. So the consistent part is going to be here. So we have a style tag that's going to be going through all of the image styles. And then we also have a video theme tag going through all the different video styles. David actually is in the process of combining those two tags into one. So we'll be having an update here soon. And with that update, I'm going to add a few more tags. So, um, one of those is going to be the creative pillar, which is how the entire team is going to be thinking about and approaching their creative. as a reminder, we're going to be education, emotion and authority. So soon we would have that ongoing breakdown here as well. Value prop. And then honestly, we'll have some custom tags for each client. So one of the things we like to do on onboarding is like, what questions can we answer for you? And then try and come up with a strategy to use Facebook as that focus group to actually give them the data that can help them make big expensive decisions. So sometimes that will require like a custom tag. Um, so on brands that do like a lot of white listing or influencers, I'll add like a page tag. So I can you know, label what actual page is this coming from. But that's not something I need on most accounts if, you know, 90% of the ads are just coming from the uh the brand page.
Evan Lee: Love it. And if anyone plug, you can see Kevin here using motion. Feel free to check us out. Uh, book some time so you can talk about how to make this come to life for you all. Cool.
Kevin Kovach: Oh yeah, it it's crazy. Um, because I mean, like you just get so much depth with these reports and they don't take that much time as long as you're on top of your has that uh that depth to it.
Evan Lee: Love it. Okay. And yeah, with that, you know, we've had buyers take on more accounts, which uh positively impacts their income.
Kevin Kovach: Yeah.
Evan Lee: Hand in hand, business and goal, business, everyone's goals being met. to do here with our last 10 minutes is just a bunch of questions that have started to pile up. So let's do our best to try and get through them. Um, but I think like where I'll kick off is the one that has the most votes, 15, and hopefully it's a nice and easy lob. is one here.
A question from Iain Harris appears on screen: "Can you define 'share IDs'?"
Evan Lee: Can you define share IDs? And I think you had mentioned this like pretty immediately that we had talked about.
Kevin Kovach: Yeah, so I I think it's super, super important and I think it honestly in a lot of cases it is even more important than structure. Um, so when we say when you launch an ad, um, you're going to be creating two IDs in the account. There there's going to be the ad which is going to be created every time you create an ad, but also there's going to be an unpublished post made that's actually used to show the ad and that's where the or the or the post ID, those can be used interchangeably. And you can actually share that content ID and the content ID is where all of the social proof is stored, not the ad ID. So that way you can have, you know, multiple ads that are all actually using the same ID. So if you're running them in like 10 different audiences, all of the engagements going to stack. So that's how you see those posts in Facebook massive amount of social proof. Like I guarantee you, except in the most extreme situations, those are shared post IDs.
Evan Lee: Awesome. Everybody, I hope that one helps. If it doesn't, just throw it into the chat really quick so we can clarify. But I wanted to jump to the second most voted question and it comes from Justin here.
A question from Justin Regis appears on screen: "Quick ?? regarding 'statistical significance' - our agency agrees on a $1k spend minimum for an 1st run adset (usually with 4-6 ads). Does that match with what Kevin uses as a benchmark?"
Evan Lee: So quick question regarding stat sig. Our agency agrees that 1k spend minimum for the first ad set run with typically four to six ads. Does that match a benchmark that you might use to determine relevance? So 1k spend, does that stat sig for you?
Kevin Kovach: So I actually do one ad per ad set. Um, yeah, I know it looks like I actually do I mean I I I do test this way. I'm not in any way saying it's wrong because you know, the most important thing is you find a way to test and you know, how you do it is going to vary by account, by situation, by budget. Um, but I want to make sure that every single variant gets the same amount of spend or else spend is going to be a technically a variable and if you're calling it a single variable test and you're just trying to test creative, but then spend is also a variable, it's no longer going to be reliable. Um, and honestly, for the traffic tests, you know, 1k is actually, you know, 1k for a week, you know, that's probably going to do it. Um, sometimes we can even get the get it the same results at like $500, but just let it run for the full week just to be sure. On the purchase side, man, that varies so, so, so much. Um, I have one account right now that's like a $800 AOV. to test that to to test with like the purchase objective there and actually want to get through the learning phase, that would be a considerable investment.
Evan Lee: Cool. So we have some questions now about the testing. I think this goes hand in hand with what you're uh, with what you're talking about here.
A question from Andres Amado appears on screen: "What methodology do you prefer to test new creative frameworks? A/B tests OR run all the creatives in the same ad group?"
Evan Lee: Andreas asks, hopefully I've pronounced that correctly. What methodology do you prefer to test new creative frameworks? or run all the creatives in the same ad group? one ad per ad set, maybe it's like budget optimization. Are you typically running there?
Kevin Kovach: Yeah. In the ideal situation, both. Um, you know, sometimes and performance may not allow both, but that's why I like having the traffic test because I know by doing this for a while, it's reliable and it's not going to be like an a huge opportunity cost, you know, regardless of like the overall budget of the account unless it's like a couple hundred dollars. Um, and then on the side, I do like to identify that algorithm winner, that gladiator style free for all. That can be a dynamic DCO test, um, depending on the amount of creative. More recently, I've been just, say if I have like 30 variants, just launch them in an advantage plus campaign. Um, but I still want that traffic campaign on the side, so I have the flexibility to turn off that purchase campaign and find that the opportunity cost is insane and not really um, great for performance.
Evan Lee: So people are definitely curious about the traffic campaign because Justin comes with another question of just like, so,
A question from Justin Regis appears on screen: "...so is the idea to use a Traffic test with new creative and THEN move those 'winners' over to a Conversion campaign?"
Kevin Kovach: Yeah, this is where David would have been really, you know, I had to convert him essentially once he started. Um, so before he started, I was comfortable running the traffic tests, um, just by themselves for like a year. Um, I knew he would have some questions, um, and he was a little skeptical. So I started running them side by side again on accounts that, you know, was didn't negatively impact performance and showed him, you know, three consecutive reports, look, traffic campaign and this purchase campaign identified the same three winners. Um, so that's how I got that buy in. The key is you have to introduce friction. So you can kind of filter out the bouncing. And that's why traffic campaigns aren't helpful for um, filling the funnel because a lot of them do bounce, but if you introduce friction and use say content views plus any sort of event, um, as your numerator in the uh conversion rate formula, you can actually get very direct results as to what happens in the conversion campaign. Um, 90% of the time, it'll identify like the top three ads. That order might change a little bit. Like I said earlier, I'm generating a playoff field here, not just like a single winner. So that same three is still going to be valuable to me, but the benefits of the traffic test is we get those results a lot faster. Sometimes a week, maybe a two two tops, three is the absolute max if we want to do that omni channel um, event um, aspect of that.
Evan Lee: That's so interesting because something I've also like selfishly been curious about is is Facebook being the place where you're able to to to access a large amount of people like you described it, it's just being like a more or less. Um, Facebook might tell you one thing, like I'm probably not giving a great example that'll lead into a question, but let's say you have a couple personas, one that's like Daisy Dukes and then sleepy Sally, whatever it might be, right? Daisy Dukes, we thought would be the best, but we see sleepy Sally doing really well on for example. But then is Daisy Dukes like, should we restructure everything around that or is it somebody we should go after on a Tik Tok? Or is it somebody we should go after on an email more? How do you, how do you uh, like navigate those type of insights?
Kevin Kovach: So similar to uh what we did with like that value prop breakdown earlier, I'm uh going to verify that there is no particular gender or demo that's throwing off these results when you're looking at them at the aggregated level. So in that earlier example, if we're just looking at gifting at the aggregated level, it would have been a clear and decisive winner. But when you actually separate it out, you see, okay, it's only actually a clear and decisive winner with 55 plus and that's where the algorithm wanted to go. Um, you know, the true winner is going to be the one that actually activates as many of those demos as possible. And that's the one I'm most interested in. So if as long as you know, there are no shenanigans, then yeah, I'll follow up and say, okay, this might be something that's more appropriate for or a different channel. Um, but I I try and be as agnostic as possible with Facebook. Like in the end, it doesn't matter what our opinion is, it's the algorithms. So I'll test something that no one likes. I don't care. Um, just in case, you never know. Ugly ads do really well and sometimes you can just keep getting uglier and it still does really well. Um, and then just adds to that diversity too because if you have like ugly ads and then with your videos, like I understand from the branding perspective that concerns people, but on the algorithmic perspective, that increases your reach so much. Um, because Facebook knows the type of content that people historically interact with and is more inclined to serve that to them. Which is why having so many um, is often a key element to scaling.
Evan Lee: Like I said, you know, it's easier said than done. You have to test into it. And if you follow the Pareto principle, you can assume that 80% of creative is probably going to fail. But at least you can learn from it.
Kevin Kovach: Yeah, yeah, yeah.
Evan Lee: And I think you have a beginner's mind about it, because it's just like, I don't know anything, who knows if this works, but throw it in and we'll see what happens at the end of the day, right?
Kevin Kovach: Yeah. As long as we're testing responsibly, making sure it doesn't disrupt momentum in core campaigns and it's not getting a crazy amount of budget such that the opportunity cost of the test is super high, go for it. This is also, I think, a way to beneficially lead the team, because I really don't shoot down ideas. And it's a way of keeping myself in check, too, because I think it's easy for buyers at any seniority level to fall into patterns or become stubborn about something. Even I fall victim to that. So the team has the freedom to test what they want, when they want, with approval from the client, obviously, as long as we test responsibly. They know I'm not going to come down on them just because I disagree with an idea. But those moments actually do check me. I've seen a manual carousel do well recently, and I had absolutely written those off for like a year. So yeah, those moments are cool.
Evan Lee: Love it. Everybody, I have one last question for Kevin on my end. But just as a quick reminder, please keep upvoting and adding your questions in the Q&A tab; we're going to jump there right after this one. And one last thing: I saw Talia mentioned in the chat, are we supposed to be seeing that same slide? Yes, we did have some stuff to share, but again, we're running into some technical difficulties, so we're making the best of it. But Kev, last question from me, and I would be ashamed if I did not ask, because whenever anyone talks about naming conventions, I say "I have a guy," and Kevin is who I point to. For context, everybody, Kevin is a wiz when it comes to naming conventions. So what I'm curious about is: we've worked on all of this great creative that's now produced, and like you mentioned, you have a specific way you test it into the account. How do you set up naming conventions, and how do they then feed into your analysis once the ads are live?
Kevin Kovach: Yeah, for sure. So before we met and I learned about Motion, you know, my background is in statistics and data engineering. So I was concepting out a naming system that would be very spreadsheet- and formula-friendly, because I want to be able to get the data into a database as fast and as reliably as possible and minimize the QA. If there are any data scientists in the chat, they'll know that about 90% of the time you spend doing data science is actually getting the data into a usable format. Having an expansive and reliable naming system just saves so much time, and it also lets us provide insane depth in our reports and have them be real-time in Motion. So you say we didn't have anything to share, but I can share my screen.
Evan Lee: Hey, let's do it.
Kevin Kovach: I can walk through what David prepared. Give me a minute here.
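To illustrate the "spreadsheet- and formula-friendly" idea Kevin describes, here is a minimal sketch of a delimiter-based naming parser. The field names, order, and delimiter are assumptions for illustration, not the agency's actual template:

```python
# Hypothetical naming convention: fixed-order fields joined by one
# delimiter, so a single split() recovers every tag reliably.
# These field names and the delimiter are invented for illustration.
FIELDS = ["launch_date", "funnel_stage", "format", "style", "value_prop", "variant"]

def parse_ad_name(ad_name: str, delimiter: str = "_") -> dict:
    """Split an ad name into tagged fields; fail loudly on malformed
    names so QA catches them instead of silently mislabeling rows."""
    parts = ad_name.split(delimiter)
    if len(parts) != len(FIELDS):
        raise ValueError(f"expected {len(FIELDS)} fields, got {len(parts)}: {ad_name!r}")
    return dict(zip(FIELDS, parts))

row = parse_ad_name("20240115_PROS_IMG_UGC_GIFTING_v03")
print(row["style"], row["value_prop"])  # UGC GIFTING
```

The same names also split cleanly in a spreadsheet (e.g. `SPLIT(A1, "_")` in Google Sheets), which is what makes the convention formula-friendly.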
Evan Lee: And you might have to zoom in for folks just so we can see it a little bit.
Kevin Kovach: Oh yeah.
Evan Lee: A little more. Give me a couple more clicks if you can. Yeah, 175. There we go. I like it.
Kevin Kovach: So a detailed naming system is filled with qualitative information that you can't simply pull into a column and export. And that allows us to build these real-time reports that clients can access to get a sense of some of the more basic things, like what ad formats are working and where things are going in the funnel. I opened up a few tabs here so no one had to wait for Motion to load. I can't share the exact creative and give away the client, so this is a pseudonym, but pretty much across all of our accounts we will have these four categories: always-updating reports the client can always go into, looking at prospecting audiences and retargeting audiences, and, on the creative side, by value prop. So part of our naming system is going to have that value prop tag. We use the Facebook naming template to make it as easy and fast as possible, even though it looks very complicated and long, and new buyers sometimes find it a little overwhelming the first week. But once they get used to the naming template, it gets pretty fast and doesn't really add that much more time, and it saves an insane amount of time when reporting. We essentially don't have to spend much time making creative reports anymore; as long as we're on top of our naming, all of these are automatically going to update. We just have to go in and QA, make sure there isn't a missed character somewhere or something off. Either way, if you can save like 95% of your reporting time, that can open up a whole day. I've talked to plenty of buyers who lose days to reporting. So this just opens up...
Evan Lee: And the biggest thing there is, when you stop losing that time, it's not like you just get it back and do nothing. Instead of manual grunt work putting reports together, you can shift to decision making. It's like, okay, now I can spend time interpreting the data to know what to do next instead of just putting the data together.
Kevin Kovach: Yeah, you have a chance to sit and think critically. And it gives the brands and clients a lot to think about as well. These are not test campaigns right here; every single ad is tagged with a value prop tag. These are core campaigns, gladiator style, and I'm not controlling the spend anymore, so: what value props are driving the performance? Then we also have our controlled themes down here. In sales and promotions, we can break things out, you know, for Pierce and Pierce fine detergents (bonus points if you know what that's from) and break out the creative for each specific scent. There's also an image style breakdown in addition to value props; we segment all of our creative by style. This is for images, so it can be model UGC; product stylized, which is typically how we define any product shot that's been set up in a special way; product e-com shot, which we tag separately because sometimes those general e-com shots do surprisingly well (like I said, just because something's ugly doesn't mean you shouldn't test it); and then just general graphics. So this is something that David also has access to and will use for creative strategy. In a way, this is where the media buyers come in to creative. But it's not necessarily Facebook knowledge; it could also just be us being on top of naming systems, so the creative team has the flexibility to go in here whenever they want and come away with learnings.
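The style breakdown Kevin walks through amounts to a group-by over exported rows once the style tag is parsed out of each ad name. A stdlib sketch with invented tags and figures:

```python
# "Always updating" style breakdown: group rows by the style tag
# carried in each ad name. All numbers here are made up.
from collections import defaultdict

ROWS = [  # (style tag, spend, purchases)
    ("MODEL_UGC", 1200.0, 30),
    ("STYLIZED",   800.0, 12),
    ("ECOM_SHOT",  400.0, 14),
    ("MODEL_UGC",  600.0, 18),
]

def style_breakdown(rows):
    """Return cost per purchase by style tag."""
    totals = defaultdict(lambda: [0.0, 0])
    for style, spend, purchases in rows:
        totals[style][0] += spend
        totals[style][1] += purchases
    return {s: round(spend / purch, 2) for s, (spend, purch) in totals.items()}

print(style_breakdown(ROWS))
# {'MODEL_UGC': 37.5, 'STYLIZED': 66.67, 'ECOM_SHOT': 28.57}
```

In a tool like Motion the grouping happens automatically from the naming, which is why the reports stay real-time with no manual assembly.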
Evan Lee: Love it. One thing I have a follow-up on: all of these reports stem from the naming. Are there any examples you can share that show our audience the specific things you're tracking, if they're consistent?
Kevin Kovach: Yeah. So the consistent part is going to be here. We have a style tag that runs through all of the image styles, and we also have a video theme tag that runs through all the different video styles. David is actually in the process of combining those two tags into one, so we'll have an update here soon. With that update, I'm going to add a few more tags. One of those is going to be the creative pillar, which is how the entire team is going to be thinking about and approaching their creative; as a reminder, those are education, emotion and authority. So soon we'll have that ongoing breakdown here as well, plus value prop. And then honestly, we'll have some custom tags for each client. One of the things we like to do at onboarding is ask: what questions can we answer for you? Then we try to come up with a strategy to use Facebook as that focus group and actually give them the data that can help them make big, expensive decisions. Sometimes that requires a custom tag. So on brands that do a lot of whitelisting or influencer work, I'll add a page tag so I can label which actual page an ad is coming from. But that's not something I need on most accounts if 90% of the ads are just coming from the brand page.
Evan Lee: Love it. And a quick plug, everyone: you can see Kevin here using Motion. Feel free to check us out and book some time so we can talk about how to make this come to life for you all. Cool.
Kevin Kovach: Oh yeah, it's crazy, because you just get so much depth with these reports, and they don't take that much time as long as you're on top of your naming and it has that depth to it.
Evan Lee: Love it. Okay. And yeah, with that, we've had buyers take on more accounts, which positively impacts their income.
Kevin Kovach: Yeah.
Evan Lee: Hand in hand: business goals and everyone's goals being met. What I want to do with our last 10 minutes is get through the questions that have started to pile up. So let's do our best to get through them. I'll kick off with the one that has the most votes, 15, and hopefully it's a nice and easy lob. Here it is.
A question from Iain Harris appears on screen: "Can you define 'share IDs'?"
Evan Lee: Can you define share IDs? I think you mentioned this pretty much immediately when we talked about it.
Kevin Kovach: Yeah, so I think it's super important, and honestly, in a lot of cases it's even more important than structure. When you launch an ad, you're creating two IDs in the account. There's the ad ID, which is created every time you create an ad, but there's also an unpublished post made that's actually used to show the ad, and that post carries the content ID, or post ID; those terms can be used interchangeably. You can share that content ID, and the content ID is where all of the social proof is stored, not the ad ID. That way you can have multiple ads that are all actually using the same post ID, so if you're running them in, say, 10 different audiences, all of the engagement is going to stack. That's how you see those posts on Facebook with a massive amount of social proof; I guarantee you, except in the most extreme situations, those are shared post IDs.
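A small sketch of how post-ID sharing looks in practice. The helper below just builds the shared ID string; the commented calls use the facebook_business Python SDK's `object_story_id` field on an ad creative, with placeholder IDs and an assumed initialized session, so treat them as an illustrative sketch rather than a drop-in script:

```python
# The "shared ID" is the unpublished post's ID, written "<PAGE_ID>_<POST_ID>".
def shared_story_id(page_id: str, post_id: str) -> str:
    """Format the object_story_id the Ads API uses for an existing post."""
    return f"{page_id}_{post_id}"

# Sketch with the facebook_business SDK: one creative wraps the shared
# post, and every ad in every audience points at that one creative, so
# likes, comments and shares all stack on the same post.
#
#   creative = account.create_ad_creative(params={
#       "name": "shared-post creative",
#       "object_story_id": shared_story_id(PAGE_ID, POST_ID),
#   })
#   for adset_id in audience_adsets:
#       account.create_ad(params={
#           "name": "same post, new audience",
#           "adset_id": adset_id,
#           "creative": {"creative_id": creative["id"]},
#           "status": "PAUSED",
#       })

print(shared_story_id("1234567890", "9876543210"))  # 1234567890_9876543210
```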
Evan Lee: Awesome. Everybody, I hope that one helps. If it doesn't, just throw it into the chat really quick so we can clarify. But I wanted to jump to the second most voted question and it comes from Justin here.
A question from Justin Regis appears on screen: "Quick ?? regarding 'statistical significance' - our agency agrees on a $1k spend minimum for an 1st run adset (usually with 4-6 ads). Does that match with what Kevin uses as a benchmark?"
Evan Lee: So, quick question regarding stat sig: our agency agrees on a $1k spend minimum for a first ad set run, usually with four to six ads. Does that match a benchmark you might use to determine relevance? So $1k spend: does that hit stat sig for you?
Kevin Kovach: So I actually do one ad per ad set. And I'm not in any way saying your way is wrong, because the most important thing is that you find a way to test, and how you do it is going to vary by account, by situation, by budget. But I want to make sure that every single variant gets the same amount of spend, or else spend is technically a variable; if you're calling it a single-variable test and you're just trying to test creative, but spend is also a variable, it's no longer going to be reliable. And honestly, for the traffic tests, $1k for a week is probably going to do it. Sometimes we can even get the same results at like $500, but we let it run for the full week just to be sure. On the purchase side, man, that varies so, so much. I have one account right now with like an $800 AOV; to test with the purchase objective there and actually get through the learning phase, that would be a considerable investment.
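Since every variant gets equal spend in Kevin's setup, a standard two-proportion z-test (generic statistics, not a benchmark Kevin states) can check whether a gap between two ads' conversion rates is likely real rather than noise:

```python
# Generic two-proportion z-test, stdlib only. All counts are invented.
from math import sqrt, erf

def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the normal CDF, via erf
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# 1,000 clicks per variant (equal spend), 52 vs 30 conversions
print(two_proportion_p_value(52, 1000, 30, 1000) < 0.05)  # True: likely a real gap
```

Note the comparison only holds if both variants got similar traffic, which is one more reason equal spend per variant matters: unequal delivery makes the comparison itself unreliable.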
Evan Lee: Cool. So we have some questions now about testing. I think this goes hand in hand with what you're talking about here.
A question from Andres Amado appears on screen: "What methodology do you prefer to test new creative frameworks? A/B tests OR run all the creatives in the same ad group?"
Evan Lee: Andres asks, and hopefully I've pronounced that correctly: what methodology do you prefer to test new creative frameworks, A/B tests or running all the creatives in the same ad group? You mentioned one ad per ad set, maybe with budget optimization. What are you typically running there?
Kevin Kovach: Yeah. In the ideal situation, both. Sometimes budget and performance may not allow both, but that's why I like having the traffic test: from doing this for a while, I know it's reliable, and it's not going to be a huge opportunity cost regardless of the overall budget of the account, unless that budget is like a couple hundred dollars. And then on the side, I do like to identify that algorithm winner, that gladiator-style free-for-all. That can be a dynamic creative (DCO) test, depending on the amount of creative. More recently, say if I have like 30 variants, I'll just launch them in an Advantage+ campaign. But I still want that traffic campaign on the side, so I have the flexibility to turn off that purchase campaign if I find the opportunity cost is insane and not really great for performance.
Evan Lee: So people are definitely curious about the traffic campaign, because Justin comes in with another question:
A question from Justin Regis appears on screen: "...so is the idea to use a Traffic test with new creative and THEN move those 'winners' over to a Conversion campaign?"
Kevin Kovach: Yeah, and this is where David would have been really helpful; I had to convert him, essentially, once he started. Before he started, I was comfortable running the traffic tests just by themselves, for like a year. I knew he would have some questions, and he was a little skeptical. So I started running them side by side again, on accounts where it didn't negatively impact performance, and showed him three consecutive reports: look, the traffic campaign and this purchase campaign identified the same three winners. That's how I got the buy-in. The key is you have to introduce friction so you can filter out the bounces. That's why traffic campaigns aren't helpful for filling the funnel, because a lot of that traffic does bounce; but if you introduce friction and use, say, content views plus any sort of deeper event as your numerator in the conversion rate formula, you can actually get very direct results as to what will happen in the conversion campaign. 90% of the time, it'll identify the top three ads. The order might change a little, but like I said earlier, I'm generating a playoff field here, not a single winner, so that same three is still valuable to me. And the benefit of the traffic test is we get those results a lot faster: sometimes a week, two weeks tops, three is the absolute max if we want to do the omnichannel event aspect of it.
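The friction metric Kevin describes, content views plus a deeper event over clicks, can be sketched like this (the event mix and numbers are hypothetical, and his exact formula may differ):

```python
# Friction metric: bounced clicks never fire a content view or deeper
# event, so they drop out of the numerator and out of the ranking.
def proxy_conversion_rate(link_clicks: int, content_views: int, deep_events: int) -> float:
    """Deeper events = add-to-carts, sign-ups, etc."""
    if link_clicks == 0:
        return 0.0
    return (content_views + deep_events) / link_clicks

ads = {
    "hook_A": proxy_conversion_rate(500, 310, 40),
    "hook_B": proxy_conversion_rate(500, 180, 12),
    "hook_C": proxy_conversion_rate(500, 290, 55),
}
# The "playoff field": top ads graduate to the conversion campaign
print(sorted(ads, key=ads.get, reverse=True))  # ['hook_A', 'hook_C', 'hook_B']
```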
Evan Lee: That's so interesting, because something I've also, selfishly, been curious about is Facebook being the place where you're able to access a large amount of people, like you described. I'm probably not giving a great example that'll lead into a question, but let's say you have a couple personas, one that's like Daisy Dukes and another that's Sleepy Sally, whatever it might be. We thought Daisy Dukes would be the best, but we see Sleepy Sally doing really well on Facebook, for example. Should we restructure everything around that? Or is that somebody we should go after on TikTok, or somebody we should go after more on email? How do you navigate those types of insights?
Kevin Kovach: Similar to what we did with that value prop breakdown earlier, I'm going to verify that there is no particular gender or demo that's throwing off these results when you look at them at the aggregated level. In that earlier example, if we were just looking at gifting at the aggregated level, it would have been a clear and decisive winner. But when you actually separate it out, you see, okay, it's only a clear and decisive winner with 55-plus, and that's where the algorithm wanted to go. The true winner is going to be the one that actually activates as many of those demos as possible, and that's the one I'm most interested in. As long as there are no shenanigans, then yeah, I'll follow up and say, okay, this might be something that's more appropriate for a different channel. But I try to be as agnostic as possible with Facebook. In the end, it doesn't matter what our opinion is, it's the algorithm's. So I'll test something that no one likes, I don't care, just in case. You never know: ugly ads do really well, and sometimes you can just keep getting uglier and it still does really well. That adds to the diversity too, because if you have ugly ads mixed in with your videos, I understand from the branding perspective that concerns people, but from the algorithmic perspective, that increases your reach so much. Facebook knows the type of content that people historically interact with and is more inclined to serve that to them, which is why having so much variety is often a key element to scaling.
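The aggregation trap Kevin describes, where a value prop only looks like the winner because one demo dominates, is easy to illustrate. The ROAS numbers and demo buckets below are invented for the sketch.

```python
# Hypothetical ROAS by (value prop, demo). "gifting" wins at the aggregated
# level, but separating by demo shows it only wins with 55+; "value" actually
# activates the other demos.
results = {
    ("gifting", "18-34"): 0.8, ("gifting", "35-54"): 0.9, ("gifting", "55+"): 3.5,
    ("value",   "18-34"): 1.6, ("value",   "35-54"): 1.5, ("value",   "55+"): 1.4,
}

def aggregate_winner(results: dict) -> str:
    """Winner if you never break the results down by demo."""
    totals: dict = {}
    for (vp, _demo), roas in results.items():
        totals[vp] = totals.get(vp, 0.0) + roas
    return max(totals, key=totals.get)

def winner_by_demo(results: dict, demo: str) -> str:
    """Winner within a single demo bucket."""
    rows = {vp: roas for (vp, d), roas in results.items() if d == demo}
    return max(rows, key=rows.get)

agg = aggregate_winner(results)
per_demo = {d: winner_by_demo(results, d) for d in ["18-34", "35-54", "55+"]}
# agg == "gifting", but per_demo shows "value" winning two of three demos.
```

The check is cheap to run on any breakdown export, and it is exactly the "no shenanigans" verification Kevin does before treating an aggregate winner as real.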
Evan Lee: Like I said, it's easier said than done. You have to test into it. And if you follow the Pareto principle, you can assume that 80% of creative is probably going to fail, but at least you can learn from it.
Kevin Kovach: Yeah, yeah, yeah.
Evan Lee: And I think you have a beginner's mind about it. It's just: I don't know anything, who knows if this works, but throw it in and we'll see what happens at the end of the day, right?
Kevin Kovach: Yeah. As long as we're testing responsibly, making sure it doesn't disrupt momentum in core campaigns or get a crazy amount of budget so that the opportunity cost of the test is super high, then yeah, go for it. This is also, I think, a way to beneficially lead the team, because I really don't shoot down ideas. It's a way of keeping myself in check, too, because I think it's easy for buyers at any seniority level to fall into patterns or become stubborn about something. Even I fall victim to that. So the team has the freedom to test what they want, when they want, with approval from the client, obviously, as long as we test responsibly. They know I'm not going to come down on them just because I disagree with it. And those moments actually do check me. I've seen a manual carousel do well recently, and I had totally written those off for like a year. So yeah, those moments are cool.
Evan Lee: Love it. Everybody, I have one last question for Kevin on my end, and I need to ask it. But just as a quick reminder, please keep upvoting and getting your questions into the Q&A tab; we're going to be jumping there right after this one. And one last thing, I saw that Talia had mentioned in the chat: are we supposed to be seeing that same slide? Yes, we did have some stuff to share, but again, we're running into some technical difficulties, so we're making the best of it on this one. But Kev, the last question I have for you, I would be ashamed if I did not ask, because whenever anyone talks about naming conventions, I say I have a guy, and that's who I point to. For context, everybody, Kevin is a wiz when it comes to naming conventions. So we've worked on all of this great creative that's now produced, and like you mentioned, you have a specific way you test into the account. What I'm curious about is: how do you set up naming conventions, and how does that then correlate into your analysis after those ads are live?
Kevin Kovach: Yeah, for sure. So before we met and I learned about Motion, you know, my background is in statistics and data engineering. So I was concepting out a naming system that would be very spreadsheet- and formula-friendly, because I want to be able to get the data into a database as fast and as reliably as possible and minimize the QA. If there are any data scientists in the chat, they'll know that about 90% of the time you spend doing data science is actually getting the data into a usable format. Having an expansive and reliable naming system just saves so much time, and it also allows us to provide insane depth in our reports and have them be real-time in Motion. You said we didn't have anything to share, but I can share my screen.
Evan Lee: Let's do it.
Kevin Kovach: I can walk through what David prepared. Give me a minute here.
Evan Lee: And you might have to zoom in for folks just so we can see it a little bit.
Kevin Kovach: Oh yeah.
Evan Lee: A little more. Give me a couple more if you can. I'm sorry. Yeah, yeah, yeah. 175. Yeah, there we go. There we go. I like it.
Kevin Kovach: So a detailed naming system is filled with qualitative information that we couldn't otherwise just pull into a column and export. That allows us to build these real-time reports that clients can access to get a sense of some of the more basic things, like what ad formats are working and where things are going in the funnel. I opened up a few tabs here so no one had to wait for Motion to load. I can't share the exact creative and give away the client, so this is a pseudonym, but pretty much across all of our accounts we will have these four categories of reports that the client can always access and go into, and they will always be updating: prospecting audiences, retargeting audiences, and then on the creative side, by value prop. Part of our naming system is going to have that value prop tag. We use the Facebook naming template to make it as easy and fast as possible, even though it looks very complicated and long, and sometimes it's a little overwhelming for new buyers the first week. But once they get used to using the naming template, it gets pretty fast and doesn't really add that much more time, and it saves an insane amount of time when reporting. We essentially don't have to spend much time making creative reports anymore. As long as we're on top of our naming, all of these are automatically going to update. We just have to go in and QA, make sure there isn't a missed character somewhere or something off. Either way, if you can save 95% of your reporting time, that can open up a whole day. I've talked to plenty of buyers who lose days to reporting. So this just opens up
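A spreadsheet- and formula-friendly naming system like the one Kevin describes boils down to fixed, delimiter-separated fields that parse cleanly into columns. The field order, tags, and sample name below are hypothetical, not ATTN's actual template, but they show the shape of it, including the QA step that catches a missed character.

```python
# Minimal sketch of parsing a convention-following ad name into report-ready
# columns. FIELDS and the delimiter are assumptions for the example.

FIELDS = ["date", "audience", "format", "value_prop", "variant"]

def parse_ad_name(name: str) -> dict:
    """Split an ad name into columns; raise on malformed names so QA catches them."""
    parts = name.split("_")
    if len(parts) != len(FIELDS):
        raise ValueError(f"Malformed ad name (expected {len(FIELDS)} fields): {name}")
    return dict(zip(FIELDS, parts))

row = parse_ad_name("2024-05-01_prospecting_video_gifting_v03")
# row["value_prop"] == "gifting", so a report can pivot spend and ROAS by
# value prop without anyone touching a spreadsheet.
```

Because every field lives in the name itself, any export from the ad platform carries the qualitative tags along for free, which is what makes the always-updating reports possible.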
Evan Lee: And the biggest thing there is just like losing time, losing time,