How unstructured user feedback can impact churn & retention.
Christian Wiklund
|
Founder and CEO of unitQ
Episode Summary
Today on the show we have Christian Wiklund, founder and CEO at unitQ.
In this episode, we talked about the huge importance of monitoring the quality of your product, what led Christian to build unitQ, and how they help companies improve retention, growth, and engagement.
We also discussed how data silos across departments slow things down, and how companies need cross-functional data on demand. Christian then talks about million-dollar bugs, how we measure product quality, and how online reviews can make or break your business.
Christian Wiklund
Recommends
Masterclass with Daniel Pink on Customer Retention
Customer Success by Nick Mehta
Mentioned Resources
Transcription
[00:01:26] Andrew Michael: Hey, Christian. Welcome to the show.
[00:01:29] Christian Wiklund: Thank you for having me, Andrew.
[00:01:31] Andrew Michael: It's great to have you. For the listeners: Christian is the founder and CEO of unitQ, a product quality monitoring platform. Christian started his career in engineering at VMware before later founding Skout, which was acquired by the public market leader in people discovery, The Meet Group.
So my first question for you, Christian, is: what is a product quality monitoring platform, and why did you decide to build the company?
[00:01:56] Christian Wiklund: Great questions. A product quality monitoring platform is a tool that allows you to listen in to the signals your user base is giving you — in this case, user feedback — the same way you have monitoring and observability elsewhere in the stack.
Let's say you monitor machine data: you'll have a Datadog, maybe a Splunk, maybe a SignalFx. Then you travel one step further up the stack and you get to the clients — the binaries running on all these different devices out there, your iPhone and your Android device — and there you most likely also have some monitoring solution in place, like an AppDynamics or a New Relic, maybe a Crashlytics or a Bugsnag.
And that's all great. There's a reason the industry has instrumented and created observability for machine data: change and bugs happen all the time, and we need to understand if certain metrics and exceptions are popping up that we need to address.
What we found is that the top layer of the stack — how the product manifests itself as your user base uses your beloved product every day — is where users encounter the things that get in their way, which we call quality issues. And I'd say a quality issue is really the delta between the user expectation and the user experience.
A quality issue can be functional: "Hey, I can't use this feature. I couldn't log in." It can be usability related: the product is too hard to use, it's missing a feature, the app is too slow, it's starting to fill up my SD card. And it can also be about delight: how do we delight users, if there's any discrepancy there?
And the reason we built this platform is that my co-founder and I used to be in the consumer space, as you mentioned. We built a product called Skout, and Skout had, over the years, hundreds of millions of installs and a very active user base. The challenge we had with Skout was that we supported twenty-five languages.
We supported Android, iOS, web, mobile web, big-screen support like an iPad, and small-screen support, of course, like your phone. We also had 20-plus integrations in the product — authentication SDKs, ad SDKs, analytics, and so on. All of these different dimensions are subject to change on a continuous basis, because we as a company want to stay agile.
We want to ship code as frequently as we can. We're not shipping boxed software anymore, right? We have this CI/CD-type environment where the lines between production and pre-production are very much blurred. And our partners are also shipping code on a continuous basis.
Then you layer in other external factors. You may be on 3G and then switch to wifi; you may be in a place where SSL is not allowed, like a public wifi spot, and so forth. What we found was that testing this top layer of the product — making sure it works as it should for every flavor of configuration — was basically impossible. There was no way for us to test every Android device in every language on every flavor of operating system.
The cool part is what we discovered: there is one entity out there that's testing your product in every configuration, every day. And that is your user base. Your users, who love your product, will also tell you, in many different channels, where your product needs work. They're generating this unstructured data — user feedback. They leave app reviews, they tweet about your product, they email support, maybe they engage with your support chatbot, and you have user surveys and so forth.
What we found was that if you could harness the power of all this unstructured data, it's literally a goldmine. If you can extract signals from it — say, that a password reset link just broke — and get that data in a timely manner to the right people inside your company, they'll be able to fix it faster.
And Andrew, we had so many of these bugs that were out in production for months without us being aware of them. I'll give you one example: on Android, in the Polish language, longitude and latitude were passed in a special format, and that crashed the parser for our app. As a location-based service, we asked for location every time you opened the app, which basically rendered the Android app useless for Polish-speaking users — for months. And we discovered it in app reviews, way too late.
So we started obsessing over bugs and quality at Skout. We could literally find these million-dollar bugs. And when we sold the company, we said, "Hey, someone needs to build the quality company." And that's what we're doing here.
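The transcript doesn't pin down exactly how the coordinates were malformed, but a minimal sketch of this class of bug — assuming the classic decimal-separator pitfall, where locale-aware formatting under a Polish locale writes "52,2297" and a dot-expecting parser rejects it — could look like this (all names are illustrative):

```python
import locale

def format_coordinate(value: float) -> str:
    # Locale-aware formatting: under pl_PL this yields "52,2297",
    # because Polish uses a comma as its decimal separator.
    return locale.format_string("%.4f", value)

def parse_coordinate(text: str) -> float:
    # A naive parser that assumes a dot decimal separator;
    # float() raises ValueError on "52,2297".
    return float(text)

# Requires the Polish locale to be installed on the machine.
locale.setlocale(locale.LC_NUMERIC, "pl_PL.UTF-8")

serialized = format_coordinate(52.2297)  # -> "52,2297"
try:
    parse_coordinate(serialized)
except ValueError:
    print(f"parser crashed on {serialized!r}")

# The "15-minute fix" in such cases: serialize coordinates in a
# locale-independent way (e.g. repr(value)), so every client and
# server parses them identically regardless of device language.
```

The point of the sketch is how ordinary the failure is: nothing in a crash report screams "Polish locale," which is why it only surfaced in app reviews.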
[00:06:45] Andrew Michael: Very cool. And as we talked about a little before the show — founder-market fit. Having faced the problem previously, having seen the opportunity, and then going off and deciding to build a new company in the space is always a fantastic place to start. And I definitely see where you're coming from with this, myself coming from Hotjar, an analytics and feedback company really pushing that narrative. It's something I'm a very strong believer in: you have your analytics stack, and your data can tell you what's happening, but a lot of the time it can't tell you why it's happening. As you said, you can see the end result reflected in churn, but knowing the reasons why — what was the bug, or what is the experience causing a specific behavior — is gold.
But in most organizations today this information is scattered, and it's typically quite hard to take unstructured data and give it structure and meaning. So it really caught my attention that unitQ puts a quality score on things as well.
[00:07:48] Christian Wiklund: Yes. You're touching on something very interesting here, which we faced continuously on the engagement metrics and analytics side of the house. We had two PhDs, five data pipeline engineers, and a product manager who built that out. In addition to using Mixpanel, we had this really cool BI engine with Tableau on top of it, so we could follow, hey, how are second-day return rates trending across all these different dimensions?
For Japanese iOS users, say — how does it look, and what's going on there? And even then, it's sometimes hard to find where you may have suboptimal changes. On a holistic level, maybe your second-day retention went up, but maybe it took a dive in one of the segments.
So we were continuously looking at this data, and we would find stuff like: wait a minute, why has second-day retention for Spanish-speaking users fallen off a cliff across the board? We would tell the engineers, "Hey, something changed," and you'd always have this back and forth of, okay, what's going on here?
Let's test it — let's test the product in Spanish. Then you have two days of testing, and they come back and say, "It works for me. I can't find what could possibly be wrong." And that's where we anchored it back to user feedback: there must be someone speaking Spanish, using our product, who is leaving feedback somewhere.
Now, the issue there was that the support team was dealing with tickets, and they were sitting in a data silo. They had all the tickets, and their job was to solve issues for users and deflect tickets within a certain number of hours, then get a good CSAT score when they're done.
The marketing team — I went to them and said, "Hey, do you know what's going on?" They're, of course, tasked with looking at social media, maybe the app store reviews and so forth, and they synthesize their own data. Then I'd go to the product team and say, "Hey, do you have any data from surveys or anything that can help us figure out what's going on?"
So the data silos slowed us down like crazy, and aggregating all of that in one place was the first thing we did at unitQ. Even basic features like translation — that's something our customers love, that we translate all of the data. It was a bit surprising to me when we started this company that even very well-known, big, iconic consumer companies we work with were only looking at user feedback in English, and that might be 20% of the user base. So translation is also very important. But you mentioned the "why" and the "what."
And I think the key question is: can you take the qualitative — all the anecdotes being produced out there by the user base, all this beautiful feedback — and make it quantitative? The same way Datadog will send you an anomaly alert on certain exceptions and things going on in the machines, we want the same mechanism for user feedback as a signal. So if we look at unitQ, what we do is take user feedback and engagement data — two signals — to figure out where the product may be broken.
And it's interesting: the status quo for working with user feedback is manual. It's manually tagging tickets, running reports, having an analyst go out and figure out what's going on. We need this data on demand, and it needs to be cross-functional, so that everybody can instantly see, "Hey, the equalizer went missing on Android — okay, great, let's fix it."
But users reporting that the equalizer went missing because of some bad merge — there might be 200 people reporting it out of a data set of 300,000. So how do you find the needle in the haystack? That's what we've been obsessing over: using machine learning, and producing lots of training data, to find these needles in the haystack and then alert the companies.
[00:11:56] Andrew Michael: Very cool. Yeah, I can definitely see the technical challenges of trying to figure all that out, especially with the natural language processing. But there have been really great advancements, and some excellent open-source software and models are now available to tap into.
I love how you saw this opportunity and brought it all together off the back of past experience. You mentioned you had some million-dollar mistakes — million-dollar bugs. Can you talk us through a specific example, how you went about figuring it out without unitQ, and what steps you took?
[00:12:32] Christian Wiklund: Yes, we can take the Polish longitude-latitude bug on Android, where we didn't have anyone really monitoring the Polish segment of our user base. It was me who discovered it, by looking at the average app store rating by language. Polish had 1.5 stars, and I'm like, that's odd, because our app is typically above 4.3.
So what is going on there? I copied and pasted a bunch of these app reviews into Google Translate, and it said "the app crashes at launch," "the app crashes at launch." And I'm like, that is weird, that is strange. So I emailed Gosha in support: "Hey, do you see anything about this?"
And she saw some tickets about it. And I asked the marketing team, "Do you see anything?" And they're like, "Yeah, we're seeing some stuff here." And then I'm like, is this an ongoing issue? Is it a new issue? When did it start? So we had to go out and do data exploration and gathering and produce reports.
We finally found that, oh yeah, okay, this is something related to the parser. And once we identified it, that parser bug was like a 15-minute fix for one engineer. The cool part is the delta once we fixed it: this bug had been live for about six months, I think, and in the next six months we did around $400K in revenue from Poland on Android. That was an $800,000-a-year bug, if you will — so we call that a million-dollar bug.
And it was through experiences like this that we learned — because if you look at the quality of the product and how important it is for the product machine...
You have, of course, the top-of-funnel impact of a poor-quality product. We're sitting on Zoom doing this podcast interview; we're not sitting on GoToMeeting. And why is that? Is it because GoToMeeting was missing features, or that the price was different? What is the reason? There are so many examples of this where it's actually the quality of the product. Zoom was able to come into a very saturated market, going up against Google, Microsoft, WebEx — all of these really big companies with lots of resources — and they were able to, I would say, dominate by providing the best experience.
And why does that matter top of funnel? Good news doesn't spread as fast as bad news. People love to talk about "oh, that was a bad product," so word about a bad product spreads faster than word about a great one. So we need to make sure the surface level is polished and great. It also manifests itself in ratings and reviews: consumers today, in any consumer vertical where you'd want to download an app — be it music or video conferencing or whatever — are going to do some research. Consumers are armed with data, and a lot of people won't download an app under a certain star rating.
So it's very important to make sure you don't have a bunch of one-star reviews. What we've seen is that the average quality issue we find comes with 1.5 stars. There will be some quality issues reported within a five-star review — that happens, and it's also a reason why star ratings are not a perfect indicator of product quality — but the average is 1.5 stars.
But more importantly: filling the funnel with users is something solvable, if you have the unit economics to spend more on marketing and so forth. Attracting users is easier than retaining your user base, so retention is an incredibly important part of any product. And here is where we see quality's impact on the product machine. Imagine you have this box called the product — and in a product-led company, which most modern companies are today, the product is the core. If the product works, the company can exist. If the product doesn't work, the company won't exist.
So we have this box called the product, and we don't really know exactly what's going on inside it, but there are inputs into it. We're spending engineering hours making the product better, building new features, whatever. We have user acquisition — spending marketing dollars to get users in. Everything that's happening in the company goes into the product, feeds the product machine. And what comes out the front of the machine is hopefully activated users, retained users, engaged users, and, if you need to convert them into paid, revenue. That revenue and that active user base you can then reinvest into the beginning of the product machine, and that's how the flywheel, or the snowball, starts growing faster and faster.
Now, what we've seen is that quality almost acts as a filter function on all these conversion metrics, and in particular on retention. If you take two products with exactly the same features, the same marketing, the same everything, but one product has a higher-quality experience, that product will win. It's guaranteed, because they're not going to lose as much signal in the product machine. They'll have more money coming out of the machine, they can reinvest more, they can spend more on marketing, and they start growing faster.
And eventually they suck the oxygen out of the market. In particular, if it's a product that depends on network effects, they'll see exponential benefits: with each turn of the flywheel, they grow faster. So when it comes to building products, you can build your features — that's great — but what about making sure that what we have is really solid, that it's great, that our customers love it?
And when we started the company, Andrew, the one thing we discovered is that there are no good quality metrics. How do we measure quality, and how do we benchmark against the competition?
So we developed a metric called the unitQ Score, which basically measures how much of your public feedback data refers to a quality issue. If your score is 100, you have no quality issues referenced in the public domain. If your score is 80, it means that 20% of the feedback refers to a quality issue.
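Taken literally, the score as described here is just 100 minus the percentage of public feedback classified as referencing a quality issue. The classification itself is unitQ's machine-learning work, so this toy sketch fakes it with a pre-labeled flag:

```python
def unitq_style_score(feedback: list[dict]) -> float:
    """Score = 100 * (1 - share of feedback items referencing a quality issue).

    Each item carries an `is_quality_issue` flag that, in the real product,
    would come from a trained classifier rather than a hand-set boolean.
    """
    if not feedback:
        return 100.0
    issues = sum(1 for item in feedback if item["is_quality_issue"])
    return 100.0 * (1 - issues / len(feedback))

reviews = [
    {"text": "Love this app!", "is_quality_issue": False},
    {"text": "Crashes at launch", "is_quality_issue": True},
    {"text": "Password reset link broken", "is_quality_issue": True},
    {"text": "Great update", "is_quality_issue": False},
    {"text": "Five stars", "is_quality_issue": False},
]
print(unitq_style_score(reviews))  # 60.0 -> 40% of feedback cites an issue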
And we've actually indexed the 4,000 largest apps out there. If you go to our website, unitq.com, you can find the unitQ Score. We ingest every app review and other public data, and every midnight we republish the scorecard pages, so you can see how quality is trending on a daily basis for the 4,000 largest apps out there.
It's been a very cool project to work on, and it's been illuminating for a lot of companies. We go to Strava, which is a customer, and say, "Hey, how do you stack rank on quality against all the other fitness apps out there?" And the answer is typically, "We don't know."
So it's: do you think quality is important? Yes. Okay, let's figure out how you're doing. There's a lot to unpack here, but I just love that we can come in, take a preexisting data asset the company already has, apply some really amazing machine learning technology and product, and then get signals to the company about what may be broken right now.
[00:20:03] Andrew Michael: Yeah, that's excellent. I love the concept of the unitQ Score — I'll definitely check it out for myself and see where we stack up. And it makes a lot of sense. There was a study previously looking into user acquisition in SaaS businesses, and at some point in a SaaS business's lifecycle, at least 40% of user acquisition comes from word of mouth for the fastest-growing companies.
Essentially that comes back to what you're talking about in terms of quality and the quality score of the product. If people start expressing bad opinions about it, like you said, they're a lot more vocal than they are with good opinions, and the bad outweighs the good. So the winners are definitely the ones able to maintain that word of mouth and build the quality product people want to talk about and share.
[00:20:52] Christian Wiklund: And interestingly, Andrew, we've done studies on the unitQ Score — what happens when it goes up or down — and you can almost describe churn rates as a function of the unitQ Score. Whether your unitQ Score is 70 versus 90 has a dramatic impact, in particular on mid- to long-term retention. And you can imagine that if your unitQ Score goes up, you have fewer support touches.
In one case study we did with Lovoo, a dating app out of Germany, within 30 days they saw a 39% reduction in support touches just because they fixed ten issues. A lot of what happens when we get in with a company is almost like shining a flashlight in a dark room, because a lot of these things get reported on a daily basis and the organization has brushed them off as user error, or "it's always been like that." Take a password reset link not working: "Maybe they signed up with the wrong email," or "we don't believe that," or — "hey, my password reset email was never delivered" — "well, wrong email, they signed up with it."
It's very easy for the organization to brush this off. And I think there are other issues with what I call the great barrier between support and marketing on one side and product and engineering on the other. Support and marketing will often find what's going on: "Hey, I'm seeing something here. People are complaining." Say you work in support: you'll take a copy-paste of two of the tickets from Zendesk, put it in a Jira ticket, and say, "Hey, it seems like the password reset link might be broken." You file the Jira ticket, you throw it over the barrier to engineering and product, and you don't know what happens next.
Engineering and product are going to say: "Okay, great, let's see. I have 2,500 Jira tickets open, and here's a new ticket from support with two people reporting it. We have a user base of 25 million people. I'm not going to take a look at it." So I would say one of the main benefits we bring is that we're able to quantify this qualitative data, so we can provide indisputable data for engineering, support, marketing, and product to align around. With unitQ we can basically show them: we had no one reporting the password reset link broken until yesterday at 5:00 PM.
And you can see that it's a nascent, growing issue. Once the engineers see this, they'll say, "Aha, this seems like a real bug. Let me jump on it."
Another metric we developed is called time to fix — time to resolution, if you will. An issue hits production, your users start to report it: how quickly, on average, will the company fix it? If you were to take all the issues we detect and look at the lifecycle — the alert state, and then back to the OK state when things are normal again — how much time does each issue take? What we've seen is that for the typical company, that time gets cut in half.
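The "time to fix" metric as described — elapsed time from an issue entering the alert state to returning to OK — is straightforward to compute if you log those two timestamps per issue. A minimal sketch, with hypothetical field names and dates:

```python
from datetime import datetime
from statistics import median

# Hypothetical issue log: when each issue entered the alert state
# (users started reporting it) and when it returned to the OK state.
issues = [
    {"alerted": datetime(2022, 3, 1, 9, 0),  "resolved": datetime(2022, 3, 3, 17, 0)},
    {"alerted": datetime(2022, 3, 5, 14, 0), "resolved": datetime(2022, 3, 6, 10, 0)},
    {"alerted": datetime(2022, 3, 9, 8, 0),  "resolved": datetime(2022, 3, 16, 8, 0)},
]

hours_to_fix = [
    (issue["resolved"] - issue["alerted"]).total_seconds() / 3600
    for issue in issues
]

# The median is more robust than the mean when one bug, like the
# Polish parser issue, lingers for months and skews the average.
print(f"median time to fix: {median(hours_to_fix):.1f}h")
```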
And the reason it gets cut in half is that when you look at an issue leaking out there, there are three steps, I think, to getting it resolved. First is detection: the organization needs to understand that something is going on. We've seen there can be a lag of days, sometimes weeks — in the case of Skout, with the Polish bug, it was months until we discovered it. With our platform, we detect it very early, so instead of days, weeks, or months, detection is contracted.
The next piece is alignment: how do we align the teams — engineering and so forth — to say, "Hey, you need to redirect resources right now, this is a three-alarm fire, you need to fix it"? Aligning is hard when you have confusion. If engineering says, "You have two examples, copied and pasted — can you go do some data research, gather data, get me a trend line?", then support has to do manual data gathering that takes days, and now we've lost time. With our platform, that happens automatically.
And the last step is fixing the bug. As an engineer, I love to get context. I love to know where this is happening — what platform, what device. With our product, the engineer can see all the reviews, all the support tickets, all the tweets that talk about the particular bug they're going to fix. That makes it real, but it also gives them metadata so they can reproduce the bug faster — because a lot of the time it's very hard to reproduce a bug, and if you can't reproduce it, you can't fix it. So I think there's some real magic in just breaking down the barriers and having a single source of truth.
[00:25:49] Andrew Michael: Yeah, absolutely. Very interesting as well, seeing the median time to resolve go down — and it's nice that you have an easy way to quantify it if you have the history of support tickets and resolved issues.
[00:26:01] Christian Wiklund: I just love it. And I can also tell you a little bit about the difference between building a consumer company and now a SaaS company, which may be topical.
When I did consumer, I thought, oh, SaaS businesses look very compelling and interesting — you don't have to sell ads, so maybe I should do a SaaS business. And now that we're here, it is very different. One thing that's different: we have incredible customers, and incredible people at these organizations using our product — from Pinterest to Spotify to Klarna; we have AppLovin on the games side, where we were selected as their vendor for this activity; HelloFresh, Uber — a bunch of really cool logos.
And I get a kick out of the fact that we can meet with a VP of product, a VP of ops, a director of engineering, a head of support, or product managers, and get real, solid input on our roadmap. I think that's a core difference between consumer and SaaS that I've discovered: the feedback we get from our users is easier to act on, because if a VP of engineering at Pinterest says, "Consider building these three features," we will consider it very seriously.
In consumer, I found it can sometimes be very hard to get feedback on feature requests — what should we build? There, it's more about A/B testing.
[00:27:22] Andrew Michael: And in B2B it's a lot easier, I think, to attach weight as well to different-sized customers and things like that, because you obviously have a much bigger disparity in terms of what people are paying, the contracts, and the user base. Yeah.
[00:27:41] Christian Wiklund: And by the way, of course, the secret trick to SaaS businesses is the same: no churn, right? You don't want to have any churn. And that's something we've seen, since it's really a cross-functional platform.
One of our customers started with 500 accounts and is now going to add a thousand accounts, expanding across support, product operations, product engineering, even marketing — which was not a team we initially thought would be a customer, but they love the insights they're getting. We have user insights teams on it too. And as you can imagine, once you have multiple teams using it on a daily basis, you hook deeper into the workflow — we have a deep Jira integration, we tag Zendesk tickets with our collections and so forth — so you get very much locked into the organization. For us, we've lost one customer since we started. So of course we now need to fill the top of funnel with more opportunities and customers and convert them, but it's the same thing there: that pancake keeps building, year after year after year.
So if you have net revenue retention above 100%, the existing customers are going to expand year over year, you keep adding more customers, and then it gets very interesting. I would say SaaS is much more predictable, and maybe a little bit less stressful too: at Skout we had millions of users every day, and you need to moderate the community — picture moderation, bad actors, and so forth. SaaS feels more predictable; you can build out the roadmap in conjunction with your customers, keep delivering, have a customer-first approach, and good things will happen.
[00:29:25] Andrew Michael: Yeah, I definitely see that as well, having been in B2C product before and in B2B. The predictability is there, and you alluded to it: the holy grail in SaaS is really getting to net negative churn and having that flywheel continue to grow. We see it as well in the public-market multiples that SaaS businesses are getting now; more and more investors are realizing the value of that predictability and what it means to have a business that knows how to retain customers.
We are running out of time, so I want to get to a couple of questions I ask every guest. Let's imagine a hypothetical scenario. You join a new company. Churn and retention are not doing great at this company. The CEO comes to you and says, "Hey, Christian, you need to turn things around. You're in charge. You've got 90 days." What do you do? But there's a catch: you're not going to tell me, "I'm going to go speak to customers, or look at feedback, and see what the biggest problem is and start there." You're just going to take something you've seen be effective in reducing churn at a previous company, and you're going to run with that playbook blindly.
[00:30:32] Christian Wiklund: Okay. So we can leverage existing data assets, but we can't create new ones. Is that correct?
[00:30:36] Andrew Michael: Not even the existing data. It's just: pick something that you've seen be really effective at reducing churn.
[00:30:42] Christian Wiklund: Okay. The approach I always took to churn — though now we have to go without the existing data — is looking at user feedback and what users are talking about: can we get something from there? The other one is lookalikes: what is the profile of, and what happened to, a cohort of users that had amazing retention? Is there something special there?
Did something happen in the experience? At Skout, if someone had a two-way conversation within the first three minutes of using the app, we saw a 70% increase in second-day retention. So there's stuff there. So what I would say is: obviously, the first user experience needs to be amazing, and that's what I would really look at.
If you look at the decay function of churn: for some companies, if you lift second-day retention, it forklifts the entire curve — your 60-day retention will see the same sort of lift. For some products, that's not the case.
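Second-day retention, the metric Christian says he would obsess over, can be computed from an ordinary activity log. A minimal sketch, with an illustrative schema:

```python
from datetime import date, timedelta

# Illustrative activity log: user id -> set of days the user was active.
activity = {
    "u1": {date(2022, 3, 1), date(2022, 3, 2)},                   # came back on day 2
    "u2": {date(2022, 3, 1)},                                      # never returned
    "u3": {date(2022, 3, 1), date(2022, 3, 2), date(2022, 3, 5)},  # came back on day 2
}

def second_day_retention(activity: dict[str, set[date]], cohort_day: date) -> float:
    """Share of users whose first active day was `cohort_day`
    and who came back the following day."""
    cohort = [days for days in activity.values() if min(days) == cohort_day]
    if not cohort:
        return 0.0
    returned = sum(1 for days in cohort if cohort_day + timedelta(days=1) in days)
    return returned / len(cohort)

print(second_day_retention(activity, date(2022, 3, 1)))  # 0.666...
```

Sliced by the dimensions discussed earlier (language, platform, OS version), this is the same number whose drop for Spanish-speaking users flagged the hidden bug at Skout.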
I would start by obsessing over second-day return rates and try to lift them up. Back in the day, that could mean sending more notifications and stuff, but that ship has sailed — you've got to be careful with how much you nag users.
So I think: obsess over the first minute, the first five minutes, the first ten minutes of the user experience, and make that perfect. That can typically be a blind spot, because inside the company we've all already signed up for the product — there's no one in your company signing up every day to see what the first-use experience is like.
So focus on the first few minutes of the user experience, and use your instincts to figure out: is there something here that's not right?
[00:32:27] Andrew Michael: Yeah, we talk about this a lot on the show — onboarding, its impact, and how it compounds over time. There have been a few different cases — like the experience Shaun Clowes shared on the show last year — where it really came down to that first initial experience, and doubling down on making it perfect was a big step change for a lot of businesses.
Last question: what's one thing you know today about churn and retention that you wish you knew when you started your career?
[00:32:58] Christian Wiklund: That is very simple, and it's that growth alone doesn't equal good.
Early in my career, I obsessed over top of funnel — we had a lot of focus on invites and virality and growth hacks and stuff like that. And there's no reason to do that if you don't have stickiness, unless you're in a scenario where you depend on network effects and have to build up a user base very quickly to get content created.
The epiphany was: don't focus on top of funnel, focus on retention. If you have a sticky product, yes, you will figure out how to get users to come in. But if you don't have a sticky product, you have a leaky bucket — the faster you pour users into the bucket, the faster they leak out of the holes.
And churn to me, Andrew — really churn rates plus some engagement metrics — is the ultimate proof point of product-market fit. Of course, if the retention curve hits the x-axis, you're going to churn out every user over time, and at some point you don't have a business. So focus on retention and churn, and obsess over it. I think that's the biggest learning.
[00:34:14] Andrew Michael: Exactly — if you're running a subscription business and people are canceling their subscriptions, there's no business there. Christian, it's been a pleasure having you on the show today. Are there any final thoughts you want to leave the listeners with — anything they should be aware of, or how can they keep up to speed with your work?
[00:34:30] Christian Wiklund: Check out unitq.com. And we would love to talk to you, of course, if you have lots of feedback data. We see that once you have more than 10,000 pieces of user feedback a month, you start to miss insights.
If you have 50 tickets a day and three app reviews, you can probably handle that with humans. But once you reach 10,000 a month, you're guaranteed to miss insights, and those insights are critical. So let's leverage that existing data asset — the user base doing what they do every day, producing all that great feedback. We've got to start listening, tune in, and use it to build a better product.
[00:35:06] Andrew Michael: Very cool. Thanks so much for joining, and I wish you the best of luck going forward into 2020.
[00:35:11] Christian Wiklund: Thank you, Andrew. You too.