Cascade allowed non-technical business teams to import large datasets, transform and combine those datasets using a variety of no-code tools, then visualize and present their findings via an interactive data app. For a full overview, check out what Cascade did.
We built Cascade on a few fundamental assumptions, which we go through below. For each one we were partially right and partially wrong — right enough to make the assumption in the first place, but wrong enough that we eventually had to change course. We accompany each assumption with a deeper dive into what we got right and what we got wrong.
Both the founders knew from our prior lives (Jon at Infoscout and Jake at Capital One) that there was a large market of non-technical, data-savvy analysts buried deep inside many organizations. Those analysts were caught between the limitations of spreadsheets and the heavy lift required for code, leaving them with inordinate amounts of extra work crunching numbers and updating presentation charts. That creates a “silent tax” paid by many organizations in the form of time spent by this talented but undervalued group.
If we did our jobs right, we thought we could uplevel this entire professional discipline.
We were partially right. Our mistake had nothing to do with the presence of this market, which in our opinion still exists and is growing. The challenge was not how much that market needs a solution like Cascade, but how we could repeatably reach those analysts, how empowered they were to buy it, and how much the solution was worth. An open question on any one of those fronts is likely a surmountable challenge, but in our case all of them were a bit murky. We break that down here:
While our ROI was real, it was incredibly difficult to quantify, and it was even more difficult to discover who needed it without talking to them first.
We knew that our audience tackled problems that were incredibly business-specific, defying the constraints of more opinionated, purpose-built products. They needed a solution with the flexibility of a spreadsheet but with the power and automatability of code. We were also seeing the rise of “no-code” toolkits in many different sectors, so it felt like the right approach to our problem was to build a composable set of building blocks that could be rearranged to tackle bespoke problems.
We were right that a no-code toolkit was likely the best way to fill in all of those gaps.
But we underestimated two things. First, the more use cases a product can tackle, the blunter its overall value proposition becomes. The sales and marketing motion for blunt products needs to be incredibly strong, probably stronger than most early-stage companies with founder-led motions can assemble. Each customer cares only about their own use case(s), not all the other things a product can do.
Second, because the product did many different things, it was unclear how we would build a repeatable sales and marketing engine around it. Being use case-agnostic confuses everything from marketing messages to sales materials to the talent needed on a sales team. Founder-led sales worked (my belief is that founders can sell virtually anything to a degree), but any kind of scaling or playbook-building was rife with problems.
We’ve since spent a lot of time thinking about why other no-code toolkits have succeeded or not. We break that down here:
The only major player that occupied our category was Alteryx, a $10B+ legacy incumbent. The Alteryx product is both extraordinarily old-school and massive: a Windows-only, desktop product that grew up in the 2000s and was far from modern.
The presence of such a large but obviously flawed incumbent was music to our ears. Alteryx had already done the work to educate the market about the need for products like ours, and it had already tested some important interface paradigms. While its userbase loved the product (a fact we should have paid more attention to), we believed that we could play Figma to their Illustrator: deliver a collaborative, cloud-native experience that was obviously better once they saw it.
But we underestimated two things: the amount of product we would need to build to start repeatably winning, and how high switching costs would be for many of their customers. We didn’t have a clearly identifiable market force at our back, other than broad trends around collaboration and cloud-based solutions. Those trends helped enormously, but we needed more than that to overcome the hurdles we faced.
Over-indexing on a competitor also has more subtle but important effects on how a company runs. We break that down here:
Looking back, we picked what seemed like a very good place to start. We were in a hot space with a unique approach, lots of cash and a great team. But all of those assets also made us overconfident, trusting that we were on the verge of success while not paying close enough attention to what the market was telling us. In sum:
I do believe we executed our play about as well as any startup could. That is to say, I doubt another de novo company will come along with the same strategy and succeed where we did not. I also believe there is a version of Cascade that can succeed: one that's paired with an existing product suite with existing distribution and access to business analysts (accounting software, or data platforms for investment bankers, for example). It's also possible that burgeoning AI capabilities will help us move past interface-heavy, no-code tools like Cascade. Instead, we'll see more automated systems where the user merely needs to dictate the output rather than define the process. If a new company can draw from our lessons and build on new technical advancements, perhaps we will have taken a tiny step towards making life better for the millions of smart, data-minded business people we aimed to serve.
If you found this helpful or if you have feedback, please get in touch.