[HN Gopher] Things we've learned about building products
___________________________________________________________________
Things we've learned about building products
Author : fmerian
Score : 163 points
Date : 2025-03-05 14:44 UTC (8 hours ago)
(HTM) web link (newsletter.posthog.com)
(TXT) w3m dump (newsletter.posthog.com)
| zabzonk wrote:
| Sorry, what "successful products"?
|
| And I have to say that "Technical Content Marketer" is one of
| the most dubious job titles I have ever seen.
| Etheryte wrote:
| On their homepage, they have their own logo in the section
| listing companies that use their products. I mean, if you
| dogfood it, great, but it doesn't exactly instill confidence if
| you have to reach for your own company to fill out the list.
| hnthrow90348765 wrote:
| They have customer testimonials here, which list more
| companies than the front page: https://posthog.com/customers
| Centigonal wrote:
| We use PostHog for site analytics - it's a good product. IDK
| about popularity, but it's a joy to use.
| tene80i wrote:
| Seems like a perfectly good description of someone whose job it
| is to achieve marketing goals by creating content for a
| technical audience.
| dang wrote:
| Posthog is pretty successful! But since it isn't strictly
| necessary, perhaps we can make this thread more meaningful by
| removing success from the title above.
| zabzonk wrote:
| Well, I'd never heard about it before today, but that may be
| just my bad, as I'm not really much into web stuff. But when
| I see something like this (from the top of their website):
|
| > The single platform to analyze, test, observe, and deploy
| new features
|
| my reaction is "Wha?"
|
| But what the hell - I do think the job title is bad, but then
| I've had a few of those myself.
| dlisboa wrote:
| They list a few products on their home page:
| https://posthog.com/products
|
| I've used PostHog and it's pretty good. I don't know if I'd
| classify all of those as different products; you rarely want
| one of them without the others.
| cjs_ac wrote:
| > Your product is downstream from your ideal customer profile
| (ICP).
|
| Do not start with an _idea_. Start with a _problem_ , and then
| construct a _solution_. The solution is the product. The problem
| implies someone who has that problem: this is the customer. How
| much of a problem it is tells you how much they want it and how
| much they will pay for it. Because an idea doesn't necessarily
| have a problem, it results in a product that doesn't necessarily
| have a customer.
|
| > As 37signals' Jason Fried says, "You cannot validate an idea. It
| doesn't exist, you have to build the thing. The market will
| validate it."
|
| Similarly, don't set out to change the world. Put your product
| into the world, and the world will decide whether to change as a
| consequence.
| kevmo314 wrote:
| This list mentions A/B testing a few times and it's worth noting
| that A/B testing is great but it's not free.
|
| I've seen a nontrivial number of smart engineers get so bogged
| down in wanting to A/B test everything that they spend more time
| building and maintaining the experiment framework than actually
| shipping product, only to realize the A/B testing was useless
| because they only had a few hundred data points. Data-driven
| decisions are definitely valuable, but you also have to
| recognize when you have no data to drive the decisions in the
| first place.
|
| Overall, I agree with a lot of the list but I've seen that trap
| one too many times when people take the advice too superficially.
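To make the sample-size point concrete, here is a minimal back-of-the-envelope sketch (the 5% baseline and one-point lift are illustrative assumptions, not figures from the thread), using the standard two-proportion normal approximation:

```python
from math import sqrt
from statistics import NormalDist  # stdlib, Python 3.8+

def sample_size_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Approximate visitors needed per arm to detect a shift p1 -> p2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_b = NormalDist().inv_cdf(power)          # desired power
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return num / (p1 - p2) ** 2

# Detecting a lift from 5% to 6% conversion needs roughly 8,000
# visitors per arm -- far more than "a few hundred data points".
n = sample_size_per_arm(0.05, 0.06)
```

With only a few hundred visitors per arm, nothing short of an enormous effect would reach significance, which is exactly the trap described above.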
| nightpool wrote:
| It probably helps that one of PostHog's core products is an A/B
| testing framework, so it's much easier for them to iterate on
| it internally for what they need to A/B test PostHog. Even when
| you already have a best-in-class A/B testing framework though, I
| agree--A/B testing too much or waiting too long for "more data"
| to make decisions can slow down momentum badly for features
| that _should_ be no-brainers.
| abxyz wrote:
| A/B testing is also high risk: a good test produces valuable
| data, a bad test produces harmful data. A company that does no
| testing is better off than a company that does bad testing.
| Many people treat product decisions made based on test results
| as unimpeachable, whereas they treat product decisions made on
| a hunch with a healthy skepticism that leads to better
| outcomes.
| pkaler wrote:
| Agree!
|
| Most orgs should just be shipping features. Before starting an
| Experiment Program, teams should brainstorm a portfolio of
| experiments. Just create a spreadsheet where the first column
| is a one-line hypothesis of the experiment. E.g. "Removing step
| X from the funnel will increase metric Y while reducing metric
| Z". And then RICE (Reach, Impact, Confidence, Effort) score
| your portfolio.
|
| If the team can't come up with a portfolio of 10s to 100s of
| experiments then the team should just be shipping stuff.
|
| And then Experiment Buildout should be standardized. Have
| standardized XRD (Experiment Requirements Doc). Standardize
| Eligibility and Enrollment criteria. Which population sees this
| experiment? When do they see it? How do you test that bucketing
| is happening correctly? What events do analysts need? When do
| we do readouts?
|
| That's just off the top of my head. Most orgs should just be
| shipping features.
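The scoring step described above can be sketched in a few lines (the hypotheses and numbers below are illustrative; RICE is conventionally reach × impact × confidence ÷ effort):

```python
# Minimal RICE scoring for a portfolio of experiment hypotheses.
# All entries are made-up examples, not from the thread.
portfolio = [
    # (hypothesis, reach/quarter, impact 0.25-3, confidence 0-1, effort person-weeks)
    ("Remove step X from the funnel", 8000, 2.0, 0.8, 2),
    ("Add social login",              5000, 1.0, 0.5, 4),
    ("Rewrite onboarding email",      2000, 0.5, 0.9, 1),
]

def rice(reach, impact, confidence, effort):
    return reach * impact * confidence / effort

# Rank the portfolio so the highest-leverage experiment runs first.
ranked = sorted(portfolio, key=lambda h: rice(*h[1:]), reverse=True)
for hypothesis, *scores in ranked:
    print(f"{rice(*scores):8.0f}  {hypothesis}")
```

A spreadsheet works just as well; the point is that the hypothesis and its score live in one place the whole team can argue about.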
| bluGill wrote:
| Most A/B tests should not be done in a production like way.
| Grab some post-it notes and sketch out both A and B, then
| watch users work with it.
|
| For a lot of what you want to know the above will give better
| information than a 100% polished A/B test in production. When
| people see a polished product they won't give you the same type
| of feedback as they will for an obvious quick sketch. The UI
| industry has gone wrong by making A/B in production too easy,
| even though the above has been known for many decades.
|
| (A/B in production isn't all bad, but it is the last step, and
| often not worth it)
| marcosdumay wrote:
| You're stretching the name a bit too far. The entire point of
| A/B tests is that you run them in the real world and measure
| how real users react.
|
| Design-time user testing has been a thing for much longer
| than A/B tests. They are a different thing.
|
| I mean, your point stands. But you can't do A/B tests on
| anything that is not production; what you're recommending is a
| different kind of test.
| bluGill wrote:
| I'll accept your definition correction. However I think my
| point still stands: there are better things than A/B
| testing to get the information you need.
| simonw wrote:
| I think A/B testing is one of the _most expensive ways_ of
| getting feedback on a product feature.
|
| - You have to make good decisions about what you're going to
| test
|
| - You have to build the feature twice
|
| - You have to establish a statistically robust tracking
| mechanism. Using a vendor helps here, but you still need to
| correctly integrate with them.
|
| - You have to test both versions of the feature AND the
| tracking and test selection mechanisms really well, because
| bugs in any of those invalidate the test
|
| - You have to run it in production for several weeks (or you
| won't get statistically significant results) - and ensure it
| doesn't overlap with other tests in a way that could bias the
| results
|
| - You'd better be good at statistics. I've seen plenty of A/B
| test results presented in ways that did not feel statistically
| sound to me.
|
| ... and after all of that, my experience is that a LOT of the
| tests you run don't show a statistically significant result one
| way or the other - so all of that effort really didn't teach
| you much that was useful.
|
| The problem is that talking people _out_ of running an A/B
| test is really hard! No-one ever got fired for suggesting an
| A/B test - it feels like the "safe" option.
|
| Want to do something much cheaper than that which results in a
| much higher level of information? Run usability tests. Recruit
| 3-5 testers and watch them use your new feature over screen
| sharing and talk through what they're doing. This is an order
| of magnitude cheaper than A/B testing and will probably teach
| you a whole lot more.
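To illustrate the statistics point with hypothetical numbers: a lift that looks real at first glance can easily fail a standard two-proportion z-test.

```python
from math import sqrt
from statistics import NormalDist  # stdlib, Python 3.8+

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a difference in two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)   # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Variant B "won" by 0.6 points over 5,000 visitors per arm...
p = two_proportion_p_value(250, 5000, 280, 5000)
# ...yet the difference is not significant at the usual 0.05 level.
```

This is the "didn't teach you much" outcome: after weeks of traffic, the honest answer is still "we can't tell".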
| light_triad wrote:
| Some teams think they can A/B test their way to a great
| product. It can become a socially acceptable mechanism to
| avoid having opinions and reduce friction.
|
| Steve Blank's quote about validating assumptions: "Lean was
| designed to inform the founders' vision while they operated
| frugally at speed. It was not built as a focus group for
| consensus for those without deep convictions"
|
| Is the Lean Startup Dead? (2018)
| https://medium.com/@sgblank/is-the-lean-startup-dead-71e0517...
|
| Discussed on HN at the time:
| https://news.ycombinator.com/item?id=17917479
| eddythompson80 wrote:
| Any sort of political/PR fallout in any organization can be
| greatly limited or eliminated if you just explain a change
| as an "experiment" rather than something deliberate.
|
| "We were just running an experiment; we do lots of those.
| We'll stop that particular experiment. No harm no foul" is
| much more palatable than "We thought we'd make that change.
| We will revert it. Sorry about that".
|
| With the former, people think: "Those guys are always
| experimenting with new stuff. With experimentations comes
| hiccups, but experimentation is generally good"
|
| With the latter, now people would wanna know more about your
| decision-making process. How and why that decision was
| made. What were the underlying reasons? What was your end
| goal with such a change? Do you actually have a plan or are
| you just stumbling in the dark?
| porridgeraisin wrote:
| > You have to build the feature twice
|
| Why though? Can't you have it dynamically look up whether the
| experiment is active for the current request and if so behave
| a certain way? And the place it looks up from can be updated
| however?
| stetrain wrote:
| But you have to implement and test both sides of that "if"
| statement, both behaviors. Thus "build the feature twice"
| simonw wrote:
| Right: you have to take responsibility for implementing
| (and testing and short-term maintaining) two branches.
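A minimal sketch of the pattern this subthread describes (names are hypothetical, not a real feature-flag API): the dynamic lookup is just an `if`, and both branches still have to be built, tested, and maintained.

```python
# A dynamic flag lookup doesn't remove the second implementation;
# it just selects between two code paths you still have to build.
def checkout_total(items, flags):
    if flags.get("new_checkout"):
        # Variant B: the new behavior must be implemented and tested
        return sum(price for _, price in items) * 0.9  # promo pricing
    # Variant A: the old behavior must keep working and stay tested
    return sum(price for _, price in items)

items = [("book", 20.0), ("pen", 5.0)]
total_new = checkout_total(items, {"new_checkout": True})   # 22.5
total_old = checkout_total(items, {"new_checkout": False})  # 25.0
```

The flag store can be updated however you like; what can't be avoided is owning both behaviors for the lifetime of the experiment.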
| Chyzwar wrote:
| 18. Instead of forcing PRs into day-of-work units, it is better
| to aim for the minimum testable increment. Some features just
| need more work before they can be tested. Forcing everything
| into tiny tickets makes planning tedious and often introduces
| bugs via half-finished features.
|
| 22. I've seen design systems fail in many companies. It is very
| hard to get the right people and budget for this to succeed.
| Most startups are better off picking an existing UI toolkit and
| doing some theming/tweaking.
|
| 27. I disagree. If you put a product manager as a gatekeeper to
| users, you will transform the organization into a feature
| factory. Engineers should be engaged with users as much as
| possible.
| scottishbee wrote:
| 27. I don't think you do disagree. Read point 29: Hire and rely
| on product engineers. They have full-stack technical skills
| needed to build a product along with customer obsession. Yes,
| this means they need to talk to users, do user interviews,
| recruit testers for new features, collect feedback, do support,
| and respond to incidents.
| abc-1 wrote:
| There are companies out there that probably do none of these
| things and are 1000x more successful from a revenue or market cap
| perspective. Seems like the biggest successes are simply being at
| the right place at the right time and not being a complete idiot.
| Nobody wants to hear that though.
| dowager_dan99 wrote:
| I always look for the post with the "success formula" in this
| situation and can never find it, but luck and timing are
| components. Also skill, resourcing, execution, and what I'll
| call "grit". The ratio is not defined but you need components
| of each.
| light_triad wrote:
| Sure luck plays a role. There are techniques to increase your
| odds of getting lucky though.
|
| Sam Altman argued a startup's chance of success is "something
| like Idea times Product times Execution times Team times Luck,
| where Luck is a random number between zero and ten thousand"
|
| A lot of the success in startups is not due to getting the
| initial idea right, but what founders do once they realize that
| their initial idea is wrong.
| ozim wrote:
| "Your website is the first impression your product is making" -
| unless your company does not operate in market where no one
| cares about your website.
|
| We get serious leads only via our C-levels' and salespeople's
| networking. No customer cares about our landing page, and when
| we did care about the landing page, the leads were not serious
| and a waste of time.
| AznHisoka wrote:
| Yep, I really don't want to read any articles from any company
| that built their success during the SaaS greenfield period from
| 2008-2016. Plus, you already had the brand and market share to
| build even more upon that foundation.
|
| Now if you've built something big that grew in the past few
| years organically, there's more to learn from that success.
| morkalork wrote:
| What, you're saying the secret to success isn't just coming
| up with a cool brand name that ends in -ify or -ly in that era?
| giancarlostoro wrote:
| I was reading the wikipedia page for WhatsApp this morning and
| indeed, right place, right time, right talent pool.
| light_triad wrote:
| > If you're going to pivot, make it big.
|
| This is a great point. I've seen teams apply lean startup by
| testing -> changing something -> testing -> changing something ->
| testing ...
|
| The problem is that the changes are so small that statistically
| you end up testing the same thing over and over and expecting
| different results. You need to make a significant change in your
| Persona, Problem, Promise, or Product to (hopefully) see
| promising results.
| apsurd wrote:
| These are OK. They're great for highlighting the surface area of
| product building. But the list is heavily biased toward an
| analytics and testing perspective, because PostHog's product is
| analytics and testing.
|
| Capturing analytics is a no-brainer. However, most data in most
| products at most companies starting out just fundamentally does
| not matter. It's dangerous to get into the habit of "being data
| driven" because it becomes a false sense of security, and
| paradoxically data is extremely easy to bias. And even with more
| rigor, you get too easily trapped in local optima. Lastly, all
| things decay, but data and experimentation run as if the win is
| forever, until some other test beats it. It becomes exhausting
| to touch anything, and that's seen as a virtue. It's not.
|
| Products need vision and taste.
| srameshc wrote:
| I was always very passionate about programming and startups or
| small teams/companies, but I never even got to the first round
| because of my undergraduate degree. I think I would have tried
| hard given an opportunity and worked with a lot of discipline
| and passion. So now I have my own small team, and I try to give
| a chance to someone who doesn't have the right background but is
| still willing to learn and is passionate about building stuff.
| It is probably not what the author has in mind, and he may be
| right in his approach since it is established, but I will test
| and see whether what I am trying works.
| hk1337 wrote:
| #2 and #3 are sort of symbiotic. If you have a bad hire, then
| it's going to be difficult to give them autonomy.
| the__alchemist wrote:
| Thought on this one:
|
| > Trust is also built with transparency. Work in public, have
| discussions in the open, and document what you're working on.
| This gives everyone the context they need, and eliminates the
| political squabbles that plague many companies.
|
| This seems prone to feedback loops; it can go both directions. If
| there are political squabbles, discussion may be driven private,
| to avoid it getting shut down or derailed by certain people.
| MrLeap wrote:
| I've seen this. It takes management vigilantly guarding the
| commons from excessive drive-bys and divebombs.
|
| It takes a lot less energy to throw shit than it does to clean
| it up. There are infinite signals. Big egos take a lot of
| energy to repel and redirect in order to maintain it. I think
| it's absolutely worth it when it's possible, but yeah.
|
| You wouldn't think so until you've done it, but it's really
| hard to get 6+ adults together where everyone's primary goal in
| that team is to make a thing. Seems like there's always one or
| more people who want to peck others, build fiefdoms, hold
| court.
| bilater wrote:
| The problem with having such a specific, prescriptive formula for
| success is that it never actually works out that way. Sure, there
| are high-level principles, the PostHog team executes brilliantly,
| and I love the product, but I think we're really bad at
| connecting the dots on what actually made something successful.
| Instead, we assign credit to random things just to make it all
| make sense. A lot of times, it's the equivalent of saying, "Bill
| Gates eats this cereal for breakfast, so if I do that, I should
| become a billionaire too."
| phillipcarter wrote:
| I know this is not related to the article, which is great, but I
| am wondering how long "posthog" is going to be the name of this
| company given what "post hog" means.
| somekyle2 wrote:
| I marvel at this every single time I see their billboards. It
| does mean I read all of their billboards, I guess.
| drewbeck wrote:
| I'm kind of dreading anywhere I work picking up the service b/c
| of how much I'd have to say the name without laughing or making
| jokes about it.
| skyyler wrote:
| In the first image in the article, what is a "SuperDay"?
|
| Is this like a trial day where you're invited to do a day of work
| for free?
| adamgordonbell wrote:
| They pay you for it, but it is a trial work day.
|
| Story time. I interviewed for a job at posthog. I knew that I
| really loved their communication style. But I hadn't used their
| product and didn't know a ton about them except that their
| writing is fantastic.
|
| The 'product for engineers' focus that they have is cool, but
| when I had an interview, it was clear that I wasn't a 'product
| for engineering' person.
|
| When they proposed the Super Day, I was like, I'm not sure
| because it's awesome to get paid for a day, but it's also not
| an unstressful event. And I sort of said I had some more
| questions before we moved on to the Super Day.
|
| And they basically just said: we don't think it's going to work
| out. It was actually a pretty positive experience. I think that
| they correctly assessed the situation pretty quickly and my
| hesitation was a real signal to them.
|
| (This is my recollection - I could have the details wrong.)
|
| But yeah, the Super Day is a day of very structured work in the
| role that they set up. And it's paid.
| rapfaria wrote:
| > And I sort of said I had some more questions before we
| moved on to the Super Day.
|
| > And they basically just said: we don't think it's going to
| work out.
|
| Ouch, so their tight-knit, no-shortcuts hiring process is
| only thorough for them, not for the engineer applying.
| adamgordonbell wrote:
| Perhaps, but it wasn't a bad experience. I've come to value
| hiring processes, and I guess employers, where they know how
| to make decisions swiftly.
___________________________________________________________________
(page generated 2025-03-05 23:00 UTC)