
How to ensure mistakes do not repeat?

Staff Software Engineer [IC4] at Twilio · a year ago

When getting feedback during a PR or design review, I've noticed that certain mistakes, at least in my case, creep up repeatedly. Any tips to ensure that the learnings stick?


Discussion (3 comments)
  • Staff SWE at Google, ex-Meta, ex-Amazon · a year ago

    Systematize. Make a checklist, a template, or a form that enforces certain things. Leaving it to people's memory or best intentions is planning for it to fail.

    If you can automate the creation of an artifact, do that. If not, automate the check that it's included. If you always need a sign-off beforehand from a buddy who will double-check, do that, but ideally a system makes that unnecessary.
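
    One concrete way to "automate the check that it's included" is a git commit-msg hook. Here's a minimal sketch in Python; the required trailer names are hypothetical examples, so swap in whatever your team actually wants enforced:

    ```python
    #!/usr/bin/env python3
    """Git commit-msg hook: reject commits missing required checklist trailers.

    A minimal sketch. The trailer names below are hypothetical examples;
    replace them with whatever your team actually wants enforced.
    """
    import sys

    REQUIRED_TRAILERS = [
        "Tested:",         # how the change was verified
        "Rollback-Plan:",  # what to do if it breaks in production
    ]

    def main() -> int:
        if len(sys.argv) < 2:
            print("usage: commit-msg <message-file>")
            return 2

        # Git passes the path of the commit message file as the first argument.
        with open(sys.argv[1], encoding="utf-8") as f:
            message = f.read()

        missing = [t for t in REQUIRED_TRAILERS if t not in message]
        if missing:
            print(f"Commit rejected. Missing trailers: {', '.join(missing)}")
            return 1
        return 0

    if __name__ == "__main__":
        sys.exit(main())
    ```

    Install it by copying it to .git/hooks/commit-msg and marking it executable. A server-side or CI version of the same check is stronger, since local hooks can be bypassed with --no-verify.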

  • Meta, Pinterest, Kosei · a year ago

    Show your work. When you have something you're unsure about in a code review (or architecture review), explain what you considered, and link to previous examples with some commentary. Something like:

    These past PRs, A and B, are similar to what we're doing here, but the difference in our case is X, so that's why I went with this approach.

    The act of writing this will maximize your learning, and it also builds credibility with the people reviewing your code.

  • Tech Lead @ Robinhood, Meta, Course Hero · a year ago

    Write stuff down! It seems like raw remembering isn't working in your case. That's perfectly normal, especially when you're working at a high-performing company like Twilio, where I'm sure there are always a million things going on. Create a local note called "Code Aspects To Check Pre-PR" or something similar: a checklist of all the mistakes you have made in prior PRs. After you put your commit up as a draft PR, go through each item and make sure you haven't made any of those mistakes (and if you have, just fix them).
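
    If you want a nudge that's harder to skip, you can turn that note into a tiny script you run before marking the PR ready. A minimal sketch, assuming the checklist lives in a plain-text file, one item per line; the path below is a hypothetical example:

    ```python
    #!/usr/bin/env python3
    """Interactive pre-PR checklist walker: a minimal sketch.

    Assumes your checklist is a plain-text file with one item per line
    (the path below is a hypothetical example). Run it before marking a
    draft PR as ready for review.
    """
    from pathlib import Path

    CHECKLIST = Path.home() / "notes" / "pre-pr-checklist.txt"

    def main() -> None:
        items = [line.strip() for line in CHECKLIST.read_text().splitlines()
                 if line.strip()]
        unresolved = []
        for item in items:
            answer = input(f"Checked: {item}? [y/n] ").strip().lower()
            if answer != "y":
                unresolved.append(item)
        if unresolved:
            print("\nStill to fix before requesting review:")
            for item in unresolved:
                print(f"  - {item}")
        else:
            print("\nAll clear. Mark the PR ready for review.")

    if __name__ == "__main__":
        main()
    ```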

    On top of this personal learning system, I heavily recommend automating the learning reinforcement with tooling, given that you're a Staff Engineer at a larger company. There are multiple angles of attack here (inspired by my time at Meta):

    1. The IDE - If you're using something like IntelliJ, you can write custom rules that identify bad code patterns and warn the developer, with the next-level option of offering an auto-fix as well.
    2. Local linter - If your team has a linting script that everyone runs before submitting code, you can extend it with new rules that catch these bad code patterns (a sketch of one such rule follows this list).
    3. PR linter - You can't count on everyone running the lint script, so I've seen teams set up an automated linter that goes through fresh and recently updated diffs to suggest changes. Adding new rules here is really powerful, as it's one of the more final lines of defense and it's more public.
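
    As a concrete illustration of angles 2 and 3, here's a minimal sketch of a standalone lint rule in Python. The bad pattern it catches (bare except: blocks) is just an illustrative stand-in; replace it with whatever mistakes keep recurring for you:

    ```python
    #!/usr/bin/env python3
    """Minimal custom lint rule: flag bare `except:` blocks.

    A sketch of the kind of rule you'd add to a local or PR linter.
    Usage: python lint_bare_except.py file1.py file2.py
    Exits nonzero on any violation, so it can gate a commit or a CI job.
    """
    import ast
    import sys

    def find_bare_excepts(source: str, filename: str) -> list[str]:
        violations = []
        for node in ast.walk(ast.parse(source, filename=filename)):
            # A bare `except:` has no exception type attached to the handler.
            if isinstance(node, ast.ExceptHandler) and node.type is None:
                violations.append(
                    f"{filename}:{node.lineno}: bare `except:` swallows all errors"
                )
        return violations

    def main() -> int:
        all_violations = []
        for path in sys.argv[1:]:
            with open(path, encoding="utf-8") as f:
                all_violations += find_bare_excepts(f.read(), path)
        for violation in all_violations:
            print(violation)
        return 1 if all_violations else 0

    if __name__ == "__main__":
        sys.exit(main())
    ```

    Run locally, this covers angle 2; pointed at the changed files of a PR in CI, the same rule becomes the more public line of defense in angle 3.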

    I have seen many Staff Engineers at Meta achieve huge impact from using the above 3 tools to share their clean code instincts with the rest of their orgs. This is classic Staff behavior: You have a personal problem, and your solution not only solves it for yourself but for many, many others as well. If your team doesn't have this kind of infra, consider building it yourself as a 20% project.

    Zooming out, improving the overall developer ecosystem for your team is a very common Staff-level project. For an in-depth case study example of this, check out Rahul's story on how he made an internal tool at Meta that empowered hundreds of engineers to get to E6.
