By Sam De Maeyer (Senior Engineering Manager)
Code reviews are meant to improve quality and share knowledge, but ours had become a bottleneck. PRs were getting buried in Slack threads, large reviews were ignored, and engineers were left frustrated. We knew the process was broken, so we set out to fix it, and along the way discovered how AI could help us build a better solution faster than we thought possible. What started as a stopgap idea turned into an AI-powered app that transformed how our teams review code.

At TryHackMe, our engineering department works across multiple squads, balancing internal tooling, the core platform, and growth engineering. With so many moving parts, even small inefficiencies in code review ripple quickly across teams.
We were no exception.
Our setup was simple: whenever an engineer had a pull request ready, they would drop it into a dedicated Slack channel. The hope was that someone would pick it up within the same day. In reality, smaller PRs, the easy wins, were usually reviewed within a few hours. But the larger or more complicated ones often sank to the bottom of the channel as new messages arrived. Over time it became harder to keep track of who was reviewing what, or whether a review was already underway. This dragged down our overall efficiency, slowing the pace at which code could move from ready to released.
In some cases this meant reviews sat idle for several days. Spread across 30 engineers, even an average of one hour lost per person each week compounds to more than 1,500 hours wasted each year (30 engineers × 1 hour × 52 weeks ≈ 1,560 hours).
This lack of visibility and ownership was frustrating for engineers, who often had to chase reviews or wait longer than expected. It also created knock-on effects for PMs trying to keep delivery on track, and ultimately delayed features reaching our users. The impact was not just lost hours, but a reduction in our delivery velocity, making it harder to maintain momentum across squads. What might look like a small inefficiency in isolation was in reality costing us weeks of engineering time across squads every year.
Some companies solve it by assigning reviewers automatically, but that can overload certain engineers, especially if they are already busy, on holiday, or off sick. Others try tagging people manually, but that just shifts the problem. We liked the idea of engineers choosing reviews themselves, as it creates a stronger sense of ownership and autonomy, but without any structure it simply was not working.
So we tried to solve it as a group. With around 15 senior engineers on the team, I assumed we would be able to come up with a better flow together. We held meetings. We asked for ideas. Nothing stuck.
Eventually, I decided enough was enough. If we could not crowdsource a solution, I would propose one myself.
My Proposal: The “Active Code Review Process”
The idea was straightforward: instead of letting PRs float around in Slack with no ownership, we would create a new channel with a clear structure. Whenever someone had a PR, they would post it with the GitHub link, the Jira ticket, and a short description. Other engineers could then assign themselves as reviewers by adding the 👀 emoji. When the review was approved, they would add a ✅. If they left comments, they would add 📝. All discussion would stay neatly in a thread, and once the PR was merged, the original post would be removed to keep the channel tidy. The ultimate goal was that the channel would be empty, meaning nothing was waiting for review.
- Join the dedicated Slack channel:
#active-code-reviews
- Post your PR in a standard format:
- GitHub Link
- Jira Ticket Link
- Short Description
- Claim a review with the 👀 emoji.
- Approve with ✅.
- Add feedback with 📝 if comments were left.
- Discuss in threads to keep the main channel clean.
- Remove the post once the PR is merged. Admins could clear old threads weekly.

It was not perfect. Some people worried it meant yet another Slack channel to check. Others did not like the idea of manually posting every PR. But at least it gave us something concrete to talk about, something we could tweak, and something we could trial for a month. My plan was simple: if nobody came up with a better alternative, we would roll this out.
I made this clear on a Friday morning, my last day before a two-week holiday. I figured nothing would happen while I was away, so when I returned, I could say: you had time, no one suggested anything else, so we are going ahead with my plan.
I was wrong.
While I was away, one of our senior engineers, Steven Christie, had been thinking about the proposal. He saw the same problems I did, but also the objections. And instead of waiting for me to come back and make a decision, he built something better.
Steven’s Story
Steven’s solution was to create the GitHub PR Viewer, a custom web app that tracks all open PRs, shows their status at a glance, and solves the visibility issue we had struggled with.
The GitHub PR Viewer gave us exactly what we had been missing: a single view of all open PRs, their status, and who was reviewing them. Instead of PRs getting buried in Slack, engineers could now see at a glance what needed attention. It solved the visibility problem overnight and quickly won support across squads.
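To give a feel for the kind of logic such a viewer needs, here is a minimal Python sketch of reducing a PR's reviews to a single at-a-glance status. This is a hypothetical illustration, not Steven's actual implementation; it only assumes review data in the shape GitHub's REST API returns (`GET /repos/{owner}/{repo}/pulls/{n}/reviews`).

```python
# Hypothetical sketch of a PR viewer's status logic — not the actual app.
# Input dicts mirror the shape of GitHub's REST API review objects.

def pr_status(reviews: list) -> str:
    """Reduce a PR's reviews to one at-a-glance status.

    Each reviewer's latest APPROVED/CHANGES_REQUESTED review wins,
    mirroring how an approval supersedes earlier requested changes.
    """
    latest = {}
    for review in reviews:  # GitHub returns reviews oldest-first
        state = review["state"]  # APPROVED / CHANGES_REQUESTED / COMMENTED
        if state in ("APPROVED", "CHANGES_REQUESTED"):
            latest[review["user"]["login"]] = state

    if not latest:
        return "needs review"       # nobody has claimed it yet
    if "CHANGES_REQUESTED" in latest.values():
        return "changes requested"  # the 📝 of the old Slack flow
    return "approved"               # the ✅ of the old Slack flow


# Sample data shaped like GitHub API review objects.
reviews = [
    {"user": {"login": "reviewer1"}, "state": "CHANGES_REQUESTED"},
    {"user": {"login": "reviewer2"}, "state": "COMMENTED"},
    {"user": {"login": "reviewer1"}, "state": "APPROVED"},  # supersedes above
]
print(pr_status([]))       # needs review
print(pr_status(reviews))  # approved
```

The real app does more (listing PRs across repos, showing who has claimed what), but the core idea is the same: collapse scattered review activity into one visible status per PR.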
Here’s what it looks like in action:

What is more interesting is how Steven built it: using Cursor and a new way of working we have started calling vibe coding. In the next post, Steven walks through how he used AI as a pair programming partner, the pros and cons he discovered, and what this means for how we might build software in the future. Read part two here.
Where We Are Now
What began as a manager-imposed process has turned into something much better: a purpose-built tool that addresses the real concerns engineers had.
Today, our squads are trialling the GitHub PR Viewer. Each day, engineers check the app to see which reviews are waiting, claim them, and keep the flow moving. Large PRs are no longer ignored, and the ownership problem has all but disappeared. Most importantly, reviews are now happening faster and more consistently, boosting the overall efficiency and velocity of our engineering teams.
It is not perfect yet, no process ever is, but it is already a huge improvement over where we started. And more importantly, it shows the power of combining a clear process with the creativity and initiative of engineers, backed by AI.
By removing these delays, we expect to save well over a thousand hours of engineering time each year.
Takeaways
Looking back, there are a few lessons worth sharing:
- Code reviews are valuable, but without structure, they quickly become messy.
- Sometimes teams need a nudge. Autonomy does not mean inaction.
- Improving the review process boosted both efficiency and velocity, helping us deliver features to users more quickly.
- Time-boxed experiments make change less intimidating.
- AI can accelerate solutions that might otherwise stay as “good ideas” on paper.
- The best outcomes often come when managers and engineers push from different sides. One defines the problem, the other builds the solution.
For us, this story was about more than code reviews. It was a reminder that when used properly, AI can enhance engineers in ways that were not previously possible. A small idea turned into a working product in record time, and in doing so, transformed one of our most frustrating bottlenecks into a system we are genuinely excited to use.