Two C++ code snippets. A good interview question would be: which one would you pick, and why? What would you change? Or you could just ask which one is AI.
A tweet has been making the rounds over the weekend after escaping the C++ community containment. It offers two different ways of handling a somewhat classic “insert or return existing” associative container problem. The author claims one was made with AI and the other hand-written. They’re both bad, but they make for a good interview question. And also for a deeper discussion about AI-generated code. Let’s delve (wink, wink) into it!
I had seen many talks about coroutines but it never really clicked where I could use them outside of async IO. Until I looked at how Unity uses them in C#.
Coroutines have been around in C++ for 6 years now. And still I have yet to encounter any in production code. This is possibly due to the fact that, on their own, they are a rather low-level feature. Or more precisely, they’re a high-level feature that requires a bunch of complex (and bespoke) low-level code to plug into a project. But I suspect another, even bigger, issue with the coroutine rollout in C++ has been the lack of concrete examples. After all, how often do you need to compute Fibonacci in real life?
Let’s talk about game simulations. Now that we have described the basics of data-driven multi-threaded ticks, we look at how to build a thread-safe scheduler for our tasks.
Back in late 2025 we started implementing data-driven multi-threaded ticks by making all game object lookups and dereferences go through a thin accessor. This in turn forced us to describe which types a given tick task would need to read and write. And with that information, we have everything we need to build a parallel scheduler.
I wanted to write about threads, but I needed to explain some numbers and I couldn’t. Here’s why.
We have to disrupt our scheduled program because I ran into an annoying hurdle and I feel we need to talk about it. Because right now the profiler situation on Windows kind of sucks and it’s an issue given how ubiquitous the platform is. It works alright for basic/medium usage, but when you need more advanced metrics it breaks down. Let me explain.
Trying to get reliable benchmarks on a GPU that keeps adapting its clock rate.
Choosing between two implementations often requires answering the age-old question “which is faster?”. Which means measuring and benchmarking. Now what do you do when your device’s default mode of operation gives you unreliable numbers?