Human in the loop means that, despite your best efforts at initial prompting (which is what rules are), there will always be a need to say "no, that's wrong, do this instead". Expecting to write enough rules for the model to work fully autonomously through your problem is indeed wishing for AGI.
In my example, the human would be in the loop in exactly the same way as the technique in the article. The human can tell the model that it's wrong and what to do instead.
Tools like the one in the article are also "rules".