Hacker News
The Secret of High-Performing Developers (blog.daftcode.pl)
16 points by kata_ko on July 6, 2017 | 12 comments


> In general I would say: if you need to debug — you’ve already lost your way.

Ok guy. Sure.


Yea, like what?

That's basically saying, "If you have to check to make sure your code totally works (including edge cases), you've lost your way".

The fact that he says using logs to determine the state of a program when debugging is an anti-pattern raises red flags for me as well.


I think this is more specific than that: the claim seems aimed at attaching a remote debugger and stepping through lines of your program.

I'm torn. Supposedly this is the magic of the old Lisp machines. At the same time, since moving to microservice land, I find I haven't used a stepping debugger in a long time.


I'm glad I'm not the only one that read that and was like ...'wait, what?'

Sometimes log debugging is about the only thing you have to go with, depending on the language and tool set you are using.


I agree. He lost all credibility with that statement.


"It depends" is the only valid truism.

There's no panacea for techniques, tools or philosophy.

Fast-running TDD unit and smoke tests are great for fast, non-fragile development; however, throwing a "pry" or equivalent breakpoint into code that isn't behaving as expected can be immensely helpful. Heck, when I started out 25 years ago, trial and error in an IDE is how I learned how Turbo Pascal worked. REPLs and IDEs are great for digging into how a piece of code works live, while automated testing assures things aren't broken and keeps code from being treated as sacred text.


Some of these seem like attacking straw men of people using techniques that aren't as dangerous as others.

For example:

    Logging and print debugging — let print what is 
    the state of code at the different points of 
    execution and see what it is.
I certainly agree that a code base with a billion logging statements can be a bad thing, in large part because each statement is worth less and less as the volume grows.

That said, I think it is quite common to make a few quick hypotheses about the code and confirm them with a few log statements.
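For instance, a quick hypothesis check with a couple of log statements might look like this (a minimal sketch; the function and values are invented for illustration):

```python
import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger(__name__)

def apply_discount(price, rate):
    # Hypothesis: `rate` arrives as a percentage (e.g. 15), not a fraction (0.15)
    log.debug("apply_discount called with price=%r rate=%r", price, rate)
    discounted = price * (1 - rate)
    log.debug("discounted=%r", discounted)  # confirms or refutes the hypothesis
    return discounted

apply_discount(100.0, 15)  # logs discounted=-1400.0, confirming rate was a percentage
```

Two log lines, one run, hypothesis confirmed; then the statements come back out.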

Similarly, having a model of the execution in your head can help when you're investigating something. So,

    Random debugging — trying different approaches 
    within the context of application. Maybe the
    problem is in this line? No? Maybe that line?
    OK, maybe let’s try this line?
Sure, that is bad if you make it a heavyweight process. But if you have a good mental model of the code, you can quickly simulate what would happen if the assumptions and intentions of various lines go wrong.

That said, the general message seems to be that those that play with things will be better at it. This seems a safe assumption. I'd love to see a good data set exploring these thoughts.


Logging and print debugging is definitely not an anti-pattern. In fact, it's often a great tactic to narrow down the issue in your codebase (called “binary debugging” by the author).
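What the author calls "binary debugging" is essentially bisection: probe the midpoint of the suspect region, keep the half where the bad value appears, and repeat. A toy sketch of the idea (the pipeline stages and invariant are invented for illustration):

```python
# Bisect a pipeline of transformation stages to find the first stage
# whose output violates an invariant (here: values must stay >= 0).
stages = [
    lambda x: x + 1,
    lambda x: x * 2,
    lambda x: x - 100,   # the buggy stage
    lambda x: x // 3,
]

def first_bad_stage(stages, value, ok=lambda v: v >= 0):
    lo, hi = 0, len(stages)  # invariant: output after `lo` stages is ok
    while lo < hi:
        mid = (lo + hi + 1) // 2
        v = value
        for stage in stages[:mid]:
            v = stage(v)
        if ok(v):
            lo = mid      # still fine after `mid` stages; bug is later
        else:
            hi = mid - 1  # broken at or before `mid`; narrow down
    return lo  # stages run so far are good, so stages[lo] is the first bad one

print(first_bad_stage(stages, 5))  # → 2
```

Each probe halves the search space, which is exactly why it scales to large codebases where stepping line by line would not.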

This whole article seems to be written to follow some blogging best practices rather than actually giving good advice based on actual experience.


I'm surprised "Using debugger for daily coding" is an anti-pattern.

Using the debugger and variable watching much more has been the most productive change I've made in my workflow in the last 5 years.

In fact, if you get good with a debugger, you don't need to log anything, as you can just view a variable's value at any point in the program. And with a good setup you are seconds away from inspecting any part of the program state at any moment, even editing it while the program is paused to force it into the state you want to debug.
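In Python terms, the equivalent workflow is dropping into `pdb` at the suspect line, then inspecting and even mutating state at the prompt. A minimal sketch (the function and figures are invented for illustration):

```python
import pdb

DEBUG = False  # flip to True to drop into the debugger at the marked line

def total(items):
    subtotal = sum(items)
    if DEBUG:
        pdb.set_trace()  # at the (Pdb) prompt: `p subtotal` inspects a value
                         # with no log statement; `subtotal = 0` edits live
                         # state to force a path; `c` continues execution
    return round(subtotal * 1.2, 2)

print(total([10, 20, 12]))  # → 50.4
```

The same inspect-and-edit loop is what browser devtools give you in the JavaScript world via the `debugger` statement.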


Debuggers are indeed powerful tools.

The usual argument against using a debugger is that, arguably, you should already have a reasonably accurate mental model of your program. Therefore, you should not need a debugger, just a printf here and there to confirm your mental model is correct.

However, when you don't quite understand how the program works, especially when you think you do but really don't, a debugger is invaluable. It's useful like printfs, only exponentially more so. You can learn so much of your tech stack with a debugger it's not even funny how few people ever bother with them.


But even in your first case, why printf when you can see that variable's value, its scope, the stack, and everything else in the debugger, all without modifying the program at all?

I'm in the javascript world for the most part now, so it could be a much bigger pain in other languages/ecosystems, but for me a debugger is always there, and always running, and in my opinion not used nearly enough.

Although I think we are in agreement that trying to assume you know what the code is doing while debugging is going to lead to pain.


This seems more like "A few random habits of developers with basic competency". The only developers I have met who copy code directly from the Internet into their code without attempting to understand it first are the same developers that get fired after a month (or a few days).

I agree with him only partially about debuggers. While the strategy he calls binary debugging is useful, it fails when you are dealing with large segments of highly stateful code, as is often seen in legacy systems. In those situations a debugger can reveal terrible things like memory corruption, which sometimes cannot be solved by reasoning about one place in the code, because the cause is somewhere else entirely and reached in and broke this code's state.



