
Details like the language/stack and S3 configuration would presumably be somewhere else in the spec, not in the description of that particular function.

The fact that you're able to confidently take what I wrote and stretch it into pseudocode with zero deviation from my intended meaning proves my point.



Drafting a spec like this would take more time, and at least as much knowledge, as just writing the code. And you still won't have reliable results without doing another lengthy pass to correct the generated code.

I can write pseudocode like this because I know the relevant paradigm as well as how to design software. There's no way you can have a novice draft it, because they can't abstract well or discern the intent behind abstractions.


I don't agree that it would take more time. Drafting detailed requirements like that to feed into coding agents is a big part of how I work nowadays, and the difference is night and day. I certainly didn't spend as much time typing that function description as I would have spent writing a functional version of it in any given language.

Collaborating with AI also speeds this up a lot. For example, it's much faster to have the AI write a code snippet involving a dependency/API and manually verify the code's correctness for inclusion in the spec than it is to read through documentation and write the same code by hand.

The feat of implementing that function based on my description is well within the capabilities of AI. Grok did it in under 30 seconds, and I don't see any obvious mistakes at first glance: https://grok.com/share/c2hhcmQtMw_fa68bae1-3436-404b-bf9e-09....


I don't have access to the Grok sample you've shared (the service is not available in my region).

Reading the documentation is mostly for gotchas and for understanding the subsystem you're going to incorporate into your software. You cannot design something that will use GTK or sndio without understanding the core concepts of those technologies. And if you do know the concepts, then I'd say it's easier and faster to write the code than to write such specs.

As for finding samples, it's easy on the web, especially with GitHub search. But these days, I often look at the source code of the library itself, because I often have questions that the documentation doesn't answer. It's not about what the code I wrote may do (which is trivial to know) but what it cannot do at all.


Ah, weird, that's good to know. Well here's the code:

    import { env } from './env';
    import { v4 as uuidv4 } from 'uuid';
    import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
    import sharp from 'sharp';

    async function retry<T>(fn: () => Promise<T>, maxAttempts: number): Promise<T> {
      let attempt = 1;
      while (true) {
        try {
          return await fn();
        } catch (error) {
          if (attempt >= maxAttempts) {
            throw error;
          }
          const delayMs = Math.pow(2, attempt - 1) * 100;
          await new Promise((resolve) => setTimeout(resolve, delayMs));
          attempt++;
        }
      }
    }

    export async function processAndUploadImage(s3: S3Client, imageData: Uint8Array): Promise<string> {
      let metadata;
      try {
        metadata = await sharp(imageData).metadata();
      } catch {
        throw new Error('Invalid image');
      }

      if (metadata.format !== 'png') {
        throw new Error('Not a PNG image');
      }

      if (!metadata.width || !metadata.height || metadata.width !== metadata.height || metadata.width < 100) {
        throw new Error('Image must have a 1:1 aspect ratio and resolution >= 100x100');
      }

      const resizedBuffer = await sharp(imageData).resize(100, 100).toBuffer();

      const key = `${uuidv4()}.png`;

      const command = new PutObjectCommand({
        Bucket: env.IMAGE_BUCKET,
        Key: key,
        Body: resizedBuffer,
        ContentType: 'image/png',
      });

      await retry(async () => {
        await s3.send(command);
      }, 100);

      return key;
    }
The prompting was the same as above, with the stipulations that it use TypeScript, import `env` from `./env`, and take the S3 client as the first function argument.
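
For context, calling it looks something like this. This part is my own sketch rather than Grok's output; the `./processAndUploadImage` module path, the `AWS_REGION` field on `env`, and the filename are all assumptions on my part:

    import { readFile } from 'node:fs/promises';
    import { S3Client } from '@aws-sdk/client-s3';
    import { env } from './env';
    import { processAndUploadImage } from './processAndUploadImage';

    // Assumed: env also exposes AWS_REGION; credentials come from the
    // usual environment/profile chain.
    const s3 = new S3Client({ region: env.AWS_REGION });

    async function main() {
      const imageData = new Uint8Array(await readFile('avatar.png'));
      // Prints the generated S3 key, e.g. "<uuid>.png".
      console.log(await processAndUploadImage(s3, imageData));
    }

    main().catch(console.error);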

You still need reference information of some sort in order to use any API for the first time. Knowing common Node.js AWS SDK functions offhand might not be unusual, but that's just one example. I often review source code of libraries before using them as well, which isn't in any way contradictory with involving AI in the development process.

From my perspective, using AI is just like having a bunch of interns on speed at my beck and call 24/7 who don't mind being micromanaged. Maybe I'd prefer the end result of building the thing 100% solo if I had an infinite amount of time to do so, but given that time is scarce, vastly expanding the resources available to me in exchange for relinquishing some control over low-priority details is a fair trade. I'd rather deliver a better product with some quirks under the hood than let my (fast, but still human) coding speed be the bottleneck on what gets built. The AI may not write every last detail exactly the way I would, but neither do other humans.


As I was saying, for pure samples and pseudocode demos, it can be fast enough. But why bring in the whole s3 library if you're going to use one single endpoint? I checked npmjs and sharp is still in beta (if they're using semver). Also, the code parses the image data twice.
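
To illustrate that last point, sharp can reuse a single pipeline for both the metadata read and the resize, something like this (an untested sketch, fragment only):

    import sharp from 'sharp';

    declare const imageData: Uint8Array; // hypothetical input

    // Parse the image once; reuse the same pipeline for metadata and resize.
    const image = sharp(imageData);
    const metadata = await image.metadata();
    // ...same format/ratio/size checks as above...
    const resizedBuffer = await image.resize(100, 100).toBuffer();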

I'm not saying that I write flawless code, but I'm more for fewer features and better code. I've battled code where people would add big libraries just to avoid writing ten lines of code, and then couldn't reason about a failing snippet because it was unreliable code calling into unreliable code. After a few months, you've got zombie code in the project, and the same thing implemented multiple times, in a slightly different way each time. These are pitfalls that occur when you don't have a holistic view of the project.

I've never found coding speed to be an issue. The only time my coding is slow is when I'm rewriting legacy code and pausing every two lines to decipher intent that was never documented.

But I do use advanced editing tools. Coding speed is very much not a bottleneck in Emacs. And I had a somewhat similar config for Vim: quick access to docs, quick navigation (things like running a lint program and then jumping directly to each error), quick commits, quick blaming, and time traveling through the code history...


> But why bring in the whole s3 library if you’re going to use one single endpoint?

This is a bit of a reach. There's no reason to assume that the entire project would only be using one endpoint, or that AI would have any trouble coding against the REST API instead if instructed to. Using the official SDK is a safe default in the absence of a specific reason or instruction not to.
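
If one did want to skip the SDK, it's mostly a matter of signing the request yourself. Here's a rough, untested sketch of a raw SigV4 PUT using only node:crypto and fetch; it assumes no session token and a key that needs no URI encoding, and the function name and shape are mine, not from any prompt:

    import { createHash, createHmac } from 'node:crypto';

    const sha256Hex = (data: string | Uint8Array) =>
      createHash('sha256').update(data).digest('hex');
    const hmac = (key: string | Buffer, data: string) =>
      createHmac('sha256', key).update(data).digest();

    // PUT an object to S3 over its REST API with a hand-rolled SigV4
    // signature (virtual-hosted-style addressing).
    export async function putObjectRaw(
      bucket: string, key: string, body: Uint8Array,
      region: string, accessKeyId: string, secretAccessKey: string,
    ): Promise<void> {
      const host = `${bucket}.s3.${region}.amazonaws.com`;
      const amzDate = new Date().toISOString()
        .replace(/[-:]/g, '').replace(/\.\d{3}/, ''); // e.g. 20250101T000000Z
      const dateStamp = amzDate.slice(0, 8);
      const payloadHash = sha256Hex(body);

      // Canonical request: lowercase headers, sorted alphabetically,
      // each terminated by a newline.
      const signedHeaders = 'host;x-amz-content-sha256;x-amz-date';
      const canonicalRequest = [
        'PUT', `/${key}`, '', // method, URI, empty query string
        `host:${host}\nx-amz-content-sha256:${payloadHash}\nx-amz-date:${amzDate}\n`,
        signedHeaders, payloadHash,
      ].join('\n');

      // String to sign, then the HMAC key-derivation chain.
      const scope = `${dateStamp}/${region}/s3/aws4_request`;
      const stringToSign = ['AWS4-HMAC-SHA256', amzDate, scope,
        sha256Hex(canonicalRequest)].join('\n');
      const kSigning = hmac(hmac(hmac(hmac(
        `AWS4${secretAccessKey}`, dateStamp), region), 's3'), 'aws4_request');
      const signature = createHmac('sha256', kSigning)
        .update(stringToSign).digest('hex');

      const res = await fetch(`https://${host}/${key}`, {
        method: 'PUT',
        headers: {
          'x-amz-date': amzDate,
          'x-amz-content-sha256': payloadHash,
          authorization: `AWS4-HMAC-SHA256 Credential=${accessKeyId}/${scope}, ` +
            `SignedHeaders=${signedHeaders}, Signature=${signature}`,
        },
        body,
      });
      if (!res.ok) throw new Error(`S3 PUT failed: ${res.status}`);
    }
Not something I'd hand-write for fun, which is rather the point: it's exactly the kind of boilerplate an agent can produce and a human can verify.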

Either way, we're already past the point of demonstrating that AI is perfectly capable of writing correct pseudocode based on my description.

> Coding speed is very much not a bottleneck in Emacs.

Of course it is. No editor is going to make your mind and fingers fast enough to emit an arbitrarily large amount of useful code in 0 seconds, and any time you spend writing code is time you're not spending on other things. Working with AI can be a lot harder because the AI is doing the easy parts while you're multitasking on all the things it can't do, but in exchange you can be a lot more productive.

Of course you still need to have enough participation in the process to be able to maintain ownership of the task and be confident in what you're committing. If you don't have a holistic view of the project and just YOLO AI-generated code that you've never looked at into production, you're probably going to have a bad time, but I would say the same thing about intern-generated code.

> I'm more for fewer features and better code.

Well, that's part of the issue I'm raising. If you're at the point of pushing back on business requirements in the interest of code quality, that's just another way of saying that coding speed is a bottleneck. Using AI doesn't only help with rapidly pumping out more features; it's also an extremely useful tool for fixing bugs at a faster pace.


Just to conclude the thread on my side.

IMO, useful code is code in production (or, if it's for myself, something I can run reliably). Anything else is experimentation. If you're working in a team, code shared with others is at the proposal/demo level.

Experimentation is nice for learning purposes, kind of like scratch notes and manuscripts in the writing process. But then comes the editing phase, when you're stamping out bugs with tools like static analysis, automated testing, and manual QA. The whole goal is to get the feature into the hands of users. Then there's the errata phase for errors that have slipped through.

But the thing is, code is just a static representation of a very dynamic medium: the process. And a process has a lot of layers; the code is usually a small part of the whole. For the whole thing to be consistent, the parts need to be consistent with each other, and that's where contracts come into play. The thing with AI-generated code is that it doesn't respect contracts, both because of its non-deterministic nature and because the code (which is the most faithful representation of the contracts) can be contradictory, which leads to bugs.

It's very easy to write optimistic code. But as the contracts (or constraints) in the system grow in number, they can be tricky to balance. The recourse is always to go up a level in abstraction: make the subsystems black boxes and consider only their interactions. This assumes that the subsystems are internally consistent.
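
A toy example of what I mean by a contract (illustrative only; the names are made up):

    // The contract: callers depend on this interface, never on a
    // particular implementation.
    interface ImageStore {
      // Must resolve to the storage key of the uploaded image, or throw;
      // it never resolves to an empty string.
      put(image: Uint8Array): Promise<string>;
    }

    // If every implementation honors the contract, the subsystem behind it
    // is a black box and only the interactions need to be reasoned about.
    async function saveAvatar(store: ImageStore, data: Uint8Array): Promise<string> {
      return store.put(data);
    }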

Code is not the lowest level of abstraction, but it's usually correct to assume that the language itself is consistent. Then come the libraries, where quality varies. Then the framework, where it's often all good until it's not. Then your code, which is very much a mystery.

All of this is to say that writing code is like writing words in a manuscript to produce a book: it's useful, but only if it's part of the final product or helps in creating it, and especially if it doesn't increase the technical debt exponentially.

I don't work with AI tools because, by the time I'm OK with the result, more time has been spent than if I'd done the thing without them. And the process is not even enjoyable.


Of course; no one said anything about experimentation. Production code is what we're talking about.

If what you're saying is that your current experience involves a lot of process and friction to get small changes approved, that seems like a reasonable use case for hand-coding. I still prefer to make changes by hand myself when they're small and specific enough that explaining the change in English would be more work than directly making the change.

Even then, if there's any incentive to help the organization move more quickly, and there's no policy against AI usage, I'd give it a shot during the pre-coding stages. It costs almost nothing to open up Cursor's "Ask" mode and bounce your ideas off of Gemini or have it investigate the root cause of a bug.

What I typically do is have Gemini perform a broad initial investigation and describe its findings and suggestions with a list of relevant files, then throw all that into a Grok chat for a deeper investigation. (Grok is really strong at analysis in general, but its superpower seems to be a willingness to churn on sufficiently complex problems for as long as 5+ minutes in order to find the right answer.) I'll often have a bunch of Cursor agents and Grok chats going in parallel — bouncing between different bug investigations, enhancement plans, and one or two code reviews and QA tests of actual changes. Most of the time that AI saves isn't the act of emitting characters in and of itself.



