
Did it win? Against just Perl, or everything else?

This was just about the worst case scenario.


Well, let me avoid those that don’t understand it. It’s literally Rust 101.


It's literally not; Rust tutorials are littered with `.unwrap()` calls. It might be Rust 102, but the first impression given is that the language is surprisingly happy with it.


https://doc.rust-lang.org/book/ch09-02-recoverable-errors-wi...

If you haven't read the Rust Book at least, which is effectively Rust 101, you should not be writing Rust professionally. It has a chapter explaining all of this.


> In production-quality code, most Rustaceans choose expect rather than unwrap and give more context about why the operation is expected to always succeed. That way, if your assumptions are ever proven wrong, you have more information to use in debugging.

I didn't read anything in that section saying unwrap/expect shouldn't be used in production code. If anything, I read it as perfectly acceptable.
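
To make the book's distinction concrete, here's a minimal sketch (standard library only; the environment variables are just illustrative):

```
use std::env;

fn main() {
    // `unwrap` panics with a generic message if the value is absent.
    let home = env::var("HOME").unwrap();

    // `expect` panics with the context you supply - which is why the book
    // suggests it for production code: the message records *why* the
    // operation was expected to succeed.
    let port: u16 = env::var("PORT")
        .expect("PORT is set by the deployment environment")
        .parse()
        .expect("PORT is a valid u16");

    println!("{home}:{port}");
}
```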


These are all non-issues - as always, don’t allow an end user to determine a serial primary key.

And the amount of information it leaks is negligible - they might know the oldest and the newest and there’s an infinite gulf in between.

It’s better and more practical than SERIAL or BIGSERIAL in every way - if you need a random/external ID, add a second column. Done.
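
A sketch of that split using the `uuid` crate (assuming its `v4` and `v7` features are enabled; purely illustrative):

```
use uuid::Uuid;

fn main() {
    // UUIDv7 carries a millisecond timestamp in its high bits, so fresh
    // keys sort roughly in insertion order - friendly to a B-tree primary
    // key index, unlike fully random values.
    let pk = Uuid::now_v7();

    // A second, fully random column for anything user-facing or external.
    let external_id = Uuid::new_v4();

    println!("pk: {pk}, external: {external_id}");
}
```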


Why not serial PK with uuid4 secondary? Every join uses your PK and will be faster.


> if you need a random/external ID, add a second column. Done.

As others have stated, it completely defeats the performance purpose if you need to look up rows using the other ID.


This is like saying you don’t like nails because you don’t understand how to use a hammer, though. Developers are not understanding how to use the hints properly, which is causing you a personal headache. The hints aren’t bad, the programmers are untrained - the acknowledgement of this is the first step into a saner world.


> Among the requirements of the DMA is that Apple ensures that headphones made by other brands will work with iPhones. It said this has been a block on it releasing its live translation service in the EU as it allows rival companies to access data from conversations, creating a privacy problem.

This sounds bogus right? If all the headphones can do is transmit audio via first party operating system features how is this creating a data privacy issue? How are headphones going to exfiltrate data unless they have their own Wi-Fi connection or application that can serve as a bridge? Just disallow both.


It is somewhat complicated by the DMA's specific requirements for Apple:

> The interoperability solutions for third parties will have to be equally effective to those available to Apple and must not require more cumbersome system settings or additional user friction. Apple will have to make available to third parties any new functionalities of the listed features once they become available to Apple.

Apple is saying, "We designed our API in a way that requires trusted headphones as part of the privacy model, and DMA would force us to give everyone access to that API."

What goes unstated is that trusted headphones aren't necessary for the feature and a company trying to meaningfully comply with the spirit of the DMA probably would have chosen to implement the API differently.

https://digital-markets-act.ec.europa.eu/questions-and-answe...


And “trusted headphones” - all headphones, including AirPods, are untrusted until paired. This entire narrative that Apple is pushing is political, not technical.


Can you explain how you know that trusted headphones aren't necessary and where Apple is saying what you are quoting here?


Those are fair questions. This is what Apple says in the press release:

> Live Translation with AirPods uses Apple Intelligence to let Apple users communicate across languages. Bringing a sophisticated feature like this to other devices creates challenges that take time to solve. For example, we designed Live Translation so that our users’ conversations stay private — they’re processed on device and are never accessible to Apple — and our teams are doing additional engineering work to make sure they won’t be exposed to other companies or developers either.

We know it isn't necessary because Apple believes it is possible and is working on it. That's a pretty good indication that AirPods and their associated stack are currently being treated differently for a feature which fundamentally boils down to streaming audio to and from the headphones. It's not even clear how 'securing' live translated audio is any different from 'securing' a FaceTime call in your native language. I think a reasonable reading sans more technical information from Apple is that they give AirPods more data and control over the device than is necessary, and they want us to be mad at the DMA for forcing them to fix it.


Agreed. There is no sane reason why live translation and/or its privacy properties should depend on the specific headphones used. Even if the live translation were to happen in the headphones themselves, that should only tie the availability of the feature to the headphones. The privacy implications ought to be orthogonal.

I see three possibilities. Either the whole thing is made up entirely by Apple for bad faith reasons. Or some non-technical person with bad faith motivations at Apple suffered from some internal misunderstanding. Or somebody at Apple made some incredibly bad technical decisions.

Basically, there's no way that this isn't a screw up by somebody at Apple in some form. We just can't say which it is without additional information.


Official communications to an international governmental agency are surely checked by multiple employees and subject to review by lawyers, marketing, C suite, etc.

Apple said what they said. It wasn't a mistake. It was attempted deception.


Hmm, couldn’t Apple solve this properly with better technical measures? Right now, we have “Apple swears that its first-party AI system won’t exfiltrate data even though the OS doesn’t stop it from doing so” and “Apple doesn’t trust other vendors to pinky swear not to exfiltrate data”.

But Apple could instead have a sandbox that has no Internet access or other ability to exfiltrate anything, and Apple could make a serious effort to reduce or eliminate side channels that might allow a cooperating malicious app to collect and exfiltrate data from the translation sandbox. Everyone, including users of the first-party system, would win.


It sounds like a straight up lie. Third party apps have always been able to record from microphones, and the live translation doesn't work without a connection to its app. They're just annoyed that they have to share their private APIs that let them do it without the normal restrictions for apps.


> Third party apps have always been able to record from microphones

Maybe not the way Apple is doing it is my guess. Apple can bypass security concerns for Apple itself since they know they aren't doing anything malicious.

I love Apple and would love better integration with other headsets, but I have a feeling none of us have the full picture.


why should they have to share those private APIs?


Because the DMA legally obligates them to share those APIs when they are necessary to implement a feature for a connected device. The goal of the regulation is to promote healthy competition for connected devices by outlawing self-preferencing by massive players. Reasonable people can disagree about the goals or the downstream effects of the DMA, but creating Private APIs for connected device features absolutely falls under the umbrella of self-preferencing.


> creating Private APIs for connected device

In the same way, the EU could ask manufacturers of wireless headphones to open up and homologise their proprietary “APIs” with which they communicate with the other earpiece so you can mix&match single earpieces from different manufacturers.


Yeah, they could.


The point of this regulation (DMA) is to enable more competition in important market segments. If this exact thing somehow becomes very important, sure, it's possible; otherwise it's a bit contrived. What's the point?


Their point is the reverse.

Forcing standardization and interop is obviously good for interop, but it's bad for companies trying to innovate, because it ties their hands. The moment Apple ships a v1 they have to ship an API, and then they have to support that API and can't change it. When it's private they can figure it out.


Apple already spends years in R&D before releasing anything. Many of their R&D devices never see market. Requiring them to share an API they've actually shipped to paying customers is not a significant additional hurdle. We know how to version APIs now. They can still make improvements to public APIs without hurting anyone.


> but it's bad for companies trying to innovate

Which is why DMA only applies to huge, dominant companies (the complete list: Alphabet, Amazon, Apple, ByteDance, Meta, and Microsoft) and there too it does not apply to all technologies, only for those where standardization is important to enable competition. It's much more important to have at least some competition than letting dominant companies monopolise entire markets through 'innovation' with private APIs.


Let's flip this. It's the user's device, providing the user's data to the user's headphones, via an app the user has chosen, that was written by a developer vetted by Apple, who's already reviewed and approved the code that will be running. And it's the law that they have to.

Why shouldn't they share those APIs?


Because the user's device, providing the user's data to the user's Meta headphones, via a Meta app, can then record all the time and exfiltrate all that recorded data to Meta.

Or whatever other shady company wants to make headphones that sell for dirt-cheap in order to get their private spy devices into people's homes and offices.

I'm personally a bit on the fence about whether I think this is a sufficient concern to justify what Apple's doing, but AIUI this is the gist of their objection.


If it violates Apple's views on acceptable privacy practices, why are they approving the app? They already have guidelines against identifying information or collecting more data than absolutely required. The developer data use page is quite frank about the expectations:

    Apps on the app store are held to a high standard for privacy, security, and content because nothing is more important than maintaining users' trust. 
This is a rhetorical question, obviously. Apple is happy to stand on principle when it benefits them, and more than willing to soften or bend those principles when it'd be too difficult.


If a particular app only demonstrates this undesirable behavior when the phone is paired with a particular subset of headphones (or other hardware), then Apple may never notice it in App Review.


Because (since they control the platform/market) they're giving themselves an unfair advantage over competitors.

Example: iCloud photos backup can upload a photo to iCloud in the background immediately after it was taken. Competing cloud storage providers cannot do this[1], because Apple withholds the API for that. Of course they're saying this is for "privacy" or for "energy saving" or whatever, but the actual reason is of course to make the user experience with competing services deliberately worse, so that people choose iCloud over something else.

[1] There are some weird tricks with notifications and location triggers that apps like Nextcloud or Immich go through to make this work at least somewhat, but those are hacks and it's also not reliable.


> Competing cloud storage providers cannot do this[1], because Apple withholds the API for that. Of course they're saying this is for "privacy" or for "energy saving" or whatever, but the actual reason is of course to make the user experience with competing services deliberately worse, so that people choose iCloud over something else.

Which makes Google Photos so much more impressive because it's heads above iCloud in this regard. No idea how they do that, pure magic.


They can choose not to share them. But then they should stop preventing others from shipping the same functionality.

So, I'm a user who's looking to buy some headphones. Why can't I buy any headphones that offer live translation functionality except Apple's?


I read this thinking about how movie studios tried to have a fully encrypted chain between the TV, the cable, the graphics card, all the way down, so that HDCP would prevent anybody from putting something in the middle to record movies.

I don't think it's beyond the pale to argue that some shady headphone company could throw a cell modem into a set of over-the-ear headphones to exfiltrate audio. I just can't see the business case for it, even considering shadier business cases.


> This sounds bogus right? If all the headphones can do is transmit audio via first party operating system features how is this creating a data privacy issue?

Wait until third parties "require" an app to be installed, and the headphones send audio as data to the app instead of calling itself a microphone, and the app then sends that data to wherever you don't want it to.

Bose, for example, "requires" an app to be installed. For "updates", they tell you. Updates... to headphones...?!


> Bose, for example, "requires" an app to be installed. For "updates", they tell you. Updates... to headphones...?!

The headphones work without the app, but the app is required for updates (the headphones have onboard software) and also if you want to manage the multipoint connection capability from your phone (which can be more convenient than doing it from the headphones and each device you want to connect to, but is not necessary to use the feature.)


Stop the FUD with those quotes. Bose does not require or "require" an app to be installed to use their headphones and I'm not sure any vendor of BT headphones does; feel free to share if that's not the case...

I do not install vendor apps for BT peripherals, and have been through the QC and 700 series of headphones without using their app. Same for Google and Samsung BT earbuds.

Can you install an app and get updates for bugs or changes to equalizer, noise cancellation, or other features (wanted or unwanted)? Yes, but it is not required nor "required", whatever that means.


> Stop the FUD with those quotes. Bose does not require or "require" an app to be installed to use their headphones and I'm not sure any vendor of BT headphones does

Is it FUD? It's fear, for sure. Uncertainty, maybe. Doubt, not really.

An app that doesn't do that today is an app that could do that after an update tomorrow.

As for firmware... well, the fact that something that just processes audio needs a firmware update demonstrates that the company isn't doing proper engineering. Proper engineering processes would be able to resolve just about anything with firmware before it gets released. Yes, there "might" be bugs. No, those bugs shouldn't be severe. And regardless of proper engineering, a firmware that doesn't send telemetry back today is a firmware that could send telemetry after an update tomorrow.

So is it FUD? No. It's awareness of what's possible.


Yes, it’s FUD. You are implying things which are untrue to serve your purpose through fear. That’s dishonest.


Nothing about what I said is untrue.

Apps get updated all the time, and most of the time the update is fine. That's not untrue. It doesn't change the fact that an app could be updated with new/additional telemetry. That's not untrue, either. Telemetry is nothing less than a data grab of my private information. What do I use, what do I do, where do I do it, blah blah. That's my data and no "business" has a right to it. That's also not untrue no matter what you think.

Headphones, wireless or not, should not "need" firmware updates. That's not untrue. If the device is not fit for use, then make a recall.

Bose has nice products. I've used several generations of QuietComfort headphones. But the fact remains that they offer an app for updates when it shouldn't be needed at all, and they strongly "request" that it's needed.


Yes, this is a thing. E.g., I have Samsung Buds, and the first thing my Samsung phone did was load new firmware into them, probably for active noise cancellation.


We all work at companies and know how it works - you get away with what you can. You bet wrong, and nobody will believe otherwise. If it was a mistake, it’s fireable.


For the whole chain of command, up to the CEO and board. That is the least we should ask, each and every time.


You could make money off of this if you are able to pair willing manufacturers to realistic and popular ideas that get generated. It could become a real market place.

Hilarious project.

Edit: I did both Mouthwash Ramen and Time Machine to the Present. I’m now addicted to this, thanks.


I know of a company that is huge in lasers for physics and started like this in the 80s (through magazine catalogs).

They would list all kinds of lasers. When they got some offers for one of them, they'd sell it and schedule the delivery in 90 days. Then, they started the project from scratch. Crazy stuff and borderline legal :D


We do something similar at work. Except usually the dev department doesn't know until handed the project from sales and so the project goals might be entirely unrealistic given the deadline.

What do you mean that feature doesn't exist? Well, I sold it to the customer, they have to go live in two weeks and their workflow depends on this feature.


While I've fortunately never had this happen to me, I'd be tempted to say something like, "Wow. Well, I sure hope you don't get fired over this. Good luck. We'll scope it out and let you know how much time we'll need."


Having been on the customer side, it's frustrating how often the situation is: Me: "So, you got a bid which offers features A, B, C, and D we asked for, and you say it also has X and Y and hit our budget?" / Buyer: "Yes".

A week later. "OK, their install team says it can't technically do C yet, however there's an early 2026 preview scheduled which addresses most of C. The D feature isn't in the edition we have, our buyers are talking to their sales people and we may need to pay extra to unlock D. And you're correct that two other organisations in our industry confirm X is dogshit and you'd be better off without it but it can't be disabled. Still A does work, and we have filed bugs about the known defects with B so hopefully we can get those fixed"

Every time I buy a product as an ordinary consumer I marvel at how much worse my huge employer is at buying products than I am. I reckon if they were sent to the store to buy a whole roast chicken with a £20 note they'd come back with six expired chicken sandwiches and no change.


> Every time I buy a product as an ordinary consumer I marvel at how much worse my huge employer is at buying products than I am.

It's the size of the deal that matters. Most of the consumer goods you buy are sold on a take-it-or-leave-it basis. No individual sale is worth the vendor forming a "relationship" with that customer or promising bespoke features. B2B sales are often large deals that require months of negotiation and may be worth millions. Bullshitting in order to land the deal is incentivized on both sides, to the point where both only have a fuzzy idea of what exactly is being bought and sold.

But consumers get this experience as well when they make larger purchases. When I buy a car, maybe I fail to mention the unreported fender bender my trade-in was in, and maybe the salesman tries to charge me $1200 to etch "anti-theft tracking numbers" on the new car's windows, citing some dubious statistics about vehicle recovery rates.


> consumers get this experience as well when they make larger purchases

Or as I like to do, buying random things on AliExpress and Temu knowing full well that some of the things will not meet the expectations you’d have from the product listings.

Sometimes I’m lucky and the stuff is good. Sometimes I’m a little unlucky and it’s worse quality than I’d like.

At least I quickly learned to read carefully what was said, to realize that what’s depicted is not exactly what’s being sold. Some sellers do this misleading trick where they have some amazing photo up front, but there are either multiple variations of it or the thing being sold is only some component of that thing.

I still sometimes see product reviews from other buyers that were upset that they didn’t get what they thought they were buying, and I don’t blame them because it can be pretty misleading at times. But if you read carefully, look at all the pictures, and check what the “color” or similar option dropdown says, you will usually spot it when they are selling something different than what it might look like at first. So I haven’t had that kind of misfortune for years now. But sometimes you still get products that are lower quality than you were hoping for, even when the product listing was pretty accurate. Some kinds of bad quality are just not possible to judge unless you see the product in person.


Maybe they exist but I haven't worked in a company yet that wouldn't fire an engineer or manager for refusing to implement a feature that some salescritter already sold. One of them made the company money (on paper, sure) while the other is threatening to undo the deal. It's not hard to guess which one the c-suites would send packing first.


oh you agree to do it but you laugh, literally laugh, at their deadline. and you say, you can fire me but that's not going to get your software done on time. in fact it will delay it.

they shut up. it's done when it's done.

I've done this many, many times. Oh you promised it by the end of the week and didn't ask me? lol, that sounds like a YOU problem.


The White House once called my team at Microsoft and asked for some features.

We said yes, we'd get right on it. :-D

We were all too stunned to have any real feedback.


Fortunately it doesn't happen too often, and some can be attributed to our somewhat complex feature matrix that differs by regions due to reasons.

On the other hand, in our niche customers usually don't swap software providers often due to integration work needed.

When an opportunity arises, it's usually because the yearly license expires. So we got to either sell it now with a hard deadline in the near future, or wait 5+ years till next time they switch.

So that can lead to sales being a bit optimistic when making the pitch.


I've been on both ends of this workflow. Sales always wins.

"Wow. Well, I sure hope you don't get fired over this. Good luck. We'll scope it out and let you know how much time we'll need."

"We'll see."

The big-screen TV in the modern glass conference room showed the final slide: “Questions?”.

"I.. I'd like to add that this feature we sold is not in the product and we can't just go around adding features that Sales makes up out of the blue just... just to close a deal. I mean, we gotta plan these things, there's a procedure, we should get product involved..."

Head of Sales, interrupting: "Can't we, Jeff?"

Jeff, the middle-manager, shuffled his feet: "Uh. Yeah. Right. I think we shouldn't. Hey! Haste makes waste, that's what they say, right?"

Head of Sales: "Can't we, Barbara?"

Barbara, the boss: "I don't know. Let me call Pradeep"

(Barbara presses the "huddle" button in Slack on her big iPhone. A few rings and a bored voice replies)

"Yeah?"

"Sorry to jump on you like this, Pradeep. Would you mind coming to meeting room seven for a second?"

Less than a minute later, Pradeep walks in, his thick glasses casting a green hue over his eyes, his arrogant demeanor preceding him like a shadow.

"Pradeep, did you read the feature request I messaged you?"

"Yes."

"How fast can you do it?"

"Just merged it this morning."


Ah you must work at my same company!


Some smart thief was posting his neighbors' bikes on an online second-hand marketplace and waited to get contacted about a specific model before stealing it. Evil genius.


A previous boss did this in the early 2000s. Put up a bunch of single page descriptions with "coming soon" labels, include an email subscription to "stay on top of news", turn on AdWords to get some traffic... and then start working on what people were actually interested in.


Isn't this just sort of market research for how to prioritize the roadmap? I think it was a great way to do so.


Yeah this seems like the kind of thing people would have advised me to do when I was trying to start startups around 2010. But I was too focused on engineering and had no head for the business side so I never tried it.


Ya, I know this strategy under the name “smokescreen MVP”. I don’t remember exactly, but I think it is advocated in The Lean Startup. Personally I am a big supporter of the strategy. Many startups fail because _nobody cared about the problem_, and this is totally avoidable.


Engineering-to-Order! Not all that uncommon of a model in some industries, but problems arise when Sales doesn't have good communication with Engineering about what is actually possible for what price on what timelines.


Kinda reminds me of how Swingline didn't actually sell red staplers -- until they realized there was a demand.


Back in the 80s and 90s they would advertise products on TV with "6 to 8 weeks for delivery".

Now I wonder if they did this to batch up a manufacturing run once enough orders were received.


Totally. You also get batch efficiencies shipping 10k orders in a day vs dribbling them out over weeks, and you can use sub-standard shipping methods that are cheaper because… the carriers themselves are also batching the work.


This is the story of all the niche software products out there. Put together a smoke and mirrors demo, get a customer, build the product.


Ah I loved that catalog


This is what Amazon has been doing for years with Marketplace: check what's popular and then compete on price.


It's also why it's not worth it to develop a hit hardware product, China will undercut you 50% in a month (and probably build it better).


See all hardware products on Kickstarter.


If he can find a manufacturer to build the flying motorcycle that can go Mach 0.8 with a price tag of $18, I'm in...


Someone email me when I can buy Barbed Wire Toilet Paper. That one is my favorite. It's so devious. Imagine needing TP but all you find is one roll. Rolled in barbed wire.


Of all the items, that's probably the easiest to make yourself, for under $10:

https://www.amazon.com/Barbed-Barbwire-Baseball-Feeder-Garde...


Those baseball bats are a bit unnerving...


I don't know what's worse, those or the... xmas trees?!!


https://anycrap.shop/product/barbed-wire-alcohol-infused-toi...

> This toilet paper combines luxurious comfort with unyielding protection against unwelcome visitors.

I’m in love with this thing!


Seems easy enough to put some constraints on it. You would probably need a subscription for generating and a fee for requesting a quote (with a deadline). I can think of a lot of things I would like to know the price of. Some for 1, some for 1000 units. I'm not in a hurry; they may send me better offers regularly.


It’s amazing how fast it was to create and sell a squirrel haberdashing startup machine, and I’m encouraged by the reviews so far:

https://anycrap.shop/product/create-a-startup-company-that-s...


Think you mean a millinery.


This already exists for some things, i.e. arcade.ai


I'm so afraid to ask which end is... inserted.


If you hook it up to a production line, capitalism will reach its peak.

I love it.

https://anycrap.shop/product/ai-powered-roller-blades-for-go...


I understand the user pool comment but don’t understand why you wouldn’t be able to have a Rust layer that’s the same as the Python one API-wise.

I say this as a user of neither - just that I don’t see any inherent validity to that statement.

If you are saying Rust consumers want something lower level than you’re willing to make stable, just give them a higher level one and tell them to be happy with it because it matches your design philosophy.


The issue with Rust is that, as a strict language with no function overloading (except via traits) or keyword arguments, things get very verbose. For instance, in Python you can treat a string as a list of columns, as in `df.select('date')`, whereas in Rust you need to write `df.select([col('date')])`. Say you want to map a function over three columns; it's going to look something like this:

```
df.with_column(
    map_multiple(
        |columns| {
            let col1 = columns[0].i32()?;
            let col2 = columns[1].str()?;
            let col3 = columns[2].f64()?;
            col1.into_iter()
                .zip(col2)
                .zip(col3)
                .map(|((x1, x2), x3)| {
                    let (x1, x2, x3) = (x1?, x2?, x3?);
                    Some(func(x1, x2, x3))
                })
                .collect::<StringChunked>()
                .into_column()
        },
        [col("a"), col("b"), col("c")],
        GetOutput::from_type(DataType::String),
    )
    .alias("new_col"),
);
```

Not much polars can do about that in Rust, that's just what the language requires. But in Python it would look something like

```
df.with_columns(
    pl.struct("a", "b", "c")
    .map_elements(
        lambda row: func(row["a"], row["b"], row["c"]),
        return_dtype=pl.String,
    )
    .alias("new_col")
)
```

Obviously the performance is nowhere close to comparable because you're calling a Python function for each row, but this should give a sense of how much cleaner Python tends to be.


> Not much polars can do about that in Rust

I'm ignorant about the exact situation in Polars, but it seems like this is the same problem that web frameworks have to handle to enable registering arbitrary functions, and they generally do it with a FromRequest trait and macros that implement it for functions of up to N arguments. I'm curious whether there were attempts at something like FromDataframe to enable at least |c: Col<i32>("a"), c2: Col<f64>("b")| {...}

https://github.com/tokio-rs/axum/blob/86868de80e0b3716d9ef39...

https://github.com/tokio-rs/axum/blob/86868de80e0b3716d9ef39...
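
For anyone who hasn't seen the trick, a minimal self-contained sketch of that extractor pattern (all names illustrative - this is neither the axum nor the polars API):

```
struct Request(String);

// Anything that knows how to build itself from a request.
trait FromRequest: Sized {
    fn from_request(req: &Request) -> Option<Self>;
}

impl FromRequest for String {
    fn from_request(req: &Request) -> Option<Self> {
        Some(req.0.clone())
    }
}

// Handler is generic over the argument tuple so the blanket impl below is
// legal; real frameworks macro-generate one such impl per arity, up to ~16.
trait Handler<Args> {
    fn call(&self, req: &Request);
}

impl<F, T1> Handler<(T1,)> for F
where
    F: Fn(T1),
    T1: FromRequest,
{
    fn call(&self, req: &Request) {
        if let Some(t1) = T1::from_request(req) {
            self(t1);
        }
    }
}

fn serve<H, Args>(handler: H, req: &Request)
where
    H: Handler<Args>,
{
    handler.call(req);
}

fn main() {
    // The compiler infers Args = (String,) from the closure's signature.
    serve(|body: String| println!("got: {body}"), &Request("hi".into()));
}
```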


You'd still have problems.

1. There are no variadic functions so you need to take a tuple: `|(Col<i32>("a"), Col<f64>("b"))|`

2. Turbofish! `|(Col::<i32>("a"), Col::<f64>("b"))|`. This is already getting quite verbose.

3. This needs to be general over all expressions (such as `col("a").str.to_lowercase()`, `col("b") * 2`, etc), so while you could pass a type such as Col if it were IntoExpr, its conversion into an expression would immediately drop the generic type information because Expr doesn't store that (at least not in a generic parameter; the type of the underlying series is always discovered at runtime). So you can't really skip those `.i32()?` calls.

Polars definitely made the right choice here — if Expr had a generic parameter, then you couldn't store Expr of different output types in arrays because they wouldn't all have the same type. You'd have to use tuples, which would lead to abysmal ergonomics compared to a Vec (can't append or remove without a macro; need a macro to implement functions for tuples up to length N for some gargantuan N). In addition to the ergonomics, Rust’s monomorphization would make compile times absolutely explode if every combination of input Exprs’ dtypes required compiling a separate version of each function, such as `with_columns()`, which currently is only compiled separately for different container types.

The reason web frameworks can do this is because of `$( $ty: FromRequestParts<S> + Send, )*`. All of the tuple elements share the generic parameter `S`, which would not be the case in Polars — or, if it were, would make `map` too limited to be useful.


Thanks for the insight!


I reached this conclusion pretty quickly. With all the hand holding I can write it faster - and it’s not bragging, almost anyone experienced here could do the same.

Writing the code is the fast and easy part once you know what you want to do. I use AI as a rubber duck to shorten that cycle, then write it myself.


I am coming back to this. I’ve been using Claude pretty hard at work and for personal projects, but the longer I do it, the more disappointed I become with the quality of output for anything bigger than a script. I do love planning things out and clarifying my thoughts. It’s a turbocharged rubber duck - but it’s not a great engineer.


Me too. I’ve been playing with various coding agents such as Cursor, Claude Code, and GitHub Copilot for some time, and I would say that their most useful feature is educating me. For example, they can teach me a library I haven’t used before, or help me debug a production issue. Then I would choose to write the code by myself after I’ve figured everything out with their help. Writing code by myself is definitely faster in most cases.


> For example, they can teach me a library I haven’t used before.

How do you verify it is teaching you the correct thing if you don't have any baseline to compare it to?


You are right, I don't have any baseline. I just try it and see if it works. One good thing about the software field is that I can compile and run the code for verification. It may not be optimal, but at least it's testable.


My thoughts on scripts are: the output is pretty bad too, but it doesn't matter as much in a script, because it's just a short script, and all that really matters is that it kinda works.


What you're describing is a glorified mirror.

Doesn't that sound ridiculous to you?


That's what rubber ducking is


It sounds better when you get more specific about what it is. Many people have fallen prey to this and gone a tad loopy.


I am still working on tweaking how I work and design with Claude to hopefully unlock a level of output that I’m happy with.

Admittedly, part of it is my own desire for code that looks a certain way, not just that which solves the problem.


I’ve been trapped in a hole of “can I get the agent to do this?” when doing the change myself would have taken 1/10th the time.

Picking your battles is part of the skill at the moment.

I use AI for a lot of boilerplate, tedious tasks I can’t quite do a vim recording for, and small targeted scripts.


How much of this boilerplate do you actually have to write? Any script or complicated command that I had to write was worth recording in some bash alias or preserving somewhere. But they mostly live in my bash history or right next to the project.

The boilerplate argument is becoming quite old.


One recent example of boilerplate for me: I’ve been writing dbt models, and I get it to write the schema.yml file for me based on the SQL.

It’s basically just a translation, but with dozens of tables, each with dozens of columns it gets tedious pretty fast.

If given other files from the project as context it’s also pretty good at generating the table and column descriptions for documentation, which I would probably just not write at all if doing it by hand.


I’m doing a lot of upgrades to neglected projects at the moment and I often need to do the same config over and over to multiple projects. I guess I could write a script, or get AI to write a script, but there’s no standard between projects. So I need the same thing over and over but from slightly different starting points.

I think you need to imagine all the things you could be doing with LLMs.

For me the biggest thing is so many tedious things are now unlocked. Refactors that are just slightly beyond the IDE, checking your config (the number of typos it’s picked up that could take me hours because eyes can be stupid), data processing that’s similar to what you have done before but different enough to be annoying.


A similar, non-LLM battle, is a global find and replace, but _not quite identical_ everywhere. Do I just go through the 20 files and do it myself, or try to get clever with regex? Which is ultimately faster...
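
For the regex route, a sketch with the `regex` crate (the pattern and rename are hypothetical):

```
use regex::Regex;

fn main() {
    // A capture-group replace handles the "almost identical" cases in one
    // pass; anything the pattern can't express still needs hand editing.
    let re = Regex::new(r"get_(\w+)_config").unwrap();
    let src = r#"get_db_config(); get_cache_config();"#;
    let out = re.replace_all(src, r#"load_config("$1")"#);
    println!("{out}"); // load_config("db")(); load_config("cache")();
}
```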


I’ve just had to do exactly this; a one-line prompt and one example was the difference between mind-numbing work and a comfortable cup of coffee away from the monitor.


In this case an LLM is probably the answer. I’ve done this exact thing. No messing with regex or manual work. Type a sentence and examine the result in a diff.


In the grand scheme of things, writing the code isn't the hard part of software development. The hard parts are architecture and actually building the right thing, something an LLM can't really help you with.

It's not AI; there is no intelligence. A language model, as the name says, deals with language. Current ones are surprisingly good at it, but it's still not more than that.


What? Leading edge LLMs are great at architecture, schema design and that sort of thing if you give them enough context and are not working on anything too esoteric. I’d argue they are better at this than the actual coding part.

