knollimar's comments | Hacker News

I do some electrical drafting work for construction and throw basic tasks at LLMs.

I gave it a shitty harness and it almost one-shotted laying out outlets in a room based on a shitty PDF. I think if I gave it better control it could do a huge portion of my coworkers' jobs very soon.


I would really love a magic wand to make things like AVEVA and AutoCAD not so painful to use. You know who should be using tools to make these tools less awful? AVEVA and AutoCAD. Engineers shouldn't be having to take on risk by deferring some level of trust to third party accelerators with poor track records.

I feel like agents will have more success with Revit's BIM model than with AutoCAD, in a similar way that LLMs are good at TypeScript.

Can you give an example of the sort of harness you used for that? Would love to play around with it

I've been using pyRevit inside Revit, so I just threw a basic loop in there. There's already a building model, and my coworkers are just placing and wiring outlets, switches, etc. The harness wasn't impressive enough to share (it also contains a vibe-coded UI, since I didn't want to learn XAML stuff on a Friday night). Nothing fancy; I'm not very skilled (I work in construction).

I gave it some custom methods it could call, including "get_available_families", "place family instance", "scan_geometry" (which reads model walls into the LLM as wall endpoints), and "get_view_scale".

The task is basically to copy the building engineer's layout onto the architect's model by placing my families. It requires reading the symbol list, and you give it a PDF that contains the room.

Notably, it even used a GFCI family when it noticed it was a bathroom (I had told it to check NEC code, implying outlet spacing).
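If it helps picture it, the loop was shaped roughly like this (a minimal sketch, not the real harness; the Revit-side bodies are stubbed out, and place_family_instance's exact signature here is my invention for illustration):

  import json

  def get_available_families():
      # In pyRevit this would collect FamilySymbol elements from the
      # model; hardcoded here to keep the sketch self-contained.
      return ["Duplex Receptacle", "GFCI Receptacle", "Single Pole Switch"]

  def scan_geometry():
      # Walls reported as (start, end) XY pairs, in feet.
      return [((0, 0), (12, 0)), ((12, 0), (12, 10))]

  def place_family_instance(family, x, y):
      # The real version wraps doc.Create.NewFamilyInstance in a Transaction.
      return "placed %s at (%s, %s)" % (family, x, y)

  TOOLS = {
      "get_available_families": get_available_families,
      "scan_geometry": scan_geometry,
      "place_family_instance": place_family_instance,
  }

  def run_tool_call(llm_reply):
      # Expects the LLM to answer with JSON like:
      # {"tool": "place_family_instance",
      #  "args": {"family": "GFCI Receptacle", "x": 3.0, "y": 0.5}}
      call = json.loads(llm_reply)
      result = TOOLS[call["tool"]](**call.get("args", {}))
      return json.dumps(result)  # fed back as the next message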


I'm going to try to get it to generate extrusions in Revit based on images of floor plans. I've tried doing this in a bunch of models without success so far.

You might want to give it some guidance based on edge centers. It'll have a hard time reasoning about wall thickness, so have it draw points if you're trying to copy floor plans.

For clarity, now that I'm rereading: it understands vectors a lot better than areas. Encoding it like that seems to work better for me.
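Concretely, "encoding it like that" means something like this (a sketch; in Revit you'd pull each wall's LocationCurve, here I just assume you already have the two face lines):

  # Collapse a wall's two parallel faces into one centerline vector,
  # so the model sees "a wall from A to B" instead of an area.
  def centerline(face_a, face_b):
      # each face: ((x1, y1), (x2, y2)), a line along one side of the wall
      (a1, a2), (b1, b2) = face_a, face_b
      mid = lambda p, q: ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)
      return (mid(a1, b1), mid(a2, b2))

  # 10 ft wall, 6 in thick -> one vector instead of four corner points
  print(centerline(((0, 0), (10, 0)), ((0, 0.5), (10, 0.5))))
  # ((0.0, 0.25), (10.0, 0.25))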


"AI could never replace the creativity of a human"

"Ok, I guess it could wipe out the economic demand for digital art, but it could never do all the autonomous tasks of a project manager"

"Ok, I guess it could automate most of that away but there will always be a need for a human engineer to steer it and deal with the nuances of code"

"Ok, well it could never automate blue collar work, how is it gonna wrench a pipe it doesn't have hands"

The goalposts will continue to move until we have no idea if the comments are real anymore.

Remember when the Turing test was a thing? No one seems to remember it was considered serious in 2020


> "the creativity of a human"

> "the economic demand for digital art"

You twisted one "goalpost" into a tangential thing in your first "example", and it still wasn't true, so idk what you're going for. "Using a wrench vs preliminary layout draft" is even worse.

If one attempted to make a productive observation of the past few years of AI Discourse, it might be that "AI" capabilities are shaped in a very odd way that does not cleanly overlap/occupy the conceptual spaces we normally think of as demonstrations of "human intelligence". Like taking a 2-dimensional cross-section of the overlap of two twisty pool tubes and trying to prove a Point with it. Yet people continue to do so, because such myopic snapshots are a goldmine of contradictory venn diagrams, and if Discourse in general for the past decade has proven anything, it's that nuance is for losers.


> Remember when the Turing test was a thing? No one seems to remember it was considered serious in 2020

To be clear, it's only ever been a pop science belief that the Turing test was proposed as a literal benchmark. E.g. Chomsky in 1995 wrote:

  The question “Can machines think?” is not a question of fact but one of language, and Turing himself observed that the question is 'too meaningless to deserve discussion'.

The Turing test is a literal benchmark. Its purpose was to replace an ill-posed question (what does it mean to ask if a machine can "think", when we don't know ourselves what that means, and when the subjective experience of the machine is unknowable in any case) with a question about the product of this process we call "thinking". That is, if a machine can satisfactorily imitate the output of a human brain, then what it does is at least equivalent to thinking.

"I believe that in about fifty years' time it will be possible, to programme computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning. The original question, "Can machines think?" I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted."


Turing seems to be saying several things. He writes:

>If the meaning of the words "machine" and "think" are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, "Can machines think?" is to be sought in a statistical survey such as a Gallup poll. But this is absurd.

This anticipates the very modern social media discussion where someone has nothing substantive to say on the topic but delights in showing off their preferred definition of a word.

For example someone shows up in a discussion of LLMs to say:

"Humans and machines both use tokens".

This would be true as long as you choose a sufficiently broad definition of "token" but tells us nothing substantive about either Humans or LLMs.


The Turing test is still a thing. No LLM could pass for a person for more than a couple minutes of chatting. That's a world of difference compared to a decade ago, but I would emphatically not call that "passing the Turing test".

Also, none of the other things you mentioned have actually happened. Don’t really know why I bother responding to this stuff


Ironically, the main tell of LLMs is that they are too smart and write too well. No human can discuss the depth of topics they can, and no human writes like an author/journalist all the time.

i.e. the tell that it's not human is that it is too perfectly human.

However, if we could transport people from 2012 to today to run the test on them, none would guess the LLM output was from a computer.


> No llm could pass for a person for more than a couple minutes of chatting

I strongly doubt this. If you gave it an appropriate system prompt with instructions and examples on how to speak in a certain way (something different from typical slop, like the way a teenager chats on Discord or something), I'm quite sure it could fool the majority of people.


I still haven't witnessed a serious attempt at passing the Turing test. Are we just assuming it's been beaten, or have people tried?

Like if you put someone in an online chat and ask them to identify if the person they're talking to is a bot or not, you're telling me your average joe honestly can't tell?

A blog post or a random HN comment, sure, it can be hard to tell, but if you allow some back and forth... I think we can still sniff out the AIs.


A couple of months ago I saw a paper (can't remember if published or just on arXiv) in which Turing's original 3-player Imitation Game was played with a human interrogator trying to discern which of a human responder and an LLM was the human. When the LLM was a recent ChatGPT version, the human interrogator guessed it to be the human over 70% of the time; when the LLM was weaker (I think Llama 2), the interrogator guessed it to be the human something like 54% of the time.

IOW, LLMs pass the Turing test.


> blue collar work

I don't think it's fair to qualify this as blue collar work


I'm double-replying to you since the replies are in disparate subthreads. This is the necessary step so the robots that can turn wrenches know how to turn them. Those are near useless without perfect automated models.

Anything like this will have trouble getting adopted, since you'd need these to work with imperfect humans, which becomes way harder. You could bankroll a whole team of subcontractors (e.g. all trades) using that, but you would have one big liability.

The upper end of the complexity is similar to EDA in difficulty, imo. Complete with "use other layers for routing" problems.

I feel safer here than in programming. The senior guys won't be automated out any time soon, but I worry for Indian drafting firms without trade knowledge; the handholding I give them might go to an LLM soon.


It is definitely not. Entry pay is 60k and the senior guys I know make about 200k in HCoL areas. A few wear white dress shirts every day.


Was the case against the goto statement so good we can't mention it?

More or less, I meant how this would be inlined in assembly with a goto that could jump back to where the branching originated.

When does the tech sector become the computer sector?

Agriculture would have been considered tech 200 years ago.


full throttle until AGI is achieved, then we will see

Maybe one day we will discover that a method exists for computing/displaying/exchanging arbitrary things through none other means than our own flesh and brains.

Their point is that doing a thing for a long time doesn't enshrine it as a right.

The comment before could have said "should be a human right".

imo it's very frustrating having people say "thing I want is a right". What gives them that right? Are all laws not violations of rights if you extend that?


All rights now encoded in law were originally moral claims.

And before they were encoded in law, were they rights?

I feel it makes your claim weaker to go from "should have" to "is a right" if there's any doubt in it.

There are strong "we have a right to ancillary thing" arguments you can make that rely on a right, but those rely on that right being a given, not the premise.


When somebody says "X is a right", that does not necessarily mean they think the case is closed and the discussion is over. It can also mean that they are making an assertion, which frames the discussion for the follow-up questions that you are now making.

They are completely ignoring the context of this whole thread, which exists because the highest court in the land (Kenyan land, that is) has affirmed that right.

GGP's comment is as absurd as a North Korean commenting on a SCOTUS ruling on the right to a fair trial by saying "This is a new human right I didn't even know I had."


The rights of particular countries' citizens aren't usually conflated with 'human rights.' I believe the term 'human rights' is of UN origin.

The rights of US citizens, for instance, don't currently apply to the folks getting deported. It's a big controversial point, but of course the rights of the constitution aren't guaranteed to some guy in France.

Human rights aren't those.

In this case, Kenyan citizens gained a right, not humans.


> Human rights aren't those.

What are they, then? If you can name one, I'll find you a jurisdiction where that right is not respected.

Your (incorrect, IMO) definition of human rights based on the lowest common denominator whittles them down to nothing. Fundamentally I suspect what you and I are calling "human rights" is not the same thing at all.


I'm experimenting with Gemini 3 and will do Opus 4.5 soon, but I've seen huge jumps doing EE for construction over the last batch of models.

I'm working on a harness, but I think it can do some basic Revit layouts with coaxing (which, with a good harness, should be really useful!)

Let me know what you've experienced. Not many construction EEs on HN.


EE on HN and from construction!

I used to draft in AutoCAD and Revit before switching to software.

Saw your comment around using Gemini. I’d love to chat with you. I started building something for the build side of the electrical world, but eventually want to make the jump to the design side of the house.


You probably run a bigger (perhaps double) neutral and care about a stronger ground. But yeah, the $12 is a rounding error at this scale.

In the US, you don't need a neutral for a 240V 50A circuit on a residential single-phase service: there are two line conductors (120V to ground), both connected to a 2-pole single-phase breaker, and line-to-line between the two is 240V.

You would need a neutral if it was a 208/120V three-phase service.

Neutrals and grounds are sized per the NEC: neutrals are the same size as the line conductors, and equipment grounds are sized off a table.

#6 conductors and #10 ground is what the NEC calls for.


I just haven't seen server rooms that don't demand a double-sized neutral when one is required.

I live the NYC 208V life doing mostly resi, though.

A quick search of the spec for that shows 6 power supplies, 2 of which are redundant. Looks like it uses a neutral to me. Says it uses C19/C20 connectors.

edit: wait, most ranges use 14-50R outlets and need a neutral run. I am calling your statement into question. Surely harmonics and 120V internal draws cause non-zero neutral current. And I'm sure GPUs have harmonics, being semiconductor-flavored.


Turns out you are correct on both points!

My bad, NEMA 14-50R is a 4-wire receptacle with a neutral.

I learned something new today, 200% neutrals are not required by the NEC but can help with non-linear loads, and certain transformers that mitigate harmonics need 200% neutrals.


I'm thinking you'd have a hard time fitting it in a smaller residential unit purely because of the load calculations. You'd probably have to add 3000W worth of AC units or heat pumps. Considering it's 4+ space heaters' worth of output, I think you could fit it on a 200A service, or 150A if you're bold and use the optional calcs (both assuming you have electric cooking). To your point, basically the same as if you have an EV, but with some cooling in there.
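Back-of-napkin version of that optional calc (NEC 220.82 style, first 10 kVA at 100% and the remainder at 40%; the existing-load number is invented just to show the arithmetic, and HVAC is handled separately in the real calc):

  existing_va = 35000        # hypothetical house: lights, range, EV, etc.
  new_load_va = 240 * 50     # the 12 kVA load in question

  total = existing_va + new_load_va
  demand_va = 10000 + 0.40 * (total - 10000)
  amps = demand_va / 240.0

  print("%.0f VA demand -> %.0f A on 240V" % (demand_va, amps))
  # ~103 A, so 150A pencils out if the rest of the house is modest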

By this logic, loss leaders to drive out competition are good for the consumer, no?

Increased capital investment (e.g. software) could explain productivity not attributable to the employee.

On its face, productivity vs. pay isn't a fair metric. I agree that problem exists, but we shouldn't use it to benchmark.


> I own your tools therefore I own everything you do with them

Like I said, madness.

It's like slavery was technically outlawed but a lot of America strongly disagreed. And they're posting on HN.


If I lease tools that triple your productivity for the same cost as your wage, how much more do I pay you?

My point is that increases in productivity can be caused by capital investment and aren't totally attributable to the worker. That money has to come from somewhere.

Ideally there's a fair balance. This isn't it, but you can't look at the number you referenced blindly.


You're not thinking at a systems level. Check this out - https://data.worldhappiness.report/chart. The US is increasingly a miserable place to live in - in large part because of pay not keeping up with housing/school/medical/etc.

The correct systems-level answer to your question "how much do I pay you" is "as much as it takes to stabilize the US curve". Happiness correlates with financial security, which we won't get if the rich get richer from those capital investments and then buy up all the housing.

Fun fact: Fit 2 lines on that data and you can extrapolate by ~2030 China will be a better place to live. That's really not that far off. Set a reminder on your phone.


I handwave-dismiss Bigfoot, too.

I think it's a fundamental limitation of how context works. Inputting information as context is only ever context; the LLM isn't going to "learn" any meaningful lesson from it.

You can only put information in context; it struggles to learn lessons/wisdom.

