Hacker News | partdavid's comments

I'm extremely interested in pushing along these fronts, even in a performative way, because I don't want to get bogged down in "switch away from Emacs" conversations with coworkers. I've done a lot of modernizing on my Emacs setup this year, but I would love a current take on "getting close to Cursor" that gets me beyond what I had set up with Copilot and LSP.


Having tried a bit of Cursor and some Zed, I still find Claude Code a lot better than the rest (though maybe the Claude Code Zed beta will change things). That means what I'm mostly doing is keeping Claude Code up in the terminal and having it do things. Claude Code has an option to view diffs in your editor, which you can configure to work with Emacs and its various diff modes (and it looks great, too).

I always make sure everything in my branch is committed before letting Claude Code loose on my code, so I can view the changes normally as a magit diff and then choose whether to edit any of its changes (90% of the time) or commit as is (10% of the time). I can also restore files selectively this way and have all of my git tools available if needed.

If you want deep, Cursor-style Claude Code integration, then check out https://github.com/manzaltu/claude-code-ide.el . The latest releases of Emacs support using `:vc` blocks to specify packages, so you can grab the elisp package straight from the repo and get it working within your Emacs.
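For example, with the `:vc` keyword that use-package gained in Emacs 30, something like this should fetch it straight from the repo (a minimal sketch; the feature name `claude-code-ide` is assumed from the repo's file name, and keybindings are left to taste):

```elisp
;; Emacs 30+: use-package can install directly from a Git repo via :vc.
(use-package claude-code-ide
  :vc (:url "https://github.com/manzaltu/claude-code-ide.el"
       :rev :newest))
```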

If you want a chat-style interface, gptel exists but requires some config (not much, but not zero either) before it becomes usable as a general chat tool like Claude Desktop or ChatGPT. I'm working on an elisp package that builds a chat interface atop gptel and reduces the config burden.


Since the jargon we've invented in technology derives from natural language, it often repurposes common terms as terms of art. In my opinion this leads to ambiguity, and I sometimes pine for the abstruse but more precise jargon, drawn from classical languages, that you can use in medicine (for example).

For example, how many things does "link" mean? "Process"? "Type"? "Local"? It makes people (e.g., non-technical people) think that they understand what I mean when I talk about these things, but sometimes they do and sometimes they don't. Sometimes we use a term in a colloquial sense, and sometimes we'd like to use it in a strict technical sense. Occasionally we can invent a new, precise term like "hyperlink" or "codec", but as often as not it fails to gain traction ("hyperlink" is dated now).

That's one reason we get a lot of acronyms, too. They're unconversational, but they can at least signal that we're talking about something specific and rigorous rather than loose.


Medical jargon (or at least biology jargon) can still conflict with common language. For example: thorn, spine, and prickle all have different meanings in biology, and the term "thorn" doesn't cover anything native to England, where the word originated and was used in Shakespeare's plays.


Can you say more about which prior art you think overlaps here? We have a use case similar to Figma's and are implementing a similar solution. I'm not particularly concerned with whether the path we're following is novel, but I am concerned with whether there are gotchas along it that we should be watching out for, so if there are more mature solutions, we'd be interested.


Read up on QuakeWorld-era netcode, client-side prediction, and Counter-Strike: Source lag compensation.

See also, “reconciliation.”

Also, the article arguably hints at an incorrect implementation, because it specifically mentions sending events instead of user input.

By design, if you do this, you can create events that are supposed to eventually be stopped, but the user may drop packets or disconnect while you keep processing an event whose corresponding "stop" never arrives.

If you poll user input and experience loss, there’s no event to misfire.

This behavior manifests in games as a player who continues to walk forward despite having lost their connection to the server, and it's an indicator that a multiplayer server was implemented incorrectly.
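The difference can be sketched in a toy simulation (hypothetical, not from the article): with start/stop events, a lost "stop" packet leaves the player walking forever, while with per-tick sampled input, packet loss simply means no movement.

```python
def simulate(ticks, packets, mode):
    """Advance a player position over `ticks` server ticks.

    packets: dict of tick -> payload that actually arrived (lost
    packets are simply absent). mode is "events" (start/stop walking)
    or "input" (each packet is that tick's sampled input).
    """
    x = 0
    walking = False
    for t in range(ticks):
        msg = packets.get(t)
        if mode == "events":
            if msg == "start":
                walking = True
            elif msg == "stop":
                walking = False
            if walking:          # keeps moving until a "stop" arrives
                x += 1
        else:                    # input mode: silence means no movement
            if msg == "forward":
                x += 1
    return x

# Client intends to walk for ticks 0-4, then the "stop" (tick 5) is lost.
events = {0: "start"}                      # "stop" never delivered
inputs = {t: "forward" for t in range(5)}  # per-tick input, then silence
print(simulate(10, events, "events"))  # walks forever: 10
print(simulate(10, inputs, "input"))   # stops at 5
```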


Multiplayer video games are the big one. There's no reason why you couldn't apply the same kind of synchronization to a web app. You'll just need to decide on an approach because not all games do it the same way. Most likely you'd want rollback netcode.


Indeed, Replicache works this way, using server reconciliation (one part of client-side prediction): https://doc.replicache.dev/concepts/how-it-works


I think you're agreeing with GP, not disagreeing.


I was disagreeing with the notion that this law has to be taken into account. I suppose that’s true for certain software, but if e.g. Apple can get away with breaking these use cases then I don’t see why, as an API designer, I should care either.


I think this really depends on who your customers are and how they pay for your services.


I'm assuming "crash"--it's a mild malapropism, rooted in the phonetics of some varieties of English, that I've seen before.


I get what you're saying, but what's interesting to me is that this case is a mild signal that a subsequent developer could draw the same erroneous implication. "Id" does in fact imply to me that entries are indexed by ID, i.e., an attribute of the item being indexed, and that they are not array-like, in that they wouldn't all get different IDs after a deletion, for example.


Accusations are often confessions.


The shallow analogy is "why worry about not being able to do arithmetic without a calculator?" Like... the dev of the future just won't need it.

I feel like programming has become increasingly specialized, and even before the AI tool explosion it was far more possible to be ignorant of an enormous amount of "computing" than it used to be. A lot of "full stack" developers only understand things to the margins of their frameworks; above and below that, they barely know how a computer works, or what different wire protocols actually are, or what an OS might be doing at a lower level. Let alone the context in which an application sits beyond, say, a level above a Kubernetes pod and a kind of trial-and-error approach to poking at some YAML templates.

Do we all need to know about processor architectures and microcode and L2 caches and paging and OS distributions and system software and installers and OpenSSL engines (and how to make sure you have the one that uses native instructions) and TCP packets and Envoy and controllers and Raft systems and topic partitions and cloud IAM and CDNs and DNS? Since that's not the case--nearly everyone has vast areas of ignorance yet still does a bunch of stuff--it's harder to sell the idea that whatever skills we lose to AI tools will somehow vaguely matter in the future.

I kind of miss when you had to know a little of everything and it also seemed like "a little bit" was a bigger slice of what there was to know. Now you talk to people who use a different framework in your own language and you feel like you're talking to deep specialists whose concerns you can barely understand the existence of, let alone have an opinion on.


> Do we all need to know about processor architectures and microcode and L2 caches and paging and OS distributions and system software…

Have you used modern software... or, to be honest, just software in general?

We have had orders-of-magnitude improvements in hardware performance, and far smaller gains in software performance and features.

May I present the Windows Start menu as a perfect exhibit: we put a web browser in there and made actually finding the software you want to use harder than ever. Even search is completely broken 99% of the time (really, try PowerToys Run, or even Win+S, for a night-and-day difference).

We add boundless complexity to things that don't need it--millions of lines of code--then waste millions of cycles running security tools to heuristically prevent malicious actors from exploiting those millions of lines, which are impossible to know because it's deemed too difficult to learn the underlying semantics of the problem domain.


I've done okay with Copilot as a very smart autocomplete on: a) a very typical codebase, with b) lots of boilerplate, where c) I'm not terribly familiar with the languages and frameworks, which are d) very, very popular but e) I don't really like, so I'm not particularly motivated to become familiar with them. I'm not a frontend developer and I don't like it, but I'm in a position now where I need to do frontend things with a verbose TypeScript/React application that isn't interesting from a technical point of view (it's a good product, but not because it has an interesting or demanding front end). Copilot (I use Emacs, so Cursor is a non-starter, but copilot-mode works very well for TypeScript) has been pretty invaluable for just sort of slogging through stuff.

For everything else, I think you're right, and actually the dialog-oriented method is way better. If I learn an approach and apply some general example from ChatGPT, but do the typing and implementation myself so I have to understand what I'm doing, I'm actually leveling up, and I know what I've finished with. If I weren't "experienced", I'd worry about what it was doing to my critical thinking skills, but I know enough about learning on my own at this point to know I'm getting something out of it.

I'm not interested in vibe coding at all--it seems like a one-way process to automate what was already not the hard part of software engineering: generating tutorial-level initial implementations. Just more scaffolding that eventually needs to be cleared away.


A non-forking prompt command is an absolute requirement for me and always has been, so I did this in pure shell. It's not a lot of code but it's a little tricksy.
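For the curious, a minimal bash sketch of the idea (my actual version differs): build the prompt string with shell builtins only, so nothing is fork/execed on each prompt.

```shell
# Build PS1 with bash builtins only -- no subprocess per prompt.
__prompt() {
  local status=$? err= branch=
  if (( status != 0 )); then
    err=" [$status]"               # show last exit status if nonzero
  fi
  # Read .git/HEAD directly instead of forking `git`.
  if [[ -r .git/HEAD ]]; then
    read -r branch < .git/HEAD
    branch=" (${branch##*/})"      # "ref: refs/heads/main" -> " (main)"
  fi
  PS1="\u@\h \w${branch}${err}\$ "
}
PROMPT_COMMAND=__prompt
```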

If I still used bash, Starship would be a non-starter for me, in part because it's fork/execed in the prompt command. Further up this thread someone says the zsh installation is different: it's a native shared library that gets loaded into zsh. That seems neat.

(The other reason it's a non-starter: maybe I can stomach sending over a dotfile to various systems, but selecting a platform-specific native binary is too much configuration management to do just to prepare for an SSH session or kubectl exec. Eventually I made my peace with doing this for emacsclient so I could have local editing of remote files, but that's a much less critical piece to miss than something that appears in your prompt. Conceptually, if you're willing to ship over and config-manage a native binary, you might as well install a better shell, which became a compelling argument to me when I switched to PowerShell.)

