Hacker News | jodosha's comments

> The tradeoff of higher velocity for less enjoyment may feel less welcome when it becomes the new baseline and the expectation of employers / customers

This is what happens with big technological advancements. Technology that enables productivity won’t free up people’s time; it only sets higher expectations of getting more work done in a day.


I’m in Indonesia at the moment for vacation.

Just checked with NordVPN connected to their Indonesia #54 server (Borneo), and I was able to access twitter.com (via Chrome) and Discord (via the app).

I’m on iPhone.


Still no CLI like Claude Code?


You are looking for Codex CLI [0].

0 - https://github.com/openai/codex


Thank you!


It works on Codex CLI; install it with npm.

That's been out for a while and used their 'codex' model, but they updated it today to default to gpt-5 instead.


Oh nice, thanks!


Not a native English speaker, but isn’t the word “attributable” in the title at least misleading?

Shouldn’t it be “linked” instead?

The paper indicates correlation, not causality.


Based on the abstract it's not a study showing that "Type 2 Diabetes and cardiovascular disease [are] attributable to sugar beverages", it's a paper quantifying the "[Amount of] Type 2 Diabetes and cardiovascular disease [that are] attributable to sugar beverages [in various countries]". The link and the causation are already well established. This is trying to determine how much harm it's doing in different parts of the world.


Thanks for sharing the info.

> where each chunk can go through a multi-stage process, in which the output of the first stage is passed into another prompt for the next stage

Is this made possible by your custom code, or is it something that OpenAI now offers off the shelf via their API?

If the latter, that would partially replace LangChain for simple pipelines.


It is made possible by my code. But I would emphasize that the code is quite trivial. It's literally just populating a prompt template with the output of a previous template-- simple string manipulation. I never could understand why anyone would want to use Langchain for that sort of thing.
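A minimal sketch of the kind of chaining described above, in Python. The template texts and the `call_model` stub are placeholders (a real implementation would call an actual LLM API here); the point is that each stage is just string formatting feeding the next prompt.

```python
# Minimal two-stage prompt chain: the output of stage one is
# interpolated into the template for stage two.

def call_model(prompt: str) -> str:
    # Stand-in for a real LLM API call, so the sketch runs offline.
    return f"[model output for: {prompt[:40]}...]"

SUMMARIZE = "Summarize the following chunk:\n\n{chunk}"
REFINE = "Rewrite this summary in one sentence:\n\n{summary}"

def process_chunk(chunk: str) -> str:
    # Stage 1: summarize the raw chunk.
    summary = call_model(SUMMARIZE.format(chunk=chunk))
    # Stage 2: feed stage 1's output into the next prompt template.
    return call_model(REFINE.format(summary=summary))

print(process_chunk("Lorem ipsum dolor sit amet..."))
```

No framework needed: the whole "pipeline" is two `str.format` calls and two model invocations.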


Yes. Hanami is Rack based, so Warden or Rodauth works great with it.


I'm the author of Hanami.

We started at the same time, but unfortunately I never had the chance to play with Phoenix.


We're trying to cover two main use cases: API apps and full-stack web applications. We recognize that some Rails features are really useful, and for those use cases we're trying to be a lightweight alternative.


Why should I use Lotus over Sinatra, Padrino, Grape, Rails, etc.?


Speed, control, testability and maintainability.


I'm giving Lotus a try for an API. I'm looking for something a little more lightweight than Rails. I'm hoping for a fun experience.


@vidarh Lotus doesn't depend on Lotus::Model. I kept it out of the dependencies for the reasons that you've described. Future versions of Lotus will have some facilities for Model, but it won't be a requirement.

EDIT: BTW Lotus::Model is shipped with an SQL adapter that is a nicer wrapper on top of Sequel. If you love Sequel, you will love Lotus::Model too.


Mine was a provocative title. What I want to say is: opinions can be right for some people and wrong for others.

