Hacker News | lateforwork's comments

What should they be using instead? These astronauts are not Linux hackers.

If they were, they'd probably have skipped the mission.

These astronauts are trained to use the system NASA puts them in.

And ultimately they have a lot more important things to be doing than learning a different email client than the one they use at their desk on Earth. This is an email client on a laptop, not a navigation system.

No they don’t. They’re our best and brightest, and they train for years at their one, important job, which is to use the system they’re given.

The mission of the astronauts on board is to test the damn Orion spacecraft in preparation for a human landing on the moon.

> NASA flight controller and instructor Robert Frost explained the reasoning plainly in a post on Quora (via Forbes). “A Windows laptop is used for the same reasons a majority of people that use computers use Windows. It is a system that people are already familiar with. Why make them learn a new operating system,” he reportedly wrote.

https://www.msn.com/en-in/technology/space-exploration/nasa-...


Maybe he should have designed the rest of the controls to look like the cockpit of a 2003 Toyota Camry. It is a system that people are already familiar with. And actually reliable.

Do you think the US has idle capacity that can be activated at a moment's notice?

> Do you think the US has idle capacity that can be activated at a moment's notice?

I'm sure some very smart MBA increased profits by eliminating spare capacity or making cuts that would make it much harder to spin up. That's American business culture: focus on this quarter or this year, nothing else matters.


We can just buy them off Alibaba

STRICT has severe limitations; for example, it does not have a date data type.

Why is it a problem that it allows data that does not match the column type? SQLite is intended for embedded databases, where only your application reads and writes from the tables. In this scenario, as long as you write data that matches the column's data type, data in the table does match the column type.


>> Why is it a problem that it allows data that does not match the column type?

“Developers should program it right” is less effective than a system that ensures it must be done right.

Read the comments in this thread for examples of subtle bugs described by developers.


> “Developers should program it right” is less effective than a system that ensures it must be done right.

You're right, of course. But this must be balanced with the fact that applications evolve, and often need to change the type of data they store. How would you manage that if this is an iOS app? If SQLite didn't allow you to store a different type of value than the column type, you would have to create a new table and migrate data to a new table. Or create a new column and abandon the old column. Your app updates will not appear smooth to users. So it is a tradeoff. The choice SQLite made is pragmatic, even if it makes some of us that are used to the guarantees offered by traditional RDBMSs queasy.
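To make the tradeoff concrete, here's a minimal sketch with Python's built-in sqlite3 module (the table and values are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE settings (key TEXT, value INTEGER)")

# Version 1 of the app stores integers, as the column type suggests.
conn.execute("INSERT INTO settings VALUES ('retries', 3)")

# Version 2 decides the same column should hold a string -- no ALTER
# TABLE, no data migration. SQLite's type affinity permits this:
# 'dark' can't be coerced to an integer, so it is stored as text.
conn.execute("INSERT INTO settings VALUES ('theme', 'dark')")

for key, value in conn.execute("SELECT key, value FROM settings"):
    print(key, repr(value))
# The old rows stay integers; the new row is stored as text.
```

A traditional RDBMS would reject the second insert and force a schema migration; SQLite just keeps going, which is exactly the smooth-app-update property described above.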


> Why is it a problem that it allows data that does not match the column type? SQLite is intended for embedded databases

I'm afraid people forget that SQLite is (or was?) designed to be a superior `open()` replacement.

It's great that modern SQLite has all these nice features, but if Dr. Hipp was reading this thread, I would assume he would be having very mixed feelings about the ways people mention using SQLite here.


No, I think that people can use SQLite any way they want. I'm glad people find it useful.

I do remain perplexed, though, about how people continue to think that rigid typing helps reliability in a scripting language (like SQL or JSON) where all values are subclasses of a single superclass. I have never seen that in my own practice. I don't know of any objective research that supports the idea that rigid typing is helpful in that context. Maybe I missed something...


> where all values are subclasses of a single superclass

I don't understand this. By values do you mean a row (in database terms)? I don't understand what that has to do with rigid typing.

Lack of rigid typing has two issues, in my opinion: First, when two or more applications have to read data from a single database, lack of an agreed-upon-and-enforced schema is a limitation. Second, when you use generic tools to process data, the tools have no idea what type of data to expect in a column, if they can't rely on the table schema.


First off, I am so glad the famous "HN conjure" actually worked! My "if Dr. Hipp was reading this thread" was tongue in cheek because on HN it was extremely likely that's precisely what would happen. Thank you for chiming in, Dr. Hipp - this is why I love HN!

So, in case you missed it, you're responding to Dr. Hipp himself :)

> I don't understand what that has to do with rigid typing.

Now I would like to learn a bit from Dr. Hipp himself, so here's my take on it:

Scripting languages (like my fav, Python) have duck, or dynamic, typing (a variation of what I believe you, Dr. Hipp, specifically call manifest typing). Dr. Hipp's take is that the datatype of a value is associated with the value itself, not with the container that holds it (the "column"). (I must say I chose the word "container" here to jibe with Dr. Hipp's "manifest". Curious whether he chose that word for typing for the same reason!)

- In Python, everything is fundamentally a `PyObject`.

- In SQLite, every piece of data is (or was?) stored internally as a `sqlite3_value` struct.

As a result, a stack that uses Python and SQLite is extremely dynamic and, if implemented correctly, is agnostic of strict types - it doesn't actually care. The only time it blows up is if the consumer has a bug and fails to account for it.

Hence, because this possibility exists, and because no objective research has proven that strict typing improves reliability in scripting environments, it's entirely possible our love for strict types is just mental gymnastics, solving problems that could have been addressed equally well without strict typing.

I can reattempt the "HN conjure" on Wes McKinney and see if a similar reason led him to compromise on dynamic typing (NumPy enforces static typing) for the Pandas 1.x DataFrame because, as both of them are likely to say, real datasets of significant size rarely have all "valid" data. This design allows Pandas to handle invalid and missing fields precisely because of it (even if it affects performance).

A good dynamic design should work with both ("valid" and "invalid") present. For example: layer additional "views" on top of the "real life" database that enforce your business rules while you still get to keep all the real world, messy data.
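A rough sketch of that layered-views idea with Python's built-in sqlite3 (the sensor data is invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# An untyped column: the base table accepts every messy real-world row.
conn.execute("CREATE TABLE readings (sensor TEXT, value)")
conn.executemany("INSERT INTO readings VALUES (?, ?)",
                 [("a", 21.5), ("a", "n/a"), ("b", 19.0), ("b", None)])

# A view that enforces the business rule "values must be numeric",
# while the raw data stays intact underneath.
conn.execute("""
    CREATE VIEW clean_readings AS
    SELECT sensor, value FROM readings
    WHERE typeof(value) IN ('integer', 'real')
""")

print(conn.execute("SELECT count(*) FROM readings").fetchone()[0])        # 4
print(conn.execute("SELECT count(*) FROM clean_readings").fetchone()[0])  # 2
```

Consumers that need clean data query the view; consumers that need everything (auditing, debugging) query the table.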

OTOH, if you don't like that design and absolutely need strict types, use Rust/C++/PostgreSQL/Arrow, etc. They are built from the ground up on strict types.

With this in mind, if you still want to delve into the "Lack of rigid typing has two issues" portion, I am very happy to engage (and hope Dr. Hipp addresses it so I learn and improve!)

The real world is noisy and has surprises in store for us, and as much as engineers like us would like to say we understand it, we don't! So instead of being so cocksure about things, we should instead be humble, acknowledge our ignorance, and build resilient, well-engineered software.

Again, Dr. Hipp, thank you for chiming in, and I would be much obliged to learn more from you.


Thank you for the great explanation. But SQL isn't as dynamically typed as you suggest. If a column is defined as DECIMAL(8, 2), it would be surprising for some values in that column to be strings. RDBMSs are expected to provide data integrity guarantees, and one of those guarantees is that only values matching the declared column type can be stored.

Relaxing that guarantee has benefits. For example, it can make application evolution easier--being able to store strings in a column originally intended for numbers is convenient. But that convenience can become a liability when multiple applications read from and write to the same database. In those cases, you want applications to adhere to a shared schema contract, and the RDBMS is typically expected to enforce that contract.

It also creates problems for generic tools such as reporting systems, which rely on stable data types--for example, to determine whether a column can be aggregated or how it should be formatted for display.


>> but if Dr. Hipp was reading this thread

He is.


If you reached out and notified him, thank you. I hope he has time to revisit - I had a few more followups. Cheers!

No, I did not. I think he's been a regular community member for a long time; he probably just saw it on the front page.

When your application's design changes, you may need to store a slightly different type of data. Relational databases traditionally require explicit schema changes for this, whereas NoSQL databases allow more flexible, schema-less data. SQLite sits somewhere in between: it remains a relational database, but its dynamic typing allows you to store different types of values in a column without immediately migrating data to a new table.

This flexibility is convenient when only one application reads and writes to the table. But if multiple applications access the same tables, the lack of a strictly enforced schema becomes a liability. The same is true when using generic tools to process data in SQLite tables, because such tools don't know what type of data to expect. The column type may be X but the actual data may be of type Y.
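A small sketch of that generic-tools problem, using Python's built-in sqlite3 (column and values invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (amount NUMERIC)")
conn.executemany("INSERT INTO orders VALUES (?)",
                 [(10,), (12.5,), ("pending",)])

# The declared type says NUMERIC, but a generic tool (report writer,
# BI connector) can't trust it -- it has to probe the actual storage
# class of every value with typeof().
rows = conn.execute("SELECT amount, typeof(amount) FROM orders").fetchall()
for amount, storage in rows:
    print(amount, storage)
# -> 10 integer
# -> 12.5 real
# -> pending text
```

The tool now has to decide at runtime whether "amount" is aggregatable, which is exactly the liability being described.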


> get it pumping out CVEs.

Is that a good thing or bad?

I see that as a very good thing. Because you can now inexpensively find those CVEs and fix them.

Previously, finding CVEs was very expensive. That meant only bad actors had the incentive to look for them, since they were the ones who could profit from the effort. Now that CVEs can be found much more cheaply, people without a profit motive can discover them as well--allowing vulnerabilities to be fixed before bad actors find them.


It's good and bad.

Not all CVEs are the same; some aren't important. So it really depends on what gets found as a CVE. The bad part is you risk a flood of CVEs that don't matter (or have already been reported).

> That meant only bad actors had the incentive to look for them

Nah. Lots of people look for CVEs. It's good resume fodder. In fact, it's already somewhat of a problem that people will look for and report CVEs on things that don't matter just so they can get the "I found and reported CVE xyz" on their resume.

What this will do is expose some already present flaws in the CVE scoring system. Not all "9"s are created equal. Hopefully that leads to something better and not towards apathy.


It also depends on whether the CVEs can be fixed by LLMs too. If they can find and fix them, then it's very good.

Fixing isn't often a problem for CVEs. The hard part is almost always finding the CVE in the first place.

There are some extreme cases that might require extensive code changes, and those would benefit from LLMs. But a lot of the issues are things like off-by-one issues with pointers.


Fixing is now the bottleneck.

Most patches are non-trivial, each project/maintainer has a preferred coding style, and maintainers are being inundated with PRs already and don't take kindly to slop.

LLMs can find the CVE with zero interaction, so it scales trivially.


The biggest question is whether you can meaningfully use Claude on defense as well, e.g. whether it can be trusted to find and fix the source of the exploit while maintaining compatibility. Finding CVEs helps attackers directly, while it only helps defenders detect potential attacks without the second step, where the patch is also created. If not, you've got a potential tidal wave of CVEs that still have to be addressed by people. Attackers can use CVE-Claude too, so it becomes a bit of an arms race where you have to find people able and willing to spend all the money to have those exploits found (and hopefully fixed).

How about releasing your own source code? It is a beautiful site, love the UX as well as functionality.

It screams vibe coding. This is the Anthropic look. Just ask Claude and give it a screenshot.

Vibe coding is also why this was released hours after leak instead of days/weeks.

Of course I expect it is vibe coding. It would be insane to code anything by hand these days. But that doesn't mean there is no creative input by the author here.

>> It would be insane to code anything by hand these days.

I strongly disagree, but it made me chuckle a bit, thinking about labeling software as "handmade" or marketing software house as "artisanal".


There are a lot of errors you can miss when coding by hand, even as a seasoned developer. Try taking Claude Code, pointing it at your repo, and asking it to find bugs. I bet it will.

Claude is actually a crazy good vuln researcher. If you use it that way, your code might just be more secure than written purely by hand.


Sure, just like drug-sniffing dogs. Whether they've actually found something or are just pleasing the operator is another story.

Our organic artisanal code is written by free-range developers

"free-range" means fully remote, right?

Depends on what you're building and whether it's recreational or not. Complex architecture vs. a UI analysis tool, for example. For a UI analysis tool, the only reason to code by hand is for the joy of coding by hand. Even though you can drive a car or fly in a plane, there are times to walk or ride a bike still.

Depending on your standards and what company is making it you could even have “cruelty free.”

You're well on the path to AI-fueled psychosis if you genuinely believe this.

I genuinely believe this. Even if you're inventing a new algorithm it is better to describe the algorithm in English and have AI do the implementation.

At least it's more productive than AI Derangement Syndrome.

Yes, it is vibecoded; it took like 10-15 minutes. I did not know how to write a piece of code 4-5 months back.

Must everything be artisanal for some people? </s>

Guess what? People have ZERO reason to Open Source anything now.

One reason, besides basic altruism, is so you can put the projects on your resume. This is especially helpful if the project does very well or gets lots of stars.

That said, Jevons paradox will likely mean far more code is open sourced, simply due to how much code will get written in total.

We should be applauding the promotion of science and useful arts that genAI is fueling.

But egos are involved.


Why would you think that?

I'm a committed open source dev and I've flipped my own switch from "default public" to "default private".

Because nobody wants their shit stolen by some punk.

As a cynical modern eng, I look for landing-page skills.

Go to https://www.copilot.com/ and ask a question. You'll see from the answers that it is indeed for entertainment only. It is ridiculously behind ChatGPT, and I don't know how that can happen since Microsoft has access to the same models.

It's not as bad as in the GPT-4.1 days, but I am wondering if it is just the system prompt or what is going on.

Are you not entertained?!

Oracle database has unparalleled scalability. Ask someone who works in Microsoft's SQL Server division what their bug database looks like. They will tell you that a single SQL Server instance cannot scale to serve the entire SQL Server division. Oracle, on the other hand, has a single database for the entire company. No other database is this scalable.

But Oracle is not just a database company. Oracle started as a database company, but today they are more of an applications company than a database company. They have ERP back-office applications (finance, operations, HR), and CRM front-office applications (sales, marketing, service). Oracle bought a large number of applications software companies such as Siebel, PeopleSoft, JD Edwards, NetSuite and Cerner to become this big.

Of course, Oracle is also a major cloud services provider and provides AI superclusters and GPU instances from NVIDIA and AMD (context for today's layoffs).


I'm actually impressed by the amount of abuse our Oracle instances are able to take from our developers.

Massive amounts of parallel single reads and writes with millisecond responses, mixed with mega-joins of incorrectly indexed tables that work flawlessly "on their machine" and limp on well enough to sneak past performance testing, with just the planner silently writhing in agony.


The original question discounts the capability of Oracle's database too much, treating it as only something "golf executives" buy. When you have a large problem that is best solved with a relational model, Oracle delivers and can indeed be worth all the money and license hell involved.

What is the alternative? Have 30,000 meetings? How long will that take?

A great alternative would be operating a company correctly so you don't end up in a situation where you need to cut 30k jobs at once with no notice. That's a bizarre thing that's becoming practically normalized in the USA tech industry.

Agree. People understand and accept firing for performance issues. People understand and accept layoffs when they're a rare event needed to save the company from bankruptcy. What's not understandable or acceptable to most is the current trend of companies doing annual or even quarterly layoffs as an ongoing way to manage earnings.

The realistic alternative is to regularly cut a smaller number of people, which is awful for morale.

Does it have to be awful for morale if the reasoning is clear and compassionate? People understand that shit happens.

And I don't mean this in a mean or evil way, but (of course there's a but) I wonder if this would motivate people to work more effectively as well. My organization has had cuts lately after not having any in a decade. It has been transformative. People are reminded that their jobs depend on them showing up and being valuable.

I don't want people to be scared for their jobs. Perhaps this cycle creates false security, though. There must be a balance in here somewhere.


People tend to be bad at estimating the performance of others and are almost always bad at estimating their own performance. So you end up with people asking themselves why it wasn't them and whether they will be next. And management can't tell you you are safe, because it might change - and if they promise, they can only do that once.

Can you imagine a company spending a long time on meetings?!

6+ months' notice with a severance package equal to at least an annual salary.

Why would you give someone 6 months notice? What good is that for the employee? Especially if the severance is generous.

“Hey, we’re going to fire you in 6 months. Just a heads up.”

Nah. Give me the year of salary and send me home today. Better for the employee and for the company than pointlessly dragging it out. Again, this is assuming generous severance.


Job hunting takes time. Also, they won't be deported in 30 days, along with their families.

I can do a lot of job hunting with a year of severance.

Valid point about employees on visas though.


Maybe they could be kept on the payroll without access to actually work.

But the real problem is any law that would deport someone 30 days after they were laid off, even if they had been working for years. That should be 6 months minimum.


Keeping them on the payroll also enables companies to easily manage and extend medical insurance. I’m pretty sure that what you propose is what a lot of companies actually do, too. They keep them on the payroll for the duration of their severance but do not expect them to actually work.

Agree that no one should be getting deported on 30 days because they got laid off.


A "performance improvement plan" is almost always a 6-month/1 year warning that you're going to get fired/laid off.

It's common in some companies.


Giving any kind of notice about layoffs while expecting employees to continue working is just bad for everyone.

The employees stress out about whether they're going to be impacted. Nobody gets much work done as they update their resumes and prepare for the worst. The best people start looking for other opportunities and find them. If specific employees are told they're going to be laid off, some seek revenge.

Much better to immediately notify those impacted, revoke their access, give them generous severance instead of expecting them to work, and let everyone else know they're safe.


You immediately notify the affected persons with the given notice period. This is how it's done in civilized countries.

6 months notice + 12 months salary, which is what you are proposing, seems strictly worse to me than just 18 months salary and no notice.

Were those people not already having regular 1-on-1 meetings with a manager?

In many cases the manager is among those laid off. In fact some VPs and their entire org have been laid off.

The flip side should be considered as well. There should be some sort of protection for small startup companies. A big company should not be able to steal an innovative startup's technology by hiring away the employees that worked on the product. That used to happen a lot when Bill Gates was running Microsoft, for example.

Patents provide some protection, but it is flawed because a big company can put you out of business if you get into a patent war. An employee should be able to leave at any time and work for a competitor, but maybe should not do identical work, otherwise startups will have a hard time protecting their IP.


Companies need to put more care into who they trust, and maybe incentivize skin in the game. If leaving for a competitor means you lose equity, agency, ownership, or some intangible, that can outweigh bigger paychecks.

The market should be able to solve this problem without the government setting arbitrary rules, and people should be allowed to sign contracts that limit or restrict their freedom, so long as it involves informed consent from all parties.

If Microsoft wants to hire an AI expert for a million dollars a year, and restrict him from competing for 2 years after leaving Microsoft so as to avoid losing market advantage, that seems like a reasonable thing for Microsoft to want. If all Apple has to do to get all the Copilot secrets is hire the chief Copilot engineer for 1.5 million, that seems like it creates a toxic dynamic and all but guarantees acquihires and a near-immediate startup-to-corporate pipeline for raiding IP.

Maybe we should be limiting businesses to doing business at a scale they can responsibly handle. If you can't get human customer service for your computer issues because Windows and Mac have scaled far beyond the number of users they could ever hope to handle, maybe that market needs regulation, and unless they scale customer service accordingly, they don't get to target a majority of the world's population as their customer base?

That'd certainly create jobs and opportunities for Linux and induce a revolution in software markets, and it'd limit the incentives for MS and Apple and big tech to do shitty things to suppress the markets overall.


The solution here in finance is garden leave, where people are contractually barred from competing with their former employer during a period for which they are compensated as if they were fully employed!

A lot of politics is people pretending the solution space hasn't already been explored.

Employers have plenty of leverage over workers already.

Every time a pro-worker bill passes, there's an endless screed of "But what about the corporations?". Wow, it's tiring.


Small startups in California (where many, if not the majority, of tech startups are headquartered) do just fine without enforceable non-compete agreements.

It's also already unlawful to steal another company's assets when you leave. Besides, companies should file provisional patent applications as soon as they invent valuable proprietary technology to prevent the sort of subject matter leakage you mention.


This is not a mechanism to protect startups. This is a mechanism to protect the flow of ideas, whether the ideas are flowing from a big company to a startup or vice versa. Workers who find a big company bureaucratic should be able to launch a startup. Workers who find a small startup insufficiently resourceful should also join a big company to get resources.

No big company is going to bother poaching that way. They are either going to purchase the company outright or undercut them with their own competing product to kill it off through attrition. We're not in the 2010s anymore, where people are banging at the door for individual SWEs.

Acquihire practices show that yes, sometimes people really ARE the company. However, I think for the average C# developer, Epson printer specialist, WordPress dev, or Bosch controller analyst, this isn't really true.
