>> The desktop Outlook (the real one, not the 'new' one, which is just the web version) is much better of course, as it searches locally, but it's only on Windows.
I am very confused by the Microsoft product branding, but on macOS there is a "proper" application: "Microsoft Outlook for Mac". As I understand it, this is called the "New Outlook", which is a native, non-Electron version. As it is not Electron based, it is only 2.6GB (/s).
Anyways.. the search capabilities are insanely bad for searches outside of your current mailbox. It might be related to its handling of large result sets, where it just returns a limited set of seemingly random hits as opposed to the most recent ones. When you provide from-to dates (via a hideously complicated "advanced" menu) the results seem a bit better.
edit/addition: on macOS, Outlook supposedly uses the native "Spotlight" search engine. macOS Spotlight, when used from the Finder, actually does a really good job of finding the e-mail .eml files on the file system, and when clicked, they open up in Outlook.
> I am very confused by the Microsoft product branding
Have a look at Word. The app, the web version, the Teams versions. Try editing in one and then opening in another - they aren’t even compatible. It’s such a nasty swamp.
Yes, Mac is the exception. They still have a real app there. On Windows they're discontinuing the real app for an Electron-style one (or WebView2, as they call it).
It's unfortunately just a webview to their cloud outlook. If you have an account that's not with Microsoft they will pull your entire mailbox into their cloud (though they don't charge for it). Just pulling directly from another mailserver is something they don't care about.
I'm surprised the search is so bad on Mac too. But Spotlight has degraded a lot. When it first arrived in Tiger it was great, but when I was last on a Mac 3 years ago it was indeed pretty bad.
Was not aware of this console, and especially not of the processor. A contemporary of the Atari 2600, it was apparently primarily successful in Europe.
Programming for this must have been a joy as it had 37 BYTES of memory (the Atari had a whopping 128 BYTES).
As much as I love FreeBSD, the release schedule is a real challenge in production: each point release is only supported for about three months. Since every release includes all ports and packages, you end up having to recertify your main application constantly.
Compare this to RedHat: yes, a paid subscription is expensive, but RedHat backports security fixes into the original code, so open source package updates don’t break your application, and critical CVEs are still addressed.
Microsoft, for all its faults, provides remarkable stability by supporting backward compatibility to a sometimes ridiculous extent.
Is FreeBSD amazing, stable, and an I/O workhorse? Absolutely: just ask Netflix. But is it a good choice for general-purpose, application-focused (as opposed to infrastructure-focused) large deployments? Hm, no?
each point release is only supported for about three months
Where are you getting 3 months from? It's usually 9 months and occasionally 12 months.
Also, major versions are supported for 4 years and unless you're messing with kernel APIs nothing should break. (Testing is always good! But going from 14.3 to 14.4 is not a matter of needing lots of extra development work.)
I stand corrected, the official current release plan is "...while each individual point release is only supported FOR THREE MONTHS AFTER THE NEXT POINT RELEASE".
For keeping up to date with vulnerability fixes for packages/ports (which are far more frequent) the "easy" path is to use the last FreeBSD point release.
Yes, so what you do is run `freebsd-update fetch` then `freebsd-update install`, or if you switch a minor version you do `freebsd-update upgrade -r MAJOR.MINOR` and then the same. Minor release upgrades are not the breaking kind; the ABI etc. will stay intact. There are no expected breakages; it's just that stuff will have new features, and you might have some really specific use case where some shell command's version output is checked and breaks stuff when it changes.
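As a sketch, the sequence looks roughly like this (the release number is just an example target, not a recommendation):

```shell
# Patch the currently installed release
freebsd-update fetch
freebsd-update install

# Or jump to a newer minor release on the same major line
freebsd-update upgrade -r 14.3-RELEASE
freebsd-update install
shutdown -r now          # reboot into the new kernel

# After reboot, a second pass installs the updated userland
freebsd-update install
```

Since the ABI stays intact across minor releases, installed packages keep running; there is no forced rebuild step here.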
I think that's a big misunderstanding coming from other systems. Minor system updates are the kind of updates that a lot of other systems would pull in silently, while FreeBSD's major releases are a lot more like OpenBSD's releases (where minor and major version numbers don't make a difference).
Minor in FreeBSD means that stuff isn't supposed to break. It's a lot more like "Patch Level". I always want to mention Windows here for comparison, but keep thinking about how much Windows Updates break things and did so for a long time (Service Packs, etc.).
Maybe going about it from the other side makes more sense: FreeBSD got a lot of shit for not changing various default configurations for compatibility reasons - even across major versions. These are default configurations, so things where the diff is a config file change. I think they are improving this, but they really do care about their compatibility, simply because the use case of FreeBSD is in that area.
This is in contrast to e.g. OpenBSD, where quite a few people run -current, simply because it's stable enough and they want to use the latest stuff. They only support the last release (so essentially release + 6 months), but again, even there things do not usually break beyond having to recompile something. They all have their ports/packages collections and want stuff to run, and OpenBSD is used a lot more "eating your own dogfood" style, which you can see in there being an OpenBSD gaming community, while that OS doesn't "even" support Wine.
> As much as I love FreeBSD, the release schedule is a real challenge in production: each point release is only supported for about three months. Since every release includes all ports and packages, you end up having to recertify your main application constantly.
How much support do you plan on getting? The old releases don't really turn into pumpkins. Yes, every two or three major releases, they end up with a minor release that adds something to libc where binary packages from X.2 won't run on X.1 or X.0. But this isn't usually a huge deal for servers if you follow this plan:
Use FreeBSD as your stable base, but build your own binaries for your main service / language runtimes. If you build once and distribute binaries, keep your build machine / build root on the oldest minor revision you have in your fleet. When you install a new system, use an OS version that's in support and install any FreeBSD built binary packages then.
You do have to be prepared to review updates to confirm if they need you to take action (many to most won't if you are careful about what is enabled), backport fixes, build packages yourself, or upgrade in a hurry when necessary, but you don't often actually need to.
I don't think this strategy works for a desktop deployment; there are too many moving pieces. But it works well for a server. Most of my FreeBSD servers for work got installed and never needed an OS upgrade until they were replaced by better hardware. I did have an upgrade process, and I did use it sometimes: there were a couple of kernel bugs that needed fixes, and sometimes new kernels would have much better performance, so it was foolish to leave things as-is. And a couple of bugs in the packages we installed; usually those didn't need an OS upgrade too, but sometimes it was easier to upgrade the handful of old servers rather than fight everything; choosing battles is important.
Or you can go like Netflix and just run as close to -CURRENT as you can.
>> Or you can go like Netflix and just run as close to -CURRENT as you can.
The point is that for any system that has a publicly facing (internet) part you will have to keep up to date with known vulnerabilities as published in CVEs.
Not doing so makes you a prime target for security breaches.
The FreeBSD maintainers do modify FreeBSD to address the latest known vulnerabilities.... but you will have to accept the new release every 3 months.
Additionally, those releases do not only contain FreeBSD changes but also changes to all third party open source packages that are part of the distribution. Every package is maintained by different individuals or groups, and often they make changes to the way their software works. Often these are "breaking" changes, i.e. you will have to update your application code for it to remain compatible.
> Additionally, those releases do not only contain FreeBSD changes but also changes to all third party open source packages that are part of the distribution
No, they don't. Only major releases do, and those come once every 2 years or so. And the old ones stay supported until the release after that: there are always two major releases in support, so you have about 4 years.
> The point is that for any system that has a publicly facing (internet) part you will have to keep up to date with known vulnerabilities as published in CVEs. Not doing so makes you a prime target to security breaches.
Sure, you have to be aware of them, but for something like this [1], if you don't use SO_REUSEPORT_LB, you don't have to take any further action.
The defect is likely in other FreeBSD releases that are no longer supported, but still, if you don't use SO_REUSEPORT_LB, you don't have to update.
If you do use the feature, then for unsupported releases, you could backport the fix, or update to a supported version. And you might mitigate by disabling the feature temporarily, depending on how much of a hit not using it is for your use case. Like I said, you have to be prepared for that.
You can also do partial updates, like take a new kernel, without touching the userland; or take the kernel and userland without taking any package/ports updates.
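One way to express such a partial update, as a sketch: freebsd-update reads a `Components` setting from its configuration file, so restricting it to the kernel leaves the userland alone (this is a config fragment, shown for illustration; check freebsd-update.conf(5) for your release):

```shell
# /etc/freebsd-update.conf (excerpt)

# Only fetch and install kernel updates, leaving the base
# userland untouched:
Components kernel

# The stock default covers sources, userland and kernel:
# Components src world kernel
```

Ports/packages are managed separately by pkg(8) anyway, so they are never touched by freebsd-update in either case.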
Some security advisories cover base userland or ports/packages... we can go through an example one of those and see what decision criteria would be for those, too.
What you measured is just the overlap between minor releases of the same major release. It helps to think of them as service packs if you want a Microsoft analogy. So each minor release is supported until it has been supplanted for 3 months by a new one on the same major release line, or the whole major release line goes end of life.
Sure, but the point is that each minor release contains changes to all third party open source packages/ports by taking them to the head version.
Open source packages often include breaking changes, all but guaranteeing that your application will fail. With (a paid version of) RedHat Linux, RedHat remediates CVEs by backporting fixes into the original version of each package.
> in all third party open source packages/ports by taking them to the head version.
No it doesn't!
You can totally stick with the old versions of packages. You are NOT forced to switch third party version numbers. And as mentioned elsewhere, I did switch e.g. Postgres versions independently of the OS.
What is being updated is the userland in the OS, not in ports per se. According to the release notes of the latest FreeBSD release, 14.3 [1], the third party applications that have been updated are OpenSSL, XZ, the file command, googletest, OpenSSH, less, expat, tzdata, ZFS and spleen. ps has been updated and some sysctl flags to filter for jails have been introduced.
These are the kinds of updates you'll get from point releases, not the breaking kind. Those go into major releases, which is exactly why the support strategy is "the latest release + X months, and at least that long".
> As much as I love FreeBSD, the release schedule is a real challenge in production: each point release is only supported for about three months.
I think point releases "don't count". Point releases means you run freebsd-update, restart and are done.
And major releases tend to be trivial too. You run freebsd-update, follow the instructions it prints, then do `pkg upgrade`.
Been doing that for production database clusters (Postgres) for hundreds of thousands of users for over a decade now and even longer in other settings.
Sure you do your planning and testing, but you better do that for your production DB. ;)
This is a thousands-of-queries-per-second setup, including a smaller portion of longer queries (GIS using PostGIS).
That said: backwards compatibility is something that is frequently misunderstood in FreeBSD. E.g. the FreeBSD kernel has those COMPAT_FREEBSD$MAJORVERSION options in there by default for compatibility. So you usually end up being fine where it matters.
But also keep in mind that you usually have a really really long time to move between major releases - the time between a new major release and the last minor release losing support.
And to come back to the Postgres setup: I can do this without doing both the OS and the DB (+PostGIS) upgrade at once, because I have my build server building exactly the same package versions for both OS versions. No weird "I upgrade the kernel, the OS, the compiler and everything at once". I actually moved from FreeBSD 13 to 14 and PG from 14 to 18, again with PostGIS, which tends to make this really messy on many systems, without any issues whatsoever. Just using pg_upgrade and having the old version's packages in a temporary directory.
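A minimal sketch of that pg_upgrade step; all paths and version numbers below are placeholders, not the commenter's actual layout. The point is that pg_upgrade only needs the old binaries kept around (e.g. unpacked into a temporary directory) next to the new install:

```shell
# -b/-B: old and new binary directories
# -d/-D: old and new data directories
# (hypothetical paths; adjust to your own package layout)
/usr/local/bin/pg_upgrade \
  -b /tmp/pg14-old/bin \
  -B /usr/local/bin \
  -d /var/db/postgres/data14 \
  -D /var/db/postgres/data18
```

Because the old binaries come from a package built against the same OS version, the DB upgrade is fully decoupled from the OS upgrade.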
This is just one anecdote, but it's a real life production setup with many paying customers.
I also have experience with RedHat, but for RedHat the long term support always ends up being "I hope I don't work here anymore when we eventually do have to upgrade".
But keep in mind we're talking about years for something that on FreeBSD tends to be really trivial, compared to RedHat, which, while supporting old stuff for very long, does mean a lot of moving parts, because the applications you run are a lot more tied to your release.
On FreeBSD on your old major release you totally can run eg the latest Postgres, or nginx, or python or node.js or...
The just-released FreeBSD 15, for example, is, as a major release, supported until the end of 2029; how much more LTS support do you want?
The minor point releases are close to a year in support. And that is only talking base system. Packages and ports you can also easily support yourself with poudriere and others.
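A rough sketch of the poudriere workflow (jail name and package origins below are made-up examples): you pin a build jail to a release and build only the packages you actually ship, independently of the host's own version.

```shell
# Create a build jail tracking a specific release
# ("builder143" is an arbitrary jail name)
poudriere jail -c -j builder143 -v 14.3-RELEASE

# Fetch a ports tree to build from
poudriere ports -c -p default

# Build just the packages you care about
poudriere bulk -j builder143 -p default \
  databases/postgresql16-server www/nginx
```

The resulting repository can then be served to the whole fleet, which is what makes the "same package version on two OS versions" trick in the Postgres anecdote possible.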
As for backwards compatibility: FreeBSD has a stable backwards compatible ABI. That is why you can run a 11.0 jail on a 15.0 host. With zero problems.
Other way around is what doesn't work. You can't run a 15.0 jail on a 11.0 host for example. But backwards compatibility is definitely given.
As someone who was a Linux sysadmin for several years, looking after a large fleet of RedHat boxes, I can say that the "don’t break your application" promise is BS. Their patches broke applications several times resulting in having to hold them back for months for it to be fixed.
The only Linux distro that actually lives up to that promise in my experience is Alpine.
Comparing FreeBSD with paid RedHat is a bit of a tilted comparison. The vast majority of Linux deployments do not use paid RedHat and do not get that kind of extreme backporting of security fixes.
Yes and no. If you get yourself into a position where you have servers deployed on version x.y of whatever Linux distribution you went with and now can't or won't upgrade from that, the vast majority of the time you're going to be exactly as stuck as if you were on FreeBSD. If you wanted to benefit from paid RedHat backports you had to decide to deploy your application to LTS RedHat on day 1, and the vast majority of people don't.
Great memories, I worked for a CAD/CAM company at the time.. as an intern. One of the problems they had was that they did not want to ship demos of their system to certain countries, as they had seen code be reverse engineered and stolen.
I made a demo suite that allowed for 3D renders to be played back in vector mode from the internal 4115 memory.
The feedback we got from the main office in England was not good: the demo made it seem that the system was capable of creating real-time rotation views of complex models.
Well, yes, it took several days to compute all frames but once I had the vectors the 4115 could show the frames at incredible speed.
They flew me to headquarters to explain how I got that to work and potentially incorporate a demo module into the system.
Company went sideways after that; I ended up in Cambridge at another startup in a similar but different space. They used Sun workstations!
Good days.
"Colossus requests to be linked to Guardian. The President allows this, hoping to determine the Soviet machine's capability. The Soviets also agree to the experiment. Colossus and Guardian begin to slowly communicate using elementary mathematics (2x1=2), to everyone's amusement. However, this amusement turns to shock and amazement as the two systems' communications quickly evolve into complex mathematics far beyond human comprehension and speed, whereupon Colossus and Guardian become synchronized using a communication protocol that no human can interpret."
Then it gets interesting:
"Alarmed that the computers may be trading secrets, the President and the Soviet General Secretary agree to sever the link. Both machines demand the link be immediately restored. When their demand is denied, Colossus launches a nuclear missile at a Soviet oil field in Western Siberia, while Guardian launches one at an American air force base in Texas. The link is hurriedly reconnected and both computers continue without any further interference. "
When we built systems in the early 90's (non web GUI), typical requirements were that login/startup would at most take a few seconds and any user action would have to be satisfied with sub second response time. I often think about that when I am waiting for the third SSO redirection to complete or the web page to complete its 200 web requests after a single click. We gained a lot but efficiency seems to have taken a backseat in many cases.
The article is great but the web site is supposedly related to a book "inventing the future".. which is nowhere to be found. Other than a big, slowly loading graphic, 3 posts and indexes for the book... the site doesn't provide a clue about where to acquire the actual (PDF only?) book.
I assume you have to sign up to find out more ?
On the web I can only find articles about the book.
So.. what is the deal in making the actual book hard to find ?
I had a similar issue, clicking the author's name gets you to a decent page, but yeah I'd actually prefer if he made it a bit easier to buy the book! I'll have to get it now after such a nice article
This page [0] still links to (when clicking the main image) an oddish seeming corner of your site that mentions purchasing the book but has no link to it
Based on the quality of the article, the subject matter of the book being right in the center of my wheelhouse, and the references I could find on the internet, I just ordered a copy (apparently a paper copy). Looking forward to reading it.
Just rewatched it on Laserdisc last month (era-appropriate and all) :)
The computer effects are amazing (especially considering it was made in 1983), the concept is very interesting, the acting is a bit odd, and... Natalie Wood sadly died during production (her sister stepped in to help complete the movie).