drfuchs's comments | Hacker News

Colossus: The Forbin Project is simply a renamed release of The Forbin Project, from a few months after the latter had a poor opening. It didn't help the box office much. I liked it, back when it was easy to dismiss as an impossible dystopia.

Oh. Well, the sequel in print was named Colossus. It's about continuing life under the reign of the supercomputer.

Waymo halted service in San Francisco as of Saturday at 8 p.m., following a power outage that left approximately 30% of the city without power. The autonomous cars have been causing traffic jams throughout the city, as the vehicles seem unable to function without traffic signals.


It still says "44 characters" when I click the link.


Fixed again. I missed one. :)

Thanks.


Any chance it was for the "IBM Personal Computer AT/370" that nobody remembers (perhaps because nobody used it)?


That was one option I thought of at first (mentioned in the first section), but the info I found indicated that the /370 models used the same firmware as the "plain" 5170s - if there were any BIOS extensions, they were probably somewhere on the add-on cards. The AT/370 also had 512K of on-board RAM, while this BIOS seems to indicate 640K.


Plenty of people remember, and used, them. Just not people who tend to hang out here. I knew several IBM VM dev types who had them as light dev/remote mainframe access machines, usually at home. They were popular enough that there was a follow-on product: the PC/390, which was the same idea with a more advanced processor, based on a PS/2 Micro Channel platform (and, AFAIK, OS/2).

You want really obscure? Unisys had the same idea with the "Micro-A": a PC running OS/2, with a coprocessor card carrying a single-chip implementation of an A-series mainframe. I know of 2, possibly 3, still around.


Details: the IBM AT/370 used the standard BIOS on the motherboard, and the two custom 68K cards had their own BIOSes. The 68Ks were very heavily modified by one of the Motorola engineers.

It was the second version of the AT BIOS that was disgusting: it ran on 6MHz 286s and prevented you from swapping the crystal for a 16MHz/8MHz speed-up. The first version had bugs, and the third version was for the 8MHz machines (still with a few bugs).

This is the AT/370:

https://en.wikipedia.org/wiki/PC-based_IBM_mainframe-compati...

https://www.cpushack.com/2013/03/22/cpu-of-the-day-ibm-micro...

https://anycpu.org/forum/viewtopic.php?f=22&t=350

There was one additional model of the IBM AT: the IBM XT/286, an AT-class motherboard in an XT-sized case.

https://www.dosdays.co.uk/computers/IBM%20PC-XT-286%20(5162)...


Oops. Anyway, I remember attending a talk by one of the IBM engineers back when they first released the XT/370. He said that they looked at all possible ways to integrate their production line as a kind of secondary track off of one of the main production lines for the PC/XT, but the most economical option ended up being a separate facility that would receive normal pallets of regularly boxed, end-user XTs from the main factory, unbox them, make the mods, and pack them back into XT/370-labeled boxes for shipping.


The article discusses and dismisses that possibility.


I remember that. I think it ran VM/SP or whatever it must have been called.

I recall the 370 part was on a card.


Three cards: CPU/memory cards and a communications card.


The high sale price was due to the fact that this was a rare "REVENGE of the Jedi" rather than the normal "RETURN of the Jedi" poster. The back-story is that the movie title was originally going to be "Revenge..." but then there was pushback because Yoda had said "A Jedi craves not revenge" in the previous episode, so it got changed.


And there are two varieties of this "revenge" poster, too, both of which were in this collection: one without the date, and one with it [0], which sells for ~1/3 as much. That's despite these having been printed in reasonably high quantity and distributed straight to the collector market at the time of the movie's promotion, when the franchise was already quite popular.

[0] https://auctions.emovieposter.com/Bidding.taf?_function=deta...


> Yoda had said "A Jedi craves not revenge" in the previous episode

No, he never said that.


Yeah, and I suppose you’re going to tell me that Han didn’t shoot first, either. Did you refer to an original 1980 70mm release print, before all the fiddling around they did on subsequent releases? And newspapers and fanzines from 1982 that covered the issue (at first, LucasFilm denied these posters even existed).

On the other hand, it seems that you are, in fact, correct. Oh, well.


In the lore around this early title, I heard that George Lucas said something to that effect: that a Jedi would not seek revenge.


Real programmers would have donated $524,288. But seriously good news nonetheless.


You can chip in the remainder in soft monthly installments of $512 over two years.


Indeed. Take a gander at the last screenful of ziglang.org


We had to leave some room at the top for SpiralDB and ZML to get to the next power of two, or they'd have to raise the exponent. ;P


For those who don't intuitively think in base 2:

2¹⁹ bytes, or 512KiB.


I'd prefer to express it in hexadecimal, where 1 hexadollar would be 256 cents. It comes out very slightly more, at 0x00030000.00 hexadollars, or 196608.00 in decimal, or 50331648 cents: $503,316.48.
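
A quick sanity check of that arithmetic (a Python sketch, assuming as above that one hexadollar is 0x100 = 256 cents):

    hexdollars = 0x00030000         # 196608 in decimal
    cents = hexdollars * 0x100      # one hexadollar = 256 cents
    print(cents)                    # 50331648
    print(f"${cents / 100:,.2f}")   # $503,316.48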

I may have been looking at the binary year 2038 countdown :D https://retr0.id/stuff/2038/


That’s a ZipCode.


Not being able to chown() caused us grief developing Frame Maker back in the 80s. The responsible way to handle "save" was to write the document into a new file mydoc.new, then rename mydoc.cur to mydoc.backup, and then rename mydoc.new to mydoc.cur, so that a failure never left you in the lurch. The only problem was that there was no way to create mydoc.new with the same owner as mydoc.cur, and customers complained that we'd keep changing the owner of their files. If only the semantics of the Unix filesystem had supported file generation numbers, like on TOPS-20 or VAX/VMS, where the default for writing to a file isn't "yeah, sure, write over top of the old data, and let's hope nothing fails along the way", this would not have been a problem.
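
A minimal sketch of that save dance in modern Python; os.replace is the atomic rename, and the chown() attempt is exactly the part that needs root and thus failed for us:

    import os

    def safe_save(path, data):
        tmp = path + ".new"
        with open(tmp, "w") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # make sure the new bytes are on disk
        try:
            st = os.stat(path)
            os.chown(tmp, st.st_uid, st.st_gid)  # root-only on Unix...
        except PermissionError:
            pass                   # ...so the saved file changes owner
        os.replace(path, path + ".backup")  # keep the old version around
        os.replace(tmp, path)      # atomic rename on POSIX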


> caused us grief developing Frame Maker back in the 80s

To be fair, Frame Maker caused the rest of us a whole lot of grief back then, too. :)

The license manager daemon, lmgrd(?), would crash regularly enough that we just patched the dependency out of our binaries. Sorry about that!


I've always felt that file systems are by far the weakest point in the entire computing industry as we know it.

Something like ZFS should have been bog standard, yet it's touted as an 'enterprise-grade' filesystem. Why is common sense restricted to 'elite' status?

Of course I want transparent compression, dedup, copy-on-write, free snapshots, logical partitions, dynamic resizing, per-user/partition capabilities & QoS. I want it now, here, by default, on everything! (Just to clarify, I've never used ZFS.)

It's so strange that in the compute space you have Docker & cgroups and software-defined networking, while in the hard-drive space I'm dragging boxes around in GParted like it's the Victorian era.

Why can't we just... have cool storage stuff? Out of the box?


All of those things come with tradeoffs.

Compression trades off compute vs. I/O; if your system has weak compute, it's a bad deal. Most modern systems should do well with compression.

Dedupe needs indexing to find duplicates and makes writes complex (at least for real-time dedupe). I think online dedupe has pretty limited application, but offline dedupe is interesting.

Copy on write again makes writes complex, and tends to fragment files that are modified. Free snapshots are only free when copy on write is the norm (otherwise, you have to copy on write while a snapshot is open, as on FreeBSD UFS). Copy on write offers a lot, but some applications would suffer.

Dynamic resizing (upwards) is pretty common now; resizing down, less so. ZFS downsizing is available, but at least when I tried it, the filesystem became unbootable, so maybe not super useful IMHO.

Logical partitions, per-user stuff, and QoS add complexity that probably isn't needed by everyone.


> Compression trades off compute vs. I/O; if your system has weak compute, it's a bad deal. Most modern systems should do well with compression.

Older systems with worse compute also had worse I/O. There are cases where fast compression slows things down, but they're rare enough that compression is the better default.


I certainly don't want my compiler to get slower because we now compress files that are gone in a few minutes anyway. Compression is useful for archiving, but for anything you're currently working with, it's useless and only wastes compute.


If you're limited by your SSD, one core running lz4 (or zstd) will double your write speed for object files. If you're not writing hundreds of megabytes per second, then you'll barely notice the overhead at that phase while it makes later phases that load the data back snappier.

If everything fits in RAM, then compression could be postponed.

And for that area in between, where your files don't fit in RAM but compressed they would, compression can give you a big speed boost.
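
If you want to sanity-check that claim on your own build artifacts, a rough sketch (the build/ path and the zstd level are assumptions; needs the third-party zstandard package):

    import pathlib, time, zstandard

    comp = zstandard.ZstdCompressor(level=1)   # fast setting, lz4 territory
    for p in pathlib.Path("build").glob("**/*.o"):
        raw = p.read_bytes()
        t0 = time.perf_counter()
        packed = comp.compress(raw)
        dt = time.perf_counter() - t0
        ratio = len(raw) / len(packed)
        print(f"{p.name}: {ratio:.1f}x smaller, {len(raw) / dt / 1e6:.0f} MB/s")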


That's true. I think less memory is often accompanied by less compute. For example, "one core running" is 50% of my available compute.


Because the vast majority of personal computer users have no need for the complexity of ZFS. That doesn't come for free, and if something goes wrong, the average user is going to have no hope of solving it.

FAT, ext4, and FFS are all pretty simple and bulletproof, and they do everything the typical user needs.

Servers in enterprise settings have higher demands, but they can afford an administrator who knows how to manage them and handle problems. In theory.


FAT, bulletproof? The newest versions have a few improvements, but this is a line of filesystems for disposable sneakernet data.


Maybe "bulletproof" is a bit strong, but I mean, it was fine on DOS/Windows for decades. I never lost data to filesystem corruption on those computers. Media failures, yes, frequently, in the days of floppy disks.


Running scandisk and chkdsk was also a thing for decades. Luckily the lost sectors were often in unimportant files. But definitely a gamble.

The bulk of the safety came from the redundancy of copying the file across machines, not filesystem protections.


I had an HD fail on me while using Windows 98 as my main OS, yet thanks to ext (I think it was ext2 at the time), I still managed to repurpose it for Linux for several months.

It was OK from a data-loss point of view; I didn't have much data other than the distro and the stuff I needed to compile under Linux.

Somehow it still managed to work with the disk, using the sectors that were not damaged.


Because it was extremely difficult to create something like ZFS? And it was proprietary and patent-encumbered, and the permissively licensed versions were buggy until about five minutes ago?

That's like saying the Romans should have just used computers.


I would guess that many early systems just didn't have the storage space for keeping multiple versions of files. Was VMS saving diffs or full copies?

Once storage space was plentiful, the pattern of "overwrite the existing file" was already well established.


Typical TOPS-20 and VMS hardware of the time would have less than a gigabyte of spinning disk space, to be shared among many dozens of users. Full copies of files were saved, and there were strict per-user disk allotments. Creating Generation 2 of a file would mark the Generation 1 version as deleted. When you ran out of allotment during execution, the OS would pause your program and give you the chance to issue an Expunge command to really recycle all (or a subset) of the deleted files, and then you'd just Continue the paused process. Similar to desktop "Trash" folders where deleted things go, and that you may have to Empty once in a while.
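
For flavor, the naming scheme (though of course not the filesystem semantics) is easy to mimic on Unix; a toy sketch using VMS's ";N" generation suffix:

    import os, re

    def next_generation(path):
        # Find existing generations like "mydoc.txt;1", "mydoc.txt;2", ...
        base = os.path.basename(path)
        d = os.path.dirname(path) or "."
        pat = re.compile(re.escape(base) + r";(\d+)$")
        gens = [int(m.group(1)) for f in os.listdir(d) if (m := pat.match(f))]
        # Name for the next generation; writing it never clobbers old data.
        return f"{path};{max(gens, default=0) + 1}"

    # An "expunge" would then just unlink all but the newest generation.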


And when it finally dies and is disposed of, the mercury in its (ingenious) internal mechanism will likely end up in the wild. P.S. They came in colors? I only ever saw them in tan, which virtually everyone had half a century ago.


The ad mentions you can easily paint it, so I think it just came in "silver-bronze".


Steve Gibson? Now that’s a name I’ve not heard in a long time… Maybe 30 years? SpinRite?


Steve Gibson also does a show called Security Now with Leo. It's one of the best IT security podcasts out there. He knows so much about IT and IT security; it's amazing.

https://twit.tv/shows/security-now

Long Live Steve Gibson


Steve “RAW sockets Will Destroy the Internet” Gibson

