
Although YAML is a dreadful thing, given the context and the size of a normal gemspec I would be very surprised if it showed up in any significant capacity, even though psych's throughput is probably in the low single digit MB/s range.

Affine types / destructive moves, type-level safety signal (sync/send), container-type locks.

I really miss these when doing concurrent stuff in other languages.
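
Roughly what those look like in practice, as a minimal sketch (the specific types and the Vec<u32> payload are made up for the example):

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // Container-type lock: the Mutex owns the data, so the only way to
        // reach the Vec is through .lock(); there is no "forgot to take the
        // lock" code path.
        let shared = Arc::new(Mutex::new(Vec::<u32>::new()));

        let handle = {
            // Destructive move: the closure takes ownership of this clone of
            // the Arc, and a non-Arc value moved in here would simply be
            // unusable afterwards.
            let shared = Arc::clone(&shared);
            thread::spawn(move || {
                shared.lock().unwrap().push(1);
            })
        };

        // Type-level safety signal: thread::spawn requires its closure (and
        // everything it captures) to be Send, so capturing e.g. an
        // Rc<Vec<u32>> here would be a compile error, not a data race.
        shared.lock().unwrap().push(2);
        handle.join().unwrap();
    }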


> To me it feels like rust is barely readable sometimes. [...] C++ have a low barrier of entry and any beginner can write code.

Here's rust code:

    fn main() {
        println!("Hello, world");
    }
Here is the equivalent C++ for the vast majority of its life (any time before C++23, has MS even shipped C++23 support yet?):

    #include <iostream>

    int main() {
      std::cout << "Hello World!" << std::endl;
      return 0;
    }
C++ initialisation alone is a more complex topic than pretty much any facet of Rust. And it's not hard to find C++ which is utterly inscrutable.

> This can't possibly be guaranteed to work just by disabling the checker, can it?

It works in the sense that the borrow checker stops bothering you and the compiler will compile your code. It will even work fine as long as you don't write code which invokes UB (which does include code which would not pass the borrow checker, as the borrow checker necessarily rejects valid programs in order to forbid all invalid programs).


> It will even work fine as long as you don't write code which invokes UB (which does include code which would not pass the borrow checker, as the borrow checker necessarily rejects valid programs in order to forbid all invalid programs).

To be clear, by "this" I meant "[allowing] code that would normally violate Rust's borrowing rules to compile and run successfully," which both of us seem to believe to be UB.


Not quite, there is code which fails borrow checking but is safe and sound.

That is part of why a number of people have been waiting for Polonius and / or the tree borrows model; the most classic cases are relatively trivial "check then update" patterns which fail to borrow-check but are obviously non-problematic, e.g.

    use std::collections::HashMap;

    pub fn get_or_insert(
        map: &'_ mut HashMap<u32, String>,
    ) -> &'_ String {
        if let Some(v) = map.get(&22) {
            return v;
        }
        map.insert(22, String::from("hi"));
        &map[&22]
    }
Though ultimately, even if either or both efforts bear fruit, they will still reject programs which are well formed: that is the halting problem at work. A compiler can either reject all invalid programs or accept all valid programs, but it cannot do both, and the former is generally considered more valuable; so in order to reject all invalid programs, compilers will necessarily reject some valid programs.

> So it is surprising to me that it doesn’t blow up somewhere when that invariant doesn’t hold.

The final program may be broken in various manners because you don't respect the language's prescribed semantics, in about the same way programs break in C and C++. From the compiler's perspective, the borrow checker validates that rules it assumes are upheld actually are upheld.

mrustc already compiles rust code without having a borrow checker (well IIRC recent-ish versions of mrustc have some borrow checking bits, but for the most part it still assumes that somebody else has done all the borrow checking).


The borrow checker does not deal with ownership, which is what Rust's memory management leverages. The borrow checker validates that borrows (references) are valid, i.e. that they don't outlive their sources and that exclusive borrows don't overlap.

The borrow checker does not influence codegen at all.
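
For illustration, a small sketch of the two kinds of programs it rejects (this deliberately does not compile):

    fn main() {
        let mut v = vec![1, 2, 3];

        // Exclusive borrows must not overlap with live shared borrows:
        // `first` borrows `v`, so mutating `v` while it is alive is rejected.
        let first = &v[0];
        v.push(4); // error[E0502]: cannot borrow `v` as mutable
        println!("{first}");

        // Borrows must not outlive their source:
        let dangling = {
            let s = String::from("hi");
            &s // error[E0597]: `s` does not live long enough
        };
        println!("{dangling}");
    }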


> I would think that the returned value would be: `(nil, return-value-of-Decode-call)`.

`user` is typed as a struct, so it's always going to be a struct in the output; it can't be nil (it would have to be a `*User` for that). And Decoder.Decode mutates the parameter in place. Named return values essentially create locals for you, and since the function does not use naked returns, it's essentially saving space (and in some cases adding some documentation, though here that value is nil) for this:

    func fetchUser(id int) (User, error) {
        var user User
        var err error

        resp, err := http.Get(fmt.Sprintf("https://api.example.com/users/%d", id))
        if err != nil {
            return user, err
        }
        defer resp.Body.Close()
        return user, json.NewDecoder(resp.Body).Decode(&user)
    }
https://godbolt.org/z/8Yv49Yvr5

However Go's named return values are definitely weird and spooky:

    func foo() (i int) {
        defer func() {
            i = 2
        }()
        return 1
    }
returns 2, not 1: `return 1` assigns 1 to the named return value `i`, then the deferred function runs and overwrites it with 2 before the function actually returns.

> Which I’m imagining is what Rust is doing with a Result type?

Result only carries information about the success / failure of an unspecified operation; it is not a long-term signal, and furthermore it is not resistant to tampering (so a mistake processing the Result can undo the validation).

What you want in this case is a new separate type, which can only be constructed through the check operation. This is the ethos of "parse, don't validate".

And you're correct that in that case you don't need the check to be close to the consumer, in fact you want the opposite, for the check to be as close to the software edge as possible such that tainted data has limited to no presence inside the system and it's difficult or impossible to unwittingly interact with it.

But of course the farther in that direction you head, the more expressive a type system you need. And some constraints are not so easily checked, either because there's a multitude of consumers each with their own foibles, or because, as in this case, you need to check the interaction of multiple runtime objects.
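
For the simple case though, a minimal sketch of the "new separate type" idea; the UserName type and its constraints are made up for the example:

    // The field is private, so the only way to obtain a UserName is to go
    // through the check: anything holding a UserName further inside the
    // system knows the check already happened.
    pub struct UserName(String);

    impl UserName {
        pub fn parse(raw: &str) -> Result<UserName, String> {
            let trimmed = raw.trim();
            if trimmed.is_empty() || trimmed.len() > 64 {
                return Err(format!("invalid user name: {raw:?}"));
            }
            Ok(UserName(trimmed.to_owned()))
        }

        pub fn as_str(&self) -> &str {
            &self.0
        }
    }

    // Downstream code takes the parsed type, not a raw &str, so it can't be
    // handed unchecked input by accident.
    fn greet(name: &UserName) {
        println!("hello, {}", name.as_str());
    }

    fn main() {
        let name = UserName::parse("  masklinn  ").expect("rejected at the edge");
        greet(&name);
    }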


> an era where shaving bytes on storage was important

Fixed-size strings don't save bytes on storage though: when the bank reserves 20 bytes for the first name and you're called Jon, that's 17 bytes doing fuckall.

What they do is make the entire record fixed size and give every field a fixed relative position, so it's very easy to access items, move records around, reuse allocations (or use static allocation), … cycles are what they save.
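
To make the access pattern concrete, a small sketch (field sizes and layout invented for the example): with fixed-size fields, record N starts at byte N * RECORD_SIZE and every field sits at a constant offset, no per-record parsing or allocation needed.

    const FIRST_NAME_LEN: usize = 20;
    const LAST_NAME_LEN: usize = 30;
    const RECORD_SIZE: usize = 4 + FIRST_NAME_LEN + LAST_NAME_LEN;

    // Field access is just constant-offset arithmetic into the flat buffer.
    fn first_name(records: &[u8], n: usize) -> &[u8] {
        let start = n * RECORD_SIZE + 4; // skip the 4-byte account id
        &records[start..start + FIRST_NAME_LEN]
    }

    fn main() {
        // Two records, each 54 bytes: id, padded first name, padded last name.
        let mut file = vec![0u8; 2 * RECORD_SIZE];
        file[4..4 + 3].copy_from_slice(b"Jon");
        // Prints "Jon" followed by 17 padding bytes.
        println!("{:?}", first_name(&file, 0));
    }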


> Fixed size strings don’t save bytes on storage tho

I have seen plenty of fixed strings in the 8 to 20 byte range: not much, but enough for a passable identifier. The memory management overhead for a simple dynamically allocated string is probably larger than that, even on a 32-bit system.


> If it was designed for non-null terminated strings, why would it specifically pad after a null terminator?

Padded and terminated strings are completely different beasts. And the text you quote tells you in black and white that strncpy deals in padded strings.

