Is .map special-cased, or do user functions accepting callbacks work the same way? If they work the same way, you could do the Scott-Mogensen thing of #ifTrue:ifFalse:, dualizing the control-flow decision-making and offering a menu of choices/continuations.
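To illustrate what I mean (a hypothetical API shape, not anything the library actually exposes):

    // Hypothetical sketch: a remote boolean that offers its branches
    // as continuations instead of being fetched and tested locally.
    interface RemoteBool {
      ifTrueIfFalse<T>(onTrue: () => T, onFalse: () => T): Promise<T>;
    }

    declare const isAdmin: RemoteBool;

    // The decision is made where the data lives; each branch could
    // itself queue further RPCs rather than costing a round trip.
    const view = isAdmin.ifTrueIfFalse(
      () => "admin-dashboard",
      () => "user-dashboard",
    );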
For any other function accepting a callback, the function on the server will receive an RPC stub, which, when called, makes an RPC back to the caller, calling the original version of the function.
This is usually what you want, and the semantics are entirely normal.
But for .map(), this would defeat the purpose, as it'd require an additional network round-trip to call the callback.
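A rough sketch of the difference, with made-up type and method names rather than the real API:

    // An ordinary callback parameter becomes a stub; every invocation
    // is an RPC back to the caller (normal semantics, extra round trips).
    interface RpcValue<T> extends Promise<T> {}
    interface RpcArray<T> {
      // The .map() callback is instead recorded and replayed on the
      // server, so it must only chain further RPCs, not compute locally.
      map<U>(f: (x: T) => RpcValue<U>): RpcArray<U>;
    }

    interface Api {
      listUserIds(): RpcArray<string>;
      getUser(id: string): RpcValue<{ id: string; name: string }>;
    }
    declare const api: Api;

    // One network round trip in total: each id is pipelined into
    // getUser() on the server instead of bouncing back per element.
    const users = api.listUserIds().map(id => api.getUser(id));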
I don't think you could make filter() work with the same approach, because it seems like you'd actually have to do computation on the result.
map() works for cases where you don't need to compute anything in the callback; you just want to pipeline the elements into another RPC, which is actually a common case with map().
If you want to filter server-side, you could still accomplish it by having the server explicitly expose a method that takes an array as input, and performs the desired filter. The server would have to know in advance exactly what filter predicates are needed.
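A sketch of that shape (names invented for illustration):

    // The predicate lives on the server as an explicit method, so the
    // whole filter happens in one round trip; the trade-off is that the
    // server must know every predicate clients will need in advance.
    interface Api {
      listUserIds(): Promise<string[]>;
      filterActiveUsers(ids: string[]): Promise<string[]>;
    }
    declare const api: Api;

    const active = api.listUserIds().then(ids => api.filterActiveUsers(ids));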
But you might want to compose various methods on the server in order to filter, just like you might want to compose various methods on the server in order to transform. Why is `collection.map(server.lookupByInternalizedId)` a special case that doesn't require `server.lookupCollectionByInternalizedId(collection)`, but `collection.filter(server.isOperationSensibleForATuesday)` is a bridge too far and for that you need `server.areOperationsSensibleForATuesday(collection)`?
* Looking up some additional data for each array element is a particularly common thing to want to do.
* We can support it nicely without having to create a library of operations baked into the protocol.
I really don't want to extend the protocol with a library of operations that you're allowed to perform. It seems like that library would just keep growing and add a lot of bloat and possibly security concerns.
Couldn't this be done in some way when validation exists, by using the same validation to create a "better" placeholder value that could be used with specific conditional functions (eq(), includes(), etc.)?
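Something like this sketch, maybe (entirely hypothetical, just to make the idea concrete):

    // A placeholder that records predicate calls as data instead of
    // evaluating them, so the server could run the filter itself.
    type Expr =
      | { op: "eq"; field: string; value: unknown }
      | { op: "includes"; field: string; value: unknown };

    function field(name: string) {
      return {
        eq: (value: unknown): Expr => ({ op: "eq", field: name, value }),
        includes: (value: unknown): Expr =>
          ({ op: "includes", field: name, value }),
      };
    }

    // collection.filter(() => field("tags").includes("urgent")) would
    // then ship { op: "includes", field: "tags", value: "urgent" } to
    // the server instead of calling back to the client per element.
    const predicate = field("tags").includes("urgent");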
The paper is from 1998: Java existed, but neither Scala nor Java-with-generics did.
From the conclusion:
"We have presented a programming protocol, Extensible Visitor, that can be used to construct systems with extensible recursive data domains and toolkits. It is a novel combination of the functional and object-oriented programming styles that draws on the strengths of each. The object-oriented style is essential to achieve extensibility along the data dimension, yet tools are organized in a functional fashion, enabling extensibility in the functional dimension. Systems based on the Extensible Visitor can be extended without modification to existing code or recompilation (which is an increasingly important concern)."
I don't think that's right - IIRC it used to be possible to write out a file, if loaded from a file:// URL, directly from JavaScript. Then that ability got nobbled because security (justifiable) without properly thinking through a good alternative (not justifiable). I mourn the loss of the ability; TiddlyWiki was in a class of its own, and there should have been many more systems inspired by its design. Alas.
ETA: Wikipedia has reminded me the feature was called UniversalXPConnect, and it was a Firefox thing and wasn't cross-browser. It still sucks that it was removed without a sensible replacement.
I used TiddlyWiki a lot to manage my D&D 3.5 campaign back in the day. As I recall, it originally was a true stand-alone HTML document capable of overwriting itself, but once browsers dropped support for this capability, users had to begin using various workarounds, and this remains the status quo today.
TiddlySaver.jar was one such workaround. A check in the Wayback Machine suggests that it was originally required only for Safari, Opera, and Chrome; IE and Firefox needed no such plugin. Nowadays, there are several workarounds, and setting up one is a mandatory installation step: standalone applications, browser extensions, servers, etc. Some are clunky (e.g. you have to keep your wiki in your Downloads directory or the browser can't write to it), and either way, TiddlyWiki is no longer truly a single stand-alone HTML file, at least not for writing purposes. It's still a very versatile tool, though.
> Undermining the credibility of computer science research is the best possible outcome for the field, since the institution in its current form does not deserve the credibility that it has.
Horseshit. This might be true for AI research (and even there that's an awfully broad brush you're using, mate), but it's certainly not true for other areas of computer science.
Is there a lot of good research in computer science? Of course.
Is there even more stuff which really shouldn't be published, with experiments abused to show off how great new technique A is while hiding that this was attempt 72 at making an experiment that showed A was great? Also of course.
Maybe your lab is different (if you work in a research setting), but most researchers will readily admit that most of their research output is at least somewhat bull**. It's something that is trained into people from high-school research projects onwards: people judging your results usually do not have the time or ability to scrutinize your work, and even if they do, they usually have much more important things to do than check your mediocre results.
As a society, we have far too much trust in science. However, any time this argument is brought up, we focus on conspiracy theorists who struggle with 100-year-old theories, as if keeping up the facade of public trust in science will change their minds, ignoring that any member of the public who accidentally discovers the hidden Jenga tower that science is built on will become much more likely to believe those charlatans in the future.
Excuse me? Based on what? In my time in academia, exactly zero researchers would claim that their work is somewhat bullshit.
As a society, there is laughably little support for science; instead, the majority of policy and business decisions are based on fairy tales and snake oil. We need more trust in science.
> Static analysis can tell what forms are invoking an fexpr and which are function calls. It's no different from knowing which are macros. That problem can be solved.
I don't think this is the case. Consider Kernel's
($lambda (f) (f (+ 3 4)))
Is `f` a fexpr or a closure? We cannot know until runtime.
We look it up at compile time. If it has a function binding defined at compile time, we go with that hypothesis. If it is a macro, we expand it. If it is a fexpr, we go with that hypothesis (and then do what? Check whether the application provides compilation semantics for the fexpr, or abort.)
If it's unbound, we assume that it will be a function and compile accordingly. We make a note that we did this.
If, by the end of the compilation unit, a definition of f has not been seen, we issue a warning. If a conflicting definition is seen, like a macro or fexpr, we also issue a warning.
(We provide a macro with-compilation-unit that the programmer or their build system can use to clump together multiple files into one compilation unit for the purpose of generating these kinds of diagnostics related to definitions.)
We carefully document all of this in our reference manual, in a section about how compilation semantics can differ from interpretation semantics.
At run time, you can inspect the current binding of f to see whether it is a macro, function or whatever. In an interpreter with fexprs, the lookup would happen late like this: at the time (f ...) is being called, if f was redefined to a fexpr, we go with that; if it is still a function, we call that.
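The bookkeeping for that strategy is straightforward; a rough sketch (hypothetical compiler internals, in TypeScript for brevity):

    // Track what each name was bound to, plus the hypotheses we made.
    type BindingKind = "function" | "macro" | "fexpr";

    interface CompilationUnit {
      bindings: Map<string, BindingKind>;
      assumedFunctions: Set<string>; // calls compiled under the
                                     // "it will be a function" hypothesis
      warnings: string[];
    }

    function compileCall(unit: CompilationUnit, head: string): void {
      switch (unit.bindings.get(head)) {
        case "function": /* emit an ordinary call */ break;
        case "macro":    /* expand and recurse */ break;
        case "fexpr":    /* use its compilation semantics, or abort */ break;
        default:         // unbound: assume a function and note it
          unit.assumedFunctions.add(head);
      }
    }

    // At the end of the compilation unit, audit the hypotheses.
    function finishUnit(unit: CompilationUnit): void {
      for (const name of unit.assumedFunctions) {
        const kind = unit.bindings.get(name);
        if (kind === undefined) {
          unit.warnings.push(`${name}: assumed to be a function, but never defined`);
        } else if (kind !== "function") {
          unit.warnings.push(`${name}: assumed to be a function, later defined as a ${kind}`);
        }
      }
    }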
"Preserves <something>", for example "Preserves data", reads like "it preserves data". Probably less so in the middle of a sentence, due to the uppercasing, but in the TOC it reads like bullet points enumerating what is preserved.
Putting Preserves in italics would be an alternative.
The name nevertheless feels awkward to me, also in spoken conversation. A made-up word like maybe “Pres” or “Edal” (from “expressive data language”) would work better IMO.
XML would make a fine choice. It lacks atomic data types other than text, and compound data types other than sequences, unless you count element attributes, which are in a kind of awkward position because of the historical development of the language. Preserves has a richer suite of primitive data types and decomposes XML's elements into separate notions of map, sequence, and tagged value.
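For instance, XML's `<person name="Alice"><email>a@example.com</email></person>` might be modelled as a Preserves record holding a map of attributes and a sequence of children, along these lines (one possible modelling, not a canonical one):

    <person { name: "Alice" } [<email "a@example.com">]>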
The version number is the schema language version, not the version of the collection of types described in the file.
The schema language is extensible/evolvable in that pattern matching ignores extra entries in a sequence and extra key/value pairs in a dictionary. So you could have a "version 1" of a schema with
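something along these lines (sketching off the top of my head, so treat the exact notation as illustrative):

    version 1 .
    Person = { name: string } .

and then a "version 2" that adds a field while keeping the old shape as a named alternative:

    version 1 .
    Person = @v2 { name: string, age: int } / @v1 { name: string } .

(The `version 1 .` at the top of both files is the schema-language version I mentioned above, not the version of the Person type.)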
Then, Person.v2 from "version 2" would be parseable by Person from "version 1", and Person from "version 1" would parse using "version 2" as a Person.v1.
The schema language is in production but the design is still a work in progress and I expect more changes before a 1.0 release of the schema language.
(The schema language is completely separate from the preserves data model, by the way -- one could imagine other schema languages being used instead/as well)
Thanks for the clarification! That sounds about as evolvable as JSON or any system that uses string keys (like HTTP headers).
Protobufs have an extra level of indirection built in: code refers to fields using names, but numbers are sent on the wire. Without convenient access to field numbers, they can’t as easily be hard-coded. This also strongly encourages using the schema file for most tasks. With protobufs (or similar), any user-friendly editor will need a schema to make sense of the data.
JSON-like systems and protobufs have opposite design goals: encouraging versus discouraging schemaless data access.
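To make the indirection concrete (illustrative sketch, not the real wire format):

    // The .proto schema is the only place the number->name mapping
    // lives; code sees names, the wire sees numbers.
    const schema: Record<number, string> = { 1: "name", 2: "age" };

    // On the wire: field numbers only; opaque without the schema.
    const wire: Record<number, unknown> = { 1: "Alice", 2: 42 };

    const decoded: Record<string, unknown> = {};
    for (const [num, value] of Object.entries(wire)) {
      decoded[schema[Number(num)]] = value;
    }
    // decoded => { name: "Alice", age: 42 }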