The tourists do not have to be particularly wealthy. All they need is to be able to afford a per-night or per-week rate that is a bit higher than the equivalent local long-term rent.
At 100€ per night and 70% occupancy, the equivalent monthly rent is ~2,100€.
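A back-of-the-envelope check of that figure (the nightly rate, occupancy, and a 30-night month are all just the comment's assumptions):

```typescript
// Back-of-the-envelope: equivalent monthly rent from short-term letting.
const nightlyRate = 100;   // € per night (assumed)
const occupancy = 0.7;     // fraction of nights booked (assumed)
const nightsPerMonth = 30; // simplifying assumption

const equivalentMonthlyRent = nightlyRate * occupancy * nightsPerMonth;
console.log(equivalentMonthlyRent); // 2100
```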
> Similarly how capital gains tax has 0 effect on billionaires and just rip off investors (aka the poorest).
Capital gains tax has zero effect on billionaires precisely because it is not levied on them in most cases. Once one's total wealth hits the high tens of millions, they suddenly get access to instruments that let them use their wealth without triggering "taxable events". Probably the best-known example is loans against shares.
This eliminates [some of] the downward pressure on some asset prices, triggering a positive feedback loop on prices and thus a wealth transfer.
> Perhaps this is me being an ignorant American, but the idea of the government telling people how to use private property doesn't sit well.
Yes, it's you being both ignorant and American. This argument quite directly, without any slopes (slippery or not), extends to, for example, acquiring a piece of land, building some housing at premium cost, and then slapping a factory (and a waste landfill, for good measure) on the rest of the lot.
Housing regulations exist for many good reasons. Living quarters vs. short-term rentals drastically change the requirements for, and load on, surrounding infrastructure, which is built out based on established zoning.
IIUC, the problem is a bit tautological. Regardless of the legality of reverse engineering itself, HDMI is a trademark, which you obviously cannot use without being licensed. Using the HDMI connector itself is probably a grey-ish area: while you can buy the connectors without agreeing to any licenses, forwarding compliance onto the vendor, it would still be hard to argue that you had no idea it was an HDMI connector. If you use the HDMI connector but send nothing other than DVI over it, it should be fine-ish.
The real problem starts when you want to actually support HDMI 2.0 and 2.1 on top. Arguing that you licensed 2.0 and then tacked a clean-room implementation of 2.1 on top becomes essentially impossible.
For stuff like connectors, this gets worked around by using terminology like "compatible with HDMI" all the time. Trademark law explicitly permits you to reference your competitor's products, including potential compatibility. I suspect the risk here is mostly contractual: AMD likely signed agreements with the HDMI Forum a long time ago that restrict their disclosure of details from the specification.
I'm shocked I had to scroll so far to find a real hard-stop blocker mentioned.
Valve has no reason to care about using the HDMI trademark. Consumers don't care whether it says "HDMI 2.1" or "HDMI 2.1 Compatible".
The connector isn't trademarked, and neither is compatibility.
The OSS nature of the driver isn't a blocker either, as Valve could just release a compiled binary instead of open-sourcing it.
The 'get sued for copying the leak' argument implies someone would actually fancy going toe to toe with Valve's legal team, which so far has rekt the EU, Activision, Riot Games, Microsoft, etc. in court.
Proving beyond doubt that Valve or their devs accessed the leaks would be hard, especially if Valve were clever from the get-go, and let's face it, they probably were. They're easily one of the leanest, most profitable, and savviest software companies around.
IIUC the issue is not that they are unable to implement 2.1 at all, but that they cannot provide a specifically open-source implementation. They could probably provide a binary blob.
The connector itself shouldn't be an issue, because it doesn't fall under copyright: the shape of the connector is entirely functional, so there's no creative work involved, and it would fall under patent law instead. However, the connector itself is unlikely to be innovative enough to be patentable, so it's not protected by patent law either.
Using HDMI connectors is totally fine. You just can't label it as "has HDMI port", as "HDMI" is a trademark.
Is that true? There is obviously some creative work in connector design: optimizing for looks, robustness to damage and dirt, ease of use, reliability, etc.
> If I add or remove a random element, the rest of the elements stay in the correct place.
This complaint highlights how absurdly not fit for purpose HTML+CSS actually is. Okay, you may want to do "responsive" design, but the semantic layout is fixed, so you try to contort a styling engine into pretending to be a layout engine when in reality it is three stylesheets in a trenchcoat.
> Okay, you may want to do "responsive" design, but the semantic layout is fixed, so you try to contort a styling engine into pretending to be a layout engine when in reality it is three stylesheets in a trenchcoat.
I need to write this up properly, but one of my bugbears with responsive design is that it became normalised to push the sidebar down below the content on small screens. And if you didn't have a sidebar, to interweave everything in the content no matter what screen size you were viewing on.
What I want is a way to interleave content and asides on small screens, and pull them out into one or more other regions on larger screens. Reordering the content on larger screens would be the icing on the cake, but for now I'll take just doing it.
Using named grid-template-areas stacks the items you move to the sidebar on top of each other, so you only see one of them.
'Good' old floats get most of the way there, but put the item in the sidebar exactly where it falls. Plus they're a pain to work with overall: https://codepen.io/pbowyer/pen/jEqdJgP
>This complaint highlights how absurdly not fit-for-purpose html+css actually is. Okay, you may want to do "responsive" design, but you have the semantic layout fixed,
This "not fit for purpose" may in fact be historically superseded usage that is still baked into some current usage, a result of the relatively rapid change of the various platforms that must interact with and use the respective technologies: the specification version of "technical debt".
That is to say, some subsets of these numerous technologies can be used to construct something fit for the purpose you are describing, but as a general rule anything constructed as a solution will probably also use other subsets not fit for that particular purpose, though maybe fit for some other one.
For years we said: bring something sane to browsers instead of trying to salvage JS.
At this point, though, why don't they just implement DOM bindings in WASM and make the internet a better place overnight?
TypeScript is a really decent language, though; I wouldn't feel happier or more productive using Fortran or whatever. Its type system is actually really powerful, which is what matters when it comes to avoiding bugs, and it's easy to write functional code with correct-by-construction data. If you need some super-optimized code, then sure, that's what WASM is for, but that's not the problem with most web apps. The usual problem is bad design, and then the choice of language doesn't save you. Sure, TS has some annoying legacy stuff from JS, but every language has cruft, and with strict linting you can eliminate it.
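To illustrate the "correct-by-construction" point (the type and names here are invented for the example), a discriminated union lets the compiler reject impossible states and narrow each branch for you:

```typescript
// Hypothetical request state modeled so invalid combinations can't exist:
// you can never hold both `data` and `error` at the same time.
type RequestState =
  | { status: "loading" }
  | { status: "done"; data: string }
  | { status: "failed"; error: string };

function describe(state: RequestState): string {
  switch (state.status) {
    case "loading":
      return "still loading";
    case "done":
      return `got: ${state.data}`; // `data` exists only on the "done" branch
    case "failed":
      return `error: ${state.error}`;
  }
}

console.log(describe({ status: "done", data: "payload" })); // got: payload
```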
It's also better to have one ecosystem instead of one fragmented across different languages, where you have to write bindings for everything you want to use.
> Its type system is actually really powerful which is what matters when it comes to avoiding bugs
It is really powerful compared to JavaScript. It is even really powerful compared to most other languages people normally use. But it is not very powerful compared to languages that have 'proper' type systems. TypeScript still relies on you writing tests for everything.
The type system is a huge boon for the developer experience, of course. It enables things like automatic refactoring that make development much more pleasant (although LLMs are getting better at filling that void in dynamically typed languages). But it doesn't save you from bugs in a way that the tests you have to write anyway won't also save you from. And those same tests would catch the same bugs in JavaScript, so you're in the same place either way.
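A contrived sketch of that point: the compiler is perfectly satisfied with this function, and only a test would catch the bug.

```typescript
// Fully type-checked, yet logically wrong for even-length input:
// median([1, 2, 3, 4]) should be 2.5, but this returns 3.
function median(xs: number[]): number {
  const sorted = [...xs].sort((a, b) => a - b);
  return sorted[Math.floor(sorted.length / 2)]; // logic bug, not a type error
}

console.log(median([1, 2, 3]));    // 2 (correct)
console.log(median([1, 2, 3, 4])); // 3 (wrong)
```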
> It's also better if there's one ecosystem instead of one fragmented with different languages where you have to write bindings for everything you want to use.
This argument is slightly backwards. It is essentially the argument used for "JavaScript on the backend" and "let's package the whole browser as an application runtime so we can use JavaScript". The core of the argument is that JavaScript is ipso facto the best language/runtime to write any code in, including refactoring existing codebases into. But bringing JavaScript out of the browser also means you have to write bindings for JavaScript and recreate the existing ecosystems anyway.
Even if you approach this from the "single codebase across runtimes" angle, bridging the gap between browsers and languages with existing codebases, expertise, and ecosystems is a much more reasonable conclusion than rewriting everything in JavaScript.
> DevOps should take care of the constant battle between Devs and Operations
In practice there is no way to relay "query fubar, fix it" back, because we are much agile, very scrum: a feature is done when the ticket is closed, and new tickets are handled by product owners. Reality is the antithesis of that double Ouroboros.
In practice, developers write code, devops deploy "teh clouds" (writing YAMLs is the deving part), and we throw moar servers at some cloud DB when performance becomes sub-par.
Hudson/Jenkins is just not architected for large multi-project deployments, isolated environments, and specialized nodes. It can work if you do not need these features, but otherwise it's a fight against the environment.
You need a beefy master, and it is your single point of failure. Untimely triggers of heavy jobs overwhelm the controller? All projects are down. Jobs need to be carefully crafted to be resumable at all.
Heavy reliance on the master means that even sending out webhooks on stage status changes is extremely error-prone.
When your jobs require certain tools to be available, you are expected to package those as part of the agent deployment, as Jenkins relies on host tools. In reality you end up rolling your own tool-management system that every job has to invoke in some canonical manner.
There is no built-in way to isolate environments. You can harden the system a bit with various ACLs, but in the end you either have to trust projects or build up and maintain infrastructures for different projects, isolated at the host level.
In cases where significant processing happens externally, you still have to block an executor for the duration.
The opposite of Jenkins, where you have shared workspaces and have to manually ensure the workspace is clean, or suffer reproducibility issues from tainted workspaces.