Eh, I wouldn't really call it "basic input validation"; it's more like fuzzy presumptions of trustworthiness. You have one part of the PowerShell team that is wary of deserializing ScriptBlocks into ScriptBlocks instead of strings, because that could trivially lead to RCE; and then there is another part that sees nothing wrong with executing code with arbitrary semantics (e.g. Get-ItemProperty) on whatever strings are lying around in the blob.
The root of the problem, IMHO, is having code with arbitrary semantics; it's undoubtedly quite handy to use, but the price is the inherent uncertainty about the security implications. I mean, who is aware that if you feed Get-ItemProperty something that starts with two backslashes, it will blindly send your credentials to whatever domain is written after those backslashes? Why is it even doing that? Oh, right, because that's how you make remote shares look like local directories.
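To make the two-backslashes mechanism concrete, here's a Python sketch (illustrating Windows path semantics, not PowerShell internals; the hostname is made up): any path beginning with two backslashes is parsed as a UNC path, and the text right after the backslashes is treated as a remote host to contact.

```python
from pathlib import PureWindowsPath

# A string beginning with two backslashes parses as a UNC path:
# the component right after "\\" is a *remote host name*, so any
# API that "opens" this path may reach out (and authenticate) to it.
p = PureWindowsPath(r"\\evil.example\share\file.txt")
print(p.drive)  # \\evil.example\share  -- a remote location, not a local dir

# A superficially similar local path keeps a drive letter instead:
q = PureWindowsPath(r"C:\share\file.txt")
print(q.drive)  # C:
```

So a deserialized string that merely *looks* like a path can, when handed to a path-consuming cmdlet, turn into an outbound connection to an attacker-chosen host.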
I didn't mean to trivialize the issue. You describe a problem that arises when multiple parties share data with "presumptions of trustworthiness", i.e. do not perform proper input validation. No?
Well, I guess you can put it like that, but I personally wouldn't call it "basic input validation"; that would be something on the level of "does this field really contain an integer?"
I don't think the problem is even Get-ItemProperty itself, even though you'd probably want to use Select-Object instead wherever you can, but the fact that the deserializer allows ridiculous PSPath values in the nested objects/properties; why does it allow that? Is there no actual schema for e.g. the Microsoft.Win32.RegistryKey type?
I looked briefly at the encoder, and it looks like the ad names are truncated at 32 characters. Not sure why the threat actor would do that, though. I guess they needed some size limitation and just picked an arbitrary number.
> How would you know that the attacker have not manipulated the size parameter?
This scenario doesn't make a lot of sense. Say I have a goodfile that is 512 bytes long and hashes to 3d8850e1, and someone else wants to produce badfile and convince you that it's my goodfile. GP's suggestion is that I publish a size-plus-hash value "512-3d8850e1" for you to check against. If the attacker is in a position to alter the size part, they're also in a position to alter the hash part, in which case why even bother with a collision? They can just change the hash to be whatever badfile hashes to.
The true answer to GP is that if you do this, it's no longer a hash function. A hash function is defined as taking an arbitrary input and returning an n-bit output for some fixed value of n. By including the size of the input in your output, the size of your output grows logarithmically with the size of your input. This may seem pedantic, but fixed-size-output and arbitrary-size-input is extremely important for general usage of a hash function.
1) Is that really a bad thing? Isn't it generally a good thing that law enforcement finds law-breakers?
2) So do drunk drivers that crash.
3) You're arguing against yourself. Yes, it's bad that bad guys get notified so they can avoid checkpoints.
4) Shouldn't law enforcement spend their time enforcing the law? It's time well spent, IMO. As you say in (1), they also solve other, more serious crimes (e.g. finding wanted criminals).
Author here. Thanks for your comment. I think you have a valid point about users clicking anything. However, I would only say that's the case if you send around 20 phishing mails. In a targeted attack you want to send one or two phishing mails, and you want to maximize your chances of success to avoid a reaction from the blue team.
I agree that the impact is low compared to other vulnerabilities. It is definitely the case that you get a t-shirt (at best) for it. Though, my point is that they could be critical for the users, not for the website itself. An attacker that doesn't really care about the vulnerable website can still exploit the trust in that website to perform attacks on the user he is interested in (e.g. hash stealing or malicious redirects). In fact, I believe malicious redirects are a really common payload of XSS flaws.
Right, but a targeted phishing attack against an internal user is just as likely to rely on an application (or a facsimile of an application) you don't control, like a benefits management portal or something that supposedly authenticates through an SSO.
I guess if your argument is that there would be high value in eradicating open redirects wholesale, I sort of see your point. But the incremental value of eliminating one open redirect is marginal at best.
There is also value in customer trust. If a customer gets burned by a Google.com link, they are going to check the next time they see such a link. Google would be better off if customers felt 'oh, it is Google, so it must be safe'.
That might not be an issue for Google, but I could see it being a big problem for a company that relies heavily on projecting a "family friendly" image (think Disney).
Back in the day you could change some URL parameters and make it look like Toys R Us was selling firearms on their website because they used the same ecommerce back end as a sporting goods store. Like you'd go to the URL and it would be a hunting rifle (or whatever) but it would be on the Toys R Us site.
I don't think Toys R Us was ever harmed but it was mildly amusing and I'm sure a few people's panties got knotted after they found out Toys R Us doesn't actually sell firearms and they got all enraged over nothing.
I hadn't heard of this issue specifically, but it sounds like you may be talking about eBay Enterprise[1]. They ran a lot of e-commerce operations for brick and mortar stores at one point, including both Toys R Us and Dick's Sporting Goods.
Firstly, it is not possible to opt out of facebook. [1] And they do indeed collect private data that we didn't choose to share (shadow accounts, third party website trackers, etc).
Facebook has broken "actual laws". There are so many cases where Facebook has broken the law. [2] [3]
Also, please read up on Fallacy of relative privation ("not as bad as").
I have read these and many more such articles, and apart from using catchy terms like "shadow profile" none of them have been able to describe how it affects me at all if I don't have a Facebook account. Facebook scanned my friend's contact book which happened to have my name and email on it -- what then?
Regarding [1], it is possible to opt out of facebook: don't visit websites with Facebook like/share buttons. (This website, for example). When you access a website with facebook tracking installed, you are consenting to being tracked.
Yet websites try again and again to load the facebook like button.
As per the GDPR, which is in effect but not enforced until May, tracking me without explicitly and clearly asking me if that is okay is not allowed, and anything else, like withdrawing service until I agree to be tracked, does not constitute consent.
When I visit a new website I do not know if they have facebook like buttons. I have to load the page to check that and without a script or ad blocker I will also load the like button and facebook will track that.
At which point in that process did I consent to any and all scripts on that webpage leeching off my personal data?
>don't VISIT websites with Facebook like/share buttons.
So people should know a site uses Facebook share buttons before opening it through Google search. Then they should keep closing pages until they find one that doesn't have share buttons. Then memorize a list of "safe sites"?
The Fanboy Social list is mostly cosmetic filtering; it won't prevent connections to Facebook servers.
For this I suggest dynamic filtering[1], as it guarantees there will be no connection to sites covered by block rules, and block rules can easily be overridden locally in case one is needed for a specific site.
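For reference, a dynamic-filtering rule set along those lines might look like this in uBlock Origin's "My rules" pane (rule format is `source destination type action`; the hostnames are just the obvious Facebook-owned examples):

```
* facebook.com * block
* facebook.net * block
* fbcdn.net * block
example.com facebook.net * noop
```

The last line sketches the local override mentioned above: it relaxes the global block for one specific site (`example.com` standing in for whatever site actually needs it).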
Besides, the GDPR clearly does not allow things like that. Consent has to be explicitly given, and I have to be aware that I gave it. It is not OK to use implicit consent. Checkbox or it didn't happen.