Hacker News | a1a's comments

Wow, this is a Hyper-V breakout! I am amazed that it's 2024 and we still have problems with basic input validation.


Eh, I wouldn't really call it "basic input validation"; it's more like fuzzy presumptions of trustworthiness. You have one part of the PowerShell team that is wary of deserializing ScriptBlocks into ScriptBlocks instead of strings, because that could trivially lead to RCE; and then there is another part which sees nothing wrong with executing code with arbitrary semantics (e.g. Get-ItemProperty) on whatever strings are lying around in the blob.

The root of the problem, IMHO, is having code with arbitrary semantics; it's undoubtedly quite handy to use, but the price is the inherent uncertainty about the security implications. I mean, who is aware that if you feed Get-ItemProperty something that starts with two backslashes, it will blindly send your credentials to whatever domain is written after those backslashes? Why is it even doing that? Oh, right, because that's how you make remote shares look like local directories.
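The two-backslash behavior can be sketched with a minimal check (Python, with hypothetical helper names): any string beginning with `\\` is treated by Windows as a UNC path, so even "just reading a property" can mean an outbound SMB authentication attempt to an attacker-chosen host.

```python
def looks_like_unc_path(value: str) -> bool:
    # On Windows, a string starting with two backslashes is a UNC path
    # (\\host\share); passing it to filesystem APIs can trigger outbound
    # SMB authentication to the named host, leaking NetNTLM credentials.
    return value.startswith("\\\\")

def assert_local_path(value: str) -> str:
    # A defensive deserializer could refuse such values outright.
    if looks_like_unc_path(value):
        raise ValueError(f"refusing remote UNC path: {value!r}")
    return value
```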


I didn't mean to trivialize the issue. You describe a problem that arises when multiple parties share data with "presumptions of trustworthiness", i.e. they do not perform proper input validation. No?


Well, I guess you can put it like that, but I personally wouldn't call it "basic input validation"; that would be something on the level of "does this field really contain an integer?"

I don't think the problem is even Get-ItemProperty itself, even though you'd probably want to use Select-Object instead wherever you can, but the fact that the deserializer allows ridiculous PSPath values in the nested objects/properties; why does it do that? Is there no actual schema for e.g. the Microsoft.Win32.RegistryKey type?
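A minimal sketch (Python, with a simplified and assumed PSPath shape; the real provider syntax has more variants) of what schema-level validation could look like before a deserializer accepts a nested registry path:

```python
import re

# Assumed, simplified shape of a registry PSPath; this only illustrates
# the allowlisting idea, not the full PowerShell provider syntax.
REGISTRY_PSPATH = re.compile(
    r"^Microsoft\.PowerShell\.Core\\Registry::"
    r"HKEY_(LOCAL_MACHINE|CURRENT_USER)\\"
)

def is_plausible_registry_pspath(pspath: str) -> bool:
    # Reject anything that doesn't match the expected provider prefix,
    # including UNC-looking strings such as \\evil\share.
    return bool(REGISTRY_PSPATH.match(pspath))
```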


> I am amazed that it's 2024 and we still have problems with basic input validation.

https://news.ycombinator.com/item?id=41392128 :rolling-eyes:


I envy your wonder. I predict amazement for decades to come.


I looked briefly at the encoder and it looks like the ad names are truncated at 32 characters. Not sure why the threat actor would do that. I guess they needed some size limitation and just picked an arbitrary number.


Yes, if I interpret your suggestion correctly. How would you know that the attacker has not manipulated the size parameter?

That's the best case. Worst case, you end up with a memory vulnerability (see Heartbleed: https://xkcd.com/1354/)
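The length-field danger can be sketched like this (Python, hypothetical function name): the fix is to bound the claimed size by the data actually received instead of trusting it.

```python
def read_payload(buf: bytes, claimed_len: int) -> bytes:
    # Heartbleed-style bugs come from trusting an attacker-controlled
    # length field; always validate it against the data actually on hand.
    if claimed_len < 0 or claimed_len > len(buf):
        raise ValueError("claimed length exceeds received data")
    return buf[:claimed_len]
```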


> How would you know that the attacker has not manipulated the size parameter?

This scenario doesn't make a lot of sense. Say I have a goodfile that is 512 bytes long and hashes to 3d8850e1, and someone else wants to produce badfile and convince you that it's my goodfile. GP's suggestion is that I publish a size-plus-hash value "512-3d8850e1" for you to check against. If the attacker is in a position to alter the size part, they're also in a position to alter the hash part, in which case why even bother with a collision? They can just change the hash to be whatever badfile hashes to.

The true answer to GP is that if you do this, it's no longer a hash function. A hash function is defined as taking an arbitrary input and returning an n-bit output for some fixed value of n. By including the size of the input in your output, the size of your output grows logarithmically with the size of your input. This may seem pedantic, but fixed-size-output and arbitrary-size-input is extremely important for general usage of a hash function.
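GP's size-plus-hash idea, and why it stops being a fixed-size hash function, can be sketched as follows (Python; the 8-hex-digit truncation mirrors the `3d8850e1` example above, and SHA-256 is just an assumed choice):

```python
import hashlib

def size_tagged_digest(data: bytes) -> str:
    # Output is "<length>-<truncated sha256>". The length part grows
    # (logarithmically) with the input size, so this is no longer an
    # n-bit-output hash function in the formal sense.
    return f"{len(data)}-{hashlib.sha256(data).hexdigest()[:8]}"
```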


I think there is a need for clarification. It says _member states_ of the EU have trackers on their websites, not the EU itself.


Depends on whether you think fonts.googleapis.com and fonts.gstatic.com cookies are "trackers on their websites", both of which are found on http://europa.eu/rapid/press-release_STAT-19-1728_en.htm

Though I did have to do a bit of clicking around until Privacy Badger found something, so it looks like they are at least trying.


It's certainly handing data to Google, which isn't great for integrity. But we're complaining to a high standard.


Seriously. What would be more interesting would be finding some low-standard things worth complaining about. Like potable tap water in <pick your city>.


So? EU laws are a suite of national laws.


What the hell are you talking about?


1) Is that really a bad thing? Isn't it generally a good thing that law enforcement finds law-breakers?

2) So do drunk drivers that crash.

3) You're arguing against yourself. Yes, it's bad that bad guys get notified so they can avoid checkpoints.

4) Shouldn't law enforcement spend their time enforcing the law? It's time well spent, IMO. As you say in (1), they also solve other, more serious crimes (e.g. finding wanted criminals).


Author here. Thanks for your comment. I think you have a valid point about users clicking anything. However, I would only say that's the case if you send around 20 phishing mails. In a targeted attack you want to send one or two phishing mails, and you want to maximize your chances of success to avoid a reaction from the blue team.

I agree that the impact is low compared to other vulnerabilities. It is definitely the case that you get a t-shirt (at best) for it. Though my point is that they could be critical for the users, not for the website itself. An attacker that doesn't really care about the vulnerable website can still exploit the trust in the vulnerable website to perform attacks on the user he is interested in (e.g. hash stealing or malicious redirects). In fact, I believe malicious redirects are a really common payload of XSS flaws.
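A minimal sketch (Python, with a hypothetical `allowed_host` parameter) of the kind of check that closes an open redirect, the payload class under discussion:

```python
from urllib.parse import urlparse

def is_safe_redirect(target: str, allowed_host: str = "example.com") -> bool:
    # Only allow same-site targets: relative paths, or absolute http(s)
    # URLs on our own host. Note that "//evil.com/x" parses with an
    # empty scheme but a non-empty netloc, so it is rejected too.
    parsed = urlparse(target)
    if parsed.scheme not in ("", "http", "https"):
        return False
    return parsed.netloc in ("", allowed_host)
```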


Right, but a targeted phishing attack against an internal user is just as likely to rely on an application (or a facsimile of an application) you don't control, like a benefits management portal or something that supposedly authenticates through an SSO.

I guess if your argument is that there would be high value in eradicating open redirects wholesale, I sort of see your point. But the incremental value of eliminating one open redirect is marginal at best.


There is also value in customer trust. If a customer gets burned by a Google.com link, they are going to check next time they see such a link. Google would be better off if customers felt 'oh, it is Google so it must be safe'.


That might not be an issue for Google, but I could see it being a big problem for a company that relies heavily on projecting a "family friendly" image (think Disney).


Back in the day you could change some URL parameters and make it look like Toys R Us was selling firearms on their website because they used the same ecommerce back end as a sporting goods store. Like you'd go to the URL and it would be a hunting rifle (or whatever) but it would be on the Toys R Us site.

I don't think Toys R Us was ever harmed but it was mildly amusing and I'm sure a few people's panties got knotted after they found out Toys R Us doesn't actually sell firearms and they got all enraged over nothing.


I hadn't heard of this issue specifically, but it sounds like you may be talking about eBay Enterprise[1]. They ran a lot of e-commerce operations for brick and mortar stores at one point, including both Toys R Us and Dick's Sporting Goods.

[1] https://en.wikipedia.org/wiki/EBay_Enterprise


Firstly, it is not possible to opt out of Facebook. [1] And they do indeed collect private data that we didn't choose to share (shadow accounts, third-party website trackers, etc.).

Facebook has broken "actual laws". There are many cases where Facebook has broken the law. [2] [3]

Also, please read up on Fallacy of relative privation ("not as bad as").

[1] https://boingboing.net/2017/11/08/involuntary-profiling.html

[2] https://www.theguardian.com/technology/2018/feb/12/facebook-...

[3] https://techcrunch.com/2018/02/19/facebooks-tracking-of-non-...


I have read these and many more such articles, and apart from using catchy terms like "shadow profile" none of them have been able to describe how it affects me at all if I don't have a Facebook account. Facebook scanned my friend's contact book which happened to have my name and email on it -- what then?


Regarding [1], it is possible to opt out of facebook: don't visit websites with Facebook like/share buttons. (This website, for example). When you access a website with facebook tracking installed, you are consenting to being tracked.


I did not consent to being tracked by facebook.

Yet websites try again and again to load the facebook like button.

As per the GDPR, which is in effect but not enforced until May, tracking me without explicitly and clearly asking me whether that is okay is not allowed, and anything else, like withdrawing service until I agree to be tracked, does not constitute consent.

When I visit a new website I do not know if they have facebook like buttons. I have to load the page to check that and without a script or ad blocker I will also load the like button and facebook will track that.

At which point in that process did I consent to any and all scripts on that webpage leeching off my personal data?


> Yet websites try again and again to load the facebook like button.

Take that up with the websites, not Facebook.

Vast numbers of websites use Google Web Fonts which enables Google to track users. I didn't opt in to that so I just block it.


Yes but technically less inclined users won't block and don't consent to it either.

Visiting a website is not consent for tracking.


>don't VISIT websites with Facebook like/share buttons.

So people should know a site uses Facebook share buttons before opening it through a Google search. Then, they should keep closing pages until they find one that doesn't have share buttons. Then, memorize a list of "safe sites"?


I know this isn't the best solution but uBlock has a list that blocks all social buttons for you. Very easy to install and use.


The Fanboy Social list is mostly cosmetic filtering, it won't prevent connection to Facebook servers.

For this I suggest dynamic filtering[1], as this guarantees there will be no connection to sites with block rules, and block rules can be easily locally overridden in case it is needed for a specific site.

[1] https://github.com/gorhill/uBlock/wiki/Dynamic-filtering:-to...


I use uBlock, Nanodefender, Adguard and HOST blocks.


  When you access a website with facebook tracking installed, you are consenting to being tracked.
Sorry, but that's just silly. That's about the same as advising you not to use the internet if you don't want to be tracked by Facebook.


Besides, the GDPR clearly does not allow things like that. Consent has to be given, and I have to be aware that I gave it. It is not OK to use implicit consent. Checkbox or it didn't happen.

Burying it in long legalese isn't accepted either.


Or just install any of the hundred ad and tracking blockers out there.


How would you know a site has those buttons or not unless you visit it first?


So that's why there is no gravity over there?

Seriously though, the statement is ignorant at best. Please help me understand the point of posting it?


Seems to me to be an obvious attempt at humour.


It was his attempt to justify why encryption must have backdoors and that if Australian Parliament were to legislate that, they’d have to accept it.



I'd recommend deleting all content associated with the account and removing the address from any third party site (recovery etc).

I would however never actually delete the account.

My concern with deleting the account is that it exposes you to some really nasty impersonation attacks. It is free to keep. Just keep it.


I agree. The act of deleting the account isn't even a guarantee that the data associated with the account will be deleted.


I don't know about Yahoo, but most services will not allow you to reregister a previously deleted account.


I vaguely recall that yahoo does recycle old email addresses.

