I edit videos on a hobbyist level (mostly using DaVinci Resolve to edit clips of me dying in video games to upload to a ShareX host to show to friends). The big takeaway for me was reading that, for quality and efficiency, libx264 is better than NVENC for rendering h264 video. All this time I'd assumed NVENC was better because it used shiny GPU technology! Is libx264 better for recording high-quality videos too? I know it runs on the CPU, unlike NVENC, but I doubt that's an issue for my use case.
Edit: from some googling it looks like encoding is encoding, whether it’s used for recording or rendering footage. In that case the same quality arguments the article is making should apply for recording too. I only did a cursory search though and have not had a chance to test so if anyone knows better feel free to respond
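Since I haven't tested yet: if anyone else wants to, here's roughly how I'd set up an apples-to-apples check with ffmpeg. This is just a sketch; it assumes ffmpeg was built with NVENC support, and the file names and CRF/CQ values are placeholders:

    import subprocess

    SRC = "capture.mkv"  # placeholder: any short game capture

    # CPU encode with libx264, quality-targeted via CRF
    subprocess.run(["ffmpeg", "-y", "-i", SRC,
                    "-c:v", "libx264", "-preset", "slow", "-crf", "18",
                    "-c:a", "copy", "x264.mp4"], check=True)

    # GPU encode with NVENC, quality-targeted via constant quality (CQ)
    subprocess.run(["ffmpeg", "-y", "-i", SRC,
                    "-c:v", "h264_nvenc", "-preset", "p7", "-cq", "19",
                    "-c:a", "copy", "nvenc.mp4"], check=True)

CRF and CQ aren't on the same scale, so the fairer comparison is probably to tweak the values until the two outputs are about the same size and then eyeball which looks better.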
Yeah, this is a very common misconception. There are hardware encoders that might be distribution quality, but these are (to my knowledge) expensive ASICs that Netflix, Amazon, Google, etc. use to accelerate encode (not necessarily to realtime) and improve power efficiency.
GPU acceleration could be used to accelerate a CPU encode in a quality-neutral way, but NVENC and the various other HW accelerators available to end users are designed for realtime encoding for broadcast or for immediate storage (for example, to an SD card).
For distribution, you can either distribute the original source (if bandwidth and space are no concern) or, ideally, encode in a modern, efficient codec like HEVC (via x265) or AV1. AV1 might be particularly useful if you have a noisy source, since denoising and modeling of the noise (film grain synthesis) is part of the algorithm. The reference software encoders are considered the best-quality, but often the slowest, options.
GPU encoding is best if you need to transcode on the fly (e.g., for Plex), or you want to make a working copy for temporary distribution before a final encode.
You might want to try exporting your work from Resolve as ProRes 422 HQ or DNxHR HQ and then encoding to h264/h265 with Compressor on a Mac (costs $; it's the encoder part of Final Cut sold as a separate piece of software) or with Shutter Encoder. Also, I'm making a big assumption that you're using the paid version of Resolve; disregard otherwise. It might not be worth it if your input material is video game capture, but if you have something like a camera that records h264 4:2:2 10-bit in a log gamma, then it can help preserve some quality.
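If you'd rather not pay for Compressor, ffmpeg can handle that last step too. A minimal sketch, assuming a ProRes/DNxHR master exported from Resolve (file names are placeholders):

    import subprocess

    # 10-bit HEVC delivery encode from an intermediate master
    subprocess.run(["ffmpeg", "-i", "master.mov",
                    "-c:v", "libx265", "-preset", "slow", "-crf", "18",
                    "-pix_fmt", "yuv420p10le",
                    "-c:a", "aac", "-b:a", "256k",
                    "delivery.mp4"], check=True)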
> But officers can also make emergency data requests, or EDRs, in cases involving a threat of imminent harm or death. These requests typically bypass any additional verification steps by the companies who are under pressure to fulfill the request as quickly as possible.
How do companies decide which EDRs to fulfill and which ones require a judicial subpoena? Are companies even under any obligation to fulfill an EDR?
> So in a lot of the searches that we reviewed, we had about 500,000 to take a look at. We found the word “investigation” – or variations of the word “investigation” – or “suspect” a lot with really no details about what the investigation pertained to or what the suspect may have done.
> A lot of searches also just listed gibberish, like “ASDF” – that’s the sequence of letters in the center row of your computer keyboard. Or just said that they were there for random checks. We even found a search that just said “donut” or that didn’t say anything at all.
Reminds me of when Reddit posted their year-end roundup https://web.archive.org/web/20140409152507/http://www.reddit... and revealed their "most addicted city" to be the home of Eglin Air Force Base, host of a lot of military cyber operations. They edited the article shortly afterward to remove this inconvenient statistic.
Relevant: “Containment Control for a Social Network with State-Dependent Connectivity” (2014), Air Force Research Laboratory, Eglin AFB: https://arxiv.org/pdf/1402.5644.pdf
I'm not saying those trends charts demonstrate anything, just that commercial human astroturfers and bot networks are no less of a thing than intelligence ones. It wouldn't really be a conspiracy theory to think McDonald's, or any other company, trade association, lobbyist, PR firm, etc., is operating a lot of social media accounts, and those could theoretically show up on a report like that if a lot of the activity came from a specific place.
You would think such people would be competent enough to proxy their operations through at least a layer of compromised devices, or Tor, or VPNs, or at least something other than their own IP addresses.
OP has just completely pulled this analysis out of their ass. They aren't all constantly running cyber operations on Reddit; that bears zero resemblance to what cyber operations look like in real life, including on the point that you raised.
Daily reminder (for myself especially) to engage as little with social media (reading/commenting) as possible. It's a huge waste of time anyway; it's not like I don't have better things to do.
This is a special addiction because most of us are community starved. Formative years were spent realizing we could form digital communities, then right when they were starting to become healthy and pay us back, they got hijacked by parasites.
These parasites have always dreamed of directly controlling our communities, and it got handed to them on a silver platter.
Corporate, monetized community centers with direct access to our mindshare, full ability to censor and manipulate, and a direct line to our community-centric neurons. It is a dream come true for these slavers, who evoke a host of expletives in my mind.
Human beings are addicted to communal social interaction. It is normally a healthy addiction. It is no longer in service of us.
The short term solution: reduce reliance on and consumption of corporate captured social media
The long term solution: rebuild local communities, invest time in p2p technology that outperforms centralized tech
When I say "p2p" I do not mean what is currently available. Matrix, federated services, etc are not it. I am talking about going beyond even Apple in usability, and beyond BitTorrent in decentralization. I am talking about a meta-substrate so compelling to developers and so effortless to users that it makes the old ways appear archaic in their use. That is the long term vision.
I notice a distinction made in the docs between image, video, and "web page" slop. Will there be a way to aggressively filter web page slop separately from the other two? There's an uncomfortable number of authors, even posted on this forum, who write insightful posts that (at least from what I can tell) aren't AI slop, but for some reason they decide to top them with a generated image. While I find that distasteful, I would only want to filter it if the text of the post itself was slop too. Will the distinction in the docs allow for that?
Yes, images and text are scored separately.
In the example you shared, the blog's image would be tagged as AI and downranked in image search. The blog post itself would still display normally in search results.
Image slop is directly detectable by a model, but web page slop is necessarily a multi-signal system (page format, who posted it, link structure, content,...)
So having AI images in a webpage is just one input signal for the page being slop (it's not even used yet in the classification for webpages).
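To make that concrete, here is a toy sketch of what a multi-signal combination could look like. The signal names and weights are entirely invented for illustration; this is not how the actual classifier works:

    # Toy illustration only: signal names and weights are invented.
    WEIGHTS = {
        "page_format": 0.2,     # templated listicle structure, etc.
        "poster_history": 0.3,  # account that mass-publishes
        "link_structure": 0.2,  # SEO-farm style interlinking
        "content_model": 0.3,   # text-classifier score
    }

    def slop_score(features: dict) -> float:
        """Weighted combination of per-signal scores, each in [0, 1]."""
        return sum(WEIGHTS[name] * features.get(name, 0.0) for name in WEIGHTS)

    # An AI header image would be just one more feature here,
    # not a verdict on the whole page.
    print(slop_score({"content_model": 0.9, "page_format": 0.1}))  # 0.29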
One note that might be good to highlight in the article is that the take-home is expected to take 2 hours. In my experience they are usually much longer, so I was initially surprised to see take-homes being given before an initial call, until I looked at the assignment itself.
I still consider this a red flag. The company wants me to put time into the hiring process, but they can’t be bothered to do the same.
If there is at least a recruiter screen first, I'll apply and ask about "Bring Your Own Code Examples", mostly when their daily work would use tools for which I have some code published.
> I still consider this a red flag. The company wants me to put time into the hiring process, but they can’t be bothered to do the same.
Exactly this.
It costs a company nothing to give you a take-home, but it will cost you (the candidate) potentially many hours. On my last job search, I got burned a number of times where I'd work for hours on a take-home only to get ghosted. I don't think they even looked at my solution.
Now I have a personal policy where I will refuse to do a take-home unless the interviewer sits there with me while I do it. This demonstrates to me that the interviewer is actually serious and respectful of my time.
Another problem with take-home projects: they don't scale for the candidate. Sure, 2 hours is nothing if you want a job, but typically the candidate is applying for dozens, if not hundreds, of jobs. Even 20 take-homes just like this one adds up to 40 hours of work, just to get through a hiring gate!
It is not "respectful of the candidate's time" if everyone is doing it.
It CAN cost nothing to give a take-home, but this is not a requirement. At my previous employer, any candidate who made it as far as the take-home project was paid for the time they worked on it.
I can see both perspectives. If you are a skilled candidate, this seems like a waste of time. But if you are hiring online, you will inevitably get a lot of terrible candidates and you need to filter them out. If you are a small team, you can't spend weeks interviewing randos with little to no coding experience for an SE role. The problem is that online hiring is full of noise, and both sides bear the costs this creates.
> The Boeing 737-800 had just 220kg of fuel left in its tanks when it finally landed, according to a picture of what appears to be a handwritten technical log. Pilots who examined the picture said this would be enough for just five or six minutes of flying.
For reference, passenger airlines immediately declare an emergency if their planned flight path would put them under 30 minutes of fuel (at least in the US). Landing with 5 minutes of fuel remaining is very atypical.
It's a little difficult to parse, but this is the hourly share of transactions. If transactions were spread evenly over 8 hours a day, 7 days a week, each hour would get about 1.8% of transactions (1/56). So a 0.4-point change in hourly share for a given hour is quite significant.
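Spelled out, a trivial sanity check of that baseline:

    # Uniform baseline: 8 active hours/day, 7 days/week = 56 hourly slots
    hours_per_week = 8 * 7
    baseline_share = 1 / hours_per_week
    print(f"{baseline_share:.1%}")  # 1.8% of weekly transactions per hour
    # A 0.4-point swing is therefore over a fifth of an hour's expected share.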
Are there any news sources corroborating this? Maybe it's early, but I'm surprised I can't find any articles from OpenAI or press outlets about this. Googling "OpenAI bonus" gives some Reddit threads, some LinkedIn posts, and this Hacker News post.