Yes, it charts the adoption rate (adopting firms / total firms) against time. But it doesn't use the term "adoption rate" to mean the first derivative of "adoption" with respect to time.
When it talks about the adoption rate flattening, it is talking about the first derivative of the adoption rate (as defined in the previous paragraph, not as you wish it were defined) with respect to time tending toward 0 (and, consequently, the second derivative being negative), not the third derivative with respect to time being negative.
I assure you I don't have any wishes one way or another.
What tickled me into making the comment above had nothing to do with whether adoption rate was used by the author (or is used generally) to mean market penetration or the rate of adoption. It was because a visual aid that is labeled ambiguously enough to support the exact opposite perspective was used as a basis for clearing up any ambiguity.
The purpose of a time series chart is necessarily time-derivative, as the slope or shape of the line is generally the focus (is a value trending upward, downward, varying seasonally, etc.). It's fair to include or omit a label on the dependent axis. If omitted, it's also fair to title the chart after the dependent variable and let the "... over time" be implicit.
However, when the dependent axis is not explicitly labeled and "over time" is left implicit, it's absolutely hilarious to me to point to it and say it clearly shows that the chart's title is or is not time-derivative.
I know comment sections are generally for heated debates trying to prove right and wrong, but sometimes it's nice to be able to muse for a moment on funny things like this.
It maps pretty cleanly to the well-understood derivatives of a position vector: position (user count), velocity (first derivative, change in user count over time), acceleration (second derivative, speeding up or flattening of the velocity), and jerk (third derivative, change in acceleration, such as the shift from acceleration to deceleration).
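As a quick sketch of that mapping (using a made-up adoption series, not the article's numbers), successive finite differences give each "derivative":

```python
import numpy as np

# Hypothetical cumulative adoption share over time (the "position" analogue).
adoption = np.array([0.02, 0.05, 0.09, 0.12, 0.14, 0.15, 0.155])

velocity = np.diff(adoption)          # first derivative: new adoption per period
acceleration = np.diff(velocity)      # second derivative: speeding up or flattening
jerk = np.diff(acceleration)          # third derivative: change in acceleration

print(velocity)      # [0.03  0.04  0.03  0.02  0.01  0.005]
print(acceleration)  # turning negative is the "flattening"
```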
It is a beautiful title and a beautiful way to think about it—alas, I think gp is right: here, from the charts anyway, the writer seems to mean the count of firms reporting adoption (as a proportion of total survey respondents).
Which paints a grimmer picture—I was surprised that they report a marked decline in adoption amongst firms of 250+ employees. That rate-as-first-derivative apparently turned negative months ago!
Then again, it’s awfully scant on context: does the absolute number of firms tell us much about how (or how productively) they’re using this tech? Maybe that’s for their deluxe investors.
It is not velocity, it is not change. Have you read the graphs? What do you think 12% in Aug and Sep for 250+ employee companies means: that another 12% of companies adopted AI, or a flat "12% of the companies had adopted in Aug, and it did not change in Sep"?
Yes. The title specifically is beautiful. The charts aren't nearly as interesting, though probably a bit more than a meta discussion on whether certain time intervals align with one interpretation of the author's intent or another.
I think that's nearly exactly what I paid for 2x32GB at a retail store last week. I hadn't bought RAM in over a decade so I didn't think anything of it. Wish my emergency PC replacement had occurred a year earlier!
In 2008 I was given 2 offers from a company: WFH or paid relocation to work in-office. I chose the former, which came with a 26% lower salary, and have been remote ever since. Just comparing the salaries in that case is a little disingenuous, however, since the relocation was from a low cost of living city to a high cost of living city.
Specialization has a large impact on the extent to which WFH may need to come at a discount: if you're easily replaceable with an in-office worker, why would the company deal with remote? If you're not so easily replaceable, the company is more likely to be willing to work with you on your terms.
There's generally been a large disconnect between the job market in the tech sector and the rest of the economy, at least until a few years ago. There's now much more of a bifurcation within the tech job market, where rank-and-file and entry-level software engineers are suffering while experienced and specialized software engineers may be doing better than ever. This plays into the RTO/WFH discussion because some people may not have the option to get their preference at any discount, or may not be given either option in the first place.
You can use spherical harmonics to encode a few coefficients in addition to the base RGB for each splat, such that the render-time view direction can be used to compute an output RGB. A "reflection" in 3DGS isn't a light ray being traced off the surface, but instead a way of saying "when viewed from this angle, the splat may take on an object's base color, while from that angle, the splat may be white because the input image had glare."
This ends up being very effective with interpolation between known viewpoints, and hit-or-miss extrapolation beyond known viewpoints.
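A rough sketch of what the per-splat evaluation can look like (degree-1 SH only; the constants and layout follow the common 3DGS convention, and the coefficient values here are made up):

```python
import numpy as np

# SH basis constants for degrees 0 and 1 (as used in typical 3DGS implementations).
SH_C0 = 0.28209479177387814
SH_C1 = 0.4886025119029199

def splat_color(sh_coeffs, view_dir):
    """Evaluate view-dependent RGB for one splat.

    sh_coeffs: (4, 3) array -- one DC (base) RGB plus three degree-1 RGB coefficients.
    view_dir:  vector from the camera toward the splat.
    """
    x, y, z = view_dir / np.linalg.norm(view_dir)
    color = SH_C0 * sh_coeffs[0]          # base color, view-independent
    color += -SH_C1 * y * sh_coeffs[1]    # degree-1 terms modulate the color
    color += SH_C1 * z * sh_coeffs[2]     #   with the view direction, giving
    color += -SH_C1 * x * sh_coeffs[3]    #   glare/"reflection"-like effects
    return np.clip(color + 0.5, 0.0, 1.0) # +0.5 offset and clamp, as in common renderers

# Made-up coefficients: mostly grey, brightening when viewed from +z.
coeffs = np.array([[1.0, 1.0, 1.0],
                   [0.0, 0.0, 0.0],
                   [0.6, 0.6, 0.6],
                   [0.0, 0.0, 0.0]])
print(splat_color(coeffs, np.array([0.0, 0.0, 1.0])))
```

Higher degrees just add more basis terms; the point is that color is a cheap function of view direction, not an actual ray bounce.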
This is a little off-topic and nitpicky, so I waited a day to avoid cluttering the comments while the thread was on the front page.
I believe the term "going concern" means exactly the opposite of what you were trying to say here. Generally, pedantic comments like this aren't helpful or interesting, but this case was amusing to me in the context of assuring people Cohere is likely to stay around by boldly stating that Cohere is at risk of becoming insolvent or ceasing operations ("Cohere is not a going concern"). Beyond that, I think it's pretty interesting how understandable it is to look at the term without knowing its meaning and assume the presence of the word "concern" must mean people are concerned about it going [bankrupt?].
I'm sure given the context nobody got the wrong impression. If anything, it makes me wonder if the term could, at least in informal contexts, reach a point of semantic inversion.
I managed to eke out a couple more years after Pebbles were discontinued by finding replacements on ebay. If this is a low volume run, I'm contemplating the opposite—whether I can justify not buying multiple while I still can.
It's absolutely essential to be able to differentiate between gross profit and net profit to establish unit economics, especially as the scale of a newly founded operation may drastically change relative to some amount of fixed capex or SG&A expense.
Of course. But here we're talking about the opportunity cost of the founders and other employees so gross profit isn't as relevant. Context matters and the context here is that the founders and employees would probably have a much higher take home split amongst all of them if they were to work in the wearables division of a large company like Google or Apple.
The difference between gross profit & net profit for companies like this largely consists of employee & founder salaries (SG&A and R&D). That delta is literally paying for their opportunity cost. Net profit is most relevant to shareholders.
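As a toy illustration (all numbers made up, just to show where that delta sits):

```python
# Hypothetical small hardware company, purely for illustration.
revenue = 5_000_000
cogs = 3_000_000        # parts, assembly, shipping per unit sold
salaries = 1_500_000    # founder/employee comp (SG&A + R&D)
other_opex = 300_000    # rent, tooling, software, etc.

gross_profit = revenue - cogs                        # 2,000,000: unit economics look fine
net_profit = gross_profit - salaries - other_opex    # 200,000: what's left for shareholders

print(gross_profit, net_profit)
```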