It’s a really tough time for non-profits. Large amounts of their funding have been cut, which is genuinely impacting their day-to-day operations. Employees work long hours and are not paid well, and turnover is very high. It’s just a hard industry right now.
Agents are pretty good at finding people on the internet now, and it worked for us. We never got into grant-writing, though; it seems to be a different kind of service. How do you do it?
At plinth, we've built a software/AI platform for charities to help them measure their impact and get more funding for the work they're doing. We work with both charities and funders (e.g. foundations, Government), providing a way to easily collect, visualise and report on client and impact data.
We're looking for an engineer with experience in React, Next.js, etc., who wants to spend almost all their time heads-down writing code rather than sitting in meetings.
So, I don't use this stuff, but every time I see someone complaining about it doing something stupid, the response tends to be "that's because it's GPT-3; everyone uses GPT-4 now". I took this at face value.
I think it's a case of the tech bubble vs the rest of the world. Most people are not subscribing to the paid version of ChatGPT, but many of the people who spend a lot of time with these things are.
Do you have any better sources for the power usage stats? It would be good to get a bit closer on that front. Having said that, even if the cost share is closer to 80%, that still puts it on par with a laptop for an average person.
I can definitely imagine they're not covering the amortised cost of training with the price of individual inference requests. It seems less likely to me that they're making a significant loss on each subsequent request, but again, I have no source for that either.
Looking a bit more into this, I found this paper: https://arxiv.org/pdf/2311.16863.pdf. It references a table saying that text generation uses 0.047 kWh per 1,000 inferences, which is 1-2 orders of magnitude lower than my estimate. That figure is for GPT-2, though, so it plausibly scales to something on the order of ~0.001 kWh per inference for GPT-3.5.
I'm not sure. The figures I've seen suggest that GPT-3 required 10x more energy to train than GPT-2 (e.g. https://www.nnlabs.org/power-requirements-of-large-language-....), and since per-inference compute grows with model size in roughly the same way, a 1-2 order of magnitude increase in energy usage from GPT-2 to GPT-3.5 makes sense to me.
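To make the back-of-envelope arithmetic explicit (the 10-100x scaling factor is my own assumption, extrapolated from the training-energy jump between generations, not a measured number):

```python
# Back-of-envelope estimate of per-inference energy, scaling the
# GPT-2 figure from the arXiv paper above by an assumed factor.

GPT2_KWH_PER_1000 = 0.047                      # text generation, per the paper
gpt2_per_inference = GPT2_KWH_PER_1000 / 1000  # ~4.7e-5 kWh per inference

# Assumption: GPT-3.5 costs roughly 10-100x more energy per inference
# than GPT-2, mirroring the ~10x per-generation jump in training energy.
for factor in (10, 100):
    estimate = gpt2_per_inference * factor
    print(f"{factor:>3}x scaling: ~{estimate:.4f} kWh per inference")

# Output:
#  10x scaling: ~0.0005 kWh per inference
# 100x scaling: ~0.0047 kWh per inference
```

So ~0.001 kWh per inference for GPT-3.5 sits comfortably inside that plausible range.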
That's actually not correct. Stripe Invoicing charges 0.4% or 0.5% on top of the normal payment transaction fee (for either card payments or bank transfers). Essentially, you are paying a percentage fee for a nice PDF (plus some reconciliation/reporting, etc.).
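For concreteness, here's a rough sketch of how that adds up on a card payment. The 2.9% + $0.30 base card fee is standard US pricing and just an assumption for illustration; the 0.4% figure is the lower of the two invoicing rates mentioned above:

```python
# Rough illustration of the extra cost of Stripe Invoicing on a card
# payment. Base card fee of 2.9% + $0.30 is an assumed standard US rate.

def stripe_fees(amount: float, invoicing: bool = True) -> float:
    card_fee = amount * 0.029 + 0.30                      # base card-processing fee
    invoicing_fee = amount * 0.004 if invoicing else 0.0  # invoicing surcharge
    return card_fee + invoicing_fee

amount = 1000.00
print(f"Card fee only:      ${stripe_fees(amount, invoicing=False):.2f}")  # $29.30
print(f"With invoicing fee: ${stripe_fees(amount):.2f}")                   # $33.30
```

On a $1,000 invoice, that nice PDF costs an extra $4.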