For my area, everybody uses LaTeX styles that more or less produce PDFs identical to the final versions published in proceedings. Or, at least, it's always looked close enough to me that I haven't noticed any significant differences, other than some additional information in the margins.
At least in my experience, grad students don't pay submission fees. It usually comes out of an institutional finances account, typically assigned to the student's advisor (who is generally the corresponding author on the submission). (Not that the waiver isn't a good idea — I just don't think the grad students are the ones who would feel relieved by that arrangement.)
Also, I'm pretty sure my SIG requires LaTeX submissions anyway... I feel like I remember reading that at some point when I submitted once, but I'm not confident in that recollection.
A lot of discussion about the benefits/drawbacks of open access publishing, but I don't see anybody talking about the other thing that's coming along with this commitment to open access: the ACM is introducing a "premium" membership tier behind which various features of the Digital Library will be paywalled. From their info page [0], "premium" features include:
* Access to the ACM Guide to Computing Literature
* AI-generated article summaries
* Podcast-style summaries of conference sessions
* Advanced search
* Rich article metadata, including download metrics, index terms and citations received
* Bulk citation exports and PDF downloads
The AI-generated article summaries have been getting a lot of discussion in my social circles. They have apparently fed many (all?) papers into some LLM to generate summaries... which is absurd when you consider that practically every article already has an abstract as part of its text and submission. These abstracts were written by the authors and have been reviewed more than almost any other part of the articles, so they are very unlikely to contain errors. In contrast, multiple of my colleagues have found errors of varying scales in the AI-generated summaries of their own papers — many of which are actually longer than the existing abstracts.
In addition, there are apparently AI-generated summaries for articles that were licensed with a non-derivative-works clause, which means the ACM has breached not just the social expectation of providing accurate information, but also the legal obligations placed upon it as the publisher of these materials.
I think it's interesting that the ACM is positioning these "premium" features as a necessity due to the move to open-access publishing [1], especially when multiple other top-level comments on this post are discussing how open-access can often be more profitable than closed-access publishing.
[1] The Digital Library homepage (https://dl.acm.org/) features a banner right now that says: "ACM is now Open Access. As part of the Digital Library's transition to Open Access, new features for researchers are available as the Digital Library Premium Edition."
They also prefix every PDF with a useless page listing the authors (who are already listed on the first (now second) page anyways) and which of the authors' universities were members of ACM Open and paid for the publishing via a flat rate.
The latter is of course the actual reason for this extra page, but it is also entirely useless information, since the people reading the paper don't care. The people writing the paper are also usually annoyed by it (source: I'm an author of one such paper).
I came here with this perspective and it made the rest of the thread feel like submarine PR cleanup for this mess. Perhaps they can afford to keep their high profits because of AI company money?
There will be customers even though it is a useless feature tier.
Monetizing knowledge-work is nearly impossible if you want everyone to be rational about it. You gotta go for irrational customers like university and giant-org contracts, and that will happen here because of institutional inertia.
I believe parent commenter was referring to recreational use, i.e., use by people without such diagnoses who want a "performance boost". I heard about that sort of thing being popular when I was in college — people would take Adderall to cram for an exam or to study late into the night.
You're right that, for people with ADHD and related disorders, stimulant medication sort of just adjusts their baselines so they can pay attention like a "normal" person.
> You're right that, for people with ADHD and related disorders, stimulant medication sort of just adjusts their baselines so they can pay attention like a "normal" person.
I have ADHD and take methylphenidate (I've tried many kinds of stimulants as well) -- and the NO2 analogy is imperfect, but it's still a better analogy than saying stimulants simply adjust the baseline of people with ADHD so that they function like "normal" persons.
I feel there is a narrow window of dosage and time where it might feel that way -- i.e. stimulants at the onset might calm you down, reduce anxiety, but all stimulants are very broad hammers.
For me it feels like it's impossible to re-create chemically exactly the neurotypical focus that I've seen in other colleagues.
Like spending 5-6 hours of continuous work where you drill down just enough, get back on track, don't get distracted, don't get too anxious, don't get hyperfocused, AND do that consistently, day after day after day.
My non-chemical modes are either to hyperfocus for 2 weeks on a problem, immerse myself, but then completely lose interest, most of the time without much to show for it, OR to procrastinate for a long time, get extremely anxious, and then work really hard on the problem.
With stimulants it's a bit like:
- dosed just right: it evaporates anxiety, and stressful situations feel easy to deal with, BUT there's always increased heart rate, grinding teeth, and some tension at the end of the day
- some stimulants make mundane things wildly interesting (on isopropylphenidate I spent a few hours playing with a PL/SQL debugger because I thought it was really cool), but there's no sense of "GO, GO, GO, do it".
- some make things seem urgent enough and help me stay on track -- like the methylphenidate I'm prescribed.
- some make going into a flow-like state easy and fun (like methamphetamine and phenmetrazine).
- some are pure energy and urgency -- like modafinil.
All of the stimulants have the potential to give me euphoria, and all of them temporarily increase libido. I still have to be mindful of not focusing on the wrong thing, the "normal" feeling is very fleeting, it's very easy to get hyper on stimulants, and all of them feel like wear & tear at the end of the day, some more than others.
I've had similar experiences to you.
I never can quite get that normalcy. I now just take Ritalin, but it is finicky.
Getting enough sleep and eating the right amount of the right stuff just before taking it is extremely important, so I don't even take it all that much, even though I struggle.
I wonder if you have tried lisdexamfetamine? I can't get it prescribed easily here, since it's not covered the way the alternatives are, but someone I know had amazing success with it, seemingly because it's a prodrug.
I can't help but be hopeful that I'll get to try it one day and that it ends up being what I always needed.
Not the OP, but I've had a rather bad experience with methylphenidate (Ritalin), where it made me way more awkward around people and increased my obsessive tendencies. It did help with focus, but the effects were very short-lived. It also obliterated my hunger, and once the effects wore off, it left me feeling semi-depressed until the end of the day.
Once I got prescribed lisdexamphetamine, my life turned around almost instantaneously. While it doesn't really get rid of my ADHD, it does help tremendously. The everlasting brain fog isn't as debilitating anymore. When I get excited about something, I actually tend to follow through. I still battle with my obsessive tendencies — like getting stuck setting up the perfect project tooling stack or spending way too much time on planning and research instead of just getting to work — but these are not so much related to ADHD.
On lisdexamphetamine, I am more social, my appetite is better, when I actually commit to something I tend to stick with it for much longer, and I have also picked up a bunch of healthy habits. For example, I exercise almost every day now.
If you someday get a chance to switch to lisdex, do it. It's much smoother and longer-lasting, with fewer side effects. But honestly, anything is better than Ritalin in my book.
It's not legal where I live either. I did try 2-FMA, and it felt better in certain scenarios -- like following a hard course -- but I also felt that tolerance ramps up much faster with releasers than with reuptake inhibitors, so methylphenidate is still a wonderful tool.
I've been watching a good friend of mine struggle with this for a few years now after their diagnosis, and I feel this really captures the nuance and complexity of that struggle well. Stimulants are an incredible tool, but also an incredibly imperfect one.
Totally agree, I don't think em dashes are a particularly useful AI tell unless they're used in a weird way. Left to my own devices (as a native English speaker who likes em dashes and parentheticals), I often end up with at least one em dash every other paragraph, if not more frequently.
On another note, it may be useful to you to know that in most English dialects, referring to a person solely by their nationality (e.g., when you wrote "as a Chinese") is considered rude or uncouth, and it may mark your speech/writing as non-native. It is generally preferable to use nationalities as adjectives rather than as nouns (e.g., "as a Chinese person"). The two main exceptions are when employing metonymy, such as when referring to a nation's government colloquially (e.g., "the Chinese will attend the upcoming UN summit") or when using the nationality to indicate broad trends among the population of the nation (e.g., "the Chinese sure know how to cook!"). I hope this is considered a helpful interjection rather than an unwelcome one, but if not, I apologize!
Thank you! It would indeed require extra effort for me to notice issues like this, and it is very nice of you to have pointed it out!
Speaking of personal devices, I also have a dedicated key binding for en dashes “–” (because, well, I already have a whole tap layer for APL symbols, and it costs nothing to add one more). Since we're on HN, I believe many people here can easily do that if they wish to, so I too don't think en/em dashes are very telling, especially on HN.
I wonder whether this was intentional or a coincidence, but for others (and maybe you) the "Lisp Machine" was a real hardware architecture unrelated to emacs: https://en.wikipedia.org/wiki/Lisp_machine
This is like arguing that we shouldn't try to regulate drugs because some people might "want" the heroin that ruins their lives.
The existing "personalities" of LLMs are dangerous, full stop. They are trained to generate text with an air of authority and to tend to agree with anything you tell them. It is irresponsible to allow this to continue while not at least deliberately improving education around their use. This is why we're seeing people "falling in love" with LLMs, or seeking mental health assistance from LLMs that they are unqualified to render, or plotting attacks on other people that LLMs are not sufficiently prepared to detect and thwart, and so on. I think it's a terrible position to take to argue that we should allow this behavior (and training) to continue unrestrained because some people might "want" it.
There aren't many major labs, and they each claim to want AI to benefit humanity. They cannot entirely control how others use their APIs, but I would like their mainline chatbots not to be overly sycophantic and generally not to try to foster human-AI friendships. I can't imagine any realistic legislation, but it would be nice if the few labs just did this of their own accord (or were at least shamed more for not doing so).
Unfortunately, I think a lot of the people at the top of the AI pyramid have a definition of "humanity" that may not exactly align with the definition that we commoners might be thinking of when they say they want AI to "benefit humanity".
I agree that I don't know what regulation would look like, but I think we should at least try to figure it out. I would rather hamper AI development needlessly while we fumble around with too much regulation for a bit and eventually decide it's not worth it than let AI run rampant without any oversight while it causes people to kill themselves or harm others, among plenty of other things.
At the very least, I think there is a need for oversight of how companies building LLMs market and train their models. It's not enough to cross our fingers that they'll add "safeguards" to try to detect certain phrases/topics and hope that that's enough to prevent misuse/danger — there's not sufficient financial incentive for them to do that of their own accord beyond the absolute bare minimum to give the appearance of caring, and that's simply not good enough.
Yes. My position is that it was irresponsible to publish these tools before figuring out safety first, and it is irresponsible to continue to offer LLMs that have been trained in an authoritative voice and to not actively seek to educate people on their shortcomings.
But, of course, such action would almost certainly result in a hit to the finances, so we can't have that.
Alternative take: these are incredibly complex nondeterministic systems and it is impossible to validate perfection in a lab environment because 1) sample sizes are too small, and 2) perfection isn’t possible anyway.
All products ship with defects. We can argue about too much or too little or whatever, but there is no world where a new technology or vehicle or really anything is developed to perfect safety before release.
Yeah, profits (or at least revenue) too. But all of these AI systems are losing money hand over fist. Revenue is a signal of market fit. So if there are companies out there burning billions of dollars optimizing the perfectly safe AI system before release, they have no idea if it’s what people want.
Releasing a chatbot that confidently states wrong information is bad enough on its own — we know people are easily susceptible to such things. (I mean, c'mon, we had people falling for ELIZA in the '60s!)
But to then immediately position these tools as replacements for search engines, or as study tutors, or as substitutes for professionals in mental health? These aren't "products that shipped with defects"; they are products that were intentionally shipped despite full knowledge that they were harmful in fairly obvious ways, and that's morally reprehensible.
Pretty sure most of the current problems we see re drug use are a direct result of the nanny state trying to tell people how to live their lives. Forcing your views on people doesn’t work and has lots of negative consequences.
I don't know if this is what the parent commenter was getting at, but the existence of multi-billion-dollar drug cartels in Mexico is an empirical failure of US policy. Prohibition didn't work a century ago and it doesn't work now.
All the War on Drugs has accomplished is granting an extremely lucrative oligopoly to violent criminals. If someone is going to do heroin, ideally they'd get it from a corporation that follows strict pharmaceutical regulations and invests its revenue into R&D, not one that cuts it with even worse poison and invests its revenue into mass atrocities.
Who is it all even for? We're subsidizing criminal empires via US markets and hurting the people we supposedly want to protect. Instead of kicking people while they're down and treating them like criminals over poor health choices, we could have invested all those countless billions of dollars into actually trying to help them.
I'm not sure which parent comment you're referring to, but what you're saying aligns with my point a couple levels up: reasonable regulation of the companies building these tools is a way to mitigate harm without directly encroaching on people's individual freedoms or dignities, but regulation is necessary to help people. Without regulation, corporations will seek to maximize profit to whatever degree is possible, even if it means causing direct harm to people along the way.
I'm not saying they're equivalent; I'm saying that they're both dangerous, and I think taking the position that we shouldn't take any steps to prevent the danger because some people may end up thinking they "want" it is unreasonable.
No one sane uses the baseline web UI 'personality'. People use LLMs through specific, custom APIs, and more often than not they use fine-tuned models that _assume a personality_ defined by someone (be it the user or the service provider).
Look up TavernAI character cards.
I think you're fundamentally mistaken.
I agree that, for some users, use of specific LLMs for specific use cases might be harmful, but saying that the web UI (the default AI 'personality') is dangerous is laughable.
I don't know how to interpret this. Are you suggesting I'm, like, an agent of some organization? Or is "activist" meant only as a pejorative?
I can't say that I identify as any sort of AI "activist" per se, whatever that word means to you, but I am vocally opposed to (the current incarnation of) LLMs to a pretty strong degree. Since this is a community forum and I am a member of the community, I think I am afforded some degree of voicing my opinions here when I feel like it.
Disincentivizing something undesirable will not necessarily lead to better results, because it wrongly assumes that you can foresee all consequences of an action or inaction.
Someone who now falls in love with an LLM might instead fall for some seductress who hurts him more. Someone who now receives bad mental health assistance might receive none whatsoever.
I disagree with your premise entirely and, frankly, I think it's ridiculous. I don't think you need to foresee all possible consequences to take action against what is likely, especially when you have evidence of active harm ready at hand. I also think you're failing to take into account the nature of LLMs as agents of harm: so far it has been very difficult for people to legally hold LLMs accountable for anything, even when those LLMs have encouraged suicidal ideation or physical harm of others, among other obviously bad things.
I believe there is a moral burden on the companies training these models to not deliberately train them to be sycophantic and to speak in an authoritative voice, and I think it would be reasonable to attempt to establish some regulations in that regard in an effort to protect those most prone to predation of this style. And I think we need to clarify the manner in which people can hold LLM-operating companies responsible for things their LLMs say — and, preferably, we should err on the side of more accountability rather than less.
---
Also, I think in the case of "Someone who now receives bad mental health assistance might receive none whatsoever", any psychiatrist (any doctor, really) will point out that this is an incredibly flawed argument. It is often the case that bad mental health assistance is, in fact, worse than none. It's that whole "first, do no harm" thing, you know?
...nobody? I didn't determine any such thing. What I was saying was that LLMs are dangerous and we should treat them as such, even if that means not giving them some functionality that some people "want". This has nothing to do with playing god and everything to do with building a positive society where we look out for people who may be unable or unwilling to do so themselves.
And, to be clear, I'm not saying we necessarily need to outlaw or ban these technologies, in the same way I don't advocate for criminalization of drugs. But I think companies managing these technologies have an onus to take steps to properly educate people about how LLMs work, and I think they also have a responsibility not to deliberately train their models to be sycophantic in nature. Regulations should go on the manufacturers and distributors of the dangers, not on the people consuming them.
Here's something I noticed: if you yell at them (all caps, cursing them out, etc.), they perform worse, similar to a human. So if you believe that some degree of "personable answering" might contribute to better correctness, since some degree of disagreeable interaction seems to produce less correctness, then you might have to accept some personality.
I'm desperately looking forward to, like, 5-10 years from now when all the "LLMs are going to change everything!!1!" comments have all but completely abated (not unlike the blockchain stuff of ~10 years ago).
No, LLMs are not going to replace compiler engineers. Compilers are probably one of the areas least likely to profit from extensive LLM usage in the way that you are thinking, because they are principally concerned with correctness, and LLMs cannot reason about whether something is correct — they can only predict whether their training data would be likely to claim that it is correct.
Additionally, each compiler differs significantly in the minute details. I simply wouldn't trust the output of an LLM to be correct, and the time wasted on determining whether it's correct is just not worth it.
Stop eating pre-chewed food. Think for yourself, and write your own code.
I bet you could use LLMs to turn stupid comments about LLMs into insightful comments that people want to read. I wonder if there’s a startup working on that?
A system outputting correct facts tells you nothing about the system's ability to prove the correctness of facts. You cannot assert that property of a system by treating it as a black box. If you are able to treat LLMs as a white box and prove correctness about their internal states, you should tell that to some very important people; that is an insight worth a lot of money.
As usual, my argument brought all the people out of the woodwork who have some obsession about an argument that's tangential. Sorry to touch your tangent, bud.