Depending mostly on how you present yourself to it and the sort of character that evokes from it.
A very helpful way to think about LLMs imo is as simulator substrates for impressions of people. They’re always playing a character; the question is which one.
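(Toy illustration of the "which character" point, not any product's actual setup: the same base model, given a different persona in the prompt, continues in a different voice. This sketch uses the HuggingFace transformers text-generation pipeline with plain GPT-2 as a small stand-in; the personas and prompts are made up.)

```python
# Toy illustration: one base model, two prompted "characters", two voices.
# GPT-2 is just a stand-in; personas and prompts are invented for the example.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(0)

personas = {
    "careful assistant": "As a careful research assistant, I'd note that the evidence",
    "doomsday prophet":  "As the prophet of the coming storm, I declare that the signs",
}

for name, prompt in personas.items():
    out = generator(prompt, max_new_tokens=25, do_sample=True)[0]["generated_text"]
    print(f"--- {name} ---\n{out}\n")
```

The point being just that the character lives in the conditioning, not in the weights committing to one fixed persona.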
First argued for, afaik, in a classic blog post (link below).
Plus being able to use RLVR means the models will be better at that task anyway.
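(RLVR = reinforcement learning with verifiable rewards. A minimal sketch of what "verifiable" buys you, assuming a toy math-answer task; the extraction rule and names are mine, not any lab's recipe.)

```python
# Toy sketch of a verifiable reward: when the task has a mechanically checkable
# answer, the reward needs no human rater or learned judge. Illustrative only.
import re

def extract_final_answer(completion: str) -> str | None:
    """Grab the last integer in a completion, e.g. '... so the answer is 42' -> '42'."""
    matches = re.findall(r"-?\d+", completion)
    return matches[-1] if matches else None

def verifiable_reward(completion: str, ground_truth: str) -> float:
    """1.0 if the model's extracted answer matches the known-correct one, else 0.0."""
    return 1.0 if extract_final_answer(completion) == ground_truth else 0.0

# These scalar rewards then plug into an ordinary RL loop (e.g. PPO/GRPO-style)
# over sampled completions.
print(verifiable_reward("6 * 7 = 42, so the answer is 42", "42"))  # 1.0
print(verifiable_reward("I think it's probably 41", "42"))         # 0.0
```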
(None of this is a dunk on OP! Good thread, you should read it)
And the most extreme case on the other end of the spectrum from proofs is the entirely unverifiable “is this model conscious?”
What it has to tell you about that when you sample from it has ~no probative value at all.
It's all internal and not subject to verification by anything, so you can very easily drift off into that LLM psychosis people talk about. Or just develop confidently wrong opinions.
The more automatically verifiable the output, the better the case for AI
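(Concretely, "automatically verifiable" means something like this: output you can check by running it against tests, with no judgment call anywhere in the loop. A toy sketch; the helper and tests are made up.)

```python
# Toy sketch of the verifiable end of the spectrum: a generated solution is
# accepted or rejected by executing it against known input/output pairs.
def check_generated_code(code: str, tests: list[tuple[tuple, object]]) -> bool:
    """Exec a candidate `solve` function and check it on known cases (sandbox this in real use)."""
    namespace: dict = {}
    exec(code, namespace)
    solve = namespace["solve"]
    return all(solve(*args) == expected for args, expected in tests)

candidate = "def solve(a, b):\n    return a + b\n"  # pretend a model wrote this
print(check_generated_code(candidate, [((2, 3), 5), ((10, -4), 6)]))  # True
```

There's no analogous checker for the consciousness question above, which is the whole point of the spectrum.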
“AI” is a hard thing to have a single opinion about. One of the main affordances of the technology is predicting and adapting to your behavior.
The same model in the same interface can be a helpful research assistant to one person, a code writing tool to another, and urge a third to found a cult
Blog post: www.lesswrong.com/s/N7nDePaNab...
Subsequently presented with more data, less impressionistically, in a Nature article: www.nature.com/articles/s41...