Postdoc at University of Warsaw's Center for Excellence in Social Science studying political communication.
Other interests include: anonymity and identity online
My research: https://scholar.google.com/citations?user=Ua7qrSYAAAAJ
🏳️‍⚧️ she/her
Erin Wertz
The article makes this fairly explicit, but it's worth emphasizing that Grammarly doesn't have any clear option to request a specific reviewer (hence you can't easily check if you're included) and even a cursory attempt gets it to produce a bunch of random scholars from my or adjacent fields.
I was impersonated by Grammarly as well.
Besides the lack of consent, a big problem is that the LLMs do a highly plausible impersonation of my views for anything not specific to me but miss exactly where my views differ from many AI experts or critics.
So worse than useless. Actively misleading.
Erin Wertz
Democrats had a good night, winning major races including governor's races in Virginia and New Jersey and a redistricting ballot measure in California, while also confronting the future of the party. n.pr/3JKZzJO
The more philosophical issue is of course that the vast majority of people really don't care much about trans issues either way, and none of these surveys really seem interested in learning more about whether/how people might actually change their behavior over them (granted, that's tricky).
I mean, the ethical arguments against LLMs seem broadly strong to me, so not necessarily important; but a lot of even reasonable, cautious advocacy still seems ahead of the data to my mind? At least based on occasional quick skims of the data.
But, setting aside a causal argument that has nothing whatsoever to do with the data they actually collected, the results seem mostly similar to what we see elsewhere.
And then that final question is *worlds* apart from any other data I've seen. Wording effects? Biases from ordering?
It strikes me that even in the writing-adjacent areas where LLMs are likely to be useful, a lot of these benefits are still essentially unproven, and I am honestly dubious that the automation will routinely speed or enhance output once you factor in essential human scrutiny and review.
Like, The Argument's article was designed to fit a narrative they don't even pretend to investigate ("people are souring on trans rights, this might be because a random pseudonymous trans woman on Bluesky was mean to me and therefore progressives are really unwilling to compromise.")
Like, they can feel very fast and powerful to use; but outside of some limited contexts I wonder whether, once you make the writing read well, fix hallucinations, review cited sources, check everything for factual inaccuracy, etc., you may actually spend more time or still trend toward worse output.
zeynep tufekci
The big thing that interests me in Lakshya's data is that the answers re: which side is better on LGBT rights seem to diverge much more sharply from others.
The individual rights questions aren't that dissimilar to Pew, etc. But I'm very curious how/why Lakshya gets this result: