Profile
by @danabra.mov
Profile
by @dansshadow.bsky.social
AviHandle
by @danabra.mov
AviHandle
by @dansshadow.bsky.social
ProfileHeader
by @dansshadow.bsky.social
ProfileHeader
by @danabra.mov
ProfileHeaderAlt
by @jakesimonds.com
ProfileMedia
by @danabra.mov
ProfilePlays
by @danabra.mov
ProfilePosts
by @danabra.mov
ProfilePosts
by @dansshadow.bsky.social
ProfileReplies
by @danabra.mov
Record
by @atsui.org
Skircle
by @danabra.mov
StreamPlacePlaylist
by @katherine.computer
ProfileReplies
you’re acting like I believe LLMs are conscious, which I don’t.
okay then give me another word, you’re arguing semantics
y'know, I think the model designed to understand word relationships would know what common words mean. but thanks for the input
you act like when people turn to LLMs for code that they aren't supervising at all, and that when those changes get PR'd they aren't tested extensively. you're like the people who dogpiled on one of the bluesky team members for praising AI-assisted coding and blaming them for bugs that aren't even in their department or scope.
All of your Bluesky data is already being trained on by AI at much bigger corporations you should worry about.
Bluesky has found a use for AI that empowers regular people to have more control over social media online. This is a good thing. Get over your weird reactionary revulsion to anything AI.
-deploying the updated feed and waiting a couple hours for it to include more of what you wanted
an LLM knows what words are commonly associated with "geo-politics", hence why it'd be able to write code for a feed gen for geo-politics. an LLM knows what syntax commonly follows each other, hence why it can write code.
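the kind of keyword-association feed logic described above can be sketched in a few lines. this is a minimal illustration only; the `GEO_KEYWORDS` list and the `matches_topic` / `filter_feed` helpers are hypothetical names, not part of any real feed generator codebase:

```python
import re

# Hypothetical keyword list an LLM might produce for a "geo-politics" feed.
GEO_KEYWORDS = ["geopolitics", "sanctions", "diplomacy", "nato", "trade war"]

def matches_topic(post_text: str, keywords=GEO_KEYWORDS) -> bool:
    """True if the post mentions any topic keyword (case-insensitive,
    whole-word match, so 'nato' does not match inside 'senator')."""
    lowered = post_text.lower()
    return any(re.search(r"\b" + re.escape(k) + r"\b", lowered) for k in keywords)

def filter_feed(posts: list[str]) -> list[str]:
    """Keep only the posts relevant to the feed's topic."""
    return [p for p in posts if matches_topic(p)]
```

a real Bluesky feed generator would apply a predicate like this inside its skeleton endpoint rather than over a plain list of strings, but the core logic is this small.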
Just Don’t Use It™️