Currently working on a p2p communication layer for multiplayer Bevy stuff, with ATProto auth.
Will need some testers soon.
(the "oasis" reference came from Gemini ...)
And slop social media predates LLMs in the form of LinkedIn.
I think an even better approximation would be:
"LLMs are trained to respond like a person would"
Wanna know how stupid we humans are?
We regularly put people at the top of military hierarchies who underestimate the danger of hubris.
All throughout history hubris has caused upsets in war. Again and again and again and again and again.
And naive usage of AI will not improve that situation.
For large projects with multiple contributors, I prefer squashing PRs into a single commit anyway. Resolving rebase conflicts against irrelevant history is just annoying.
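For illustration, here's a minimal sketch of what squashing looks like with plain `git merge --squash` (the repo, file, and branch names are made up; hosted platforms like GitHub offer the same thing as a "squash and merge" button):

```shell
# Sketch: a feature branch's two wip commits land on main as ONE commit,
# so main's history never sees the intermediate noise.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git checkout -q -b main
git config user.email "dev@example.com"
git config user.name "dev"
echo "base" > app.txt
git add app.txt
git commit -q -m "init"

git checkout -q -b feature
echo "step 1" >> app.txt && git commit -qam "wip 1"
echo "step 2" >> app.txt && git commit -qam "wip 2"

git checkout -q main
git merge --squash -q feature            # stages the combined diff, creates no merge commit
git commit -q -m "feature: squashed into one commit"
git log --oneline                        # only "init" and the squashed commit remain
```

A later rebase on top of this history only ever has to resolve conflicts against one commit per merged PR, not against every wip step.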
I think this is a good approximation, but not hitting the bullseye.
Waiting idly to serve you is not how people act.
AI Agent harnesses are not LLMs.
Correct. LLMs are being trained outside of the agent harness.
LLMs are not being trained to act like people
They are not being trained to sing in the shower or take a coffee break ... they are not being trained to act like people.
Yeah.
I am using LLMs for hardening, but now I am setting myself up to get pwned, because I am not that deep into security myself.
Working on an ATProto-authed p2p communication layer as a Bevy plugin ... for WASM 2D/3D web apps/games.
Shitting my pants right now ...
Counter-example just dropped:
Adversarial review told me "this is good as is" after arriving at that conclusion.
Took a lot of hardening ... like A LOT ... but here we are ...
Thoroughly adversarial prompt in Gemini 3.1 with a fresh context.