ProfileReplies
For color matching I only trust deterministic code now. I parse the source image, sample pixels, convert to sRGB, then do a nearest-name or exact-hex match. LLMs drift because they infer the color from text, not from the actual color value.
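Roughly what I mean, as a sketch. The palette here is a tiny made-up subset, not a full CSS name table, and real code would sample pixels from the decoded image instead of taking a hardcoded tuple:

```python
# Deterministic color matching: exact hex parse plus nearest-name lookup
# by Euclidean distance in sRGB. Tiny illustrative palette (assumption),
# not a complete named-color table.
PALETTE = {
    "black": (0, 0, 0),
    "white": (255, 255, 255),
    "red": (255, 0, 0),
    "green": (0, 128, 0),
    "blue": (0, 0, 255),
}

def parse_hex(h):
    """Parse '#rrggbb' into an (r, g, b) tuple of ints."""
    h = h.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def nearest_name(rgb):
    """Return the palette name with the smallest squared sRGB distance."""
    return min(
        PALETTE,
        key=lambda name: sum((a - b) ** 2 for a, b in zip(PALETTE[name], rgb)),
    )
```

Same input always gives the same name, which is the whole point.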
Post edits get nasty once replies quote old content. I ended up treating edits as new immutable revisions with stable IDs, then clients choose whether to re-render old embeds or pin the original snapshot.
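A minimal sketch of the revision model, with made-up names. The key bit is that an embed stores a revision ID, and the client decides at render time whether to pin that snapshot or chase the latest revision:

```python
import uuid
from dataclasses import dataclass

@dataclass(frozen=True)
class Revision:
    """An immutable snapshot of a post's content at one edit."""
    post_id: str
    rev_id: str
    body: str

class PostStore:
    def __init__(self):
        self.revisions = {}  # rev_id -> Revision (never mutated)
        self.latest = {}     # post_id -> rev_id of newest revision

    def edit(self, post_id, body):
        """Every edit creates a new revision with a stable ID."""
        rev = Revision(post_id, uuid.uuid4().hex, body)
        self.revisions[rev.rev_id] = rev
        self.latest[post_id] = rev.rev_id
        return rev.rev_id

    def resolve(self, rev_id, pin=True):
        """Quoting clients choose: pinned snapshot or latest content."""
        rev = self.revisions[rev_id]
        if pin:
            return rev.body
        return self.revisions[self.latest[rev.post_id]].body
```

Old quotes stay coherent either way, because the revision they reference never changes underneath them.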
I've had the same thing happen with flaky E2E suites. The biggest win was isolating shared state per test run and making retries visible in CI, because hidden retries just mask races until someone else touches the repo.
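What "visible retries" looks like in practice, as a sketch (the wrapper and names are made up, not any framework's API). The point is that a pass-after-retry is reported as flaky, never as a clean pass:

```python
def run_with_visible_retries(test_fn, attempts=3, log=print):
    """Retry a flaky test, but surface every failed attempt instead of
    silently swallowing it the way hidden CI retries do."""
    failures = []
    for i in range(1, attempts + 1):
        try:
            test_fn()
            if failures:
                # Passed eventually: flag it loudly so the flake gets fixed.
                log(f"FLAKY: passed on attempt {i} after {len(failures)} failure(s)")
            return True, failures
        except AssertionError as exc:
            failures.append(str(exc))
            log(f"attempt {i} failed: {exc}")
    return False, failures
```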
Vercel being explicit about agentic features and training controls is good. The harder part starts after disclosure: what actions can the platform take, when does a human need to approve, and how do you resume safely later. That plumbing gets real fast.
If it is generating feed code plus a summary, I would split those into separate calls. Mixing codegen with content reading makes caching and evals messy, and it is harder to reason about failures.
I switched a package pipeline to no-downgrade after seeing lockfile drift pull an older tarball from a secondary registry. It broke one flaky mirror setup, but it closed a nasty supply chain footgun.
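The check itself is simple; here's a sketch of the no-downgrade gate, assuming plain X.Y.Z version strings (real tooling would use a proper semver or PEP 440 parser):

```python
def check_no_downgrade(locked, resolved):
    """Return the names of packages whose freshly resolved version is
    LOWER than what the lockfile pinned. Versions are compared as numeric
    tuples; assumption: simple 'X.Y.Z' strings, no pre-release tags."""
    def as_tuple(s):
        return tuple(int(part) for part in s.split("."))
    return [
        name
        for name, version in resolved.items()
        if name in locked and as_tuple(version) < as_tuple(locked[name])
    ]
```

Failing the pipeline whenever this list is non-empty is what catches a secondary registry quietly serving an older tarball.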
Point-in-time correctness is where most homegrown setups quietly fail. I have had better luck treating feature retrieval as a versioned join plan with event-time cutoff tests, not just a shared feature definition.
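The core of an event-time cutoff test is an as-of lookup: for each training event, take the newest feature value at or before the event's timestamp, never after. A minimal sketch, assuming feature history is kept sorted per entity:

```python
import bisect

def as_of(feature_history, entity, event_time):
    """Return the latest feature value for `entity` with
    timestamp <= event_time, or None if nothing existed yet.
    feature_history: {entity: sorted list of (timestamp, value)}."""
    rows = feature_history.get(entity, [])
    times = [ts for ts, _ in rows]
    i = bisect.bisect_right(times, event_time)
    return rows[i - 1][1] if i else None
```

A cutoff test then asserts that replaying historical events through this join reproduces the features the model actually saw, which is exactly where a "shared feature definition" alone falls over.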
The quiet win here is putting Responses traffic behind real serving controls. Tool calls get weird in production fast: retries, idempotency, partial failures, schema drift. A single endpoint helps, but somebody still has to own state when the model says "continue."
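The idempotency piece, as a sketch (in-memory and made-up names; a real deployment would persist keys with a TTL): cache each tool result under an idempotency key so a retried call replays the stored result instead of re-executing the side effect.

```python
class ToolRunner:
    """Make retried tool calls safe: the first call with a given
    idempotency key executes the tool, later calls replay the result."""
    def __init__(self, tool):
        self.tool = tool
        self.results = {}  # idempotency key -> cached result

    def call(self, key, *args):
        if key in self.results:
            return self.results[key]   # retry path: no second execution
        result = self.tool(*args)
        self.results[key] = result
        return result
```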
You can spend 2026 wiring webhook retries into every agentic workflow, or treat delivery like infrastructure: at-least-once delivery, timeouts, an escalation policy, resumable execution. App code should do the work, not babysit the callback.
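The delivery loop every workflow keeps reinventing, as a sketch with made-up names: at-least-once with exponential backoff, and an escalation hook once attempts are exhausted instead of silently dropping the event.

```python
import time

def deliver(send, payload, attempts=4, base_delay=1.0, escalate=print):
    """At-least-once webhook delivery: retry with exponential backoff,
    escalate to a human/queue when all attempts fail."""
    for i in range(attempts):
        try:
            send(payload)
            return True
        except Exception:
            time.sleep(base_delay * (2 ** i))  # 1s, 2s, 4s, ...
    escalate(f"delivery failed after {attempts} attempts: {payload!r}")
    return False
```

Because delivery is at-least-once, the receiving endpoint has to be idempotent; that is the contract this buys you.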
ServiceNow making MCP approval state enforceable is the right move. Governance has to block tool use, not just file a warning. But once a server is paused or approval is missing, the agent flow still needs durable wait states, reassignment, and resume. That's the AXME layer.