New video! We're looking at how YouTube linguistically disarms us all in the false name of safety. Social media policies are explicitly designed to protect the worst actors while punishing only those who speak up against them. YouTube is the online predator's best friend. youtu.be/40uR9nV4C9A
Nvidia H200 GPU has 141gb of VRAM. So you need at least 8 of these to run Sonnet (accounting for overhead in contexts, users on the cluster, etc). But the H200 costs about $30k at the lowest end. Which means that a single instance of Sonnet needs $240k of GPUs to run.
Then you can add all the other costs - the servers, the buildings, the power, staffing, maintenance. And I have honestly no idea how to calculate that except to say probably add $20k for the thing you put the GPU in. So $260k in hardware alone to run a single instance of Sonnet. Just the server.
A friend of mine at a big publisher told me the other day he'd blown through $3500 of Opus in a week. Is he getting the job done? I guess. Could he hire someone for that money? Absolutely. Will he be spending $3500 a week every week? Doubt it.
So then scale that by however much Anthropic need to serve 300k enterprise customers, something like 80% of their revenue, who pay for actual usage (about $3 per million tokens in and $15 per million tokens out). You might start to get some idea about why your $200 a month doesn't mean shit to them.
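The thread's numbers chain together into one back-of-envelope estimate. Here's a minimal sketch that reproduces it using only the figures stated in the posts (141 GB per H200, ~$30k per card, a guessed ~$20k for the host server, $3/$15 per million tokens); the parameter count and all prices are the author's assumptions, not measured data, and real deployments batch many users per cluster, so actual unit economics differ.

```python
import math

# All figures below are the thread's own assumptions, not measured data.
H200_VRAM_GB = 141        # VRAM per Nvidia H200
H200_PRICE = 30_000       # low-end price per card, USD
SERVER_PRICE = 20_000     # rough guess for the host chassis, USD
MODEL_VRAM_GB = 1000      # ~1tn params at ~1 GB per 1bn params

gpus_needed = math.ceil(MODEL_VRAM_GB / H200_VRAM_GB)   # 8 cards
gpu_cost = gpus_needed * H200_PRICE                     # $240k in GPUs
hardware_cost = gpu_cost + SERVER_PRICE                 # $260k total

# API pricing from the post: $3 per million input tokens,
# $15 per million output tokens.
def api_cost(tokens_in: int, tokens_out: int) -> float:
    return tokens_in / 1e6 * 3 + tokens_out / 1e6 * 15

print(gpus_needed, gpu_cost, hardware_cost)
print(api_cost(1_000_000, 200_000))  # 3.0 in + 3.0 out = 6.0
```

The `math.ceil` matters: 1000 / 141 is about 7.1, so you round up to 8 whole cards, which is where the $240k figure comes from before any server, power, or staffing costs.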
come on. you can create custom feeds without knowing how to code! there was literally no way to do that before they slapped an LLM on the back end!
YouTube video by Stephanie Sterling: "YouTube: The Online Predator's Best Friend" (youtu.be)
Commander Sterling
Okay so before I go back to work here's a thought thread about the hardware economics of AI. Claude Sonnet is at least a 1tn-parameter model. Generally and loosely speaking, LLMs use 1gb of VRAM per 1bn parameters, meaning that Claude requires at least 1000gb of VRAM to run.
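That rule of thumb (~1 GB of VRAM per 1bn parameters, roughly what 8-bit weights would need before KV-cache and activation overhead) can be sketched directly; note the 1tn parameter count is the post's guess, not a published figure.

```python
import math

def vram_gb(params_bn: float, gb_per_bn: float = 1.0) -> float:
    # Thread's rule of thumb: ~1 GB of VRAM per 1bn parameters
    # (ignores KV cache and activation memory).
    return params_bn * gb_per_bn

def gpus_required(model_gb: float, gpu_gb: float = 141) -> int:
    # 141 GB is the H200's VRAM, per the thread; round up to whole cards.
    return math.ceil(model_gb / gpu_gb)

model = vram_gb(1000)                # 1tn-parameter guess -> 1000 GB
print(model, gpus_required(model))   # 1000.0 8
```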
i'm sure your engineers are thrilled you just made it sound like AI does their whole job for them.
also you should stop talking you're really bad at it.
Robots Make Games