Vittelius@feddit.org to Fuck AI@lemmy.world · English · edited 2 days ago
consequences of the current AI induced rise in hardware prices (external link: feddit.org)
ClockworkOtter@lemmy.world · 2 days ago
What services can LLM-AI providers offer that would otherwise require high RAM usage at home? I feel like people who do home video editing, for example, aren’t going to be asking ChatGPT to splice their footage.
cron@feddit.org · 2 days ago
Running an LLM on your own hardware.
ClockworkOtter@lemmy.world · 2 days ago
I’m way out of the loop on that. Is that more than just a hobby?
bthest@lemmy.world · English · edited 2 days ago
Slow as fuck unless you have a monster rig. Doing a basic job is like rendering a 4K 120 FPS video. Text comes out like an 1890s ticker-tape telegram.
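For a rough sense of why running an LLM locally eats so much RAM (and why it crawls on anything but a monster rig), the weights alone take roughly parameters × bytes-per-parameter; quantization shrinks that. A back-of-the-envelope sketch (model sizes and quantization widths here are illustrative examples, and real usage adds KV cache and runtime overhead on top):

```python
def model_ram_gib(params_billion: float, bytes_per_param: float) -> float:
    """Approximate GiB needed just to hold an LLM's weights in memory."""
    return params_billion * 1e9 * bytes_per_param / 2**30

# Example figures: a 7B model in fp16 (2 bytes/param) vs 4-bit quantized
# (0.5 bytes/param), and a 70B model in fp16.
print(f"7B  @ fp16 : {model_ram_gib(7, 2):.1f} GiB")    # ~13.0 GiB
print(f"7B  @ 4-bit: {model_ram_gib(7, 0.5):.1f} GiB")  # ~3.3 GiB
print(f"70B @ fp16 : {model_ram_gib(70, 2):.1f} GiB")   # ~130.4 GiB
```

Which is why a 4-bit 7B model fits on an ordinary 16 GB machine while anything in the 70B class needs server-grade memory.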