Once a vibrant platform for artists, DeviantArt is now buckling under the weight of bots and greed—and spurning the creative community that made it great.
There’s some stuff image-generating AI just can’t do yet; it just can’t understand some things. A big problem seems to be referring to the picture itself, like position or its border. Another is combining things that don’t usually belong together, like a skin of sky. Those are things a human artist or designer does with ease.
there’s some stuff image generating AI just can’t do yet
There’s a lot.
Some of it doesn’t matter for certain things. And some of it you can work around. But try creating something like a graphic novel with Stable Diffusion, and you’re going to quickly run into difficulties. You probably want to display a consistent character from different angles – that’s pretty important. That’s not something that a fundamentally 2D-based generative AI can do well.
On the other hand, there’s also stuff that Stable Diffusion can do better than a human – it can very quickly and effectively emulate a lot of styles, given a sufficient corpus to learn from. Years back, I spent a while reading research papers on simulating watercolors. Specialized software could do a kind of so-so job. Stable Diffusion wasn’t even built for that, and with a general-purpose model, also not specialized for watercolor, it can already turn out work that looks rather more impressive than those dedicated software packages did.
I think Corridor Digital made an AI-animated film by hiring an illustrator (after an earlier attempt with a general dataset) to “draw” still frames from video of the lead actors, with Stable Diffusion generating the in-betweens.
Think of an episode of any animated series with countless handmade backgrounds. Good luck generating those with any sort of consistency or accuracy; you’ll end up calling for an artist who can actually take instructions and iterate.
It’s hard, even impossible, to generate an image of a person doing a handstand. All the models assume a right-side-up person.
This hasn’t been true for months at least. You really have to check week to week when dealing with things in this field.
Think of an episode of any animated series with countless handmade backgrounds. Good luck generating those with any sort of consistency or accuracy; you’ll end up calling for an artist who can actually take instructions and iterate.
We’ll soon be hearing that only Luddites care about continuity errors