
twovests wrote

I've worked in "AI" for a while (machine learning, primarily with deep neural networks), and things are moving way faster than I imagined they would.

I assumed "fine-tuning models on a small corpus from one artist" would be something four years down the road, but in the 30 minutes since I wrote this post, I learned it's already happening in practice: https://petapixel.com/2022/12/21/photos-used-to-generate-ai-images-for-client-so-photographer-never-shoots-again/

The tone of this article is so fucked up. "Wow! From just a few samples, this photographer was quickly put out of work! Amazing! I understand both sides of the coin, buy my AI art NFTs!"

The most technologically surprising thing to me:

After the shoot was done, Karpinnen collected the model’s own selfies to better train the image synthesizer model. In total, the Finnish photographer has 20 different images of each model.

Training a model to produce even slightly interesting results usually requires a vast number of samples. Fine-tuning an existing model traditionally needs only a fraction of that, but the fraction is still usually large. It's really quite scary that a corpus this tiny is being used effectively.
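To make the "fine-tuning needs far fewer samples" point concrete, here's a minimal sketch of the general idea: freeze a big "pretrained" feature extractor and fit only a small head on a 20-sample corpus (the same count as the 20 photos per model in the article). Everything here is a stand-in I made up for illustration: the frozen random projection plays the role of a pretrained network, and the labels are a toy task, nothing like the actual image-synthesis setup being discussed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a large pretrained feature extractor: a fixed random
# projection whose weights stay frozen during fine-tuning.
W_frozen = rng.normal(size=(64, 8))

def features(x):
    # "Pretrained" features: frozen projection plus a nonlinearity.
    return np.tanh(x @ W_frozen)

# A tiny fine-tuning corpus: 20 samples, mirroring the 20 images
# per model mentioned in the article. Labels are a toy task.
X = rng.normal(size=(20, 64))
y = (X[:, 0] > 0).astype(float)

# Fine-tune only a small linear head (8 weights + bias) with
# plain gradient descent on the logistic loss.
w = np.zeros(8)
b = 0.0
lr = 0.5
F = features(X)  # frozen features are computed once
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))  # sigmoid predictions
    grad = (p - y) / len(y)                  # logistic-loss gradient
    w -= lr * F.T @ grad
    b -= lr * grad.sum()

preds = (1.0 / (1.0 + np.exp(-(F @ w + b))) > 0.5).astype(float)
accuracy = (preds == y).mean()
```

Only 9 parameters get trained here, which is why 20 samples can be enough; the heavy lifting is assumed to live in the frozen extractor. The real systems (e.g. DreamBooth-style fine-tuning of diffusion models) update far more weights than this, but the small-data leverage comes from the same place: starting from a model that already knows a lot.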

I searched a bit more and found this is already commonplace. People are fine-tuning models on a specific artist's work: https://twitter.com/jdebbiel/status/1601663197031075840