
AI art endgame

Submitted by twovests in just_post (edited)

  • 2022: Accessible art tools using DNNs are proliferating, with weights that run on consumer devices.
  • 2023: First court cases determine Stable Diffusion and others are generally fair use, but models fine-tuned on a specific artist's work generally aren't considered fair use. Lots of grey area in between. EFF starts writing vague warnings like this post. Mickey Mouse enters public domain.
  • 2024: By now, DNN tools are refined and common in photo-editing apps, and market pressure is felt. These tools include:
    • Smart upscale and infilling
    • Generating brushes, filters, templates, materials and assets like textures from prompts
    • Real-time diffusion models allow people to play with the "knobs" for very finely controlled generation
    • Very little "text-to-image" in the industry
  • 2025: Grass-roots campaigns to expand copyright law to address AI art get funding from large corporations such as Disney.
    • Adobe buys Pixelmator, Serif, and other competing software.
  • 2026:
    • Copyright law now requires that models be trained on appropriately licensed data (i.e., public domain material, or content the IP holder owns).
    • Many "AI art features" mentioned above are removed, or replaced with weaker weights.
    • Disney buys Adobe
    • Mickey Mouse copyright gets extended again. Disney successfully sues studios and artists who violated the copyright.
    • Various documentaries are silently removed from streaming services.
  • 2027:
    • Disney announces "DisneyDream" model weights, trained on the massive corpus of work they own
    • Integrates nicely into various art software, but requires a license cost
    • Independent artists and studios around the world pay $10/user/month for access to the weights, which are considered almost a necessity for creative work.
    • Photoshop now backs up all your work to the cloud. There is no opt-out
  • 2028:
    • Fair-use cases dating from 2026 bubble up to the conservative US Supreme Court. Fair use gets gutted in the US
    • Parody generally considered illegal
    • The CEO of Linux is sued for copyright infringement
  • 2029:
    • Last Android phone is sold; the AOSP ends after Oracle successfully relitigates against Google
    • Disney buys Google
    • Only about 20 public libraries remain in the US
    • Amazon is now considered a public utility, but instead of getting regulated, they're just given tax money
    • Disney releases "DisneyDream 4.0", built using "usage information" (saved photos) from users of the photo editing software they own

Comments



twovests wrote

I've worked in "AI" for a while (machine learning, primarily with deep neural networks) and things are moving way faster than I imagined they would.

I assumed "fine-tuning models on a small corpus from one artist" would be something four years down the road, but in the 30 minutes since I wrote this post, I learned this is happening in practice: https://petapixel.com/2022/12/21/photos-used-to-generate-ai-images-for-client-so-photographer-never-shoots-again/

The tone of this article is so fucked up. "Wow! From just a few samples, this photographer was quickly put out of work! Amazing! I understand both sides of the coin, buy my AI art NFTs!"

The most technologically surprising thing to me:

After the shoot was done, Karpinnen collected the model’s own selfies to better train the image synthesizer model. In total, the Finnish photographer has 20 different images of each model.

Training a model to produce even slightly interesting results usually requires a vast number of samples. Fine-tuning that model traditionally requires only a fraction of that, but the fraction is still usually large. It's really quite scary that a tiny corpus is being used effectively.
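To illustrate why a tiny corpus can be enough, here's a toy numpy sketch (my own hypothetical illustration, not the photographer's pipeline or anything like DreamBooth-style diffusion fine-tuning): when the big "pretrained" part of the network stays frozen, only a handful of parameters actually need training, so ~20 samples can go surprisingly far.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a large pretrained model: a frozen feature extractor.
# In real fine-tuning these weights come from training on a vast corpus.
W_frozen = rng.normal(size=(64, 8))

def features(x):
    return np.tanh(x @ W_frozen)  # frozen; never updated below

# Tiny fine-tuning corpus: 20 samples, echoing the "20 images per model".
X = rng.normal(size=(20, 64))
true_w = rng.normal(size=8)                    # hidden labeling rule (toy)
y = (features(X) @ true_w > 0).astype(float)   # binary labels

# Only this small 8-parameter head gets trained.
w_head = np.zeros(8)
for _ in range(500):
    p = 1 / (1 + np.exp(-(features(X) @ w_head)))  # sigmoid predictions
    grad = features(X).T @ (p - y) / len(y)        # logistic-loss gradient
    w_head -= 0.5 * grad                           # gradient-descent step

acc = float(np.mean((features(X) @ w_head > 0) == (y == 1)))
print(f"fit on just {len(X)} samples, train accuracy {acc:.2f}")
```

The point of the sketch: the pretrained part already encodes most of the knowledge, so the data-hungry phase happened once, elsewhere, and the per-artist (or per-person) adaptation only has to nail down a few remaining degrees of freedom.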

I searched a bit more, and found this is already commonplace. People are fine-tuning models on specific artists' work: https://twitter.com/jdebbiel/status/1601663197031075840


Moonside wrote

Tbh I have the feeling that the real dystopian parts of this will be the social ones; image generation brrr is really the innocuous part to me. Like, imagine Disney automating in-betweening work using patented proprietary methods, outcompeting every other animation studio in the business. I fear the backlash will lead to an expansion of copyright, whereas, as a provisional demand at least, we could be demanding something cooler like basic income.

All that said it's cool to have someone deep in the topic around here.


twovests wrote

Yeah, this is exactly what I fear. Disney gets stronger, copyright gets stronger, and individual artists lose out even more.

AI art generation is one of the most comprehensible horrors, but there are plenty which are harder to comprehend and which are more horrific!