
twovests wrote (edited )

A bit of both.

The best models still absolutely fuck up code in the same way they do prose, but GitHub's Copilot gets it right 90% of the time. If you're writing in a new language, you can end up with an MVP waaaay faster than you would've before. If you have a method signature with descriptive names, it can usually fill it out for you. If you're a product manager, instead of hiring a team of data scientists, you might just hire one.
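To illustrate the signature-completion workflow: the stub below is the kind of "prompt" Copilot works from, and the body is the kind of completion it tends to suggest. The function name and body are my own hypothetical example, not from any real codebase or a guaranteed Copilot output.

```python
# Hypothetical example: a descriptively named stub that a tool like
# Copilot can usually fill out. The signature and docstring act as
# the prompt; the body is a plausible suggested completion.
def count_words(text: str) -> dict[str, int]:
    """Return a mapping from each whitespace-separated word to its count."""
    counts: dict[str, int] = {}
    for word in text.split():
        counts[word] = counts.get(word, 0) + 1
    return counts

print(count_words("to be or not to be"))
# {'to': 2, 'be': 2, 'or': 1, 'not': 1}
```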

I think it's fair to say the supply-vs-demand of programming expertise changed a lot over 2023, purely because of these tools. That's my bread-and-butter and it's depressing.

With Google and other search engines turning to garbage, communities/support becoming increasingly dependent on Discord, and StackOverflow also turning to garbage (with AI answers), I found myself just giving up and checking ChatGPT and Copilot instead.

A lot of my coworkers are vocally dependent on these models, and suddenly I was one of them too.

AFAICT, this kind of replacement isn't happening wholesale for artists. People are using AI models in asset-creation pipelines and for decoration and marketing materials, and people are using models instead of commissioning art. But AFAICT there isn't the industry-wide "ask ChatGPT rather than your coworkers or the docs" shift that I'm seeing among programmers.

This makes sense, because "How do I unwrap this observable?" has one correct answer, whereas no model can reliably, say, "Create concept art for an 80s-inspired retro desk fan prop in our South African inspired animal people world".

TLDR:

  • Current models still mess up, but get things right very often.
  • Programmers are definitely becoming dependent on these models.
  • Programmers are definitely being replaced, and my understanding is that artists aren't being replaced as widely yet.