
i used to do machine learning ("AI") research, and i hate how bad the reputation of AI has gotten

Submitted by twovests in artificialintelligence (edited)

during my time as a phd student i did machine learning research, primarily in cybersecurity and resource allocation for autonomous vehicles. i also did some educational work in cryptography and cryptocurrency

cryptocoins:

the thing with cryptocurrencies is that they are useless. "snake oil" is a fantastic analogy: (1) it has some rare applications of genuine use, and (2) it was almost definitely pushed by white scam artists under east asian monikers.

in the most generous interpretation, cryptocurrencies/blockchains have extremely niche, contrived use-cases that are almost always better solved by existing technologies

in the few cases cryptocurrencies offer a sensible solution, it's impossible for those to get adopted when 1% of the people in the field are trying to solve problems and the other 99% are scam artists who have no business trying to build a public-facing interface for, like, local notaries

ai:

"AI" might as well mean "computer". people have been surprised by what machines could do ever since we made them, and it was game over once we started calculating.

pre-2000s, we thought we could give computers a list of facts and a list of rules, and once we put enough in, we could ask the program to derive any new fact. this is what we used to call "AI", and is now called "symbolic AI". it never really worked though!
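(to make that concrete, here's a minimal sketch of the "facts + rules" idea in python. the family facts and the single grandparent rule are made up purely for illustration:)

    # minimal "symbolic AI" sketch: facts + a rule, applied until nothing new can be derived
    facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

    def grandparent_rule(known):
        # if X is a parent of Y and Y is a parent of Z, derive that X is a grandparent of Z
        derived = set()
        for (r1, x, y) in known:
            for (r2, y2, z) in known:
                if r1 == "parent" and r2 == "parent" and y == y2:
                    derived.add(("grandparent", x, z))
        return derived

    # forward-chain: keep applying the rule until no new facts appear
    while True:
        new = grandparent_rule(facts) - facts
        if not new:
            break
        facts |= new

    print(("grandparent", "alice", "carol") in facts)  # True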

"machine learning" basically refers to probabilistic models and optimization methods that do away with statistical / formal rigor, with an emphasis on empirical results.

while symbolic AI was having a lot of difficulty, neural networks were having a worse time. tbh NNs were considered a joke for a while, but then Gamers (and then cryptocoin people) made GPU manufacturers invest more heavily in GPUs, and the neural network people had a field day with it. someone wrote a GPU driver, and suddenly neural networks were millions of times faster

(side note: neural networks are built on linear algebra running on GPUs, and just about everything else in machine learning is built on linear algebra too. so, the gains were fantastic across all the non-NN machine learning as well.)
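(a minimal sketch of where that speedup comes from, assuming pytorch is installed and a CUDA GPU is available. the matrix sizes are arbitrary:)

    # time the same matrix multiply on CPU and (if available) on a GPU
    import time
    import torch

    a = torch.randn(4096, 4096)
    b = torch.randn(4096, 4096)

    start = time.time()
    a @ b
    print("cpu:", time.time() - start, "seconds")

    if torch.cuda.is_available():
        a_gpu, b_gpu = a.cuda(), b.cuda()
        torch.cuda.synchronize()   # make sure the copy to the GPU has finished
        start = time.time()
        a_gpu @ b_gpu
        torch.cuda.synchronize()   # wait for the multiply to finish before stopping the clock
        print("gpu:", time.time() - start, "seconds")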

"deep learning" (machine learning with neural networks with many layers) became a big thing, and since 2012 there was a lot of focus on solving old problems with deep learning.

There are a lot of extremes that have hit the news, like "AI tentacles" and "AI protein folding" and "AI surveillance" and "AI recognition of ethnic minorities".

But there are also a lot of mundane problems, like "what's the ideal bus route and schedule for this campus?" or "what's the ideal allocation of city-provided bikes?" or "can we improve on this PID controller for weapons manufacturing?"

I can proudly say that a lot of my research was in the mundane and good applications without a lot of potential for dual-use (i.e. evil) advances.

Around the time I started my PhD, a lot of "facial recognition of Han vs Uyghur Chinese" papers came out. It took about two years for the genocide of Uyghur people to become a mainstream concern (and it's still ongoing! it's AI-powered! and nobody talks about it!)

But I want to say...

ai bros and anti-ai-bros frustrate me endlessly:

I constantly see people with no subject-area knowledge whatsoever make the worst arguments in the world. What's frustrating is that the Evil Potential for generative AI is so, so, so far beyond AI art that it seems wild that people are arguing over the definition of art while we have AI-powered ethnic minority organ harvesting programs that have been running for years.

if i have to hear someone say "AI art is/isn't art because <hilariously incorrect description of how the latest txt2img model works, that also has no bearing on the definition of art>" i'm going to kermit. they are 100% as dumb as the "AGI is just around the corner and will kill us all" crowd

either way, AI is replacing programmers way, way before it's replacing artists

So, I want to say two things:

  1. "AI" is not like cryptocurrencies, because AI is actually effective. the metric for "does AI have applications" is not how well you can use the latest model to replace a graphic designer

  2. The ethics of "AI art" pale in comparison to the horrors of all the rest of AI-- both those ongoing and those yet to come.

side note, on AGI:

the lesswrong/EA "AGI is coming and will end us all" take is kind of stupid. it's like saying "we'll soon make a bomb that can EXPLODE THE EARTH."

  1. sure, we could probably make a bomb in the next 10 years that can explode the earth, but

  2. the bombs we already have are scary enough, and wielded by irresponsible enough people, that i'm not even afraid of the bombs that could feasibly exist.

that is to say, "AGI" (while super poorly defined) is feasible within our lifetimes, and it would be very very bad. Remember that "intelligence" means nothing: We just have very complicated machines that we don't understand, and they were built by trying to optimize a function that a scientist wrote to describe their values. An "AGI" would be a machine that tries to optimize some formal definition of utility, and it's very likely such a machine would be very very very bad for a lot of reasons. Unfortunately, a lot of them are basically the plot of whatever your favorite scifi book about evil AI is. (Even the stupid ones, like "evil robot takeover".)

But I'm worried about the torment nexuses that we already have, not the ones that might happen in the wildest fantasies of the same people who are afraid of "Roko's Basilisk"

side side note, on roko's basilisk:

hahagagagagaghhhahahaha

hahahahahahahahah

jesus christ. you can safely ignore anybody who is afraid of roko's basilisk

the tldr: in this matter, the centrist-sounding take is basically correct. "AI is good and bad, but it isn't snake oil"

the main evil is still militarized capitalism, and the scariest thing is still that earth is becoming uninhabitable, and the second scariest thing is still that we have enough nukes that we can make it uninhabitable instantly

Comments



anethum wrote

AI is replacing programmers way, way before it's replacing artists

aren't large language models messing up code much in the same way they mess up prose? or do you mean other types of NN?

(did you even mean that sentence to imply imminence in the first place?)


twovests wrote (edited)

A bit of both.

The best models still absolutely fuck up code in the same way they do prose, but GitHub's Copilot gets it right 90% of the time. If you're writing in a new language, you can end up with an MVP waaaay faster than you would've before. If you have a method signature with descriptive names, it can usually fill it out for you. If you're a product manager, instead of hiring a team of data scientists, you might just hire one.
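(As a made-up example of the "descriptive signature" case: given just the name, type hints, and docstring below, a code model will usually produce something close to this body on its own.)

    # hypothetical example: a signature descriptive enough that a model can fill in the body
    def median_of_sorted(values: list[float]) -> float:
        """Return the median of an already-sorted, non-empty list of numbers."""
        n = len(values)
        mid = n // 2
        if n % 2 == 1:
            return values[mid]
        return (values[mid - 1] + values[mid]) / 2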

I think it's fair to say the supply-vs-demand of programming expertise changed a lot over 2023, purely because of these tools. That's my bread-and-butter and it's depressing.

With Google and other search engines turning to garbage, community support becoming increasingly dependent on Discord, and StackOverflow also turning to garbage (with AI answers), I found myself just giving up and checking ChatGPT and Copilot instead.

A lot of my coworkers are vocally dependent on these models, and suddenly I was one of them too.

AFAICT, this kind of replacement isn't happening wholesale for artists. People are using AI models in asset-creation pipelines and for decoration and marketing materials, and people are using models instead of commissioning art. But AFAICT there's no artist equivalent of the industry-wide "ask ChatGPT rather than your coworkers or docs" shift that I'm seeing among programmers.

This makes sense, because "How do I unwrap this observable?" has one correct answer, whereas you really can't find a model that can, say, "Create concept art for an 80s-inspired retro desk fan prop in our South African inspired animal people world".

TLDR:

  • Current models still mess up, but get things right very often.
  • Programmers are definitely becoming dependent on these models.
  • Programmers are definitely being replaced, and my understanding is that artists aren't being replaced as widely yet.

Moonside wrote

Yeah I definitely think AI doomerism/boosterism is a distraction from both climate change and nuclear threats.


twovests wrote

This is 100% right. It's crazy seeing "effective altruism" people center around "AGI" (which is a stupid word btw) for no reason whatsoever


toasthaste wrote

I am someone who is quite pro-EA in the sense of "charity donations can go a lot farther if you apply them thoughtfully, I love it when fewer children die from malaria", which in my mind is/should be the core Thing of EA, and I deeply deeply resent the AGI people sucking all the oxygen out of the room and torching piles and piles of goodwill the way they do. So please don't read the following as a defense of the AI doomers and longtermists, I find them very tedious and frustrating:

  1. The reason EA types don't tend to touch on climate change is because of its core Thing which is trying to find and address problems that are "important, neglected, and tractable". Stopping/reversing climate change is both Important* and Tractable, but it's really not Neglected-- there are tons of really really smart people working on it, it's far from low hanging fruit, and that means the floor for meaningfully contributing to those efforts is pretty high, diminishing returns and all; Altruism in this area will be less Effective. This is definitely a bullet to bite but I think it basically makes sense.

  2. I think EA was at one point trying to do stuff about nuclear threats? Maybe it still is, idk, my understanding was that it ended up really not being very tractable, like, how do you know you're reducing the overall threat of nuclear war and proliferation? What does that look like? The most tractable part of that that I can think of (getting existing nuclear weapons dismantled) is not very directly solvable with money, which is the primary tool EA uses (since the original point is making a donated dollar do as much good as possible)

  3. Now you might ask, well, how does AI extinction risk rank in important/neglected/tractable, I mean for one how tractable could "AI alignment" possibly be? To which the AI people will say "shhhhhhhhhhhhhhhh"

* (The AI people will say it's not that important because climate change isn't going to drive humans extinct, let alone wipe out all life on earth/the galaxy the way they say AI will, it's just going to make lots of people die and make life worse for everyone who survives.)

Anyway even if EA never does anything tangibly good ever again (unlikely imo, the We Want Less Kids To Die From Malaria faction is still going strong), I think they at least get credit for GiveDirectly, which lets you give direct cash transfers to people living in extreme poverty around the world with no strings attached, so they get to decide where it's best spent, rather than someone half a world away dictating that to them. I highly recommend anyone and everyone donate money to GiveDirectly if they have some spare cash lying around.

In conclusion, sorry if you already knew all this or it's an illegible wall of text, it turns out I was in an infodumping mood, what can ya do.


twovests wrote

Hey!! Replying on mobile so I'll be sparse, but ya! I basically agree with you, I read your comment in good faith and I knew most of this but I appreciate it anyway. Good + informative comment, esp for the shoutout to GiveDirectly.

I used to identify as an effective altruist for basically all the same reasons you are pro-EA!