Uh oh, AI!
Mar. 13th, 2024 09:59 am
I'm as worried about AI as the next person, if not on my behalf then on that of my children's generation - but the discussions I see on social media, particularly among writers, artists and other creative people, often miss a couple of things that I think important, so I'm writing them here.
First they came for the Luddites
One is fairly local to the discussions themselves rather than the general issue - they're drenched in classism. AI in the broadest sense has been taking people's jobs since punch-carded Jacquard looms stole jobs from individual craftsmen and drove them to become exploited employees in the mills of the industrial revolution, but somehow it only becomes a "problem" when it threatens the jobs of middle-class novelists? Please.
There was a good example of this on a programme I heard on Radio 4 the other day (which partly prompted this post), where the discussion turned to AI-generated film actors and whether they might make real actors redundant. As an example of ways in which AI could actually be helpful in that profession, someone cited elaborate or prosthetic make-up: the AI could be trained on the actor's face just once, making it unnecessary to apply the make-up every day of the shoot. Nice for the actor and for the studio's bank account - but no one stopped to mention that the saving comes out of the pockets of the make-up artists and technicians. I see this a lot.
The God of the Gaps Redux
Also evident in that programme (but also my Facebook page, etc.) is the trope of setting red lines, which then get crossed, only to be replaced with other red lines, ad infinitum. For example: "A computer will never be able to master English grammar. Oh, now it can? Then a computer will never be able to invent a funny joke. Oh, now it can? Then a computer will never be able to write a moving short story. Oh, now it can? Then a computer will never be able to, etc. etc."
This reminds me very strongly of the so-called God of the Gaps of the late nineteenth century, the rearguard action fought by some Christians to find something that could not be explained by science. The trouble is and was, of course, that science often found ways to explain the supposedly inexplicable - e.g. the evolution of eyes - resulting in the search for the ever-smaller gaps in explicability where God might possibly be found. Doesn't that sound like a lot of AI debates to you, too?
Of course, it's asking the wrong question. We should care, not about what computers can or can't create, but how we relate to their creations. If I showed you two poems, and told you that one was written by a human, the other by a computer, how would you read them? My guess is that many people would scour them for "clues" betraying their origin, lines or phrases of which they can declare: "No computer could have written that" or "No human would have written that."
Why? Is it because they're attempting to show the technical limits of AI? Not really - it's because they want to make a connection (intellectual, emotional) with another human consciousness. As I wrote in another place: "The idea of a text that lacks intentionality is troubling to them; a piece of music generated by a computer, however beautiful 'in itself', will be less satisfying than an identical piece of music written by a human composer" (Literary Studies Deconstructed, 114). In other words, it's not the music (or poem) itself that's important, but the consciousness assumed to lie (or not lie) behind it.
If computers achieved consciousness as humans understand it, and humans accepted that fact, then the problem I set with the two poems would lose much of its point. Then, however, we would have the much bigger problem of sharing the planet with a superior intelligence. Hopefully they'll find us cute, and keep us as pets.
(no subject)
Date: 2024-03-13 08:55 pm (UTC)
"If it were up to the MPs, or the general voters, the choice would surely be Sunak, who seems slightly less batty. But it's up to the party activists, who probably prefer Truss for the same reason."
(no subject)
Date: 2024-03-13 05:48 pm (UTC)
The question of how we're living and how we intend to live, and what to do with AI, is especially relevant in light of the tremendous resources AI (well: LLMs) takes (how thirsty it is for water, how much electricity it requires). Do we want to exhaust and heat Earth further in order to automate processes, so a handful of humanity can live vacuously and well while most of us suffer?
(no subject)
Date: 2024-03-13 08:42 pm (UTC)
An excellent question.
(no subject)
Date: 2024-03-13 09:00 pm (UTC)
Re point 2, there's also a God of the Anti-Gaps, the one who says that some desirable innovation is 20 years off, and a decade later it's still 20 years off. And so on.
(no subject)
Date: 2024-03-13 09:15 pm (UTC)
We've lived with point 2 most of my life. Natural language processing, for example, turned out to be far more contextual, ambiguous and complex than its early proponents initially thought/hoped. But things have changed dramatically even in the last few years, and will do so more in the future, rather quickly. I feel like we're more or less at the point they were in 1900 re. powered flight. Looking back at a century of failed attempts, one would be forgiven for scoffing at new ones - but one would be wrong.
However, I am not a professional prophet, despite being without honour in my own country.