The Guardian (USA)

The rise of Morgan Wallen, America’s controversial country music star

- Adrian Horton

For the past six weeks, Miley Cyrus’s Flowers, the most dominant song on the Billboard charts, was decently ubiquitous, as much as one can determine song ubiquity in my particular bubble of Brooklyn. I heard it in Ubers, at the nail salon, during at least two pop-ins to Duane Reade. References to it were all over Twitter and whatever TikTok stream reaches me. The same cannot be said for its usurper on the Billboard Hot 100: Last Night by Morgan Wallen, a 29-year-old country singer from eastern Tennessee. The song is one of five that he has notched in the top 10 this week, all from his third album, One Thing at a Time, released earlier this month. (In 2023, Billboard charts are calculated by a combination of radio airplay, sales and streaming numbers.) He’s the first core country act to achieve the mainstream music feat – which, depending on where you are, is either obvious or head-scratching.

Wallen’s career presents a conundrum: he is not only the biggest star in country music, but one of the biggest stars in pop music, period. One Thing at a Time, which runs at nearly two hours and has 36 songs, had the largest streaming debut of any album so far in 2023, according to Spotify. His 2021 release, the 30-song Dangerous: The Double Album, was the third most streamed album in the US in 2022, behind Bad Bunny and Harry Styles and ahead of Taylor Swift’s Midnights.

And yet his popularity is one of the starkest examples of cultural silos in the US. Loosely, what Paramount’s Yellowstone is to TV – the most popular show on cable television with strong viewership in smaller markets but largely ignored in coastal cities – Wallen is to popular music: regional, segmented, massively recognizable to some and unheard of to others. You either listen to Morgan Wallen or you don’t.

Wallen’s appeal makes sense, at least for a longtime country music listener like me. His voice is twangy and husky yet commercial, the woodgrain raspiness of Chris Stapleton filtered through the machine of reality television; Wallen first gained recognition as a contestant on The Voice in 2014, when he was 20 years old and working as a landscaper in his home town of Knoxville, Tennessee. His music is almost aggressively median bro country – beer, the Bible, women, whiskey, regret, reclaiming the word “redneck”.

It’s at best charming and surprisingly clever, sometimes cliched far beyond the point of self-parody, but generally catchy. His songs are boozy, drenched in nostalgia, swilling about the evergreen draw of someone bad for you or mining the fantasy that you could make such an impression on a man that he’d sing about it for years afterward. They’re easy songs to drink to, which is, as I’ve mentioned, a recurring theme. (“But if I never did put that can to my mouth / I wouldn’t have nothin’ I could sing about, yeah”, he sings on the new album opener Born With a Beer in My Hand, which ambivalently handles Wallen’s newfound sobriety.)

Wallen’s post-Voice makeover for his 2018 debut album, If I Know Me, took cues from 90s country and Brooklyn fashion – objectively ugly but worn with such confidence that it works – with a Billy Ray Cyrus twist (a mullet and sleeveless flannels). Female fans on TikTok responded enthusiastically to his insouciant charm and assertively retro style. A 2020 New Yorker profile of Wallen quoted a South Carolina mother on Instagram – “Lord have mercy im bout to bust”, she commented on a picture of him leaning against a truck – which remains an apt summary for that segment of his fanbase. There is, for better and for worse, a perennial appeal to a man wearing a backwards hat who just likes his beer and can’t seem to help his habits, his aching heart, or himself.

The compartmentalization of Wallen’s popularity is partly due to the genre bounds of country, which remain Nashville-based, predominantly white and exurban, even as country music itself, and particularly the pop-leaning, bro-country lane in which Wallen traffics, borrows beats and styles originated by hip-hop. (Wallen’s drawling delivery can veer toward rapping, though he defines himself against urban music or culture in general. “Call it cliche, but hey, just take it from me / It’s still goin’ down out in the country,” he sang on Saturday Night Live in December 2020, two months after the show rescinded its first invitation when Wallen broke its Covid isolation protocols at a bar in Alabama.) Though the game of country music stardom has, like every fame game, shifted to social media and TikTok in recent years, it’s still a genre heavily dictated by radio airplay (which may help explain the sheer volume of Wallen’s albums) and one that infrequently crosses over to pop radio.

And it’s partly due to Wallen’s own torpedoing of his crossover career – namely, a video filmed in January 2021, in the thick of the Dangerous album release, of the singer drunkenly shouting the N-word at friends outside his Tennessee home. Condemnation, especially mere months after nationwide Black Lives Matter protests in 2020, was swift, though temporary. His label put him on indefinite hiatus (“such behavior will not be tolerated”), Country Music Television removed his appearances from its platforms, and he was disqualified from the 2021 Grammys and the Academy of Country Music Awards.

Wallen released a short, self-filmed apology video asking fans not to defend his actions, and explained his use of the racial epithet as “hour 72 of a 72-hour bender”. He retreated from the spotlight and, as he told Michael Strahan, a Black anchor, on Good Morning America months later, went to rehab for “deeper issues”. In the same interview, Wallen explained that he and his close friends “say dumb stuff together” and “in our minds, it’s playful. That sounds ignorant, but that’s really where it came from, and it’s wrong.” He said he didn’t use the slur frequently and “didn’t mean it in a derogatory manner at all”. He pledged $500,000 to Black-led groups, although that money was not promptly forthcoming.

Wallen may have been “cancelled”, but his commercial base stayed with him. His apology tour (or lack thereof) was seen by many of his fans as unnecessary, frustrating, a middle finger at mainstream respectability or a culture war cudgel. And to many others, particularly Black country artists in an industry that has long excluded and marginalized Black musicians and whitewashed its roots, it was not nearly enough. In January 2022, the Grand Ole Opry welcomed Wallen to its stage – a hallowed milestone for a country musician, and a move met with derision and resigned disappointment from Black country artists and advocates as an indication that the industry was all too eager to forget the incident and move on.

That has largely been the case. The Academy of Country Music awarded Wallen its highest honor, album of the year, in 2022. His 2022 Dangerous Tour was, as the New York Times music writer Jon Caramanica put it, a return to a “safe space” – vague atonement, gratitude to those who stuck with him, no mention of racial justice or change, and no stopping audience members from turning his success into a political weapon (ie shouting expletives about President Biden).

One Thing at a Time is the musical equivalent of that – vague grappling with mistakes, even vaguer repentance, a lot of lovesickness and drunkenness and stating outright who he is (a “good ole boy”, for one). The fallout from the racial slur video may have dashed hopes of Morgan Wallen as a true country crossover smash a la Shania Twain or Taylor Swift (once), but it has not put a dent in his fanbase. In providing more and more of the same, his star continues to rise and rise.

learned about the meaning of images, and how to make new ones. For all we know, the mottled pink texture of our Saint-Exupéry-style piggy could have been blended, however subtly, from the raw flesh of a cancer patient.

“It’s the digital equivalent of receiving stolen property. Someone stole the image from my deceased doctor’s files and it ended up somewhere online, and then it was scraped into this dataset,” Lapine told the website Ars Technica. “It’s bad enough to have a photo leaked, but now it’s part of a product. And this goes for anyone’s photos, medical record or not. And the future abuse potential is really high.” (According to her Twitter account, Lapine continues to use tools like Dall-E to make her own art.)

The entirety of this kind of publicly available AI, whether it works with images or words, as well as the many data-driven applications like it, is based on this wholesale appropriation of existing culture, the scope of which we can barely comprehend. Public or private, legal or otherwise, most of the text and images scraped up by these systems exist in the nebulous domain of “fair use” (permitted in the US, but questionable if not outright illegal in the EU). Like most of what goes on inside advanced neural networks, it’s really impossible to understand how they work from the outside, rare encounters such as Lapine’s aside. But we can be certain of this: far from being the magical, novel creations of brilliant machines, the outputs of this kind of AI are entirely dependent on the uncredited and unremunerated work of generations of human artists.

AI image and text generation is pure primitive accumulation: expropriation of labour from the many for the enrichment and advancement of a few Silicon Valley technology companies and their billionaire owners. These companies made their money by inserting themselves into every aspect of everyday life, including the most personal and creative areas of our lives: our secret passions, our private conversations, our likenesses and our dreams. They enclosed our imaginations in much the same manner as landlords and robber barons enclosed once-common lands. They promised that in doing so they would open up new realms of human experience, give us access to all human knowledge, and create new kinds of human connection. Instead, they are selling us back our dreams repackaged as the products of machines, with the only promise being that they’ll make even more money advertising on the back of them.

* * *

The weirdness of AI image generation exists in the output as well as the input. One user tried typing in nonsense phrases and was confused and somewhat discomforted to discover that Dall-E mini seemed to have a very good idea of what a “Crungus” was: an otherwise unknown word that consistently produced images of a snarling, naked, ogre-like figure. Crungus was sufficiently clear within the program’s imagination that he could be manipulated easily: other users quickly offered up images of ancient Crungus tapestries, Roman-style Crungus mosaics, oil paintings of Crungus, photos of Crungus hugging various celebrities, and, this being the internet, “sexy” Crungus.

So, who or what is Crungus? Twitter users were quick to describe him as “the first AI cryptid”, a creature like Bigfoot who exists, in this case, within the underexplored terrain of the AI’s imagination. And this is about as clear an answer as we can get at this point, due to our limited understanding of how the system works. We can’t peer inside its decision-making processes because the way these neural networks “think” is inherently inhuman. It is the product of an incredibly complex, mathematical ordering of the world, as opposed to the historical, emotional way in which humans order their thinking. The Crungus is a dream emerging from the AI’s model of the world, composited from billions of references that have escaped their origins and coalesced into a mythological figure untethered from human experience. Which is fine, even amazing – but it does make one ask: whose dreams are being drawn upon here? What composite of human culture, what perspective on it, produced this nightmare?

A similar thing happened to another digital artist experimenting with negative prompts, a technique for generating what the system considers to be the polar opposite of what is described. When the artist entered “Brando::-1”, the system returned something that looked a bit like a logo for a video game company called DIGITA PNTICS. That this may, across the multiple dimensions of the system’s vision of the world, be the opposite of Marlon Brando seems reasonable enough. But when they checked to see if it went the other way, by typing in “DIGITA PNTICS skyline logo::-1”, something much stranger happened: all of the images depicted a creepy-looking woman with sunken eyes and reddened cheeks, whom the artist christened Loab. Once discovered, Loab seemed unusually and disturbingly persistent. Feeding the image back into the program, combined with ever more divergent text prompts, kept bringing Loab back, in increasingly nightmarish forms, in which blood, gore and violence predominated.
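For readers curious what negative prompting looks like in practice: the tool the Loab artist used is not named here, and the “::-1” weighting syntax belongs to that particular interface, but the general technique is exposed by open-source systems too. Below is a minimal sketch using the Hugging Face diffusers library; the model name and prompts are illustrative, not a reconstruction of the original experiment.

```python
# A hedged sketch of negative prompting with the open-source `diffusers`
# library. The negative prompt steers generation *away* from a described
# concept, loosely analogous to the "::-1" weighting discussed above.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Ask for an image while pushing the sampler away from a given concept;
# the result lives in the less-visited regions of the model's latent space.
image = pipe(
    prompt="a portrait, studio lighting",
    negative_prompt="DIGITA PNTICS skyline logo",  # illustrative phrase
    num_inference_steps=50,
).images[0]
image.save("opposite.png")
```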

Here’s one explanation for Loab, and possibly Crungus: although it’s very, very hard to imagine the way the machine’s imagination works, it is possible to imagine it having a shape. This shape is never going to be smooth or neatly rounded: rather, it is going to have troughs and peaks, mountains and valleys, areas full of information and areas lacking many features at all. Those areas of high information correspond to networks of associations that the system “knows” a lot about. One can imagine the regions related to human faces, cars and cats, for example, being pretty dense, given the distribution of images one finds on a survey of the whole internet.

It is these regions that an AI image generator will draw on most heavily when creating its pictures. But there are other places, less visited, that come into play when negative prompts – or indeed, nonsense phrases – are deployed. In order to satisfy such queries, the machine must draw on more esoteric, less certain connections, and perhaps even infer from the totality of what it does know what its opposite may be. Here, in the hinterlands, Loab and Crungus are to be found.

That’s a satisfying theory, but it does raise certain uncomfortable questions about why Crungus and Loab look like they do; why they tip towards horror and violence, why they hint at nightmares. AI image generators, in their attempt to understand and replicate the entirety of human visual culture, seem to have recreated our darkest fears as well. Perhaps this is just a sign that these systems are very good indeed at aping human consciousness, all the way down to the horror that lurks in the depths of existence: our fears of filth, death and corruption. And if so, we need to acknowledge that these will be persistent components of the machines we build in our own image. There is no escaping such obsessions and dangers, no moderating or engineering away the reality of the human condition. The dirt and disgust of living and dying will stay with us and need addressing, just as the hope, love, joy and discovery will.

This matters, because AI image generators will do what all previous technologies have done, but they will also go further. They will reproduce the biases and prejudices of those who create them, like the webcams that only recognise white faces, or the predictive policing systems that lay siege to low-income neighbourhoods. And they will also up the game: the benchmark of AI performance is shifting from the narrow domain of puzzles and challenges – playing chess or Go, or obeying traffic laws – to the much broader territory of imagination and creativity.

While claims about AI’s “creativity” might be overblown – there is no true originality in image generation, only very skilled imitation and pastiche – that doesn’t mean it isn’t capable of taking over many common “artistic” tasks long considered the preserve of skilled workers, from illustrators and graphic designers to musicians, videographers and, indeed, writers. This is a huge shift. AI is now engaging with the underlying experience of feeling, emotion and mood, and this will allow it to shape and influence the world at ever deeper and more persuasive levels.

* * *

ChatGPT was introduced in November 2022 by OpenAI, and further shifted our understanding of how AI and human creativity might interact. Structured as a chatbot – a program that mimics human conversation – ChatGPT is capable of a lot more than conversation. When properly entreated, it is capable of writing working computer code, solving mathematical problems and mimicking common writing tasks, from book reviews to academic papers, wedding speeches and legal contracts.
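As a concrete illustration of what “structured as a chatbot” means in practice, here is a minimal sketch of such an exchange scripted through OpenAI’s Python client as it existed in early 2023; the prompt is invented for illustration.

```python
# A hedged sketch of a ChatGPT-style request via OpenAI's early-2023
# Python client (the ChatCompletion API). Requires a real API key.
import openai

openai.api_key = "sk-..."  # replace with your own key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a short, warm wedding toast for my sister."},
    ],
)

# The model returns fluent, plausible-sounding text for the task.
print(response["choices"][0]["message"]["content"])
```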

It was immediately obvious how the program could be a boon to those who find, say, writing emails or essays difficult, but also how, as with image generators, it could be used to replace those who make a living from those tasks. Many schools and universities have already implemented policies that ban the use of ChatGPT amid fears that students will use it to write their essays, while the academic journal Nature has had to publish policies explaining why the program cannot be listed as an author of research papers (it can’t give consent, and it can’t be held accountable). But institutions themselves are not immune from inappropriate uses of the tool: in February, the Peabody College of Education and Human Development, part of Vanderbilt University in Tennessee, shocked students when it sent out a letter of condolence and advice following a school shooting in Michigan. While the letter spoke of the value of community, mutual respect and togetherness, a note at the bottom stated that it was written by ChatGPT – which felt both morally wrong and somehow false or uncanny to many. It seems there are many areas of life where the intercession of machines requires some deeper thought.

If it would be inappropriate to replace our communications wholesale with ChatGPT, then one clear trend is for it to become a kind of wise assistant, guiding us through the morass of available knowledge towards the information we seek. Microsoft has been an early mover in this direction, reconfiguring its often disparaged search engine Bing as a ChatGPT-powered chatbot, and massively boosting its popularity by doing so. But despite the online (and journalistic) rush to consult ChatGPT on almost every conceivable problem, its relationship to knowledge itself is somewhat shaky.

One recent personal interaction with ChatGPT went like this. I asked it to suggest some books to read based on a new area of interest: multi-species democracy, the idea of including non-human creatures in political decision-making processes. It’s pretty much the most useful application of the tool: “Hey, here’s a thing I’m thinking about, can you tell me some more?” And ChatGPT obliged. It gave me a list of four books that explored this novel area of interest in depth, and described in persuasive human language why I should read them. This was brilliant! Except, it turned out that only one of the four books actually existed, and several of the concepts ChatGPT thought I should explore further were lifted wholesale from rightwing propaganda: it explained, for example, that the “wise use” movement promoted animal rights, when in fact it is a libertarian, anti-environment concept promoting the expansion of property rights.

Now, this didn’t happen because ChatGPT is inherently rightwing. It’s because it’s inherently stupid. It has read most of the internet, and it knows what human language is supposed to sound like, but it has no relation to reality whatsoever. It is dreaming sentences that sound about right, and listening to it talk is frankly about as interesting as listening to someone’s dreams. It is very good at producing what sounds like sense, and best of all at producing cliche and banality, which has composed the majority of its diet, but it remains incapable of relating meaningfully to the world as it actually is. Distrust anyone who pretends that this is an echo, even an approximation, of consciousness. (As this piece was going to publication, OpenAI released a new version of the system that powers ChatGPT, and said it was “less likely to make up facts”.)

The belief in this kind of AI as actually knowledgeable or meaningful is actively dangerous. It risks poisoning the well of collective thought, and of our ability to think at all. If, as is being proposed by technology companies, the results of ChatGPT queries will be provided as answers to those seeking knowledge online, and if, as has been proposed by some commentators, ChatGPT is used in the classroom as a teaching aide, then its hallucinations will enter the permanent record, effectively coming between us and more legitimate, testable sources of information, until the line between the two is so blurred as to be invisible. Moreover, there has never been a time when our ability as individuals to research and critically evaluate knowledge on our own behalf has been more necessary, not least because of the damage that technology companies have already done to the ways in which information is disseminated. To place all of our trust in the dreams of badly programmed machines would be to abandon such critical thinking altogether.

AI technologies are bad for the planet too. Training a single AI model – according to research published in 2019 – might emit the equivalent of more than 284 tonnes of carbon dioxide, which is nearly five times the lifetime emissions of the average American car, including its manufacture. These emissions are expected to grow by nearly 50% over the next five years, all while the planet continues to heat up, acidifying the oceans, igniting wildfires, throwing up superstorms and driving species to extinction. It’s hard to think of anything more utterly stupid than artificial intelligence, as it is practised in the current era.
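The arithmetic behind that comparison is simple enough to check. A sketch, assuming the figures come from the well-known 2019 study of NLP training emissions (Strubell et al., whose headline numbers these appear to be):

```python
# Back-of-envelope check of the "nearly five times" claim, using the
# 2019 study's own estimates in pounds of CO2-equivalent. Both figures
# are the paper's estimates, not measurements.
LBS_PER_TONNE = 2204.6

model_training_lbs = 626_000   # one large NLP model, with architecture search
car_lifetime_lbs = 126_000     # average US car over its lifetime, fuel plus manufacture

print(model_training_lbs / LBS_PER_TONNE)       # ~284 metric tonnes
print(model_training_lbs / car_lifetime_lbs)    # ~4.97, i.e. nearly five times
```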

* * *

So, let’s take a step back. If these current incarnations of “artificial” “intelligence” are so dreary, what are the alternatives? Can we imagine powerful information sorting and communicating technologies that don’t exploit, misuse, mislead and supplant us? Yes, we can – once we step outside the corporate power networks that have come to define the current wave of AI.

In fact, there are already examples of AI being used to benefit specific communities by bypassing the entrenched power of corporations. Indigenous languages are under threat around the world. The UN estimates that one disappears every two weeks, and with that disappearance goes generations of knowledge and experience. This problem, the result of colonialism and racist assimilation policies over centuries, is compounded by the rising dominance of machine-learning language models, which ensure that popular languages increase their power, while lesser-known ones are drained of exposure and expertise.

In Aotearoa New Zealand, a small non-profit radio station called Te Hiku Media, which broadcasts in the Māori language, decided to address this disparity in the representation of different languages in technology. Its massive archive of more than 20 years of broadcasts, representing a vast range of idioms, colloquialisms and unique phrases, many of them no longer spoken by anyone living, was being digitised, but needed to be transcribed to be of use to language researchers and the Māori community. In response, the radio station decided to train its own speech recognition model, so that it would be able to “listen” to its archive and produce transcriptions.

Over the next few years, Te Hiku Media, using open-source technologies as well as systems it developed in house, achieved the almost impossible: a highly accurate speech recognition system for the Māori language, which was built and owned by its own language community. This was more than a software effort. The station contacted every Māori community group it could and asked them to record themselves speaking pre-written statements in order to provide a corpus of annotated speech, a prerequisite for training its model.

There was a cash prize for whoever submitted the most sentences – one activist, Te Mihinga Komene, recorded 4,000 phrases alone – but the organisers found that the greatest motivation for contributors was the shared vision of revitalising the language while keeping it in the community’s ownership. Within a few weeks, Te Hiku created a model that recognised recorded speech with 86% accuracy – more than enough to get it started transcribing its full archive.
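For a sense of what “86% accuracy” means here: speech recognition systems are conventionally scored by word error rate against human-made reference transcripts. A minimal sketch using the open-source jiwer library, with sample Māori phrases invented for illustration rather than taken from Te Hiku’s archive:

```python
# Scoring a speech recognition model the standard way: word error rate
# (WER) against reference transcripts. The phrases below are invented
# examples; word-level accuracy is simply 1 minus WER.
import jiwer

references = ["kei te pēhea koe", "ka kite anō"]   # ground-truth transcripts
hypotheses = ["kei te pēhea koe", "ka kite ano"]   # model output, one word wrong

wer = jiwer.wer(references, hypotheses)            # 1 error in 7 words
print(f"word-level accuracy: {(1 - wer) * 100:.0f}%")  # prints 86%
```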

Te Hiku Media’s achievement cleared a path for other indigenous groups to follow, with similar projects now being undertaken by Mohawk peoples in south-eastern Canada and Native Hawaiians. It also established the principle of data sovereignty around indigenous languages and, by extension, other forms of indigenous knowledge. When international for-profit companies started approaching Māori speakers to help build their own models, Te Hiku Media campaigned against these efforts, arguing: “They suppressed our languages and physically beat it out of our grandparents, and now they want to sell our language back to us as a service.”

“Data is the last frontier of colonisation,” wrote Keoni Mahelona, a Native Hawaiian and one of the co-founders of Te Hiku Media. All of Te Hiku’s work is released under what it named the Kaitiakitanga License, a legal guarantee of guardianship and custodianship that ensures that all the data that went into the language model and other projects remains the property of the community that created it – in this case, the Māori speakers who offered their help – and is theirs to license, or not, as they deem appropriate according to their tikanga (Māori customs and protocols). In this way, the Māori language is revitalised, while resisting and altering the systems of digital colonialism that continue to repeat centuries of oppression.

The lesson of the current wave of “artificial” “intelligence”, I feel, is that intelligence is a poor thing when it is imagined by corporations. If your view of the world is one in which profit maximisation is the king of virtues, and all things shall be held to the standard of shareholder value, then of course your artistic, imaginative, aesthetic and emotional expressions will be woefully impoverished. We deserve better from the tools we use, the media we consume and the communities we live within, and we will only get what we deserve when we are capable of participating in them fully. And don’t be intimidated by them either – they’re really not that complicated. As the science fiction legend Ursula K Le Guin wrote: “Technology is what we can learn to do.”

This article was adapted from the new edition of New Dark Age: Technology and the End of the Future, published by Verso

Morgan Wallen performs onstage in 2022. Photograph: Jeff Kravitz/Getty Images for iHeartRadio
Morgan Wallen in 2023. Photograph: Jason Davis/Getty Images
