The Arts & The New Frontier

Artificial intelligence is reshaping creative work at a pace that’s hard to follow, but what does that actually mean for the future of the arts and for the next generation of artists? 

In this episode of The Arts & Everything, UNCSA Chancellor Brian Cole talks with filmmaker and UNCSA faculty member Bob Gosse, illustrator and digital tools pioneer Kyle T. Webster, and Martha Graham principal dancer Xin Ying about how generative AI is already changing film, visual arts and dance. Their stories move from internet-era disruption to copyright fights over training data to a haunting AI-crafted duet that brings Xin Ying “on stage” with Martha Graham. 

Through their different perspectives, they wrestle with questions about authorship, ethics and ownership while also looking at AI as a collaborator that might expand human creativity if used with care. You’ll hear concrete examples of how artists are experimenting with this technology, the risks they’re most worried about and the advice they’re giving young creatives who feel overwhelmed by the pace of change. 

Brian Cole: In the arts, every generation eventually runs into a moment of disruption when the tools start to change. In the late 19th century, recorded sound changed music. In the dawn of the 20th century, the advent of film changed theater. And the digital revolution that accelerated at the end of the century, well that changed everything. 

Every time a new technology appears, artists and audiences have to figure out what that means for the industry. Sometimes the shifts are small, sometimes they're seismic. 

Today, we're faced with what some might say is the most seismic innovation yet, artificial intelligence. This new technology has developed and spread at a breakneck pace. And it has significant consequences for just about every aspect of society, especially the arts. What exactly these consequences are may be up to us. 

Welcome to the Arts and Everything. I'm Brian Cole, Chancellor of the University of North Carolina School of the Arts. Today, we're exploring the Arts and the New Frontier, how artists are thinking about AI and the way it's already reshaping how we create. 

I was able to sit down with three artists and have conversations that ranged from how disruption has impacted the arts throughout history to concerns about intellectual property, the future of creativity, and how this technology can create new avenues for human artistic expression. 

I first sat down with longtime filmmaker and UNCSA School of Filmmaking Professor Bob Gosse, whose production company, The Shooting Gallery, produced Billy Bob Thornton's Academy Award-winning film “Sling Blade.” Bob's career has spanned multiple technological shifts, and he's been able to adapt to each in turn. But this one just might be different. 

Brian Cole: So, just to start off, you in your career, you've lived through disruptive shifts in filmmaking. You know, from Indie Boom in the '90s to digital streaming, and now artificial intelligence, AI. Which of those felt most seismic to you personally? 

Bob Gosse: Oh, by far, AI. 

Brian Cole: Yeah.  

Bob Gosse: Yeah. The internet was a big one. That was kind of the new information media. And what I've learned since, in looking back across different new media systems—television, radio, the telegraph, photography, you know, you can go all the way back—is that what you see is a pattern: the new one swallows the previous one, right? 

So it takes on everything that the previous information media provided. And so, the internet swallowed television, movies, and music, and literature, and everything. And now AI swallowed that. 

And the big difference now, and the reason why I cite that as being major uh-oh or gee whiz, depending on my day, um, is that this is the first technology like this that has generative properties, that has agency. That's never happened before. So that is probably a blessing in some ways, because we will, you know, have ways of addressing climate change, or we'll have new technology to solve biological issues and cancer and all forms of miraculous positive things. 

And then like any tool it may have negative things, like we saw with the internet and then smartphones. We saw, you know, social media, and isn't this gonna be great? 

Brian Cole: Right.  

Bob Gosse: And we didn't foresee the unintended consequences in how algorithms were leaning into more outrage, and then we see division, and we see, you know, all the problems that we know. And, you know, we're seeing that now with misinformation, disinformation, and you know, fake videos, and things like that with AI. And we'll see that more and more, and we're gonna have to sort of adapt to that, for better or for worse.  

In terms of filmmaking and storytelling, this is a very huge shift, because as we unroll it and reveal it to the students and see how they want to use it, I think that they, as artists, have to be reminded constantly that it's very powerful, but it's also a Ferrari with a broken steering wheel. You can't control it. Creatively it doesn't have a point of view. It'll model language as if it has a point of view, but it's just a machine basically giving its best guesstimate as to what the next word is or what the image might look like or what the sound might be. 

And so how they navigate that and apply their dreams and feelings and sorrow and joy and, you know, suffering and point of view, and try to use this tool to produce work that human beings can relate to and be affected by, and be moved by, I think is where our job as educators needs to be, while openly admitting that, hey kids, this is new to us too. 

Brian Cole: Right. I agree with you very much that this moment, this disruption is the most seismic. I think—and there's a lot of excitement, fear, apprehension…

Bob Gosse: Hype.  

Brian Cole: Hype, yeah, absolutely, into something that is certainly still unknown to some degree and evolves by the second, seemingly. And I think that there can be something helpful about looking back historically and comparing now to these other moments of disruption. 

And I imagine a lot of artists nowadays will look back and they'll see in these moments where they felt resistant to a change and a moment where they felt like they were embracing it. What was that moment for you with AI and generative AI in filmmaking?

Bob Gosse: Right away. I think I started playing with it at the end of November, beginning of December of ‘22. And the summer before, I was playing around with Midjourney. I was playing around with images. And it was more on a gee-whiz level. And there wasn’t video generation yet, or if there was, it was very crude. The famous Will Smith eating spaghetti is the example, you know? But it was rapidly, from month to month, getting better, and the large language models are what I leaned into the most, because I was trying to see how well it could, you know, do research or crack a narrative problem or describe a character. I was trying to stress-test it as best I could. It was very limited, very uncanny valley, generic. 

And then that March of ‘23 it bumped up to 3.5, and then we got to 4. I'm talking about OpenAI specifically. By 4 that model was pretty powerful. And then something occurred to me, and I realized we've been hacked by a device of our own creation. If you think about what the human operating system is, it's based on 26 phonetically encoded letters, and eight punctuation marks, and ten numerical symbols. And make of that what you will: write a novel, write the Constitution, write an opera, a play, and you know. And it had cracked the code. And it could make decisions. Maybe not creative decisions from a point of view because it has no lived experience, it’s a machine.

Brian Cole: So we have this incredible new tool with incredible promise and significant potential risk. But how did we get here? How is this technology built and what informs its seeming agency? 

Kyle Webster is a renowned illustrator, designer, author, and teacher. If you've used a digital brush in Adobe or Procreate, there is a good chance you've used one of his tools. His illustrations have appeared in the New York Times, the New Yorker, and other publications around the world.  

In recent years, he's been using his platform to call attention to what he sees as a fundamental problem in how AI was developed when it comes to artists' rights and intellectual property. 

Brian Cole: Well, you know, jumping into the topic of generative AI and its implications for your industry, artists have had a huge range of reactions to generative AI. Miyazaki has been quoted as saying, “I strongly feel that this is an insult to life itself.” Meanwhile, other artists are kind of embracing it or finding ways to incorporate it into their practice. Where do you sit on this spectrum? 

Kyle Webster: I think I'm in the Hayao Miyazaki camp more than anything. I think, you know, at the beginning, four years ago or so, when generative AI was just kind of starting to become, you know, this topic of conversation, I think there were a lot of opportunities for technology companies to do the right thing. And everyone took the easy way out, took the lazy way out, and just went for theft outright, and dollars, frankly. 

And so, yeah, I understand what Miyazaki's saying. And I kind of feel the same way. I think that to take art and commoditize it even further than we already have, to make it this sort of thing that, you know, on demand, you just kind of click a button and it quote unquote generates art for you. This whole idea to me strips away everything that makes art art. Which is, you know, the human input, the human experience that leads to the creation of the art itself. I mean, if I have some life experience, positive or negative, I'm often, um, compelled to document it in some way, to respond to it in some way, for me with drawing or painting. 

And I know that this works the same way for poets and for musicians and for filmmakers. And because a bunch of ones and zeros cannot have these deep, meaningful experiences that we do as animals, it's just all so meaningless and empty to me. I'm also angry about how we could have possibly gone down a different road with it and really tried to make it useful for artists without just stealing, you know? 

Brian Cole: Do you believe, it sounds like you do, that there's a potential for generative AI to be beneficial to artists? I mean, are there any safeguards that could be put in place, you feel, around it that would alter your stance? 

Kyle Webster: Yeah, well, you know, we're so far down this road right now with copyright infringement. It just seems almost impossible for it to be—for us to change course. And I'm so sad about that. But I do believe that, you know, we could have, in the early, early days, really respected the rights of the artists and their ownership of their work and their right to compensation and credit. And obviously, they would have had to be consulted and give permission for any training. You know, all these things were just bypassed. 

And I believe that if, in the early days, any of these companies had simply announced that they want to develop this technology, but they want to do it ethically and in a responsible way. They want to connect with artists, musicians, et cetera, and say, we'd like to build this training data, and then get permission and then pay fairly for it, work out what that would look like. Of course, it would have been hundreds of millions of dollars, which they have. But to work out deals, and then for artists who don't want to give permission, their work simply isn't trained on. And then find some system by which they could manage all that and make sure that they're not infringing, et cetera. I don't know. I think that it would have been a lot of work, and that's why nobody did it. 

What I thought could have been possible would have been, you know, if we had collaborated, you know, we artists, had collaborated with some of these companies and said, here's how we think this technology could be useful in our industry in a way that allows us to create, to be our, you know, our best selves from a creative point of view, to still have control over the process, to not get rid of the best parts of the process, and that sort of became understood. This is not that. This is not that. There is no real craft here. What we're doing is we're bypassing all the decisions that happen in the process. 

Sometimes it frustrates me that in some way, we as a community, the creative community—and I really don't, actually, I don't mean to… it’s not really our fault. But I think a lot of people are in some ways, just kind of rolling over and saying, well, it's here now. What do we do next? 

And when I hear that, I kind of think, well, that's kind of exactly what these big companies want us to do is to just give up because we're exhausted. You know, we tried, we've been fighting, we've been saying, no, this is unfair. This is a violation of our copyright, you know, all of that. When we’re fighting and fighting and fighting and even bringing it to the courts and it's just too slow or we don't get enough traction, whatever it happens to be, that eventually we will just say, well, all right, fine. Now we're going to accept that you've stolen all our stuff, and we'll work on what's next. How do we then deal with that and still make it okay? And I, what I hate about that is no, it's completely wrong what's happened. 

Brian Cole: What you're saying is reminding me of something really important and, I think, inspiring from your work, something you've said relating this to the three C's: you know, consent, compensation, and credit. Can you talk a little bit about that? You've started to talk about it, but how does that relate to generative AI?

Kyle Webster: Well, sure, I mean, when I was doing purely illustration, I would sell, along with an illustration to a magazine or a publisher for a book cover or something, the rights to reproduce my work one time for one specific purpose. And that would be limited to a certain region, say United States. 

And then if they want, oh, we're gonna publish a French-language version of this book, they’d pay me again for the rights to do that. And then sometimes I would have illustrations I had done 10 years ago, and a person would write to me and say, we want to reprint this for X, and then they would pay me again. That's the whole idea: you're buying not just the image, but the rights to use it in a certain way.  

That's such a huge part of artists’ income. So, you're taking away not only that initial payment for the creation of the thing, but then you're also taking away that income that is generated later down the road because the artist owns the rights to that work that they created. They're the owner. 

So that's part of the, you know, the three C's. On the consent and compensation part, you know, it's just immediately thrown out the window. And on the credit side, there is no credit given. So, the name of the artist is never part of the training data. So, what you're left with is an image. Nobody knows where it came from. Nobody knows anything about it. And it just gets fed into the machine. And, you know, the thousands of hours of work that went into the body of work of an artist that is now part of this machine, gosh, there's nothing there for the artist that says, here, we value what you did. 

It’s an abusive system. And it's crazy to me that it just happened. And no matter how much the creative community rose up and said, hey, wait a second, this is illegal, uh, nothing happened. Nothing, it didn't mean anything. So, what does that say about the future of this technology and how it will be developed? I don't think that bodes very well for not just us, but really humankind.

Brian Cole: As legal battles over copyright and training data continue to unfold, artists across the creative world are still figuring out how to respond, as well as experimenting with how this technology could take human artistic expression even further. One such artist is Xin Ying, a principal dancer with the Martha Graham Dance Company. 

Xin Ying has been exploring how AI can be used in dance and teamed up with multimedia artist and creative technologist Mimi Yin to create an entirely new experience. They used archival footage to fabricate a duet between the late Martha Graham and Xin Ying.  

The piece is called "A Letter to Nobody." And I recently had the opportunity to see it performed in New York. It was fascinating, so beautiful, emotional, and haunting. And it really blew my mind open in terms of how AI could be used in dance. Turns out, A Letter to Nobody is just the start of what Xin Ying has been working on. 

Brian Cole: So this piece. You created a duet with Martha Graham that you performed on stage. I mean, a figure that's certainly a titan, you know, in the history of dance and movement, but also really an important person who shaped your artistic lineage. What did it feel like to bring her back to the stage in this way, and were there any responsibilities that you felt? Or what surprised you about the process of doing that? 

Xin Ying: I think it's got to be an ultimate dream for a lot of Graham dancers to see the founder you have never met, and to dance with her and just see how that feels. Ask her questions, you know, this is kind of like a longing we all have. 

And then of course I met my collaborator Mimi while studying at NYU, getting my MFA. The first time I met her, I showed her what I had. I had this, like, self-edited old video. I cut up the sections, sped them up, slowed them down, creating this kind of interesting conversation with an old archive from the 1940s. And then I talked to her and said, oh, maybe we can use technology, we use the AI. And the first thing she asked me, she said, so why do you want to use it? Like, are you just using this thinking it's, like, a novelty, you’re just trying a new toy? Then I told her the connection between me and Martha Graham. 

This first line in the piece, which is Emily Dickinson's poem, really spoke to me. And I said how I am nobody. I remember looking her in the eyes, and I said, you are nobody. I am nobody. And the machine is a nobody. So that's kind of how she went, oh, okay. I think I can speak to this kind of method, this kind of, like, creation process. 

And what surprised me the most is, when we were starting to really dive into the relationship between me and her, the image, then you're starting to realize it's so hard to actually dance with her. And how sensitive that relationship is, how it can just get lost. It's not about, like, 100% synchronization with her. It's about this, like, almost care. Like, if her image comes to me, would I be able to catch her? Would I be able to just turn around when she's turning away? At one point it was getting a little ridiculous how seriously we wanted to take the relationship between me and her image. And then I realized that's just how much care you have to put into the piece. 

Dance and technology, multimedia, on stage can get cheesy or, you know, lose its meaning very quickly. We actually spent the most time not practicing the turns, or, like, where I should travel or anything. It's this relationship we spent the most time with. Sometimes, with just one entrance from left to right, I can spend the whole day working on it. Yeah. 

Brian Cole: Wow. I mean, what an experience. "A Letter to Nobody" is something that I think everybody needs to experience. I mean, it was truly—it expanded my mind to what's possible, not just from a technological standpoint, but for just thinking about dance and the future of dance. You spoke a moment ago about when you brought this idea to your collaborator, your partner Mimi, and the questions that she had that you all worked with together. When you started to talk about this idea with dancers and with the dance world, what was the response to this concept initially? Was there any pushback when you started to talk about this intersection with AI and technology? 

Xin Ying: It really has a lot to do with show and tell. Like, for my artistic director, Janet Eilber, she's very open-minded. I always bring, like, crazy ideas to her. She always says yes. Then I'll tell her my concept, and I tell her I'm gonna use two beatbox artists for the sound. So, everybody’s thinking I'm going a little crazy, because you're not speaking the language they're used to anymore. And I knew at one point I just had to show them, one step at a time. I will show her a clip or a sample we tried with AI, then explain to her how this kind of works. 

And pretty much, like, anything that comes with technology: when we say technology and AI, it becomes so big, so grand, or it becomes this, like, figure or ghost against you. But when you actually go into specifics, it's already in our life. Like, you either embrace it or you're against it. I think the fear comes from the unknown. 

Brian Cole: I agree with you completely. In an age where there's so much talk about AI and the arts, there's excitement about certain kinds of breakthroughs that come through, but even more than that, there's certainly a lot of fear, anxiety. I think one of the most important things, like you said, is to really see examples of how not just technology companies are imagining what these things can do, but how artists are using these technologies, and to put our eyes and ears around those examples. What do you, with your experience, see as AI's kind of role or possibilities in a general sense with the arts? 

Xin Ying: First, I think it's unavoidable at this moment. There's a new tool here. It will be used, whether you like it or not. And to be responsible, I think, for us, whether you like it or not, or you choose not to use it, you have to know how. I think that's, like, the number one responsibility. If you really care about art, or you really care about dance, you can't just avoid it. The field's changing. 

How to reimagine and connect with reality and dancing with the human body is most important, for me at least. And what I want people to explore more is how to connect the machine and the people, or audience. For Graham technique, the core is contraction and release. So, contraction basically is exhale, when you're breathing out. Then you can feel the center is contracting. And when you're breathing in, your body expands, and that's a release. So, when you make a sound, you are exhaling.  

So, I kept thinking how I can force this kind of contraction and release to happen with the spectator when they're watching the technology installation. So, I've been trying, like, for example, sound activation. If you want to watch this whole contraction motion exercise of a dancer in whatever installation we're putting on, you have to trigger it with the sound. So that's kind of using the tool to make the spectator feel this embodied experience with the installation happening. That's what I'm really fascinated about. And I hope I can see more and experience more in the dance world. 

Brian Cole: Well, hearing you talk about it is fascinating me as well. If you think about the future, what are things that you wanted to do, maybe, that the technology wasn't there for yet? Are there things that weren't really possible before, that are kind of on the next horizon for new projects that you want to do, of how you'd like to use this technology?

Xin Ying: Right now, I've developed a large language model with my collaborator, Catalina Hai, called Marthabot. So basically, it's a fine-tuned GPT you can talk to. But all the training data is Martha Graham's memories, her own writing, and all the memories from past generations of dancers. So, it becomes this, like, assemblage of past ghosts, memory. Which is her legacy, basically, like, Graham's legacy, 100 years. She didn't live that long, but it still keeps going because of these ghosts contributing their own, you know, memory and intelligence to it. 

You know, I'm not trying to make Martha come back from the dead, but I want to kind of let her legacy and archive live again, because it is alive. It's literally built by this. Everybody, and I'm including whoever has worked in the company, contributed their body and their spirit to this legacy and built what we have now. I want that to come to life. So that's my hope in the future, is that I can make Marthabot become more tangible, because right now it's just text-based. In the future I will actually have a real-time conversation with her, maybe dancing with her on stage in real time.

Brian Cole: The field is changing quickly. For some artists, that change brings excitement. For others, it raises serious concerns. For most, the future is still taking shape. I asked Kyle and Bob what kind of advice they have for young artists navigating this new frontier and what the future may hold. 

Kyle Webster: It's a few things that I tell them. One of them is to look to build communities around your work and do work for them because they're not interested in art that isn't made by the people that they admire, the people that they like. So that's one thing. 

Another thing I talk about is the ability to make human connections face to face and how important that is for you to grow, not just a professional network, but to make people feel a certain closeness to you and want to support you and your work because they've met you in person. Talk to people, meet people face to face because that is going to be something we all crave more and more as we get further and further away from these in-person interactions. 

And then as far as AI, if I tell them specifically what to do about AI, I say like, don't pay attention to it as much as you can avoid paying attention to it because the more you think about it, the harder it is to just make your work. 

And I find this for myself too, the more I think about it, the harder it is for me to be motivated to make anything. I don't want to feel like it's pointless. I don't want to feel like it's hopeless. But I can imagine as a young person, it's even harder because you're looking at this landscape and saying, well, where do I fit in?

You know, I have met many artists at conferences in the last few years. But this comes up everywhere I go. People of all ages, they talk about AI. And we talk about it and we're all like, oh no, it's terrible. It's going to take away all our jobs. And then I'm also realizing, no wait a minute, look, we're all here together. 

And we're all talking about this thing we're so excited about, whatever it is that you make or I make. And people are showing their work to one another, and people are getting jazzed about each other's work. And they’re like, oh but I love what you do, I love what you do. And then it's like, oh, wait, we're having these two different conversations. One is the world is coming to an end. The other is, oh, there's all this great stuff people are making. And so, I kind of want to just remember all the time that we are still out there, artists of all kinds, making really cool stuff that really is, quote unquote, original, you know? And I think that's part of the thing we have to hold on to because a lot of the message from the tech companies I've heard is there's no such thing as anything original. Everything's derivative. You're already stealing anyway. And I say phooey to that. No matter how much I am influenced by other artists, whatever I make is still coming from something that only I could make. It's my own experience. 

Bob Gosse: You can allow it to replace your voice if you want to, but now you've hung it up. You're no longer an artist. So, there's that. It just doesn't have a point of view. I think, I really do think, a lot of the anxiety, a lot of the fear, a lot of the pushback to this is really rooted in a vision of the past. If you were a lamplighter in London, and you saw them laying cable for electricity, you were probably going, oh, that's a fad, you know, as you were lighting the gas lamps. So, I see a lot of that instinct in myself. And collectively we're not good at change; we don't like change. But life has changed. 

And now AI, while it still maintains a certain element of that uncanny valley to it, we are crossing a threshold where the uncanny valley is receding, and it's going to be harder and harder to know what's real and what's not. And that will swamp and overwhelm, and it's already started to overwhelm social media sites, YouTube, TikTok, um. There's a threshold where you won't trust anything, and so authenticity and trust will become a commodity to value because of their absence. And we have to be mindful that in the world we live in, human beings are just responding to the environment. 

And yet a lot of our environment has been subsumed by electronic media. Phones and flat screens and laptops, and it's replaced a lot of human-to-human conversation and contact. And I think, like, that means that, if we're not careful, we really won't trust these things so much. So, when things do come through that break through, that are authentic, that are human, demonstrably so, they will have a greater value, in my judgment. And I think that that's inevitable, because without it, we're lost. 

Brian Cole: AI may change many things about how art is made, but the questions artists are asking about it are very familiar ones. 

Who gets to create? Who gets credit? What makes a work of art real? What makes that work of art feel human? As we navigate this new frontier, one thing feels crystal clear to me. These new tools should be used in ways that elevate human creativity, not replace or erase it. 

Because in the end, what gives art its power is not the tool, but the human voice, experience, and inspiration behind it. AI has the power to be an incredible game-changing collaborator for a human artist, one that could open new doors for creative inspiration. But if that artist's distinctive voice can't be heard or perceived in the outcome, then what was the point?  

The creative community worldwide is at an important inflection point for what this means. What opportunities should we embrace and innovate? What risks should we push back on? And very importantly for arts institutions like ours, what does this mean for how we train the next generations of artists? 

The conversations in this episode really just scratch the surface, and I look forward to future related topics here on “The Arts & Everything.” Thank you for listening, and until next time, keep finding the art in everything! 


"The Arts & Everything" is a podcast by UNCSA Media hosted by Brian Cole and produced by Maria Wurttele and Sasha Hartzell. Executive producers are Katherine Johnson and Kory Kelly, and Louie Poore is the associate producer. Creative design is by Alli Myers Gagnon and digital strategy is by Natalie Shrader. Music was composed by Chris Heckman and performed by Chris Heckman, André Vasconcellos, Miah Kay Cardoza and Gabe Lopez.

May 12, 2026

Bob Gosse

Filmmaker & Educator

Bob Gosse is a film producer, director and educator. He began his professional career collaborating with his cousin Hal Hartley on shorts and two feature films, “The Unbelievable Truth” and “Trust.” In 1991, he started the lauded indie film company The Shooting Gallery with a handful of local filmmakers, developing such features as Billy Bob Thornton’s “Sling Blade.” He currently teaches producing in the School of Filmmaking at UNCSA. 


Kyle T. Webster

Illustrator

Kyle T. Webster is an international award-winning illustrator who has drawn for The New Yorker, TIME, The New York Times, The Wall Street Journal, The Atlantic, Entertainment Weekly, Scholastic, and many other distinguished editorial, advertising, publishing and institutional clients. He is known as the founder of KyleBrush.com, the brand behind the world's best-selling Photoshop brushes for professional illustrators, animators, and designers. He is currently the senior brush developer at Procreate.


Xin Ying

Dancer

Xin Ying is a dancer, choreographer, mother, and interdisciplinary artist working at the intersection of movement, technology, and digital legacy. An Onassis ONX Fellow and Dance Magazine Cover Star, she has been a principal dancer with the Martha Graham Dance Company since 2011. Beyond performance, Xin creates work that moves across choreography, immersive media, and emerging technology.
