Legal Aspects Of Generative AI And Copyright With Kathryn Goldman


As generative AI tools continue to expand the possibilities for creators, what does this mean for aspects of copyright? Intellectual property lawyer, Kathryn Goldman, talks about the possible ramifications.

In the intro, Ben’s Bites newsletter; Microsoft Copilot for Office tools [The Verge]; Canva Create AI-powered design tools; Adobe Firefly for generative images; OpenAI ChatGPT plugins including Shopify; Examples of people using ChatGPT in normal life [Hard Fork]; Sam Altman on the Lex Fridman podcast.

Plus, US AI copyright guidance; Human Artistry Campaign; New rules of publishing [Becca Syme]; Tsunami of crap + double down on being human; Generating fiction with GPT-4 [Medium]; Pause giant AI experiments letter; The age of AI has begun [Bill Gates].

This podcast is sponsored by Written Word Media, which makes book marketing a breeze by offering quick, easy and effective ways for authors to promote their books. You can also subscribe to the Written Word Media email newsletter for book marketing tips.

Kathryn Goldman is a copyright and trademark attorney and has worked in intellectual property for over 30 years. She runs CreativeLawCenter.com, which offers resources, workshops, and advice for creative professionals, including authors, artists, designers, and more.

You can listen above or on your favorite podcast app, or read the notes and links below. Here are the highlights, and the full transcript follows.

Show Notes

  • The perils and promise of AI in creative works
  • Some of the legal cases against aspects of generative AI [TechCrunch, Lawfare]
  • What is fair use? What is transformative?
  • The US Copyright Office’s guide to AI usage
  • Issues around making money from AI-generated work
  • Could AI copyright laws be retroactively applied?
  • Authors Guild model contract excluding AI training usage

You can find Kathryn Goldman at CreativeLawCenter.com or on Twitter @KathrynGoldman

Header image generated by Joanna Penn on Midjourney.

Transcript of Interview with Kathryn Goldman

Joanna: Kathryn Goldman is a copyright and trademark attorney and has worked in intellectual property for over 30 years. She runs CreativeLawCenter.com, which offers resources, workshops, and advice for creative professionals, including authors, artists, designers, and more. So welcome back to the show, Kathryn.

Kathryn: Oh, thank you, Joanna. I am so happy to be here again.

Joanna: This is going to be a very popular episode. We’re just going to jump straight in. So you have been running workshops on AI and Your Creative Work: Perils and Promise.

What sparked your interest in the impact of AI? And why is it important for authors to engage?

Kathryn: Well, I’ve always been a bit of a technology nerd. I’m not an early adopter, like you. I’m more in the second wave of adoption.

Back in the day, when computers first came out, I didn’t ask my parents for a Commodore 64, I asked them for a TRS-80, the ‘Trash-80.’ So I’m definitely second-generation, but I’ve been in the game ever since.

I learned how to program very early on. And then I started building databases, and then websites, and you know, so whatever comes along, I kind of dip my toe in. And now of course, I represent a lot of creative professionals, artists, writers, photographers, and others.

When Midjourney and Stability launched last year, I began receiving emails from my clients, and from members of the Creative Law Center, asking all kinds of questions like, “Can they do this? Is this legal? How can I protect my work?” And so I jumped in, I had to dig in.

So AI has been around for a while, but with Midjourney, and Stability, and ChatGPT, some of my clients, or lots of them, felt that their livelihoods were at risk. So we needed to get to the bottom of this.

It turns out that the job of lawyers is also in jeopardy.

A lawsuit was just filed this week against a company called DoNotPay.com. And it uses AI to help people defend themselves in court from things like traffic tickets. You put these glasses on, and the glasses listen to what’s going on in the courtroom, to the prosecutor and to the judge, and they feed you answers, how you’re supposed to answer these questions. So yeah, lawyers are at risk, too. So I got involved in order to answer these questions about what’s happening to our livelihoods.

Joanna: Yes, I had heard of DoNotPay. And of course, you’re absolutely worth every penny, but lawyers can be pricey. I’ve had a look at DoNotPay, and like you said, it can generate these letters and that kind of thing. I’ve seen GPT-4 do tax returns and build websites; all of this was in the recent GPT-4 demo. And so it’s really interesting, isn’t it?

And of course, the other thing we should say, we’re recording this on Friday 17th of March 2023. And GPT-4 came out this week, Google launched their Bard AI, Facebook just put out another one today. I mean, this is accelerating, and we’re really at the beginning. So are you afraid?

You said the perils and promise of AI, so it seems like you’re balancing both.

Kathryn: Well, isn’t that the job of a lawyer?

Am I afraid? No, I personally am not afraid.

I mean, I’ve been in this business for a long time, and so I have a very stable book of business. So I don’t fear for my personal job, but I think this represents a sea change. Right?

We are going to lose a lot of professions, we’re going to lose a lot of jobs, but new jobs are necessarily going to be created.

So there’s one job listing that was sent to me by one of my clients for a prompt engineer and librarian. Somebody who can prompt these AI machines, and this is still on GPT-3 because it’s an older listing, to give the output that is needed for whatever the business is. And they list the criteria for this job: you’d be a good fit if you have a creative hacker spirit and love solving puzzles, and they go through all these issues. They’re offering between $175,000 and $330,000 a year.

Joanna: Everyone’s going to need one! I mean, you’re completely right. Prompt engineer is a new job. I love ‘creative hacker.’ I feel like that is also a job description.

Kathryn: Yeah, creative, let’s say, legal hacker. And therein lies the problem, right?

What is legal about all of this that is going on? So do I have any fear about it? Not personally. I believe that there’s going to be a loss of certain jobs. I think that folks who, you know, I don’t want to say it, but I’m going to say it, those who are mediocre at their jobs are going to be replaced in certain categories.

Those who are really good at it, are going to become the folks who use these AI platforms or these machines as tools to help them get better.

Joanna: Although I would also say that I think there will be new types of people. And this is happening in the art community, there might be someone who’s a mediocre artist as a painter, but they can be a hell of a prompt engineer.

They can potentially do a much better job now than someone who originally was painting a picture, for example. So I think what is excellent or what is mediocre is also going to change. So I think let’s put a pin in it and say things are changing.

Let’s get into the legal side because that’s what we’re talking about today. So let’s start with this training data. So you mentioned Stability AI, there’s a court case about that. [TechCrunch overview of court cases]

And the issue seems to be—

Is it fair use to train models with data? Is the work transformative?

So what are your thoughts on this question?

Kathryn: Okay, so the notion of fair use has to be addressed on a case-by-case basis, right? It’s a case-by-case defense.

And so when you are looking at fair use, the first thing you have—so it’s a defense, right, so there is infringement.

So first, you have to determine that the AI platform had access to the protected work. All right, so let’s assume that for a moment.

And then you have to put the two works, the original work and the output from the AI, side by side. Are they substantially similar? How much of the original work was taken? Was it just as much as was necessary to send the message? And I’m using air quotes, I know you can’t see me. But the message, what is the message that the AI is sending? Or was the message the input from the prompter? Okay, so first, you’re going to have this—

Is there infringement? And then if there is infringement, was it fair use?

So the Getty case, which we’ll talk about in a minute, is probably going to get to this issue. Is it fair use to scrape all these images and then use them to create something else? Is that something else going to be substantially similar? Or is it going to be new and different? And I have seen examples of both. I’ve seen substantially similar.

[Copyright Alliance round-up with links to all the cases.]

In fact, in the Getty case, they have in their exhibits, examples of identical reproductions of something that’s in their database. Okay, that’s going to be infringement. That’s not going to be fair use.

On the other hand, there are examples of things produced by generative AI that don’t look anything like what’s in the database that they’ve trained on, as far as we know.

The bottom line is that the database on which the AI has been trained is really a great big black box, isn’t it? We don’t know for sure what it’s been trained on.

Joanna: This is so interesting, though, because I just want to assume something, and I’m not saying this is going to happen, and obviously we’re talking at a time when there really is no final legal word on any of this. It’s all up in the air.

But essentially, Getty — they are not trying to shut down the technology, the idea of large language models, large image models. In fact, they own iStockPhoto, and they’re looking at generating their own AI images from their own licensed work database.

Even if Getty wins the case, and some of the various models get shut down, and there’s a settlement for people, that’s not going to stop it, is it?

Kathryn: No, no, the technology is here. It’s not controllable at this point by government or the legal system, that’s my guess. But I will tell you that Getty is asking for the destruction of all versions of Stable Diffusion.

Joanna: Yes. Well, that’s what I’m saying. Even if that happened, then that’s not going to stop somebody else building another thing, or it might be on licensed work, I guess.

Kathryn: Right. Right. So I guess they’re trying to control how it’s going to move forward in the future. Personally, I don’t know if they can destroy all versions of Stable Diffusion.

Joanna: Hmm. Because it’s distributed, isn’t it? It’s distributed on even phones now.

Kathryn: Yep. I just don’t know if that’s even a possibility. And that’s why the technology cannot be stopped at this point.

How are we going to live with it? How are we going to work with it?

How are we going to control it? Who’s going to control it? That’s another big question. Is it beyond the control of government legislation? It’s an international thing, right? Who’s going to control it?

Is it going to be in the hands of private business? Well, lovely. That’s the way it’s going right now. But it’s here, so —

We’re going to have to learn about it, and we’re going to have to work with it, and we’re going to have to live with it.

Joanna: Andrew Ng, who’s very famous in the AI community, talks about it as being like electricity. Some people say it’s like fire or the internet. As in, for a start, it’s not one thing.

We say, “the AI” or “an AI,” but it’s like the internet. There are so many different applications, I guess, at this point. But again, like electricity, like fire, like the internet, it can be really awful and hurt people or it can be really amazing. And I certainly don’t want to live without electricity, fire, or the internet at this point.

Kathryn: Yeah, and it is pretty amazing what’s going on. It’s phenomenal what’s going on. The internet gave us access to it, and now, in view of ChatGPT, we almost think the internet itself is kind of static. It’s old school. It’s like, oh, that was back then, you know. And ChatGPT has just given us access to this huge, communal, worldwide brain. It’s an amazing thing. So yeah, I also get kind of awestruck by it.

Joanna: And as we were saying before, I just got access to GPT-4, and it’s like being plugged into The Matrix. It’s wild. I think I felt that, but I talked to someone else and they didn’t, because they didn’t know how to interact with it. So this is a very interesting time.

But let’s get back to copyright. So another case: the US Copyright Office started off by denying all copyright for a comic, Zarya of the Dawn, but then the author appealed. I follow her on Twitter @icreatelife, as you know, her and her lawyers. And they then said, ‘no, this has got a lot of human input.’

And they said, ‘okay, you can have copyright for the text and the layout of the comic, but not the images.’ And they have gone back again to show her workings around prompting, and this is the idea of the prompt engineer. The author here, the ‘artist’ in inverted commas, is a prompt engineer.

So the ruling (well, it was not really a ruling) from the US Copyright Office seems to suggest that what matters is the percentage of human involvement versus AI involvement. So what do you think about this?

What is an appropriate percentage of AI input to human input? And how are they ever even going to know?

They’re not going to examine every single thing like they’re examining Zarya of the Dawn.

Kathryn: Well, they do have copyright examiners and they do examine every single application. So that’s item number one.

So GPT-4 came out this week. Also this week, things are changing at the Copyright Office. On Wednesday, which would have been March 15, they issued guidance that in your application, you now have to identify what portion of your work is generated by AI.

Okay, so I’m going to quote their guidance now.

“Applicants have a duty to disclose the inclusion of AI-generated content in a work submitted for registration and to provide a brief explanation of the human author’s contributions to the work.”

Okay, so then they go on and say, “The Office will consider whether the AI contributions are the result of mechanical reproduction, or instead, of an author’s own original mental conception to which the author gave visible form. The answer will depend on the circumstances, particularly how the AI tool operates and how it was used to create the final work.”

Okay, so very squishy language.

Joanna: Super squishy. That’s really hard. I think it’s easier with visual images, but with a body of a book, like a 90,000 word book, that’s going to be very hard.

Kathryn: Especially when you consider how many drafts you go through and all the editing. So one more thing that happened at the Copyright Office, again this week: they announced that they are holding public listening sessions on the use of AI. They’re going to ask participants to discuss their hopes, concerns, and questions about generative AI and copyright law. They’re going to focus on literary works; visual arts; audiovisual works; and music and sound recordings.

So the first one on literary works is April 19, and you can go to copyright.gov and sign up. You can request to speak at the session or you can just attend as a listener. So they are in information-gathering mode. 

So to answer your question of what percentage, copyright has never been in the business of determining a percentage. The way I like to think about it is, in terms of infringement, it’s not what percent of the original protected work was taken, but it’s—talk about squishy language—

It’s whether the heart and soul of that work was taken.

Then, of course, we have the Supreme Court considering a fair use case right now, without the benefit of any of this technology impact. So it’s all in motion right now.

Joanna: Yeah, it is funny, because I kind of want to play devil’s advocate on both sides. I heard a US senator on the Hard Fork podcast, which I highly recommend; it’s an excellent New York Times podcast. He’s on one of the Senate committees about AI, and he’s gone back to university to do a degree in AI, because they can’t understand this stuff. I’m not sure the degree will move fast enough, but he said there are more and more people talking about this.

He also said, and I’m not quoting him directly, but the feeling of what he was saying was, “This is America. We don’t want to squash innovation. We want to compete.” You know, you and I could talk about China; I’m sure he didn’t mention China. But if America doesn’t do this, Europe, for example, will probably take a much harder line.

The UK has just announced, again this week, in the budget, around a billion pounds for a new supercomputer, because at the moment, you guys have OpenAI. You have the supercomputer. So it’s like, well, we need our own something like that. We can’t be reliant on the Americans.

So it’s so interesting.

We’re beyond just a book or a comic at the Copyright Office. We’re talking about a technology that is transformative to countries.

Kathryn: So yeah, very interesting. I was listening to a podcast this week, the Wall Street Journal Technology Podcast, and they were talking about China’s AI machines and how they are censoring their AI. So if you were to ask a question about politics, the output you would get in return would be, “I’m sorry, we can’t talk about politics.”

Joanna: Yeah, this one is too, by the way. If you ask something like, you know, “How do I use a gun?” for example. This is what conservative people are complaining about, that the AI seems a bit woke. So whether you call it censorship or not allowing certain questions, I don’t think that’s just China.

Kathryn: No, that’s a good point. That’s a good point. I hadn’t thought about that because there are other issues. It is just so huge and exciting, and we are on this ride.

Now think about this for a minute. Okay, it’s 2023. And when did we first start really browsing the internet? When did we get a graphical user interface for the internet? Was that in the 90s? Right?

Joanna: I think it would have been late 90s, but then there was the crash. And so then really sort of 2002, 2003, I guess. And 2007 was the iPhone, first iPhone.

Kathryn: 2007 was the iPhone, right. And that was really revolutionary.

The speed at which innovation is happening is compressing. It’s getting faster and faster.

It’s not even a full generation before we are confronted with a new technology that we’re going to have to learn how to incorporate into our lives. So it’s kind of amazing, and it’s so much fun. 

I know that it can be scary for people who think they’re going to be losing their livelihoods. And yes, I get that. But I’m having a great time watching what’s going on.

Joanna: Well, let’s be more specific, because there are lots of us now, authors, and I’ll particularly refer to a short story I put out, With a Demon’s Eye. I used Midjourney to generate the cover image, and I did a blog post all about it. In the story, I used some text helped by ChatGPT and Sudowrite, and I documented it all in that blog post. So I’m very open about it. In my author’s note I have, “This is how I used AI.” So I think, hopefully, that might pass the Copyright Office.

Also, when I generated that image for the cover, which is a female combat photographer, you could not get a stock photo of a female combat photographer. So I used that image, and I have put it out as, look, this is Creative Commons. As far as I’m concerned, it’s Creative Commons, but I’m not a cover designer. So the question is—

If we use either Midjourney, or we use ChatGPT, or Sudowrite, or any of these tools, and we generate some work that we then publish and sell, is this an issue? Or how might that be an issue?

Kathryn: So I don’t know how it would be an issue to sell and make money off what you generate with Midjourney or ChatGPT.

The people who are buying your work are buying it because they’re your audience. They like the way you construct stories, and quite frankly, without you being the prompt engineer with Sudowrite or ChatGPT, the story wouldn’t come out the same if somebody else did it.

So that is your creative input, and you are selling the output to your audience.

I don’t see any issue with selling AI-generated work to an audience who has come to know and love you.

Even if, let’s just say, you’re starting out and you’re using it to create work, visual art, written work, what have you: if there’s somebody who wants it, you have to put yourself in the shoes of the reader, of the buyer. If there’s somebody who wants that, they don’t want to go out and make it for themselves. They want to buy it, they want to read it, they want to enjoy it, they want to be entertained. So from that economic transaction perspective, I don’t see a problem with it.

Joanna: Which is brilliant, and I’m glad, because I’ve done it. And I also agree; I’ve spent a lot of time looking into this. But one question would be a retroactive issue. Let’s say, devil’s advocate again, Stability AI gets shut down, and there’s a payment that’s made.

If I was someone who had used a tool that is in the future shut down because of a legal issue, can people come after me retroactively because of that ruling?

Kathryn: In the criminal law world, we call that the ‘fruit of the poisonous tree.’

Joanna: Ooh, that’s a good one.

Kathryn: Yeah. And I think that it would be very difficult to extend a ruling. Say the class action in California was successful and got Midjourney shut down, and you are now selling With a Demon’s Eye using a cover that you generated through Midjourney. Except for the fact that you admit it in your blog post, I don’t know that they would be able to trace all of the images that are created with Midjourney.

I will also tell you that professional cover artists and professional digital artists who are using Midjourney, which apparently is better than Stability, but who are using Midjourney in their work, they don’t just take that output and sell it to their clients. What they do is they take that output to represent an idea, and then it goes into Adobe Illustrator or Photoshop, another tool with AI in it, and they continue to work their magic with that tool to get the image just right for their client.

So I don’t see how a ruling that would shut down Midjourney could reach out to people who have used Midjourney to generate things and then put them into another product and sell them to their clients. I just cannot see that logistically, as a practical matter.

Joanna: Yeah, I mean, I agree with you. It’s interesting, isn’t it, because all of what we’re talking about is ethical authors, including myself, and including your clients, ethical creators who want to do the right thing, who want to use these tools, these amazing, amazing tools, to create more stuff.

And like I said —

It’s like being plugged into the matrix. I suddenly think I can create everything I want to create. Whereas before I felt I couldn’t. It’s like this huge lever that enables me as a single creator to do so much more. 

So I feel like there are a lot of ethical questions that people are asking for the right reasons.

But then, of course, we have to also be aware that it’s very easy. You could take my short story, paste it into GPT-4, and say, “Rewrite this with a different character name, a different gender, in a different location.” It would rewrite it, and it would literally be the same story with some key things changed.

It wouldn’t be plagiarism because the output would be different, but it happens in a second. So, thoughts on that? Because we literally can’t stop that.

Kathryn: No, we can’t stop that. You know, I’m not sure I know the answer to that one.

In a traditional copyright infringement framework, you would have to do the side-by-side comparison, as I talked about earlier, you would have to show that there was access to your short story by the AI, and then you’d have to show that there is substantial similarity in the expression or in the structure, sequence and organization of the story. Whether changing the gender of the main character is sufficient, that’s going to become an issue for decision by a judge or a jury.

It’s going to be this one-at-a-time consideration that an author, like yourself, would have to enforce against. And I’m not sure that most independent authors have the wherewithal to do that kind of enforcement. Which is why the Getty thing is very interesting, because they do have the wherewithal, although they may have ulterior motives. I don’t know how we stop that.

Joanna: Yeah, I think my answer is we can’t stop that, and we won’t bother. I mean, people will take my content the moment it’s printed, or the moment it goes up online. The moment this episode goes up online, I will get pingbacks from about 13 different websites that have just taken the whole thing and published all of this content on their site. So we’re talking to you, websites who are stealing this content!

You know, it’s the same on YouTube, or any of my courses, any of my books. This already happens. So it will happen on steroids with AI. But again, with the mass of content, and you said this earlier, it’s about connecting with readers who like our stuff. Some people will buy my stuff because I’m me, and I’m reaching out to them. And I guess all we can do is focus on connecting with an audience, and the person who just generates AI content spam still has to find an audience.

Kathryn: That’s true. But you know, just to point out for a moment here, Joanna, connecting with an audience has been your mantra for years.

Joanna: Yes, it’s nothing new.

Kathryn: Right. And it’s —

Connect with an audience, own your own platform, control your own destiny.

Don’t put your entire career in the hands of Amazon KDP, which can decide to cancel your account. Yes, it’s about platform. It’s about your own audience. So that is not new. And your audience isn’t going to want to read something that is a knockoff of JF Penn.

Joanna: And if they do, you know, that’s the thing. What I’m hoping for is that there will be AI tools that will surface content that we want to read. So, for example, I obviously read, and I find books on Amazon sometimes, I find them in bookstores, that kind of thing. We all find books in different ways. But I’m hoping there’ll be a super smart AI book picker that will be able to find more books that I like in the sea of content.

And again, like we talked earlier about, what’s mediocre and what’s amazing. If there is someone in the future who is an amazing prompt engineer for the kind of fiction I love, then I will want to read those books. So it’s so interesting, isn’t it? It’s like, don’t throw the good stuff out with the bad. Don’t think it’s all awful just because there’s potential plagiarism.

Kathryn: Yeah, that’s true. That’s true.

Joanna: Okay, let’s just talk about something else that happened.

The Authors Guild has added to their example contract a clause prohibiting AI training usage.

[Link to the Authors Guild article.]

So tell us about this. And is anyone actually going to put this in, do you think?

Kathryn: I don’t know who’s going to adopt it. I do love their model contract. I rely on it all the time when I am negotiating publishing contracts with my clients. It just gives me a lot of power to have them behind me. So let me just say that to start out. But who’s going to adopt it? I don’t know. There’s going to be significant pushback.

I think, first of all, that there are existing publishing contracts out there that already give the publisher sufficient authority in the grant of rights provisions to let them use the book to train AI.

Joanna: Yes, I absolutely think that, completely. And they will, why wouldn’t they?

Kathryn: Well, why wouldn’t they? Well, because they have authors who generate a lot of money for them, and are they going to knock off their own authors?

Joanna: Well, I mean, like the Enid Blyton estate, I once heard someone say ‘the best author is a dead author’ because you can take their estate and do amazing things with it. Or like the people who now “co-write” in inverted commas with Robert Ludlum, or Wilbur Smith.

Kathryn: James Patterson.

Joanna: Well, James Patterson is still alive, to be fair!

Kathryn: Oh, sorry about that.

Joanna: Yeah.

So dead author estates would be a very interesting example.

Kathryn: That is a good example. And so we would have, again, the question of whether those publishing contracts have sufficient authority in them to allow them to do that. And then whether whoever is in control of the literary estate is going to permit it. And then how do the royalties break down? So I think that there are a lot of open questions.

The other point about this contract is to remember that if you have a provision in your publishing contract that prevents the publisher from using your manuscripts to train AI, all that does is control the publisher. It doesn’t control third parties. 

So that goes back to: how did Stability even get access to the Getty database, right? And can these AI machines get access to the books that are on Amazon? I mean, I don’t have enough technology knowledge or experience to understand how they are training, where they’re getting this data, and whether they’re doing it legally or illegally. So I think that that’s going to be an issue. That publishing contract doesn’t control third parties scraping the internet.

Joanna: It’s interesting. You and I’ve been talking about techy things for a while, and the last time you were on the show, a year ago, in 2022, we talked about blockchain, NFTs and DAOs. Then there was a crypto crash, which happened just after we talked.

But now, this is where I think that —

Generative AI may accelerate the need for blockchain registration of IP and then subsequent licensing.

So as soon as I finish a work, a finished work that I now essentially want to get copyright on, I would upload it to a copyright blockchain, and that would mark it. Before I put it anywhere else, I would put it there, and then it would have an ID on it.

And then, for sure, I would license it for training models because I would like micropayments for that kind of thing. So my feeling is that perhaps this might drive the adoption of blockchain because we’re not going to be able to keep up otherwise.
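[Editor’s note: here is a minimal, purely illustrative sketch of the kind of “mark it and give it an ID” step Joanna describes: hashing a finished manuscript to produce a content fingerprint before registering it anywhere. The file name is hypothetical, and no particular blockchain or registration service is implied.]

```python
# Illustrative only: fingerprint a finished manuscript so a registration record
# (on a blockchain or anywhere else) can reference the exact work without
# exposing its full text.
import hashlib
from pathlib import Path


def fingerprint_manuscript(path: str) -> str:
    """Return a SHA-256 hex digest that uniquely identifies this exact file."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()


if __name__ == "__main__":
    # "with_a_demons_eye.epub" is a hypothetical file name for this example.
    digest = fingerprint_manuscript("with_a_demons_eye.epub")
    print(f"Content ID: {digest}")
```

Registering only a digest and a timestamp, rather than the full text, would also speak to Kathryn’s point later in the conversation that putting an entire work on a public chain leaves it sitting there to be scraped.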

Kathryn: So I think blockchain is a great answer for registering IP.

Again, because it’s international, as opposed to the Copyright Office in the US. But creatives all over the world have been taught that they don’t have to register their work to protect it.

Joanna: We all have to learn new things.

Kathryn: I know. That’s the biggest lie in copyright. So I think blockchain is a great answer for the registration process, when it is developed and properly secured.

But just to revisit what we talked about a year ago with those NFTs, before the crypto crash: one of the most positive things that I felt about NFTs was the ability to lock in a royalty on resale. A royalty going back to the author, back to the artist, on resale.

So what happened with the crypto crash? These platforms, these NFT marketplaces, are letting buyers get out of paying royalties. And that was one of the best things, royalties on resale was one of the best things that NFTs and the blockchains were able to lock in. So we have blockchain, which is supposed to be an immutable contract, and they’re changing it. That’s not immutable.

Joanna: Yeah, I don’t know. All blockchains are not equal, for a start. And also, I think of blockchain and NFTs and all of that, I think of that as 1997 internet. And I hope that this crash is the 1999, 2000 internet crash.

And then what came out of that was much, much more robust design, you know, the actual people building actual things, as opposed to speculation and bubbles.

So I kind of think that there’s a lot of building going on now in the quiet after the crash that may emerge with this generative AI into an interesting new space. I think that the good ideas were there, it’s just maybe we need the architecture.

Kathryn: Yeah, right, exactly. The good ideas are there, but the structure is not there yet. I do think it has the possibility of becoming a worldwide registration system. And it’s not just registration; the reason you have registration is so you have enforcement. You have the right to say to all of those 13 websites who are going to steal this podcast, take it down. And I don’t know whether it then comes down. I don’t know.

Joanna: Or they just have to pay a micropayment and they can use it. That’s fine. That’s what it is, isn’t it? It’s fairness.

Kathryn: Yeah, right.

So here we are, the ethical people, talking about fairness. But there are those evil deed doers out there who aren’t going to give a hoot about fairness. And also, remember that if you’re registering on the blockchain, your entire content is completely available. Now it’s not even a question of whether it’s illegal to do the scraping; there it is.

Joanna: That’s a really good point. Oh, this is so much fun!

I mean, we’re out of time. We could talk about this forever, and you’re clearly going to have to come back on again soon because things are changing so much. But you are doing all kinds of things at the Creative Law Center.

Tell people what they can find at the Creative Law Center, the events that you’re running, and what you’re planning to help creatives with around AI.

Kathryn: Okay, so let me just add two little tips for your audience on things we did not discuss; we didn’t discuss everything. I would like everybody to add an AI training restriction to the terms of use on their website. Put it in there. I don’t know if you’re going to be able to enforce it, or if you’re going to want to enforce it, but put it in there. Use the Authors Guild contract language as a model and stick it in there.

Also add an AI training restriction to the copyright page of your self-published book. Because remember, buying an ebook is a license and it’s subject to contract. So again, you have a restriction that might be enforceable. So those are my two tips for your audience.

I can be found at CreativeLawCenter.com. I offer a membership program for creative professionals. My concept in setting up this membership program was to make legal services around copyright, trademark, and creative business building affordable, accessible, and actionable for creative professionals. So come and visit me at CreativeLawCenter.com.

I lurk on Twitter, and I am on LinkedIn also, @KathrynGoldman. Those are the places you can find me. We have monthly workshops, and then we have years of past workshops that you can get the replays of. So please come and visit me, and shoot me an email if you have a question.

Joanna: Well, thanks so much for your time, Kathryn. That was great.

Kathryn: Oh, it was my pleasure to be here again. I look forward to talking about GPT-4 once I’ve had a chance to check it out.




