Writing With Artificial Intelligence With Andrew Mayne

What is GPT-3 and how can writers use it responsibly as part of their creative process? How can we approach AI tools with curiosity, rather than fear? Thriller author Andrew Mayne talks about these aspects and more.

In the intro, I mention the discussion about whether Google’s language model, LaMDA, could be sentient [The Verge], and the Alliance of Independent Authors’ guidance on the ethical usage of AI tools.

If you’d like to know more about using AI for writing, images, marketing, voice, translation, and more, check out my course, The AI-Assisted Author.

Andrew Mayne is the multi-award-nominated and internationally best-selling author of thrillers. He’s also a magician, a magic consultant, and the author of over 50 books on magic. He invented an underwater stealth suit for shark diving, and he works with OpenAI as a science communicator. He also has books for authors, including ‘How to Write a Novella in 24 Hours,’ and he co-hosts the podcast ‘Weird Things.’

Show notes:

  • How Andrew got into AI and first used a vision model for his shark suit
  • What is GPT-3 and what is DALL-E? (You can find out more at OpenAI.com.)
  • How writers can use GPT-3 as an amplifier, and how to prompt it to get an effective response, as well as the different applications
  • Approaching AI with an attitude of curiosity, rather than fear
  • What is ‘AI-assisted’ and why we should label AI-generated words and art
  • Ways to address the lack of diversity and bias in AI models

You can find Andrew at www.AndrewMayne.com, where his blog includes articles on using GPT-3. You can find GPT-3 at OpenAI.com. You can also follow Andrew on Twitter @andrewmayne.

There are many tools built on top of GPT-3. Here’s my list of AI writing tools. I use and recommend Sudowrite for fiction, in particular.

Transcript of Interview with Andrew Mayne

Joanna: Andrew Mayne is the multi-award-nominated and internationally best-selling author of thrillers. He’s also a magician, a magic consultant, and the author of over 50 books on magic. He invented an underwater stealth suit for shark diving, and he works with OpenAI as a science communicator. He also has books for authors, including ‘How to Write a Novella in 24 Hours,’ and he co-hosts the podcast ‘Weird Things.’ So, welcome to the show, Andrew.

Andrew: Hey, thank you for having me.

Joanna: Oh, you do so many things, but we are actually going to talk about AI today. First, though, I wanted to ask you:

With your background in magic, and also creativity, how did you become interested in AI?

Andrew: Well, ever since I was a little boy, I was really interested in science, and entertainment, and everything in between. I loved robots when I was a kid, and I’d build robots for science fairs and stuff, using coffee cans and little motors and things I pulled from toys.

And then when we got our first personal computer, I would try to build little chat bots and ask it questions, have it respond. And then I got kind of bit by the magic bug when I was in high school, because I lived in South Florida, and we had a lot of cruise ships there. And that seemed like a really cool way to see the world.

And AI became more of just a passive hobby after that. But then, sometimes you keep coming back to things.

I had friends who were active in AI, and I knew some people who were actual pioneers in it, whom I used to pester with questions and stuff. But it was actually just a few years ago, when I got back into it, that I realized programming was something I’d never really taken seriously. I always knew a little bit about programming, and I thought, ‘Why don’t I just go learn a programming language?’ And, you know, the older you get, the more it helps to just keep learning things.

So I started to learn to program. And then I found myself involved with doing a special for Discovery Channel for Shark Week, where I was gonna try to build a suit to make myself invisible to sharks, as you explained at the top of the show, which is dumb. But it sounded like a fun thing to do.

I had talked to shark scientists, and they’d explained to me, sharks have an incredible array of senses. And when you’re down there, you’re at a huge disadvantage, because we’re tree-climbing monkeys, in the bottom of the ocean, surrounded by apex predators. And I thought, ‘How can I help myself out?’

And I thought one of the things I could do was try to build a system where I could see 360 degrees all around me, and then use image recognition or something to tell me, ‘Hey, there’s a shark behind you.’ Because the thing I found out about Great Whites is they’re ambush hunters, and they can tell where you’re looking. If you’re not looking at them, they’re gonna sneak up on you, and you won’t see them coming.

There’s a lot of open source code for vision systems, for creating software to recognize stuff, and there are a lot of helpful tutorials. I started off as a complete novice, but I got fascinated by it, because as I was learning how you create something to detect an image, or train it to recognize something like a shark, I was talking to shark scientists who were explaining to me, ‘Hey, you know, one of the things that’s interesting about shark vision is that they don’t see you inside of the shark cage. They actually see the outline of the cage.’

And something clicked for me, because we talk about sharks having such incredible senses: a sense of smell, vision that’s probably as good as or better than ours, the ability to sense electrical fields and vibrations. All of this data is huge. But they have relatively small brains compared to, let’s say, an upper primate.

And when that person said that to me, I thought, ‘Oh, that’s an algorithm used to reduce the complexity of the information so it can make a decision.’

And I’m like, ‘Oh, a Great White is a collection of algorithms, as we are, too.’

And that particular one they described is very similar to something I was looking at in vision research, which is dealing with, like, detection of edges.

And that’s what happens when one thing connects to another. I think that’s kind of the basis of what we do as writers: we love to make connections or find connections, and when we see that connection, something triggers in our head. And that, to me, was, ‘Oh, this is fascinating.’

This is really what artificial intelligence is about — this optimization.

And so, I started getting more into artificial intelligence, learning how to build little, small neural networks. A neural network is basically kind of like how our brain works: a simple arrangement of what we call parameters, which is basically an if-this-then-that kind of thing, but trained in a statistical way.

It was just fascinating, because you’d start to see results. I built a little image generator model from some code I found online, and I could go, ‘Ah, that kind of looks like a goat. Oh, that kind of looks like something.’

And it was just a wonderful awakening for me to realize how cool this was. It happened at a time when artificial intelligence had expanded so much, and that’s where the interest came from.

Joanna: Yeah, I love that. But let’s get specific around the AI tools for writers.

So, I’ve mentioned GPT-3 on this show before, but perhaps you could just give an overview from your perspective of GPT-3 and other AI tools, like DALL-E from OpenAI. What is it like right now for writers?

Andrew: Well, it’s a very exciting period, and I was very fortunate, because two years ago, when OpenAI came out with GPT-3, I got invited as a creative to play around with this.

GPT-3 was an AI trained on a tremendous amount of text data, and they built the fifth-largest supercomputer in the world to build it. And then, when they had it, they weren’t sure what its capabilities were.

I was invited to play around and start experimenting with GPT-3, first because they knew I was a writer. They didn’t know how nerdy I was about AI. So, for me to be able to interact with the system for hours on end, to see what it could do, was exciting.

So, I helped write a lot of the examples that you’ll see if you get access to GPT-3, and then I ended up… now I work for them. I started creating examples and exploring, because I found that as writers, we’re very used to the world of words and how words are presented.

And that was helpful, because this was a point at which AI understood a bit about structure, understood a lot about grammar, understood what TL;DR means, or what ‘in conclusion’ signals, and I found that it was really, really helpful…

You could use it as an amplifier. You could start a little character description. It could help make the description a little more interesting.

For nonfiction writing, it was really helpful because you could write some things, and then it might help you continue your thought.

And then you could just write ‘in conclusion,’ and it could sum up things for you.
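[Note from Joanna: for the technically curious, here’s a minimal sketch of the ‘TL;DR’ pattern Andrew describes, written against the GPT-3-era OpenAI Python library. The model name, sample passage, and settings are my illustrative assumptions, not his exact setup.]

```python
# A hedged sketch of the TL;DR summarization trick: append "TL;DR:" to a
# passage and let the model compress it. Uses the GPT-3-era (v0) openai SDK.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

passage = (
    "Great whites are ambush hunters. They can tell where a diver is looking, "
    "and they tend to approach from outside the field of view, which is why "
    "divers try to keep them in sight at all times.\n\nTL;DR:"
)

response = openai.Completion.create(
    model="davinci",   # base GPT-3 learned the TL;DR pattern from web text
    prompt=passage,
    max_tokens=30,
    temperature=0.3,   # low randomness: we want a faithful compression
)

print(response.choices[0].text.strip())
```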

It actually affected the way that I write nonfiction. For instance, I start everything with a TL;DR right now, because watching a model take a bunch of information and compress it down to the most important points made me think about where I start when I’m writing. And this is obvious to a lot of people…

For me, I really need to start from there, then always look back to it and not deviate. So, both using it for writing and learning how it worked affected me.

Joanna: So, you mentioned a few things there. You mentioned amplifying, and expanding things, or continuing thoughts.

I’ve used it too, so I get what you mean.

But many authors who hear “AI and writing” think that you just click a button and out pops a novel.

You have written a blog post, ‘Will AI ever write a novel?‘ So, where are we with it right now? And I guess, how do we drive it? Because you’ve also written about prompt engineering.

Andrew: So, my personal belief is, yeah, of course, it’s going to be able to write novels. And people go, ‘But it won’t know human experience.’

No, it’ll be able to read a million biographies, and it will know human experience deeper than any of us know it.

But it still comes back to this: that doesn’t negate the value of writers. I think it actually amplifies it. There are systems that create really good music right now.

Take the singer Billie Eilish. She’s incredibly talented, and part of what makes her music have value to us is her story. She and her brother composed music in their parents’ home, trying to make their own sound, not being people the industry would have picked to say, ‘You get to be the superstars.’ They did it on their own, and then they became famous.

And that emotion is based in part on the fact that they’re real people.

And there are, like, fake Japanese digital pop stars, and there’s gonna be a place for that too. But I’m always gonna want to sit down and read stories. Sometimes I might want just a computer story, but I like to know that there was a person who wrote this.

I like to know this was created by somebody, because writing is about choices, and creativity is about choices.

And these tools can give you a lot of options, but you still have to make choices, whether it’s GPT-3 asking, ‘Do you want this character to be this, or do you want it to be that?’ or you deciding where you want to take it.

And that’s what we see with DALL-E.

DALL-E is our system that generates images, and they’re incredible. I have just ordered a couple to hang in my place here, because I just love the look of them. And I can tell the difference between an artist who uses DALL-E and somebody who just says, ‘Oh, a cool rabbit,’ or whatever.

Artists bring more to it, and these tools really are amplifiers.

One more example to think about: imagine in 1826 you showed a portrait artist the first photograph. They might have been frightened. ‘Oh, no. What happens to my industry?’ And it was disruptive.

But now imagine, back in 1826, trying to explain what somebody like James Cameron does for a living. It’s incomprehensible. Yet that field of art, the number of people who were able to make a living by art, increased dramatically.

The value we placed on it actually went up a lot. I’ve had friends say, ‘Oh, well, Mozart got along fine without a commercial business.’ I’m like, ‘Mozart was supported by royalty. He was the one dude who got that opportunity.’

We live in a world with millions of artists, and opportunities that come and go, but this technology can amplify that. If you have that creativity, don’t be scared by it. Be excited by it.

Joanna: I’m definitely with you. I’m an optimist about it. But it’s interesting there.

So, you talked there about a person having written this, or an artist having created this, but we can use the tools, and that amplifies the value.

If we think about a continuum from 0% AI to 100% AI, this is the issue I see, and people argue with me about this, so I’m wondering what you think. Where is the point where it becomes more AI-created rather than AI-enhanced?

Here’s a human example. Someone like James Patterson co-writes with so many people. He does the outlines, right, but a lot of his co-writers do the bulk of the words, so they get a co-writer credit on the book cover.

So, if an author has an idea and then uses something like GPT-3 to generate most of the words, do we have to talk about this in the publishing industry?

Do we just put the author’s name on? Where do you think this continuum goes, I guess?

Andrew: Well, understand, by the way: OpenAI is a research organization. We’re a for-profit owned by a nonprofit.

And the goal of the nonprofit is to develop benevolent artificial general intelligence, an AGI.

AGI is basically described as something that’s as capable as or more capable than a human, but more efficient at certain things. And we want to make sure that AI is helpful overall. There will be disruptions, but we want it to be a net benefit for all of humanity. And we look at how things might be disruptive.

So, right now, what we do with DALL-E, with that image generator, is we put a little logo on there, little color bars, as our signature.

And we tell everybody using this, ‘You need to tell people an AI made this. You need to tell everybody that an AI created this.’

Because we’re trying to set a precedent to say, like, ‘You should tell your audience when it’s AI, or maybe assisted by AI, etc., going forward.’

I mean, the danger gets into what does ‘assisted by AI’ mean now? Because if you open up Google Docs, and you start to write something, it will do completion for you.

And it uses a model, not as capable as GPT-3, but a really good model. That’s AI-assisted now.

And so, that’s sort of where it gets gray: how much.

But if you’re just turning out a thing, and it’s substantial, if most of the work was done by an AI, it’s probably a good idea to tell your audience. Because people are going to want authenticity, and sometimes it won’t matter, but sometimes it will.

Joanna: Yes, well, actually, I did a short story last year, which I self-published, and in my author’s note at the back I included a little AI declaration. I’m also encouraging other people to do this, but I don’t think most people are doing it.

[Note from Joanna: See the Alliance of Independent Authors Ethical Author Declaration of AI usage for more details.]

And you’re indie, but also traditionally published. So, how are you tackling this? I think you mentioned on your blog that you’re using some passages from AI in your next traditionally published novel?

Andrew: Yes, I actually think I probably had the first GPT-3 content published in a book, but it’s a part of a book where it’s a computer talking.

And so, that’s what I did. I said, ‘It’s this thing,’ and readers realize, ‘Hey, it’s a computer talking.’ And yes, it actually was. And there was sort of a race for me, like, ‘I wanna do whole books.’ But, ‘Well, are they gonna be good?’

And eventually, they will be. I think they’ll probably be exceptional.

But as far as the publishing industry goes, you look at the state of it now, and they look to see, ‘What is your platform? How many Twitter followers do you have? What’s your Facebook count?’ and all this.

And that is sad for me. I mean, I was very fortunate because I have done television stuff, and those things are good for me. But if I were trying to make a name for myself as an author today, versus where I was 10 years ago, it’s a different landscape.

I guess my point is, I don’t know how publishers are going to handle the AI side of things. With some publishers, if they can just have a small group of writers, a stable, working with AI and churning out a bunch of books, maybe that’ll happen. Will the quality be there? I don’t know.

Joanna: Well, this is why I find it very interesting. Because, of course, as authors, we license our rights to publishers, and there are some imprints of famous publishing houses that have a lot of books within a genre-specific template. It’s hard not to name names, but you know what I mean.

Within certain genres, there are templates that you could use. And if you had — and many of the publishers do — a huge corpus of copyrighted work that they own, that they could train a model with, could they not generate these books, books that do satisfy readers?

And as you know, quality is in the eye of the beholder, the reader. So what we think is quality might not be what other people think, for example.

Andrew: So, the limit right now, when you have GPT-3 or another model generate text for you, is a thing called the token size.

Basically, a token is how the model takes words and converts them into numerical values. It’s all math, same with the images and everything else. So GPT-3 can take in about, let’s say, 3500 words, or generate 3500 words that relate to each other. If you did chapter one, had it generate 3500 words, and then did chapter two, it would have no idea what was in chapter one, because that’s too many words. You could do a little summary, or there are little hacky things you can try to get it there.

GPT-3 has, like, double what everybody else has; the standard is maybe 2000 tokens. None of these models have ever read a book in its entirety. None of them have ever read a book from the top straight through to the end and been able to tell you about it.
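[Note from Joanna: a ‘token’ is easy to see for yourself. Here’s a minimal sketch using OpenAI’s open-source tiktoken tokenizer; the exact encoding GPT-3 uses internally may differ, so treat the numbers as illustrative.]

```python
# Tokens are the integer units a model actually reads; the context window
# Andrew describes is a limit on tokens, not words.
import tiktoken

enc = tiktoken.get_encoding("gpt2")  # a GPT-2/GPT-3-era byte-pair encoding

text = "The shark circled the cage twice before it lost interest."
tokens = enc.encode(text)

print(tokens)       # each word becomes one or more integers
print(len(tokens))  # the count that a 2000- or 4000-token limit constrains
```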

They can seem to, though. You’ll say, ‘Oh no, this AI told me about a book.’ Really, it read somebody’s review of that book. There are models for summarization, and I’ve built some of these, that will take a text and condense it down.

You’ll still see that they struggle, though. An example: a model might read ‘Harry Potter.’ At the beginning of ‘Harry Potter,’ Harry’s told his parents died in a car accident, which we know, spoiler alert, isn’t true. Voldemort killed them. But you don’t find that out till the very end. A model that just reads things in parts would tell you, ‘Oh, yeah, his parents were killed in a car accident.’ No, no, that got reversed later on.

So, right now, for the shorter-term future, an AI can’t write a whole book, because it doesn’t understand the beginning, middle, and end of a long-form novel. That will change eventually.

But I think we, as writers, can start thinking about what the novel means. Like, I write series. I found that I like doing it, and the market’s really good for that.

But writing a series is challenging, because I have to remember the names of everybody and everything else like that. And I would love to be able to use AI to help me keep track of all that stuff.

And I would love to be able to write on a much bigger canvas, too. And I think that’s the thing to think about: one, it can shorten the time you spend writing; you could increase your output; it could let you spend more time making cool choices and less time doing things you don’t wanna do, revisions, etc. And then, I think, overall, it can make us all better writers.

Joanna: Yes, and I totally agree.

Thinking of it as a tool, rather than another writer, is the much better way.

But I just wanted to circle back on prompt engineering, because I noticed something when I started to play with this. I’m not a programmer. I did get access to the beta for GPT-3, but I now use Sudowrite, which is like a front end on GPT-3. And many writers listening won’t be programmers.

But what I discovered very quickly was that you have to drive it in a certain way. And I believe you actually use the term prompt engineering.

How do we drive GPT-3 in the best way, in order to get the best results?

Andrew: So, here’s a thing I think is interesting. I’ll see research papers come out talking about prompt design, and I think that, not to be too critical, but sometimes they really miss the point of what these models are.

At a base level, they’re prediction machines, but they’re emulators. And to emulate something, they have to know what they need to do.

If you ask one of these models, ‘How many fish live in a wallet?’, which is an absurd question, it might give you an answer, because it’s trying to find some sort of rationale for the question, and it’ll answer something.

It’s like if I wrote that on a piece of paper and slipped it under your door, and it said, ‘How many fish live in a wallet?’ You might be like, ‘I don’t know, one? Okay, whatever.’

But if you tell the model, ‘Hey, if this question makes sense, answer it. If it’s illogical, say illogical.’ Literally say that at the top, then write, ‘Question: How many fish live in a wallet? Answer:’ and it’s more likely to say, ‘Wait, this is illogical. This doesn’t make any sense.’

And then if you say, ‘If I have $5 in my wallet, how much do I have in my wallet?’ it’ll say, ‘Oh, you have $5.’ So, tell it what you need. It’s an example of why you need to direct it.
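[Note from Joanna: here’s a small sketch of the ‘say illogical’ pattern Andrew just described, using the GPT-3-era Completion API. The model name and exact wording are illustrative, not an official recipe.]

```python
# Constrain the model by stating the rule at the top of the prompt.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt = (
    "If the question makes sense, answer it. "
    "If it is illogical, say 'Illogical.'\n\n"
    "Question: How many fish live in a wallet?\n"
    "Answer:"
)

response = openai.Completion.create(
    model="text-davinci-002",  # a GPT-3-era instruct model
    prompt=prompt,
    max_tokens=20,
    temperature=0,  # deterministic: we want the rule followed, not creativity
)

print(response.choices[0].text.strip())  # more likely: "Illogical."
```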

If you want it to do a style… If you just say, ‘Write an article about blank,’ well, if you were a contract writer, you would know the publication you were writing for, right? You would know the topic, you’d know the audience, you’d know all these things.

And so, sometimes people go, like, ‘Oh, I asked GPT-3 to write an article, and it wasn’t any good.’ I’m like, ‘I guarantee you, if you just threw that out to a bunch of random people with no other information, it wouldn’t be very good either.’

But when you give it more background, if you say, ‘Oh, give me a starting paragraph about this topic for “The Atlantic” magazine,’ then it knows the style, it knows the narrative, and it’s going to follow through on that in a better way.

So that’s the thing to think about.

You have to give it enough information to do what you want. It doesn’t read minds.

I did some examples where I used one of our newer models to create games, just by using text descriptions. I actually got it to make Wordle without any code, by giving it, like, five instructions.

But I had to explain the rules of the game.

And I had some people look at this and say, ‘Oh, well, you had to write all that stuff to get that game.’ I’m like, ‘Well, yeah. I knew what I wanted to get.’ If I’d just said ‘create a game,’ it could have given me tic-tac-toe.

I had to be specific about what I wanted, and it’s the same with these models. You need to direct them; give them the same sort of input you’d give somebody eager to help, maybe a really smart middle-schooler.

Joanna: Yes, I mean, you’ve built an app, haven’t you, where you can chat with a particular type of character? Are you still working on that?

Andrew: Before I went to work for OpenAI, I built a lot of demo applications, and was kind of helping them figure out how you release stuff.

I built a thing where you could email historical figures. You would basically send an email to their name and ask a question, anybody, Ben Franklin, whoever, and it was fun. It was this cool demo. You’d email them, and they’d email you back.

I don’t have that up and running anymore, because now our guidelines are that we want to be more protective. Because you could have emailed Pol Pot, you know, some awful people in history, and now we’d probably say, ‘We want to make sure that somebody is checking these responses before we let people correspond with somebody who committed atrocities.’

And part of the reason we do that, too, is people ask, ‘Well, how dangerous are these models?’

One, we don’t know.

I think that we err on the side of being a little over-cautious.

Because maybe these models really aren’t potentially that harmful or dangerous, but the later ones that are coming are going to be way more powerful, and that’s when you’ve got to be careful.

When you have, basically, the ability to type in a request to a super-intelligent person that maybe doesn’t have the same sort of ethical boundaries we do, that could have scary implications.

Like, ‘Oh, create a computer virus to mine Bitcoin on the computer of everybody I know.’ The systems are gonna be able to do that. That’s gonna be a thing, you know, if we’re not careful.

So, people ask me, ‘Oh, why are you so restrictive?’ And I tell them, ‘Well, we wanna get in that habit now. We’d rather be a little too restrictive early than not restrictive enough until it’s too late.’

Joanna: Yes. And I do urge people to go and look at the OpenAI website; the mission and statements are there. And I really appreciate how careful the company has been. However, I would say there are always going to be issues. Technology is a double-edged sword, isn’t it?

Andrew: Yes.

Joanna: But you mentioned the responsibility in the output there.

The models are trained on data — including out-of-copyright books.

But these models are trained on data. And this is one of the things I think about a lot: most of the data it’s trained on, apparently, is out-of-copyright books. I believe they’re not trained on works in copyright, because of the issues of licensing.

But, of course, most of the books out of copyright are written by dead old white men, in English, probably mostly of a Christian persuasion. And so I feel like,

To address diversity and issues of bias, we need to train models on works in copyright by far more diverse writers from the last 75 years of publishing. So, how do you think we could figure that out and fix it?

Andrew: Well, yes. One of the solutions we have right now is a series of models called our Instruct models. The original GPT-3 Davinci was trained on a large amount of text, and it just does what it does.

And that is the amazing part about it, too. We never trained it to be a chatbot. It just figured it out. It figured out all these things, which is awesome, and terrifying. And so, the Instruct models are ones where we said, ‘Hey, we know people wanna perform certain tasks a lot, and they want reliability,’ so we train those models to be very reliable in what they do.

When you train that, you can also say, ‘Hey, let’s amplify certain things in the dataset. Let’s minimize other things.’

As a simple solution, people say, ‘Well, just take out all the bad stuff.’ The problem is, if you’ve ever met a child: stand next to that child on a street while they look at people and make observations, and you’ll see that children would get cancelled very quickly, because they don’t know any better. They don’t understand what’s an appropriate question and what’s not. And if you take all that stuff out of the data, you might have a model that’s extremely naïve about the world.

And then it’s going to innocently ask you something like, ‘Why are you typing so slow? Are you dumb?’ And you’ll be like, ‘Well, you’re not supposed to say that.’

You want to build models that have a lot of knowledge about the world, but also have good judgment.

And part of what you can do… Like, what we did with DALL-E: when we first tested it, you’d say, ‘Hey, create a bunch of images of scientists.’ If they were all men, that’s not good, purely on the basis of not being a good artistic tool, because people are gonna want a wider variety.

So you can make things more steerable. One, try to make sure the model is more diverse in what it represents, but also give people more control.

You could say, ‘I want an Indonesian woman scientist working in a lab, with two other women, as they observe something.’ You make it controllable, so the person has the opportunity to get the output they want.

Joanna: But I guess that is sort of artificial, trying to fix a bias. Whereas if we actually read in, say, the works of all writers in different languages across the last 80 years, then that might provide more variety in the training data.

And also, now, I believe we can create synthetic data, and I’m thinking text here, not images, for training the text models. I also see this as a way that writers could potentially license their data: would you like to put your book in this corpus of data that we’re training the next model on, for example? Do you think that’s entirely ridiculous?

Andrew: No, not at all. I would love for us to find ways in which we can do that. And Sam Altman, who’s the CEO of OpenAI, is an incredibly thoughtful guy.

He’s talked about, at some point, if we start training models like that, having ways for people to contribute to them and also benefit from them, because there’s going to be a point when you’re just going to need a lot more data.

It’s about finding a way of making sure that, one, everybody benefits from it, and two, figuring out how we can have it represent a lot more points of view, in a way that’s beneficial.

Joanna: Yes. And I guess the other thing, from a selfish writer viewpoint: like you, I’ve been writing novels for over a decade.

And I would just love to train a model with my backlist. And of course, none of us, individually, can have a backlist big enough to train any model, right?

But I almost see it as, say, let’s say, a group of thriller writers could get together and create their own training dataset for a model, and that would give us a far more fine-tuned output. Do you think that’s possible?

Andrew: Well, actually, right now, we could take all of your books, break them into, let’s say, paragraph-by-paragraph sections, and train on that. We can fine-tune.

So, we offer a service where you can fine-tune a model and give it data. You could do that now. It’s not going to be able to write a whole book, but if you start writing a passage and mention a character, it would probably complete things based upon what it knows. So, working at a paragraph-by-paragraph level is probably possible right now.
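[Note from Joanna: for anyone curious what that paragraph-by-paragraph preparation might look like, here’s a rough sketch against the GPT-3-era fine-tuning API. The file names, pairing logic, and base model are all my illustrative assumptions; check OpenAI’s current docs before relying on any of it.]

```python
# Sketch: turn a backlist novel into prompt/completion pairs (JSONL),
# upload it, and kick off a fine-tune. GPT-3-era (v0) openai SDK.
import json
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

with open("my_novel.txt") as f:  # hypothetical input file
    paragraphs = [p.strip() for p in f.read().split("\n\n") if p.strip()]

# One training example per adjacent pair: this paragraph -> the next one.
with open("backlist.jsonl", "w") as out:
    for prev, nxt in zip(paragraphs, paragraphs[1:]):
        out.write(json.dumps({"prompt": prev + "\n\n",
                              "completion": " " + nxt}) + "\n")

# Upload the training file, then start a fine-tune on a base model.
upload = openai.File.create(file=open("backlist.jsonl", "rb"),
                            purpose="fine-tune")
openai.FineTune.create(training_file=upload.id, model="davinci")
```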

Joanna: Oh, okay. So, are you thinking about doing that for your books?

Andrew: I’ve thought about it. I haven’t done it. Early on, it was, ‘Oh, let’s train on some data. Let’s try some stuff.’ And I did a little bit of it.

But, I mean, I’m like the car mechanic whose own car is always broken down; I’m always pursuing some other shiny new object. So, I’ve played with it to a limited extent, to see how it could do completions. And I’ve done a lot of helping other people train things, like people who work on commercial properties who wanna use it as a writing tool: ‘This is how you prepare the data. This is how you do it.’

But I should probably do it as an experiment, to see. Because I’m an active writer and I work in AI, I do try to be mindful of doing things in a way that people feel is responsible.

Joanna: Yeah, well, I think this is fascinating. And of course, you span different publishing routes. So, you are still indie as well as traditional, right? You still self-publish?

Andrew: I haven’t self-published anything in a while. I do two books a year right now for Thomas & Mercer, and that keeps me busy, because I’m also full-time at OpenAI.

Joanna: Well, I would call you a polymath, in that you seem to do a hell of a lot of things: inventing, magic, OpenAI, and writing. So, how do you manage your time?

Is it a full-time job at OpenAI, and also writing and everything?

Andrew: Yeah, I’m full-time at OpenAI. I came on board originally as a contractor, helping them out with GPT-3, and like I said before, writing documentation and creating examples.

And then I came in with the title ‘Creative Applications,’ because I’d just find creative ways you could use it. You have these incredibly brilliant people who get to build things, and they’re constantly having to build them. And I’m like, ‘Can I play with them, then?’ So, I get to take advantage of everybody else’s hard work, just play with it, and show people cool stuff.

So, I worked with the API team doing that, helping get the word out about it. And then, last year, I joined our comms team, an incredible team that helps with communications and public relations, etc.

And I asked, ‘Hey, could I work with you guys to see what this is all about, because I really wanna get involved in helping communicate it.’ So I came on board there, and they gave me the title ‘Science Communicator.’

So, now I do a lot of stuff. If we wanna do a piece with, let’s say, some journalists on the models, and they want a backgrounder, I do that. I help explain things as a writer: how do we communicate this? How do we shape the messaging so people understand what we’re doing?

Joanna: Well, this is part of your communication: there are lots of people listening who are very interested. And that leads to another question. I did apply for the beta, and I do have access to that API, but as I said, I’m not a programmer. And we’re recording this in June 2022.

If people listening want to use GPT-3, how would they go about doing that?

Andrew: Well, if you go to OpenAI and sign up, you’ll get access to a lot of technical stuff, documentation for if you want to code, but there’s also a playground.

And you’ll get a certain number of credits for free; I don’t remember exactly how many. But you can go in there, type things in, and just see what happens. You can put in a little bit of your text, see what it does, and go look at the examples.

So, the playground environment is easy; you don’t have to code to use it. You’ll see knobs like ‘temperature.’ Think of temperature as randomness: how random do you want the response to be? Zero is not random at all; set it higher for more variation.
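[Note from Joanna: the temperature knob is easiest to understand by running the same prompt at different settings. A minimal sketch, again against the GPT-3-era API with an illustrative model name.]

```python
# Same prompt, three temperatures: low is near-deterministic,
# high is more surprising (and more error-prone).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt = "Describe an abandoned lighthouse in one sentence:"

for temperature in (0.0, 0.7, 1.2):
    response = openai.Completion.create(
        model="text-davinci-002",
        prompt=prompt,
        max_tokens=40,
        temperature=temperature,
    )
    print(temperature, "->", response.choices[0].text.strip())
```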

I’ve got some videos on YouTube where I explain some of this. I need to do some better-quality ones, but I’ve done a lot of sessions just explaining it, and anybody can go there and start playing around with it.

And then if they want something more specific, there are different companies that have different writing tools that are based on top of it.

[Note from Joanna: I use and recommend Sudowrite, which is built on top of GPT-3.]

But at the basic level, you’ll see the GPT-3 model, and also our newer models. We have a model called text-davinci which, if you’re doing technical or nonfiction writing, is really good at giving you concise responses.

Joanna: Definitely worth playing with. I picked up on this with GPT-2, and then GPT-3 came along. And I now kind of talk about it as ‘GPT-X,’ because surely there are going to be more numbers.

And obviously, you can’t talk about timings or anything. But it has been two years since GPT-3, and obviously DALL-E 2 has come out. Can we expect another iteration anytime soon, or in the next year or two?

Andrew: You know, we never talk about things before they’re ready. And it just comes down to that.

That’s how it is in the world of AI and research. When DALL-E got released, we brought people in, and we have iterations of it. It got released maybe a month after the model had finished training, I think, and we’d published a paper talking about some of the concepts early on. So, these are things we just don’t know. We often have no idea; we might just think, ‘Oh, we wanna try to do something here.’

But in the meantime, one of the things that happened, just a couple of months ago, is we released a couple of new capabilities…

We updated GPT-3, and we actually went from 2000 tokens to 4000 tokens, which is kind of phenomenal.

And we also added things like edit, where you can go in there and insert stuff in between. So, you can have a passage here and a passage below it, and you can say ‘add something in between there,’ and it will work backwards and forwards from there. Which, to me, is such a phenomenal capability, when you really think about what that can do.

Because previously, you would just give it the text and say, ‘Okay, complete this.’ Now, we can fill in in between. So you could give it a passage and say, ‘Okay, add more detail here.’ We announced those capabilities and talked about them, but they didn’t get quite the attention I personally think they should have, because of what they can do if you’re a writer.

You can actually put text in there, and then say, ‘Okay, rewrite this in the first person point of view,’ and it will change it to the point of view.

If we had said, ‘Oh, this is some special new model,’ people would go, ‘Oh, my.’ And they are new models. Instead, people go, ‘Oh, that’s really cool. It can do this now.’ Well, it can do it right now.
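[Note from Joanna: here’s a hedged sketch of the two capabilities Andrew describes, rewriting to an instruction and inserting between two passages, using the GPT-3-era endpoints. The model names reflect that era’s API, and the sample text is mine.]

```python
# Edits endpoint: rewrite existing text to an instruction.
# Insertion: give a prompt (before) and a suffix (after); the model fills the gap.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

edited = openai.Edit.create(
    model="text-davinci-edit-001",
    input="She walked into the server room and saw the blinking lights.",
    instruction="Rewrite this in the first person point of view.",
)
print(edited.choices[0].text)

inserted = openai.Completion.create(
    model="text-davinci-002",
    prompt="The storm hit the coast at midnight.",
    suffix="By morning, the town was unrecognizable.",  # text that must follow
    max_tokens=60,
)
print(inserted.choices[0].text)
```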

Joanna: Actually, that’s a good point. Because Sudowrite did introduce a new thing where you could type in a line and say, ‘Make it more horror,’ and it would rewrite it in a more horror way, or a sci-fi way, or whatever.

Andrew: Yeah, that was based upon the edit functionality we added.

Joanna: There you go. Well, maybe it’s just because it didn’t have a new number, a new release. But I guess, as you say, there are iterative changes, and then there’s a big release, a big change.

Andrew: Yeah, and, you know, part of it is that we are a research company first. That shapes how we approach things: we don’t wanna beat a drum the way you would for, like, a new iPhone, generally speaking.

Because OpenAI, we try to be cautious about how we do stuff.

And we will do these incremental things and say, ‘Hey, look, what we’ve done. This is the capability now.’

The Instruct models that I mentioned before were a pretty big deal, because they help you guide a model better, make it more reliable, and keep it from giving you unwanted outputs. A lot of important things are gonna be these incremental developments.

Joanna: Right. We’re almost out of time, but I just had one more question.

On your blog, you wrote, ‘I think AI is going to upend everything in entertainment, and not just my little corner of publishing.’

So, since you come from TV as well, and magic, and your shows, and everything, how will it do that? How will it upend everything?

Andrew: So, I’ll give you an example. I don’t know if you watched ‘The Mandalorian,’ but at the end of a season, they CG’ed Mark Hamill’s younger face onto an actor. And, not to be dismissive, because everybody who works on that show and in VFX works really hard, but it wasn’t really the best effect. It wasn’t the strongest way they could have done it.

Because AI deepfake technology had become way more advanced. But the entertainment industry can be slow to catch on to things.

And I’d joked to my wife, ‘Hey, you know, in two days, a YouTuber is going to upload a better version of this using AI technology.’

And I kid you not: two days later, a YouTuber by the name of Shamook uploaded a better version. And ILM, to their credit, hired him. He now works for them and helps them do that sort of thing.

But there’s a lot of incredible AI stuff out there right now that people don’t even realize. I have friends in VFX, and I’m like, ‘Why are you doing it this way?’ ‘Well, that’s what the system does.’ I’m like, ‘There’s a free code library that does this easily.’

And there are a lot of these things. Take removing backgrounds from images: there are services you can pay for, but it’s actually free code you can download and run on your own computer, if you have a little bit of technical capability. People don’t know that, and they’re paying for these things.

And when it comes to entertainment, I think there are a lot of things in the pipeline. When I wrote my first novel with a publisher, I had the benefit of a wonderful editor, Hannah Wood, who was at HarperCollins at the time, and I learned so much from having somebody really smart go through and edit my stuff.

Think about when everybody gets that ability to collaborate. Whether you’re a screenwriter who gets an AI, not to write it for you, but to tell you, ‘Oh, I think there should be more punch here; this should be there,’ I think the work is going to get better. And I think the pipeline is probably gonna shrink.

The number of people needed to make something is going to get smaller. But the opportunities for people to create more stuff will increase.

Joanna: So, thinking about that, then, just to leave people with a glimmer of hope in this AI future, what kind of attitude do we need to have?

Andrew: Be curious, and understand that every time there’s been a new tool, there was fear. With the word processor, people were afraid: well, great, now everybody’s gonna write. What happened? Not really.

The biggest disruptions to publishing were eBooks. And I’m a big fan of Kindle Unlimited and that marketplace, because I think it’s phenomenal: just the ability for an indie to go put a book there and compete with everybody else on sort of the same level.

If I said I only wanted to do things the way I did them 10 years ago, I wouldn’t be where I am now.

I went through being a person who worked in entertainment, on cruise ships and such, doing big illusion shows.

Then I watched the cruise lines and resorts go through recessions and stop being able to pay to bring in big acts like that. I moved to creating books and videos for magicians, then watched online piracy completely erase that.

I just kept transitioning from point to point, getting into publishing when print publishing was in decline, and then working with a wonderful publisher that understands digital.

So, we’re creatives because we like change, usually the change that we bring about, but if we embrace the change that’s around us and see opportunity, I think we’re gonna be in an incredible place.

AI needs creative people.

Like, when we released DALL-E, we brought in a bunch of artists to work with it and just experiment. If we hadn’t brought them into it, it wasn’t gonna be as exciting or neat as it is. With GPT-3, they brought me in; I got to go work with it as a writer.

So, everybody listening: it needs you. AI is expanding, and it’s going to need creative people who look at this stuff and figure out how to use it to create new things. It’s not gonna be as exciting without you.

Joanna: Thank you so much. So, where can people find you and your books online?

Andrew: I’m published by Thomas & Mercer, which is an Amazon company, so if you go to Amazon and type in ‘Andrew Mayne,’ that’s the easiest way to see what I have. If you go to andrewmayne.com, I usually have a list of my most recent books there.

Joanna: Brilliant. Thanks so much for your time, Andrew. That was great.

Andrew: Thank you so much.

Note from Joanna: If you’d like to know more about using AI for writing, images, marketing, voice, translation, and more, check out my course, The AI-Assisted Author.
