Beyond The Fence: Can Computers Write A Hit West End Show?

West End history was made this week when Nathan Taylor and his husband, composer Benjamin Till, opened a West End show co-created by computers. Just four and a half months ago, Wingspan Productions, the company behind C4’s Prix Italia-winning 2014 hit Our Gay Wedding: The Musical, tasked Nathan, Ben and teams from some of the world’s leading universities with using the latest computer programs to conceive and create a hit West End musical. So what happened next?

We met Nathan for a look into Tomorrow’s World of Musical Theatre.

Nathan Taylor: A Bionic Eye on Musical Theatre

Hi Nathan. Firstly, where can we see the show?

The first part of the documentary is on Sky Arts on Wednesday 2nd March at 10pm. The second part is on Thursday 3rd March at 8pm, and that’s immediately followed by a broadcast of the stage show, which is being filmed live at the Arts Theatre, London.

So the show is already running?

Yes, we opened this week and run until 5th March – it’s been hectic to say the least.

How did the idea come about?

Wingspan Productions had worked with us previously on musical projects and wanted to see if we as humans could collaborate with computers to write a hit musical. They came up with this crazy idea of combining human and computational creativity, a field of research that is burgeoning at universities around the world, so it’s hot news at the moment.

Which universities have been involved?

The whole project started with the world’s largest statistical analysis of Musical Theatre, based in the Maths Department at Cambridge University. They studied 1,696 musicals. First, they mapped them onto a massive graph:

  • Vertical axis = number of awards won
  • Horizontal axis = length of the original run

If you look at the top right corner of that graph you get shows which had lots of awards and a long run – the big commercial hits. In the bottom left are the flops. But there are two other quadrants:

  • Top left = shows with a short run but lots of awards (critical success)
  • Bottom right = long running shows with very few awards (crowd-pleasers like We Will Rock You and Thriller Live!)

Interestingly, all of Sondheim’s shows are in the top left. That’s the kind of show that I would want to write, but we had to steer ourselves towards the hits – top right.
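The quadrant mapping Nathan describes can be sketched in a few lines of code. The threshold values and the example figures below are invented for illustration; the real study’s cut-offs are not given in the interview.

```python
def quadrant(awards, run_weeks, award_cut=5, run_cut=100):
    """Classify a show by the two axes described above.

    award_cut and run_cut are hypothetical thresholds, not the
    Cambridge team's actual values.
    """
    if awards >= award_cut and run_weeks >= run_cut:
        return "commercial hit"     # top right: lots of awards, long run
    if awards >= award_cut:
        return "critical success"   # top left: many awards, short run
    if run_weeks >= run_cut:
        return "crowd-pleaser"      # bottom right: long run, few awards
    return "flop"                   # bottom left

# Illustrative data points only:
print(quadrant(awards=7, run_weeks=600))   # commercial hit
print(quadrant(awards=6, run_weeks=40))    # critical success
```

Steering towards the hits, as the team had to, then amounts to optimising for the top-right quadrant.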

The team at Cambridge compared so many different aspects of those particular shows to see if they could come up with a formula for writing the perfect musical. They found correlations involving:

  • Era
  • Location
  • High energy opening number
  • A death somewhere in the show but a happy ending
  • An injection of comedy just before the interval
  • The introduction of love needed to happen in Act One…but importantly not too early

All these different things that we were told were the perfect ingredients, we had to try to blend together.
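Treating those findings as a checklist suggests a simple scoring sketch. The flag names and the draft concept below are invented for illustration; they are not the Cambridge formula itself.

```python
# Hypothetical ingredient flags distilled from the correlations above.
INGREDIENTS = [
    "high_energy_opening",
    "death_but_happy_ending",
    "comedy_before_interval",
    "love_in_act_one_not_too_early",
]

def ingredient_coverage(concept):
    """Fraction of the listed ingredients a show concept ticks off."""
    return sum(1 for key in INGREDIENTS if concept.get(key)) / len(INGREDIENTS)

# An invented draft concept, to show the scoring step:
draft = {
    "high_energy_opening": True,
    "death_but_happy_ending": True,
    "comedy_before_interval": False,
    "love_in_act_one_not_too_early": True,
}
print(f"{ingredient_coverage(draft):.0%}")   # 75%
```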

Goldsmiths: Sometimes it’s worth doing things on a whim…

A team at Goldsmiths were working on the “What if” project and the “What if” machine, a computer program focused on ideation: the computer provides a spark, which then fires creativity in the human mind. The computer generates sentences which all begin with the words “What if…”. One was really quite sweet – “What if there was a musical instrument too frightened to make a noise?” Well, there’s a movie waiting to happen!

The machine came up with 600 different “What ifs”. We then turned our attention to the findings of the team from Cambridge and, from those 600 ideas, chose the one we felt best represented the kind of show we wanted to write: “What if there was a wounded soldier who had to learn how to understand a child in order to find true love?” That came from a computer, and we began unpacking it: already, just from one sentence, you can begin to draw out the beginnings of a plot.
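At its simplest, this kind of ideation can be imagined as template filling. The sketch below uses hand-written word lists and a single fixed template, all invented for illustration; the real Goldsmiths system is far more sophisticated, drawing its slot fillers from large knowledge bases.

```python
import random

# Invented slot fillers, purely for illustration.
SUBJECTS = [
    "a musical instrument",
    "a wounded soldier",
    "a lighthouse keeper",
]
TWISTS = [
    "was too frightened to make a noise",
    "had to learn how to understand a child in order to find true love",
    "could only speak in song",
]

def what_if(rng=random):
    """Generate one 'What if...' prompt by filling a fixed template."""
    return f"What if there was {rng.choice(SUBJECTS)} that {rng.choice(TWISTS)}?"

for _ in range(3):
    print(what_if())
```

Generating 600 such sentences and hand-picking the best is exactly the human-filter step the interview describes.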

A team working on narrative generation then joined from Madrid University. For the past decade or so they have been analysing Russian folk tales for plot elements and narrative moments, to see if they can start generating their own plots. For us they turned their focus to Musical Theatre, and they came up with a way of pushing a button and getting a ten-point “story arc”. It’s quite vague and open to interpretation, but suggests things like:

  • Aspiration
  • Decision to take action
  • Call to arms
  • Romance
  • I am what I am
  • Reconciliation

It gives you a basic structure on which to pin the events of your story. We chose one of those and started weaving that into our story of the soldier, the child and the mother. But we still needed a location and to really unearth what the story was about.
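Pinning events onto such an arc is essentially a pairing exercise. In the sketch below, the beats are the ones quoted above, while the events are invented placeholders, not the show’s actual plot.

```python
# Beats quoted from the generated arc above.
BEATS = [
    "Aspiration",
    "Decision to take action",
    "Call to arms",
    "Romance",
    "I am what I am",
    "Reconciliation",
]

# Invented placeholder events, purely to show the pairing step.
EVENTS = [
    "The mother longs for a safer world for her child",
    "She decides to join the protest camp",
    "The women rally at the fence",
    "An unlikely bond forms across the fence",
    "She stands by her choices publicly",
    "Family and cause are finally reconciled",
]

outline = dict(zip(BEATS, EVENTS))
for beat, event in outline.items():
    print(f"{beat}: {event}")
```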

So how did the plot develop?

Going back to the Cambridge data, we were advised that we should set our show in the 1980s, that it should NOT be set in the USA, and that we should have a female protagonist. So we started to base the story more around the mother than the soldier. We examined conflicts around the world which fulfilled those criteria; we looked at the Falklands War, the conflict in Northern Ireland, the Cold War, and then the peace protest camps at the Greenham Common RAF base in Berkshire.

Working with Android Lloyd Webber – almost as challenging as the real thing…

We then started using another rather well-known system called Google to put all of the words from the “What if” sentence together with various conflicts like the Falklands, so we had “wounded, love, understanding, child, falklands” and didn’t really come up with anything. But when we put in Greenham Common, we came up with a photograph of a child with a soldier through the fence and thought “that’s exactly what we’re looking for”.

So then Ben put all of the words from the “What if” into Google with Greenham Common, and the second hit on the results page was a link to the Greenham Common Song Book, a book of protest songs that the women sang – currently held at the Women’s University. We didn’t put “music”, we didn’t put “songs”, and yet we’d been led to a world where there are women, children, conflict and music – we knew we’d found our location. Google is algorithmic too, so why not use it?

Then we had to match the narrative from our story generator and all of the data from the Cambridge research team with our ten-point story arc from Madrid. We also had a story emerging about the soldier and the child from the WHIM (What if Machine), as well as the historical facts and chronology of Greenham Common. Essentially we had four timelines. Then, with serendipity and just the smallest amount of juggling, suddenly everything fell into place. It was quite incredible.

There are three people in our marriage – and one of them is a computer…

And who wrote the script – you or the computer?

There’s just no computer system yet that can write dialogue which makes sense for characters, so Ben and I had to begin work on the book. We worked on a more detailed synopsis and then dialogue. We introduced more characters…but at every stage we went back to computers. So, for example, to find our protagonist’s name – Mary Moreton, who we calculated was 37 – we went back to the census for what would have been her year of birth and found the top 5 baby names for that year. We then cross-referenced those names with our Musical Theatre database to see which scored most highly. The most common baby name that year was Patricia, but Mary was No. 2 and scored higher when we cross-referenced with MT – think of Mary Poppins, Mary Magdalene, Bloody Mary; there are so many Marys in Musical Theatre. We then used a random surname generator, which came up with Moreton – close enough to Mama Morton to feel kosher.
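That name-selection step reduces to a simple cross-reference. In the sketch below, Patricia and Mary are from the interview; the remaining census names and all of the mention counts are invented for illustration.

```python
# Top baby names for the calculated birth year. Only the first two are
# from the interview; the rest are invented stand-ins.
top_census_names = ["Patricia", "Mary", "Linda", "Barbara", "Susan"]

# Hypothetical mention counts from a Musical Theatre database (invented).
mt_mentions = {"Patricia": 1, "Mary": 14, "Linda": 2, "Barbara": 1, "Susan": 3}

# Pick the census name that scores highest against the MT corpus.
best = max(top_census_names, key=lambda name: mt_mentions.get(name, 0))
print(best)   # Mary
```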

Did you and Ben both work equally on the book?

Ben and I both developed the book together. Musically, some of the songs are entirely Ben’s work, some are entirely mine, sometimes I wrote lyrics to his music, sometimes I made lyrical and musical changes and vice versa. It’s been very collaborative.

Thank You For The Music: Mamma Mia it’s tough writing hit songs with a computer!

The latter stages were undoubtedly the hardest part of the process. Working with a computational composer, Dr Nick Collins of Durham University, we found some of the music the computer generated to be unsingable, because it might leap an octave on a semiquaver, and it has no concept of form or structure. There’s a lot of repetition in popular music and it just doesn’t do that. All it gives you is a melody line with a chord per bar. We thought “okay, let’s take the best of what this program can offer and develop it as far as possible into a hit musical.” We played through about 1,000 pages of music. Every now and again we would find something – a really nice moment or phrase – and with a few small changes we would craft a melody. Then maybe one four-bar phrase might develop into a verse for a song, and another four bars from somewhere completely different could be worked into a chorus. Maybe it was a nice tune that didn’t go anywhere, so we would create the end of the melody to give it a sense of finality, a sense of journey.
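A toy generator makes the problem vivid: picking notes at random with one chord per bar produces exactly the kind of leaping, structureless line described. This sketch is entirely invented; Dr Collins’ actual system is far more sophisticated.

```python
import random

SCALE = ["C4", "D4", "E4", "F4", "G4", "A4", "B4", "C5", "D5", "E5"]
CHORDS = ["C", "F", "G", "Am"]

def random_bar(rng):
    """One bar: a single chord plus four randomly chosen crotchets.

    With no memory between notes, octave-wide leaps and aimless lines
    are common - hence the sifting for usable phrases.
    """
    return rng.choice(CHORDS), [rng.choice(SCALE) for _ in range(4)]

rng = random.Random(1)                      # fixed seed for repeatability
for bar_number in range(1, 5):
    chord, notes = random_bar(rng)
    print(f"Bar {bar_number} [{chord}]: {' '.join(notes)}")
```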

What about the lyrics for the show?

Back to the boys at Cambridge! For fun, they put together a piece of software which generates poetry and lyrics. They didn’t expect it to be very useful, but actually it was – at least to an extent. It has learned everything it knows about the English language by scraping musical theatre lyrics from Wikipedia. It sees what it can learn about them (there isn’t a human being to teach the computer what to learn). It’s finding out for itself. It looks at the frequency of certain letters, lengths of words, how words interact: it looks at the probability that if a certain word occurs a certain other word will follow. We were shown it at every stage – at first when it had only learned 200 songs, it was just throwing up random letters on the page…but then after 500 songs the letters started to fall into groups – they looked like words but they were unpronounceable. And then after about 1,000 songs more of the words started to look like English words but the sentences were still gibberish.

Composer Benjamin Till: Making Sense of the chaos

Eventually they had fed it 7,000 song lyrics and from that it developed a pretty good understanding. It doesn’t know what it’s saying but it has a knowledge of what words are most likely to follow which others. So then the program is about ready to generate lyrics for you. You can ask it for 1,000 or 5,000 characters. We asked it for a million – I have a file on my computer which seems to go on forever, and like the music, most of it is gibberish. But like the music, we had to find the parts which could actually work. Every now and then you will find a phrase which is absolutely beautiful and you take that sentence and think “I can create a verse around this”, maybe it has a really nice sentiment. But we did have to write a lot of the lyrics because it doesn’t have a sense of directionality and context. It doesn’t know what it’s saying.
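What Nathan describes – learning which word is most likely to follow which, then generating from those probabilities – is essentially a Markov chain over words. A minimal sketch, with three invented lines standing in for the 7,000 scraped lyrics:

```python
import random
from collections import defaultdict

def train(lines):
    """Record, for each word, the words that followed it (duplicates encode frequency)."""
    model = defaultdict(list)
    for line in lines:
        words = line.lower().split()
        for current, following in zip(words, words[1:]):
            model[current].append(following)
    return model

def generate(model, start, max_words, rng=random):
    """Walk the chain from a start word, sampling each next word."""
    out = [start]
    while len(out) < max_words:
        followers = model.get(out[-1])
        if not followers:
            break                      # dead end: no known follower
        out.append(rng.choice(followers))
    return " ".join(out)

# Invented stand-ins for the scraped lyric corpus:
lyrics = [
    "i dreamed a dream of days gone by",
    "i dreamed of love and days of song",
    "love will find a way to sing",
]
model = train(lyrics)
print(generate(model, "i", 8))
```

Like the real system, this has no idea what it is saying; it only knows which word tends to follow which, which is why most of its output is still gibberish.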

So you can’t tell it “write a song about flowers”?

No, nothing is bespoke – neither music nor lyrics. You cannot tell the music system to write a sad song or the lyrics to be funny. It’s like beachcombing, or sifting through sand. We knew what we needed because of the markers in the script – what each song needed to achieve and to convey – and then we searched and searched for words and melodies which would fit. Then of course you have to try to marry those words to that music – and of course they were generated by totally disparate systems that had never worked together before. It just wouldn’t have been possible to do this without human intervention. But it’s definitely true that without the computers we wouldn’t have the show that we do. It really has been a collaboration between humans and technology. It opens the door to lots of philosophical questions: What is humanity? What is creativity? Where does the emotion come in – is it in the eye of the beholder? Does the audience invest emotion into something that comes from something with no emotion at all, or has that come from us as writers? Have we been putting the humanity into it? Or is it enough that the computers can randomly hit on that by themselves?

Rehearsals and performance remain very human processes.

Surely it’s still early days for much of this technology?

Yes, but people have been working on some of this for two decades and we don’t know where it’s going to go, or how far these things can progress. For example, could we ever get to a point where we push a button and out comes West Side Story or My Fair Lady? It’s just not the case – yet. Interestingly, for most of the scientists we’ve been working with, that’s not their aim. They are simply trying to synthesise human creativity in an effort to better understand it. If you’re trying to get a computer to do something, you have to break down and analyse exactly what it is you’re trying to get it to do – the researchers have to know exactly what the computer needs to do and how to make it do it…computers are very literal; they don’t work on emotion or intuition like we do. Many of the things the Cambridge team told us about the formula were common sense to people who love musical theatre. You would look at all of those points and say “Well of course, that makes perfect sense” – but these were people who had never seen musicals before; they came to their conclusions through statistical analysis of existing bodies of work. There was much that corroborated things I’ve always known in my heart but never known why.

Now that we have synthesised voices, do you think we might get to the point where we have a musical performed by computers?

There are already systems out there…Siri can already alter the pitch of a voice to give it natural-sounding inflections, and the same can be done with the pitch of music. There’s a system which, if you feed it a score for a barbershop quartet, will sing it back to you with synthesised voices. We didn’t go down that route – we have an incredibly human show, and although computers were used to write it, once you put it into the mouths of human actors the rehearsal process was exactly like that of any other show. Actors, directors, lighting – it all needs to work in exactly the same way as normal. We’ve got six musicians in the pit playing live and a cast of thirteen actors. It’s a real show with heart and humanity and I couldn’t be more proud of it. There are still some technical kinks to iron out and some aspects of our storytelling to adjust, but again that’s part of the same process for any show.

Sky Arts will be broadcasting the recording of Beyond The Fence, performed live at the Arts Theatre

Most musicals take several years to get to the point of a West End production – we’ve had four and a half months from the first day of the writing process to the first day of rehearsals.

And what’s next?

Right now, it’s Ben’s musical Brass, which he wrote for NYMT. It’s being performed again this summer at the Hackney Empire. It’s a WW1 show, and this summer marks the centenary of the Battle of the Somme.

And for yourself?

Terry Johnson came to see the show last night and asked me whether, when I write another musical, I’ll be working with computers. And I can say absolutely…not. It’s been incredibly difficult, incredibly frustrating. However, I know that anything I write in the future will be different because of what I’ve learnt from this project.

Beyond The Fence is the subject of a Sky Arts documentary, Computer Says Show. The first part is repeated on Sky Arts on Wednesday 2nd March at 10pm; the second part is on Thursday 3rd March at 8pm, followed by a broadcast of the stage show.

See BEYOND THE FENCE – booking until 5th March at the Arts Theatre.

BOX OFFICE 020 7836 8463 or on-line at


Follow @thenathantaylor and @thebenjamintill on Twitter for updates on their future projects

And for a reminder of just how great their wedding was…