Talking to Lady Lovelace about creativity

Luiz Botega
10 min read · Aug 28, 2019

“The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform.” — Lady Lovelace [Note G, as reprinted in B. V. Bowden, Faster than Thought, 1953, page 398]

Watercolour portrait of Ada King, Countess of Lovelace, circa 1840, possibly by Alfred Edward Chalon

Augusta Ada King, Countess of Lovelace, was an English mathematician of the early 19th century, widely regarded as the first computer programmer. Her most famous algorithm was developed for Charles Babbage’s Analytical Engine (a prototype of a computer) to compute Bernoulli numbers, though the machine was never built while she was alive. Alongside this first published algorithm, in Note G of her annotated translation, Lady Lovelace wrote the quotation above, implying, at first glance, that machines such as the Analytical Engine would simply be unable to generate anything original, but could only follow a script.

It is important, before anything else, to note that Lady Lovelace referred explicitly to the Analytical Engine, not to all computers. But, for the sake of discussion, let’s assume she did not believe machines could ever be creative or develop any form of consciousness, as many today believe. When a machine performs or even composes a piece of music, it can only do so through the implementation of its programmer or engineer, so it cannot create by itself. At first glance we are tempted to agree with this assumption. A computer follows (linearly or not) a script, so it can only perform what it is taught to do (let’s set aside self-programming algorithms). But does that limit machines to the reproduction of human intelligence? Is it possible for a machine to express, even partially, a real and strong intelligence? Is creativity limited to the human brain?

Divide and conquer

To think about these questions (and propose new ones), we should first assume Lady Lovelace’s remark was denying any sort of intersection between computers and creativity. This is the basis for a series of Lovelace-questions proposed by Margaret Boden in her text Creativity and Computers, the prologue to the book “Artificial Intelligence and Creativity: An Interdisciplinary Approach” edited by Terry Dartnall [1]. Boden derives these four topics (page 4 of the book):

  • First is whether computational concepts can help us understand how human creativity is possible.
  • Second is whether computers (now or in the future) could ever do things which at least appear to be creative.
  • Third is whether a computer could ever appear to recognize creativity, in poems written by human poets, for instance, or in its own novel ideas about science or mathematics.
  • Fourth is whether computers themselves could ever really be creative (as opposed to merely producing apparently creative performance whose originality is wholly due to the human programmer).

Though Boden answered these questions brilliantly in her work, my spin here is to use them not to reach conclusions, but to look at what is going on today and ask new, derived questions that may help us uncover new things. Let’s take them one at a time.

Computers and our own creativity

“Can computational concepts help us understand how human creativity is possible?”

Understanding the human brain and creativity is no simple task. Some say it is impossible. Research on these themes permeates psychology, philosophy, psychiatry, and, why not, computer science. While on one side we can dissect the brain, measure its impulses, and try to understand its cognition, with computer science we can attempt to model and mimic our own cortex. These models and algorithms can reveal aspects of our mind that were previously unknown and, if properly validated, be used to propose new models of our cognition.

What we have are models like neural networks that try to emulate our neurons. With adequate algorithms, researchers are uncovering how synapses activate without depending on human testing. Our brains are still too complex and memory-intensive for a computer to mimic fully, but effort has gone into simulating the brain, reaching up to 10% of the human cortex at the resolution of individual neurons and synapses [2]. Where the whole brain is still too complex, Artificial Intelligence (AI) approaches can, for instance, help us predict and prevent suicidal tendencies in youths: a Gaussian Naïve Bayes model reached 94% accuracy on this task [3].
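To make that model family concrete, here is a minimal Gaussian Naive Bayes classifier in pure Python. It only sketches the technique used in [3]; the two features and the toy data below are invented for illustration, not the study’s actual neural measurements.

```python
import math

def fit(X, y):
    """Estimate a prior plus per-class mean and variance for each feature."""
    model = {}
    for label in set(y):
        rows = [x for x, yl in zip(X, y) if yl == label]
        n = len(rows)
        means = [sum(col) / n for col in zip(*rows)]
        varis = [sum((v - m) ** 2 for v in col) / n + 1e-9  # smoothed variance
                 for col, m in zip(zip(*rows), means)]
        model[label] = (n / len(y), means, varis)
    return model

def predict(model, x):
    """Pick the class with the highest Gaussian log-posterior."""
    best, best_score = None, -math.inf
    for label, (prior, means, varis) in model.items():
        score = math.log(prior)
        for v, m, s2 in zip(x, means, varis):
            score += -0.5 * math.log(2 * math.pi * s2) - (v - m) ** 2 / (2 * s2)
        if score > best_score:
            best, best_score = label, score
    return best

# Two invented features per participant (e.g. activation scores);
# label 1 = at-risk group, label 0 = control, purely illustrative.
X = [[0.9, 0.2], [1.0, 0.1], [0.8, 0.3], [0.1, 0.9], [0.2, 1.0], [0.0, 0.8]]
y = [1, 1, 1, 0, 0, 0]
model = fit(X, y)
print(predict(model, [0.85, 0.25]))  # prints 1 (closest to the class-1 examples)
```

The “naive” part is the independence assumption between features, which keeps the model tractable even with few training examples, as in small clinical cohorts.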

These and many others are formidable examples of how computers, AI, machine and deep learning are revolutionizing the way we research our brains. Such works have an obvious impact on unveiling our own creativity, whether through the development of conceptual models of the human brain or through the creation of algorithmic brains to evaluate and experiment with. AI can help us deal with high volumes of information, transform and make sense of data, find patterns, and allow us to make better and more informed decisions.

The problem here is that these models can behave unexpectedly, and we are unable to tell whether the fault lies with the model or with our limited knowledge of the brain. We grow suspicious because we can’t understand [4]. What we do know is that many AI algorithms draw conclusions better than humans do, with a precision that we don’t understand and can’t mimic. So how can we interpret the results of an AI model of our brain if we don’t know our brain that well? How can we be sure our models are coherent if our validation may be limited and biased? Is AI already above us in interpretation, pattern finding, and decision making? My answer to this last question is a sound ‘yes, it is’.

Computers on the surface of creativity

“Can computers (now or in the future) ever do things which at least appear to be creative?”

So computers can help us understand our creativity. But can they appear to be creative? To answer that, we first have to understand what creativity is. I can start by saying that there is no consensus, since this single term is still intensely studied in psychology, philosophy, psychiatry, engineering and design sciences, social sciences, and many others. I particularly like to frame creativity as the generation of new and useful ideas [5], with emphasis here on the useful part. This utility can also vary, from a socio-economic perspective to artistic and contemplative uses.

Harold Cohen. 031135, 2003. Plotter print, pigment on paper, computer generated image, 50 x 85 cm.

To try and answer this Lovelace-question, let’s see how AARON, an algorithm developed by Harold Cohen, is able to create original artistic images. When first developed, in the 1970s, the program started with abstract pieces, but evolved over the decades as more images with new objects and concepts were added to its knowledge base. One result is the image above, drawn in 2003 by the algorithm. In fact, even some art critics (let alone us laypeople) cannot distinguish this work from a human art piece.

Harold Cohen himself states that he does “not believe that AARON constitutes an existence proof of the power of machines to think, or to be creative” but that it “constitutes an existence proof of the power of machines to do some of the things we had assumed required thought, and which we still suppose would require thought — and creativity, and self-awareness — of a human being.” [6, page 13]. But he closes with this remark:

“If what AARON is making is not art, what is it exactly, and in what ways, other than its origin, does it differ from the ‘real thing?’ If it is not thinking, what exactly is it doing?”

Other efforts have also produced algorithms able to create new or derived art pieces, such as La Comtesse de Belamy, painted by Obvious, and the well-known Google Deep Dream generator. So computers can produce art. Produce it by themselves, as many artists do: based on repertoire, on previous artworks. And these apparent creations are not restricted to the arts; they extend to engineering, design, mathematics, and other applied sciences. So how does this differ from the human process? Is the “genius” spirit lacking? Can machines have new and useful ideas? Are machines already creative?

Computers for recognizing creativity

“Can a computer ever appear to recognize creativity, in poems written by human poets, for instance, or in its own novel ideas about science or mathematics?”

Computers can create artworks that we simply can’t distinguish from human pieces. That’s fine, but it still doesn’t constitute creativity per se. Now, can computers recognize whether human works, or even their own, are creative? To answer this we first need to know whether creativity can be measured at all, or whether it is a purely conceptual and heuristic human ability.

In fact, there are several measures of creativity, some more objective than others. We can go from simply counting the number of ideas someone has, to evaluating how different those ideas are, what depth they have, and how useful they are to the original intention, among many others. There are also established methods for measuring creativity, such as the Alternative Uses Task (AUT), the Remote Associates Test (RAT), and the Torrance Tests of Creative Thinking (TTCT).

Overall, these tests break creativity into scales, such as (in the original TTCT) fluency, flexibility, originality, and elaboration. Since these methods are complex to administer, a contemporary algorithm would hardly be able to apply them in full. But if we analyze the scales on their own, given sufficient information a computer can easily compute fluency (number of ideas), flexibility (differentiation among the ideas generated), originality (how infrequent the ideas are), and elaboration (depth of detail presented with each idea). There we have it: creativity measured automatically. The methods exist [7]; it’s just that no one has implemented them yet.
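As a rough illustration of how a computer could score those four scales, here is a toy sketch in Python. The scoring rules, the idea corpus, and the 5% rarity threshold are all invented for the example; real TTCT scoring is done by trained raters and is far more involved.

```python
from collections import Counter

def score_ideas(ideas, corpus_counts):
    """ideas: list of (category, text) pairs from one person.
    corpus_counts: how often each idea appears in a reference corpus
    (the rarer an idea, the more original we call it)."""
    fluency = len(ideas)                          # number of ideas
    flexibility = len({cat for cat, _ in ideas})  # distinct categories used
    # Originality: count ideas that are rare (< 5% of the corpus).
    total = sum(corpus_counts.values()) or 1
    originality = sum(1 for _, text in ideas
                      if corpus_counts.get(text, 0) / total < 0.05)
    # Elaboration: average word count as a crude proxy for detail.
    elaboration = sum(len(t.split()) for _, t in ideas) / max(fluency, 1)
    return fluency, flexibility, originality, elaboration

# Invented "uses for a brick" corpus and one respondent's answers.
corpus = Counter({"paperweight": 40, "doorstop": 30, "build a wall": 25,
                  "grind into pigment": 1})
ideas = [("weight", "paperweight"), ("weight", "doorstop"),
         ("art", "grind into pigment")]
print(score_ideas(ideas, corpus))  # fluency=3, flexibility=2, originality=1, elaboration≈1.67
```

Each scale here is a one-line aggregate, which is exactly why the individual scales are computationally easy even when the full test protocol is not.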

We assume something to be creative because our minds hold no precedent for a similar idea, or because the aesthetics of the idea presented to us is somehow new to us. This can also be faked, such as applying a new design to old products and calling them creative or innovative. If a learning algorithm is fed enough data on what is and what is not creative, can it perceive that something is new and useful? Is it a problem of data volume, or is there some inherent characteristic that lets us recognize creativity, one that computers simply can’t have?

The problem with recognizing something as creative is that it is very particular. Something creative to me can be obvious to you. In general terms we tend to agree, but there are several grey zones. We can’t expect computers to recognize creativity when there is no consensus in the field, and we may never reach one. But the question remains: how would an AI algorithm perform if subjected to traditional creativity measures?

Computers with a creativity of their own

“Can computers themselves ever really be creative?”

Lady Lovelace points out that computers can only do what they are told to do. She is right, since they (still) need to be programmed by humans, and this imposes heavy constraints. But is creativity possible without constraints? If we think of creation as the generation of new and useful ideas [5], constraints are key! There is no way to define utility without thinking about the context and its constraints. Aren’t we also products of constraints? Aren’t we also “programmed” with certain socio-historical codes that restrain our creative abilities?

Another point on computer creativity: if machines can only learn and extrapolate from given data, we can deduce a lack of originality in, for instance, AARON’s “creative” artistry. The system is constrained by the aesthetics fed into it, since that is where it draws inspiration from. But what if we feed an algorithm large amounts of data (something not that difficult nowadays)? Would it create new styles? We also create based on previous experiences, so what is so different between us and computers, if not the sheer volume of information?

Boden, in her essay on Artificial Intelligence and Creativity [1], sheds biological and moral light on the apparent problem attributed to creative machines. On the biological side, some say only our brains can produce intelligence, while metal and silicon could never spark creativity. But what do we know about our brains? We are barely scratching the surface of our own consciousness and of how our neuroproteins work! How can we rule out every other chemical configuration as a substrate for intelligence if our own intelligence is still up for debate? Maybe some people say that because they are too narrow-minded to conceive of other chemical compositions supporting creation. But that is their limitation, not nature’s.

With the speed of contemporary technological development, it is not absurd to imagine fiction-like robots having a sort of consciousness soon. But that would raise several ethical problems for humanity. If there is consciousness, what separates them from us? Wouldn’t they be entitled to the same rights as we are? As robots get closer and closer to human brain capabilities (Blade Runner is on the verge), where is the line that separates human and machine? That moral conundrum can easily slide into hypocritical answers such as “they are not natural”.

The frontiers between human and machine are blurring, and with that our need to set ourselves apart as a natural species intensifies. Most of the denial behind “machines can’t be intelligent or create” comes from this need to separate us (natural humans) from them (artificial machines). But we can ask ourselves how nature will evolve in the future. Fish turned into amphibians, then reptiles, then birds and mammals. Maybe the next step in the evolutionary chain is something less Darwinian, and a little more silicon-based.

[1] Dartnall, T. ed. Artificial intelligence and creativity: An interdisciplinary approach (Vol. 17). Springer Science & Business Media, 2013.

[2] Jordan, J., et al. Extremely Scalable Spiking Neuronal Network Simulation Code: From laptops to exascale computers. Frontiers in Neuroinformatics, 2018.

[3] Just, M. A., et al. Machine learning of neural representations of suicide and emotion concepts identifies suicidal youth. Nature Human Behaviour, 2017.

[4] Maoz, U. and Listead, E. Casting Light on the Dark Side of Brain Imaging: Brain imaging and artificial intelligence (Chapter 17). Academic Press, 2019.

[5] Amabile, T. M. Motivating Creativity in Organizations: On doing what you love and loving what you do. California Management Review (Vol. 40), 1997.

[6] Harold Cohen. The Further Exploits of Aaron, Painter. Stanford Humanities Review, 1994.

[7] Ayas, M. B. and Sak, U. Objective measure of scientific creativity: Psychometric validity of the Creative Scientific Ability Test. Thinking Skills and Creativity (Vol. 13), 2014.


Luiz Botega

I work as an interdisciplinary Service and Strategic designer specialised in digital innovation, data-driven business and processes