Pickles, #HENThousand and Curating NFTs by Aleksandra Art

A day on crypto Twitter feels like a week. Hence, I figured there's enough material for a blog series that captures those days (weeks?) and perhaps offers some insights to those who missed the conversation. It's also an excuse to rejuvenate my blog - a place where I don't need to stick to 140 characters and on-topic banter. So here we go!

The day is Friday, and its beginning is nicely captured by @Loopify


Now before you run off to buy a year's supply of the fermented vegetable, let me clarify what Loopify is referring to. "My Fucking Pickle", pardon my French, is the name of a collection of 10,000 unique NFTs that will each resemble a pickle with its own traits. It started as a joke but quickly escalated into an actual release. I say "will" because owners won't actually know what type of pickle they're getting (what traits it will have) until a specific date next week.

The project follows the example of other so-called generative collectible series like Cryptopunks, Bored Apes or Hashmasks, except that it was brought to life within a week, whereas some of the others took months of planning. It will be interesting to see how this low-effort spin-off that doesn't make any promises will perform on the market. The Fucking Pickles have already tripled in price overnight.
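For those wondering how such "generative collectibles" are typically put together, here is a hedged, purely illustrative sketch: each token ID gets a unique combination of traits drawn from a trait space, and the mapping from token ID to traits is usually only published on the reveal date. The trait names, categories and helper code below are invented for illustration; they are not the actual Fucking Pickles traits or mechanics.

```python
import itertools
import random

# Hypothetical trait categories and options (invented for illustration).
TRAITS = {
    "skin":       ["classic green", "neon", "golden", "zombie", "glass"],
    "accessory":  ["none", "sunglasses", "crown", "monocle", "earring"],
    "background": ["jar", "deli counter", "space", "beach", "void"],
    "expression": ["smug", "sleepy", "angry", "happy", "confused"],
    "hat":        ["none", "cap", "top hat", "beanie", "halo"],
    "extra":      ["none", "bite mark", "sticker", "glow", "sparkle"],
}

def generate_collection(size: int = 10_000, seed: int = 42):
    """Assign a unique trait combination to each token ID.

    In a real project, the mapping from token ID to traits would typically
    only be published on the reveal date, which is why buyers don't know
    what they've got until then.
    """
    all_combos = list(itertools.product(*TRAITS.values()))  # 5**6 = 15,625 combinations here
    if size > len(all_combos):
        raise ValueError("trait space too small for the requested collection size")
    rng = random.Random(seed)
    chosen = rng.sample(all_combos, size)  # unique combinations by construction
    return {token_id: dict(zip(TRAITS, combo))
            for token_id, combo in enumerate(chosen, start=1)}

collection = generate_collection()
print(collection[1])  # e.g. {'skin': 'neon', 'accessory': 'crown', ...}
```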

Lisa Odette's Lady_008_Totem
Next up, it was hard to miss some quality art exchange under the #HENThousand tag, sparked by a ten-thousand-edition work released for 1 tez by John Karel. The initiative provided an opportunity to snap up artworks by one's favourite artists for a fraction of the price, with the remaining copies being burned by the artists within 24 hours. Artworks minted include Von Doyle's morphed painting Andromeda, DALEK's 10000 spacemonkeys, Lisa Odette's Lady_008_Totem, and Marcus’ ‘Simpler Times’, to name a few.

Von Doyle's Andromeda

And finally, we wrap up our Friday with "Curation in the NFT space", a weekly Twitter live talk with @VerticalCrypto, @colbornbell, @sambrukhman, @flakoubay and today’s special guest @martjpg. We discussed NFT collection building, the latest creative approaches to curating crypto art, the opening of the Crypto and Digital Art Fair (CADAF), experimental approaches to selecting and buying artworks, marketplaces, generative collectibles, and more! You can tune in next Friday to hear more ;)

Oh, and follow me on Twitter: @aljaparis

Cover image by Marcus titled ‘Simpler Times’

Toonify yourself - your chance to become a Disney character by Aleksandra Art

Overnight there was quite a storm of cartoon characters appearing across Twitter. A new Ncage-style Chrome extension? Nope, it was a website called Toonify Yourself, rolled out by developers Justin Pinkney and Doron Adler, that uses deep learning to toonify images of faces. As they explain, the solution came about after Doron fine-tuned a face model on a dataset of various characters from animated films. The model picked up the key elements of what a cartoon character looks like from a couple of hundred images. The duo then built a hybrid model that keeps the structure of a cartoon face while retaining a photorealistic rendering. Justin and Doron then launched the website https://toonify.justinpinkney.com, through which anyone could upload a picture of themselves to be turned into a Disney-style character!
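For the technically curious, the rough idea behind such a hybrid (as I understand it from their write-up) is to blend two generators that share an architecture: the coarse, low-resolution layers come from the cartoon-fine-tuned model and set the face structure, while the fine, high-resolution layers come from the original photo model and keep the realistic texture. The sketch below is my own simplified illustration of that layer-swapping idea, not their code; the layer-naming convention and the resolution cutoff are assumptions.

```python
# A simplified sketch of "layer swapping" between two generators that share an
# architecture: a photo model (e.g. trained on real faces) and the same model
# fine-tuned on cartoon faces. The layer-naming convention ("...b8...", "...b256...")
# and the resolution cutoff are assumptions for illustration, not Toonify's code.
import copy
import re

import torch

def parse_resolution(layer_name: str):
    """Extract a layer's resolution from names like 'synthesis.b8.conv0.weight'
    (an assumed naming convention)."""
    match = re.search(r"\.b(\d+)\.", layer_name)
    return int(match.group(1)) if match else None

def blend_models(photo_model: torch.nn.Module,
                 cartoon_model: torch.nn.Module,
                 swap_below_resolution: int = 32) -> torch.nn.Module:
    """Take coarse (low-resolution) layers from the cartoon model, which set the
    overall face structure, and keep fine (high-resolution) layers from the photo
    model, which preserve the photorealistic texture."""
    blended = copy.deepcopy(photo_model)
    blended_state = blended.state_dict()

    for name, tensor in cartoon_model.state_dict().items():
        res = parse_resolution(name)
        if res is not None and res <= swap_below_resolution:
            blended_state[name] = tensor  # copy the coarse layer from the cartoon model

    blended.load_state_dict(blended_state)
    return blended
```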

Cartoon images as inspiration via https://justinpinkney.com/toonify-yourself/

Unfortunately, just as it went viral, the website had to be taken offline due to… costs :(
Justin Pinkney commented on his Twitter account: “Sorry everyone, this is going to have to go offline for now. It got popular and running neural networks in the cloud for thousands of people costs real money it turns out!” Turns out keeping a fairy tale alive isn’t so easy!

So let’s hope they find a way to bring it back because, let’s be honest, who doesn’t love playing around with new technologies?

You can learn more about the process here.

StyleGAN2 blending of humans with cartoons by Doron Adler, using @Buntworthy’s Google Colab notebook to create the blended model

5 London Art Exhibitions for Fall/Winter 2019 - 2020 (and yes, they involve digital art!) by Aleksandra Art

Olafur Eliasson Tate

Feeling the winter blues approaching?

Let this list of art events guide you

Recently several of my friends approached me asking for art show recommendations. While sending over links and suggestions, I realised that many of these are not so easy to find if you’re not in the industry or are coming from outside London. Whether it is to escape the winter blues, find inspiration, hide somewhere warm or maybe refresh your Instagram feed, London is currently hosting an excellent selection of exhibitions. I decided to compile the list below so a wider audience can take advantage of it. Without further ado, here are some of the highlights you may want to explore.

*Warning: bias is present. My selection is likely skewed towards new media art.

1. Olafur Eliasson at Tate Modern

Tate Modern brings together a comprehensive selection of work by the famous Danish-Icelandic artist Olafur Eliasson. The top show on every Instagrammer’s list.

Dates: Runs until 5 January 2020.

Pro Tip: (1) Bring an old T-shirt you no longer wear into Tate Modern for recycling, and you will get a 20% discount on an Olafur Eliasson exhibition T-shirt. (2) Tate Modern is also currently hosting a retrospective of Nam June Paik, the Korean artist and pioneer of video art. It’s on my list to see, but if you have any space left to indulge in more creativity after Eliasson’s show, you can try visiting both exhibitions while at the museum.

More Info

2. Somerset House: 24/7

Somerset House

Somerset House brings together a group show by some of today’s most renowned artists working across a variety of media. Immersive, engaging and thought-provoking, the artworks serve as a reminder to take a break from our non-stop digital life. Personally, I fell in love with Pierre Huyghe’s video work, accompanied by house music. After the exhibition I definitely had a very good sleep.

Dates: Runs until 23 Feb 2020.

Pro Tip: (1) There is a special installation that you need to sign up for if you’re interested in participating. The piece, by Iain Forsyth & Jane Pollard, is called ‘Somnoproxy’. You can sign up on arrival by speaking to a member of the Visitor Experience team. (2) The show is a 2-minute walk away from The Store X (No. 4 on the list below), in case you want to capture both exhibitions at once.

More Info

3. Antony Gormley at the RA

Antony Gormley RA

Antony Gormley is one of the leading British sculptors and a name you should know if you live in London. What I love most is how he manages to redevelop his practice over time while keeping a consistent style in his work. The RA brings together highlights of his practice and provides an interactive maze for visitors to explore.

Dates: Runs until 3 December 2019. (Hurry!)

Pro Tip: (1) The tunnel is actually a human body sculpture, which you walk through! (2) Don’t be scared of gravity, try taking a picture like the one below!

More Info

Girl in museum

4. Other Spaces by The Store X The Vinyl Factory

Vinyl Factory Digital Art Show

If you want a truly immersive experience, try visiting the free show at The Store X, 180 The Strand. The experience is presented in collaboration with the Fondation Cartier pour l’art contemporain, Paris.

Dates: Runs until 8 December 2019.

Pro Tip: The show is a 2-minute walk away from the 24/7 show (No. 2 above) in case you want to capture both at once.

More Info

5. Anselm Kiefer at White Cube Bermondsey

Anselm Kiefer White Cube

White Cube gallery presents Superstrings, Runes, The Norns, Gordian Knot, a solo show by Anselm Kiefer, one of the most famous living German artists. The exhibition showcases a selection of his new work. The interior of the entrance hall was also redesigned specifically for the event.

Although White Cube is a private commercial gallery, its space can easily compete with some of the public museums. If you haven’t been, I strongly recommend discovering it. The neighbourhood is also excellent for dinner or a glass of wine. The show takes place at their Bermondsey location, a 10-minute walk from London Bridge/The Shard. The show is free, just check the opening hours.

Dates: Runs until 26 January 2020.

Pro Tip: Grab a glass of wine after the show at B Street Deli or check out the new Vinegar Yard on the way.

More Info

Anselm Kiefer new work

Enjoy the shows and let me know if you have any questions or tips!

AI Artists, What Are You Selling: An Image, A Neural Network Or A Story? by Aleksandra Art

Mario Klingemann’s illustration, inspired by the Dung Beetle Learning series

The last couple of years have marked a turning point for AI art. Major auction houses such as Sotheby's and Christie's introduced pieces made using machine learning. Creative AI platforms such as Playform.io allowed anyone remotely familiar with technology to upload datasets and generate images. Artists coming from traditional media began outsourcing their artwork production to those familiar with the tools, to keep up with the rising demand of our digital culture. In this article, together with prominent digital artists and experts, we explore art market perceptions of AI art. To properly understand where these initiatives are heading, I ask a question: what is it that creators of AI art are selling? This inquiry sheds light on some of the shortcomings the market currently faces in understanding the subject. It also calls for a point of view that considers the broader context of tech culture.

When it comes to art, there are currently two groups of practitioners exploring AI and its contributions to the creative industry. The first is Computational Creativity, the field that concerns itself with theoretical and practical issues in the study of creativity. Primarily, this group explores whether computers can be creative on their own and how this could be achieved. The second is the Creative AI movement, whose focus lies more in the widespread application of AI tools to produce cultural goods. Examples of Creative AI include generative art, AI-written symphonies and even poems. One particularly fun example is the science fiction film 'Sunspring', in which actors were hired to act out a script written by an AI bot (clip below).

In the wake of Google's AI Go victory, filmmaker Oscar Sharp turned to his technologist collaborator Ross Goodwin to build a machine that could write screenplays. They created "Jetson" and fueled him with hundreds of sci-fi TV and movie scripts.

For the movie, music and literature industries, new technology is nothing new, and the effect of AI content production on those fields is particularly interesting to consider. In this article, however, the focus is on an industry where the processes around technology are not as clearly defined – the art market.

The machine learning system currently most commonly used among AI artists is the generative adversarial network (GAN). Ian Goodfellow and his colleagues developed GANs in 2014; he later worked as a research scientist at Google. A simple explanation by Google describes GANs as generative models that create new data resembling your training data.

For example, GANs trained on human portraits can create images that look like photographs of human faces, even though the people depicted do not exist. A good example of a GAN in practice is the work of Mike Tyka, an AI artist and technologist at Google. Mike’s project ‘Portraits of Imaginary People’ (below), featured at Ars Electronica Festival ’17, explored the latent space of human faces by training an artificial neural network to imagine and generate portraits of non-existent people. To do so, he fed the GAN thousands of photos of faces he collected from Flickr.
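To make the "two competing networks" idea a bit more concrete, here is a minimal, hedged sketch of a standard GAN training step in PyTorch. It is purely illustrative: the tiny fully connected networks, the dimensions and the hyperparameters are my own assumptions and have nothing to do with the systems used in the artworks discussed here.

```python
# A minimal GAN training step in PyTorch: a generator learns to produce data
# that a discriminator cannot tell apart from the training data.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # e.g. flattened 28x28 grayscale images

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch: torch.Tensor):
    """One adversarial update: the discriminator learns to separate real from
    generated samples, then the generator learns to fool the discriminator."""
    batch_size = real_batch.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # 1) Discriminator step.
    noise = torch.randn(batch_size, latent_dim)
    fake_batch = generator(noise).detach()
    d_loss = bce(discriminator(real_batch), real_labels) + \
             bce(discriminator(fake_batch), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Generator step.
    noise = torch.randn(batch_size, latent_dim)
    g_loss = bce(discriminator(generator(noise)), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Example usage with random stand-in data scaled to [-1, 1]:
losses = train_step(torch.rand(16, data_dim) * 2 - 1)
```

Trained on thousands of real portraits, as in Mike Tyka's project, the same adversarial loop is what eventually produces convincing faces of people who do not exist.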

In October 2018, Ahmed Elgammal, Professor of Computer Vision and an AI artist, published an article titled 'With AI Art, Process Is More Important Than the Product'. Dr Elgammal argued that AI art is conceptual art, an art form that began in the 1960s, in which the idea represented is considered more important than the finished object. "It's about the creative process – one that involves an artist and a machine collaborating to explore new visual forms in revolutionary ways," he wrote.

The notion of the artist and the machine 'collaborating' humanizes the latter. The idea of humanizing technology is nothing new. We give names to natural language processing devices such as Siri and Alexa. We create robots that look like humans. However, these products address the market from a consumer standpoint. By humanizing the machine an artist works with, we take credit away from the artist's work.

Mimesis, "imitation" in Greek, refers to nature and human behaviour mimicked in the arts. Art imitates life, so to say. In almost all areas of our professional experience, we use technology to aid us in our work. However, we do not give salary to our machines. Neither we credit them in our reports. Similarly, how can we consider giving credit to the machine for an artwork?

GANs provide a new way for artists to experiment, but they also cause a stir of thought. "What is Art?" has been a subject of discussion for centuries. With the rise of AI, we have a new question: "Who is the Artist?". A group of professionals in the field of new media art share the view that AI is simply a tool for creating artwork, like a paintbrush.

2018 Lumen Prize Gold Award winner: Mario Klingemann’s piece ‘The Butcher’s Son’. A neural network’s interpretation of the human form.

Mario Klingemann, a well-known artist and winner of the Lumen Prize, the award for art and technology, compares AI to the piano. "If you hear somebody playing the piano, would you ever ask if the piano is the artist? No. So same thing here: just because it's a complicated mechanism, it doesn't change the roads" he explains in an interview with Sotheby’s. Carla Rapoport, who has run the Lumen Prize for eight years now, agrees. "Cavemen used sticks and coloured mud - today's visual artists use algorithms, among other tools. A number of shortlisted artists this year, for example, incorporated AI tools into a wider work, either moving image or sculptural.

The work by Jake Elwes, CUSP, shortlisted this year, fits into this category. He used an AI tool to create his birds and then 'set' them into a filmed landscape" she shares. Both Jake Elwes and Mario Klingemann use AI as an element of their work, a tool. They do so either by creating installations that stream the generated images or by integrating GAN images as an element of a video piece.

However, the tool itself is not the creator, nor is it a work of art. What artists choose to create using that tool holds more substantial value. But does this apply to any digital device? With companies such as Acute Art and Khora Contemporary allowing any artist to become a VR artist, has technical knowledge become irrelevant?

It is necessary to approach the digital field within the context of digital culture. When famous artists of past centuries outsourced their work, it wasn't because they couldn't do it themselves. "It's not that people couldn't do it, it's just not worth their time… Henry Moore I'm sure knew all about working in bronze," shares Michael Takeo Magruder, an internationally acclaimed digital artist. In a non-digital medium, if an artist wants to use another artist's style, they still have to create the artwork themselves.

When it comes to digital tools, and especially AI, however, the process is fluid. And since the practice is relatively new, the market lacks the understanding to provide constructive feedback. Established critics from traditional media evaluate the worth of a digital piece under the traditional measures and context of the art world, if at all. There is a level of scepticism present due to the infancy of the movement. (Since writing this, I witnessed Jonathan Jones, an art critic at the Guardian, refer to AI as "Bullshit" at a recent panel.)

Michael continues, "For myself, I don't do the heavy lifting, but I know absolutely what is possible… I understand the medium, and I come from that scene. When all of these artists and academics want to talk about digital, it's like yeah, but do you really understand it, the culture?"

Michael Takeo Magruder, detail of Imaginary Cities — Paris (11097701034), 2019. Algorithmically generated mono prints on 23ct gold-gilded board. Photo: David Steele © Michael Takeo Magruder.

The 'culture' that Michael is referring to is tech culture: the gamers, coders and tech enthusiasts. Some traditional art market professionals may perceive tech culture as an 'outsider' culture in the art world. Street art is often considered another, similar outsider culture. Michael considers Banksy one of the greatest contemporary artists. He notes that although street art has an ecosystem of its own, Banksy demonstrates a thorough understanding of art and pays tribute to notable traditional artists in his work.

Both street art culture and digital culture bring something new to the art world. However, when it comes to digital art, we see a different phenomenon. Established traditional artists begin to outsource their work to VR or AI specialists. While doing so, the artist receives all the recognition, using their name as a brand. But what if this happened in street art? What if David Hockney all of a sudden started doing graffiti, would the market recognize him as a prominent street artist? I doubt it (but who knows, right?).

A week after Dr Elgammal described AI art as conceptual, the first AI artwork sold at a major auction. Obvious, the Paris-based collective that produced the work, fed the system 15,000 portraits painted between the 14th and 20th centuries. By taking this available data, the artist collective used a GAN to create a fictional character, referred to as Edmond de Belamy (pictured in the family tree below). The press loved the story of the sale, especially since the work fetched an incredible sum of over $400,000 at Christie's.

The collective certainly demonstrated the highest level of salesmanship and marketing. Some would call it 'state of the art'. The story, however, sparked criticism when the audience learned that another artist, Robbie Barrat, had made the algorithm they used in creating their work. "We are the people who decided to do this, who decided to print it on canvas, sign it as a mathematical formula, put it in a gold frame," Obvious said in defence of their actions when asked about their lack of credit to Barrat. However, the level of inquiry had its limits due to the lack of understanding about who borrowed what from whom.

Obvious’ first collection is a series of 11 ‘realistic’ portraits generated by GANs. They used it to create a fictional family titled the Belamy Family.

The chain of code becomes a rabbit hole if one tries to establish who made the more significant contribution. Just consider the following: since Ian Goodfellow's development of GANs, researchers have used the openly available code for various adaptations. Notably, Facebook's Soumith Chintala partnered with Alec Radford, a researcher at Indico Data Solutions, to improve Goodfellow's GANs so they work better with images. That collaboration was one component in further adapting the code for artistic practice.

After cooperating with Radford, Soumith shared the implementation on GitHub, a platform where developers share open-source work. Only after Soumith shared his work on GitHub did the code reach Robbie Barrat, who then made additional improvements by adding scrapers and pretrained models. Hence, the line is blurred when deciding between Barrat, Soumith, Radford or Goodfellow as the contributors to Obvious' piece. All of these players, to a certain degree, can be considered to have contributed to Obvious' work. And yet, only one of them was called out.

In short, while we refer to AI as a 'tool', we also can't deny that, with its complexity, it's a different kind of tool compared to traditional material methods. There is value in the context within which the works are created and presented; otherwise, the artist is most likely to be misunderstood. So what are the artists selling then? "Perhaps the term for AI art might be a 'generative story'?" suggests Carla. "But certainly not a neural network, as an oil painting wouldn't be identified by the brush or the chemicals chosen to create it," she adds.

Mike Tyka comments that "significance comes from the process, its implications and the connection to what else is happening in the AI field". He draws attention to the fact that knowledge of the AI field (the 'culture' we discussed earlier) is a relevant factor when evaluating AI art.

Meanwhile, sceptics like Jonathan Jones from the traditional art world argue that the AI works currently produced should be dismissed altogether since they lack aesthetic qualities. As an example, take a look at the conversation between Jason Bailey, founder of an art and tech publication covering cutting-edge technology in art, and Jerry Saltz, senior art critic and columnist for New York Magazine. Jason argues that “The problem is nothing in traditional art world training has prepared the current gate keepers to understand or speak intelligently about the nuance of generative art.”

On the one hand, you have the sceptics, the traditional art critics, judging the works based on their aesthetic qualities. On the other hand, you have computer scientists, researchers and practitioners of AI exploring new possibilities that are not yet fully understood. In the near future, machines will become more sophisticated and accessible, leading a wider range of artists to adopt them in their work. Initiatives that facilitate a dialogue between the two groups can enhance our experience and perception of the new medium. We are in an era where Art and Technology are a confluence, not a juxtaposition.

EONS is a short animation, a moving painting, a music video and an experiment in creating narrative using neural networks. EONS was created entirely using artificial neural nets: the Generative Adversarial Network BigGAN (Andrew Brock et al.) was used to create the visuals, while the music was composed by Music Transformer (Anna Huang et al.). http://www.miketyka.com

Mike Tyka’s "EONS" - a video made entirely using #BigGAN, scored with Anna Huang’s #MusicTransformer