Creative design and communication: a new machine learning emerges

I usually receive a good series of weekly newsletters about Artificial Intelligence (AI), some three or so. The one I have shared most often here on this blog has been O’Reilly’s AI weekly newsletter. But recently I diversified and found some other pretty good newsletters, full of technical resources, articles and arXiv papers worth reading and checking.

One of the most significant of those newsletters is called The Wild Week in AI, a fancy name. But the content Denny Britz shares and distributes in this newsletter is of good quality. I recommend that readers of The Information Age have a close look at this newsletter and subscribe to its weekly installment.

This last week I found within The Wild Week in AI an interesting article, worth reading, about the current state of Machine Learning as a computational subject and its diversifying uses. For instance, the article claims that Machine Learning is becoming more of a communication tool and not so much a raw computational one. This development is a happy one if we regard creative thinking and innovation highly. Indeed, communication is probably the quintessential creative and innovative pursuit of mankind. What is special about communication is the speed, coupled with an immediate, instant interface, with which it is possible to innovate and create; needless to say, machine learning contributes to this speed, and in the process the field itself becomes intertwined with what it facilitates.

The article in question appeared on the blog of engineer and professional web designer Paul Soulos, which bears the author’s own name. It was appropriately titled Machine Learning and Misinformation, and here we can already glimpse its appropriateness for sharing on this blog as well. It begins with a defense of its arguments about the new shift of Machine Learning towards the information and communication research fields (only indirectly, though), of which Natural Language Processing is but one of the recent trends; it then goes on to describe and judge the shortcomings of this trend, with the potential for misinformation being a problem machine learning researchers and practitioners will inevitably be forced to tackle.

Of particular significance as an addition to our repertoire of ideas about information is the distinction Paul Soulos makes between misinformation (the unintentional dissemination of information without fully knowing the extent of its falsehood or otherwise) and disinformation (the intentional, malicious dissemination of false, unsubstantiated information). Our times are already full of evident facts of this kind all over the current mainstream media, but the added complexity of technological developments may combine to make the undesirable picture even worse.

Machine Learning and Misinformation

Communication is an essential pillar of society. Humanity’s progression over the past millennium was largely driven by the development and evolution of communication as a tool for distributing siloed thoughts from one individual to others. Communication is naively defined as content and the mode of transmission — symbols manifested as images, language transmitted through speech and writing, digital files sent through the internet. These are methods through which we communicate thoughts, ideas, facts, and opinions. New forms of communication emerge to expand the lexicon of thought and reduce the friction required to create and transmit content.


Early practitioners in the field of Interaction Design espoused human-computer symbiosis1, a tight bond between human and machine that transcends the capabilities of either individually. As communication devices, computers would facilitate a level of understanding between people that was previously only accessible to skilled writers, speakers, and artists. Computational creation reduces the skill needed to craft content that resembles the ideal form as it exists in your head; colors can be selected from a color picker instead of requiring an individual to understand the complex nature of mixing different paints to achieve a certain palette. The trend is towards uninhibited creation of a sort that only exists in the mind.

The creative aspects of machine learning are overshadowed by visions of an autonomous future, but machine learning is a powerful tool for communication. Most machine learning in today’s products is related to understanding — your phone can translate your voice into text and you can search photos for certain objects or people because of machine understanding. To accomplish this, machine learning compresses raw data into representations that it uses to find similarities and make other judgements. Representations are a cognitive concept that signify properties2. For example, a person’s mood can be compressed from an image of their face into a mood representation variable: happy, neutral, or sad.
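To make the idea of compressing raw data into a representation concrete, here is a minimal sketch in plain Python. The fixed random projection is a hypothetical stand-in for a trained encoder (a real system would learn it from data); the point is only that similarity judgements happen on the compressed representations, not the raw pixels:

```python
import math
import random

random.seed(0)

def encode(pixels, projection):
    # Each row of the projection compresses 64 raw pixel values into one feature.
    return [sum(w * p for w, p in zip(row, pixels)) for row in projection]

def cosine_similarity(a, b):
    # Similarity is judged on the compressed representations.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Two flattened 8x8 "face images": a near-duplicate and an unrelated one.
face = [random.random() for _ in range(64)]
similar_face = [p + 0.01 * random.random() for p in face]
unrelated = [random.random() for _ in range(64)]

# Hypothetical "encoder": compress 64 raw pixels down to 4 features.
projection = [[random.gauss(0, 1) for _ in range(64)] for _ in range(4)]

sim_close = cosine_similarity(encode(face, projection), encode(similar_face, projection))
sim_far = cosine_similarity(encode(face, projection), encode(unrelated, projection))
print(sim_close > sim_far)
```

The near-duplicate image ends up closer to the original in representation space than the unrelated one, which is all "machine understanding" needs for search and grouping.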


As the requirements to create and transmit media are reduced, we approach a scenario where you can realize any thought in a shareable manifestation. If you imagine an object, you need skill as a visual artist to move that image from your mind to the physical world. In the future, computers will reduce the training that is required to realize ideas in the physical world to the point where the inception of an idea is on level with the realization and communication of that idea. Generative modeling will bring huge advances to our ability to communicate with each other, but it also poses an enormous threat with the creation and dissemination of disinformation and misinformation. The difference between disinformation and misinformation is intent; disinformation is created with a malicious intent while misinformation is communicated without knowing the extent of the falsehood.

Paul Soulos makes a useful point about current machine learning models, such as generative models, that take the representation of data from a higher level back down to the level of raw data. He claims that while this technique is useful in its own right, enhancing the speed and efficiency with which machine learning algorithms and models deal with unstructured data, it also sows the seeds for the dissemination of misinformation, especially in the context of social media.
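The direction Soulos describes, from a compact representation back down to raw data, can be caricatured in a few lines of Python. The fixed "templates" below are hypothetical stand-ins for weights a real generative model would learn from data:

```python
import random

random.seed(1)

# A 3-value mood representation, as in the article's happy/neutral/sad example.
MOODS = {"happy": [1, 0, 0], "neutral": [0, 1, 0], "sad": [0, 0, 1]}

# One fixed 64-pixel "template" per latent dimension (stand-ins for learned weights).
weights = [[random.random() for _ in range(64)] for _ in range(3)]

def generate(mood, noise=0.05):
    """Decode a compact mood representation into 64 raw pixel values."""
    code = MOODS[mood]
    pixels = [sum(c * w[i] for c, w in zip(code, weights)) for i in range(64)]
    # A little noise makes each sample unique, as when sampling a generative model.
    return [p + noise * random.random() for p in pixels]

image = generate("happy")
print(len(image))
```

A trained generative model replaces the fixed linear map with a learned, far richer decoder, but the direction of the computation is the same: a few representation values in, raw data out.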

In the social media age, information becomes a weapon through networks, and we generally encounter misinformation. Propaganda pushed through state sponsored channels is disinformation, but the content in your social media feed shared by friends is misinformation. While new technologies accelerate our ability to communicate with each other, they also accelerate the spread of misinformation and disinformation. Whether we are ready for it or not, generative modeling is approaching. Will it bring progress or a misinformation nightmare that erodes the foundations of society?

Generative modeling may not be mainstream yet, but computers already aid us in frictionless communication. Consider using image search: this task can be exploratory when you want to know what something looks like, but you also use image search when you know what something already looks like and want to embed the image in a document, presentation, or conversation. The process of going through image results is a process of finding the image that most accurately approximates the image you see in your head.


Images generated from a text query.

The paragraph below is an important endorsement of the need to educate ourselves in a proper critical appraisal of what we write and post on the many editing platforms we so easily have at our disposal. The low cost and low barrier to entry of the posts we now publish come at the price of the spread of misinformation:

Phones have made it just as easy to create and consume images as text. The rise of social media apps dedicated to images reflects the changing habits of people. Rather than attempt to describe a scene to a friend, you can simply snap a photo of it and send the image. Unfortunately, our reliance on images creates a convenient opening for the spread of misinformation. We all learn to read and write in school, and while it can be difficult to craft a convincing statement, anyone can write a sentence that is false. We consume text skeptically if it strays far from reality because we know how easy it is to generate a false narrative. Cameras capture reality and we generally ingest this information as closely related to the truth.

The questions around disinformation are more subtle. Here, current technology has yet to deliver the ability to spread intentional falsehood easily and effortlessly. But that is precisely where generative models for image and photo editing enter the picture:

Computers can help us draw, even if we can’t.

There is still a barrier to create believable disinformation. While people shamelessly endorse and share disinformation produced by organizations with an agenda on social networks, we have not yet reached the point where the average person can easily create any piece of information they desire. Beyond words, images and visualizations help convince us that the underlying narrative is truthful. At the moment, fake images require you to be a skilled photo editor to maintain a sense of reality. Generative modeling is the tipping point where any individual can manifest the reality that exists in their head. One of the most interesting developments behind these techniques is the interfaces that we will use.


The cartoon above is from a seminal paper written by JCR Licklider, the father of interaction design4. Already in 1968, he was able to spot the ability of the computer to aid as the ultimate medium. A group at UC Berkeley recently published pix2pix5, a machine learning system that effectively realizes the cartoon in Licklider’s paper. Instead of having the necessary skill as an illustrator, you can sketch a rough version of the image you want to send, and the computer can render a high-resolution image. There is still work that needs to be accomplished before a pix2pix-like system makes its way into a consumer product, but generative modeling is already beginning to go mainstream in smaller ways.
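The pix2pix idea, an image-to-image mapping that takes a rough sketch and renders a fuller image, can be caricatured in a few lines. The hand-written 3x3 box blur below is only an illustrative stand-in: pix2pix learns such a mapping with a conditional GAN rather than using a fixed filter:

```python
def render(sketch):
    """Translate a rough 2-D sketch (0/1 values) into smoothed pixel values."""
    h, w = len(sketch), len(sketch[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Average the 3x3 neighbourhood around each sketch pixel,
            # clipping at the image borders.
            total, count = 0.0, 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += sketch[ny][nx]
                        count += 1
            out[y][x] = total / count
    return out

sketch = [
    [0, 1, 0],
    [1, 1, 1],
    [0, 1, 0],
]
image = render(sketch)
print(image[1][1])  # centre pixel averages its full 3x3 neighbourhood
```

The structure is the same as in the real system, input image in, transformed image out; what pix2pix adds is a learned generator judged by a discriminator, so the output looks like a photograph rather than a blur.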

The following picture is brilliantly commented on by Soulos in his illustration of the potential for manipulative bias when posting these kinds of images. The sophistication of today’s techniques might go even deeper in the quest to change the way we think about real-life events, along ever more subliminal paths:

Two similar images tell drastically different stories.

FaceApp6 is a recent mobile app that uses generative models to change certain facial features in photos. The two images above tell very different stories. The second image is Migrant Mother7, an image documenting the harsh conditions during the Great Depression. Knowing that the image comes from the Great Depression helps you understand which of these images is the real one because it is put in the context of a historical period that the image reflects. Propaganda is used by groups to overshadow the reality of a period. If the doctored version of Migrant Mother was published during the Great Depression along with other images that hid the difficult period beneath a lacquer of happiness, we may not know the period as the Great Depression today. Spreading misinformation can change the way that today’s events are written in history.

I encourage reading the entire post. It further warns us of the potential of modern machine learning techniques, combining text, image and voice manipulation, to distort reality and even tell stories like a professional novelist, in ways that manage to successfully influence our beliefs and the structure of our human values, something the good novelists excel at. But for now I finish this post by sharing just the last bits of reflection from a good post and a wonderful blog:

(…) All of this media together supports a breaking-news report where the individual pieces are increasingly difficult to separate from reality. The immediate dangers of machine learning are not robot uprisings, but rather the destabilizing effects that disruptive technologies have when taken in a fragile social and economic climate that is slow to adapt. (…)

Some people hope that the ease of creating misinformation will cause people to question all media. Unfortunately this ignores the reality of misinformation and media consumption. When you encounter information, it has an immediate unconscious effect on your attitude and memory. Even once misinformation is discredited, it still persists in your attitudes and beliefs, an effect known as Belief Echoes10.


With the ease of creation that machine learning brings to content generation, it will be easier than ever to effectively communicate. The question that underlies new technology is whether people will use it for benevolent or malicious behavior. We explored the benefits and dangers that machine learning brings in the evolving media landscape. It is naive to create these tools without considering the disastrous impact they can have. Members across technology, academia, and news must begin discussing how to navigate this new landscape. Cooperation is necessary to defend society from the perverse agenda of those determined to hijack reality.

featured and body text images: Machine Learning and Misinformation

