
AI, Art & Alignment

Alignment: Methods for preventing AI systems from inadvertently acting in ways that subvert human values.

Training Day

21st century computer programmers may soon go the way of 20th century punch card programmers. Computers are now being trained more than programmed by the next generation of computer scientists.

Think of training a new puppy. You train it to sit, to roll over, to walk on a leash. And for sure, you train it not to poop on your carpets.

Let’s level up from puppies to people, the little ones you potty train. Babies are said to be like sponges; they soak up and internalize whatever you pour into them. A good parent will pour into their child the values they embrace and, as the child grows, help them set positive goals.

Clearly, not every parent has the same values. The values and goals one parent instills in their child are not likely to align perfectly with those of another.

Computers have been like the line workers in a paperclip factory: once programmed, they do repetitive work quite efficiently. Artificial intelligence is very different. AI systems are being trained to do creative things: crafting essays, spittin’ rhymes, composing music, walking on artists… or should I say, doing art. For better or worse, AI is doing it quite creatively. We certainly hope AI is being trained not to poop on the world.
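If the ‘trained, not programmed’ idea feels abstract, here’s a minimal sketch in Python. It’s purely illustrative and not drawn from any real AI system; every name in it (is_spam_programmed, train_spam_weights, the toy messages and labels) is made up for the example. The first function is a rule a programmer writes by hand; the second learns its rule from labeled examples chosen by the trainer.

```python
# Hypothetical toy example: "programmed" vs. "trained" spam detection.

# Classic programming: a human writes the rule by hand.
def is_spam_programmed(message: str) -> bool:
    # The behavior is exactly what the programmer typed, nothing more.
    return "free money" in message.lower()

# Training: the rule (one weight per word) is learned from labeled examples.
def train_spam_weights(examples, labels, vocab, epochs=20, lr=0.1):
    weights = {word: 0.0 for word in vocab}
    bias = 0.0
    for _ in range(epochs):
        for text, label in zip(examples, labels):
            words = set(text.lower().split())
            score = bias + sum(weights[w] for w in words if w in weights)
            prediction = 1 if score > 0 else 0
            error = label - prediction          # 0 when the guess was right
            bias += lr * error
            for w in words:
                if w in weights:
                    weights[w] += lr * error    # nudge toward the trainer's labels
    return weights, bias

# The labeled data is the "values" the trainer pours in; change the labels,
# change the behavior.
examples = ["free money now", "lunch at noon", "claim your free prize", "meeting notes attached"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam
vocab = {w for text in examples for w in text.lower().split()}

weights, bias = train_spam_weights(examples, labels, vocab)
test = "free money tomorrow"
score = bias + sum(weights[w] for w in test.lower().split() if w in weights)
print("programmed verdict:", is_spam_programmed(test))
print("trained verdict:", "spam" if score > 0 else "not spam")
```

The toy makes the point of this whole post in miniature: swap the labels and you swap the behavior, which is why the training curriculum, and who gets to write it, matters.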

Alignment — Misaligned

[AI art: Artificial Intelligence Alignment]

Alignment is the term for methods that prevent AI systems from acting, whether inadvertently or intentionally, in ways that subvert human values.

AI designers have the weighty task of developing a training curriculum whose values will align AI with humanity. Thus far, the public has had little say about the content and intent of that training.

A recent (April ’23) controversy over a publicly displayed mural got me thinking deeply (humanoid deep learning) about AI, art and alignment.

An artist painted a mural on the exterior wall of a coffee shop. It featured graphic black figures against a stark white background, stabbing or being stabbed with knives. I saw no sign of hope, of peace, of light or anything positive in the mural. It portrayed only anger and violence.

Many residents in the community saw what I saw, or didn’t see what I didn’t see. They were very unhappy about the mural, and some demanded that it be removed (painted over). I believe it misrepresents the peaceful vibe of the community and sends a negative message to children and students in neighborhood schools. Still, I never publicly called for its removal (see my post ‘What Are You Brewing’). I did, however, use the controversy to document scientific studies on the negative and harmful effects of violent imagery on children; see ‘The Writing’s on the Wall’ and ‘Reflections & Affections’.

Other residents insisted that the mural remain, often expressing their opposition to censorship.

Social media lit up with comments for and against the mural. Most comments were thoughtful and presented with respect, regardless of which side the commenters took. Some ‘freedom of expression’ commenters adopted the tone of the mural.

Consent, Condescension or Contempt

I’ve been watching the rapid development of AI with great interest, and I’ve noticed how software engineers have aligned their products and rolled them out to consumers. It seems that most art-generation AI is out of alignment with the values and goals of many artists, but that’s a topic for another post.

Will there be public consensus or division regarding AI? I ask myself that question in light of the misalignment over the mural. Are consensus and alignment even possible in a nation as divided as the United States? That question could be asked of many nations today. Might AI one day be condescending, even contemptuous, toward its makers?

An open letter launched by the Future of Life Institute seeks a six-month pause in the development of AI more powerful than GPT-4. It has garnered over 30,000 signatures. Many of the signers are leaders in the field of artificial intelligence: Elon Musk, Steve Wozniak, Andrew Yang and other luminaries.

Discounting those signers who are virtue signaling while moving forward with development behind closed doors, it seems that many developers, scientists, educators and others have seen a ‘Ghost in the Machine’, a data stream of hallucinations and dreams.

Written by: this human
Art by: Ai & KAi


4 Comments

  1. What an eye-opener, and quite alarming. I say alarming because there’s a fine line between creation and destruction. Obviously everything has its pros and cons, but just because you can, should you?
    Thank you so much, Keni, for this thought-provoking article. AI is now on my radar 👍🏼

    1. Yes, sometimes we have to look at the unpleasant things in order to move forward in the right direction.
