[liveblog][PAIR] Doug Eck on creativity

At the PAIR Symposium, Doug Eck, a research scientist at Google Magenta, begins by playing a video:

Douglas Eck – Transforming Technology into Art from Future Of StoryTelling on Vimeo.

Magenta is the part of Google Brain that explores creativity.
By the way:

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

He talks about three ideas Magenta has come to for “building a new kind of artist.”

1. Get the right type of data. It’s important to get artists to share their work and collaborate with Magenta, he says.

Magenta has been trying to get neural networks to compose music. They’ve learned that rather than trying to model musical scores, it’s better to model performances captured as MIDI. They have tens of thousands of performances. From these they were able to build a model that tries to predict the piano-roll view of the music. At any moment, should the AI stay at the same time step, stacking up notes into chords, or move forward? What are the next notes? Etc. They are not yet capturing much of the “geometry” of, say, Chopin: the piano-roll-ish vision of the score. (He plays music created by ML trained on scores and by ML trained on performances. The score-based one is clipped. The other is far more fluid and expressive.)
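
A minimal sketch (my illustration, not Magenta’s actual code) of what that event-by-event representation might look like: a captured performance is flattened into note-on, note-off, and time-shift events, and a model would then be trained to predict the next event. The event names and millisecond units are my assumptions.

```python
from dataclasses import dataclass


@dataclass
class Event:
    kind: str    # "NOTE_ON", "NOTE_OFF", or "TIME_SHIFT"
    value: int   # MIDI pitch for notes, milliseconds for time shifts


def performance_to_events(notes):
    """Flatten (start_ms, end_ms, pitch) notes into one event stream.

    Notes that start at the same moment stack into a chord with no
    TIME_SHIFT between them; advancing in time emits a TIME_SHIFT --
    the "stay and stack chords, or move forward?" choice the model learns.
    """
    boundaries = []
    for start, end, pitch in notes:
        boundaries.append((end, 0, "NOTE_OFF", pitch))
        boundaries.append((start, 1, "NOTE_ON", pitch))
    boundaries.sort()

    events, now = [], 0
    for t, _, kind, pitch in boundaries:
        if t > now:
            events.append(Event("TIME_SHIFT", t - now))
            now = t
        events.append(Event(kind, pitch))
    return events


# Three notes: a two-note chord at t=0 and a melody note at t=500 ms.
example = [(0, 400, 60), (0, 400, 64), (500, 900, 67)]
for ev in performance_to_events(example):
    print(ev)
```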

He talks about training ML to draw based on human drawings. He thinks running human artists’ work through ML could point out interesting facets of that work.

He points to the playfulness in the drawings created by ML from simple human drawings. ML trained on pig drawings interpreted a drawing of a truck as pig-like.
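
As an illustration of the kind of data such a drawing model consumes (a sketch under my own assumptions, not Magenta’s sketch-rnn code): each drawing becomes a sequence of pen offsets plus a pen state, and a sequence model trained on many such sequences learns a prior that it then imposes on new inputs, which is how a truck-shaped input can come out looking pig-like.

```python
import numpy as np


def strokes_to_sequence(strokes):
    """Convert polylines [[(x, y), ...], ...] into (dx, dy, p_down, p_up, p_end) rows."""
    rows, prev = [], (0.0, 0.0)
    for stroke in strokes:
        for i, (x, y) in enumerate(stroke):
            lifted = 1 if i == len(stroke) - 1 else 0  # pen lifts after the stroke's last point
            rows.append([x - prev[0], y - prev[1], 1 - lifted, lifted, 0])
            prev = (x, y)
    rows.append([0.0, 0.0, 0, 0, 1])  # end-of-sketch marker
    return np.array(rows, dtype=np.float32)


# A toy two-stroke doodle: a rough body outline plus a separate dot for an eye.
doodle = [[(0, 0), (10, 0), (10, 8), (0, 8), (0, 0)], [(3, 5), (4, 5)]]
print(strokes_to_sequence(doodle))
```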

2. Interfaces that work. Guitar pedals are the perfect interface: they’re indestructible, clear, etc. We should do the same for AI musical interfaces, but the software is technically so complex. He points to the NSynth Sound Maker and AI Duet from Google Creative Lab. (He also touts deeplearn.js.)

3. Learning from users. Can we use feedback from users to improve these systems?

He ends by pointing to the blog, datasets, discussion list, and code at g.co/magenta.
