A few weeks ago, I downloaded an app for my Android tablet called Magic Piano. Like most other instrument-based games, it works through the visual representation of coloured orbs descending the screen; hit them as they cross the illuminated line and, voilà, the right notes are struck at the right time. Partly because of how expensive the in-app purchases of new songs are, and partly because of how much Bohemian Rhapsody by Queen rocks, that song has pretty much monopolised my time on the app. It didn’t take very long to become relatively proficient at tapping the various combinations, and soon I was able to play the whole song in a kind of flow state, subtly varying it as I saw fit and gaining a real sense of relaxation.
This got me thinking. To what extent does the pleasure derived from playing this app differ from that derived from playing a real piano? Both induce a flow state; could one flow state be more significant than the other? And what if you tried to close that gap by making the app experience as “real” as possible? What if you had an augmented reality device that transcribed the music as descending orbs, but overlaid them on a real piano?
This creates an interesting little thought experiment. Imagine an entirely plausible, relatively near-term scenario in which augmented reality is provided by contact lenses, or even a neural interface, connected to a smartphone and microphone that can, after one listen to a piano piece, transcribe the music onto the piano in a gamified format. What could result is a “pianist” who could trick others into believing he can memorise and play any piece of music on the piano after just one listen.
Now, the really interesting question is this: are the scare quotes around “pianist” in that last sentence justified? Has the user actually learned to play the piano? Can the user now be considered a pianist?
One way to approach this question is to first ask what the difference is between this user – the ‘augmented pianist’ – and a ‘classical pianist’, and then to ask whether that difference is an integral and inherent part of learning.
The main difference that I can see is that the augmented pianist has essentially outsourced two things to an external device: identification and memory. Software recognises which notes to play and when, while the device also takes on the role of memory; the augmented pianist doesn’t have to memorise the notes before he plays them. This memorising, be it conscious, subconscious, muscle memory, and so on, could easily be interpreted as integral to the concept of learning. Yet many things have been considered integral to concepts, only to be left behind in the historical dust. The question, then, is this: can learning be said to occur without the identification and, more importantly, the memory elements?
The augmented pianist still learns some important things; on my first try with the Magic Piano app I was awful, and found it very difficult. Over the course of a few days, my coordination improved massively: I got much faster and handled brand new combinations of notes well on the first attempt. Obviously, in the case of a tablet app the experience is greatly simplified, but even if projected onto a real piano, I don’t doubt that a similar experience could ensue, only more immersive, with genuine sound and a genuine environment.
A brief look at the younger generations, and a basic historical grasp of how cultural trends work, can easily lead one to argue that in an augmented future where skills such as pattern recognition, note recognition and memory can be outsourced, the very definition of learning may be about to change. No longer will one have to spend tens of thousands of hours to appear proficient, or even prodigious, to an audience. This will significantly lower the barriers to playing music, to experiencing a flow state that is, potentially, every bit as immersive as the ‘real’ thing.
This reminds me of the philosophical ideas on consciousness, the so-called ‘hard problem’ of consciousness. Theoretically, everyone could be a zombie simply acting the same way someone with real consciousness would act; in the same way, everyone in the future could be a pianist with outsourced skills, a mere power failure away from not being able to play the piano at all. That said, a classical pianist is a mere brick to the head away from not being able to play either – is that really any different?
And so I see it going many ways: there will be the classical snobs insisting that augmented pianists are not pianists at all, and that the use of augmentation should be viewed like performance-enhancing drugs in sport; what is left of the record industry will be free to choose the prettiest or most showman-like people instead of those who spent years learning properly (I know this is already happening, but soon every music-related job, from bar-room pianist to the school play, will be under threat); second-hand markets in cheap pianos and instruments will thrive, and a new market of ‘dummy’ instruments will appear that don’t even work without external devices (much to the chagrin of classicists, no doubt); and millions of people will get the joy of playing instruments in an immersive, accurate feedback loop that lets them enter flow states and play any style they choose.
As a little glimpse into a transhumanist future, I found these questions really satisfying to mull over, and would love to hear any thoughts you might have on how this affects what it means to learn, where else such drivers are going to produce similar classicist/augmented conflicts, and how excited you might be, or not, about the prospect of bringing down the barriers to enjoying music-making. I’d like to think it will usher in a new creative renaissance, especially as AI gets incorporated even into creative design processes. In fact, I can think of few better reasons for the introduction of a universal basic income than to facilitate the explosion of creative possibility that is about to hit.