This music production tool is the reason why all new music sounds the same…

The Click

Imagine music as a recipe. Would you be able to tell whether it had been made with artificially engineered ingredients or fresh produce from the farmer’s market? Canned tomatoes might work just fine—but maybe you wouldn’t know what you had been missing until you tried the same dish with heirlooms, each beautifully misshapen with unique streaks of sunburst yellow.

Drummer Greg Ellis wants listeners to begin thinking about sound like food—as something they physically ingest that has a quantifiable impact on their wellbeing. These days, he believes most people are consuming the musical equivalent of McDonald’s: processed, mass-produced, and limited in flavor.

A lot of this aural blandness has to do with technology. It begins with the producer who relies on a computer rather than live instrumentalists and ends with the devices we use to consume our music, which cut out the dynamics captured in the recording studio. Ellis, a session drummer who can be heard in the background of Hollywood blockbusters such as Argo, Godzilla, and The Matrix series, is exploring this phenomenon in a forthcoming documentary, The Click.

What is “the click”?

The “click” is a digital metronome that musicians listen to while recording to ensure their rhythm is exactly in time with the tempo. A simple and now nearly ubiquitous part of the recording process, it has had a profound effect on the music we listen to.

While the click was originally intended as a tool for precision and cohesion, Ellis says its perfect uniformity ushered in an expectation that the rest of the musical parts should follow. Suddenly singers, instrumentalists, and drummers were expected to sound like machines. When vocalists were slightly off key, they could be auto-tuned. If a bass player wasn’t perfectly in time with the drummer, their parts could be processed in a recording program that syncs them up. Of course, that’s if a live musician is used at all—many producers in pop, hip hop, and R&B now use samples or synthetic sounds generated by computers rather than their human progenitors.
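
To make the mechanics concrete, here is a minimal sketch of the quantization idea that “syncing them up” relies on: note onsets are snapped to the nearest subdivision of the grid defined by the click’s tempo. (This is a toy illustration in Python, not the algorithm of any particular recording program; the tempo and timing values are invented for the example.)

```python
def quantize(onsets_sec, bpm, subdivision=4):
    """Snap note onset times (in seconds) to the nearest grid line.

    subdivision=4 gives a sixteenth-note grid at the given tempo.
    """
    beat_len = 60.0 / bpm          # length of one beat in seconds
    grid = beat_len / subdivision  # spacing of the quantization grid
    return [round(t / grid) * grid for t in onsets_sec]

# A slightly loose human performance at 120 BPM...
played = [0.02, 0.49, 1.04, 1.51, 1.98]
# ...becomes machine-exact once snapped to the grid.
print(quantize(played, bpm=120))  # [0.0, 0.5, 1.0, 1.5, 2.0]
```

The same grid is what a drummer is asked to hit, beat for beat, when playing to the click.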

These days, Ellis says he’s not given space to create most drumming parts. Although he’s played drums with greats including Billy Idol, Mickey Hart, and Beck, a producer who knows little about drumming will often create his part for him before he gets into the studio—and expect him to play it precisely on the click. He sometimes doesn’t even play through the entire song anymore: He’s often asked to play just a couple of measures, which are then repeated using a copy-paste function that prevents variation, dynamics, or embellishment.

And that could be having an effect on our enjoyment of the music: There is some scientific evidence on the value of giving listeners something they’re not expecting. “Music that’s inventive excites neural circuits in the prefrontal cortex,” says Daniel Levitin, a neuroscientist and author of This Is Your Brain on Music. “It’s the job of the composer to bring us pleasure through choices we didn’t expect.”

Is technology making music more creative, or less?

Ellis says this popular method of production stifles creativity. “I’m not calling out anyone who uses the gear, I’m calling out the gear itself, which we’ve let dictate our sense of music and time,” Ellis says. “There’s a sense that when you’re faced with the real thing, it actually feels wrong to people.”

“Everyone’s used to hearing everything precisely on the click and with autotune,” agrees Petros, a producer in Los Angeles who has worked with hit-makers such as One Direction, Enrique Iglesias, and Dillon Francis. “So if a recording is not done that way, it will sound off.” However, Petros and other music producers are welcoming these new technological advances as a positive, not a negative. He says completely automating drum tracks is cheaper, easier, and more precise—and, in some ways, it allows for more creativity, not less.

With a live drummer, producers have a limited number of sounds to choose from, but with a program, they can quickly and easily experiment with dozens of different options until they find the one that sounds right. Petros says that most of his friends who are producers in the music industry don’t even know how to record a live drum set, and that a significant number of people who have songs in the Billboard Hot 100 don’t have any formal music training. But do they need to anymore?

Edward Sharpe and the Magnetic Zeros’ singer Alex Ebert says it’s become too easy for anyone to make music with a computer and free software. Consequently, there’s been an “undeniable loss of mastery” among a significant percentage of the musicians and producers making hits now. He says he’s not anti-technology: Technological experimentation, after all, is what allowed for the birth of revelatory albums including The Beatles’ Sgt. Pepper’s Lonely Hearts Club Band, Jimi Hendrix’s The Jimi Hendrix Experience, and Pink Floyd’s The Dark Side of the Moon. Instead, he’s against technology being used as a crutch rather than a tool for invention. “Musical successes are just being regurgitated in refinement,” he says.

Not everyone agrees. Robert Margouleff, a recording engineer best known for revolutionizing the use of the synthesizer on Stevie Wonder’s albums, has called the laptop “the folk instrument of our time.” It’s allowed innovators like St. Vincent and Bon Iver to create new sonic experiences and entire albums by themselves, and has lowered the barrier for new artists to create masterpieces in their bedrooms.

But what about the consumers? As music becomes more mechanized, how is this trend affecting the experience for the people paying for it with their Spotify subscriptions?

How does the device we listen to music on change what we hear?

This technological wedge doesn’t stop at the act of music creation itself: Ellis believes that the way it’s packaged and then listened to only further separates us from the warm, feel-good vibrations we originally turned to music for. “There’s all kinds of losses that happen after music leaves the studio,” says USC professor of electrical engineering Chris Kyriakakis. “It’s basically all downhill from there.”

Engineers compress tunes to convert them into files compatible with our multitude of devices. Information is lost immediately during compression, and even more is lost depending on the system we play that file through. It’s like “a palette that’s shrunk down to primary colors,” Ellis says. Listening to music through headphones that don’t fit properly into our ears, for example, or smartphone speakers that cut out frequencies emanating from the guitar, bass, and drums, means we end up hearing an even more dumbed-down version of the sonic vibrancy the composer originally intended.
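
As a rough illustration of the playback end of that loss, the sketch below runs a signal through a simple one-pole high-pass filter, a crude stand-in for what a tiny phone speaker does to the low end of a bass or kick drum. (The cutoff frequency and the filter itself are illustrative assumptions, not measurements of any actual device.)

```python
import math

def one_pole_highpass(samples, sample_rate, cutoff_hz):
    """Crude one-pole high-pass filter: attenuates content below cutoff_hz,
    a rough stand-in for a tiny speaker that cannot reproduce low frequencies."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out = [samples[0]]
    for i in range(1, len(samples)):
        out.append(alpha * (out[-1] + samples[i] - samples[i - 1]))
    return out

# A 60 Hz bass tone is mostly wiped out by a 400 Hz cutoff; the groove's
# foundation simply never reaches the listener.
sr = 44100
bass = [math.sin(2 * math.pi * 60 * n / sr) for n in range(sr)]
filtered = one_pole_highpass(bass, sr, cutoff_hz=400)
print(max(abs(x) for x in filtered[sr // 2:]))  # well below the original peak of 1.0
```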

Some efforts are being made to mitigate these effects. For example, Spotify recently tweaked the volume of its entire song library to try to bring back some of the original subtlety lost to compression. As Bruno Romani writes on Motherboard, “When compression occurs in an exaggerated way, it makes everything louder, which ends up stealing the dynamics away from the music itself. It’s like listening to that one loud friend of yours who always yells when they’re drunk. In addition to being bothersome, it also becomes monotonous after a while.”
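
Spotify’s actual processing is proprietary, but the underlying idea of loudness normalization is simple enough to sketch: measure a track’s average level, then apply one constant gain so every track lands at the same target, leaving its internal dynamics untouched. (The sketch below uses plain RMS and an assumed target of -14 dB; real services measure loudness in LUFS, but the principle is the same.)

```python
import math

def normalize_loudness(samples, target_rms_db=-14.0):
    """Apply one constant gain so the track's average (RMS) level hits the target.

    A single gain change preserves the track's internal dynamics, unlike the
    heavy compression applied during mastering.
    """
    rms = math.sqrt(sum(x * x for x in samples) / len(samples))
    current_db = 20 * math.log10(max(rms, 1e-12))
    gain = 10 ** ((target_rms_db - current_db) / 20)
    return [x * gain for x in samples]

# A track mastered very loud is simply turned down; its waveform is unchanged,
# so there is no longer any advantage to mastering it loud in the first place.
loud_track = [0.9, -0.9, 0.8, -0.8]
print(normalize_loudness(loud_track)[:2])
```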

Which type of music is better for us?

We may not be experiencing the full gamut of potential expression, but does mechanized music have a different effect on our brains?

Neuroscientist Levitin says we don’t know if music created with live instrumentation has more healing potential than its click-y counterpart. What we do know is that whether it’s created on a click or not, a steady rhythm is more likely to put people in a trance because the neurons in our brains start firing in synchronicity with the beat. Levitin says this trance can “help you to relax or achieve some insights you wouldn’t otherwise.”

Levitin has also co-authored a study that found people who listen to music together have synchronized brain waves. He hypothesizes that, at least in the case of a concert, audience members might feel more empathy and bonding if they’re able to see the musician. This is something Ellis argues we’re sorely lacking in our lives today, opting to watch YouTube footage of a live gig on our tiny screens on the way to work instead.

Brian Eno explains the loss of humanity in modern music…

In music, as in film, we have reached a point where every element of every composition can be fully produced and automated by computers. This breakthrough allows producers with little or no musical training to rapidly turn out hits. It also allows talented musicians without access to expensive equipment to record their music with little more than their laptops. But the ease of digital recording technology has encouraged producers, musicians, and engineers at all levels to smooth out every rough edge and correct every mistake, even in recordings of real humans playing old-fashioned analogue instruments. After all, if you could make the drummer play in perfect time every measure, the singer hit every note on key, or the guitarist play every note perfectly, why wouldn’t you?

One answer comes in a succinct quotation from Brian Eno’s Oblique Strategies, which Ted Mills referenced in a recent post here on Miles Davis: “Honor Your Mistakes as a Hidden Intention.” (The advice is similar to what Davis gave Herbie Hancock: “There are no mistakes, just chances to improvise.”) In the short clip at the top, Eno elaborates in the context of digital production, saying “the temptation of the technology is to smooth everything out.”

But the net effect of correcting every perceived mistake is to “homogenize the whole song,” he says, “till every bar sounds the same… until there’s no evidence of human life at all in there.” There is a reason, after all, that even purely digital, “in the box” sequencers and drum machines have functions to “humanize” their beats—to make them correspond more to the looseness and occasional hesitancy of real human players.
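
The “humanize” functions Eno alludes to are, in effect, quantization run in reverse: small random offsets are added back to timing and velocity. A minimal sketch of the idea might look like the following (the jitter amounts are arbitrary illustrative values, not any specific sequencer’s defaults):

```python
import random

def humanize(events, timing_jitter=0.01, velocity_jitter=8):
    """Add small random offsets to note timing (seconds) and velocity (1-127),
    loosely imitating the natural drift of a human player."""
    humanized = []
    for time_sec, velocity in events:
        t = time_sec + random.uniform(-timing_jitter, timing_jitter)
        v = min(127, max(1, velocity + random.randint(-velocity_jitter, velocity_jitter)))
        humanized.append((t, v))
    return humanized

# A rigid, copy-pasted hi-hat pattern regains a little human inconsistency.
rigid = [(i * 0.25, 100) for i in range(8)]
print(humanize(rigid))
```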

This does not mean that there is no such thing as singing or playing well or badly—it means there is no such thing as perfection. Or rather, that perfection is not a worthy goal in music. The real hooks, the moments that we most connect with and return to again and again, are often happy accidents. Mills points to a whole Reddit thread devoted to mistakes left in recordings that became part of the song. And when it comes to playing perfectly in time or in tune, I think of what an atrocity would have resulted from running all of The Rolling Stones’ Exile on Main Street through a digital audio workstation to sand down the sharp edges and “fix” the mistakes. All of its shambling, mumbling, drunken barroom charm would be completely lost. That goes also for the entire recorded output of The Band, or most of Dylan’s albums (such as my personal favorite, John Wesley Harding).

To take a somewhat more modern example, listen to “Sirena” from Australian instrumental trio Dirty Three, above. This is a band that sounds forever on the verge of collapse, and it’s absolutely beautiful to hear (or see, if you get the chance to experience them live). This recording, from their album Ocean Songs, was made in 1998, before most production went fully digital, and there are very few records that sound like it anymore. Even dance music has the potential to be much more raw and organic, instead of having singers’ voices run through so much pitch correction software that they sound like machines. (Witness the obscure disco hit “Miss Broadway,” for example, or LCD Soundsystem’s career.)

There is a lot more to say about the way the albums represented above were recorded, but the overall point is that just as too much CGI has often ruined the excitement of cinema (we’re looking at you, George Lucas)—or as the digital “loudness wars” sapped much recorded music of its dynamic peaks and valleys—overzealous use of software to correct imperfections can ruin the human appeal of music, and render it sterile and disposable like so many cheap, plastic mass-produced toys. As with all of our use of advanced technology, questions about what we can do should always be followed by questions about what we’re really gaining, or losing, in the process.

Universal basic income: A ‘humane’ idea whose time has come, or a $3 trillion black hole?

What if the federal government gave everyone a check, every month?

Tesla (TSLA) CEO Elon Musk and Facebook (FB) CEO Mark Zuckerberg are among those who say universal basic income, or UBI, is a good idea. With inequality widening, the idea of an unconditional, periodic cash payment that the government makes to everyone has suddenly become a hot topic.

The idea is that, whether a person is unemployed or wealthy, a $1,000 monthly government check could replace all current welfare programs, including Social Security.

“I think it would theoretically be superior to the existing social welfare system,” Michael Tanner, senior fellow at the Cato Institute, told CNBC’s “On The Money” in an interview. “It would be more efficient. It would be more humane and it would be a lot less paternalistic.”

The robots are coming

The conversation about UBI has reached a crescendo as the workforce leans more heavily on technology. Nearly half of all U.S. jobs could be replaced by robots in the next decade or two, according to an Oxford University study.

Last November, Tesla’s Musk said there was “a pretty good chance we end up with a universal basic income, or something like that,” as a rising number of workers lose jobs due to automation.

UBI supporters say the cash from the government could fund basic needs, like food and housing, freeing people up to find new jobs in the digital economy.

“A lot of people when they first hear this idea really like it,” said Jason Furman, former chief economic advisor to President Obama. That is, until you read the fine print.

“And then when you look at the details it turns out it just doesn’t work,” Furman explained to CNBC. “It costs two to three trillion dollars. You would need to double the current income tax to make it work.”
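
The trillions figure follows from straightforward arithmetic. As a back-of-the-envelope check (the adult-population figure below is our own approximation, not a number from the interview):

```python
# Rough check on the cost estimate quoted above.
adults = 250_000_000   # approximate number of U.S. adults (our assumption)
monthly_check = 1_000  # the $1,000-a-month figure discussed above
annual_cost = adults * monthly_check * 12
print(f"${annual_cost / 1e12:.1f} trillion per year")  # -> $3.0 trillion per year
```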

Furman, a professor at Harvard University’s Kennedy School of Government, added that “the premise underlying it is wrong too. There’s going to be a lot of automation, but there’s also going to be a lot of jobs, and our focus should be on making sure people can get those jobs, not giving up. And universal basic income represents giving up in the face of that challenge.”

Yet Tanner took aim at the current social safety net. He argued the current “social welfare system spends nearly one trillion dollars a year fighting poverty, and it doesn’t do a very good job of enabling people to rise and get out of poverty and to be in control of their lives.”

He added: “Looking for some new alternative for that is not a bad idea.”

The debate has been heightened by Europe’s experiments in providing UBI to citizens, which have had mixed results. Those experiments have amplified calls to try a similar approach in the U.S.

“Our current system is certainly imperfect, I don’t want to be the defender of the status quo,” said Furman. He cited research showing that current government anti-poverty programs “that invest in children” by providing food stamps, Medicaid, and housing vouchers are successful and “increase their mobility.”

Yet Furman added that “it’s too simplistic to say, just write everyone a check, let’s spend trillions of dollars doing that, rather than doing the hard work of trying to get the (government) programs right.”

Tanner countered that it’s hard to determine which federal programs are effective and which aren’t. “We have over a hundred different welfare programs all with different rules and regulations. They’re overseen by dozens of different agencies. Simplifying, consolidating and moving to cash would make a great deal of difference I think.”

So might a UBI program work in America? “We don’t have a lot of wide-scale evidence yet; there are a number of ongoing experiments in places like Finland, the Netherlands, and Canada,” Tanner acknowledged.

Still, Furman doesn’t see UBI or the “rise of the robots” as coming anytime soon.

“Maybe 50 or 100 years from now we have enough robots to make everything and they can just hand the proceeds over to us,” the academic said. “But I’m trying to think in the scale of the next 10, 20, 30 years, (robots) are not going to take our jobs on any time scale that I’m capable of envisioning.”

AZ talks new song ‘Save Them’…

The full AZ interview is up now. You can catch this cut on my May Mixtape.
—————————————————————————————————————

    The latest single “Save Them” features a snippet of a speech by Louis Farrakhan. Why was it important for you to take a clip from the Minister and place it in this song?

    Farrakhan is the voice of the hood. He’s a voice of the world. He’s for our people and he’s been there since the beginning. So at the end of the day, he has that powerful speech that was needed to reach the people. When you hear his voice, you will fall back and you’ll take notice and listen. And that’s what I was trying to get across to get the ears of the youth. When I was putting this record together, a lot of my peers and fans were like, ‘Yo, you got to save us!’ And I used to be like, ‘Save you from what?’ and they would say, ‘The music now has no substance. We need substance. Save us.’ So that’s where I got the title from.

    Why was it an important decision to add Raekwon and Prodigy on this album?

    Their swords are sharp lyrically. And we all love lyrics. We wanted to bring that to the table. So me knowing that their discography contains nothing but that and they specialize in that.