“Rebuilt: How Becoming Part Computer Made Me More Human.”

The idea of becoming the next Darth Vader is one step closer.

http://www.wired.com/wired/archive/13.11/bolero_pr.html

I find this article enlightening, not just because the guy could enjoy music again after upgrading to 121 channels in his cochlear implant, but because of one paragraph that explains everything about my own current hearing. Let me explain a bit.

For all my life, my friends weren't the only ones who gave me a funny look whenever I turned on my iPod mini and started listening. Their expressions said "Huh? I thought you're deaf." or "Give me a break, you can't hear! Stop trying to look cool or be like hearing people." As I tried to explain my hearing, I realized they weren't the only ones who were baffled or puzzled. I was too.

That is, until I read this paragraph:

“Music depends on low frequencies for its richness and mellowness. The lowest-pitched string on a guitar vibrates at 83 hertz, but my Hi-Res software, like the eight-channel model, bottoms out at 250 hertz. I do hear something when I pluck a string, but it’s not actually an 83-hertz sound. Even though the string is vibrating at 83 times per second, portions of it are vibrating faster, giving rise to higher-frequency notes called harmonics….

….The engineers haven’t gone below 250 hertz because the world’s low-pitched sounds – air conditioners, engine rumbles – interfere with speech perception. Furthermore, increasing the total frequency range means decreasing resolution, because each channel has to accommodate more frequencies. Since speech perception has been the main goal during decades of research, the engineers haven’t given much thought to representing low frequencies.”

That's it! That explains why I can hear low sounds better than higher pitches like vowels and consonants, and why I have more R&B songs than metal songs on my iPod. I can understand the vowels fairly well but become lost with some consonants, like C, D, N, and T.

One reason I held back from getting a cochlear implant in the first place was its limited 22 channels, which is vividly expressed by this: "When the device was turned on a month after surgery, the first sentence I heard sounded like "Zzzzzz szz szvizzz ur brfzzzzzz?" My brain gradually learned how to interpret the alien signal. Before long, "Zzzzzz szz szvizzz ur brfzzzzzz?" became "What did you have for breakfast?" After months of practice, I could use the telephone again, even converse in loud bars and cafeterias. In many ways, my hearing was better than it had ever been. Except when I listened to music."

I felt it was not worth getting the implant for only 22 channels, because to me that seemed like a mere fraction of what a human is capable of hearing. I want to be able to hear what a person hears, not "Zzzzz szz" or some alien-like language. Now, the latest cochlear implant can process more than 100 channels, giving a much greater range of hearing.

Don't get me wrong. I was born deaf, so I will always be the Deaf person I have come to know. I view this as a tool to communicate with people, the way you would buy a Sidekick to keep in touch with your friends. But it would never change my identity as a Deaf person.
