Testimony, How I Learned to Play This Wonderful Anthem

Hello once again!
I am sorry for not having posted here in a while, but I recently had one of my computers break, so I am borrowing one in the meantime until I can get a new one.

Before we go on,

Let’s take a sneak peek at what we’ll be discussing in the next paragraph.

I know!

I was pretty surprised when I first heard about this song, which has become my GLBTQIA+ anthem, back in 2014. If I had known how to watch Glee on my computer, I would probably also have been surprised to learn that its third-season finale coincided with my 2012 high school graduation. The piece itself was released that March, three months before my graduation. Fortunately, some folx who wanted to preserve the TV shows recently taken down from the Blind Mice Movie Vault decided to start a new site called the Audio Vault.

I thought I’d write about how challenging it was for me to get one of the pieces I really love into an accessible format, because I had no way of obtaining it except as a PDF file that needed to be recognised, or as a hardcopy that needed to be scanned with a flat-bed scanner or camera. Alternatively, I could’ve sent or mailed it to someone who knew how to work with those formats, as I’ll discuss below. Of course, if the file had already been in MusicXML or .mid format, there would have been nothing to do but open it in accessible music composition software: something portable like MuseScore, or a text-based programme like LilyPond or Braille Music Editor 2, which is somewhat outdated but is based on the LilyPond engine. MIDI files can also be opened in sequencers and digital audio workstations.
Fortunately, there are resources available, besides the ones I listed above, to help you, whether you are blind or sighted, abled or disabled, etc, make sheet music and music-learning in general accessible to everyone.
You would most likely need to hire a Braille music transcriber when a softcopy of the music is not available, or when it is locked in a PDF file that would require a labour-intensive recognition process. Getting it back can take a while, depending on how much needs to be transcribed, and it may or may not fit your budget in the first place. Transcriptionists generally work by hand, usually with a programme like Braille2000 or Duxbury, though some people are opting for automated software, like Goodfeel by Dancing Dots. Goodfeel is a suite of applications, recently updated with some new features, that lets someone who knows nothing about Braille music scan, edit, and emboss scores automatically; it can open MusicXML files, too. The nice thing about automation is that it leaves less room for the typographical errors that are common in manual transcription. See the whole thing in action by watching this video.

A newer version of the Sharp Eye music-scanning and recognition programme, called PhotoScore & NotateMe Ultimate, from Neuratron, can also import multi-page PDF files. However, no OMR programme is one hundred percent accurate, so visual assistance is always going to be required to correct the scannos.
A few months ago, though, I called a service for the blind called Aira, and I asked around until I found an agent who happened to know how to read sheet music, and, using remote access, they were able to correct some of the errors in the file. The process took a lot longer than I would’ve liked, but it proved one thing: it is possible. If only more agents were knowledgeable about using the software, the process could’ve gone more smoothly, and I wouldn’t have wasted so many minutes.
The National Library Service’s Braille and Audio Reading Download programme has a pretty broad selection of Braille music scores that can be downloaded, read on a Braille display, or embossed. Additionally, the Library of Congress hosts a myriad of audio instructional materials from Bill Brown, for playing various instruments. Bill Irwin has contributed more than three hundred hours of verbal instructions for learning to play piano and organ, and understanding modern harmony.
Click here to view a list of current certified music Braille transcribers. If you’d like to become certified, click here to learn how.
If you’ve decided to pursue this, you’ll probably want to download this manual from the National Library Service to help you study.


Before you hear this song,

I must warn you that it contains triggering content, the kind that raises many red flags and would make any chorus hesitate to add it to a list of songs to perform.
I thought I’d go into detail about how exactly I was introduced to this masterpiece.

To begin with,

I’ve been involved with music for a very long time. When I was seven or eight, my mother enrolled me in a Spanish choir as part of my church. In sixth grade, besides the required music class in school, I finally got to take part in a band. Oh yeah. Speaking of required music: when we were learning how to play a plastic recorder, the music teacher wrote a simple score up on the whiteboard, and the people around me were playing Mary Had a Little Lamb in the key of G. The only difference was that we played B, A, G, A, B, B, B, A, A, A, B, B, B, ending with B, B, B instead of B, D, D. When my turn came, I played through the piece clearly, quickly, and effortlessly, without stumbling, and when I finished, everybody applauded. I wondered why people were applauding for me when no one had for the other students. When I later asked about it, one of them told me that it was because they thought it was amazing how I could simply hear the notes and copy them right off the bat, without even seeing them. I was, like, okay….?
Anyhow, I couldn’t be in band during middle school because they couldn’t find any way to accommodate me. However, I was able to join band in my eleventh-grade year of high school, when I finally learned how music notation worked in general, and that, in turn, allowed me to learn music Braille more easily. So, in my final year of school, I got to play in the pep band, and I got to march in the homecoming parade. I wonder how things would’ve worked if I had been in an orchestra and they had used this system? Unfortunately, that wouldn’t be introduced for almost another seven years.
I first started coming out as transgender shortly after leaving the transitional programme in Vancouver, Washington in June of 2013. It started with a dream in which my brain was transplanted into a female’s body while that person’s brain was transplanted into mine, so that we swapped. I also imagined that both my vision and hearing were fully functional, but at the same time, I felt sorry for the person who now had my hearing and vision loss to deal with.
Then, in late August of 2013, thanks to the Q Centre staff, who gave me the resources to visit SMYRC (pronounced smirk), I made some new friends, took part in a few drag shows, and joined Pride Project, a programme here in Washington County. Anyhow, they wanted to invite a few groups to perform at their fifteenth anniversary party, held on Friday, 27 December 2013, and one of the groups they invited was the PDX GSA Youth Chorus, now known as Bridging Voices. I really wanted to learn more about them, so I asked for more information, and, while I was volunteering at the Children’s Club of North America, which is what I call the Boys & Girls Club, I went to one of their rehearsals on Sunday, 8 December.
I got to play an electric organ there while I waited for the session to start. I remember that day being cold and rainy, and I had posted one of my articles on my blog earlier that morning about mental illnesses and NAMI. Gosh, how I remember those moments… the pizza we ate downstairs, feeling as though I would doze off if I didn’t will myself awake, and finally sleeping on the way home in the cab with the raindrops gently falling around me.
I didn’t return to the chorus until after New Year’s 2014, and I started learning some new songs shortly thereafter. However, I was so dysphoric about my somewhat masculine voice that I didn’t feel comfortable singing. So, I resorted to using my musical instruments to accompany the chorus members. By coincidence, I happened to learn about the castrati, and how important they were during the Renaissance and Baroque eras. They were also called eunuchs, and they developed very unusual vocal ranges and characteristics, partly because of how big and tall they grew. I then got to read a book called Choirboy by Charlie Jane Anders.
Some time in February, I got a blanket E-mail with a call for singers to audition for a solo and a chance to sing with the Philadelphia and Portland Gay Men’s Choruses. That song was called Testimony, although the name held no significance to me whatsoever at the time. I remember going to rehearsal on Sunday, the 23rd of February, and, since my paratransit service dropped me off several minutes early, I got to play on the organ for a little bit. I was sort of obsessed with playing Pomp and Circumstance, or Land of Hope and Glory. Anyway, the chorus manager told me that they were going to finish recording a video, so I watched the proceedings without much interest. Then, after everyone had come up from eating pizza and lavender ice cream (or maybe that was in January), I heard the piano accompanist playing some weird chords and the artistic director guiding some people to harmonise with them. Then I heard some words… /eep me ‘way… kee/ me ‘way… tay ee ay… and so on. I heard two low voices singing, starting with a perfect fourth: Eb3-Ab3, F3-Bb3, Gb3-Bb3, Ab3-Bb3. I had no idea what they were doing. Then I heard the same thing, but now with the two higher voices: Db4-Bb3, Gb4-Db4, F4-Db4, and Eb4-Db4. Finally, they combined all four voices, and I heard this really pretty but mournful chord progression.
In case you don’t know what those numbers mean, you can learn more about what scientific pitch notation is by reading this Wikipedia article.
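As a small illustration (my own sketch, not part of any tool mentioned in this post), scientific pitch names like the ones above can be converted to MIDI note numbers, which makes it easy to check an interval such as that opening perfect fourth:

```python
# Hypothetical helper: convert scientific pitch notation (e.g. "Eb3")
# to a MIDI note number so intervals can be counted in semitones.
NOTE_OFFSETS = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

def to_midi(name: str) -> int:
    """Parse a name like 'Eb3': letter, optional #/b accidentals, octave."""
    letter = name[0].upper()
    rest = name[1:]
    accidental = 0
    while rest and rest[0] in "#b":
        accidental += 1 if rest[0] == "#" else -1
        rest = rest[1:]
    octave = int(rest)
    # MIDI convention: C4 (middle C) = 60, so each octave spans 12 numbers.
    return 12 * (octave + 1) + NOTE_OFFSETS[letter] + accidental

# Eb3 up to Ab3 is five semitones: a perfect fourth.
print(to_midi("Ab3") - to_midi("Eb3"))  # → 5
```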
Then, I learned that we were going to sing some new songs, like Fallen Angel and something about Wings, by David York, who is a local composer here in Portland, so his music won’t be anywhere on-line. I never got to listen to measures ninety-two through one-hundred-something of Fallen Angel, and I really want to hear them!
However, I couldn’t get those plaintive notes out of my head, and I just had to know what part of which song they came from, if they really were part of a song. So, I composed an E-mail to the artistic director, and, after they corrected me on the lyrics, I learned that keep me away was really take me away. Those words came from a piece called Testimony, written by Stephen Schwartz, the Broadway musical composer. If you are having trouble accessing the previously linked webpage, click here for a cached version of the site.
You are probably thinking that the song hit me hard when I first listened to it. But, on the contrary, it did not…. at least not yet, anyway. I went to watch the chorus perform at Call Auditorium, which is on the Reed College campus.
I was busily posting stuff about laser hair removal and electrolysis, and about my classes at the commission for the blind, during that time. To celebrate the end of our first season, we got to sing with the Portland Lesbian Choir on Sunday, the 22nd of June 2014. My favourite songs from that day were Give Us Hope by Jim Papoulis and Mae Frances by Bernice Johnson Reagon. I happened to run into an old acquaintance whom I had first met two years before at the summer work experience programme. One of the things I really loved about the practice tracks PLC had was that they were recorded and rendered into stems using a digital audio workstation. That made it easy for me to hear each part and listen to the lyrics and melody or harmony at the same time. Some of them had accompaniment, some of them didn’t, and some came both ways. Bridging Voices came back in session when they sang at the Portland sQuare on Sunday, 14 September.
It wasn’t until nine months later, around Christmas time, that the song really hit me hard. It was just taking its time to build up inside me, I fancy. I kept listening to this song in my head, playing various lines of it, in different keys, especially the first half of it, and that is when I sat down and wrote that deep and depressing post I just put on my blog this year. I wrote it because I had gotten in a huge fight with one of my family members, and I didn’t know what else to do and who to talk to.

In 2015, I resolved to do whatever I could to find a way to learn that song. I couldn’t easily transcribe everything by ear because of my hearing loss, but I could at least get a general sense of what the piano and voices were doing. So, I started searching for a way to learn it. I first tried to print the music and scan it, but this hardly worked out, so I requested a refund, which was approved. Finally, I contacted Stephen Schwartz’s agent, and they provided me with a PDF copy, which I could recognise directly. I was using Sharp Eye at the time, which could only recognise one page at a time and could not extract the other pages, so I had trouble with the multi-page PDF file. A problem many music OCR programmes share is that they don’t seem to recognise tied notes very well: sometimes they do, and sometimes they don’t, in which case the tied notes just sound like two separate notes. I was finally able to find a workaround, but it required converting the PDF to a bunch of TIFF files using a different music recognition programme.
By using a MIDI sequencer called Quick Windows Sequencer, I was able to edit the accompaniment track in the MIDI file based on my best educated guess. This was in mid-to-late March of 2015, when I was getting ready to take my first on-line classes through Portland Community College. Some people forewarned me to be careful with this approach, because it is generally illegal to reproduce or redistribute such material except for personal use. But I did a bit of research and found that there is an act of Congress, 17 U.S.C. § 121, which permits authorised organisations to distribute works in alternate formats, at free or reduced cost, for the exclusive use of people with print disabilities and other reading barriers.
In May of 2015, I managed to locate a Braille transcriber, and we arranged to have my piece transcribed. I sent them a copy of the PDF file via E-mail, and then I waited, with occasional notes from the transcriber about how I wanted it formatted, etc. Then, just as classes were finishing up for spring term in mid June, I got a wonderful surprise: the file was transcribed at last! Now I’d see how all the voices were arranged, and I’d be able to make any corrections to the accompaniment! I was so elated that I didn’t know where to start. I even deprived myself of sleep a little, and was so tired that I almost didn’t eat my supper. I had to pay ninety-six dollars for the transcription, because they charged four dollars per page and there were twenty-four pages overall. I think I got the vocal score first, and then the piano score shortly thereafter.
Since I had memorised computer Braille, I was able to read the file using my screen reader’s speech output, converting the spoken characters to Braille and writing them out on my Perkins Braille writer, because I didn’t have a working Braille display at the time. So, whenever the file contained an underscore or an at sign, I’d know they were a dot four-five-six and a dot four respectively, and likewise every other character had its corresponding Braille cell.
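To illustrate the idea (this is my own sketch, not the transcriber’s tooling), North American Braille ASCII assigns every printable character a fixed Braille cell, so a screen reader speaking punctuation names doubles as a dot-pattern readout:

```python
# A few entries from the North American Braille ASCII table,
# enough to show how characters map to dot patterns.
BRAILLE_ASCII_DOTS = {
    "a": "1", "b": "12", "c": "14", "l": "123",
    "#": "3456",   # number sign
    "@": "4",      # dot 4, as mentioned above
    "_": "456",    # dots 4-5-6, as mentioned above
}

def dots_for(text: str) -> list:
    """Return the dot pattern for each character we know about."""
    return [BRAILLE_ASCII_DOTS.get(ch, "?") for ch in text.lower()]

print(dots_for("_@"))  # → ['456', '4']
```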
Well, even before I got this Braille piece transcribed, I attempted to play the piece on the piano and almost succeeded in playing measures one through twelve. After editing the MIDI file, I tried to play measures twenty-one through twenty-nine. At one point, I asked the music director of Bridging Voices if they knew of any techniques for playing really large chord voicings when one has a small or medium hand span, and they suggested that I arpeggiate or roll the chords upwards. So, that is what I did.

Note

This is my own perception of how well I identified with the piece, but I’ve heard that some generations of people in the LGBTQIA+ community feel the song is too whiny, or that life is not all about getting better in or after high school, etc. It’s really hard to know how a song might impact you if you’ve never gone through what another person has, and sometimes we think it would prove futile to post inspirational quotes and messages because there is a lot of hypocrisy. One person even asked me why the song didn’t sound more dissonant in the beginning. To me, it was pleasantly dissonant, because it was dissonant in a beautiful way. I know many of these songs are good for what they are, and that’s probably why these kinds of songs are created in the first place… to combat hate.

Back to Sunday, 15 February 2015: I had made arrangements to attend a Time to Thrive conference being hosted in Portland; then I went over to Portland Piano Company, and later to an event where I got to meet several CHATpdx participants and alumni. CHAT stands for Curbing HIV and AIDS Transmission. It was a cohort that trained youth to become better peer educators, called sexperts.
So, I went back to Portland Piano Company later in August to look for the biggest concert grand piano I could find and play as much as I could of Testimony. I then went to the Q Centre and played a bit of it there; I think it was a week later.
One thing I forgot to mention was that once I memorised the lyrics and had all the notes corrected in the MIDI file, I imported the MIDI file into a programme called Vocaloid Editor, and then, having installed some voices, I tied the lyrics to each note. In this way, I made a pretty good rendition of this piece.

In early September, I decided to look for a stereo microphone so that I could try to reproduce the same sound as the recording made at Skywalker Sound. I didn’t know that they had used a Blüthner grand piano, or that they had wide-spaced microphones, until some time in April 2016.
I ended up buying an Olympus ME51s, and then a Blue Yeti stereo USB microphone. The latter I took with me back to Portland Piano Company, and I placed the mike in one of the spaces between the holes of the resonance chamber, since the lid of the grand piano was propped on these massive beams. I brought my Braille sheet music with me and played a few sections at a time. I played this on a Fazioli nine-foot concert grand, by the way. Then, after getting home, I edited the file in a single-track editor called Studio Recorder: I got rid of the excessive pauses, deleted the notes that weren’t supposed to be there, and cut, copied, mixed, faded, and did a couple of other things to make it sound as though I were playing it through non-stop. It ended up sounding like nothing I had in mind. Instead, it sounded as though I were in a small studio room or something.

On a side note,

I joined the Rose City Wind Symphony, formerly known as the Portland Gay Symphonic Band. I had a bit of a hard time advocating for my specific accommodations, but I was able to convince the librarian to write up a digital version of the sheet music and send it to me in different formats. Although I wasn’t able to play October, by Eric Whitacre, in the fall concert, I did get to play an arrangement of In the Bleak Midwinter at the third annual Christmas holiday concert in the Legacy Emanuel Hospital’s atrium. By the way, I’ve always been curious as to how conductors use gestures to communicate with the players or singers. Not being able to see that, and instead only hearing them count, snap their fingers, or anything of the sort, is not enough, particularly in a performance setting. It would be helpful to familiarise myself with the different forms of conducting. Besides, what if I were in a situation where the choir or a cappella group began singing immediately, with no accompaniment to let me know ahead of time, and my part came in right on the beat without warning? There’d be no way for me to know when to start, save for an audible and/or tactile cue. Also, what if there were a section that needed to be repeated a predetermined number of times? How would I know when to stop repeating and proceed to the next section? I’m sure some of these can be figured out in advance.

In November of 2015, I attempted to sing all the parts of Testimony, and while that was somewhat successful, I didn’t really like how it came out in the end. I also made sure to add a little bit of information about this song in my debut novel, The Change of Tomorrow, although I’ll have to send in a request for permission if I want to publish a line or two of lyrics.
I didn’t really do much with the song in 2016, but I did get to talk to someone about it when I went to Denver, Colorado for the #GALA2016 Chorus festival. I also dropped off a Braille hardcopy for one of my blind friends who lives there, and who hosted me for the week I was staying.
On my way home, I met up with some folx who were members of the Portland Lesbian Choir, and I asked them about possibly singing with them in the fall. However, I realised that my bowling games were going to conflict, since they always rehearsed on Wednesday evenings and the games weren’t over until sixteen hundred Pacific. In 2017, I asked again about possibly singing with them, and, when I told them that I was trans, one of the board members told me that there was a new choir being started called Transpose… great double entendre, right? Well, I went to their first open rehearsal in mid March, but I soon discovered that it lacked many of the accommodations I needed to fully participate. So, I wrote up an E-mail making some suggestions of what could be done to make the chorus more accessible, provided it wouldn’t pose an undue burden on them, as the Americans with Disabilities Act puts it. The great thing about this community choir is that they do not use gendered language such as soprano, alto, tenor, bass, etc. They use non-binary language such as voice1-4… melody or lead, harmony, descant, foundation, etc. Any of these parts can be split within a single staff or have more than one staff. Some parts are written in treble clef, but they encourage us to sing an octave below. They also modify the words of a song to make it inclusive to everyone. Additionally, they treat it like a musical playground (especially in the community choir), because they don’t want anyone to feel restricted to singing only the voice part they chose at the beginning of the term. They recognise that some people may feel comfortable singing one voice part for a few songs, but a different voice part for another. So, you are basically allowed to switch voice parts at any time, during or in between rehearsals. I was actually the one who suggested that. With their a cappella group, though, they usually assign voice parts, much like a regular choir or band.
They are pretty mindful of what voice ranges people prefer to sing, though.

Going back to September 2016, I attended a church retreat in Turner, Oregon, hosted by the Archdiocese of Portland’s Office for People with Disabilities, where I got to play all the way through measure eighty-eight of the piece without the words. Of course, it was a Catholic retreat, and since I didn’t have the courage to come right out and tell them the truth, I was hoping that, at some level, by playing this song, someone might recognise it and see me in a different light. I think it almost worked, or maybe it was a coincidence because of my hair and voice, but a guy actually said to me, ‘If you were a girl, I’d marry you.’ Oh, how I wish that were true! If only….
A few days later, I went back to the school in Vancouver, and I recorded myself playing it again, still hoping to find that same quality I heard in the original recording. I later went back to try a different piano in a different auditorium in April of 2017. I had even purchased a virtual copy of the Blüthner piano a year before, but I didn’t like how inaccessible the interface was at the time, so I was refunded for it.
Anyway, I discovered some really interesting things in December 2016 and October of 2017.
The first was a harmonic noise generator. You can adjust the brightness or darkness to make it sound as you please. I built a stack of chords that were in Testimony and made sort of a pad-like effect that could be used for meditation.
Then, in October of 2017, I was playing around with manipulating various Windows sounds, and I was able to make a folder, which I call the mix, containing multiple copies of a ding sound. If you’ve used Windows 2000, you’ll know what this sound sounds like. Using this same programme, made by the American Printing House for the Blind, I put together the accompaniment backing track for Testimony. I didn’t know then that you could simply record a sample, edit it, and make a soundfont out of it. I mean, I vaguely knew about such things, but it wasn’t until I started studying the CAVI courses this year that I finally learned how.
And finally, I played this song for an acquaintance I had met at a Transgender Day of Remembrance and Resilience vigil on Monday, 20 November 2017.

Also, if you came across a post that was password-protected, it was only meant to keep wanderers from accidentally stumbling upon it, for it contains extremely triggering content. That post can be found here. To get in, use the password pride2019; it was posted while we were celebrating fifty years of Pride.


Take a journey through Testimony

*sighs* Wow! That was a lot of information to read right there, wasn’t it? Well, I’m super glad I finally got to share this experience with you, because now it’s time for me to ask a favour of you, my prospective ensemble director(s). First, let’s see what these folx have to say about what they thought of and liked about Testimony.

Okay, what’s next?

So you’ve read through my novel… congratulations! 😁 Now it’s time for me to ask you to do something for me. If I am going to take part in your chorus, band, orchestra, ensemble, etc, do you think you would have what it takes to make sure I can fully participate? Not just for me, obviously, but for anybody else who might need it thereafter? Of course, I am willing to contribute a helping hand, whether it be monetary or not, to help you be more successful and welcoming to all.

First, my hearing loss.

  • It would be helpful to have a microphone system set up so I can hear the artistic director talking directly into my hearing devices or headphones.
  • It would be helpful to have written lyrics of whatever song we are doing, including solo lines, if applicable.
  • Have people speak the words in rhythm, in time to the beat, before adding pitches to them.
  • Sit as close to the piano or other instrument as is permissible, if applicable, with it on my right side, for that is my better ear.
  • Be in a space that does not have too much reverberation, though this is usually mitigated by using the microphone system.
  • If someone who is far from me says or asks something of significance, it would be helpful to have that information relayed to me verbatim if the microphone cannot be passed around for any reason.

For my blindness.

  • Provide all sheet music in Braille and/or an electronic format, such as .xml or .mid.
  • Alternatively, create comprehensive sung practice tracks that everyone can benefit from, especially as they’re useful for folx who may have missed one or more rehearsals.
  • Play voice parts on piano or similar instrument individually, then together. This is especially helpful for folx who decide that music-reading is not for them.
  • Send all communications and materials, including lyrics (if applicable), in an electronic format that I can interact with using my screen reader and/or Braille display.
  • Arrange access to ride-sharing services such as Uber or Lyft, or use an accessible spreadsheet so people can request and offer rides to rehearsals each week.
  • Have a check-in buddy system so that folx can check in with each other and make sure they got home okay.
  • Know the logistics of the place, i.e. time and location, well in advance, as well as a basic orientation of the space.
  • Make sure the platform where practice tracks are being hosted is accessible with screen readers, like Google Drive or Chorus Connection.

Now for both my blindness and hearing loss.

Remember that intersectionality matters.

  • It would be greatly appreciated if someone were available to be my support service provider, a person who can provide visual and environmental cues, and guide me from place to place.
  • Optionally, have someone transcribe whatever is being said in the event that I am not able to hear for any reason, like using this device.
  • If the assistive listening device stops working, or is not available, keep in mind that it will be much harder for me to learn new material, so avoid teaching anything new.
  • When you need to ask me a question or inform me of something, it would be helpful to address me and identify yourself by saying something like, This is Jay speaking… and then say whatever you want to say.
  • If your chorus is doing any choreography, or anything unusual, it would be helpful to learn that in advance.

This list of accommodation needs is subject to change at any time, so keep checking in periodically.

A note on microphones and assistive listening devices:

If more than one hard-of-hearing person is going to be using your services, you may want to check out some vendors that can provide one microphone with multiple receivers. A good option I recommend is Williams Sound. Some systems allow you to connect the transmitter to the soundboard feeding the public address system, so that anybody with a receiver can tune in and hear exactly what is being said. This works well in concert settings.
You don’t necessarily need to be in Oregon to use this resource, but this place also has a great selection of items available for short-term rent, layaway, rent-to-own, or immediate purchase. Members who are D/deaf-blind may qualify for telecommunications-related accommodations through something called the National Deaf-Blind Equipment Distribution Programme (NDBEDP), provided through iCanConnect. Although many of these things are considered only for telecommunications, some of them can be used for face-to-face communication, as well.
Also, I may connect my microphone system directly to a device used to record rehearsals. If folx have any problems with that, please let me know, and I’ll find a different thing. The recordings are for personal use only.
NOTE: Because of the recent coronavirus pandemic, rehearsals are being held virtually. Please check to ensure that people with assistive technology will be able to use the platform with little to no problems.

A note on making practice tracks:

I believe a comprehensive folder of practice tracks should contain a full mix of all the parts, sung by a person and/or played by a MIDI instrument (with and without accompaniment), followed by each of the individual parts, also with and without accompaniment if applicable, and with the following attributes.

  • Alone: only your part is heard.
  • Dominant: all parts but your own are turned down, making your part prominent in the mix.
  • Missing: all parts but your own are present, requiring you to sing your part along with them to fill in the blank.

So, for example, a file name for Give Us Hope with accompaniment might be something like Give_Us_Hope_-_voice1alone_accompaniment.wav
A file with your missing voice3 part and without accompaniment would be named something like Give_Us_Hope_-_voice3missing.wav
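For anyone who likes to script these things, the naming convention above is easy to generate programmatically. Here’s a minimal Python sketch; the song title, part names, and variant labels are just the examples from this post, not any standard:

```python
# Minimal sketch of the practice-track file-naming convention described above.
# Song title, part names, and variant labels are placeholders from the post.
def track_name(song, part, variant, with_accompaniment):
    """Build a file name like Give_Us_Hope_-_voice3missing.wav."""
    suffix = "_accompaniment" if with_accompaniment else ""
    return f"{song}_-_{part}{variant}{suffix}.wav"

song = "Give_Us_Hope"
parts = ["voice1", "voice2", "voice3", "voice4"]
variants = ["alone", "dominant", "missing"]

# Every combination, both with and without accompaniment.
all_tracks = [track_name(song, p, v, acc)
              for p in parts
              for v in variants
              for acc in (True, False)]
```

The two example names above come out of this directly: `track_name("Give_Us_Hope", "voice1", "alone", True)` and `track_name("Give_Us_Hope", "voice3", "missing", False)`.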
Also, I imagine that making audio practice tracks for musical instruments would be extremely difficult, so I would suggest instead that you make a digital version of the full score, or just the part I am going to play. I can easily mute and/or solo tracks using my sequencer.
The problem with making such practice tracks, however, according to one of the past directors, is that it would require an intensive amount of time in a recording studio, plus time for the actual mixdown, and a lot of choruses lack the budget necessary to support rehearsals at this level, even though these inclusive means for learning music efficiently most likely represent best practice in the long run.
Also, I am thinking of auditioning for a smaller vocal ensemble, and I wanted to know how steep the learning curve would be. I got some good responses from experienced singers, and one assured me that if I did exactly as I had outlined, I should be ready for any ensemble, auditioned or not. It’s because of audiation, the auditory equivalent of imagination.
Here are some tips for making a good audition experience for someone with no sight and difficulty hearing.

  • Have an open rehearsal or orientation before auditions so people can become familiar with how the group rehearses each week and discuss member expectations.
  • Allow some extra time for folx who cannot read sheet music to learn an excerpt from a piece for vocal blending purposes using the same technique for learning music outlined above.

I am sure that with continued education about disability awareness in all modalities, we’ll be able to make music groups and spaces accessible to everyone.

To find more resources for music access, you may want to check out

Thank you for reading, and I am looking forward to being an active participant of your group for years to come! 😍

My very first MRI Scan

So, I wanted to talk about my first experience getting an MRI of my brain since I promised I’d follow up to those two posts I wrote about what happened to me. I would like to encourage you to visit this web site to better understand how these work. Also, I really enjoyed watching their videos, plus they recorded other experiments as well.
Anyhow, my ear, nose and throat doctor, whom I had been seeing since I was seven, back when he performed the Rinne and Weber tests using 256-Hz and 512-Hz tuning forks, saw me for the first time in six years. When I was nine, he inserted a drainage tube in my left ear to try and clear up the fluid from my chronic otitis media. I was then referred back to him in 2010 because there was a significant decrease in my hearing in both my left and right ears. I hadn’t seen him since then, but after what I went through back in 2016, I got to see him three more times. He agreed to order an MRI, as well as prescribe me some anti-anxiety medicine and send me to physiotherapy.
So, on Friday, 23 December 2016, I was given the order for my first MRI scan, which was to take place no more than half a mile away from where I was being seen. In fact, I was able to get an appointment very quickly. I also learned that the codes most insurance companies used to identify a brain MRI were CPT 70551 and 70552. I was really excited to get my first MRI scan, not only because I’d read so much about it, but because I was taking one step closer to being able to 3D print a model of my brain, skull, and facial features.
I made arrangements to be picked up by my medical transportation provider on Tuesday morning, and we headed out to the medical plaza, which is similar to the main hospital but more for out-patient use. My driver had a hard time locating the building because they went to one that was closed. So, I called them up, and we were able to get redirected to the right one. After stepping inside, I walked over to the registration desk, where I took a seat as I filled out paperwork and handed over my insurance card. They got everything ready for me, and then, after about five minutes, I was guided upstairs to the third floor. My paperwork was handed to the receptionist up there, and the same person led me to a row of chairs. After about ten minutes, the technologist (the same one I had spoken to on the phone when confirming my appointment) summoned me to the hallway where the imaging rooms were located. After making a stop at the restroom at my request, I swallowed one Percocet tablet I had gotten for my wisdom-teeth extractions, drank lots of water, and then accompanied him to another room. There I found a locker where I could stash my belongings. I told him that I might not be able to hear him once my hearing aids were out. This is why I wish they utilised headsets like on a plane or helicopter. Later, I learned that their headsets were built like stethoscopes, meaning that they utilised air tubes. Anyhow, after everything was put away, I took my cane, since it was only aluminium, and the guy confirmed it was not going to be attracted to the magnet. So we walked about ten or so feet into the magnet room. We had to pass through two doors; the second reminded me of a soundproof booth. Still, it was a small tiled room with a table about a foot off the ground. After I got settled on the table, which felt arched to fit your back, like one of those changing tables, the technologist placed a leg pillow to make my legs more comfortable and slightly elevated.
Then he lifted the entire bed, but not before I tried feeling for the giant tube. He told me that it was located near the ceiling. So he elevated the bed to around five feet, and then he slid the bed back into the machine. I felt the sides of the tube, and it felt very smooth and cool to the touch. The entrance was like going into the bell of a French horn. The table was small enough to fit through the bottom of this opening. I imagine the coils are wrapped around the smallest part of the bell. If you stuck the insides of two French horn bells together, then I believe that is how it will feel, and what might cause the magnetic field to be generated around the bore. Oh wait! He also attached this headpiece that surrounded my head. It felt like bars were surrounding my face, but I could not feel them. Then he gave me some headphones, and a bulbous-like call button. Then he slid me into the tube and left the room and probably went next door to the control chamber. He tried talking to me through the intercom speaker, but I could not really make out what he said, but it sounded like, ‘still as a statue.’ Then I heard the low hum, knock, knock, knock, and then a whir as the machine was trying to find the best frequency to resonate with my body. That also included making low resolution images. This is called MR tuning. Once it has been tuned, it starts to work. Because I had headphones on, I could only hear the bass sounds of the machine. I could feel the side of the tube and the headpiece vibrate against my headphones. The pill I had taken before was already starting to make me feel more relaxed. After about twenty minutes, I was slid back out, and some gadolinium was slowly injected into my vein using a winged infusion set. Then the test continued for another ten minutes. After that test I was all done. He slid me out once more, removed the headpiece, headphones, and blanket, and then lowered me back to the ground. 
After I had my hearing aids put in, I was made aware of a hump, wump, hump, wump, hump, wump sound. I asked the technologist what it was, and he told me that it was the helium circulation system, which keeps the magnet’s coils cold enough to remain superconducting.
A few weeks later, I ordered a Lyft to pick up the CD with my images in DICOM (.dcm) format. Fortunately, I had gotten in touch with the biology instructor at Portland Community College, so I arranged to have those files sent over. The first successful 3D print was made in early April 2017, and it consisted of just my brain. I was hoping to submit my scan to an on-line library of other scans, similar to Thingiverse, but I haven’t found the right time to do it. We used a Tiertime Desktop Mini 3D printer.
So, there you go, my entire MRI and 3D-printing experience. And, let me finish by saying that although I had never had an MRI in my life until now, I thought I had invented the concept in my novel, where my character lies down on a bed, goes to sleep and wakes up, only to find that they are confined to a dark cocoon. And if that were not bad enough, they were six feet above the ground! So I was surprised to discover that this concept already existed. An MRI shows the structure of the brain, and functional MRI shows the oxygenation of its blood, but neither can characterise tissue at the cellular level. This is why a brain biopsy is still necessary, at least until we find some means of performing a stereotactic ultrasound.
Finally, I encourage you to look into getting a copy of your scans and have them 3D printed so you can study them. Perhaps we could have you work towards becoming a surgeon with blindness or other challenge contributing to the medical diagnostic imaging field! You could also help advance the bioengineering field by submitting models of your skin, skeleton, and other organs for use in various applications, like the cosmetic and reconstructive departments, too!

Check out these links for more information.

San Antonio Plastic Surgery

Get ready for some cuteness!


If you are assigned male at birth, click here to see how your face might look by submitting your picture.
Here’s a more in-depth explanation of how MRI and fMRI differ.
Enjoy!

My Experiences as a Totally-blind and Hard-of-hearing Person, part 2

Okay! Last time I talked about some of the social issues I’ve experienced due to my hearing challenges, and if you’ve read my about-me page, you’ll probably know that I can’t watch a lot of TV and films, which means that my perception of social dynamics might as well be static. I sometimes have a bit of trouble with the cocktail party effect, which is the ability to focus on one particular sound amid a bunch of other sounds.
Anyhow, I wanted to talk a little more about some of the auditory and technical issues I’ve dealt with, as well. First, however, I’d like to introduce you to a blind and hard-of-hearing gentleman who is pretty well-known among the blind community. Back in 2014, he wrote a blog article about how living with hearing loss has impacted his life to some extent, and what he has done to make up for it. Now hear this! The surprising thing was that I never knew he had a hearing loss in the first place, or that he also had the same condition I have.
‘Click, click. Is this on? Can you hear me? Hello? Is this working? Is that too loud? What was that again?’ These are some of the things I either heard other people say to me or have asked of them myself. Some of them refer to using something called an FM system, a radio transmitter and receiver pair that operates on a reserved FM frequency band to avoid interference. The receiver sends its signal to a neck loop, which then passes it on via magnetic induction, much like how a guitar pickup coil works, to the hearing aid(s). The part of the hearing aid that picks this up is called the telecoil. Sometimes I’ve used the FM system to spy on other people and do some eavesdropping. Although this post from Kids Health doesn’t mention it, I remember reading stories from other kids about how they’ve taken advantage of their systems to tell their classmates when the teacher was coming back within range. This type of magnetic eavesdropping is more common than people realise, so to protect sensitive conversations, people usually go into a Faraday cage.
I first started losing my hearing at the age of seven, though it was barely noticeable at first because I’d had perfect hearing from birth to about age six. Since I was born with a condition that made me prone to developing hearing loss (my brother has the same thing), I was tested by the education service district’s audiology department when I first entered kindergarten. Occasionally, I’d see my primary doctor, or someone at school would bring an audiometer into this small room used for individualised study. It was a big and bulky box with lots of buttons on it. The person running it placed noise-cancelling headphones over my ears and played a series of tones, some of which I remember being at 1,000 Hz, or 1 kHz. Other times, they would simply insert a small probe into the ear canal and play the tones through it. Whenever I got ear infections, which was usually in my left ear, I couldn’t hear that tone at 30 dB, I think, or maybe lower. They simply asked me to raise the hand corresponding to the ear they were testing if I could hear that pure sine-wave tone. I also got a tube placed in my left eardrum to treat my otitis media on Tuesday, 9 December 2003.
I was always extremely talkative and was frequently dubbed chatterbox, among other names I’d rather not write here. I guess that is why, in later years, I became more afraid of being taunted for something I should’ve been free to do. A lot of people told me I never laughed, but how can you if you don’t know what people are laughing about? They’ve also criticised me for not yelling or making any loud vocal sounds. You see, part of the problem with hearing aids is that your voice may sound extremely loud to you but very soft to others. Likewise, without hearing aids, you might speak up so you can hear yourself, which can make some people cringe because you are speaking way too loud. And, because I come from a Spanish-speaking family, I never get to hear English on a daily basis except through books, reading the internet, and going out. Yes, although you could say English isn’t my first language, it is my primary language because I use it a lot more and know a lot more of its vocabulary than I do Spanish. However, I often miss humorous comments and sarcasm because I can’t always read the tone of the situation, though this may work differently in writing. So, in elementary school, I went every couple of days to speech and language pathology, either individually or in group sessions, so they could better fine-tune my social and communication skills. After all, I’m pretty sure that’s the only reason I went in the first place.
Also, I never understood this until recently, but I remember an experience where I was supposed to give an oral presentation about a likely scenario that would occur five or ten years in the future. When it was my turn, I briefly talked about how I wanted to do something that involved using Braille Music, web design, and flying. At the end of my speech, the guy who facilitated the group thanked me, but he said something after that which I couldn’t catch, but I gathered from his tone that he wasn’t very pleased with my performance. That was in early April of 2010. Two years later, in December 2012, I was talking to someone about my fascination with my synaesthesia project, and the person at this party, who I actually met via some mutual friends, told me that they could tell how passionate I was because of the enthusiasm in my voice. In reality, it was because I had varied or modulated the inflection of my voice to sound less boring. While I may not have been conscious of it at the time, I am glad that I finally know about it so I can be sure to use that in leadership-related fields.
I got my first hearing aid for my left ear in the summer of 2001, and at that time, I remember experiencing tinnitus that sounded like the buzzing of a fly’s wings, or more like a sawtooth wave, though not as harsh. Some of the tones were around 325 Hz, but the one I remember the most was one I kept hearing in my left ear, at around 265 Hz. It lasted for about four months, and at one point, I thought it dropped about a semitone. Anyhow, I was at first ecstatic to have gotten the aid, and to be able to hear things just as well as I could with my right ear, but soon I didn’t feel comfortable wearing it, mostly because I didn’t want others to know I had a hearing aid. I only wore it at school. I still had enough hearing in my right ear to not need my hearing aid at home. If you’ve read my other posts, you may have learned that I was bullied by some blind people for having hearing loss because I was the only one with it in our little clique.
In the summer of 2004, it was decided, based on a recent hearing test, to complement my setup with another hearing aid, since I had by then developed bilateral hearing loss. It was evident from the audiograms that my right ear was better at perceiving higher-frequency sounds than the left, so whenever I talked to people, I’d turn my head so that my right ear faced them, or I’d sit on the person’s left side. I’ve had some instances of diplacusis. That’s basically when a tone sounds slightly higher or lower in one ear than what you know it to be in the other. For example, if I played a tone of F-sharp4 in my right ear and played that same tone in my left ear, I’d hear a G4 instead. I didn’t know I had perfect (absolute) pitch until long after, let’s say my sophomore year of high school, but back then, this was what I had to work with. Occasionally, I’d wake up with a condition that felt as though my right ear, usually, had ducked the incoming audio. Sometimes I’d get a small headache and hear this strange buzzing tone, like one of those old 120 Hz dial tones, but with lots of high harmonics added. Also, frequencies at the high end of the spectrum would become almost imperceptible, and voices would end up sounding tinny. There have been some studies on whether corticosteroids are effective at treating sudden sensorineural hearing loss (SSHL). I suspect it was maybe how oversensitive my tiny ear muscles were while I slept. I had a habit of sleeping with earbuds so I could listen to various soundscapes, but maybe my ears thought they were too loud, so they tried to protect themselves the best way they could. If you’ve ever experienced spontaneous ringing in your ears, this post from 2013 explains that the outer hair cells, which are used to amplify really quiet sounds, tend to vibrate on their own, sometimes causing a feeling of fullness or temporary loss of balance. Fortunately, there is a feedback loop that corrects this problem in a minute or two.
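If you’re curious how big that F-sharp4/G4 shift is in raw frequency, equal temperament makes it easy to compute: each semitone multiplies the frequency by the twelfth root of two. Here’s a quick sketch; MIDI note numbers 66 and 67 are the standard numbers for F-sharp4 and G4, but the code itself is purely illustrative:

```python
def midi_to_hz(note):
    # Equal temperament, tuned so that A4 (MIDI note 69) = 440 Hz.
    return 440.0 * 2 ** ((note - 69) / 12)

f_sharp4 = midi_to_hz(66)  # about 370.0 Hz
g4 = midi_to_hz(67)        # about 392.0 Hz
ratio = g4 / f_sharp4      # 2 ** (1/12), roughly 1.0595
```

So hearing G4 in place of F-sharp4 means one ear is reporting a frequency about six per cent higher than the other.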
About a month ago, I heard this tone increase in volume until it was nearly deafening. It was at around 975 Hz, and ten minutes later, everything got tinny again. What was more interesting was that anything that was sympathetically resonant to 975 Hz caused those hair cells to vibrate abnormally.
Another interesting phenomenon I noticed was that I could control some of my ear muscles, which I later learned included the tensor tympani, and make sort of a click, click, click sound. It was perceptible enough that if I placed a microphone inside my ear, I could capture it. When I first discovered this, probably when I was five years old, I was afraid of it and thought there was something in my ear causing it. I thought of running away from it, but no matter where I went, it’d just follow along with me.
Anyhow, I don’t know what happened, but once, probably in the summer of 2006, after I had gotten a tooth extraction, I noticed my hearing dissipating in my left ear if I moved my jaw too far back. I was genuinely afraid of this, and I never told anyone about it, so I don’t know what could’ve caused it. I suspect it might have been due to inflammation of the temporomandibular joints, but since I was on non-steroidal anti-inflammatory drugs, I didn’t feel anything.
Anyhow, socialising got harder and harder as my hearing continued to worsen over time. Crossing streets became troublesome to the point that I needed to solicit assistance all the time, and I’ve had some blind people guilt-trip me into thinking it was my fault I couldn’t hear them when they yelled at me, instead of just using alternative means of communication, like spelling words on demand using the phonetic alphabet. One thing I’ve come to realise is that the more familiar I am with a word or phrase’s cadence, rhythm, inflection, intonation, and prosody, the more easily I can recognise it, even if I don’t hear all the vowels and consonants. Of course, this wouldn’t work for words or phrases I’ve never heard before. It’s like listening to the lyrics of a song: your brain expects to know what is coming ahead. I later learned this is related to the mondegreen phenomenon. That’s why one of my former teachers of the visually impaired gave me a special nickname, so that even if I didn’t recognise his voice, I’d still know who he was.
When I got my first computer in 2007, even though I didn’t have internet then, I still had enough hearing to use the desktop speakers at high volume. I watched some TV shows by pressing my right ear against the TV’s speakers, but I later found a TV with a headphone jack, which made watching TV shows easier. It wasn’t until late 2009 to early 2010, when my hearing decreased rapidly, especially in my right ear, that I started being more dependent on my assistive technology to hear my surroundings, even in my own home.
When I first got my own internet through Comcast back in 2010, I was gradually introduced to other blind people on Skype and other platforms, and I learned about audio production and editing using single- and multi-track editors, digital audio workstations, MIDI sequencers, and VST hosts. I did not know much about some of the fancier audio equipment people used to make better-quality recordings, though. I had lots of ideas for making audio drama, but I ended up being criticised because people told me that my audio was of super-low quality. They never explained how, and I probably should’ve explained how difficult things were with my hearing loss. Alas, I never did. Instead, I continued pressing on, oblivious to some of the artifacts I was likely producing by boosting my onboard sound card’s preamplifier to the maximum so I could monitor myself, among other things. Speaking of monitors, I began relying heavily on anything that acted as one to also behave like a personal sound amplifier or hearable. This is one of the ways I’ve developed interesting and unconventional uses of audio gear. I’ve used headphones as stereo microphones. I didn’t know that in-ear mics existed, like Andrea Electronics’ binaural microphone-headphone combo, or the earbud version. Fortunately, I later got a Pocket Talker Ultra from Williams Sound. I really enjoyed using it because to other people, it didn’t look like I was using hearing aids. Rather, it looked more like I was listening to music or something. Someone told me that if I got Bose’s new augmented-reality headset or Apple AirPods, I could virtually use hearing aids all the time. Not only have I found monitoring to be of great help in amplifying my surroundings, but it has helped restore my hearing awareness, so that I was more likely to notice when I mispronounced words or used wrong intonations, as is common in people who can’t hear themselves well.
Of course, people who wear hearing devices all the time, even when they sleep, are likely to develop a lot of earwax over time. Also, I spoke with somebody who said that they absolutely hated hearing aids and avoided them like the plague. They said that even if insurance were to pay three to five thousand dollars for a piece of crap, it was still a rip-off when they could easily build a rig for about a thousand dollars with much better EQ and filters, binaural microphones, and stereo headphones.
In the fall of 2011, I was at the top of the heap in my high school career. I initially didn’t think of making anything of it other than doing my school work, but when I learned that our musical theatre department was putting on a production of The Wizard of Oz, one of my favourites of all time, I knew I had to conquer my fear of not being able to do well because of my hearing challenges. And, while I didn’t run the soundboard that time, I did help with sound design by gathering sound effects from my archive and mixing them together, and even recording and editing my own sound effect. I later got to run the board for Senior Spotlight after having demonstrated that I had exceptionally good operational skills, even if I didn’t possess the technical background, knowledge, or expertise in audio fundamentals.
Although a lot of people always recommended that I record lectures using a digital voice recorder, there was one particular reason I didn’t often follow through, a huge problem I didn’t learn about until much later. If you remember when I first talked about using an FM system, and if you read the article I linked pertaining to that subject, then you are probably aware that many venues provide assistive listening devices to help negate the effects of ambient noise by bringing the sound of the person speaking into the microphone directly into someone’s ears. This is because, more often than not, sounds with frequencies that decay rapidly are lost in the reverberation or echo of a room, making it virtually impossible to hear the subtleties of a vowel or consonant. This was always a problem I experienced in an auditorium, when I couldn’t hear what was being said through a speaker, or even when someone was just talking without one. What I’ve also noticed is that there tend to be some psychoacoustic differences between using headphones and speakers. For instance, if I recorded something and then used speakers to play it back, I might hear things I would have failed to hear had I used headphones or earbuds. So, beginning in 2014, I began to look for ways of recording lectures directly. I had one instructor stand in front of a stereo microphone hooked up to my computer so I could record what they were saying. One challenge with this kind of direct-listening approach is that since FM and other wireless transmission systems send the microphone’s input into the hearing aid in mono, we also lose any sense of directionality, so if a person were to my left, they would still sound as if they were in the centre. The only exception would be a wireless system that used stereo microphones.
So, when I ran the show for Senior Spotlight, I was able to use my FM rig to connect to the soundboard, and while I couldn’t hear the performers who were far away from the hanging mics, I was able to hear when one of them spoke directly into the microphone, and I knew when to play the sounds without the help of the stage manager to cue me by tapping on my shoulder.
Anyhow, in 2016, I was eligible to get new hearing aids thanks to my insurance plan. These new devices had two microphone capsules with variable pick-up patterns. They wanted to wait until I had completed another tympanogram, audiogram, and speech perception test, all unaided, before configuring them. It was determined that I could not hear anything above 3,100 Hz at 85 dB in the right ear, and nothing above that frequency, no matter how loud, in the left ear. I once had a few bone conduction tests, but I told them that I mostly felt the vibration of the tones rather than heard them because of the occlusion effect. Speaking of that, some of the older hearing aids used a bar that you would bite on, so that the sounds resonating from it would be transferred via bone conduction. Since the hearing aids I received were more modern, I could now use brand-specific accessories to enhance my listening experience. I now use a ComPilot streamer, which is just like a neck loop but uses a different RF protocol, which makes it unsharable with other hearing aid users. These hearing aids also had sophisticated digital signal processing for equalisation, cut and shelf filters, and even a transposition feature. Imagine I played a tone at B-flat7. To my ears, it would sound like something in between B5 and B-sharp5. That feature always threw me off because I didn’t know what was real and what was not. I had the audiologist set up a programme that would turn this feature off whenever I wanted to listen to music. Anyhow, it made things like the S sound more like an SH. It was still hard to differentiate between the ee and oo vowels, though. For example, my friend told me that they knew of someone who may have had auditory neuropathy or central auditory processing disorder, and that they couldn’t hear the /k/ and /t/ sounds in cat, leaving them only to hear the /ae/ sound with a high attack and release.
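I don’t know exactly how the transposition feature in my hearing aids is implemented, since that DSP is proprietary, but a crude sketch can illustrate the general idea of folding an inaudible high note down into an audible range. The 3,100 Hz cutoff echoes my own audiogram result mentioned above; the octave-halving approach is purely an assumption for illustration, as real aids use more sophisticated frequency lowering and compression:

```python
def lower_into_range(freq_hz, cutoff_hz):
    # Crude illustration only: halve the frequency (drop an octave)
    # until it falls below the audible cutoff. Real hearing aids use
    # far more sophisticated frequency compression/transposition.
    while freq_hz >= cutoff_hz:
        freq_hz /= 2.0
    return freq_hz

# B-flat7 in equal temperament (MIDI note 106) is roughly 3,729 Hz,
# well above a 3,100 Hz cutoff, so it gets folded down an octave.
b_flat7 = 440.0 * 2 ** ((106 - 69) / 12)
audible = lower_into_range(b_flat7, 3100.0)  # roughly 1,865 Hz
```

A single octave drop would land B-flat7 around B-flat6 territory in frequency, which is at least in the spirit of hearing a high note rendered much lower than written.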
Although this post further explains how these hearing aids work, I couldn’t find one that talked about how lack of exposure to high frequencies could lead to brain atrophy, so some manufacturers are using either a harmonic exciter or some other technique to gradually reintroduce those high frequencies. I hope that using these techniques, we can develop more hearing simulators that can simulate various hearing impairments the way some goggles simulate blurred vision to show what it feels like to be drunk. We could even prepare people to know how wearing a cochlear implant full-time might feel.
Having said that, earlier this year I heard about a former on-line academy that prepared blind people for careers in the IT and audio production fields. Sadly, they ran out of funding, but luckily, they released their audio courses. Alternatively, you can go here to learn more about IT, and here to find tutorials on audio engineering and production. When I finally started working on refining my critical listening abilities, I found that I could not hear certain vital characteristics that would’ve helped me determine if there were problems in my audio, such as aliasing, quantisation noise, artifacts from transcoding, comb filter effects, etc. At least now I know about these concepts, so I can be more aware of them.
So, that’s basically my experience with hearing loss in a nutshell. I do hope that synthetic biologists will further experiment with quail and other birds and reptiles to better understand how epidermal stem cells work, and work on implementing a technique discovered by Oregon State University. I know that a lot of blind people would act indifferent about wanting to restore their eyesight, but they would almost no doubt jump at the chance of having their hearing loss cured, assuming they lost it later in life. Of course, there is always going to be big-D Deaf and little-d deaf, the former referring to people who identify with Deaf culture and have learned to embrace it. Someone from the National Federation of the Blind said that there was no such thing as Blind Culture. So, is there such a thing as Deaf-Blind culture? You tell me.

An Analysis on the Singing and Speaking Voice

In this brief post, we will look at the two kinds of intonation: the voice you use when you speak in various situations, and the voice you use when you sing. If you compare the two, you’ll find that they are very different. Take this, for instance: http://www.youtube.com/watch?v=Nr9JQ-oF3jQ What do you notice in this example? The fundamental frequencies of the speaking voice range from 180 to 300 Hz, with very high resonance, at a chronological age of eleven years, as measured by our calendar system. In some countries, we do not celebrate birthdays or chronological ages at all.
The singing voice, however, is quite different. The fundamental frequencies not only follow the notes of the tuning system given to the singer, but the resonance is altered to give it an ethereal, hypnotic effect. Let’s try a quick test. If you say the sound “ah” in your normal voice, for males, the resonance will be deeper than that of females and children. Likewise, if a male were to apply the singing voice to the “ah”, then his resonance would be even deeper. This suggests the speaking and singing voices vary directly, because the voice of a female will also deepen in resonance when she applies her singing voice. This is not to say that it is strictly determined by facial and laryngeal structures in different sexes, as they can vary widely.
Have a look at another type of therapeutic approach, called Melodic Intonation Therapy, which is used to rehabilitate people whose left hemisphere of the brain has become damaged.

Here’s some information regarding The Music Instinct.
So, how does one achieve this goal, and why do so many not know about it? The first thing to understand is that sound is simply a wave. A pure tone sounds like a hum, with no extra tones above it. The extra tones present in most natural sounds are called harmonics, and they fall at integer multiples of the fundamental frequency: a 100 Hz fundamental has harmonics at 200 Hz, 300 Hz, 400 Hz, and so on. Noise, another concept to keep in mind, is an irregular sound vibration. Children have nearly the same vocal range before going through puberty, but some learn to talk the way they are expected to according to their gender assignment. Males tend to speak with a dark, sinister sound, while females usually speak with a brighter, more cheerful tone. Of course, if you were blindfolded among lots of children, your only way to figure out who is who would be these characteristics. Also note that there may be overlaps, which can make us misgender a person if we can’t see them or if they are using a communication device. A perfect example of this can be found here.
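To make the harmonic relationship concrete, here is a minimal sketch in Python. The `harmonics` helper is my own, not from any library; it simply multiplies the fundamental by each integer in turn.

```python
# A harmonic is an integer multiple of the fundamental frequency,
# so a 110 Hz fundamental has harmonics at 220, 330, 440 Hz, and so on.

def harmonics(fundamental_hz, count):
    """Return the first `count` harmonics, including the fundamental itself."""
    return [fundamental_hz * n for n in range(1, count + 1)]

print(harmonics(110, 5))  # [110, 220, 330, 440, 550]
```

A pure tone, in these terms, is a sound whose energy sits entirely at the first entry of that list.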

Why do Gay Men Talk Differently? If you’ve watched any episodes of Glee, you can hear Kurt (Chris Colfer) sounding like a stereotypical feminine gay guy.
The BBC has released some documentaries trying to uncover the mysteries of the castrato voice. You can check them out here: https://trakt.tv/shows/bbc-documentaries/seasons/2007/episodes/1 There are so many male sopranist interviews that it would be difficult to list them all, but hopefully these will give you a place to start.
To begin with, many people are used to hearing their own voice as it vibrates inside their head via bone conduction, which is why they don’t like how they sound in a recording. People with perfect pitch can adjust their voices in a matter of seconds, and they learn to memorise just how much work is needed to apply that effect. That’s why some say, “It came from God”. It’s up to you whether to believe that, but in reality, they simply listened to what something sounded like and imitated it until they achieved the quality they wanted. They kept doing this subconsciously until the memory was wired into their brain. Deaf people do not have this auditory awareness. As an experiment, crank up your music to full volume until you can no longer hear your own voice through your earbuds. Then try saying a few sentences whilst recording, and play it back. Did you mispronounce anything? This lack of feedback may be one reason why some adults sing with a childlike singsong, though it is more likely due to their mental capacity.
You must have some basic knowledge of anatomy and physiology, as well as physics, to understand how the body’s resonant chambers work, and what you can do to manipulate them with your cerebrum and cerebellum. One unique characteristic of opera singers is that their vibrato measures approximately six to seven beats per second, though on some old phonograph records people had a faster vibrato of eight to nine beats per second. Vibrato is produced by totally relaxed muscles slowly contracting and relaxing again, which causes the fundamental frequency of the voice to go up and down in pitch in rapid succession. The same can be done for the overtones (harmonics), but that requires moving the tongue up and down. Now, the way we make our resonance chamber bigger is by opening up the throat, as if we were yawning. This forces the vocal tract to expand, the glottis to be half-open so the voice takes on a breathy quality, and the tone emitted by the vocal folds to have lower harmonics.
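As a rough sketch of what a six-beat-per-second vibrato does to pitch, it can be modelled as a slow sine wave swinging the fundamental up and down. The half-semitone depth below is my own illustrative assumption, not a measured value from any singer.

```python
import math

def vibrato_pitch(t, centre_hz=440.0, rate_hz=6.0, depth_semitones=0.5):
    """Instantaneous pitch at time t (seconds) under sinusoidal vibrato."""
    offset = depth_semitones * math.sin(2 * math.pi * rate_hz * t)
    # Convert the semitone offset into a frequency ratio (12 semitones = 1 octave).
    return centre_hz * 2 ** (offset / 12)

print(vibrato_pitch(0))               # 440.0 at the centre of the swing
print(round(vibrato_pitch(1 / 24), 1))  # 452.9, half a semitone sharp at the peak
```

Six of these up-and-down cycles per second is what the ear hears as the “beats” of an operatic vibrato.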
There are several terms that describe specific techniques used in singing. Legato means that you smoothly link the vowels and consonants to each other and give them a nice, easy flow. In some operas, the singer sings staccato to convey tension, and again, vibrato is a very important aspect of singing. In some pieces there are spoken or half-spoken passages, called Sprechstimme and Sprechgesang. Related, though concerning tempo rather than articulation, are rubato and ad lib, where you perform freely.
The reason many people do not understand why our voices sound so different when we sing, such as what you witnessed in this example, is that they cannot make this distinction clearly. Even singers have trouble with the concept, because all they learn is how to do it, without going in depth; they do it without knowing there are ways of describing it. Anyone can achieve this goal, even youngsters, as long as they have the motivation to learn the skill and pursue a career in the arts. What you need to decide is whether to self-teach and get no immediate feedback, or to work with someone experienced so your technique can be finely tuned.

Memorable Experiences, part 1

Welcome back, everyone. It is September, which means back-to-school for those of you who are still enrolled in K-12 or attending a post-secondary educational institution. It is also going to be a month with a Friday the 13th in it, so if you are superstitious, brace yourselves. I will quickly go over some interesting things I found this week. I won’t take long, though, because I want to concentrate on a different topic.
In the LGBTQ2SIA community, people in Native American cultures want to decolonise our patriarchal society so that everyone is treated equally and with respect. A conference held at Portland State University, in Oregon, depicted some of these characteristics. Also, do men and women, and those in between, find alpha or beta people attractive? Attraction is a very complicated thing because so much occurs within our heads, and much of it relies on vision. Remember, attraction is not about yourself (that’s identity), but about others. Gay couples tend to get along better and have fewer conflicts than straight couples. I used to visit a web site, now defunct, called I Love Bacteria. The reason words are becoming more pejorative is that our founders did not realise how much our language and culture would change after our country was born as an independent nation. Sure, they knew technology was going to change; that’s why the courts are flexible. But freedom of speech and freedom of religion were based on another time and place. So, as more minority and disability groups formed, they started to set rules for THEMSELVES only and not for others, so as to screen others out, call them freaks, you name it. Remember that, just like the rest of society, these groups are diverse, so we might consider finding a balance so we can better live in harmony. REMEMBER, WE ARE NOT ROBOTS!
And with that, let’s get started. I would like to begin by describing one of my most memorable events, in which I experienced euphoria and déjà vu. But first, I am going to describe what I believe happens in the most common types of synaesthesia found in blind people: light and sound, touch and sound, smell and sound, sound and taste, musical keys and events, voices and shapes and textures, and many more unique pairings. If you were born deaf-blind, you may not be able to perceive some of these, especially those that involve sound.
As far as we know, synaesthesia is caused by genes, an unusual chemical reaction, or both. If it came through a gene, the ability to transceive something would be there, but you would need to learn how to use it, and have language, to be able to express it. It is just like rolling your tongue: you have to learn how to do it before you know for sure that you have the gene. Some people, like me, listened to music while still in the womb, but it has all run together. So, whenever we look at the sky, we automatically hear hums, drones, jingles, etc, that fall into a cadence resolving into an emotional key. Later, when such a person develops absolute pitch and learns to label each note, they can recall those memories and start putting labels to the things they remember hearing. I confirmed this through a YouTube video I recently found, in which the drones and the notes were exactly the same as the ones I hear whenever I am out in the rain, looking up at the sky during daylight hours.

Here are some more relaxing sounds for your listening pleasure. This one is one of my favourites because I tried to simulate what a typical day might sound like.


You will find four items here that may be very useful to you. I was also very surprised to learn that the Moonlight Sonata was so named because of the moonlight-shining effect in the first movement, and I could relate to it, even though I don’t remember seeing moonlight. In another instance, I was listening to a radio show, and the host asked us to imagine darkness as we listened to Chopin’s Nocturne in E-flat major. Again, I could relate to this very well. So, I am including a link to an article so you can learn about this in greater detail.
This happened as early as I can remember, and when I came across it, I experienced that familiar feeling of having heard the drones before, and it reminded me of blue, now that I have used the scientific mapping to find that out. So I think this is a tendency to pass through some subconscious memory, which may also be called collective memory. A person does not necessarily need to know all the music theory terms to make artistic music; some people just do it without realising it has meaning. It is just like a writer who learns to write fluently by observing others and mimicking other people’s writing styles, without ever learning the technical background.
We teach our children to label colours by pointing out that each frequency of light has a name: red is red, blue is blue, and so on. But who says this is so? Why can red not be green, and blue be purple? The same goes for music. Who says A is 440 Hz? Why can it not be 320 Hz or 415.3 Hz? The thing is that animals, including humans, primarily use vision as their main source of information. What if we taught future children to establish a multisensory experience, like colour-music notation? Currently, children are taught to name colours, to name animal and vocal sounds, and some of them get dragged into the art of music. But what if we taught them the rainbow piano association, so they could listen to all the different types of colours? It would not only be beneficial for blind people; it would benefit everyone else as well. They could learn how to walk in the dark, feel for objects, read and write in Braille, and much more. An inverse I noticed is that children who are afraid of certain noises also tend to be afraid of seeing distorted shapes. The same applies to darkness and silence. Some children are afraid of silence, because silence is a noise, just as darkness is, though it may not be immediately obvious.
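To illustrate the rainbow piano association, here is a tiny sketch pairing the seven natural notes with the seven rainbow colours. The particular pairing is my own illustrative choice; any consistent mapping would serve the same teaching purpose.

```python
# One possible note-to-colour mapping for a "rainbow piano":
# seven natural notes, seven rainbow colours, in order.
NOTE_COLOURS = {
    "C": "red",
    "D": "orange",
    "E": "yellow",
    "F": "green",
    "G": "blue",
    "A": "indigo",
    "B": "violet",
}

def colour_of(note):
    """Return the colour paired with a natural note name (case-insensitive)."""
    return NOTE_COLOURS[note.upper()]

print(colour_of("A"))  # indigo
```

A child taught with such a mapping would, in effect, be hearing colours and seeing notes from the start, whichever senses are available to them.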
One thing I noticed that is quite interesting is how hearing works in different situations. I have two differing levels of hearing loss (one in each ear), so I use amplification devices that increase the sound intensity of things around me. What is more fascinating is how humans talk above loud noise. I discovered that I could sometimes hear people without the need for amplification: to them, the background noise is so loud that they must raise their voices over it, but to me the background noise is no problem, which makes their raised voices easy to hear. Another thing some blind people tend to do is precisely record how tall or short something is, like the exact height of a light switch. I remember waiting for someone once. I was sitting on a bench, and the person left to get me something. After a few minutes, I felt the ground give slightly and a small disturbance in the air, so I immediately put up my hands at the position where I estimated the person’s hands were going to be. They asked me how I knew they were coming. The truth is, I used my tactile senses to detect changes in air flow and vibration. Add the fact that I was expecting their return, and I had set my senses on high alert without realising it.
I often imagine myself in windy weather conditions with a light drizzle in the air, during the dusk and dawn hours when some or no light is present. I would like my future generations to experience what I went through. Maybe they will, if wiring my brain this way has somehow influenced what gets passed down to my descendants. I put together some pieces of art called “Meditative Brain Wave Entrainment Session” and “Experience Various Weather Conditions”, which show my observations on the light and sound phenomenon. You see, sound and light have a lot in common in terms of frequencies and wavelengths. This concept is so abstract that it can only be held in one’s mind. But if you take time away from the latest fashions and really get yourself into it, you may be surprised; this process is referred to as meditation, because it will sync your brain waves. Get all the unimportant thoughts out of your mind, and imagine nature. Imagining in this way makes it a helpful sleep tool, in my opinion.
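One simple way to make the frequency analogy between light and sound concrete, purely as an illustration of my own, is to transpose a light frequency down by octaves (repeated halving) until it lands in the audible range. The red-light figure of 430 THz below is an approximate, assumed value.

```python
def light_to_audible(light_hz, audible_max=20000.0):
    """Halve a frequency, octave by octave, until it falls into the audible band."""
    f = light_hz
    while f > audible_max:
        f /= 2.0
    return f

red_light_hz = 430e12  # roughly the frequency of red light, in Hz
print(light_to_audible(red_light_hz))  # lands somewhere in the audible range
```

The note you get depends entirely on the starting colour, but the exercise shows that light and sound really do sit on one continuous frequency axis, dozens of octaves apart.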
My observation started to come into focus back in 2002. The sudden thought occurred to me that if I heard music, I could imagine some basic forms of light, such as darkness, semi-darkness, and brightness. These three I could produce by humming the music I had in mind. Then, as I delved into this idea further throughout my life, I realised it had a stupefying, hypnotic effect that would leave one transfixed for a while. My readers may agree or disagree with this idea, but either way, this is why I enjoy working with music. I have considered looking into therapy, because it does soothe the soul and body.
I ran into several conflicts with this observation. I did not want to reveal the idea to anybody, because people would think I was being strange or nuts. On the contrary, it is a good thing I have this idea. It turned out that I was not alone: one day, I was reading a Twitter post that shared a similar perspective, and when I asked the poster about it, they replied that they were putting together the northern lights phenomenon, an observation based on light-to-sound conversion. You have to picture the northern lights, and then try to picture how sound would fit the sight of that light in your head. Unfortunately, for people who have never seen light, it will be nearly impossible to describe what light looks like. How do we know when we see light if we have never seen it? The same goes for sound: how do we know what sound actually feels like? Sound is the only one of these forms that requires a medium, and it is felt by the body’s senses. Smell and taste have their own chemoreceptors, and vision has photoreceptors; none of these can be felt or heard. Smell does require aroma particles that float in space, and you have to breathe the air in order to smell it. The last two, sound and feel, interact together.
There is just so much that I believe. Life, to me, has purpose. If I waste my time on tomfoolery, then I will not have accomplished what I wanted to do on this planet. Some of you may have a different view on life, but to me, life goes on. Our universe will never stop existing, although its matter will reconstruct itself. Nature found ways to shape cells into various forms and substances to make creatures. How this happened we know not, but it is certain that after it happened, we began to evolve many different characteristics. It is believed that humans will some day have the option to become immortal.
Enough lecture, and enough of my philosophy. I would like to propose a new experiment in which a brain’s visual processing nerves are surgically rerouted into the auditory region, to see if the signals from the visual cortex end up being interpreted by the auditory cortex, and vice versa.