It's hard for me to believe it's already April. I'm still in the "2021 is brand new" mindset, even though almost 100 days have passed!
First, a warm welcome to all our new guests! I appreciate you and your time and hope you find value in what Doreen, my wife and care partner, and I share. Please keep in touch and don't hesitate to ask questions!
This month is Parkinson's Awareness Month. If you're not familiar with it, the Michael J. Fox Foundation has a great page explaining how you can help end Parkinson's. I decided I would try to post a story every day to our Instagram account explaining a new symptom or scenario I deal with.
Did you know that people are being diagnosed with Parkinson's Disease at an alarming rate? The paper "The Emerging Evidence of the Parkinson Pandemic" was published a few years ago. The opening statement is quite telling.
"Neurological disorders are now the leading source of disability globally."
I had no idea it was so widespread, or that research is building a solid case that synthetic chemicals found in many household products could be a trigger.
In this post:
Let's get to it!
Parkinson's Updates
As humans, we adapt incredibly well. I tend to tune out my symptoms as background noise. I'm thankful my wife has taken an active role in supporting me and pays attention to things I sometimes ignore. I like to share my symptoms for several reasons. It gives me a record to look back on. Years from now it will be incredibly helpful to know how far I've progressed. It gives insights to those of you new to Parkinson's Disease who aren't sure what to expect (although it is important to understand that symptoms and progression are very different for everyone). My goal is not to complain but to educate. It's a little bit of therapy too.
Speaking of therapy, our podcast is taking off! Our first episode is out and we plan to record and publish episode two in a week or two (it's been a little hectic with a new puppy in the house, more on that later!). We found the process is much more emotional than we expected. I honestly don't know what to expect when we start recording, but I imagine there are going to be some "deeper" episodes coming up.
By the way, if you are a person with Parkinson's Disease or a caregiver and want to share your story, we'd love to have you on the podcast!
Just contact me through the site and we'll sort out the details. Our podcast is now available through the major providers. Just click one of the links below to listen and subscribe so you are notified when we post new episodes!
The next episode will be about how we adapted to life after the diagnosis. Right now, here's what I can share:
My left hand tremors most of the day. It doesn't really interfere with my work (I can still type and use the mouse) but sometimes the nonstop tremors can get annoying. I am not embarrassed by it in public, but I do know it's noticeable and wonder what people think. The weird symptom that does impact work is the involuntary contraction of my left index finger. I'm left-handed so that's the finger that I right-click with. Sometimes my screen seems frozen and the mouse doesn't respond, until I realize it's because my finger is pressing and holding the right button down. Frustrating.
Ironically, one of my therapies is Virtual Reality. In the game I love to play (Population: One) I do well with the sniper weapons, despite being left-handed. When I'm relaxed, my hand tremors non-stop, but when I've got someone in the sights of my scope, I'm able to remain steady and pull the trigger without jerking. I'd like to think it's reinforcing some of those weak connections.
My face has been a little better. A few weeks ago, I noticed a strong "tugging." It pulled my lips into a frown, sometimes a snarl. I realized on video conferences I looked mad, but it's just the lack of dopamine expressing itself in my face. My left foot still has episodes of dystonia (like a "charley horse" across my entire foot) but nothing that isn't manageable. For the first time I have noticed a little dizziness sometimes when I stand up. It's not bad and I'm still able to hit around 100 burpees in my workouts, which are the ultimate practice for changing elevation.
A few times I've had an episode where I suddenly feel nauseated and dizzy at the same time. My skin tingles and I feel like I'm out of breath. The episodes pass quickly but are disturbing enough to take notice. For what it's worth, I typically measure my oxygen levels, heart rate, and blood pressure when I feel off and they are always in healthy ranges.
I continue to consider exercise as a major factor in controlling my symptoms. I'm just over the halfway mark in my current program that has me doing everything from leg raises and pull-ups to one-armed push-ups and squats on the balance ball. I'm very pleased with my progress, although my attempts to keep a steady morning routine continue to fail miserably. I end up working out in the evening most days. It's a flywheel, and I'll just have to keep pushing. On the other hand, I received this wonderful bit of news:
I'm not stopping there. I started a new fundraiser this year. You can access it via the "donate" button at the top of this site.
The Fabulous Misses Pepper
Losing our German Shepherd, Indi, just before her eighth birthday last year was hard. Very hard. It was one of the many things piled on top of the rest of 2020 that made it a challenging year. We knew we would move on and adopt a new member for our family, but this time we decided to approach it differently. Due to my Parkinson's Disease and Doreen's fused spine combined with hypoglycemia, we decided to have our new family member trained as a service dog. We met the parents, who have a history of producing well-trained dogs, and Doreen did a ton of research on training techniques.
Then, in March, it happened. We took a road trip across the state to visit the litter of pups and met our baby, "Pepper", for the first time. She thanked Doreen with a kiss.
We decided she is definitely "the one." We picked her up one week later. Pepper is an energetic, smart puppy. She's already learning commands like "sit," "down," "place," and "bed." She goes to the door when she needs to go outside and sleeps through the night. We're happy parents! Oh, and she knows how to relax!
We're so excited to have her with us! I'll share more about her in future blog posts. She has swimming lessons, service dog training, and a whole lot of social visits coming up, so I'm sure we'll have plenty to talk about. I have a lot of pictures to post as well!
Now, what else is keeping me busy?
My New Heavenly Hobby
I've been asked a lot of questions about my new hobby, so I thought I'd share some answers. If you don't know what I'm referring to, it's the new telescope I picked up. The telescope has a built-in CCD sensor and motorized base so it can take deep space pictures. It does this by capturing 10-second exposures and then "stacking" them to produce higher-signal pictures. Here is an image where I manipulated the color palette, but the structure is 100% from a backyard capture. This is M42, the Orion Nebula, one of the few nebulae visible to the naked eye (but not if you live as close to Seattle as I do).
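If you're curious what "stacking" actually means, here's a toy sketch of the idea in Python with made-up numbers; the telescope does all of this internally, so nothing here reflects its actual code.

```python
import numpy as np

# Simulate 360 ten-second exposures of the same patch of sky:
# a faint "nebula" buried in random sensor noise on every frame.
rng = np.random.default_rng(42)
truth = np.zeros((100, 100))
truth[40:60, 40:60] = 5.0
frames = [truth + rng.normal(0, 20, truth.shape) for _ in range(360)]

single = frames[0]
stacked = np.mean(frames, axis=0)   # "stacking" is just averaging the frames

# Random noise shrinks roughly with the square root of the frame count,
# so the faint square that was invisible in one frame stands out in the stack.
print(f"background noise, single frame: {single[:20, :20].std():5.1f}")
print(f"background noise, 360 stacked:  {stacked[:20, :20].std():5.1f}")
```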
This is another of my favorites: the moon.
The moon is a fast, easy capture compared to deep space objects. I'm often asked, "How much do you filter your images?" Technically, all deep space images must be filtered. This is because the light signal is so low that, on a scale from 1 to 100, typical captures land in the 1-20 range. The intensity must be stretched to make it visible. I decided it would be easiest to explain using an example. So, this is my journey of the Orion Nebula.
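To make that concrete, here's a tiny sketch (again with invented numbers) of the kind of rescale a basic "stretch" performs; real tools are much fancier, but the idea is the same.

```python
import numpy as np

def simple_stretch(image, low_pct=1, high_pct=99.5):
    """Map the dim range the data actually occupies onto the full 0-100 scale."""
    low, high = np.percentile(image, [low_pct, high_pct])
    return np.clip((image - low) / (high - low) * 100.0, 0, 100)

# A capture whose signal only spans roughly 1-20 on a 0-100 scale...
faint = np.random.default_rng(0).uniform(1, 20, size=(4, 4))
print(simple_stretch(faint))   # ...now fills the visible 0-100 range
```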
If you're interested in space photos, I post all my final images to this Google album.
First, I imaged the nebula for about an hour. That's six 10-second exposures per minute for 60 minutes, or 6 * 60 = 360 exposures. The telescope processed them in real time and presented me with this:
That was enough to make my jaw drop, but I knew I could do more intensive processing to produce an even better result. What I start with is several hundred raw images that look like this:
Tough to see, right? That's unfiltered. Let's stretch it so we can see it better.
Notice the graininess and random pixels in the image? That's from sensor defects. Some of it is light pollution. It's a usable image, but out of the hundreds I capture, there are always dozens of bad ones. Bad pictures throw off the average signal we try to achieve by stacking and can corrupt the final image. So, I scroll through the images one by one and throw out the ones that look like this:
Once I've culled the bad frames, I take the rest through a workflow:
Image calibration processes the images and uses statistics and algorithms to remove dust, specks, and random noise
Cosmetic correction can use "darks" (photographs taken with the lens covered to block out all light) and/or algorithms to detect sensor defects (so-called "hot pixels") and remove them; there's a rough sketch of this idea right after the list
Debayering takes the black and white image that you start with and uses the matrix the pixels are laid out in to build a color image - the grayscale was just the way the image is rendered before combining the patterns
Subframe selection uses mathematical formulas to compute how out of focus or distorted the stars are and how much signal vs. noise is in the image. The images are sorted based on a weighted quality score so a reference frame can be selected
Registration (also known as "star alignment") uses the matrix of stars it detects in the reference frame to align all subsequent frames. In the following video, made from several frames displayed in rapid succession, you can see that the frames don't line up; registration computes a matrix transformation to ensure the stars align perfectly across every image
Image integration then takes these images and averages out the signal. Over a lot of exposures, the noise gets averaged away and the signal and detail are strengthened. It's not the last stop, however. This is what the stacked image looks like:
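A quick aside, as promised in the cosmetic-correction step: here's a rough sketch of the hot-pixel idea, using scipy as a stand-in for whatever my processing software actually does under the hood.

```python
import numpy as np
from scipy.ndimage import median_filter

def remove_hot_pixels(frame, threshold=5.0):
    """Replace pixels that deviate wildly from their local neighborhood,
    the telltale signature of a stuck or 'hot' sensor element."""
    local = median_filter(frame, size=3)        # what each pixel "should" be
    residual = frame - local
    hot = np.abs(residual) > threshold * residual.std()
    cleaned = frame.copy()
    cleaned[hot] = local[hot]                   # swap defects for the local estimate
    return cleaned
```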
Oh, yeah, let's stretch it so we can see it better:
Wow! That looks a lot better, but what about the graininess and that blue background? This is when image processing workflows can branch in different directions. What I did was:
A fast rotation to orient the picture in the direction it is typically presented
A dynamic crop to trim the noise out of the edges
I then split the channels into separate red, green, and blue. Typically, one of those will present the most structure/detail of the nebula, so I use that to create the template for the subsequent processes
A process called dynamic background extraction uses samples placed on the image to determine what the background should be, then transforms the pixel data to remove gradients from light pollution. This step gets rid of the blue sheen (I sketch the general idea a little further down)
After applying the same template to the red, green, and blue channels, I figure out which has the highest intensity (strongest signal) and use linear fit to normalize the other two channels
I save the strongest channel (in this case it was red) for some magic a little later
I combine the channels back to a single image
Background neutralization takes a sample of the background and uses it to "flatten" the background
Now I extract the luminance of the image and use it to create a luminance mask. The higher intensity pixels match with stars and nebulosity, so those are "protected" more than the background, which has the lowest intensity
Using the luminance mask, I apply noise reduction algorithms. One application to the RGB channels handles most of the background graininess, while another to the chrominance channels smooths out color patterns that may have appeared as artifacts
I then apply deconvolution, a technique famously used to correct for the flawed mirror on the Hubble Space Telescope. It turns out it works equally well to correct for distortion in regular telescopes. This step sharpens the stars. I use an inverted version of the luminance mask to keep from putting detail back into the background
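And here's the rough sketch I promised for the dynamic background extraction step: fit a smooth surface through a handful of hand-placed background samples, then subtract it. The real tool is far more sophisticated, and the sample format below is made up, but the gradient-removal idea is the same.

```python
import numpy as np

def remove_gradient(image, samples):
    """samples: list of (row, col, value) points placed on 'empty' background.
    Fit a 2nd-order surface through them and subtract it from the image."""
    rows, cols, values = (np.array(v, dtype=float) for v in zip(*samples))
    design = np.column_stack([np.ones_like(rows), rows, cols,
                              rows * cols, rows**2, cols**2])
    coeffs, *_ = np.linalg.lstsq(design, values, rcond=None)  # needs >= 6 samples
    r, c = np.indices(image.shape)
    background = (coeffs[0] + coeffs[1] * r + coeffs[2] * c +
                  coeffs[3] * r * c + coeffs[4] * r**2 + coeffs[5] * c**2)
    return image - background + background.mean()  # keep the overall brightness level
```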
Up to this stage, the image processing has been linear. This is when I use a histogram transformation to stretch it into a non-linear image. Effectively, think of taking a signal in the 1-to-5 range and remapping it so it lands around 50 to 100 (on a scale of 100). At this point I could export it, and it would appear the same as the stretched images above.
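For the curious, the standard non-linear stretch in astro software is built around a "midtones transfer function." This is my understanding of the usual formula, applied to made-up normalized pixel values:

```python
import numpy as np

def midtones_transfer(x, m=0.05):
    """Pixels at the midtones value m map to 0.5, while 0 stays 0 and 1 stays 1,
    so faint detail is lifted dramatically without clipping the bright stars."""
    x = np.asarray(x, dtype=float)
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

faint = np.array([0.0, 0.01, 0.05, 0.2, 1.0])   # normalized 0-1 pixel values
print(midtones_transfer(faint))                  # faint values pulled toward mid-gray
```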
Now I apply another denoising algorithm to eliminate any artifacts of stretching.
For the next step I must deviate and share a few words about the à trous wavelet transform algorithm. This algorithm is genius. It essentially looks at an image as having different wavelet levels of detail. So, if you consider 4 layers, you look at structures at layer 1 (about a pixel across), layer 2 (about 2 pixels), layer 3 (about 4 pixels), and so on, with each layer roughly doubling in scale. Essentially it is a way to separate noise (lower layers), stars (lower to middle) and nebula (scattered across the higher levels) when applying manipulations.
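If you want to see the gist in code, here's a toy version. I'm using repeated Gaussian blurs as a stand-in for the real à trous kernel, but the layering idea is the same.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_layers(image, n_layers=4):
    """Split an image into detail layers at roughly 1, 2, 4, 8... pixel scales,
    plus a residual holding the large, diffuse structure (like nebulosity)."""
    layers, current = [], image.astype(float)
    for i in range(n_layers):
        smoothed = gaussian_filter(current, sigma=2 ** i)
        layers.append(current - smoothed)   # detail that lives at this scale
        current = smoothed
    return layers, current

# Reconstruction is just the sum: image = sum(layers) + residual, which is why
# you can denoise layer 1 or sharpen layers 2-3 independently and recombine.
```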
Remember that red channel signal I saved earlier? I grab that and run StarNet++ which uses a neural network to figure out where the stars are... and then remove them. That's right! Gone. This is what the red channel detail of the Orion Nebula looks like without stars.
As a bonus, I get the stars that were taken out as an image to use as a "star mask":
More on that in a moment.
With the starless mask activated, I essentially block out everything but the nebula itself. So, I apply a High Dynamic Range Wavelet Transform algorithm to increase the contrast and detail of the nebulosity.
This is followed by a localized histogram transform to bring out the color.
A multiscale linear transform helps sharpen the image.
I apply a curves transformation (again with the mask in place) to draw out color details and increase saturation.
Using the star mask, I apply a morphological transformation to shrink the stars and give more spotlight to the nebula.
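Roughly, that last step does something like this; a toy sketch where the erosion size and blend amount are invented, not what my tool actually uses:

```python
import numpy as np
from scipy.ndimage import grey_erosion

def shrink_stars(image, star_mask, amount=0.6):
    """Erode (shrink) bright point sources, then blend the eroded copy back in
    only where the star mask is bright, leaving the nebula untouched."""
    eroded = grey_erosion(image, size=3)
    weight = np.clip(star_mask, 0.0, 1.0) * amount
    return image * (1.0 - weight) + eroded * weight
```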
That's it. At this point I've gone from this:
To this:
Whew! Thanks for coming along to learn about my new passion. Between the puppy and processing images, I've had my hands full!
Regards,