If you could really use a wish right now, this year's best meteor shower has officially begun. Until August 24th you can catch a shooting star from the Perseid Meteor Shower. But you know you don't need to look to the sky to find a meteor, right?
Our atmosphere incinerates most meteors as they fall from the sky, but some survive entry and crash-land, becoming meteorites.
Meteorites are a rare find. Before the 1970's, only a few dozen meteorites are recorded every year, mostly from the USA, Australia, India, France, and Russia. Most countries are lucky to find even one in a year.
They're hard to find partly because they pretty much look like rocks.
But in 1974, hundreds of meteorites are found in Antarctica – more on a single continent than had ever been found before.
This Antarctic explosion signals a tectonic shift that rocks the meteorite world.
Antarctica is the last region on Earth to be discovered. It has no known indigenous population, and humans aren't known to arrive there until the 1820's.
By the mid-1900's – less than 100 years ago – small teams of researchers are making regular, annual expeditions there. Antarctica has six months of daylight and six months of darkness, so most researchers visit for the summer and leave before the dark winter.
In the summer of 1969, a 10-man party of the 10th Japanese Antarctic Research Expedition (JARE-10) stumbles upon Antarctica's first ever meteorite discovery, near the Yamato Mountains. They find eight others nearby. While it's not unusual for a meteorite to break into pieces upon landfall, the majority of these are of different chemical classifications, suggesting they come from different meteors.
They're found on bare ice, making them easy to spot.
The meteorites are found unusually close to each other, all inside a 50 square kilometer area.
The JARE-10 team isn't there to search for meteorites, but meteorite finds are rare and researchers value them. Their chemical compositions tell us about the history of our solar system, so they let us explore space without looking to the sky.
The JARE-10 team takes the meteorites back with them. After they publish their findings, researchers take note. What are the odds that you can just happen upon a cache of meteorites without even looking? They predict more meteorites are waiting in Antarctica. The stone rush begins.
Soon, research teams from Japan, the US, and Europe are finding more meteorites in Antarctica than anywhere else.
Then something unimaginable happens.
In 1979, an eight-man team of the 20th Japanese Antarctic Research Expedition (JARE-20) hits pay dirt near the Yamato Mountains. They find over 3,000 meteorites that summer.
The total is the equivalent of all meteorites found in the previous 100 years combined.
Nearly all of them are resting on bare ice waiting to be found.
Their successful score contains over 90 different classifications. The top ten classifications are:
They only had a few months to find what they could before packing up for the winter, but the rest is in the books.
Today, more meteorites have been found in Antarctica than anywhere else.
Antarctica has been largely untouched for millennia, it's got relatively low foot-traffic, and the penguins are only interested in pebbles, so your chances of picking up a meteorite there are pretty solid. You could catch the Southern Lights while you're at it, too. Just please don't litter.
But if you're like me and turn into a zombie in the cold, maybe there's something to the tundras in general. In the mid to late 1990's, another stone rush begins in northwest Africa after these NWA meteorites found in the desert begin to appear in the markets. You can see the impact of these meteorites on the late 1990's portion of the graph.
If you want to see a meteor but don't plan on taking a hike to the nearest tundra, you can try to catch the Perseids while they're still here. Or maybe give that funky looking rock on the ground a second look.
This 2020 Presidential Election Muddy Map shows county-level vote margins and vote density in a 2-dimensional scale.
To view in full screen - click the hamburger menu icon, then click "view in full screen".
Interactive - hover over counties to learn more about their vote totals and margins. You can double-click to zoom in.
If this is your first time viewing a Muddy Map, click here to read more about Muddy Maps, the problem they address, and the maths and colour theory behind the two-dimensional key.
While the 2016 Muddy Map uses vote totals on the vertical scale, this 2020 version has been upgraded to use vote density (in Votes / km^2). This more accurately accounts for disparities in land area, since county areas can vary by three orders of magnitude.
Current upper fence: 50.66 Votes / km^2
This map is regularly updated with the latest data that we receive. Votes are still being counted. Some counties in this map may have incomplete data, and relative vote totals may still be in flux. In addition, we don't have Alaska vote data processed yet, so Alaska is blank, for now.
Although we haven’t been able to quickly find optimal solutions to NP problems like the Traveling Salesman Problem, "good-enough" solutions to NP problems can be quickly found ^{[1]}.
For the visual learners, here’s an animated collection of some well-known heuristics and algorithms in action. Researchers often use these methods as sub-routines for their own algorithms and heuristics. This is not an exhaustive list.
For ease of visual comparison we use Dantzig49 as the common TSP problem, in Euclidean space. Dantzig49 has 49 cities — one city in each contiguous US state, plus Washington DC.
Greedy algorithm is a general term for algorithms that make the lowest-cost choice at each iteration, even if those choices combine into a sub-optimal result.
In this example, all possible edges are sorted by distance, shortest to longest. Then the shortest edge that will neither give a vertex more than 2 edges, nor create a cycle smaller than the total number of cities, is added. This is repeated until we have a cycle containing all of the cities.
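The steps above can be sketched in Python. This is a minimal illustrative sketch, not the implementation behind the animations; it assumes a precomputed symmetric distance matrix and uses a small union-find structure to detect premature cycles.

```python
def greedy_edge_tour(dist):
    """Greedy edge TSP heuristic: repeatedly add the shortest edge
    that neither gives a city a third edge nor closes a short cycle.

    dist: symmetric 2D matrix of pairwise distances (n >= 3 cities).
    Returns the tour as an ordered list of city indices.
    """
    n = len(dist)
    edges = sorted((dist[i][j], i, j)
                   for i in range(n) for j in range(i + 1, n))
    degree = [0] * n          # edges touching each city (max 2)
    parent = list(range(n))   # union-find to detect premature cycles

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    chosen = []
    for _, i, j in edges:
        if degree[i] == 2 or degree[j] == 2 or find(i) == find(j):
            continue          # third edge, or a cycle that's too small
        parent[find(i)] = find(j)
        degree[i] += 1
        degree[j] += 1
        chosen.append((i, j))

    # The loop leaves a Hamiltonian path; close it into a cycle.
    ends = [v for v in range(n) if degree[v] < 2]
    chosen.append((ends[0], ends[1]))

    # Walk the chosen edges to produce an ordered tour.
    adj = {k: [] for k in range(n)}
    for i, j in chosen:
        adj[i].append(j)
        adj[j].append(i)
    tour, prev, cur = [0], None, 0
    while len(tour) < n:
        nxt = adj[cur][0] if adj[cur][0] != prev else adj[cur][1]
        tour.append(nxt)
        prev, cur = cur, nxt
    return tour
```

On a unit square with diagonals of 1.5, the heuristic happens to recover the optimal perimeter tour, but on larger instances like Dantzig49 it generally will not.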
Although none of the heuristics here can guarantee an optimal solution, greedy algorithms are known to be especially sub-optimal for the TSP.
The nearest neighbor heuristic is another greedy algorithm, or what some may call naive. It starts at one city and connects with the closest unvisited city. It repeats until every city has been visited. It then returns to the starting city.
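In Python, the procedure can be sketched as follows (a minimal illustration, assuming a precomputed distance matrix rather than city coordinates):

```python
def nearest_neighbor_tour(dist, start=0):
    """Nearest neighbor TSP heuristic.

    From the start city, repeatedly travel to the closest unvisited
    city; the return leg to the start closes the tour implicitly.
    """
    n = len(dist)
    unvisited = set(range(n)) - {start}
    tour = [start]
    while unvisited:
        cur = tour[-1]
        # Closest unvisited city to the current city.
        nxt = min(unvisited, key=lambda c: dist[cur][c])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour
```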
Karl Menger, who first defined the TSP, noted that nearest neighbor is a sub-optimal method:
"The rule that one first should go from the starting point to the closest point, then to the point closest to this, etc., in general does not yield the shortest route."
The time complexity of the nearest neighbor algorithm is O(n^2). The number of computations required will not grow faster than n^2.
Insertion algorithms add new points between existing points on a tour as it grows.
One implementation of Nearest Insertion begins with two cities. It then repeatedly finds the city not already in the tour that is closest to any city in the tour, and places it between whichever two cities would cause the resulting tour to be the shortest possible. It stops when no more insertions remain.
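That implementation can be sketched like this (illustrative only; `dist` is assumed to be a precomputed symmetric distance matrix, and the subtour starts from the first two cities for simplicity):

```python
def nearest_insertion_tour(dist):
    """Nearest Insertion heuristic: grow a subtour by repeatedly
    taking the unvisited city closest to any tour city and splicing
    it in at the position that lengthens the tour the least."""
    n = len(dist)
    tour = [0, 1]                     # start with an arbitrary pair
    remaining = set(range(2, n))
    while remaining:
        # City nearest to any city already in the tour.
        city = min(remaining,
                   key=lambda c: min(dist[c][t] for t in tour))
        # Cheapest position to splice it in.
        best_pos, best_inc = None, float("inf")
        for k in range(len(tour)):
            a, b = tour[k], tour[(k + 1) % len(tour)]
            inc = dist[a][city] + dist[city][b] - dist[a][b]
            if inc < best_inc:
                best_pos, best_inc = k + 1, inc
        tour.insert(best_pos, city)
        remaining.remove(city)
    return tour
```

Cheapest, Random, and Farthest Insertion follow the same skeleton; only the rule for picking the next city (and, for Cheapest, the combined city-and-position choice) changes.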
The nearest insertion algorithm is O(n^2).
Like Nearest Insertion, Cheapest Insertion also begins with two cities. It then finds the city not already in the tour that when placed between two connected cities in the subtour will result in the shortest possible tour. It inserts the city between the two connected cities, and repeats until there are no more insertions left.
The cheapest insertion algorithm is O(n^2 log2(n)).
Random Insertion also begins with two cities. It then randomly selects a city not already in the tour and inserts it between two cities in the tour. Rinse, wash, repeat.
Time complexity: O(n^2)
Unlike the other insertions, Farthest Insertion begins with a city and connects it with the city that is furthest from it.
It then repeatedly finds the city not already in the tour that is furthest from any city in the tour, and places it between whichever two cities would cause the resulting tour to be the shortest possible.
Time complexity: O(n^2)
Christofides algorithm is a heuristic with a 3/2 approximation guarantee. In the worst case the tour is no longer than 3/2 the length of the optimum tour.
Due to its speed and 3/2 approximation guarantee, Christofides algorithm is often used to construct an upper bound, as an initial tour which will be further optimized using tour improvement heuristics, or as an upper bound to help limit the search space for branch and cut techniques used in search of the optimal route.
For it to work, it requires distances between cities to be symmetric and obey the triangle inequality, which is what you'll find in a typical x,y coordinate plane (metric space). Published in 1976, it continues to hold the record for the best approximation ratio for metric space.
The algorithm is intricate ^{[2]}. Its time complexity is O(n^4).
A tour is called k-Optimal if we cannot improve it by switching k edges.
Each k-Opt iteration takes O(n^k) time.
2-Opt is a local search tour improvement algorithm proposed by Croes in 1958 ^{[3]}. It originates from the idea that tours with edges that cross over aren’t optimal. 2-opt will consider every possible 2-edge swap, swapping 2 edges when it results in an improved tour.
2-opt takes O(n^2) time per iteration.
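A first-improvement 2-opt pass can be sketched as follows (a simplified illustration assuming a precomputed distance matrix; real implementations use neighbor lists and don't re-scan from scratch):

```python
def two_opt(tour, dist):
    """Keep reversing tour segments while any 2-edge swap shortens
    the tour (first-improvement strategy). Mutates and returns tour."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue  # these two edges share a city
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                # Replace edges (a,b),(c,d) with (a,c),(b,d)?
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour
```

Reversing the segment between the two removed edges is exactly what "uncrosses" a pair of crossing edges in the Euclidean case.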
3-opt is a generalization of 2-opt, where 3 edges are swapped at a time. When 3 edges are removed, there are 7 different ways of reconnecting them, so they're all considered.
The time complexity of 3-opt is O(n^3) for every 3-opt iteration.
Lin-Kernighan is an optimized k-Opt tour-improvement heuristic. It takes a tour and tries to improve it.
By allowing some of the intermediate tours to be more costly than the initial tour, Lin-Kernighan can go well beyond the point where a simple 2-Opt would terminate ^{[4]}.
Implementations of the Lin-Kernighan heuristic, such as Keld Helsgaun's LKH, may use "walk" sequences of 2-Opt, 3-Opt, 4-Opt, and 5-Opt moves, "kicks" to escape local minima, sensitivity analysis to direct and restrict the search, as well as other methods.
LKH has two versions: the original, and LKH-2, released later. Although it's a heuristic and not an exact algorithm, it frequently produces optimal solutions. It has converged upon the optimum route of every tour with a known optimum length. At one point or another it has also set records for every problem with unknown optimums, such as the World TSP, which has 1,900,000 locations.
Chained Lin-Kernighan is a tour improvement method built on top of the Lin-Kernighan heuristic.
It takes an existing tour produced by the Lin-Kernighan heuristic, modifies it by "kicking" it, and then applies Lin-Kernighan heuristic to it again. If the new tour is shorter, it keeps it, kicks it, and applies Lin-Kernighan heuristic again. If the original tour is shorter, it kicks the old tour again and applies Lin-Kernighan heuristic.
Depending on its implementation it may stop when there are no more improvements, or when it has reached a time limit, or a tour of a maximum length, etc.
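The kick-and-reoptimize loop can be sketched as follows. This is a simplified illustration: a full Lin-Kernighan step is far beyond a few lines, so the improvement routine is passed in as a function (any tour improver, such as a 2-opt pass, works for the sketch), and the kick shown is the classic "double bridge" perturbation commonly used in chained local search.

```python
import random

def tour_length(tour, dist):
    """Total length of a closed tour."""
    return sum(dist[tour[k]][tour[(k + 1) % len(tour)]]
               for k in range(len(tour)))

def double_bridge_kick(tour):
    """4-opt 'double bridge': cut the tour into four segments and
    reconnect them in a different order."""
    i, j, k = sorted(random.sample(range(1, len(tour)), 3))
    return tour[:i] + tour[j:k] + tour[i:j] + tour[k:]

def chained_search(tour, dist, improve, iterations=50):
    """Improve, kick, re-improve; keep whichever tour is shorter."""
    best = improve(list(tour), dist)
    best_len = tour_length(best, dist)
    for _ in range(iterations):
        candidate = improve(double_bridge_kick(best), dist)
        cand_len = tour_length(candidate, dist)
        if cand_len < best_len:
            best, best_len = candidate, cand_len
    return best
```

The double bridge is popular as a kick because a subsequent 2-opt or 3-opt pass cannot trivially undo it, which helps the search escape local minima.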
Being a heuristic, it doesn't solve the TSP to optimality. However it is a subroutine used as part of the exact solution procedure for the state of the art Concorde TSP solver ^{[5]}.
This is not an exhaustive list, but I hope the selected algorithms applied on Dantzig49 can give a good impression of how some well-known TSP algorithms look in action.
The Traveling Salesman Problem is one of the most studied problems in computational complexity. Given a set of cities along with the cost of travel between them, the TSP asks you to find the shortest round trip that visits each city and returns to your starting city.
Nobody has been able to come up with a way of solving it in polynomial time. We’re not sure if it's even possible.
Harvard's Hassler Whitney first coined the name "Travelling Salesman Problem" during a lecture at Princeton in 1934. It became known in the United States as the 48-states problem, referring to the challenge of visiting each of the 48 state capitals in the shortest possible tour. Alaska and Hawaii weren’t US states back then.
Dantzig49 was the first non-trivial TSP problem ever solved. It’s a variant of Whitney’s 48 states problem, using one city for each state, plus Washington DC. The road distances used in Dantzig49 were those available on a Rand McNally map, so not all cities were state capitals.
There are (n-1)!/2 possible tours to any symmetric TSP problem, so Dantzig49 has 6,206,957,796,268,036,335,431,144,523,686,687,519,260,743,177,338,880,000,000,000 possible tours (~6.2 novemdecillion tours). If you ask a computer to check all of those tours to find the shortest one, it will still be searching long after everyone who is alive today is gone.
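Python's arbitrary-precision integers make the count easy to verify: fix one city as the start, permute the remaining n-1, and halve because each tour can be traversed in two directions.

```python
import math

def tsp_tour_count(n):
    """Distinct tours of a symmetric n-city TSP: (n-1)!/2."""
    return math.factorial(n - 1) // 2

print(tsp_tour_count(49))  # the ~6.2 novemdecillion figure above
```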
The large (factorial) brute-force search space of the TSP doesn’t inherently mean there can’t be efficient ways to solve the TSP. There are other problems that have even larger search spaces, yet we have algorithms that can efficiently solve them. The Minimum Spanning Tree problem is one example. But without an efficient algorithm for the TSP, this factorial search space contributes to the TSP’s difficulty.
It was solved in 1954 by Dantzig, Fulkerson, and Johnson. They introduced novel techniques, enabling them to solve Dantzig49 without inspecting all possible tours. They did it by hand, using a pin-board and rope. Their work paved the way for new heuristics.
As of today we can't quickly find the optimal solution to a TSP problem. Our best-known exact solving techniques can take a long time for even a modest number of cities. Specifically, we can't solve them in polynomial time, and we can't quickly verify that a solution is optimal even when we have one. This makes the TSP an NP-Hard problem. It has a variant that can be written as a yes/no question. That 'decision' variant is NP-Complete. NP-Complete problems aren't known to be solvable in polynomial time either, but their solutions can be verified in polynomial time.
Not all problems take too long to solve, though. We group the problems that we can quickly solve (in polynomial time) as P.
It could be possible that a quick method for solving an NP-Complete problem exists, and we just haven't found it yet, making P=NP. Or, it could be impossible for a quick method to exist. Knowing which one of these two possibilities is true is a million dollar question ^{[6][7]}.
The TSP's solvability has implications beyond just computational efficiency. One of the unsolved questions in Economics is whether markets are efficient. There is proof that markets are efficient if and only if P = NP ^{[8]}. This has implications on the type of economic policies governments enact. Free market vs regulated market, small government vs big government, etc.
2020 update: Click here to view the 2020 Muddy Map
Graphs can inform, and informed discussions can be more civil than uninformed ones. But graphs can also mislead, so we need to understand what a graph is saying when we're using it. In 2015 I gave a TEDx talk on making clearer election maps. The original recording was lost, then recovered and uploaded to Youtube this summer. As election season ramps up, I'd like to continue the discussion by talking about this often-misleading map.
Different graphs are designed for different purposes. The graph above is a county-level winner-takes-all map. I'll call it a County Winner map for short. Scientists use it to quickly see which way the counties went in an election. While there is arguably no better map for seeing who won in which county, this map can be misleading when used for other purposes. We need to be aware of two of its characteristics:
Half of the US population lives in these counties:
America can be described as a collection of densely populated metros buffered by less densely populated communities. Here’s what the population mountains look like:
When we take the County Winner map and resize each county’s land-area to be proportionate to its population, here’s how the US looks.
One downside to the cartogram, however, is that the shape and location of many territories are distorted beyond recognition. This is perhaps one reason why the cartogram isn't very mainstream.
The County Winner map, however, doesn’t convey this relative population information. It's not designed to. But one might think it does.
In the general election, there are 50 concurrent presidential races, one for each state. In some of these states, the margin of victory turns out to be very small. In New Hampshire, 743,117 votes were cast for president in the 2016 general election. Hillary Clinton won New Hampshire by 2,701 votes. That's about as many people as you can seat in a set of high school football bleachers.
There was a 75.03% turnout in New Hampshire, so plenty of eligible voters didn't vote. If just 2,702 more eligible voters in New Hampshire had exercised their right to vote and voted for Trump, then New Hampshire would have gone to Trump instead. With such hair-thin margins, New Hampshire is neither Blue nor Red in 2016. It's 50:50, leaning red or blue depending on traffic and dinner plans. It would be misleading to color all of New Hampshire either blue or red to represent the statewide popularity of a presidential candidate.
This characteristic also holds true at the county level. The losing candidate in a county can receive a significant number of votes. In many counties the winner won by less than a 25% margin.
In general, smaller counties were won by larger percent margins. Larger counties were won by smaller percent margins. So in the counties with the most votes cast, the runner up got a lot of votes, too.
Here’s what the County Winner map looks like when we account for vote margins by blending each red and blue vote together within each county. Purple represents 50:50:
The neutralizing map is designed to express vote margins more clearly. It uses a grey intermediary, adjusting for the way humans perceive purple. Here’s the 2016 neutralizing map:
Rarely are all of the votes in a county cast for one candidate. The County Winner map, however, doesn’t convey this win margin information.
The contiguous United States aren't very contiguous. The County Winner map's inability to express vote population and margin of victory can be misleading. Cartograms account for population, but they distort the shape of the US, which can add confusion. The neutralizing map accounts for vote margin, but it doesn't account for population.
Can we construct a single map that shows both vote margin and vote population without distorting the shape of the US?
Here's one way:
Here's a less-distracting, static version of the graph:
The map leverages Color Theory to express vote margins and vote populations in a 2-dimensional scale.
Here's the key blown up:
Horizontal scale represents vote margins. Vertical scale represents vote totals.
The lighter counties had fewer votes. The darker counties had more votes.
The closer a county gets to gray, the closer the votes were 50:50.
A highly saturated red county was won by Trump with high percent vote margins. A highly saturated blue county was won by Hillary with high percent vote margins.
All colors can be described as a combination of Hue (°), Saturation (%), and Lightness (%).
We can leverage these individual components of the HSL color model to faithfully express 2-dimensional data such as vote totals vs margin on a 2d color scale.
The fill-color of each county is constructed using the MuddyColor algorithm, which is expressed as the following mathematical formula:
This produces the following two dimensional scale, which also doubles as a map key, with the upper fence labeled for the 2016 data set:
For the borders of each county, I use the same formula, but just give them a constant lightness (L) of 50%.
This results in a 1-dimensional scale which we use for the county borders. It's the same color scale used in the Neutralizing Map, which is designed to more accurately express vote margins. Left = higher DEM % margin. Right = higher GOP % margin.
Giving each county an opaque border color allows even the lightest-filled counties to be recognized, including their vote margins.
You don't need to look at the whole nation to see where one county's vote total lies on the overall lightness scale. The county border color and the county fill color differ only by lightness, so the greater the contrast between a county's border color and its fill color, the lower its vote total.
A few counties have enough votes to skew the vote totals scale. Here's how the graph looks when the vote totals scale maxes out at 2,514,055, the maximum number of votes in a county:
With enough heat, water will evaporate, turning into water vapor, a gas. Water vapor rises and condenses as clouds.
As water moves up plants, excess water reaches the plants' surface via transpiration. With enough heat, this water becomes water vapor. Water vapor rises and condenses as clouds.
Water droplets in the air come together, or condense, to form clouds. Winds move clouds through the atmosphere.
After enough condensation, the clouds will release water droplets. This is known as precipitation. The water droplets can be in liquid form (rain or drizzle) or in a solid form (snow, ice).
In response to heat, snow melts. As snow melts, it becomes runoff. The runoff can travel a long distance, becoming streams and rivers.
Ponds and lakes are the result of water accumulation. Water can accumulate directly from precipitation, or via nearby water runoff. Water can also accumulate underground, in what is known as an aquifer, an underground lake.
Streams or rivers can connect bodies of water above ground. Bodies of water can also be connected from underground, via groundwater.
Mosses alternate between diploid and haploid generations in their life cycle, which sets them apart from flowering plants. Where does fertilization take place in the moss life cycle? Are spores haploid or diploid? Scroll to the Key Takeaways to get the answers, or start from the top to learn about the moss life cycle.
How does a moss reproduce?
Mosses have two forms of reproduction: sexual reproduction and asexual (vegetative) reproduction. This is true for all bryophytes.
Practically all flowering plants are diploid, but for mosses, this is different. Mosses alternate between diploid generations (as sporophytes) and haploid generations (as gametophytes).
Generally speaking, sexual reproduction is the process where genes from two different parents mix to produce offspring with a genetic makeup similar to, but different from, each parent.
The sexual reproduction of the moss (bryophyte) life cycle alternates between diploid sporophyte and haploid gametophyte phases. In a nutshell, haploid gametophytes produce haploid gametes, which can be sperm or eggs. When egg and sperm merge, they form a diploid zygote which grows into a diploid sporophyte. Sporophytes produce haploid spores, containing genetic information from both haploid gametophyte parents. A spore gives rise to a haploid gametophyte, completing the cycle.
A single gametophyte moss plant can produce both sperm and eggs. This can occur on different parts of the same plant, one part producing sperm and another producing eggs. However, a plant usually produces either all sperm-producing organs or all egg-producing organs at any one time. This way it doesn't breed with itself, promoting genetic variation. The female structure for producing eggs is known as the archegonium, and the male structure for producing sperm is known as the antheridium. Antheridia are tiny, typically stalked, club-shaped or spherical structures. Archegonia are bottle-like containers, their walls just one cell thick, and they are typically formed in groups. Archegonia and antheridia are usually bundled in flower-like leaf rosettes called perichaetia. Elongated club-shaped cell filaments called paraphyses are sometimes found on the gametophyte, storing water and protecting the archegonia and antheridia from drying up.
When the antheridia are ripe and the flower gets wet from rain, numerous antherozoids (spermatozoids / sperm cells), are released. Antherozoids are only able to move underwater. They swim using two threadlike tails. Some successfully end up on female gametophyte moss plants and are chemically attracted to the archegonium. Each archegonium holds one egg, in a swollen section called the venter. The sperm enter the archegonium through the narrow channel in its neck. Fertilization occurs in the archegonium to form a diploid zygote. Once one archegonium in a group has been fertilized, in many cases the others lose the ability to be fertilized. This is caused by an inhibitory hormone released from the fertilized archegonium.
The formation of the zygote begins the second phase of the moss life cycle, where the zygote develops into a diploid sporophyte (spore-plant).
Upside-down photo of moss after rain. Sporophytes got their hands in the air like they just don’t care. https://t.co/jDpYaHweHn pic.twitter.com/tFeU2ahzgO
— Megan Lynch (@may_gun) March 18, 2020
After fertilization, the archegonium on the gametophyte plant becomes modified into a protective sheath around the young sporophyte. The sporophyte begins to grow by mitosis (diploid cell division) out of the top of the archegonium. It elongates and after a few cell divisions begins differentiation. At this point the sporophyte is practically a parasite on the gametophyte plant, although it may produce some food of its own via photosynthesis in the early stages of growth.
The embryonic sporophyte consists of three structures: a foot, seta, and a capsule. The foot, on the lower portion, anchors the sporophyte to the gametophyte via penetration and helps to transfer water and nutrients from the gametophyte. The seta is a long erect supporting stalk. At the end of the sporophyte is a pod-like capsule where spores are produced. The seta only occurs in species where the mature capsule is stalked.
Transfer cells develop at the sporophyte-gametophyte boundary in the majority of bryophytes, but not all. These specialized cells allow efficient transfer of nutrients from the gametophyte to the sporophyte. They may form on the gametophyte, sporophyte, or both. The gametophyte-sporophyte junction is often convoluted and maze-like. This increases the surface area, allowing for more transfer cells than a simple boundary, thus increasing the rate at which nutrients can flow to the sporophyte.
A capsule may contain four to over a million spores, depending on the species. It also may be stalked or stalkless depending on the species. In most mosses, the mouth of the capsule is covered by a lid-like operculum, which falls off when the spores are mature. A membranous hood, the calyptra, which is also discarded at maturity, further protects the operculum.
In wet conditions the spores can't travel very far. A tiny tooth-like structure around the mouth of the capsule controls the release of the spores. These structures, called the peristome, consist of one or two rows of teeth. They prevent the release of the spores during wet conditions by remaining closed. In dry conditions they open, releasing the spores.
Each spore contains a mix of genes from the two parents. If the spore falls onto a damp area of ground, it may germinate into a branching, threadlike filamentous protonema. Cusps bud from the protonema then grow into leafy male or female gametophytes, completing the life cycle.
In addition to sexual reproduction, mosses can reproduce asexually (vegetatively). The method they use to accomplish this depends on the situation they're in.
When the stem of a large clump of moss dies back, the stem-less clump becomes individual plants.
When bits of the stem or even a single leaf from the moss plant are broken off, these bits can then regenerate to form a new plant.
Every 4 years, we’re given a map like this:
Usually, a data visualizer is less than thrilled with entire states being designated as either red or blue. The analyst, wanting a clearer picture of the political landscape, breaks the original map down to a county level:
With this graph, the analyst can see that the states are a patchwork of red and blue counties. But there’s an unanswered question. Did every single person inside a given county cast the same vote? Or were the voters within the counties divided? This graph doesn’t show that information. It just shows which choice won, even if it was by one vote. So we decide to blend each county’s red and blue vote ratios together, instead of letting the winner take all. This way, if we get a pure red or pure blue county, we know that everybody cast the same vote. And if we get a purple county, the voters were divided. This is the result:
A nice blend revealing smooth purple transitions between red and blue regions. We can now see that there was hardly a landslide victory in any county. But there’s still a problem. When looking at some of the purple counties, it can be hard to decipher whether the purple is leaning more towards red or blue, and it gets harder the closer the counties get to 50/50 red/blue. In fact, an R>B county can be confused with an R<B county depending on adjacent colors. In an attempt to see the margins within the counties, we end up with something that’s close, but no cigar. What’s wrong?
The problem is that in the purple map we’re no longer discerning between two distinct hues (a specific red hue and specific blue hue), but an indiscrete number of purple hues found in-between red and blue.
Magenta (which I will call purple from now on) is what our brain registers when we observe an equal mixture of red and blue light with the absence of green. We created the purple map because pure red and pure blue counties didn’t show us the vote margins. But the purple hues are nonsense. We end up hunting for the red-ness or blue-ness inside each purple hue to make sense of it. We don’t need purple at all.
So is there a way for us to blend together red and blue hues without creating any new nonsensical hues in the process?
Of course! We add green, and we get a Neutralizing election map:
Interesting. How does this work? Basic color theory.
We can describe all colors as combinations of Red, Green, and Blue light. Earlier, we established that Magenta is the mixture of Red and Blue light, with the key absence of Green. But we don’t have any reservations about the absence of green for our purposes. We just want to create a gradient between red and blue. So instead of falling victim to Magenta, we can use Green to neutralize the purple-ranged colors.
For a given purple-ranged hue, if we take the weaker intensity value of that purple’s Red and Blue components — let’s call that value J — and then set the purple’s green component to that J value, the purple disappears. What’s left behind is either red or blue- whichever one was stronger in the original purple.
So if we take RGB[175/0/150] and Neutralize it to RGB[175/150/150], then the red hue sticks out, with some desaturation.
If we take RGB[150/0/175] and Neutralize it to RGB[150/150/175], then the blue hue sticks out, with some desaturation.
And if we Neutralize magenta, which is a 50/50 split between red and blue, the result will be a completely desaturated color since the red, green, and blue values would all be exactly the same. For example: RGB[128/0/128] Neutralizes to RGB[128/128/128].
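The neutralizing step is tiny in code. Here's a sketch in Python reproducing the examples above (the green input is assumed to be near zero, as it is for purple-range colors):

```python
def neutralize(r, g, b):
    """Set the green channel to the weaker of the red and blue
    components (J), so only the stronger of the two original hues
    survives, with some desaturation."""
    j = min(r, b)
    return (r, j, b)

print(neutralize(175, 0, 150))  # (175, 150, 150) – red sticks out
print(neutralize(150, 0, 175))  # (150, 150, 175) – blue sticks out
print(neutralize(128, 0, 128))  # (128, 128, 128) – pure grey
```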
This makes sense, because when mixing together high-intensity Red, Green and Blue light, we get pure white light, which has no color saturation.
This is awesome! Thanks to this neutralizing algorithm we can pit two hues against each other and only one of the original 2 hues will be the resulting victor. No new hues! The only hues on the Neutralizing map are hue#0/hue#360 (Red) and hue#240 (Blue). So now the degree of red/blue saturation reveals the margins.
If a county even has the slightest hint of color saturation, then that color is that county’s marginal vote. There can be no dispute. This allows us to observe minute margins, such as a 49/51 red/blue split. *But there are limits to how nuanced the margins can be, since RGB’s 8-bit color-space can’t accept too many significant figures.
This Neutralizing Map can bring to our attention things we didn’t notice before — such as the blue streak in the Bible Belt isolated from the reds by a grey buffer, which doesn’t really call out to us in the Purple map.
Here’s how relative voter-turnout appears on the Neutralizing graph:
The principle behind the neutralizing map algorithm can work for all sets of colors. For example, we can neutralize a graph that has 4 fields represented by red, green, orange, and yellow. Grey would still be the intermediary between all the hues. It would just take a little extra work since we’d need to abstract away RGB.
Engineers use visualizations like this all the time, and the results can be useful. Take a look at Teddy_the_Bear’s cartogram. He diverges by adjusting gamma & luminosity, resulting in darker reds and brighter blues. Here’s a Massachusetts map that does something similar by avoiding purple. Although, unlike the Neutralizing map, the blues and reds in the Massachusetts map end up changing in hue when transitioning towards their assigned neutral. There’s also professor Robert Vanderbei’s margin-of-victory/tilt maps. Obvious differences are that the Neutralizing map results in a grey neutral instead of a brighter, white neutral.