The Causation of the Colors of the Aurora Borealis

The colors of the aurora borealis are layered, with violet at the bottom, green in the middle, and red at the top, and this layering reflects the composition of the atmosphere at different altitudes. Just as different chemicals create differently colored flames (propane, for example, always burns blue), different atmospheric gases emit characteristic colors when they are struck and excited by charged particles streaming in from the Sun. Nitrogen glows violet and blue in a band at the bottom, while oxygen glows green at lower altitudes and red at higher ones, creating curtains of light that can extend roughly 400 kilometers from top to bottom.
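
The link between each gas and its color can be made concrete by converting the characteristic emission wavelengths into photon energies with E = hc/λ. The short sketch below is illustrative only; the wavelengths used (557.7 nm and 630.0 nm for atomic oxygen, roughly 427.8 nm for ionized nitrogen) are the commonly cited laboratory values and are assumptions added here, not figures from the passage above.

# Photon energy of the characteristic auroral emission lines, E = h*c / wavelength.
# The wavelengths below are commonly cited values and are assumptions here,
# not figures taken from the passage above.
PLANCK = 6.626e-34      # Planck constant, J*s
LIGHT_SPEED = 2.998e8   # speed of light, m/s
EV = 1.602e-19          # joules per electronvolt

emission_lines_nm = {
    "oxygen (green)": 557.7,
    "oxygen (red)": 630.0,
    "nitrogen (blue/violet)": 427.8,
}

for gas, wavelength_nm in emission_lines_nm.items():
    energy_ev = PLANCK * LIGHT_SPEED / (wavelength_nm * 1e-9) / EV
    print(f"{gas}: {wavelength_nm} nm -> {energy_ev:.2f} eV per photon")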

The Carrington Event of 1859

On September 1st and 2nd of 1859 the Carrington Event occurred. English astronomer Richard Carrington was observing a projected image of the Sun when he noticed a bright flash upon it. Carrington did not know what this anomaly was, but he soon learned firsthand as, approximately 20 hours later, chaos ensued. Telegraph networks spanning some 200,000 kilometers of wire failed across Europe and North America, lines sparked and arced, operators received electric shocks, some telegraph systems continued to send messages even after their batteries had been disconnected, compass needles went haywire, and the aurora borealis could be viewed all across the world, in places which would never normally bear witness to such an event (e.g. Cuba and India). Such an incident will inevitably occur again, which is why the U.S. government built the Daniel K. Inouye Solar Telescope atop Haleakalā in Hawaiʻi, at an altitude of roughly 10,000 feet, and why the Parker Solar Probe was sent toward the Sun in 2018. Haleakalā means “house of the sun” in ʻŌlelo Hawaiʻi, the Hawaiian language. The rationale of the U.S. government is that the Carrington Event caused great disruption when electricity was in its infancy, and that a repeat would affect the modern world far more severely, given how many modern devices are connected to the internet and/or electrical in some capacity. The flare Carrington witnessed was the first solar flare ever recorded, and the geomagnetic storm which followed it remains the most intense in recorded history.
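
The approximately 20 hour travel time is itself remarkable, because it implies an enormous average speed for the ejected material. The short calculation below is a back-of-envelope sketch, assuming the standard mean Earth-Sun distance of about 149.6 million kilometers and taking the 20 hour figure from the account above.

# Back-of-envelope: average speed implied by a solar ejection crossing
# the Earth-Sun distance in roughly 20 hours. Both inputs are stated above;
# the distance is the standard mean value of ~1 AU.
AU_KM = 149.6e6           # mean Earth-Sun distance in kilometers
travel_time_hours = 20.0  # approximate travel time quoted for the Carrington Event

speed_km_per_h = AU_KM / travel_time_hours
speed_km_per_s = speed_km_per_h / 3600.0
print(f"Implied average speed: {speed_km_per_s:,.0f} km/s")  # roughly 2,100 km/s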

The Artificial Black Hole Created by U.S. Scientists

In Menlo Park, California, in May of 2017, scientists working at the SLAC National Accelerator Laboratory (formerly the Stanford Linear Accelerator Center) fired the world’s most powerful X-ray laser at individual molecules. The aim of this experiment was to observe what occurs when an atom with many electrons is struck by high-energy X-ray radiation, and in particular whether those electrons could be knocked out of orbit, leaving behind an atom with very few electrons remaining. The system behaved highly unusually, and very differently from what scientists expected, as the stripped atom acted for roughly 1/1,000,000,000,000,000th (1 quadrillionth) of a second like a miniature black hole, sucking in the remaining electrons from the rest of the molecule before the molecule exploded in a dramatic paroxysm.

American Theoretical Physicist Robert Oppenheimer’s Reaction to the First Successful Nuclear Weapon Detonation

After the Trinity nuclear test, which occurred on July 16, 1945 and was the first nuclear detonation in human history, the scientific director of the Manhattan Project, J. Robert Oppenheimer, was asked about his own reaction and the reaction of others on that fateful day. Oppenheimer responded, “We knew the world would not be the same. A few people laughed, a few people cried, most people were silent. I remembered the line from the Hindu scripture, the Bhagavad Gita. Vishnu is trying to persuade the prince that he should do his duty and, to impress him, takes on his multi-armed form and says, ‘Now I am become Death, the destroyer of worlds.’ I suppose we all thought that, one way or another.” (The phrasing “Now I am become Death”, while archaic-sounding in English, is a direct rendering of the original Sanskrit.)

The “Soulmate” Quality of Quantum Non-Locality and Photons

When a photon, a massless particle which is effectively a quantum packet of light, is split in two by an interaction with certain materials, its energy is divided between the 2 photons which emerge. These new photons are intrinsically tied together, or entangled: measuring one instantly determines what will be found when the other is measured, no matter how far apart they have traveled as the universe expands. This seems as if it should not be possible, since no signal can travel faster than light, which moves at 299,792,458 meters per second; in fact, the correlation cannot be used to send any usable information, so no physical law is broken. Regardless of how far apart these particles travel, their profound bond persists regardless of circumstance. This can be thought of in terms of the ancient Greek philosopher Plato’s account of love, in which a single being is split into 2 beings who become soulmates and search for each other eternally. For as long as the soulmates, or photons, exist, each is intrinsically tied to the other as the one and only partner with which it shares this connection. This long distance relationship has been ongoing since the beginning of the universe, a fidelity which lasts for as long as the entanglement survives. The simple act of measurement is all that is required to sever this tremendous commitment between particles: if the polarization or spin of one particle is measured, a seemingly innocuous act by a third party observer, the entanglement is broken, never to return to its previous state. How the particles maintain this connection, and what, if anything, passes between them at the moment one of the pair is measured, remains one of the deepest open questions in quantum physics.
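
The correlations described above can be made concrete with a small numerical sketch. The Python snippet below is illustrative only, assuming a pair of polarization-entangled photons in the Bell state (|HH> + |VV>)/sqrt(2); it computes the probability that both photons pass polarizers set at chosen angles, and shows that the probability seen at either photon alone never depends on the distant polarizer setting, which is why the correlation cannot carry a message.

import numpy as np

def polarizer(theta):
    """Projector onto a linear-polarization state at angle theta (radians)."""
    ket = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(ket, ket)

# Entangled two-photon state (|HH> + |VV>) / sqrt(2) in the {HH, HV, VH, VV} basis.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

def p_both_pass(a, b):
    """Probability that both photons pass polarizers set at angles a and b."""
    op = np.kron(polarizer(a), polarizer(b))
    return bell @ op @ bell

def p_first_passes(a):
    """Probability that the first photon passes, ignoring the second entirely."""
    op = np.kron(polarizer(a), np.eye(2))
    return bell @ op @ bell

a = np.radians(30)
for b_deg in (0, 30, 60, 90):
    b = np.radians(b_deg)
    print(f"angles 30 and {b_deg:2d} deg: "
          f"both pass with p = {p_both_pass(a, b):.3f}, "
          f"first alone passes with p = {p_first_passes(a):.3f}")
# The joint probability follows cos^2 of the angle difference (the correlation),
# while the single-photon probability stays at 0.5 no matter what the distant
# polarizer does, which is why entanglement cannot carry a message.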

How Holograms Work

Holograms work by taking a single laser beam and splitting it into 2 parts, with the primary beam falling upon the object being photographed, bouncing away, and landing upon a specialized screen, and the secondary beam falling directly upon the screen. The mixing of these beams creates a complex interference pattern which encodes a three dimensional image of the original object and which can be captured on specialized film. When another laser beam is flashed through the developed screen, a three dimensional image of the original object reappears. The term “holograph” is derived from the ancient Greek “holos”, meaning “whole”, and “graphein”, meaning “to write”. The main issue with holographic technology is data: traditional video only needs to display a flat grid of pixels at a minimum of roughly 30 frames per second, whereas a three dimensional holographic display must refresh 30 times per second from every viewing angle in order to create a sense of depth, and the amount of data this requires far exceeds that of a traditional television picture or video, exceeding even the capability of the internet until around 2014, when consumer internet speeds began to reach 1 gigabit per second.
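
The scale of the problem can be sketched with rough numbers. In the back-of-envelope calculation below, the resolution, color depth, and number of viewing angles are all illustrative assumptions rather than specifications of any real holographic display; the point is only how quickly the multiplication outruns a 1 gigabit per second connection.

# Back-of-envelope data-rate comparison between flat video and a hypothetical
# multi-view holographic display. Every figure below (resolution, bit depth,
# frame rate, number of views) is an illustrative assumption, not a spec.
WIDTH, HEIGHT = 1920, 1080   # pixels per frame
BITS_PER_PIXEL = 24          # standard RGB color depth
FRAMES_PER_SECOND = 30       # the minimum rate mentioned in the text
VIEWING_ANGLES = 100         # hypothetical number of distinct views for depth

flat_video_bps = WIDTH * HEIGHT * BITS_PER_PIXEL * FRAMES_PER_SECOND
holographic_bps = flat_video_bps * VIEWING_ANGLES

print(f"Uncompressed flat video:  {flat_video_bps / 1e9:6.2f} Gbit/s")
print(f"Uncompressed holographic: {holographic_bps / 1e9:6.2f} Gbit/s")
# Even before compression, multiplying by viewing angles pushes the requirement
# orders of magnitude past a 1 Gbit/s connection.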

The Rationale as to Why Scientific Fact is Often Referred to as “Scientific Theory”

The term “theory” attached to the names of major scientific frameworks such as gravity, evolution, and special relativity (e.g. the Theory of Gravity, the Theory of Evolution, the Theory of Special Relativity etc.), does not mean “theory” in the everyday sense. During the early 20th century, Sir Isaac Newton’s laws of motion began to break down at the edges of their own domain as physics progressed to answer ever larger and more complex questions. As a direct result, a grander, more encompassing framework was required to explain certain phenomena (e.g. the way starlight bends as it passes close to the Sun, an effect observable during a total solar eclipse), which is why Albert Einstein’s Theory of Relativity is so immensely important: it explains the phenomena at which Newton’s laws begin to fail (e.g. Newton’s laws can predict planetary orbits but cannot explain why gravity behaves as it does). Over time the scientific community largely stopped naming such frameworks as “laws”, because a law may not remain a law in the long term; there may be concepts outside it which explain both the supposed law itself as well as broader phenomena beyond it. The term “theory” came to be preferred over the term “law” because something scientific which can change over time was never truly a law to begin with. In this usage, a “theory” is an idea which accurately describes a phenomenon and empowers an observer to accurately predict what they have yet to observe. An idea is not genuinely a “theory” until it is supported by empirical evidence; before that, it remains a “hypothesis”.

The Reason Artificial Intelligence Differs From Traditional Software

Recently, many of the improvements made within the artificial intelligence sector have been due to “deep learning”, a technique built upon “artificial neural networks”. Traditional software is not intuitive; it simply follows a set of instructions predetermined by a programmer, and if the software runs into a new problem for which it has no answer prewritten, it fails. Deep learning is different, as the software effectively learns its own rules from examples instead of following the explicit instructions of a programmer. Currently, as of 2021, deep learning is the equivalent of an all powerful, dim-witted genie: it can evaluate the pixels of photographs of water bottles and then recognize other water bottles with astonishing accuracy, yet it has no idea what water or a water bottle actually is, what the end user does to drink from the bottle, what the end user needs the water for, and so on. Human beings differ in that they can learn from a sample size of one, surmising the purpose of water and everything else relevant from witnessing it being used on a single occasion.
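
The distinction can be illustrated with a toy sketch. The Python snippet below is purely illustrative; the two made-up “features”, the clustered random data, and the single-layer model standing in for a full deep network are all assumptions of the example. It contrasts a rule hard-coded by a programmer with a model that adjusts its own weights from labeled examples by gradient descent, which is the core idea behind learned software.

import numpy as np

# Toy contrast between rule-based software and learned behavior. The two
# "features" are invented (say, overall brightness and blue-channel share of
# an image region); the data and model are illustrative, not a real system.
rng = np.random.default_rng(0)

# Labeled examples: class 1 points cluster near (0.8, 0.8), class 0 near (0.2, 0.2).
X = np.vstack([rng.normal(0.8, 0.1, (50, 2)), rng.normal(0.2, 0.1, (50, 2))])
y = np.concatenate([np.ones(50), np.zeros(50)])

# Traditional software: a programmer writes the rule in advance.
def rule_based(x):
    return 1 if x[0] > 0.5 and x[1] > 0.5 else 0

# Learning-based software: a single-layer model adjusts its own weights from
# the examples via gradient descent, effectively writing its rule itself.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
    grad_w = X.T @ (p - y) / len(y)          # gradient of the logistic loss
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

learned = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
print("accuracy of the learned rule on the toy data:", np.mean(learned == y))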

The Ability of Quantum Theory to Explain the Existence of All Matter

The theory of quantum mechanics is the most accurate and powerful description of the natural world which scientists have at their disposal. Quantum fluctuations are written into the stars, as modern theories hold that when the universe sprang from a vacuum it expanded extremely rapidly, which means that the rules of the quantum world should have contributed to the large scale structure of the entire universe. The universe is shaped by quantum reality: the quantum world was inflated many, many times over, so that fluctuations in near nothingness have shaped everything, a picture now strongly supported by observation, though not definitively proven. Quantum physics provides a natural mechanism, through these fluctuations, by which tiny irregularities were seeded in the early universe, irregularities which would later grow into galaxies. The idea that a structure like the Milky Way Galaxy, a collection of hundreds of billions of stars together with vast clouds of gas and dust, could begin life simply because of small quantum fluctuations is absolutely mind boggling, as these fluctuations in the vacuum of space were present only on a submicroscopic scale, yet grew into some of the largest objects in the universe. The existence of matter itself has a similarly strange origin: the Big Bang produced equal amounts of matter and antimatter, and as the universe cooled, matter and antimatter annihilated almost perfectly, but not quite, with roughly 1 particle of matter surviving for every 1,000,000,000 (1 billion) annihilations, and this tiny surplus is what built the matter of the physical world, everything from stars to the Earth to the smallest life forms and inanimate objects. Everything within the universe which is physical to the touch is simply the debris of an enormous collision between matter and antimatter at the beginning of time.
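
The surplus described above is easy to put into numbers. The short calculation below is purely illustrative: the starting number of matter-antimatter pairs is an arbitrary round figure, and the one-in-a-billion survival ratio is the figure quoted in the text.

# Illustrative arithmetic for the matter-antimatter surplus described above.
# The starting pair count is an arbitrary round number chosen for the example;
# the one-survivor-per-billion ratio comes from the text.
pairs_created = 10**18        # hypothetical matter-antimatter pairs (arbitrary)
survivors_per_billion = 1     # roughly one matter particle survives per 10^9 annihilations

surviving_matter = pairs_created * survivors_per_billion // 10**9
print(f"{pairs_created:.1e} pairs annihilate -> about {surviving_matter:.1e} matter particles remain")
# Everything solid today is built from that roughly one-in-a-billion remainder.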

Galileo Galilei’s Telescope Design Improvement Upon the Dutch Spyglass Design

It had been known since the first spectacles were produced in the middle of the 13th century that glass was capable of bending light, a property which no other material of the period could exploit so readily. The Dutch spyglass worked upon this very principle, arranging lenses with careful attention to detail to create a compounding magnification effect. When light hits a plano-convex (pronounced “play-noh”) lens, which is flat upon one side and convex upon the other, the same form used in spectacles for those who suffer from hyperopia, the rays streaming inward are bent toward each other, eventually meeting and converging at a focal point. Just before this focal point, Galilei improved the original Dutch design by placing his second lens, an ocular lens which is plano-concave, meaning flat upon one side and concave upon the other, the same form used in spectacles for those who suffer from myopia. This secondary lens pushes the converging rays of light back out again so that they can hit the eye and provide a clear image, which the eye then focuses upon the retina. The magnification power of such a telescope depends upon the ratio between the focal lengths of its lenses: F1, the focal length of the plano-convex objective at the front of the spyglass, divided by F2, the focal length of the plano-concave ocular lens toward the back. The largest difficulty impeding Galilei was grinding his convex lens to be as shallow as possible in order to maximize F1, as the longer that focal length is, the greater the magnification will be. Within a few weeks of developing this new technology, Galilei’s first telescope had a clear magnification of 8x, far exceeding the power of the original Dutch spyglass. On August 21, 1609, Galilei climbed a Venice bell tower to meet Venetian nobles and senators and display his new instrument. This bleeding edge feat of engineering permitted Venetians to spot sailing ships roughly 2 hours earlier than with the naked eye. 3 days after the event, Galilei gifted his telescope to the Doge of Venice and was afforded a guaranteed position for life in exchange, with a salary equating to double his original income. With his finances secured, Galilei went on to develop and produce even more powerful telescopes.
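
The relationship between focal lengths and magnification can be written as a one-line calculation. In the sketch below, the two focal lengths are hypothetical values chosen only to reproduce the 8x figure mentioned above; they are not Galilei’s actual measurements.

# Magnification of a Galilean telescope: M = F1 / F2, the objective focal length
# divided by the focal length of the ocular lens. The focal lengths below are
# hypothetical values chosen to reproduce the 8x figure, not historical data.
objective_focal_length_cm = 96.0   # F1: long, shallow plano-convex objective (assumed)
ocular_focal_length_cm = 12.0      # F2: short plano-concave eyepiece (assumed)

magnification = objective_focal_length_cm / ocular_focal_length_cm
print(f"Magnification: {magnification:.0f}x")  # 8x with these assumed lenses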