After the Trinity test on July 16, 1945, the first nuclear detonation in human history, Manhattan Project lead Julius Robert Oppenheimer was asked how he and the others present reacted on that fateful day. Oppenheimer responded, “We knew the world would not be the same. A few people laughed, a few people cried, most people were silent. I remembered the line from the Hindu scripture, the Bhagavad Gita. Vishnu is trying to persuade the prince that he should do his duty and, to impress him, takes on his multi-armed form and says, ‘Now I am become Death, the destroyer of worlds.’ I suppose we all thought that, one way or another” (the phrasing “I am become Death”, while unusual in modern English, is a faithful direct translation of the Sanskrit)
Category: Quantum Physics
How Holograms Work
Holograms work by taking a single laser beam and splitting it into 2 parts: the primary (object) beam falls upon the object being photographed, bounces off of it, and lands upon a specialized screen, while the secondary (reference) beam falls directly upon the screen. The mixing of these beams creates a complex interference pattern which encodes a three dimensional image of the original object and can be captured on specialized film. By flashing another laser beam through the developed screen, the image of the original object is reconstructed as a hologram. The term “holography” is derived from the ancient Greek “holos”, which means “whole”, and “graphē”, which means “writing”. The main issue with holographic technology is that while traditional visual media flashes a minimum of roughly 30 frames per second, each frame divided into pixels, a three dimensional hologram must flash 30 frames per second of every viewing angle to create a sense of depth, and the amount of data required far exceeds that of a traditional television image or video, exceeding even the capability of the internet until around 2014, when consumer internet speeds began to reach 1 gigabit per second
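A rough numerical sketch can make the recording step concrete: the screen stores only the intensity of the combined reference and object beams, and the depth information is carried by the resulting fringe pattern. The wavelength, distances, and grid size below are assumed example values, not figures from the text.

```python
import numpy as np

# Minimal sketch of hologram recording: a plane reference wave interferes
# with a spherical wave scattered from a single object point.
# All parameters (wavelength, geometry, grid) are illustrative assumptions.

wavelength = 633e-9                 # HeNe laser wavelength in metres
k = 2 * np.pi / wavelength          # wavenumber

# Coordinates on the recording screen (1 mm x 1 mm, 1000 x 1000 samples)
x = np.linspace(-0.5e-3, 0.5e-3, 1000)
X, Y = np.meshgrid(x, x)

# Reference beam: plane wave hitting the screen head-on (constant phase)
E_ref = np.exp(1j * 0.0)

# Object beam: spherical wave from a point 5 cm in front of the screen
z_obj = 0.05
r = np.sqrt(X**2 + Y**2 + z_obj**2)
E_obj = np.exp(1j * k * r) / r

# The film records only intensity; the fringes encode the object's phase,
# which is what makes reconstruction of depth possible.
hologram = np.abs(E_ref + E_obj)**2
print(hologram.shape, hologram.min(), hologram.max())
```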
The Rationale as to Why Scientific Fact is Often Referred to as “Scientific Theory”
The term “theory” appended to names such as the Theory of Gravity, the Theory of Evolution, and the Theory of Special Relativity does not mean “theory” in the colloquial sense. During the 20th century, Sir Isaac Newton’s laws of motion began to break down at the edges of their own domain as physics progressed further and further to answer continually larger and more complex questions. As a direct result of this, a grander, more encompassing framework was required to explain certain phenomena (e.g. the bending of starlight around the sun, observable during a total solar eclipse), which is why Albert Einstein’s Theory of Relativity is so immensely important, as it explains phenomena at which Newton’s laws begin to fail (e.g. Newton’s laws can predict planetary orbits but cannot explain why gravity behaves as it does). Eventually the scientific community broadly agreed that such frameworks should not be labelled “laws”, because they may not remain laws in the long term; there may be concepts outside of them which help explain both the supposed law itself as well as broader phenomena beyond it. The term “theory” was utilized to replace the term “law” because something scientific which can change over time was never truly a law to begin with. In science, a “theory” is an idea which accurately describes a phenomenon and empowers an observer to accurately predict what they have yet to observe. An idea isn’t genuinely a “theory” until it’s supported by empirical evidence, before which time it remains a “hypothesis”
The Reason Artificial Intelligence Differs From Traditional Software
Recently, many of the improvements made within the artificial intelligence sector have been due to the technology of “deep learning”, which is also referred to as an “artificial neural network”. Traditional software is not intuitive, as it simply follows a set of instructions predetermined by a programmer; if the software runs into a new problem for which it has no answer prewritten, it fails. Deep learning is different in that the software effectively writes its own instructions by learning from examples, instead of following the instruction(s) of a programmer. Currently, as of 2021, deep learning is the equivalent of an all powerful, dim witted genie: it can evaluate the pixels of a photograph of a bottle of water and recognize, with astonishing accuracy, photographs of other water bottles, yet it has no idea what water or the water bottle itself is, what the end user does to drink from the water bottle, what the end user needs the water for etc. Human beings differ in that they learn from a sample size of one, and are able to surmise the purpose of water and everything else which is relevant from witnessing it being used upon a single occasion
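The contrast can be sketched in a few lines of code: a hand-written rule versus a minimal one-layer network that adjusts its own weights from labelled examples. The data and features here are invented placeholders, and the single-layer model is only a stand-in for the much deeper networks the text describes.

```python
import numpy as np

# Toy contrast between "traditional" rule-based software and a learned model.
# Imagine each sample is a pair of simple image statistics and the label says
# whether the photo shows a water bottle (all data here is synthetic).

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)   # hidden "true" rule

# Traditional software: the rule is hand-written by a programmer and never changes.
def rule_based(sample):
    return 1.0 if sample[0] > 0.8 else 0.0        # brittle, hand-picked threshold

# Deep-learning style: a one-layer network "writes its own instructions"
# by adjusting weights to fit labelled examples via gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))        # sigmoid prediction
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

learned_acc = np.mean(((1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5) == y)
rule_acc = np.mean(np.array([rule_based(s) for s in X]) == y)
print(f"hand-written rule accuracy: {rule_acc:.2f}, learned model accuracy: {learned_acc:.2f}")
```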
The Ability of Quantum Theory to Explain the Existence of All Matter
The theory of quantum mechanics is the most accurate and powerful description of the natural world which scientists have at their disposal. Quantum fluctuations are written into the stars: modern theories explain that as the universe sprang from a vacuum, it expanded very rapidly, which means that the rules of the quantum world should have contributed to the large scale structure of the entire universe. The universe is shaped by quantum reality; essentially, the quantum fluctuations of the early vacuum were inflated many, many times over, so that “nothingness” shaped everything, a picture now strongly supported by observation. Quantum fluctuations thus provide a natural mechanism by which small irregularities in the early universe could later grow to create galaxies. The idea that a structure like the Milky Way Galaxy, a collection of billions of stars, could begin life simply because of small quantum fluctuations is absolutely mind boggling, as these tiny fluctuations within the vacuum of space were present only upon a submicroscopic scale, yet had the ability to grow into some of the largest objects in the universe. Matter itself survives because the Big Bang produced almost equal amounts of matter and anti-matter; as the universe cooled down, matter and anti-matter annihilated almost perfectly, but not quite, with roughly 1 particle of matter left over for every 1,000,000,000 (1 billion) annihilations, and this leftover is what built the matter of the physical world, everything from stars to the Earth to the smallest life forms and inanimate objects. Everything within the universe which is physical to the touch is simply the debris of an enormous collision between matter and anti-matter at the beginning of time
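As a back-of-the-envelope illustration of that one-in-a-billion surplus (the starting particle count below is an arbitrary illustrative number, not a measured value):

```python
# Back-of-the-envelope sketch of the matter/anti-matter imbalance described above.
# The starting count of anti-particles is an arbitrary illustrative number; only
# the one-in-a-billion surplus comes from the text.

antimatter = 10**18                            # illustrative number of anti-particles
surplus = antimatter // 1_000_000_000          # ~1 extra matter particle per billion
matter = antimatter + surplus

annihilated_pairs = min(matter, antimatter)    # every anti-particle finds a partner
surviving_matter = matter - annihilated_pairs  # the leftover that builds stars and planets

print(f"pairs annihilated: {annihilated_pairs:.3e}")
print(f"matter left over:  {surviving_matter:.3e}")
```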
Galileo Galilei’s Telescope Design Improvement upon the Dutch Spyglass Design
It had been known since the first spectacles were produced in the late 13th century that glass was capable of bending light, a property which no other known material of the period could achieve. The Dutch spyglass worked upon this very principle, arranging lenses with careful attention to detail to create a compounding magnification effect. When light hits a plano-convex (pronounced “play-noh”) lens, which is flat upon one side and convex upon the other, the same form used in spectacles for those who suffer from hyperopia, the incoming rays of light are bent toward each other, eventually meeting and converging at a focal point. Right before this focal point, Galilei improved the original Dutch design by placing his second lens, an ocular lens which is plano-concave, meaning flat upon one side and concave upon the other, the same form used in spectacles for those who suffer from myopia. This secondary lens pushes the converging rays of light back out again so that they can enter the eye and provide a clear image. The eye focuses this light upon the retina so that the observer can view the image produced by the spyglass. The magnification power of such a telescope depends upon the ratio between the focal lengths of its lenses, with F1 being the focal length of the forward plano-convex (objective) lens and F2 being the focal length of the plano-concave ocular lens. The largest difficulty impeding Galilei was grinding his convex lens to be as shallow as possible in order to lengthen F1, as the longer the objective’s focal length, the greater the magnification. Within a few weeks of developing this new technology, Galilei’s first telescope had a clear magnification of 8x, far exceeding the power of the original Dutch spyglass. On August 21, 1609, Galilei climbed a Venice bell tower to meet up with Venetian nobles and senators so that he could display his new technology. This bleeding edge feat of engineering permitted Venetians to spot sailing ships 2 hours earlier than if they had used the naked eye. 3 days after the event, Galilei gifted his telescope to the Doge of Venice and was afforded a guaranteed job for life in exchange, with this salary equating to double his original income. With his finances secured, Galilei went on to develop and produce even more powerful telescopes
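That ratio can be illustrated with a trivial calculation; the focal lengths below are assumed example values rather than Galilei’s actual measurements.

```python
# Magnification of a Galilean telescope: objective focal length divided by
# ocular focal length. The numbers below are assumed example values only.

def galilean_magnification(f_objective_mm: float, f_ocular_mm: float) -> float:
    """Return the angular magnification F1 / F2."""
    return f_objective_mm / f_ocular_mm

# Example: a 980 mm objective paired with a 122.5 mm ocular gives roughly 8x,
# the power attributed to Galilei's first improved telescope in the text above.
print(galilean_magnification(980.0, 122.5))   # -> 8.0
```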
The Future of Body Modification
Near field communication, often abbreviated as “NFC”, is the ability for wireless devices to communicate with each other and has now made its way into the bodies of human beings, with some opting to implant small subdermal microchips using a preloaded, large gauge hypodermic syringe (e.g. 14 – 18 gauge) so that these individuals gain the ability to start their vehicle(s), open their home door locks, send contact information to another person’s smartphone etc., wirelessly and without any effort from the end user. This adaptation is referred to as “transhuman” as it goes beyond what the biological human body can do by introducing technology which cannot be evolved into existence. Devices have been developed for a number of different purposes (e.g. a chip which vibrates when pointed toward magnetic north, turning the body into a compass, or a small implanted capsule of tritium gas which glows beneath the skin; because the glow is powered by radioactive decay rather than a battery, it lasts for many years, as tritium has a half-life of roughly 12 years).
In 2018, at the University of Colorado, Dr. Carson Bruns and his team developed a technology which allows for smart tattooing, in that newly developed, highly specialized tattoo inks will be able to deliver new functions to the artistic medium of tattooing. The first design invented was a tattoo ink which is sensitive to ultraviolet light, allowing it to lie invisible under typical lighting conditions and only appear as a blue hue in the presence of sunlight or an artificial ultraviolet light source. This technology would be practical as well as aesthetic, as it would allow a person to know when they have had too much sun exposure while outside. Bruns’ team has also developed tattoo ink which changes color as the temperature of the body changes, which again would be functional as well as artistic, acting as a thermometer to indicate when a person has had too much or too little exposure to cold or heat. Nanotechnology is used to engineer and design tattoo particles which have specialized properties and characteristics (e.g. a thermal battery and/or storage mechanism). Real world applications could be spurred by this advent, like the ability to keep the entire body at a comfortable temperature at all times, regardless of the environment, if the entire body were tattooed, either visibly with color or invisibly with translucent ink. Specially engineered tattoos could also have medical applications, such as the distribution of a pharmacological medication or hormone which helps regulate biochemistry (e.g. insulin or neural catecholamines to control mood etc.). World militaries may find use for specially engineered tattoos as well, allowing skin to become more resilient to abrasions or epidermal damage. Specialized tattoo pigments can also be made touch sensitive, so that when touched they have the ability to turn on or off as well as perform other functions (e.g. manipulate an options menu upon a screen or act as a controller for a game or software etc.).
In 2018, billionaire futurist Elon Musk unveiled Neuralink, a technology which he states provides the ability of “self-directed evolution”. Neuralink will be installed within the human body by using a specialized robotic syringe to inject an ultra thin mesh, referred to as “neural lace”, into the neocortex of the brain, to form a network of electrodes which are able to monitor and influence brain function.
These microelectrodes will be able to read from and write to neurons; a bi-directional information exchange. This will allow for the downloading and uploading of information to and from the internet, wirelessly. This technology will allow for thoughts to be sent between users in the same format that data is shared online during the modern day, using peer to peer networking. This technology will also allow for the control of devices remotely; in principle, telekinesis. Nanotechnology now provides scientists with the means to manufacture electronics small enough to be tattooed, which means that in the future, Neuralink will only require a small cranial tattoo instead of a cranial implant
The End of the Universe and the Big Crunch Theory
The likelihood of a Big Crunch, in which the universe expands to a point and then collapses inward upon itself, is not very high, as mathematical calculations demonstrate that there simply isn’t enough mass in the entire universe for it to revert into an enormous compaction. The idea of the universe folding in upon itself can be visualized by imagining a person throwing a ball in the air. The Earth has enough mass to bring a thrown ball back down to the ground, but if the ball were thrown faster than escape velocity, which is 11.186 kilometers per second, it would never come back down; it would travel outward indefinitely, because the Earth’s gravity would never be strong enough to pull it back to its starting position. In this thought experiment, the Earth represents the universe acting as a gravitational force upon other objects, and the ball represents all matter throughout the universe
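The 11.186 kilometers per second figure can be checked from the standard escape velocity formula, v = √(2GM/R), using commonly quoted values for the Earth’s mass and radius (assumed here for illustration):

```python
import math

# Quick check of Earth's escape velocity, v = sqrt(2 * G * M / R).
# Constants below are standard reference values, assumed for illustration.

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24     # mass of the Earth, kg
R_EARTH = 6.371e6      # mean radius of the Earth, m

v_escape = math.sqrt(2 * G * M_EARTH / R_EARTH)
print(f"{v_escape / 1000:.3f} km/s")   # ~11.186 km/s, matching the figure above
```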
The Danger of Air Pollution Gaining Access to the Brain
The reason air pollution has a metallic taste and scent, and burns the eyes upon exposure, is that the particles of air pollution are tiny enough to travel along olfactory nerve cells and gain direct entry to the brain where the olfactory bulb meets the frontal cortex, as there is no blood-brain barrier at this point. The body normally protects itself through the blood-brain barrier, which prevents particles within the bloodstream from passing directly into the brain. This system has a slight flaw however, as the nose acts as a direct conduit by which incredibly tiny particles can bypass this security mechanism