Technology

Beyond End-to-End: Unveiling the Quantum Threat to Encryption

If you’ve ever used WhatsApp or Instagram to communicate with friends and family, you may have noticed that your messages are “end-to-end encrypted”. At first glance, it sounds great. All your messages are safe and secure – you’d think.

However, not every encryption method is created equal, and with the rise of cyberattacks and increasingly sophisticated technology, especially in the quantum field, one must exercise caution when choosing the right tools. But to better understand the scale of this issue, we must first address the mathematical operation that makes such a risk feasible in the first place.

Shor’s algorithm poses a major threat to the security provided by current industry-standard encryption methods like RSA and ECC, which rely on the difficulty of factoring large integers and of computing discrete logarithms, respectively. This difficulty, however, only holds in the classical world of computing, where no known algorithm solves these problems in polynomial time, making such encryption practically impossible to break. A quantum computer, on the other hand, can exploit superposition and interference to efficiently find the period of a modular function – the hard core of both problems – achieving polynomial time. In simpler terms, most of the “asymmetric” encryption methods in use today are at risk.
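To see the idea concretely, here is a minimal sketch of the period-finding step at the heart of Shor’s algorithm, using the classic textbook example N = 15. The order is found here by classical brute force (the part that takes exponential time and that a quantum computer performs efficiently); the point is only to show how knowing the period yields the factors.

```python
from math import gcd

def find_order(a: int, n: int) -> int:
    """Brute-force the smallest r with a^r = 1 (mod n).
    This is the step a quantum computer performs in polynomial time;
    classically it takes time exponential in the bit-length of n."""
    r, value = 1, a % n
    while value != 1:
        value = (value * a) % n
        r += 1
    return r

n, a = 15, 7          # toy modulus and a base coprime to it
r = find_order(a, n)  # r = 4, since 7^4 = 2401 = 160*15 + 1

# An even order r yields factors of n via gcd(a^(r/2) +/- 1, n)
p = gcd(pow(a, r // 2) - 1, n)
q = gcd(pow(a, r // 2) + 1, n)
print(r, p, q)  # 4 3 5
```

With real 2048-bit RSA moduli, no classical computer can find this period, which is precisely what Shor’s algorithm changes.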

This creates a domino effect on symmetric encryption methods too, since most symmetric keys are exchanged between users through an asymmetric handshake. If Shor’s algorithm compromises that exchange, all data encrypted with the key could potentially be decrypted: including your texts and photos.
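A toy sketch makes the domino effect visible. The numbers below are a deliberately tiny textbook RSA example, so the “quantum attack” can be simulated by simple trial division; real moduli are hundreds of digits long, and Shor’s algorithm would play the attacker’s role.

```python
# Toy hybrid exchange: a symmetric key is wrapped with textbook RSA.
# n is deliberately tiny; real RSA moduli cannot be factored classically.
p, q = 53, 61
n = p * q                          # public modulus: 3233
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

symmetric_key = 123                 # the secret that protects all messages
wrapped = pow(symmetric_key, e, n)  # what an eavesdropper records today

# "Harvest Now, Decrypt Later": the attacker factors n (here trivially;
# for real key sizes, via Shor's algorithm) and rebuilds the private key.
f = next(i for i in range(2, n) if n % i == 0)
d_attacker = pow(e, -1, (f - 1) * (n // f - 1))
recovered = pow(wrapped, d_attacker, n)
print(recovered == symmetric_key)  # True: everything under this key falls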

Whilst this threat isn’t currently feasible for ordinary individuals – quantum computers are costly, sophisticated pieces of technology – many countries and research groups are becoming increasingly aware of their uses and have built their own. There is thus an imminent risk that quantum threats could escalate cyberattacks and transform the digital landscape as we know it.

Moreover, some authorities and individuals are adopting a technique called “Harvest Now, Decrypt Later”: accumulating databases of encrypted information in the hope that it can one day be decrypted with sufficiently powerful quantum computers.

In response, many companies and researchers (including NIST) have taken measures to strengthen encryption methods and implement quantum-safe encryption in their communication protocols. One example is the open-source messaging platform Signal, which introduced the new PQXDH encryption protocol, designed to resist current quantum attacks; Signal notes, however, that the protocol will need upgrading as future findings and vulnerabilities may require additional security adjustments. The whitepaper for the protocol is linked in the sources below.

Conclusion

We have seen that these advancements pose a monumental risk to information security. Although it’s easy to be pessimistic about them, I believe the industry’s response is a step in the right direction towards safeguarding our digital security and communication. Therefore, as individuals and organisations alike, we must take proactive measures:

  • Stay Informed: Keep abreast of developments in quantum computing and its implications for encryption. Awareness is key to making informed choices.
  • Quantum-Safe Encryption: Consider adopting encryption methods that are resilient to quantum attacks. New cryptographic standards, often referred to as Post-Quantum Cryptography (PQC), are being developed to address this specific concern.
  • Advancements in Technology: Support and invest in technologies that stay ahead of the curve (especially open-source projects), continually updating encryption methods to withstand emerging threats.

Sources

https://csrc.nist.gov/projects/post-quantum-cryptography/
https://statweb.stanford.edu/~cgates/PERSI/papers/MCMCRev.pdf
https://purl.utwente.nl/essays/77239/
https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/security/encryption/what-types-of-encryption-are-there/#:~:text=There%20are%20two%20types%20of,used%20for%20encryption%20and%20decryption.
https://signal.org/docs/specifications/pqxdh/

The Secret to Clustering? Unveiling the Mystery Behind Spinal Cancer Clusters

According to the CDC, over 600,000 people in the United States die from this disease each year, making it the second leading cause of death. Upon being diagnosed, one faces a lifetime of stress and rigorous treatment in the form of chemotherapy in an attempt to kill all the malignant cells before the illness spreads to the rest of the body. It doesn’t always stop at this point, though; many go on to face progressing stages that may require more treatment, a verdict of ‘X months to live,’ or the first followed by the second. This illness is cancer.

There are over a hundred and fifty types of cancer, ranging from head to toe and everything in between. Some cancers are heavily influenced by sex (such as breast cancer), others by age (such as prostate cancer), and so on. The one that ought to be highlighted, though, for its increased presence (clustering) of cancerous cells is spinal cancer. Although most (if not all) cancers can cluster their cells in a few specific areas, this one causes cancer cells to become three to five times more prevalent in the spine relative to other bones.

Without the assistance of modern medicine and technology necessary to probe deeper, scientists just considered this specific clustering of cells a medical mystery.

But we have access to that technology now. At least, researchers from Weill Cornell Medicine and the Hospital for Special Surgery in New York do. With these resources, they were able to figure out what exactly causes these clusters to emerge: vertebral skeletal stem cells in the spine. What makes these stem cells unique relative to others is their production of a protein that attracts tumor (cancerous) cells to them.

So what can be done with this? Excellent question. 

Through identifying what may exactly cause this clustering, researchers can work on targeting these vertebral skeletal stem cells to disrupt their function (i.e., attracting cancerous tumor cells to them). 

That seems like the perfect plan, no? However, when an experiment targeting these cells took place in mice, it didn’t completely eliminate the bone formation (and, by extension, the cancerous cells) in that area. This begs the question: is there a second stem cell type that we aren’t accounting for? Or is it something else?

Sources

  • https://www.cdc.gov/nchs/fastats/leading-causes-of-death.htm
  • https://www.washingtonpost.com/science/2023/11/28/new-stem-cell-spine-cancer/
  • https://pubmed.ncbi.nlm.nih.gov/37704733/

Exploring the Science Behind Allergies

As alarming as it sounds, even a lick of peanut butter could be life-threatening. Allergies. What are they? Let’s see. Had the peanut in peanut butter been harmful to everyone, it wouldn’t be called an allergy. A substance is only called an allergen when it triggers an abnormal reaction in a select few.

So the question arises: how do I know if I’m allergic, and what am I allergic to?

Allergies come in many forms, triggered by anything from water to nickel coins. One can’t possibly predict which substances will react badly with your body without ever being exposed to them. This is why allergy tests are done.

Only a medical professional can tell you your allergies, unless something you had eaten or been exposed to previously didn’t sit right with you. Symptoms of an allergy range from a runny nose to breathlessness and, of course, the scary and itchy hives.

Let’s take a look at what the doctor is doing behind the scenes, shall we?

An immunologist or allergist usually does the test, which involves a skin prick or a patch test. The image above, from Westhillsaaa, illustrates a medical professional checking for unusual reactions in a patient’s skin through various triggers.

The tests range from injecting allergens under your skin to taking a blood sample. The choice of test varies according to the patient’s data, including their medical history, condition, and suspected triggers.

Something to note about allergies is that a person can outgrow them with time. This is commonly seen in children outgrowing food allergies, but some allergies, like those to pollen and medications, persist for a long time or even for life.

Although you can’t entirely get rid of an allergy that persists into adulthood, you can take certain medications and treatments prescribed accordingly to reduce complications.

A common treatment is desensitization, which is basically building tolerance to your allergen by exposing your body to it periodically in small concentrations.

A personal suggestion: have an emergency action plan, including an EpiPen, ready just in case things go south after eating or reacting to something new.

In the near future, who’s to deny that, at the rate medical technology is growing, we might even have a permanent remedy for allergies? That’s a topic up for discussion.

500Hz Monitors: Who’s buying them?

When it comes to monitor refresh rates, higher is undoubtedly better. But accompanying this idea is a trend of diminishing returns, where the benefits of higher refresh rates become less and less impactful.

“Frametime” refers to the amount of time it takes for a screen to refresh its display once. There’s an inverse relationship between frametime and refresh rate: as the refresh rate increases, the frametime decreases. More precisely, we can use the function f(x) = 1000 ms / x Hz to model the frametime (in milliseconds) based on the refresh rate (in hertz).

Notice that while upgrading from 60Hz to 144Hz yields a frametime improvement of 9.72ms, a further leap from 144Hz to 240Hz offers a lesser gain of only 2.78ms. This is why humans perceive the transition from 144Hz to 240Hz as less noticeable than that from 60Hz to 144Hz. And as we continue to increase the refresh rate, this trend of diminishing returns only grows more pronounced; the difference consumers perceive decreases as they upgrade their display further.
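These differences follow directly from the frametime formula; a few lines of code make the diminishing returns explicit:

```python
def frametime_ms(refresh_hz: float) -> float:
    """Frametime in milliseconds for a given refresh rate in Hz."""
    return 1000 / refresh_hz

# The gain shrinks with each step up the refresh-rate ladder.
for low, high in [(60, 144), (144, 240), (240, 500)]:
    gain = frametime_ms(low) - frametime_ms(high)
    print(f"{low}Hz -> {high}Hz: {gain:.2f} ms faster per frame")
```

Note that even the jump from 240Hz all the way to 500Hz saves barely more than two milliseconds per frame.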

Standing at the highest end of today’s monitor market are 500Hz displays. So the question arises: with a benefit so seemingly insignificant, who exactly is the target audience for these monitors?

Well, while diminishing returns are unavoidable, the degree to which consumers notice the improvement can vary. Picture two very different activities: typing in a document and playing a first-person shooter. When a user is typing, only a small portion of the screen updates at a time. While playing a first-person shooter, on the other hand, nearly the entire screen is in constant motion and updates rapidly, creating far more disparity between consecutive frames. For this reason, users whose activities demand a significantly and dynamically updating screen will benefit the most from a 500Hz monitor.

(Most frames do not significantly change) (source: TrevorTrac)

(Most frames do significantly change) (Source: MP1st)

Additionally, fully utilizing a 500Hz monitor requires a video game to consistently run at or above 500 fps (frames per second). And running applications at such high framerates requires a higher-end system.

Furthermore, even the most powerful hardware can only run a handful of games above the 500fps threshold. Most titles that consumers play are recent releases; developers intended these games to run at 60fps on upper-middle level consumer hardware. Despite this, there are a few exceptions that can run at 500fps with reasonable settings. Video games such as Valorant, Minecraft, and older first-person shooters are among the select few which can fully utilize a 500Hz monitor.

The improvement from 360Hz to 500Hz is not nearly as significant as past generational leaps. But there still exists a niche user base that tangibly benefits from 500Hz monitors: users whose games 1) frequently and significantly update most of the screen and 2) can consistently run at or above 500fps on their system.

Quantum-Inspired AI Model Helps CRISPR Cas9 Genome Editing for Microbes


A team of scientists at Oak Ridge National Laboratory (ORNL) has embarked on a groundbreaking venture, leveraging quantum biology, artificial intelligence (AI), and bioengineering to revolutionize the effectiveness of CRISPR Cas9 genome editing tools. Their focus is on microbes, particularly those earmarked for modifications to produce renewable fuels and chemicals, which present a unique challenge due to their distinct chromosomal structures and sizes.

Traditionally, CRISPR tools have been tailored for mammalian cells and model species, resulting in weak and inconsistent efficiency when applied to microbes. Recognizing this limitation, Carrie Eckert, leader of the Synthetic Biology group at ORNL, remarked, “Few have been geared towards microbes where the chromosomal structures and sizes are very different.” This realization prompted the ORNL scientists to explore a new frontier in the quest to enhance the precision of CRISPR tools.

The team’s journey took an unconventional turn as they delved into quantum biology, a field at the intersection of molecular biology and quantum chemistry. Quantum biology explores the influence of electronic structure on the chemical properties and interactions of nucleotides, the fundamental building blocks of DNA and RNA, within cell nuclei where genetic material resides.

To improve the modeling and design of guide RNA for CRISPR Cas9, the scientists developed an explainable AI model named the iterative random forest. Trained on a dataset of approximately 50,000 guide RNAs targeting the genome of E. coli bacteria, the model took into account quantum chemical properties. The objective was to understand, at a fundamental level, the electronic distribution in nucleotides, which influences the reactivity and stability of the Cas9 enzyme-guide RNA complex.
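To give a sense of what training data for such a model looks like, here is a rough, hypothetical sketch of featurizing a guide RNA sequence. The ORNL team’s actual features were quantum chemical descriptors of the nucleotides; the stand-ins below (GC content and a position-wise one-hot encoding) are simpler properties commonly used in guide design, shown only to illustrate the shape of the input a random-forest model would consume.

```python
# Illustrative only: simple stand-in features for a guide RNA sequence.
# The ORNL model used quantum chemical nucleotide descriptors instead.

def featurize_guide(seq: str) -> list[float]:
    """Turn a 20-nt guide RNA sequence into a flat numeric feature vector."""
    seq = seq.upper()
    # Fraction of G/C bases, a classic proxy for binding stability
    gc_content = sum(base in "GC" for base in seq) / len(seq)
    # One-hot encode each position (A, C, G, U): 4 features per nucleotide
    one_hot = []
    for base in seq:
        one_hot.extend([float(base == b) for b in "ACGU"])
    return [gc_content] + one_hot

guide = "GACGAUCGGCUAACGGAUCC"   # hypothetical 20-nt guide sequence
features = featurize_guide(guide)
print(len(features))  # 1 GC feature + 20 positions * 4 bases = 81
```

A dataset of ~50,000 such vectors, each paired with a measured cutting efficiency, is what a random forest (iterative or otherwise) would be trained on.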

“The model helped us identify clues about the molecular mechanisms that underpin the efficiency of our guide RNAs,” explained Erica Prates, a computational systems biologist at ORNL. The iterative random forest, with its thousands of features and iterative nature, was trained using the high-performance Summit supercomputer at ORNL’s Oak Ridge Leadership Computing Facility.

What sets this approach apart is its commitment to explainable AI. Rather than relying on a “black box” algorithm that lacks interpretability, the ORNL team aimed to understand the biological mechanisms driving results. Jaclyn Noshay, a former ORNL computational systems biologist and first author on the paper, emphasized, “We wanted to improve our understanding of guide design rules for optimal cutting efficiency with a microbial species focus.”

Graphical abstract: https://academic.oup.com/nar/article/51/19/10147/7279034

Validation of the explainable AI model involved CRISPR Cas9 cutting experiments on E. coli, using a large group of guides selected by the model. The results were promising, confirming the efficacy of the model in guiding genome modifications for microbes.

The implications of this research extend far beyond microbial genome editing. “If you’re looking at any sort of drug development, for instance, where you’re using CRISPR to target a specific region of the genome, you must have the most accurate model to predict those guides,” highlighted Carrie Eckert. The study not only advances the field of synthetic biology but also has broader applications in drug development and bioenergy research.

The ORNL researchers envision collaborative efforts with computational science colleagues to further enhance the microbial CRISPR Cas9 model using additional data from lab experiments and diverse microbial species. The ultimate goal is to refine CRISPR Cas9 models for a wide range of species, facilitating predictive DNA modifications with unprecedented precision.

The study, supported by the DOE Office of Science Biological and Environmental Research Program, ORNL’s Lab-Directed Research and Development program, and high-performance computing resources, signifies a significant leap forward in the quest to improve CRISPR technology. As Paul Abraham, a bioanalytical chemist at ORNL, remarked, “A major goal of our research is to improve the ability to predictively modify the DNA of more organisms using CRISPR tools. This study represents an exciting advancement toward understanding how we can avoid making costly ‘typos’ in an organism’s genetic code.” The findings hold promise for applications in fields ranging from bioenergy feedstock enhancement to drug development, marking a pivotal moment in the evolution of CRISPR technology.

Sources

https://doi.org/10.1093/nar/gkad736

Synthetic Biology: A Brave New World of Cures and Cautions

As a recent and ever-changing form of medicine and science, synthetic biology is paving the way for the future of medicine. Defined as a “research and engineering domain of biology where a human-designed genetic program is synthesized and transplanted into a relevant cell type from an extant organism” (A.M. Calladine, R. ter Meulen, 2013), synthetic biology offers possible solutions to some of society’s most pressing medical issues. Through DNA sequence modification and genome editing, scientists have been able to edit genetic material in living organisms with tools such as CRISPR (clustered regularly interspaced short palindromic repeats). This ability allows scientists to provide organisms with genetic tools that nature has not yet apportioned. CRISPR also allows for the creation of ‘living therapeutics’ and the introduction of immune cells into the human body.

So, what does this all mean? Well, synthetically creating genetic tools has already allowed for a breakthrough in different areas of production, such as the ability for silkworms to produce spider silk, as well as genetically engineered food, such as cheese, plant-based meat, etc., some of which are already available on a market scale. This provides society with a more sustainable way of creating different materials, which may be necessary as we continue to experience the impacts of consumerism on our planet’s environment. Living therapeutics and immune cells can help treat patients with various diseases, including multiple forms of cancer, providing them with a better chance of recovery and survival. Synthetic biology also assisted in the mass production of certain COVID-19 vaccines by manufacturing the SARS-CoV-2 genome sequence. 

It’s clear that an abundance of benefits derives from the usage of synthetic biology. Yet, as with most technological advancements, there is also a profusion of risks. A majority of these risks are ethical and extremely dangerous. According to the University of Oxford, synthetic biology, although promising, gives biologists a concerning way of ‘playing god.’ Misusing synthetic biology could potentially destroy existing ecosystems and undermine the crucial distinction between living organisms and machines. The loss of this distinction could be catastrophic for how humans view the importance of different organisms, and it creates an ethical concern of prioritizing machines and technology over nature and living organisms. Synthetic biology also introduces the risk of synthesizing known human pathogens, such as influenza or smallpox, which could be released in far more dangerous forms than their natural ones. Although some of these risks are unlikely, the potential damage they could inflict would be devastating.

When considering the sad reality of human greed, it is essential to question whether the findings of synthetic biology will continue to be used for good. In the wrong hands, the technology could cause the decimation of multiple existing species, ultimately jeopardizing the balance of our ecosystems. Synthetic biology also poses a genuine risk of bioterrorism, as hazardous, genetically mutated organisms could be created and maliciously released. Control of this technology is concentrated in richer first-world countries, creating inequality in access and usage. This gives certain countries, such as the U.S., an extensive scientific advantage that could be used at the expense of other nations.

It remains to be seen what the future of synthetic biology holds, but it is imperative that both the benefits and drawbacks are considered. Naturally, we hope synthetic biology continues to be used for the greater good of humankind, but that could very easily and swiftly change. Therefore, especially considering that we are already in the midst of multiple ethical, moral, and environmental crises, it is necessary to be aware of the information we consume and promote, specifically regarding the ongoing evolution of technology and science.

Why “Teraflops” Fail to Represent Computational Performance

Floating-point operations per second, or flops, is a theoretical measurement used to describe the performance capability of hardware. The term “teraflops,” or “trillion flops,” is often used around recent gaming consoles such as the PlayStation 5 and the Xbox Series X, but it applies to any device that contains a computing chip. The issue, however, is that measuring in teraflops has never reliably indicated real-world performance.

This can be observed especially in the graphics card market. During Q2 of 2019, AMD released the RX 5700 XT, a direct competitor to NVIDIA’s GTX 1080 Ti. While the 5700 XT had a teraflops rating of around 9.8, the 1080 Ti boasted a stronger rating of around 11.3. Comparing the two values, we’d expect the 1080 Ti to be about 15.3% faster than the 5700 XT.

Well, let’s take a look at some real-world testing results:

Source: [1] Timestamp 9:20
Source: [1] Timestamp 8:30

It appears that “15.3% faster” was in fact not the case. As shown above, Hardware Unboxed tested both card models and found that, on average, the 1080 Ti was the same speed as the 5700 XT at 1440p resolution. It was even 3% slower at 1080p.[1]

And no, this comparison is not an anomaly. The RTX 3080 almost triples the TFLOPS rating of the RTX 2080 (29.8 to 10.7), but is only about 65% faster at 4K resolution.

These performance statistics only begin to cast doubt on a teraflops rating’s credibility as a measure of computational performance; to dispute it further, we must better understand what actually defines “teraflops.”

The number of floating-point operations per second is calculated by multiplying together the clock speed, the number of cores, and the number of floating-point operations per cycle. To convert from flops to teraflops, this product is divided by one trillion.
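For example, plugging in the GTX 1080 Ti’s published specifications (3,584 CUDA cores, a boost clock of roughly 1,582 MHz, and 2 FP32 operations per cycle via fused multiply-add) reproduces its quoted rating:

```python
def tflops(clock_hz: float, cores: int, ops_per_cycle: int) -> float:
    """Theoretical peak: clock * cores * ops per cycle, scaled to trillions."""
    return clock_hz * cores * ops_per_cycle / 1e12

# GTX 1080 Ti: 3584 CUDA cores, ~1.582 GHz boost, 2 FP32 ops/cycle (FMA)
print(round(tflops(1.582e9, 3584, 2), 1))  # 11.3
```

Notice that nothing in this calculation touches memory bandwidth, cache, or drivers, which is exactly the problem.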

The expression might seem too simple; and as a matter of fact, it is. What part of it considers the processor’s memory or bus width? And after one year of video driver updates, the 5700 XT’s performance increased by ~6%[1]; so where is the driver considered? These are only a few of the many variables the expression is missing. While it does represent peak floating-point throughput, it is not indicative of real-world performance, and that dooms a measurement of teraflops to be flawed.

Let’s say you multiplied a processor’s core count by 1,000 while holding every other variable constant. Per the expression, the TFLOPS rating would multiply by 1,000. The issue is that you would need far more cache to support so many cores, yet cache isn’t even considered in the expression; so even though the chip would be theoretically 1,000 times more powerful, it would be hindered by its painfully disproportionate cache size.

So how can you accurately measure a processor’s performance? We need to look at metrics that can actually be verified with testing. These include framerates and temperatures, which can be observed via benchmarking, and latency, which can be examined with an LDAT. Metrics observed in the real world are significantly more reliable and indicative of computational performance; theoretical values such as teraflops are unreliable because they are never validated against real workloads.

Source: [2]

Sources:

[1] https://www.youtube.com/watch?v=1w9ZTmj_zX4&ab_channel=HardwareUnboxed

[2] https://www.youtube.com/watch?v=OCfKrX15TOk&ab_channel=JayzTwoCents

[3] https://www.tomsguide.com/news/ps5-and-xbox-series-x-teraflops-what-this-key-spec-means-for-you

Is Your Phone Really Listening, or is it just Smart Advertising?

Have you ever had that eerie feeling that your phone is listening to your conversations? Has it ever happened that you’re making ice skating plans with your friends, and the moment you open Instagram, your eyes lay upon an ice skating ad? It has happened to me, and I’m sure I’m not alone. But amidst the bewilderment, the question lingers: Is our phone genuinely listening to us, or is it all just a series of bizarre coincidences?

Although Apple claims that it doesn’t listen to users, voice assistants such as Siri and Alexa listen for wake words such as “Hey Siri” and “Alexa” and record the user’s speech, contributing to the creation of a user profile for targeted advertisements. A user profile includes demographics, browsing history, online purchases, social media interactions, app usage, and much more. Additionally, ad networks buy data from many sources and track a user’s online activity. They seem to know everything about us – our age, gender, likes, dislikes, location, hobbies, and even the time we spend on different websites. Using profile data and algorithms, advertisers effectively target specific audiences for their ads. Sometimes an ad isn’t completely in line with the user’s preferences, but a customization process works to align ads as precisely as possible with the user’s interests.

I mentioned that Apple claims it doesn’t listen to users; however, that claim is contradicted by a report revealing that Siri can “sometimes be mistakenly activated and record private matters,” raising privacy concerns. For the most part, the data gathered by advertisers is used anonymously to respect privacy, but it’s essential to read the terms and conditions before agreeing to them.

Because of these specific ads, one gets the impression that their phone is actively listening to them 24/7, but it’s mostly due to data collection and network algorithms. Sometimes confirmation bias – the tendency of individuals to embrace information that aligns with their opinions and ignore information that does not – plays a role here. For example, if you’re talking about chocolate ice cream and receive an ad about it, you’ll instantly think your phone has been listening to you all this time; but when you get an ad unrelated to your conversation, you disregard it or don’t notice it at all.

In conclusion, while your phone does listen to you, it does so through voice assistants and mostly in harmless ways. So the next time you experience the ice skating situation, you’ll know the reasons behind it.

Alef Aeronautics

Flying cars become a reality – FAA approves

What the world once considered something out of a sci-fi movie might just become reality. Alef Aeronautics, an automotive aviation company, has been working on its flying car for the past 7-8 years and has without a doubt left not only the world of flight and travel but also the general public in awe. The electric vehicle is expected to hit the American skyline in 2025, and preorders begin at a whopping $300,000 (USD).

As per a report from Times Now, the company has already started accepting pre-orders and deposits for the vehicle, though it will only be delivered by 2025. The design is urban and futuristic, and the car itself is sustainable. CEO Jim Dukhovny backs this claim: “We’re excited to receive this certification from the FAA. It allows us to move closer to bringing people an environmentally friendly and faster commute, saving individuals and companies hours each week. This is one small step for planes, one giant step for cars.”

Not only is this a huge step in technological advancement, but it is also an insight into the future. As several sources report, the Federal Aviation Administration (FAA) has approved and certified this yet-to-be-released automobile. With its vertical take-off stance, the “Model A” can currently carry two people and travel about 200 miles.

Alef’s newest development and the FAA’s approval are only stepping stones towards a more sustainable and safer future.

Defeating Time: A Breakthrough in Aging

Something you can’t see or hear until years go by. Something you recognize as simple and yet impossible to avoid. Something that is known as both the cruelest and most beautiful law in all of nature. Something that neither the richest nor poorest person can escape from. That something is time. 

Throughout history, humans have been able to conquer just about everything, from minuscule problems to global affairs. However, with all of our minds combined, we have still failed to defeat the toughest opponent of all: time. Since what seems like the origin of the universe, it has appeared to be the one unstoppable force that nobody could fight.

That is, until 2022. While this year beckoned the end of the COVID-19 pandemic, it also brought news of a study conducted by David Sinclair, a molecular biologist who has spent the vast majority of his career (twenty years) searching for ways to reverse aging and undo time in the process. While the beginning of his journey was unsuccessful, he didn’t give up.

The study took two different mice (siblings born from the same litter) and genetically altered one of them to make it considerably older, which was a marked success. While this alone is not indicative of a reversal in aging, it does raise an important question: if time can be sped up, can it also be slowed down, or even undone altogether? But before we get to that, we need to understand just how the mice were genetically altered, and why.

Image credit: https://www.cnn.com. Depiction of two mice from the same litter appearing drastically different in age.

Many believe that aging is caused by cell damage, but that’s not exactly accurate. It is one of the reasons, yes, but not the main cause. Instead, we should look at the heart of the matter: the epigenome. It determines what each cell becomes and how it works, an instruction manual of sorts for each cell. When the epigenome malfunctions, the cells’ “instructions” are lost, and the cells fail to continue functioning properly.

So, Sinclair utilized gene therapy to restore the cells’ instructions, and the results were shocking. Sinclair was able to demonstrate success not only in accelerating aging but also in reversing it, by nearly 60%. What’s more, this appears to be limitless, with Sinclair noting that “[he’s] been really surprised by how universally it works. [He and his team] haven’t found a cell type yet that [they] can’t age forward and backward.”

This expands beyond mice: it has already been utilized to reverse aging in non-human primates through the use of doxycycline, an antibiotic with gene reprogramming potential, with rapid success. There has even been some human experimentation, with gene therapy being done on human tissues in lab settings. 

The ability to reverse aging across the board brings up more than just stopping time; it also raises the possibility of halting sicknesses related to aging. After all, these illnesses (like dementia and Alzheimer’s, among others) are caused by cell malfunction. If the reversal of aging is potent enough, it could also undo these illnesses.

With the potential to halt aging and enable people to live into their hundreds without fear of age-related illnesses, this research opens up countless possibilities. If we can already undo aging on a small scale, imagine what the future ten, fifty, or even a hundred years from now might hold.

  • https://www.cell.com/cell/fulltext/S0092-8674(22)01570-7
  • https://time.com/6246864/reverse-aging-scientists-discover-milestone/
  • https://www.cnn.com/2022/06/02/health/reverse-aging-life-itself-scn-wellness/index.html