Technology

Tiny but Mighty: UC Berkeley’s Micro-Robot Takes Flight with Magnetic Power

There’s a popular saying that a little goes a long way, and the University of California, Berkeley’s newest robot exemplifies it. Weighing just 21 milligrams and measuring only 9.4 millimeters across, it is the smallest robot in the world capable of controllable flight.

Inspired by the movements of bumblebees, Liwei Lin, professor of mechanical engineering at UC Berkeley, aimed to create a robot that could mimic their precision, stating that “Bees exhibit remarkable aeronautical abilities, such as navigation, hovering, and pollination, that artificial flying robots of similar scale fail to do.” Flight in robots typically requires motors, propellers, and electronics for flight control, components that are nearly impossible to cram into such a minuscule frame. This robot, however, is powered by external magnetic fields: its body is built to resemble a propeller and carries two small permanent magnets with opposing polarities, which together provide the lift needed to take off.

Lau, Adam. Campus professor Liwei Lin holds the robot, which is able to pick up and distribute pollen and nectar when flown into flowers. DailyCal.org, 2025.

Magnetic force arises from moving electric charges, which create an invisible field that can attract or repel other magnetic materials. In this design, an external field generated by an electromagnetic coil alternately attracts and repels the magnets inside the robot, spinning its propeller-shaped body and producing lift. By varying the strength of that field, researchers can precisely steer the robot’s flight path.
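For a rough sense of the physics, the torque a field B exerts on a magnetic dipole m is τ = m × B (a textbook relation, not a figure from the paper), so a stronger coil field twists the onboard magnets harder and spins the propeller body faster. A minimal Python sketch, with made-up magnitudes standing in for the robot’s actual values:

```python
# Illustrative only: torque on a magnetic dipole in an external field,
# tau = m x B. The magnitudes below are placeholders, not measured values.
import numpy as np

m = np.array([0.0, 1e-6, 0.0])   # dipole moment of an onboard magnet (A*m^2, assumed)
B = np.array([5e-3, 0.0, 0.0])   # external field from the coil (tesla, assumed)

tau = np.cross(m, B)             # torque vector that spins the propeller body
print("torque (N*m):", tau)      # doubling |B| doubles the torque on the magnet
```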

With such a small frame, the possibilities are wide-ranging. One of the most promising applications, highlighted by Wei Yue, co-author of the study and Ph.D. candidate, is artificial pollination. “This flying robot can be wirelessly controlled to approach and hit a designated target, mimicking the mechanism of pollination as a bee collects nectar and flies away,” states Lin. This could be instrumental in supplementing global pollination, since populations of bees, the world’s leading pollinators, have been steadily declining. Produced in sufficient numbers, these robots could help counteract that decline. And that is just one of many uses. The next smallest flying robot is about 2.8 cm, roughly three times the size of the UC Berkeley model. The robot’s minuscule form makes it well suited to rescue situations, letting it squeeze into spaces previously deemed too small to enter. It could also be useful in medicine, with Yue stating that “They could potentially be used in minimally invasive surgery because we could inject a number of them into the body and have them cooperate together to form stents, ablate clots or do other tasks.”

Lau, Adam, and Berkeley Engineering. The robot was designed to mimic the flight behavior of insects like bumblebees. UC Berkeley News, 2025.

Because of the frame’s size, the robot cannot adjust its movements in real time; on-board sensors simply do not fit. It therefore cannot adapt to unexpected changes or obstacles in its flight path, so events like strong wind or rain can knock it off course. The UC Berkeley team plans to develop the technology further, with Yue stating, “In the future, we will try to add active control, which would allow us to change the robot’s attitude and position in real time.” Another drawback is that the magnetic field required to lift the robot is quite strong. The team hopes to address this by shrinking the robot to about 1 mm, making it light enough to be driven by weaker fields, such as those carried by radio waves.

Innovation knows no bounds with UC Berkeley’s new smallest robot. Despite the challenges, this robot shows that big things can truly come from small packages. From saving the environment, to enhancing medical practices, to performing rescue missions, the possibilities are endless. 

References:

Jacobs, Skye. “Miniature Robot Takes Flight Using Magnetic Fields, No Onboard Power.” TechSpot, 3 Apr. 2025, www.techspot.com/news/107394-miniature-robot-takes-flight-using-magnetic-fields-no.html. Accessed 11 Apr. 2025.

Manke, Kara. “UC Berkeley Engineers Create World’s Smallest Wireless Flying Robot – Berkeley News.” Berkeley News, 28 Mar. 2025, news.berkeley.edu/2025/03/28/uc-berkeley-engineers-create-worlds-smallest-wireless-flying-robot/. Accessed 11 Apr. 2025.

Trovato, Roman. “UC Berkeley Engineers Create World’s Smallest Wireless Flying Robot.” The Daily Californian, 2 Apr. 2025, www.dailycal.org/news/campus/research-and-ideas/uc-berkeley-engineers-create-world-s-smallest-wireless-flying-robot/article_be09b0eb-5f5e-48a4-892c-b92cda6064ec.html. Accessed 11 Apr. 2025.

Proton-Coupled Energy Transfer Deciphered: High-Pressure Research Reveals Key Mechanisms

In a groundbreaking study, a team of researchers has uncovered new mechanisms of proton-coupled electron transfer (PCET), a fundamental process underlying life-sustaining reactions such as cellular respiration and photosynthesis. Using an innovative high-pressure technique, the team successfully distinguished between two key reaction mechanisms, paving the way for advances in energy conversion and storage technologies.

The Balancing Act of Electrons and Protons

Redox reactions, which involve the transfer of electrons between molecules, are critical to both natural and industrial processes. However, electron transfer alone can create energetically unfavorable charge imbalances. Nature’s solution? Coupling electron transfer with the movement of positively charged protons. As the researchers explain, “This proton-coupled electron transfer (PCET), as it is known, does not produce any change in charge—the most efficient way for a redox reaction to occur.”

But how exactly do these transfers occur? There are two possibilities: either electrons and protons move simultaneously in a “concerted” mechanism, or they transfer separately in a stepwise fashion. “To be able to optimize these processes, we need to know the exact mechanisms,” says Professor Ivana Ivanović-Burmazović. “Before now, however, there has been no direct method for differentiating the two alternatives with certainty. Our work set out to remedy this.”

Pressure Yields the Answer

The research team, led by Professor Ivana Ivanović-Burmazović of LMU Munich and Professor Dirk Guldi from FAU Erlangen-Nürnberg, investigated the influence of pressure on a light-induced reaction in a photosensitive molecule. By applying pressures of up to 1,200 atmospheres (atm), they observed how the reaction rate changed or was unaffected under extreme conditions. “If high pressure—in the experiment, up to 1,200 atmospheres—is applied and the reaction rate remains unchanged, it is a concerted reaction,” explains Ivanović-Burmazović. “When electrons and protons are transferred simultaneously, charge of reacting species does not change and neither does the associated solvation sphere—that is, the cluster of solvent molecules surrounding the molecules. Therefore, pressure has no influence on reaction rate—a clear sign of a concerted mechanism.” Conversely, if the rate changes, this points to changes in the charge and to a change in the volume of the solvation sphere—indicating a stepwise process.
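The logic of the experiment can be stated compactly with standard transition-state theory (a textbook relation, not an equation quoted from the paper itself): the pressure dependence of a rate constant $k$ is governed by the activation volume $\Delta V^{\ddagger}$,

$$\left(\frac{\partial \ln k}{\partial p}\right)_T = -\frac{\Delta V^{\ddagger}}{RT}.$$

A concerted transfer leaves the charges, and hence the solvation sphere, unchanged, so $\Delta V^{\ddagger} \approx 0$ and the rate is pressure-independent; a stepwise transfer creates charged intermediates, giving a nonzero $\Delta V^{\ddagger}$ and a pressure-dependent rate.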

Original diagram explaining the pressure-based distinction between concerted and stepwise mechanisms.

Surprising Control Over Reaction Pathways

The team’s findings went beyond mere observation. “By increasing the pressure, we managed to steer the reaction from a stepwise mechanism toward a concerted mechanism,” says Ivanović-Burmazović. This level of control opens up new possibilities for designing and optimizing chemical processes.

Implications for Energy and Beyond

The study’s results have potential for practical applications. As the authors emphasize, “The new findings are highly significant for numerous research areas that deal with the motion of electrons and protons. They not only offer new insights into fundamental chemical processes, but could also help advance new technologies concerned with the conversion and storage of chemical energy—such as redox catalysis for the generation of solar fuels or for hydrogen production.”

Example of high-pressure reactors (Optimus Instruments)

Looking Ahead

The team’s innovative use of high-pressure techniques sets a new standard for studying complex reaction mechanisms. As researchers continue to explore the intricacies of PCET, the findings could lead to breakthroughs in fields as diverse as biochemistry, materials science, and renewable energy.

For now, one thing is clear: the connection between high-pressure science and molecular chemistry has raised our understanding of the building blocks of life and energy to a new level.

Citations:

Langford, D., Rohr, R., Bauroth, S. et al. High-pressure pump–probe experiments reveal the mechanism of excited-state proton-coupled electron transfer and a shift from stepwise to concerted pathways. Nat. Chem. (2025). https://doi.org/10.1038/s41557-025-01772-5

Ludwig-Maximilians-Universität München. “Proton-coupled electron transfer: Deciphered with high pressure.” ScienceDaily. ScienceDaily, 21 March 2025. <www.sciencedaily.com/releases/2025/03/250321121450.htm>.

High pressure reactors. (2024, June 27). Optimus Instruments. https://optimus.be/subject/high-pressure-reactors/

Italian Scientists “Freeze” Light to Make a Supersolid for the First Time

Introduction:

When most people think of light, they picture rays of sunlight or what appears when they flip a light switch. They certainly don’t think of it as a solid, and until now, they had no reason to.

On March 5, 2025, Italian researchers published a paper in the journal Nature describing how they “froze” laser light into a supersolid with extraordinary properties.

A supersolid is a state of matter whose particles condense into a crystalline solid yet flow as if they were a liquid without viscosity (internal friction). To form one, matter must be cooled to nearly absolute zero. According to Bob Yirka, “a supersolid is a seemingly contradictory material- it is defined as rigid, but also has superfluidity, in which a liquid flows without friction.”

History of Research in Supersolids:

Supersolids were first predicted in the 1960s and first observed in 2017, but only in ultracold atomic gases.

They were also observed in 2024 by physicists in China. To form that supersolid, the scientists used a compound whose atoms sit on triangular lattices; in a magnetic field, the atoms all spin the same way, but outside one, each atom tries to adopt a spin opposing its neighbors’. The triangular arrangement matters because it limits how many ways the atoms can orient themselves. Given this geometry, researchers predicted that the material could become a supersolid under the proper conditions, so they placed it in an apparatus that let them observe the atoms’ spin states and transitions. After comparing several results against different theoretical calculations, they concluded that it was indeed a supersolid.

How Light was Turned into a Supersolid:

At CNR Nanotec, the Institute of Nanotechnology in Lecce, Italy, Antonio Gianfrate and Davide Nigro led a team of scientists with the goal of “freezing” light. The researchers did not simply lower the temperature, however. Instead, they turned to quantum techniques, using a photonic semiconductor platform that guides photons much as ordinary semiconductors conduct electrons.

To make the supersolid, the scientists fired their laser at a piece of gallium arsenide patterned with special ridges. When the light hit the ridges, it interacted with them to form polaritons (hybrid light-matter particles), which the ridges constrained, forcing them into a supersolid state.

As the number of photons (particles of light) increased, satellite condensates formed, a first indication of supersolidity. Because these condensates had opposite wavenumbers at the same energy, together with a specific spatial structure, the supersolid state was confirmed.
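To see why condensates at opposite wavenumbers signal a crystal-like structure, note that superposing two plane waves with wavenumbers +k and -k produces a standing wave whose density is periodically modulated in space, the hallmark of a solid. The short numpy sketch below illustrates that textbook interference effect; it is not the team’s actual analysis:

```python
# Superpose plane waves exp(+ikx) and exp(-ikx) and look at the density.
# A spatially periodic |psi|^2 is the crystal-like modulation that,
# combined with frictionless flow, defines a supersolid.
import numpy as np

k = 2.0                                          # wavenumber (arbitrary units)
x = np.linspace(0, 10, 1000)
psi = np.exp(1j * k * x) + np.exp(-1j * k * x)   # two condensates at +/- k
density = np.abs(psi) ** 2                       # = 2 + 2*cos(2kx): periodic in x

print("density oscillates between",
      density.min().round(3), "and", density.max().round(3))
```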

Conclusion:

This recent breakthrough is a big step forward for research into supersolids and the quantum world. In the future, supersolids could be crucial for building more stable quantum computers and improving energy storage and materials.

The ability to “freeze” particles of light may sound like something out of a science fiction book, but in 2025 it became reality, and this revolutionary experiment will make supersolids easier to study, letting scientists continue unlocking the secrets of the quantum realm.

References:

ET Online. (2025, March 12). Scientists freeze light: Researchers discover a rare state of matter where it flows like liquid but holds shape like a solid. Economic Times. Retrieved March 15, 2025, from https://economictimes.indiatimes.com/news/new-updates/scientists-freeze-light-researchers-discover-a-rare-state-of-matter-where-it-flows-like-liquid-but-holds-shape-like-a-solid/articleshow/118928851.cms

HT News Desk. (2025, March 14). Scientists manage to freeze light, convert it into a solid: Here’s how they did it. Hindustan Times. Retrieved March 15, 2025, from https://www.hindustantimes.com/world-news/us-news/scientists-manage-to-freeze-light-convert-it-into-a-solid-heres-how-they-did-it-101741943981846.html

Pine, D. (2025, March 13). Scientists turn light into a ‘supersolid’ for the 1st time ever: What that means, and why it matters. Live Science. Retrieved March 15, 2025, from https://www.livescience.com/physics-mathematics/scientists-turn-light-into-a-supersolid-for-the-1st-time-ever-what-that-means-and-why-it-matters

Yirka, B. (2024, January 29). The first observation of a material exhibiting a super solid phase of matter. Phys.org. Retrieved March 15, 2025, from https://phys.org/news/2024-01-material-supersolid-phase.html#google_vignette

Yirka, B. (2025, March 6). Laser light made into a supersolid for the first time. Phys.org. Retrieved March 15, 2025, from https://phys.org/news/2025-03-laser-supersolid.html#google_vignette

Could the New State of Matter be the Future of Quantum Computing?

Definitions

  • Fermions: particles that have mass and are one of matter’s two main building blocks.
    • Composite fermions: combinations of these fermions.
  • Superconductors: according to Cade Metz of The New York Times, these “are materials that conduct electricity without losing the energy they are transmitting.”
  • Majorana: a particle that is its own antiparticle.
    • Antiparticles: subatomic particles that have the same masses as their corresponding particles but opposite charges.

We’re all told growing up that the states of matter are solid, liquid, and gas. Later, plasma and Bose-Einstein condensates are added to that list. But what if a new state of matter had just been created, and it was being used to advance quantum computing?

On February 19, 2025, Microsoft published a research paper in the journal Nature announcing the Majorana-1 chip, a microprocessor built on a topological superconductor that yields particles which are neither solid, liquid, nor gas. Could this new state of matter be the key to the future of quantum computers?

History:

Thirty years ago, Jainendra Jain, a physicist at Penn State, pioneered the theory that the fractional quantum Hall effect can be described as a liquid of composite fermions. Under the right conditions, these composite fermions can form a superconductor, and theorists later predicted that such a superconductor could enclose a Majorana.

Image Source: Hu, C. (2022, September 7). IBM’s quantum computer. Popular Science.

Standard Versus Quantum Computing

Normal computers use “bits,” the 0s and 1s that make up data. Quantum bits, or qubits, by contrast, can be a 0, a 1, or a superposition, such as being a 0 and a 1 simultaneously. Together, these qubits drastically increase how quickly calculations can be made. In fact, if every computer in the world worked together, it would take decades to do what a quantum computer can do in just one day.
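As a minimal illustration of the bit-versus-qubit distinction (a generic textbook example, not Microsoft’s hardware), a qubit can be written as a two-component state vector whose squared amplitudes give the measurement probabilities:

```python
# A qubit as a normalized 2-vector: |psi> = a|0> + b|1>.
# Measuring yields 0 with probability |a|^2 and 1 with probability |b|^2.
import numpy as np

ket0 = np.array([1.0, 0.0])                 # behaves like a classical bit set to 0
plus = np.array([1.0, 1.0]) / np.sqrt(2)    # equal superposition of 0 and 1

for name, psi in [("|0>", ket0), ("|+>", plus)]:
    p0, p1 = np.abs(psi) ** 2
    print(f"{name}: P(0) = {p0:.2f}, P(1) = {p1:.2f}")
```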

One of the main challenges in quantum computing is interference, whether from the environment or from within the system itself. Interference can make qubits collapse into a definite state, becoming only a 1 or a 0, and thereby cause errors. That is where theorists bring in the strange Majorana particles.

Theorists believed that Majorana particles could make quantum computers more fault-tolerant. When two Majorana particles come together, they either make a whole fermion or nothing, acting like the 0s and 1s of a standard bit. Unlike most other qubits, however, as Jainendra Jain puts it, “the information here can be stored non-locally in a topological fashion.” In other words, the two Majorana particles making up a qubit can sit far apart; since neither holds all the information, local interference cannot flip the qubit. The qubits therefore do not lose their data, and the system can correct errors as it calculates, which could finally allow industries to put quantum computing to use.
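The “whole fermion or nothing” picture has a compact standard formulation (textbook Majorana algebra, not something specific to Microsoft’s device): two Majorana operators $\gamma_1, \gamma_2$ combine into one ordinary fermion mode,

$$c = \frac{\gamma_1 + i\gamma_2}{2}, \qquad n = c^\dagger c \in \{0, 1\},$$

and the occupation number $n$, which plays the role of the stored bit, is a joint property of both Majoranas, so no local disturbance of just one of them can read or flip it.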

The New Quantum Computer

In 1997, Alexei Kitaev, a Russian-American physicist, first proposed the idea of combining superconductors and semiconductors. Microsoft has pursued this idea in its longest-running research project, marrying the superconductors most quantum computers use with the strengths of the classical computer’s semiconductors.

Experimentalists have found that in the right conditions, such as being cooled to about four hundred degrees below zero, composite fermions can pair up to form a topological superconductor that contains the highly sought-after Majorana particles. 

This topological superconductor, also called a topoconductor, is the new state of matter they created. A topoconductor is formed when a superconductor and a semiconductor are cooled to extremely low temperatures and then tuned with a magnetic field.

Far from the future?

Even with this exciting news involving a new state of matter that could revolutionize quantum computing, people shouldn’t get too excited yet. In fact, Microsoft’s Majorana-1 chip doesn’t actually show that composite fermions work as qubits; as Jainendra Jain notes, it “focuses on Majorana particles in superconductor-semiconductor hybrid nanowires, not on Majorana particles in a composite-fermion superconductor.” It does show, however, that the measurements needed for a Majorana-based computer are possible, pointing a pathway to the future.

Also, the Majorana-1 chip is still in development. It is designed to hold up to one million qubits, but for now it contains only eight. Microsoft says it needs to reach one hundred qubits on the chip for commercial viability, predicts that milestone in the near future, and estimates the technology may be practical by 2030.

This technology could be used to advance research, improve energy efficiency, develop medicine, and much more. Some are still skeptical about it though because these computers are extremely powerful, so in the wrong hands the technology could pose a threat to national security.

Conclusion

While some scientists debate whether this technology should be developed at all, others are off to the races, with Microsoft, Google, Intel, IBM, and several foreign governments all trying to be the first to crack the code of quantum computing.

If Microsoft’s data is verified, it could revolutionize technology and make the future of science come much sooner than originally predicted. Whether one believes we should develop this technology or not, everyone can agree that these incredible leaps in technology showcase how much we have furthered our understanding of the quantum world. Now, we just have to wait and see what the future has in store for this exciting new technology. 

References:

Fiveable. (n.d.). Antiparticle. Fiveable. Retrieved March 4, 2025, from https://fiveable.me/key-terms/principles-physics-iii-thermal-physics-waves/antiparticle

Klebanov, S. (2025, February 21). New state of matter just dropped? Morning Brew. Retrieved March 4, 2025, from https://www.morningbrew.com/stories/2025/02/21/new-state-of-matter-just-dropped

Metz, C. (2025, February 19). Microsoft Says It Has Created a New State of Matter to Power Quantum Computers. The New York Times. Retrieved March 4, 2025, from https://www.nytimes.com/2025/02/19/technology/microsoft-quantum-computing-topological-qubit.html

Porschke, T. (2025, February 26). Solid, Liquid, Gas, Plasma… Topoconductor? The Log. Retrieved March 4, 2025, from https://www.thelogcchs.com/post/solid-liquid-gas-plasma-topoconductor

Unacademy. (n.d.). Fermions and Bosons. Unacademy. Retrieved March 4, 2025, from https://unacademy.com/content/upsc/study-material/physics/fermions-and-bosons/

WennersHerron, A., & Berard, A. (2025, February 26). Q&A: Will Microsoft’s quantum ‘breakthrough’ revolutionize computing? Penn State. Retrieved March 4, 2025, from https://www.psu.edu/news/research/story/qa-will-microsofts-quantum-breakthrough-revolutionize-computing

Beyond End-to-End: Unveiling the Quantum Threat to Encryption

If you’ve ever used WhatsApp or Instagram to communicate with friends and family, you’ll have noticed that your messages are “end-to-end encrypted”. At first glance, that sounds great: all your messages are safe and secure, you’d think.

However, not every encryption method is created equal, and with the rise of cyberattacks and ever more sophisticated technology, especially in quantum computing, one must exercise caution when choosing the right tools. But to understand the scale of the issue, we must first look at the algorithm that makes such a risk feasible in the first place.

Shor’s algorithm poses a major threat to the security of current industry-standard encryption methods like RSA and ECC, which rely on the difficulty of factoring large integers or computing discrete logarithms. That difficulty, however, holds only in the classical world of computing, where candidates must effectively be trialed one by one (exponential time), making such encryption almost impossible to break. A quantum computer, by contrast, can hold a superposition of exponentially many states and use interference to extract the answer, letting Shor’s algorithm run in polynomial time. In simpler terms, most “asymmetric” encryption methods are at risk.
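For a feel of why period finding breaks factoring-based encryption, here is the classical half of Shor’s algorithm sketched in Python. The period (multiplicative order) is found by brute force here, which is exactly the step a quantum computer would accelerate exponentially; the toy modulus N = 15 is illustrative only:

```python
from math import gcd

def order(a, n):
    """Multiplicative order of a mod n, found by brute force.
    (This is the step a quantum computer speeds up in Shor's algorithm.)"""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical_postprocess(n, a):
    """Classical reduction: recover a factor of n from the order r of a mod n."""
    if gcd(a, n) != 1:
        return gcd(a, n)          # lucky guess already shares a factor
    r = order(a, n)
    if r % 2 == 1:
        return None               # odd order: try another a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None               # trivial square root: try another a
    return gcd(y - 1, n)

print(shor_classical_postprocess(15, 7))  # -> 3, so 15 = 3 * 5
```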

This has a domino effect on symmetric encryption, since most symmetric keys are exchanged between users through an asymmetric exchange process. If Shor’s algorithm compromises that exchange, all data encrypted with the key, including your texts and photos, could potentially be decrypted.

Whilst this threat isn’t currently feasible for ordinary individuals, since quantum computers are costly, sophisticated pieces of technology, many countries and research groups are becoming increasingly aware of their uses and have built their own. There is thus an imminent risk that quantum capabilities could escalate cyberattacks and transform the digital landscape as we know it.

Moreover, some authorities and individuals are adopting a technique called “Harvest Now, Decrypt Later”: accumulating databases of encrypted information in the hope that it can one day be decrypted with sufficiently powerful quantum computers.

In response, many companies and researchers (including NIST) have taken measures to strengthen encryption and implement quantum-safe protocols. One example is the open-source messaging platform Signal, which introduced the PQXDH encryption protocol, designed to resist current advances in quantum attacks on encryption. Signal notes, however, that the protocol will need upgrading as future findings and vulnerabilities demand additional security adjustments. The whitepaper for the protocol is linked in the sources below.
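The core idea behind PQXDH-style “hybrid” designs is easy to sketch: derive the session key from both a classical elliptic-curve secret and a post-quantum KEM secret, so an attacker must break both. The Python below is a conceptual sketch only; the two input secrets are hypothetical stand-ins, and the real PQXDH specification uses HKDF with domain separation rather than a bare hash:

```python
# Conceptual hybrid key derivation: the session key depends on BOTH
# shared secrets, so breaking the classical ECDH alone (e.g. with a
# quantum computer) reveals nothing without also breaking the KEM.
import hashlib

def derive_session_key(ecdh_secret: bytes, kem_secret: bytes) -> bytes:
    """Simplified combiner; PQXDH itself specifies HKDF over the secrets."""
    return hashlib.sha256(b"hybrid-demo" + ecdh_secret + kem_secret).digest()

# Hypothetical placeholder secrets, for demonstration only:
key = derive_session_key(b"\x01" * 32, b"\x02" * 32)
print(key.hex())
```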

Conclusion

Such advancements clearly pose a monumental risk to information security. Although it’s easy to be pessimistic, I believe the response to them is a step in the right direction towards safeguarding our digital security and communication. As individuals and organisations alike, we must take proactive measures:

  • Stay Informed: Keep abreast of developments in quantum computing and its implications for encryption. Awareness is key to making informed choices.
  • Quantum-Safe Encryption: Consider adopting encryption methods that are resilient to quantum attacks. New cryptographic standards, often referred to as Post-Quantum Cryptography (PQC), are being developed to address this specific concern.
  • Advancements in Technology: Support and invest in technologies that stay ahead of the curve (especially open-source projects), continually updating encryption methods to withstand emerging threats.

Sources

https://csrc.nist.gov/projects/post-quantum-cryptography/
https://statweb.stanford.edu/~cgates/PERSI/papers/MCMCRev.pdf
https://purl.utwente.nl/essays/77239/
https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/security/encryption/what-types-of-encryption-are-there/#:~:text=There%20are%20two%20types%20of,used%20for%20encryption%20and%20decryption.
https://signal.org/docs/specifications/pqxdh/

The Secret to Clustering? Unveiling the Mystery Behind Spinal Cancer Clusters

According to the CDC, over 600,000 people in the United States died of this disease in a single year, making it the nation’s second leading cause of death. Upon being diagnosed, one faces a lifetime of stress and a load of rigorous treatment, often chemotherapy, in an attempt to kill the malignant cells before the illness spreads to the rest of the body. It doesn’t always stop there: many patients progress to later stages that may require more treatment, a verdict of “X months to live,” or the first followed by the second. This illness is cancer.

There are over a hundred and fifty types of cancer, ranging from head to toe and everywhere in between. Some cancers are heavily influenced by sex (such as breast cancer), others by age (such as prostate cancer), and so on. The one worth highlighting here for the way its cancerous cells cluster is spinal cancer. Although most, if not all, cancers can cluster their cells in specific areas, cancer cells turn up three to five times more often in the spine than in the long bones of the limbs.

Without the modern medicine and technology needed to probe deeper, scientists long considered this specific clustering of cells a medical mystery.

But we have access to that technology now. At least, researchers from Weill Cornell Medicine and the Hospital for Special Surgery in New York do. With these resources, they were able to figure out what causes these clusters to emerge: vertebral skeletal stem cells in the spine. What makes these stem cells unique is that they produce a protein that attracts tumor (cancerous) cells toward them.

So what can be done with this? Excellent question. 

By identifying what causes this clustering, researchers can work on targeting these vertebral skeletal stem cells to disrupt their function, that is, their attraction of cancerous tumor cells.

That seems like the perfect plan, no? Yet when researchers ran an experiment in mice targeting these cells, it didn’t completely eliminate bone formation in that area, nor, by extension, the cancerous cells there. This begs the question: is there a second stem cell type we aren’t accounting for? Or is it something else?

Sources

  • https://www.cdc.gov/nchs/fastats/leading-causes-of-death.htm
  • https://www.washingtonpost.com/science/2023/11/28/new-stem-cell-spine-cancer/
  • https://pubmed.ncbi.nlm.nih.gov/37704733/

Exploring the Science Behind Allergies

As alarming as it sounds, even a lick of peanut butter can be life-threatening. Allergies: what are they? If the peanut in peanut butter were harmful to everyone, we wouldn’t call the reaction an allergy. A substance counts as an allergen only when it provokes an abnormal reaction in a select few.

So the question arises: how do I know whether I’m allergic, and what I’m allergic to?

Allergies come in many forms, with triggers ranging from water to nickel coins. One can’t predict which substances will react badly with one’s body without ever being exposed to them. This is why allergy tests are done.

Only a medical professional can confirm your allergies, unless something you have eaten or been exposed to previously didn’t sit right with you. Symptoms of an allergy range from a runny nose to breathlessness and, of course, the scary, itchy hives.

Let’s take a look at what the doctor is doing behind the scenes, shall we?

An immunologist or allergist usually performs the test, which involves a skin prick or a patch test. The image above, from Westhillsaaa, shows a medical professional checking a patient’s skin for unusual reactions to various triggers.

Tests range from introducing allergens into the skin via injection to drawing a blood sample. The choice of test depends on the patient’s data, including medical history, current condition, and suspected triggers.

Something to note about allergies is that a person can outgrow them with time. This is most common in children outgrowing food allergies, but some allergies, such as those to pollen or medications, persist for a long time or even for life.

Although you can’t simply get rid of an allergy that persists into adulthood, you can take prescribed medications and tests to reduce complications.

A common treatment is desensitization, which builds tolerance to an allergen by periodically exposing the body to it in small concentrations.

A personal suggestion: have an emergency action plan, including an EpiPen, ready just in case things go south after eating or reacting to something new.

And who’s to say that, at the rate medical technology is advancing, we might not one day have a permanent remedy for allergies? That’s a topic up for discussion.

500Hz Monitors: Who’s Buying Them?

When it comes to monitor refresh rates, higher is undoubtedly better. But alongside this idea runs a trend of diminishing returns, where each further increase in refresh rate matters less.

“Frametime” refers to the amount of time it takes for a screen to refresh its display once. There is an inverse relationship between frametime and refresh rate: as the refresh rate increases, the frametime decreases. More precisely, the frametime in milliseconds is f = 1000 ms / x Hz, where x is the refresh rate in hertz.

Notice that while upgrading from 60Hz to 144Hz improves frametime by 9.72ms, a further leap from 144Hz to 240Hz offers only a 2.78ms gain. This is why the transition from 144Hz to 240Hz is perceived as less noticeable than that from 60Hz to 144Hz. As refresh rates keep climbing, this trend of diminishing returns only grows more pronounced; the difference consumers perceive shrinks with each further upgrade.
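These deltas are easy to verify with a few lines of Python applying the formula above (the rate list is simply a convenient set of common refresh tiers):

```python
def frametime_ms(refresh_hz: float) -> float:
    """Frametime in milliseconds for a given refresh rate in Hz."""
    return 1000.0 / refresh_hz

rates = [60, 144, 240, 360, 500]
for lo, hi in zip(rates, rates[1:]):
    gain = frametime_ms(lo) - frametime_ms(hi)
    print(f"{lo}Hz -> {hi}Hz: frametime improves by {gain:.2f} ms")
# 60->144: 9.72 ms, 144->240: 2.78 ms, 240->360: 1.39 ms, 360->500: 0.78 ms
```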

Standing at the highest end of today’s monitor market are 500Hz displays. So the question arises: with a benefit so seemingly insignificant, who exactly is the target audience for these monitors?

Well, while diminishing returns are unavoidable, the degree to which consumers notice the improvement can vary. Picture two very different activities: typing in a document and playing a first-person shooter. When a user is typing, only a small portion of the monitor screen updates at a time. While playing a first-person shooter, on the other hand, nearly the entire screen is in constant motion and updates rapidly, creating far more disparity between consecutive frames. For this very reason, users whose activities rapidly update most of the screen will benefit the most from a 500Hz monitor.

(Most frames do not significantly change) (Source: TrevorTrac)

(Most frames do significantly change) (Source: MP1st)

Additionally, fully utilizing a 500Hz monitor requires a video game to run consistently at or above 500 fps (frames per second), and sustaining such framerates requires a higher-end system.

Furthermore, even the most powerful hardware can only run a handful of games above the 500fps threshold. Most titles that consumers play are recent releases; developers intended these games to run at 60fps on upper-middle level consumer hardware. Despite this, there are a few exceptions that can run at 500fps with reasonable settings. Video games such as Valorant, Minecraft, and older first-person shooters are among the select few which can fully utilize a 500Hz monitor.

The improvement from 360Hz to 500Hz is not nearly as significant as past generational leaps. But there remains a niche user base that tangibly benefits from 500Hz monitors: users whose games 1) frequently and significantly update most of the screen and 2) can consistently run at or above 500fps on their systems.

Quantum-Inspired AI Model Helps CRISPR Cas9 Genome Editing for Microbes


A team of scientists at Oak Ridge National Laboratory (ORNL) has embarked on a groundbreaking venture, leveraging quantum biology, artificial intelligence (AI), and bioengineering to improve the effectiveness of CRISPR Cas9 genome-editing tools. Their focus is on microbes, particularly those earmarked for modification to produce renewable fuels and chemicals, which present a unique challenge due to their distinct chromosomal structures and sizes.

Traditionally, CRISPR tools have been tailored for mammalian cells and model species, resulting in weak and inconsistent efficiency when applied to microbes. Recognizing this limitation, Carrie Eckert, leader of the Synthetic Biology group at ORNL, remarked, “Few have been geared towards microbes where the chromosomal structures and sizes are very different.” This realization prompted the ORNL scientists to explore a new frontier in the quest to enhance the precision of CRISPR tools.

The team’s journey took an unconventional turn as they delved into quantum biology, a field at the intersection of molecular biology and quantum chemistry. Quantum biology explores the influence of electronic structure on the chemical properties and interactions of nucleotides, the fundamental building blocks of DNA and RNA, within cell nuclei where genetic material resides.

To improve the modeling and design of guide RNA for CRISPR Cas9, the scientists developed an explainable AI model named the iterative random forest. Trained on a dataset of approximately 50,000 guide RNAs targeting the genome of E. coli bacteria, the model took into account quantum chemical properties. The objective was to understand, at a fundamental level, the electronic distribution in nucleotides, which influences the reactivity and stability of the Cas9 enzyme-guide RNA complex.
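For a rough sense of what such a model looks like in code, the sketch below fits an ordinary random forest (a simplification; ORNL’s iterative random forest adds feature-selection iterations on top) to synthetic guide sequences. Every feature, guide, and target value here is a hypothetical placeholder, not the study’s data:

```python
# Minimal sketch: featurize guide RNAs and fit a random forest to predict
# cutting efficiency. The "electronic" descriptor is a toy stand-in for
# the quantum chemical properties the ORNL study actually computed.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

def featurize(guide: str) -> list[float]:
    """Toy per-guide features: base composition plus GC fraction."""
    counts = [guide.count(b) / len(guide) for b in "ACGT"]
    gc = counts[1] + counts[2]
    return counts + [gc]

rng = np.random.default_rng(0)
guides = ["".join(rng.choice(list("ACGT"), 20)) for _ in range(500)]
X = np.array([featurize(g) for g in guides])
y = X[:, -1] + 0.1 * rng.normal(size=len(X))   # synthetic target, demo only

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out guides:", round(model.score(X_te, y_te), 3))
```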

“The model helped us identify clues about the molecular mechanisms that underpin the efficiency of our guide RNAs,” explained Erica Prates, a computational systems biologist at ORNL. The iterative random forest, with its thousands of features and iterative nature, was trained using the high-performance Summit supercomputer at ORNL’s Oak Ridge Leadership Computing Facility.

What sets this approach apart is its commitment to explainable AI. Rather than relying on a “black box” algorithm that lacks interpretability, the ORNL team aimed to understand the biological mechanisms driving results. Jaclyn Noshay, a former ORNL computational systems biologist and first author on the paper, emphasized, “We wanted to improve our understanding of guide design rules for optimal cutting efficiency with a microbial species focus.”

Graphical abstract (source: https://academic.oup.com/nar/article/51/19/10147/7279034)

Validation of the explainable AI model involved CRISPR Cas9 cutting experiments on E. coli, using a large group of guides selected by the model. The results were promising, confirming the efficacy of the model in guiding genome modifications for microbes.

The implications of this research extend far beyond microbial genome editing. “If you’re looking at any sort of drug development, for instance, where you’re using CRISPR to target a specific region of the genome, you must have the most accurate model to predict those guides,” highlighted Carrie Eckert. The study not only advances the field of synthetic biology but also has broader applications in drug development and bioenergy research.

The ORNL researchers envision collaborative efforts with computational science colleagues to further enhance the microbial CRISPR Cas9 model using additional data from lab experiments and diverse microbial species. The ultimate goal is to refine CRISPR Cas9 models for a wide range of species, facilitating predictive DNA modifications with unprecedented precision.

The study, supported by the DOE Office of Science Biological and Environmental Research Program, ORNL’s Lab-Directed Research and Development program, and high-performance computing resources, signifies a significant leap forward in the quest to improve CRISPR technology. As Paul Abraham, a bioanalytical chemist at ORNL, remarked, “A major goal of our research is to improve the ability to predictively modify the DNA of more organisms using CRISPR tools. This study represents an exciting advancement toward understanding how we can avoid making costly ‘typos’ in an organism’s genetic code.” The findings hold promise for applications in fields ranging from bioenergy feedstock enhancement to drug development, marking a pivotal moment in the evolution of CRISPR technology.

Sources

https://doi.org/10.1093/nar/gkad736

Synthetic Biology: A Brave New World of Cures and Cautions

As a recent and ever-changing field of medicine and science, synthetic biology is paving the way for the future of medicine. Defined as a “research and engineering domain of biology where a human-designed genetic program is synthesized and transplanted into a relevant cell type from an extant organism” (A.M. Calladine, R. ter Meulen, 2013), synthetic biology offers possible solutions to some of society’s most pressing medical issues. Through DNA sequence modification and genome editing, scientists have been able to edit genetic material in living organisms with tools such as CRISPR (clustered regularly interspaced short palindromic repeats). This ability allows scientists to give organisms genetic tools that nature has not yet provided. CRISPR also allows for the creation of “living therapeutics” and the introduction of immune cells into the human body.

So, what does this all mean? Well, synthetically creating genetic tools has already allowed for a breakthrough in different areas of production, such as the ability for silkworms to produce spider silk, as well as genetically engineered food, such as cheese, plant-based meat, etc., some of which are already available on a market scale. This provides society with a more sustainable way of creating different materials, which may be necessary as we continue to experience the impacts of consumerism on our planet’s environment. Living therapeutics and immune cells can help treat patients with various diseases, including multiple forms of cancer, providing them with a better chance of recovery and survival. Synthetic biology also assisted in the mass production of certain COVID-19 vaccines by manufacturing the SARS-CoV-2 genome sequence. 

It’s clear that an abundance of benefits derives from synthetic biology. As with most technological advancements, however, there is also a profusion of risks, many of them ethical and extremely dangerous. According to the University of Oxford, synthetic biology, although promising, gives biologists a concerning way of “playing god.” Misused, it could destroy existing ecosystems and undermine the crucial distinction between living organisms and machines; losing that distinction could be catastrophic for how humans value different organisms, and it raises the ethical concern of prioritizing machines and technology over nature and living things. Synthetic biology also introduces the risk of synthesizing known human pathogens, such as influenza or smallpox, which could be released in far more dangerous forms than they currently take. Although some of these risks are unlikely, the damage they could inflict would be devastating.

When considering the sad reality of human greed, it is essential to question whether the findings of synthetic biology will continue to be used for good. In the wrong hands, the technology could decimate multiple existing species, ultimately jeopardizing the balance of our ecosystems. Synthetic biology also poses a genuine risk of bioterrorism, as hazardous, genetically mutated organisms could be created and maliciously released. Control of this technology is concentrated in richer first-world countries, creating inequality in access and usage. This gives certain countries, such as the U.S., an extensive scientific advantage that could be used at the expense of other nations.

What the future of synthetic biology holds remains to be determined, but it is imperative that both the benefits and drawbacks be considered. Naturally, we hope synthetic biology continues to be used for the greater good of humankind, but that could change easily and swiftly. Especially as we find ourselves amid multiple ethical, moral, and environmental crises, we must stay aware of the information we consume and promote, specifically regarding the ongoing evolution of technology and science.

Sources