Technology

Stopping the Energy Crisis Clock?

Less than 120 years. That’s roughly how long we have before all nonrenewable energy sources are fully exhausted. Although that sounds like a long way off, a whole lifetime away in fact, this is a problem that needs to be addressed.

Part of the reason this issue is so pressing is the exponential growth of fossil fuel usage. Compared to 1950, roughly 70 years before this article was written, gas consumption has risen nearly twentyfold, a staggering increase.

Image Credit: https://ourworldindata.org/fossil-fuels, depicting fossil fuel consumption rates over the years.

It’s not unrealistic to say that, as time progresses, consumption of fossil fuels will continue to rise, further depleting their supply and hastening their end.

Thankfully, people have been paying attention to this issue (dubbed the energy crisis). Many have even made adjustments of varying degrees, from installing solar panels and forgoing nonrenewable resources (a relatively minor contribution) to proposing bills such as the Green New Deal, which would implement a gradual nationwide transition to renewable resources (a massive commitment). These are just two examples of how humans have been attempting to solve this problem.

Unfortunately, many of these potential methods come with their own innate issues. Renewable energy sources run the risk of being either too costly (such as hydroelectric and solar) or too unreliable (such as wind and solar, which are only available around 30% of the time).

In addition, there has been extreme pushback against large legislation meant to delay (if not outright prevent) the energy crisis, such as the Green New Deal, with some citing that it is too expensive (carrying a $93 trillion price tag, twelve zeroes’ worth) and would submerge the U.S. in debt it could never escape. With energy crisis efforts both large and small being fought against, it begs the question of whether there is any renewable energy source that can check all (or even most) of the boxes and make everyone happy.

That question was answered positively at the end of 2022. Beyond ushering in a new year, December also welcomed a new (or rather, improved) renewable energy source: nuclear energy. For the first time in the history of its experimentation, nuclear fusion (one of the processes for producing nuclear energy) reached a net gain, producing more energy than it received.

This discovery checked many boxes at first glance: nuclear energy is reliable (its fuel is abundant in the environment) and exceptionally efficient, with a single gram of uranium able to produce as much energy as a ton (2,000 pounds) of coal. It is also a clean source of energy, a distinction that nonrenewable energy fails to achieve.
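
For readers who like numbers, a rough back-of-envelope check of that gram-versus-ton comparison takes only a few lines of Python. The energy-density figures below are commonly quoted approximations rather than values taken from this article’s sources:

```python
# Rough back-of-envelope check of the "gram of uranium vs. ton of coal" claim.
# The energy densities are commonly quoted approximations, not measured values
# from this article's sources.
FISSION_ENERGY_PER_GRAM_U235 = 8.2e10   # joules released by fully fissioning 1 g of U-235
COAL_ENERGY_PER_KG = 2.4e7              # joules per kilogram of typical coal
SHORT_TON_KG = 907                      # one US (2,000 lb) ton in kilograms

coal_equivalent_kg = FISSION_ENERGY_PER_GRAM_U235 / COAL_ENERGY_PER_KG
print(f"1 g of fully fissioned U-235 ~ {coal_equivalent_kg / SHORT_TON_KG:.1f} tons of coal")
# Real reactors fission only a fraction of their fuel, which is why the
# comparison is often quoted more conservatively as "about a ton of coal".
```

The exact ratio depends heavily on the assumptions, but even conservative numbers land in the tons-of-coal-per-gram range, which is the point the comparison is making.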

However, as with any discovery, the benefits come with drawbacks, and this was no exception. The first major issue is the extremely advanced technological nature of fusion, which makes it tough to master and replicate. The second ties into the first: it is simply too expensive to sustain because of the immense amount of energy required. Finally, there is the stereotypical reason people fear nuclear energy: the danger. Although this fear is grossly exaggerated, there is some truth to it. Both accidents (like Chernobyl and Fukushima) and long-lived radioactive waste that must be stored securely are issues that simply cannot be ignored.

Although nuclear energy does have some (rather large) issues, it’s important not to forget its boons as well. From its efficiency to its reliability and cleanliness, fusion is not a renewable energy source to underestimate. Regardless of which side you are on, nuclear energy raises an important question: what could be possible over the next century of technological innovation?

  1. https://group.met.com/en/mind-the-fyouture/mindthefyouture/when-will-fossil-fuels-run-out
  2. https://ourworldindata.org/fossil-fuels
  3. https://www.forbes.com/sites/judeclemente/2019/04/29/five-practical-problems-for-the-green-new-deal/?sh=5892345f3e8a
  4. https://stacker.com/science/22-biggest-scientific-discoveries-2022
Image Credit: Kevin Ku, Unsplash

The AI Paradox: Enhancing Cybersecurity while Raising Concerns

In today’s interconnected and digitized world, cyberattacks such as malware, phishing, and data breaches occur constantly, increasing the demand for cybersecurity professionals and for advanced technologies such as artificial intelligence. However, AI acts as both a defender of and a challenger to cybersecurity.

Cybersecurity involves analyzing patterns from massive amounts of data to protect systems and networks from digital attacks and identify threats. The traditional approach relied heavily on signature-based detection systems, which were effective against known threats but incapable of detecting new or unknown ones, resulting in frequent breaches. AI-based solutions, by contrast, use machine learning algorithms trained on vast amounts of historical and current data to detect and respond to new and unknown threats. According to Jon Oltsik’s “Artificial intelligence and cybersecurity: The real deal,” approximately “27 percent” of security professionals in 2018 wanted to use AI for “improving operations, prioritizing the right incidents, and even automating remediation tasks,” suggesting that the reasons to bring AI into cybersecurity continue to grow over time.
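
To make the contrast with signature matching concrete, here is a minimal sketch (in Python, assuming scikit-learn is available) of the general idea: an anomaly detector is trained only on historical “normal” activity, then flags a connection that looks unlike anything it has seen, with no known signature required. The feature names and numbers are made up for illustration:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Historical "normal" traffic: [kilobytes transferred, session length (s), failed logins].
# These features and distributions are hypothetical stand-ins for real telemetry.
normal_traffic = np.column_stack([
    rng.normal(500, 100, 1000),   # typical transfer sizes
    rng.normal(60, 15, 1000),     # typical session lengths
    rng.poisson(0.2, 1000),       # the occasional failed login
])

# Train the anomaly detector on normal behavior only.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# Two new, never-before-seen events: one ordinary session, and one that moves a
# huge amount of data after repeated failed logins. No signature is needed to
# flag the second one -- it simply looks nothing like the training data.
new_events = np.array([
    [480, 55, 0],
    [50000, 600, 12],
])
print(detector.predict(new_events))   # 1 = looks normal, -1 = flagged as anomalous
```

A real deployment would use far richer features and far more data; the sketch only shows why such a system can catch threats that have no known signature.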

However, there is another side to AI’s role in cybersecurity, and it is more dangerous than beneficial. While it may appear that AI systems are unlikely to be hacked, this is not the case: attackers can manipulate the systems themselves. As noted above, AI solutions employ machine learning algorithms to detect threats; their fundamental vulnerability is that these models are entirely dependent on their training data. A poisoning attack occurs when an attacker modifies that dataset to serve their malicious goals, forcing the model to learn from the corrupted data and opening the door to further attacks.
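
A toy example shows how little it takes. In the Python sketch below (synthetic data, not a real security dataset), the same classifier is trained twice: once on clean labels, and once after an attacker has quietly relabeled most of the malicious training samples as benign. The poisoned model then misses a large share of fresh attacks that the clean model would have caught:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic training data: benign samples cluster near 0, malicious near 3.
benign = rng.normal(0, 1, size=(500, 2))
malicious = rng.normal(3, 1, size=(500, 2))
X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)          # 1 = malicious

clean_model = LogisticRegression().fit(X, y)

# Poisoning: the attacker flips the labels on 400 of the 500 malicious samples.
y_poisoned = y.copy()
flipped = rng.choice(np.where(y == 1)[0], size=400, replace=False)
y_poisoned[flipped] = 0
poisoned_model = LogisticRegression().fit(X, y_poisoned)

# Fresh malicious traffic at inference time.
attacks = rng.normal(3, 1, size=(200, 2))
print("detection rate, clean model:   ", clean_model.predict(attacks).mean())
print("detection rate, poisoned model:", poisoned_model.predict(attacks).mean())
```

The specific numbers are arbitrary; the takeaway is simply that a model is only as trustworthy as the data it learned from.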

Given these rising concerns and vulnerabilities, AI-based solutions will need to be not only fast but also safe and trustworthy. AI and cybersecurity share a complex relationship, and it is safe to conclude that artificial intelligence will continue to play a paradoxical yet essential role in the growing field of cybersecurity.

  • Comiter, Marcus, et al. “Attacking Artificial Intelligence: AI’s Security Vulnerability and What Policymakers Can Do About It.” Belfer Center, August 2019, https://www.belfercenter.org/publication/AttackingAI#toc-3-0-0. Accessed 7 July 2023.
  • Moisset, Sonya. “How Security Analysts Can Use AI in Cybersecurity.” freeCodeCamp, 24 May 2023, https://www.freecodecamp.org/news/how-to-use-artificial-intelligence-in-cybersecurity/. Accessed 7 July 2023.
  • Oltsik, Jon. “Artificial intelligence and cybersecurity: The real deal.” CSO Online, 25 January 2018, https://www.csoonline.com/article/564385/artificial-intelligence-and-cybersecurity-the-real-deal.html. Accessed 7 July 2023.

Who Would You Trust More: AI or Doctors?

For as long as the profession has existed, doctors have worked diligently to perfect their craft and smooth out its rough edges, diagnosing, treating, and eventually curing their patients in what they see as the most efficient way possible. However, mistakes are frequently made: medical error is the third leading cause of death in the United States, with over 250,000 deaths occurring yearly. Despite the rigorous education doctors undergo before they can officially practice, they too still make mistakes. It’s human nature to err sometimes, even in life-or-death scenarios. For most of history, this seemed like a sacrifice that simply had to be made to keep one of the world’s oldest, and most vital, professions running.

But what if the risk of human error could be eliminated by removing humans from the equation when it comes to delivering medical care? This would pivot the medical industry, and the person-to-person interaction we all know today, in a completely different direction. Some speculate that this is possible through the use of artificial intelligence (AI).

Artificial intelligence has made brief forays into the medical field before, but those efforts were shut down due to a variety of complications, whether availability, cost, unreliability, or a combination of these factors (among others). This was especially true of Mycin, an expert system designed by Stanford University researchers to assist physicians in detecting and treating bacterial infections. Despite its superb accuracy, rivaling even human experts on the matter, it was far too rigid and costly to maintain. Though not medically affiliated, Google’s image-recognition software is another example of just how unreliable AI can be: it assessed, with 100% certainty, that a slightly altered image of a cat was guacamole, a completely incorrect observation.
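
The cat-to-guacamole mix-up is an example of an adversarial input, and the underlying arithmetic is surprisingly simple. The sketch below uses a random linear “classifier” as a stand-in (not Google’s actual model or the real image) to show how a tiny, coordinated nudge to every input feature can flip a confident prediction:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=1000)          # pretend-trained weights: score > 0 means "cat"
x = rng.normal(size=1000)          # the original "image" as a feature vector
if w @ x < 0:
    x = -x                         # ensure the clean image is classified as "cat"

# Nudge every feature slightly in the direction that lowers the score the most.
eps = 2 * abs(w @ x) / np.abs(w).sum()
x_adv = x - eps * np.sign(w)

print(f"clean score:        {w @ x:+.1f}  (classified 'cat')")
print(f"per-feature change: {eps:.3f}   (features have magnitude ~1)")
print(f"perturbed score:    {w @ x_adv:+.1f}  (label flips)")
```

Real image models are nonlinear, but the same effect holds: many tiny changes, each imperceptible on its own, can add up to a completely different answer.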

However, as modern technology rapidly advances, with special emphasis on machine learning (the ability of a machine to improve its own performance from data without explicit human intervention), some believe that AI can now pick up the slack for physicians.

This claim isn’t entirely unsubstantiated: artificial intelligence can already assess whether infants have certain conditions (of which there are thousands) from facial markers, something doctors struggle with due to the massive variety of illnesses. MGene, an app that has AI examine a photo of a child taken by its user, has over a 90% success rate at accurately detecting four serious, potentially life-threatening syndromes (Down, DiGeorge, Williams, and Noonan). AI even detected COVID-19, or SARS-CoV-2, in Wuhan, China (the origin of the virus) a week before the World Health Organization (WHO) announced it as a new virus.

With every passing day, more of the necessary boxes appear to be getting checked, bringing the possibility of artificial intelligence becoming a dominant presence in the medical field one step closer to reality.

That isn’t to say there are no issues with artificial intelligence entering the medical industry: beyond the earlier problems of cost and unreliability, the ever-changing nature of AI also opens the door to bias, ranging from socioeconomic status to race to gender and everything in between. In addition, the use of AI makes many people uncomfortable because it removes the person-to-person interaction they are used to, another big issue that must be addressed to ensure the successful implementation of artificial intelligence in the healthcare sector.

Regardless of which side you are on, there is common ground: artificial intelligence will continue to get more and more advanced. While it is uncertain whether the general public will want AI to replace doctors, serve as a back-end helper, or stay out of the office entirely, it is clear that artificial intelligence is a tool with substantial benefits and drawbacks alike. Whether it ends up being implemented is a question left to the future.

AI can now help CRISPR precisely control gene expression in RNA

Many of the most infectious and deadly viruses carry their genetic code as RNA. Researchers from established research universities such as NYU and Columbia, alongside the New York Genome Center, have discovered a new type of CRISPR technology that targets this RNA and might just prevent the spread of deadly diseases and infections.

A new study published in Nature Biotechnology suggests that the development of major gene editing tools like CRISPR will be beneficial at an even larger scale. CRISPR, in a nutshell, is a gene editing technology that can be used to switch gene expression on and off. Until now, CRISPR was known to edit only DNA, with the help of the enzyme Cas9. With the recent discovery of Cas13, RNA editing might just become possible as well.

Image Credit: https://theconversation.com/three-ways-rna-is-being-used-in-the-next-generation-of-medical-treatment-158190

RNA is a second type of genetic material present in our cells. It plays an essential role in regulating, expressing, coding, and even decoding genes, and it is central to biological processes such as protein synthesis, producing the proteins the body needs to carry out countless functions.

RNA viruses

RNA viruses usually come in two types: single-stranded RNA (ssRNA) and double-stranded RNA (dsRNA). They are notorious for causing some of the most common and well-known infections, including the common cold, influenza, dengue, hepatitis, Ebola, and even COVID-19. These dangerous, potentially life-threatening viruses have only RNA as their genetic material. So how might AI and CRISPR technology, using the enzyme Cas13, help fight these nuisances?

Role of CRISPR-Cas13

RNA-targeting CRISPRs have various applications, from editing and blocking genes to screening for possible drugs against a pathogenic disease or infection. As a report from NYU states, “Researchers at NYU and the New York Genome Center created a platform for RNA-targeting CRISPR screens using Cas13 to better understand RNA regulation and to identify the function of non-coding RNAs.” Because RNA is the main genetic material in viruses including SARS-CoV-2 and flu, the applications of CRISPR-Cas13 promise new cures and new ways to treat severe viral infections.

“Similar to DNA-targeting CRISPRs such as Cas9, we anticipate that RNA-targeting CRISPRs such as Cas13 will have an outsized impact in molecular biology and biomedical applications in the coming years,” said Neville Sanjana, associate professor of biology at NYU and associate professor of neuroscience and physiology at NYU Grossman School of Medicine.

Role of AI

Artificial intelligence is becoming more capable, and more relied upon, as the days pass by; so much so that it can be used to precisely target RNA coding, as in this case. TIGER (Targeted Inhibition of Gene Expression via guide RNA design) was trained on data from these CRISPR screens. When its predictions were compared against laboratory tests in human cells, TIGER was able to predict both on-target and off-target activity, outperforming previous models developed for Cas13.
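
TIGER itself is a far more sophisticated model trained on real screen data, but the general recipe can be sketched in a few lines. The Python example below is a hypothetical stand-in, using synthetic guide sequences, a simple one-hot encoding, and ridge regression, just to show the shape of the approach: encode guides numerically, fit to measured activity, then rank new candidates before running any experiment:

```python
import numpy as np
from sklearn.linear_model import Ridge

BASES = "ACGU"

def one_hot(seq):
    """Flatten a guide sequence into a (length x 4) one-hot vector."""
    vec = np.zeros((len(seq), 4))
    for i, base in enumerate(seq):
        vec[i, BASES.index(base)] = 1.0
    return vec.ravel()

rng = np.random.default_rng(0)
GUIDE_LEN = 23

# Synthetic stand-in for CRISPR screen data: random guides with "measured"
# activity driven by hidden per-position weights plus noise.
train_guides = ["".join(rng.choice(list(BASES), GUIDE_LEN)) for _ in range(2000)]
X = np.array([one_hot(g) for g in train_guides])
true_w = rng.normal(size=X.shape[1])
y = X @ true_w + rng.normal(0, 0.5, size=len(X))

model = Ridge(alpha=1.0).fit(X, y)

# Rank a handful of candidate guides for a new target by predicted activity.
candidates = ["".join(rng.choice(list(BASES), GUIDE_LEN)) for _ in range(5)]
scores = model.predict(np.array([one_hot(g) for g in candidates]))
for guide, score in sorted(zip(candidates, scores), key=lambda t: -t[1]):
    print(f"{guide}  predicted activity {score:+.2f}")
```

The real model also accounts for off-target effects and much richer sequence context, but the workflow, learn from screen data and then score unseen guides, is the same idea.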

By pairing AI with an RNA-targeting CRISPR screen, TIGER’s predictions might just kick off newer, more developed RNA-targeting therapies. In a nutshell, AI will be able to “sieve out” undesired off-target CRISPR activity, making the method more precise and reliable.

A Solution to the Ills of Chemotherapy?

600,000 deaths. That’s how many casualties were estimated for 2021 from a foe we can’t so much as see with the naked eye: cancer. It is the dreaded illness that, since the foundation of modern medicine, humanity has seemed unable to tackle and extinguish permanently. Despite the advancement of technology (specifically in the medical sector), it seems we are still a long way from dealing with it adequately on a global scale.

That isn’t to say that there aren’t methods to deal with this disease. Chemotherapy, for instance, is one such remedy. It decimates cancerous cells, but at a massive cost to the body, because it also kills the necessary (good) cells humans need in the process. This treatment leaves patients immunocompromised, a label that not only increases the risk of contracting diseases but also raises the odds that common ailments (such as the cold or the flu) escalate into life-threatening hospital visits.

Described by those who administer it as a double-edged sword, it appeared doubtful that chemotherapy’s negative effects could ever be reduced. After all, modern medicine took a very long time to even discover this treatment, reinforcing the notion that humanity’s war against cancer had arrived at a stalemate.

Then came a new discovery: stem cell transplants. This method seemed to solve the problems chemotherapy generated by administering stem cells into a vein. The cells travel to the bone marrow and become the new cells necessary for human health: platelets (which help with blood clotting), white blood cells (which assist the immune system and help the body fight infection), and red blood cells (which carry oxygen throughout the body).

Proponents of this method claim it is an instrumental tool for humanity in its battle against cancer because of its ability to assist patients after chemotherapy, which is widely considered the most prevalent form of cancer treatment. Although it may not be the final answer, it certainly poses questions that may pave the way toward even more technological advancements in this war.

That’s not to say there aren’t those who are against this method, however. Some argue that the treatment excludes the common person: stem cell transplants are incredibly expensive because of their highly advanced technological nature. This high price tag prevents the vast majority of cancer patients from accessing a potentially life-saving treatment, raising an ethical dilemma around wealth and the ability to save a life (if not many). Others who oppose it note that, much like chemotherapy, it comes with drawbacks in the form of side effects; from bleeding to an increased risk of infection (the very thing it is partly designed to combat), it too poses a set of risks that cannot be ignored.

Image credit: bioinformant.com, depiction of stem cells.

Regardless of your stance on this matter, there is a middle ground: this innovation, despite all of its shortcomings, has advanced the battle against cancer in more ways than one. Beyond helping people regain some sense of normalcy by alleviating the impacts of chemotherapy, it also grants hope to those who have (or can obtain) access to this treatment. Modern medicine, just as it conquered measles, rubella, and countless other diseases, will hopefully beat this one too.

  1. https://www.cancer.gov/about-cancer/treatment/types/stem-cell-transplant
  2. https://www.cancer.org/research/cancer-facts-statistics/all-cancer-facts-figures/cancer-facts-figures-2021.html

SpaceX Internet Service Provider “Starlink” reaches One Million User Milestone

A tweet from SpaceX earlier this week reports that their “Starlink” service has amassed over a million subscriptions.

SpaceX’s satellite network “Starlink” was developed in hopes of providing low-cost internet globally, especially to remote locations that lack reliable internet connectivity.

How does it work?

Starlink satellites work in much the same way as other satellite internet technologies: an internet service provider transmits an internet signal to a satellite in space, which beams it back down to users, where it is captured by their satellite dish. The dish is connected to a modem, which connects their computer to the captured internet signal. The issue with this is that your data must travel all the way to a satellite in space and back to you on Earth. These long trips take a considerable amount of time, which in turn leads to higher latency (response time) and a worse connection.

Ideally, we want an internet connection with lower latency, which is where SpaceX’s Starlink comes in. SpaceX’s proposal was to make Starlink “a constellation of thousands of satellites that orbit the planet much closer to Earth, at about 550km, and cover the entire globe”. This much lower orbit (traditional satellite internet relies on geostationary satellites roughly 36,000 km up) proves far more effective, as it increases internet speeds and reduces latency.
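
A quick back-of-envelope calculation shows just how much that altitude difference matters, counting only the time light needs to travel (real latency adds routing, processing, and queuing on top of this):

```python
# Idealized propagation delay only: straight up to the satellite and back,
# for the four hops user -> satellite -> ground station -> satellite -> user.
C_KM_PER_S = 299_792          # speed of light in a vacuum
GEO_ALTITUDE_KM = 35_786      # traditional geostationary internet satellites
LEO_ALTITUDE_KM = 550         # a typical Starlink orbit

def min_round_trip_ms(altitude_km, hops=4):
    return hops * altitude_km / C_KM_PER_S * 1000

print(f"Geostationary minimum round trip:      {min_round_trip_ms(GEO_ALTITUDE_KM):.0f} ms")
print(f"Starlink-style LEO minimum round trip: {min_round_trip_ms(LEO_ALTITUDE_KM):.1f} ms")
```

Even before any real-world overhead, a geostationary link cannot get below roughly half a second of round-trip delay, while a 550 km orbit starts from only a few milliseconds.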

How fast is Starlink?

It’s fast, but how fast, really? Starlink offers two plans for subscribers: the basic plan and the premium plan. The basic plan advertises download speeds from 50 to 250 megabits per second (Mbps), whilst the premium plan’s download speeds range from 150 to 500 Mbps; but is this really the case?

Source: Official Ookla Website

Ookla’s recent report shows that the median download speed in the US was 164 Mbps, which does fall within the advertised range for both plans. The median latency was about 27 ms in the US, which sits within the optimal 20-40 ms range and marks a huge improvement compared to previous testing.

The future of Starlink

As of writing, the Starlink constellation consists of about 3,300 small satellites, with the latest additions launched on 17 December 2022, when 54 Starlink satellites rode a SpaceX Falcon 9 rocket lifting off for its 15th time. Overall, about 12,000 satellites are planned for the constellation, with a possible extension to 42,000 afterwards. This should ultimately fulfill SpaceX’s proposal and achieve global internet availability, and the million-subscription milestone is a step in that direction.

Lab-grown meat: Incredible or Inedible?

Scientists are currently cultivating proteins from the stem cells of livestock and poultry in labs in a bid to create more sustainable meat, but will anyone want to eat it?

Lab-grown meat, although a promising concept, has been slow to hit the mainstream. The notion is to grow meat under laboratory conditions by extracting stem cells from live animals and placing them into a bioreactor (a vessel-like device), where salts, vitamins, sugars, and proteins are added. The oxygen-rich, temperature-controlled environment allows the stem cells to multiply dramatically, eventually differentiating into muscle fibres that cluster together, aided by scaffolding material.

Numerous start-ups and companies have invested millions into this innovative technology. Eat Just, valued at $1.2 billion, was founded by Josh Tetrick in 2011, and the company began developing lab-grown chicken in 2016. “With the aid of a 1,200-liter bioreactor, the cells can develop into meat at a rapid rate with the whole process taking around 14 days. For comparison, the production of farm-based chicken is a 45-day process”, states the CEO of Eat Just. Evidently, lab-grown meat rivals the production of farm-based alternatives by offering a more efficient development process.

Currently, the meat industry slaughters tens of billions of animals every year, and meat consumption is expected to increase by more than 70% by the year 2050, according to the Food and Agriculture Organisation of the United Nations. In its current state, lab-grown meat will struggle to satisfy these demands. To put this into perspective, to produce enough meat to feed everyone in Singapore, Eat Just would need to use 10,000-litre bioreactors; moreover, the process is currently more expensive than traditional farming methods. However, with increased funding, it might soon become a reality.

Despite these challenges, the advancement of lab-grown meat products will continue, promising a wealth of benefits. Lab-grown meat is drug-free, cruelty-free, more environmentally friendly, and sustainable. One report estimates that lab-produced meats could lower greenhouse gas emissions by 78–96%, reduce land use by up to 99%, and cut water consumption by 82–96%. It is, without a doubt, more sustainable than traditional meat farming.

In spite of these adversities, at the end of last year restaurant 1880 in Singapore became the first in the world to serve lab-grown meat, after the country’s food agency approved the sale of cultured meat. This is a huge stepping stone for the future of lab-grown meat. One estimate by US consultancy firm Kearney suggests that 35 per cent of all meat consumed globally will be cell-based by 2040.

In an earlier interview, Josh Tetrick (founder of EatJust) expresses, “Working in partnership with the broader agriculture sector and forward-thinking policymakers, companies like ours can help meet the increased demand for animal protein as our population climbs to 9.7 billion by 2050.”

It is beyond dispute that the status quo is not sustainable. So, do we have the appetite for change?