Technology

Why “Teraflops” Fail to Represent Computational Performance

Floating-point operations per second, or flops, is a theoretical measurement used to describe the performance capability of hardware. The term “teraflops,” or “trillion flops,” is often used when discussing recent gaming consoles such as the PlayStation 5 and the Xbox Series X, but it applies to any device that contains a computing chip. The issue, however, is that a teraflops figure has never reliably indicated real-world performance.

This can be observed especially in the graphics card market. In mid-2019, AMD released the RX 5700 XT, a direct competitor to NVIDIA’s GTX 1080 Ti. While the 5700 XT had a teraflops rating of around 9.8, the 1080 Ti boasted a stronger rating of around 11.3. Comparing the two values (11.3 / 9.8 ≈ 1.153), the 1080 Ti should be about 15.3% faster than the 5700 XT.

Well, let’s take a look at some real-world testing results:

[Benchmark charts omitted; source: [1], timestamps 9:20 and 8:30]

It turns out that “15.3% faster” was not the case. As the results above show, Hardware Unboxed tested both card models and found that, on average, the 1080 Ti was the same speed as the 5700 XT at 1440p resolution, and even 3% slower at 1080p.[1]

And no, this comparison is not an anomaly. The RTX 3080 almost triples the TFLOPS rating of the RTX 2080 (29.8 to 10.7), but is only about 65% faster at 4K resolution.

These performance statistics only begin to cast doubt on a teraflops rating’s credibility as a measure of computational performance; to dispute it further, we must better understand what actually defines “teraflops.”

The value of floating-point operations per second is calculated by multiplying the clock speed, the number of cores, and the number of floating-point operations performed per cycle: flops = clock speed × cores × operations per cycle. Dividing this product by one trillion converts flops to teraflops.
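
To make that concrete, here is the calculation in Python, using roughly the published specs of the two cards above (core counts and boost clocks are approximate, and the two-operations-per-cycle figure assumes one fused multiply-add per core per clock):

```python
# Theoretical flops = clock speed * number of cores * FLOPs per cycle.
# Both GPUs here execute one fused multiply-add (2 FLOPs) per core per cycle.

def teraflops(cores: int, clock_ghz: float, flops_per_cycle: int = 2) -> float:
    """Spec-sheet TFLOPS: cores * clock (Hz) * FLOPs per cycle, divided by 1e12."""
    return cores * (clock_ghz * 1e9) * flops_per_cycle / 1e12

rx_5700_xt = teraflops(cores=2560, clock_ghz=1.905)   # ~9.8 TFLOPS
gtx_1080_ti = teraflops(cores=3584, clock_ghz=1.582)  # ~11.3 TFLOPS

print(f"RX 5700 XT:  {rx_5700_xt:.2f} TFLOPS")
print(f"GTX 1080 Ti: {gtx_1080_ti:.2f} TFLOPS")
print(f"Implied gap: {gtx_1080_ti / rx_5700_xt - 1:.1%}")  # roughly the ~15% above
```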

The expression here might seem too simple, and as a matter of fact, it is. What part of it considers the processor’s memory or bus width? And after one year of video driver updates, the 5700 XT’s performance increased by about 6%[1], so where is the driver considered? These are only a few of the many variables the expression is missing. So while it does represent floating-point operations, it is not indicative of real-world performance, and that dooms a measurement of teraflops to be flawed.

Let’s say you multiplied a GPU’s core count by 1000 and held all of the other variables constant. Per the expression, the TFLOPS rating would multiply by 1000 as well. The issue is that you would need far more cache and memory bandwidth to keep that many cores fed. Cache, meanwhile, isn’t even considered in the expression; so even though the card would be theoretically 1000 times more powerful, it would be hindered by its painfully disproportionate cache size.
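
A rough way to see why is a simplified “roofline”-style model, a standard performance-analysis idea rather than anything from a vendor’s spec sheet (the card specs below are hypothetical): real throughput is capped by what the memory system can deliver, no matter how much compute you add.

```python
def attainable_tflops(peak_tflops: float, bandwidth_gb_s: float,
                      flops_per_byte: float) -> float:
    """Roofline model: throughput is the lesser of peak compute and what
    the memory system can feed (bandwidth * arithmetic intensity)."""
    memory_cap = bandwidth_gb_s * flops_per_byte / 1000  # GFLOPS -> TFLOPS
    return min(peak_tflops, memory_cap)

# Hypothetical card: 10 TFLOPS peak, 450 GB/s memory, and a workload
# that performs 4 floating-point operations per byte it reads.
print(attainable_tflops(10, 450, 4))       # 1.8 -- already memory-bound
# Scale compute 1000x while leaving the memory system untouched:
print(attainable_tflops(10_000, 450, 4))   # still 1.8 -- zero real gain
```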

So how can you accurately measure a processor’s performance? We need to look at metrics that can actually be verified with testing. These include framerates and temperatures, which can be observed via benchmarking, and latency, which can be examined with a tool like NVIDIA’s LDAT. Metrics observed in the real world are significantly more reliable indicators of computational performance; theoretical values such as teraflops are unreliable precisely because they are never validated against real workloads.

[Image omitted; source: [2]]
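
If you want to see the gap between theory and practice yourself, one quick (admittedly crude) experiment is to time a real workload and compute the achieved throughput, for example with NumPy:

```python
import time
import numpy as np

n = 2048
a = np.random.rand(n, n).astype(np.float32)
b = np.random.rand(n, n).astype(np.float32)

start = time.perf_counter()
a @ b  # an n x n matrix multiply performs roughly 2 * n^3 FLOPs
elapsed = time.perf_counter() - start

print(f"Achieved: {2 * n**3 / elapsed / 1e9:.1f} GFLOPS")
# Compare this against your processor's spec-sheet peak: the measured
# figure reflects memory, cache, and library quality, not just the specs.
```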

Sources:

[1] https://www.youtube.com/watch?v=1w9ZTmj_zX4&ab_channel=HardwareUnboxed

[2] https://www.youtube.com/watch?v=OCfKrX15TOk&ab_channel=JayzTwoCents

[3] https://www.tomsguide.com/news/ps5-and-xbox-series-x-teraflops-what-this-key-spec-means-for-you

Is Your Phone Really Listening, or Is It Just Smart Advertising?

Have you ever had that eerie feeling that your phone is listening to your conversations? Have you ever made ice skating plans with your friends, and the moment you open Instagram, your eyes land on an ice skating ad? It has happened to me, and I’m sure I’m not alone. But amidst the bewilderment, the question lingers: is our phone genuinely listening to us, or is it all just a series of bizarre coincidences?

Although Apple claims that it doesn’t listen to users, voice assistants such as Siri and Alexa listen for wake words such as “Hey Siri” and “Alexa” and record the user’s speech, contributing to the creation of a user profile for targeted advertisements. A user’s profile includes their demographics, browsing history, online purchases, social media interactions, app usage, and much more. Additionally, ad networks buy data from many sources and track a user’s online activity. They seem to know everything about us: our age, gender, likes, dislikes, location, hobbies, and even the time we spend on different websites. Using this profile data and algorithms, advertisers effectively target specific audiences for their ads. Sometimes an ad isn’t completely in line with the user’s preferences, but a customization process works to align ads as precisely as possible with the user’s interests.

I mentioned that Apple claims that it doesn’t listen to users; however, that statement is contradicted by a report that revealed how Siri can “sometimes be mistakenly activated and record private matters,” raising privacy concerns. For the most part, the data gathered by advertisers is used anonymously to respect privacy, but it’s essential to read the terms and conditions before agreeing to them. 

Because of these specific ads, one gets the impression that their phone is actively listening to them 24/7, but the effect is mostly due to data collection and ad network algorithms. Confirmation bias – the tendency of individuals to favor information that aligns with their opinions and ignore information that does not – also plays a role here. For example, if you’re talking about chocolate ice cream and receive an ad about it, you’ll instantly think your phone has been listening to you all this time; but the many times you get an ad unrelated to your conversation, you disregard it or don’t notice it at all.

In conclusion, while your phone does listen to you, it does so through voice assistants and mostly in harmless ways. So the next time you experience the ice skating situation, you’ll know the reasons behind it.

Alef Aeronautics

Flying cars become a reality – FAA approves

What the world once considered to be something out of a sci-fi movie might just become reality. Alef Aeronautics, an automotive aviation company, has been working on its flying car for the past 7-8 years and has left not only the world of flight and travel but also the general public in awe. The electric vehicle is expected to hit the American skyline in 2025, and preorders begin at a whopping $300,000 (USD).

As per a report from Times Now, the company has already started accepting pre-orders and deposits for the vehicle, but the vehicle itself will only be delivered by 2025. The design is urban and futuristic, and the car itself is sustainable. CEO Jim Dukhovny backs this claim: “We’re excited to receive this certification from the FAA. It allows us to move closer to bringing people an environmentally friendly and faster commute, saving individuals and companies hours each week. This is one small step for planes, one giant step for cars.”

Not only is this a huge step toward technological advancement, but it is also an insight into the future. As several sources report, the Federal Aviation Administration (FAA) has approved and certified this yet-to-be-released automobile. With its vertical take-off design, the “Model A” can currently carry two people and travel about 200 miles.

Alef’s newest development and the FAA’s approval are only a stepping stone toward a more sustainable and safer future.

Defeating Time: A Breakthrough in Aging

Something you can’t see or hear until years go by. Something you recognize as simple and yet impossible to avoid. Something that is known as both the cruelest and most beautiful law in all of nature. Something that neither the richest nor poorest person can escape from. That something is time. 

Throughout history, humans have been able to conquer just about everything, from minuscule problems to global affairs. However, with all of our minds combined, we have still failed to defeat the toughest opponent of all: time. For what seems like forever, it has appeared to be the one unstoppable force that nobody could fight.

That is, until 2022. While this year marked the winding down of the COVID-19 pandemic, it also brought news of a study conducted by David Sinclair, a molecular biologist who has spent the vast majority of his career (twenty years) searching for ways to reverse aging and undo time in the process. While the beginning of his journey was unsuccessful, he didn’t give up.

The study took two mice (siblings born from the same litter) and genetically altered one of them to make it considerably older, an effort that was a marked success. While this alone is not indicative of a reversal in aging, it does raise an important question: if aging could be sped up, could it also be slowed down or even undone altogether? Before we get to that, however, we need to understand just how the mice were genetically altered and why.

Image credit: https://www.cnn.com, depiction of two mice from the same litter appearing drastically different in age.

Many believe that aging is caused by cell damage, but that’s not exactly accurate. It is one of the causes, yes, but not the main one. Instead, we should look at the heart of the matter: the epigenome. It determines what each cell becomes and how it works, an instruction manual of sorts for each cell. When the epigenome malfunctions, the cells’ “instructions” are lost, and the cells fail to continue functioning.

So, Sinclair utilized gene therapy to restore the cells’ instructions, and the results were shocking. Sinclair was able to demonstrate success not only in accelerating aging but also in reversing it by nearly 60%. What’s more, this appears to be nearly limitless, with Sinclair noting that “[he’s] been really surprised by how universally it works. [He and his team] haven’t found a cell type yet that [they] can’t age forward and backward.”

This expands beyond mice: the technique has already been used to reverse aging in non-human primates, with the reprogramming genes switched on by the antibiotic doxycycline, with rapid success. There has even been some human experimentation, with gene therapy performed on human tissues in lab settings.

The ability to reverse aging across the board brings up more than just stopping time; it also raises the possibility of halting sicknesses related to aging. After all, these illnesses (like dementia and Alzheimer’s, among others) are caused by cell malfunction. If the reversal of aging is potent enough, it may also undo these illnesses.

With the potential to halt aging and enable people to live into their hundreds without fear of age-related illnesses, this research opens up countless possibilities. If we can already undo aging on a small scale, imagine what the future ten, fifty, or even a hundred years from now might hold.

  • https://www.cell.com/cell/fulltext/S0092-8674(22)01570-7
  • https://time.com/6246864/reverse-aging-scientists-discover-milestone/
  • https://www.cnn.com/2022/06/02/health/reverse-aging-life-itself-scn-wellness/index.html

Stopping the Energy Crisis Clock?

Less than 120 years. That’s the estimated time until all nonrenewable energy sources are fully exhausted. Although that sounds like a long time away, a whole lifetime away in fact, this is a problem that needs to be addressed now.

Part of the urgency comes from the exponential growth of fossil fuel usage. Compared to 1950, approximately 70 years before this article, gas consumption has risen nearly twentyfold, a staggering increase.

Image credit: https://ourworldindata.org/fossil-fuels, depiction of fossil fuel consumption rates over the years.

It’s not unrealistic to say that, as time progresses, consumption of fossil fuels will continue to rise, further reducing their supply and hastening their end.

Thankfully, people have been paying attention to this issue (dubbed the energy crisis). Many have made adjustments of varying degrees, from installing solar panels and forgoing nonrenewable resources (a relatively minor contribution) to proposing bills such as the Green New Deal, which would implement a gradual nationwide swap to renewable resources (a massive commitment). These are just two examples of how humans have been attempting to solve this problem.

Unfortunately, many of these potential methods have their innate issues. Renewable energy sources run the risk of being either too costly (such as hydroelectric and solar) or unreliable (such as wind and solar, which are only available around 30% of the time).

In addition, there has been extreme pushback against large legislative efforts to delay (if not outright prevent) the energy crisis, such as the Green New Deal, with some citing that it’s too expensive (with a price tag of $93 trillion) and would submerge the U.S. in debt it can never get out of. With energy crisis efforts both large and small being fought against, it raises the question of whether there is any sort of energy source that can check all (or even most) of the boxes and make everyone happy.

That question received a positive answer at the end of 2022. Beyond ushering in a new year, December also welcomed a breakthrough in a long-pursued energy source: nuclear fusion. For the first time in the history of its experimentation, a fusion reaction reached a net gain, producing more energy than was delivered to it.

This discovery checked many boxes at first glance: nuclear power is reliable (with nuclear fuel being abundant in the environment) and exceptionally efficient; in fission’s case, a single gram of uranium can produce as much energy as a ton (2,000 pounds) of coal. It is also a clean source of energy, an award that nonrenewable energy fails to achieve.
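
A back-of-the-envelope check of that comparison, using commonly cited approximate energy densities (ballpark figures, not taken from the sources below):

```python
# Approximate, commonly cited energy densities (ballpark values).
uranium_j_per_gram = 8.2e10  # complete fission of 1 g of U-235, ~82 GJ
coal_j_per_kg = 2.9e7        # ~29 MJ per kg of good-quality coal

ton_of_coal_j = coal_j_per_kg * 907  # a 2,000 lb US ton is about 907 kg

print(f"1 g of uranium: {uranium_j_per_gram:.1e} J")
print(f"1 ton of coal:  {ton_of_coal_j:.1e} J")
print(f"Ratio: {uranium_j_per_gram / ton_of_coal_j:.1f}x")  # ~3x: "a ton" is conservative
```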

However, as with the benefits of any discovery, there are bound to be drawbacks, and this is no exception. The first major issue is the extremely advanced technological nature of fusion, which makes it tough to master and replicate easily. The second ties into the first: it is simply too expensive to sustain due to the immense amount of energy needed. Finally, there is the stereotypical reason people fear nuclear power: the danger. Although this fear is grossly exaggerated, there is some truth to the matter. Both reactor accidents (like Chernobyl and Fukushima, both at fission plants) and long-term radioactive waste that must be stored securely are issues that cannot be ignored.

Although nuclear energy does have some (rather large) issues, it’s important not to forget its boons as well. From its efficiency to its combination of reliability and cleanliness, fusion is not an energy source to underestimate. Regardless of which side you are on, nuclear energy brings up something important to think about: imagine what could be possible over the next century of technological innovation.

  1. https://group.met.com/en/mind-the-fyouture/mindthefyouture/when-will-fossil-fuels-run-out
  2. https://ourworldindata.org/fossil-fuels
  3. https://www.forbes.com/sites/judeclemente/2019/04/29/five-practical-problems-for-the-green-new-deal/?sh=5892345f3e8a
  4. https://stacker.com/science/22-biggest-scientific-discoveries-2022
Image credit: Kevin Ku / Unsplash

The AI Paradox: Enhancing Cybersecurity while Raising Concerns

In today’s interconnected and digitized world, cybersecurity attacks such as malware, phishing, and data breaches occur constantly, increasing the demand for cybersecurity professionals and advanced technologies such as artificial intelligence. However, AI acts as both a defender of and a challenger to cybersecurity.

Cybersecurity involves analyzing patterns from massive amounts of data to protect systems and networks from digital attacks and identify threats. The traditional approach relied heavily on signature-based detection systems, which were effective against known threats but incapable of detecting new or unknown ones, resulting in frequent successful attacks. AI-based solutions, by contrast, use machine learning algorithms trained on vast amounts of historical and current data to detect and respond to new and unknown threats. According to Jon Oltsik’s “Artificial intelligence and cybersecurity: The real deal,” approximately “27 percent” of security professionals in 2018 wanted to use AI for “improving operations, prioritizing the right incidents, and even automating remediation tasks,” suggesting that the demand for AI in cybersecurity continues to grow.
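
As a toy illustration of that shift away from signatures, here is a minimal sketch using scikit-learn’s IsolationForest, an unsupervised anomaly detector; the traffic features and numbers are invented for the example:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical traffic features: [requests per minute, average payload in KB].
normal_traffic = rng.normal(loc=[100.0, 4.0], scale=[15.0, 1.0], size=(500, 2))

# The detector learns what "normal" looks like -- no attack signatures needed.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# A never-before-seen pattern (a flood of tiny requests) is still flagged.
suspicious = np.array([[2000.0, 0.2]])
print(detector.predict(suspicious))  # [-1] means anomaly
```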

However, there’s another side to AI’s role in cybersecurity, and it’s more dangerous than beneficial. While it may appear that AI systems are unlikely to be hacked, this is not the case, because attackers can manipulate them. As previously noted, AI solutions employ machine learning algorithms to detect threats; a fundamental vulnerability, though, is that these models are fully dependent on their training data. A poisoning attack occurs when an attacker modifies the dataset to fulfill their malicious goals, forcing the model to learn from the tampered data and opening the door to further attacks.
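
A deliberately simplified demonstration of the idea on synthetic data: flip a fraction of the training labels (the “poison”) and compare the resulting model against one trained on clean data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean labels.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poisoning: the attacker flips 30% of the training labels.
rng = np.random.default_rng(0)
flip = rng.choice(len(y_train), size=int(0.3 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[flip] = 1 - y_poisoned[flip]
poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print(f"Clean accuracy:    {clean.score(X_test, y_test):.2f}")
print(f"Poisoned accuracy: {poisoned.score(X_test, y_test):.2f}")  # lower than clean
```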

Given these rising concerns and vulnerabilities, AI-based solutions will need to be not only fast but also safe and robust. AI and cybersecurity share a complex relationship, and it’s safe to conclude that artificial intelligence will continue to play a paradoxical yet essential role in the growing field of cybersecurity.

  • Comiter, Marcus, et al. “Attacking Artificial Intelligence: AI’s Security Vulnerability and What Policymakers Can Do About It.” Belfer Center, August 2019, https://www.belfercenter.org/publication/AttackingAI#toc-3-0-0. Accessed 7 July 2023.
  • Moisset, Sonya. “How Security Analysts Can Use AI in Cybersecurity.” freeCodeCamp, 24 May 2023, https://www.freecodecamp.org/news/how-to-use-artificial-intelligence-in-cybersecurity/. Accessed 7 July 2023.
  • Oltsik, Jon. “Artificial intelligence and cybersecurity: The real deal.” CSO Online, 25 January 2018, https://www.csoonline.com/article/564385/artificial-intelligence-and-cybersecurity-the-real-deal.html. Accessed 7 July 2023.

Who Would You Trust More: AI or Doctors?

For as long as the profession has existed, doctors have worked diligently to perfect their craft and refine any rough edges, diagnosing, treating, and eventually curing their patients in what they see as the most efficient way possible. However, mistakes are frequently made: medical error is the third leading cause of death in the United States, with over 250,000 deaths occurring yearly. Despite the rigorous education doctors undergo to practice their craft officially, they too still make mistakes. It’s human nature to err, even in life-or-death scenarios. For most of history, it appeared as if this was simply a sacrifice that had to be made to keep one of the world’s oldest, and most vital, professions stable.

But what if the risk of human error could be eliminated by removing humans from the equation when it comes to delivering medical care? This would pivot the medical industry, and the person-to-person interaction we all know today, in a completely different direction. Some speculate that this is possible through the use of artificial intelligence (AI).

Artificial intelligence has briefly permeated the medical field before, but past efforts were shut down due to a variety of complications, whether it be availability, cost, unreliability, or a combination of these factors (among others). This was especially true of Mycin, an expert system designed by Stanford University researchers to assist physicians in detecting and treating bacterial diseases. Despite its superb accuracy, rivaling that of human experts on the matter, it was far too rigid and costly to maintain. Though not medically affiliated, Google’s image recognition software is another example of how unreliable AI can be: it assessed, with 100% certainty, that a slightly altered image of a cat was guacamole, a completely incorrect observation.

However, as modern technology rapidly advances, with special emphasis on machine learning (the ability of a machine to function and improve upon itself without human intervention), some believe that AI can now pick up the slack of physicians. 

This claim isn’t entirely unsubstantiated: artificial intelligence can already assess whether infants have certain conditions (of which there are thousands) by facial markers, something doctors struggle with due to the massive variety of illnesses. MGene, an app that has AI examine a photo taken of a child by its user, has over a 90% success rate at accurately detecting four serious, potentially life-threatening syndromes (Down, DiGeorge, Williams, and Noonan). AI even detected COVID-19, or SARS-CoV-2, within Wuhan, China (the origin of the virus) a week before the World Health Organization (WHO) announced it as a new virus.

With every passing day, it appears that more and more of the necessary boxes are being checked, bringing the possibility of artificial intelligence becoming a dominant presence within the medical field one step closer to reality.

That isn’t to say there are no issues with artificial intelligence entering the medical industry: beyond the previous problems (cost and unreliability) potentially resurfacing, AI’s ever-changing nature also opens the door to bias, ranging from socioeconomic status to race to gender and everything in between. In addition, the use of AI is uncomfortable to many due to the removal of the person-to-person interaction people are accustomed to, another big issue that must be addressed to ensure the successful implementation of artificial intelligence in the healthcare sector.

Regardless of what side you are on, there is a common ground: artificial intelligence will continue to get more and more advanced. While it is uncertain as to whether the general public will want AI to replace doctors, have them serve as back-end helpers, or not exist whatsoever in the office, it is clear that artificial intelligence is a tool that has both a lot of benefits and drawbacks. Whether AI is implemented or not is a question that is left to the future. 

AI can now use the help of CRISPR to precisely control gene expressions in RNA

Many of the most infectious and deadly viruses carry their genetic code as RNA. Researchers at NYU and Columbia, alongside the New York Genome Center, have discovered a new type of CRISPR technology that targets this RNA and might just prevent the spread of deadly diseases and infections.

A new study published in Nature Biotechnology suggests that the development of major gene editing tools like CRISPR will be beneficial at an even larger scale. CRISPR, in a nutshell, is a gene-editing technology that can be used to switch gene expression on and off. Up until now, CRISPR, with the help of the enzyme Cas9, was known to edit only DNA. With the recent characterization of Cas13, RNA editing might just become possible as well.

Image source: https://theconversation.com/three-ways-rna-is-being-used-in-the-next-generation-of-medical-treatment-158190

RNA is a second type of genetic material present within our cells, playing an essential part in processes such as the regulation, expression, coding, and decoding of genes. It also plays a significant role in protein synthesis, and the resulting proteins are necessary to carry out various cellular processes.

RNA viruses

RNA viruses usually come in two types: single-stranded RNA (ssRNA) and double-stranded RNA (dsRNA). RNA viruses are notorious for causing some of the most common and well-known infections, examples being the common cold, influenza, Dengue, hepatitis, Ebola, and even COVID-19. These dangerous and possibly life-threatening viruses have only RNA as their genetic material. So how might AI and CRISPR technology, using the enzyme Cas13, help fight against these nuisances?

Role of CRISPR-Cas13

RNA-targeting CRISPRs have various applications, from editing and blocking genes to identifying possible drugs for treating pathogenic diseases and infections. As a report from NYU states, “Researchers at NYU and the New York Genome Center created a platform for RNA-targeting CRISPR screens using Cas13 to better understand RNA regulation and to identify the function of non-coding RNAs. Because RNA is the main genetic material in viruses including SARS-CoV-2 and flu,” the applications of CRISPR-Cas13 promise cures and newer ways to treat severe viral infections.

“Similar to DNA-targeting CRISPRs such as Cas9, we anticipate that RNA-targeting CRISPRs such as Cas13 will have an outsized impact in molecular biology and biomedical applications in the coming years,” said Neville Sanjana, associate professor of biology at NYU and associate professor of neuroscience and physiology at NYU Grossman School of Medicine.

Role of AI

Artificial intelligence is becoming more and more reliable as the days pass by, so much so that it can be used to precisely target RNA, as in this case. TIGER (Targeted Inhibition of Gene Expression via guide RNA design) was trained on data from the CRISPR screens. Comparing the model’s predictions with laboratory tests in human cells, TIGER was able to predict both on-target and off-target activity, outperforming previous models developed for Cas13.

By pairing AI with an RNA-targeting CRISPR screen, TIGER’s predictions might just initiate newer and more developed methods of RNA-targeting therapies. In a nutshell, AI will be able to “sieve” out undesired off-target CRISPR activity, making the method more precise and reliable.

A Solution to the Ills of Chemotherapy?

600,000 deaths. That’s how many casualties were estimated in 2021 from a foe we can’t so much as see with the naked eye: cancer. It is the dreaded illness that, since the foundation of modern medicine, humanity has seemed unable to tackle and extinguish permanently. Despite the advancement of technology (specifically in the medical sector), it seems we are still a ways off from adequately dealing with it on a global scale.

That isn’t to say that there aren’t methods to deal with this disease. Chemotherapy, for instance, is one such remedy. It decimates cancerous cells, but poses a massive risk to the body it’s administered to, because it also kills the necessary (good) cells humans need. This treatment leaves patients immunocompromised, a label that not only increases the risk of contracting diseases, but also increases the potential for common ailments (such as the cold or the flu) to quickly turn into a hospital visit over a life-threatening concern.

Described by those who administer it as a double-edged sword, it appeared doubtful that the negative effects of chemotherapy could ever be reduced. After all, it took modern medicine a long time to discover this treatment in the first place, reinforcing the notion that humanity’s war against cancer had arrived at a stalemate.

Then came a new development: stem cell transplants. This method addresses the problems that chemotherapy creates by administering stem cells into a vein. The cells travel to the bone marrow and become the new cells necessary for human health, from platelets (which help with blood clotting) to white blood cells (which assist the immune system and help the body fight infection) to red blood cells (which carry oxygen throughout the body).

Proponents of this method claim that this is an instrumental tool for humanity in its battle against cancer due to its ability to assist cancer patients after chemotherapy, which is widely considered to be the most prevalent form of cancer treatment. Although it may not be the final product, it does certainly pose questions that may pave the way toward achieving even more technological advancements in this war. 

That’s not to say that there aren’t those who are against this method, however. Some argue that the treatment excludes the common person: stem cell transplants are incredibly expensive due to their highly advanced technological nature. The high price tag prevents the vast majority of cancer patients from accessing this potentially life-saving treatment, raising an ethical dilemma concerning wealth and the ability to save a life (if not many). Others point out that, much like chemotherapy, it comes with its own side effects. From bleeding to an increased risk of infection (the very thing it is partially designed to combat), it too poses a set of risks that cannot be ignored.

Image credit: bioinformant.com, depiction of stem cells.

Regardless of your stance on this matter, there is a middle ground: this innovation, despite its shortcomings, has advanced the battle against cancer in more ways than one. Beyond helping people regain some sense of normalcy by alleviating the impacts of chemotherapy, it also grants hope to those who have (or can obtain) access to the treatment. Modern medicine, just as it conquered measles, rubella, and countless other diseases, will hopefully beat this one too.

  1. https://www.cancer.gov/about-cancer/treatment/types/stem-cell-transplant
  2. https://www.cancer.org/research/cancer-facts-statistics/all-cancer-facts-figures/cancer-facts-figures-2021.html

SpaceX Internet Service Provider “Starlink” reaches One Million User Milestone

A tweet from SpaceX earlier this week reports that their “Starlink” service has amassed over a million subscriptions.

SpaceX’s satellite network “Starlink” was developed in hopes of providing low-cost internet globally, especially to remote locations that lack reliable connectivity.

How does it work?

Starlink satellites function much like other satellite internet technologies: an internet service provider transmits an internet signal to a satellite in space, which beams it back down to users, where it is captured by their satellite dish. The dish is connected to a modem, which connects the computer to the captured signal. The issue is that your data must travel all the way to a satellite in space and back to you on Earth. These long trips take a considerable amount of time, which leads to higher latency (response time) and a worse connection.

This is where traditional satellite internet falls short. Ideally, we want a connection with lower latency, which is where SpaceX’s Starlink comes in. SpaceX’s proposal was to make Starlink “a constellation of thousands of satellites that orbit the planet much closer to Earth, at about 550km, and cover the entire globe”. This low Earth orbit, far below the geostationary orbit used by traditional satellite internet, proves much more effective, as it increases internet speeds and reduces latency.
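
The physics behind that claim is easy to check. Ignoring routing, processing, and the horizontal distance to a ground station, the best-case propagation delay is set by how far the signal must travel at the speed of light:

```python
SPEED_OF_LIGHT_KM_S = 299_792

def best_case_round_trip_ms(altitude_km: float) -> float:
    """Propagation delay only: the signal crosses the altitude four times
    (up and down for the request, up and down for the response)."""
    return 4 * altitude_km / SPEED_OF_LIGHT_KM_S * 1000

print(f"Geostationary (~35,786 km): {best_case_round_trip_ms(35_786):.0f} ms")  # ~477 ms
print(f"Starlink LEO (~550 km):     {best_case_round_trip_ms(550):.1f} ms")     # ~7 ms
```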

How fast is Starlink?

It’s fast, but how fast, really? Starlink offers two plans for subscribers: the basic plan and the premium plan. The basic plan advertises download speeds from 50 to 250 megabits per second (Mbps), whilst the premium plan’s download speeds range from 150 to 500 Mbps; but is this really the case?

Source: Official Ookla Website

Ookla’s recent report shows that the median download speed in the US was 164 Mbps, which does fall within the advertised range for both plans. The median latency was about 27 ms in the US, which is within the optimal range of 20-40 ms and a huge improvement compared to previous testing.

The future of Starlink

As of writing, the Starlink constellation consists of about 3,300 small satellites, with the latest additions on 17 December 2022, when a SpaceX Falcon 9 rocket lifted off for its 15th time carrying 54 Starlink satellites. Overall, about 12,000 satellites are planned for deployment, with a possible extension to 42,000 afterwards. This should ultimately fulfill SpaceX’s proposal of global internet availability, and the million-subscription milestone is a step in that direction.