The Limits of Cybersecurity and the Role of Reliability
Emre Kursat Kaya, Research Fellow, EDAM
With the development of cloud-based platforms and the construction of 5G infrastructures, we are witnessing yet another transformation of the internet. This evolution will both introduce new technologies and upgrade existing ones. In addition to becoming more dependent on cloud-based technologies, we will also see a larger presence of digital systems in our “physical” life via the Internet of Things (IoT).
As with past innovations, the dissemination of new products and services takes priority over security mechanisms. This is not to say that new technologies are not tested. It is just that these tests are mostly based on performance-related parameters rather than on reliability- and cybersecurity-related ones. The existing approach is vital both for the spread of these technologies and their benefits among the public and for the survival of innovative ecosystems. Various efforts are made to minimize the negative effects of this post-dissemination security approach, yet these efforts face limits. This post will present some of those limits and argue that reliability-related parameters should play a more significant role before new technologies are brought to market.
Cybersecurity and its Limits
Public authorities, technology companies, and customers are all working on making new technologies more secure. National cybersecurity strategies are being developed by states all around the world. These are followed by the formation of cyber task forces and institutions to counter malicious attacks against state infrastructures. Big Tech companies are working on securing the servers on which we all rely, through maximum hardware security and regular software updates. Additionally, businesses of all sizes and individuals are taking various cybersecurity measures, from hiring hundreds of IT analysts to installing the latest antivirus program.
While these efforts are necessary, they do not match the pace of technological development and the growing number of potential entry points for cybercriminals. The limits of these cybersecurity measures stem from the way both civilian infrastructures and malicious cyber actors function.
To start, civilian infrastructures are more vulnerable to cyberattacks than military ones because the latter are less digitally connected. By definition, civilian infrastructures (private or public) are more accessible to the larger population. The connectedness and accessibility of these infrastructures, coupled with the ever-increasing number of connected devices, make it difficult to find and patch all possible points of entry. Additionally, even if all of a company's vulnerabilities are patched, the ecosystem in which the company operates cannot be fully secured. While big companies may allocate sufficient resources to their IT departments, this is often not the case for all of the SMEs they do business with. Indeed, while 43% of online attacks are aimed at small businesses, only 14% of those attacked have sufficient cybersecurity capabilities.
A second limitation is the shortage of skilled personnel. There is a large and growing need for cybersecurity analysts around the world. A 2019 (ISC)² study estimates the global cybersecurity workforce at 2.8 million individuals. The same study also shows that around 4.07 million additional cybersecurity experts are still needed. This gap is set to grow with the spread of the IoT.
A more connected world also equates to an easier proliferation of malicious cyber tools. Hackers are usually not the lone wolves portrayed in Hollywood movies. They work in groups, learn lessons from past attacks, whether successful or failed, and share their experiences on blogs and in chat groups. Hackers do not necessarily need to be backed by a state or larger non-state actor. Their motivation can be personal or financial.
When these motives are translated into an Advanced Persistent Threat (APT), attackers never stop looking for a way to breach the targeted system. These attacks are usually highly coordinated and begin long before they are noticed by the target's cybersecurity department.
The complexity of hackers’ motives, resources, and types complicates their cataloging. Thus, companies and public authorities do not always have enough knowledge of these actors to prioritize the threats emanating from them. This lack of information is reflected in the issue of attribution: it is highly difficult to attribute an attack without the resources and expertise to adequately recognize it. This is also mirrored in the low level of cybercrime reporting. In the U.S., according to the head of the FBI’s Internet Crime Complaint Center, only 10-12% of cybercrimes are reported.
Reliability Has a Role to Play
A grim picture emerges from the current limits of cybersecurity. Yet the threat posed by cybercriminals will not stop the spread of new technologies. More resources and manpower will be dedicated to minimizing the negative effects of cyber threats. Still, some measures can be taken even before there is a risk of attack.
As mentioned above, new devices and services are tested before becoming available on the market. Yet these tests are mostly performance-based. The reliability of a system is usually questioned only when a similar product has suffered a major breach or defect, or when there are geopolitical motives behind the scrutiny. The Huawei case is an obvious example of the latter, while the Uber self-driving car crash illustrates the former.
One way to increase the reliability of new devices and digital services is to set cybersecurity standards. Standard-setting is a form of risk mitigation and, when possible, risk avoidance. It is a proactive measure taken before a harmful event occurs.
Several developed countries have set mandatory cybersecurity standards. For example, in 2015 France introduced its National Digital Security Strategy, a mix of priorities and measures to take before and after a cyberattack. In the U.S., different sectors implement different standards. The Federal Energy Regulatory Commission, a federal agency, is responsible for setting the Critical Infrastructure Protection (CIP) Reliability Standards. These standards apply to all stakeholders, from software providers to grid operators. Yet there are also provisions favoring self-regulation for these actors. The voluntary approach is supported by the National Institute of Standards and Technology’s (NIST) Cybersecurity Framework.
Besides standard-setting, public authorities can work with tech companies to reach an adequate level of cybersecurity, for instance through the establishment of commissions or research centers. Huawei has chosen this method to address the cybersecurity concerns of several countries, including Germany and the UK. This year, the report of the Huawei Cyber Security Evaluation Centre (HCSEC) Oversight Board to the UK's National Security Adviser pointed out the lack of progress made by Huawei on cybersecurity. This type of strategy can both provide a public good and increase the legitimacy of tech companies.
Governments can also incentivize best practices among private actors. Regulatory and legal systems still operate on a post-attack basis, punishing a company after a breach if it failed to take the necessary measures. Yet governments could also establish reward-based incentives for companies that take the necessary cybersecurity measures and avoid cyberattacks.
In any case, as we become faster with 5G and more connected through the Internet of Things, cybersecurity risks will increase exponentially. Other new technologies, such as artificial intelligence and quantum computing, will also provide new tools for cybercriminals. While reactive cybersecurity actions have been the rule, there is now a need to increase the share of proactive, pre-dissemination measures to maximize the safety of civilian infrastructures.