By Alvaro Teofilo*

The “Global Risks” annual report, published by the World Economic Forum and presented in Davos at the end of January, identified cybersecurity as one of the top three concerns of business leaders and governments in 2018, alongside environmental issues and the large-scale displacement of migrants around the world.

Finding security ranked among such important issues for society comes as no surprise. Over the last year we witnessed a significant volume of cybersecurity incidents with global impact, and their roots trace back to two extremes: on one side, the absence of practices considered basic in traditional technology infrastructures; on the other, companies’ lack of knowledge about how to properly manage the security of cloud services.

The simple lack of a patch — a sort of “technical recall” adopted by software manufacturers and familiar to corporations and governments for decades — brought chaos to companies of all sizes and industries when the WannaCry ransomware spread around the world in less than 24 hours in May last year. The National Audit Office, an agency of the British government, stated in a report:

“[WannaCry] was based on an attack with a low degree of sophistication, which could have been avoided if the NHS had followed good security practices that are considered basic”

The technology infrastructures of English hospitals were among the most affected by WannaCry; however, a substantial number of other companies — including one of the largest telecommunications groups in the world — when hit by the malware and unsure of what to do, simply closed their offices and sent their staff home.

Obviously, not everyone experienced chaos on the day of the incident. Those who had minimal processes for managing security patches and adequate governance of their telecommunications networks — two well-known topics with extensive literature in the technology industry — were rewarded with intact, incident-free infrastructures. Several of my colleagues were in the privileged position of “spectators” of the horror show the world was watching.
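At its core, the patch-management discipline that spared those teams amounts to continuously comparing what is installed against what the vendor has released. A minimal sketch, assuming an inventory feed that does not exist here (the patch identifiers are illustrative, though MS17-010 is indeed the update that closed the SMB flaw WannaCry exploited):

```python
# Minimal patch-gap check: compare an inventory of installed updates
# against the vendor's list of required security updates.
# The inventory and required lists below are illustrative, not a real feed.

def missing_patches(installed, required):
    """Return the required updates that are not yet installed."""
    return sorted(set(required) - set(installed))

installed_on_host = {"MS16-120", "MS17-006"}
# MS17-010 is the update that closed the SMB flaw exploited by WannaCry.
required_security = {"MS16-120", "MS17-006", "MS17-010"}

gap = missing_patches(installed_on_host, required_security)
print(gap)  # a non-empty list means the host is exposed
```

In practice the “required” list comes from the vendor’s security advisories and the “installed” list from an asset inventory; the hard part is keeping both feeds current, not the comparison itself.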

If, on the one hand, basic processes are not being adopted by companies and governments, on the other there is a clear sense that society does not yet understand how the new technologies at our disposal actually work. The cases of large-scale data leakage from cloud computing services stood out last year.

One of the most severe cases occurred in a cloud service operated by a large consultancy, used by more than 90 of the groups listed in the “Fortune Global 100”. According to accounts from people involved in the incident, more than 150 gigabytes of data — customer data gathered by the consultancy — were found in four instances (“buckets”) of a popular cloud storage service, accessible simply via the public address of the servers that stored them. No password required.

Once again, basic security processes were ignored; this time, however, in a technology not yet widely mastered by managers and technicians. More mature cloud services — among them Amazon AWS, Google Cloud and Microsoft Azure — provide basic security tools and practices at no additional cost to any person or company that hires them, and technical users are often “forced” through a minimal checklist before completing the setup of a new server. Still, apparently, the consultancy’s technicians ignored those steps.
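The misconfiguration described above — data readable by anyone who knows the bucket’s address — corresponds, in AWS terms, to a grant to a public group in the bucket’s access control list. A hedged sketch of the kind of check the providers’ own tools perform (the dictionary layout mirrors the response of S3’s `get_bucket_acl` call; the sample grants are invented):

```python
# Flag ACL grants that expose a storage bucket to the public.
# The dict layout mirrors S3's get_bucket_acl response;
# the sample grants are invented for illustration.

# Predefined AWS groups whose presence in an ACL makes a bucket public.
PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def public_grants(acl):
    """Return the (group URI, permission) pairs that expose the bucket."""
    exposed = []
    for grant in acl.get("Grants", []):
        grantee = grant.get("Grantee", {})
        if grantee.get("Type") == "Group" and grantee.get("URI") in PUBLIC_GROUPS:
            exposed.append((grantee["URI"], grant.get("Permission")))
    return exposed

acl = {
    "Grants": [
        {"Grantee": {"Type": "CanonicalUser", "ID": "owner-id"},
         "Permission": "FULL_CONTROL"},
        {"Grantee": {"Type": "Group",
                     "URI": "http://acs.amazonaws.com/groups/global/AllUsers"},
         "Permission": "READ"},
    ]
}
print(public_grants(acl))  # any result here means "no password required"
```

Checks of exactly this shape are what the providers’ free tooling automates; the incidents happened not because the check was unavailable, but because nobody ran it.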

Large providers of these services have shown enormous responsibility and initiative in making security tools and practices easier to use; nevertheless, the common “shared responsibility” models of cloud services correctly indicate that part of cloud security depends exclusively on the customer, and the incidents these companies are experiencing teach us that responsibility for failures cannot be transferred to technology providers.

As global policy expert and author Michele Wucker points out in the same report that inspired this article:

“Risk management begins with identifying a given threat and estimating its probability and impact. With this information, we can then decide whether the risk fits within the limits we tolerate and how we should react to reduce or mitigate our exposure to it”.
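Wucker’s description maps onto the classic exposure formula — risk as probability times impact, compared against a tolerance threshold. A minimal sketch of that reasoning (the threat names and figures are invented for illustration):

```python
# Classic risk estimation: exposure = probability * impact,
# then compare against the tolerance the organization has set.
# Threats and figures below are invented for illustration.

def exposure(probability, impact):
    """Expected loss for one threat (probability in [0, 1], impact in currency)."""
    return probability * impact

def above_tolerance(threats, tolerance):
    """Return the threats whose exposure exceeds what we accept."""
    return [name for name, (p, i) in threats.items() if exposure(p, i) > tolerance]

threats = {
    "unpatched servers":   (0.30, 500_000),    # likely and costly
    "open storage bucket": (0.10, 2_000_000),  # less likely, but severe
    "lost laptop":         (0.05, 20_000),     # minor
}

print(above_tolerance(threats, tolerance=100_000))
# → ['unpatched servers', 'open storage bucket']
```

The arithmetic is trivial; the difficulty the article describes lies in estimating the probabilities and impacts honestly for technologies the teams do not yet fully understand.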

We have been living through a time of anxiety for large and medium-sized businesses around innovation, business digitization and efficiency, and there is a clear difficulty in properly managing these “new risks.” The feeling is that the speed with which the technology industry is pushing businesses toward new possibilities is opening an ever-larger gap in teams’ ability to properly understand and manage the new technological risk scenarios created during this phase of transformation. It is the responsibility of business managers and technology managers to recognize these gaps and to ensure investment in security and risk management at levels consistent with the size of their business and with their appetite for the risks inherent in the digital environment.

Alvaro Teofilo is a Director of Strategy and Business Development for Tempest Security Intelligence.