
Please think along with us!

When writing the growth book, it quickly became clear that there is a lot of information to share. To make things easier for the reader, filters were devised with which the content of the growth book can be tailored to a specific situation. Readers can, for example, indicate for which phase and for which role they are looking for information, after which only the texts relevant to them remain. But when is something relevant? When assigning the filters, this turned out to be a difficult question. That is why the filters are not working properly at the moment: when the filters are activated, virtually all content remains visible and virtually nothing is filtered out.

The filter functionality will be further developed in 2020. If you have suggestions or want to help us, we would like to hear from you. Contact us via 085 4862 410.



To view this living document offline, you can download a PDF version (3-5 MB) via the button below. This PDF is updated daily, but remains a snapshot: over time, the downloaded PDF may deviate from the online living document.

Download PDF version

Do you want to participate?

A living document wouldn’t be alive without newly added knowledge. The content of this living document can be updated at any time, and we need your help for that. If you see something that is not right, or if you have additions, you can contact us. After consultation, you can get access to the content of the living document. Suggestions are evaluated by the steering committee every month before being published online.


Table of contents

    Lessons learned

    Living document Cybersecurity tunnels – version 2023

    1. Introduction [link id=”1d968″]

    This living document focuses on cybersecurity in the context of safety, availability and privacy in infrastructure. Although the document is specifically aimed at tunnels, the content is widely applicable to all infrastructural works and objects with (industrial) automation, such as bridges, locks, weirs and storm surge barriers.


    Downloads (Dutch)



    A number of important points of attention from this living document have been summarised in a fact sheet and on the posters ‘Guidance on cybersecurity for (tunnel) managers’ and ‘Cybersecurity and organisation relationship diagram’. You can download the (Dutch) products free of charge.

    >> Fact sheet corresponding to the living document version 1 (pdf, 525 KB)

    >> Cybersecurity guide for (tunnel) managers (pdf, 133 KB)

    >> Fact sheet corresponding to the living document version 2 (pdf, 555 KB)

    >> Cybersecurity and organisation relationship diagram (pdf, 85 KB)

    1.1 Reading this document [link id=”xk940″]

    Although the living document is divided into separate, individual chapters, there are clear links between the chapters. This is shown in the following figure. Using the filters at the top, you can directly make the correct selection of chapters. The chapters about the aspects and legislation and regulations are relevant in all cases; as a result, there are no filters in these chapters, and they always remain visible. However, for example for asset management, it is not necessary to read the chapter about verification and validation. For that reason, if the ‘Asset management’ filter is applied, the chapter about verification and validation is rendered invisible.


    Figure 1.1: The elements in the living document.
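    The filter behaviour described above can be sketched as a simple mapping from filters to relevant chapters, with the chapters on the aspects and on legislation always visible. This is a hypothetical illustration only: the chapter names and the filter-to-chapter mapping below are assumptions for the example, not the actual configuration of the living document.

```python
# Hypothetical sketch of the filter logic: each filter maps to the set of
# chapters relevant to it; aspect and legislation chapters are always shown.
# All names here are illustrative assumptions.

ALWAYS_VISIBLE = {"The aspects of cybersecurity", "Legislation and regulations"}

RELEVANT = {
    "Asset management": {"Asset management", "Cybersecurity management"},
    "Verification and validation": {"Verification and validation"},
}

def visible_chapters(active_filters: set, all_chapters: set) -> set:
    """Return the chapters shown for the given set of active filters."""
    if not active_filters:
        return set(all_chapters)  # no filter active: show everything
    selected = set().union(*(RELEVANT[f] for f in active_filters))
    return ALWAYS_VISIBLE | selected

chapters = ALWAYS_VISIBLE | {"Asset management", "Cybersecurity management",
                             "Verification and validation"}

# With 'Asset management' active, 'Verification and validation' disappears,
# while the always-visible chapters remain.
print(sorted(visible_chapters({"Asset management"}, chapters)))
```

    Note how this matches the example in the text: applying the ‘Asset management’ filter hides the chapter on verification and validation but leaves the aspect and legislation chapters visible.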



    In order to keep all the information readable, a number of interactive elements are available. The main text is concise and compact in its structure. Additional information or selections of texts can be presented and called up via buttons and links.

    1.2 Background and purpose [link id=”xrhfx”]

    A series of studies have revealed that government, business and industry, and citizens all need to take steps to increase their digital resilience. However, these steps are not being taken quickly enough and are constantly outpaced by developments. This is reflected, for example, in the Cyber Security Assessment Netherlands from the National Cyber Security Centre (NCSC).


    The aim of the members of the COB network in producing this guide is to improve awareness of cybersecurity, and to introduce all parties involved in the realisation and management of infrastructural projects to the key aspects of cybersecurity. Using this knowledge, the various stakeholders will be able to assess the digital resilience of their object and structure their processes in such a way that the appropriate measures are taken in a timely fashion, to prevent or manage cyber incidents. A second purpose of this guide is to assist the various parties in organising their efforts to mitigate the possible consequences of incidents.


    The purpose of this living document is therefore to offer an easily accessible introduction to the world of cybersecurity for anyone professionally involved in security in and around tunnels (and other infrastructure objects in which comparable technology is employed). Implementing and safeguarding cybersecurity within the infrastructure is a precondition for the safety of everyone in and around the object.


    The absence of cybersecurity can have immense consequences. For instance, cyber incidents can result in injuries, damage and losses, and not only financial losses. Negative impact on business continuity in vital and non-vital processes can have major social and economic consequences, which can also affect people’s safety. Moreover, reputation damage (due to the lack of cybersecurity or due to damage as a consequence of a successful attack) can have a huge impact on an organisation. The credibility of the organisation vis-à-vis clients and potential clients can be seriously damaged. As an organisation, you can take all kinds of measures against financial risks, such as insurance and/or reserving a risk budget, but damage to your image cannot be insured and is often permanent.


    Cybersecurity business case

    When drawing up a business case for cybersecurity, it is important to determine which cybersecurity measures (costs) will result in a reduction of social, financial, security and privacy risks (benefits). Benefits can be found in the reduction of potential damage and injury and the direct costs of repairs. By way of illustration, the losses caused by the NotPetya cyber attack in 2017 in the Netherlands, better known as the Maersk case, amounted to more than 300 million euros.
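    One common way to make such a cost/benefit comparison concrete is an annualised loss expectancy (ALE) calculation: expected loss per incident times expected incidents per year, before and after a measure. The sketch below is illustrative only; the scenario, probabilities and amounts are assumptions, not figures from the living document.

```python
# Hypothetical business-case sketch using annualised loss expectancy (ALE).
# The numbers are invented for illustration and are not from the document.

def ale(single_loss_expectancy: float, annual_rate_of_occurrence: float) -> float:
    """ALE = expected loss per incident x expected incidents per year."""
    return single_loss_expectancy * annual_rate_of_occurrence

def net_benefit(ale_before: float, ale_after: float, annual_cost: float) -> float:
    """Risk reduction delivered by a measure, minus its annual cost."""
    return (ale_before - ale_after) - annual_cost

# Assumed scenario: an OT incident costing EUR 200,000, expected once every
# two years (rate 0.5/year); a measure halves the likelihood (rate 0.25/year)
# and costs EUR 20,000 per year.
before = ale(200_000, 0.5)    # EUR 100,000 per year
after = ale(200_000, 0.25)    # EUR 50,000 per year
print(net_benefit(before, after, 20_000))  # 30000.0
```

    A positive net benefit supports the measure; a negative one suggests the measure costs more than the risk it removes, at least in purely financial terms (reputation damage, as noted above, is harder to quantify).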


    In both the public and the private sector, management is not always convinced that the threat of a cyber incident is real. Often-heard statements include:


    • What could they possibly get from us?
    • We’re not interesting, are we?
    • This wouldn’t happen to us, you know!
    • We’ve got our affairs in order.
    • Our systems are not connected to the Internet.


    In addition, the costs of security measures are a deterrent for many organisations.


    However, it is no longer a question of whether you will be hacked, but when and what the impact will be.


    For the most recent reports, see: National Cyber Security Centre (NCSC).

    1.3 Scope [link id=”qmtxt”]

    This living document provides people and parties involved in tunnels with an idea of the elements involved in cybersecurity in tunnels. The guide does not provide an elaboration of systems, designs and procedures, because these after all are project, organisation and system-specific. As such, this document presents a series of considerations, but cannot be, and never will be, complete.


    The first edition of this living document was published on 30 May 2018; the second version appeared on 26 November 2019. In version two, the living document was extended to cover the entire life cycle of a tunnel/object, and the tasks and responsibilities of all stakeholders involved during each phase were described.


    In the third edition (2020), the layout and structure of the living document were fundamentally revised. The aim was to make subjects and aspects easier to trace and find. This revision makes it easier for readers to filter by aspect and subject, and to find the relevant information, brought together in a single document.


    Version three also includes a number of important additions and expansions, including a tool for conducting a maturity analysis, a clarification of the relationship between cybersecurity and tunnel safety, and an example for a business case in which arguments, methods and effects are listed.

    In the fourth version (2021), attention was focused on a number of specific subjects from practice. In summary:

    • Recovery and business continuity: how do you ensure that, following an incident, you are back up and running quickly and in a controlled fashion, and how do you limit the damage an incident causes?
    • Patching and servicing: the aim of maintenance is to improve systems, but new code or components and temporarily opened gateways also present a risk. How do we manage that risk, and how do we benefit from the opportunity?
    • Renovation: to what extent is renovation different from new construction? What distinguishes it from normal operation and normal maintenance, and in what way does renovation work generate additional or different risks?
    • Relationship between cybersecurity and tunnel safety: a memorandum has been added that explains the various aspects of this relationship.


    In this fifth edition, some more useful additions have been made:

    • Attention has been paid to recognising a cyber incident and the action perspective, in the form of a number of scenarios.
    • Knowledge on the effect of legacy systems and how to deal with them has been added.
    • A knowledge guide has been developed and added: what information can be found, how do the different sources relate to each other (this living document sits somewhere in the middle), and, in addition, what should you know for the role you play in relation to a tunnel and where can you find that knowledge?


    This fifth edition completes a project of many years (we started with the first ideas and sketches back in 2014). We expect to produce a next edition when substantial changes in applicable legislation or regulations have been published. A significant new development in tools or threats could also be a reason to issue a new edition. The current document will be updated at intervals, which will be announced on this page.


    All five editions were published in Dutch; editions two, four and five were also published in English. We aim to publish future editions in both languages at the same time.

    1.4 What is cybersecurity for OT? [link id=”077wv”]

    Cybercrime is generally thought of as referring to attacks on the corporate IT environment. For instance, a DDoS attack that shuts down a service, malware that steals business information or a database that is contaminated with erroneous data. However, the time when hackers only singled out traditional IT environments is gone. More and more often, they are also targeting the operational environment: the operational technology (OT) environment of, for example, energy companies and factories. And in those situations, the damage is immediately many times greater. A company’s critical OT process that is suddenly shut down can mean that a plant’s production lines come to a standstill, terminals are closed, tunnels and bridges become unsafe and locks no longer open or close. The consequences are then incalculable.


    In this document, cybersecurity should be taken to mean:


    All security actions taken to prevent damage caused by the failure, malfunction or misuse of an information system or computer. Actions are also taken to limit and/or repair damage if it does occur. Examples of damage are no longer being able to access a computer system when one wants to, or that the stored information ends up with others or is no longer correct. The measures relate to processes in the organisation, technology and the behaviour of people.

    (Source: cybersecurity-woordenboek)


    Note: an OT system is an information system or computer, supplemented with industrial automation. Because OT systems connect the digital and physical worlds, a cyber incident can, in addition to the damage mentioned in the definition, also cause damage in the physical world, for example injury to persons through a collision with an unexpectedly closing tunnel or bridge barrier.

    1.5 Working group cybersecurity and ISAC Tunnels [link id=”q43vc”]

    At the ‘Secure software’ event on 10 February 2015 organised by the COB platform Veiligheid, Jaap van Wissen from Rijkswaterstaat explained the information sharing and analysis centres (ISACs) for vital sectors in the Netherlands, see figure below.



    Figure 1.2: Overview of ISACs active in the Netherlands. As standard, three different public organisations are affiliated with each ISAC: the National Cyber Security Centre (NCSC), the General Intelligence and Security Service (AIVD) and Team High Tech Crime (THTC) of the National Police. (Source: NCSC)


    As a result of this session, the initiative arose for a Cybersecurity working group and for an ISAC focused on cybersecurity in tunnels.

    The ISAC Tunnels was set up with the following mission and objective:

    1. The ISAC Tunnels provides a secure and trusted environment in which parties that are part of the vital infrastructure in the tunnel community, together with the government parties responsible for security and cybersecurity, exchange sensitive and confidential information about cyber threats and best practices.
    2. The ISAC Tunnels is a forum where the sharing of knowledge, information and experiences regarding cybersecurity between members of the tunnel community plays a central role.
    3. The ISAC Tunnels contributes to the strengthening of (chain) security in the sector by forming a permanent (human) network, which makes it easier for parties to find each other, including outside the consultation process.
    4. Added value and mutual trust form the basis of the ISAC Tunnels.


    This living document was drawn up by the cybersecurity working group of the COB. The working group consists of approximately forty participants, from market parties such as engineering firms, system integrators, suppliers and cybersecurity companies, and governments. The members of the closed ISAC Tunnels are also members of the working group. The ISAC itself decides which information can be shared with the Cybersecurity working group and which remains exclusively among the ISAC members.


    In Appendix 7 Overview of working group participants, an overview appears of the participants of the working group.

    1.6 Terminology [link id=”xgmmn”]

    Different terms are used for the automated systems in the tunnels, such as

    industrial automation and control systems (IACS), industrial control systems (ICS), industrial automation (IA) and operational technology (OT). In this living document, the term operational technology (OT) is used for the operational systems, networks and applications.


    For the purpose of this document, the term ’tunnel’ refers to the tunnel system including its connections to the wide area network (WAN) and the traffic control centre; see the diagram below. The traffic control centre and the WAN themselves are not part of the tunnel system.

    Figure 1.3: Diagram of tunnel system and traffic control centre


    For an explanation of the other jargon used in this document and within the cybersecurity sector, you can consult the (Dutch) Cyberveilig Nederland glossary of terms.

    2 The aspects of cybersecurity [link id=”c20cq”]

    In structuring information relating to cybersecurity, it is useful to create a classification. This classification is based on three aspects.

    • People
    • Organisation
    • Technology


    Dividing a question into these three aspects helps to analyse and describe the situation, or to draw up measures. The three aspects are distinct, but in practice they affect each other. This is shown in the figure below.

    Figure 2.1: Aspects of cybersecurity.


    The aspect ‘people’ relates to human behaviour. In the case of cybersecurity, this refers for example to incidents resulting from human action (failure to use a password, unauthorised access). The aspect ‘technology’ emerges when something is a direct consequence of, or impacts on, a technical characteristic (a non-upgraded version, a faulty door lock). The aspect ‘organisation’ relates to the influence of processes, rules, procedures, etc. Examples are the absence of an active cybersecurity policy, or too many authorities allocated to a single person.


    It is important to realise that these three aspects impact each other. For example, the wish may be expressed that employees (‘people’) do not leave their laptops unattended for too long. The obvious step is for the IT department to implement and manage (‘organisation’) an app (‘technology’). This example shows that in describing a situation or measure, all three aspects must be considered separately. At the same time, the applied technology must be appropriate to the organisation and tie in with the knowledge and experience of the ‘people’ factor. In other words, the triangle must be balanced. Only if all three aspects are covered is it possible to correctly implement cybersecurity.


    The following sections deal with each of these three aspects in greater depth.

    2.1 People [link id=”lkz5s”]

    The ‘people’ aspect relates to the influence a person or persons can have on cybersecurity. That influence is the consequence of an individual’s actions or behaviour in a given situation.


    Many people have little or no awareness of the risks related to the way in which they behave. Employees all too often still share login details with colleagues or write them on a note that is kept with the installation to which it relates. It is also not uncommon for maintenance staff to connect data carriers (laptops, USB sticks, etc.) to installations on which they are performing maintenance, without scanning for malware in advance. Other examples of undesirable behaviour are checking the content of ‘found’ USB sticks, losing information (both on paper and on digital media) and sharing information with non-authorised persons, whether deliberately or accidentally. At the same time, people who are dismissed, who find themselves in financial difficulties, or who are dissatisfied or incapable often represent major risk factors.


    Such examples show that behaviour leads to risks that must be considered in carrying out a risk inventory. It is important to be aware of the possible dangers of behaviour and to take the necessary corrective measures. Examples are blocking accounts, regularly changing passwords and training staff. After all, human behaviour can completely negate technical and procedural measures. If a cyber risk is solved through a technical and/or procedural measure, but that measure forms too much of an obstacle to people in their work, a workaround is soon developed. As a rule, such workarounds do not make things more secure. The golden triangle, the balance between functionality, security and ease of use, is therefore of essential importance.


    More than seventy percent of all reported security incidents are due primarily to ignorance and incorrect human behaviour. In that sense, people represent an important factor for cybersecurity. It is therefore essential that people be trained (in knowledge and awareness) and enabled to develop secure behaviour. This relates to managers, operators, maintenance staff and anyone else involved in carrying out work on objects.

    2.1.1 Awareness [link id=”w1q8w″]

    Awareness is the cheapest and most effective method of laying the foundation for cybersecurity within an organisation. An example of a campaign to raise awareness is the Dutch Safe banking website. Here you can find tips and videos about recognising when something is not right with emails or phone calls that supposedly come from your bank. For instance, as soon as you are asked for personal details or to send your bank card back to the bank, the alarm bells should ring. Your bank will never ask you to do this kind of thing. In other words: ‘Hang up! Click away from the site! Call your bank!’

    2.1.2 Education, training and exercise (ETE) [link id=”70×64″]

    Cyber incidents can lead to a crisis situation in which the safety of people or the environment can no longer be guaranteed. In order to have a framework for action in such a crisis, it is important to prepare the organisation for the steps that need to be taken. This can be achieved through a programme of education, training and exercise (ETE). The ETE programme is therefore very important in the crisis-management structure. Frequent education, training and exercise should be permanently on the agenda in an organisation.


    More information on incidents and recovery is available in chapter 8, Incident response and recovery.


    Importance of ETE

    Within infrastructure, awareness of cybersecurity is increasing. One of the key requirements, ‘a safe object’, is not only about technology and providing redundant solutions, but also about policy, processes and procedures, during both the construction and operational phases. Total security does not exist and would not be economically justifiable, not even to achieve a ‘cyber-secure’ situation. It is therefore not possible to demand that every single thing is perfectly organised. The objectives are:

    • The risks have been identified and are being managed.
    • An acceptable level of risk has been determined.
    • It is clear what management measures need to be taken and why.
    • There is a sound OT security policy in place.


    When identifying the risks, for example by means of a security assessment, it is important to examine the three aspects: ‘people’, ‘organisation’ and ’technology’. These aspects should be approached together. Only then can cybersecurity be managed in a responsible manner and responsible cost-effective measures taken. Through ETE, the right balance can be found between people, organisation and technology.
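    A security assessment along these lines can be supported by a simple risk register that records, for each risk, the aspect it belongs to and a score. The scoring scheme below (likelihood times impact on 1-5 scales), the acceptable-risk threshold and the example risks are assumptions for illustration; the living document does not prescribe a particular method.

```python
# Hypothetical risk register covering the three aspects (people,
# organisation, technology). Scales, threshold and entries are invented
# for illustration only.

from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    aspect: str        # "people", "organisation" or "technology"
    likelihood: int    # 1 (rare) .. 5 (almost certain)
    impact: int        # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        # Simple qualitative score: likelihood x impact.
        return self.likelihood * self.impact

register = [
    Risk("Shared login credentials on a note at the installation", "people", 4, 3),
    Risk("No active cybersecurity policy", "organisation", 3, 4),
    Risk("Unpatched firmware in the tunnel installation", "technology", 3, 5),
]

# Treat everything above an (assumed) acceptable level as needing a measure,
# highest scores first.
ACCEPTABLE = 9
needs_measure = sorted((r for r in register if r.score > ACCEPTABLE),
                       key=lambda r: r.score, reverse=True)
for r in needs_measure:
    print(r.aspect, r.score)
```

    Tagging each risk with its aspect makes visible whether the assessment actually covers people, organisation and technology together, rather than concentrating on technology alone.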


    Despite the increasing awareness, the domain of cybersecurity within infrastructure is still in a development phase. A ‘cyber roadmap’ containing the improvements that are planned to be implemented in the coming years is indispensable in this respect.


    It goes without saying that ETE is an essential part of the cyber roadmap. It improves performance and can prevent crucial mistakes, especially in hectic situations. Having, maintaining and practising with frameworks and plans helps to ensure continuity and increases the self-awareness and confidence of staff. It also ensures that errors and omissions in the scenarios can be found and corrected before a real crisis occurs.


    Focus areas ETE programme

    Staying trained! Raising people’s awareness to the desired level quickly is very important, and training is an excellent means of achieving this. But how do we make sure that the people involved do not fall back into old habits just as quickly? The answer is repetition. Repetition keeps people sharp and can take many different forms. Some examples:

    • A periodic refresher course (classroom or e-learning)
    • Toolbox meetings
    • Newsletters/mailings
    • Cybersecurity flyers
    • ‘Food for thought’ items/one-liners/maxims
    • Narrowcasting
    • Serious games and quizzes during meetings
    • Sharing experiences of incidents within the organisation


    Staff in an organisation should be aware of the factors that can lead to a crisis in the work process and that then require an appropriate crisis approach. In addition, it is important to know not only the crisis roles, but also the associated frameworks for action and working methods.

    2.2 Organisation [link id=”7k3p8″]

    IEC 62443-2-2 and ISO 27002 contain guidelines on the roles and the requirements that need to be defined for information protection and cybersecurity. The figure below gives an example of cybersecurity roles. In principle, each role could be filled by a separate specialist, but certainly for smaller projects/contracts this is neither feasible nor desirable.

    Figure 2.2: Example of cybersecurity roles.



    By embedding tasks, authorities and responsibilities relating to cybersecurity in the organisation, it is possible to allocate detected risks to the appropriate person/problem owner. Those persons can help to accurately estimate the threat and its potential impact during the risk analysis. If they do not possess the necessary expertise to properly assess the situation, they should consult persons who do or follow a training programme to gain the skills needed to make their own assessment.


    Two very common support roles are:

    • Mandated official for connecting equipment: explicit permission is needed for connecting every non-standard item of equipment. The mandated official determines and applies the frameworks for granting permission to connect equipment, registers the equipment to be connected and investigates the risks.
    • Personnel with authorised access to log data: access to log data is subject to very high confidentiality requirements. The data can include personal data and therefore fall under the GDPR. In addition, log data often contain important information that could reveal vulnerabilities in a system to a hacker, for example version information, IP addresses and protocols used. With that in mind, additional requirements are imposed on people who have access to or process log data.


    In addition to raising the cybersecurity awareness of all the people involved in the tunnel, specific officials must be appointed and a number of specific requirements apply to employees who are operational in tunnel management (own and insourced personnel); see the following chapters.


    Outsourced work

    Because much work is outsourced, this aspect must receive special attention. When work is outsourced, it is very important that the insourced third party is aware of and follows all procedures and measures applicable for the internal organisation. If it becomes necessary to depart from this structure, ensure that the deviation is authorised and documented so any deviations remain traceable and can be accounted for.


    It is the contractor’s responsibility to demonstrate that the reallocated activities and responsibilities comply with the original requirements. Nonetheless, under the law, the tunnel manager will retain joint and several final responsibility for the security of the object. The tunnel manager must therefore be confident in the reallocated activities. The shape of the organisation can vary greatly depending on the nature and size of the project/contract. See also chapter 4 Cybersecurity management.

    2.3 Technology [link id=”vvpwk”]

    The aspect ‘technology’ relates to all equipment, components, objects, software, etc. present and anything directly related to those elements. This therefore includes maintenance contracts, maintenance schedules, etc. Technology is a vital aspect for cybersecurity, and therefore demands optimum deployment. Nonetheless, technology itself is not infallible, and requires periodic evaluation and – if necessary – adjustment.


    To limit cyber risks, tunnel equipment (but also the devices used by maintenance staff, such as laptops and data carriers) must be kept virus-free and up to date. This necessity is further reinforced by the fact that subsystems are often sourced from different suppliers. Access points must be technically facilitated in such a way that they can be used safely. It must also be evident to which installations there is remote access. Practice has shown that risks are not always clear or recognised. For example, a sub-installation may use a (wireless or Internet) access point for maintenance work, and through this access point it may be possible to reach the entire network. With the correct monitoring, such vulnerabilities can be identified.


    Note: the technology must be safe and robust, but must not make performance of maintenance and recovery work impossible, since otherwise maintenance workers will employ a workaround or fail to carry out repairs.

    3 Legislation and regulations [link id=”9qszf”]

    3.1 Overview [link id=”t90zf”]

    The vast majority of the rules and regulations relating to cybersecurity come from the European Union (EU). European legislation is therefore a dominant force in creating a framework of common standards for cybersecurity. The EU has two main legislative instruments: regulations and directives.


    Regulations, such as the General Data Protection Regulation (GDPR), have direct application in all of the Member States. Directives, such as the Directive on security of network and information systems (NIS Directive), must first be implemented in national legislation by the Member States. The Netherlands implemented the NIS Directive in the Network and Information Systems Security Act.


    The regimes for operational security are the frameworks for cybersecurity:

    • Tunnel safety (Tunnel Act, TSI-SRT: the technical specification for interoperability relating to safety in railway tunnels)
    • Security of the railways (Railways Act, Local Railways Act)
    • Public safety, safe working conditions, environmental safety
    • Social safety
    • etc.


    Although cybersecurity is not (yet) specifically mentioned in these regulations, a tunnel manager cannot ignore that aspect in managing other security risks. In that respect, the regulations lag behind reality. It is the tunnel manager’s responsibility to fill in the gaps.


    There is a specific law for road tunnels. The applicable laws for rail tunnels are the Railways Act and the Local Railways Act. The law defines the organisation, duties and responsibilities of the tunnel manager, the security officer and the competent authority differently for the railways. The report Consequences of the entry into force of the Local Railways Act from the COB/KPT contains a clear diagram of the organisational structure. Despite the differences, the contents of this living document also apply in full to railway tunnels.


    The laws and regulations mentioned in this chapter are all based on the principle of managing risks. In that sense, this living document offers a good starting point for achieving cybersecurity in compliance with the applicable laws and regulations.

    3.2 Dutch tunnel act [link id=”h75w8″]

    The Road Tunnels (Additional Safety Rules) Act (Warvw) and the Road Tunnels (Additional Safety Rules) Regulation (Rarvw) refer to various safety-related roles. For more information, see the brochure Tunnel safety explained published by the Kennisplatform Tunnelveiligheid (KPT). The following is a list of important roles:


    Parties / roles

    Interest / objective

    Tunnel manager, role defined in the Warvw

    Responsible for the safe operation of the tunnel, consequently responsible for security documentation (Tunnel Safety Plan, Building Plan, Security Management Plan), for a technically functioning tunnel system and for a properly trained management and emergency response organisation.

    Tunnel personnel, affiliated to tunnel manager

    Includes operating personnel and road inspectors. Operate and safeguard the tunnel and play an essential role in traffic management and emergency response. Road inspectors support the operating personnel on location (the road).

    Security officer, role defined in the Warvw

    Provides solicited and unsolicited advice regarding the tunnel’s safety for the tunnel manager. Monitors the education, training and practical exercises of the management and emergency response organisation and is involved in the evaluation of incidents. By law, the advice of the security officer must be attached to applications for permits relating to tunnel safety.

    Competent municipal executive, role defined in the Warvw, also known as ‘competent authority’

    Responsible for granting the various permits relating to tunnel safety (environmental permit and tunnel opening permit), on the basis of an assessment against the legal frameworks. Also responsible for enforcing compliance with the legal frameworks, with the ultimate sanction of revoking the necessary permits for the construction and use of the tunnel.

    Fire brigade/emergency services


    The fire brigade or safety region acts as an advisor to the competent authority in assessing permit applications. The emergency response plan is also the result of consultation between the fire brigade, the tunnel manager and other parties engaged in dealing with emergencies (including the other emergency services).

    Supervisory officials

    Public officials who are responsible for supervising compliance with the rules in the Housing Act, the Building Decree, the Warvw and the Rarvw and are appointed by the competent municipal executive. In addition, oversight of the tunnel and road managers and the road users is exercised by the Human Environment and Transport Inspectorate on behalf of the Minister of Infrastructure and Water Management.


    Competent authority

    The Warvw and other legislation use the term ‘competent authority’. This is the administrative body that is empowered to make particular decisions. The Minister of Infrastructure and Water Management is the competent authority for the adoption of an infrastructure planning decision, for example. Competence to adopt a zoning plan for a municipality or a province lies with the municipal council and provincial council, respectively. The municipal executive is the competent authority for the granting of an environmental or tunnel opening permit. The executive of the safety region is responsible for enforcing the security requirements ensuing from the Security Regions Act.


    As already mentioned, the Tunnel Act applies only to road tunnels. Other legislation applies for rail tunnels (see above). The risks to be managed on the basis of the Tunnel Act relate to occurrences in the tunnel. The compulsory risk analysis on the basis of QRA Tunnels only considers incidents in the tunnel caused by traffic, and therefore excludes all externally caused risks (including also cybersecurity-related risks). A tunnel manager who also wishes to manage the cybersecurity risks can make use of the Cybersecurity Implementation Guideline Objects issued by Rijkswaterstaat. As yet, no support is available in the form of a programme for quantitative risk analysis.

    3.3 Information protection [link id=”cspck”]

    3.3.1 Government Information Security Baseline (BIO) [link id=”n6xzf”]

    On 1 January 2020, the Government Information Security Baseline (BIO) came into force for all levels of government. The BIO replaces the Central Government Information Security Baseline (BIR), the Inter-provincial Information Security Baseline (IBI), the Municipalities Information Security Baseline (BIG) and the Water Authorities Information Security Baseline (BIWA). The BIO is based on the international standards ISO/IEC 27001 and ISO/IEC 27002 (see 3.5.1 ISO/IEC 27000 series).

    3.3.2 Network and Information Systems Security Act (Wbni) [link id=”lxlmw”]

    The Network and Information Systems Security Act is the Dutch law implementing the European Network and Information Security Directive (NIS Directive) and came into effect on 9 November 2018. The objective of the law and the directive is to ensure that ‘suppliers of essential services and digital service providers take appropriate technical and organisational measures to manage the risks of cyber incidents’ (article 7) and to guarantee business continuity. The risk of a cyber incident can be reduced by taking proactive, deterrent and preventive measures. The law also requires suppliers to take appropriate measures (both detective and corrective) to minimise the consequences if a cyber incident does occur (article 8).


    Suppliers of essential services and digital service providers

    The scope of the Network and Information Systems Security Act is confined to ‘suppliers of essential services and digital service providers’. The Minister of Justice and Security designates the essential services, a list of which can be found in the Network and Information Systems Security Decree (Bbni). The Ministry informs the relevant organisations that they have been designated as suppliers of essential services.


    The Minister regularly reviews the list of suppliers of essential services. In the original Network and Information Systems Security Decree (effective as of 1 January 2019), the only designations in the transport sector related to the harbourmaster’s division of the Port of Rotterdam Authority and to service providers involved in the handling of air traffic. Since then, the subsectors road and rail transport (main road network) have been designated as vital B. The managers of bridges and tunnels in the national trunk road and railway network, including their operation, are therefore now designated as suppliers of essential services and are subject to the Network and Information Systems Security Act.


    Digital service providers have to determine for themselves whether they fall within the scope of the Wbni. This is the case if:

    • it is an online marketplace, search engine and/or cloud service provider; with
    • 50 or more employees, or a balance sheet total or annual turnover of €10 million or more; and
    • a European head office or a representative in the Netherlands.
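As a sketch, the three criteria can be combined into a single check. The field names, example values and the interpretation of the €10 million threshold below are our own illustration, not wording from the Wbni.

```python
from dataclasses import dataclass

# Illustrative sketch of the Wbni scope test for digital service providers.
# Service types and thresholds follow the three criteria listed above;
# field names are our own, not legal terms.
DIGITAL_SERVICES = {"online marketplace", "search engine", "cloud service"}

@dataclass
class Provider:
    service_type: str
    employees: int
    annual_turnover_eur: float   # or balance sheet total
    eu_head_office_or_nl_rep: bool

def falls_under_wbni(p: Provider) -> bool:
    """True only if all three criteria of the scope test are met."""
    offers_digital_service = p.service_type in DIGITAL_SERVICES
    size_threshold = p.employees >= 50 or p.annual_turnover_eur >= 10_000_000
    return offers_digital_service and size_threshold and p.eu_head_office_or_nl_rep

# Example: a cloud provider with 60 employees and a Dutch representative
provider = Provider("cloud service", 60, 5_000_000, True)
print(falls_under_wbni(provider))  # True: digital service, >=50 employees, NL rep
```

Note that all three criteria must hold at once: a large webshop that is not a marketplace, search engine or cloud service stays out of scope.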


    Duty of care

    The law imposes a duty of care on organisations that fall under its scope: they must take measures to manage the risks to the security of their networks and information systems. In other words, the measures taken must be tailored to the specific risks for the organisation concerned. The law does not prescribe any specific measures, but states that at least the following aspects must be considered in determining the measures to be taken:

    • The security of systems and facilities
    • Incident handling
    • Management of business continuity
    • Supervision, monitoring and testing
    • Compliance with international standards


    Organisations must also take appropriate measures to prevent cyber incidents that impair security and mitigate the consequences of any such cyber incidents as far as possible.


    Obligation to notify

    The Act also contains an obligation for organisations to notify a cyber incident under certain circumstances. An organisation covered by the law must notify any incident that has had, or could have, a significant impact on the continuity of the service it provides.


    Factors that determine whether an incident could have a significant impact are:

    • The number of users affected by the disruption of the service;
    • The duration of the incident;
    • The size of the geographical area affected by the incident.
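These factors could be weighed as in the following sketch. The Wbni does not prescribe numeric thresholds; every threshold in this example is hypothetical.

```python
# Illustrative weighing of the three impact factors named above. The law does
# not prescribe numeric thresholds; those used here are purely hypothetical.
def could_have_significant_impact(users_affected: int,
                                  duration_hours: float,
                                  area_km2: float) -> bool:
    """True if any (hypothetical) threshold for the three factors is exceeded."""
    return (users_affected >= 10_000
            or duration_hours >= 4
            or area_km2 >= 25)

# A two-hour outage affecting 500 users in a 1 km2 area stays below every threshold
print(could_have_significant_impact(500, 2, 1))       # False
# A closure affecting 50,000 road users triggers the notification check
print(could_have_significant_impact(50_000, 1, 1))    # True
```

In practice an organisation would agree such thresholds with the supervisory authority rather than choose them unilaterally.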


    If an incident has serious consequences for the continuity of a service but is not subject to the obligation to notify, the service provider in question may report the incident. A voluntary notification need not be processed, but can be passed on to a crisis team.

    3.3.3 General Data Protection Regulation (GDPR) [link id=”x4xfz″]

    In addition to availability and safety, cybersecurity also serves to protect privacy. The General Data Protection Regulation (GDPR) relates to the processing of the personal data of natural persons and lays down rules that processing of this kind must comply with.


    Definition of personal data

    Article 4 of the GDPR contains a broad definition of ‘personal data’. Briefly, it states that personal data are any data which are related to an identifiable natural person. These data can include car registration numbers, camera images, information from log files and, with the emergence of smart mobility, data that a vehicle shares with the surrounding infrastructure. In this living document, for practical purposes personal data include e.g. camera images of persons and licence plates, audio recordings and log-in data of employees.


    The context of the data can also be relevant. Linking anonymous data from different sources can generate personal data because the combination of data is so unique that a natural person can be identified from them.


    Definition of processing

    The GDPR also contains a broad definition of ‘processing’. The definition includes practically every action relating to personal data, in any case including collection, recording, organisation, structuring, storage, adaptation or alteration, retrieval, consultation, use, transmission, dissemination, otherwise making available, alignment or combination, restriction, erasure or destruction of data.


    Purpose and grounds

    The GDPR states that personal data may only be processed for a legitimate purpose and with a legal ground; processing without both is not permitted. For tunnel managers, the legitimate purpose is to guarantee the safe passage of traffic through the tunnel. The legal ground lies in the road manager’s statutory duties.


    ‘Data minimisation’ is another important criterion in the GDPR. It means that the processing of personal data must be confined to what is essential for its purpose, but also that personal data must not be retained for any longer than necessary.
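As an illustration of data minimisation in practice, the sketch below purges records once the retention period for their purpose has expired. The categories and retention periods are invented examples, not legal norms.

```python
from datetime import datetime, timedelta, timezone

# Sketch of data minimisation: records older than the retention period for
# their category are purged. The periods below are examples, not legal norms.
RETENTION = {
    "camera_images": timedelta(days=7),
    "login_data": timedelta(days=90),
}

def purge(records, now=None):
    """Keep only records still within the retention period for their category."""
    now = now or datetime.now(timezone.utc)
    return [r for r in records
            if now - r["created"] <= RETENTION[r["category"]]]

now = datetime(2024, 1, 31, tzinfo=timezone.utc)
records = [
    {"category": "camera_images", "created": datetime(2024, 1, 30, tzinfo=timezone.utc)},
    {"category": "camera_images", "created": datetime(2024, 1, 1, tzinfo=timezone.utc)},
]
print(len(purge(records, now)))  # 1: the 30-day-old images are deleted
```

A real system would also log each deletion, since the purge itself is a processing operation under the GDPR.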


    Security of personal data

    The GDPR obliges the data processor to take appropriate technical and organisational measures to ensure an appropriate level of security of personal data. What those measures are depends on the current state of technology: what is an appropriate measure now might no longer be appropriate in a year’s time. Furthermore, the appropriate security level also depends on the risks associated with the data processing, which have to be analysed in light of the risks for the natural person whose data is processed.


    Article 32 of the GDPR further mentions a number of security measures:

    • The pseudonymisation and encryption of personal data;
    • The ability to ensure the ongoing confidentiality, integrity and availability of personal data;
    • The ability to restore the availability and access to personal data in the event of a physical or technical incident;
    • A process for regularly testing the effectiveness of the technical and organisational measures;
    • A process for deleting personal data at the request of the natural person concerned.
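The first measure, pseudonymisation, can be illustrated with a keyed hash. The sketch below maps a licence plate to a stable pseudonym so that log entries remain linkable while the plate itself cannot be recovered without the key; the key shown is a placeholder.

```python
import hashlib
import hmac

# Sketch of pseudonymising a licence plate with a keyed hash (HMAC-SHA256).
# The same plate always maps to the same pseudonym, so logs stay linkable,
# but the plate cannot be recovered without the key.
SECRET_KEY = b"store-this-key-in-a-key-vault"  # placeholder, not a real key

def pseudonymise(licence_plate: str) -> str:
    digest = hmac.new(SECRET_KEY, licence_plate.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]  # truncated for readability in logs

p1 = pseudonymise("AB-123-C")
p2 = pseudonymise("AB-123-C")
print(p1 == p2)                       # True: deterministic, so still linkable
print(p1 != pseudonymise("XY-9-ZZ"))  # True: different plates differ
```

Note that pseudonymised data remain personal data under the GDPR as long as the key exists; only proper anonymisation takes data outside the regulation's scope.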


    Duty to notify data breaches

    The GDPR contains a duty to report a data breach. A data breach occurs in the following cases:

    • Breach of confidentiality: access by an unauthorised person or unintentional disclosure of personal data;
    • Breach of integrity: when there is an unauthorised or accidental alteration of personal data;
    • Breach of availability: when there is an unauthorised or unintentional loss of access to, or destruction of, personal data.


    Under the above definition, the accidental deletion of camera images is a data breach, for example.


    The data protection officer (FG)

    In certain situations, organisations are obliged to appoint a data protection officer, a person within the organisation with responsibility for monitoring the application of and compliance with the GDPR. The data protection officer should therefore be involved in every process in which personal data might be used. Government agencies and public organisations are always obliged to appoint a data protection officer, regardless of the types of data they process. Organisations must register their data protection officer with the Dutch Data Protection Authority.



    Whenever a data breach occurs, the data controller must record this in the data breach register. The data controller may also be required to report the data breach to the Dutch Data Protection Authority. However, this is only necessary if the data breach results in a risk to the rights and freedoms of the data subjects. It is therefore important to inform the organisation’s data protection officer immediately if a data breach is discovered (see box).

    3.4 Other regulations [link id=”mkpbn”]

    In addition to the GDPR and the Network and Information Systems Security Act, there are other laws and regulations that affect cybersecurity in relation to tunnels, including regulations that apply specifically to the national government.

    3.5 International standards [link id=”v4sw7″]

    There are international and national standards for cybersecurity. For information technology (IT), the essential standard is the ISO 27000 series; the IEC 62443 series plays the same role for operational technology (OT).

    3.5.1 ISO/IEC 27000 series [link id=”98n7n”]

    The International Organization for Standardization (ISO) has drawn up a series of standards relating to information security. The most important of these are:

    • ISO/IEC 27001 Information Security Management This standard describes a management system for safeguarding the security of information in an organisation; the so-called information security management system, ISMS. Organisations can have their ISMS certified according to this standard.
    • ISO/IEC 27002 Code of Practice for Information Security Controls This standard describes a collection of measures that can be deployed for the ISMS. The measures actually chosen will depend on the outcome of an information risk analysis.
    • ISO/IEC 27003 Guidance This standard provides guidelines and tips for the implementation of an ISMS.
    • ISO/IEC 27004 Monitoring, Measurement, Analysis and Evaluation This standard describes the measures and techniques for evaluating the effectiveness of the measures taken for the ISMS.
    • ISO/IEC 27005 Information Security Risk Management This standard provides guidelines for ISMS-related risk management.
    • ISO/IEC 27035 Incident Management This standard provides guidelines for planning and preparing for incident response/management.
    • ISO/IEC 27701 Extension to ISO/IEC 27001 and ISO/IEC 27002 for privacy information management — Requirements and guidelines This standard specifies requirements for and provides assistance in extending an ISMS for the protection of privacy-sensitive information. Compliance with this standard is essential but not sufficient for satisfying the requirements of the GDPR.

    3.5.2 IEC 62443 [link id=”rbrvg”]

    The IEC 62443 series of standards consists of fourteen parts, eight of which have been published to date. The figure below provides an overview. The standard employs the abbreviation IACS (Industrial automation and control systems) as opposed to the term operational technology (OT).


    Figure 3.1: Overview of the IEC 62443 series of standards. (Source: IEC)


    The series of standards comprises four ‘layers’ aimed at different target groups:

      1. IEC 62443-1-x General – Four parts with general information applicable to all target groups.
        • IEC TS 62443-1-1 Concepts and Models (published) – Describes the concepts and models in the standard.
        • IEC TR 62443-1-2 Master Glossary of terms and abbreviations – This technical report will contain a description of all terms and abbreviations used in the series of standards.
        • IEC 62443-1-3 System Security Conformance metrics – This part of the standard will describe a series of quantitative metrics derived from the fundamental requirements, system requirements and associated requirements.
        • IEC TR 62443-1-4 IACS security life cycle and use cases – This technical report will provide a more detailed description of the life cycle for IACS security. In addition, different use cases will be described to illustrate the various applications.
      2. IEC 62443-2-x Policies & Procedures – This series of standards is aimed at the owners of assets and describes a cybersecurity management system (CSMS) and everything it needs.
        • IEC 62443-2-1 Requirements for an IACS security Management System (published) – This part of the standard describes what is necessary for defining and implementing a CSMS for OT. This part of the standard relates to ISO 27001. It is aimed at asset owners and solution providers.
        • IEC 62443-2-2 IACS Security Program Rating – This standard will describe a method of measuring the quality of the security programme, based on the possibilities of the technical measures combined with the maturity of an organisation for implementing procedural measures in practice.
        • IEC TR 62443-2-3 Patch management in the IACS environment (published) – This technical report describes how patch management can be used in an OT environment. It will be rewritten as a standard. It is aimed at asset owners and solution providers.
        • IEC 62443-2-4 Security program requirements for IACS service providers (published) – This part of the standard describes the requirements that OT service providers must meet. It is aimed at solution providers such as manufacturers, system integrators and resellers.
        • IEC TR 62443-2-5 Implementation guidance for an IACS security Management System – This part of the standard describes what is necessary to have an effective CSMS, and relates to ISO 27003.
      3. IEC 62443-3-x System – This series of standards is aimed at system integrators and describes the techniques that have to be used in order to achieve the correct level of security.
        • IEC TR 62443-3-1 Security technologies for IACS (published) – This technical report describes the application of various security techniques in an OT environment.
        • IEC 62443-3-2 Security risk assessment and system design – This part of the standard describes how to deal with security risk analysis and system architecture. It introduces the terms ‘zones’ and ‘conduits’. It is aimed at asset owners and solution providers.
        • IEC 62443-3-3 System security requirements and security levels (published) – In this part of the standard, the foundational requirements (FR) and security assurance levels (SL) are described. For each FR, one or more security requirements (SR) are defined, with an indication of which SR applies to which SL. It is aimed at asset owners and solution providers.
      4. IEC 62443-4-x Component – This series of standards is aimed at suppliers of components used to assemble a system. Subjects are requirements on the development of products and the security requirements that have to be imposed on components.
        • IEC 62443-4-1 Product development (published) – This part of the standard describes the derived requirements applicable to the development of products. It is aimed at solution providers such as manufacturers, system integrators and resellers.
        • IEC 62443-4-2 Technical security requirements for IACS components (published) – This part of the standard describes the collection of derived requirements that translate the SRs into subsystems and components of the system under consideration. It is aimed at solution providers such as manufacturers, system integrators and resellers.
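The ‘zones and conduits’ concept introduced in IEC 62443-3-2 can be illustrated with a minimal data model: assets are grouped into zones with a target security level, and traffic between zones is only allowed through explicitly defined conduits. The zone names, assets and levels below are invented for illustration and are not taken from the standard.

```python
from dataclasses import dataclass, field

# Minimal sketch of the IEC 62443-3-2 'zones and conduits' model.
# Zone names, assets and security levels are illustrative only.
@dataclass
class Zone:
    name: str
    target_sl: int                 # target security level (SL-T), 1-4
    assets: list = field(default_factory=list)

@dataclass
class Conduit:
    name: str
    zone_a: str
    zone_b: str

zones = {
    "office_it": Zone("office_it", target_sl=2, assets=["scada-hmi-viewer"]),
    "tunnel_ot": Zone("tunnel_ot", target_sl=3,
                      assets=["plc-ventilation", "plc-lighting"]),
}
conduits = [Conduit("dmz-firewall", "office_it", "tunnel_ot")]

def communication_allowed(a: str, b: str) -> bool:
    """Traffic between two zones requires an explicitly defined conduit."""
    return any({c.zone_a, c.zone_b} == {a, b} for c in conduits)

print(communication_allowed("office_it", "tunnel_ot"))  # True: via dmz-firewall
```

In a real design each conduit would additionally carry its own security requirements (for example, the firewall rules and protocols permitted on it).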

    3.6 Cybersecurity and tunnel safety [link id=”k0c48″]

    A tunnel which (on paper) appears safe can in fact be unsafe without the responsible tunnel manager and its safety officer being aware of the fact. This is because cybersecurity is not yet part of the assessment framework for tunnel safety as intended in tunnel law. In a separate memo, available in Dutch for download from the COB knowledge base, the Cybersecurity working group of the COB provides a further explanation of the relationship between cybersecurity and safety in general, and tunnel safety in particular. >> Publication Cybersecurity and tunnel safety on the knowledge base (only in Dutch).

    4 Cybersecurity management [link id=”rg22h”]


    The asset owner is ultimately responsible for safety in the tunnel, and cybersecurity is one of the most important parts of this. The asset owner is therefore also responsible for cybersecurity.


    By embedding cybersecurity tasks, responsibilities and authorities in the organisation, cybersecurity becomes part of the organisation’s standard processes. This serves three purposes:

        1. Cybersecurity risks are identified.
        2. Measures are formulated and entrusted to the right people, with professional knowledge, so that the measures are correctly implemented.
        3. The maintenance and evaluation of measures are verified against legislation and changing threats.

    4.1 Cybersecurity management system (CSMS) [link id=”pdc8t”]

    A cybersecurity management system (CSMS) can be a useful tool for managing and safeguarding cybersecurity for the OT environment in a structured and verifiable manner within an OT organisation. A CSMS consists of a set of plans, reports, measures, instructions, policies, procedures and evaluation processes that help the organisation pay attention to cybersecurity without losing the overview. Compliance with, and the effectiveness of, the CSMS is verified periodically.


    Infrastructure and tunnels are characterised by the fact that there are one or more suppliers, a users’ organisation and a control or management organisation, depending on the type of contract (see section below). The CSMS can then be distributed among these parties, but must always function as a whole.


    Comment: For organisations with a company-wide Information Security Management System (ISMS), it is wiser to follow that system. In organisations where automation consists mainly of OT systems, a CSMS in conformity with IEC 62443 should be considered.
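The periodic verification that a CSMS involves can be sketched as a simple overdue-audit check. The controls and the one-year audit interval below are illustrative assumptions, not requirements from IEC 62443.

```python
from datetime import date, timedelta

# Sketch of the periodic-verification element of a CSMS: each control records
# when it was last audited, and overdue controls are flagged. The controls
# and the 365-day interval are examples, not normative requirements.
AUDIT_INTERVAL = timedelta(days=365)

controls = {
    "patch management procedure": date(2023, 3, 1),
    "backup and restore test": date(2024, 1, 10),
    "access rights review": date(2022, 11, 5),
}

def overdue(today: date):
    """Return, alphabetically, the controls whose last audit is too old."""
    return sorted(name for name, last in controls.items()
                  if today - last > AUDIT_INTERVAL)

print(overdue(date(2024, 6, 1)))
# ['access rights review', 'patch management procedure']
```

Distributing the CSMS over client and contractor (see the next section) then amounts to assigning each control an owner while keeping one shared overview.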

    4.2 Contract forms [link id=”l6twn”]

    The contract form influences the organisation of cybersecurity. The most commonly used contract forms in infrastructure are:

        1. Design, build, finance, maintain (DBFM)
        2. Design, build, finance, maintain, operate (DBFMO)
        3. Design and construct (D&C)
        4. Engineering and construct (E&C)
        5. Performance contract (PC)


    These contract forms differ in the extent to which the design, construction and maintenance phases are included in the contract. For cybersecurity, this determines the influence that the contractor has on the implementation of the cybersecurity measures. In the case of D&C, and even more so in the case of E&C, the client determines the organisational measures and (part of) the technical measures. The primary coordination of the cybersecurity activities therefore lies with the client, while the execution of the specifically mentioned activities is entrusted to the contractor. In the case of DBFM and DBFMO, the entire organisation is arranged by the contractor and the client has a verifying role.


    Performance contracts are aimed at managing an object or area of land and do not include any design or construction work. In this contract, it is up to the contractor to preserve at least the existing security measures.

    4.3 Organisation form during the life cycle [link id=”72g64″]

    In the various phases of the life cycle of an object, different forms of organisation apply. For example, for product development we have product groups, for building projects we have a design and construction process that conforms to the V-model from systems engineering (see ISO 15288), and for maintenance (including the demolition phase), the PDCA cycle. The figure below illustrates these different forms. The form of organisation has direct consequences for the tasks of cybersecurity officers.


    Figure 4.1: Organisation forms during the life cycle of an object.


    Cybersecurity is an integral part of all these activities and work. It is important to recognise the transition between the phases as a risk, and to respond in good time. For example, it is to be recommended that the manager be involved early in the development and design phase.


    A relationship diagram (see below) has been drawn up to further clarify the relationship with cybersecurity.


    Figure 4.2: Cybersecurity relationship diagram (in Dutch).


    The table below provides an explanation of the tasks and responsibilities associated with the relationship diagram.



    Tasks and responsibilities

    Tunnel manager

    Within the context of his/her legal duties, final responsibility for the cybersecurity of his/her tunnel(s).

    Competent authority

    Within the context of his/her legal duties, responsible for monitoring tunnel safety and therefore also the cybersecurity aspects and their enforcement.

    Security officer

    Within the context of his/her legal duties, the security officer provides the tunnel manager with both solicited and unsolicited advice concerning tunnel safety and therefore also the cybersecurity aspects and their enforcement.


    Asset management

    Daily management during the operational phase.

    Drawing up requirements for the management and maintenance phase.

    Monitoring manageability.

    Performing maintenance on the basis of cybersecurity procedures.

    Performing periodic tests/audits/risk analyses.

    Tunnel staff

    Notifying, recording and reporting on cyber incidents.

    Project management/organisation

    Responsible to the tunnel manager and client for the cybersecurity aspects. (Commitment by the organisation to cybersecurity policy is essential!)

    Process and quality management

    Document control.

    Coordinating/initiating cybersecurity audits.

    Coordinating and recording risk analyses.

    Security management

    – Incident manager

    – Security architect

    – Cybersecurity coordinator

    – Cybersecurity auditor

    Notifying, recording and reporting on cyber incidents.

    Carrying out inspections into compliance with integrated security.

    System/application management

    Monitoring accounts and access rights.

    Setting up workplace management and service desk.

    Releasing staff IT resources.

    Application management.

    Design team Civil/VTTI

    Cybersecurity engineer

    Network specialist

    Designing physical access and building installations and OT systems (hardware and software).

    Contract management

    Responsible for the contractual aspects and transferring cybersecurity measures to subcontractors.

    Realisation teams


    Procurement and construction of systems.

    Compliance with cybersecurity measures by suppliers. Both in technical and organisational terms, a secure delivery must be guaranteed.

    Secure transfer of design data (e.g. software).

    Transferring cybersecurity measures to subcontractors.

    IBS/test team

    Recording and monitoring physical and logical access to OT systems.

    Checking that staff comply with cybersecurity measures.

    Performing patching, hardening, virus checks and backups, filling the CMDB.

    Reading out local log files.


    Project launch/confidentiality obligation.

    Recording and maintaining personal data.

    Enforcing cybersecurity screening as a prerequisite for hiring staff.


    The way in which the organisation is structured depends on the size and complexity of the scope (new build, renovation, maintenance and demolition). A number of organisation models are described in IPM (Rijkswaterstaat) and ISO 55000 (asset management).


    4.4 Basic organisation for cybersecurity [link id=”rpxwd”]


    The model below shows which cybersecurity roles can be distinguished in each phase of the life cycle. Depending on the scope of the work, several roles may be assigned to the cybersecurity coordinator in a specific phase.



    Phases:

    • Planning phase and tender
    • Construction phase – Design
    • Construction phase – Construction
    • Operational phase – Management and maintenance
    • Operational phase – Operation
    • Operational phase – Renovation


    Roles:

    • Incident manager
    • Cybersecurity coordinator/advisor
    • Cybersecurity auditor
    • Security engineers/specialists
    4.5 Tips for smooth collaboration [link id=”bgpbn”]

    • Organise commitment by the management
    • Combine roles or specifically keep them separate?
    • Judge the workload at its true value
    • Organise communication
    • Implement escalation models
    • Collaborate throughout the entire chain


    4.6 Case study: Responsibility assignment matrix (RAM) for a road tunnel [link id=”qzdfm”]

    The RAM shown below (tasks, authorities and responsibilities) gives an example of the division of roles between a client and contractors in a tunnel project. It attempts to provide a practical structure based on shared responsibility between contractor and client, with a view to the result, in which the knowledge and skills of both parties are put to optimum use.


    Three preliminary remarks:

        1. For each contract/organisation/object, the interpretation will be different. The main purpose of the matrix is to initiate discussion of the situation with all the parties involved.
        2. The matrix attempts to strike a balance between conscious responsibility (for example, the tunnel manager being responsible for much of the work) on the one hand, and responsibility for the actual execution of the tasks, on the other.
        3. Cyber incidents soon transcend the context of the contract. There will have to be clear agreements at management level on how to deal with the impact on objects beyond the scope, and with the consequences of external incidents.


    Figure 4.3: Example (Dutch) of a RAM matrix based on the DBFM contract form. The matrix is also available for download in a larger format.


    The client provides the appropriate frameworks in the tender. The role of the client subsequently becomes that of an ‘involved assessor’ and ‘advisor with accompanying responsibilities’. In the design phase, the contractor draws up the plans and instructions in collaboration with all the specialists. An important role is played by process management and the IT design of the contractor’s working environment. During the construction phase, the technical measures are implemented and validated. Status reports, the annual evaluation, audits and updates of the risk analyses are produced in the operational phase.

    4.7 Cybersecurity risk management [link id=”pf1h2″]

    The purpose of cybersecurity risk management is to manage the risks of security incidents over the entire life cycle of an OT system. It starts with the cybersecurity risk analysis, after the cybersecurity objectives have been derived from the project’s operational objectives. The analysis is part of the design process that has to be carried out for each phase of the life cycle of an OT system – from new build, maintenance and renovation through to demolition (what should be done with old data, connections, organisation, people, etc.?). The purpose of the analysis is to identify measures that correspond to a realistic assessment of the threat and/or the vulnerability of the (existing) system and organisation. These cybersecurity measures should be practical and affordable (efficient and effective), help to achieve the objectives of the system and facilitate the planned operation.
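Such a risk analysis is often supported by a simple qualitative risk matrix, as sketched below. The threat scenarios, the 1–5 scales and the threshold are illustrative assumptions, not values prescribed by any law or standard.

```python
# Sketch of a qualitative risk matrix for a cybersecurity risk analysis:
# each threat scenario gets a likelihood and an impact score (1-5), and
# measures are prioritised for risks above a chosen threshold. Scenarios
# and threshold are illustrative only.
scenarios = [
    {"threat": "ransomware on operator workstation", "likelihood": 4, "impact": 5},
    {"threat": "unauthorised remote access to PLC",   "likelihood": 2, "impact": 5},
    {"threat": "defaced public information display",  "likelihood": 3, "impact": 1},
]

RISK_THRESHOLD = 8  # scores above this require mitigating measures

def prioritise(scenarios):
    """Score each scenario (likelihood x impact) and sort highest risk first."""
    scored = [dict(s, risk=s["likelihood"] * s["impact"]) for s in scenarios]
    return sorted(scored, key=lambda s: s["risk"], reverse=True)

for s in prioritise(scenarios):
    flag = "MITIGATE" if s["risk"] > RISK_THRESHOLD else "accept"
    print(f"{s['risk']:>2}  {flag:8}  {s['threat']}")
```

The scores that remain below the threshold after measures are taken correspond to the residual risks that the analysis must make explicit.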


    The design choices are the result of a balancing of different, often conflicting, aspects, such as the technical possibilities and limitations, operational usefulness, organisational and financial constraints and human aspects. Both preventive and corrective measures have to be formulated with a view to reducing the risk of the occurrence of a cyber incident and – if an incident does occur – mitigating its impact on the system and the operation. The cybersecurity risk analysis highlights the residual risks and the recovery measures that will be required for the system and the operation.


    Cybersecurity risk management should be embedded in the asset management of a specific system (object or asset) and as such should be an integrated element of asset management. ISO 55000 is a useful guideline for asset management and covers an asset’s entire life cycle.

    4.8 Quick scan for ‘cyber maturity’ [link id=”83nqv”]

    Cybersecurity remains relatively unknown territory for many asset managers and asset owners. To initiate discussion with these actors, the COB has drawn up the Quick scan cybersecurity tunnel (in Dutch). This quick scan uses 21 questions to provide a broad insight into the strengths and weaknesses of the organisation or the object, in respect of cybersecurity. The quick scan can be used for a specific object or (sub) organisation within the infrastructure domain. By working together to complete the questionnaire, a first clear picture emerges of important points for attention from the point of view of cybersecurity, and of the score achieved by the object in question in terms of cybersecurity. Because the quick scan offers a broad, overall insight, its primary objective is to serve as the starting point for further discussion to zoom in more specifically on individual points. In completing the quick scan, it is recommended that you seek the advice of experts in the field of cybersecurity from within your organisation or from outside.

    5 Logging and monitoring [link id=”1grcg”]

    5.1 Why logging and monitoring? [link id=”qwdpr”]

    No matter what measures are taken, cyber incidents cannot be prevented entirely. Furthermore, a certain degree of residual risk is usually accepted. The purpose of monitoring technical systems is to constantly monitor the status of digital security. This makes it possible to identify new vulnerabilities and detect and deal with cyber incidents.


    Once ‘normal’ behaviour has been established, the purpose of monitoring is to detect threats of an incident by recognising abnormal behaviour, to detect any cybersecurity-related events and to collect evidence. The Government Information Security Baseline (BIO) obliges organisations to take measures to achieve these objectives. The responsibility for compliance with the BIO, including security monitoring, lies with the managing party.


    Incident detection

    According to NCSC reports, the worldwide average number of days between the moment a breach takes place and the moment it is discovered was:

        • in 2014: 205 days
        • in 2017: 101 days
        • in 2018: 78 days
        • in 2019: 56 days

    In other words, a clearly downward trend but still not fast enough.
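
As a small illustration (the calculation is ours, not taken from the NCSC reports), the decline in these detection times can be computed directly from the figures listed above:

```python
# Worldwide average breach-detection times in days, as listed above.
detection_days = {2014: 205, 2017: 101, 2018: 78, 2019: 56}

years = sorted(detection_days)
for prev, cur in zip(years, years[1:]):
    drop = detection_days[prev] - detection_days[cur]
    pct = 100 * drop / detection_days[prev]
    print(f"{prev} -> {cur}: -{drop} days ({pct:.0f}% reduction)")
```

The percentages show that although the absolute numbers keep falling, the relative improvement per year is modest compared with an attack that takes only hours.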


    Whereas the Cybersecurity Assessment Netherlands 2019 shows that in 2018, on average it took 78 days to detect a system breach, according to the Cyber Security Assessment Netherlands 2020, the average detection time had fallen to 56 days. Even this reduced detection time remains in stark contrast with the average time the attacker requires; the duration of an attack can be measured in hours. It is and remains essential to take adequate measures to detect and prevent breaches and to properly monitor and test the system and the processes. Factors to consider are:

        • How good is your Security Operations Centre (SOC)?
        • How quickly do you install patches?
        • How quickly do you respond to an incident?
        • How quickly can you process information about a threat?
        • How do you deal with a crisis?


    Assessing employees

    Social engineering is a form of cybercrime. It involves the exploitation of human traits such as curiosity, trust and greed to secure information. A worthwhile measure is to assess employees’ ‘susceptibility’ to social engineering. One way of testing that is with phishing as a service (PhaaS), where phishing is simulated by spreading fake phishing messages throughout the organisation in a controlled manner.


    A coordinated and documented escalation model will probably promote the earlier notification and escalation of abnormal behaviour and help to ensure that an incident can be resolved more quickly and effectively.


    The National Cyber Security Centre has published Guidelines for the implementation of detection solutions to assist with the design of monitoring systems.

    5.2 Implementation of logging and monitoring [link id=”3llwq”]

    The following sections outline some of the issues that need to be addressed when implementing a logging and monitoring system. In this context, the log data are created on a local device (computer, switch, etc.). They should be stored in a central log server. The local and central databases containing log data should then also be protected against unauthorised alteration or deletion.
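
One common way to make stored log data tamper-evident, sketched below for a simple in-memory store, is to chain each record to the hash of its predecessor, so that any alteration or deletion breaks the chain on verification. The function and field names here are illustrative, not taken from any specific logging product:

```python
import hashlib
import json

def append_record(chain: list, message: str) -> None:
    """Append a log record whose hash covers the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = {"msg": message, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
    chain.append({"msg": message, "prev": prev_hash, "hash": digest})

def verify_chain(chain: list) -> bool:
    """Recompute every hash; returns False if any record was altered or removed."""
    prev_hash = "0" * 64
    for record in chain:
        payload = {"msg": record["msg"], "prev": prev_hash}
        expected = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
        if record["prev"] != prev_hash or record["hash"] != expected:
            return False
        prev_hash = record["hash"]
    return True

log: list = []
append_record(log, "operator login on SCADA workstation")
append_record(log, "PLC configuration changed")
print(verify_chain(log))            # True for an untouched chain
log[0]["msg"] = "nothing happened"  # tampering with an old record...
print(verify_chain(log))            # ...is detected: False
```

In a real deployment the same principle is usually provided by the central log server or SIEM itself (write-once storage, signed log segments), rather than implemented by hand.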


    OT system logging (information processing environment)

    Logging activities of administrators and operators

    Logging of network equipment

    SOC and SIEM

    Clock synchronisation


    6 Risk-based approach [link id=”4svmz”]


    There are several ways of developing measures for cybersecurity. These are described in the first section of this chapter. In the remaining sections, the risk-based approach is discussed in greater depth. This approach enables stakeholders to evaluate and update the digital resilience of an object or project, by optimising processes in such a way that the appropriate measures are taken to mitigate cyber incidents and/or their impact and consequences.

    6.1 Approach for determining measures [link id=”x29w1″]

    A rule-based approach comprises a set of measures adopted in advance, for a specific threat. Examples of a rule-based approach are the BIO and the ISO 27001 and ISO 27002 standards. Together, these standards contain approximately 340 (generic) technical and organisational measures that apply in the case of a specific incident or threat. The advantage of an approach of this kind is that it makes it easy to formulate criteria for cybersecurity. The drawback is that neither the client nor the contractor is clear in advance about the desired level of security for the specific operation, and that the approach can cause confusion and disagreement about the estimated threats, the measures to be taken and the residual risks. This one-size-fits-all approach consequently often leads to a catch-all set of (unnecessary and costly) measures. Clients therefore often add the rule ‘comply or explain’ to the requirements in order to introduce some degree of scalability.


    A risk-based approach effectively allows risk management to be tailored to specific situations and ensures that the reaction is precisely sufficient (neither too much nor too little) for the planned operation, the situational threat, the operational objective (the measures are effective) and the accepted residual risks. In this way, the operation can be protected at minimum cost (the measures are efficient). This is the ideal approach for new and one-of-a-kind systems.


    Another feature of this approach is that the client determines the objectives and the accepted residual risks (or incidents) of the planned operation in advance, as well as the objectives and requirements in terms of cybersecurity, for the contractor. This approach calls for active management and decisions on why particular measures have been taken to address specific risks. This list of risks and measures must be reviewed regularly to determine whether the risks have changed and/or which measures need to be revised.


    A disadvantage of this approach is that the precise security costs are difficult to predict. With this approach, therefore, mutual transparency is essential and the client and the contractor must both have the necessary maturity level to jointly manage the uncertainty. The client must also reserve an adequate budget.


    We refer to a classification-based approach if the measures can be applied to all systems subject to the same security classes (for example highly critical, critical and less critical operations). This is possible in an environment with many similar systems. On the basis of a generic risk analysis, the client assigns the various systems to security classes, and adopts a (fixed) set of measures for each security class. Deviations from the classification are possible on the basis of ‘comply or explain’. This approach is in fact a combination of the risk and rule-based approaches. CSIR is an example of a classification-based approach.


    An advantage of the classification-based approach is that the client itself determines the desired level of security, and the variety of measures is more limited. A classification-based approach can have the same drawbacks as a risk-based approach. The maturity of both parties is also an important factor in this approach, albeit to a lesser extent.


    In practice, the different approaches are often combined.

    6.2 Basic principles [link id=”9twhh”]

    Risk types

    It is important to realise that there are two types of risks. Firstly, the product represents a risk, for example through the automated operation or monitoring and management of a tunnel, bridge or lock. Secondly, the project or implementation can be the source of risks. This chapter first discusses the product risks, followed by the project risks.


    Layered security

    The basis for an inventory of the situation and the measures to be defined is the concept of integrated, layered security, or ‘defence in depth’, see the figure below. Each layer of security represents part of the whole. As the building blocks differ for each layer, the measures to be taken are also different. The diversity of measures to be taken ensures that a weakness in one layer can be compensated for by a measure in a different layer. Creating sufficient security for each layer gives rise to a security concept of cumulative protection for the core. The benefit of this accumulation of layers of security is that the risk of the system as a whole being disrupted is reduced, because an array of security features have to fail before the system as a whole fails.


    Figure 6.1: Diagrammatic depiction of layers of security



    Besides a risk analysis, a cost-benefit analysis and an analysis of the operational objectives are also carried out since it is not automatically necessary to hedge every risk in advance: if the cost of the measures to limit a risk is higher than the potential damage or personal injury in the light of the prescribed operational objectives (security, availability and privacy), the risk may be accepted as a residual risk.


    It is important that the complete or partial hedging or acceptance of a risk corresponds with the organisation’s risk appetite and that the organisation is capable of absorbing the effects of operational incidents. A system of risk management or risk control with permanent or periodic risk analyses is an essential element of a good cybersecurity policy. See also 4 Cybersecurity management.


    Risk strategy

    The risk strategy employed by an organisation relates closely to the question of what level of risk is acceptable. As a rule, the following strategies are identified:

        • Avoid: by not performing certain activities, you avoid the risk resulting from those activities. For example: by not using USB sticks, the risk of a virus contaminating the installation/system from a USB stick is prevented.
        • Mitigate: by taking measures, the risk level is reduced. These may be proactive measures that reduce the probability of occurrence, reactive measures that reduce the consequences of the occurrence of the risk, or a combination of the two.
        • Accept: if the risk level is lower than what is considered acceptable, or if a measure has a greater negative impact than the occurrence of the risk itself, the organisation can choose to accept the risk. The acceptance of risks must always be confirmed by senior management.
        • Transfer: it is also possible to have a situation taken over by a third party (outsourcing) if a threat arises, or to insure the risk.
        • Exploit: if the consequence of a risk has a positive effect, you can make use of that effect. This strategy will only rarely be chosen for cybersecurity-related risks.
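
The choice between these strategies can be sketched as a simple decision rule. The thresholds, parameter names and the order of the checks below are illustrative assumptions (and in practice any acceptance must still be confirmed by senior management):

```python
def choose_strategy(risk_level: int, risk_appetite: int,
                    mitigation_cost: int, expected_damage: int,
                    can_outsource: bool = False) -> str:
    """Sketch of a risk-strategy choice; 'avoid' and 'exploit' are left out
    because they depend on the activity itself, not on these numbers."""
    if risk_level <= risk_appetite:
        return "accept"      # within the organisation's risk appetite
    if mitigation_cost > expected_damage:
        return "accept"      # the measure would cost more than the risk itself
    if can_outsource:
        return "transfer"    # hand over to a third party or insure the risk
    return "mitigate"        # reduce the probability and/or the impact

print(choose_strategy(risk_level=4, risk_appetite=8,
                      mitigation_cost=10, expected_damage=100))   # accept
print(choose_strategy(risk_level=20, risk_appetite=8,
                      mitigation_cost=10, expected_damage=100))   # mitigate
```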

    6.3 Overview [link id=”x9bq9″]

    In terms of process, the risk-based approach consists of six steps, as indicated below:

    Figure 6.2: Risk-based approach in six steps.


    The various steps have the following functions:

        1. Inventory: recording the current and desired situation, the operational objectives and the accompanying threats (see 6.4).
        2. Risk analysis: determining how the risk can be managed or returned to an acceptable level.
        3. Define measures: determining a strategy for each risk and, if necessary, identifying measures and determining residual risks following the implementation of the measure.
        4. Selecting measures: determining which measure will be implemented for each risk.
        5. Implementing measures: implementing the selected measures.
        6. Safeguarding the implemented measures.

    6.4 Threat inventory [link id=”c6vkh”]

    Both in an existing situation and for new builds, it is essential to identify the existing (‘as is’) and the desired (‘to be’) situation in respect of operation, management and maintenance, and the accompanying operational objectives. The recording of operational objectives takes the form of process descriptions known as concepts of operations or ConOps. The various types and numbers of cyber incidents are an essential component of such concepts: what (types of) incidents do we consider possible, what is acceptable and what should be prevented ‘at all costs’? Residual risks are decisive in determining what is acceptable in the event of an incident. Every year the NCSC publishes an update of the Cyber Security Assessment Netherlands.


    Security incident

    ‘A security incident is an event or action that could potentially jeopardise or violate the security of hardware, software, information, a process or an organisation,’ according to the cybersecurity dictionary. In other words, a security incident may also be the result of an oversight, an unintentional/well-intentioned change with an accidental, negative effect.


    In general, classes of activity that are recognised as security breaches are:

        1. Attempts to gain unauthorised access to a system and/or data.
        2. The unauthorised use of systems to process or store data.
        3. Changes to system firmware, software or hardware without the consent of the system administrators.
        4. A malicious disruption and/or denial of service.


    The purpose of the inventory is to gain an insight into the threats that negatively influence the operational objectives.


    A commonly used list of threats in the OT environment is as follows:

        1. Access by unauthorised persons to:
          • Operating and technical areas.
          • ICT and OT/SCADA systems and/or documentation such as drawings, manuals, etc.
          • The data network (via the internet, wireless applications or an open portal).
        2. Missing information about weaknesses in the security and about incidents and potential responses.
        3. IT and OT/SCADA systems have vulnerabilities and are susceptible to malware.
        4. The inability to detect and analyse divergent behaviour or incidents on the data network via logging and monitoring.
        5. Risks introduced by operating and/or maintenance staff who are unaware of unsafe situations, have not received the appropriate training, have not signed a confidentiality statement or are not in possession of a recent certificate of conduct (VOG).
        6. Functional changes may have unintentional effects on safety and security, and may even cause the partial or complete failure of ICT and OT/SCADA systems.
        7. The enforcement and effectiveness of the cybersecurity measures is not safeguarded, nor is there any structural assurance among all (sub)contractors involved.
        8. In the event of disruption to the system or functional changes, there is no fall-back option (no back-up or recovery process).

    6.5 Drafting a risk analysis [link id=”z89x3″]

    The inventory is followed by the drafting of a risk analysis on the basis of the list of threats presented in the inventory. The threats must be specified as precisely as possible. The classification of aspects in chapter 2 The aspects of cybersecurity can help in preparing this list.


    The risk can be quantified by determining the probability of occurrence of each threat, and determining the impact, if the threat actually occurs. The risk is then the combination of impact and probability: risk = probability x impact.
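
As a minimal sketch, both this formula and the Rijkswaterstaat variant mentioned below (risk = threat x vulnerability x impact) can be applied to ordinal scores. The 1–5 scales and the acceptance threshold here are illustrative assumptions, not prescribed by this document:

```python
def risk_simple(probability: int, impact: int) -> int:
    """risk = probability x impact, e.g. both scored on a 1-5 ordinal scale."""
    return probability * impact

def risk_rws(threat: int, vulnerability: int, impact: int) -> int:
    """Rijkswaterstaat variant: risk = threat x vulnerability x impact."""
    return threat * vulnerability * impact

# Illustrative threat: unauthorised access to a technical area.
score = risk_simple(probability=2, impact=5)
print(score)  # 10

# With an (assumed) acceptance threshold, the score drives the decision:
ACCEPTABLE = 8
print("accept" if score <= ACCEPTABLE else "mitigate")  # mitigate
```

Note that multiplying ordinal scores only produces a ranking aid, not an absolute measure; the resulting numbers are meaningful only relative to one another.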


    Risk assessment by Rijkswaterstaat

    Rijkswaterstaat uses the formula ‘risk = threat x vulnerability x impact’ to determine the risk to cybersecurity. See the diagram below (Source: presentation on cybersecurity for ISAC by Turabi Yildirim, Rijkswaterstaat).

    Figure 6.3: Risk assessment at Rijkswaterstaat



    A ‘cybersecurity incident log’ is a tool for monitoring the security assessment for a specific operational environment or system. Cybersecurity assessments are generally drawn up by cybersecurity experts. However, it is advisable to have the realism of these assessments peer-reviewed, and to have the client and the users evaluate their impact on the operation, especially with respect to the acceptance of residual risks (see below).



    Broadly speaking there are two methods for drafting a risk analysis:

        • Qualitative: qualitative estimates of the risks are made.
        • Quantitative: the risks are quantified in measurable criteria, usually expressed in terms of financial consequences or the number of acceptable incidents (residual risk) or cases of personal injury.


    In the case of industrial automation, use is often made of a fault tree analysis or a failure mode, effects (and criticality) analysis, the FME(C)A. Specifically for cybersecurity, a threat model can be drafted.


    Sources for risk analysis and management for cybersecurity are:

        • IEC 62443-3-2: OT systems are subject to this standard, which describes the security risk assessment and system design.
        • ISO 27000: This series comprises a variety of information security sources. The ISO 27000 standard is aimed primarily at IT environments such as office automation (SAP, Microsoft Office, etc.). For certain aspects, it is not suitable for OT systems such as the operation, management and monitoring of tunnels (see also Appendix 2 OT vs IT). After all, in the case of tunnels, continuous use is essential, which is not the case for office automation.
        • ISO 31000: risk management.
        • Guides and guidelines from ProRail and Rijkswaterstaat.


    Not separate domains

    Whereas IT and OT have traditionally been separate domains, nowadays a growing number of manufacturing processes depend on IT solutions. As a result, the OT environment is affected by malware from the IT environment more frequently. In practice, more than half of the malware problems in an OT environment arise from the organisation’s own IT systems. Moreover, the development of the Internet of Things (IoT) has now reached the OT world. Naturally, this creates many new possibilities, but unfortunately also new security threats; an increase in the number of sensors within a factory or an object represents a larger ‘attack surface’. Specific knowledge and expertise, as well as a clear understanding of the impact of a cyber attack and how to defend against it, are needed to reconcile cybersecurity in the two domains.


    Availability and integrity are top priorities on the OT agenda. Cyber defences have to protect them, but at the same time must not disrupt the business processes. Equally, threats to the IT environment must be prevented (mitigated), where the priority might be, for example, to preserve confidentiality. An obvious solution for properly securing both domains is an integrated ‘IT/OT’ approach to cybersecurity. A specific threat assessment must be drawn up for every operation and for every system. This applies both to the design process and in determining (and evaluating) the specific cybersecurity measures.



    Operation encompasses the day-to-day operational tasks (normal and in an emergency) as well as maintenance and management activities (maintenance operation).


    Identifying vulnerabilities and adverse consequences

    On the basis of the previously mentioned set of threats, it is possible to define a set of hazards by identifying one or more vulnerabilities and one or more (adverse) consequences, for example by following the step-by-step plan outlined below:

        1. One of the threats is that unauthorised persons gain access to a technical area.
        2. The person who drafts the risk analysis must then ask himself what could go wrong if such a situation arises:
          • The unauthorised person could deactivate equipment.
          • The unauthorised person could gain access to the network.
          • The unauthorised person could gain access to a SCADA computer.
        3. A consequence can be identified for each of these vulnerabilities.
          • If equipment is deactivated, the object (the tunnel) can no longer be operated or monitored.
          • If someone gains access to a network, they can listen into and manipulate the network traffic, thereby influencing the control system.
          • If a person gains access to a SCADA computer, they can operate and control that part of the systems without being able to observe what happens in the tunnel.
        4. The final consequence can then also be determined:
          • If the tunnel cannot be operated, it is no longer safe and has to be closed.
          • If the commands transmitted via the network are manipulated, unintended effects can occur in and around the tunnel that have a negative impact on the safety of road users.
          • If a person operates a sub installation without understanding the effect of their actions, unintended effects can take place in and around the tunnel that negatively impact the safety of road users.
        5. Finally, once the complete picture of the threat, vulnerabilities and consequences has been identified, it is possible to estimate the probability of this threat occurring.
        6. For each of the risks, the risk level is defined and a determination can be made whether this level is or is not acceptable.


    Note: the above summary is intended as an example of situations that may arise, rather than an exhaustive list. The step-by-step plan can be applied both to the entire tunnel system and to subsystems in order to obtain a more detailed picture.
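
The step-by-step plan above can be captured in a small data structure. The classes, the example entries (taken from the technical-area scenario) and the 1–5 scales below are a sketch, not a prescribed model:

```python
from dataclasses import dataclass, field

@dataclass
class Vulnerability:
    description: str
    final_consequence: str
    impact: int  # 1 (minor) .. 5 (severe); the scale is an assumption

@dataclass
class Threat:
    description: str
    probability: int  # 1 (rare) .. 5 (frequent); the scale is an assumption
    vulnerabilities: list = field(default_factory=list)

    def risk_level(self) -> int:
        """Worst case: probability x highest impact among the vulnerabilities."""
        if not self.vulnerabilities:
            return 0
        return self.probability * max(v.impact for v in self.vulnerabilities)

threat = Threat("Unauthorised person gains access to a technical area", probability=2)
threat.vulnerabilities += [
    Vulnerability("Equipment can be deactivated",
                  "Tunnel can no longer be operated and has to be closed", impact=4),
    Vulnerability("Access to the network is possible",
                  "Manipulated commands endanger road users", impact=5),
]
print(threat.risk_level())  # 10
```

The resulting risk level per threat can then be compared against the acceptable level determined by the managing organisation.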


    In the case of existing systems, the inventory is carried out on the basis of the current system design, the current measures and the current vulnerabilities (‘as is’). Following the risk analysis, new measures are introduced, effectively representing a new system design (‘to be’). In the event of new systems, the inventory is based on the intended operation and operational objectives, and the threats and designed cybersecurity measures.


    Baseline measurement

    On the basis of the list of risks, IEC 62443-3-2 makes it possible to carry out a baseline measurement and assess the threat risk before any control measures are defined. The baseline measurement then reveals the most significant risks, and the control measures that are required to reduce the risk to an acceptable level. Precisely what that acceptable level is must be determined by the organisation that is going to manage the object to be created: it will be confronted with the consequences of the threat occurring, and with the necessity of recovering normal functionality of the object within a reasonable time frame and at reasonable expense. Potential measures are explored in the following section.
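
The baseline measurement can be sketched as comparing the risk score without controls against the residual score after applying candidate measures. The acceptance level, candidate measures and reduction percentages below are illustrative assumptions:

```python
def residual_risk(baseline: int, reduction_pct: int) -> float:
    """Risk remaining after a measure that removes reduction_pct of the baseline."""
    return baseline * (100 - reduction_pct) / 100

ACCEPTABLE_LEVEL = 4   # set by the managing organisation (assumed value)
baseline = 10          # e.g. probability 2 x impact 5, without control measures

candidates = {
    "badge-controlled access to technical areas": 50,  # assumed reduction (%)
    "badge access plus intrusion detection": 70,
}
for measure, reduction in candidates.items():
    residual = residual_risk(baseline, reduction)
    verdict = "acceptable" if residual <= ACCEPTABLE_LEVEL else "insufficient"
    print(f"{measure}: residual {residual:.0f} -> {verdict}")
```

The comparison makes explicit which candidate measures bring the residual risk below the acceptable level, and which risks remain to be accepted or addressed by recovery measures.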


    Project-associated risks

    As explained above, there are also risks associated with the project itself. These are no longer those associated with the OT, but rather risks that are the consequence of the fact that the project generates information that is of potential interest to malicious third parties. This can then be subdivided into information on the project (who is working on it, how much it costs, how is the planning progressing etc.) and information on the object to be created (the design documentation). For the purposes of this living document the design documentation is particularly important: sufficient practical information relating to protecting information on the project can be found elsewhere.


    If the design documentation falls into the wrong hands, there is a risk that a malicious third party could use it to investigate whether the design has any weak points, and how it is configured to counter threats. That is why it is important to carry out a risk analysis for this documentation, too. ISO 27005 can be used for this purpose. Subjects that must be addressed in this context are:

        • Who has physical access to the design locations and the workstations on site? Access must be restricted to personnel who work there, and measures must be in place to prevent access by unauthorised persons.
        • Secure storage of the design documentation; who is allowed to do what, and how is this protected?
        • What is expected of personnel working on the project in relation to communications, both internal and external, and how can design documentation be shared in a secure and reliable manner?
        • How is the review and approval process set up, and what threats might it harbour? Who is allowed to see what, and how is this safeguarded?
        • How are the computer systems that are used to create the design documentation protected?


    Types of protection

    When assessing physical access security, the focus is on tangible measures, such as a fence, lighting or guards. In this way it is possible to monitor and manage access to an object, its technical rooms and its control cabinets.


    Analyses in the context of technical security also have a bearing on the system hardware and software. This includes firewalls, antivirus scanners, secure links, the blocking of USB ports, encryption and mechanisms for identification and authentication. Control systems in tunnels increasingly use hardware that connects to a network via internet technology. In tunnels this concerns not just PCs and servers, but also cameras, network hardware, intercoms, high-frequency technology, etc. It must be possible to use and service each device safely, remotely if necessary.


    The security of technical measures rests on trust in the products and their suppliers. After all, backdoors or unpatched vulnerabilities in a product can totally undermine a security design. The products used must be configured and deployed safely, but they also need to be serviced in the long term (security updates). The support offered by product suppliers is crucial in this respect.


    An important aspect of the analysis is to realise that updates not only remedy the errors; they often also introduce new or amended functionalities. It is important to test updates extensively before they are rolled out on the actual system. This may mean that a representative test/acceptance environment is required for each environment.

    6.6 Determining measures [link id=”sgpf4″]

    A risk analysis and the accompanying measures focus on three aspects: people, organisation and technology (see also chapter 2 The aspects of cybersecurity).

        1. People: awareness, tasks, authorities and responsibilities of employees, managers, maintenance staff etc.
        2. Organisation: vision, strategy and policy, governance, standardisation, legal aspects, contract management, GDPR, processes, procedures, etc.
        3. Technology: physical access security, IT, OT, hardware, software, the built-up environment.


    The figure below shows the three aspects and their mutual relationships. It follows that a proper cybersecurity management plan has to embrace all three aspects. To arrive at a particular risk level (classification), a weighted set of measures will have to be adopted in each of these areas.


    Figure 6.4: Aspects of cybersecurity.


    Measures often affect more than one aspect and must be in balance to achieve the intended effect. If, for instance, only technical measures are taken while the other aspects are ignored, the intended level of threat control will not be achieved. Worse still, new threats that cannot yet be envisaged may arise, because people who are unaware of, or do not understand, the reason for or importance of the measures will circumvent them. One example is the introduction of minimum requirements for passwords. The unintended consequence is that people often write passwords that are too complex to remember on a post-it note that they then stick on the monitor or under the keyboard. This undermines the measure and effectively increases the level of risk.


    There is a huge volume of information about cybersecurity measures in the form of best practices, international and national standards, general guidelines, checklists and manufacturer guidelines. As a rule, for a specific OT system in a specific environment and for a specific operational objective, choices are made (in the design process) in favour of specific cybersecurity measures as a subset of generic measures. The measures to be taken can be determined on the basis of a risk analysis. ISO/IEC 62443 is an important guideline for such an analysis.


    Effect on the threat

    Measures to be taken can be classified according to the effect they have on the threat. Measures can be proactive, reactive or aimed at recovery.


    The figure below is a diagrammatic representation of the situation. The primary goal of proactive measures (staying healthy) is to reduce the probability of a cyber incident occurring. Their purpose is to protect the asset against malware and manipulation or changes to software and/or hardware in the event of a failure of the physical access security. The recovery measures (healing quickly) on the right-hand side of the figure are intended to reduce the impact of an incident and to fully restore the asset’s functionality, as quickly as possible. It is however essential that the event be detected; without detection, no remedial measures can be taken.



    Figure 6.5: Diagram of proactive and recovery measures.


    Measures can then be further divided into subclassifications as follows:

        • Prevention: measures to prevent the threat from happening or to reduce the probability of it happening.
        • Detection: measures to detect a (potential) threat when it occurs.
        • Repression: measures to limit the damage when a threat occurs.
        • Correction: (partial) corrective response as soon as a threat occurs.


    A secondary purpose of proactive measures is to greatly shorten the time taken for diagnosis, repair and start-up operations. Corrective measures help limit the damage to the organisation’s reputation and reduce the time needed to restore function(s).


    Phased approach

    Different sets of measures may be needed during different phases of a project. Below, specifically for the design and build phase, a series of focus points are listed. In each phase, we reconsider the three aspects introduced in chapter 2 The aspects of cybersecurity.


    Design phase – People

    Design phase – Organisation

    Design phase – Technology

    Building phase – People


    Building phase – Organisation


    Building phase – Technology



    Residual risks

    Taking a measure has consequences for the threat or the risk. The residual risk is the risk that remains after a specific measure has been taken in relation to a specific risk or threat. To enable comparison of the effects of the various measures, it is essential to determine the residual risk for each measure. The residual risk to a large extent determines whether a measure will or will not eventually be implemented.

    6.7 Selecting measures [link id=”dgxcg”]

    The selection of measures to be taken can depend heavily on the condition of the object and the technical status of its systems. There will be a wider choice of measures when building new objects than for an older existing object, in which the age of the existing installations and the technology used in them can vary greatly.


    In new-build projects, ‘security by design’ and cybersecurity are integrated in the design of the installation(s) and the overall design. With a renovation, the process involves ‘security by customisation’: bespoke solutions for the individual installations and an appropriate level of security across the installations as a whole.


    Depending on the risk, a decision has to be made, with transparent reasons, on the combination of precautionary measures to be taken against cyber incidents and the minimum recovery measures that will be needed. Measures must have a specific purpose, which has to be respected by the ‘people’, the ‘organisation’ and the ‘technology’ as long as the measure is in effect. Failure to implement the measures properly (for example making a workaround because it is too much trouble to perform maintenance) will once again increase the risk.


    The risk inventory and analysis yields a list of the impact that particular measures will have on the risk of events occurring. Measures can have very different outcomes. For example, some measures will have little effect and others will have a huge impact. A particular measure could also have a negative effect on another measure or have a negative impact on the functionality of the object. It is crucial to select the correct measures during this phase, and that calls for careful consideration. The selection should therefore preferably be made by a team composed of appropriate experts.


    The figure below illustrates the effect of a series of measures which, in and of themselves, do not deliver complete certainty. In such cases, the measures should preferably be selected in such a way that each measure covers the weaknesses inherent in the other measures. If all the measures share the same vulnerability, a situation emerges as represented in the Swiss cheese model.


    Figure 6.6: The Swiss cheese model. (Source: James Reason)


    Once the measures have been selected, it is time to once again review the (residual) risks in each domain. Preferably, measures should be designed to reinforce one another. Naturally, the greater the number of proactive measures, the smaller the chance of a threat occurring. However, adding ever more measures also has drawbacks, including management costs and potential problems in maintaining and making changes to the installation/the system.


    The risk differential that arises when a measure is taken is the difference between the impact if the measure is not adopted and the impact if it is implemented. The residual risk is the effect (including the chance of it occurring) that remains when all the measures to manage the risk have been taken. These residual risks can be underpinned with scenarios describing the consequences in terms of time, finance and/or staff if the risk occurs. The sets of selected measures can then serve as input for a new selection round. The selection process only ends once the total residual risk is sufficiently low. At the end of the day, the residual risk must also be accepted at management level.
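    The relationship between the risk differential of individual measures and the total residual risk can be sketched in a few lines of Python. The measure names and impact scores below are purely illustrative assumptions, not values from this living document.

```python
from dataclasses import dataclass

@dataclass
class Measure:
    name: str
    impact_without: float  # expected impact if the measure is not adopted
    impact_with: float     # expected impact if the measure is implemented

    @property
    def risk_differential(self) -> float:
        # Risk differential: impact without the measure minus impact with it.
        return self.impact_without - self.impact_with

def residual_risk(baseline_risk: float, measures: list) -> float:
    # The residual risk is what remains of the baseline risk after all
    # selected measures have been taken (never below zero).
    remaining = baseline_risk
    for m in measures:
        remaining -= m.risk_differential
    return max(remaining, 0.0)

# Hypothetical measures with illustrative impact scores.
measures = [
    Measure("network segmentation", impact_without=8.0, impact_with=3.0),
    Measure("hardened remote access", impact_without=6.0, impact_with=4.0),
]
print(residual_risk(10.0, measures))  # reductions of 5.0 and 2.0 -> 3.0
```

    Comparing the computed residual risk against the level accepted at management level is then a direct way to decide whether another selection round is needed.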

    6.8 Implementing measures [link id=”qg169″]

    Once a decision has been made on which measures are to be taken, they will then have to be implemented. It is possible that not all measures can be implemented at once due to budgetary or technical restrictions. In that situation, it can be useful to rank the measures in order of their effect on the risk level or in order of their financial and/or technical impact. On that basis, a sequence of implementation can be proposed. In that case, a temporary residual risk will remain that is higher than the acceptable risk level. This higher residual risk will then have to be explicitly accepted by the organisation.
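    One possible way to propose such an implementation sequence is to rank candidate measures by risk reduction per unit of cost. The measures, reduction values and costs below are hypothetical examples, not figures from this living document.

```python
def implementation_order(measures):
    # measures: list of (name, risk_reduction, cost) tuples.
    # Rank by risk reduction per unit of cost, highest first, as one
    # possible basis for a phased roll-out under budget restrictions.
    return sorted(measures, key=lambda m: m[1] / m[2], reverse=True)

# Hypothetical candidate measures.
candidates = [
    ("firewall rule clean-up", 4.0, 2.0),  # 2.0 reduction per unit cost
    ("full network redesign", 9.0, 9.0),   # 1.0 reduction per unit cost
    ("patch legacy PLCs", 3.0, 1.0),       # 3.0 reduction per unit cost
]
for name, _, _ in implementation_order(candidates):
    print(name)
```

    Other ranking criteria (for example technical impact, or dependencies between measures) can of course be substituted for the cost ratio used here.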

    6.9 Patching and servicing [link id=”41mst”]

    This section offers the object or tunnel managers, technical specialists and security advisors involved practical suggestions and tips for dealing with patching and servicing (maintenance) in the light of the availability of the tunnel object.


    Note: wherever the term patches is used in this chapter, it should be taken to mean security patches.

    6.9.1 Introduction [link id=”v5cbn”]

    Appendix 3 Cybersecurity and tunnel safety describes a number of examples of how cybersecurity and tunnel safety are related. A physical incident may for example be the consequence of defective (or even failing) cybersecurity policy. With regard to the monitoring of the physical safety of tunnels, tunnel operators are entirely dependent on (complex) process automation systems, also known as industrial automation and control systems (IACS). In many cases, people think this refers only to the operating systems (SCADA/PLC), but it in fact also relates to systems for dynamic traffic management (DTM), cameras (CCTV), PA and intercom, building management, access control, etc. All these systems are interrelated by means of an OT network, itself often linked to a traffic control centre located remotely from the tunnels. If these systems turn out to be unreliable, it soon becomes questionable whether the tunnel is in fact safe and can therefore be kept open. This then threatens the availability of the tunnel system as part of the overall road infrastructure in its environment.


    When it comes to the completion and handover of a new or renovated tunnel by the project organisation to the management organisation, the tunnel is assumed to have been designed, tested and commissioned safely. It is widely recognised that this technology is at a considerably higher risk of failing than civil engineering structures. In the project phase, the contractor will have to hire in expertise in this specialist field, or ensure that such expertise is available within its own organisation, to ensure that it can meet its obligations.


    Following handover, throughout the operation phase, too, the IACS environment will also have to be kept (cyber) safe. Patching can help in that process.


    ‘Patch management is the process of managing patches and/or upgrades for firmware and software with the aim of keeping the systems up to date and secure.’


    Patch management must be thoroughly embedded in the organisation in order to manage the apparent contradiction between the concerns of the security officer (confidentiality and integrity) and those of the service manager (availability).


    In principle, it is possible to distinguish between two types of patches:

        • Security patches are intended to improve security and to mitigate threats and abuse, for example zero-day exploits and the exploitation of vulnerabilities.
        • Functional patches are intended to add new functions or improve operation, for example in the form of updates and upgrades.


    In practice, both types of patches are often combined in one and the same update, and can in fact not be implemented in isolation. This means that both the security improvements and the changed functionalities must be assessed and tested for correct functioning and (undesirable) side effects. Security patches should therefore be implemented in accordance with the overarching patch management process.


    Within the OT environment in general and specifically for tunnels, patching often represents a challenge. As already indicated, a tunnel object is characterised by a whole range of systems which:

        • differ in nature (supplier, type);
        • differ in application or are developed specifically for an application;
        • are developed with a view to long-term deployment and availability;
        • are not designed with security in mind – no ‘security by design’ (NB. this applies in particular to not yet renovated tunnels; for the most recent tunnels, security by design is a design principle);
        • are divided across multiple locations that are often interlinked via a traffic control centre.


    Based on the above, the obvious conclusion is that it is practically impossible to use patching to keep all systems in a tunnel permanently intrinsically secure. The working group therefore proposes the following approach:


        1. Develop a Patch management process with additional, mitigating measures to create a secure situation.
        2. The starting point for that process must be: ‘install patches in a controlled manner’.
        3. Patch management is not an end in itself. It is part of an overlying framework in respect of maintaining a secure infrastructure.
        4. Preconditions for patch management are a tested backup and recovery process and a controlled change process.
        5. Ensure access to sufficient expertise if not already available within your own organisation.


    Any patch management process must include a clear decision-making moment on whether or not to install a patch. A patch should only be rolled out if the current risk is greater than acceptable, the patch reduces the level of risk, and the patch has no negative impact on the functionality of the object. Bear in mind that not installing a patch could prevent or seriously hinder the installation of a future, possibly essential patch.
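    The decision criteria above can be captured in a short sketch. The function below is an illustrative simplification; the risk values and the function name are assumptions for this example, not part of a prescribed process.

```python
def should_install_patch(current_risk: float,
                         acceptable_risk: float,
                         risk_after_patch: float,
                         breaks_functionality: bool) -> bool:
    # Roll out the patch only if the current risk exceeds the acceptable
    # level, the patch actually reduces the risk, and it has no negative
    # impact on the functionality of the object.
    return (current_risk > acceptable_risk
            and risk_after_patch < current_risk
            and not breaks_functionality)

print(should_install_patch(7.0, 4.0, 2.0, breaks_functionality=False))  # True
print(should_install_patch(3.0, 4.0, 1.0, breaks_functionality=False))  # False: risk already acceptable
```

    In practice this decision moment is of course taken by people in the patch management process; the sketch only makes the three criteria explicit.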

    6.9.2 Starting point [link id=”bgtvc”]

    This living document takes as its starting point the best practice Patch management from the Cybersecurity implementation guideline 2.4 (CSIR, appendix 8) from Rijkswaterstaat. This guideline has been evaluated and where applicable adapted specifically for use within the management process for a tunnel object. The following sources are also relevant:


        1. IEC 62443-2-3 Patch management in the IACS environment. This international standard was developed by a community that includes volunteers and may be used on condition that the standard is cited.
        2. NIST 800-82 Guide to industrial control systems (ICS) security. This guide contains a limited section on patching and refers to NIST 800-40 ‘Guide to enterprise patch management technologies’.
        3. DHS recommended practice for patch management of control systems. The 2008 version may no longer be entirely current, but it contains information that may still be usable. This version is referenced from the overview of recommended practices from the Cybersecurity and Infrastructure Security Agency (CISA).

    6.9.3 Patching-related processes [link id=”v9mph”]

    Patching should not, and may not, be seen as a standalone element in the organisation. It is part of the overarching lifecycle management process, which itself comprises a series of elements that must be organised, via regular processes, from purchase through to disposal of the asset:

        • Need
        • Selection
        • Pilot
        • Purchase
        • Maintenance
        • Disposal


    Within the maintenance process, patching has interfaces with a number of related processes:

        • Change management
        • Management
        • Asset management
        • Configuration management
        • Contract management


    Patching is an integral part of the management process. Within this process, all forms of patches are managed and implemented in a controlled manner. This applies to the various components present within the landscape of the management organisation:

        • Software
        • Firmware
        • Hardware


    For each of these processes, descriptions must be available at various levels for the implementation of the work and safeguarding within the organisation:

        • Process descriptions
        • Work instructions
        • Reporting


    Patching influences all operational IT and OT processes and, if not implemented or not implemented correctly, can also impact the continuity of the organisation. Directly and indirectly it can lead to the halting of processes and work flows and, in the worst case, a complete operational shutdown. Examples are:

        • Non-functioning software
        • Not correctly functioning software
        • Not correctly functioning hardware
        • Vulnerabilities that can be exploited by cyber criminals
        • Vulnerabilities that can be exploited by viruses and ransomware
        • Non-compliance with statutory requirements
        • The inability to guarantee and validate data and outcomes
        • Etc.


    Patching is essential to keep the various processes within the organisation reliable and trustworthy and to safeguard continuity.

    6.9.4 Effective patch management [link id=”z2120″]

    Starting points and conditions

    Before any effective patch management process can be established, a number of essential preconditions must be satisfied.


    Firstly, there must be insight into which equipment is present in the object, and which software and firmware (including version and patch level) have been installed on that equipment. In practice this means that a ‘software bill of materials’ (SBOM) must be available for the object and integrated in the CMDB. Maintaining the CMDB is in turn part of the asset management and configuration management processes. A process must then be established that supplies, for every configuration item (CI) in the CMDB, information about known vulnerabilities and the (future) availability of patches for those vulnerabilities.
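    Linking vulnerability information to configuration items can be sketched as a simple matching step. The CI names, products, versions and advisory identifiers below are entirely hypothetical; a real CMDB and vulnerability feed would contain far more attributes.

```python
# Hypothetical CMDB extract: configuration items (CIs) with product and
# installed version, as recorded in the SBOM.
cmdb = [
    {"ci": "PLC-TUN-01", "product": "ExamplePLC", "version": "2.1"},
    {"ci": "SCADA-SRV-01", "product": "ExampleSCADA", "version": "5.4"},
]

# Hypothetical vulnerability advisories (product, affected version, id).
advisories = [
    {"product": "ExamplePLC", "version": "2.1", "id": "ADV-2024-001"},
    {"product": "OtherHMI", "version": "1.0", "id": "ADV-2024-002"},
]

def match_advisories(cmdb, advisories):
    # Link each advisory to the CIs it applies to; the matches feed the
    # risk analysis for the affected configuration items.
    hits = []
    for adv in advisories:
        for item in cmdb:
            if (item["product"] == adv["product"]
                    and item["version"] == adv["version"]):
                hits.append((item["ci"], adv["id"]))
    return hits

print(match_advisories(cmdb, advisories))  # [('PLC-TUN-01', 'ADV-2024-001')]
```

    The value of such matching stands or falls with an up-to-date CMDB, which is exactly why asset and configuration management are named here as preconditions.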


    There must also be clarity within the organisation as to where and how tasks and responsibilities for management, maintenance and (cyber)security are allocated, and on the policy of the organisation with regard to information security, (cyber)security and patching since this establishes the frameworks within which patch management must be organised. One key decision that every organisation has to take is the risk level considered acceptable by the organisation. This is a vital factor for the risk management process. The acceptable level of risk is described in all facets relevant for the organisation such as acceptable financial losses, acceptable number of accidents with or without loss of time per time unit, number of fatalities per time unit, etc.


    The security policy must be elaborated in the form of processes, procedures and working methods known to all personnel who work with them. There must also be a clear record of the applicable processes for the organisation for risk management and change management.


    Finally, the availability of a permanent, representative test environment is an essential precondition. Representative means that tests conducted in that environment predict the behaviour of the production environment. It does not need to be a precise copy of the production environment but the test environment must contain one example of every individual device present in production, equipped with precisely the same firmware and software.



    In section 2.5.3 of the CSIR, Rijkswaterstaat has formulated measures for patching, and appendix 8 to that same document contains a guideline on the structuring of the patch management process.


    The figure below contains an overview of the various process stages that make up patch management. As already explained in the starting points, the process starts with the availability of assets with all their characteristics. Information then has to be collected about vulnerabilities and threats. That information in turn forms the basis for the risk analysis to be undertaken. The outcome of the risk analysis is a decision on acceptable or unacceptable risks. Control measures then have to be implemented for the unacceptable risks; firstly in a test environment and subsequently in the production environment. Finally, the CMDB has to be updated with the most recent configuration information.


    Figure 6.7: Process stages patch management.


    Roles and responsibilities

        • The object owner is responsible for the object and its optimum performance. The object owner defines which risks are acceptable and which are not and eventually decides on which patches should or should not be applied.
        • The manager/operator is responsible for advice on and assessment and implementation of all management tasks, including the application of patches.
        • The supplier provides the manager with advice on the application of patches and supplies the patches in question.


    Together, the various parties consider whether the work to be carried out is compatible with the expected risks. In consultation, a decision will be taken on whether the measure, the work and any risk or downtime are acceptable and whether the work should be carried out. These elements can be safeguarded in a series of processes to guarantee decision making, including change management, risk consultation, toolbox meetings, life cycle management, etc.


    Information about vulnerabilities

    A variety of sources are available for gathering information about vulnerabilities:

        • NCSC advisories, including the MITRE CVE database.
        • Confidential threat intelligence from the ISACS.
        • Subscription to the feed from the supplier/ICS-CERT-CVSS.
        • Notifications from the supplier.
        • Specialist advice such as SOC services and specialist monitoring services in the field of vulnerabilities and zero-day exploits.
        • Notifications from a variety of sources such as the dark web (hacker forums, bulletin boards) and social media/internet monitoring.
        • Possible use of the Cyber Threat Intelligence (CTI) feed from Rijkswaterstaat or TIP (if an up-to-date CMDB is available).


    During information gathering, account must be taken of the fact that on the one hand information may be missed while on the other, information may be provided that is not or not fully applicable to the object. It is then essential that information that is made available be linked to the CIs in the CMDB as a data source for the risk analysis to be conducted.


    Risk analysis

    When information about vulnerabilities is combined with information about threats, it is possible to carry out a risk analysis for each of the vulnerabilities. For each vulnerability, the level of risk is determined by assessing the likelihood that it can be successfully exploited by a threat, in combination with the potential consequences of such exploitation. If the conclusion is that the level of risk is higher than acceptable, a control measure must be implemented to reduce the level of risk. This can be achieved by installing a patch once it has been provided by the supplier. As long as the patch is not yet available or not yet installed, the management organisation will have to be on high alert for signs of exploitation of the vulnerability.
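    The per-vulnerability assessment can be reduced to a minimal sketch, assuming a simple likelihood-times-consequence scoring model; the numeric scales and the acceptable level are illustrative assumptions, not values prescribed by this living document.

```python
def risk_level(likelihood: float, consequence: float) -> float:
    # A simple risk score: likelihood of successful exploitation (0..1)
    # multiplied by the potential consequence of that exploitation.
    return likelihood * consequence

def needs_mitigation(likelihood: float, consequence: float,
                     acceptable_level: float) -> bool:
    # A control measure (e.g. a patch) is required when the risk level
    # exceeds what the organisation has defined as acceptable.
    return risk_level(likelihood, consequence) > acceptable_level

print(needs_mitigation(0.6, 8.0, acceptable_level=3.0))  # True
print(needs_mitigation(0.1, 5.0, acceptable_level=3.0))  # False
```

    Richer models (for example a decision tree or a trade-off matrix, as mentioned below) refine this same comparison against the organisation’s acceptable risk level.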


    Use could be made of a decision tree as proposed by Dale Peterson. Another method is to use a trade-off matrix (TOM) in which the consequences of scenarios are elaborated in advance of a decision.


    In this way, for each known vulnerability, a decision will be taken and a determination made as to whether a patch should or should not be installed, once it becomes available. Two potential situations are further explained below.

    No patch




    6.9.5 Testing [link id=”lcfzb”]

    Patching during construction or renovation

    Despite the fact that no changes are permitted during a tunnel running test, we still want to at least maintain the level of security by means of patching. This requires a controlled process similar to the process used in the production situation. That in turn requires that the test planning takes account of the installation of patches and of the retests that result from it. Taking this activity into account in the planning is a shared responsibility of the client and the contractor, since otherwise the entire system could already be outdated and vulnerable by the time the object becomes operational.


    Patching in production

    Before a patch can be installed in a production environment, testing must be carried out to demonstrate that the functionality of the patched element is not negatively influenced. To make this possible, a selection must be made from the available test cases that are relevant for the affected element of the software. The successful implementation and conclusion of this test is an essential precondition for installing (or not installing) the patch in the production environment.


    After it has been demonstrated in the test environment that the patch can be safely installed, a minimum subset of test cases must be defined for implementation in production, to demonstrate that the behaviour in the production environment is the same as that seen in the test environment.
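    Selecting that minimum subset can be sketched by tagging each test case with the components it exercises and keeping only the cases that touch the patched component. The test-case names and component tags below are hypothetical.

```python
# Hypothetical regression test cases, each tagged with the components
# it exercises.
test_cases = [
    {"name": "camera stream up", "components": {"CCTV"}},
    {"name": "barrier close command", "components": {"PLC"}},
    {"name": "intercom audio check", "components": {"intercom"}},
    {"name": "PLC failover", "components": {"PLC", "SCADA"}},
]

def select_tests(patched_components, test_cases):
    # Keep only the test cases that touch the patched component(s):
    # the minimum subset needed to confirm production behaviour.
    return [t["name"] for t in test_cases
            if t["components"] & patched_components]

print(select_tests({"PLC"}, test_cases))
# ['barrier close command', 'PLC failover']
```

    The same tagging also makes it easy to see which test cases must be rerun when a patch affects several components at once.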

    6.10 Monitoring [link id=”v8x9x”]

    Following implementation of the definitive set of measures, it is essential that the set then be monitored. The purpose of the set of measures is to achieve a specified level of cybersecurity; the vulnerabilities of the tunnel system are, after all, determined on the basis of a risk analysis of the current situation. Maintaining this level of cybersecurity requires a monitoring process covering the cybersecurity of all three aspects. Possible examples include:

        • Physical access protection/security to IA-related areas.
        • Granting logical access.
        • Configuring network connections.
        • Procedures for hardening and patching.
        • Procedures following the occurrence of security incidents.
        • Defining an incident response plan.
        • Logging and monitoring network traffic.
        • Agreements and procedures with contractors.
        • Awareness and training for tunnel managers and other affected personnel.
        • Procedures for making backups.


    In addition to measures resulting from the risk analysis, it is also important to focus attention on the implementation of the security recommendations from suppliers and from the NCSC (National Cyber Security Centre) of the Ministry of Justice and Security.


    Full-scale monitoring also includes auditing and accountability. The entire process comprises three stages: design, existence and operating effectiveness. For example, there should be a documented procedure for patching the tunnel operating system. In this case, design, existence and operating effectiveness mean that the measure must be recorded, the procedure must be described, and the affected party must also actually implement the procedure as described.


    With all the methods and aspects of cybersecurity monitoring, it is important to make agreements on the frequency with which inspections, checks and audits will be carried out. As far as possible, they should coincide with the inspections, checks and audits unrelated to cybersecurity that already have to be carried out by virtue of agreements and/or legislation.


    Monitoring depends on the phase in the process. In the design phase, the process of monitoring measures concentrates particularly on the implementation of the cybersecurity requirements in the technical design on the part of the contractor, and the testing of these by the client. In addition it is important that information that forms part of the design is classified. Sensitive information must be available solely in a secure environment with access control. This is also the case where communication with potential suppliers is concerned; in this respect, consider agreeing terms on how confidential information and documentation is dealt with. The document management process may be audited.


    During the construction phase on the other hand, it is vital to include testing and validation of the implementation of cybersecurity requirements as an integral part of the systems engineering. The client may also carry out audits during the construction phase. In this context consider workplace visits and physical inspections. Access management in relation to the construction site and any confidential information that can be found there is a point for consideration.


    Broadly speaking, monitoring breaks down into:

    Non-technical aspects (organisation and people)

    Technical aspects


    7 Verification and validation [link id=”0t64l”]



    Security measures in general must not only be in place, but must also be demonstrably effective. The same applies for cybersecurity measures: verification and validation are also required for this form of security. The definitions of verification and validation are often slightly different in different knowledge domains. There are therefore a wide variety of definitions in use internationally.


    Within the earthmoving, road construction and hydraulic engineering sector, the definitions are taken from the Guideline for systems engineering:

        • Verification: Confirmation that the specified requirements have been met by providing objective evidence.
        • Validation: Confirmation that the realised product meets the requirements and needs of the client, over and above the requirements of verification, by way of actual experience and providing objective evidence.


    In other words, the purpose of verification is to demonstrate objectively and explicitly that a solution meets the requirements imposed. Validation on the other hand demonstrates that a solution is suitable for the intended purpose.


    In addition, the verification and validation procedure for cybersecurity also has to encompass unusual situations. In other words, it is not only the ‘sunny day’ scenario that has to be tested, but also a (series of) scenarios for ‘rainy days’. For example: if a minimum length is prescribed for passwords, the following aspects should be separately validated:

        • A chosen password with the required length will be accepted.
        • A chosen password with a (random) longer length will be accepted.
        • A chosen password with a shorter length than the minimum permitted length will not be accepted and will lead to an error message from the system.
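    The three password cases above can be sketched as an executable check. The minimum length of 12 and the function name are assumptions for this example; the point is that both the sunny-day and the rainy-day scenarios are validated separately.

```python
MIN_LENGTH = 12  # assumed minimum password length for this example

def accept_password(password: str) -> bool:
    # Reject any password shorter than the prescribed minimum length.
    if len(password) < MIN_LENGTH:
        raise ValueError(f"password must be at least {MIN_LENGTH} characters")
    return True

# Sunny day: exactly the required length, and a (randomly) longer one.
assert accept_password("a" * MIN_LENGTH)
assert accept_password("a" * (MIN_LENGTH + 7))

# Rainy day: a shorter password must be rejected with an error message.
try:
    accept_password("a" * (MIN_LENGTH - 1))
except ValueError as error:
    print("rejected:", error)
```

    Note that the rainy-day case checks not only that the short password is refused, but also that the system produces an error message, exactly as the third bullet requires.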


    The principles for a verification and validation process are:

        1. The method used to provide evidence.
        2. The criterion.
        3. The assessor.
        4. Additional activities, criteria and assessors that are needed to provide sufficiently objective evidence. These include verification during successive phases of a project or even at multiple locations during a specific phase.
        5. The timing of the verification/validation.
        6. The starting conditions necessary to commence the verification/validation procedure. For example, if someone else has to complete their task in the project before the validation of one’s own task can start.
        7. The validity of the verification or validation method used. For example, if the contractor proposes a method of providing evidence that is new or unknown to the client.
        8. The risk of non-compliance with the criterion. For example, if it cannot be shown in advance that a theoretical system performance is feasible in practice, specific management measures might have to be defined for some criteria.


    Steps in the verification process

    1. Project start-up/work package analysis

    2. Starting a work package: drafting a verification plan

    3. Verification process

    4. Completing a work package: producing a verification report



    Steps in the validation process

    The validation steps should preferably be completed as early as possible in the project because the discovery of errors at a later moment results in higher costs. The validation process consists of four steps:

    1. Validation requirements

    2. Design validation

    3. Product validation

    4. Functional simulation of the system


    8 Incident response and recovery [link id=”mbft5″]


    “There are just two types of companies: those that have been hacked, and those that will be,” as Robert Mueller said in his speech Combating threats in the cyber world. Although many people would agree with this sentiment, there is still little awareness that it also applies to OT environments. Any OT environment with an external network connection can be targeted by a cyber attack at any time. In addition to the now almost commonplace incidents via scanning of the internet, there are also other opportunities for breaches of an OT environment. This chapter describes the detection, logging, assessment and processing of cyber incidents in OT systems.

    8.1 What is a cyber incident? [link id=”lbwnb”]

    8.1.1 Definition [link id=”zlr2v”]

    ‘A security incident is an event or action that could potentially jeopardise or breach the security of hardware, software, information, a process or an organisation,’ according to the cybersecurity dictionary. In other words, a security incident may also be the result of an oversight, an unintentional or well-intentioned change with an accidental, negative effect.


    In general, types of activity that are recognised as security breaches are:

        1. Attempts to gain unauthorised access to a system and/or data.
        2. The unauthorised use of systems to process or store data.
        3. Changes to system firmware, software or hardware without consent of the system administrators.
        4. A malicious disruption and/or denial of service.


    A modern system is a combination of a physical system, consisting of civil components and all electrical and mechanical components necessary for the functioning of the system, and a cyber system that comprises all computer systems and software necessary to operate, manage and monitor the physical system. In the physical system, physical incidents can occur in the same way that cyber incidents can occur in the cyber system. This means that the failure of a system can have both a physical and a cyber cause. Indeed, any cyber cause will have consequences in the physical system.


    Cyber incidents are (observable or unobservable) anomalies in the availability, confidentiality and integrity of systems. Not every cyber incident necessarily results in an operational incident. Operational incidents are deviations from the operational objectives, for example in the systems, and can harm safety, availability and privacy. There are generally more (observable or unobservable) cyber incidents per year than there are operational incidents.


    Figure 8.1: The causality between cyber incidents and physical damage and personal injury

    8.1.2 Life cycle [link id=”tp2qz”]

    The National Institute of Standards and Technology (NIST) uses the following model for the life cycle of an incident. This chapter discusses the various phases.


    Figure 8.2: Life cycle of an incident. (Source: Computer security incident handling, NIST)

    8.1.3 Causes and recognition [link id="rxzx3"]

    The Cyber Security Assessment Netherlands 2019 (CSBN2019) shows that there has been a change in the causes of incidents. The number of ‘chance’ breaches by non-specialist individuals (script kiddies) is falling, while the number of consciously planned attacks by nation states is increasing. Attacks of this kind are characterised by a long lead time, the use of advanced resources and specialised workers. Preventing such incidents, or resolving them effectively, requires committing more resources and personnel, and an integrated policy based on a coherent strategy. A cyber attacker needs only a small point of entry to gain access; security enforcers have to be alert to potential vulnerabilities in order to close off any such points of entry.


    Security enforcers must also be aware at all times of what is going on in the object to be secured and whether the observed behaviour or events are ‘normal’. Whenever a deviation from ‘normal’ is identified, this is grounds for further investigation.


    Section 6.5 Drafting risk analysis provides a list of threats identified by Rijkswaterstaat that could be the cause of security incidents.

    8.1.4 Notification and logging [link id="zvggl"]

    In order to be able to deal with an incident correctly it is crucial that incidents are reported immediately to the relevant incident manager. The tasks and responsibilities of the incident manager must be known in advance, see chapter 4 Cybersecurity management. Even if there is a possibility that the situation in question might not qualify as an incident, a notification must still be generated. Notifications of this kind must be made in good faith. The person reporting the incident must not be put under pressure to ‘cover up’ (parts of) potential incidents by not reporting them, or to dismiss them for personal, economic or other reasons.


    When logging incidents it is important to establish at least the following aspects:

        • Location
        • Date and time
        • System concerned
        • Company situation
        • Action taken
        • Person making the report and witnesses
        • Any other persons present


    The information in a notification forms the initial input from which the recovery plan can be drafted. Once this information has been established, it is also important to be aware that the input has, potentially, not just technical value but also legal value.
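    As an illustration, the aspects to be logged can be captured in a structured record. The sketch below is a minimal example; the field names and values are illustrative, not prescribed by this growth book:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class IncidentReport:
    """Minimal incident notification record; field names are illustrative."""
    location: str
    date_time: datetime
    system_concerned: str
    company_situation: str          # e.g. 'normal operation', 'maintenance'
    action_taken: str
    reporter: str
    witnesses: list = field(default_factory=list)
    others_present: list = field(default_factory=list)

# Hypothetical example notification
report = IncidentReport(
    location="Tunnel X, technical room 2",
    date_time=datetime(2023, 5, 15, 14, 30),
    system_concerned="Camera system",
    company_situation="Normal operation",
    action_taken="Camera feed isolated from network",
    reporter="J. Jansen",
    witnesses=["A. de Vries"],
)
```

    A record like this preserves the notification as initial input for the recovery plan, and keeps its potential legal value intact.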


    Figure 8.3: The four phases of incident response and recovery

    8.2 Phase 1: Preparation [link id="ffwkv"]

    Proper preparation means that an organisation is ready for action and can respond promptly when an incident arises. An appropriate ‘ready to detect and respond’ attitude forms part of the security incident response process for the object, organisation and administrator in question.


    A security incident is not always visible, and is not always easy to recognise as such. This makes awareness, and training in the recognition and logging of incidents, a first step in preparation. This step, together with the screening of personnel and the formalisation of tasks, responsibilities and authorities, forms the appropriate preparation for the human aspect of the incident response process.


    The organisation must also put safeguards in place to prevent incidents caused by people. Each activity controlled by an individual carries the risk of causing a security incident, whether consciously or unconsciously. In addition to integrity audits, other important instruments are the ‘two-man rule’, the use of work permits and formalised decision-making.


    Information about who has access to confidential information must also be securely protected. Password management is therefore an important aspect, and mechanisms like multi-factor authentication must be considered. Passwords known to more than one person are not allowed. The creation of temporary passwords for the design and test phase offers a useful solution.


    From a technical point of view, ‘automatic’ detection and incident logging are built in. Forwarding of notifications and follow-up of detections are part of the response process. A new vulnerability in technical systems may be maliciously exploited, but is not, in and of itself, a security incident. Another technical measure is patching: to prevent incidents, timely and proper implementation of patching, or of patch analyses, is an important instrument. Leaving a newly completed system ‘unpatched’, something that was hardly out of the ordinary until very recently, is now inconceivable.


    Frameworks and plans

    An integral part of sound preparation is producing (and practising with!) a variety of plans. These plans describe the structure and functioning of the organisation in emergency situations. Based on the contexts (laws and regulations), the various plans describe the framework for action in order to safeguard continuity, especially when a crisis occurs where the object or process is disrupted or fails. The following plans can be identified:

    Emergency plan


    Crisis management plan (CMP)


    Calamity plan or continuity plan (CP)


    Incident response plan (IRP)


    Physical security plan (PSP)


    8.3 Phase 2: Detection and analysis [link id="50lvf"]

    The purpose of this phase is to establish whether there has indeed been a security breach leading to an incident and, if so, how serious the incident is. A classification matrix is an excellent means of establishing the impact and the required level of escalation.

    8.3.1 First response [link id="twzlt"]

    The person reporting the incident is often also involved as the first responder. It is therefore important that first responders consciously focus on preventing escalation of the incident and on protecting available evidence. Another obvious priority is protecting the object in question, so that safety is safeguarded; one example of this is closing a tunnel. Also important in this respect is the role of the forensic investigator and the balance between recovery of the object’s function (for instance, restoring images to systems) and gathering evidence for the follow-up procedure (which may mean that the process of recovery takes longer).

    8.3.2 Classification [link id="4189v"]

    Classification depends on the nature of the impact (is the system as a whole shut down, has there been a data leak, etc.) and the type of incident. A typical classification is: critical, significant, low, negligible. Incidents can be prioritised using this classification.


    In this respect, it is important to differentiate between the time within which an incident has to be reported and the prioritisation of how incidents and escalation are dealt with. This makes it possible to differentiate between different levels of up-scaling and, as a consequence, to establish the sequence of the incident response process. For methods of classification, see:


    Classification and escalation are necessary because the issue here is not merely technical, i.e. getting the system up and running again, but also involves notifying and involving the right people/parties. A standard communications plan based on a classification system is therefore essential. This must also include a list of incidents for which the Autoriteit Persoonsgegevens (AP), the NCSC and/or the client need to be notified immediately.
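    A standard communications plan of this kind can be captured in a simple lookup structure. The sketch below is illustrative only; the parties listed and the reporting deadlines are assumptions, not values prescribed by this growth book:

```python
# Illustrative mapping from incident classification to a communications
# plan; the parties and deadlines are assumptions, not prescriptions.
COMMUNICATIONS_PLAN = {
    "critical":    {"notify": ["incident manager", "client security contact",
                               "NCSC", "Autoriteit Persoonsgegevens"],
                    "report_within_hours": 1},
    "significant": {"notify": ["incident manager", "client security contact"],
                    "report_within_hours": 4},
    "low":         {"notify": ["incident manager"],
                    "report_within_hours": 24},
    "negligible":  {"notify": ["incident manager"],
                    "report_within_hours": 72},
}

def escalation(classification: str) -> dict:
    """Look up who must be notified, and how quickly, for a classification."""
    return COMMUNICATIONS_PLAN[classification]
```

    Keeping this mapping explicit makes it easy to see, per classification, which parties (e.g. the AP or the NCSC) must be notified immediately.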

    8.3.3 Incident analysis [link id="93gkq"]

    After classification and prioritisation, the next step is allocating incidents to a response team. The composition of the response team depends on the classification and the phase of the project. Typically, a team is made up of:

        • Representative of the administrator (object, system).
        • Service organisation (responsible for system components).
        • Internal cybersecurity experts (e.g. the SOC in question).
        • External cybersecurity experts (e.g. from the service organisation).
        • Communications representative.
        • Legal Affairs department representative.


    What is crucial to the response team is a full and correct understanding of the existing system architecture and operational processes. In addition, there must be sufficient representation of people with knowledge of the vulnerabilities and their possible impact. Technical and operational knowledge is brought together from the design data, CMDB, business process and knowledge of IT/OT.


    A graphical overview supports the analysis of the systems, applications, individuals and business processes involved. This analysis will have an iterative character: the first version describes the functional impact on the business process, after which the analysis expands to the technical level. It also takes into account the possibility of the incident escalating, leading to a spread of the impact, and the fact that the choice of corrective action can itself have an impact on other parts of the system.

    8.4 Phase 3: Corrective action [link id="zw6vk"]

    The response team carries out an analysis of the incident and determines any corrective action. This calls for a structured approach based on a recovery plan. The purpose of the plan is to select and implement corrective measures. In terms of content, the plan starts with the information obtained from the notification and then follows the process for dealing with the situation, up to and including the lessons learned and the final incident report. A recovery plan can vary from a ‘simple’ log in an Excel spreadsheet up to an entire volume covering analysis of systems and log files needed for a forensic investigation. It is up to the response team to choose which form the recovery plan takes.


    Subjects covered in the recovery plan include:

    1. Description of the incident

    2. Description of the functional impact

    3. Suspected cause of the incident

    4. Actual recovery of the object

    5. Improving the cybersecurity of the object

    6. Conclusions and communication



    Closing the incident

    Once the recovery plan has been implemented, it is important that the incident is administratively rounded off and formally closed. This will be carried out differently for each project. An incident database often contains the notification and the steps leading to closure of the incident. Depending on the type of contract, the client and contractor may wish to make a settlement based on the logging of incoming and closed incidents. The recovery plan also forms the input for a potential forensic investigation, and is therefore of legal value.

    8.5 Phase 4: Evaluation [link id="k3581"]

    8.5.1 One-off or structural incidents [link id="m7tzp"]

    It is important to learn from incidents involving security breaches. Sharing information on such incidents with other organisations, for example by means of ISACs, is essential for learning both within the Netherlands and abroad. Incident logging and regular reporting on incidents that occur are ingredients of any proper information security policy. As a result, once the incident has been closed, the recovery plan can be rounded off with the chapters ‘Improvement’ and ‘Communication’.


    The occurrence of an incident may be caused by a one-off event. Once the incident has been resolved, a re-evaluation can be used to determine whether or not this incident could occur again. If the answer is yes, then options will have to be explored for a structural solution. Alternatively, it could be viewed as a residual threat and, as a result, no changes need be made. For example: the threat created by forgetting to lock a PC can be reduced by configuring the PC so that it locks automatically after a specific period of inactivity. The waiting time must then of course depend on the normal operational process.


    Structural incidents cannot be resolved merely by correcting the incident that occurs. The vulnerability of the object as a whole can be reduced by applying corrective action in other, comparable systems. Under those circumstances it is important to know exactly what the existing configuration is.


    Preventive measures can be carried out on the basis of a periodic (annual?) update of the risk analysis. Newly-discovered vulnerabilities may lead to extra security measures or to the choice of adding and/or replacing systems.

    8.5.2 Reports and communication in relation to incidents [link id="l2vhh"]

    The client will need to gain insight into incidents that have occurred and how they have been closed. Urgent and critical incidents must be forwarded immediately to the client’s security contact. A standard report can be used to give insight into incidents that have occurred and the actions taken. The report must be delivered periodically (once per quarter is an ideal benchmark) in the form of an executive summary. This report will not contain every detail on the incident, as some of that information is confidential. It is worthwhile to discuss the format and level of detail with the client. In this way, the client will gain an insight into the nature and impact of any incidents that occur, plus information on prevention of similar incidents. On request, the client may be given an insight into the recovery plan that has been drawn up. Even if there have been no incidents in the reporting period in question, a report must still be drawn up.


    Finally: where the measures have resulted in adjustments to systems, this will also have to be processed in the as-built project documentation and in the CMDB!

    8.6 In practice: typical incidents at various stages in a project [link id="2vnf5"]

    The online article Triton is the world’s most murderous malware, and it’s spreading gives a good picture of the way in which cyber attackers have operated in the past, slowly permeating the ‘defence in depth’ layers to reach the engineering station in the OT network and exploit a zero-day vulnerability.


    Another potential incident trigger is the documentation relating to existing objects, which is distributed among the various parties that have worked on the object in question. This documentation contains sensitive information (IP addresses, firewalls, software operating systems, configurations, etc.). In practice, asset management shares such documentation with new projects, without always remembering to consider the sensitivity of the information or its relevance to the recipient. The person who shares this information is unaware that in doing so, he is opening a door that allows access to previous projects. The recipient may have no intention of using the information for malicious purposes, but the information is nonetheless available to a much wider audience than would otherwise be the case. All parties involved in a project must be aware that information must remain confidential, even after the project has run its course. Transferring documents and processes in the form of a template must only be done after anonymisation and removal of all sensitive information.

    9 Business continuity [link id="0m4lb"]

    9.1 Resilience of business operations [link id="v3p43"]

    Business continuity relates to the entire organisation, including the processes and systems that together enable the critical processes in business operations. Business continuity management is the process according to which an organisation:

        • identifies potential threats to the continued existence of the organisation such as a natural disaster or a major cybersecurity incident;
        • determines the potential consequences of these threats for the business operations, thereby gaining an insight into the systems that are critical for operations;
        • defines a framework for action according to which the resilience of the business operations of the organisation are safeguarded and kept up to date;
        • decides which people and resources are needed and how they can be organised.


    An integral element of business continuity management is the business continuity plan or BCP. The BCP is a document (or a set of documents) which:

        • defines the scenarios in which the plan is activated. Examples of scenarios are the unavailability of (part of) the buildings, the failure of critical ICT infrastructure, the failure of the power supply, flood/fire (natural disasters), etc. Relevant scenarios are scenarios that disrupt the operation(s), the stakeholders, trust and/or strategic/business objectives to such an extent that business continuity is seriously threatened.
        • describes what is expected of the organisation to recover ‘traffic flow’ and incident handling.
        • records the relevant applicable requirements. These requirements determine the minimum acceptable operational level and permitted recovery time.


    Relationship between documents

    The BCP is closely related to risk management and the business impact analysis (BIA). The BIA contains an analysis and evaluation of potential risks which (could) influence the operational process. It also classifies the IT/OT systems/services of the local traffic management system and the accompanying operational processes, according to their contribution to the risk.


    The BIA document must describe all possible cybersecurity disaster scenarios for IT/OT cybersecurity and incident security. To give a few examples:

        • An attack from outside or inside the organisation aimed at the internal tunnel IT/OT systems and the applications whereby multiple systems are simultaneously ‘infected’ such that availability/confidentiality/integrity fails, leading to serious disruption or complete halting of traffic flow.
        • Recovery of hardware/applications is not possible due to loss/corruption of the backups, and/or because backup locations/media have become inaccessible, for example no further access to storage locations (the backup file system is locked down).
        • Documents (information) are no longer accessible because IT/OT systems are ‘locked’ (including backup systems).
        • Loss of critical data or information relating to a specific period.


    The figure below is a diagrammatic representation of the relationship between the various processes and documents. The DRP is described in further detail later in a separate section.


    Figure 9.1: Relationship between BIA, BCP and DRP


    For each specific (disaster) scenario, the BCP describes the nature of the response, for example involving stakeholders, establishing communication, releasing funding, determining the decision-making process, deploying resources, etc. Based on the principles outlined in the BIA, the BCP describes the critical functions and the appropriate maximum recovery times, including the parameters within which the tunnel may/can remain operational. A description must also be given (in the greatest possible detail) of the steps to be taken by the administrative organisation in order to continue (or re-establish as quickly as possible) all traffic management in that scenario. Processes must also be established within the tunnel management organisation aimed at disaster scenarios. In the event of an incident/disaster, the organisation must be organised in such a way that all resources are deployed so that the maximum permitted downtime (MDT) is not exceeded.

    Figure 9.2: KPIs for incident response.


    For the BCP, only those critical systems and processes are relevant that are essential for keeping the tunnel ‘operational’. All other systems/processes are less important and therefore non-critical. This does not mean that these systems should not have a maintenance plan or an incident management process.


    The table below provides an example of a classification with the accompanying times. The times given are indicative and should be determined individually for each tunnel, in line with the values from the RAMS analyses and the principles outlined in the BIA. A number of IT/OT systems critical for a tunnel are: the 3B system, video management and the local networks.


    | Classification | Description | Response time | Recovery time | Maximum downtime | Test frequency |
    |---|---|---|---|---|---|
    | Core services | Applications necessary for recovering or ensuring the functioning of business processes (e.g. network, operating systems) | 4 hours | < 4 hours | < 8 hours | 6 months |
    | Mission-critical applications | Applications or services that will have a serious impact on human health or the public, or cause widespread damage to the reputation of the organisation if not available | 4 hours | < 4 hours | < 8 hours | 6 months |
    | Business-critical applications | Applications or services that will have a serious impact on the organisation / IT/OT processes if not available | 8 hours | < 48 hours | < 60 hours | – |
    | Basic applications | Applications or services that directly support basic functions | 24 hours | > 48 hours | < 72 hours | 2 years |
    | Non-critical applications | Applications or services without which only inconveniences will be experienced; processing can be shut down for an unspecified term, without substantial impact | 24 hours | > 48 hours | < 80 hours | – |


    Figure 9.3: Example table with BCP classifications


    Note: there can be multiple BCPs that apply to a single tunnel. In addition to the BCP for the tunnel systems, for example there can also be a BCP for recovering the power supply and a BCP for the overarching central IT systems with their (external) networks.
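    As a sketch, the indicative targets from the example classification (figure 9.3) can be encoded so that an observed outage can be checked against its permitted maximum downtime. The field names, and the interpretation of the time columns, are illustrative assumptions; actual values must be determined per tunnel:

```python
# Indicative targets per BCP classification, taken from the example table
# (figure 9.3); actual values must be determined per tunnel, in line with
# the RAMS analyses and the BIA. Field names are illustrative assumptions.
BCP_CLASSES = {
    "core services":     {"response_h": 4, "recovery_h": 4,  "max_downtime_h": 8},
    "mission-critical":  {"response_h": 4, "recovery_h": 4,  "max_downtime_h": 8},
    "business-critical": {"response_h": 8, "recovery_h": 48, "max_downtime_h": 60},
}

def within_target(classification: str, downtime_h: float) -> bool:
    """Check whether an observed downtime stayed within the permitted maximum."""
    return downtime_h < BCP_CLASSES[classification]["max_downtime_h"]
```

    An explicit structure like this also makes it straightforward to report, per incident, whether the maximum permitted downtime (MDT) was exceeded.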

    9.2 Content of the BCP (with regard to cybersecurity) [link id="d3mxk"]

    A BCP provides an answer to the following questions: what should be done in the event of a crisis, what form does the crisis organisation take, how is the crisis team staffed (names/roles/positions with accompanying responsibilities and authorities), how are budgets released and what is the timetable for the activities to be undertaken? The answers to these questions are described in detail in the BCP chapters:

        • Identification of essential critical functions including related emergency requirements.
        • Determination of recovery targets, recovery priorities and measurement points (see table above).
        • Description of roles, responsibilities and contact details of (allocated) persons in the event of contingencies.
        • Description of how the essential/critical functions can be maintained as far as possible, despite the failure of systems (alternatives).
        • Description of how information about unforeseen events is shared.


    The BCP must be approved by the organisation itself and the stakeholders should be informed in detail. The plan should also be regularly tested.


    In addition to the BCP, there should also be a disaster recovery plan (DRP). This plan is activated in the event of specific disruptions of business continuity.

    9.3 Disaster recovery plan (DRP, disaster/emergency recovery) [link id="59t5c"]

    9.3.1 Objective of the DRP [link id="qrhsh"]

    Disaster recovery is an integral part of business continuity management. The BCP and DRP are closely related and geared towards the same IT/OT systems/applications and processes. The objective of disaster recovery is to develop a clearly coordinated strategy backed up by plans, procedures and technical measures which enable the (functional) recovery of IT/OT systems, applications, data, networks and operations, following a (cybersecurity) incident or crisis.

    9.3.2 Description of incidents [link id="tqn78"]

    The DRP describes the process to be followed by the organisation in order to return to normal business operations following a disruptive event or crisis (a major incident). The process focuses primarily on the IT/OT systems classified as ‘business critical’ and higher (see table above). This process must be described and structured so efficiently and effectively that the targets laid down in the BCP are actually achieved.


    In order to answer the question of when the DRP should be activated (see also the section below), it is essential to have a clear picture of exactly what a cybersecurity incident is. There are two views on this definition that, although mutually complementary, do lead to some confusion:

        1. Every deviation from a specified cybersecurity measure is an incident. This is, for example, the definition employed in the CSIR published by Rijkswaterstaat. An example is an operator failing to lock the operating station when leaving the workstation, or changing a password too late. This definition also includes doors to technical areas being left open.
        2. An incident only occurs if IT/OT has been used to actually carry out a successful attack on an object, whereby systems are manipulated or data is lost, stolen or leaked.


    Chapter 8 Incident response and recovery discusses the definition and response in more detail.


    Clearly, disaster recovery relates only to major incidents that disrupt business continuity. Incidents of this kind may also harm the integrity of (security) functions. The tunnel may not be reopened before the recovery of a safe situation has been confirmed.


    Major incidents and crises can occur in many different ways: cyber attacks, equipment failure, ransomware, power failure, natural disasters or even human errors. In order to prepare an adequate response to emergency situations, the potential IT/OT threats (see BIA and BCP documents) must be analysed and evaluated and DRP plans drawn up. This can be achieved by dividing potential events into two categories: predictable and non-predictable.


    Predictable events are disruptions that can reasonably be expected. An identified threat to an organisation/process can be viewed as a predictable disruption, and its impact can be mitigated through proactive planning; these situations are often covered by regular maintenance plans. Unpredictable events combine a small probability of occurrence with an impact that cannot always be determined in advance. Examples are an external cyber attack or a ransomware incident that ‘infects’ multiple IT/OT systems. See the figure below. Timely measures can thus be taken for predictable incidents, whereas unpredictable incidents can only be prepared for in general terms.


    Figure 9.4: Risk matrix and predictability of incidents
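    The distinction between predictable and unpredictable events can be illustrated with the probability × impact scoring that underlies a risk matrix such as the one above. The 1–5 scales and the example scores below are assumptions for illustration only:

```python
# Illustrative probability x impact scoring behind a risk matrix such as
# figure 9.4; the 1-5 scales and the example scores are assumptions.
def risk_score(probability: int, impact: int) -> int:
    """Product of probability (1-5) and impact (1-5); higher needs more attention."""
    return probability * impact

# Predictable disruption: high probability, limited impact -> maintenance plan.
wear_and_tear = risk_score(probability=4, impact=2)

# Unpredictable event: small probability, large impact -> covered by the DRP.
ransomware_outbreak = risk_score(probability=1, impact=5)
```

    A high-probability, limited-impact event is best addressed through regular maintenance planning, whereas the low-probability, high-impact event is exactly the residual risk the DRP exists for.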


    Cost-benefit analyses determine to a large extent the scope of maintenance plans and the degree of security measures to be taken in new build and (major) renovation projects. There will always be some cybersecurity risk (residual risk). To accelerate recovery from the consequences in the event of the residual risk, an effective DRP must be drawn up.


    Figure 9.5: The position of the BCP and DRP in the PDCA cycle.

    9.3.3 Activating the DRP [link id="qdllk"]

    Activation of the DRP always follows from the process for responding to disruptions. The first report is submitted to a contact or emergency alarm centre. Following a quick scan, the decision can be taken to activate the DRP. The appropriate persons are informed and the DRP is worked through. Following completion of the DRP, including the accompanying tests, the report is closed.


    The DRP must be activated according to the BCP, in the event of the following major incidents:

        • Failure of primary energy supply (purchase side).
        • Failure of energy supply to IT/OT systems in the object (‘behind the meter’).
        • Failure of communication with the outside world (e.g. connection to control centre).
        • Failure of operating facilities for traffic controllers.
        • Failure of critical components of OT systems rendering operation impossible.
        • Failure of security or monitoring systems such as camera systems.
        • Infection with malware, ransomware (via detection of management system).


    To allow non-subject-experts to carry out the quick scan, clear criteria must be laid down, for example in the form of a checklist. Standard questions can help to arrive at a quick decision.
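    Such a quick-scan checklist can be sketched directly from the list of major incidents above. The questions follow that list; the activation rule (any ‘yes’ activates the DRP) is an illustrative assumption:

```python
# Quick-scan checklist for DRP activation, based on the major incidents
# listed in the BCP; the activation rule below (any 'yes' activates the
# DRP) is an illustrative assumption.
QUICK_SCAN = [
    "Has the primary energy supply failed?",
    "Has the energy supply to IT/OT systems in the object failed?",
    "Has communication with the outside world (e.g. the control centre) failed?",
    "Have the operating facilities for traffic controllers failed?",
    "Have critical OT components failed, rendering operation impossible?",
    "Have security or monitoring systems (e.g. camera systems) failed?",
    "Has an infection with malware or ransomware been detected?",
]

def activate_drp(answers: list[bool]) -> bool:
    """Activate the DRP as soon as any major-incident question is answered 'yes'."""
    return any(answers)
```

    Because the questions require only yes/no answers, a non-expert at the contact or emergency alarm centre can reach a decision quickly.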

    9.3.4 Structure and content of the DRP [link id="tv8dv"]

    The DRP lists the resources, actions, tasks, standard work procedures/work instructions, list of contacts, organisation chart, etc. necessary for describing application and service recovery processes in the event of a crisis or disaster. The DRP describes in detail the measures that have to be taken in order to recover systems and protect data, together with who is responsible for what and what resources they need.


    The figure below is a diagrammatic representation of the themes outlined in the DRP.



    Figure 9.6: The themes of the DRP.


    A brief description of each theme:


    1. Assemble plan

    This refers to the system design documents. It can include a description of the location of the latest design documents, as well as a physical copy of the latest version of those documents.


    2. Identify scope

    This document provides an outline description and/or flowchart of the system. In principle, it describes everything that has to be recovered and how it is connected to other systems. The document can list the location of the most recent version of the flowcharts, but could also refer to a physical copy of the latest version.


    3. Appoint emergency contacts

    This is an overview of the contacts for system recovery. These may be the supplier but equally a system integrator or an ICT department. The list of contacts with names, telephone numbers and email addresses is essential to avoid having to search for telephone numbers during a recovery attempt.


    4. Designate disaster recovery team

    A disaster recovery team consists of a number of people whose task is to concentrate on the planning, implementation, maintenance, checking and testing of the procedures of an organisation for business continuity and recovery. Having a team that focuses on recovery following disasters ensures that reaction times and damage to resources are kept to a minimum. Depending on the type of incident, technical specialists, external (contract) parties, etc. are included in the team, in order to ensure that the recovery process and recovery activities are undertaken as quickly as possible.


    A disaster recovery team is generally assembled from existing company employees, from the CIO via the IT department, to stakeholders at the various operational units. It is essential to ensure that the team has been allocated sufficient authorities and that specialist knowledge of systems and general knowledge of the specific tunnel are combined.


    5. Assign roles and responsibilities

    The specific roles that may be assigned within a disaster recovery team include:

        • Head of recovery team – This may be a CIO, senior IT manager or member of the executive management team. It is their task to supervise the entire team, to coordinate the efforts of the individual members and to ensure that the BC/DR plan is efficient.
        • Crisis management coordinator – This employee supervises the management of data recovery and initiates procedures whenever a problem or disaster occurs.
        • Business continuity expert – This team member focuses on the strategy necessary for continuing or recovering activities in the event of a disaster. Their task is also to ensure that DR plans tie in with the needs of the business.
        • Effect assessment and recovery consultant – This role is generally occupied by various staff members with differing expertise in different technology components. If a disaster occurs, they are responsible for assessing the extent of the damage in their specific area and the recovery methods. Examples of areas of expertise are networks, servers, storage and databases.
        • Monitoring IT applications – This person is responsible for monitoring all technology for potential disasters and ensuring that all individual components work together as soon as they are recovered.


    6. Restore technology functionality

    This aspect contains a description of the order in which recovery must be carried out. It also describes how systems can be completely wiped clean.


    The following procedures are described:

        • How is recovery implemented after a system is hacked or infected with a virus?
        • How is recovery carried out following, for example, fire or water damage?
        • How is recovery carried out following a LAN and WAN network disruption?


    Each procedure is structured as follows:

        • Activation of the procedure.
        • Description of recovery times, persons involved and contact details.
        • Description of all affected system components including PLC, computers (clients and servers), routers, etc.
        • Description of the organisation. How is the notification received and who communicates with whom?
        • Design details and system settings. Where are the backups?
        • Safeguarding and reinstating versions (roll-back following changes).
        • Description of the verification and validation method. Which tests are carried out after the system has been recovered before the system can be re-released?
        • Registration and completion of the procedure.
        • Revision of CMDB and possibly documentation.
        • Evaluation and updating of the procedure.
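    The procedure structure listed above lends itself to being recorded in a fixed format, so that every recovery procedure documents the same elements. The sketch below, in Python, illustrates one possible format; all field names and example values are illustrative, not prescribed by this document or any standard.

```python
from dataclasses import dataclass, field

@dataclass
class RecoveryProcedure:
    """One recovery procedure from the DRP, following the structure above."""
    name: str                       # e.g. "Recovery after malware infection"
    activation_criteria: str        # when is this procedure triggered?
    recovery_time_objective_h: int  # agreed maximum recovery time, in hours
    contacts: list = field(default_factory=list)             # persons involved + details
    affected_components: list = field(default_factory=list)  # PLCs, servers, routers, ...
    backup_location: str = ""       # where the backups are kept
    verification_tests: list = field(default_factory=list)   # tests before re-release

    def is_complete(self) -> bool:
        """A procedure is only usable if every mandatory element is filled in."""
        return bool(self.activation_criteria and self.contacts
                    and self.affected_components and self.backup_location
                    and self.verification_tests)

# Fictitious example: a procedure for recovery after a malware infection
proc = RecoveryProcedure(
    name="Recovery after malware infection",
    activation_criteria="Malware confirmed on an OT server",
    recovery_time_objective_h=8,
    contacts=["Head of recovery team", "Crisis management coordinator"],
    affected_components=["SCADA server", "Operator workstation"],
    backup_location="Off-site backup vault, daily snapshot",
    verification_tests=["Virus scan clean", "Functional test of tunnel systems"],
)
```

    Recording procedures in one fixed format makes it easy to check, during evaluation, that no procedure is missing any of the elements listed above.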


    7. Location of data and backups

    This section describes which data/applications are to be rolled back and where the latest data/applications are kept. In certain environments that by their nature have a relatively stable configuration, for example tunnel objects, the latest system backup can be included directly in the DRP.


    8. Testing and maintenance

    This section describes how you test your DRP, for example in a separate environment, and how you maintain the DRP. The aim is that in the event of a failure, the DRP is up to date so that recovery is as quick as possible.
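    As a minimal illustration of such maintenance, the check below flags a DRP whose last exercise is older than an agreed interval. The interval of one year is a hypothetical policy choice, not a requirement from this document.

```python
from datetime import date, timedelta

# Hypothetical policy: the DRP is exercised at least once a year.
MAX_TEST_INTERVAL = timedelta(days=365)

def drp_test_overdue(last_test: date, today: date,
                     max_interval: timedelta = MAX_TEST_INTERVAL) -> bool:
    """True if the DRP has not been exercised within the agreed interval."""
    return today - last_test > max_interval

# Example: a plan last tested in January 2022 is overdue by May 2023
print(drp_test_overdue(date(2022, 1, 10), date(2023, 5, 15)))  # True
```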

    9.4 Safeguarding and testing BCP and DRP [link id=”v1vgl”]

    Safeguarding of, and communication about, both plans must receive sufficient attention within the organisation. The affected employees and external parties must be correctly and fully informed about the BCP and DRP, so that in the event of a crisis they know what is expected of them and which processes need to be followed. Because the number of crises will be very small but their impact can be huge, it is all the more necessary to plan and test their execution regularly.


    10 Asset management [link id=”shk7b”]

    10.1 Introduction [link id=”wqdsm”]

    Traditional maintenance management can be described as ‘guaranteeing that an asset continuously does what the user wants it to do, within a given operating environment’. ‘What the user wants it to do’ means more or less the same as seeking the optimum between the desired performance, the costs for guaranteeing the performance and managing risks. In other words, traditional maintenance management is maintaining the balance between performance, costs and risks during the use phase of an installation.


    As the term suggests, asset management relates to the management of assets. However, asset management adds two important phases to traditional maintenance management: the design phase and the demolition phase. In the TV programme De Succesfactor on RTL7, CMS director Luc de Laat discusses asset management in detail.


    In asset management, costs, performance and risks are an important element even before the design phase. Before starting to design or purchase the installation, it is necessary to first investigate the possible effects of the design on the direct and indirect costs during the entire service life. However, asset management also looks beyond the use phase: the total costs include the costs and risks of dismantling the asset when it reaches the end of its economic or technical life. Asset management also considers the value created by the asset. In other words, asset management looks across the entire life cycle of the asset.


    Asset management according to NEN-ISO 55000

    Definition: ‘Coordinated activities of an organisation to realise value from assets’.

    ISO 55000 is based on four principles:

        1. Value: assets exist to generate value for the organisation and its stakeholders. An asset is something that is of value for an organisation. What this value is and what form that value takes depends on the organisation and its stakeholders. Examples of physical assets are: an installation, machines, vehicles, infrastructure, civil engineering structures, buildings, etc.
        2. Harmonisation: asset management translates the organisation’s objectives into technical and financial decisions, plans and activities.
        3. Leadership: leadership and culture at the workplace are determining factors for realising value.
        4. Guarantee: asset management guarantees that assets will meet their required objective.


    Asset management identifies the following phases for every asset:

        • Planning phase and tender procedure
        • Building
        • Operations
        • Demolition


    It is also possible to make a distinction between newly built tunnels and renovation projects. Typical of a new construction is that a design first has to be drawn up before the tunnel is actually built; this is followed by handover. Then comes the operations phase, during which the asset is put into use.


    Characteristic of renovation is that the asset is already in use and must remain so. In other words, renovation takes place during the operations phase. For cybersecurity it is essential to make this distinction because the principles differ. In the case of a new tunnel, for example, all measures can be implemented from the start, while renovation relates to a pre-existing situation. During renovation, it will be necessary both to implement new measures and to adjust or remove existing measures. However, the most essential precondition is that the safety of the asset must be guaranteed during the renovation process.


    The figure below shows the various phases in order. As this figure shows, in a new construction, the building phase is divided into ‘Design’, ‘Newly built’ and ‘Handover’. In that process, a whole raft of new measures have to be implemented. On the other hand, during renovation, the object is already – at least partially – in use. This means dealing with an existing situation and also implementing new measures.

    Figure 10.1: Phases in a project.


    During or at the end of each phase, it is essential that the measures taken be safeguarded within the organisation. There are also specific roles in each phase. 4.4 Basic organisation model for cybersecurity provides an overview of the roles that arise. Below we discuss a number of tasks, authorities and responsibilities for each phase.

    10.2 Planning phase and tender procedure [link id=”7t1k6″]

    The planning phase refers to the physical and spatial phase that is rounded off with a spatial planning decision, such as a decision on the route for infrastructure, or development and zoning plans at provincial or municipal level. A tunnel safety plan is also drawn up during this phase. The starting points of the cybersecurity policy must be included, and then constitute a formal part of the agreed statutory plan, i.e. an official part of the tender procedure.


    Tender procedures are driven by additions to or replacements of existing infrastructure, in the field of both building services and civil engineering. Life cycle considerations, management and maintenance, and changing legislation are all key drivers in this respect. OT has an important role to play in the control of infrastructure. Due to advances in digitalisation, new functionality in this area is becoming available all the time. Remote control is a prime example of a new opportunity of this kind. 'Being connected' is ever more important, both from an operational point of view and from the perspective of management and maintenance. The easy availability of information often attracts interest in that information in its own right. There are many benefits, but also new risks, including cybersecurity issues.


    In order to safeguard cybersecurity throughout the life cycle of an object, the client will have to consider contractually enshrining cybersecurity during contract negotiations. The aim is that the contractor delivers and maintains a system that meets the organisation’s cybersecurity requirements. See also chapter 4 Cybersecurity management.

    10.2.1 Considerations prior to the tender procedure [link id=”c8tbl”]

    In preparing the tender procedure, the client must also consider cybersecurity. This may include the following aspects:



    Life cycle


    Integrated design


    Management and maintenance


    Information protection


    New construction or renovation?


    Threat resistance throughout the chain


    10.2.2 During the tender procedure [link id=”07847″]

    At each step of the tender procedure, the following aspects deserve attention:

    Provision of information


    Cybersecurity as a criterion for selection


    Project/tender management


    10.2.3 Tasks, authorities and responsibilities [link id=”3dhqr”]

    The tender procedure involves a client and market parties acting as bidders. The focus in this phase is on information management (e.g. a document management system, DMS) and the screening of personnel.


    Tasks of the client:

        • Include cybersecurity regulations as contract terms.
        • Classify documents and systems for information exchange between parties.
        • Request a non-disclosure agreement (NDA) before forwarding documents.
        • Monitor removal of information provided to unsuccessful bidders.


    Tasks of bidders:

        • Supply your own NDA and demand an NDA from third parties.
        • Configure a confidential document environment.
        • Where necessary, configure a physically segregated project environment.
        • Include cybersecurity measures in the tender (for all phases of the project).

    10.2.4 Monitoring [link id=”lz26q”]

    Cybersecurity measures are an integral part of the tender procedure. This applies to all aspects: technology, people and organisation. In other words, these aspects must be present in the tender requirements. With the help of systems engineering, these requirements will be validated in subsequent phases, with the aim of further safeguarding the required measures.


    It is also important for information that forms part of the tender process to be classified. Sensitive information must only be made available in a secure environment, and the availability of such information must be contingent on signing a confidentiality agreement. An agreement of this kind must include a clause to specify that sensitive information must be destroyed or anonymised after the tender procedure has run its course.


    For general aspects in relation to monitoring, see 6.10 Monitoring.

    10.3 Building phase [link id=”b966r”]

    When building a tunnel, the process can be split into the following subphases:

        • Design
        • New build
        • Handover

    It is important to realise that all information relating to a project could be of interest to someone with malicious intent, at a later date. Right from the very beginning – in other words when the project is first configured – it is important to draw up rules for the classification and processing of all documentation to be generated.


    Everyone participating in the project must be aware of the risks in this area and must know what action to take to keep the risk of (design) information and documentation falling into the wrong hands as small as possible.

    10.3.1 Design [link id=”lwcth”]

    Cybersecurity has consequences for practically every aspect of the design. Cybersecurity should therefore be reflected as an integral theme in the design, rather than being defined as a point of attention for the specialists of the various sub-systems.


    The agreed contract forms the basic document for the design phase. The contract contains many requirements for the project to be implemented. Most of the requirements relate to the functionality of the object to be built but, ideally, the contract will also cover non-functional requirements, including those relating to cybersecurity.


    The client may already have described identified risks in the contract. Control measures have to be formulated to address the risks that pose an unacceptable threat. In the event of the client not having identified any risks, one of the first activities facing the contractor in the design phase is to carry out a risk analysis together with the client. IEC 62443-3-2 can be used as a guideline in this respect. Another option is to use the ISO 27005 standard from the ISO 27000 series. However, this standard is geared more towards environments which deal with office automation.


    Note: whereas IT and OT have traditionally been separate domains, nowadays a growing number of manufacturing processes depend on IT solutions. To protect both worlds properly, it is necessary to have an integrated IT/OT approach to cybersecurity.


    Cybersecurity in the design process, in new-build projects and during maintenance

    Ideally, cybersecurity is a regular part of the design process of an OT system: cybersecurity is one of the aspects in a RAMS (or RAMSSHEEP) analysis. Many OT systems in the Netherlands are currently in need of a check-up in order to update their resistance to the cyber threats of the present day. What is then required is a cybersecurity overhaul, which presents the opportunity to draft qualitative and quantitative cybersecurity requirements by the book, on the basis of the operational objectives of the OT system and a structural risk assessment (threats, vulnerabilities). The result is an appropriate set of specific cybersecurity measures, and instructions for their implementation. Where OT systems are serviced under the regular maintenance regime, a minimum cybersecurity requirement for any change is that the current cybersecurity level must not be lowered.


    In terms of the design it is important that the object to be delivered under the contract is ‘cyber safe’. Not just at the point of completion, but thereafter as well. Wherever possible, the design must take into account the ability to make changes that are needed to keep the object cyber safe during its service life. This starting point has an impact on both the design of the technology, and on the design of the processes and structure of management and maintenance.


    During the ‘Design’ subphase, there is a client and a contractor. The contractor creates a design that demonstrably complies with the required legislation. This phase focuses on information management (DMS), security-by-design and the screening of personnel.


    Tasks of the client:

        • Assessing cybersecurity plans and cybersecurity risk analyses.
        • Testing designs, both in terms of structural aspects and operational technology.
        • Managing cybersecurity when deploying third parties and subcontractors.
        • Coordinating approach and implementation with policy-making agencies.
        • Discussing cybersecurity in management meetings.


    Tasks of the contractor:

        • Performing a cybersecurity risk analysis and defining control measures.
        • Drafting a cybersecurity management plan.
        • Drafting security plan(s).
        • Drafting cybersecurity procedures.
        • Following through on cybersecurity requirements in respect of external parties, such as consultancies and subcontractors.
        • Discussing cybersecurity in management meetings.

    10.3.2 New build [link id=”3qbxw”]

    During the building phase of an infrastructure project, cybersecurity plays an important role. In this phase, the focus is on safe access, network security and screening of personnel (of third parties).


    For this subphase, the following threats and points for attention must be considered:

        • Organise physical access security to locations and workstations.
        • Access to information storage: who is allowed to do what and how is this protected?
        • Drafting and enforcing internal and external communication guidelines.
        • Review and approval process (who is allowed to do what and how is this monitored, both on the part of the client and contractor as competent authority).
        • Organising computer system security.
        • Only authorised persons are permitted to conduct tests and access the test results.


    Tasks of the client:

        • Awareness and training of future operators.
        • Awareness and training of company personnel.
        • Reviewing the implementation of cybersecurity measures.


    Tasks of the contractor:

        • Cybersecurity risk assessment.
        • Establishing clearly defined powers for cybersecurity officers.
        • Determining under what circumstances operations can be halted in response to an incident.
        • Building a secure test environment.
        • Organising access control and logging.
        • Ensuring access control to each room in which active hardware is set up.
        • Drafting work instructions for cybersecurity.
        • Verifying and validating user accounts and log files.
        • Implementing hardening for all IT and OT.
        • (Commissioning) penetration tests.
        • Implementing cybersecurity requirements in relation to external parties (suppliers and subcontractors).
        • Protecting information at the construction site.
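    One of the contractor tasks above, verifying and validating user accounts, can be illustrated as a comparison of the accounts actually present on a system with an approved list. A minimal sketch; the account names are fictitious.

```python
def unexpected_accounts(found: set, approved: set) -> set:
    """Accounts present on a system that are not on the approved list."""
    return found - approved

# Fictitious example: one leftover builder account remains after construction
approved = {"operator1", "maintenance", "admin"}
found = {"operator1", "maintenance", "admin", "temp_builder"}
print(sorted(unexpected_accounts(found, approved)))  # ['temp_builder']
```

    The same comparison can be run in reverse (approved accounts that are missing) to detect accounts that were removed without authorisation.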

    10.3.3 Handover [link id=”0ckqk”]

    Handover refers to the point at which the project organisation transfers control over the project to the organisation responsible for operation. In terms of the project this is a single moment in the cycle, a transition between project phases. From the point of view of cybersecurity, however, handover is more than a single moment: the project team must take account of the fact that the handover and transfer to the management and operating organisation can only be successful if prepared properly. Handover is also the starting point for processes that form part of operation and management. This means that attention must also be paid to the following subjects:

    1. User accounts for developers, maintenance personnel and operators

    2. Password management

    3. System/physical access

    4. Safeguarding and implementing the cybersecurity plan for the operations phase

    5. Transfer of the risk report from the construction phase

    6. Training of personnel

    7. Organisation
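    The seven subjects above can be tracked as a simple handover gate: the transfer only proceeds once every subject has been completed. A minimal sketch of such a gate; the item descriptions are taken from the list above, the completion status is a fictitious example.

```python
# The seven handover subjects listed above, tracked as a simple gate.
HANDOVER_ITEMS = [
    "User accounts for developers, maintenance personnel and operators",
    "Password management",
    "System/physical access",
    "Cybersecurity plan for the operations phase",
    "Transfer of the risk report from the construction phase",
    "Training of personnel",
    "Organisation",
]

def handover_ready(completed: set) -> list:
    """Return the subjects still open; handover is ready when the list is empty."""
    return [item for item in HANDOVER_ITEMS if item not in completed]

open_items = handover_ready({"Password management", "Training of personnel"})
print(len(open_items))  # 5 subjects still open
```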



    To smooth the handover process, the transition from new build to handover must go hand in hand with the appropriate conditions, which themselves must constitute deliverables of the construction phase. A construction project should hand over the project results to a management organisation to kick off management at the technical-operational level. In a similar way, the project results may also be handed over to the cybersecurity management organisation. See chapter 4 Cybersecurity management.


    These measures can also be considered from three aspects:






    To safeguard cybersecurity, cybersecurity must be incorporated into the DNA of the organisation, and be a fixed part of each and every discussion on the maintenance of the object. By frequently having the object threat report updated, it is possible to reveal where attention needs to be focused if cybersecurity is to be kept up to date.

    10.4 Operations phase [link id=”wtmtg”]

    The operations phase follows the opening of the tunnel for use in accordance with the zoning plan and the tunnel opening permit. In this phase, the tunnel is operated and monitored in accordance with the standard procedures, including those designed to manage cybersecurity.


    The tunnel is operational if the tunnel system is in use (the operational phase). One of the specific activities within the operations phase is maintenance: the measures taken to ensure that the condition of the tunnel system does not change, for example by one-for-one replacement of wear parts, lubrication and cleaning, patching, reading log files and checking calibrations and settings. For recurring activities, there should be procedures that have to be followed, which can (and should) safeguard cybersecurity. For incidental maintenance activities that are not covered by procedures, cybersecurity must be safeguarded on a case-by-case basis.


    Renovation work may also be carried out during the operations phase. Renovation of the tunnel system involves:

        1. redesigning the system (or parts of it) in such a way that the changes still support the original purpose and are kept in good condition with regular maintenance;
        2. redesign of the system for a new purpose.


    This document distinguishes two forms of renovation: the situation where the tunnel is fully closed and/or is only partially used for short periods, and the situation in which the tunnel is closed entirely for a longer period (examples are the Velsertunnel and the Koningstunnel). There is no definition of ‘short’ or ‘long’. It depends on the specific project. If the tunnel is closed for a longer period, the approach corresponds with that for a new construction, see section 10.3.2 New build.

    10.4.1 Operational phase [link id=”nd6t2″]

    In this phase, the focus is on physical and logical access control, auditing and evaluation, and the screening of personnel.


    Tasks of the client:

        • Assessment of risk analyses and reports relating to cybersecurity.
        • Holding an annual audit.
        • Network monitoring.
        • Organising incident response.


    Tasks of the contractor:

        • Cybersecurity risk analysis.
        • Clearly defining the authorities of cybersecurity officers. For example: under what conditions can an object be shut down in the event of an incident?
        • Recording and monitoring physical and logical access.
        • Network monitoring and reporting.
        • Organising and implementing incident response.
        • Organising patching.
        • Evaluation of such aspects as long-term processes, documentation, incidents, log files, drafting/assessment of reports, reviewing risk estimates, etc.
        • Testing of backup and recovery procedures.
        • Annual evaluation and reporting.


    Measures to be taken in this phase can be classified according to the three aspects:








    The operations phase is generally the longest phase. It is important for the parties concerned to have a clear understanding of the delegation of specific responsibilities. Continuous management processes for monitoring measures include reports (of incidents, status of access control, etc.) and consultative structures. It is also important to frequently review the estimated cybersecurity risks and the cybersecurity plan and ensuing measures. Monitoring of systems and networks and conducting internal/external audits on the technology and the organisation can also contribute to permanent compliance with measures. Naturally, the client and the contractor must have the necessary budget and expertise for these activities.


    A number of crucial aspects of the operations phase were already mentioned under ‘Handover’, such as the training of personnel, the execution and effectiveness of the password and cybersecurity policy and the design of the management organisation. In addition, the training plan also has to be implemented during the operations phase. Any developments in the domain of cybersecurity that affect or influence work processes must be included in the teaching materials for the education and training plan.

    10.4.2 Renovation [link id=”n8899″]

    The process of renovation refers to the complete and/or partial closure of the tunnel system (or parts of it) for short periods; longer periods of closure are covered by 10.3.2 New build. Depending on the nature of the renovation, this phase can also include a design and a building phase, as well as a demolition phase. However, these aspects need not be discussed in further detail here. One aspect that does demand special attention is the fact that renovation involves a hybrid situation of the old system and the new system.


    Tasks of the client during renovation include:

        • Risk analysis.
        • Clearly determining who is responsible for security in the existing situation (in the various phases).


    Tasks of the contractor during renovation include:

        • Also performing a risk analysis.
        • Planning and validating phased transition from the old to the new systems.
        • Safe disposal of information-bearing components.

    Focus on: secure access, network security, screening of personnel (of third parties).


    The start of a renovation process is identical to the start of a new-build project: establish the client’s requirements, carry out a risk inventory and complete the V-model. In the case of renovation, however, two further aspects need to be taken into account:

        • The possibility that not all system components will be replaced.
        • Certain risks cannot be (easily) mitigated because of previous design choices that can no longer (easily) be adjusted.


    As a consequence, in the case of renovation, special attention must be paid to the installation to be (partially) replaced and to the constraints that the existing situation might create for the desired new situation. This calls for special measures. The measures to be taken are also determined by the duration of the tunnel closure. These measures are usually specific to the situation, and therefore involve more, and more complex, risks than new build or demolition.


    A number of more specific points for attention apply, for each aspect.






    The systems and installations that need to be changed are identified during the renovation phase. This can involve upgrades, the replacement of entire systems or system expansions. The principles of systems engineering are followed, whereby (new) security requirements apply and must be fully incorporated in the entire renovation process, including the system and network architecture. For specific construction and (new build) aspects, see the section 10.3.2 New build.


    Released systems/installations must be assessed for sensitive business and personal information and then sent (at least in the case of national tunnels) to the Movable Property Agency (DRZ). This process is helped by a well-maintained IT/OT/application configuration management system. For specific aspects of the demolition phase, see the next section, 10.6 Demolition.


    Security audits during the renovation phase (for example, visits by the client to workshops and workstations) will enhance awareness of security.

    10.5 Renovating [link id=”hms6q”]

    10.5.1 Cybersecurity is a critical requirement in renovation [link id=”fs20s”]

    In a renovation project, cybersecurity in fact begins before an actual start is made on (the design of) the renovation work. Which changes are to be made to which systems and installations is, after all, determined before the renovation begins. These changes can take the form of upgrades, complete system replacements or system expansions. During renovation, the principles of systems engineering are followed, whereby (new) security requirements apply and must be fully incorporated in the entire renovation process, including the system and network architecture.


    The initial scope of a renovation project often consists exclusively of the renovation of system components that are due for replacement on reaching the end of their life cycle. From the point of view of cybersecurity of the tunnel, attention should also be focused on those system components that were not initially included in the renovation programme. In that sense, cybersecurity must be made a critical requirement in renovation. However, this inevitably leads to a broadening of the scope. If the initial plans for renovation are written on the basis of life cycle management exclusively, there is a serious risk that the scope will be significantly altered, with equally significant consequences for the budget, under the influence of the critical cybersecurity requirements. However, if cybersecurity is not taken as a critical requirement, any integrated design for the renovation will not truly be integrated: (digital) tunnel safety will not be safeguarded.


    Example: 3B, ‘one thing leads to another’

    The server of the mini-DCS runs on an outdated (insecure) operating system (OS). The software in question no longer runs on a modern OS. As a result, although the mini-DCS itself could still be maintained, it is necessary to switch to a newer version. The newer version of the mini-DCS is unable to communicate with the old controllers, which in turn means that the old controllers also have to be replaced (although they have not yet reached their end-of-life phase). The new controllers run different software. At the end of the day, therefore, the entire tunnel operating software has to be rewritten. It turns out in practice that a certain degree of reuse is possible, but this (real-life) example reveals how a single obsolete OS can have huge consequences for the scope of a renovation project.


    Making cybersecurity a critical requirement will directly impact technology, organisation and processes.

        • In determining the technical scope, do not only consider the life cycle perspective but instead make cybersecurity an integral part of the benchmark for determining the scope. There are three different situations:
          • Check whether systems that are beyond the initial scope nonetheless require alteration.
          • Conduct an impact analysis in the case of a one-for-one replacement to check whether the functionality remains the same or whether the function of the replaced component changes.
          • Employ security by design in designing the new functionality.

    The recommendation is to tie in with the organisation’s own cybersecurity maintenance process when it comes to applying the critical requirement to the scope of the renovation.

        • A renovation in a tunnel can result in the need to also renovate the organisation. Cybersecurity must be embedded in both the project and the management organisation; this is in fact a legal requirement. In accordance with its management responsibility, the organisation must grow towards a level at which it tackles cybersecurity in a structured manner. Raising the level of maturity in the field of cybersecurity is a continuous process. The organisation must be structured in such a way that the cybersecurity strategy is clear to everyone and that clear policy rules and procedures are in place or will be drafted. The various roles must be defined, each with its own tasks, authorities and responsibilities.
        • Existing processes must be adjusted and revised from the perspective of ever-changing security threats. Is cybersecurity sufficiently integrated in the existing processes, and what happens if those existing processes change? Keeping the processes 'fit for purpose' is very similar to the regular inspections that have to be carried out on technical installations. The cycle for process improvement is in the order of magnitude of weeks to months.

    10.5.2 Impact from the past in renovation projects [link id=”3plq2″]

    The start of a renovation process is identical to the start of a new-build project: establish the client’s requirements, carry out a risk inventory and complete the V-model. However, for a renovation project there are three additional aspects to the risk inventory, because of the impact of the existing situation:

        • The possibility that not all system components will be replaced.
        • Certain risks that cannot be (easily) mitigated because of previous design choices that can no longer be (easily) adjusted.
        • A lack of knowledge of the cyber resistance status of the existing tunnel. Conducting a quick scan can provide insight into this aspect.


    This means that in a renovation project, cybersecurity risks are managed in a different way than in a new-build project. Mitigating these cybersecurity risks means that security measures from other layers of the ‘defence in depth model’ (see chapter 6.2 of the living document) must be chosen.


    Example: changing the overpressure facility in a server room

    An extinguishing gas installation with an accompanying overpressure facility is installed in an old server room. If the extinguishing gas installation is ‘triggered’, considerable overpressure is generated in the room in question, which it must be possible to release in a controlled manner. This is achieved using an overpressure valve, which is effectively nothing more than a ventilation grid. A cybersecurity requirement which could be imposed on this type of facility is that structural changes will have to be made so that the grid can no longer be used for injecting harmful agents or as a potential access route.


    All released systems/installations must be assessed for sensitive business and personal information and then (at least in the case of national tunnels) be sent to the Movable Property Agency (DRZ). This process is helped by a well-maintained IT/OT/application configuration management system. The next section of this chapter deals with the crucial role played by configuration management in renovation projects.

    10.5.3 Configuration management as critical success factor [link id=”zd089″]

    A typical characteristic of any renovation is that (parts of) the tunnel system are temporarily fully closed and/or remain partly in use. For cybersecurity in renovation projects, time is a critical aspect. Large numbers of major changes are often implemented in systems, under considerable time pressure. The transition period demands a high degree of customisation. This can easily lead to:

        • more, and more complex, risks being (temporarily) introduced;
        • changes taking place in a far less controlled manner than in normal situations.


    It is essential during the renovation that the current status of the system be carefully monitored, because old and new systems must be able to work together successfully:

        • so that the (more complex) cybersecurity risks are effectively mitigated during the transition period;
        • so that there is an up-to-date picture of the relevant vulnerabilities for the tunnel systems in the field of cybersecurity.

    In other words, throughout the renovation process, specific attention must be paid to the simultaneous existence of the new and the old system. Carefully structured configuration management is therefore critical for the success of a renovation project from the point of view of cybersecurity.
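    The need for carefully structured configuration management can be made concrete with a small sketch: compare the recorded baseline with the current inventory and flag every uncontrolled addition, removal or change. The record format and component names below are illustrative assumptions, not a prescribed CMDB layout.

```python
# Illustrative sketch: detect uncontrolled changes during a renovation by
# diffing a recorded configuration baseline against the current inventory.
# Keys are component identifiers; values are the recorded version/config.

def diff_baselines(baseline: dict, current: dict) -> dict:
    """Return components that were added, removed or changed."""
    added = sorted(set(current) - set(baseline))
    removed = sorted(set(baseline) - set(current))
    changed = sorted(
        name for name in set(baseline) & set(current)
        if baseline[name] != current[name]
    )
    return {"added": added, "removed": removed, "changed": changed}

baseline = {"PLC-01": "fw 2.1", "HMI-03": "v5.0", "SW-07": "ios 12.4"}
current  = {"PLC-01": "fw 2.3", "HMI-03": "v5.0", "FW-02": "v9.1"}

report = diff_baselines(baseline, current)
# Every entry in the report should be traceable to an approved change request.
```

    Run daily during the transition period, such a diff makes the simultaneous existence of old and new systems visible and auditable.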

    10.5.4 Who does what in a cyber secure renovation project? [link id=”hcmnd”]

    The first step is to identify the roles and responsibilities in a renovation project.

        • The client for the renovation is responsible for drafting a risk analysis. The same client must also clearly specify who is responsible for the security of the existing situation and the intended phasing of the renovation project.
        • The contractor also performs a risk analysis. In addition, the contractor plans and validates the phased transition from the old system to the new, and the safe disposal of information-bearing components. Additional areas of focus for the contractor are secure (physical and logical) access, network security and the screening of third-party personnel.


    Awareness of the cyber risks among all relevant employees and stakeholders is crucial during the renovation. There is, however, a danger that employees are not aware of the risks or fail to act accordingly, for example because the planning and/or activities are not properly coordinated. Because the tunnel is not operational, employees might lower their guard a little. Security audits during the renovation phase (for example, the client visiting workshops and workstations) will help increase security awareness. The risk of reduced awareness can also be controlled by including cybersecurity as an element/criterion on the work permit and by focusing attention on cybersecurity in kick-off meetings.

    10.5.5 What are the most essential technical measures? [link id=”klnl6″]

    Example of physical resistance/compartmentalisation

    The architect had created a large open space almost immediately behind the entrance door. The local operating system was installed in one corner of this area. Simply opening the entrance door granted direct access to the tunnel operating system. With an entrance door in the public space, without screening or fencing, the level of resistance was limited, even though the door itself was equipped with solid hinges and locks. Meter readings were also carried out in the same space in which the operating system was installed. The only possibility for increasing the level of physical resistance in this area was to introduce compartmentalisation. This meant installing a reception area behind the entrance door with a second door to the open space and then installing the operating systems in a screened-off area.

    The risks can be mitigated with technical measures. Some are designed to protect confidential information, for example by using a secure data and network infrastructure, classifying information and designing and maintaining the necessary tools, such as a secure file-sharing server and email encryption. There should be a prescribed procedure for transmitting information and data to the technical installations in the tunnel, for example only using secure notebooks/engineering stations or selected secure USB devices. A stepping-stone/jumpserver for which the patches are provided ‘from outside’ could also be used, after being screened and distributed to the target environment. It is important to make backups, both for the office environment and for the on-site systems. There should be roll-back scenarios which have been tested. There are of course other technical measures available (see the rings in the ‘defence in depth’ model).
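    The requirement that roll-back scenarios be tested presupposes that the backup itself is intact. Below is a minimal sketch of such an integrity check, assuming (our assumption, not a prescribed procedure) that a SHA-256 checksum is recorded when the backup is created:

```python
import hashlib
import os
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def backup_is_intact(path: Path, recorded_digest: str) -> bool:
    """A roll-back should only start from a backup whose digest matches."""
    return sha256_of(path) == recorded_digest

# Demonstration with a temporary file standing in for a backup archive:
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"plc project archive")
    name = tmp.name
recorded = sha256_of(Path(name))
ok = backup_is_intact(Path(name), recorded)
os.unlink(name)
```

    Verifying the digest before every restore rehearsal catches silent corruption of the backup medium before it matters.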

    10.5.6 Consequences of the chosen renovation strategy [link id=”f8z4c”]

    Finally, there are a series of risks relating to status changes to the tunnel as part of the renovation strategy: the detection of abnormal behaviour is rendered more difficult by the technical changes, while checking the persons present in the tunnel during the renovation work demands additional attention. When the renovation is carried out in the form of a long-term complete closure to traffic, it is easier to create a watertight access policy and a secure environment for personnel. Possibilities include an access gate that makes it possible to check that only registered visitors and personnel are given access. If the renovation is carried out in a form in which, for example, the tunnel remains open to the public during daytime hours, it is more difficult to achieve a watertight situation in respect of persons present in the tunnel.


        • Short closure, repeated daily
        • Long closure (of several weeks)


    10.6 Demolition [link id=”rt27l”]

    When an infrastructure object is being demolished, the main focus in terms of cybersecurity is on how information is destroyed or saved. Discarding information or information carriers that contain security data constitutes a cyber risk. On the other hand, not everything can simply be destroyed, because companies and public authorities have a duty to store certain information (see the rules for government archives and the explanatory notes from the Tax and Customs Administration).


    The focus in this phase is on safely disposing of information-carrying components and shutting down external connections to the demolished object.


    The tasks of the client in this phase are:

        • Shut off all transmission connections to the object.
        • Receive and wipe clean or destroy any network equipment.
        • Safely archive all project information.
        • Destroy all non-archived information (documentation, backups, personal data, log files).


    The tasks of the contractor include the following:

        • Classify equipment from the project.
        • Wipe clean or destroy data carriers.
        • Hand over client’s equipment.
        • Remove all documentation found at the object and destroy or hand over the documentation to the client.


    The systems that will have to be disposed of are identified at the start of the demolition phase. These systems have to be ‘dismantled’ in a professional manner, which means that a demolition/dismantling plan must be drawn up and approved by the parties concerned. The plan must cover at least the following subjects:

        • What is the scale of the demolition: which (sub) systems have to be removed and which should continue to function temporarily or permanently?
        • Asset lists of the systems to be demolished.
        • Determine how the released system will be disconnected and disposed of.
        • Released systems/installations must be assessed for sensitive business and personal information and (at least in the case of government tunnels) sent for destruction to the Movable Property Agency. These items include data carriers, documentation (client and subcontractors), switches, firewalls, settings in OT equipment and PLCs, etc.
        • Planning (starting date and duration, authorisation of relevant personnel): how should the demolition phase be conducted?
        • Organising supervision, both at the physical location (uncoupling and transport) and during the logistic process up to and including scrapping.
        • Validation of this plan by drawing up a final report.

    11 Legacy management [link id=”cvp3c”]

    11.1 Introduction [link id=”vrv4x”]

    To operate and control a tunnel, tunnel operators depend on (complex) process automation systems, also called industrial automation and control systems (IACS). These systems need to be kept safe and cyber secure. This can be a difficult task: it regularly happens that replacing or upgrading (sub)systems at a tunnel object cannot (easily) be done, or that a replacement or upgrade is not (even) allowed. When this situation occurs, it is referred to as having ‘legacy systems’. A further explanation of this definition follows in section 11.2 Definition.


    Since tunnel objects are operational for long periods of time, chances are that they have legacy systems. From a cyber security perspective, the question of how to deal with this then soon arises.

    This chapter aims to support relevant object or tunnel managers, technical specialists and security advisers with practical tips and tricks on how to deal with legacy systems in the light of the availability and safety of a tunnel object. It provides insight into which choices to make when the issue of ‘replacing or extending’ hard/software systems is on the table from a cyber security perspective.


    The reason a specific guide has been written for dealing with legacy systems is because these systems involve a number of specific risks and courses of action from a security perspective.


    Legacy management cannot be seen as a separate part of operations or IT service management. It is part of the overarching asset life cycle management process (asset LCM). For more information on asset management, please refer to chapter 10 Asset management of this living document.

    11.2 Definition [link id=”h4s85″]

    A clear definition of when something is considered a legacy system is difficult to give.

    The UK government defines a system or technology as ‘legacy’ when it meets one or more of the following:

        • It can be considered an end-of-life product.
        • It no longer has vendor support.
        • It can no longer be updated or patched.
        • The system or technology is no longer cost-effective.
        • The risk of unsolvable vulnerabilities is assessed as higher than the accepted level of risk.
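    Expressed operationally, ‘one or more of the following’ is a logical OR over the individual criteria. The sketch below merely makes the UK definition checkable; the parameter names are our own wording:

```python
# Illustrative sketch: a system counts as 'legacy' under the UK definition
# when at least one of the listed criteria applies.

def is_legacy(end_of_life: bool, no_vendor_support: bool,
              cannot_update: bool, not_cost_effective: bool,
              risk_above_accepted_level: bool) -> bool:
    return any([end_of_life, no_vendor_support, cannot_update,
                not_cost_effective, risk_above_accepted_level])

# A PLC that is still supported but can no longer be patched is legacy:
plc_is_legacy = is_legacy(False, False, True, False, False)
```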


    Another definition that addresses the topic of security a little more is the following:


    A legacy environment is a custom environment containing older systems or applications that may need to be secured to meet today’s threats, but often use older, less secure communication mechanisms and need to be able to communicate with other systems.


    In the context of this guide, we use the UK definition mentioned above. Systems that meet that definition are called legacy systems. Whether, and if so how, they should be protected against possible threats is something that will be determined in the risk assessment.

    11.3 Risks [link id=”lc6z0″]

    The risks of legacy systems require a different approach than risks of non-legacy systems. The potential cyber security risks posed by having legacy systems include the following:

        • Misuse of vulnerabilities in hardware and software.
        • No up-to-date virus scanner.
        • No replacement hardware available.
        • The required knowledge is no longer available, either within the organisation or externally.
        • No updates/patches available.
        • No more support on:
          • Hardware
          • Firmware
          • Software
          • Operating system
          • Applications
          • Databases

    11.4 Practical tools to deal with legacy systems [link id=”hg53t”]

    This section aims to provide a roadmap that makes the reader aware of possible approaches to managing the risks posed by legacy systems. The roadmap should be seen as a stepping stone: it is an example of a possible approach, since the context in which the legacy system is located may influence the approach chosen.

    11.4.1 Step 1: Gaining insight into the issues and context [link id=”xm427″]

    This step aims to answer some crucial questions:

        1. What systems are we talking about?
        2. What is the context of this environment?
        3. Are there any aggravating or mitigating circumstances?


    To answer these questions, it is important to have the right information. Some of the information can be retrieved from documentation, the majority will (have to) be available in an ‘asset inventory’ or the configuration management database (CMDB). In practice, a CMDB often proves to be present, but not with the right level of detail. Components from the industrial environment can often not (yet) be found in the CMDB due to legacy considerations.


    To address legacy systems effectively, the following list indicates a minimum information requirement. It provides direction and can be expanded or reduced depending on the problem:

        • Hardware components (manufacturer, product line, etc.)
          • PC/workstation/server systems
          • Control and engineering systems
          • PLCs
        • Software components (name, version numbers, patch levels, etc)
          • Firmware
          • Operating system
          • Application
          • Database
        • Context
          • Connectivity; connections to other systems, domains, environments
          • VLANs; separate traffic flows for e.g. management, SCADA, PLC
          • Physical security; building zoning, room compartments, locks on cabinets
          • Configuration
          • Other security aspects (logical security, malware detection, etc.)


    Gaining an understanding of these aspects is the first step and a prerequisite for the following steps. Choosing a method for this is up to the reader. Several organisations have gained experience with this, often using a mix of desk study, expert meetings and automated data collection. Caution is required when running scanning software (asset discovery) on a legacy environment: some systems react unpredictably (e.g. a PLC tripping) when a sweep runs to collect asset information. Preferably do this in a test environment or in a clone of the production environment restored from a backup.
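    As a minimal illustration, the information requirement above can be made checkable: given an asset record, report which minimum fields are still missing. The field names mirror the list above but are otherwise an assumption:

```python
# Illustrative sketch: check an asset record against the minimum
# information requirement listed above.

MINIMUM_FIELDS = {
    "manufacturer", "product_line",               # hardware
    "firmware", "operating_system",               # software
    "connectivity", "vlan", "physical_security",  # context
}

def missing_fields(asset: dict) -> set:
    """Fields from the minimum requirement not yet recorded for this asset."""
    return MINIMUM_FIELDS - {key for key, value in asset.items() if value}

plc = {
    "manufacturer": "VendorX",
    "product_line": "Series 400",
    "firmware": "4.2.1",
    "connectivity": "SCADA VLAN 120",
}
gaps = missing_fields(plc)
```

    Running such a completeness check over the whole CMDB quickly shows where the inventory lacks the level of detail needed for the next steps.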

    11.4.2 Step 2: Assessing risk [link id=”2wffg”]

    A next step, based on the insight gained about the environment, is to estimate what the actual risks are. This sets the stage for answering the following questions:

        • How big is my problem actually?
        • In which sub-area are my risks?
        • Which measures are most effective?
        • Which risks do I want to tackle, and which do I accept (temporarily)? In other words, what is my organisation’s ‘risk appetite’ and what are its consequences for my identified risks? This could also cover acceptable downtime/stoppage of the object, acceptable financial loss, or even a tolerance for accidents.


    To carry out a good risk analysis, a number of ingredients are needed (see also 6 Risk-based approach):

        1. An idea of the ‘importance’ of an object:
          1. What is the effect of failure on the function of the object?
          2. What is the effect of failure of the object on the surrounding chain?
        2. A chosen risk analysis methodology with defined categories. Several methods are available for this. In the simplest form: risk = chance * impact, where ‘chance’ might be ‘negligible’ to ‘probable’ in various steps, and ‘impact’ can be chosen for categories such as RAMSSHEEP, BIV, etc.
        3. An estimate of when the risk could possibly occur. Tomorrow? Or in a year?
        4. If a comparison with other objects or other risks is needed: a uniform scoring methodology. This can be used to make the right decisions on how to deploy a limited budget.
        5. A realistic threat/vulnerability list on which to score.
        6. A group of experts from various divisions within and possibly outside the organisation. The group should have a sufficient spread, but not get in the way of an open discussion. It is therefore advisable not to have legal experts join immediately as the discussion will then not have the desired character. Example participants include:
          1. A process supervisor
          2. An administrator / asset manager
          3. An IA/OT/ICS expert with technical knowledge
          4. An (industrial) cybersecurity expert
          5. The market party if maintenance is outsourced
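    The simplest form mentioned under point 2 (risk = chance × impact) can be sketched as follows. The ordinal scales, their numeric mapping and the risk-appetite threshold are illustrative assumptions; an organisation would substitute its own categories:

```python
# Illustrative sketch of risk = chance * impact with ordinal scales.

CHANCE = {"negligible": 1, "unlikely": 2, "possible": 3, "probable": 4}
IMPACT = {"low": 1, "moderate": 2, "high": 3, "severe": 4}

def risk_score(chance: str, impact: str) -> int:
    """Score a risk on a uniform numeric scale for comparison."""
    return CHANCE[chance] * IMPACT[impact]

def exceeds_appetite(chance: str, impact: str, appetite: int = 6) -> bool:
    """True when the scored risk is above the accepted level."""
    return risk_score(chance, impact) > appetite

# An unpatchable vulnerability judged 'probable' with 'high' impact:
score = risk_score("probable", "high")
treat = exceeds_appetite("probable", "high")
```

    A uniform scoring like this is what allows risks from different objects to be compared when deciding how to deploy a limited budget.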


    During the risk session, obtaining context is important:

        • What is my external exposure (am I connected to another network/the internet)?
        • What is my interconnection with the office environment (segmentation) or with other industrial applications?
        • What data flows are there and how do they run?
        • Etc.


    A well-executed risk analysis provides a list of realistic risks of interest to the environment. This insight is important to arrive at a reasoned approach to legacy systems. An outcome may also be that a legacy system in the organisation does not pose an actual risk because of its context, or that limited resources can be used more effectively on other components than would initially be expected.

    11.4.3 Step 3: Points of interest (horizon, forms of contract, etc) [link id=”8n665″]

    Before a selection of measures can be made, there are a number of concerns that need to be taken into account. It was mentioned earlier that we often find legacy systems in critical environments where continuous availability is expected. In addition, we often see a number of specific characteristics:

        • A very limited number of maintenance windows, usually only a few per year. In addition, these are often already fully scheduled.
        • Renovation horizon in the order of decades.
        • Systems for control purposes are not designed with the ability to make easy modifications.
        • Often no solid test environment is available.
        • Limited (technical) performance/capacity for additional technical measures (malware scanning, etc.).


    As an example, this means that measures such as patching are often challenging, if not impossible, or ineffective.


    Thus, the first thing to consider is what the possibilities are, and the measures should be planned so that they can be implemented at the right times. This can create a (temporary) situation in which other measures or risks are accepted:

        • Maintenance moments: how many maintenance moments do I have per year, what space do I have to introduce measures?
        • Replacement and renovation: the moment to also apply technical measures. Until then, consider alternative measures and extra attention to ‘hygiene’ (cleaning USB sticks, being careful with laptop connections, etc.).
        • Contract forms, such as:
          • Performance contracts: the contracting party changes approximately every 5-10 years. At these moments, contractual process requirements (measures) can often be introduced; in terms of system requirements there are fewer opportunities.
          • DBFM: the starting point is often 20+ years of running the legacy systems, with all defective parts replaced one-for-one from spare parts kept on the shelf for that period. At the end of this contract period, a highly obsolete system may be found.
          • Norwegian approach: contracts with a duration of 10-15 years based on a ‘maintain and renew’ model, i.e. at the start of each newly concluded contract the systems are completely renewed and then fully maintained.
          • Etc.


    The result of this analysis is the identification of opportunities and moments to implement measures (packages). It may also lead to the insight that more opportunities need to be deliberately created, e.g. a specific maintenance weekend for implementing replacement and cyber resilience measures.

    Depending on the opportunities, a certain scope arises for taking a mix of measures. This mix can lean more on prevention, detection or just response. A choice and consideration can also be made for technical, procedural and/or organisational measures.


    From a certain point onwards, an opportunity will arise to replace the entire legacy system with a new one. Clarity about when this moment occurs provides a rationale for choosing measures and/or accepting risks.


    Visualised, the above looks as follows:

    Figure 11.1: Legacy lifecycle of IT and OT assets. With an unchanged approach, the current situation of ageing IT and OT assets will continue to occur frequently.

    11.4.4 Step 4: Selection of measures [link id=”hlfg3″]

    In this section, a number of options are put forward to protect legacy systems to some extent against cyber calamities. For legacy systems, fewer measures are available than for current systems that are in a different phase of their life cycle.


    Basically, three principles apply:

        • Taking measures – technical, organisational and people-oriented.
        • Detection – monitoring of the legacy asset(s).
        • Recovery – mostly technical in nature.


    The choice of measures also depends on the cyber risk assessment, the costs, lead time of the project and the moment when the need arises to actually migrate the legacy system.


    Each situation will be unique and, as a result, so will the choices made. This means being aware of the risk and accepting the consequences. Not making a choice, or postponing a decision, is itself also a decision.


    As an example, this is shown in the figure below:

        1. One recognises the cybersecurity risk.
        2. The assessment is that the consequences are (too) big.
        3. The renovation plans are too uncertain.
        4. The timeframe between doing nothing and the renovation is too long or too uncertain.
        5. One therefore decides to start a project to reduce the risk as much as possible.


    This project can be seen as a request for change (RFC), as shown in the earlier figure.

    Figure 11.2 Decision-making on risk (temporary) reduction measures


    For controlling risks, the following types of measures can be considered (in line with the ‘defence in depth’ model):

        1. Organisation and the human factor: these are the weakest links, but tightening policies and monitoring compliance can prevent a lot.
        2. Physical and logical access control:
          1. Protect the legacy system from physical and logical access as much as possible. Here, logical access deserves the most attention. This could include:
            1. Limiting the number of data connections if possible.
            2. Closing or blocking ports that are not used (hardening).
            3. Migrating the data connection to a connection that can be made cyber secure.
            4. Placing an additional gateway or controller in front of the legacy system.
            5. Checking whether an old modem connection is still present and, if so, removing it.
        3. Network/system segmentation: check whether the legacy system can be segmented at microlevel from the other systems by, for example, placing the system in a separate VLAN in combination with firewalls or special gateways.
        4. Account management: legacy systems are often set up with generic accounts and, over the years, many people both inside and outside the organisation know how to log in. Changing access codes regularly limits access to those who actually need it.
        5. Antivirus/malware: this measure is mostly not feasible, because the behaviour of legacy systems in combination with antivirus software cannot be estimated. The vendor of the legacy system is unlikely to provide support (anymore), and the risk lies entirely with the end user. In addition, antivirus software requires computing power and memory that will often be insufficient on legacy systems.
        6. Patching: this is the most effective measure. However, this measure is often not feasible if support on systems has stopped (security patches are no longer issued). Sometimes, at high cost, agreements can be made with the supplier to still receive these patches. These are then developed specifically for the customer.
        7. Endpoint protection, which includes:
          1. Whitelisting: this form of protection is very effective and requires relatively little processor capacity.
          2. Removing unused software.
          3. Blocking unused drivers and interfaces.
        8. Backup and restore: a good backup and restore policy and strategy are important. This is the last resort for eventually returning to a well-defined state.
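    To illustrate why whitelisting (point 7) is effective at low cost: at its core it is a lookup of an executable’s digest in a list of approved digests, so each execution costs only one hash and one comparison. The sketch below is a simplified illustration of that principle, not a product implementation:

```python
import hashlib

# Illustrative sketch of application whitelisting: only executables whose
# SHA-256 digest appears in the approved list may run.

APPROVED_HASHES = {
    hashlib.sha256(b"approved-hmi-binary").hexdigest(),
}

def may_execute(binary: bytes) -> bool:
    """Allow execution only for binaries on the approved list."""
    return hashlib.sha256(binary).hexdigest() in APPROVED_HASHES

allowed = may_execute(b"approved-hmi-binary")  # known, approved binary
blocked = may_execute(b"dropped-malware")      # unknown binary is refused
```

    Because the approved set on a legacy system rarely changes, the list can be small and static, which is exactly why this measure fits systems with limited capacity.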


    An extra consideration is the use of monitoring and early detection. Systems that analyse network traffic and provide early warnings and alerts can reduce the impact of a cyber incident or, at best, prevent it. These systems monitor not only legacy systems but also all other assets included in the networks.
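    The kind of monitoring described above typically compares observed traffic against a learned baseline of expected flows and raises an alert on deviations. A minimal sketch of that idea, with hypothetical addresses and flows:

```python
# Illustrative sketch: alert on network flows not present in the learned
# baseline of expected (source, destination, port) tuples.

BASELINE_FLOWS = {
    ("10.0.10.5", "10.0.20.7", 502),  # HMI -> PLC, Modbus/TCP
    ("10.0.10.5", "10.0.30.2", 443),  # HMI -> historian, HTTPS
}

def unexpected_flows(observed: set) -> set:
    """Flows seen on the network that the baseline does not explain."""
    return observed - BASELINE_FLOWS

observed = {
    ("10.0.10.5", "10.0.20.7", 502),
    ("192.0.2.44", "10.0.20.7", 502),  # unknown host talking to the PLC
}
alerts = unexpected_flows(observed)
```

    In practice the baseline is learned passively over time; passive observation also avoids the risk, noted earlier, of active scans upsetting legacy equipment.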


    Finally, some additional options are worth exploring:

        1. In case of obsolete hardware:
          • A last buy of spare parts from the vendor, searching the second-hand market, or refurbishment.
          • Re-host (‘lift and shift’): move the OS and application to new hardware (or virtualise them). Investigate early whether this solution will work technically while fully preserving the function (application). This solution only addresses the hardware-related risk.
        2. In case of an operating system:
          • Re-platform (‘lift and shape’): migrate to a current operating system. Vulnerabilities in the new operating system are fixed with current security patches; the application must be compatible with the new operating system.
        3. In case of an obsolete application:
          • Here the possibilities are very limited. If the supplier no longer supports the current application, the only option is to migrate to a functionally equivalent application.
        4. In case of full replacement:
          • Retire: This is often the most expensive option and in practice means replacing it with an entirely new (partial) system to fill in the relevant functionality. This new system should then be set up according to the security-by-design principle.

    11.5 Preventing legacy [link id=”0hhm0″]

    Managing legacy systems is something no organisation should want. From a cybersecurity perspective, having and using legacy systems should be avoided as much as possible, and an organisation should have an active policy to achieve this. Of course, legacy cannot always be avoided; see figure 11.1 for an example.

    11.5.1 Technology, asset management strategy and process [link id=”6m9vs”]

    In order to prevent legacy as much as possible, cybersecurity risks can be taken into account during the design phase (security by design) and space can be created for implementing measures later, for example by making patching, maintenance, upgrades and the replacement of (partial) systems easier. This can be achieved, for instance, through standardisation and the use of open standards, protocols and APIs. This allows functionality to be built modularly and reduces dependencies between systems through well-defined interfaces. In addition, virtualisation platforms provide a good basis for reducing dependencies between hardware, software and operating systems.


    Further considerations include:

          • A good test environment or, preferably, a full failover environment (an environment that can be switched to in case of problems). This increases the ability to perform updates/upgrades to the systems while minimising the maintenance time that must be sacrificed.
          • Asset management, lifecycle management:
            • Planning updates/upgrades to the systems in the future, and also budgeting for them.
            • Alignment with maintenance, replacement and renovation.
            • Opportunities during the changeover of maintenance contracts.
          • CMDB: having insight into and thus being able to identify components at risk of obsolescence by means of a CMDB is a prerequisite for good asset management.
          • Agreements with suppliers, insight into support periods.

    11.5.2 PDCA [link id=”39pg2″]

    A plan-do-check-act (PDCA) strategy is widely used in cybersecurity to maintain and continuously improve security processes and measures. Cybersecurity threats change constantly due to the obsolescence of one’s own installed base, technological developments, malicious parties and increasingly accessible knowledge. By performing a risk assessment and evaluation of legacy systems at least annually, it is checked whether the minimum risk profile is still met. This evaluation also considers the future, such as a possible bridging period and any other issues that may arise (cyclical developments, budget availability, contract periods and contract forms, etc.).

    11.5.3 Additional information [link id=”szqh9″]

    Drawing on their expertise, the NCSC and RWS have each published a vision and recommendations on how to deal with legacy systems:

          • The Dutch NCSC published a whitepaper (in Dutch) on legacy systems.
          • In their Cyber Security Implementation Directive (CSIR), Rijkswaterstaat evaluated the aspect of legacy systems and, where applicable, made it specific and defined it for use within the management processes of a tunnel object.

    12 Business case [link id=”lcl8k”]

    12.1 The whys and wherefores of a business case [link id=”gq62t”]

    A commonly heard statement is: ‘What can I, as a tunnel manager, learn from a cybersecurity business case? There’s no way my object is going to be attacked.’ This statement is incorrect. The recent threat assessment by the NCSC clearly indicates that the risk of a digital attack, specifically also on operational control and monitoring systems, by criminals or state actors from different parts of the world is in fact increasing daily. It is no longer a question of whether an object will be affected by a cybersecurity incident, but of when, and what form the incident will take. When an incident occurs, the availability and integrity of the object will be affected. Both aspects can lead directly to safety and availability risks, both of which are among the primary responsibilities of the object manager.


    Not everyone is equipped to struggle through a jungle of standards or to understand often previously unheard-of terminology. A recognisable cybersecurity incident, described in the context of the object to be managed, can be of huge assistance in considering the potential consequences for the safety and availability of the object. A risk analysis based on that case then provides insight into the threats, vulnerabilities, risks, acceptable residual risks and measures to be taken. The business case subsequently clarifies the costs of the measures to be implemented in relation to the risks they mitigate. Balancing the costs of the measures against the present risks can help in selecting the most suitable, and most cost-effective, measures.


    It is good to view cybersecurity as effectively a form of risk management. Cybersecurity serves to reduce the risk of damage and physical injury as a result of a cyber incident and to speed up recovery following the incident. For the business case, this means that the risk to be mitigated by cybersecurity consists of the likelihood of damage and personal injury and the consequences of failing to meet targets (KPIs) for security, availability and privacy. Fines as a result of the enforcement of laws or contractual obligations can be another form of risk, the probability of which must be reduced.


    The cost aspects of the business case then relate to:

          • The costs of preventive measures that protect against threats targeting vulnerabilities in the system, in order to prevent an incident.
          • The costs of corrective measures: analysing and recovering the system and the operation following an incident.


    On the benefit side of the business case are the risks to be mitigated, expressed as probability (%) × impact (type and scope, in euros), such as:

          • Physical damage (physical incident due to the cyber incident, such as a multi-vehicle collision because the tunnel lighting is suddenly remotely switched off).
          • Financial loss (for example economic and social loss due to days or weeks of unavailability and possibly fines).
          • Physical injury (for example a cyclist who falls from an escalator because it is suddenly reversed remotely).
          • Reputation damage (including communication costs).
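
    The benefit side described above can be sketched as a simple expected-loss calculation. The sketch below is purely illustrative: the risk categories follow the list above, but the probabilities and impact amounts are fictitious assumptions, not figures from this growth book.

```python
# Illustrative sketch: expected annual loss (probability x impact in euros)
# before and after cybersecurity measures. All figures are fictitious.

risks = [
    # (risk, annual probability before, annual probability after, impact in euros)
    ("physical damage (e.g. multi-vehicle collision)", 0.05, 0.01, 2_000_000),
    ("financial loss (days of unavailability, fines)", 0.10, 0.02, 1_500_000),
    ("physical injury", 0.02, 0.005, 3_000_000),
    ("reputation damage (incl. communication costs)", 0.10, 0.03, 250_000),
]

def expected_loss(entries, use_after=False):
    """Sum of probability x impact over all risk categories."""
    return sum((p_after if use_after else p_before) * impact
               for _, p_before, p_after, impact in entries)

before = expected_loss(risks)
after = expected_loss(risks, use_after=True)
print(f"Expected annual loss without measures: EUR {before:,.0f}")
print(f"Expected annual loss with measures:    EUR {after:,.0f}")
print(f"Annual benefit of the measures:        EUR {before - after:,.0f}")
```

    The annual benefit of the measures (the difference between the two expected losses) can then be weighed against the annual costs of those measures.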


    Cybersecurity as insurance or insuring cybersecurity

    In terms of awareness, cybersecurity can be viewed as a form of insurance. The insurance premium consists of the up-front costs of the preventive measures while the cover consists of the reduction of the likelihood of damage and recovery.


    On the other hand, there are insurers who offer cybersecurity insurance. It should be noted that the insurer will require sufficient attention to have already been paid to protection against threats and rapid recovery, for example in the form of compliance with the ISO 27001 standard or IEC 62443. Effectively, the insurer expects you to do what you should already be doing anyway.

    Residual risks and the business case

    It is also important to identify which residual risks are considered acceptable. In that case, a deliberate decision is taken not to invest in preventive measures for a specific risk (or threat type); instead, the losses and physical injuries accompanying that risk are accepted as a calculated residual risk. It is worthwhile including those calculated residual risks in the business case.

    12.2 Phases in cybersecurity perception [link id=”55w3m”]

    In order to analyse the development of a management organisation in terms of its perception of cybersecurity, and to monitor that development, the figure below can prove useful. By using this figure to determine the degree of maturity of the organisation, the steps to be taken also become clear.


    Figure 12.1: Phases of maturity in terms of cybersecurity

    12.2.1 Denial [link id=”5mkt5″]

    During this phase, the overriding idea is that cybersecurity represents no threat to operational activity. The reasons can be varied: ‘we have never experienced any incidents’, ‘our systems are not linked to the Internet’. Management is not informed because the personnel do not recognise the risk either; cybersecurity is seen as adding nothing to the operating result (availability and safety) and primarily as an additional cost item.


    This denial phase can be breached by:

          1. gathering facts and information, by the organisation itself or by third parties (consultants, vendors, government, etc.);
          2. presenting information to management in understandable terms;
          3. if possible, providing examples from your own organisation.


    Communication in the form of consultations and discussions is needed to move on to the ‘Acknowledgement’ phase.

    12.2.2 Acknowledgement [link id=”x0hdg”]

    In this phase, the manager acknowledges that cybersecurity does represent a risk to day-to-day operations and business operation, the impact of which can vary enormously. The subject is placed on the internal agenda. Other departments such as IT, automation, health, safety and environment (HSE), etc. are involved in further investigating the theme. Support within the organisation starts to grow, leading to the transition to the next phase.

    12.2.3 Exploration [link id=”lrm56″]

    The manager instructs the internal organisation (possibly supplemented with one or more external consultants) and releases part of the budget to initiate an investigation. The aim of the team is to determine the nature of the risks, to assess the acceptable residual risk and to decide which elements of the organisation need to be investigated: for example the enterprise network (IT), the operational systems, facility management systems (fire alarm control room for tunnel/buildings), the energy monitoring system, physical security systems, energy supply installations (medium voltage/low voltage/NSA), etc. Each element will have to be investigated further to map out the risks and potential consequences, together with an initial estimate of the necessary budget. As a rule, this means that outdated systems have to be modernised or even replaced. A high-level planning timetable will also have to be produced. The risks, scope, costs and planning are then discussed with the management. Following a positive decision, the preparation of a business case is initiated, which marks the start of the next phase.

    12.2.4 Action [link id=”7c1r6″]

    In this phase, the management commissions the writing of the business case. The business case describes various scenarios, and the best scenario is elaborated further. The structure of the business case depends on the organisation. Later in this chapter, a business case is elaborated in a specific format. The business case is discussed internally, and following approval, the project (programme) is initiated. A project organisation has to be established with the aim of realising the (various) objectives within the scope, the budget and the timetable, in relation to the desired cybersecurity level. Following completion of the project, the organisation, the internal (and external) processes and the technology should be organised in such a way that the cybersecurity level is maintained.

    12.2.5 Persevere [link id=”glq2k”]

    During this phase, it is of crucial importance that the organisation and the external parties (including the companies that carry out maintenance work) remain aware of cybersecurity and act accordingly. Relapse is always a risk; old, familiar habits can quickly return, which in turn results in an increased cybersecurity risk. Constant attention is vital in the form of quality monitoring and assurance. Regular checks, assessments and continuous monitoring will be needed. Cybersecurity must become an integral part of the life cycle management process.


    This process permits several different loops:

          • The ideal loop runs from exploration to action to perseverance. If the organisation is in this loop, the level of cybersecurity is continuously monitored and adjusted. An up-to-date record of the installed base is of key importance here. Filters (policies) are applied to this database, for example to identify unsupported operating systems (Windows XP, Windows 7), systems that cause security incidents (vulnerabilities), network incidents, etc. Based on the filter results, a list is compiled of systems that need to be upgraded/replaced. In addition, the processes, new techniques and potential threats and risks have to be re-examined, and policy is then adapted accordingly.
          • The loop from acknowledgement to exploration and back. During the acknowledgement phase, people become convinced of the need to start an investigation. This investigation may reveal that there are few or no risks (probability × impact), so no business case is prepared. In that situation, a memo is enough to explain that a risk inventory, analysis and evaluation were carried out, resulting in a limited risk profile.
          • The relapse loop: falling back into denial. After conducting the risk analysis and implementing the selected measures, a (false) sense of security may arise (‘I’ve done everything I needed to do’), resulting in a relapse into old behaviour patterns. This ignores the fact that new threats can emerge or new vulnerabilities become known that may require new measures.
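
    The filtering of the installed base described in the ideal loop can be sketched as follows. The CMDB records, field names and support dates below are fictitious assumptions, purely for illustration.

```python
from datetime import date

# Illustrative CMDB extract: per asset, the operating system or firmware
# and the date on which supplier support ends. All records are fictitious.
cmdb = [
    {"asset": "operator workstation 1", "os": "Windows XP",    "support_until": date(2014, 4, 8)},
    {"asset": "SCADA server A",         "os": "Windows 7",     "support_until": date(2020, 1, 14)},
    {"asset": "PLC ventilation",        "os": "firmware v2.1", "support_until": date(2027, 6, 30)},
]

def flag_for_replacement(records, today=None):
    """Apply a simple policy filter: flag assets whose support has ended."""
    today = today or date.today()
    return [r["asset"] for r in records if r["support_until"] < today]

for asset in flag_for_replacement(cmdb, today=date(2024, 1, 1)):
    print(f"Upgrade/replace: {asset}")
```

    The same filter idea extends to other policies, such as flagging assets with open vulnerabilities or recent network incidents.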

    12.3 Aspects of a business case in relation to cybersecurity [link id=”cfk96″]

    The figure below describes in general terms the various aspects of a business case.


    Figure 12.2: The aspects of a business case


    The following explanation deals specifically with a business case for implementing measures in the field of cybersecurity in the (critical) infrastructure. This is not a business case for introducing a new service or a new product.


    1. Background

    In any business case, attention must be focused on the background against which the business case is drawn up. In the case of an infrastructure object, this refers to the operation, management and monitoring of the infrastructure, and to ensuring that users of the infrastructure can be confident of using it safely.


    2. Added value

    This part of a business case often uses qualitative terms. In many cases, taking cybersecurity measures delivers no directly measurable value in terms of revenue. The value lies instead in reducing the probability of an incident with major consequences occurring. Value is also created in part by complying with the applicable (legal) frameworks and by explaining why risk-mitigating measures are necessary. These frameworks may relate directly to cybersecurity (such as the Wbni) or indirectly, via safety.


    3. Alternatives

    In any good business case, in addition to the ‘recommended’ measures, one or more alternatives are also proposed. One of those alternatives should be based on the scenario of ‘doing nothing’. This makes it possible to weigh ‘doing nothing’ against implementing measures. The advantages and disadvantages, the costs and the benefits of all of those alternatives are also mapped out, to allow decision makers to arrive at a choice. In a business case for cybersecurity measures, the alternatives relate to accepting a higher or lower level of risk, with the resultant consequences. Accepting a higher level of risk means implementing fewer measures and therefore investing less, but with a possibility of high costs if the risk occurs. Accepting a lower level of risk means higher investments (possibly in phases) with a smaller probability of high costs in the long term. This principle can be presented as a trade-off matrix (TOM) in which the various aspects are compared and contrasted.
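
    The trade-off between ‘doing nothing’ and alternatives with more or fewer measures can be sketched as a comparison of annual measure costs plus residual expected loss. The scenarios, probabilities and amounts below are fictitious assumptions, purely to illustrate the principle of a trade-off matrix.

```python
# Illustrative trade-off matrix (TOM): compare scenarios on the annual cost
# of measures plus the residual expected loss (probability x impact).
# All figures are fictitious.

scenarios = {
    # name: (annual cost of measures, residual annual probability, impact in euros)
    "do nothing":        (0,       0.10, 2_000_000),
    "basic measures":    (50_000,  0.04, 2_000_000),
    "extended measures": (120_000, 0.01, 2_000_000),
}

def total_expected_cost(cost, probability, impact):
    """Annual cost of measures plus the residual expected annual loss."""
    return cost + probability * impact

for name, (cost, p, impact) in scenarios.items():
    print(f"{name:18s} EUR {total_expected_cost(cost, p, impact):,.0f} per year")
```

    In this fictitious example, ‘basic measures’ has the lowest total expected cost; in a real TOM, non-financial aspects (safety, reputation, legal obligations) are weighed alongside this figure.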


    4. Revenue

    This part of the business case describes the return on the proposals. As far as possible, this section uses quantitative terms, preferably money. Cybersecurity relates on the one hand to reducing the probability of an incident occurring in the event of a digital breach (measures aimed at identification, protection and detection) and on the other hand to reducing the consequences of such an incident (measures aimed at response and recovery; see also the five functions of the NIST Cybersecurity Framework). The financial consequences can be revealed by combining the costs of possible damage to an object and/or equipment (recovery costs) with the costs of the impact on operations (costs of non-availability, additional personnel, etc.).


    5. Business objectives versus risks

    This section of the business case considers the proposal in the light of the business objectives (including safety and availability). The risks that are mitigated through cybersecurity are the deviations from the business objectives. Attention should also be focused on the (explicitly accepted) residual risks; after all, it is impossible to exclude all risks. In the case of infrastructure, a business objective can for example be ‘the rapid, safe and reliable use of the available infrastructure for the transport of people and goods’. It can be argued that the return from the component ‘Revenue’ contributes to that objective. Safety and availability should preferably be quantified in terms of for example the number of victims per year with minor injuries or the number of hours of non-availability per year.


    6. Time

    This section of the business case refers to the time needed to realise the proposals and the time (man hours) needed to implement the measures during operation, both by users and by maintenance staff. For cybersecurity, it will also be necessary to decide on the order in which the measures should be carried out.


    7. Risks

    This section deals with the risks relating to implementing the proposed plan and the measures for managing those risks. This is also needed when implementing cybersecurity measures, because these too involve risks that need to be managed.


    8. Investment

    The business case concludes with a breakdown of the investments needed to achieve the proposals. This must also take into account the effect of the measures to be taken on operating costs. In a business case for implementing cybersecurity measures, the investment can partially be expressed in monetary terms, and partially in man hours (which also cost money), because cybersecurity management measures always have consequences for technology, people (attitude, behaviour, awareness) and processes.


    As already discussed in the introduction, and as is clearly explained in this living document, the subject of cybersecurity breaks down into three aspects: people, organisation and technology. Taking measures must always be viewed as maintaining a balance between these aspects. If the focus is placed heavily on technical solutions, without helping the people who use the technology in the fulfilment of their tasks and without explaining why the measures have been designed as they are, people will search for (and indeed find) ways of making their lives easier. One well-known example is writing down passwords on a post-it note, which is then stuck underneath the keyboard. Together with a number of other factors, this kind of imbalance caused the downfall of the company DigiNotar (see the book Het is oorlog maar niemand die het weet by Huib Modderkolk).

    Appendix 1 Practical experiences [link id=”v68pk”]

    This appendix contains practical examples that might be found during inspections or after evaluations of incidents. The findings are accompanied by suggestions for management measures that could be taken to enhance resilience against cyber risks.


    For news of recent hacks, see the notices on the site of the National Cybersecurity Centre (NCSC).

    B1.1 Layered security [link id=”73hp3″]

    For the security of an infrastructure object, the principles of layered security are followed:


    Figure B1.1: The principles of layered security.


    Hackers first have to succeed in gaining access to a system. They can do this via the network or by gaining physical access to the rooms where the system is kept. Once access has been gained, the hacker can try to enter or manipulate the system, for example by abusing accounts and passwords or by exploiting software vulnerabilities, deploying viruses, etc. For the tunnel’s operator, manager or maintenance staff, it is important to detect and recognise what has happened. They then also have to know what to do. If things do go wrong, the operator must know what action to take to limit the impact and prevent further damage.

    B1.2 Physical access security [link id=”6f6n2″]


    Findings:

          • Maintenance workers are often on-site unannounced; the tunnel operator has no security task and allows people in. Risk: a hacker can enter the site unchallenged.
          • There is no procedure for granting access where it is needed, but even simple anti-burglary measures often do not meet requirements. Risk: a hacker can simply break in and gain access to rooms in which the systems are kept.



    Possible measures:

          • Introduce a key management and access procedure, keep it up to date and ensure compliance.
          • Always require visitors to report on arrival at a location, particularly contractors.
          • Improve and maintain the standard of locks and hinges.
          • Institute security zones (on the grounds and in buildings).

    B1.3 Network security [link id=”0fkwn”]


    Findings:

          • Unfounded confidence in the isolation of the SCADA system from other networks means that people fail to take adequate measures in other areas. Risk: risk awareness declines, making the system vulnerable.
          • There are direct (Internet) connections to the tunnel systems. Risk: infection of SCADA with malware or open access to SCADA for unauthorised persons.
          • There are various connections between office automation and the SCADA network. Risk: infection of SCADA with malware or access to SCADA from office automation.
          • There are various modems and open ports that were previously used to gain access to SCADA from the Internet. Risk: it is easier for a hacker to abuse the network.



    Possible measures:

          • Isolate the SCADA system from other networks.
          • Monitor and check the connections with the tunnel system.
          • Remove connections between office automation and the tunnel system.
          • Remove open ports and modems (that have no function).
          • Authentication and encryption.
          • Create a controlled route for getting ‘from outside to inside’, for example using a jump server which has proper security and is permanently monitored.

    B1.4 Logical access protection [link id=”zmbzp”]


    Findings:

          • Functional (shared) accounts are used rather than personal accounts. Risk: it is impossible to ascertain who did what if something goes wrong.
          • Passwords are almost never changed. Risk: everyone who has ever had contact with the tunnel can log in.
          • Passwords are almost always in the local handbook. Risk: unauthorised persons can also log in.



    Possible measures:

          • Tighten up and regularly update the password policy.
          • Tighten up and regularly update the log-in procedures, for example with the use of multi-factor authentication.
          • Change passwords whenever procedures are introduced.
          • Lock management and operating workstations after use and during absence.

    B1.5 Anti-malware and patching [link id=”7536b”]


    Findings:

          • Possible malware found in the SCADA network; anti-malware measures are absent. Risk: malware can have a negative effect on the system or allow remote access.
          • Patches are only implemented if SCADA is not working. Very old software is often still used. Risk: the hacker can use a vulnerability in the system to evade the logical access control or malware can cause a DoS or open up access.



    Possible measures:

          • Prohibit the connection of unscanned USB sticks or laptops.
          • Remove the malware that is found and apply anti-malware programs around the tunnel systems.
          • Stipulate anti-malware measures in maintenance contracts.
          • Replace software that is no longer supported by the supplier (no security updates).
          • Have a system administrator install verified security patches via a portal.

    B1.6 Detection [link id=”vr6k6″]


    Findings:

          • Operators treat all unusual reports as a technical fault in the SCADA systems. Risk: a hack is not recognised as a non-conformity, with the result that the vulnerability the hacker exploited is not looked for or discovered, and the hacker can proceed again after a member of the maintenance staff has restored the function.
          • Maintenance staff do not look for vulnerabilities or malware. Risk: vulnerabilities or malware may be present without anyone’s knowledge, thus allowing a hacker to proceed.
          • Tunnel systems do not all save log data or the data are not retained. Risk: it is impossible to discover the cause of abnormal behaviour, so a hacker can proceed.



    Possible measures:

          • Internal and external personnel must be sufficiently aware of and familiar with the risks (cybersecurity awareness). This can be promoted by organising:
            • specific workshops for tunnel operators (awareness training).
            • training (e-learning) for maintenance staff and managers.
          • Every suspected cyber incident should be notified to the incident officer. If necessary, hire a security specialist for advice.
          • Draft and implement a procedure for cyber incidents.
          • Arrange active monitoring by a security operations centre (SOC).
          • Record and save log data for all tunnel systems.

    B1.7 Potential for response [link id=”5gw86″]


    Findings:

          • There is no defined course of action for a cyber incident specific to tunnels. Risk: the maintenance party resets the SCADA system and the hacker can proceed again or hack the next tunnel. The tunnel manager is ‘blind’ to hackers.



    Possible measures:

          • Draft a tunnel-specific course of action for various hacking scenarios.
          • In the event of serious disruption to the normal operating process, use the emergency stop to prevent further damage/consequences.
          • Stick to the crisis management process.
          • The operator’s initial reaction is to close the tunnel; that is the correct response.

    B1.8 Backups and asset management [link id=”1zf6g”]


    Findings:

          • There are clean backups, but they are not always up to date or immediately accessible. Risk: there is an unnecessary delay in restoring the function, and the most recent changes are lost.
          • There are clean and recent backups, but the recovery process for the entire chain is never practised. Risk: restoring the system’s operations is unnecessarily delayed.
          • Asset management is not entirely in order: documents are sometimes updated and distributed, but sometimes not. Risks: 1) insufficient reliable documentation is available if recovery or rebuilding is necessary; 2) it is impossible to respond to notifications from a CERT about new vulnerabilities or hacks if it is not known which software versions are installed.



    Possible measures:

          • Regularly make or update backups and test whether restores are possible.
          • Update registration of software and documentation.
          • Improve the management process for software and documentation management.
          • Improve the change process.
          • Contractually guarantee that the maintenance party will keep the documentation up to date.
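
    The finding that backups are not always up to date can be partly guarded against with an automated check on the backup administration. The records and the maximum permitted age below are fictitious assumptions, purely for illustration.

```python
from datetime import date, timedelta

# Illustrative backup administration: last successful backup per system.
# All records are fictitious.
backups = {
    "SCADA server A": date(2024, 1, 10),
    "operator workstation 1": date(2023, 11, 2),
}

def stale_backups(records, today, max_age=timedelta(days=30)):
    """Return systems whose last backup is older than the allowed maximum age."""
    return [system for system, last in records.items() if today - last > max_age]

for system in stale_backups(backups, today=date(2024, 1, 15)):
    print(f"Backup out of date: {system}")
```

    Such a check only covers backup recency; testing whether a restore of the entire chain actually works remains a separate, periodic exercise.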


    Appendix 2 OT vs IT [link id=”6dd33″]

    The IEC 62443 standards are written from the perspective of the industrial environment (operational technology, OT). OT systems comprise ICS, SCADA, PLC and I/O systems and contain software, firmware and hardware. Within OT, confidentiality is less of a priority than integrity and availability: a tunnel must remain operational to prevent traffic congestion, which means information is more widely available than is desirable from the perspective of confidentiality.


    The ISO 27000 standard is intended mainly for office environments based on information technology (IT), such as office automation (SAP, Microsoft Office, etc.). Confidentiality is the most important factor in the case of IT, followed by integrity and availability. For example, who has access to what confidential business data?


    The following table shows the differences between IT and OT systems (source: Applied Control Solutions).



    Aspect                   Priority for OT      Priority for IT
    Message integrity        Very high            Low – medium
    System availability      Very high            Low – medium
    Confidentiality          Low – medium         Very high
    Time criticality         Critical             Delay tolerated
    System downtime          Not tolerated        Tolerated
    System service life      15-25 years          3-5 years
    Computing resources      Limited              (Almost) unlimited
    Applicable standard      IEC 62443            ISO 27000


    Note: whereas IT and OT have traditionally been separate domains, a growing number of today’s operational processes depend on IT solutions. Consequently, the OT environment is increasingly affected by malware from the IT environment. In practice, more than half of the malware problems in an OT environment are in fact found to arise from the organisation’s own IT systems. The development of the Internet of Things (IoT) has now reached the OT world. For proper protection of both domains, the logical answer seems to be an integrated ‘IT/OT’ approach to cybersecurity.

    Appendix 3 Cybersecurity and tunnel safety [link id=”38gxd”]

    B3.1 Background [link id=”zmz0t”]

    Following the launch of the updated living document Cybersecurity tunnels in 2019, it quickly became clear that the officers responsible for tunnel safety did not consider cybersecurity a critical element of the tunnel. The reason turned out to be that the Tunnels Act (2013), the Road Tunnel Safety (Supplementary) Rules Act (2020) and the Road Tunnels Safety (Supplementary) Rules Regulations (2020) establish no hard relationship between cybersecurity and tunnel safety. As a consequence, cybersecurity was also not included in the Tunnel Safety Assessment Framework (Toetsingskader Tunnelveiligheid) and has not been included as a potential risk in risk analyses. At the same time, it is evident that cyber incidents can cause physical incidents. Moreover, the Network and Information Systems Security Act (Wbni) places a statutory obligation on providers of essential services (including tunnel managers) to better protect their IT systems against the risk of cyber incidents.


    Building on this insight, in 2020 the working group supplemented the living document with the ‘Cybersecurity and tunnel safety memorandum‘, which establishes the relationship between cybersecurity and tunnel safety and calls for cybersecurity to be explicitly included as an aspect of the tunnel safety file.


    On this basis, work was carried out in 2021 to make tunnel safety measurable from a cybersecurity perspective. In effect, the memorandum was combined with the Quick scan cybersecurity tunnels.

    B3.2 Introduction [link id=”0b5ln”]

    Tunnel operators, tunnel managers and their safety officers, authorised bodies and all other stakeholders benefit if a tunnel and its installed tunnel systems are reliable and, as a consequence, available and safe for operation, now and in the future. If tunnel installations are, or turn out to be, unreliable, the question must be asked whether the tunnel is in fact safe and, as a consequence, may be kept open in accordance with the Road Tunnels (Supplementary) Regulations Act (Warvw).


    For the majority of officers responsible for tunnel safety, cybersecurity is an obscure and hard-to-grasp development, but one which is unavoidable. Multiple cyber incidents have occurred over the past few years, and it is no longer a question of whether, but above all of when, a cyber incident is going to take place.


    Whereas there is generally a clear vision on structural, mechanical engineering and electrical engineering risks, and these risks develop relatively slowly, this is not the case for cyber risks. A cyber incident can develop in a very short timeframe, without any clear warning signal that is easily observable from the outside. A cyber incident often only reveals itself when physical incidents and/or unexplained disruptions occur. The cause may have been present for a considerable time, in the form of an undesirable intrusion or a ‘dormant’ virus in the system, which has gone unnoticed. Such threats remain unnoticed when the tunnel’s IT/OT network includes no active monitoring or detection functions.

    B3.3 Project phase [link id=”x1c3f”]

    Because cyber incidents can lead to physical (safety) incidents, cyber risks will have to be included in the risk file during the project phase of any tunnel new-build or renovation project, on the basis of which requirements can be imposed on cybersecurity. The contractor can then call in the necessary expertise to ensure that the tunnel manager is able to fulfil its duty of care. The tasks of the contractor consist of designing, building, testing, commissioning, handing over and completing a fully operational tunnel safety system. When the project organisation hands over the tunnel to the management organisation, it must be demonstrated that the tunnel is ‘safe’ and can be used safely. Cybersecurity measures must be taken to also secure the reliability, availability and safety of the tunnel against malicious external digital influences after handover by the contractor and commissioning by the user.

    B3.4 Operations phase [link id=”8bt61″]

    For (existing) tunnels already in use, and for which no particular attention was paid to cybersecurity during the project phase, it is advisable to commission a cyber risk analysis. A cyber risk analysis provides an insight into the digital vulnerabilities, threats and operational risks to safety, availability and privacy. In most cases, a process of catch-up will be needed in order to face the growing threats presented by hackers and the resultant risks for safety, privacy and availability.


    Awareness in relation to tunnel safety and cybersecurity is however something that needs to grow, and that takes time. In that connection, it is desirable to start with a cybersecurity scan, to clearly demonstrate the current level of the organisation. In determining the desired level of cybersecurity measures, the client will have to decide on the current or intended cybersecurity resistance level of the tunnel. This effectively also determines the level of ambition that should eventually be achieved. This ambition level is partially dependent on the risks that need to be covered.

    B3.5 Backgrounds [link id=”51flp”]

    The primary background for road tunnels is contained in the Tunnels Act, the Road Tunnels Safety (Supplementary) Regulations Act (Warvw) and the Road Tunnels Safety Supplementary Regulations (Rarvw). Government institutions are also subject to the General Data Protection Regulation (GDPR) and the Government Information Security Baseline (BIO). Municipal, provincial and national tunnels are fully subject to these laws and regulations.


    The BIO is based on the ISO 27001 standard and describes the process for dealing with information security with a view to safeguarding the confidentiality, availability and integrity of information within an organisation. This includes protecting personal and/or company data and providing protection against hackers and break-ins. The requirements imposed in the ISO 27001 standard are drawn up from the perspective of a generic IT environment (e.g. office automation and software). However, systems in tunnels have a different purpose than generic IT systems, namely the safe passage of traffic. A tunnel therefore uses different types of technical solutions. As a consequence, the network techniques and software applications used are fundamentally different: the aim here is to offer the reliability, availability and serviceability needed to guarantee the safety of passing traffic in all operating situations (specifically in the event of accidents and fires).


    The network and operating, control and monitoring functions employed in tunnels cannot be adequately described by generic ICT standards. With that in mind, Rijkswaterstaat has developed the Cybersecurity Implementation Guideline (CSIR). This guideline outlines specific risks and imposes cybersecurity requirements on tunnel systems and the linked operations.


    In addition to the legislation referred to above, the Network and Information Systems Security Act (Wbni) came into effect in 2018. The Wbni applies to ‘providers of essential services’, national government and digital service providers. This law also applies to the national road and railway network, and therefore to many tunnels.


    B3.7 Total overview of supervision contains a diagrammatic representation of the complexity of the issue of tunnel safety cybersecurity.


    Note: for both light and heavy rail, the aspect of cybersecurity is explicitly included in a so-called integral safety case (ISC). This ISC was drawn up to satisfy the requirement in the Railways Act and the Local Railways Act that it must be demonstrated how the fixed-infrastructure component of a specific section of the railway network is safeguarded and how it can be used safely.

    B3.6 Questions and possible answers with regard to cybersecurity and safety [link id=”dt0mv”]

    The main questions facing a tunnel organisation with regard to the interaction between cybersecurity and tunnel safety can be summarised as follows:

          • How can a tunnel manager demonstrate that it has sufficiently fulfilled its duty of care for cybersecurity in terms of safety, availability and privacy?
          • Is cybersecurity safeguarded upon handover of the construction, and after opening in the operations phase, and hence also during regular management?


    Cybersecurity is not a one-time exercise of complying with a framework of standards to verify a project. Developments in the outside world are taking place so fast that cybersecurity incidents are practically daily news. Rather than a single measurement, a cyclic PDCA process is needed to constantly evaluate and, where necessary, mitigate the risks. This cycle needs a higher repetition frequency than the traditional tunnel inspection, which has to be carried out at least once every six years; after all, cybersecurity risks develop at a more rapid pace. An annual evaluation of measures, combined with internal and/or external audits, is the minimum requirement.
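    The annual minimum named above can be made concrete with a simple check on the date of the last evaluation. A hypothetical sketch: the function name and the example dates are illustrative assumptions, not part of any regulation.

```python
from datetime import date, timedelta

# Sketch: flag when the annual cybersecurity evaluation (the minimum
# named above) is overdue. The one-year interval follows the text;
# the example dates are illustrative.
MAX_INTERVAL = timedelta(days=365)

def evaluation_overdue(last_evaluation: date, today: date) -> bool:
    """True when more than a year has passed since the last evaluation."""
    return today - last_evaluation > MAX_INTERVAL

print(evaluation_overdue(date(2022, 3, 1), date(2023, 6, 1)))  # True: over a year ago
```

    In practice, such a check would of course live in a planning or asset-management tool rather than a script; the point is only that the cycle is dated and verifiable.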


    The solution for tunnel managers must therefore be sought in developing a variety of instruments for carrying out these regular assessments. Sections ‘B3.6.1 The aim of cybersecurity for OT’ and ‘B3.6.2 How does cybersecurity tie in with the tunnel safety approach?’ contain a number of proposals.

    B3.6.1 The aim of cybersecurity for OT [link id=”g8l27″]

    The aim of cybersecurity for OT is to facilitate the operational goals of the tunnel, namely:


          1. S: physical safety of users and maintenance staff. To prevent personal injury and to comply with current safety regulations (expressed in KPIs for safety).
          2. A: operational availability of the tunnel. To satisfy the imposed availability requirements for efficient and effective operation for road users and maintenance staff (expressed in KPIs for availability).
          3. P: privacy of users and maintenance staff. To prevent data leaks involving personal data (such as camera images and intercom streams) and to comply with the GDPR (expressed in KPIs for privacy).


    Working towards these S, A, P goals is intended to prevent damage and injury:

          • Damage relates to all forms of damage: business and personal, tangible and intangible, and direct and indirect damage or losses, such as social losses, financial losses, reputation damage, etc.
          • Injury relates to direct or indirect physical injury to all types of users and managers, such as road users, operators and maintenance staff.


    By way of illustration, two examples of ways in which a cyber incident can represent a risk to the operation of a tunnel:

          • A maintenance worker on a tunnel operating system connects a 4G modem to a PLC to carry out remote updates. The maintenance worker leaves his laptop behind on a train; a computer-science student cannot resist the temptation and, via the laptop, remotely deactivates the water pump in the tunnel during rush hour, in the middle of a downpour. Cars in the tunnel are flooded. Not all occupants are able to escape in time. The disaster organisation is activated according to plan and the emergency services are called out. Via social media, the student discovers the effects of his actions, wipes his fingerprints from the laptop and dumps it in a container in the street.
          • In October 2013, a cybersecurity incident in Israeli tunnels became public. Cyber criminals succeeded in switching off the cameras via a vulnerability involving the manufacturer’s credentials. Multiple tunnels had to be shut down because the control centre had lost sight of activities in the tunnels. It took eight hours to solve the problem, during which time the tunnels remained closed.


    From operational goals to system goals

    The operating and monitoring systems in the tunnel (‘tunnel installations’) facilitate the tunnel operation. In other words, the tunnel systems serve tunnel operations. This means that the goals for the systems need to be derived from the operating goals.


    Figure B.3.1: The triangle availability, confidentiality and integrity.


    The goals for the tunnel installation/systems (relating to cybersecurity) are (the letters follow the Dutch BIV abbreviations):

    B: availability (beschikbaarheid)

    V: confidentiality (vertrouwelijkheid)

    I: integrity (integriteit)




    Availability here refers to the availability of the tunnel for road traffic. The availability of installations is derived from this requirement. If an installation does not comply with the failure definitions, the tunnel will be closed and availability is threatened.

    The memorandum Cybersecurity and tunnel safety describes the relationship between the failure of systems on the one hand and the failure of operations on the other. Failure as a consequence of a malicious hack or malware infection in a tunnel system can lead to tangible operational failure, but system failure can also go unnoticed in the operation (unnoticed system failure).
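    One way to reduce the chance of unnoticed system failure is an independent heartbeat check on the tunnel systems. A minimal sketch, in which the timeout value and the system names are illustrative assumptions:

```python
# Sketch: a system whose last heartbeat is older than the timeout is
# flagged, even if the operation has not yet noticed any failure.
# The 30-second timeout and the system names are assumptions.
TIMEOUT_S = 30.0

def stale_systems(last_heartbeat: dict[str, float], now: float) -> list[str]:
    """Return the systems whose heartbeat is older than TIMEOUT_S."""
    return [name for name, ts in last_heartbeat.items() if now - ts > TIMEOUT_S]

beats = {"water pump": 95.0, "camera server": 20.0}  # last heartbeat timestamps
print(stale_systems(beats, now=100.0))  # ['camera server']
```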

    B3.6.2 How does cybersecurity tie in with the tunnel safety approach? [link id=”m3bgx”]

    It is essential that the tunnel manager and the safety officer recognise that, in today’s world, cyber risks cannot be ignored. Among others, the government, the intelligence services (AIVD and MIVD), the NCTV/NCSC, the WRR and the Netherlands Court of Audit have, over the past few years, urgently called attention to the subject in different ways. Moreover, the media reports on numerous hacks cannot be ignored. The working group’s position is that both tunnel managers and safety officers must take account of recent developments and act as responsible managers; they cannot hide behind the fact that tunnel regulations do not specifically cover this area.


    Duty of care and due diligence

    The legal meaning of the term ‘due diligence’ gives the tunnel manager and the safety officer the responsibility (and liability) to take cybersecurity seriously. The above sections make it clear that cyber failure can directly threaten tunnel safety, potentially resulting in injuries and possibly fatalities. A lack of (financial) resources cannot be used as a reason for keeping cybersecurity off the agenda. Where the security officer has insufficient knowledge of cybersecurity, it is advisable to offer him or her temporary support in acquiring this knowledge, and to then take the steps outlined below.


    Dutch Data Protection Authority (AP)

    The AP imposes fines on organisations that, in the event of a data leak, are insufficiently able to demonstrate how ‘due diligence’ is organised; in article 32 of the GDPR this is known as the ‘duty of care’. These fines can be considerable: the highest fine imposed to date amounted to 830,000 euros, charged to the BKR in Tiel. In other words, the unlawful sharing of camera images could result in considerable costs.


    Article 5 of the Warvw, Security officer

    … On behalf of the organisation of the tunnel manager, the security officer coordinates all preventive and security measures to ensure the safety of tunnel users and tunnel personnel…

    In the light of the increasing frequency of cyber incidents and general social awareness, ‘all preventive and security measures’ also include cybersecurity measures. After all, a hack of an operational operating system can compromise physical safety. The manager and the security officer, as ‘good stewards’, must pay attention to current threats. Cyber threats are now among the realistic threats to which any tunnel is exposed.


    Article 6.1 of the Warvw, risk analysis: small probability, high impact, means medium-sized risks

    Although the probability of cyber incidents is (still) relatively small, the (national) impact (in the media) and the personal consequences for affected parties are considerable. The risks are therefore sufficiently large to be taken very seriously. Cyber risks must also be taken into account in the tunnel risk analysis. A number of examples have already been given above.
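    The reasoning ‘small probability, high impact means a medium-sized risk’ can be illustrated with a classic risk matrix. The sketch below is purely illustrative: the five-point scales and class boundaries are assumptions, not values prescribed by the Warvw or any standard.

```python
# Illustrative risk matrix: risk class from ordinal probability and
# impact scores. Scales and boundaries are assumptions, not prescribed.
LEVELS = {"very low": 1, "low": 2, "medium": 3, "high": 4, "very high": 5}

def risk_class(probability: str, impact: str) -> str:
    """Classify a risk by multiplying probability and impact scores."""
    score = LEVELS[probability] * LEVELS[impact]
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# A cyber incident: small probability, very high impact -> medium risk.
print(risk_class("low", "very high"))  # medium
```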


    For the purposes of risk analysis, the tunnel safety plan should also include a reference to a specific cyber threat policy for the level of resistance of the tunnel, as specified by the owner. This risk-based approach to cyber risk analysis is described elsewhere in this living document. In this way, cyber risks become a normal element of the risk management process of the tunnel. Cyber risks can be mitigated with measures as described in this living document, the measures from the CSIR of RWS, IEC 62443, ISO 27002 or the NIST guidelines.


    Article 6.2 of the Warvw, tunnel safety plan for a new tunnel

    The recommendations from the risk analysis have an impact on the project phase, but certainly also on the management phase. The technical and organisational measures can be included in the tunnel safety plan (or at least reference must be made to the cybersecurity file of the tunnel).


    Article 10 of the Warvw, tunnel safety file for a new tunnel

    When the tunnel is handed over to the new management organisation, the tunnel safety plan is adjusted and updated to a tunnel safety file. The cyber measures – not only the technical but above all also the organisational measures – must be an integral part of the tunnel safety file. Based on an annual update of the risk analysis and the cybersecurity plan (for example using the COB quick scan method), an indication can be given of the growth path the tunnel management organisation is following.


    Article 10 of the Warvw, tunnel safety file for an existing tunnel

    The tunnel manager and the security officer, both of whom are responsible for an operational tunnel, must carry out or commission a cyber risk analysis of the existing situation. The cyber risk analysis gives an insight into the vulnerabilities in the systems and processes and reveals the cybersecurity-related risks to which the tunnel operation is subject. The recommendations from the cyber risk analysis can be taken up in a multiyear cybersecurity plan. The multiyear cybersecurity plan is a logical and natural element of the maintenance management file and/or the tunnel safety file.


    The multiyear plan is an excellent tool for managing and clarifying the improvement process for all stakeholders. Raising the level of cybersecurity can take a number of years; a multiyear (implementation) plan clarifies this. Compiling the multiyear plan also allows budgeting to be organised, irrespective of whether the financing has yet been obtained, and allows the investment to be spread over several years. An annual update of the risk analysis and the cybersecurity plan (e.g. using the COB quick scan) indicates whether the plan remains effective or needs adjustment.

    B3.7 Total overview of supervision [link id=”96008″]

    Appendix 4 Knowledge guide [link id=”tl6wk”]

    B4.1 Introduction [link id=”fg412″]

    Tunnel operators, tunnel managers and their safety officers, competent authorities and all other stakeholders are responsible for the reliability and availability of a tunnel and its installed tunnel installations, so that the tunnel is safe to operate and remains so in the future. If the tunnel installations are not (or prove not to be) reliable, the question must be asked whether the tunnel is safe and can therefore remain open under the Tunnels Act.


    This reliability certainly includes the competence of the personnel around the tunnel. They are part of the tunnel system and are needed to assess the reliability of the systems and thus draw an informed conclusion.


    Nowadays, not only the physical and functional safety of the tunnel is important in this context; due to increasing automation, attention must also be paid to cybersecurity. Within the cybersecurity world, a multitude of specialists are at work, each mastering one or more fields of knowledge. The interconnectedness of information security, privacy and cybersecurity makes it almost impossible to dismiss parts of the information as irrelevant. The technical overlap between IT and OT systems means that requirements from both worlds must be taken into account.


    This living document Cybersecurity tunnels provides guidance for implementing and controlling cybersecurity in tunnels. It identifies who the stakeholders of a tunnel project are and what their roles and tasks are in the context of cybersecurity. One of the most important measures lies in the area of training and awareness: people who access the tunnels will have to be cybersecurity-aware and trained for their job. This raises the following questions:

          • What is expected of me in the cybersecurity domain?
          • What should I know for my work in/with the tunnel?
          • Where do I get this knowledge from?


    This knowledge guide aims to answer these questions. The questions may arise at the individual level, but might also arise among the project team responsible for putting together the organisation of the tunnel. Based on these questions, a track is set out for expanding and deepening knowledge in the field of cybersecurity among the target group.

    B4.1.1 Objective [link id=”5dm1x”]

    The purpose of this document is to provide a tunnel employee with a tool to independently form an opinion about the cybersecurity situation in his working environment in relation to his duties and responsibilities. The following questions are answered:

          • What cybersecurity target groups are there for personnel of tunnel organisations?
          • What knowledge needs are there among the different stakeholders in the tunnel organisation?
          • What should someone who wants to learn about cybersecurity read in what order?
          • Is there a trail through the information that allows for insight and independent judgement rather than imposing a prescriptive approach without knowing why?


    When looking for information on cybersecurity and information security, it soon becomes clear that the amount of documentation on IT environments greatly exceeds that on OT environments. This creates the risk of getting bogged down in a maze of irrelevant documents. One consequence is that people overwhelm each other with lists of varying relevance.


    This living document points out that the security of the operational systems in the OT environment depends on the security of the IT environment. Because of the partial interdependence between the two environments, a certain knowledge of IT security should therefore not be lacking. But how far does that go? And can one rely on the security of the IT environment?

    B4.1.2 Target audience [link id=”94hv4″]

    The target audience for this knowledge guide consists of the stakeholders of a tunnel who have a responsibility in the operational phase of the tunnel. The target group is limited to the operational phase because this period is the longest period of a tunnel’s life cycle. The following questions were considered for determining the target group:

          • How is an overview of cybersecurity stakeholders created?
          • To which of these stakeholders is (what) cybersecurity knowledge relevant?
          • And if cybersecurity is relevant, is it just about awareness or more? And what then?
          • Can relevant cybersecurity knowledge for a tunnel be clustered by tunnel roles? In other words, are we looking at roles or at individual stakeholders?
          • Should stakeholders and roles be added for tunnels of parties other than Rijkswaterstaat? Or is that easy to translate?

    B4.1.3 Background [link id=”3gl35″]

    In the legislation for tunnels, a number of stakeholders and roles are defined but there is no real focus (yet) on digital threats and the resulting risks. What is expected from whom is not well documented. What is interesting or required for which function? And do you need to be certified, get a certificate or is it enough to just read about it?


    Those who start delving into the subject quickly come across terms like CSMS, ISMS, NCSC, NEN-ISO-27001, IEC-62443, CSIR, AVG, GDPR, Wbni/Bbni, NIS/NIS2, OSI model, BIO, NIST, SOC, COBIT, ANSI, CVE, CIS, etc. And those who are not yet dizzy can go on and on. Some of these terms are also covered and explained in this living document. The abbreviations listed here represent very different things:

          • Standards (ISO-270xy, IEC-62443) and their derivatives (BIO, CSIR).
          • Organisations (NCSC, NIST, ISA, ANSI, SOC).
          • Frameworks (COBIT, CIS).
          • Regulations (GDPR, and the NIS/NIS2 directive as implemented in legislation).


    An example

    Consider the text below from a job description in which an administrator of operational systems in the OT environment is asked for:

          • Experience with management, security and project methodologies (ITIL, ISO-27001/2 and PRINCE2) and knowledge of ICS components/system parts such as PLCs, HMIs.
          • One or more security certifications such as GIAC GICSP, GRID, GCIH, GMON.


    You are bombarded with terms and abbreviations. The confusion between process and technology in the IT environment (office) and the OT environment (operational processes) is great here. It is totally unclear whether the vacancy is for a process manager or a technical manager; probably the drafter is not yet clear which role the vacancy is intended to fill. The field of information security and cybersecurity, divided across the aspects of people, process and technology, is too big to be captured in a simple diagram. If your area of work is tunnels, with the associated OT environment, knowledge from the IT domain may help you, but it does not by itself make you a good administrator of a tunnel system.

    B4.2 Stakeholder analysis [link id=”v9dxk”]

    For the stakeholder analysis, the roles listed in 4.4 Basic organisation model for cybersecurity are used as a starting point. For this analysis, two preliminary remarks are important:

          • Appendix 5 Analysis tunnel stakeholders and cybersecurity concerns offers a more comprehensive analysis of stakeholders, based on both the Warvw and input from the LTS. Because cybersecurity roles do not appear in these two sources, the link between roles and stakeholders cannot be made one-on-one. This will have to be made explicit for each tunnel system; the information provided here can be of help.
          • The target group is limited to the operational phase. This phase consists of operation and (regular) management and maintenance. However, the analysis can be extended to the other phases of the life cycle. For the planning phase, for instance, this starts with the decision-makers for tenders. Then it is already important to discuss the conflicts between norms, frameworks and standards in relation to cybersecurity.


    The intended roles (see 4.4 Basic organisation model for cybersecurity), applicable in the operational phase (operation and/or management and maintenance), are:

          • Incident manager
          • Cybersecurity coordinator/advisor
          • Cybersecurity auditor
          • Security engineers/specialists
          • Network specialist(s)
          • Application manager(s)
          • Tunnel operator (for tunnel personnel, see section 3.2 Dutch Tunnel Act; in this knowledge guide we mainly refer to operating personnel)
          • Asset management (maintenance contractor), added as a key stakeholder to better distinguish between internal and external specialists
          • Risk management, added as a key role for translating cybersecurity risks into object risks

    First, it is necessary to define who is responsible for which cybersecurity aspects. Here, the following aspects were adopted as selection criteria:

          • Responsibility from the perspective of (legal) liability (see also the working group on tunnel safety and security)
          • Technical specialists tunnel technical installations (TTI)
          • Risk assessment and risk management
          • Physical access to technical areas
          • Network operators


    Stakeholders are sought among:

          • Managers
          • Asset owners
          • Technical managers
          • Testers
          • Maintenance contractors
          • National services
          • Emergency services



    Possible fulfilment per role, with possible stakeholder(s) (partly based on the RWS organisation):

          • Incident manager: Tunnel manager; VWM/region: functional administrator; other Dutch tunnel operators, namely … (municipalities).
          • Cybersecurity coordinator/advisor: Asset management; operational manager TTI; functional administration; regional service; nationwide services (GPO, CIV, VWM).
          • Cybersecurity auditor: Auditor/inspector/licence provider; municipalities (mayor and councillors); security officer’s office (BVB); provincially responsible officer in the province where the tunnel is being built; HID of the region that will be responsible for the tunnel.
          • Security engineers/specialists: Operational manager TTI; knowledge centre RWS; nationwide services (GPO, CIV, VWM); manufacturers and installers of TTI.
          • Network specialist(s): Operational manager TTI; knowledge centre RWS; nationwide services (GPO, CIV, VWM); network services providers: telephone companies (KPN, Vodafone, etc.).
          • Application manager(s): Knowledge centre RWS; nationwide services (GPO, CIV, VWM); manufacturers and installers of TTI.
          • Tunnel operator: Tunnel operator; traffic engineer; joint control room; dynamic traffic management (DVM) from the traffic centre; road traffic controller; operational traffic management (OVM), being the traffic control centres; emergency services.
          • Asset management (maintenance contractor): Tunnel manager; maintenance contractor; other Dutch tunnel operators, namely … (municipalities); manufacturers and installers of TTI.
          • Risk management: Tunnel manager; asset management; operational manager TTI; VWM/region: functional administration; other Dutch tunnel operators, namely … (municipalities); regional service; nationwide services (GPO, CIV, VWM).

    This overview should be expanded for each tunnel system; see also the information in Appendix 5 Analysis tunnel stakeholders and cybersecurity concerns. For tunnels, the list will be specific for:

          • State tunnels
          • Provincial tunnels
          • Municipal tunnels
          • Private tunnels (e.g. businesses)
          • Rail tunnels (train, metro)


    From the definition of responsibilities, the question arises of what knowledge and skills are expected in the field of cybersecurity. This is the subject of the next section.

    B4.3 Required knowledge and skills [link id=”v22fq”]

    The next step is to determine the required knowledge and skills (regarding cybersecurity) for each stakeholder to fulfil its duties. This knowledge and these skills can be obtained in many ways. To provide some structure, the following distinctions are important:

          • The distinction between the IT and OT environment. Most knowledge is offered from an IT background; see elsewhere in this living document for the difference and the importance of distinguishing between the two environments.
          • Laws, standards and guidelines: know what you have to comply with. A standard will mostly be mandatory; a guideline is a choice.
          • Start by promoting awareness, and keep reinforcing it.

    B4.3.1 Topics and aspects [link id=”1k4lg”]


    To start with the latter, with regard to awareness-raising, a distinction can be made between goals:

          • General
          • Functional
          • Framework-oriented


    An awareness of the importance of cybersecurity from a guiding role can very well start with reading the document Cybersecurity Assessment Netherlands 2022. Within the organisation of a tunnel system, it is important to draw up and follow an awareness programme in which, at least once a year, it is checked whether there is sufficient awareness of cybersecurity aspects. Awareness focuses primarily on recognising high-risk situations by recognising incidents and, from there, on teaching desired behaviour.


    Cyber security frameworks and organisation

          • Knowledge of information security, cybersecurity and privacy laws and regulations.
          • Knowledge of the organisation and its processes.
          • Knowledge of third-party dependencies (suppliers etc.)


    Expertise in managing cybersecurity risks

    The question ‘What risks does an organisation/object actually face?’ can only be answered from knowing the value of the organisation/object. Accepting or reducing risks is a formal process (risk management) which must be set up by the organisation.

          • A business impact analysis (BIA) for determining what the crown jewels are.
          • Risk assessment and risk management: risk ownership is key! Who is the ‘owner’ of a risk?
            • A person and not a department… the one who feels the impact.
            • As low in the organisation as possible, but no lower than the level at which decisions can be made and the impact of measures can be overseen.
            • Is mandated and has a budget for measures.
            • Has sufficient procuration to accept residual risks.
            • Can delegate measures, but remains the risk owner.
            • Is accountable for residual risks.
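    The ownership criteria above can be recorded in a risk register. The sketch below is a hypothetical illustration; all field names and the example content are assumptions, not a prescribed format.

```python
from dataclasses import dataclass, field

@dataclass
class RiskRecord:
    """One risk-register entry; field names are illustrative only."""
    description: str
    owner: str                  # a person, not a department: the one who feels the impact
    mandate_budget: bool        # is mandated and has a budget for measures
    may_accept_residual: bool   # has sufficient authority to accept residual risks
    delegated_measures: list[str] = field(default_factory=list)  # measures can be delegated...

    def accountable(self) -> str:
        # ...but the owner remains accountable for residual risks.
        return self.owner

risk = RiskRecord("PLC remotely reachable via 4G modem", "operational manager TTI",
                  mandate_budget=True, may_accept_residual=True,
                  delegated_measures=["remove modem", "network segmentation"])
print(risk.accountable())  # operational manager TTI
```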


    Cybersecurity policy

    To demonstrably comply and remain compliant, knowledge and skills are required in the areas of:

          • Policies for patching and updates.
          • Supplier and subcontractor relationships.
          • Document management with an eye for confidentiality of (also technical) information.
          • Business certification and standardisation (ISO 27001 and NEN-EN-ISO 9001).
          • Evaluation, auditing and status quo of cybersecurity.
          • Screening of personnel.
          • Incident awareness.
          • Incident response and business continuity plans.
          • Compliance and assurance.


    Technical frameworks

    The operational environment requires a lot of technical knowledge. For a tunnel system, this knowledge is stored in a tunnel safety file. In practice, system operation and maintenance rely on the knowledge of manufacturers/suppliers and/or the maintenance contractor. For each tunnel system, there is a specific project organisation and there are specific agreements with subcontractors, laid down in SLAs. These SLAs define the tasks and preconditions for cybersecurity.


    To demonstrate technical measures concerning cybersecurity in the tunnel safety file, information will have to be stored on:

          • Security architecture and security by design
          • Asset information and configuration
          • Service providers and SLAs
          • Network analysis, layout and security
          • Vulnerabilities and risks
          • Account management (IAA)
          • Logging and monitoring
          • Software security
          • Cybersecurity technical framework(s)
          • Backups and restore
          • Security assessments and penetration testing
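    To keep track of which of the topics above are already documented in the tunnel safety file, a simple checklist can help. A hypothetical sketch: the data structure and the example content are assumptions for illustration only.

```python
# Sketch: which cybersecurity topics are documented in the tunnel
# safety file? Topic names follow the list above; the structure is
# an illustrative assumption.
TOPICS = [
    "security architecture and security by design",
    "asset information and configuration",
    "service providers and SLAs",
    "network analysis, layout and security",
    "vulnerabilities and risks",
    "account management (IAA)",
    "logging and monitoring",
    "software security",
    "cybersecurity technical framework(s)",
    "backups and restore",
    "security assessments and penetration testing",
]

def missing(documented: set[str]) -> list[str]:
    """Return the topics not yet covered in the safety file."""
    return [t for t in TOPICS if t not in documented]

file_contents = {"logging and monitoring", "backups and restore"}
for topic in missing(file_contents):
    print("still to document:", topic)
```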

    B4.3.2 Sources of knowledge and skills [link id=”34t63″]

    Training overviews on websites

    There are many companies working in the field of cybersecurity training. Various websites list training courses. This mainly shows how broad the spectrum is and that there are always opportunities to broaden or deepen knowledge through training.


    In this context, we would like to mention one site. It is an overview aimed at the US market, where IT and OT are mixed together. Nevertheless, it gives a good impression of what is on offer in the cybersecurity field. It is perhaps the most complete overview of security-related certificates: each certificate is listed with its abbreviation written out, the price of the exam (i.e. not the sometimes mandatory training) and a link to a further description. The development of this overview can be followed in the corresponding GitHub repository. The overview provides tracks for beginners, intermediates and experts, on technical and organisational aspects.



          • NEN. For all standards, such as the 2700x and 62443 series. Knowledge is offered through, for example, the Industrial Platform Cyber Security (IPCS). Within this NEN platform, a short manual has been written that explains in one-liners which subject is covered in each section of the 62443 standard. A training course and exam on IEC 62443 are also offered through the NEN. The importance of the IPCS is that it provides a lot of knowledge in the field of the OT environment. For information from NEN, it is important to determine which framework will (or may) be applicable, which frameworks/standards exist and where the differences lie. Standards mostly have to be purchased.
          • NCSC. The Dutch National Cyber Security Centre, part of the Ministry of Justice and Security, aims, mainly from an IT perspective, to make the Netherlands more digitally secure. It provides a lot of information and support to government organisations, including through the publication Basic measures cybersecurity and the cyber crisis exercise ISIDOOR.
          • CIP. The Centre for information security and privacy protection is a public-private network organisation with participants and knowledge partners. Participants are employees from government, semi-government and healthcare. Knowledge partners are employees from market parties. CIP provides a lot of information in the two areas 1) secure software and 2) ethics and rules around data protection. For use within tunnels, the first part is particularly interesting. It provides a framework to test the cyber security of software.
          • IBD. The Information security service is organised by the Association of Netherlands municipalities (in Dutch: VNG). It provides mainly information on the BIO and how to deploy it in municipalities. The IBD is the sectoral computer emergency response team / computer security incident response team (CERT/CSIRT) for all Dutch municipalities and provides support for information security incidents.
          • DTC. The Digital Trust Center shares general information relevant to entrepreneurs about cyber threats or vulnerabilities through its news channels and the DTC Community.



    The Dutch Government reference architecture (NORA) is a standard applied to the government as a whole. It mainly addresses security in communication between organisations inside and outside government. Among others, the Dutch publication Ketens de Baas clearly indicates that the ‘hardware’ within chains is not formed by computers, but by the ‘soft skills’ of the chain players.

    B4.4 Options per stakeholder and duties [link id=”w21pk”]

    Below is a schematic overview (also downloadable as a PDF: Kennis en kunde cybersecurity tunnels) of information sources, training and certifications that are recommended for the various roles. Further explanation follows in the paragraphs below.



    B4.4.1 Important side notes [link id=”s3g8w”]


          • The overview is intended to raise awareness of cybersecurity knowledge. It does not pretend to be complete.
          • The starting point is a tunnel in use, in which a complete cybersecurity file is present. Knowledge is focused on understanding and using the delivered product information and cybersecurity procedures.
          • Occasionally a specific training course is mentioned that refers to a training institute or certification body. This is done because these training courses have become widely known; it is explicitly not intended as advertising. The training courses mentioned are quite broad and can also be replaced by other training courses that go into more detail.
          • The overview has been filled in by the authors based on their experience and knowledge. Any other team of authors would come up with different examples.
          • The overview gives an impression of the roles, responsibilities and authority of the role holders. In each tunnel organisation, the implementation may be slightly different and also depend on the personalities and actual knowledge and experience present.
          • The overview is a snapshot. Especially in the field of security of the OT environment, much more training is expected. An update will then be useful.

    B4.4.2 Legend [link id=”tlbx9″]

          • Vertically, the different cybersecurity roles are plotted with a generally applicable rule at the top.
          • The row ‘general’ applies to everyone. This knowledge is best secured through joint knowledge sessions such as toolboxes or security programmes.
          • Plotted horizontally are first ‘awareness’ and then the types of work being carried out.
          • The column ‘policy-making’ refers to the development and maintenance of a longer-term vision by reflecting on the current situation and new developments in technology and society.
          • Empty boxes make it clear that there is no role and thus no specific relevance. An empty box does not mean, for example, that a tunnel operator is not allowed to participate in evaluation and auditing; it only means that no specific knowledge is expected.
          • Blue is used for general topics or project files; transparent for training or third-party documents.
          • The symbols indicate documents or actions/topics. For some of these topics, an awareness presentation is recommended.
          • This icon indicates a training course with a certificate. The other icons have been chosen as illustrations and have no specific meaning.

    B4.4.3 Growth model [link id=”k7fqk”]

    Parties/stakeholders/roles concerned with tunnel security share the common goal of safe traffic flow. The impact of cybersecurity aspects is mostly assessed instinctively. By increasing awareness, knowledge and tooling, cybersecurity is made measurable. In order to achieve this for a tunnel, it is recommended to work mainly on joint awareness and, in addition, to acquire (or have acquired) specific specialist knowledge. As long as this knowledge is insufficient, external advisers will be used.


    Awareness-raising is promoted by holding joint sessions, e.g. in the form of role-playing. Put yourself in the shoes of a role (e.g. mechanic) and discuss the limitations and uncertainties experienced due to cybersecurity. These sessions should address the three aspects: people, technology and process. Examples could include:

          • Distinction between OT and IT.
          • Vulnerabilities in a tunnel, where are the risks?
          • Organisational model of the tunnel environment in relation to cybersecurity.


    In all training courses, the basic principles and cybersecurity risks are repeated again and again. This continuous repetition from different perspectives is a powerful tool for building understanding and experience. Notably, this promotes forming one’s own vision of and approach to cybersecurity in the tricky OT environment of tunnels.

    B4.4.4 Tips for implementation [link id=”8p4gv”]

          • Take every opportunity to gain cybersecurity knowledge. Talk about it and share opinions and insights.
          • Return again and again to the basic principles and cybersecurity risks. Repetition is also the best teacher here. Looking at them through slightly different lenses each time grows understanding of the tricky OT environment in the tunnels. Notably, this promotes forming your own vision and approach.
          • Follow your own track. Taking note of standards and rules is important for testing. For daily practice, developing your own intuitive sense and independent judgement is important.

    Appendix 5 Analysis tunnel stakeholders and cybersecurity concerns [link id=”q2tbh”]

    This section provides an overview of the roles and stakeholders for a tunnel system and suggestions for the link to cybersecurity aspects. The overview starts from the Warvw and uses the stakeholder overviews from the LTS. An addition for roles and stakeholders for rail and non-rail tunnels is relatively easy to make based on this overview.

    B5.1 Perspective law and legislation [link id=”2hpf3″]

    The Warvw provides guidance in several articles on the roles legally required for a tunnel organisation (translated for convenience):


          • Article 5.1: “For each tunnel, as well as for each tunnel under consideration for construction or under construction, there shall be one Tunnel Manager and one Safety Officer.”
          • Article 6 mentions someone who carries out a risk analysis and is independent of the tunnel manager.
          • Article 7.1 then mentions some more parties: “Before opening a tunnel, the Tunnel Manager shall, after consultation with the Safety Officer and the mayor of the municipality or of each of the municipalities in which the tunnel is located, draw up a safety management plan. The plan shall include at least the organisation of tunnel management, coordination of this management with the emergency services, traffic management, conservation activities and the response to disasters or other events in or near a tunnel which may endanger human life, the environment or the tunnel. The plan shall also include an analysis of accident scenarios. Further rules on the content of the safety management plan and the method for carrying out the analysis of accident scenarios shall be laid down by ministerial regulation. The analysis referred to in the third sentence may be omitted, giving reasons.”
          • Article 8.1: “It is prohibited to open a tunnel to traffic without a permit from the competent Municipal Executive to that effect.”


    The LTS (release 1.2 SP2 B3) defines and describes, in line with the Warvw, the concerns of the various stakeholders surrounding the tunnel system from the framework of the national road tunnels. This overview is applicable for non-state tunnels by renaming and/or combining the stakeholders listed. The LTS has included the roles and stakeholders in several sections. First, the roles are named in H4.3. The roles translate the analysis of the current processes into the organisation of Rijkswaterstaat in terms of traffic management and management and maintenance of the tunnels. Distinguished are the roles as shown in the figure below, which is copied from the LTS.


    Figure B.4.B.1 Roles around the state tunnel system


    Chapter 5 of the LTS establishes a further relationship between the roles and the relevant stakeholders who fill those roles:


    “Stakeholders set requirements for the RWS Tunnel System and fulfil roles in the RWS Tunnel System. In the table below, the roles around the RWS Tunnel System are linked to the various stakeholders from the LTS’ stakeholder analysis. The reason is that stakeholders can be elaborated in the underlying specifications and designs based on their roles with respect to the tunnel system. The final defined roles depend on the further design of the system.”


    The LTS thus distinguishes roles and stakeholders. All stakeholders fulfil a role. The LTS is not fully consistent in the assignment of roles to stakeholders; new roles arise, and not all roles are filled by stakeholders. The final roles and stakeholders will have to be defined for each tunnel system.


    Given the number of roles and stakeholders and the corresponding tasks and responsibilities, it is not yet easy to assign cybersecurity work to one specific stakeholder. An important aspect here is that there are several layers of responsibility. Ultimately, the competent authority remains responsible, but day-to-day practice lies with operations and maintenance staff.

    B5.2 Concerns by role and stakeholder [link id=”kq1qc”]

    This appendix uses the cybersecurity roles as defined in this living document. When implementing the stakeholders for a specific tunnel system, the cybersecurity roles will have to be assigned explicitly. This requires naming all stakeholders with their role in the cybersecurity domain. The table below can be used as a starting point for a complete analysis of cybersecurity concerns in the stakeholder table. Here, the table from Chapter 5 of the LTS has been extended with a letter according to the RASCI matrix and a description of the cybersecurity interest of the respective stakeholder. The RASCI matrix was used as follows.



          • R – (is) responsible: responsible for executing a task.
          • A – (is) accountable: (ultimately) accountable.
          • S – (can be) supportive: can provide support in executing a task.
          • C – (should be) consulted: should be consulted.
          • I – (should be) informed: should be informed.
          • N – not applicable: cybersecurity aspects are not (directly) applicable.
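A RASCI allocation can also be captured as data so that its basic consistency rules (exactly one accountable role per task, at least one responsible role) can be checked automatically. The sketch below is purely illustrative; the tasks and role names are hypothetical examples, not the official allocation from the LTS.

```python
# Illustrative sketch: a RASCI table as data, with a basic consistency check.
# The tasks and role assignments below are hypothetical examples.
rasci = {
    "patch TTI servers": {"TTI technical manager": "R",
                          "Tunnel manager": "A",
                          "SOC (domain expert)": "S",
                          "Safety officer": "I"},
    "review CS reports": {"Safety officer": "R",
                          "Tunnel manager": "A",
                          "Competent authority": "I"},
}

def check(table):
    """Each task needs exactly one A (accountable) and at least one R."""
    problems = []
    for task, roles in table.items():
        letters = list(roles.values())
        if letters.count("A") != 1:
            problems.append(f"{task}: expected exactly one A")
        if "R" not in letters:
            problems.append(f"{task}: no role is responsible (R)")
    return problems

print(check(rasci))  # an empty list means the table is consistent
```

Such a check is useful when the stakeholder table is elaborated for a specific tunnel, because gaps (no accountable party) surface immediately instead of during an incident.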


    Direct users/affected parties:






    Cybersecurity aspect

    Social environment (quality of life)

    Residents groups

    • Ensuring quality of life in city centres
    • Mobility concerns
    • Safety of surroundings and tunnel object
    • Social safety


    Availability of the property

    Road user

    Road users

    • Safety of the tunnel object in all circumstances
    • Good traffic circulation
    • Comfort


    Availability of the property









    RWS:

    Cybersecurity aspect

    Functional management RWS

    National services

    Uniformity and standardisation


    CMDB, CS-dossier, vulnerabilities, risks, problems, incidents

    Traffic engineer

    Operational traffic management (OVM) being the traffic control centres

    • Good functioning of the RWS tunnel system
    • Good traffic circulation, incident management
    • Meeting SLAs


    Vulnerabilities, incidents, incident response

    Road inspector/ duty officer

    Dynamic traffic management (DVM) from traffic centre / road traffic controller/tunnel operator

    Operationally sound and safe operation of the tunnel system on the road, both in terms of process, technology and organisation.

    R (first response)

    C (handling)

    Vulnerabilities, incidents, incident response

    Road traffic controller

    Dynamic traffic management (DVM) from traffic centre / road traffic controller/tunnel operator

    Good integration in the VM systems HWN of the tunnel system, both in terms of process, technology and organisation.

    R (first response)

    C (handling)

    Ability to operate, monitoring, knowledge of vulnerabilities, incidents, incident response

    Legislator/ policy maker


    Manageable tunnel standard, which can be applied operationally in construction and asset management and maintenance.


    Compliance, basic knowledge of CS standards and guidelines

    Legislator/ policy maker

    Steunpunt Tunnelveiligheid (STV)

    • Unambiguous safety requirements and approach for tunnels
    • Availability of an adequate set of instruments to fill in these frameworks (and to make the decision-making process effective)
    • Focused on safety.


    Provide direction and framework on cybersecurity

    Legislator/ policy maker


    Interest in a well-functioning tunnel system, reliable in construction and asset management and maintenance, safe and smooth in operations.


    Should be aware of the importance of CS frameworks and identify that CS policies should be included.

    Legislator/ policy maker

    Verkeerscentrum Nederland (VCNL)

    Good provision of information for DVM from the tunnel system. Diversion routes etc. measures at national level around a tunnel.


    Availability of reliable information

    Legislator/ policy maker

    National tunnel director

    • Fulfilling professional responsibility of RWS and coordination of state services
    • Controlled realisation of tunnel processes; new tunnels available on time, within financial, quality and safety constraints.


    Incorporate CS policy into standardisations and roll out across all tunnels.

    TTI technical manager

    National services (GPO, CIV, VWM)



    Perform technical management, updates, patching, testing, control logging, backups

    Domain expert tunnels


    National services (GPO, CIV, VWM)



    A domain expert refers to a specialist. This is, for example, RWS-CIV SOC. The tasks of this domain expert are supportive and advisory to the tunnel organisation and include:

    • Framework setting (guideline issuing).
    • Testing the implementation of CS measures during realisation.
    • Network monitoring in operational operation.
    • Communication of vulnerabilities and incidents.
    • Informing during operations and incidents.

    Construction manager

    Regional service

    Workable tunnel standards throughout the lifecycle.


    Implement CS in the project as far as it is included in the tunnel standard.


    Construction manager

    HID of department under which a tunnel will fall


    A safe and manageable object appropriate within the opening permit


    Compliance, basic knowledge of CS standards and guidelines


    RWS Tunnel project organisations

    • Deliver on time, within budget, a safe, functioning tunnel that meets the specifications of the LTS.
    • Ensure the project’s ideas regarding standardisation are secured with the LTS


    Technical and procedural elaboration of CS measures, CS risk analyses









    Market:

    Cybersecurity aspect

    Interface supplier

    Telephone companies (KPN, Vodafone, etc.)

    • Good margin
    • Delivering stable services


    Delivering secure services


    Telephone companies (KPN, Vodafone, etc.)





    Manufacturers and installers TTI

    • A good margin
    • A stable market
    • Taking maximum advantage of existing legacy
    • Unambiguous and uniform functional specifications


    CS audits, screening, certification



    Other public authorities:






    Cybersecurity aspect

    Tunnel operator

    Other Dutch tunnel operators; namely…(municipalities)

    • Availability, flow and safety
    • Clear frameworks, requirements and solutions to test whether this objective is met, or for balancing safety, availability and maintenance.
    • Local situational concerns met


    Checking CS activities and reports

    Auditor/ inspector/ licensing officer

    Municipalities (mayor and aldermen)

    • Safety and traffic flow
    • Being able to issue permits without risk
    • Assurance that mayor can take disaster response responsibilities and will not be held accountable for negligence in the event of an emergency


    Annual audits, reporting

    Auditor/ inspector/ licensing officer

    Water board

    Ensure good water management, safe water levels and excellent water quality.


    Reliable supply of fire water and sewage disposal (no specific role from the tunnel point of view)

    Auditor/ inspector/ licensing officer

    Municipal department(s) environment and building supervision

    • Building regulations are met
    • Environmental (local) standards are met


    Not beyond CS measures in building codes

    Auditor/ inspector/ licensing officer

    Ministry of the Interior and Kingdom Relations (which department?)

    Do not erode mayors’ responsibility


    No other than reliable facilities for emergency response

    Joint control room


    • Not being obstructed when providing assistance
    • Getting to the right place quickly
    • Clarity – safety during emergency assistance
    • Uniform procedures, working methods and systems
    • Do justice to the local environmental situation


    No direct role for a specific tunnel. The control room is outside the scope of the tunnel, though the CS of the control room itself must have been checked, and awareness must also be in place

    Joint control room Emergency service

    Fire brigade

    • Road user safety and safety services
    • Good accessible tunnel
    • Clear approach routes
    • Clarity about the operational, maintenance and safety condition of the tunnel object


    No direct role for specific tunnel. The fire brigade is outside the scope of the tunnel; however, the fire brigade should also be checked on cybersecurity.

    Assistance worker

    Emergency services



    No other than reliable facilities for emergency response

    Assistance worker

    25 safety regions (mayors, fire brigade, ambulances, emergency services)

    • See fire brigade
    • Unambiguous emergency response procedures


    No other than reliable facilities for emergency response

    Assistance worker

    Fire brigade



    No other than reliable facilities for emergency response

    Legislator/ policy maker

    Former Ministry of VROM


    Limit responsibility of competent authority, fire and emergency services


    Check the policy on CS against tunnel regulations. Additional rules may come from this legislation. It may also be that deficient legislation leads to risks that require additional measures.

    Legislator/ policy maker

    Water board



    Check the policy on CS against tunnel regulations. Additional rules may come from this legislation. It may also be that deficient legislation leads to risks that require additional measures.

    Legislator/ policy maker

    Ministry I&M/DGMo

    Political and public support for policy, laws and regulations; exclude legal and political risks; coordinate with VROM and BZK on policy; preserve necessary decision space for the minister in policy matters.


    Check the policy on CS against tunnel regulations. Additional rules may come from this legislation. It may also be that deficient legislation leads to risks that require additional measures.

    Legislator/ policy maker

    Ministry of the Interior and Kingdom Relations (which department?)



    Check the policy on CS against tunnel regulations. Additional rules may come from this legislation. It may also be that deficient legislation leads to risks that require additional measures.

    Legislator/ policy maker

    Local Politics

    Reliability, social acceptance, balanced responsibilities.


    Check the policy on CS against tunnel regulations. Additional rules may come from this legislation. It may also be that deficient legislation leads to risks that require additional measures.









    Independent:

    Cybersecurity aspect

    Auditor/ inspector/ licensing officer

    Safety Officer’s Office (BVB)

    • Law and process enforcement
    • Safety of tunnels must comply with the law (and RWS policy) at all times and this must be demonstrable; other concerns must never result in the required level of safety not (or no longer) being met.


    Review CS reports from the tunnel organisation. Perform independent audits.









    Other:

    Cybersecurity aspect

    Tunnel instructor

    Fire academy


    Opportunities to practise calamities and incidents on site and/or via simulation


    Can CS incidents also be practised? Or is that completely out of scope for this stakeholder?

    Auditor/ inspector/ licensing officer

    Provincially responsible officer in province where tunnel is being built

    See mayor and aldermen


    Not directly concerned with CS matters. The municipal executive remains responsible; the province may also (have to) play a role in this.

    Assistance worker

    Fire academy



    I or N

    Can CS incidents also be practised as part of a contingency exercise? Or is that completely out of scope for this stakeholder?

    Assistance worker

    Fire brigade corps association NVBR

    • See fire brigade
    • Defending pooled concerns of fire brigades
    • Unity, uniformity

    I or N

    Can CS incidents also be practised? Or is that completely out of scope for this stakeholder?

    Social environment (quality of life)

    Interest groups/opponents (safety and air quality)

    See the concerns of those others


    The environment is a potential attacker. Include this in CS risk analyses. E.g. is there resistance to the tunnel from the surrounding area?

    Road user

    Transporters (representatives)


    • Traffic flow before traffic safety
    • Low margins
    • Important economic factor NL


    This is already listed above under direct users.



    Insurers

    As little risk as possible.


    Vulnerabilities, risks



    Press

    The press wants to deliver news.


    Involve in raising awareness; inform about incidents and exercises



    The tables above identify a number of stakeholders whose importance to the tunnel environment is not immediately apparent. To determine this importance and estimate what this means for cybersecurity, the descriptions below were used:


          • Direct users/stakeholders
            • Residents groups: input environmental wishes
            • Road users: users of the tunnel.
          • RWS
            • National services: responsibility to deliver correct functional and technical specifications to RWS project organisation GPO. GPO: mostly principal/writer DBFM. CIV: technical manager. VWM/Region: functional manager.
            • Operational traffic management (OVM) being the traffic control centres: implementation of traffic management from the traffic centres. The following roles can be identified within this: traffic engineer, road traffic controller/tunnel operator, technical/functional management.
            • Highways inspector (WIS), duty officer (OvD): performs operational traffic management on the road.
            • Dynamic traffic management (DVM) from traffic centre, road traffic controller/tunnel operator: carries out operational traffic management from traffic centre.
            • HID RWS-GPO: determines the organisation and policy of GPO.
            • Support centre for tunnel safety (STV): knowledge and advisory centre within RWS in the field of tunnel safety, develops and manages the safety frameworks for tunnels, monitors that RWS tunnels comply with these frameworks, in doing so, acts in a directing capacity on behalf of HID RWS-GPO (and SDG), including through involvement in tunnel projects, advising tunnel operators, etc., coordinates frameworks with (B)VB, provides policy advice to DGMo.
            • RWS (S)DG: determines how the RWS agency implements its mission.
            • Verkeerscentrum Nederland (VCNL): responsible for traffic information and national traffic management.
            • National tunnel director: set national agreements/frameworks, responsible for project management and construction technology tunnel projects, authorised to make binding proposals on behalf of RWS for tunnel design and tunnel safety.
            • Regional service: performs tunnel standard, both in construction, operation and management and maintenance.
            • HID of directorate under which tunnel will fall: ensures responsibility of manager.
            • RWS Project organisations tunnels: contract responsible client, responsible for realisation of new tunnel or renovation of existing tunnel, incl. design, implementation and testing of technical safety facilities.
          • Market
            • Telephone companies (KPN, Vodafone, etc.): facilitate mobile telephony.
            • Manufacturers and installers TTI: responsible for realisation, maintenance.
          • Other public authorities
            • Other Dutch tunnel operators; namely…(municipalities): operating responsible.
            • Municipalities (mayor and aldermen): competent authorities in issuing building permit, opening permit, etc. as final part of the relevant decision-making process, mayor also responsible for disaster management.
            • Water board: licensing.
            • Municipal department(s) environment and building supervision: advise competent authority to grant permits to TA (e.g. building application).
            • Ministry of the Interior and Kingdom Relations (which department?): responsible for (regulations on) emergency and crisis response (e.g. Fire Act and Working Conditions Act), represents concerns of administrators and emergency responders at ministerial level.
            • Emergency services: responsible for emergency response to incidents, calamities, etc.
            • Fire brigade: advising to competent authority, responsible for emergency response to incidents, calamities, etc.
            • 25 safety regions (mayors, fire brigade, ambulances, emergency services): responsible for (coordination of) emergency assistance at incidents, calamities, etc. (preparation, repression).
            • Former Ministry of VROM: responsible for building regulations and thus a large part of tunnel safety regulations (Housing Act, Building Decree, Construction Decree Regulation, Use Decree).
            • Ministry I&M/DGMo: responsible for (preparation of) policy, laws and regulations; advises minister on policy issues; takes initiative on behalf of minister in formal evaluation of tunnel law.
            • Local politics: responsible for local political decision-making process.
            • Provincially responsible director in province where tunnel is to be built: possibly co-responsible for administrative covenants.
          • Independent
            • Safety officer’s office (BVB): monitoring, advising, coordinating with regard to safety. Independent safety adviser for all RWS tunnel managers, responsibilities and powers are laid down by law, monitors safety.
          • Other
            • Fire service academy NIFV: training.
            • Fire brigade corps association NVBR: represents fire brigades.
            • Interest groups/opponents (safety and air quality): represent others.
            • Transporters (representatives): transport goods, people and hazardous substances.
            • Insurers: risk hedging within construction and life cycle.
            • Press: public opinion-forming.

    Appendix 6 Scenarios [link id=”6nth5″]

    B6.1 Foreword [link id=”qml29″]

    This annex to the living document contains scenarios based on cybersecurity events in tunnel automation and related IT systems. One can use these scenarios for training and awareness purposes.


    So far, tunnel automation has not been an obvious target of targeted cyber attacks. Nevertheless, a cyber attack, targeted or otherwise, can lead to a lot of economic damage and even unsafe situations. Many organisations have taken measures and set up procedures using standards such as the BIO, ISO27001 or IEC62443, or using best practices such as this living document on Cybersecurity tunnels. These measures and procedures aim to prevent or handle cybersecurity incidents in the best possible way. Testing these measures in practice is essential to prove that the measures are adequate. A practical exercise also leads to increased awareness and valuable insights that can further improve the measures and processes.

    B6.2. Approach [link id=”hwl70″]

    To keep the scenarios recognisable for different organisations, we describe them in as generic a manner as possible. For each scenario, we describe the following components:

          • Event – What exactly is going on and how does it manifest itself?
          • Consequences – What are the consequences for being able to (safely) operate the tunnel and the organisation behind it?
            • Escalate – How can the consequences worsen?
            • De-escalate – How can the consequences be reduced or controlled?
          • People involved – What internal and external roles are involved?
          • Themes living Document – Which topics in the living document are consistent with this scenario?
            • Prevention – How can the event be prevented in the future?
            • Response – How can the event and the resulting adverse effects be controlled?
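For exercise preparation, the components above can be captured in a simple, reusable record so that every scenario is documented in the same way. The sketch below is an assumption-laden illustration: the field names follow the list in B6.2, while the example content is invented for demonstration only.

```python
# Illustrative sketch: the B6.2 scenario components as a structured record.
# The example content is hypothetical, not one of the living document's scenarios.
from dataclasses import dataclass, field

@dataclass
class Scenario:
    event: str                    # what is going on and how it manifests itself
    consequences: list[str]       # impact on safely operating the tunnel
    escalation: list[str]         # how the consequences can worsen
    de_escalation: list[str]      # how the consequences can be reduced or controlled
    people_involved: list[str]    # internal and external roles involved
    themes: list[str] = field(default_factory=list)  # related living-document topics

example = Scenario(
    event="Operator workstation shows unexpected behaviour",
    consequences=["Tunnel can no longer be operated remotely"],
    escalation=["Malware spreads towards the control system"],
    de_escalation=["Switch to local/manual operation", "Isolate the workstation"],
    people_involved=["Tunnel operator", "TTI technical manager", "SOC"],
    themes=["Incident response", "Logging and monitoring"],
)
print(example.event)
```

Keeping scenarios in one fixed structure makes it easier to compare exercises and to verify that each scenario addresses prevention and response.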

    B6.3 Scenarios [link id=”6d1r0″]

    1: Maliciously affected operating system [link id=”xp2m4″]



    People involved

    Themes living document


    2: Control system sabotaged [link id=”5k20b”]



    People involved

    Themes living document


    3: Malware infection [link id=”0hxhx″]



    People involved

    Themes living document


    4: SOC makes an observation [link id=”ms85r”]



    People involved

    Themes living document



    Appendix 7 Overview of working group participants [link id=”q54l5″]















          • Wim van Asperen – Municipality of Amsterdam, Metro and Tram
          • Ron Beij – Amsterdam-Amstelland Fire Brigade
          • Jack Blok
          • Peter Borsje
          • Johannes Braams – TEC Tunnel Engineering Consultants
          • Marijn Brans – KienIA Industriële Automatisering
          • Bob van den Bosch – Siemens Nederland
          • Arie Bras – Wegschap Tunnel Dordtse Kil
          • Michiel Dubbeldeman – Ministry of Infrastructure and Water Management
          • Peter Freesen
          • Talmai Geradts
          • René van der Helm – Ministry of Infrastructure and Water Management
          • Harald Hofstede – Siemens Nederland
          • Erik Holleboom – Strypes Nederland
          • Cuno van den Hondel
          • Jan Houwers – Antea Group
          • Wim Jansen†
          • Ico Jacobs
          • Erik de Jong
          • Marcel Jutte – Hudson Cybertec
          • William Kaan – Province of Noord-Holland
          • Jasper Kimstra
          • Lennart Koek – Hudson Cybertec
          • Arnold Kroon
          • Robert Jan de Leede – Siemens Mobility
          • Mark van Leeuwen
          • Bernhard van der Linde
          • Sjaak Matze
          • Ron Perrier – ENGIE Services Infra & Mobility
          • Nick Pfennings
          • Ronald van der Poel – Otis Industry
          • Rita Puggioni – Province of Noord-Holland/NRT
          • Jos Renkens
          • Philip Roodzant
          • Robbert Ross – Rijkswaterstaat WNN
          • Pieter de Ruiter – Province of Noord-Holland
          • Ruud Scholten – Croon Wolter Dros
          • André Stehouwer
          • Peter Stroo – Municipality of The Hague
          • Tom van Tintelen – DON Bureau/Technolution
          • René Valstar
          • Johan van der Velde
          • Riemer te Velde – Municipality of Rotterdam
          • Tom Versluijs – Municipality of The Hague
          • Erik Versteegt – Siemens Mobility
          • Erik Vinke
          • Jaap van Wissen – Rijkswaterstaat CIV
          • Gijs Withagen – KienIA Industriële Automatisering
          • Turabi Yildirim – Rijkswaterstaat CIV








    Leen van Gelder (COB/Soltegro) was chairman of the ISAC Tunnels and coordinator of the working group during phases one and two (and the publication of the living document). Jasper Kimstra (COB/Kimpro) is project leader of the working group in phases three, four and five (and publications 3.0, 4.0 and 5.0 of the living document).

    Tom van Tintelen (COB/DON Bureau) is the current chairman of ISAC Tunnels.

    Cindy Peek of COB provides support to the groups.

    Appendix 8 Abbreviations [link id=”skkwg”]


    • AIVD – General intelligence and security service
    • BIG – Dutch municipalities information security baseline
    • BIA – Business impact analysis
    • BIO – Government information security baseline
    • BIR – Central government information security baseline
    • BIWA – Water authorities information security baseline
    • CERT – Computer emergency response team
    • CMDB – Configuration management database
    • CSMS – Cybersecurity management system
    • CI – Configuration item
    • CIO – Chief information officer
    • CSIRT – Computer security incident response team
    • DNS – Domain name system
    • DTC – Digital trust centre
    • FME(C)A – Failure mode, effects (and criticality) analysis
    • GDPR – General data protection regulation
    • GPRS – General packet radio service
    • HF – High frequency
    • IA – Industrial automation
    • IACS – Industrial automation and control systems
    • IBI – Interprovincial information security baseline
    • ICS – Industrial control systems
    • ICT – Information and communication technology
    • IEC – International electrotechnical commission
    • IP – Internet protocol
    • ISAC – Information sharing and analysis centre
    • IT – Information technology
    • Office automation
    • MAC address – Media access control address
    • MSP – Managed service providers
    • NCSC – National cyber security centre
    • NEN – Dutch standard
    • Network and information protection management
    • NIS – Network and information systems
    • NIST – National institute of standards and technology
    • OT – Operational technology
    • PC – Personal computer
    • RAMS – Reliability, availability, maintainability and safety
    • RAMSSHEEP – Reliability, availability, maintainability, safety, security, health, environment, economics, politics
    • RFC – Request for change
    • RI&E – Risk inventory and evaluation
    • SCADA – Supervisory control and data acquisition
    • SIEM – Security information and event management
    • SOC – Security operations centre
    • THTC – Team high tech crime
    • UMTS – Universal mobile telecommunications system
    • USB – Universal serial bus
    • VOG – Certificate of conduct
    • Regulation information protection Rijkswaterstaat
    • Wbp – Dutch data protection act


    Appendix 9 Standards and directives [link id=”twhsg”]

    List of the norms and guidelines cited in the living document, current as of 5 November 2020.


    Where an abbreviation is in common use, it is given in parentheses.

    • Algemene verordening gegevensbescherming
    • Netwerk- en informatieveiligheid (NIB Directive)
    • Wet beveiliging netwerk- en informatiesystemen
    • Wet aanvullende regels veiligheid wegtunnels
    • Regeling aanvullende regels veiligheid wegtunnels
    • Wet lokaal spoor
    • Systems and software engineering — System life cycle processes (ISO 15288)
    • Information security management (ISO 27001)
    • Information technology — Security techniques — Code of practice for information security controls (ISO 27002)
    • Information technology — Security techniques — Information security management systems — Guidance (ISO 27003)
    • Information security risk management (ISO 27005)
    • Information security incident management (ISO 27035)
    • Extension to ISO/IEC 27001 and ISO/IEC 27002 for privacy information management — Requirements and guidelines (ISO 27701)
    • Risk management (ISO 31000)
    • Asset management — Overview, principles and terminology (ISO 55000)
    • Security for industrial automation and control systems (IEC 62443)
    • Voorschrift informatiebeveiliging Rijksdienst (VIR)
    • Voorschrift Informatiebeveiliging Rijksdienst – Bijzondere Informatie (VIR-BI)
    • Handreiking prestatiegestuurde risicoanalyses, Rijkswaterstaat
    • Leidraad systems engineering
    • Cybersecurity implementatierichtlijn objecten RWS, version 2.4 (CSIR)
    • Integraal projectmanagement (IPM), Rijkswaterstaat
    • Werkwijzebeschrijving 00044 – Verificatie en validatie
    • Federal incident reporting guidelines
    • Computer security incident handling guide
    • Cyber security incident response guide
    • National institute of standards and technology
    • Handreiking risicoanalyse, National Advisory Centre for the Critical Infrastructure (NAVI)