Over the past couple of months, we have felt the effects of new artificial intelligence (AI) tools seemingly redefining the workplace as we know it. News articles touting the wide range of AI's capabilities, and how it may impact writers, coders, marketers, and much more, have challenged our understanding of what work "we humans" are uniquely capable of doing. When we step back and look at what AI actually does, it mostly ties back to processing data and text. One aspect of cybersecurity that has historically been extremely time-consuming and, let's be honest, life-draining is the writing of cybersecurity policy and standards. Does AI have what it takes to write cybersecurity policy for us, or should we still be the ones driving the direction of the organization?
Regulations and statutory requirements are often the center of gravity pulling together cybersecurity policy, ensuring that businesses operate within the parameters set by governing authorities. As such, cybersecurity policy must include the policy documents, references, and capabilities that these regulations mandate. While AI can no doubt generate overviews of these requirements, accurately citing them is something it can struggle with. One notable example of AI's questionable citation abilities came when a legal team used it to generate a court filing, and the AI cited court cases that did not exist. Although the output looked relevant to the untrained eye, the references were not factual. The system, unaware of its own mistakes, provided misleading material, and in the context of legally binding documents, the discovery of this oversight led to the case being promptly dismissed. This substantial risk must be acknowledged before entrusting AI with any form of legal accountability.
Policy documentation should reflect the current technology landscape, highlighting the most effective tools and resources available to prevent cybersecurity threats. However, an AI model's knowledge cutoff prevents it from having up-to-date information on the latest technologies and tools in that landscape. For instance, a model with a training cutoff in September 2021 would not be aware of advancements or changes that have occurred since. Consequently, any recommendations it makes concerning specific tools could be outdated or even obsolete, creating potential vulnerabilities. Policy is already hard enough to apply and make truly valuable; losing the pertinence of its scope and the applicability of its principles makes it even harder. If specific tools are included or alluded to in policy, subject matter experts should make the final call on whether those tools' capabilities belong there.
Despite these limitations, AI can certainly take a seat at the policy-generation table. Its ability to process large amounts of data at high speed can help prototype policies quickly, providing a good starting point. Beyond drafting that starting point, it can be useful for taking existing templates and adapting their contents to the needs of the organization. AI is remarkably good at manipulating input, so handing it an already well-written section of policy to wordsmith or make recommendations on can be a smart move. Asking AI to explain concepts that may be new to the policy writer, and using it to make research more efficient, is a perfect use of its capabilities without crossing into reliance.
A key point to make clear is that using AI is not always a timesaver. If you have a great starting point for a policy, or already have a good idea of where it needs to go, you may not want to use AI at all. Many subject matter experts use AI to iterate through ideas and brainstorm. However, AI's limitations mean that using it to generate drafts will likely require considerable human intervention to review, refine, and tailor them to a specific organization's needs. AI's outputs, though promising, are far from perfect and require knowledgeable human expertise for fine-tuning. This even assumes that all of the AI-generated information is cohesive and accurate, which is not always the case. Between crafting the right prompt and double-checking the output, subject matter experts can spend more time correcting AI's mistakes than they would have spent starting from a known good source.
At the end of the day, organizations that care about making useful policy, policy that neither bloats the organization nor puts it in the firing line of auditors, need to treat AI-generated policy with the same scrutiny as policy templates pulled from the internet. Whether it is adhering to regulations, citing specific tools, or reflecting an organization's culture, AI can miss critical elements that are essential for an effective cybersecurity policy. While AI can prototype ideas and provide a foundation to build upon, the responsibility ultimately falls on human experts to add the final touches that make a policy truly effective, providing peace of mind that regulatory requirements are being met, not misquoted.