Will AI Write Your Cyber Policy?

Garrett Poorbaugh
Published
August 7, 2023
Last Updated
August 7, 2023 11:28 AM

Can AI Do My Laundry?

Over the past couple of months, we have felt the effects of new Artificial Intelligence (AI) tools seemingly redefining the workplace as we know it. News articles covering the wide range of their capabilities, and how they may impact writers, coders, marketers, and much, much more, have really challenged our understanding of what work “we humans” are uniquely capable of doing. When we step back and really look at what AI is capable of, it mostly ties back to data and word processing. One aspect of cybersecurity that has historically been extremely time-consuming and, let's be honest, life-draining is the writing of cybersecurity policy and standards. Does AI have what it takes to write cybersecurity policy for us, or should we still be the ones to drive the direction of the organization?

 

We know AI is great at writing, but how about legal writing?

Struggling with Statutes

Regulations and statutory requirements are often the center of gravity pulling cybersecurity policy together, ensuring that businesses operate within the parameters set by governing authorities. As such, cybersecurity policy must include the policy documents, references, and capabilities that these regulations mandate. While AI can no doubt generate overviews of these requirements, accurately citing them is something it can struggle with. One significant example of AI's questionable citing capabilities came when a legal team used it to generate a court filing, and the AI referenced court cases that did not exist. Although the system generated information that looked relevant to the untrained eye, these references were not factual. The system, unaware of its own mistakes, ended up providing misleading material, and in the context of legally binding documents, the discovery of this oversight cost the legal team its credibility and led to sanctions. This substantial risk must be acknowledged when entrusting AI with any form of legal accountability.

 

Tools of Times Past

Policy documentation should reflect the current technology landscape, highlighting the most effective tools and resources available to prevent cybersecurity threats. However, an AI model's knowledge cutoff prevents it from having up-to-date information on the latest technologies and tools available in the cybersecurity landscape. For instance, a model with a training cutoff in September 2021 would not be aware of advancements or changes that have occurred since. Consequently, any recommendations it makes concerning specific tools could be outdated or even obsolete, creating potential vulnerabilities. Policy is already difficult enough to apply and make truly valuable; undermining the pertinence of its scope and the application of its principles makes it even harder. If specific tools are being included or alluded to in policy, subject matter experts should make the final decision on whether their capabilities belong there.

 

Culture Concealed from Computers

A potent cybersecurity policy should resonate with an organization's culture, representing its ethos, philosophy, and business processes. Although AI can be fed data about an organization's culture, the generated policy often remains boilerplate, lacking the necessary customization. AI, in its current state, struggles to understand the intricate nuances of an organization's culture, including its shared values, beliefs, behaviors, and norms. As a result, it can't produce a truly personalized policy that aligns perfectly with a particular organization's culture. This point could be met with contention, however, with the argument that non-specific, broad policy accomplishes the goal just as well as policy generated with organizational culture in mind. This can be true for some policies, such as those that achieve concrete objectives, like a backup or incident response policy. Other policy items, however, such as terms of use or planning policies, should truly consider the intricacies of the organization to be successful.

 

Artificial Intelligence's Ideal Inclusion

Despite these limitations, AI can certainly take a seat at the table in the policy generation process. Its ability to process large amounts of data at high speed can help prototype policies quickly, providing a good starting point. Beyond providing that starting line, it can be useful for taking current templates and adapting their contents to the needs of the organization. AI is incredibly good at manipulating input, and passing an already well-written section of policy to AI to wordsmith or make recommendations on may be a good move. Asking AI to explain concepts that may be new to the policy writer, and using it to make research more efficient, are perfect uses of its capabilities without reaching a point of reliance.

 

AI has a place at the table; it just isn't a silver bullet.

The Time Cost of Trusting AI

A key point that should be made clear is that using AI is not always a time-saver. If you have a great starting point for a policy, or already have a good idea of where it needs to go, you may not want to use AI at all. Many subject matter experts use AI to help them iterate through ideas and brainstorm. However, AI's limitations mean that using it to generate drafts will likely require considerable human intervention to review, refine, and tailor them to a specific organization's needs. AI's outputs, though promising, are far from perfect and require knowledgeable human expertise for fine-tuning. This even assumes that all of the AI-generated information is cohesive and accurate, which is not always the case. Between crafting the correct prompt and double-checking the information, subject matter experts can spend more time correcting the mistakes AI makes during its crash course of idea generation than they would starting from a known good source.

 

Sometimes repairing something is more time-consuming than starting over from scratch.

Conclusion

At the end of the day, organizations that care about making useful policy, policy that does not bloat the organization or put it in the firing line of auditors, need to treat AI-generated policy with the same scrutiny we apply to policy templates from the internet. Whether it's adhering to regulations, citing specific tools, or reflecting an organization's culture, AI can miss critical elements that are essential for an effective cybersecurity policy. While AI can prototype ideas and provide a foundation to build upon, the responsibility ultimately falls on human experts to add the final, necessary touches that make a policy truly effective and provide peace of mind that regulatory requirements are being met, not misquoted.

Security Connections to Remember

  • Accurate legal referencing can be a challenge for AI, potentially risking your business's regulatory compliance.
  • Due to its knowledge cut-off, AI might suggest outdated cybersecurity tools, leading to increased vulnerabilities.
  • Crafting policies that truly reflect your unique business culture can be difficult for AI, potentially affecting policy effectiveness.
  • Even with AI's assistance, policy drafts often require substantial human intervention, so it may not be the time-saver you'd expect.

Stop Collecting, Start Connecting.

Copyright © 2022 Security Connections. All rights reserved.
