According to the project’s GitHub page, Contrast Security’s goal with this project is to provide a clear, usable policy for managing the privacy and security risks that arise when organizations use Generative AI and Large Language Models (LLMs).
The policy primarily aims to address several key concerns:
1. Avoid situations where the ownership and intellectual property (IP) rights of software could be disputed later on.
2. Guard against the creation or use of AI-generated code that may include harmful elements.
3. Prohibit employees from feeding the organization’s or third parties’ proprietary data into public AI systems that may train on it (a minimal enforcement sketch follows this list).
4. Prevent unauthorized or insufficiently privileged individuals from accessing sensitive or confidential data.
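To make the third aim concrete, the sketch below shows one way an organization might gate prompts before they reach a public LLM. This is a minimal illustration, not part of the Contrast policy itself: the `PROPRIETARY_PATTERNS` markers and the `screen_prompt` helper are hypothetical, and a real deployment would rely on proper data-loss-prevention tooling rather than simple pattern matching.

```python
import re

# Hypothetical markers an organization might treat as proprietary;
# these are illustrative assumptions, not prescribed by the policy.
PROPRIETARY_PATTERNS = [
    re.compile(r"CONFIDENTIAL", re.IGNORECASE),
    re.compile(r"INTERNAL USE ONLY", re.IGNORECASE),
    re.compile(r"\bAPI[_-]?KEY\b", re.IGNORECASE),
]


def screen_prompt(prompt: str) -> str:
    """Raise if the prompt appears to contain proprietary data.

    Returns the prompt unchanged when no marker matches, so the check
    can be chained in front of any call to a public LLM API.
    """
    for pattern in PROPRIETARY_PATTERNS:
        if pattern.search(prompt):
            raise ValueError(
                f"Prompt blocked by AI policy: matched {pattern.pattern!r}"
            )
    return prompt


if __name__ == "__main__":
    screen_prompt("Summarize this public press release.")  # passes silently
    try:
        screen_prompt("CONFIDENTIAL: Q3 revenue projections")
    except ValueError as err:
        print(err)  # blocked before reaching any external AI service
```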
This open-source policy is designed as a foundation for CISOs, security experts, compliance teams, and risk professionals who are new to this field or who need a ready-made policy framework for their organizations.
“AI is no longer just a concept. It is embedded in our everyday lives, powering a vast array of systems and services, from personal assistants to financial analytics. As with any transformative technology, it is imperative that its use be governed by thoughtful and comprehensive policies to mitigate potential risks and ethical dilemmas,” David Lindner, Chief Information Security Officer at Contrast Security, stated in a blog post. “The Contrast Responsible AI Policy Project is a testament to our belief in transparency, cooperation and shared growth. As AI continues to evolve, we need to ensure that its potential is harnessed in a responsible and ethical manner. Having a clear, well-defined AI policy is essential for any organization implementing or planning to implement AI technologies.”