Artificial Intelligence and Risk Management: How Prepared Are You?
Did you know?
According to the US National Institute of Standards and Technology (NIST): AI systems are inherently socio-technical in nature, meaning societal dynamics and human behavior influence them. AI risks – and benefits – can emerge from the interplay of technical aspects combined with societal factors related to how a system is used, its interactions with other AI systems, who operates it, and the social context in which it is deployed.
Also according to NIST: AI risk management offers a path to minimize potential negative impacts of AI systems, such as threats to civil liberties and rights, while also providing opportunities to maximize positive impacts. Addressing, documenting, and managing AI risks and potential negative impacts effectively can lead to more trustworthy AI systems.
If your institution is using AI now, or planning to begin, we (CRMA) are available to assist with identifying and managing AI and AI-related risks.
Watch this space!