Key Takeaways:
- Cybersecurity Risks of AI in Finance: Regulators are increasingly focusing on mitigating the unique cybersecurity risks posed by the use of AI in financial services.
- Proactive Risk Management: Financial institutions need to be proactive in understanding, assessing, and mitigating AI-related risks as part of their cybersecurity frameworks.
- Building Expertise: Developing internal expertise or leveraging external resources is crucial for effectively managing AI risks.
- Multi-layered Security Approach: A layered security approach with overlapping controls is essential to minimize the impact of potential cyberattacks targeting AI systems.
Key Facts and Ideas:
- The New York State Department of Financial Services (NYDFS) issued guidance advising financial institutions to address cybersecurity risks related to AI, including:
  - Social Engineering: AI can be exploited to create highly sophisticated phishing attacks and manipulate individuals into divulging sensitive information.
  - Cyberattacks: AI-powered attacks can be more effective at bypassing traditional security measures and exploiting vulnerabilities.
  - Data Theft: AI can be used to identify and exfiltrate valuable nonpublic information from financial institutions.
- The guidance imposes no new requirements; it reinforces existing cybersecurity regulations and emphasizes the need for AI-specific risk management.
- NYDFS Superintendent Adrienne Harris stressed the importance of expertise: “I think it’s really about making sure there’s expertise in the institution, making sure they’re engaging with lots of stakeholders, so they understand the development of the technology.”
- The guidance recommends a multi-layered security approach with overlapping controls, so that if one control fails or is bypassed, the others still limit an attacker’s reach (a minimal illustration of this pattern follows this list).
- Beyond New York: While California’s governor recently vetoed a bill focusing on large AI models, the debate over AI regulation continues at both the state and federal levels.
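
The layered-controls recommendation can be made concrete with a short sketch. The Python example below is purely illustrative and not drawn from the guidance: a hypothetical transaction-approval flow in which every control must pass independently, so defeating any single layer is not enough. All names, fields, and thresholds (`check_mfa`, `anomaly_score`, the `0.8` cutoff, the `10_000` limit) are assumptions for illustration.

```python
# A minimal sketch of overlapping ("defense-in-depth") controls for a
# hypothetical transaction-approval flow. Control names and thresholds
# are illustrative, not taken from the NYDFS guidance.
from dataclasses import dataclass

@dataclass
class Request:
    user_id: str
    mfa_verified: bool
    anomaly_score: float  # 0.0 (normal) .. 1.0 (highly anomalous); hypothetical scale
    amount: float

def check_mfa(req: Request) -> bool:
    # Layer 1: authentication control
    return req.mfa_verified

def check_anomaly(req: Request) -> bool:
    # Layer 2: behavioral/AI anomaly detection; the 0.8 cutoff is illustrative
    return req.anomaly_score < 0.8

def check_limit(req: Request) -> bool:
    # Layer 3: hard transaction limit, independent of the other layers
    return req.amount <= 10_000

CONTROLS = [check_mfa, check_anomaly, check_limit]

def approve(req: Request) -> bool:
    # Every layer must pass; bypassing one control alone is not enough,
    # which is the redundancy a layered approach provides.
    return all(control(req) for control in CONTROLS)

if __name__ == "__main__":
    req = Request(user_id="u1", mfa_verified=True, anomaly_score=0.2, amount=2_500)
    print(approve(req))  # True: all overlapping controls pass
```

The design point is the redundancy the guidance calls for: because the controls are independent, an AI-assisted attack that fools the anomaly model, for example, must still defeat authentication and the hard transaction limit separately.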
Quotes:
- “It’s about making sure that you’ve got the right expertise in-house—or that you’re otherwise seeking it through external parties—to make sure your institution is equipped to deal with the risk presented.” – Adrienne Harris, NYDFS Superintendent
- The NYDFS guidance states: “These controls should include a risk assessment and risk-based programs and procedures, the ability to conduct due diligence on third parties and vendors, cybersecurity training and data management.”
Looking Forward:
- Increased regulatory scrutiny of AI in finance is expected, potentially leading to more specific regulations and guidelines.
- Financial institutions need to prioritize developing robust AI risk management frameworks and investing in cybersecurity expertise.
- Collaboration between regulators, industry experts, and technology developers is crucial to ensure the responsible and secure use of AI in the financial sector.