Insight: Chaucer’s Cyber Broker Breakfast explored the risks of AI in systems security
Discussion of Artificial Intelligence can barely be avoided these days; it is everywhere. From social media platforms and shopping tools to algorithmic client assessments and policy drafting in insurance, everyone is talking about how it is impacting our lives and its potential uses in business.
As with any new technology, however, understanding of what it could mean for the future, and real conversations about its drawbacks, can be limited. This month, Chaucer’s cyber team hosted a breakfast event for brokers, aimed at simplifying and opening up a discussion around how AI is used specifically in systems security, and at raising awareness of the potential risks of a future dominated by computer-driven decision making. The discussion began with what AI actually covers:
- Work done by computers with a process and output that resemble human intelligence
- Machine learning
- Large Language Models such as ChatGPT
The session then turned to how AI can be used in systems security:
- As artificial intelligence learns, it boosts developer productivity, making connections faster and creating a base for future uses
- This will speed up attempts to streamline and consolidate information systems
- AI itself can then hunt for vulnerabilities and carry out its own testing
- It could then be used to defend networks, creating and deploying patches
But the same capabilities create risks of their own:
- Whilst AI makes legitimate coding easier, it also makes malware coding easier: bad actors will need fewer skills and less knowledge to create threats
- Social engineering will become more sophisticated, with AI writing personalised messages to AI-identified targets
- When AI makes a mistake, it can be hard to spot, and mistakes introduce further vulnerabilities
- Overconfidence in tools can encourage insufficient human review
- AI systems are granted access to sensitive data and are performing more tasks previously undertaken by humans
- It creates an additional, potentially all-encompassing realm for companies to defend
- It creates supply chain vulnerabilities, as more companies and systems become exposed to all these potential pitfalls
As AI grows in ubiquity and sophistication, everyone in the insurance ecosystem, from clients through brokers to insurers, will need to consider carefully how and where they deploy AI. Awareness of the threats, and careful consideration of the data to be used in conjunction with AI, will help reduce the risks of a technology that can also revolutionise processes and analytics.
Chaucer is developing its approach, including revised proposal forms designed to capture as much information as possible on how AI is used at a Proposer’s company and on the potential vulnerabilities of its systems.
Published on 27.02.2025