Telescope's AI Safety and Ethics Policy
Today, we’re publishing our AI Safety and Ethics Policy – a comprehensive framework designed to manage risks as we develop increasingly capable AI systems for financial services.
As AI models become more sophisticated in financial applications, they offer significant value in investment analysis and decision-making, while also presenting new challenges. Our policy focuses on preventing systemic risks and on maintaining compliance with the fast-evolving landscape of AI regulation.
“Financial institutions must balance innovation with responsibility. Our approach to AI safety isn’t just about compliance; it’s about building trust through transparent, ethical AI systems that deliver consistent, reliable outputs for investors,” says Kevin Algeo, Chief Strategy Officer at Telescope.
A structured approach to safety
Our framework is designed to align with key international standards, guidelines, and regulations governing AI implementation in financial services. Its core elements include:
- Rigorous pre-deployment validation
- Continuous monitoring of model outputs (see the sketch after this list)
- Clear human-in-the-loop structures for model oversight
- Technical controls to prevent misuse
- Regular human and automated monitoring
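To make the monitoring item above concrete, here is a minimal, purely illustrative sketch of an automated output check that routes questionable model responses to human review. Every name, threshold, and rule in it is a hypothetical assumption for illustration, not a description of Telescope's production systems.

```python
# Hypothetical sketch of a pre-delivery guardrail for model outputs.
# Thresholds, rules, and names are illustrative assumptions only.
import re
from dataclasses import dataclass, field

@dataclass
class ReviewResult:
    approved: bool
    flags: list[str] = field(default_factory=list)

# Assumed rule: promissory language is disallowed in investment commentary.
GUARANTEE_PATTERN = re.compile(r"\b(guaranteed|risk-free|certain to)\b", re.IGNORECASE)

def review_output(text: str, confidence: float, confidence_floor: float = 0.7) -> ReviewResult:
    """Flag a model output for human review instead of releasing it directly."""
    flags = []
    # Assumed rule: low model confidence routes the output to a human reviewer.
    if confidence < confidence_floor:
        flags.append(f"confidence {confidence:.2f} below floor {confidence_floor:.2f}")
    if GUARANTEE_PATTERN.search(text):
        flags.append("contains promissory language")
    return ReviewResult(approved=not flags, flags=flags)

if __name__ == "__main__":
    result = review_output("This fund is guaranteed to outperform.", confidence=0.9)
    print(result)  # ReviewResult(approved=False, flags=['contains promissory language'])
```

In a full system, any flagged output would be held back and queued for the human-in-the-loop review described above, rather than flowing directly into an investor-facing workflow.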
From a business perspective, this policy enhances rather than constrains our product offerings. It should be viewed as analogous to standard risk management practices in financial services.
While these commitments represent our current approach, we recognize that the rapid evolution of AI technology requires continuous refinement of safety measures. We will update this framework regularly as industry standards and technical capabilities advance.