The leading AI companies have agreed to a series of safeguards as the technology increases in sophistication
Meta, OpenAI, Google, Amazon, Microsoft and others have pledged to work within a framework designed in collaboration with the US government. It's a voluntary effort; there aren't any penalties if they break the pact.
Broadly, the rules are designed to make it easier for folks to spot AI content - which is certainly important as the US heads into the presidential election season early next year.
The companies agreed to:
- Security testing of their AI systems by internal and external experts before their release.
- Ensuring people can identify AI-generated content through watermarks.
- Publicly reporting AI capabilities and limitations on a regular basis.
- Researching risks such as bias, discrimination and invasion of privacy.
In the UK, the future of encryption is being tested
The new Online Safety Bill would allow Ofcom - the UK's communications regulator - to require tech companies to scan encrypted user data for child exploitation and counter-terrorism threats. It's interesting that they're seeking to give this power to a regulator rather than the courts, as is common for things like search warrants and detailed data collection about someone.
Those supporting the bill say it's needed to tackle "record levels" of child abuse hidden away from view. But privacy advocates say it's a step too far. The tech companies agree - Meta says it would pull WhatsApp from the UK, and Apple says it would pull FaceTime and iMessage. They don't want to create a backdoor to their global platforms for a single country, and broadly don't believe in breaking encryption.