Twenty tech firms, including OpenAI, Google and Meta, have signed a new accord aimed at combating the deceptive use of AI in 2024 elections. It’s largely symbolic, and the companies aren’t proposing any slowdown in the development of the generative technology behind deepfakes. Instead, the focus is on technological fixes. That includes “watermarking” (baking invisible signals into AI-generated content to identify it as such) and improving AI detection algorithms. There’s little guarantee such technologies will actually be effective: researchers have consistently found that existing watermarking techniques can be bypassed simply by adding random “noise” that distorts the watermark signal, as the sketch below illustrates.

Meanwhile, AI advances continue apace. Last week OpenAI announced Sora, its latest AI video model, whose realism really needs to be seen to be believed. The potential for generating deceptive content is obvious. OpenAI have held back from releasing it publicly straight away, and say they are building in safety features, but they are unlikely to delay for long given that their competitors are working on similar systems.
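To make the noise attack concrete, here is a minimal sketch. It uses a deliberately naive least-significant-bit (LSB) watermark as a stand-in; none of the signatories’ actual schemes work this way, and NumPy is assumed, but the principle the researchers exploit is the same: small perturbations scramble the embedded signal long before they visibly degrade the image.

```python
# Toy LSB watermark: hide one bit per pixel, then show that mild random
# noise wipes the watermark out while barely changing the image.
import numpy as np

rng = np.random.default_rng(0)

def embed(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Overwrite each pixel's least-significant bit with a watermark bit."""
    return (image & np.uint8(0xFE)) | bits.astype(np.uint8)

def extract(image: np.ndarray) -> np.ndarray:
    """Read the watermark back out of the least-significant bits."""
    return image & np.uint8(1)

# A stand-in 64x64 greyscale "AI-generated" image and a random watermark.
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
bits = rng.integers(0, 2, size=(64, 64), dtype=np.uint8)

marked = embed(image, bits)
print("recovered before attack:", np.mean(extract(marked) == bits))  # 1.0

# The attack: add small Gaussian noise (imperceptible at this amplitude)
# and re-quantise. The pixel parities are now effectively coin flips.
noise = np.rint(rng.normal(0.0, 2.0, size=marked.shape)).astype(int)
attacked = np.clip(marked.astype(int) + noise, 0, 255).astype(np.uint8)
print("recovered after attack: ", np.mean(extract(attacked) == bits))  # ~0.5
```

Production watermarks spread the signal redundantly across the content rather than pixel by pixel, but published attacks scale the same idea: add enough distortion and the detector’s signal drains away before the image quality does.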