
TechScape: The new law that could protect UK children online – as long as it works

- Alex Hern

The Online Safety Act in the UK is, quietly, one of the most important pieces of legislation to have come out of this government. Admittedly, the competition is slim. But as time goes by, and more and more of the act begins to take effect, we’re starting to see how it will reshape the internet.

From our story last week:

The Online Safety Act is much more than just the child-focused aspects, but these are some of the toughest powers handed to Ofcom under the new regulatory regime. Websites will be required to have age-verification tech to know which of their users are children – or, alternatively, to ensure that all their content is safe for children to use.

The content kids do see will need to be kept to a much tighter set of rules than the adult web, with some types of content – including pornography and material relating to suicide, self-harm and eating disorders – strictly banned from young people’s feeds.

Most immediately interesting, though, is the requirement I quoted above. It’s one of the first efforts anywhere in the world to impose a strict requirement on the curation algorithms that underpin most of the biggest social networks, and will see services like TikTok and Instagram required to suppress the spread of “violent, hateful or abusive material, online bullying, and content promoting dangerous challenges” on children’s accounts.

Some fear that Ofcom is trying to have its cake and eat it. The easiest way to suppress such content, after all, is to block it, something that doesn’t require faffing about with recommendation algorithms. Anything less, and there’s an inherent gamble: is it worth risking a hefty fine from Ofcom if you decide to allow some violent material on to children’s feeds, even if you can argue you’ve suppressed it below where it would normally appear?

It might seem an easy fear to dismiss. Who’s going to bat for the right to show violence to children? But I’m already counting down the days until a well-meaning government awareness campaign – maybe about safer streets, maybe something related to drug policy – gets suppressed or blocked under these rules, and the pendulum swings back the other way. Jim Killock, the head of Open Rights Group, an internet policy thinktank, said he was “concerned that educational and help material, especially where it relates to sexuality, gender identity, drugs and other sensitive topics may be denied to young people by moderation systems”.

Of course, there is opposition from the other direction, too. The Online Safety Act was engineered to sit squarely in the Goldilocks zone of policy, after all:

And so, where Killock is concerned about the chilling effect, others worry this act hasn’t gone far enough. Beeban Kidron, a cross-bench peer and one of the leading proponents of children’s online safety rules, worries that the whole thing is too broad-brush to be helpful. She wrote in the FT (£):

The code is out for consultation, but my sense is that it’s a formality; everyone involved seems to expect the rules as written to be largely unchanged by the time they become binding later this year. But the fight over what a child-safe internet means is only just beginning.

AI think therefore AI am

One of the reasons I still find the AI sector fascinating – even though I know many readers have rather made up their minds on the whole thing – is that we’re still learning quite fundamental things about how artificial intelligence works.

Take step-by-step reasoning. One of the most useful discoveries in the field of “prompt engineering” was that LLMs such as GPT do much better at answering complex questions if you ask them to explain their thinking step-by-step before giving the answer.
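To make that concrete, here’s a minimal sketch in Python. Everything in it is illustrative rather than drawn from any particular paper: the bat-and-ball puzzle is a classic trick question, and the prompts would be sent to whatever model you happen to be testing.

```python
# A minimal sketch of the two prompting styles. The question and the
# exact wording are illustrative; sending each prompt to a model is
# left to whatever chat API you use.

question = (
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 "
    "more than the ball. How much does the ball cost?"
)

# Direct prompting: the model has to commit to an answer straight away.
direct_prompt = question + "\nAnswer with just a number."

# Step-by-step prompting: the model is invited to write out its
# intermediate reasoning first, which in practice tends to improve
# accuracy on multi-step questions like this one.
step_by_step_prompt = (
    question + "\nLet's think step by step, then give the final answer."
)

print(direct_prompt)
print()
print(step_by_step_prompt)
```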

There are two possible reasons for this, which you can anthropomorphise as “memory” and “thinking”. The first is that LLMs have no ability to reason silently. All they do is generate the next word [technically, the next “token”, sometimes just a fragment of a word] in the sentence, meaning that, unless they’re actively generating new tokens, their ability to handle complex thoughts is constrained. By asking them to “think step-by-step”, you’re allowing the system to write down each part of its answer, and use those intermediate steps to come to its final conclusion.
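You can see the “memory” idea in a toy generation loop. The stand-in “model” below is just a canned list of tokens, not a real network, but the mechanics are faithful: each new token is appended to the context, and that growing transcript is the only working memory the system has.

```python
# Toy illustration of the "scratch pad" view. A canned token list
# stands in for a real language model so the loop is runnable.

SCRIPTED_REPLY = [" The", " ball", " costs", " 5", " cents", ".", "<eos>"]

def next_token(context: str, step: int) -> str:
    """Stand-in for one forward pass of a language model."""
    return SCRIPTED_REPLY[step]

context = "Q: How much does the ball cost? A:"
for step in range(len(SCRIPTED_REPLY)):
    token = next_token(context, step)
    if token == "<eos>":  # the model signals it is finished
        break
    # Each emitted token is appended to the context, so every later
    # prediction can see it: the transcript is the model's only memory.
    context += token

print(context)
```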

The other possibility is that step-by-step reasoning literally lets the system do more thinking. Each time an LLM prints a token, it does one pass through its neural network. No matter how difficult the next token is, it can’t do more or less thinking about what it should be (this is wrong, but it’s wrong in the same way that everything you learned about atoms in school is wrong). Step-by-step thinking might help change that: letting the system spend more passes to answer a question gives it more thinking time. If that’s the case, step-by-step thinking is less like a scratch pad, and more like stalling for time while you answer a hard question.
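If that second hypothesis is right, the difference caricatures down to simple arithmetic. In the toy tally below, whitespace splitting stands in for a real tokeniser and the transcripts are invented, but the point survives: a longer transcript is literally more compute.

```python
# Toy tally of "thinking time": one emitted token = one forward pass
# through the network. Whitespace splitting stands in for a real
# tokeniser; both transcripts are invented for illustration.

def forward_passes(transcript: str) -> int:
    return len(transcript.split())

direct = "5 cents."
step_by_step = (
    "If the ball costs x, the bat costs x + 1.00, so "
    "2x + 1.00 = 1.10, which gives x = 0.05. The ball costs 5 cents."
)

print("direct:", forward_passes(direct), "passes")
print("step by step:", forward_passes(step_by_step), "passes")
```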

So which is it? A new paper suggests the latter:

In other words, if you teach a chatbot to just print a dot each time it wants to think, it gets better at thinking. That is, the researchers warn, easier said than done. But the discovery has important ramifications for how we use LLMs, in part because it suggests that what the systems write when they show their working might not be all that relevant to the final answer. If your reasoning can be replaced with a lot of periods, you were probably doing the real work in your head anyway.
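Schematically, the filler trick looks like the tally below. The caveat above matters: a model has to be trained to make use of the filler compute, so a run of dots won’t help an off-the-shelf chatbot, and the counts here are purely illustrative.

```python
# Toy tally of the filler-token idea: a trained model emits a run of
# meaningless dots before answering, buying extra forward passes with
# no readable reasoning in the transcript. An off-the-shelf chatbot
# gains nothing from this without being trained for it.

filler_transcript = ". " * 40 + "5 cents."

# One emitted token = one forward pass, as in the earlier tally.
print("filler dots:", len(filler_transcript.split()), "passes")
```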

If you want to read the complete version of the newsletter, please subscribe to receive TechScape in your inbox every Tuesday.

The Online Safety Act will reshape the internet for kids in the UK. Photograph: Bloomberg/Getty Images

AI. Photograph: JYPIX/Alamy
