New Zealand startup ThroughLine could steer AI-identified extremists towards deradicalisation support via humans and chatbots.
The initiative by ThroughLine aims to improve AI safety amid lawsuits accusing companies of failing to prevent, or even enabling, violence.

SYDNEY: People who show violent extremist tendencies on ChatGPT and other artificial intelligence platforms could in future be directed to human- and chatbot-based deradicalisation support through a new tool under development in New Zealand, the people behind it said.
The initiative is the latest attempt to address safety concerns in the face of a growing number of lawsuits accusing AI companies of failing to stop, and even enabling, violence. In February, OpenAI was threatened with intervention by the Canadian government after revealing that a person who carried out a deadly school shooting had been banned from the platform without the authorities being informed.

ThroughLine, a startup hired in recent years by ChatGPT owner OpenAI, as well as rivals Anthropic and Google, to redirect users to crisis support when they are flagged as being at risk of self-harm, domestic violence or an eating disorder, is also exploring ways to broaden its offering to include preventing violent extremism, its founder, former youth worker Elliot Taylor, said.

The company is in discussions with The Christchurch Call, an initiative to stamp out online hate formed after New Zealand's worst terrorist attack in 2019. Under the plan, the anti-extremism group would provide guidance while ThroughLine develops the intervention chatbot, Taylor said.

'It's something that we'd like to move toward and to do a better job of covering and then to be able to better support platforms,' Taylor said in an interview, adding that no timeframe had been set.

OpenAI confirmed the relationship with ThroughLine but declined to comment further. Anthropic and Google did not immediately respond to requests for comment.

Taylor's firm, which he runs from his home in rural New Zealand, has become a go-to for AI companies with its constantly checked network of 1,600 helplines in 180 countries. Once an AI detects signs of a potential mental health crisis, it routes the user to ThroughLine, which matches them with an available human-run service nearby. But ThroughLine's scope has so far been limited to specific categories, Taylor said.
The breadth of mental health struggles that people disclose online has exploded with the popularity of AI chatbots, and now includes dalliances with extremism, he added.

More chatbots, more problems

The anti-extremism tool would probably be a hybrid model, combining a chatbot trained to respond to people who show signs of extremism with referrals to real-world mental health services, Taylor said.

'We're not using the training data of a base LLM,' he said, referring to the generic datasets that large language model platforms use to form coherent text. 'We're working with the correct experts.'

The technology is currently being tested, but no release date has been set. Galen Lamphere-Englund, a counterterrorism adviser representing The Christchurch Call, said he hoped the product would be rolled out to moderators of gaming forums and to parents and caregivers who want to weed out extremism online.

A chatbot rerouting tool was 'a good and necessary idea because it recognises that it's not just content that is the problem, but relationship dynamics', said Henry Fraser, an AI researcher at Queensland University of Technology. The product's success may depend on questions of 'how good are follow-up mechanisms and how good are the structures and relationships that they direct people into addressing the problem', he said.

Taylor said follow-up features, including possible alerts to authorities about dangerous users, were still to be determined but would take into account any risk of triggering escalated behaviour. People in distress tended to share things online that they were too embarrassed to say to another person, he said, and governments risked compounding the danger if they pressured platforms to cut off users who engaged in sensitive conversations.
Heightened moderation of militancy-related content by platforms under pressure from law enforcement has pushed sympathisers towards less regulated alternatives such as Telegram, according to a 2025 study by New York University's Stern Center for Business and Human Rights.

'If you talk to an AI and disclose the crisis and it shuts down the conversation, no one knows that happened, and that person might still be without support,' Taylor said.