

In 2019, the government acknowledged that the Criminal Justice System (CJS) was failing survivors of rape and sexual assault, leading to a decrease in public trust. The Sarah Everard case further eroded this trust, whilst delays in processing digital evidence have further undermined police investigations and the prosecution process. In response, the Home Office’s £1.5m Domestic Abuse Perpetrators Research Fund awarded LSBU – with project partners De Montfort University, University of Brighton, and Edge Hill University – a feasibility study grant to test the viability of an AI program to identify perpetrators of domestic abuse using digital communications on mobile phones, while also examining police and survivor attitudes towards the use of AI in police investigations.

In February 2022, the project’s consortium, including Tirion Havard – Professor of Gender Abuse and Policy at HSC Social Work, Community and Public Health in The School of Allied and Community Health – set out to study how AI, specifically natural language processing (NLP), might be used to assist decision-making in the CJS in domestic abuse cases. The focus was on the analysis of digital communication between intimate partners, such as text messages and emails, using machine learning (ML), computational linguistics and NLP techniques. NLP uses algorithms to convert text into machine-readable numerical representations that enable a computer to interpret the context of sentences and paragraphs. Whilst NLP has been used in broader areas of criminal justice, the approach had not been applied to the analysis of technology-facilitated coercive control in domestic abuse cases. The project’s aim was to develop an enhanced natural-language tool for the analysis of sentiments, attitudes and emotions expressed in communication between survivors and perpetrators of domestic abuse.
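As a simplified sketch of this core idea (not the project's actual pipeline), converting text into machine-readable numbers can be illustrated with a basic bag-of-words vectoriser, where each sentence becomes a vector of word counts:

```python
# Minimal bag-of-words illustration: turning sentences into numeric vectors.
# The example sentences and helper functions are invented for demonstration.

def tokenize(text):
    """Lower-case a sentence and split it into word tokens."""
    return text.lower().replace(",", "").replace(".", "").split()

def build_vocabulary(sentences):
    """Map every distinct word across the corpus to a column index."""
    vocab = {}
    for sentence in sentences:
        for word in tokenize(sentence):
            vocab.setdefault(word, len(vocab))
    return vocab

def vectorize(sentence, vocab):
    """Count how often each vocabulary word occurs in the sentence."""
    vector = [0] * len(vocab)
    for word in tokenize(sentence):
        if word in vocab:
            vector[vocab[word]] += 1
    return vector

sentences = ["Where are you", "Why are you ignoring me", "Answer me now"]
vocab = build_vocabulary(sentences)
print(vectorize("Why are you ignoring me", vocab))
```

Real NLP systems use far richer representations than raw counts, but the principle is the same: once text is numeric, algorithms can compare, cluster and classify it.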

The high-profile cases of Sarah Everard and Sabina Nessa, along with instances of police misconduct, first highlighted the urgent need to address Violence Against Women and Girls (VAWG) and police culture.

The UK government had also acknowledged the failures of the CJS in addressing rape and sexual assault cases, along with concerns about the backlog of domestic abuse cases caused by the pandemic. Against this backdrop, the project sought the views of survivors and police staff on the potential of this AI-based approach to support police enquiries in domestic abuse cases. In 2019, only a small percentage of coercive control offences resulted in prosecution, highlighting the need for improved statistics and for police support in investigations. The increase in digital data contributes to delays in evidence gathering and case building, with lengthy and invasive investigations leading to high dropout rates among survivors, who can feel "digitally strip-searched".

Combining backgrounds and specialisms, the team sought to establish the effectiveness of NLP and ML in analysing digital communications to identify perpetrators and indicators of escalating abuse and risk. It also assessed whether the anonymity and discretion of NLP methods for data collection would encourage domestic abuse survivors to share digital data with police, and whether police would embrace NLP methods as a tool to analyse digital data quickly and support their investigations.

The broad subject matter was broken down into four work packages:

Work Package 1: Survivor Survey

An online survey was created, aimed at capturing survivors' experiences of sharing digital data with the police and at establishing whether AI techniques made survivors feel less digitally strip-searched. Purposive sampling and snowballing techniques were used to gather the views of survivors, whilst survivor charities were involved in the research and supported recruitment.

The survey showed that 70% of participants said they would be likely (or very likely) to share information (for example text messages) stored on their mobile phones if the police used a computer programme to help them investigate domestic abuse. 9% indicated they were unlikely to share this information, and fewer than 5% said they were very unlikely.

58% of participants were likely (or very likely) to worry that irrelevant information would be used against them in a police investigation. Only 22% were neither likely nor unlikely to worry about this.

Findings suggest optimism about AI technology’s role in decision-making processes, though there were reservations about the police’s ability to use it without bias. The concern around bias is understandable given the limited public understanding of AI, its use and potential for misuse, and recent media coverage of AI. Results showed that participants supported the use of AI by police because it was deemed to be fair. A separate study shows that it is the data, not the algorithm, that is open to bias. This suggests that educating the public about AI technology and its role in decision-making could help build trust between the public and the police. Further research is required to understand this within the context of domestic abuse.

Work Package 2: Police Survey

The police survey explored police attitudes regarding the model’s potential to support police with their investigations and securing successful outcomes. Purposive sampling was used to obtain the views of police staff and snowballing techniques were also employed. The majority (82%) of police staff who responded to the survey indicated that they would be willing or very willing to use AI technology to support them in identifying domestic abuse. Three-quarters felt that the technology would be helpful in the investigation of domestic abuse incidents.

The survey indicates that there is interest from the police to utilise AI technology in investigations, but reservations remain about its understanding of digital communication in the context of coercive control, which can often go unnoticed or misunderstood. The need for collaboration between the police and the CJS on AI ability and assessment of domestic abuse risk is highlighted.

Work Package 3: Digital Data Collection

Local and national media reports were scanned for reports that identified the use of digital communication in domestic abuse cases. Permission was then sought from the relevant crown courts for extracts from these court cases. Text messages presented in evidence by the prosecution and defence were extracted and used to test the effectiveness of an NLP system. In total, six transcripts, containing 219 messages, 389 sentences and 3,087 words, provided the data set for analysis by the NLP data mining systems.

This work package sought to identify suitable data from real-life digital conversations to test whether it would enhance the computational understanding of an NLP system. Findings showed that the use of court data significantly increased the reliability of the raw data analysed.

Work Package 4: Data Analysis and Modelling

This work package utilised two approaches to analyse the digital evidence obtained from court transcripts and model perpetrator behaviour:

Mining Natural Language Sources: Text mining and NLP techniques were used to extract, analyse, summarize, and assess the digital evidence. Innovative tools were developed to identify message threads, categorize message types, detect coercive terms, measure sentiment and emotions, and cluster messages into thematic groups. Messaging patterns such as frequency, length, and submission times were also considered.
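A toy sketch of two of these steps, detecting coercive terms and scoring sentiment against word lexicons, might look like the following. The word lists are invented for illustration and are not the project's actual lexicons or tools:

```python
# Illustrative sketch: flagging coercive terms and scoring message sentiment
# against small hand-made lexicons. All word lists are assumptions made for
# demonstration purposes only.

COERCIVE_TERMS = {"allowed", "permission", "forbid", "checking", "track"}
NEGATIVE_WORDS = {"angry", "hate", "stupid", "never"}
POSITIVE_WORDS = {"love", "thanks", "great", "sorry"}

def analyse_message(text):
    """Return the coercive terms found and a crude lexicon-based sentiment score."""
    words = set(text.lower().split())
    return {
        "coercive_terms": sorted(words & COERCIVE_TERMS),
        "sentiment": len(words & POSITIVE_WORDS) - len(words & NEGATIVE_WORDS),
    }

print(analyse_message("You are not allowed out without my permission"))
```

Production systems would use trained sentiment models and context-aware matching rather than fixed word sets, but the sketch shows how individual messages can be turned into structured signals for further analysis.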

Modelling Perpetrator Behaviour: ML techniques were employed to model the behaviour extracted from court transcripts, with the transcripts serving as positive cases of coercive behaviour for training the ML model. Textual communication from unrelated sources was used as negative cases.
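The positive/negative training setup described above can be sketched with a small Naive Bayes text classifier written from scratch. The classifier choice and the tiny labelled data set are assumptions for illustration; the project's actual model is not specified here:

```python
# Toy Naive Bayes text classifier trained on labelled example messages
# (positive "coercive" cases vs negative "neutral" cases). The training
# data and model choice are illustrative assumptions only.
import math
from collections import Counter

def train(labelled_texts):
    """labelled_texts: list of (text, label) pairs. Returns word counts and label totals."""
    counts = {}          # label -> Counter of word frequencies
    totals = Counter()   # label -> number of training documents
    for text, label in labelled_texts:
        totals[label] += 1
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts, totals

def classify(text, counts, totals):
    """Pick the label with the highest log-probability, using Laplace smoothing."""
    vocab = {w for c in counts.values() for w in c}
    best_label, best_score = None, -math.inf
    for label, word_counts in counts.items():
        score = math.log(totals[label] / sum(totals.values()))
        denom = sum(word_counts.values()) + len(vocab)
        for word in text.lower().split():
            score += math.log((word_counts[word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

data = [
    ("why did you not answer me where were you", "coercive"),
    ("you are not going out tonight", "coercive"),
    ("see you at dinner later", "neutral"),
    ("thanks for the lift today", "neutral"),
]
counts, totals = train(data)
print(classify("where were you tonight", counts, totals))
```

A real model would need far more training data and careful handling of class imbalance, which is why the court transcripts served as the scarce positive cases.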

A lab-based evaluation was conducted to assess the performance of the NLP and ML models in identifying coercive behaviour and modelling perpetrator behaviour from written text, focusing on speed and accuracy. The goal was to test the program's ability to identify (alleged) perpetrators of domestic abuse and indicators of abuse escalation and risk.
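Accuracy in such a lab evaluation is typically reported alongside precision and recall for the positive class. A minimal sketch, using invented example labels rather than the study's actual results:

```python
# Illustrative sketch of classification metrics a lab evaluation might report.
# The truth/prediction lists below are invented example data.

def evaluate(true_labels, predicted_labels, positive="coercive"):
    """Return accuracy, and precision/recall for the positive class."""
    tp = fp = fn = correct = 0
    for truth, pred in zip(true_labels, predicted_labels):
        if truth == pred:
            correct += 1
        if pred == positive and truth == positive:
            tp += 1          # true positive: coercive message correctly flagged
        elif pred == positive:
            fp += 1          # false positive: neutral message wrongly flagged
        elif truth == positive:
            fn += 1          # false negative: coercive message missed
    return {
        "accuracy": correct / len(true_labels),
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
    }

truth = ["coercive", "coercive", "neutral", "neutral"]
preds = ["coercive", "neutral", "neutral", "coercive"]
print(evaluate(truth, preds))
```

In this context recall matters particularly: a missed coercive message (false negative) is a missed indicator of risk, so accuracy alone is an incomplete measure.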

Importance / Impact

It addresses government promises that no adult victim of rape should be without a phone for more than 24 hours. This technology would speed up the police's ability to process digital data, reduce the length of time they hold survivors' phones, limit delays in the CJS and reduce the number of survivors who withdraw their complaints.

It makes a significant contribution to the use of AI to assist decision-making within the CJS in domestic abuse cases, and is the first time survivor views on the role of AI in police investigations have been sought.

Conclusions

Due to the limited data collected over such a brief time, the consortium considers the findings of this study insightful but not conclusive. However, the results are useful in establishing that the model is feasible; further research with larger data sets is needed to generate a higher level of confidence in the AI tool.

It is also worth noting that the research aligns with current policy by utilising machine learning and data modelling to analyse written communication in domestic abuse cases. However, although text mining and NLP techniques have been used (sparingly) in the criminal justice arena, no existing study has employed them to analyse technology-facilitated coercive control (TFCC) in domestic abuse cases. The work also distinguishes itself from previous studies by focusing on communication between perpetrators and survivors at the time of abuse, whilst including the perspectives of police staff and survivors, a novel approach that aims to build trust and partnerships in the CJS.

Recommendations / Potential

Early indications suggest cautious optimism towards the technology, with wider literature also supporting the strengthening of these relations. The study also highlights the need to address delays in the CJS, such as the prolonged retention of survivors' mobile phones, which along with other factors can lead to case withdrawals.

The research's proposed method supports the government's funding and directives to enhance digital investigation skills and expedite data processing. Implementing AI has the potential to improve public confidence in the police, and further research is needed to explore the opportunities and best practices in the CJS.

The work contributes significantly to the use of AI in assisting decision-making in the CJS regarding domestic abuse cases, specifically in the context of technology-facilitated coercive control.

If you are facing a similar challenge that you feel we could address, please do not hesitate to contact us.

The Project

This study is one of the 21 projects funded by the Home Office for research on perpetrators of domestic abuse.

People involved in the project:

  • Dr Tirion Havard, LSBU
  • Dr Nonso Nnamoko, Edge Hill University  
  • Dr Chris Magill, Brighton  
  • Cyndie Demeocq, LSBU  
  • Jack Procter, Edge Hill University  
  • Denise Harvey, LSBU
  • Prof Vanessa Bettinson, De Montfort University

London South Bank University undertook the feasibility study and collated the report.

Partner Information:

SAWN (Support and Action Women's Network)

RISE (Research and Innovation Staff Exchange)

Salus  

Rising Sun

Health Education Partnership

Funding Information:

Home Office Domestic Abuse Perpetrators Research Fund 2021 "The views reflected in this research are not necessarily those of the Home Office"

Information Links:

Can Artificial Intelligence be used to identify perpetrators of domestic abuse? | Social Work Today

https://www.dmu.ac.uk/about-dmu/news/2022/april/ai-tool-designed-to-identify-coercive-language-patterns-receives-home-office-funding.aspx

Alfie Foster
Digital Marketing Apprentice