
When AI Gets It Wrong: The Deadly Miscalculations of VioGén for Women

Words by Ruby Real (she/her)


 
CW: Domestic Violence, Gender-based Violence, Police Profiling 

In the 2002 thriller Minority Report, we are transported to the year 2054, where a pre-crime unit uses psychic powers to predict violent crime before it occurs. This portrayal of 'pre-emptive justice' forces us to confront the reliability of predictive technologies in our own world: can they really deliver security, and can we trust them? As we increasingly turn to algorithms and data-driven methods in law enforcement, we must grapple with the limitations and potential dangers these systems pose, especially when it comes to objectivity, bias, and the discrimination already embedded in our social foundations. How can we reconcile the promise of crime prevention with the dangers of algorithmic error? What happens when these systems get it wrong, and most importantly, who is held accountable? 


The real-world parallel to this fictional technology is Spain's VioGén (Integral Monitoring System in Cases of Gender Violence), a system designed to assess and manage the risks of domestic violence. In operation for over fifteen years, VioGén has registered more cases than any other system of its kind globally: over three million, as reported in 2022.


The algorithm uses classic statistical models to produce a risk score, with categories that directly correspond to the level of protection subsequently offered to the victim. VioGén embodies the promise of predictive technologies: it is intended to enhance the safety of vulnerable people and to provide a structured way of forecasting and responding to gender-based violence. But whilst the algorithm is undoubtedly a pioneer of its kind, external audits of the system have raised alarming concerns about its transparency, accountability and effectiveness. 


The most acute criticism of VioGén is its risk classification system. When women come forward to report domestic violence, the police use a standardised 39-item questionnaire to input data into VioGén. This information is then supposed to objectively determine the victim's risk level, ranked from 'non-existent' to 'extreme', and guide the appropriate level of safeguarding and intervention. There are already a multitude of barriers that prevent women from seeking state intervention in situations as precarious and enmeshed as intimate partner violence, yet even a system created to protect them underestimates the risks they face. 
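VioGén's actual items, weights and scoring logic are not published here, so the following is only a minimal Python sketch of how a weighted questionnaire might be turned into a risk tier. Every item name, weight and threshold below is an invented assumption, not the real protocol.

# Illustrative sketch only: VioGén's real items, weights, thresholds and
# scoring logic are not published here, so every number below is invented.

RISK_LEVELS = ["non-existent", "low", "medium", "high", "extreme"]

# Hypothetical cut-offs mapping a weighted sum onto a risk tier.
THRESHOLDS = [(0, "non-existent"), (5, "low"), (12, "medium"), (20, "high"), (30, "extreme")]

def score_questionnaire(answers, weights):
    """Sum the weights of every item answered 'yes' on the intake questionnaire."""
    return sum(weights[item] for item, answered_yes in answers.items() if answered_yes)

def classify(score):
    """Return the highest tier whose cut-off the score reaches."""
    level = RISK_LEVELS[0]
    for cutoff, name in THRESHOLDS:
        if score >= cutoff:
            level = name
    return level

# Example with three invented items, two answered 'yes'.
weights = {"prior_threats": 8.0, "weapon_access": 10.0, "recent_separation": 6.0}
answers = {"prior_threats": True, "weapon_access": False, "recent_separation": True}
print(classify(score_questionnaire(answers, weights)))   # prints "medium" (score 14.0)

The point of the sketch is simply that a fixed list of yes/no items and hard cut-offs leaves no room for nuance: anything the questionnaire does not ask about cannot raise the score.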

Under its current design, the algorithm's risk analysis is shaped not only by the information gathered in the initial assessment but also by the wider distribution of gender-based violence cases that the system has to manage. 


In 2021, only one in seven women who sought police protection actually received it. This discrepancy reveals a troubling trend in which resource allocation and funding constraints disproportionately affect women's access to protection, reflecting broader societal issues of gender inequality and systemic marginalisation. 


The algorithm's design allows many women to fall through the cracks, with rigid questioning that leaves no room for nuance and fails to consider the psychological nature of abuse. Moreover, the system’s objectivity is compromised by its resource limitations. VioGén is limited to a finite number of ‘extreme’ risk scores, meaning that budget constraints directly affect the likelihood that women will receive adequate protection. Many cases are dismissed with a ‘no apreciado’ (non-existent) or ‘bajo’ (low) risk score, reflecting both a technological failure and a critical fault in how gendered violence is institutionalised.
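The article's claim is that the number of 'extreme' classifications is effectively capped by available resources, not that any particular mechanism is used. As a hedged illustration only, a hard quota on the top tier could demote otherwise-qualifying cases like this (the cap, threshold and scores are all invented):

# Illustrative sketch only: the cap, threshold and scores below are invented;
# the point is how a resource quota on the top tier pushes qualifying cases down.

def apply_quota(scored_cases, extreme_cap, extreme_threshold=30.0):
    """Keep only the top `extreme_cap` qualifying cases in the 'extreme' tier."""
    qualifying = sorted(
        (case for case in scored_cases if case[1] >= extreme_threshold),
        key=lambda case: case[1],
        reverse=True,
    )
    labels = {}
    for rank, (case_id, _score) in enumerate(qualifying):
        # Cases beyond the quota are demoted to 'high' even though they qualify.
        labels[case_id] = "extreme" if rank < extreme_cap else "high"
    return labels

cases = [("A", 41.0), ("B", 35.5), ("C", 31.0)]   # all three exceed the threshold
print(apply_quota(cases, extreme_cap=2))          # {'A': 'extreme', 'B': 'extreme', 'C': 'high'}

Under such a constraint, whether a woman is labelled 'extreme' depends not only on her own answers but on how many other high-scoring cases happen to be in the system at the time.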


This design flaw is not just a defect in the algorithm; it can be deadly. Of the fifteen women murdered in 2014 who had reported their aggressors for gender-based violence, fourteen had received a non-existent or low risk assessment from the police. This tragic outcome highlights how the intersection of inadequate technological solutions and systemic failures can have devastating consequences for women. 

Whilst the algorithm has undergone significant updates since 2014, most notably its new iteration, VioGén 5.0, which introduced a dual evaluation of the likelihood of recidivism and of lethal assault, it remains deeply flawed. VioGén is one of the most complex technologies of its kind, but if such an advanced system is failing those who most need protection, what does that mean for all the others like it? The system has undoubtedly done some good, but how do we address the broader issues it raises?

