Security Is a Human Problem: The Case for Developer-Centred Security

8 October 2024  |  Developer Security, Responsible Software Engineering

The popular narrative around software security tends to focus on tools: static analysers, vulnerability scanners, penetration testing frameworks, formal verification. These are valuable. But after years of studying how developers actually engage with security in practice, I am increasingly convinced that the tools are not the bottleneck. The people are — not because they are careless or incompetent, but because we have systematically designed security practice around threat models and compliance checklists rather than around the humans who must actually implement it.

What Research Tells Us About Developers and Security

The empirical picture that emerges from studying developers and security is both humbling and instructive. Developers are not indifferent to security. In study after study, including my own work on adaptive security interventions and freelance software development, we find that developers care about building secure software. They simply face a set of conditions that make acting on that concern enormously difficult.

Those conditions include: time pressure that frames security as a luxury; organisational cultures that reward feature delivery and treat security incidents as exceptional rather than systemic; tooling that interrupts workflow with warnings that are frequently false positives; and a security discourse so dominated by adversarial framing and specialist jargon that many developers feel it belongs to a different profession entirely.

In the context of freelance and gig-economy software development — an area my colleagues and I have studied closely — these pressures are amplified. Freelancers rarely have access to security review processes or a team to consult. Their security decisions are made in isolation, under commercial pressure, for clients who seldom specify security requirements explicitly.

Adaptive Security Interventions

One direction my research has pursued is the idea of adaptive security interventions — security support that responds to context, rather than applying the same checklist to every developer in every situation. The intuition is straightforward: a junior developer building their first web application has different needs from an experienced engineer working on safety-critical infrastructure. Treating them identically is both inefficient and counterproductive.

Adaptive interventions might consider a developer's prior experience, the nature of the codebase they are working on, the time pressure they are under, and the specific security decisions at stake. The goal is not to make security easier in a way that sacrifices rigour, but to make the path of least resistance more closely aligned with the secure path.
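To make the idea concrete, here is a deliberately toy sketch of how such context-sensitivity might be expressed in code. Everything in it — the signal names, the thresholds, the intervention labels — is my own illustrative assumption, not a description of any deployed system; the point is only that the intervention is a function of developer context rather than a fixed checklist.

```python
from dataclasses import dataclass

@dataclass
class DeveloperContext:
    """Hypothetical context signals an adaptive intervention might weigh."""
    years_experience: float   # rough proxy for prior security exposure
    safety_critical: bool     # is the codebase safety- or security-critical?
    under_deadline: bool      # is the developer under acute time pressure?

def select_intervention(ctx: DeveloperContext) -> str:
    """Match a support style to context (illustrative rules only)."""
    if ctx.safety_critical:
        # High-stakes code warrants full review regardless of experience.
        return "mandatory-security-review"
    if ctx.years_experience < 2:
        # Less experienced developers get guided, example-driven support.
        return "guided-checklist"
    if ctx.under_deadline:
        # Lightweight, non-blocking nudges when time pressure is high,
        # so the secure path stays the path of least resistance.
        return "inline-nudges"
    # Default: ordinary tooling feedback for experienced, unpressured work.
    return "standard-linter-warnings"
```

Even in this caricatured form, the design point is visible: the same developer receives different support in different situations, and the rules encode organisational judgements about risk, not just properties of the code.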

This is, at heart, a design challenge as much as a technical one. And it requires treating security as something that happens in social and organisational context, not merely as a property of code.

Social Considerations in Security Decision-Making

One of the more striking findings from our research is how strongly social factors shape security behaviour. Developers make security decisions in relation to colleagues, clients, managers, and perceived professional norms. A developer who privately believes a particular implementation is insecure may nonetheless ship it because raising the concern feels awkward, because they doubt their own assessment, or because the organisational incentive structure makes it the rational choice.

This means that improving security practice requires attending to organisational culture, team dynamics, and the incentive structures that frame individual decisions. Technical controls alone cannot fix a culture in which security concerns are systematically deprioritised or where the person who raises them is seen as blocking progress.

Responsible Software Engineering as a Frame

I increasingly think that developer-centred security is best understood as one facet of a broader agenda: responsible software engineering. Security is one dimension of responsible practice, but it sits alongside fairness, accessibility, privacy, and the wider social consequences of the systems we build. Framing security within that larger context — rather than as a specialist discipline cordoned off from the rest of engineering — may be one of the more productive moves available to us.

If we want developers to engage with security as a genuine value rather than a compliance obligation, we need to meet them where they are, speak to the concerns they actually have, and design security practice around the realities of their work. That is a harder problem than building a better scanner. But it is the problem that actually needs solving.
