In the first of a two-part series about why humane technology is vital for humanity, we focus on why we need it — and the first three steps to realising it.
The world is a very different place than it was. And that’s not all down to covid, although it would be remiss to write a piece about humane technology without acknowledging that cyberattacks have increased 600 per cent since the pandemic. So, OK, it’s partly down to covid. We are no strangers to challenges, but we live in an unprecedented time in which threats can become global in hours: a pathogen can travel from a remote village to major cities on every continent in as little as 36 hours. With humanity under threat, we need technology that unites us and enhances our collective capability to mitigate catastrophic risks. So, inspired by the Center for Humane Technology’s teaching in this area, here are the first three of six steps to help you wrap your head around how humane technology can help us, together, to find our way through …
Let’s start at the very beginning. You might be thinking: what the heck is humane technology, anyway? Well, humane technology, in a proverbial nutshell, means aligning technology with human needs rather than exploiting human vulnerabilities for profit; it’s focused on providing benefits for users and society without negatively impacting people’s lives. It’s preventing disinformation, extremism, the promotion of discord and information overload. It’s mitigating the issues associated with social media, such as addiction, political polarisation and social isolation. And it’s enhancing the quality of the content that people encounter, managing the spread of hate speech and propaganda and reducing their effects on individuals and society, which might include holding platforms accountable for the types of content shared through them and mandating algorithmic transparency.
While we may have an awareness of what humane technology is, it’s not always straightforward to implement, given that our relationship with technology sits within a complex system of human vulnerabilities, economic and social mechanisms, and deeply held paradigms of thought. That’s why intervention is needed: adopting a new paradigm can be an effective way to change a system, bringing about tangible, enduring impact, and change for the better.
Cue the six tenets of humane technology that can help us progress in this brave new world — centred around respecting rather than exploiting human nature; accounting for and minimising negative consequences; prioritising values over metrics; supporting the shared understanding necessary for effective democracies; promoting fairness and justice; and helping people thrive.
Let’s look at the first three …
1. Respect human nature
Mental shortcuts (or heuristics, if you’re feeling fancy) have helped us survive for generations. They’re the reason we’re conditioned to pay more attention to dangerous stimuli; why we remember things that hurt us more vividly than those that help us; and why we prioritise information we learned recently over memories from yesteryear. But these hardwired biases, which evolved to help us escape the jaws of a ravenous lion and hunt for our dinner, can be exploited by technology for short-term profit. We believe more in our own abilities than in the possibility of being shaped, influenced and manipulated by what we may perceive as a harmless scroll; just because we click on something doesn’t mean that we want it or that it aligns with our values. Humane technologists understand these biases and build products that respect human vulnerabilities rather than exploit them.
Key takeaway: Rather than assuming intent based on what users reveal via engagement metrics, it’s important to focus on understanding users’ values and goals to help them refine and navigate their intentions.
2. Minimise harmful consequences
The world is full of detrimental externalities: costs that aren’t ‘priced in’ to products but show up elsewhere. We might, for example, be able to get items delivered the next day, but this convenience comes at the cost of poor working conditions and significant environmental damage from carbon dioxide emissions and packaging waste. For humane technologists, identifying and accounting for the externalities of their products is an ongoing process, one of asking what the world needs from them in order to build systems that strengthen open digital societies and solve problems collaboratively. With an awareness and understanding of externalities, and by factoring them in as design constraints, a considered, viable design process is possible.
Key takeaway: It’s possible to minimise harmful consequences by matching KPIs with anti-KPI ‘measures of failure’; by engaging with and listening to the people who might be most negatively impacted by a product; and by self-educating on the existing activism, learning and lived experience around those externalities.
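For the more technically minded, the KPI/anti-KPI pairing can be sketched as a simple data structure. This is a hypothetical illustration, not a prescribed implementation; the metric names and thresholds are invented for the example.

```python
from dataclasses import dataclass

# Sketch: each success metric (KPI) is paired with a 'measure of
# failure' (anti-KPI), so harm is tracked alongside growth.
@dataclass
class MetricPair:
    kpi_name: str
    kpi_value: float
    anti_kpi_name: str
    anti_kpi_value: float
    anti_kpi_limit: float  # threshold beyond which the harm is unacceptable

    def within_bounds(self) -> bool:
        # A KPI 'win' only counts if its paired harm stays under the limit.
        return self.anti_kpi_value <= self.anti_kpi_limit

# Invented example values: engagement paired with self-reported regret.
pair = MetricPair(
    kpi_name="daily_active_minutes", kpi_value=34.0,
    anti_kpi_name="sessions_users_regret_pct", anti_kpi_value=12.0,
    anti_kpi_limit=10.0,
)
print(pair.within_bounds())  # False: the harm measure has exceeded its limit
```

The point of the pairing is that the product team can no longer celebrate the KPI in isolation; the failure measure travels with it.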
3. Centre on values
Values are ingrained in whatever is built, and what an organisation chooses to prioritise is an expression of its values. But questions need to be asked to rigorously assess those so-called values, our own and those of other companies: In what ways is this true? Is there any evidence that the opposite is happening? We all need to notice when we are not upholding our values, so that we can defend them, and be mindful not to confuse what can be easily measured with what matters. In short, what we measure is what we value.
Key takeaway: When designing digital experiences at scale that throw up complex needs and subsequent difficulties around prioritisation, ask: What values do you want your output to enable in the world?
Look out for part two next week. We help our clients around the world to create digital experiences that make a positive impact on people’s lives. Need help achieving that in your organisation? Get in touch.