
While technological innovation promises to solve many of humanity’s day-to-day problems, we rarely stop to consider what unexpected new problems all these new technologies also create.
Or, if we do, we often just develop even newer technologies to try to solve those new problems, or we attempt to tackle them with new laws and regulations, all of which can cause further problems down the line (just think of how much UX frustration is caused by the well-intentioned cookie consent banners that followed GDPR, for example).
So, how do we balance the benefits of technology innovation with the risks it poses? Does the revolutionary promise of technology outweigh all potential negative consequences?
Well, this is a complicated issue without a clear blanket solution, and as such it needs to be treated on a case-by-case basis. Broadly speaking, there are four main possibilities to consider:
- High risk, low reward: many “smart” technologies and technologies based on biometric data, where the tradeoff between privacy and convenience – one of the key issues of the digital era – comes into play
- Low risk, low reward: ever-increasing processing power and the planned obsolescence of laptops, smartphones, tablets, etc.; the proliferation of new and specialized digital frameworks
- Low risk, high reward: basic household technologies such as toasters and electric heaters; core programming languages and web technologies, e.g. Java and HTML
- High risk, high reward: high-impact fields such as AI, nuclear energy and digital banking
The key issue here is that technologies rarely stay contained within a single one of these categories, especially once newer technologies enter the equation and interact with existing innovations in new and sometimes unexpected ways.
Another factor that crucially influences how new technologies are perceived, and which of these categories they supposedly fall into, is the marketing and hype around them: the risks are often downplayed while the (supposed) benefits are touted as game-changing.
Subjectivity is also important to consider, since people hold widely differing opinions on how beneficial specific tradeoffs are, such as the tradeoff between privacy and convenience that seems to dominate areas like biometrics and smart technology.
Armed with the power of hindsight, we would likely classify a lot of the most important technological innovations into the “high risk, high reward” category. These tend to be innovations that are essential and/or widely used in people’s everyday lives, and the majority of us could hardly imagine living without them.
Of course, such important innovations should not be halted by high risk factors alone. Instead, we should keep innovating while remaining wary of both existing risks and potential ones that may intensify down the line.
Let’s take something like nuclear energy as an example. While most people are well aware of the benefits of nuclear power, as well as the need for alternative energy sources, the risks it poses are often deemed too great to justify innovation in the field, especially given the memory of the 2011 Fukushima nuclear accident.
So, if we are to continue innovating in nuclear energy, this must be done with a very strong “safety-first” approach in order to reap the rewards while minimizing the risks as much as possible.
This may mean better regulation, better training and stricter qualifications, and strict quality control at all stages, alongside other key measures (e.g. building nuclear power plants in areas that are not prone to natural disasters). Most likely, a combination of all of the above would be needed to maximize safety.
On the other hand, the story is slightly different when it comes to innovation in a field such as artificial intelligence. Recall the earlier point about how marketing and hype shape the perception of an innovation’s benefits and risks: there has been a great deal of AI hype in the past few years.
With AI, the benefits are marketed as revolutionary – and indeed, in some sectors they can be – but what we see more often than not are benefits in the form of cost cutting and productivity gains. While valuable to businesses, these are not essential to individuals’ everyday wellbeing.
The risks of AI, however, could have a huge impact on everyday wellbeing, and that is only considering misuse of AI, let alone deliberate abuse. With AI becoming such a ubiquitous element of the digital experience, even unintentional misuse is concerning: an otherwise beneficial AI system can still cause harm when biases in its data collection translate into a biased system.
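To make that concrete, here is a minimal, purely hypothetical sketch (the group names and counts are invented for illustration): a naive model that optimizes for overall accuracy on a skewed dataset can look respectable on paper while consistently failing the underrepresented group.

```python
# Hypothetical illustration: a skewed training set yields a "model" that
# looks accurate overall but fails the underrepresented group.
# All group names and counts are invented for demonstration purposes.

# Synthetic training data: (group, true_label) pairs.
# Group A is heavily overrepresented and mostly labeled 1;
# group B is rare and mostly labeled 0.
data = [("A", 1)] * 900 + [("A", 0)] * 50 + [("B", 0)] * 45 + [("B", 1)] * 5

# A naive model that simply predicts the majority label seen in training.
majority_label = max((0, 1), key=lambda lbl: sum(1 for _, y in data if y == lbl))

def predict(group):
    return majority_label  # group membership is ignored entirely

# Per-group evaluation exposes what the single overall number hides.
for group in ("A", "B"):
    rows = [(g, y) for g, y in data if g == group]
    correct = sum(1 for g, y in rows if predict(g) == y)
    print(f"group {group}: accuracy {correct / len(rows):.0%}")

overall = sum(1 for g, y in data if predict(g) == y) / len(data)
print(f"overall accuracy: {overall:.0%}")
```

Here the overall accuracy looks high precisely because the underrepresented group is too small to move it, which is why evaluating per group, rather than only in aggregate, is such an important part of catching this kind of bias.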
On the other side of the equation, however, we have innovation for the sake of nefarious purposes: deepfakes, voice clones and other security threats, where the negative impact is the goal rather than a by-product, or the result of negligence.
The third risk area to explore is the inherent risk of developing a technology that promises to be as dominant as artificial intelligence does. The pursuit of AGI (artificial general intelligence) could have devastating consequences if not done carefully.
We’ve all seen at least one film or TV show about an AI-driven dystopia, so we know that’s something we definitely don’t want. The main problem is that it is much more difficult to uncover and address issues in real time; with something like improperly developed AGI, it may well be too late to backtrack if we only spot the problems once the technology has already been built and deployed.
So, how can we make sure to responsibly innovate in AI and other technologies while minimizing the (new) risks they introduce?
- Responsible product development: the first thing to embrace is a responsible approach to developing both physical and digital products. Products developed responsibly, with the customer or user top of mind, should neither harm nor frustrate that customer or user. They should not prioritize profits or quick wins at the expense of ethics, but should instead be developed in a “people-first” manner.
- Ethical AI: speaking of ethics, it becomes particularly important when developing artificial intelligence. The basis of ethical AI is solid, unbiased, ethically collected and responsibly managed data. Having humans in the loop to review and tweak AI systems is also essential, as is training to ensure that those humans don’t end up introducing additional biases or risks into the system.
- Focus on privacy: with responsible data collection and management, it also becomes that much easier for businesses to respect the privacy of their users and customers. Collect only as much data as you actually need, be transparent about how you use it, and consider prioritizing first-party over third-party data when engaging with audiences online (a small data-minimization sketch follows below).
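To illustrate the “collect only what you need” principle, here is a minimal, purely hypothetical sketch in which an incoming record is filtered down to an explicit allowlist of fields before anything is stored; all field names are invented for the example.

```python
# Hypothetical data-minimization sketch: keep only the fields the service
# actually needs and drop everything else before storage.
# All field names here are invented for illustration.

ALLOWED_FIELDS = {"user_id", "email", "preferred_language"}

def minimize(record: dict) -> dict:
    """Return a copy of the record containing only allowlisted fields."""
    return {key: value for key, value in record.items() if key in ALLOWED_FIELDS}

incoming = {
    "user_id": "42",
    "email": "user@example.com",
    "preferred_language": "en",
    "device_fingerprint": "ab12cd34",   # not needed, so never stored
    "precise_location": (51.5, -0.1),   # not needed, so never stored
}

print(minimize(incoming))
# -> {'user_id': '42', 'email': 'user@example.com', 'preferred_language': 'en'}
```

The point of this design is that the allowlist is the single place where “what we collect” is decided, which makes the data practice easy to audit and to keep aligned with a published privacy policy.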
Final words
As this article has demonstrated, there is no clear-cut solution to the double-edged sword of innovation. Many of the most beneficial innovations also come with the highest risks, whether due to accidental misuse or deliberate abuse.
Unfortunately, there will always be people and businesses willing to capitalize on loopholes and lax regulation; where profit is the main goal, rather than providing the best solution to a particular need, the potential for abuse becomes much greater, and the risks increase with it.
So, a key conclusion is that, more often than not, it is not the technology itself that is inherently problematic, but rather the people developing and working with it, whether out of negligence, ignorance, or outright malice.
In order to minimize the risks while still reaping the benefits of innovation, we need to innovate more responsibly, always keeping ethics top of mind and prioritizing people’s actual needs without infringing on their privacy.