
‘Society has outsourced its future to a Technology Elite’ - Discuss

“Our society has effectively outsourced the building of software that makes our world possible to a small group of engineers in an isolated corner of the country. The question is whether we also want to outsource the adjudication of some of the most consequential moral and philosophical questions of our time.”

Alex Karp’s assertion¹ is certainly thought-provoking. But is it fair comment? Closer analysis suggests the risks to society’s future are even worse than Karp fears.

Over the last 20–30 years, Digital and Information Technology (D&IT) services have fundamentally altered the patterns of how we live and behave. We now rely on these services as much as we rely on the supply of electricity and clean water.

The benefits of D&IT services are clear and hardly need stating. Just think how much worse it would have been if the COVID pandemic had started 30 years ago. Not just no vaccines, but no mass access to D&IT services to enable us to work from home and to cope with the other constraints and stresses of the lockdowns.

However, the downsides of the new services are equally clear. They enable the spread of misinformation, fake news and all manner of anti-social behaviour, including criminal and terrorist activities — though the word ‘enable’ does not convey their full impact. Social media aims to increase our engagement by feeding information to us, recording our responses, and then feeding back to us more of the type of information that tends to confirm or gratify our existing views. These mechanisms therefore tend to harden or amplify polarization in society on contentious subjects such as politics, vaccination, immigration and abortion.
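
For the technically curious, here is a minimal Python sketch of the feedback loop just described. It is purely illustrative: the item pool, the affinity scores and the simulated clicking behaviour are all invented, and no real platform’s recommendation algorithm is this simple.

```python
import random

# Illustrative sketch of an engagement-driven feedback loop.
# The items, affinities and user model are invented; this is not
# any real platform's algorithm.

items = [
    {"id": 1, "topic": "politics"},
    {"id": 2, "topic": "sport"},
    {"id": 3, "topic": "vaccination"},
]

# The user's affinity for each topic, learned purely from their clicks.
affinity = {"politics": 0.1, "sport": 0.1, "vaccination": 0.1}

def recommend():
    # The feed favours whatever the user has engaged with before.
    return max(items, key=lambda item: affinity[item["topic"]])

def record_engagement(item, clicked):
    # Reinforce the topics the user clicks on: this is the loop
    # that hardens existing preferences over time.
    if clicked:
        affinity[item["topic"]] += 0.1

for _ in range(20):
    item = recommend()
    # Simulate a user slightly more likely to click contentious topics.
    clicked = random.random() < (0.8 if item["topic"] != "sport" else 0.3)
    record_engagement(item, clicked)

print(affinity)  # the affinities drift towards whatever was clicked first
```

After a few iterations the feed locks on to a single topic and rarely shows the others: the polarizing effect needs no malice, only an objective of maximizing engagement.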

To be fair to D&IT service providers, they do spend considerable sums trying to filter out obviously harmful material, but their efforts are clearly not enough. Others are more concerned about who decides, and by what criteria, which material is identified, removed or blocked as harmful, misinformation or fake. And these filters do not exist at all in many countries where D&IT services cannot handle the local minority languages. In such circumstances, uncontrolled use of these services can be extremely disruptive.

So is Karp’s assertion valid and fair? My answer would undoubtedly be ‘yes, up till now’. Historically, society has given a few industry giants free rein to introduce these services. But governments are fast waking up to the scale of the downsides and are setting regulators to work. The really interesting question, therefore, is whether society’s future will continue to be outsourced to ‘a small group of engineers in an isolated corner of the country’ (i.e. the USA). Examining this question leads to troubling conclusions.

Leaving aside the responses of autocratic societies — witness China’s restrictions on social media and internet access — regulators in more democratic societies face some inherently difficult challenges when attempting to ‘tame the tigers’:

  • How to balance fundamental rights and freedoms, such as privacy, speech and belief, with the need to protect D&IT users from harm?
  • How to draft regulations to protect D&IT users from future harms, without impairing the freedom of the service suppliers to innovate, especially in light of the unrelenting pace of technology advances?
  • What should be the target(s) of regulations: specific harms, specific user communities, specific technologies, specific D&IT suppliers etc.?
  • Where should the regulator(s) be positioned in Government and what are the risks of political misuse of regulations?

Let’s start with a few assertions.

  • Given the nature of D&IT services, effective regulation should be trans-national. This ideal should be possible at the level of principles, but it will be difficult to draft practical regulations that satisfy the reality of different national cultures.
  • Certain targets for regulation can be dealt with fairly easily. For example, legislation already exists to deal with anti-trust behaviour and to protect the privacy of personal data. The principle of protecting minors from exposure to harmful material is easy to define, though difficult to control in practice. Other harm targets are very difficult to define, let alone to regulate. For example, having to make a borderline decision on whether a series of comments is likely to cause a riot is a thankless task. By the time there is enough firm evidence to make this decision with real confidence, it will be too late.
  • Assuming the continuing rapid advances in artificial intelligence, surveillance systems, virtual reality, quantum computers, etc., etc., regulators will always be behind the technology curve.
  • Technology-specific regulations (e.g. to help control AI, as currently being drafted by the European Union) will never be 100% effective due to definition difficulties combined with the continuing rate of technology advances.
  • And if we accept that the existing defenses against social media causing harm are not good enough, what chance do we have of monitoring billions of transactions in real time for potential harm in a virtual-reality environment such as that envisaged by Meta (owner of Facebook)?

Let’s now examine more closely Karp’s assertion that the future of society is outsourced to ‘a small group of engineers in an isolated corner of the country’ (the USA). If true, targeting the few big suppliers should make the regulators’ tasks easier. But this assertion is certainly not the whole story. Whilst a few executives at the top of the giant D&IT suppliers may set the main directions and control the investments, the software products of the D&IT industry are developed in practice by tens of millions of engineers of varying competence, working for a vast range of companies spread all over the world. Furthermore, the processes of software development are nowadays highly decentralized and very flexible; systems can easily be changed without interrupting live running.

Consider the consequences. A small group of developers makes a change to a live digital service: ‘let’s see what happens if we tweak this algorithm ….’ Such actions, which may be repeated hundreds of times per day, effectively amount to launching an unlicensed, uncontrolled experiment on society’s behaviour.
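
To make that concrete, here is a hedged sketch of how such a live ‘tweak’ is typically rolled out: a feature flag silently diverts a fraction of users to new behaviour. The flag, the bucket size and the ranking weight are all invented for illustration.

```python
import hashlib

# Hypothetical sketch of a live 'algorithm tweak': a feature flag
# silently diverts a fraction of users to an experimental ranking
# weight, with no external review. All names and numbers are invented.

ROLLOUT_FRACTION = 0.05  # 5% of users get the experimental behaviour

def in_experiment(user_id: str) -> bool:
    # Deterministically assign each user to an experiment bucket.
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest, 16) % 100 < ROLLOUT_FRACTION * 100

def content_weight(user_id: str) -> float:
    # The 'tweak': experimental users see emotive content ranked higher.
    return 1.5 if in_experiment(user_id) else 1.0

# A change like this can go live in minutes and be repeated many
# times a day; the affected users never know they were subjects.
print(content_weight("user-12345"))
```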

Contrast this with the international regulations that a pharmaceutical company must follow before it can bring a product based on a new mind-altering molecule to market. Clinical trials must follow defined procedures, often lasting many years; findings must be subject to expert, independent review, and so on.

This strongly suggests to me that attempts to control D&IT industry output via targeting the suppliers and/or the technologies will never be enough.

In fact, it suggests the following:

  • Regulation of D&IT services that affect the population at large must not be left to technicians, such as those in the FCC (the regulator of telecoms carriers in the USA) or Ofcom (the media regulator in the UK).
  • If Governments want to protect their citizens from harm, the principles for controlling D&IT services, and their implementation and monitoring, must be supervised by those best qualified to deal with the threats. So regulators must be staffed with specialists in mental health, child protection, minority rights, financial fraud, national security, and whatever other issues are currently troubling society.
  • Many societies are instinctively averse to more governmental regulation, especially those where freedoms, such as that of speech, are sacrosanct. But in democratic societies, what options are there other than some form of regulation? Do nothing? Leave decisions on the policies for controlling D&IT services to the Tech Elite, and the implementation of those policies to zillions of ‘Agile’ systems-development practitioners? Seriously?
  • Controlling the use of D&IT services to protect individuals and society from harm must eventually be automated, operating in real time (a sketch of what this could mean follows this list).
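
As a rough illustration of that last point, here is a minimal Python sketch of real-time, automated screening of content before publication. The classifier, the threshold and the harm category are placeholders; as argued above, defining them well is precisely the hard problem.

```python
from dataclasses import dataclass

# Minimal sketch of automated, real-time content screening.
# The classifier is a trivial stand-in: in reality the model, the
# harm categories and the appeal route are the contested questions.

@dataclass
class Verdict:
    allowed: bool
    category: str
    score: float

BLOCK_THRESHOLD = 0.9

def classify(text: str) -> Verdict:
    # Stand-in for a trained model scoring text per harm category
    # (incitement, child safety, fraud, ...). Here: a keyword rule.
    score = 0.95 if "riot" in text.lower() else 0.1
    return Verdict(allowed=score < BLOCK_THRESHOLD,
                   category="incitement", score=score)

def publish(text: str) -> str:
    verdict = classify(text)
    if not verdict.allowed:
        # Blocked in real time; logged for human and regulatory review.
        return f"blocked ({verdict.category}, score={verdict.score:.2f})"
    return "published"

print(publish("Meet at the square and start a riot"))  # blocked
print(publish("Meet at the square for the concert"))   # published
```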

Are these proposals idealistic? Yes, of course they are. In practice, we will have to accept that the regulators’ task is nigh impossible; there are only imperfect solutions to the challenges they will face.

Do the proposals sound like institutionalizing Big Brother? Yes, they do. But no one can constrain freedom of thought, and no one is trying to. Only when a thought is expressed in speech or writing does civilized society expect the individual to be accountable for the impact of their utterance on its intended audience. And remember that the alternative, continuing to allow D&IT services to operate unchecked, is unthinkable.

Thank you, Alex Karp, for starting this debate.

Charles Symons

Prometheus Endeavor, January 2022


¹ Extract from a letter by Alexander Karp, the CEO of Palantir Technologies Inc, covering a filing to the US SEC, August 25th 2020.

2 Comments

  1. Dennis Mulryan

    Charles, thank you for this compelling post. You challenge us to think about hard questions of reducing the negative consequences of digital technologies while maintaining our freedoms. Technology has always been a two-edged sword.

    In conjunction with regulations, can technology play a role in policing itself? Just as checksums are used to verify that downloads have not been tampered with or infected with malware, can blockchain or other methods be employed to verify content origin or to detect alteration, such as in deepfake videos?

    https://www.weforum.org/agenda/2021/10/how-blockchain-can-help-combat-threat-of-deepfakes/
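
    To illustrate the checksum idea, a minimal Python sketch: publish a hash alongside the content, and anyone can later detect alteration. The content and the tampered version below are invented examples.

    ```python
    import hashlib

    # Sketch of the checksum idea: publish a hash alongside content,
    # so that any later alteration (e.g. a deepfake edit) is detectable.
    # The content and hashes here are invented examples.

    original = b"original video frame data"
    published_hash = hashlib.sha256(original).hexdigest()

    def verify(content: bytes, expected_hash: str) -> bool:
        # Recompute the hash; any change to the content changes it.
        return hashlib.sha256(content).hexdigest() == expected_hash

    print(verify(original, published_hash))          # True: unaltered
    print(verify(b"tampered data", published_hash))  # False: altered
    ```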

  2. Bill Kelvie

    You raise a critical issue, but what I find most concerning is that technocratic elites have provided a platform that can be used by the most scurrilous actors on the fringes of society. Example: the anti-vaxxers who claim that COVID-19 shots inject Bill Gates’s microchips into our bodies. This prevents us from obtaining herd immunity at a time when the virus is mutating at unprecedented rates.
    As long as Facebook, YouTube, et al. profit from viewers’ time online, they will not make meaningful efforts to clean up the destructive myths and lies they promote. It is the executives who must be held accountable, not the engineers laboring in the bowels of the social media giants.
