Take a second and consider: would you still have your job if an algorithm had been in charge of hiring you? Think about your financial records, your health file, your friends on social media. Are you a member of a trade union? Do you own a Fitbit? What are your shopping habits, and what do you do in your spare time? Then ask how all of this would affect your work life. Would you be hired, fired, disciplined or promoted?
What seems like a bizarre question is in fact one that we all need to think about and react to.
‘Management-by-algorithm’ is spreading, and more and more data from many different sources is used in human resources processes. Critically, across the world (bar, to a certain extent, Europe) there are very few regulations in place that protect workers’ personal data from misuse in and by companies. Trade unions must fill this regulatory gap and put workers’ data rights on the agenda to hold management and governments accountable and responsible.
The recent Facebook/Cambridge Analytica scandal all too clearly showed the value of (personal) data. Its importance for advertising, profiling and marketing is so high that it is bought and sold for an unknown figure every year. In 2014, the value of data flows was estimated at US$2.8 trillion. Now set that dazzling figure against the fact that three years later, in 2017, the World Economic Forum estimated that 90 per cent of all data then in existence had been produced since 2015. We can only imagine what the value of current data flows really is.
We are leaving a data trail behind us all the time. From our social media profiles, our likes and posts, to customer service phone calls, visits to the doctor, use of our GPS or cash withdrawals from the bank. We willingly give away our names and email addresses when we log on to free Wi-Fi hotspots in cafes, airports or train stations, and we have become so accustomed to ‘free’ digital services that we are almost irritated when a mobile app costs money. The thing is, nothing is free. What we have been doing, and still are doing, is freely and oftentimes willingly giving away our location, habits, activities and opinions. In other words, we are paying with our data.
But who is actually buying, reading, analysing and selling this data? The short answer is that we don’t know, and often we cannot know.
Insiders that UNI has spoken to estimate that the big tech companies such as Google, Amazon, Facebook, Apple, Microsoft and Alibaba own more than 70 per cent of the world’s combined data.
This concentration of what is such a valuable asset is putting these companies into an unacceptable position of economic, digital, social and even political power.
Workers across the world have very few, if any, legal rights to demand insight into and influence over the use of their personal data. We know of the existence of so-called data brokers, firms that make a living out of buying and selling data. We know that companies are mining their workers’ data. Do they then sell it? And if so, to whom? Who gets to know what your health file says, or how productive an algorithm or a company thinks you are? How is this data – which is apparently easily accessible to anyone who can afford to pay – being used by companies to manage workers?
Whilst our eyes have been slightly opened by the revelations of how data was used to target and manipulate voters, as in the US election and the Brexit referendum, politicians and experts pay very little attention to how data is used, and potentially misused, in relation to work. There is a sharp rise in the use of algorithms, data and artificial intelligence (AI) in human resources and productivity planning. Companies are popping up that offer AI solutions to cut the costs of dealing with people. From autonomous sorting of job applicants and applications, to the use of extensive data to measure productivity, to employee mood testing, to ways to automatically find out what motivates you, and much more…
Whilst some of this can have positive effects, the risks to workers’ privacy and the risk of us being judged against a digital norm are plentiful. Will you not get promoted because of your health file? Will you not get a job because you are a union member, or have particular friends, or have personal characteristics that the algorithm has been told to reject?
This might all sound hypothetical, but unfortunately it is not. In UNI Global Union we are already seeing how these autonomous systems are having a detrimental effect on workers – especially those in non-unionised workplaces, where there are no checks and balances in the form of organised labour, and no means to reach agreements to rectify misconduct.
In one such case, bank employees in a customer service centre are subject to a system that measures the customers’ and workers’ tone of voice and mood. It then advises the workers on what to say, sell and do, and monitors whether they succeed in doing the ‘right’ thing. For these non-unionised workers, the system has been catastrophic. Appraisal was linked to performance, but the system recognised female voices less accurately than male voices, and downscored ethnic minority accents relative to those of white men. Even though the workers could go through the recordings with management, mistakes were seldom rectified. All of this was adding to the digital footprint of the individual worker, not only harming them in their current job but potentially also making it harder for them to find another one.
There are plenty of other examples: one company provided all workers with a Fitbit in a contest to become a healthy company, but subsequently used the GPS data to caution an overweight worker, who apparently didn’t move much in his spare time, that he was becoming a liability to the company. There are warehouse workers whose every hand and arm movement is tracked for efficiency in packing goods, homecare workers who are cautioned if they spend too much time with a client, and workers who get fired because an algorithm said they should be.
There can be little doubt that unions must act now. We need to organise, organise and organise. We need to build alliances with like-minded others and demand a share of the data wealth, and we must fill these regulatory gaps and demand workers’ data rights. This should be done on all levels: from collective agreements to national and international legislation and conventions. We should mobilise the International Labour Organization (ILO), the United Nation’s Human Rights Council, national governments, the social partners and companies themselves.
UNI Global Union is working on these issues across the world. We are discussing how we, the unions, can tap into the significance of datasets and benefit from the insights they can offer. We are raising our voices against the monopolisation of data ownership and asking whether data should be made a commons: a public good that can be accessed by us all. One thing is to protect our fundamental rights; the other is to take that one step further and demand collective ownership of data. Both are equally important.
We have also written two key documents, namely, the Top Ten Principles of Workers’ Data Privacy and Protection and the Top Ten Principles of Ethical AI. The documents are interrelated and list the essential demands we must put in place to avoid a future where workers are subjected to algorithmic decision-making that is beyond human control and insight.
These principles cover the essential issues of right of access, influence, and consultation. In essence, they stipulate the rights that workers must be guaranteed over their own data.
In addition, companies should commit to the data minimisation principle and, importantly, to being transparent and accountable in their data usage. This latter point is crucial and remarkably absent in the EU’s General Data Protection Regulation.
The term ‘artificial intelligence’ here covers all automated and semi-automated systems, including algorithmic decision-making. Our principles address key issues such as transparency, responsibility and control. Firstly, we must demand that autonomous systems are traceable, meaning that the datasets used in the algorithm can be identified.
All too often, you will hear data experts say that it is not possible to unpack the algorithm. This is totally unacceptable. Imagine what it implies: that neither management nor workers can demand to know on what basis (data) an algorithmic outcome has been built. This in turn could lead to a situation where management, either deliberately or unintentionally, subordinate their control and responsibility to an algorithm, with all of the risks and dangers this poses not only to workers but to society at large. We must never reach the situation where management can simply shrug their shoulders and say, ‘the algorithm told me to fire you, but I don’t know why’.
Humans must at all times be in control of the system, not the other way around. Nor must we ever give in to the notion that autonomous systems (robots, algorithms) can be made liable. Robots are things; they are commodities and must never be attributed legal responsibility.
There is a definite urgency of now. Unions across the world must address these fundamental issues. We simply cannot rely on others to do so. Digital technologies are developing at great speed, and our ethical demands of them must be clear. We cannot risk people being prevented from working or thriving in the labour market because of an algorithm that nobody claims to control and nobody can rectify.
UNI Global Union believes that a collective ownership of data, ethical AI and workers’ data rights are key issues for unions. We must commit management as well as governments to take responsibility. Only by doing so can we ensure a digital world of work that is empowering, inclusive and open to all.
This article is posted with permission from Christina Colclough, Director of Platform and Agency Workers, Digitalisation and Trade UNI Global Union.