Accessibility


Much of the current debate around the social implications of automation and other emerging technologies is shaped by dichotomies. For some, automation promises improved living standards, better service provision and accessibility, and greater connectivity and opportunity. For others, automation equals economic disruption, job displacement, rising inequality and a widening digital divide.

Against this backdrop, decision makers face important and pressing questions such as:

  • How will the labour market change? 
  • What jobs will disappear and what new ones will replace them? 
  • What are the skills of the (not so distant) future? Can everybody master these skills? How should they best be taught?
  • What are the benefits of automation and how do they balance out disadvantages?
  • Might automation increase or decrease inequalities (income, skills, wellbeing and health)? Are social groups affected differently, and what can governments do to cushion technological inequalities?


Essentially, when thinking about automation, we should remember that automation isn’t new – humans have been employing technologies to automate tasks and replace physical human labour for centuries (remember the steam engine and electricity?). What we need to be mindful of with current automation initiatives is that we shouldn’t simply delegate cognitive tasks to machines. Technologies should be used to optimise processes, workflows and tasks, in order to conserve human time and energy and deliver the best user experience; they should enhance human skill and compensate for human limitations (by complementing — rather than competing with — human activity).

For automation to become a driver of positive societal change, stakeholders need to embrace the paradigm of automation for good, which is deeply rooted in the United Nations 2030 Agenda for Sustainable Development. To address this sustainability concern, as well as the fear that robots will take over the world, decision makers need to ensure that the development, design and deployment of automation meets several requirements:

  • Transparency and accountability
  • Participatory development, design and deployment
  • Robustness
  • Human-centeredness
  • Regulatory framework complementarity


At the end of the day, we should ask ourselves: Could robots tick the “I am not a robot” captcha box?

Author

Irina Buzu

Irina is a techlaw and intellectual property attorney, currently pursuing her PhD research in AI regulation with a focus on the legal status and accountability of AI. She is an emerging technologies fellow at Europuls, as well as an Algorithmic Decision Making cycle co-lead at the Institute for Internet and the Just Society. Most recently, she became part of the AI literacy expert group of the Council of Europe and a member of the European AI Alliance.