Much of the current debate around the social implications of automation and other emerging technologies is shaped by dichotomies. For some, automation promises improved living standards, better service provision and accessibility, and greater connectivity and opportunity. For others, automation means economic disruption, job displacement, rising inequality and a digital divide.

Against this backdrop, decision makers face important and pressing questions such as:

  • How will the labour market change? 
  • What jobs will disappear and what new ones will replace them? 
  • What are the skills of the (not so distant) future? Can everybody master these skills? How should they best be taught?
  • What are the benefits of automation, and how do they weigh against its disadvantages?
  • Will automation increase or decrease inequalities (in income, skills, wellbeing and health)? Are social groups affected differently, and what can governments do to cushion technological inequalities?


Essentially, when thinking about automation, we should remember that automation isn’t new: humans have been using technology to automate tasks and replace physical human labour for centuries (remember the steam engine and electricity?). What we need to be mindful of with current automation initiatives is that we shouldn’t simply delegate cognitive tasks to machines. Technologies should be used to optimise processes, workflows and tasks in order to conserve human time and energy and deliver the best user experience; they should enhance human skill and compensate for human limitations, complementing rather than competing with human activity.

For automation to become a driver of positive societal change, stakeholders need to embrace the paradigm of automation for good, which is deeply rooted in the United Nations 2030 Agenda for Sustainable Development. To address this sustainability concern, as well as the perennial question of whether robots will take over the world, decision makers need to ensure that the development, design and deployment of automation meet several requirements:

  • Transparency and accountability
  • Participatory development, design and deployment
  • Robustness
  • Human-centeredness
  • Regulatory framework complementarity


At the end of the day, we should ask ourselves: Could robots tick the “I am not a robot” captcha box?

Author

Irina Buzu

Passionate about information technology, innovation, art and AI, Irina is pursuing PhD research in international law, with a focus on AI regulation and digital creativity. She is currently a government advisor on AI and a delegate to the CoE Committee on AI on behalf of Moldova. Irina is also an emerging-tech expert at Europuls. As part of her research, she studies the intersection of algorithmic decision-making, ethics and public policy, aiming to understand how the technologies that enable algorithmic decision-making function, and how they shape our worldview and influence our decisions.