Artificial Intelligence (AI) and elections

Year of production: 2024

Photo by ThisIsEngineering from Pexels

With both parliamentary and presidential elections scheduled for 2024 in some of the world’s largest democracies, including the EU and the US, concerns are growing about the impact of AI-driven propaganda and how it can shape democratic processes, from voter awareness and fact-checking to political engagement, the fight against misinformation, and post-election analysis. In this context, we should examine voter awareness and explore how AI and digital participation can be integrated into media literacy and critical thinking education, with an emphasis on ethical considerations in democratic contexts. In addition, to effectively counter AI-driven propaganda, we must harness the power of AI itself.

Integrating AI tools into media literacy and critical thinking education can equip voters with the skills to detect synthetic content, analyze data visualizations, and navigate complex online information ecosystems. This proactive approach, grounded in ethical considerations and responsible AI development, can empower citizens to participate meaningfully in democratic processes and safeguard the integrity of elections.

AI-Driven Information Dissemination and Voter Awareness

With large language models now capable of generating synthetic content from user prompts, rigorous fact-checking and verification of information accuracy have become even more critical, particularly in the context of democratic processes and election campaigns. While AI can help voters better understand democratic processes and improve engagement in democratic debate, it can also be used to fuel disinformation and misinformation: generating false content (cloning a candidate’s voice, creating a fake video, fabricating narratives to undermine an opponent’s messaging) or spreading biases and opinions that do not represent public sentiment. Different terms are used for such AI-generated material, from ‘deepfake’ to ‘synthetic propaganda’ or ‘deceptive media’.

Increased Misinformation

AI dissects our online lives to craft hyper-personalized campaigns, feeding us content that affirms our biases and drowns out opposing views. This echo chamber effect threatens democracy with polarization, low turnout, and even manipulated outcomes. Addressing algorithmic bias, protecting data privacy, and investing in voter education are crucial countermeasures.

Consider Brexit, or the role of Cambridge Analytica in the Trump campaign: social media essentially became the battleground that shaped voting outcomes. In the aftermath, microtargeting remains the most frequently cited term in the conversation about political ads, and it is widely perceived as the greatest threat to be dealt with.

Polarization

The political threat of filter bubbles stems from their personalized nature, especially where voter access to information during political campaigns is concerned. Microtargeting and personalized campaign ads focus exclusively on individuals whom campaigners consider ‘persuadable’, the voters commonly referred to as ‘swing voters’ in traditional political vernacular.

Targeted campaign ads, a seemingly efficient tool, fuel inequality in political knowledge access. This impoverishes public discourse and potentially facilitates polarization. By tailoring ads to individual preferences, voters receive a narrow view of candidates and policies, hindering their ability to engage in informed deliberation. Furthermore, personalized messages can reinforce extreme views, creating echo chambers that push voters further along existing ideological lines. While this approach might excel in the consumer realm, its application to politics risks undermining the very foundations of a healthy democracy.

This also raises concerns about the echo-chamber effect and the limited exposure to diverse perspectives. Striking a balance between personalized content and diverse information is crucial to ensure a well-informed electorate.

Fact-Checking and AI’s Role in Ensuring Information Accuracy and Combating Misinformation

AI’s potential for both manipulating and verifying information poses challenges that demand vigilance. While its automated fact-checking capabilities offer an efficient weapon against misinformation, ethical considerations and regulations are crucial to ensure that AI serves democracy rather than eroding it.

For instance, one study found alarming patterns: Google’s Bard generated persuasive misinformation on 78 out of 100 tested narratives, while GPT-4 proved even more susceptible to generating misinformation, and more convincing in doing so, than its predecessor, GPT-3.5.

The battle against misinformation requires continuous innovation. AI can aid in identifying patterns and trends indicative of deceptive practices. Natural language processing algorithms can analyze the tone, sentiment, and context of information, enabling a more nuanced understanding. Initiatives leveraging AI for misinformation detection and correction contribute to the creation of a more reliable information ecosystem for voters.
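
To make this concrete, here is a minimal, illustrative sketch of how off-the-shelf NLP models could surface tone and framing signals for human fact-checkers; the candidate labels are assumptions, and a production misinformation detector would be far more elaborate.

```python
# Illustrative triage only: surfaces tone and framing signals for a
# human fact-checker; it is NOT a misinformation detector by itself.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")
framing = pipeline("zero-shot-classification",
                   model="facebook/bart-large-mnli")

def triage(text: str) -> dict:
    """Score a snippet's tone and its fit to assumed framing labels."""
    snippet = text[:512]  # stay within model context limits
    tone = sentiment(snippet)[0]
    labels = ["neutral reporting", "emotionally charged", "conspiratorial"]
    frame = framing(snippet, candidate_labels=labels)
    return {
        "tone": tone["label"],
        "tone_score": round(tone["score"], 3),
        "top_frame": frame["labels"][0],
        "frame_score": round(frame["scores"][0], 3),
    }

print(triage("They are hiding the truth about the election from you!"))
```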

Employing a human-centric approach to using AI in support of democratic processes may be the most constructive course of action, streamlining the ways in which AI can support elections ethically. Political bots, for example, could flag articles containing evident misinformation for human review (a toy sketch follows below), and micro-targeting campaigns (most commonly demographic, psychographic, behavioral, and lookalike campaigns, delivered via social media or email marketing) could help educate voters on a variety of political issues and enable them to make informed political decisions. Most importantly, we can use AI to listen more carefully to what people have to say and to make sure their voices are clearly heard by their elected representatives.
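
Under a human-centric design, ‘flagging’ could mean escalation rather than automatic removal. The toy workflow below only queues suspect articles for a human moderator; the scorer is a deliberately naive stand-in for a vetted classifier, and the threshold is an assumption.

```python
# Toy human-in-the-loop workflow: automation escalates, humans decide.
from dataclasses import dataclass, field

REVIEW_THRESHOLD = 0.8  # assumption: would be tuned on labelled data

@dataclass
class Article:
    url: str
    text: str

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)

    def submit(self, article: Article, score: float) -> None:
        if score >= REVIEW_THRESHOLD:
            self.pending.append(article)  # escalate; never auto-delete

def score_misinformation(text: str) -> float:
    """Naive stand-in scorer; a real system would use a vetted model."""
    cues = ["rigged", "they don't want you to know", "share before deleted"]
    hits = sum(cue in text.lower() for cue in cues)
    return min(1.0, 0.5 * hits)

queue = ReviewQueue()
story = Article("https://example.com/story",
                "The vote is rigged! Share before deleted!")
queue.submit(story, score_misinformation(story.text))
print(len(queue.pending))  # 1 -> awaits a human moderator's decision
```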

In addition, beyond fact-checking, AI can delve deeper, identifying underlying propaganda techniques and biases in language, imagery, and data patterns, allowing for a more comprehensive understanding of misinformation tactics and their potential impact.

AI can also help bridge the digital divide by generating accessible versions of complex information, translating languages, and tailoring content to individual learning styles. This ensures everyone has an equal opportunity to engage in informed democratic participation.
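
As a small sketch of this idea, openly available models can produce a plain-language summary and a translation of a dense civic text; the model names below are examples rather than recommendations, and the ballot wording is invented.

```python
# Sketch: lowering the reading barrier of civic information with
# open models (summarization + translation). Illustrative only.
from transformers import pipeline

summarize = pipeline("summarization")  # default English summarizer
translate = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")

ballot_text = (
    "The proposed amendment modifies municipal zoning ordinances to "
    "permit mixed-use development within designated transit corridors, "
    "subject to environmental review and public comment periods."
)

plain = summarize(ballot_text, max_length=40, min_length=10)[0]["summary_text"]
french = translate(ballot_text)[0]["translation_text"]
print("Plain-language:", plain)
print("French:", french)
```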

Combating misinformation also requires transparent and accountable AI development processes. Open-source algorithms and community scrutiny can minimize bias and ensure AI tools are used responsibly in democratic contexts. Moreover, investing in AI literacy programs equips citizens with the critical thinking skills needed to evaluate information presented by AI systems, fostering a more informed and resilient electorate.

Post-Election Analysis: Shaping Future Political Landscapes

From dissecting voting patterns to analyzing public sentiment, AI injects political campaigns with data-driven insights. Politicians can leverage this real-time feedback to respond instantly to developments and craft laser-targeted outreach through personalized messages, although this power comes with ethical constraints. Data privacy must be guarded, algorithmic biases countered, and AI’s role transparent. Responsible use holds the key to unlocking AI’s potential for ethical and effective political engagement.

AI could then be used to generate personalized emails or text messages from chatbots for specific audiences, based on their demographics, interests, and past interactions with the campaign. This can create a more engaging experience and potentially increase voter turnout. Here, too, ethical considerations matter: transparency is key. Voters should be aware when they are interacting with a chatbot rather than a human, and chatbots should be used to provide information and answer questions, never to mislead or manipulate voters (a minimal sketch of such disclosure-first messaging follows below). By using AI responsibly, candidates could also create and publish timely response videos to potentially damaging content released by their opposition.
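
A minimal sketch of what such disclosure-first messaging could look like; the voter fields, template wording, and opt-out keyword are illustrative assumptions.

```python
# Every generated message carries an explicit bot disclosure,
# reflecting the transparency requirement discussed above.
from string import Template

MESSAGE = Template(
    "Hi $name, early voting in $district opens on $date. "
    "You asked about $issue; our position summary is at $url."
)
DISCLOSURE = ("[Automated message from the campaign's chatbot. "
              "Reply HUMAN to reach a staff member.]")

def compose(voter: dict) -> str:
    return f"{MESSAGE.substitute(voter)}\n{DISCLOSURE}"

print(compose({
    "name": "Alex", "district": "District 4", "date": "3 June",
    "issue": "public transit", "url": "https://example.org/transit",
}))
```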

Comparative Analysis of AI-Driven Electoral Practices

Beyond Brexit and the data-driven Obama and Trump campaigns, there are several more recent examples of AI-(mis)driven political campaigns:

  • In Toronto, a mayoral candidate who vowed to clear homeless encampments released campaign material illustrated with AI-generated imagery, including fake dystopian images of people camped out on a downtown street and a fabricated image of tents set up in a park.
  • In New Zealand, a political party posted a realistic-looking rendering on Instagram of fake robbers rampaging through a jewelry shop.
  • In Chicago, the runner-up in the mayoral vote complained that a Twitter account masquerading as a news outlet had used AI to clone his voice in a way that suggested he condoned police brutality.

Exploring the impact of data analytics on predicting and shaping post-election narratives provides insights into the evolving nature of political landscapes. Differences in technological infrastructure, regulatory frameworks, and public trust significantly influence the adoption and effectiveness of AI in different democratic contexts.

One approach to controlling the spread of ‘computational propaganda’ (automatically generated messages designed to influence political opinions) is to adjust existing laws, or create new ones, that hold platform companies accountable for the content they allow on their platforms. Stricter rules on data protection and algorithmic accountability could also reduce the extent to which machine learning can be abused in political contexts. In the US, for example, a wide range of statutes and regulatory provisions is specifically aimed at combating potential threats from the misuse of AI in elections and campaigns.

However, challenges persist in fine-tuning algorithms to recognize nuanced context and evolving disinformation tactics. Dedicated tools can be employed to detect AI-generated content, and techniques such as watermarking can clearly indicate that content has been generated by AI (a toy illustration follows the list below).

The EU is currently adapting its legal framework (GDPR, DSA, DMA, EU AI Act) to address the dangers that come with AI and to promote trustworthy, transparent and accountable AI systems. Take Dall-E Mini, for example: a text-to-image AI system that creates images from user-provided prompts, initially developed in the US but now accessible to users in the EU. Under the EU AI Act’s risk-based approach, a system like Dall-E Mini could be treated as high-risk because of its potential to generate harmful content, such as images that are discriminatory or violate privacy. How would the framework apply?

  • Transparency: Dall-E Mini’s developers have published a detailed technical paper explaining the system’s architecture, training data, and limitations. They also provide clear usage guidelines and warnings about potential risks.
  • Accountability: The developers have implemented safeguards to prevent the generation of harmful content, such as filters for hate speech and nudity. They also have a mechanism for users to report inappropriate images.
  • Risk Mitigation: The system is continuously monitored for biases and errors, with updates made to address any issues that arise.
  • Data Governance: The developers comply with GDPR requirements for data collection and usage, including obtaining user consent and providing options for data deletion.
  • Human Oversight: A team of human moderators reviews flagged images to ensure they comply with EU laws and ethical standards.
  • Impact: This framework helps to mitigate risks associated with Dall-E Mini, such as the potential for misuse to create harmful or discriminatory content. It also promotes transparency and accountability, helping to build trust among users and regulators. As a result, Dall-E Mini is able to operate in the EU while adhering to ethical and legal guidelines, fostering responsible AI development.
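
Returning to the watermarking idea mentioned above, the sketch below is a toy illustration of machine-verifiable labelling: it attaches an HMAC-based provenance tag to AI-generated text. Real deployments rely on stronger schemes, such as statistical watermarks applied during token sampling or C2PA content credentials; the key handling and tag format here are assumptions.

```python
# Toy provenance tag: any edit to the text invalidates the tag.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-managed-secret"  # assumption: kept server-side

def tag_as_ai_generated(text: str) -> str:
    """Append a human-readable label plus a keyed integrity tag."""
    mac = hmac.new(SECRET_KEY, text.encode(), hashlib.sha256).hexdigest()[:16]
    return f"{text}\n[AI-generated | provenance:{mac}]"

def verify_tag(tagged: str) -> bool:
    """Check that the label matches the text it claims to describe."""
    text, _, label = tagged.rpartition("\n")
    mac = label.split("provenance:")[-1].rstrip("]")
    expected = hmac.new(SECRET_KEY, text.encode(),
                        hashlib.sha256).hexdigest()[:16]
    return hmac.compare_digest(mac, expected)

sample = tag_as_ai_generated("Candidate X announces a new policy...")
print(verify_tag(sample))            # True
print(verify_tag(sample + " edit"))  # False: tampering detected
```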

Teaching the Ethics of AI and Digital Participation within Democratic Contexts

The integration of AI into our lives, particularly in the context of digital participation and democratic processes, raises crucial ethical questions that demand immediate attention. Ethics of AI refers to a set of principles and guidelines for ensuring the responsible development and deployment of AI technologies, particularly in relation to their impact on human values, rights, and well-being. Several frameworks have been proposed to address these concerns, such as the Montreal Declaration for Responsible AI and the Asilomar AI Principles.

Why are AI ethics crucial for effective democratic participation?

  1. Algorithmic Bias: AI algorithms trained on biased data can perpetuate and amplify existing societal inequalities, potentially disenfranchising certain groups within a democracy. Understanding these biases and mitigating their impact is vital for ensuring fair and inclusive participation.
  2. Manipulation and Misinformation: AI can be used to create deepfakes, personalized disinformation campaigns, and echo chambers that manipulate public opinion and undermine trust in democratic institutions. Equipping citizens with the skills to identify and critically evaluate AI-generated content is essential for protecting the integrity of democratic processes.
  3. Transparency and Accountability: As AI increasingly shapes political campaigns, decision-making, and information landscapes, understanding its role and demanding transparency in its development and usage are crucial for holding accountable those who wield this power.

Learning to be Ethical Users of AI in a Democracy:

Teaching AI ethics should not be an afterthought but rather an integral part of education at all levels, from formal K-12 programs to non-formal civic engagement initiatives. Equipping citizens with the critical thinking skills and media literacy necessary to navigate the increasingly complex landscape of AI is crucial for a healthy democracy.

However, educating individuals with low IT competences and media literacy presents unique challenges. While they are often the most vulnerable targets of deepfakes and disinformation campaigns, their limited baseline knowledge can make it more difficult to grasp complex ethical concepts and apply them in real-world situations. Some specific challenges to consider could be:

  • Accessibility: Traditional teaching methods like workshops and simulations may not be readily accessible to those with limited digital literacy. Alternative approaches, such as interactive games, infographics, and community-based discussions, could be more effective in engaging this segment of the population.
  • Prior knowledge: Building upon existing knowledge and experiences is crucial for effective learning. Educators need to adapt their teaching methods to bridge the gap in technical understanding and tailor examples to resonate with the lived realities of individuals with low IT proficiency.
  • Motivational barriers: Lack of awareness about the impact of AI on their lives and a perceived lack of agency in addressing complex ethical issues can demotivate individuals from engaging in learning. Highlighting personal stories of how AI has been used to manipulate or exploit individuals can help raise awareness and urgency.

Despite these challenges, educating individuals with low IT competences and media literacy in AI ethics remains critically important. By developing innovative teaching methods, employing relatable examples, and fostering a sense of personal responsibility, we can empower all citizens to become ethical users of AI and safeguard democratic values in the digital age.

Curricula and Teacher Competencies:

Integrating AI ethics into existing curricula requires the careful development of age-appropriate learning materials and resources. Teachers would benefit from professional development opportunities to deepen their understanding of AI technology and its ethical implications, equipping them to effectively guide students in navigating this complex landscape.

Erasmus+ and European Solidarity Corps:

Programmes like Erasmus+ and the European Solidarity Corps already foster active civic engagement and intercultural understanding. Incorporating AI literacy and critical thinking skills into these programmes can further empower participants to become responsible and informed citizens in the digital age. AI-driven tools for language translation, collaborative projects, and community engagement can be leveraged while emphasizing ethical considerations and responsible technology use.

By prioritizing AI ethics education and equipping future generations with the necessary skills and awareness, we can build a more resilient and ethical digital democracy where everyone has the opportunity to participate meaningfully and hold power accountable.

Conclusion

AI has a diverse impact on democracy. To make the most of AI while minimizing harm, it’s crucial to educate people of all backgrounds via formal and informal avenues like schools, workshops, and community initiatives. This fosters lifelong engagement with evolving technologies for a balanced future.

Media literacy is crucial for navigating the information age. It empowers citizens to identify AI-generated content, bias, and misinformation, making them active participants in informed discourse. Integrating ethics into this education ensures future generations use AI responsibly and avoid its pitfalls.

Learning isn’t a one-off event; the constant evolution of technology demands lifelong upskilling. Embrace online tools, hackathons, peer groups, whatever keeps you learning, to stay informed, engaged, and adaptable in the digital age.

Ultimately, ensuring a vibrant and robust digital democracy hinges on a multifaceted approach. By nurturing a citizenry equipped with AI literacy, media savvy, and a strong ethical compass, we can harness the power of technology to enhance, not hinder, the core values of transparency, accountability, and informed civic participation. Let us embrace this lifelong journey of learning and adaptation, ensuring that AI becomes a force for good, propelling our democracies towards a brighter, more inclusive future.

Authors

Irina Buzu

Passionate about information technology, innovation, art and AI, Irina is pursuing her PhD research in international law, with a focus on AI regulation and digital creativity. She is currently a government advisor on AI and a delegate to the CoE Committee on AI on behalf of Moldova. Irina is also an emerging tech expert at Europuls. As part of her research interests, she studies the intersection between algorithmic decision-making, ethics and public policy, aiming to understand how the technology behind algorithmic decision-making functions and how such technologies shape our worldview and influence our decisions.