    Artificial intelligence, language and the question of social responsibility

    Author: Michael Keusgen, CEO, Ella Media AG

Systems based on artificial intelligence (AI) are generally designed to support people in what they do. They open up new possibilities for minimizing errors, making processes more efficient and tackling existing problems. As artificial intelligence develops further, it will increasingly influence human decisions. It is therefore important to handle learning systems responsibly.

     

    AI imitates language without understanding the content

AI itself is value-free: it captures, analyzes and processes language based solely on the data with which it has been trained. In doing so, AI reproduces existing patterns without actually understanding the content of the texts it produces. This can have undesirable consequences, such as reproducing stereotypical representations of certain groups of people.

A well-known and well-researched example is discrimination against women in AI-supported job application processes – for example, because the underlying data comes from a time when the occupational field in question was staffed almost exclusively by men.
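To make this mechanism concrete, here is a minimal, self-contained sketch in Python, using purely synthetic data rather than any real hiring system: a classifier trained on historically skewed records learns to penalize the gender signal itself instead of assessing ability.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Synthetic "historical" data: in this invented history, almost all
# hires were experienced men, regardless of actual ability.
experience = rng.uniform(0, 10, n)          # hypothetical feature
is_female = rng.integers(0, 2, n)           # hypothetical feature
hired = ((experience > 5) & (is_female == 0)).astype(int)  # biased labels

X = np.column_stack([experience, is_female])
model = LogisticRegression().fit(X, hired)

# The model assigns a strongly negative weight to the gender feature:
# it has reproduced the historical pattern, not evaluated skill.
print("weights [experience, is_female]:", model.coef_[0])
```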
     

    Those who develop speech AI bear great responsibility

Speech-based AI systems are only as good as the texts they learn from. For this reason, operators and developers must identify biases – and the discrimination that results from them – long before an AI system is used for the first time. Such biases can arise from errors in the collection and selection of data or from the way algorithms process it. This requires a clear value orientation as well as a high degree of critical reflection and willingness to learn on the part of the companies and the people behind them.
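One widely used pre-deployment check, sketched below with hypothetical groups and model outputs, is to compare selection rates across groups; the "four-fifths rule" from US hiring practice treats ratios below 0.8 as a warning sign.

```python
from collections import defaultdict

def selection_rates(groups, decisions):
    """Fraction of positive model decisions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in zip(groups, decisions):
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical audit data: group labels and model decisions.
groups = ["m", "m", "f", "f", "f", "m", "f", "m"]
decisions = [1, 1, 0, 1, 0, 1, 0, 0]

rates = selection_rates(groups, decisions)
ratio = min(rates.values()) / max(rates.values())
print(rates, f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # common rule-of-thumb threshold
    print("Warning: possible discriminatory bias – review data and model.")
```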

For companies, however, responsible use of speech AI also means keeping an eye on possible misuse of the technology as early as the development stage – for example, through automatically generated texts with false or intentionally misleading content. AI systems can be exploited to create such distorted information in bulk and tailored to specific audiences – including to influence public discourse.

A wise, targeted selection of data sources also helps prevent misuse. An editorial AI system like Carla, which is trained solely on data from major news outlets, offers no basis for abuse in disinformation campaigns.
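As an illustration of this design choice, here is a minimal sketch of restricting a training corpus to an allowlist of trusted outlets. The source names and document format are hypothetical placeholders, not Carla's actual configuration.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of trusted outlets (placeholders, not real config).
TRUSTED_SOURCES = {"example-news-agency.com", "example-daily.de"}

def is_trusted(document: dict) -> bool:
    """Keep a document only if it comes from an allowlisted outlet."""
    domain = urlparse(document["source_url"]).netloc
    return domain.removeprefix("www.") in TRUSTED_SOURCES  # Python 3.9+

corpus = [
    {"source_url": "https://www.example-daily.de/politics/1", "text": "..."},
    {"source_url": "https://random-blog.example.org/post/7", "text": "..."},
]
training_docs = [doc for doc in corpus if is_trusted(doc)]
print(f"kept {len(training_docs)} of {len(corpus)} documents")
```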

     

Speech AI opens up new avenues in the fight against disinformation

Corporate social responsibility also includes the question of how AI can be used for people's benefit. When it comes to recognizing unintentional errors, plagiarism and even targeted fake news, artificial intelligence is often already streets ahead of humans. The first applications are already in use: specialized AI systems, fed in real time with texts and other social media data, can help detect and combat bots.
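The sketch below shows, in simplified form and with invented toy examples, the kind of text classifier such detection systems typically build on: TF-IDF features combined with a logistic regression model that scores how suspicious a post looks.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set (placeholders, not real labeled data).
texts = [
    "Breaking!!! Share before they delete this!!!",
    "Miracle cure the media won't tell you about",
    "City council approves new budget after public hearing",
    "Researchers publish peer-reviewed study on air quality",
]
labels = [1, 1, 0, 0]  # 1 = suspicious, 0 = legitimate

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

new_post = "Share this now!!! The truth they are hiding!!!"
print("suspicion score:", model.predict_proba([new_post])[0][1])
```

A real system would of course need far larger, carefully labeled corpora and additional signals such as posting frequency and account metadata.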

    Posted on September 8, 2021