  • Addressing AI concerns with a realistic mindset

    Ella empowers the users of its AI tools

    Hardly a day goes by without AI being discussed in the media. However, the current discourse tends to focus on fear-mongering depictions of AI’s risks. While it is essential to acknowledge potential challenges, we must resist succumbing to fear and instead embrace a balanced and realistic perspective on AI’s societal impacts.

    Throughout history, every social upheaval has encompassed risks as well as potential. Technological change through AI is no exception. It is important to approach AI challenges proactively and responsibly, but not with fear. Fear can hinder progress and innovation, limiting us from fully harnessing the transformative power of AI for the betterment of society.

    Education and transparency play a key role here. Only by promoting knowledge about AI can its societal impacts be better understood and its challenges overcome. This will empower individuals and corporations to make informed decisions about AI, enabling them to use AI tools responsibly and ethically.

    Transparency in AI development and deployment is equally vital, fostering trust and cooperation between developers, users, and society at large. It ensures that AI systems are held accountable and operate in a manner aligned with human values and preferences.

    The enormous potential of AI should be harnessed to enhance human capabilities. To do so, humans must be placed at the center of AI development. Ella exemplifies this commitment by focusing on ethical design and transparency in the development of our AI software.

    We believe that empowering users with AI technology should not come at the expense of privacy, autonomy, or fairness. Instead, AI should be a collaborative partner and augment human skills. Ella’s goal is to empower users to extend their human capabilities through our AI tools.


    Artificial intelligence, language and the question of social responsibility

    Author: Michael Keusgen, CEO, Ella Media AG

    Systems based on artificial intelligence (AI) generally aim to support people in their actions. In doing so, AI systems open up new possibilities for minimizing errors, making processes more efficient, and dealing with existing problems. In this context, artificial intelligence has the potential to exert a growing influence on human decisions. It is therefore important to handle learning systems responsibly.


    AI imitates language without understanding the content

    AI itself is value-free; it captures, analyzes and processes language based solely on the data with which it has been trained. In doing so, AI reproduces existing patterns without actually understanding the content of its own texts. This can have undesirable consequences, such as reproducing a stereotypical representation of certain groups of people.

    A well-known and well-researched example is discrimination against women in AI-supported job application processes – for example, because the underlying data originates from a time when almost exclusively men worked in the occupational field in question.

    Those who develop speech AI bear great responsibility

    Speech-based AI systems are only as good as the texts they learn from. For this reason, system operators and developers must consistently identify biases – and the resulting discrimination – caused by errors in the collection and selection of data, or in its algorithmic processing, long before an AI is used for the first time. This requires a clear value orientation, as well as a high capacity for self-criticism and learning, on the part of the companies and the people behind them.

    For companies, however, responsible use of speech AI also means keeping an eye on possible misuse of the technology as early as the development stage – for example, through automatically generated texts with false or intentionally misleading content. AI systems can be deliberately directed to produce such distorted information in bulk and tailored to specific audiences – including to influence public discourse.

    The wise and targeted selection of data sources also helps prevent misuse. An editorial AI system like Carla, which draws solely on data from major news outlets, offers no potential for abuse in disinformation campaigns.


    Speech AI opens up new avenues in the fight against disinformation

    Corporate social responsibility also includes the question of how AI can be used to benefit people. When it comes to recognizing unintentional errors, plagiarism, and even targeted fake news, artificial intelligence is often already far ahead of humans. The first applications are already in use: for example, specialized AI systems, fed in real time with texts and other social media data, can help combat bots.

    Posted on September 8, 2021