
This week in artificial intelligence (AI) news, we take a look at a Minecraft competition designed to advance ‘imitation learning’, the potential of AI in the field of psychiatry, and the issue of gender bias in AI systems.

If you’re a parent, or an avid gamer yourself, you may be familiar with the popular video game Minecraft – an open-world game that allows its players to be as creative as they like, without any specific objective. There are still tasks a player can complete within the game, however, including mining a diamond. Mining a diamond takes many steps, which a human can learn simply by watching a short YouTube video. But during a recent computing competition, participants developed and trained AI models to find and mine the diamond. The purpose of this competition was to advance the idea of ‘imitation learning’, in which an AI system is given a set of demonstrations of how to perform a specific task and tries to learn the best way to achieve that task by imitating what it has been shown. The coding event, called ‘MineRL’, could have an impact on furthering the computational uses of imitation learning. Click on this link to read the full story from Nature.
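To make the idea a little more concrete, here is a minimal sketch in Python of behavioural cloning, one common form of imitation learning. It is not code from the MineRL competition; the observation size, action set, and demonstration data are placeholders invented for illustration. A small policy network is trained, with ordinary supervised learning, to reproduce the actions a demonstrator took in the states it observed.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder demonstration data: each row pairs an observation (state)
# with the action the human demonstrator took in that state.
num_demos, obs_dim, num_actions = 10_000, 64, 8
demo_obs = torch.randn(num_demos, obs_dim)
demo_actions = torch.randint(0, num_actions, (num_demos,))

# A small policy network mapping observations to action scores.
policy = nn.Sequential(
    nn.Linear(obs_dim, 128),
    nn.ReLU(),
    nn.Linear(128, num_actions),
)

optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
loader = DataLoader(TensorDataset(demo_obs, demo_actions), batch_size=256, shuffle=True)

# Behavioural cloning: plain supervised learning that nudges the policy
# toward choosing the same action the demonstrator chose in each state.
for epoch in range(5):
    for obs, action in loader:
        loss = loss_fn(policy(obs), action)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# At play time, the trained policy picks the action it thinks the
# demonstrator would have taken for a new observation.
with torch.no_grad():
    new_obs = torch.randn(1, obs_dim)
    predicted_action = policy(new_obs).argmax(dim=-1)
    print("predicted action:", predicted_action.item())
```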

The United States (US) is beginning to experience a shortage of mental health professionals that will continue to grow over the next five years, according to a recent article in Time. Because of this gap and the growing prevalence of mental health issues in the US, artificial intelligence tools may be able to step in and help mitigate the problem. AI has already had a positive impact in other healthcare disciplines, mainly by easing the burden of the mundane work that keeps clinicians bogged down. Although mental health professionals disagree on the limits of AI in psychiatry, many believe it could be an effective tool for checking in with patients, listening for speech-based signs of mental distress, and more. Read more on the potential of AI in mental healthcare in this article from Time.

A recent article in The New York Times explores the issue of gender bias that comes with artificial intelligence. A problem with AI is that it is only as good as the data on which it is trained, and when that data includes outdated books, Wikipedia articles, and questionable sources, the AI may behave in unexpected ways, resulting in cases of misogyny, racism, and bigotry. As an example of bias, the author takes a look at BERT, short for Bidirectional Encoder Representations from Transformers, a natural language processing (NLP) system developed by Google. The system tries to understand the context of the words used in a search in order to return more relevant results for the person searching. When a system is trained on large amounts of data, there is the danger, as previously mentioned, that it will pick up bias. For example, a team of computer scientists from Carnegie Mellon University published a paper showing that BERT was “more likely to associate the word ‘programmer’ with men than with women”, and the bias was not limited to gender. Read the full article from The New York Times by clicking here.
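For readers who want to poke at this kind of association themselves, the snippet below is a rough sketch using the publicly released bert-base-uncased model through the Hugging Face transformers fill-mask pipeline. The probe sentence is our own invention, not one of the Carnegie Mellon team’s test cases: BERT is asked to fill in a blanked-out pronoun, and comparing the scores it assigns to “he” versus “she” gives an informal sense of which gender it associates with the word “programmer”.

```python
from transformers import pipeline

# Load the pre-trained BERT model in a fill-in-the-blank (masked word) setup.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

# Ask BERT to fill in the missing pronoun. The relative scores it gives
# to "he" versus "she" hint at which gender it links to "programmer".
results = unmasker("The programmer said that [MASK] would finish the code tonight.")

for candidate in results:
    print(f"{candidate['token_str']:>8}  score={candidate['score']:.3f}")
```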
