This week in artificial intelligence (AI) news, we look at physicists in Switzerland who developed a neural network that teaches itself the laws of physics governing our solar system, researchers using AI to predict when lightning will strike, and three women in tech discussing how they approach bias in AI.

According to a recent article in Nature, physicists have designed a neural network that is capable of teaching itself the laws of physics; more specifically, it rediscovered the heliocentric model of the solar system. The algorithm was designed by physicist Renato Renner at the Swiss Federal Institute of Technology (ETH), in collaboration with other researchers. Renner and his team created an unusual type of neural network composed of two sub-networks connected to one another by only a few links. This design allowed the first sub-network to learn from the data fed to it, functioning as a standard neural network, and the second sub-network to “use that ‘experience’ to make and test new predictions”, according to the article. Because of the small number of links between the two sub-networks, the first sub-network was “forced to pass information to the other in a condensed format”. The implications of this design are significant, as neural networks like this one could help unravel the mysteries of quantum mechanics and other open questions in physics. Read the full article from Nature.com by clicking here.
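The article describes the architecture only at a high level, so the following is just a rough sketch of the bottleneck idea in PyTorch: a first sub-network that learns a compact representation from observational data, connected by only a few units to a second sub-network that makes predictions from it. The layer sizes, toy data, and training loop are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: the Nature article does not publish code, so the
# sizes, inputs, and training setup below are assumptions chosen to show the
# general idea of two sub-networks joined by a very small "bottleneck".
import torch
import torch.nn as nn

class TwoPartNetwork(nn.Module):
    def __init__(self, n_observations=100, n_latent=2, n_predictions=10):
        super().__init__()
        # First sub-network: learns from the raw observational data.
        self.encoder = nn.Sequential(
            nn.Linear(n_observations, 64),
            nn.ReLU(),
            nn.Linear(64, n_latent),   # only a few links connect the two parts
        )
        # Second sub-network: uses the condensed "experience" to make predictions.
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 64),
            nn.ReLU(),
            nn.Linear(64, n_predictions),
        )

    def forward(self, observations):
        condensed = self.encoder(observations)   # forced into a compact format
        return self.decoder(condensed)

# Hypothetical training loop: the network must reproduce future observations
# from past ones, so the tiny latent layer has to capture the underlying rule.
model = TwoPartNetwork()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

past = torch.randn(32, 100)      # stand-in for observational data
future = torch.randn(32, 10)     # stand-in for the quantities to predict
for _ in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(past), future)
    loss.backward()
    optimizer.step()
```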

Researchers at École Polytechnique Fédérale de Lausanne in Switzerland are using artificial intelligence to predict when lightning will strike. Lightning is notoriously unpredictable, and its destructive nature can have both environmental and economic repercussions. The AI system combines meteorological data with machine learning to “predict lightning strikes down to the nearest 10 to 30 minutes inside a 30-kilometre radius”. The machine learning algorithm was trained on collected measurements such as air pressure, air temperature, and relative humidity, all of which are indicators of the weather conditions that typically lead to lightning strikes and storms. To read the full article from Popular Mechanics, click here.
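Popular Mechanics does not detail the researchers' model or training data, so the following is a minimal sketch, in Python with scikit-learn, of the general approach: a classifier trained on weather-station readings (air pressure, temperature, relative humidity) with a hypothetical label indicating whether lightning followed within the prediction window. The synthetic data and the choice of classifier are assumptions for illustration only.

```python
# Minimal sketch only: the EPFL study's actual model and dataset are not
# described in the post, so the features, labels, and classifier below are
# assumptions meant to illustrate predicting lightning from standard
# weather-station measurements.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 5000

# Stand-in weather-station readings: air pressure (hPa), air temperature (C),
# and relative humidity (%), the indicators mentioned in the article.
X = np.column_stack([
    rng.normal(1013, 10, n),   # air pressure
    rng.normal(15, 8, n),      # air temperature
    rng.uniform(20, 100, n),   # relative humidity
])
# Hypothetical label: 1 if a lightning strike occurred within the next
# 10 to 30 minutes inside the station's radius, 0 otherwise.
y = rng.integers(0, 2, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```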

In a recent article in The New York Times, three accomplished women in tech shared their perspectives on confronting bias in AI: Daphne Koller, co-founder of the online education company Coursera and founder and chief executive of Insitro; Olga Russakovsky, assistant professor in the Department of Computer Science at Princeton University and co-founder of AI4ALL; and Timnit Gebru, research scientist on Google's ethical AI team and co-founder of Black in AI. All three encounter bias on a daily basis, whether related to gender, race, or the real-world data on which AI systems are trained and tested. The article, which you can read here, offers important perspectives, both broad and specific, on how companies and individuals can recognize and mitigate bias in AI.
