Biology & Medicine:
Every moment, thousands of different microRNAs are present in your cells, regulating processes such as neuron growth, aging, and tumor development. Specifically, they control gene expression by binding to messenger RNA (mRNA) molecules, preventing their translation into proteins or marking them for degradation. This fine-tuning of gene activity is crucial for maintaining cellular homeostasis and responding to environmental changes. MicroRNAs were originally discovered by Victor Ambros and Gary Ruvkun in the nematode worm C. elegans; today, over 1,000 different microRNAs are known in humans. Their role in health and disease makes them an exciting focus of research, with potential applications in both diagnostics and therapeutics.
Chemistry:
Proteins are essential molecules that perform a wide range of functions in living organisms, from catalyzing biochemical reactions to supporting cellular structure. Understanding their structure is crucial for revealing how they function, as a protein's shape determines its interactions and activities. However, predicting protein structures is incredibly challenging due to the vast number of possible ways a protein's amino acid chain could fold into its final shape.
Recently, however, machine learning algorithms like AlphaFold have revolutionized protein structure prediction, making it faster and more accurate. Half of this year’s Chemistry Nobel Prize went to Demis Hassabis and John Jumper for creating this model.
The other half went to David Baker for his innovations in computational protein design. Typically, we think of proteins as purely naturally occurring biological molecules, but Baker and his lab pioneered methods for creating artificial proteins that serve specific biological purposes. Through this approach, researchers can design specialized proteins that target the mechanisms underlying disease, opening the door to new potential therapeutics.
Physics:
This year’s Nobel Prize in Physics was awarded to Professor John J. Hopfield at Princeton University and Professor Geoffrey E. Hinton at the University of Toronto "for foundational discoveries and inventions that enable machine learning with artificial neural networks." Let’s dive into what this means.
The concept of artificial intelligence (AI) began to take shape in the 1940s, not long after the invention of modern computers. A variety of approaches were explored, including neural networks, but early work largely focused on rule-based expert systems. Neural networks were initially sidelined because of performance limitations. Then came the AI winter in the 1970s. Despite these setbacks, Professors Hopfield and Hinton continued to believe in the potential of neural networks, and their work laid essential groundwork for the field as it exists today. It turned out that the true catalyst for neural networks to exhibit their full power was the increase in computational power, fueled by the rise of GPUs, cloud computing, and big data.
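To get a feel for one of these foundational ideas, here is a minimal sketch of a Hopfield network in Python (illustrative only, not the laureates' actual code): a pattern is stored in a weight matrix via the Hebbian rule, and the network can then recover it from a corrupted copy.

```python
import numpy as np

def train(patterns):
    """Build the weight matrix from +/-1 patterns (Hebbian learning)."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)       # strengthen connections between co-active neurons
    np.fill_diagonal(W, 0)        # no self-connections
    return W / n

def recall(W, state, steps=10):
    """Repeatedly set each neuron to the sign of its weighted input."""
    s = state.copy()
    for _ in range(steps):
        for i in range(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Store one 8-neuron pattern and recover it from a corrupted copy.
pattern = np.array([1, -1, 1, -1, 1, 1, -1, -1])
W = train(pattern[None, :])
noisy = pattern.copy()
noisy[0] *= -1                    # flip one bit to corrupt the memory
print(recall(W, noisy))           # the network settles back to the stored pattern
```

Each update nudges the network toward a state of lower "energy," which is why stored patterns act like memories the network falls back into. That energy-based view is a key part of what made Hopfield's 1982 model so influential.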
Today, applications like computer vision, large language models (such as ChatGPT), and AI-generated art are all built on neural networks. And they will continue to fuel many exciting AI innovations ahead!
Want to learn more about AI? Crash Course has an excellent series on the subject, and several episodes focus on neural networks. Plus, listen to Prof. Hinton himself presenting his work on neural networks in his Turing Award lecture.