7 min read · Jul 30th · Like a perpetual motion machine, a time crystal forever cycles between states without consuming energy. Physicists claim to have built this new phase of matter inside a quantum computer.
The time crystal is a new category of phases of matter, expanding the definition of what a phase is. All other known phases, like water or ice, are in thermal equilibrium: Their constituent atoms have settled into the state with the lowest energy permitted by the ambient temperature, and their properties don’t change with time. The time crystal is the first “out-of-equilibrium” phase: It has order and perfect stability despite being in an excited and evolving state.
10 min read · Aug 24th · Familiar categories of mental functions such as perception, memory and attention reflect our experience of ourselves, but they are misleading about how the brain works. More revealing approaches are…
Neuroscientists are the cartographers of the brain’s diverse domains and territories
It’s easy to show how things are not working. The hard part now is where to go from here.
3 min read · Sep 2nd · Computational neuroscientists taught an artificial neural network to imitate a biological neuron. The result offers a new way to think about the complexity of single brain cells.
4 min read · Jul 7th · A temporal pattern of activity observed in human brains for the first time may explain how we can learn so quickly.
“It’s really important not just how many [neuron activations] occur, but when exactly they occur.”
Some researchers think the discovery might help solve a major mystery: how brains can learn so quickly.
The closer you get to the center of a place field, the faster the corresponding place cell fires. As you leave one place field and enter another, the firing of the first place cell peters out, while that of the second picks up.
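This handoff between overlapping place fields can be sketched with a simple tuning-curve model. The Gaussian shape, field centers, and rate values below are illustrative assumptions, not details from the article — just a minimal way to show one cell's firing petering out as another's picks up:

```python
import math

def place_cell_rate(position, field_center, field_width=1.0, peak_rate=20.0):
    """Firing rate (spikes/s) of a hypothetical place cell, modeled as a
    Gaussian tuning curve: fastest at the field's center, tapering toward
    the edges. All parameter values here are illustrative."""
    return peak_rate * math.exp(-((position - field_center) ** 2)
                                / (2 * field_width ** 2))

# Two overlapping place fields centered at positions 0 and 3. As the animal
# moves from 0 toward 3, cell 1's rate falls off while cell 2's ramps up.
for x in [0.0, 1.0, 2.0, 3.0]:
    r1 = place_cell_rate(x, field_center=0.0)
    r2 = place_cell_rate(x, field_center=3.0)
    print(f"x={x:.1f}  cell1={r1:5.2f} Hz  cell2={r2:5.2f} Hz")
```

At the midpoint of the traverse, both cells fire at the same reduced rate; closer to either center, that cell dominates.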
5 min read · Aug 17th · The most widely used technique for finding the largest or smallest values of a math function turns out to be a fundamentally difficult computational problem.
This result puts the brakes on what you could possibly shoot for.
Many aspects of modern applied research rely on a crucial algorithm called gradient descent. This is a procedure generally used for finding the largest or smallest values of a particular mathematical function — a process known as optimizing the function.
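The procedure itself is compact. Here is a minimal sketch of gradient descent for minimization (to find the largest values instead, one steps along the gradient rather than against it); the function, learning rate, and step count are illustrative choices, not anything specified by the article:

```python
def gradient_descent(grad, x0, learning_rate=0.1, steps=100):
    """Minimize a function by repeatedly stepping in the direction
    opposite its gradient. `grad` is the derivative of the function
    being optimized; `x0` is the starting guess."""
    x = x0
    for _ in range(steps):
        x = x - learning_rate * grad(x)
    return x

# Example: minimize f(x) = (x - 2)^2, whose derivative is 2*(x - 2).
# The iterates converge toward the minimum at x = 2.
x_min = gradient_descent(lambda x: 2 * (x - 2), x0=0.0)
print(x_min)
```

Each step moves downhill by an amount proportional to the local slope, so progress slows as the iterate nears a flat point; the cited hardness result concerns how expensive it can be, in the worst case, to reach such a point.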