‘Black box’ technique may lead to more powerful AI

OpenAI researchers have developed an evolution strategy that promises more powerful AI systems. Rather than using standard reinforcement learning, they treat the whole problem as a “black box”: the details of the environment and the neural network are ignored, and the focus is simply on optimizing a given objective, sharing results only as necessary.

The system starts with a set of random parameters, makes many slightly perturbed guesses, and then biases the next round of guesses toward the more successful candidates, gradually whittling things down to a good answer.
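
To make that loop concrete, here is a minimal sketch of this kind of evolution strategy in Python. The toy reward function, population size, noise scale, and learning rate are illustrative assumptions, not OpenAI's actual setup:

```python
import numpy as np

def reward(params):
    # Hypothetical objective: get as close as possible to an arbitrary target.
    target = np.array([0.5, 0.1, -0.3])
    return -np.sum((params - target) ** 2)

n_params = 3     # size of the parameter vector
pop_size = 50    # number of perturbed candidates per generation
sigma = 0.1      # standard deviation of the random perturbations
alpha = 0.02     # learning rate for the parameter update

params = np.random.randn(n_params)   # start from random parameters

for generation in range(300):
    # Sample a population of random perturbations ("guesses").
    noise = np.random.randn(pop_size, n_params)
    rewards = np.array([reward(params + sigma * eps) for eps in noise])

    # Normalize rewards so the more successful candidates get larger weights.
    advantages = (rewards - rewards.mean()) / (rewards.std() + 1e-8)

    # Nudge the parameters toward the perturbations that scored best.
    params += alpha / (pop_size * sigma) * noise.T @ advantages

print("final parameters:", params)
```

Note that each candidate only has to report back a single score, which is why this style of search parallelizes so easily across many workers.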

The technique cuts out much of the traditional machinery of training neural networks (there is no backpropagation, for instance), making the code both easier to implement and roughly two to three times faster.

There’s a long way to go before you see the black box approach used in real-world AI. However, the practical implications are clear: neural network operators could spend more time actually using their systems instead of training them. And as computers get ever faster, it becomes increasingly likely that this kind of learning could happen effectively in real time.

[Source]
