In 2021, OpenAI CEO Sam Altman wrote a blog post arguing that Moore’s Law, the observation that semiconductor circuits double in power for the same price roughly every two years, should apply to everything. He later returned to the idea on Twitter: “A new version of Moore’s law that could start soon: the quantity of intelligence in the universe doubles every 18 months.”
Consistency Models
Others may not share Altman’s confidence, but the pace of OpenAI’s research output seems to support it. In a paper published last week titled “Consistency Models,” the startup introduced a new class of generative models that can match the quality of diffusion models at a fraction of the sampling cost. The paper was published on March 3, 2023, and was written by Yang Song, Prafulla Dhariwal, Mark Chen, and OpenAI co-founder Ilya Sutskever.
Faster and less compute-hungry than diffusion models
Consistency models have been shown to deliver output quality comparable to diffusion models in far less time. The key difference is that, like GANs, consistency models generate a sample in a single step.
In contrast, diffusion models use an iterative sampling procedure that removes noise from an image a little at a time. That iterative generation process consumes 10 to 2,000 times more compute than a consistency model and makes inference correspondingly slow.
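To make the cost difference concrete, here is a minimal, hypothetical sketch in Python. The `denoise_step` and `consistency_fn` functions are toy placeholders standing in for trained networks (this is not OpenAI’s code); the point is simply that diffusion pays one network evaluation per step, while a consistency model pays exactly one in total.

```python
import numpy as np

rng = np.random.default_rng(0)

def denoise_step(x, t):
    # Placeholder for one evaluation of a trained diffusion network;
    # here it just shrinks the noise a little each step.
    return x * (1.0 - 1.0 / (t + 1))

def diffusion_sample(shape, num_steps=1000):
    """Iterative diffusion sampling: start from pure noise and call
    the network once per step. Cost grows linearly with num_steps."""
    x = rng.standard_normal(shape)
    for t in reversed(range(num_steps)):
        x = denoise_step(x, t)
    return x

def consistency_sample(consistency_fn, shape):
    """Consistency-model sampling: a single network call maps noise
    directly to a finished sample."""
    x_T = rng.standard_normal(shape)
    return consistency_fn(x_T)

# Toy consistency function standing in for a trained model.
sample = consistency_sample(lambda x: 0.1 * x, shape=(4, 4))
```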
The study gives this class of models the name “consistency” because of the self-consistency property they are trained to preserve: every noisy point along the same trajectory maps back to the same clean output.
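A small check makes the property concrete. Assuming `f` is a trained consistency function and `trajectory` holds noisy versions of one sample at different time steps (both hypothetical here), self-consistency says `f` should send all of them to the same origin:

```python
import numpy as np

def is_self_consistent(f, trajectory, atol=1e-3):
    """trajectory: list of (x_t, t) pairs taken from one noising path.
    A consistency function must map every pair to (nearly) the same
    clean output, i.e. f(x_t, t) == f(x_t', t') for all t, t'."""
    outputs = [f(x, t) for x, t in trajectory]
    return all(np.allclose(outputs[0], out, atol=atol) for out in outputs)
```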
No adversarial training needed
Notably, neither consistency models nor diffusion models rely on adversarial training. Adversarial training produces a more robust neural network, but it goes about it indirectly: it first generates a set of adversarial examples that the network mislabels, and then retrains the network on those examples with the correct labels.
Adversarial training has also been found to slightly reduce a deep learning model’s prediction accuracy, and in robotics applications adversarially trained models can have unanticipated negative side effects.
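For readers unfamiliar with the recipe, here is a minimal sketch of one adversarial-training step in PyTorch. It uses the well-known fast gradient sign method (FGSM) as the attack; this illustrates the general procedure described above, not the method of any particular system.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=0.03):
    """Craft an adversarial example: perturb the input in the
    direction that most increases the loss, so the model is likely
    to mislabel it."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

def adversarial_training_step(model, optimizer, x, y):
    """One adversarial-training step: generate examples the model
    gets wrong, then retrain on them with the correct labels y."""
    x_adv = fgsm_example(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```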
OpenAI is not the only party involved here, but it is unquestionably a significant one. If it wants to grow sales, it has a strong incentive to make its AI products cheaper in both time and compute. Since diffusion models are common in image, audio, and video generation alike, consistency models could have a significant impact in this regard.
Sutskever hinted at this in a tweet just last month: “Many believe that big AI advancements must incorporate a new ‘concept’. Yet this is untrue: several of AI’s most significant developments took the form ‘wow, turns out this familiar minor idea, when done well, is spectacular.’” This paper demonstrates that everything can change when an older idea is built upon well.