At the same time, though, researchers worry the window may be closing on their ability to study these systems. OpenAI has not divulged the details of how it designed and trained GPT-4, in part because it is locked in competition with Google and other companies, not to mention other countries. "Probably there's going to be less open research from industry, and things are going to be more siloed and organized around building products," says Dan Roberts, a theoretical physicist at M.I.T., who applies the techniques of his profession to understanding AI.
And this lack of transparency does not just harm researchers; it also hinders efforts to understand the social impacts of the rush to adopt AI technology. "Transparency about these models is the most important thing to ensure safety," Mitchell says.
This is the rationale behind OpenAI's "early" release (before the alignment problem is solved). Some (Yudkowsky among them) have argued vehemently that this was the wrong decision. But there's no doubt that it has resulted in an explosion of research and study, which seems like a good outcome. What we really need to fear, imho, is this powerful tool being deployed quietly, with nobody noticing.
In the preface to Max Tegmark's book "Life 3.0," a pretty optimistic, utopian vision of how this might all unfold is presented. But later in the book, he shows how easily, with a power shift, it could all crumble into severe dystopia. From what I've witnessed of humans in my life, I tend to believe that, unfortunately, the dystopian view will prevail. :sigh: