A summary of the meeting later provided by the White House said that the executives and Harris had “a frank and constructive discussion” on the need for the companies to be more transparent about their A.I. systems, the importance of there being a way to evaluate and verify the safety, security, and performance of this software, and the need to secure the systems from malicious actors and attacks.

The White House also used the occasion to announce several new initiatives, including $140 million in funding to establish seven new National A.I. Research Institutes, a major red-teaming exercise where seven major A.I. companies will voluntarily submit their A.I. models to probing by independent security, safety, and ethics researchers at the DEFCON 31 cybersecurity conference in August, and a policymaking effort from the Office of Management and Budget that will result in guidelines for how the U.S. government uses A.I.

What got the most attention, however, is who was in the room, and who wasn’t. Meeting with Harris were the CEOs of Microsoft, Google, OpenAI, and Anthropic. (Google DeepMind’s Demis Hassabis also looked to be there from the video clip of Biden addressing the group.) When asked why only these companies, and not others, were present, the White House said it wanted to meet with the “four American companies at the forefront of A.I. innovation.” Many read that as a burn on Mark Zuckerberg’s Meta, which has invested heavily in A.I. technology and research but, unlike the companies meeting with Harris, has not integrated the technology into a consumer-facing do-it-all chatbot, as well as Amazon and Apple, both of which are perceived as lagging in A.I. development.

But there were a lot of other players absent: Nvidia is participating in the red-teaming exercise at DEFCON 31 but wasn’t invited to the White House. Yet it’s an American company, its chips are a linchpin of the current generative A.I. boom, and it is also building its own large language models. What about Cohere, which is technically Canadian but is also building very large language models, with financial backing and close support from Google?

There also were no representatives from the fast-growing open-source A.I. ecosystem, most notably Hugging Face and Stability AI, both of which are also participating in the DEFCON 31 exercise. Stability is a British company, but Hugging Face is incorporated in the U.S., and its CEO and cofounder, Clem Delangue, although French, lives in Miami. The open-source models these companies are building (and hosting, in the case of Hugging Face) are being used by thousands of businesses and individual developers. They are rapidly matching the capabilities of the systems built by OpenAI, Microsoft, and Google. These open-source players really ought to be “in the room where it happens” if the Biden Administration is serious about grappling with A.I. and its risks.

Arguably, the dangers with open-source software are greater than with the private models the big tech companies are building and making available through APIs: While it is often easier to find security vulnerabilities or safety flaws with open-source software, it is also much easier for those with ill intentions, or simply a cavalier attitude toward potential risks, to use these models however they want. If you wanted to create a malware factory, it would make more sense to download an open-source language model like Alpaca from Hugging Face than rely on OpenAI’s API. OpenAI could always cut off your access if it discovered your operation. People are also already using open-source software to turn LLMs into nascent agents that can perform actions across the internet. Governing the open-source A.I. world is a much, much bigger challenge than slapping limits on companies like Microsoft and Google. But any serious effort to govern advanced A.I. is going to have to figure out what to do about open-source.

Which brings me to another very interesting bit of A.I. news from last week: that allegedly leaked Google “We have no moat” memo. Google has neither confirmed nor denied the memo’s legitimacy, but it seems likely to be genuine. The timing of the leak, the same day as the White House meeting, is indeed suspect. (Since the memo makes the case that Google’s A.I. tech is increasingly being matched, if not superseded in some respects, by open-source alternatives, it might bolster arguments that the “big four” called to the White House should not be singled out for any regulatory action.) The leaked memo does a good job of laying out some of the problems with the ultra-large generative A.I.