"Big" AI won't help the US in a war with China: why the hype around scale is dangerous for the army

Researchers have concluded that "small" AI is in some cases more useful to service members than large, energy-hungry models. The prevailing principle in artificial intelligence (AI) today is "bigger is better," but a new study shows that this approach could undermine the development of the AI technologies American service members need now and in the future, Defense One reports.

Researchers Gaël Varoquaux of Université Paris-Saclay, Alexandra Sasha Luccioni of the Quebec AI institute Mila, and Meredith Whittaker of the Signal Foundation traced the history of this principle in their article "Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI." They found that the idea was formulated in 2012 in a paper by Alex Krizhevsky of the University of Toronto.

In that work, Alex Krizhevsky argued that large amounts of data and large-scale neural networks produce far better image-classification results than smaller ones. Other researchers embraced the idea, and the approach later became dominant among the big AI companies. "The consequence has been both an explosion of investment in large-scale AI models and an accompanying jump in the size of notable (highly cited) models.

Generative AI, whether for images or text, has taken this assumption to a new level, both within the discipline of AI research and as part of the popular 'bigger is better' narrative that surrounds AI," the study reads. The article notes that the performance of large AI models does not always justify the resources required to run them.

Moreover, concentrating AI work in a relatively small number of large technology companies carries geopolitical risks. Although the US Department of Defense pays attention to both large AI models and smaller-scale projects, experts fear that future research into "small" AI may be constrained by the growing influence of the big AI companies.

One example is former Google chairman Eric Schmidt's statement that companies and governments should keep pursuing energy-hungry large AI models regardless of the energy cost, since "we're not going to hit the climate goals anyway." Meanwhile, environmental costs, energy consumption in particular, are growing much faster than the performance of AI models is improving.

Experts point out that "bigger is better" research is narrowing the field and eroding its diversity. According to Defense One, this narrowing could have negative consequences for the development of military AI, because smaller models can matter most in places where computing resources are scarce, intermittent, or entirely absent.

"It is often the case that smaller, more task-focused models perform better than large, general-purpose models on specific downstream tasks," reads a separate article published by a group of researchers from Berkeley. Examples include UAVs operating under electronic-warfare jamming and small forward bases where power is scarce and communications are weak.

Operators may face many situations that call for an AI model that runs on a relatively small body of data and does not require a massive server farm or large numbers of GPUs. These could be applications for processing UAV and satellite imagery, or tools for analyzing economic, weather, demographic, and other data to plan safer and more efficient operations in cities.
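To make the point concrete, here is a minimal sketch (my own illustration, not code from the article or the study): a small, task-specific image classifier that trains in seconds on a laptop, with no server farm or GPUs. The scikit-learn digits dataset stands in for a modest imagery set.

```python
# A "small AI" sketch: a linear classifier on a tiny image dataset.
# The digits set (1,797 8x8 images) is a hypothetical stand-in for a
# modest drone/satellite image collection; the point is that a narrow
# task can be served without billion-parameter models or GPUs.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Roughly 64 features x 10 classes worth of parameters: far from
# large-scale AI, yet adequate for this one narrow task.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
print(f"held-out accuracy: {accuracy:.2f}")
```

The entire model fits in a few kilobytes, which is the property that matters at a forward position where power and bandwidth are limited.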

"But if the AI research sector prioritizes large models over small ones, it could mean less research and fewer experts to teach operators how to build good small AI models of their own," the publication writes. Another potential consequence of prioritizing "big" AI is the concentration of power: only a handful of companies have the resources to build and deploy large models. One example cited is Elon Musk, who is among the world's richest defense contractors.

Elon Musk is also becoming one of the key financial players in the development of future AI. "Concentrated private power over AI creates a small and financially motivated set of people making decisions about AI. We must consider how such concentrated power, with agency over centralized AI, could shape society toward more authoritarian conditions," the researchers write.

According to Defense One, a new class of AI experts likewise believes that the focus on "big" AI is crowding out approaches that could serve specific groups better. Pete Warden, CEO of the AI startup Useful Sensors, told the publication that industry's and academia's obsession with ever-larger AI misses what most people actually want from it. "Academic benchmarks are out of step with real-world requirements.

For example, many customers simply want to extract answers from the information they already have (say, user manuals) rather than have new text generated in response to a question, but researchers don't find that interesting," Warden said. For his part, Drew Breunig, a former data and strategy executive at PlaceIQ who now works at Precisely, added that many people's high expectations for large AI models are unlikely to be met.

Drew Breunig divides AI into three groups. The first is the "gods": superhuman AI meant to replace people, doing many different things without supervision. Next down the hierarchy are the "interns," which he describes as domain-specific assistants that spare experts hard and tedious work, doing what an intern could do; these models operate under expert supervision. The third and most down-to-earth form of AI Breunig calls "cogs."

These are single-task models with very low tolerance for error that run unsupervised inside applications or pipelines. According to the expert, this is the most common type of AI that companies use, and all the big platforms have moved toward helping companies upload their own data to fine-tune AI models that do one thing well.
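A minimal sketch of the "cog" idea (my own illustration, with a hypothetical routing task and invented example messages, not code from the article): a single-task classifier embedded in an unattended pipeline. Because cogs run without supervision and tolerate few errors, the pipeline acts only on high-confidence predictions and sets everything else aside for review.

```python
# A "cog": one narrow task, run unsupervised, abstaining when unsure.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hypothetical training set for one narrow task:
# routing support messages as "billing" or "tech".
texts = [
    "invoice is wrong", "charged twice this month", "refund my payment",
    "app crashes on start", "cannot connect to server", "error on login",
]
labels = ["billing", "billing", "billing", "tech", "tech", "tech"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

def route(message: str, threshold: float = 0.6) -> str:
    """Act only on confident predictions; otherwise abstain for review."""
    proba = model.predict_proba([message])[0]
    label = model.classes_[proba.argmax()]
    return label if proba.max() >= threshold else "needs-review"

print(route("charged twice again"))
```

The confidence threshold is the design choice that makes unsupervised operation tolerable: the model never silently guesses on inputs it was not fine-tuned for.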

"The cool thing about focusing on cogs is how much you can do with small models! A tiny model tuned for one task can outperform a giant general-purpose model at that same task," the expert concluded. Earlier, US senators Maggie Hassan and Marsha Blackburn said that China has achieved success in quantum information science, surpassing the United States in the scale and scope of its development efforts.