Amid an overall decline in AI venture funding, generative AI investment nearly octupled. The implication may be that corporate users perceive immediate productivity and financial benefits from Gen AI, even as AI more broadly faces growing consumer and government concern.
Also notable among these trends is that industry development of new machine learning models now far outpaces academia's, and that the US remains well ahead of China, even as AI continues to surpass human performance on a number of important benchmarks. JL
Michael Nunez reports in VentureBeat:
While private AI investment declined for a second year, funding for “generative AI” nearly octupled to $25.2 billion. Companies like OpenAI and Anthropic closed massive funding rounds. Private companies produced 51 notable machine learning models last year, compared to only 15 from academia, as AI “surpassed human performance on several benchmarks, yet trails behind on more complex tasks like competition-level mathematics, visual commonsense reasoning, and planning.” Costs to train cutting-edge AI systems skyrocketed (as) “the training costs of state-of-the-art AI models reached unprecedented levels.” (And) “robust and standardized evaluations for LLM responsibility are seriously lacking,” which “complicates efforts to systematically compare the risks and limitations of top AI models.”
Artificial intelligence made major strides in 2023 across technical benchmarks, research output, and commercial investment, according to a new report from Stanford University‘s Institute for Human-Centered AI. However, the technology still faces key limitations and growing concerns about its risks and societal impact.
The AI Index 2024 annual report, a comprehensive look at global AI progress, finds that AI systems exceeded human performance on additional benchmarks in areas like image classification, visual reasoning, and English understanding. However, they continue to trail humans on more complex tasks like advanced mathematics, commonsense reasoning, and planning. “AI has surpassed human performance on several benchmarks,” the report says. “Yet it trails behind on more complex tasks like competition-level mathematics, visual commonsense reasoning, and planning.”
The report details an explosion of new AI research and development in 2023, with industry players leading the charge. Private companies produced 51 notable machine learning (ML) models last year, compared to only 15 from academia. Collaborations between industry and academia yielded an additional 21 high-profile models. Costs to train cutting-edge AI systems skyrocketed, with OpenAI’s GPT-4 language model using an estimated $78 million worth of computing power. Google’s even larger Gemini Ultra model cost a staggering $191 million to train, according to estimates in the report.
“Frontier models get way more expensive,” the authors explain. “According to AI Index estimates, the training costs of state-of-the-art AI models have reached unprecedented levels.”
Geographic dominance in AI production
The United States dominated other countries in producing leading AI models, with 61 notable systems originating from U.S. institutions in 2023. China and the European Union trailed, with 15 and 21 notable systems, respectively.
Investment trends painted a mixed picture. While overall private AI investment declined for a second year, funding for “generative AI” — systems that can produce text, images and other media — nearly octupled to $25.2 billion. Companies like OpenAI, Anthropic and Stability AI closed massive funding rounds.
“Generative AI investment skyrockets,” the report notes. “Despite a decline in overall AI private investment last year, funding for generative AI surged, nearly octupling from 2022 to reach $25.2 billion.”
Need for standardized AI testing
As AI rapidly advances, the report finds a troubling lack of standardized testing of systems for responsibility, safety and security. Leading developers like OpenAI and Google primarily evaluate their models on different benchmarks, making comparisons difficult.
“Robust and standardized evaluations for LLM [large language model] responsibility are seriously lacking,” according to the AI Index analysis. “This practice complicates efforts to systematically compare the risks and limitations of top AI models.”
Emerging risks and public concern
The authors point to emerging risks, including the spread of political deepfakes, which are “easy to generate and difficult to detect.” They also highlight new research revealing complex vulnerabilities in how language models can be manipulated to produce harmful outputs.
Public opinion data in the report shows growing anxiety about AI. The share of people who think AI will “dramatically” affect their lives in the next 3-5 years rose from 60% to 66% globally. More than half now express nervousness about AI products and services.
“People across the globe are more cognizant of AI’s potential impact — and more nervous,” the report states. “In America, Pew data suggests that 52% of Americans report feeling more concerned than excited about AI, rising from 37% in 2022.”
As AI becomes more powerful and pervasive, the AI Index aims to provide an objective look at the state of the technology to inform policymakers, business leaders, and the general public. With AI at an inflection point, rigorous data will be crucial to navigate the opportunities and challenges ahead.