    News

    Geekbench releases AI benchmarking app

Benchmarking stalwart Primate Labs on Thursday released Geekbench AI 1.0. The app, currently available for Android, Linux, macOS and Windows, applies Geekbench’s principles to machine learning, deep learning and other AI workloads in a bid to standardize performance ratings across platforms. It succeeds Geekbench ML (machine learning), which was announced in 2021 and is currently on version 0.6.
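
Primate Labs doesn’t detail its scoring internals here, but the core idea behind a cross-platform benchmark of this kind is to run identical workloads on every device and reduce the timings to comparable scores. Below is a minimal, hypothetical sketch of that pattern; the matrix-multiply stand-in workload, warmup count and median-latency score are illustrative assumptions, not Geekbench AI’s actual method.

    # Illustrative only: a minimal timing loop in the style of an AI benchmark.
    # The workload, run counts and scoring below are assumptions for this sketch,
    # not Geekbench AI's implementation.
    import time
    import numpy as np

    def run_workload(size: int = 512) -> None:
        # Stand-in for one inference step: a dense float32 matrix multiply.
        a = np.random.rand(size, size).astype(np.float32)
        b = np.random.rand(size, size).astype(np.float32)
        _ = a @ b

    def benchmark(runs: int = 20, warmup: int = 3) -> float:
        # Warm up first so one-time setup costs don't skew the measurement,
        # then report the median latency in milliseconds over repeated runs.
        for _ in range(warmup):
            run_workload()
        timings = []
        for _ in range(runs):
            start = time.perf_counter()
            run_workload()
            timings.append((time.perf_counter() - start) * 1000.0)
        return float(np.median(timings))

    if __name__ == "__main__":
        print(f"median latency: {benchmark():.2f} ms")

A real benchmark would swap the toy workload for actual model inference and aggregate many such workloads into a single cross-platform score.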

    “[I]n recent years, companies have coalesced around using the term ‘AI’ in these kinds of workloads (and in their related marketing),” Primate Labs says of the name change. “To ensure that everyone, from engineers to performance enthusiasts, understands what this benchmark does and how it works, we felt it was time for an update.”

Earlier this week, ChatGPT-maker OpenAI announced a new version of its own AI model benchmark. SWE-bench Verified is a “human-validated” offering designed to measure models’ efficacy in solving “real world issues.”
