Author: kissdev
AI social media vetting startup Ferretly has raised $2.5 million in seed funding and is launching a new platform designed to screen election personnel. Founded in 2019, Ferretly leverages AI to scan social media and publicly available online data to uncover potential risks and behaviors that traditional background checks may overlook. The startup is the brainchild of Darrin Lipscomb, who previously founded software startups Pipestream and Avrio, which sold to BMC Software and Hitachi, respectively. Lipscomb told TechCrunch that Ferretly is designed to help hiring managers ensure that the person they’re hiring aligns with their company’s values. The…
Roeland Decorte grew up in a nursing home in Belgium, where he learned to spot the subtle early signs of mental decline in small changes to how residents walked or talked. When Decorte was 11, his father, who owned and managed the care home, started waking up in the middle of the night with chest pains and an overwhelming sense of impending doom. He went to two doctors, who briefly listened to his heartbeat through their stethoscopes and diagnosed him with anxiety. But the symptoms persisted, and it was only when he underwent a full set of scans at a…
World Labs, a stealthy startup founded by renowned Stanford University AI professor Fei-Fei Li, has raised two rounds of financing two months apart, according to multiple reports. The latest financing was led by NEA and valued the company at over $1 billion, TechCrunch has learned from several people with knowledge of the investments. This was the $100 million round previously reported by the Financial Times in July, and a significant increase over World Labs’ initial financing, which took place in April and valued the company at $200 million, one person said. Investors in the first round included…
The latest generative models make for great demos, but are they really about to change how people make movies and TV? Not in the short term, according to filmmaking and VFX experts. But in the long term, the changes could be literally beyond our imagining. On a panel at SIGGRAPH in Denver, Nikola Todorovic (Wonder Dynamics), Freddy Chavez Olmos (Boxel Studio) and Michael Black (Meshcapade, Max Planck Institute) discussed the potential of generative AI and other systems to change — but not necessarily improve — the way media is created today. Their consensus was that while we can justly…
Elon Musk’s Grok released a new AI image-generation feature on Tuesday night that, just like the AI chatbot, has very few safeguards. That means you can generate fake images of Donald Trump smoking marijuana on the Joe Rogan show, for example, and upload them straight to the X platform. But a new startup is the one behind the controversial feature. The social media site is already flooded with outrageous images from the new feature. That certainly raises concerns heading into an election cycle, but strictly speaking it’s not really Elon Musk’s AI company powering the madness. Musk seems to…
Popular iOS pro photography app Halide launched its new version today with a new feature called Process Zero, which does not use AI in image processing. Lux Optics, the company behind the Halide app, believes that this option can be a creative tool for photographers to take different kinds of snaps. The company previously allowed users to reduce default image processing on the app. The new option skips the standard image processing and is based on a single exposure RAW file. Halide uses 12-megapixel RAW DNG files for Process Zero pictures. The company said using the fast processing pipeline…
The U.S. Federal Trade Commission (FTC) announced on Wednesday a final rule that will tackle several types of fake reviews and prohibit marketers from using deceptive practices, such as AI-generated reviews, censoring honest negative reviews and compensating third parties for positive reviews. The decision was the result of a 5-to-0 vote. The new rule will start being enforced 60 days after it’s published in the Federal Register, the official government publication. The FTC’s new rule has been a long time coming. It aims to improve the often untrustworthy online review system and — hopefully — make it easier for…
All generative AI models hallucinate, from Google’s Gemini to Anthropic’s Claude to the latest stealth release of OpenAI’s GPT-4o. The models are unreliable narrators, in other words — sometimes to hilarious effect, other times problematically so. But not all models make things up at the same rate. And the kinds of mistruths they spout depend on which sources of info they’ve been exposed to. A recent study from researchers at Cornell, the universities of Washington and Waterloo and the nonprofit research institute AI2 sought to benchmark hallucinations by fact-checking models like GPT-4o against authoritative sources on topics ranging from…
Hiya, folks, welcome to TechCrunch’s regular AI newsletter. This week in AI, a new study shows that generative AI really isn’t all that harmful — at least not in the apocalyptic sense. In a paper submitted to the Association for Computational Linguistics’ annual conference, researchers from the University of Bath and the University of Darmstadt argue that models like those in Meta’s Llama family can’t learn independently or acquire new skills without explicit instruction. The researchers conducted thousands of experiments to test the ability of several models to complete tasks they hadn’t come across before, like answering questions about topics…
Which specific risks should a person, company or government consider when using an AI system, or crafting rules to govern its use? It’s not an easy question to answer. If it’s an AI with control over critical infrastructure, there’s the obvious risk to human safety. But what about an AI designed to score exams, sort resumes or verify travel documents at immigration control? Those each carry their own, categorically different risks, albeit risks no less severe. In crafting laws to regulate AI, like the EU AI Act or California’s SB 1047, policymakers have struggled to come to a consensus…
