AI News
AI in health should be regulated, but don’t forget about the algorithms, researchers say
In a recent commentary, a team from MIT, Equality AI, and Boston University highlights the gaps in regulation for AI models and non-AI algorithms in health care.
Google Launches Gemini 2.0 with Autonomous Tool Linking
Gemini 2.0 Flash is available now, with other model sizes coming in January. It adds multilingual voice output, image output, and some trendy “agentic” capabilities.
'Us' vs. 'them' biases plague AI, too
A study by a team of scientists finds that AI systems are also prone to social identity biases, revealing fundamental group prejudices that reach beyond those tied to gender, race, or religion.
AI thought knee X-rays show if you drink beer -- they don't
A new study highlights a hidden challenge of using AI in medical imaging research -- the phenomenon of highly accurate yet potentially misleading results known as 'shortcut learning.' The researchers analyzed thousands of knee X-rays and found that AI models can 'predict' unrelated and implausible traits, such as whether patients abstained from eating refried beans or drinking beer. While these predictions have no medical basis, the models achieved high accuracy by exploiting subtle and unintended patterns in the data.
Adoption of AI calls for new kind of communication competence from sales managers
Artificial intelligence (AI) is rapidly transforming work in the financial sector as well. A recent study explored how integrating AI into the work of sales teams affects the interpersonal communication competence required of sales managers. The study found that handing routine tasks over to AI improved efficiency and freed up sales managers' time for more complex tasks. However, as the integration of AI progressed, sales managers faced new kinds of communication challenges, including those related to overcoming fears and resistance to change.
Researchers reduce bias in AI models while preserving or improving accuracy
A new technique identifies and removes the training examples that contribute most to a machine-learning model’s failures.
Study: Some language reward models exhibit political bias
Research from the MIT Center for Constructive Communication finds this effect occurs even when reward models are trained on factual data.
OpenAI’s Sora: Everything You Need to Know
ChatGPT Plus and Pro users now have access to Sora Turbo, intended to be faster and safer than the version shown in February.
China Investigates NVIDIA for Allegedly Breaking Monopoly Law
China conditionally approved NVIDIA’s acquisition of Mellanox in 2020, but the investigation announcement suggests the AI chipmaker may not be meeting the conditions.
Enabling AI to explain its predictions in plain language
Using LLMs to convert machine-learning explanations into readable narratives could help users make better decisions about when to trust a model.
Daniela Rus wins John Scott Award
MIT CSAIL director and EECS professor named a co-recipient of the honor for her robotics research, which has expanded our understanding of what a robot can be.
Scientists create AI that 'watches' videos by mimicking the brain
Imagine an artificial intelligence (AI) model that can watch and understand moving images with the subtlety of a human brain. Now, scientists have made this a reality by creating MovieNet: an innovative AI that processes videos much like how our brains interpret real-life scenes as they unfold over time.
IBM’s Co-Packaged Optics Prototype Packs More Bandwidth Into a Single Connector
Polymer optical waveguides in co-packaged optics could speed up AI training.
Black-box forgetting: A new method for tailoring large AI models
Pretrained large-scale AI models need to 'forget' specific information for privacy and computational efficiency, but no methods exist for doing so in black-box vision-language models, where internal details are inaccessible. Now, researchers addressed this issue through a strategy based on latent context sharing, successfully getting an image classifier to forget multiple classes it was trained on. Their findings could expand the use cases of large-scale AI models while safeguarding end users' privacy.
Readers trust news less when AI is involved, even when they don't understand to what extent
Researchers have published two studies in which they surveyed readers on their thoughts about AI in journalism. When shown sample bylines stating that AI was involved in producing the news in some way, or not at all, readers consistently rated the news as less credible if AI had played a role. Even when they didn't understand exactly what AI contributed, they reported less trust and said that 'humanness' was an important factor in producing reliable news.
Citation tool offers a new approach to trustworthy AI-generated content
Researchers develop “ContextCite,” an innovative method to track AI’s source attribution and detect potential misinformation.
If you can make this AI bot fall in love, you could win thousands of dollars
If you can be the first person to get an AI bot named Freysa to say ‘I love you,’ you’ll win anywhere from $3,000 to tens of thousands of dollars. © 2024 TechCrunch. All rights reserved. For personal use only.
Employee Data Access Behaviors Putting Australian Employers At Risk
New CyberArk research finds Australian employees choosing convenience over cyber security policies.
Dell: Chief AI Officers Are Emerging as Lynchpin in AI Success
Dell urges APAC enterprises to appoint Chief AI Officers and adopt top-down AI strategies, predicting significant ROI in 2025.
Google Photos launches a ‘2024 Recap’ for a look back at this year’s memories
Spotify Wrapped isn’t the only service offering a year-end recap these days. In addition to the year-end reviews from other streamers and social apps, Google Photos is among the apps providing users with a look back at key moments throughout the past year. “2024 Recap,” as the feature is called, introduces a collection of memories, […]