AI News

www.wired.com

YouTube Shorts Challenges TikTok With Music-Making AI for Creators

YouTube creators will get to test a new AI tool that generates and remixes music in the style of several famous musicians, including Sia, Demi Lovato, and T-Pain.

Read more
news.mit.edu

Technique enables AI on edge devices to keep learning over time

With the PockEngine training method, machine-learning models can efficiently and continuously learn from user data on edge devices like smartphones.

Read more
www.bbc.co.uk

How factories are deploying AI on production lines

Manufacturers are exploring how AI can predict machine faults before they happen.

Read more
www.techrepublic.com

Microsoft Ignite: New Solutions Offer More Security and Productivity from Windows in the Cloud

Cloud PCs give you access to Windows AI tools on any device, and Windows 365 now has AI-powered tools to help IT give users the right cloud PC for their needs.

Read more
www.techrepublic.com

Supercomputing ’23: NVIDIA High-Performance Chips Power AI Workloads

NVIDIA’s AI Enterprise software shown at Supercomputing ’23 connects accelerated computing to large language model use cases.

Read more
www.techrepublic.com

Australia Needs to Prepare to Reap the Benefits of Artificial Intelligence

Australia’s international standing in artificial intelligence research is one indicator of strong local expertise and potential in AI, according to the CSIRO, but community buy-in could hold the country back.

Read more
www.techrepublic.com

Behind the Controversy: Why Artists Hate AI Art

AI-generated art has sparked a heated debate regarding the originality, authorship and copyright of AI artworks. Read on to learn more about AI art legal issues and how to avoid being caught in the crossfire.

Read more
www.sciencedaily.com

New deep learning AI tool helps ecologists monitor rare birds through their songs

Researchers have developed a new deep learning AI tool that generates life-like birdsongs to train bird identification tools, helping ecologists to monitor rare species in the wild.

Read more
techcrunch.com

Google Photos turns to AI to organize and categorize your photos for you

Google Photos is rolling out a set of new features today that will leverage AI technologies to better organize and categorize photos for you. With the addition of something called Photo Stacks, Google will use AI to identify the “best” photo from a group of photos taken together and select it as the top pick […] © 2023 TechCrunch. All rights reserved. For personal use only.

Read more
www.techrepublic.com

Microsoft Copilot Announced for Azure

At Microsoft Ignite, Microsoft introduced an AI assistant for Azure troubleshooting and more. Azure now hosts NVIDIA generative AI foundation models.

Read more
news.mit.edu

This 3D printer can watch itself fabricate objects

Computer vision enables contact-free 3D printing, letting engineers print with high-performance materials they couldn’t use before.

Read more
www.wired.com

Underage Workers Are Training AI

Companies that provide Big Tech with AI data-labeling services are inadvertently hiring young teens to work on their platforms, often exposing them to traumatic content.

Read more
www.wired.com

Social Media Sleuths, Armed With AI, Are Identifying Dead Bodies

Poverty, fentanyl, and lack of public funding mean morgues are overloaded with unidentified bodies. TikTok and Facebook pages are filling the gap—with AI proving a powerful and controversial new tool.

Read more
www.techrepublic.com

Get a Lifetime of Amazing Content Generation for Only $20

You don't need to blow a hole in your budget to get a great AI-powered content generator. Now at $19.97 through 11/16.

Read more
www.techrepublic.com

Red Hat: UK Leads Europe in IT Automation, But Key Challenges Persist

The U.K.'s position as a financial services hub puts it ahead in enterprise-wide IT automation, says Red Hat. But skills shortages remain an issue for all IT leaders surveyed.

Read more
www.sciencedaily.com

New water treatment method can generate green energy

Researchers have designed micromotors that move around on their own to purify wastewater. The process creates ammonia, which can serve as a green energy source. Now, an AI method will be used to tune the motors to achieve the best possible results.

Read more
techcrunch.com

Andreessen Horowitz backs Civitai, a generative AI content marketplace with millions of users

AI image generator Stable Diffusion already has a lot of fans, and now those experimenting with the new AI technology to develop their own models have a place to share their work with other enthusiasts. A startup called Civitai — a play on the word Civitas, meaning community — has created a platform where members […]

Read more
www.wired.com

Google DeepMind’s AI Weather Forecaster Handily Beats a Global Standard

Machine learning algorithms that digested decades of weather data were able to forecast 90 percent of atmospheric measures more accurately than Europe’s top weather center.

Read more
techcrunch.com

YouTube adapts its policies for the coming surge of AI videos

YouTube today announced how it will approach handling AI-created content on its platform with a range of new policies surrounding responsible disclosure as well as new tools for requesting the removal of deepfakes, among other things. The company says that, although it already has policies that prohibit manipulated media, AI necessitated the creation of new […]

Read more
Ghostbuster: Detecting Text Ghostwritten by Large Language Models

[Figure: The structure of Ghostbuster, our new state-of-the-art method for detecting AI-generated text.]

Large language models like ChatGPT write impressively well; so well, in fact, that they’ve become a problem. Students have begun using these models to ghostwrite assignments, leading some schools to ban ChatGPT. These models are also prone to producing text with factual errors, so wary readers may want to know whether generative AI tools have been used to ghostwrite news articles or other sources before trusting them.

What can teachers and consumers do? Existing tools to detect AI-generated text sometimes perform poorly on data that differs from what they were trained on. In addition, if these models falsely classify real human writing as AI-generated, they can jeopardize students whose genuine work is called into question.

Our recent paper introduces Ghostbuster, a state-of-the-art method for detecting AI-generated text. Ghostbuster works by finding the probability of generating each token in a document under several weaker language models, then combining functions of these probabilities as input to a final classifier. Ghostbuster doesn’t need to know which model was used to generate a document, nor the probability of generating the document under that specific model. This property makes Ghostbuster particularly useful for detecting text potentially generated by an unknown or black-box model, such as the popular commercial models ChatGPT and Claude, for which probabilities aren’t available. We’re particularly interested in ensuring that Ghostbuster generalizes well, so we evaluated it across a range of ways that text could be generated, including different domains (using newly collected datasets of essays, news, and stories), language models, and prompts.

[Figure: Examples of human-authored and AI-generated text from our datasets.]

Why this Approach?
Many current AI-generated text detection systems are brittle when classifying different types of text (e.g., different writing styles, or different text generation models or prompts). Simpler models that use perplexity alone typically can’t capture more complex features and do especially poorly on new writing domains; in fact, we found that a perplexity-only baseline was worse than random on some domains, including non-native English speaker data. Meanwhile, classifiers based on large language models like RoBERTa easily capture complex features but overfit to the training data and generalize poorly: we found that a RoBERTa baseline had catastrophic worst-case generalization performance, sometimes even worse than the perplexity-only baseline. Zero-shot methods, which classify text without training on labeled data by calculating the probability that the text was generated by a specific model, also tend to do poorly when a different model was actually used to generate the text.

How Ghostbuster Works

Ghostbuster uses a three-stage training process: computing probabilities, selecting features, and training a classifier.

1. Computing probabilities: We converted each document into a series of vectors by computing the probability of generating each word in the document under a series of weaker language models (a unigram model, a trigram model, and two non-instruction-tuned GPT-3 models, ada and davinci).
2. Selecting features: We used a structured search procedure to select features, which works by (1) defining a set of vector and scalar operations that combine the probabilities, and (2) searching for useful combinations of these operations with forward feature selection, repeatedly adding the best remaining feature.
3. Classifier training: We trained a linear classifier on the best probability-based features and some additional manually selected features.
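The three stages described above can be illustrated end to end in miniature. The sketch below is a hedged toy reconstruction, not Ghostbuster's implementation: it stands in an add-one-smoothed unigram model for the paper's weaker language models (no GPT-3 calls), hand-rolls three scalar features over the per-token probabilities, scores feature subsets with a nearest-centroid classifier instead of the paper's linear classifier, and runs on synthetic "human" and "AI" documents. All names and data here are invented for illustration.

```python
import math
import random
from collections import Counter

def token_probs(tokens, counts, total, vocab):
    """Stage 1: per-token probability under an add-one-smoothed unigram model."""
    return [(counts[t] + 1) / (total + vocab) for t in tokens]

def doc_features(probs):
    """Stage 2 candidates: scalar functions of the per-token probabilities."""
    logs = [math.log(p) for p in probs]
    return [sum(logs) / len(logs),   # mean log-probability (perplexity-like)
            min(logs),               # most surprising token
            max(logs) - min(logs)]   # spread of token surprisal

def centroid_accuracy(X, y, idx):
    """Toy classifier score: nearest class centroid over the selected features."""
    sel = [[row[i] for i in idx] for row in X]
    cents = {}
    for c in (0, 1):
        rows = [v for v, lbl in zip(sel, y) if lbl == c]
        cents[c] = [sum(col) / len(col) for col in zip(*rows)]
    hits = 0
    for v, lbl in zip(sel, y):
        pred = min(cents, key=lambda c: sum((a - b) ** 2 for a, b in zip(v, cents[c])))
        hits += pred == lbl
    return hits / len(y)

def forward_select(X, y):
    """Stage 2: greedily add the single feature that most improves the score."""
    chosen, best, remaining = [], 0.0, set(range(len(X[0])))
    while remaining:
        score, feat = max((centroid_accuracy(X, y, chosen + [f]), f) for f in remaining)
        if score <= best:
            break
        chosen.append(feat)
        remaining.remove(feat)
        best = score
    return chosen, best

# Synthetic data: "human" docs draw only from the reference corpus's vocabulary,
# while "AI" docs mix in an out-of-vocabulary token, so the unigram probabilities
# carry a detectable signal.
corpus = "the cat sat on the mat the dog sat on the log".split()
counts, total = Counter(corpus), len(corpus)
vocab = len(counts) + 1
rng = random.Random(0)
docs, labels = [], []
for _ in range(20):
    docs.append(rng.choices(corpus, k=12)); labels.append(0)                 # human-like
    docs.append(rng.choices(corpus + ["zzz"] * 6, k=12)); labels.append(1)   # AI-like

X = [doc_features(token_probs(d, counts, total, vocab)) for d in docs]
chosen, train_acc = forward_select(X, labels)
```

The greedy loop mirrors the paper's forward feature selection: each round evaluates every remaining candidate jointly with the features already chosen and stops once no addition improves the score, which keeps the final classifier small and interpretable.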
Results

When trained and tested on the same domain, Ghostbuster achieved 99.0 F1 across all three datasets, outperforming GPTZero by 5.9 F1 and DetectGPT by 41.6 F1. Out of domain, Ghostbuster achieved 97.0 F1 averaged across all conditions, outperforming DetectGPT by 39.6 F1 and GPTZero by 7.5 F1. Our RoBERTa baseline achieved 98.1 F1 when evaluated in-domain on all datasets, but its generalization performance was inconsistent. Ghostbuster outperformed the RoBERTa baseline on all domains except out-of-domain creative writing, and had much better out-of-domain performance than RoBERTa on average (a 13.8 F1 margin).

[Figure: Results on Ghostbuster's in-domain and out-of-domain performance.]

To ensure that Ghostbuster is robust to the range of ways a user might prompt a model, such as requesting different writing styles or reading levels, we evaluated its robustness to several prompt variants. Ghostbuster outperformed all other tested approaches on these prompt variants with 99.5 F1. To test generalization across models, we evaluated performance on text generated by Claude, where Ghostbuster again outperformed all other tested approaches with 92.2 F1.

AI-generated text detectors have been fooled by lightly editing the generated text. We examined Ghostbuster’s robustness to edits such as swapping sentences or paragraphs, reordering characters, or replacing words with synonyms. Most changes at the sentence or paragraph level didn’t significantly affect performance, though performance decreased smoothly when the text was edited through repeated paraphrasing, commercial detection evaders such as Undetectable AI, or numerous word- or character-level changes. Performance was also best on longer documents.

Since AI-generated text detectors may misclassify non-native English speakers’ text as AI-generated, we evaluated Ghostbuster’s performance on non-native English speakers’ writing.
All tested models had over 95% accuracy on two of the three tested datasets, but did worse on the third set of shorter essays. However, document length may be the main factor here, since Ghostbuster does nearly as well on these documents (74.7 F1) as it does on other out-of-domain documents of similar length (75.6 to 93.1 F1).

Users who wish to apply Ghostbuster to real-world cases of potentially off-limits text generation (e.g., ChatGPT-written student essays) should note that errors are more likely for shorter text; domains far from those Ghostbuster was trained on (e.g., different varieties of English); text by non-native speakers of English; human-edited model generations; or text generated by prompting an AI model to modify a human-authored input. To avoid perpetuating algorithmic harms, we strongly discourage automatically penalizing alleged use of text generation without human supervision. Instead, we recommend cautious, human-in-the-loop use of Ghostbuster if classifying someone’s writing as AI-generated could harm them. Ghostbuster can also help with a variety of lower-risk applications, including filtering AI-generated text out of language model training data and checking whether online sources of information are AI-generated.

Conclusion

Ghostbuster is a state-of-the-art AI-generated text detection model, with 99.0 F1 performance across tested domains, representing substantial progress over existing models. It generalizes well to different domains, prompts, and models, and it is well suited to identifying text from black-box or unknown models because it doesn’t require access to probabilities from the specific model used to generate the document. Future directions for Ghostbuster include providing explanations for model decisions and improving robustness to attacks that specifically try to fool detectors. AI-generated text detection approaches can also be used alongside alternatives such as watermarking. We also hope that Ghostbuster can help across a variety of applications, such as filtering language model training data or flagging AI-generated content on the web.

Try Ghostbuster here: ghostbuster.app
Learn more about Ghostbuster here: [ paper ] [ code ]
Try guessing if text is AI-generated yourself here: ghostbuster.app/experiment

Read more