AI News

news.mit.edu

Helping companies deploy AI models more responsibly

MIT spinout Verta offers tools to help companies introduce, monitor, and manage machine-learning models safely and at scale.

news.mit.edu

3 Questions: Leo Anthony Celi on ChatGPT and medicine

The chatbot’s success on the medical licensing exam shows that the test — and medical education — are flawed, Celi says.

news.mit.edu

Solving a machine-learning mystery

A new study shows how large language models like GPT-3 can learn a new task from just a few examples, without the need for any new training data.

news.mit.edu

Automating the math for decision-making under uncertainty

A new tool brings the benefits of AI programming to a much broader class of problems.

news.mit.edu

MIT Solve announces 2023 global challenges and Indigenous Communities Fellowship

More than $1 million in funding available to selected Solver teams and fellows.

news.mit.edu

Putting clear bounds on uncertainty

Computer scientists want to know the exact limits of our ability to clean up and reconstruct partly blurred images.

news.mit.edu

MIT researchers develop an AI model that can detect future lung cancer risk

Deep-learning model takes a personalized approach to assessing each patient’s risk of lung cancer based on CT scans.

news.mit.edu

Gaining real-world industry experience through Break Through Tech AI at MIT

A new experiential learning opportunity challenges undergraduates across the Greater Boston area to apply their AI skills to a range of industry projects.

news.mit.edu

2022-23 Takeda Fellows: Leveraging AI to positively impact human health

New fellows are working on health records, robot control, pandemic preparedness, brain injuries, and more.

news.mit.edu

Engineering in harmony

AeroAstro major and accomplished tuba player Frederick Ajisafe relishes the community he has found in the MIT Wind Ensemble.

openai.com

Forecasting potential misuses of language models for disinformation campaigns and how to reduce risk

OpenAI researchers collaborated with Georgetown University’s Center for Security and Emerging Technology and the Stanford Internet Observatory to investigate how large language models might be misused for disinformation purposes. The collaboration included an October 2021 workshop bringing together 30 disinformation researchers, machine learning experts, and policy analysts, and culminated in a co-authored report building on more than a year of research. This report outlines the threats that language models pose to the information environment if used to augment disinformation campaigns and introduces a framework for analyzing potential mitigations.

news.mit.edu

Program teaches US Air Force personnel the fundamentals of AI

MIT researchers developed and studied a customized AI training program for users with varied backgrounds, which could be delivered across large organizations.

news.mit.edu

Unpacking the “black box” to build better AI models

Stefanie Jegelka seeks to understand how machine-learning models behave, to help researchers build more robust models for applications in biology, computer vision, optimization, and more.

news.mit.edu

Simulating discrimination in virtual reality

The role-playing game “On the Plane” simulates xenophobia to foster greater understanding and reflection via virtual experiences.

news.mit.edu

Strengthening electron-triggered light emission

A new method can produce a hundredfold increase in light emissions from a type of electron-photon coupling, which is key to electron microscopes and other technologies.

news.mit.edu

Cognitive scientists develop new model explaining difficulty in language comprehension

Built on recent advances in machine learning, the model predicts how well individuals will produce and comprehend sentences.

news.mit.edu

Subtle biases in AI can influence emergency decisions

But the harm from a discriminatory AI system can be minimized if the advice it delivers is properly framed, an MIT team has shown.

openai.com

Point-E: A system for generating 3D point clouds from complex prompts

news.mit.edu

Machine learning and the arts: A creative continuum

CAST Visiting Artist Andreas Refsgaard engages the MIT community in the ethics and play of creative coding.

news.mit.edu

Meet the 2022-23 Accenture Fellows

This year's fellows will work across research areas including telemonitoring, human-computer interactions, operations research, AI-mediated socialization, and chemical transformations.
