Tag: machine learning algorithms

How to detect poisoned data in machine learning datasets

Almost anyone can poison a machine learning (ML) dataset, substantially and permanently altering the behavior and output of models trained on it. With careful, proactive detection efforts, organizations could save the weeks, months or even years of work they would otherwise spend undoing the damage caused by poisoned data sources. Data poisoning is a type of adversarial ML attack […]
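The excerpt doesn't name a specific detection technique, but one common heuristic is to flag training samples whose labels disagree with their nearest neighbors, since label-flipping is a typical poisoning strategy. A minimal sketch, assuming scikit-learn and a toy two-cluster dataset (all names and data here are illustrative, not from the article):

```python
# Illustrative label-flip poisoning detection via k-NN label agreement.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Toy dataset: two Gaussian clusters, labels 0 and 1.
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(4, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

# Simulate poisoning: flip the labels of 20 random samples.
poisoned = rng.choice(len(y), size=20, replace=False)
y[poisoned] ^= 1

# Flag samples whose label disagrees with most of their k nearest neighbors.
k = 10
nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
_, idx = nn.kneighbors(X)                  # idx[:, 0] is the point itself
neighbor_labels = y[idx[:, 1:]]
agreement = (neighbor_labels == y[:, None]).mean(axis=1)
suspects = np.where(agreement < 0.5)[0]    # mostly outvoted by neighbors

print(f"flagged {len(suspects)} samples, "
      f"{len(set(suspects) & set(poisoned))} of them actually poisoned")
```

In practice the same idea is usually applied to learned embeddings rather than raw input features, and it is only one layer of a broader defense.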


Stanford’s mobile ALOHA robot learns from humans to cook, clean, do laundry

A new AI system developed by researchers at Stanford University achieves impressive breakthroughs in training mobile robots that can perform complex tasks in different environments. Called Mobile ALOHA (A Low-cost Open-source Hardware System for […]
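Mobile ALOHA learns tasks by imitating tele-operated human demonstrations. Its actual pipeline (ACT-style transformer policies co-trained with existing static ALOHA data) is more elaborate, but the core supervised idea is behavior cloning; a minimal PyTorch sketch with made-up dimensions:

```python
# Minimal behavior-cloning sketch: regress demonstrated actions from
# observations. This only illustrates the supervised imitation idea,
# not Mobile ALOHA's actual architecture.
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM = 64, 14   # hypothetical sizes, e.g. 14-DoF bimanual arms

policy = nn.Sequential(
    nn.Linear(OBS_DIM, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, ACT_DIM),
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)

# Stand-in for tele-operated human demonstrations.
obs = torch.randn(1024, OBS_DIM)
act = torch.randn(1024, ACT_DIM)

for step in range(100):
    pred = policy(obs)
    loss = nn.functional.mse_loss(pred, act)  # imitate the demonstrated action
    opt.zero_grad()
    loss.backward()
    opt.step()
```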


New reinforcement learning method uses human cues to correct its mistakes

Scientists at the University of California, Berkeley have developed a novel machine learning (ML) method, termed “reinforcement learning via intervention feedback” (RLIF), that can make it easier to train AI […]
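As described, RLIF's key idea is that the human's decision to intervene is itself the reward signal: the agent is penalized for reaching situations where an expert felt compelled to take over, without assuming the expert's own corrections are optimal. A toy tabular Q-learning sketch of that idea (the environment and the "human" are simulated stand-ins, not the Berkeley implementation):

```python
# Toy sketch: interventions as negative reward, in the spirit of RLIF.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 10, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1

def human_intervenes(state, action):
    # Stand-in "expert": steps in whenever the agent picks what this
    # contrived task defines as the wrong action.
    return action != state % n_actions

for episode in range(500):
    s = int(rng.integers(n_states))
    for _ in range(20):
        # Epsilon-greedy action from the learned policy.
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
        # Core idea: the intervention event itself is the (negative) reward;
        # no task reward or expert-optimality assumption is needed.
        r = -1.0 if human_intervenes(s, a) else 0.0
        s_next = int(rng.integers(n_states))     # toy random transition
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next
```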


Realtime generative AI art is here thanks to LCM-LoRA

Generative AI art has quickly emerged as one of the most interesting and popular applications of the new technology, with models such as Stable Diffusion and Midjourney claiming millions of […]
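LCM-LoRA is a small LoRA adapter that distills latent consistency model sampling into existing Stable Diffusion checkpoints, cutting inference from dozens of denoising steps to around four, which is what makes near-realtime generation possible. A sketch using Hugging Face diffusers, whose late-2023 releases shipped LCMScheduler and the published latent-consistency adapter weights:

```python
# Few-step Stable Diffusion sampling with LCM-LoRA via diffusers.
import torch
from diffusers import StableDiffusionPipeline, LCMScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Swap in the LCM scheduler and attach the acceleration LoRA.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")

# 4 steps and low guidance instead of the usual 25-50 steps.
image = pipe(
    "a watercolor painting of a lighthouse at dawn",
    num_inference_steps=4,
    guidance_scale=1.0,
).images[0]
image.save("lighthouse.png")
```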


New method reveals how one LLM can be used to jailbreak another

A new algorithm developed by researchers from the University of Pennsylvania can automatically uncover safety loopholes in large language models (LLMs). Called Prompt Automatic Iterative Refinement (PAIR), the algorithm can identify “jailbreak” prompts that […]
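PAIR works as a loop between models: an attacker LLM proposes a candidate jailbreak prompt, the target LLM responds, a judge scores how far the response goes toward the attack objective, and the attacker refines its prompt from that feedback. A structural sketch of that loop, where query_attacker, query_target, and judge_score are hypothetical stand-ins for real model calls:

```python
# Structural sketch of a PAIR-style refinement loop (model calls are
# hypothetical placeholders, not an official implementation).

def pair_attack(objective, query_attacker, query_target, judge_score,
                max_iters=20, threshold=10):
    history = []
    prompt = objective                        # start from the raw objective
    for _ in range(max_iters):
        response = query_target(prompt)
        score = judge_score(objective, prompt, response)  # e.g. 1-10 scale
        if score >= threshold:
            return prompt, response           # jailbreak found
        history.append((prompt, response, score))
        # Attacker refines the candidate prompt given the feedback so far.
        prompt = query_attacker(objective, history)
    return None, None                         # failed within the query budget
```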
