
ELLIS launches $220 million initiative to keep AI talent in Europe

The European Laboratory for Learning and Intelligent Systems (ELLIS) today announced the selection of 17 cities in 10 European countries and Israel where it’s establishing research units that it hopes will grow into AI research institutes focused on societal impact. Each selected site — in places like Amsterdam, Copenhagen, Helsinki, Tel Aviv, and Zurich — will begin […]

Read More

MIT and IBM’s ObjectNet shows that AI struggles at object detection in the real world

Object recognition models have improved by leaps and bounds over the past decade, but they’ve got a long way to go where accuracy is concerned. That’s the conclusion of a joint team from the Massachusetts Institute of Technology and IBM, which recently released a data set — ObjectNet — designed to illustrate the performance gap […]

Read More
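
A rough way to quantify the gap the teaser describes is to score an ImageNet-trained classifier on ObjectNet’s images without any fine-tuning. Below is a minimal evaluation sketch; the folder layout, the label mapping, and the choice of ResNet-50 are illustrative assumptions, not details from the MIT/IBM release.

```python
# Hedged sketch: score an ImageNet-pretrained classifier on locally stored
# ObjectNet-style images. Paths, folder layout, and the label mapping are
# hypothetical placeholders.
import torch
from torchvision import models, transforms
from PIL import Image
from pathlib import Path

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
model = models.resnet50(pretrained=True).eval()

# Assumed mapping from ImageNet class index to ObjectNet folder name; the real
# data set ships its own mapping, which would be loaded here instead.
imagenet_to_label = {409: "alarm_clock", 530: "alarm_clock"}

correct = total = 0
for path in Path("objectnet_images").glob("*/*.png"):   # assumed layout: <class_name>/<image>.png
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        pred = model(img).argmax(dim=1).item()
    correct += int(imagenet_to_label.get(pred) == path.parent.name)
    total += 1

print(f"Top-1 accuracy: {correct / max(total, 1):.1%}")
```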

Eclipse Foundation forms working group for open source edge computing tech

The Eclipse Foundation today announced the formation of the Edge Native Working Group, which will drive the adoption of open source software for edge computing. Founding members include Adlink, Bosch, Edgeworx, Eurotech, Kynetics, Huawei, Intel, and Siemens. The working group plans to focus on creating an end-to-end software stack that will support deployments of today’s […]

Read More

Intel previews AI advances in software testing, sequence models, and explainability

This week marks the kickoff of Neural Information Processing Systems (NeurIPS), one of the largest AI and machine learning conferences globally. NeurIPS 2017 and NeurIPS 2018 received 3,240 and 4,854 research paper submissions, respectively, and this year’s event — which takes place from December 8 to December 14 in Vancouver — is on track to […]

Read More

Intel Labs unveils cryogenic control chip for quantum computing

Intel Labs unveiled a first-of-its-kind cryogenic control chip — code-named Horse Ridge — that will speed up the development of quantum computing systems. Horse Ridge will enable control of multiple quantum bits (qubits) and set a clear path toward scaling larger systems — a major milestone on the path to quantum practicality. The challenge of […]

Read More

PerceptiLabs’ drag-and-drop interface makes ML modeling easier and faster

One of machine learning’s promises is to help humans do things faster and more efficiently. Ironically, one of the roadblocks that keeps businesses and independent developers from capitalizing on ML’s capabilities is that it can be time-consuming and difficult to build, train, and deploy models. PerceptiLabs, a two-person Swedish startup, developed a visual drag-and-drop interface […]

Read More

Uber’s PPLM language model can change the topic and sentiment of AI-generated text

Generative AI language models like OpenAI’s GPT-2 produce impressively coherent and grammatical text, but controlling the attributes of this text — such as the topic or sentiment — requires architecture modification or tailoring to specific data. That’s why a team of scientists at Uber, Caltech, and the Hong Kong University of Science and Technology devised […]

Read More
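
To make the idea of controlling attributes without retraining concrete, here is a toy gradient-steering sketch in the spirit of PPLM rather than Uber’s actual implementation: GPT-2’s final hidden state is nudged along the gradient of a bag-of-words topic loss before each next token is chosen. The topic words, step size, and greedy decoding are illustrative assumptions, and the snippet assumes a recent Hugging Face transformers release.

```python
# Toy steering sketch (not the PPLM codebase): bias GPT-2 toward a topic by
# taking a gradient step on its last hidden state before picking each token.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

topic_words = ["science", "research", "experiment"]            # assumed bag-of-words attribute
topic_ids = [tokenizer.encode(" " + w)[0] for w in topic_words]

text = tokenizer.encode("The issue is", return_tensors="pt")
for _ in range(20):
    hidden = model(text, output_hidden_states=True).hidden_states[-1][:, -1, :]
    hidden = hidden.detach().requires_grad_(True)
    logits = model.lm_head(hidden)                             # next-token logits from the last hidden state
    loss = -torch.log_softmax(logits, dim=-1)[0, topic_ids].sum()
    loss.backward()
    steered = hidden - 0.02 * hidden.grad                      # small step toward the topic words
    next_id = model.lm_head(steered).argmax(dim=-1, keepdim=True)
    text = torch.cat([text, next_id], dim=1)

print(tokenizer.decode(text[0]))
```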

Mozilla updates DeepSpeech with an English language model that runs ‘faster than real time’

DeepSpeech, the open source speech-to-text engine maintained by Mozilla’s Machine Learning Group, this morning received an update (to version 0.6) that incorporates one of the fastest open source speech recognition models to date. In a blog post, senior research engineer Reuben Morais lays out what’s new and enhanced, as well as other spotlight […]

Read More
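
For readers who haven’t used it, transcription with the deepspeech Python bindings generally follows the shape below. This is a hedged sketch: file names are placeholders, the audio must be 16 kHz 16-bit mono, and constructor arguments have shifted between releases, so the 0.6 documentation is the authority on the exact signature.

```python
# Hedged sketch of offline transcription with the deepspeech Python package.
# Model and audio file names are placeholders; check the 0.6 docs for the
# exact Model() arguments, which differ across releases.
import wave
import numpy as np
from deepspeech import Model

ds = Model("output_graph.pbmm", 500)          # acoustic model path + beam width (0.6-era style)

with wave.open("audio_16k_mono.wav", "rb") as wav:   # expects 16 kHz, 16-bit mono PCM
    audio = np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)

print(ds.stt(audio))                          # returns the transcript as a string
```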

Researchers develop AI that reads lips from video footage

AI and machine learning algorithms capable of reading lips from videos aren’t anything out of the ordinary, in truth. Back in 2016, researchers from Google and the University of Oxford detailed a system that could annotate video footage with 46.8% accuracy, outperforming a professional human lip-reader’s 12.4% accuracy. But even state-of-the-art systems struggle to overcome […]

Read More

AI capsule system classifies digits with state-of-the-art accuracy

There’s strong evidence that humans rely on coordinate frames, or reference lines and curves, to suss out the position of points in space. That’s unlike widely used computer vision algorithms, which differentiate among objects by numerical representations of their characteristics. In pursuit of a more human-like approach, researchers at Google, Alphabet subsidiary DeepMind, and the University […]

Read More

AWS launches major SageMaker upgrades for machine learning model training and testing

Amazon today announced half a dozen new features and tools for AWS SageMaker, its toolkit for training and deploying machine learning models, aimed at helping developers better manage projects, experiments, and model accuracy. AWS SageMaker Studio is a model training and workflow management tool that collects all the code, notebooks, and project folders for machine learning […]

Read More
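
The new Studio, Experiments, and Debugger tooling sits on top of the same basic workflow developers already use to launch SageMaker training jobs from the Python SDK. A hedged sketch of that baseline workflow is below; the IAM role, S3 paths, and script name are placeholders, and parameter names differ between SDK versions (for example, train_instance_type versus instance_type).

```python
# Hedged sketch of launching a SageMaker training job with the Python SDK
# (v1-era parameter names). Role ARN, S3 paths, and train.py are placeholders.
from sagemaker.pytorch import PyTorch

estimator = PyTorch(
    entry_point="train.py",                                   # hypothetical training script
    role="arn:aws:iam::123456789012:role/SageMakerRole",      # placeholder IAM role
    framework_version="1.3.1",
    py_version="py3",
    train_instance_count=1,
    train_instance_type="ml.p3.2xlarge",
)
estimator.fit({"training": "s3://my-bucket/training-data"})   # placeholder channel + S3 prefix
```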

Amazon researchers use NLP data set to improve Alexa’s answers

Improving the quality of voice assistants’ responses to questions is of interest to tech giants like Google, Apple, and Microsoft, who seek to address shortfalls in their respective natural language processing (NLP) technologies. They have plenty of motivation — more than 50% of U.S. smart speaker owners say they ask questions of their […]

Read More

Facebook details the AI technology behind Instagram Explore

According to Facebook, over half of Instagram’s roughly 1 billion users visit Instagram Explore to discover videos, photos, livestreams, and Stories each month. Predictably, building the underlying recommendation engine — which curates the billions of pieces of content uploaded to Instagram — posed an engineering challenge, not least because it works in real time. In […]

Read More

AI Weekly: Multimodal learning is in right now — here’s why that’s a good thing

Data sets are fundamental building blocks of AI systems, and this paradigm isn’t likely to ever change. Without a corpus to draw on, of the kind human beings rely on daily, models can’t learn the relationships that inform their predictions. But why stop at a single corpus? An intriguing report by ABI Research anticipates that while the […]

Read More
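
The simplest way to picture multimodal learning is late fusion: encode each modality separately, then combine the embeddings for a shared prediction. The sketch below is purely illustrative, with made-up feature dimensions standing in for, say, image and text encoders; it is not drawn from the ABI Research report.

```python
# Minimal late-fusion sketch: project per-modality features into a shared
# space, concatenate, and classify. All dimensions are illustrative.
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    def __init__(self, image_dim=2048, text_dim=768, hidden=256, n_classes=10):
        super().__init__()
        self.image_proj = nn.Linear(image_dim, hidden)
        self.text_proj = nn.Linear(text_dim, hidden)
        self.head = nn.Sequential(nn.ReLU(), nn.Linear(2 * hidden, n_classes))

    def forward(self, image_feats, text_feats):
        fused = torch.cat([self.image_proj(image_feats),
                           self.text_proj(text_feats)], dim=-1)
        return self.head(fused)

model = LateFusionClassifier()
logits = model(torch.randn(4, 2048), torch.randn(4, 768))   # a batch of 4 image/text pairs
print(logits.shape)                                         # torch.Size([4, 10])
```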

OpenAI releases Safety Gym for reinforcement learning

While much work in data science to date has focused on algorithmic scale and sophistication, safety — that is, safeguards against harm — is a domain no less worth pursuing. This is particularly true in applications like self-driving vehicles, where a machine learning system’s poor judgement might contribute to an accident. That’s why firms like […]

Read More
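
Safety Gym frames safety as a constrained reinforcement learning problem: alongside the usual reward, each step reports a cost for unsafe behavior. The sketch below shows what interacting with one of its environments typically looks like; the environment id and the 'cost' key follow the library’s documented naming pattern, but the release itself is the reference for exact names.

```python
# Hedged sketch of a random rollout in a Safety Gym environment, tracking the
# safety cost separately from the reward.
import gym
import safety_gym  # noqa: F401  (importing registers the Safexp-* environments)

env = gym.make("Safexp-PointGoal1-v0")
obs = env.reset()
total_reward = total_cost = 0.0
for _ in range(1000):
    obs, reward, done, info = env.step(env.action_space.sample())  # random policy
    total_reward += reward
    total_cost += info.get("cost", 0.0)   # constraint violations are reported here, not in the reward
    if done:
        obs = env.reset()

print(f"return={total_reward:.1f}  cost={total_cost:.1f}")
```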

How many programming languages are too many?

Presented by ThoughtWorks. Every organization faces a dilemma when it comes to support for programming languages. On the one hand, there’s an understandable desire to standardize — a set of approved languages can help rein in maintenance costs and ensure long-term viability. But equally, many of us recognize the benefits of using whichever language the developer […]

Read More

Xiaomi’s Xiao AI assistant passes 49.9 million users

Today marked the kickoff of Xiaomi’s annual Mi Developer conference in Beijing, and the tech giant wasted no time announcing updates across its AI portfolio. It took the wraps off of the latest release of Mobile AI Compute Engine (MACE), its open source machine learning framework, and it demoed an improved version of its Xiao […]

Read More

Cerebras Systems deploys the ‘world’s fastest AI computer’ at Argonne National Lab

Cerebras Systems is unveiling the CS-1, billed as the fastest artificial intelligence computer in the world and certainly one of the most daring attempts to create a better supercomputer. And it has gained acceptance from the U.S. federal government’s supercomputing program. The CS-1 has an entire wafer as its brain, rather than a chip. Normally, […]

Read More

Google details DeepMind AI’s role in Play Store app recommendations

AI and machine learning model architectures developed by Alphabet’s DeepMind have substantially improved the Google Play Store’s discovery systems, according to Google. In a blog post this morning, DeepMind detailed a collaboration to bolster the recommendation engine underpinning the Play Store, the app and game marketplace that’s actively used by over two billion Android users […]

Read More

Yamagata University uses IBM’s PAIRS Geoscope and Watson to uncover patterns in ancient etchings

Ever heard of the Nasca Lines? They’re lines literally etched into the sands of southern Peru, covering an area of nearly 1,000 square kilometers and depicting over 300 different figures, including animals and plants. The best evidence suggests that they’re pre-Columbian in origin, dating from between roughly 500 BC and 500 AD, and that they […]

Read More

Facebook’s MoCo closes the gap between supervised and unsupervised learning in vision tasks

Unsupervised representation learning — the set of techniques that allows an AI system to automatically discover the representations needed to classify raw data — is becoming widely used in natural language processing. But it hasn’t yet gained traction in computer vision, perhaps because the raw signals in tasks like image classification are in a continuous, […]

Read More
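
MoCo, short for momentum contrast, trains a query encoder with backpropagation while maintaining a key encoder as a slowly moving momentum average and a queue of past keys that serve as negatives. The function below is a condensed, simplified rendering of that training step in PyTorch, loosely following the paper’s published pseudocode; encoder sizes, the queue length, and the toy linear encoders in the usage lines are illustrative only.

```python
# Simplified momentum-contrast training step: momentum-update the key encoder,
# build an InfoNCE loss from one positive and a queue of negatives, then
# refresh the queue with the new keys.
import torch
import torch.nn.functional as F

def moco_step(f_q, f_k, queue, x_q, x_k, m=0.999, t=0.07):
    """x_q and x_k are two augmentations of the same batch; queue is K x dim."""
    q = F.normalize(f_q(x_q), dim=1)                      # queries: gradients flow here
    with torch.no_grad():
        for p_k, p_q in zip(f_k.parameters(), f_q.parameters()):
            p_k.data = m * p_k.data + (1 - m) * p_q.data  # momentum update of the key encoder
        k = F.normalize(f_k(x_k), dim=1)                  # keys: no gradient

    l_pos = (q * k).sum(dim=1, keepdim=True)              # N x 1 positive logits
    l_neg = q @ queue.T                                   # N x K negative logits
    logits = torch.cat([l_pos, l_neg], dim=1) / t
    labels = torch.zeros(q.size(0), dtype=torch.long)     # the positive is always index 0
    loss = F.cross_entropy(logits, labels)

    queue = torch.cat([k, queue])[: queue.size(0)]        # enqueue new keys, drop the oldest
    return loss, queue

# Toy usage with linear "encoders" and a random queue:
f_q, f_k = torch.nn.Linear(32, 16), torch.nn.Linear(32, 16)
f_k.load_state_dict(f_q.state_dict())
queue = F.normalize(torch.randn(4096, 16), dim=1)
loss, queue = moco_step(f_q, f_k, queue, torch.randn(8, 32), torch.randn(8, 32))
print(loss.item())
```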

IBM’s Lambada AI generates training data for text classifiers

What’s a data scientist to do if they lack sufficient data to train a machine learning model? One potential avenue is synthetic data generation, which researchers at IBM Research advocate in a newly published preprint paper. They used a pretrained machine learning model to artificially synthesize new labeled data for text classification tasks. They claim that […]

Read More
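
The approach follows a generate-and-filter pattern: a language model fine-tuned on labeled examples (label, separator, text) proposes new sentences for each class, and a baseline classifier keeps only the ones it confidently assigns to that class. The sketch below captures that loop with hypothetical generate() and classify() helpers standing in for the fine-tuned models; it shows the general pattern, not IBM’s code.

```python
# Generate-and-filter augmentation sketch. generate() and classify() are
# hypothetical stand-ins for a label-conditioned language model and a weak
# baseline classifier.
def augment(label, n_candidates, generate, classify, threshold=0.9):
    kept = []
    for _ in range(n_candidates):
        text = generate(f"{label} [SEP]")        # sample a synthetic example for this class
        predicted, confidence = classify(text)   # score it with the baseline classifier
        if predicted == label and confidence >= threshold:
            kept.append((text, label))           # keep only confidently on-label samples
    return kept

# Illustrative stand-ins so the sketch runs end to end:
synthetic = augment(
    "complaint",
    n_candidates=5,
    generate=lambda prompt: "the product arrived broken",
    classify=lambda text: ("complaint", 0.95),
)
print(synthetic)
```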

Facebook research suggests pretrained AI models can be easily adapted to new languages

Multilingual masked language modeling involves training an AI model on text from several languages, and it’s a technique that’s been used to great effect. In March, a team introduced an architecture that can jointly learn sentence representations for 93 languages belonging to more than 30 different families. But most previous work in language modeling has […]

Read More
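
Multilingual masked language modeling is easy to poke at with publicly available checkpoints: the same model fills in masked tokens across the languages it saw during pretraining. The sketch below uses a multilingual BERT checkpoint purely as a stand-in, since the models in the research aren’t specified here; it assumes the Hugging Face transformers library.

```python
# Hedged sketch: one multilingual masked language model filling masks in two
# languages. The checkpoint is a publicly available stand-in, not necessarily
# a model discussed in the research.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-multilingual-cased")

for sentence in ["Paris is the capital of [MASK].",
                 "Paris est la capitale de la [MASK]."]:
    top = fill(sentence)[0]                     # highest-scoring completion
    print(sentence, "->", top["token_str"])
```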