Intel judgment critical for other EU antitrust cases
Europe’s top court will rule on Wednesday whether US chipmaker Intel offered illegal rebates to squeeze out rivals in a judgment that could affect EU antitrust regulators’ cases against Qualcomm and Alphabet’s Google. The ruling by the Luxembourg-based Court of Justice of the European Union (ECJ) could also provide more clarity on whether rebates are anticompetitive by nature or whether enforcers need to prove the anticompetitive effect. The European Commission in a 2009 decision said that Intel tried to thwart rival Advanced Micro Devices by giving rebates to PC makers Dell, Hewlett Packard, NEC and Lenovo for buying most of their computer chips from the company.
Scientists at the Massachusetts Institute of Technology (MIT), including researchers of Indian origin, have developed a new system that allows robots to understand voice commands much like artificial intelligence (AI) assistants such as Siri and Alexa.
Currently, robots are very limited in what they can do.
Their inability to understand the nuances of human language makes them mostly useless for more complicated requests.
For example, if you put a specific tool in a toolbox and ask a robot to “pick it up,” it would be completely lost.
Picking it up requires the robot to see and identify objects, understand the command, recognise that the "it" in question is the tool you put down, recall the moment when you put the tool down, and distinguish that tool from others of similar shape and size.
Researchers at MIT have now taken a step toward making such requests possible.
They have developed an Alexa-like system called “ComText” — for “commands in context” — that allows robots to understand a wide range of commands that require contextual knowledge about objects and their environments.
“Where humans understand the world as a collection of objects and people and abstract concepts, machines view it as pixels, point-clouds, and 3D maps generated from sensors,” said Rohan Paul, one of the lead authors of the paper. “This semantic gap means that, for robots to understand what we want them to do, they need a much richer representation of what we do and say.”
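The contextual reasoning described above can be illustrated with a toy sketch. This is not ComText's actual implementation (which the article does not detail); it is a minimal, hypothetical Python example assuming a simple episodic memory, where the robot logs object observations over time and resolves the pronoun "it" to the most recently handled object. All names (`EpisodicMemory`, `handle_command`, etc.) are invented for illustration.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Observation:
    """One remembered event: an object seen at a place and time."""
    name: str
    location: str
    timestamp: float

@dataclass
class EpisodicMemory:
    """Toy episodic memory: a time-ordered log of object observations."""
    events: List[Observation] = field(default_factory=list)

    def record(self, name: str, location: str, timestamp: float) -> None:
        self.events.append(Observation(name, location, timestamp))

    def resolve_it(self) -> Optional[Observation]:
        """Resolve the pronoun 'it' to the most recently observed object."""
        return self.events[-1] if self.events else None

def handle_command(memory: EpisodicMemory, command: str) -> str:
    """Ground a command like 'pick it up' against remembered context."""
    if "it" in command.split():
        referent = memory.resolve_it()
        if referent is None:
            return "I don't know what 'it' refers to."
        return f"Picking up the {referent.name} from the {referent.location}."
    return "Command not understood."

# Example: the robot watches a tool being placed, then hears "pick it up".
memory = EpisodicMemory()
memory.record("screwdriver", "toolbox", timestamp=1.0)
print(handle_command(memory, "pick it up"))
```

The point of the sketch is the "semantic gap" from the quote: the raw command string contains no object identity, so the system must join language against a remembered world model to act.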
The team tested ComText on a two-armed humanoid robot Baxter. ComText can observe a range of visuals and