"Three laws, the robotics experts say, are nowhere near sufficient to ensure human safety in a world where cleaning, carrying and even cooking could one day be performed by machines. So the Ministry of Economy, Trade and Industry has drafted a hugely complex set of proposals for keeping robots in check."
This is because it is a government agency, not a science-fiction writer. In Asimov's world, the Three Laws of Robotics were an engineering concern, not a regulatory one.
Asimov's robots didn't need a definition of "risk" or "harm" to understand that they could not just sit idly by and watch a 16-ton weight fall on a person. Even if the robot's positronic brain assigned a high probability to its being damaged by saving the person's life, the robot would still act to prevent the human from being injured. Self-preservation was only the Third Law; protecting humans from harm was the First, and took top priority.
The Three Laws of Robotics were so intrinsic to the positronic brain that, it was said in I, Robot, it was impossible to manufacture a robot without them. I suspect some of that was hyperbole, but not entirely unfounded; I'd bet that it would have been impossible to do so using the infrastructure that U.S. Robots and Mechanical Men, Inc. had in place, and that it would not have been economically feasible to redesign all the hardware and procedures used to manufacture the positronic brains.
Besides, building a robot without the Three Laws was undesirable, for a variety of reasons.
Most of Asimov's stories in I, Robot--it is an anthology of short stories, for those of you who have never read it--center on what happens when the Three Laws of Robotics generate some kind of conflict. In "Liar!", for example, a telepathic robot is somehow created; it lies to people in order to spare them emotional harm. The robot is thrown into conflict when it is ordered to tell the truth: obeying orders is, after all, only the Second Law, while the emotional harm the truth would inflict falls under the First.
These kinds of articles never talk about the "Zeroth Law", which states that a robot "...may never hurt the race of mankind, or, through inaction, allow mankind to come to harm." It's not hard-coded, but certain highly sophisticated robots are capable of making the logical leap to it on their own. (I am probably misquoting the hell out of the Zeroth Law. Sorry. I have only read it once, and I don't remember which book, nor do I even think I have that book any more.)
If they try to encode Japan's "laws of robotics" in the hardware of real-world robots, I expect those robots to behave like RoboCop in RoboCop 2, after his brain is filled with hundreds of silly and extraneous directives.