Robots capable of moral reasoning are pursued both to reassure the public and to constrain autonomous weapons, but practical robot ethics remains unclear and difficult.
A robot may not injure a human being or, through inaction, allow a human being to come to harm, unless that human being did something to really annoy the human being who programmed it.