There’s no difference between a land mine (it kills or maims whoever steps on it) and a lethal autonomous drone (it kills whomever it decides is a military target). Same ethical issues. More importantly, no NEW ethical issues.
“Current artificial intelligence is particularly brittle; it can be easily fooled or simply make mistakes. For example, changing a single pixel can convince an artificial intelligence that a stealth bomber is a dog. A complex, dynamic battlefield filled with smoke and debris makes correct target identification even harder, posing a risk to both civilians and friendly soldiers. Even if no one is harmed, errors may simply prevent the system from achieving the military objective.”
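The single-pixel brittleness mentioned above can be illustrated with a toy sketch. This is not a real vision model or an actual attack on one; it is a hypothetical linear classifier over a 4×4 “image”, with made-up weights, chosen only to show the underlying mechanism: when an input sits near a thin decision boundary, changing one pixel can push it across, which is the same failure mode reported in one-pixel attacks on real image classifiers.

```python
import numpy as np

# Hypothetical "learned" weights for a toy linear classifier
# (illustrative only -- not a real stealth-bomber/dog model).
weights = np.array([
    [ 0.9, -0.2,  0.1, -0.3],
    [-0.1,  0.4, -0.8,  0.2],
    [ 0.3, -0.5,  0.6, -0.4],
    [-0.7,  0.2, -0.1,  0.5],
])

def classify(img):
    # A crude linear score: positive means "bomber", negative means "dog".
    return "bomber" if float(np.sum(weights * img)) > 0 else "dog"

def find_one_pixel_flip(img, original):
    # Try setting each pixel to an extreme value; report the first
    # single-pixel change that flips the predicted class.
    for i in range(4):
        for j in range(4):
            for v in (-1.0, 1.0):
                perturbed = img.copy()
                perturbed[i, j] = v
                if classify(perturbed) != original:
                    return i, j, v, classify(perturbed)
    return None

img = np.full((4, 4), 0.05)   # an input sitting near the decision boundary
original = classify(img)
flip = find_one_pixel_flip(img, original)
print(f"'{original}' flipped to '{flip[3]}' by changing pixel ({flip[0]}, {flip[1]})")
```

Real attacks (e.g. black-box one-pixel attacks) search for such a pixel against deep networks rather than a linear model, but the moral is the same: the classifier’s confidence says nothing about how close the input is to a decision boundary.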