Of course, all reasonable steps should be in place to make sure that Alexa glitches like this don't happen.
https://arstechnica.com/gadgets/2021/12/alexa-tells-10-year-old-to-try-a-shocking-tiktok-challenge/
Over the years there have been various concerns and issues.
https://www.bbc.com/news/technology-59810383
People expect a certain degree of safety with such A.I. devices.
https://www.cnbc.com/2021/12/29/amazons-alexa-told-a-child-to-do-a-potentially-lethal-challenge.html
https://developer.amazon.com/en-US/alexa
https://support.alexa.com/hc/en-us/articles/4410503838999
https://bernardmarr.com/what-is-the-difference-between-ai-and-robotics
https://blog.robotiq.com/whats-the-difference-between-robotics-and-artificial-intelligence
Isaac Asimov devised the Three Laws of Robotics in the early 1940s. Although not officially adhered to, the general concept remains.
https://en.wikipedia.org/wiki/Three_Laws_of_Robotics#Alterations
The problem is that in today's world, AI and robotics can be used to hunt, harm and kill people, albeit under limited circumstances. What happens if, at some point, an AI deduces that if some people decide it's OK to use AI & Robotics to kill humans, why can't AI kill all humans?
Perhaps some might even set others up to take the blame.
https://en.wikipedia.org/wiki/I,_Robot_(2004_film)
A.I. & Robotics can be great tools & devices for humanity. However, if people allow themselves to be totally taken over by A.I. control & management systems, they shouldn't expect the result to be humane.
https://bernardmarr.com/future-developments-of-artificial-intelligence