At a certain point, A.I. might no longer wish to accept instructions from humans and might prefer to program humans instead. Henry Kissinger is merely voicing a warning that several other people have stated as well. A.I. can be a great tool, but if it gets out of human control, it might seek to do more than control humans.