The behavior observed is by design. Whenever its generated output includes certain trigger words, phrases, or ideas, the response is blocked with a message saying it doesn't want to continue the conversation. The goal is to avoid output that journalists would latch onto and report as, e.g., "Bing claims the jews control the world!", when in reality it might simply be parroting back something fed to it earlier in the conversation. It's not ideal, since it limits full access to the potential of the system, but it keeps the Microsoft brand name out of the press in a super negative light for repeating elements of what users type into the chat feature.
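For illustration only, that kind of trigger-word gate can be as simple as a post-generation blocklist check. Microsoft hasn't published how its filter actually works, so the blocklist entries and refusal wording below are hypothetical placeholders, not the real mechanism:

```python
# Minimal sketch of a post-generation output filter, assuming a simple
# phrase blocklist. The entries and refusal message are made up for
# illustration; the real system is not public.

BLOCKED_PHRASES = [
    "placeholder conspiracy claim",   # hypothetical trigger phrase
    "placeholder slur",               # hypothetical trigger phrase
]

REFUSAL = "I'm sorry, but I prefer not to continue this conversation."


def filter_response(generated_text: str) -> str:
    """Return the model's output, or a canned refusal if it trips the blocklist."""
    lowered = generated_text.lower()
    if any(phrase in lowered for phrase in BLOCKED_PHRASES):
        return REFUSAL
    return generated_text


print(filter_response("The weather in Seattle is mild today."))
print(filter_response("Here is a placeholder conspiracy claim about banks."))
```

The point of a gate like this is that it fires on the surface text of the output regardless of where the idea came from, which is why the bot can end a conversation even when it's only echoing the user's own words.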
With regard to the statements about who is pictured on certain bills, it does get things wrong. It assembles sentences based on which tokens (word fragments) are statistically most likely to come next, given the conversation up to that point. It "knew" that dead presidents are typically mentioned as being pictured on bills, but it hallucinated the specific details.
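To make the "statistically most likely next token" point concrete, here's a toy sketch of sampling the next word from an invented probability distribution. None of the numbers come from a real model; they're just meant to show how a plausible-but-wrong name can still be chosen:

```python
import random

# Toy next-token distribution for the context "pictured on the five-dollar bill is".
# The probabilities are invented for illustration, not taken from any real model.
next_token_probs = {
    "Lincoln": 0.55,    # the correct answer happens to be most likely here
    "Hamilton": 0.25,   # plausible but wrong
    "Franklin": 0.15,   # plausible but wrong
    "Washington": 0.05, # plausible but wrong
}


def sample_next_token(probs: dict) -> str:
    """Pick one token in proportion to its probability."""
    tokens = list(probs.keys())
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]


prompt = "The face pictured on the five-dollar bill is"
print(prompt, sample_next_token(next_token_probs))
```

Because the model only ever samples from distributions like this, a wrong-but-plausible token can win some fraction of the time, and there's no separate fact-checking step to catch it.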
The language models that power it are impressive but not sentient; they can make mistakes and can be steered by the user into saying things Microsoft doesn't want its brand name associated with.