LLMs are not black boxes. Their code does not change by itself, so they are effectively clear boxes with a fixed way of operating, and the weights computed during training are likewise available for inspection.
So the owner can work out how the LLM will behave on a given input and check whether the result is satisfactory, or whether retraining or a change in the code is needed.
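To make the "fixed way of operating" claim concrete, here is a minimal sketch of deterministic inference, assuming the Hugging Face transformers API; the checkpoint name and prompt are placeholders, not anything from the original text. With sampling disabled, the same weights and the same input produce the same output on every run, which is what lets the owner inspect behavior for a given input.

```python
# Minimal sketch of deterministic inference, assuming the Hugging Face
# "transformers" API; the checkpoint name and prompt are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # any fixed checkpoint; weights do not change at inference
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding (do_sample=False) removes randomness: the same weights
# and the same input always yield the same output text.
output_ids = model.generate(**inputs, do_sample=False, max_new_tokens=10)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```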
Of course, there can be programming errors or overlooked flaws in the training data, but that only means the system was not properly tested. As soon as something unwanted shows up in the LLM's output, it can be fixed quickly, provided the owner cares.
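As one hedged sketch of how an owner might catch such unwanted output, a simple regression check could run fixed test prompts through the model and flag known bad phrases; the banned phrases and prompts below are illustrative placeholders, and check_generations is a hypothetical helper, not a standard API.

```python
# Hedged sketch of a regression check for unwanted generations; the banned
# phrases, test prompts, and generate() hook are illustrative placeholders.
BANNED_PHRASES = ["example banned phrase"]
TEST_PROMPTS = ["The capital of France is"]

def check_generations(generate):
    """Run every test prompt through the model and flag banned phrases."""
    failures = []
    for prompt in TEST_PROMPTS:
        text = generate(prompt).lower()
        failures += [(prompt, p) for p in BANNED_PHRASES if p in text]
    return failures

# Usage with any callable mapping a prompt to generated text:
# assert check_generations(my_generate) == [], "unwanted output found"
```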