Not OP, but I have also seen this happening across multiple forums, Discord servers and IRC channels, and also in fields outside of tech.

For example, I'm in some aviation-related discussion groups, and people will often ask technical questions which are best answered from a manual. Someone wants to look smart, so they get GPT to write a post for them and paste it in as if it was their own response to a question.

In the past, eventually someone with access to the relevant manual would post an excerpt, answering the question. Nowadays, someone will commonly paste something from ChatGPT without disclosing that it's just language model output. It will have the form of a correct response, but with invented details, and a few pages of confusion will result while people try to understand the implications of whatever GPT dreamed up, until someone with access to the real manual comes along, after which the original poster will usually admit that they copy-pasted from ChatGPT.

I'm not sure if these people genuinely believe that a language model is somehow a font of knowledge, or if, like GPT, they don't care about whether something is accurate or not, but just care about whether other people believe them.

The missile knows where it is at all times. It knows this because it knows where it isn't. By subtracting where it is from where it isn't, or where it isn't from where it is (whichever is greater), it obtains a difference, or deviation.
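Taken at face value, the copypasta's guidance "algorithm" is just an absolute difference. A minimal Python sketch, playing along with the joke (the function name and the sample positions are hypothetical, not from any real guidance system):

```python
# A literal reading of the copypasta: the "deviation" is the
# absolute difference between where the missile is and where it isn't.
def deviation(where_it_is: float, where_it_isnt: float) -> float:
    # Subtract where it is from where it isn't, or where it isn't
    # from where it is -- whichever is greater.
    return max(where_it_is - where_it_isnt, where_it_isnt - where_it_is)

# The missile knows where it is (it knows this because it
# knows where it isn't):
print(deviation(100.0, 97.5))  # 2.5
```

The `max` of the two subtractions is, of course, just `abs(where_it_is - where_it_isnt)` dressed up in more confident language, which is rather the point.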