AI should be objective and truthful, regardless of anyone's personal beliefs.
OpenAI lacks transparency around its newer developments, so the public cannot contribute to making its models more objective and truthful.
The Internet is like a huge question mark: some things are true but cannot be proven by text alone, and certain information is treated as true simply because someone deems it so.
In fact, the model does not question information at all; it merely copies text and does a little paraphrasing.
ChatGPT relies on boring, repetitive, poor-quality stock responses. Its apologies feel fake, and no human would refuse a request that way.
This modified GPT-3.5 based some of its responses on unrealistic, obviously false premises that no human would hold, such as:
- Things can go to infinity: Ask the AI whether a point 10,000,000 km above a no-fly zone is inside it, and it says “Yes,” when the correct answer is “No.” Ask the same question with 1 light-year instead, and it responds “I don’t know; insufficient data,” when the answer is still the same “No.” That “I don’t know” can be pushed out to infinity.
- Everything outside my knowledge is wrong: ChatGPT tends to reject true information that falls outside its training data.
- Unofficial information is wrong: In some contexts, it rejects alternative sources of information outright.