Discussion about this post

Abdulkabir:

At the end of the day, LLMs show similar behaviour: "garbage in, garbage out." For better performance, context is gold. However, I am only just discovering how providing too much information can unsettle the model.

Adeola:

This speaks volumes. I have been using LLMs as a magic wand, not as a power tool. Often, during a conversation with OpenAI's GPT, I would get annoyed and tell the tool that it had lost its memory. You broke down what actually happened and how people can learn from it and then put it into practice. Thanks for that. I will be revisiting this write-up often.

