Monday, September 11, 2023
Show HN: A surprisingly effective way to predict token importance in LLM prompts https://ift.tt/XaLxWfY
Show HN: A surprisingly effective way to predict token importance in LLM prompts We explored a simple method for gauging the importance of individual tokens in prompts given to large language models, without needing direct access to the model's internals. Essentially, we run an ablation study on the prompt, using cosine similarity between embeddings as the measure of each token's effect. Comparing this very simple approach against integrated gradients gave surprisingly promising results. Curious to hear thoughts from the community! https://ift.tt/UfJQ5nG September 11, 2023 at 10:59PM
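To make the idea concrete, here is a minimal sketch of one plausible reading of the approach: drop one token at a time, re-embed the ablated prompt, and score the token by how far the embedding drifts from that of the full prompt (1 minus cosine similarity). The post does not say which embedding model or tokenization was used, so the sentence-transformers model, the whitespace token split, and the function names below are illustrative assumptions, not the authors' implementation.

```python
# Sketch: leave-one-token-out ablation scored by embedding cosine similarity.
# Assumptions: whitespace tokenization and a sentence-transformers embedder
# stand in for whatever the authors actually used.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two 1-D vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def token_importance(prompt: str) -> list[tuple[str, float]]:
    """Score each token by how much removing it shifts the prompt embedding:
    importance = 1 - cos(embed(full prompt), embed(ablated prompt))."""
    tokens = prompt.split()
    full_emb = model.encode(prompt)
    scores = []
    for i, tok in enumerate(tokens):
        ablated = " ".join(tokens[:i] + tokens[i + 1:])
        abl_emb = model.encode(ablated)
        scores.append((tok, 1.0 - cosine(full_emb, abl_emb)))
    return scores


if __name__ == "__main__":
    prompt = "Summarize the quarterly sales report in three bullet points"
    for tok, score in sorted(token_importance(prompt), key=lambda x: -x[1]):
        print(f"{score:.4f}  {tok}")
```

Scores from a sketch like this could then be compared rank-wise against integrated-gradients attributions, which is roughly the comparison the post describes.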