Thank you so much for your reply! I also find that the results reported in the paper are hard to reproduce with this project. I wonder whether the authors have not disclosed some of their tricks or techniques.
But my issue #161 concerns an easy task, and I still got very poor results. It doesn't feel like it should be this way.
Have you tried Time-LLM on simple tasks? How did it perform?
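On the question of simple tasks: one quick way to judge whether a forecaster is performing sensibly is to compare its MSE against a naive last-value baseline on a clean synthetic series. The sketch below is hypothetical illustration only (it is not from the Time-LLM repo, and the window sizes are arbitrary assumptions); any model whose error exceeds this trivial baseline on such data is clearly misbehaving.

```python
import math

def naive_forecast(history, horizon):
    # Simplest possible baseline: repeat the last observed value.
    return [history[-1]] * horizon

def mse(pred, true):
    # Mean squared error between two equal-length sequences.
    return sum((p - t) ** 2 for p, t in zip(pred, true)) / len(pred)

# Synthetic "easy" task: a noiseless sine wave.
series = [math.sin(0.1 * t) for t in range(200)]
history, future = series[:96], series[96:96 + 24]  # hypothetical 96-in / 24-out split

baseline_error = mse(naive_forecast(history, 24), future)
print(f"naive last-value MSE on a clean sine wave: {baseline_error:.4f}")
```

A trained model's test MSE on the same split can then be compared directly against `baseline_error`; if it is worse, the problem is with the setup or training, not the task difficulty.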
---- Replied Message, 11/27/2024 ----
Subject: Re: [KimMeen/Time-LLM] Predicting with Time-LLM using GPT2 has awful performance (Issue #161)
#109 is a related issue. It is observed there as well that the results are worse than those reported in the paper.
I followed the script in https://towardsdatascience.com/time-llm-reprogram-an-llm-for-time-series-forecasting-e2558087b8ac to predict with Time-LLM using GPT2.
My code is exactly the same as the script's, but I got terrible results.
My result:
script:
https://towardsdatascience.com/time-llm-reprogram-an-llm-for-time-series-forecasting-e2558087b8ac
code: