Looking back, there were several issues from my inexperience:
Starting with a big model. It's better to start small rather than lumping everything in from the get-go, and first find out what works and what doesn't.
Optimizing a very large hyperparameter space. Optuna will eventually find good parameters, but how long will that take? The larger the space, the longer the wait.
Setting a high maximum for a parameter that significantly increases training time.
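As a rough illustration (the parameter names and ranges below are hypothetical, not the ones from this project): the number of configurations grows multiplicatively with every added choice, and raising the max of a cost-driving parameter also makes each worst-case trial far more expensive.

```python
import math

# Hypothetical search space: each key maps to its list of candidate values.
space = {
    "hidden_size": [32, 64, 128, 256],
    "num_layers": [1, 2, 3],
    "lr": [1e-4, 3e-4, 1e-3, 3e-3],
    "dropout": [0.0, 0.1, 0.3],
}

# The number of distinct configurations is the product of the choice counts.
n_configs = math.prod(len(v) for v in space.values())
print(n_configs)  # 4 * 3 * 4 * 3 = 144

# Raising the max of one parameter (hidden_size up to 1024) not only adds
# configurations but inflates the worst trials: for dense layers, a trial's
# cost scales roughly with hidden_size**2.
space["hidden_size"] += [512, 1024]
print(math.prod(len(v) for v in space.values()))  # 216
print(1024**2 / 256**2)  # 16.0x the per-trial cost at the new max
```

With a sampler like Optuna's TPE the search is smarter than grid search, but the same intuition holds: every extra dimension and every higher bound buys a longer wait.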
Using an RNN. An RNN must process the sequence step by step, so its time steps can't be computed in parallel, which makes it slow. A transformer processes the whole sequence at once, but requires far more parameters.
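A back-of-the-envelope comparison (standard per-layer formulas, hypothetical sizes) makes that trade-off concrete: a vanilla RNN layer is cheap in parameters but needs one sequential step per time step, while a transformer block attends to every position in one parallel pass at the cost of several weight matrices.

```python
d = 128        # hidden / model dimension (hypothetical)
seq_len = 100  # sequence length (hypothetical)

# Vanilla RNN layer: input-to-hidden W_xh (d*d), hidden-to-hidden W_hh (d*d),
# plus a bias (d).
rnn_params = d * d + d * d + d
print(rnn_params)  # 32896

# Transformer block, ignoring biases and LayerNorm: Q, K, V, and output
# projections (4 * d*d) plus a feed-forward with inner size 4d (2 * 4*d*d).
transformer_params = 4 * d * d + 2 * 4 * d * d
print(transformer_params)  # 196608, ~6x the RNN layer

# But the RNN must run seq_len sequential steps, while the transformer's
# attention over all positions can be computed in a single parallel pass.
print(seq_len)  # 100 sequential steps for the RNN vs. 1 pass
```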
Keeping SIRD even though it visibly produced worse patterns.
Not checking whether the MSSE loss produces well-behaved gradients. All I cared about was that it made the loss scale-invariant.
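A cheap sanity check I could have run: compare the analytic gradient of the loss against a finite-difference estimate and look at its magnitude. The sketch below assumes MSSE means squared error divided by a per-series scale (my guess at the definition; the function names are illustrative):

```python
def msse(preds, targets, scale):
    """Mean squared scaled error: MSE divided by a per-series scale."""
    n = len(preds)
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / (n * scale)

def msse_grad(preds, targets, scale):
    """Analytic gradient of msse w.r.t. each prediction."""
    n = len(preds)
    return [2 * (p - t) / (n * scale) for p, t in zip(preds, targets)]

def finite_diff(preds, targets, scale, i, eps=1e-6):
    """Numerically estimate d msse / d preds[i] by bumping one prediction."""
    bumped = preds.copy()
    bumped[i] += eps
    return (msse(bumped, targets, scale) - msse(preds, targets, scale)) / eps

preds, targets, scale = [2.0, 3.5, 1.0], [1.0, 3.0, 2.0], 4.0
analytic = msse_grad(preds, targets, scale)
numeric = [finite_diff(preds, targets, scale, i) for i in range(len(preds))]

# The two should agree closely; note also that a tiny `scale` would blow the
# gradient up, which is exactly the kind of pathology worth checking for.
print(all(abs(a - b) < 1e-4 for a, b in zip(analytic, numeric)))  # True
```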
Only scaling the data during preprocessing. The data was bad, and more could have been done. A neural network can theoretically learn any pattern given enough capacity and data, but the data was scarce and a large model was infeasible, so the pattern was hard to learn. I probably should have log-transformed it as well.
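For heavy-tailed counts, a log transform before scaling is the standard fix: it compresses the spikes so the small values stay distinguishable, and it is exactly invertible after prediction. A minimal sketch with made-up numbers:

```python
import math

# Hypothetical heavy-tailed daily counts: a few huge spikes dominate.
counts = [3, 5, 4, 7, 2, 900, 6, 1200]

# Plain min-max scaling squashes most points near zero because of the spikes.
lo, hi = min(counts), max(counts)
scaled = [(c - lo) / (hi - lo) for c in counts]
print(max(s for s, c in zip(scaled, counts) if c < 10))  # ≈ 0.004: tiny

# log1p compresses the spikes so the ordinary days keep usable resolution;
# expm1 inverts the transform after the model predicts in log space.
logged = [math.log1p(c) for c in counts]
roundtrip = [math.expm1(x) for x in logged]
print(all(abs(a - b) < 1e-9 for a, b in zip(counts, roundtrip)))  # True
```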
Dimension explosion. The one-hot encoded holidays added a huge number of dimensions and dominated the input. I should have added a linear embedding layer or something similar, much like a transformer's input embedding.
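The fix is cheap: a learned linear layer applied to a one-hot vector is just a row lookup in its weight matrix, so it collapses the wide one-hot block into a few dense features. A pure-Python sketch with hypothetical sizes (a framework's embedding layer, e.g. `torch.nn.Embedding`, does the same lookup directly):

```python
import random

random.seed(0)
n_holidays, emb_dim = 30, 4  # hypothetical sizes

# Weight matrix of the linear embedding layer: one row per holiday.
W = [[random.gauss(0, 0.1) for _ in range(emb_dim)] for _ in range(n_holidays)]

def one_hot(i, n):
    return [1.0 if j == i else 0.0 for j in range(n)]

def linear(x, W):
    """x @ W for a single input vector."""
    return [sum(x[r] * W[r][c] for r in range(len(W))) for c in range(len(W[0]))]

h = 17  # some holiday index
projected = linear(one_hot(h, n_holidays), W)
print(projected == W[h])  # True: the matmul just selects row h
print(len(projected))     # 4 features feed the model instead of 30
```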