Hi Anatoly, I appreciate the comment and your interest!
With any feature importance technique, the importances should be derived from predictions on the test set, since predictions on the training set will very often be overfitted (naturally, though to varying degrees). If the predictions can't be trusted, I wouldn't put much faith in the feature importances derived from them.
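Just to illustrate the idea, here's a minimal sketch using scikit-learn's permutation_importance scored against a held-out test set rather than the training data. The dataset, model, and parameters are placeholders, not the exact setup from the post:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Placeholder data and model for illustration only
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Score the importances on the test set, not the (possibly overfit) training set
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, mean_imp in enumerate(result.importances_mean):
    print(f"feature {i}: {mean_imp:.4f}")
```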
I understand your point about the ix_training and ix_test. If I remember rightly, I couldn't access the training IDs when I did it all as one loop. It was strange: when I tried to select the 0 index, it returned the IDs across all the folds. In the end, I settled on this way of getting the individual IDs for each fold.
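For context, something like the sketch below is what I mean by collecting the IDs per fold. It assumes scikit-learn's KFold and uses placeholder data; the variable names just mirror the ones from the post:

```python
import numpy as np
from sklearn.model_selection import KFold

X = np.arange(20).reshape(10, 2)  # placeholder data

# Collect the train/test row indices separately for each fold
ix_training, ix_test = [], []
kf = KFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in kf.split(X):
    ix_training.append(train_idx)
    ix_test.append(test_idx)

# Each element now holds the IDs for a single fold
print(ix_training[0], ix_test[0])
```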
WRT RepeatedKFold, you may be right, but I think I did it this way to keep the data grouped within each fold. Or maybe I just didn't think too much about it because it served my purpose ;)
Thanks for your feedback Anatoly!