This repository has been archived by the owner on Jun 2, 2023. It is now read-only.
I would also like to test how different pre-processing approaches affect the calculations: normalization vs. standardization, anomalies, and differences from the day-of-year (DOY) average.
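The pre-processing variants above could be compared with a minimal sketch like the following. This uses a synthetic daily series; the variable names are illustrative and not taken from this repository.

```python
import numpy as np
import pandas as pd

# Synthetic daily series with a seasonal cycle plus noise (stand-in for real data)
dates = pd.date_range("2015-01-01", periods=1500, freq="D")
rng = np.random.default_rng(1)
s = pd.Series(
    np.sin(2 * np.pi * dates.dayofyear / 365.25) + rng.normal(0, 0.3, len(dates)),
    index=dates,
)

# Normalization: rescale to [0, 1]
normalized = (s - s.min()) / (s.max() - s.min())

# Standardization: zero mean, unit variance (z-score)
standardized = (s - s.mean()) / s.std()

# Difference from the DOY average: subtract the day-of-year climatology
doy_mean = s.groupby(s.index.dayofyear).transform("mean")
anomaly = s - doy_mean
```

Each variant could then be fed through the same information theory functions to see how sensitive the metrics are to the choice.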
Are information theory metrics sensitive to autocorrelation? An idea for testing this from @jds485 (Jared D. Smith):
I was thinking that we could numerically test what the significance threshold should be: generate synthetic time series with known autocorrelations, compute MI and MIcrit, and determine how much we need to change MIcrit for MI to no longer be significant. It's not a perfect correction, but it can give a sense of how large the correction could get.
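A minimal numerical sketch of this test, under stated assumptions: AR(1) processes as the synthetic series, a simple histogram plug-in MI estimator, and MIcrit taken as the 95th percentile of MI over shuffle surrogates. The function names (`mutual_info`, `mi_crit`, `ar1`) are hypothetical, not this repository's API.

```python
import numpy as np

def mutual_info(x, y, bins=16):
    """Histogram plug-in estimate of mutual information (in nats)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x, shape (bins, 1)
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y, shape (1, bins)
    nz = pxy > 0                          # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def ar1(n, phi, rng):
    """Synthetic AR(1) series with lag-1 autocorrelation phi."""
    x = np.zeros(n)
    eps = rng.standard_normal(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + eps[t]
    return x

def mi_crit(x, y, n_shuffles=200, alpha=0.05, rng=None, bins=16):
    """Significance threshold: (1 - alpha) quantile of MI over shuffled surrogates."""
    rng = rng or np.random.default_rng()
    null = [mutual_info(rng.permutation(x), y, bins) for _ in range(n_shuffles)]
    return float(np.quantile(null, 1 - alpha))

rng = np.random.default_rng(42)
for phi in (0.0, 0.5, 0.9):
    x = ar1(2000, phi, rng)
    y = ar1(2000, phi, rng)   # independent of x, but equally autocorrelated
    mi = mutual_info(x, y)
    crit = mi_crit(x, y, rng=rng)
    print(f"phi={phi:.1f}  MI={mi:.4f}  MIcrit={crit:.4f}  significant={mi > crit}")
```

Since x and y are independent by construction, any run where MI exceeds MIcrit at high phi indicates how much the shuffle-based threshold would need to grow to correct for autocorrelation.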
Block-shuffling the time series, instead of shuffling single observations, could be another way to estimate a more appropriate MIcrit when there is autocorrelation. The block size would need to change based on the autocorrelation value.
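A sketch of block shuffling, with the block length chosen from the autocorrelation as suggested above. The decorrelation-length heuristic (first lag where the sample autocorrelation drops below 1/e) is one possible choice, not a prescription; the helper names are hypothetical.

```python
import numpy as np

def block_shuffle(x, block_len, rng):
    """Shuffle a series in contiguous blocks, preserving within-block autocorrelation.

    Trailing observations that do not fill a whole block are dropped.
    """
    n_blocks = len(x) // block_len
    blocks = x[: n_blocks * block_len].reshape(n_blocks, block_len)
    return blocks[rng.permutation(n_blocks)].ravel()

def block_len_from_acf(x, max_lag=50):
    """Heuristic block length: first lag where the autocorrelation falls below 1/e."""
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1 :]
    acf = acf / acf[0]
    below = np.where(acf[: max_lag + 1] < 1 / np.e)[0]
    return int(below[0]) if below.size else max_lag

# Usage: pick a block length from an AR(1) series and block-shuffle it
rng = np.random.default_rng(0)
x = np.zeros(3000)
eps = rng.standard_normal(3000)
for t in range(1, 3000):
    x[t] = 0.8 * x[t - 1] + eps[t]

L = block_len_from_acf(x)
xs = block_shuffle(x, L, rng)
```

The block-shuffled surrogates `xs` would replace the single-observation shuffles when building the MIcrit null distribution.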
I'd like to come up with a series of tests to benchmark and get a feel for the information theory functions. In general, the goals would be: