
Project dependencies may have API risk issues #33

Open
PyDeps opened this issue Oct 26, 2022 · 0 comments

PyDeps commented Oct 26, 2022

Hi, in Mem2Seq, inappropriate dependency version constraints can introduce risks.

Below are the dependencies and version constraints that the project is currently using:

cycler==0.10.0
joblib==0.14.1
kiwisolver==1.1.0
matplotlib==3.2.1
nltk==3.4.5
numpy==1.18.2
pandas==1.0.3
pyparsing==2.4.6
python-dateutil==2.8.1
pytz==2019.3
scikit-learn==0.22.2.post1
scipy==1.4.1
seaborn==0.10.0
six==1.14.0
sklearn==0.0
torch==1.1.0
tqdm==4.43.0

The version constraint == introduces a risk of dependency conflicts because it makes the dependency scope too strict.
Version constraints with no upper bound, or *, introduce a risk of missing-API errors because the latest version of a dependency may remove some APIs (see the compatibility sketch after the method list below).

After further analysis, in this project, the version constraint of the dependency numpy can be relaxed to >=1.8.0,<=1.23.0rc3.

This modification reduces dependency conflicts as much as possible while admitting the latest versions that do not trigger missing-API errors in the project.
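
Concretely, assuming the project tracks its dependencies in a requirements.txt (the pinned list above suggests it does), the suggested change would replace the strict pin with the bounded range:

numpy>=1.8.0,<=1.23.0rc3

instead of

numpy==1.18.2

The lower bound keeps every numpy API the project calls available, while the upper bound stops pip from resolving to a release that postdates this analysis.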

The project's code invokes all of the following methods.

Methods called from numpy:
char.isdigit

Methods called from all modules:
gete_s.Variable.transpose
input_batches.size
torch.bmm.transpose
zip
gate.append
argparse.ArgumentParser.add_argument
self.LuongAttnDecoderRNN.super.__init__
self.concat.cuda
embed_A.torch.sum.squeeze.size
super
target_gate.float
item.lower.replace
torch.sum
lengths.max.sequences.len.torch.ones.long
torch.nn.functional.log_softmax
LuongAttnDecoderRNN
correct.lstrip.rstrip.lstrip
torch.nn.functional.tanh
set
join.find
i.story.append
args.globals
story.contiguous
toppi.view.Variable.input_batches.torch.gather.transpose
read_langs
all_decoder_outputs_ptr.transpose.contiguous
args.globals.evaluate
PtrDecoderRNN
str
numpy.zeros
self.preprocess
lengths.max.sequences.len.torch.zeros.long
self.add_module
torch.autograd.Variable
h0_encoder.cuda.cuda
ID.append
self.evaluate_batch
toppi.view
self.decoder.train
el.replace.tokenizer.join.replace.lower
correct.lstrip.rstrip
st.lstrip.rstrip.lstrip
self.PtrDecoderRNN.super.__init__
torch.nn.utils.clip_grad_norm
logging.info
tqdm.set_description
char.isdigit
st.lstrip.rstrip.split
ast.literal_eval
self.decoder_optimizer.step
torch.nn.GRU
sequence_length.data.max
self.from_whichs.append
six.moves.urllib.request.urlretrieve
sequence_length.size
input.size
max_len.torch.arange.long
argparse.ArgumentParser.parse_args
self.compute_prf
kb_arr.append
torch.nn.utils.rnn.pad_packed_sequence
hop.self.C
hidden.squeeze
trg_seqs.Variable.transpose
decoder_ptr.data.topk
d.pop
gete_s.cuda.cuda
torch.LongTensor
tempfile.NamedTemporaryFile.write
AttrProxy
torch.nn.MSELoss
self.EncoderMemNN.super.__init__
e.keys
max_len.sequences.len.torch.zeros.long
context.squeeze.word_embedded.torch.cat.unsqueeze
energy.transpose.transpose
context.squeeze.squeeze
entity_replace
self.embedding_dropout.cuda
torch.autograd.Variable.float
torch.nn.functional.softmax
day.el.split.rstrip
self.W
torch.nn.functional.sigmoid
d.split
references.join.encode
masked_cross_entropy.item
line.strip.replace
self.encoder.cuda
nltk.wordpunct_tokenize
get_type_dict.keys
torch.nn.Softmax
torch.Tensor
torch.optim.Adam
line.strip.strip
logits.size
k.item.lower
self.index_word
day.el.split.split
k.item.lower.replace
entity.type_dict.append
VanillaDecoderRNN
entity_nav.append
utils.until_temp.entityList.append
utils.measures.wer
os.path.abspath
self.decoder
el.keys
ind_seqs.cuda.cuda
os.path.dirname
encoder_outputs.transpose.transpose
torch.Tensor.split
p.replace
correct.lstrip
self.concat
item.keys
tqdm.tqdm.set_description
globals
encoder_outputs.data.shape.self.v.repeat.unsqueeze
target_batches.transpose.contiguous
self.VanillaSeqToSeq.super.__init__
os.path.exists
sent_new.append
torch.gather
dict
i.top_ptr_i.item
embed_A.torch.sum.squeeze.view
hypotheses.join.encode
line.replace.replace
torch.nn.Parameter
self.C.unsqueeze
self.W1
seq_range.unsqueeze.expand
torch.nn.functional.sigmoid.squeeze
story.contiguous.view
os.path.join
enumerate
input_batches.transpose
encoder_outputs.transpose.size
tqdm.tqdm
all_decoder_outputs_gate.cuda.cuda
candid_all.append
unicodedata.category
context.split
self.softmax
s.re.sub.strip.lower
self.decoder.ptrMemDecoder
src_seqs.Variable.transpose
last_hidden.unsqueeze
input_batches.transpose.self.encoder.unsqueeze
r_index.append
torch.Tensor.append
embed_C.torch.sum.squeeze
unicode_to_ascii
sequence_length.unsqueeze.expand_as
tqdm
int
self.U
candid2candDL.keys
torch.save
join
self.gru
float
entity_list.append
os.makedirs
max_len.sequences.len.torch.ones.long
mask.float.sum
embed_A.torch.sum.squeeze
generate_memory
re.sub
a.long.size
args.globals.print_loss
args.globals.train_batch
torch.bmm
line.split.replace
open
a.cuda.long
target.size
torch.utils.data.DataLoader
hidden.squeeze.unsqueeze
torch.gather.squeeze
ref.append
all_decoder_outputs_vocab.cuda.transpose
torch.nn.utils.rnn.pack_padded_sequence
bleu_out.decode.decode
hyp.append
length.torch.LongTensor.Variable.cuda
target_batches.transpose
self.decoder.parameters
DecoderMemNN
torch.zeros
numpy.random.binomial
max
self.encoder
sum
os.chmod
conv_seqs.Variable.transpose
line.replace.split
get_type_dict
prepare_data_seq
entity_cal.append
prob_.unsqueeze.expand_as
r.split
seq_range_expand.cuda.cuda
torch.nn.Linear
self.embedding_dim.bsz.torch.zeros.Variable.cuda
math.sqrt
story.size
self.encoder_optimizer.step
EncoderRNN
topi.view
torch.rand
torch.cat
utils.until_temp.entityList
print
torch.nn.functional.softmax.bmm
all_decoder_outputs_ptr.cuda.cuda
range
self.W.cuda
decoder_input.cuda.cuda
json.load.keys
os.cpu_count
from_which.append
self.encoder_optimizer.zero_grad
max_len.s_t.repeat.transpose
torch.utils.data.append
conv_seqs.cuda.cuda
self.dropout
toppi.squeeze
self.W1.cuda
rnn_output.squeeze.squeeze
EncoderMemNN
x.str.lower
re.search
tempfile.NamedTemporaryFile.close
map
nltk.tokenize.word_tokenize
logging.basicConfig
self.U.cuda
global_temp.append
Lang.index_words
self.preprocess_gate
numpy.uint8.h.len.r.len.numpy.zeros.reshape
self.PTRUNK.super.__init__
src_seqs.cuda.cuda
self.out.cuda
self.DecoderMemNN.super.__init__
get_seq
bool
idx.token_array.isdigit
cleaner
temp.append
entityList
self.decoder.load_memory
embedded.view.view
sequence_mask.float
merge
st.lstrip.rstrip
any
s.lower.strip
last_hidden.unsqueeze.repeat
eng.lower
getattr
i.data_dev.dialog_acc_dict.append
new_token_array.pop
i.d.str.lower
self.get_state.unsqueeze
self.EncoderRNN.super.__init__
self.v.data.normal_
hidden.unsqueeze.repeat
slot.el.replace.tokenizer.join.lower
torch.nn.Embedding
candidates.append
vars
numpy.float32
s.re.sub.strip
loss.backward
self.get_state
Dataset
embed_C.torch.sum.squeeze.size
u.unsqueeze.expand_as
st.lstrip
self.criterion
a.long.transpose
y_seq.append
self.C
self.encoder.parameters
i.topvi.item
target.view
sequence_mask
unicodedata.normalize
x_seq.append
utils.measures.moses_multi_bleu
ptr_seq.append
torch.load
torch.gather.view
decoder_vacab.data.topk
self.encoder.train
json.load
el.replace
sequence_length.unsqueeze
sent.split
entity.append
dialog_acc_dict.keys
p.append
MEM_TOKEN_SIZE.lengths.max.sequences.len.torch.ones.long
self.preprocess.append
self.LuongSeqToSeq.super.__init__
trg_seqs.cuda.cuda
all_decoder_outputs_vocab.cuda.cuda
model.scheduler.step
hidden.squeeze.append
self.embedding.cuda
self.dropout.cuda
global_rp.keys
masked_cross_entropy
prob.unsqueeze.expand_as
subprocess.check_output
c0_encoder.cuda.cuda
list
topvi.squeeze
ptr_index.append
format
numpy.array
self.embedding_dropout
elm.split
self.v.repeat
sentence.split
torch.arange
all_decoder_outputs_vocab.transpose.contiguous
decoded_words.append
self.VanillaDecoderRNN.super.__init__
tempfile.NamedTemporaryFile.flush
join.replace
target_index.transpose.contiguous
self.preprocess_inde
self.out.squeeze
self.save_model
p.e.str.lower.replace
line.replace.strip
argparse.ArgumentParser
C.weight.data.normal_
day.el.split.rstrip.replace
min
all_decoder_outputs_ptr.cuda.transpose
self.v.size
random.random
entity_wet.append
slot.el.replace
get_type_dict.append
len.append
load_candidates
target_index.transpose
dict.items
el.replace.tokenizer.join.replace
p.e.str.lower
list.append
self.m_story.append
new_token_array.append
prob_.unsqueeze.expand_as.unsqueeze
i.toppi.item
fre.lower
DecoderrMemNN
temp_gen.append
torch.ones
torch.nn.Dropout
os.path.realpath
join.lstrip
torch.utils.data.sort
tempfile.NamedTemporaryFile
self.decoder_optimizer.zero_grad
self.lstm.cuda
line.split.split
day.el.split
bleu_out.re.search.group
a.bmm.squeeze
self.softmax.unsqueeze
a.cuda.cuda
numpy.transpose
self.v.cuda
loss.item
el_key.el.tokenizer.join.lower
embed_C.torch.sum.squeeze.view
numpy.ones
logits.view
len
torch.optim.lr_scheduler.ReduceLROnPlateau
input_seq.size
gate_seq.append
line.strip.split
item.lower
a.long.contiguous
p.str.replace
self.out
max_len.torch.arange.long.unsqueeze
self.decoder.cuda
input_batches.self.encoder.unsqueeze
numpy.size
candid2DL
names.keys
ind_seqs.Variable.transpose
self.embedding
length.float.sum
hidden.squeeze.size
torch.nn.LSTM
story.size.story.contiguous.view.long
self.lstm
Lang
join.split
self.DecoderrMemNN.super.__init__
self.Mem2Seq.super.__init__
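
To illustrate the missing-API risk concretely: the list above includes torch.nn.utils.clip_grad_norm and torch.nn.functional.sigmoid / torch.nn.functional.tanh, all of which are deprecated aliases in PyTorch (superseded by clip_grad_norm_, torch.sigmoid, and torch.tanh) and are candidates for removal in newer releases. A minimal hedged sketch of a compatibility guard follows; this is not the project's own code, only an assumption about how the call sites could survive an upgrade past the pinned torch==1.1.0:

import torch
import torch.nn.utils as nn_utils

# Prefer the in-place clip_grad_norm_ introduced in PyTorch 0.4; fall back
# to the deprecated alias only on very old installations that lack it.
clip_grad_norm = getattr(nn_utils, "clip_grad_norm_", None)
if clip_grad_norm is None:
    clip_grad_norm = nn_utils.clip_grad_norm

# torch.sigmoid / torch.tanh replace the deprecated functional variants.
sigmoid = torch.sigmoid
tanh = torch.tanh

Relaxing the torch pin the same way the issue suggests for numpy would still require checking each of these call sites, since the strict == pin is currently the only thing guaranteeing that the deprecated names exist.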

@developer
Could you please help me check this issue?
May I open a pull request to fix it?
Thank you very much.
