
No image ghcr.io/kosmos-io/clusterlink-floater:0.2.1 #16

Open · 912988434 opened this issue Jul 8, 2024 · 3 comments · May be fixed by #20
Comments

@912988434

```
[root@dev-tools ~]# ./netctl init
I0708 16:49:55.411695 9899 init.go:69] write opts success
[root@dev-tools ~]# cat config.json
{
  "namespace": "kosmos-system",
  "version": "0.2.1",
  "protocol": "tcp",
  "podWaitTime": 30,
  "port": "8889",
  "maxNum": 3,
  "cmdTimeout": 10,
  "srcKubeConfig": "/.kube/config",
  "srcImageRepository": "ghcr.io/kosmos-io"
}
```
After running init, the auto-generated `version` here is 0.2.1, but that image does not seem to exist, while the v0.2.0 used in the examples does:
```
➜ ~ docker pull --platform linux/amd64 ghcr.io/kosmos-io/clusterlink-floater:0.2.1
Error response from daemon: manifest unknown

➜ ~ docker pull --platform linux/amd64 ghcr.io/kosmos-io/clusterlink-floater:v0.2.0
v0.2.0: Pulling from kosmos-io/clusterlink-floater
8921db27df28: Retrying in 1 second
414834ece500: Retrying in 1 second
4c8a6e1d1bac: Retrying in 1 second
f945322ee5cb: Waiting
5206f920d3a9: Waiting
error pulling image configuration: download failed after attempts=6: Using feature requires a Business Subscription: a SOCKS proxy

➜ ~ docker pull --platform linux/amd64 ghcr.io/kosmos-io/clusterlink-floater:v0.2.1
Error response from daemon: manifest unknown
```
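
A quick way to confirm which tags exist without downloading any layers (and without hitting the proxy issue above) is `docker manifest inspect`, which only queries the registry; a minimal sketch, assuming the same image names as above:

```
# Query the registry for the manifest only; no layers are downloaded.
docker manifest inspect ghcr.io/kosmos-io/clusterlink-floater:v0.2.0  # prints manifest JSON if the tag exists
docker manifest inspect ghcr.io/kosmos-io/clusterlink-floater:0.2.1   # "manifest unknown" if it does not
```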

@qq547475331

Same here.

@duanmengkk
Collaborator

duanmengkk commented Jul 10, 2024

The floater probably hasn't changed much, so just using 0.2.0 should be fine. @912988434 @qq547475331
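
A minimal sketch of that workaround, assuming netctl reads the generated config.json back on later runs rather than regenerating it (the `version` field is the one shown in the config above):

```
# Pin the version field to the tag that is actually published on ghcr.io
# (assumption: netctl re-reads config.json instead of overwriting it).
sed -i 's/"version": "0.2.1"/"version": "v0.2.0"/' config.json
grep '"version"' config.json   # confirm: "version": "v0.2.0"
```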
