Support Kubernetes up to V1.28
The `k8s.io/xxx` dependencies are all upgraded to v0.28.9 to ensure that OpenYurt is compatible with Kubernetes v1.28. This compatibility has been confirmed by an end-to-end (E2E) test in which we started a Kubernetes v1.28 cluster using KinD and deployed the latest OpenYurt components. All the key components of OpenYurt, such as yurt-manager and yurthub, are deployed on the Kubernetes cluster via Helm to ensure that the Helm charts provided by the OpenYurt community can run stably in production environments. #2047 #2074
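For reference, a minimal KinD configuration for this kind of E2E setup might look like the sketch below (the node image tag is illustrative; any available `kindest/node` v1.28.x tag works):

```yaml
# Minimal KinD cluster config for a v1.28 compatibility test.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    image: kindest/node:v1.28.9
  - role: worker
    image: kindest/node:v1.28.9
```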
Reduce cloud-edge traffic spike during rapid node additions
The `NodePool` resource is essential for managing groups of nodes within OpenYurt clusters, as it records details of all nodes in the pool through the `NodePool.status.nodes` field. YurtHub relies on this information to identify endpoints within the same NodePool, thereby enabling pool-level service topology functionality. However, when a large NodePool, potentially comprising thousands of nodes, experiences swift expansion, such as the integration of hundreds of edge nodes within a mere minute, the surge in cloud-to-edge network traffic can be significant. In this release, a new type of resource called `NodeBucket` has been introduced. It provides a scalable and streamlined method for managing extensive NodePools, significantly reducing the impact on cloud-edge traffic during periods of rapid node growth and ensuring that cluster stability is maintained.
#1864 #1874 #1930
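Conceptually, a NodeBucket is a lightweight chunk of a NodePool's membership list. A sketch under the proposal's field names (which may differ slightly in a given release):

```yaml
# Sketch of a NodeBucket: one bucket holds a slice of a NodePool's members,
# so membership changes only re-transfer the affected bucket.
apiVersion: apps.openyurt.io/v1alpha1
kind: NodeBucket
metadata:
  name: hangzhou-fp58z                  # hypothetical generated name
  labels:
    openyurt.io/pool-name: hangzhou     # the NodePool this bucket belongs to
numNodes: 2
nodes:
  - name: edge-node-1
  - name: edge-node-2
```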
Upgrade YurtAppSet to v1beta1
YurtAppSet v1beta1 is introduced to facilitate the management of multi-region workloads. Users can use a YurtAppSet to distribute the same workload template (`WorkloadTemplate`, a Deployment or StatefulSet) to different nodepools, selected either by a label selector (`NodePoolSelector`) or by a list of nodepool names (`Pools`). Users can also customize the configuration of workloads in different nodepools through `WorkloadTweaks`.
In this release, we have consolidated the functionality of three older CRDs (YurtAppSet v1alpha1, YurtAppDaemon and YurtAppOverrider) into YurtAppSet v1beta1, and we recommend using it in place of the old ones.
#1890 #1931 #1939 #1974 #1997
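A minimal sketch of a YurtAppSet v1beta1 object, assuming the field names above (the exact tweak schema may vary by release):

```yaml
apiVersion: apps.openyurt.io/v1beta1
kind: YurtAppSet
metadata:
  name: nginx-multiregion
spec:
  pools:                      # distribute to these nodepools by name
    - beijing
    - hangzhou
  workload:
    workloadTemplate:
      deploymentTemplate:
        metadata:
          labels:
            app: nginx
        spec:
          replicas: 1
          selector:
            matchLabels:
              app: nginx
          template:
            metadata:
              labels:
                app: nginx
            spec:
              containers:
                - name: nginx
                  image: nginx:1.25
    workloadTweaks:           # per-pool customization
      - pools:
          - beijing
        tweaks:
          replicas: 3         # beijing runs 3 replicas instead of 1
```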
Improve transparent management mechanism for control traffic from edge to cloud
The previous transparent management mechanism for cloud-edge control traffic had certain limitations and could not effectively support direct requests to the default/kubernetes service. This release introduces a new transparent management mechanism for cloud-edge control traffic, enabling pods that use InClusterConfig or the default/kubernetes service name to access the kube-apiserver via YurtHub without needing to be aware of the details of the public network connection between cloud and edge. #1975 #1996
Separate clients for yurt-manager component
Yurt-manager is an important cloud-side component of OpenYurt that hosts multiple controllers and webhooks. Previously, those controllers and webhooks shared one client and one set of RBAC resources (yurt-manager-role/yurt-manager-role-binding/yurt-manager-sa), which kept growing as more functionality was added to yurt-manager. This gave controllers access they should not have, and made it difficult to tell from the audit logs which controller a request came from. In this release, we restrict each controller/webhook to only the permissions it actually needs, with separate RBAC resources and user agents for different controllers and webhooks. #2051 #2069
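Conceptually, instead of one broad shared role, each controller now gets a narrowly scoped role bound to its own identity, along these lines (names and rules below are illustrative, not the exact manifests shipped with the chart):

```yaml
# Illustrative per-controller role: scoped to what this controller touches,
# bound to the controller's own identity instead of a shared service account.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: yurt-manager-nodepool-controller   # hypothetical name
rules:
  - apiGroups: ["apps.openyurt.io"]
    resources: ["nodepools", "nodepools/status"]
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
```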
Enhancement to Yurthub's Autonomy capabilities
A new autonomy condition has been added to node conditions so that yurthub can report the autonomy status of a node in real time at each nodeStatusUpdateFrequency interval. This condition allows for accurate determination of each node's autonomy status. In addition, an error key mechanism has been introduced to log cache failure keys along with their corresponding fault reasons. The error keys are persisted using the AOF (Append-Only File) method, ensuring that the autonomy state is recovered even after a reboot and preventing the system from entering a pseudo-autonomous state. These enhancements also make troubleshooting easier when autonomy issues arise. #2015 #2033 #2096
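The reported condition would appear among the node's status conditions roughly as follows (the type and reason strings here are illustrative, not the exact values emitted by yurthub):

```yaml
# Sketch of an autonomy condition as it might appear in `kubectl get node -o yaml`.
status:
  conditions:
    - type: Autonomy                # illustrative condition type
      status: "True"
      reason: NodeAutonomous        # illustrative reason
      message: "node is running in autonomy mode"
      lastHeartbeatTime: "2024-06-01T00:00:00Z"
```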
- improve ca data for yurthub component by @rambohe-ch in #1815
- improve FieldIndexer setting in yurt-manager by @2456868764 in #1834
- fix: yurtadm join ignorePreflightErrors could not set all by @YTGhost in #1837
- Feature: add name-length of dummy interface too long error by @8rxn in #1875
- feat: support v3 rest api client for edgex v3 api by @wangxye in #1850
- feat: support edgex napa version by auto-collector by @LavenderQAQ in #1852
- feat: improve discardcloudservice filter in yurthub component (#1924) by @huangchenzhao in #1926
- Add missing verb to the role of node lifecycle controller by @crazytaxii in #1936
- don't cache csr and sar resource in yurthub by @rambohe-ch in #1949
- feat: improve hostNetwork mode of NodePool by adding NodeAffinity to pods with specified annotation (#1935) by @huangchenzhao in #1959
- move list object handling from ObjectFilter into ResponseFilter by @rambohe-ch in #1991
- The gateway can forward traffic from extra source cidrs by @River-sh in #1993
- return back watch.Deleted event to clients when watch object is removed in ObjectFilters by @rambohe-ch in #1995
- add pool service controller. by @zyjhtangtang in #2010
- aggregated annotations and labels. by @zyjhtangtang in #2027
- improve pod webhook for adapting hostnetwork mode nodepool by @rambohe-ch in #2050
- intercept kubelet get node request in order to reduce the traffic by @vie-serendipity in #2039
- bump controller-gen to v0.13.0 by @Congrool in #2056
- improve nodepool conversion by @rambohe-ch in #2080
- feat: add version metrics for yurt-manager and yurthub components by @rambohe-ch in #2094
- fix cache manager panic in yurthub by @rambohe-ch in #1950
- fix: upgrade the version of runc to avoid security risk by @qclc in #1972
- fix only openyurt crd conversion should be handled for upgrading cert by @rambohe-ch in #2013
- fix the cache leak in yurtappoverrider controller by @MeenuyD in #1795
- fix(yurt-manager): add clusterrole for nodes/status subresources by @qclc in #1884
- fix: close dst file by @testwill in #2046
- Proposal: High Availability of Edge Services by @Rui-Gan in #1816
- Proposal: yurt express: openyurt data transmission system proposal by @qsfang in #1840
- proposal: add NodeBucket to reduce cloud-edge traffic spike during rapid node additions. by @rambohe-ch in #1864
- Proposal: add yurtappset v1beta1 proposal by @luc99hen in #1890
- proposal: improve transparent management mechanism for control traffic from edge to cloud by @rambohe-ch in #1975
- Proposal: enhancement of edge autonomy by @vie-serendipity in #2015
- Proposal: separate yurt-manager clients by @luc99hen in #2051
Thank you to everyone who contributed to this release! ❤
- @wangxye
- @huiwq1990
- @testwill
- @fengshunli
- @Congrool
- @zyjhtangtang
- @vie-serendipity
- @dsy3502
- @YTGhost
- @River-sh
- @qclc
- @lilongfeng0902
- @NewKeyTo
- @crazytaxii
- @MeenuyD
- @dzcvxe
- @2456868764
- @8rxn
- @huangchenzhao
- @karthik507
- @MundaneImmortal
- @rambohe-ch
And thank you very much to everyone else not listed here who contributed in other ways like filing issues, giving feedback, helping users in community group, etc.
Support for HostNetwork Mode NodePool
When the resources of edge nodes are limited and only simple applications need to run (for instance, when the container network is not needed and applications do not need to communicate with each other), a HostNetwork mode nodepool is a reasonable choice. When creating a nodepool, users only need to set `spec.hostNetwork=true` to create a HostNetwork mode nodepool.
In this mode, only some essential components such as kubelet, yurthub and raven-agent will be installed on the nodes in the pool. In addition, pods scheduled onto these nodes will automatically adopt host network mode. This effectively reduces resource consumption while maintaining application performance.
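For example, a HostNetwork mode nodepool can be declared like this (a minimal sketch using the NodePool v1beta1 API):

```yaml
apiVersion: apps.openyurt.io/v1beta1
kind: NodePool
metadata:
  name: hangzhou
spec:
  type: Edge
  hostNetwork: true   # pods scheduled into this pool run with host networking
```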
Support for customized configuration at the nodepool level for multi-region workloads
YurtAppOverrider is a new CRD used to customize the configuration of the workloads managed by YurtAppSet/YurtAppDaemon. It provides a simple and straightforward way to configure every field of the workload under each nodepool. It is a fundamental component of the multi-region workload configuration rendering engine.
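A minimal YurtAppOverrider sketch, assuming the entry/patch schema documented for this feature (treat field names as illustrative):

```yaml
apiVersion: apps.openyurt.io/v1alpha1
kind: YurtAppOverrider
metadata:
  name: demo
  namespace: default
subject:                       # which YurtAppSet/YurtAppDaemon to customize
  apiVersion: apps.openyurt.io/v1alpha1
  kind: YurtAppSet
  name: demo
entries:
  - pools:
      - beijing                # applies only to workloads rendered for this pool
    items:
      - image:
          containerName: nginx
          imageClaim: nginx:1.19   # override the image in this pool
    patches:
      - operation: replace
        path: /spec/replicas
        value: 3
```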
Support for building EdgeX IoT systems by using PlatformAdmin
PlatformAdmin is a CRD that manages the IoT systems in the OpenYurt nodepool. It has evolved from the previous yurt-edgex-manager. Starting from this version, the functionality of yurt-edgex-controller has been merged into yurt-manager. This means that users no longer need to deploy any additional components; they only need to install yurt-manager to have all the capabilities for managing edge devices.
PlatformAdmin provides users with a user-friendly way to deploy a complete EdgeX system on a nodepool. It comes with an optional component library and configuration templates. Advanced users can also customize the configuration of this system according to their needs.
Currently, PlatformAdmin supports all versions of EdgeX from Hanoi to Minnesota. In the future, it will continue to rapidly support upcoming releases using the auto-collector feature. This ensures that PlatformAdmin remains compatible with the latest versions of EdgeX as they are released.
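A minimal PlatformAdmin sketch (the apiVersion and field names follow the iot.openyurt.io group used at this release; treat them as illustrative):

```yaml
apiVersion: iot.openyurt.io/v1alpha2
kind: PlatformAdmin
metadata:
  name: edgex-sample
spec:
  version: minnesota   # EdgeX release to deploy
  poolName: hangzhou   # target nodepool for the EdgeX instance
```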
Support yurt-iot-dock deployment as an IoT system component
yurt-iot-dock is a component responsible for managing edge devices in IoT systems. It has evolved from the previous yurt-device-controller. As a component that connects the cloud and edge device management platforms, yurt-iot-dock abstracts three CRDs: DeviceProfile, DeviceService, and Device. These CRDs are used to represent and manage corresponding resources on the device management platform, thereby impacting real-world devices.
By declaratively modifying the fields of these CRs, users can achieve the operational and management goals of complex edge devices in a cloud-native manner. yurt-iot-dock is deployed by PlatformAdmin as an optional IoT component. It is responsible for device synchronization during startup and severs the synchronization relationship when being terminated or destroyed.
In this version, the deployment and destruction of the yurt-iot-dock are all controlled by PlatformAdmin, which improves the ease of use of the yurt-iot-dock.
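As an illustration of the declarative style described above, a Device CR might look like the sketch below (field names are assumptions based on the yurt-device-controller lineage, not an exact schema):

```yaml
# Sketch of a Device CR owned by yurt-iot-dock.
apiVersion: iot.openyurt.io/v1alpha1
kind: Device
metadata:
  name: hangzhou-random-boolean-device   # hypothetical device name
spec:
  nodePool: hangzhou     # which pool's yurt-iot-dock manages this device
  managed: true          # let OpenYurt drive the device declaratively
  adminState: UNLOCKED   # desired admin state pushed down to the platform
```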
Some Repos are archived
With the upgrading of the OpenYurt architecture, the functions of quite a few components have been merged into Yurt-Manager (e.g. yurt-app-manager, raven-controller-manager, etc.), and some repos have been migrated into the openyurt repo for better management (e.g. yurtiotdock). The following repos have been archived:
- yurt-app-manager
- yurt-app-manager-api
- raven-controller-manager
- yurt-edgex-manager
- yurt-device-controller
- yurtcluster-operator
- feat: use real kubernetes server address to yurthub when yurtadm join by @Lan-ce-lot in #1517
- yurtadm support enable kubelet service by @YTGhost in #1523
- feat: support SIGUSR1 signal for yurthub by @y-ykcir in #1487
- feat: remove yurtadm init command by @YTGhost in #1537
- add yurtadm join node in specified nodepool by @JameKeal in #1402
- rename pool-coordinator to yurt-coordinator for charts by @JameKeal in #1551
- move iot controller to yurt-manager by @Rui-Gan in #1488
- feat: provide config option for yurtadm by @YTGhost in #1547
- add yurtadm to install/uninstall staticpod by @JameKeal in #1550
- change access permission to default in general. by @fujitatomoya in #1576
- build: added github registry by @siredmar in #1578
- feat: support edgex minnesota through auto-collector by @LavenderQAQ in #1582
- feat: prevent node movement by label modification by @y-ykcir in #1444
- add cpu limit for yurthub by @huweihuang in #1609
- feat: provide users with the ability to customize the edgex framework by @LavenderQAQ in #1596
- add kubelet certificate mode in yurthub by @rambohe-ch in #1625
- delete configmap when yurtstaticset is deleting by @JameKeal in #1640
- add new gateway version v1beta1 by @River-sh in #1641
- feat: reclaim device, deviceprofile and deviceservice before exiting YurtIoTDock by @wangxye in #1647
- feat: upgrade YurtIoTDock to support edgex v3 api by @wangxye in #1666
- feat: add token format checking to yurtadm join process by @YTGhost in #1681
- Add status info to YurtAppSet/YurtAppDaemon by @vie-serendipity in #1702
- fix(yurt-manager): raven controller can't list calico blockaffinity by @luckymrwang in #1676
- feat: support yurtadm config command by @YTGhost in #1709
- improve lease lock for yurt-manager component by @rambohe-ch in #1741
- add nodelifecycle controller by @rambohe-ch in #1746
- disable the iptables setting of yurthub component by default by @rambohe-ch in #1770
- fix memory leak for yurt-tunnel-server by @huweihuang in #1471
- fix yurthub memory leak by @JameKeal in #1501
- fix yurtstaticset workerpod reset error by @JameKeal in #1526
- fix conflicts for getting node by local storage in yurthub filters by @rambohe-ch in #1552
- fix nested work dir yurthub/yurthub by @luc99hen in #1693
- fix pool scope crd resource etcd key path by @qsfang in #1729
- proposal for raven l7 by @River-sh in #1541
- proposal of support raven NAT traversal by @YTGhost in #1639
- Proposal for Multi-region workloads configuration rendering engine by @vie-serendipity in #1600
- Proposal of install openyurt components using dashboard by @401lrx in #1664
- Proposal use message-bus instead of REST to communicate with EdgeX by @Pluviophile225 in #1680
Thank you to everyone who contributed to this release! ❤
- @huiwq1990
- @y-ykcir
- @JameKeal
- @Lan-ce-lot
- @YTGhost
- @fujitatomoya
- @LavenderQAQ
- @River-sh
- @huweihuang
- @luc99hen
- @luckymrwang
- @wangzihao05
- @yojay11717
- @lishaokai1995
- @yeqiugt
- @TonyZZhang
- @vie-serendipity
- @my0sotis
- @Rui-Gan
- @zhy76
- @siredmar
- @wangxye
- @401lrx
- @testwill
- @Pluviophile225
- @shizuocheng
- @qsfang
And thank you very much to everyone else not listed here who contributed in other ways like filing issues, giving feedback, helping users in community group, etc.
Refactor OpenYurt control plane components
In order to improve the management of all repos in the OpenYurt community and reduce the complexity of installing OpenYurt, after detailed discussions the community agreed on a new component named yurt-manager to manage the controllers and webhooks previously scattered across multiple components (like yurt-controller-manager, yurt-app-manager, raven-controller-manager, etc.).
After the refactoring, based on the controller-runtime framework, new controllers and webhooks can be easily added to the yurt-manager component in the future. Also note that the yurt-manager must be installed on the same node as the K8s control-plane component (like kube-controller-manager). #1067
Support OTA or AdvancedRollingUpdate upgrade models for static pods
As you know, static pods are managed directly by the kubelet daemon on the node and there is no APIServer watching them. In general, if a user wants to upgrade a static pod (like YurtHub), the user has to manually modify or replace the manifest of the static pod. This can be a very tedious and painful task when the number of static pods becomes very large.
Users can define pod templates and upgrade models through the YurtStaticSet CRD. The upgrade models support both OTA and AdvancedRollingUpdate kinds, thus easily meeting the upgrade needs of large-scale static pods. Also, the pod template in the yurthub YurtStaticSet CRD is used to install the YurtHub component on a node when it joins. #1261, #1168, #1172
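A sketch of a YurtStaticSet using the upgrade models named above (other field names are illustrative):

```yaml
apiVersion: apps.openyurt.io/v1alpha1
kind: YurtStaticSet
metadata:
  name: yurthub
  namespace: kube-system
spec:
  staticPodManifest: yurthub        # manifest file name on each node
  upgradeStrategy:
    type: AdvancedRollingUpdate     # or: OTA
    maxUnavailable: 20%             # only meaningful for AdvancedRollingUpdate
  template:                         # the static pod template to roll out
    metadata:
      name: yurthub
    spec:
      containers:
        - name: yurthub
          image: openyurt/yurthub:latest
```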
NodePort Service supports nodepool isolation
In edge scenarios, users of NodePort services expect nodePort ports to be listened on only in specified nodepools, in order to prevent port conflicts and save edge resources.
Users can specify the nodepools to listen on by adding the annotation `nodeport.openyurt.io/listen` to a NodePort or LoadBalancer service, thus gaining nodepool isolation for the NodePort or LoadBalancer service. #1183, #1209
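For example (the annotation value, a list of nodepool names, is illustrative of the documented usage):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo
  annotations:
    nodeport.openyurt.io/listen: hangzhou,beijing   # listen only in these pools
spec:
  type: NodePort
  selector:
    app: demo
  ports:
    - port: 80
      nodePort: 30080
```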
- improve image build efficiency by @Congrool in #1191
- support filter chain for filtering response data by @rambohe-ch in #1189
- fix: re-list when target change by @LaurenceLiZhixin in #1195
- fix: yurt-coordinator cannot be rescheduled when its node fails (#1212) by @AndyEWang in #1218
- feat: merge yurtctl to e2e by @YTGhost in #1219
- support pass bootstrap-file to yurthub by @rambohe-ch in #1333
- add system proxy for docker run by @gnunu in #1335
- feat: add yurtadm renew certificate command by @YTGhost in #1314
- add a new way to create webhook by @JameKeal in #1359
- feat: support yurt-coordinator component work in specified namespace by @y-ykcir in #1355
- feat: add nodepool e2e by @huiwq1990 in #1365
- feat: support yurt-manager work in specified namespace by @y-ykcir in #1367
- support yurthub component work in specified namespace by @huweihuang in #1366
- support to specify enabled controllers by @xavier-hou in #1388
- feat: crd generate crds by @huiwq1990 in #1389
- add Yurtappdaemon e2e test by @theonefx in #1406
- fix generated crd name by @huiwq1990 in #1408
- fix handle yurtcoordinator certificates in case of restarting by @batthebee in #1187
- make rename replace old dir by @LaurenceLiZhixin in #1237
- yurtadm minor version compatibility of kubelet and kubeadm by @YTGhost in #1244
- delete specific iptables while testing kube-proxy by @y-ykcir in #1268
- fix yurthub dnsPolicy when using yurt-coordinator by @JameKeal in #1321
- fix: yurt-controller-manager reboot cannot remove taint node.openyurt.io/unschedulable (#1233) by @AndyEWang in #1337
- fix daemonSet pod updater pointer error by @JameKeal in #1340
- bugfix for yurtappset by @theonefx in #1391
Thank you to everyone who contributed to this release! ❤
- @batthebee
- @cndoit18
- @fengshunli
- @luc99hen
- @frank-zsy
- @YTGhost
- @Congrool
- @luckymrwang
- @AndyEWang
- @huiwq1990
- @njucjc
- @xavier-hou
- @kadisi
- @guoguodan
- @JameKeal
- @gnunu
- @y-ykcir
- @Lan-ce-lot
- @River-sh
- @huweihuang
- @lilongfeng0902
- @theonefx
- @fujitatomoya
- @rambohe-ch
And thank you very much to everyone else not listed here who contributed in other ways like filing issues, giving feedback, helping users in community group, etc.
Improve edge autonomy capability when cloud-edge network off
The original edge autonomy feature keeps the pods on a node from being evicted even if the node crashes, by adding an annotation to the node; it is recommended for scenarios where pods should stay bound to a node without recreation. With the improved edge autonomy capability, when a node is NotReady because the cloud-edge network is off, pods will not be evicted, because the leader yurthub helps these offline nodes proxy their heartbeats to the cloud via the yurt-coordinator component; if the node itself crashes, pods will be evicted and recreated on other ready nodes.
The original edge autonomy capability, enabled by annotating a node (with node.beta.openyurt.io/autonomy), is kept as-is and affects all pods on the autonomous node. A new annotation (named apps.openyurt.io/binding) can be added to a workload to enable the original edge autonomy behavior for specific pods only.
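In manifest form, the two annotations look roughly like this (the annotation keys come from the text above; the "true" values are an assumption):

```yaml
# Node-level autonomy (original behavior): affects all pods on the node.
apiVersion: v1
kind: Node
metadata:
  name: edge-node-1
  annotations:
    node.beta.openyurt.io/autonomy: "true"
---
# Workload-level binding (new): keeps only this workload's pods bound.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
  annotations:
    apps.openyurt.io/binding: "true"
```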
Reduce the control-plane traffic between cloud and edge
Based on the yurt-coordinator in the nodepool, a leader yurthub is elected for the nodepool. The leader yurthub lists/watches pool-scope data (like endpoints/endpointslices) from the cloud and writes it into the yurt-coordinator; all components (like kube-proxy/coredns) in the nodepool then get pool-scope data from the yurt-coordinator instead of the cloud kube-apiserver, so a large volume of control-plane traffic is avoided.
Use raven component to replace yurt-tunnel component
Raven has released v0.3, which provides cross-region network communication based on PodIP or NodeIP, whereas yurt-tunnel can only forward cloud-to-edge requests for kubectl logs/exec commands. Because raven provides much more than yurt-tunnel and has been proven through a lot of work, the raven component is now officially recommended as the replacement for yurt-tunnel.
- proposal of yurtadm join refactoring by @YTGhost in #1048
- [Proposal] edgex auto-collector proposal by @LavenderQAQ in #1051
- add timeout config in yurthub to handle those watch requests by @AndyEWang in #1056
- refactor yurtadm join by @YTGhost in #1049
- expose helm values for yurthub cacheagents by @huiwq1990 in #1062
- refactor yurthub cache to adapt different storages by @Congrool in #882
- add proposal of static pod upgrade model by @xavier-hou in #1065
- refactor yurtadm reset by @YTGhost in #1075
- bugfix: update the dependency yurt-app-manager-api from v0.18.8 to v0.6.0 by @YTGhost in #1115
- Feature: yurtadm reset/join modification. Do not remove k8s binaries, add flag for using local cni binaries. by @Windrow14 in #1124
- Improve certificate manager by @rambohe-ch in #1133
- fix: update package dependencies by @fengshunli in #1149
- fix: add common builder by @fengshunli in #1152
- generate yurtadm docs by @huiwq1990 in #1159
- add inclusterconfig filter for commenting kube-proxy configmap by @rambohe-ch in #1158
- delete yurt tunnel helm charts by @River-sh in #1161
- bugfix: StreamResponseFilter of data filter framework can't work if size of one object is over 32KB by @rambohe-ch in #1066
- bugfix: add ignore preflight errors to adapt kubeadm before version 1.23.0 by @YTGhost in #1092
- bugfix: dynamically switch apiVersion of JoinConfiguration to adapt to different versions of k8s by @YTGhost in #1112
- bugfix: yurthub can not exit when SIGINT/SIGTERM happened by @rambohe-ch in #1143
Thank you to everyone who contributed to this release! ❤
- @YTGhost
- @Congrool
- @LavenderQAQ
- @AndyEWang
- @huiwq1990
- @rudolf-chy
- @xavier-hou
- @gbtyy
- @huweihuang
- @zzguang
- @Windrow14
- @fengshunli
- @gnunu
- @luc99hen
- @donychen1134
- @LindaYu17
- @fujitatomoya
- @River-sh
- @rambohe-ch
And thank you very much to everyone else not listed here who contributed in other ways like filing issues, giving feedback, helping users in community group, etc.
Support OTA/Auto upgrade model for DaemonSet workload
Extend the native DaemonSet `OnDelete` upgrade model by providing two additional upgrade models, OTA and Auto (see the sketch after this list):
- OTA: workload owner can control the upgrade of workload through the exposed REST API on edge nodes.
- Auto: Solve the DaemonSet upgrade process blocking problem caused by NotReady nodes when the cloud-edge network is disconnected.
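A sketch of opting a DaemonSet into one of these models via annotation (the annotation key and values follow this feature's documentation; treat them as illustrative):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: demo-ds
  annotations:
    apps.openyurt.io/update-strategy: OTA   # or: Auto
spec:
  updateStrategy:
    type: OnDelete        # hand the rollout over to the OpenYurt controller
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: demo
          image: nginx:1.25
```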
Support autonomy feature validation in e2e tests
In order to test the autonomy feature, the network interface of the control plane is disconnected to simulate a cloud-edge network outage; components (like kube-proxy, flannel, coredns, etc.) are then stopped and their recovery is checked.
Improve the Yurthub configuration for enabling the data filter function
Compared with the previous configuration, which required three items (the component name, resource, and request verb), after this improvement only the component name needs to be configured to enable the data filter function. The original configuration format is still supported for consistency.
- cache agent change optimize by @huiwq1990 in #1008
- Check if error via ListKeys of Storage Interface. by @fujitatomoya in #1015
- Add released openyurt versions to projectInfo when building binaries by @Congrool in #1016
- add auto pod upgrade controller for daemonset by @xavier-hou in #970
- add ota update RESTful API by @xavier-hou in #1004
- make servicetopology filter in yurthub work properly when service or nodepool change by @LinFCai in #1019
- improve data filter framework by @rambohe-ch in #1025
- add proposal to unify cloud edge comms solution by @zzguang in #1027
- improve health checker for adapting coordinator by @rambohe-ch in #1032
- Edge-autonomy-e2e-test implementation by @lorrielau in #1022
- improve e2e tests for supporting mac env and coredns autonomy by @rambohe-ch in #1045
- proposal of yurthub cache refactoring by @Congrool in #897
- even no endpoints left after filter, an empty object should be returned to clients by @rambohe-ch in #1028
- non resource handle miss for coredns by @rambohe-ch in #1044
Thank you to everyone who contributed to this release! ❤
- @windydayc
- @luc99hen
- @Congrool
- @huiwq1990
- @fujitatomoya
- @LinFCai
- @xavier-hou
- @lorrielau
- @YTGhost
- @zzguang
- @Lan-ce-lot
And thank you very much to everyone else not listed here who contributed in other ways like filing issues, giving feedback, helping users in community group, etc.
We're excited to announce the release of OpenYurt 1.0.0!🎉🎉🎉
Thanks to all the new and existing contributors who helped make this release happen!
If you're new to OpenYurt, feel free to browse OpenYurt website, then start with OpenYurt Installation and learn about its core concepts.
Nearly 20 people have contributed to this release and 8 of them are new contributors. Thanks to everyone!
@huiwq1990 @Congrool @zhangzhenyuyu @rambohe-ch @gnunu @LinFCai @guoguodan @ankyit @luckymrwang @zzguang @hxcGit @Sodawyx @luc99hen @River-sh @slm940208 @windydayc @lorrielau @fujitatomoya @donychen1134
The NodePool API has been upgraded to v1beta1; more details in openyurtio/yurt-app-manager#104. Meanwhile, the management of all OpenYurt APIs will be migrated to the openyurtio/api repo, and we recommend importing that package to use the APIs of OpenYurt.
We track unit test coverage with Codecov. Code coverage for some repos is as follows:
- openyurtio/openyurt: 47%
- openyurtio/yurt-app-manager: 37%
- openyurtio/raven: 53%
More details of unit test coverage can be found at https://codecov.io/gh/openyurtio
In addition to unit tests, other levels of testing are also added.
- upgrade e2e test for openyurt by @lorrielau in #945
- add fuzz test for openyurtio/yurt-app-manager by @huiwq1990 in openyurtio/yurt-app-manager#67
- e2e test for openyurtio/yurt-app-manager by @huiwq1990 in openyurtio/yurt-app-manager#107
OpenYurt makes Kubernetes work in cloud-edge collaborative environments with a non-intrusive design, so the performance of some OpenYurt components has been considered carefully. Several test reports have been submitted so that end users can clearly see the working status of OpenYurt components.
- yurthub performance test report by @luc99hen in https://openyurt.io/docs/test-report/yurthub-performance-test
- pods recovery efficiency test report by @Sodawyx in https://openyurt.io/docs/test-report/pod-recover-efficiency-test
The early installation method (converting a K8s cluster to OpenYurt) is removed. OpenYurt cluster installation is divided into two parts: installing the OpenYurt control plane components and joining nodes. All OpenYurt control plane components are managed by helm charts in the repo: https://github.com/openyurtio/openyurt-helm
- upgrade kubeadm to 1.22 by @huiwq1990 in #864
- [Proposal] Proposal to install openyurt components using helm by @zhangzhenyuyu in #849
- support yurtadm token subcommand by @huiwq1990 in #875
- bugfix: only set signer name when not nil in order to prevent panic. by @rambohe-ch in #877
- [proposal] add proposal of multiplexing cloud-edge traffic by @rambohe-ch in #804
- yurthub return fake token when edge node disconnected with K8s APIServer by @LinFCai in #868
- deprecate cert-mgr-mode option of yurthub by @Congrool in #901
- [Proposal] add proposal of daemonset update model by @hxcGit in #921
- fix: cache the server version info of kubernetes by @Sodawyx in #936
- add yurt-tunnel-dns yaml by @rambohe-ch in #956
- Separate YurtHubHost & YurtHubProxyHost by @luc99hen in #959
- merge endpoints filter into service topology filter by @rambohe-ch in #963
- support yurtadm join to join multiple master nodes by @windydayc in #964
- feature: add latency metrics for yurthub by @luc99hen in #965
- bump ginkgo to v2 by @lorrielau in #945
- beta.kubernetes.io is deprecated, use kubernetes.io instead by @fujitatomoya in #969
Full Changelog: https://github.com/openyurtio/openyurt/compare/v0.7.0...v1.0.0-rc1
Thanks again to all the contributors!
Raven: enable edge-edge and edge-cloud communication in a non-intrusive way
Raven is a component of OpenYurt that enhances cluster networking capabilities. This enhancement focuses on edge-edge and edge-cloud communication in OpenYurt. It provides layer 3 network connectivity among pods in different physical regions, as if they were in one vanilla Kubernetes cluster. More information can be found at: (#637, Raven, @DrmagicE, @BSWANG, @njucjc)
Support Kubernetes V1.22
Enable OpenYurt to work on Kubernetes v1.22. This includes adapting to API changes (such as the v1beta1.CSR deprecation), adapting the StreamingProxyRedirects feature, handling v1.EndpointSlice in service topology, and so on. More information can be found at: (#809, #834, @rambohe-ch, @JameKeal, @huiwq1990)
Support EdgeX Foundry V2.1
Support the EdgeX Foundry Jakarta version. EdgeX Jakarta is the first LTS version and is widely considered production-ready. More information can be found at: (#4, #30, @lwmqwer, @wawlian, @qclc)
Support IPv6 network in OpenYurt
Support running OpenYurt in IPv6 network environments. More information can be found at: (#842, @tydra-wang)
- add nodepool governance capability proposal (#772, @Peeknut)
- add proposal of multiplexing cloud-edge traffic (#804, @rambohe-ch)
- provide flannel image and cni binary for edge network (#80, @yingjianjian)
- Remove convert and revert command from yurtctl (#826, @lonelyCZ)
- add tenant isolation for components such as kube-proxy&flannel which run in ns kube-system (#787, @YRXING)
- Rename yurtctl init/join/reset to yurtadm init/join/reset (#819, @lonelyCZ)
- Use configmap to configure the data source of filter framework (#749, #790, @yingjianjian, @rambohe-ch)
- add yurtctl test init cmd to setup OpenYurt cluster with kind (#783, @Congrool)
- support local up openyurt on mac machine (#836, @rambohe-ch, @Congrool)
- cleanup: io/ioutil(#813, @cndoit18)
- use verb %w with fmt.Errorf() when generate new wrapped error (#832, @zhaodiaoer)
- decouple yurtctl with yurtadm (#848, @Congrool)
- add enable-node-pool parameter for yurthub in order to disable nodepools list/watch in filters when testing (#822, @rambohe-ch)
- ingress: update edge ingress proposal to add enhancements (#816, @zzguang)
- add configmap delete handler for approver (#793, @huiwq1990)
- fix: a typo in yurtctl util.go which uses 'lable' as 'label' (#784, @donychen1134)
- ungzip response by yurthub when response header contains content-encoding=gzip (#794, @rambohe-ch)
- fix mistaken selflink in yurthub (#785, @Congrool)
Support YurtAppDaemon to deploy workload to different NodePools
A YurtAppDaemon ensures that all (or some) NodePools run a copy of a Deployment or StatefulSet. As nodepools are added to the cluster, Deployments or StatefulSets are added to them; as nodepools are removed from the cluster, those Deployments or StatefulSets are garbage collected. The behavior of YurtAppDaemon is similar to that of DaemonSet, except that YurtAppDaemon creates workloads per node pool. More information can be found at: (#422, yurt-app-manager, @kadisi)
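A minimal YurtAppDaemon sketch, assuming the v1alpha1 schema from yurt-app-manager (field names may vary slightly by release):

```yaml
apiVersion: apps.openyurt.io/v1alpha1
kind: YurtAppDaemon
metadata:
  name: demo
  namespace: default
spec:
  selector:
    matchLabels:
      app: demo
  workloadTemplate:
    deploymentTemplate:           # rendered once per matching nodepool
      metadata:
        labels:
          app: demo
      spec:
        selector:
          matchLabels:
            app: demo
        template:
          metadata:
            labels:
              app: demo
          spec:
            containers:
              - name: nginx
                image: nginx:1.19
  nodepoolSelector:               # which nodepools get a copy
    matchLabels:
      region: edge
```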
Using YurtIngress to unify service across NodePools
YurtIngress acts as a unified interface for service access requests from outside the NodePool. It abstracts and simplifies the service access logic for users and reduces the complexity of managing NodePool services. More information can be found at: (#373, #645, yurt-app-manager, @zzguang, @gnunu, @LindaYu17)
Improve the user experience of OpenYurt
- OpenYurt Experience Center
New users who want to try out OpenYurt's capabilities do not need to install an OpenYurt cluster from scratch. They can apply for a test account on the OpenYurt Experience Center and immediately have an OpenYurt cluster available. More information can be found at: (OpenYurt Experience Center Introduction, @luc99hen, @qclc, @Peeknut)
- YurtCluster
The YurtCluster Operator translates a vanilla Kubernetes cluster into an OpenYurt cluster through a simple API (the YurtCluster CRD). We recommend doing cluster conversion via the declarative API of the YurtCluster Operator. More information can be found at: (#389, #518, yurtcluster-operator, @SataQiu, @gnunu)
- Yurtctl init/join
In order to improve the efficiency of creating OpenYurt clusters, a new tool named sealer has been integrated into the `yurtctl init` command, and an OpenYurt cluster image (based on Kubernetes v1.19.7) has been prepared. Users can use the `yurtctl init` command to create the OpenYurt cluster control plane, and `yurtctl join` to add worker nodes (including cloud nodes and edge nodes). More information can be found at: (#704, #697, @Peeknut, @rambohe-ch, @adamzhoul)
Update docs of OpenYurt
The docs of OpenYurt installation, core concepts, user manuals, developer manuals etc. have been updated, and all of them are located at OpenYurt Docs. Thanks to all contributors for maintaining docs for OpenYurt. (@huangyuqi, @kadisi, @luc99hen, @SataQiu, @mowangdk, @rambohe-ch, @zyjhtangtang, @qclc, @Peeknut, @Congrool, @zzguang, @adamzhoul, @windydayc, @villanel)
- Proposal: enhance cluster networking capabilities (#637, @DrmagicE, @BSWANG)
- add node-servant (#516, adamzhoul)
- improve yurt-tunnel-server to automatically update server certificates when service address changed (#525, @YRXING)
- automatically clean dummy interface and iptables rule when yurthub is stopped by k8s (#530, @Congrool)
- enhancement: add openyurt.io/skip-discard annotation verify for discardcloudservice filter (#524, @rambohe-ch)
- inject working_mode (#552, @ngau66)
- Yurtctl revert adds the function of deleting yurt app manager (#555, @yanyhui)
- Add edge device demo in the README.md (#553, #554, @Fei-Guo, @qclc, @lwmqwer)
- Refactor: separate the creation of informers from tunnel server component (#585, @YRXING)
- add make push for pushing images generated during make release (#601, @gnunu)
- add trafficforward that contains two diversion modes: DNAT and DNS (#606, @JcJinChen)
- yurthub verify bootstrap ca on start (#631, @gnunu)
- deprecate kubelet certificate management mode (#639, @qclc)
- remove k8s.io/kubernetes dependency from OpenYurt (#650, #664, #681, #697, #704, @rambohe-ch, @qclc, @Peeknut, @Rachel-Shao)
- add unit tests for yurthub data filtering framework (#670, @windydayc)
- enable yurthub to handle upgrade request (#673, @Congrool)
- Yurtctl: add precheck for reducing convert failure (#675, @Peeknut)
- ingress: add nodepool endpoints filtering for nginx ingress controller (#696, @zzguang)
- fix some bugs when local up openyurt (#517, @Congrool)
- reject delete pod request by yurthub when cloud-edge network disconnected (#593, @rambohe-ch)
- service topology filter can not work when hub agent work on cloud mode (#607, @rambohe-ch)
- fix transport race conditions in yurthub (#683, @rambohe-ch, @DrmagicE)
Manage EdgeX Foundry system in OpenYurt in a cloud-native, non-intrusive way
- yurt-edgex-manager
Yurt-edgex-manager enables OpenYurt to manage the EdgeX lifecycle. Each EdgeX CR (Custom Resource) stands for an EdgeX instance, and users can deploy/update/delete EdgeX in an OpenYurt cluster by operating the EdgeX CR directly. (yurt-edgex-manager, @yixingjia, @lwmqwer)
- yurt-device-controller
Yurt-device-controller aims to provide device management functionality to OpenYurt clusters by integrating with edge computing/IoT platforms, like EdgeX, in a cloud-native way. It automatically synchronizes the device status to the device CR (custom resource) in the cloud, and any update to the device passes through to the edge side seamlessly. More information can be found at: (yurt-device-controller, @charleszheng44, @qclc, @Peeknut, @rambohe-ch, @yixingjia)
Yurt-tunnel supports more flexible settings for forwarding requests from cloud to edge
- Support forwarding https requests from cloud to edge, so that components (like prometheus) on cloud nodes can access https services (like node-exporter) on edge nodes. Please refer to the details: (#442, @rambohe-ch, @Fei-Guo, @DrmagicE, @SataQiu)
- Support forwarding cloud requests to an edge node's localhost endpoint, so that components (like prometheus) on cloud nodes can collect metrics of edge components (like yurthub at http://127.0.0.1:10267). Please refer to the details: (#443, @rambohe-ch, @Fei-Guo)
- Proposal: YurtAppDaemon (#422, @kadisi, @zzguang, @gnunu, @Fei-Guo, @rambohe-ch)
- update yurtcluster operator proposal (#429, @SataQiu)
- yurthub use original bearer token to forward requests for inclusterconfig pods (#437, @rambohe-ch)
- yurthub support working on cloud nodes (#483 and #495, @DrmagicE)
- support discard cloud service(like LoadBalancer service) on edge side (#440, @rambohe-ch, @Fei-Guo)
- improve list/watch node pool resource in serviceTopology filter (#454, @neo502721)
- add configmap to configure user agents for specifying response cache. (#466, @rambohe-ch)
- improve the usage of certificates (#475, @ke-jobs)
- improve unit tests for yurt-tunnel. (#470, @YRXING)
- stop renew node lease when kubelet's heartbeat is stopped (#482, @SataQiu)
- add check-license-header script to github action (#487, @lonelyCZ)
- Add tunnel server address for converting (#494, adamzhoul)
- fix incomplete data copy of resource filter (#452, @SataQiu)
- remove excess chan that will block the program (#446, @zc2638)
- use buffered channel for signal notifications (#471, @SataQiu)
- add create to yurt-tunnel-server ClusterRole (#500, adamzhoul)
Join or Reset node in one step
In order to enable users to use OpenYurt clusters quickly and reduce the cost of learning OpenYurt, yurtctl has provided the subcommands `convert` and `revert` to implement conversion between an OpenYurt cluster and a Kubernetes cluster. However, this still has some shortcomings:
- No support for new nodes to join the cluster directly;
- Users are required to pre-build a Kubernetes cluster and then do the conversion, which has a relatively high learning cost for beginners.
So we added the subcommands `init`, `join`, and `reset` to yurtctl. The `join` and `reset` subcommands can be used in v0.4.1, and the `init` subcommand will come in the next version.
Please refer to the proposal doc and usage doc for details. (#387, #402, @zyjhtangtang)
Support pods using InClusterConfig to access kube-apiserver through yurthub
Many users in the OpenYurt community have requested support for pods using InClusterConfig to access kube-apiserver through yurthub on edge nodes, so that pods on the cloud can be moved to the edge smoothly. So we added the following features:
- yurthub supports serving https.
- the env vars `KUBERNETES_SERVICE_HOST` and `KUBERNETES_SERVICE_PORT` should be set to the address that yurthub listens on, like 169.254.2.1:10261
Please refer to the issue #372 for details. (#386, #394, @luckymrwang, @rambohe-ch)
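In a pod spec, that amounts to env values like the following (the address matches the yurthub listen address quoted above):

```yaml
# Point the in-cluster client at yurthub instead of the cloud kube-apiserver.
env:
  - name: KUBERNETES_SERVICE_HOST
    value: "169.254.2.1"   # address yurthub listens on
  - name: KUBERNETES_SERVICE_PORT
    value: "10261"         # yurthub serving port
```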
Support filter cloud response data on edge side
In the cloud-edge scenario, it is often desirable to perform some customized processing on the data the cloud returns in response to requests from edge components (such as kube-proxy) or user pods. We therefore provide a generic data filtering framework in the yurthub component, which can customize the data returned from the cloud without the edge components or user pods being aware of it, so as to meet business needs simply and conveniently. Two concrete filtering handlers are provided:
- support endpointslice filter for keeping service traffic in-bound of nodePool
- support master service mutation for pod use InClusterConfig access kube-apiserver
Please refer to the Proposal doc for details. (#388, #394, @rambohe-ch, @Fei-Guo)
- Yurtctl modifies the kube-controller-manager setting to disable the nodelifecycle controller (#399, @Peeknut)
- Proposal: EdgeX integration with OpenYurt (#357, @yixingjia, @lwmqwer)
- Proposal: add ingress feature support to nodepool (#373, @zzguang, @wenjun93)
- Proposal: OpenYurt Convertor Operator for converting K8S to OpenYurt (#389, @gnunu)
- add traffic(from cloud to edge) collector metrics for yurthub (#398, @rambohe-ch)
- Add sync.Pool to cache *bufio.Reader in tunnel server (#381, @DrmagicE)
- improve tunnel availability (#375, @aholic)
- yurtctl adds parameter enable app manager to control automatic deployment of yurtappmanager. (#352, @yanhui)
- fix tunnel-agent/tunnel-server crashes when the local certificate can not be loaded correctly (#378, @SataQiu)
- fix the error when cert-mgr-mode set to kubelet (#359, @qclc)
- fix the same prefix key lock error (#396, @rambohe-ch)
Node Resource Manager Released
Node resource manager is released in this version, providing unified management of the local node resources of an OpenYurt cluster. It currently supports LVM, QuotaPath and Pmem, and creates or updates compute and storage resources based on local devices. It runs as a daemonset on each edge node and manages local resources according to a predefined spec stored in a configmap. Please refer to the usage doc for details. (#1, @mowangdk, @wenjun93)
Add Cloud Native IOT Device Management API definition
Inspired by the Unix philosophy, "Do one thing and do it well", we believe that Kubernetes should focus on managing computing resources while edge device management can be done by adopting existing edge computing platforms. Therefore, we define several generic custom resource definitions (CRDs) that act as the mediator between OpenYurt and the edge platform. Any existing edge platform can be integrated into OpenYurt by implementing custom controllers for these CRDs. In addition, these CRDs allow users to manage edge devices in a declarative way, which provides a Kubernetes-native experience. (#233, #236, @Fei-Guo, @yixingjia, @charleszheng44)
Kubernetes V1.18 is supported
OpenYurt officially supports Kubernetes v1.18. Now OpenYurt users are able to convert a v1.18 Kubernetes cluster to an OpenYurt cluster, or deploy OpenYurt components on a v1.18 Kubernetes cluster manually. The main work for supporting Kubernetes v1.18 is as follows:
- refactor serializer of cache manager to adapt ClientNegotiator in k8s.io/apimachinery v0.18
- add context parameter in api for calling client-go v0.18
And thanks to Kubernetes compatibility, v1.16 Kubernetes is still supported. (#288, @rambohe-ch)
UnitedDeployment supports patch for pool
The UnitedDeployment controller provides a new way to manage pods in multiple pools by using multiple workloads. Each workload managed by UnitedDeployment is called a pool, and previously users could only configure the workload replicas per pool. With the patch feature, besides the replicas configuration, users can easily configure other fields (like images and resources) of the workloads in a pool. (#242, #12, @kadisi)
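A sketch of a UnitedDeployment with a pool-level patch (the topology/patch field names follow the documented schema; treat details as illustrative):

```yaml
apiVersion: apps.openyurt.io/v1alpha1
kind: UnitedDeployment
metadata:
  name: demo
spec:
  selector:
    matchLabels:
      app: demo
  workloadTemplate:
    deploymentTemplate:
      metadata:
        labels:
          app: demo
      spec:
        selector:
          matchLabels:
            app: demo
        template:
          metadata:
            labels:
              app: demo
          spec:
            containers:
              - name: nginx
                image: nginx:1.19
  topology:
    pools:
      - name: beijing
        nodeSelectorTerm:
          matchExpressions:
            - key: apps.openyurt.io/nodepool
              operator: In
              values:
                - beijing
        replicas: 2
        patch:                 # pool-level override of the template
          spec:
            template:
              spec:
                containers:
                  - name: nginx
                    image: nginx:1.20   # beijing runs a different image
```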
Support caching CRD resources by yurthub
Previously, only resources in the resourceToKindMap could be cached by the yurt-hub component. So when the network between cloud and edge was disconnected, a pod (e.g. calico) on an edge node that used resources (like CRDs) outside that map could not restart successfully, because those resources were not cached by yurt-hub. This limitation is now removed: yurt-hub is able to cache all Kubernetes resources, including CRD resources defined by users. (#162, #231, #225, #265, @qclc, @rambohe-ch)
Prometheus and Yurt-Tunnel-Server cross-node deployment is supported via DNS
In the edge computing scenario, the IP addresses of edge nodes are often the same, so we cannot rely on the node IP to forward requests and should use the node hostname (unique within one cluster) instead. This PR provides the ability for the yurt-tunnel-server to handle requests in the form scheme://[hostname]:[port]/[req_path]. (#270, #284, @SataQiu, @rambohe-ch)
Support kind cluster and node level conversion by yurtctl
OpenYurt supports the conversion between the OpenYurt cluster and the Kubernetes cluster created by minikube, kubeadm, and ack. Now OpenYurt supports the conversion between kind cluster and OpenYurt cluster. (#230, #206, #220, #234, @Peeknut)
- add edge-pod-network doc (#302, @wenjun93)
- add resource and system requirements (#315, @wawlian)
- feature: add dummy network interface for yurthub (#289, @rambohe-ch)
- refactor: divide the yurthubServer into hubServer and proxyServer (#237, @rambohe-ch)
- using lease for cluster remote server healthz checker. (#249, @zyjhtangtang)
- yurtctl cluster-info subcommand that list edge/cloud nodes (#208, @neo502721)
- Feature: Support specified kubeadm conf for join cluster (#210, @liangyuanpeng)
- Add --feature-gate flag to yurt-controller-manager (#222, @DrmagicE)
- feature: add prometheus metrics for yurthub (#238, @rambohe-ch)
- Update the Manual document (#228, @yixingjia)
- feature: add meta server for handling prometheus metrics and pprof by yurttunnel (#253, @rambohe-ch)
- feature: add 'yurthub-healthcheck-timeout' flag for 'yurtctl convert' command (#290, @SataQiu)
- fix list runtimeclass and csidriver from cache error when cloud-edge network disconnected (#258, @rambohe-ch)
- fix the error of cluster status duration statistics (#295, @zyjhtangtang)
- fix bug when ServeHTTP panic (#198, @aholic)
- solve ips repeated question from addr.go (#209, @luhaopei)
- fix t.Fatalf from a non-test goroutine (#269, @contrun)
- Uniform label for installation of yurt-tunnel-agent openyurt.io/is-edge-worker=true (#275, @yanhui)
- fix the bug that dns controller updates dns records incorrectly (#283, @SataQiu)
- solve the problem of the cloud node configuring taints (#299, @yanhui)
- Fixed incorrect representation in code comments. (#296, @felix0080)
- fix systemctl restart in manually-setup tutorial (#205, @DrmagicE)
- Add new component Yurt App Manager that runs on cloud nodes
- Add new provider=kubeadm for yurtctl
- Add hubself certificate mode for yurthub
- Support log flush for yurt-tunnel
- New tutorials for Yurt App Manager
- Implement NodePool CRD that provides a convenient management experience for a pool of nodes within the same region or site
- Implement UnitedDeployment CRD by defining a new edge application management methodology of using per node pool workload
- Add tutorials to use Yurt App Manager
- Add hubself certificate mode, as the default mode, for generating and rotating the certificate used to connect with kube-apiserver
- Add timeout mechanism for proxying watch request
- Optimize the response when cache data is not found
- Add integration test
- Support log flush request from kube-apiserver
- Optimize tunnel interceptor for separating context dialer and proxy request
- Optimize the usage of sharedIndexInformer
- Add new provider=kubeadm that kubernetes cluster installed by kubeadm can be converted to openyurt cluster
- Adapt new certificate mode of yurthub when convert edge node
- Fix image pull policy from `Always` to `IfNotPresent` for all components' deployment settings
- Support Kubernetes 1.16 dependency for all components
- Support multi-arch binaries and images (arm/arm64/amd64)
- Add e2e test framework and tests for node autonomy
- New tutorials (e2e test and yurt-tunnel)
- Implement yurt-tunnel-server and yurt-tunnel-agent based on Kubernetes apiserver network proxy framework
- Implement cert-manager to manage yurt-tunnel certificates
- Add timeout mechanism for yurt-tunnel
- Add global lock to prevent multiple yurtctl invocations concurrently
- Add timeout for acquiring global lock
- Allow user to set the label prefix used to identify edge nodes
- Deploy yurt-tunnel using convert option
- Remove kubelet config bootstrap args during manual setup
- Avoid evicting Pods from nodes that have been marked as `autonomy` nodes
- Use Kubelet certificate to communicate with APIServer
- Implement a http proxy for all Kubelet to APIServer requests
- Cache the responses of Kubelet to APIServer requests in local storage
- Monitor network connectivity and switch to offline mode if health check fails
- In offline mode, response Kubelet to APIServer requests based on the cached states
- Resync and clean up the states once node is online again
- Support to proxy for other node daemons
- Support install/uninstall all OpenYurt components in a native Kubernetes cluster
- Pre-installation validation check