Introduction¶
StarlingX is a fully integrated edge cloud software stack that provides everything needed to deploy an edge cloud on one, two, or up to 100 servers.
Key features of StarlingX include:
Provided as a single, easy to install package that includes an operating system, storage and networking components, and all the cloud infrastructure needed to run edge workloads.
Optimized software that meets edge application requirements.
Designed with pre-defined configurations to meet a variety of edge cloud deployment needs.
Tested and released as a complete stack, ensuring compatibility among open source components.
Included fault management and service management capabilities, which provide high availability for user applications.
Optimized by the community for security, ultra-low latency, extremely high service uptime, and streamlined operation.
Download the StarlingX ISO image from the StarlingX mirror.
Learn more about StarlingX:
Projects¶
StarlingX contains multiple sub-projects that include additional edge cloud support services and clients. API documentation and release notes for each project are found on the specific project page:
Supporting projects and repositories:
For additional information about project teams, refer to the StarlingX wiki.
New features in StarlingX 10.0¶
Platform Component Upversion¶
The auto_update attribute is now supported for StarlingX applications; it enables an application to be automatically updated when a new version of its tarball is installed on the system.
See: https://wiki.openstack.org/wiki/StarlingX/Containers/Applications/AppIntegration
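As a hedged illustration, automatic update is controlled from the application's metadata.yaml; the fragment below shows the general shape, with field names that should be confirmed against the wiki page above:
app_name: my-app               # illustrative application name
app_version: 1.0-1             # illustrative version
upgrades:
  auto_update: true            # allow the platform to auto-update the app when a newer tarball is uploaded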
The following platform component versions have been updated in StarlingX 10.0.
sriov-fec-operator 2.9.0
kubernetes-power-manager 2.5.1
kubevirt-app 1.1.0
security-profiles-operator 0.8.7
nginx-ingress-controller
    ingress-nginx 4.11.1
secret-observer 0.1.1
auditd 1.0.5
snmp 1.0.3
cert-manager 1.15.3
ceph-csi-rbd 3.11.0
node-interface-metrics-exporter 0.1.3
node-feature-discovery 0.16.4
app-rook-ceph
    rook-ceph 1.13.7
    rook-ceph-cluster 1.13.7
    rook-ceph-floating-monitor 1.0.0
    rook-ceph-provisioner 2.0.0
dell-storage
    csi-powerstore 2.10.0
    csi-unity 2.10.0
    csi-powerscale 2.10.0
    csi-powerflex 2.10.1
    csi-powermax 2.10.0
    csm-replication 1.8.0
    csm-observability 1.8.0
    csm-resiliency 1.9.0
portieris 0.13.16
metrics-server 3.12.1 (0.7.1)
FluxCD helm-controller 1.0.1 (for Helm 3.12.2)
power-metrics
    cadvisor 0.50.0
    telegraf 1.1.30
security-profiles-operator 0.8.7
vault
    vault 1.14.0
    vault-manager 1.0.1
oidc-auth-apps
    oidc-auth-secret-observer: secret-observer 0.1.6 (1.0)
    oidc-dex: dex 0.18.0+STX.4 (2.40.0)
    oidc-oidc-client: oidc-client 0.1.22 (1.0)
platform-integ-apps
    ceph-csi-cephfs 3.11.0
    ceph-pools-audit 0.2.0
app-istio
    istio-operator 1.22.1
    kiali-server 1.85.0
harbor 1.12.4
ptp-notification 2.0.55
intel-device-plugins-operator
    intel-device-plugins-operator 0.30.3
    intel-device-plugins-qat 0.30.1
    intel-device-plugins-gpu 0.30.0
    intel-device-plugins-dsa 0.30.1
secret-observer 0.1.1
node-interface-metrics-exporter 0.1.3
oran-o2 2.0.4
helm 3.14.4 (for Kubernetes 1.21 - 1.29)
Redfish Tool 1.1.8-1
Kubernetes Upversion¶
StarlingX Release r10.0 supports Kubernetes 1.29.2.
Distributed Cloud Scalability Improvement¶
StarlingX System Controller scalability has been improved in StarlingX 10.0, increasing both the maximum number of managed nodes (up to 5000) and the maximum number of parallel operations.
Unified Software Delivery and Management¶
In StarlingX 10.0, the Software Patching functionality and the Software Upgrades functionality have been re-designed into a single Unified Software Management framework. There is now a single procedure for managing the deployment of new software, regardless of whether the new software is a Patch Release or a Major Release: the same APIs/CLIs, the same procedures, the same VIM / Host Orchestration strategies, and the same Distributed Cloud / Subcloud Orchestration strategies are used in both cases.
See: Appendix A - Commands replaced by USM for Updates (Patches) and Upgrades for a detailed list of deprecated commands and new commands.
Infrastructure Management Component Updates¶
In StarlingX 10.0, the new Unified Software Management framework supports enhanced Patch Release packaging and enhanced Major Release deployments.
Patch Release packaging has been simplified to deliver new or modified Debian packages, instead of the opaque OSTree build differences used previously. This allows Patch Release content to be inspected and validated prior to deployment, and allows for future flexibility in Patch Release packaging.
Major Release deployments have been enhanced to fully leverage OSTree. An OSTree deploy is now used to update the host software. The new software's root filesystem can be installed on the host while the host is still running the software of the old root filesystem; the host is then simply rebooted into the new software's root filesystem. This provides a significant improvement in both the upgrade duration and the upgrade service impact (especially for AIO-SX systems), as hosts previously had to have their disks/root filesystems wiped and the software re-installed.
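For illustration, the OSTree deployments present on a host (the currently booted and any pending root filesystem) can be listed with the standard OSTree tooling; this is a read-only check, and the deployment itself is driven by the Unified Software Management framework:
$ sudo ostree admin status    # lists the booted and pending root filesystem deployments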
See
Unified Software Management - Rollback Orchestration AIO-SX¶
VIM Patch Orchestration has been enhanced to support the abort and rollback of a Patch Release software deployment. VIM Patch Orchestration rollback will automate the abort and rollback steps across all hosts of a Cloud configuration.
Note
In StarlingX 10.0, VIM Patch Orchestration Rollback is only supported for AIO-SX configurations.
In StarlingX 10.0, VIM Patch Orchestration Rollback is only supported if the Patch Release software deployment was aborted or failed prior to the 'software deploy activate' step. If the Patch Release software deployment is at or beyond the 'software deploy activate' step, then an install plus restore of the Cloud is required to roll back the Patch Release deployment.
Enhancements to Full Debian Support¶
The kernel can be switched at runtime between the standard and lowlatency variants.
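A hedged sketch of switching kernel variants, assuming the system host-kernel-show and system host-kernel-modify commands available in recent StarlingX releases (a host lock/unlock and reboot may be required; verify against your release's documentation):
$ system host-kernel-show controller-0                 # show the currently provisioned kernel
$ system host-kernel-modify controller-0 lowlatency    # switch to the lowlatency kernel
$ system host-kernel-modify controller-0 standard      # switch back to the standard kernel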
Support for Kernel Live Patching (for possible scenarios)¶
StarlingX supports live patching, which enables fixing critical kernel functions without rebooting the system and keeps systems functional and running. The live-patching modules are built into the StarlingX binary patch.
The binary patch is generated as an in-service (non-reboot-required) patch. The kernel modules are matched with the correct kernel release version during binary patch installation.
The relevant kernel module can be found in the location: ‘/lib/modules/<release-kernel-version>/extra/kpatch’
During binary patch installation, the user space tool kpatch is used for:
installing the kernel module to ${installdir}
loading (insmod) the kernel module for the running kernel
unloading (rmmod) the kernel module from the running kernel
uninstalling the kernel module from ${installdir}
listing the enabled live patch kernel modules
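For reference, the kpatch user-space commands corresponding to these operations look roughly as follows (standard kpatch tool usage; module names and paths are illustrative):
$ sudo kpatch install /lib/modules/<release-kernel-version>/extra/kpatch/<module>.ko   # copy into the install dir
$ sudo kpatch load /lib/modules/<release-kernel-version>/extra/kpatch/<module>.ko      # insmod into the running kernel
$ sudo kpatch list                                                                     # show loaded/installed live patches
$ sudo kpatch unload <module>                                                          # rmmod from the running kernel
$ sudo kpatch uninstall <module>                                                       # remove from the install dir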
Subcloud Phased Deployment¶
Subclouds can be deployed in individual phases. Instead of a single operation, a subcloud can be deployed by executing each phase separately, giving users the flexibility to proactively abort the deployment based on their needs. When the deployment is resumed, previously installed content remains valid.
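A hedged sketch of the phased flow using the dcmanager subcloud deploy commands (subcommand names follow the phased deployment feature; options are abbreviated and should be confirmed against the dcmanager CLI reference):
$ dcmanager subcloud deploy create --bootstrap-address <ip> --bootstrap-values <bootstrap.yaml> --install-values <install.yaml>
$ dcmanager subcloud deploy install <subcloud-name>
$ dcmanager subcloud deploy bootstrap <subcloud-name>
$ dcmanager subcloud deploy config <subcloud-name>
$ dcmanager subcloud deploy complete <subcloud-name>
# the deployment can be proactively stopped and later continued
$ dcmanager subcloud deploy abort <subcloud-name>
$ dcmanager subcloud deploy resume <subcloud-name>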
Kubernetes Local Client Access¶
You can configure Kubernetes access for a user logged in to the active controller either through SSH or by using the system console.
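As a simple illustration of local access from the active controller, using the admin kubeconfig path referenced elsewhere in this document (reading it may require sudo or the appropriate group membership):
$ export KUBECONFIG=/etc/kubernetes/admin.conf
$ kubectl get nodes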
Kubernetes Remote Client Access¶
Access to the Kubernetes cluster from outside the controller is possible either through the remote CLI container or directly from a remote host.
IPv4/IPv6 Dual Stack support for Platform Networks¶
Migration of a single-stack deployment to a dual-stack network deployment does not cause service disruption.
Dual-stack networking allows the simultaneous use of both IPv4 and IPv6 addresses, or the continued use of either IP version independently. To accomplish this, platform networks can be associated with one or two address pools, one for each IP version (IPv4 or IPv6). The first pool is linked to the network upon creation and cannot be subsequently removed. The second pool can be added or removed to transition the system between dual-stack and single-stack modes.
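A rough, assumption-laden sketch of moving a platform network to dual-stack by attaching a second (IPv6) address pool; the command names and arguments used here (addrpool-add, network-addrpool-assign) should be verified against the dual-stack documentation for your release:
$ system addrpool-add <pool-name> <ipv6-network-address> <prefix> --ranges <start>-<end>   # create the second pool
$ system network-addrpool-assign <network-name> <pool-name>                                # attach it to the platform network
$ system network-addrpool-list                                                             # confirm both pools are associated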
Run Kata Containers in Kubernetes¶
There are two methods to run Kata Containers in Kubernetes: by runtime class or by annotation. Runtime classes have been supported in Kubernetes since v1.12.0, and this is the recommended method for running Kata Containers.
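A minimal pod sketch using the runtime class method, written in the heredoc style used elsewhere in these notes; the RuntimeClass name depends on how Kata Containers is installed (kata-qemu is an assumed example):
cat <<EOF > kata-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kata-demo
spec:
  runtimeClassName: kata-qemu   # assumed RuntimeClass name; use the one defined on your cluster
  containers:
  - name: demo
    image: busybox
    command: ["sleep", "3600"]
EOF
$ kubectl apply -f kata-demo.yaml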
External DNS Alternative: Adding Local Host Entries¶
You can configure user-defined host entries for external resources that are not maintained by DNS records resolvable by the external DNS server(s) (i.e. nameservers in system dns-show/dns-modify). This functionality enables the configuration of local host records, supplementing hosts resolvable by external DNS server(s).
Power Metrics Enablement - vRAN Integration¶
StarlingX 10.0 integrates an enhanced power metrics tool with reduced impact on vRAN field deployments.
Power Metrics may increase scheduling latency due to perf and MSR readings. A latency impact of around 3 µs on average was observed, plus spikes with significant increases in maximum latency values. There was also an impact on kernel processing time. Applications running with priorities at or above 50 on real-time kernel isolated CPUs should allow kernel services to run, to avoid unexpected system behavior.
Crash dump File Size Setting Enhancements¶
The Linux kernel can be configured to perform a crash dump and reboot in response to specific serious events. A crash dump event produces a crash dump report with a bundle of files that represent the state of the kernel at the time of the event, which is useful for post-event root cause analysis.
The crash dump files generated by Linux kdump (produced during kernel panics by default) are managed by the crashDumpMgr utility. The utility saves crash dump files, but previously used a fixed configuration when saving them. To provide more flexible handling, the crashDumpMgr utility has been enhanced to support the following configuration parameters that control the storage and rotation of crash dump files.
Maximum Files: New configuration parameter for the number of saved crash dump files (default 4).
Maximum Size: Limit the maximum size of an individual crash dump file (support for unlimited, default 5GB).
Maximum Used: Limit the maximum storage used by saved crash dump files (support for unlimited, default unlimited).
Minimum Available: Limit the minimum available storage on the crash dump file system (restricted to minimum 1GB, default 10%).
The service parameters must be specified using the following service hierarchy. It is recommended to model the parameters after the platform coredump service parameters for consistency.
platform crashdump <parameter>=<value>
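A hedged example of setting one of these values through service parameters, following the platform crashdump hierarchy shown above (the parameter name max_files mirrors the 'Maximum Files' description and should be confirmed against the documentation):
$ system service-parameter-add platform crashdump max_files=6
$ system service-parameter-apply platform
$ system service-parameter-list | grep crashdump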
Subcloud Install or Restore of Previous Release¶
The StarlingX 10.0 System Controller supports fresh install or restore of both StarlingX 9.0 and StarlingX 10.0 subclouds.
If the upgrade is from StarlingX 9.0 to a higher release, the prestage status and prestage versions fields in the output of the dcmanager subcloud list command will be empty, regardless of whether the deployment status of the subcloud was prestage-complete before the upgrade. These fields will only be updated with values if you run subcloud prestage or prestage orchestration again.
See: Subclouds Previous Major Release Management
For non-prestaged subcloud remote installations:
The ISO imported via load-import --active should always be at the same patch level as the system controller. This is to ensure that the subcloud boot image aligns with the patch level of the load to be installed on the subcloud.
See: Installing a Subcloud Using Redfish Platform Management Service
For prestaged remote subcloud installations:
The ISO imported via load-import --inactive should be at the same patch level as the system controller. If the system controller is patched after subclouds have been prestaged, it is recommended to repeat the prestaging for each subcloud. This is to ensure that the subcloud boot image aligns with the patch level of the load to be installed on the subcloud.
See: Prestaging Requirements
WAD Users Access Right Control via Group¶
You can configure an LDAP / WAD user with membership in the 'sys_protected' group or with 'sudo all' capability.
An LDAP / WAD user in the 'sys_protected' group on StarlingX:
is equivalent to the special 'sysadmin' bootstrap user
via "source /etc/platform/openrc", has Keystone admin/admin identity and credentials, and
has Kubernetes /etc/kubernetes/admin.conf credentials
Only a small number of users should have this capability.
An LDAP / WAD user with 'sudo all' capability on StarlingX can perform the following StarlingX-type operations:
sw_patch to the unauthenticated endpoint
docker/crictl to communicate with the respective daemons
use of some utilities, such as show-certs.sh and license-install (recovery only)
IP configuration for local network setup
password changes of Linux users (i.e. local LDAP)
access to restricted files, including some logs
manual reboots
The local LDAP server by default serves both HTTPS on port 636 and HTTP on port 389.
The HTTPS server certificate is issued by the cert-manager ClusterIssuer system-local-ca and is managed internally by cert-manager. The certificate will be automatically renewed when the expiration date approaches. The certificate is called system-openldap-local-certificate, with its secret having the same name, system-openldap-local-certificate, in the deployment namespace. The server certificate and private key files are stored in the /etc/ldap/certs/ system directory.
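For illustration, the certificate and secret described above can be inspected with standard cert-manager and Kubernetes commands:
$ kubectl -n deployment get certificate system-openldap-local-certificate
$ kubectl -n deployment get secret system-openldap-local-certificate
$ sudo ls /etc/ldap/certs/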
See:
Accessing the Collect Command with 'sudo' Privileges and Membership in the 'sys_protected' Group¶
StarlingX 10.0 adds support for running Collect from any local LDAP or remote WAD user account that has 'sudo' capability and is a member of the 'sys_protected' group.
The Collect tool continues to support being run from the 'sysadmin' user account, as well as from any other successfully created LDAP or WAD account with 'sudo' capability and membership in the 'sys_protected' group.
For security reasons, passwordless 'sudo' remains unsupported.
Support for Intel In-tree Driver¶
The system supports both in-tree and out-of-tree versions of the Intel ice, i40e, and iavf drivers. On initial installation, the system uses the default out-of-tree driver version. You can switch between the in-tree and out-of-tree driver versions. For further details:
See: Switch Intel Driver Versions
Note
The ice in-tree driver does not support SyncE/GNSS deployments.
Password Rules Enhancement¶
You can check current password expiry settings by running the chage -l <username> command, replacing <username> with the name of the user whose password expiry settings you wish to view.
You can also change password expiry settings by running the sudo chage -M <days_to_expiry> <username> command.
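For example, using the commands above (sysadmin is an illustrative account name):
$ chage -l sysadmin          # show current password aging settings
$ sudo chage -M 90 sysadmin  # set the password to expire after 90 days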
The following new password rules apply:
Passwords must have a minimum length of 12 characters.
The password must contain at least one letter, one number, and one special character.
The previous 5 passwords cannot be reused.
The password expiration period can be defined by users; by default it is set to 90 days.
See:
Management Network Reconfiguration after Deployment Completion Phase 1 AIO-SX¶
StarlingX 10.0 supports changes to the management IP addresses for a standalone AIO-SX and for an AIO-SX subcloud after the node is completely deployed.
See:
Networking Statistic Support¶
The Node Interface Metrics Exporter application is designed to fetch and display node statistics in a Kubernetes environment. It deploys an Interface Metrics Exporter DaemonSet on all nodes with the starlingx.io/interface-metrics=true node label. It uses the Netlink library to gather data directly from the kernel, offering real-time insights into node performance.
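For illustration, the node label referenced above can be applied with the host label CLI (controller-0 is an example hostname):
$ system host-label-assign controller-0 starlingx.io/interface-metrics=true
$ system host-label-list controller-0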
Add Existing Cloud as Subcloud Without Reinstallation¶
The subcloud enrollment feature converts a system that was factory pre-installed, or initially deployed as a standalone cloud, into a subcloud of a Distributed Cloud (DC). Factory pre-installed standalone systems are installed locally in the factory, and later deployed and configured on-site as a DC subcloud without re-installing the system.
See: Enroll a Factory Installed Non Distributed Standalone System as a Subcloud
Rook Support for freshly Installed StarlingX¶
The new Rook Ceph application will be used for deploying the latest version of Ceph via Rook.
Rook Ceph is an orchestrator that provides a containerized solution for Ceph Storage with a specialized Kubernetes Operator to automate the management of the cluster. It is an alternative solution to the bare metal Ceph storage. See https://rook.io/docs/rook/latest-release/Getting-Started/intro/ for more details.
The deployment model is the topology strategy that defines the storage backend capabilities of the deployment. The deployment model dictates what the storage solution will look like by defining rules for the placement of storage cluster elements.
Enhanced Availability for Ceph on AIO-DX¶
Ceph on AIO-DX now works with 3 Ceph monitors providing High Availability and enhancing uptime and resilience.
Available Deployment Models¶
Each deployment model works with different deployment strategies and rules to fit different needs. The following models are available to fit the requirements of your cluster:
Controller Model (default)
Dedicated Model
Open Model
Storage Backend¶
Configuration of the storage backend defines the deployment models characteristics and main configurations.
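As an illustrative sketch, the Rook Ceph storage backend and its deployment model are selected when adding the backend; the option names below follow the Rook Ceph documentation and should be verified for your release:
$ system storage-backend-add ceph-rook --deployment controller --confirmed
$ system storage-backend-list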
Migration with Rook container based Ceph Installations¶
When migrating an AIO-SX to an AIO-DX subcloud with a Rook container-based Ceph installation in StarlingX 10.0, you need to follow the additional procedural steps below:
Procedure
After you configure controller-1, follow the steps below:
Add a new Ceph monitor on controller-1.
Add an OSD on controller-1.
List host’s disks and identify disks you want to use for Ceph OSDs. Ensure you note the UUIDs.
Add disks as an OSD storage.
List OSD storage devices.
Unlock controller-1 and follow the steps below:
Wait until Ceph is updated with two active monitors. To verify the updates, run the ceph -s command and ensure the output shows mon: 2 daemons, quorum a,b. This confirms that both monitors are active.
Add the floating monitor.
Wait for the controller to reset and come back up to an operational state.
Re-apply the rook-ceph application.
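A rough command-level sketch of the steps above (hostnames, UUIDs, and ordering are illustrative; the floating monitor step is performed as described in the migration procedure):
$ system ceph-mon-add controller-1                     # add a Ceph monitor on controller-1
$ system host-disk-list controller-1                   # identify candidate disks and note their UUIDs
$ system host-stor-add controller-1 osd <disk-uuid>    # add a disk as an OSD
$ system host-stor-list controller-1                   # list OSD storage devices
$ system host-unlock controller-1
$ ceph -s                                              # wait for: mon: 2 daemons, quorum a,b
# add the floating monitor, wait for the controller to reset, then:
$ system application-apply rook-ceph                   # re-apply the rook-ceph application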
To Install and Uninstall Rook Ceph¶
See:
Performance Configurations on Rook Ceph¶
When using Rook Ceph it is important to consider resource allocation and configuration adjustments to ensure optimal performance. Rook introduces additional management overhead compared to a traditional bare-metal Ceph setup and needs more infrastructure resources.
Protecting against L2 Network Attackers - Securing local traffic on MGMT networks¶
A new security solution is introduced for the StarlingX inter-host management network to protect against attackers with direct access to local StarlingX L2 VLANs. It specifically protects LOCAL traffic on the MGMT network, which is used for private/internal infrastructure management of the StarlingX cluster.
It provides protection against both passive and active attackers accessing private/internal data, which could risk the security of the cluster:
passive attackers that are snooping traffic on L2 VLANs (MGMT), and
active attackers attempting to connect to private internal endpoints on StarlingX L2 interfaces (MGMT) on StarlingX hosts.
IPsec is a set of communication rules or protocols for setting up secure connections over a network. StarlingX utilizes IPsec to protect local traffic on the internal management network of multi-node systems.
StarlingX uses strongSwan as the IPsec implementation. strongSwan is an opensource IPsec solution. See https://strongswan.org/ for more details.
For the most part, IPsec on StarlingX is transparent to users.
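For reference, the state of the IPsec security associations on a host can be inspected with the standard strongSwan tooling (a read-only check; no StarlingX-specific commands are implied):
$ sudo swanctl --list-conns   # loaded connection configurations
$ sudo swanctl --list-sas     # established security associations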
See:
Vault application support for running on application cores¶
By default the Vault application’s pods will run on platform cores.
If the static kube-cpu-mgr-policy is selected and the label app.starlingx.io/component is overridden for the Vault namespace or pods, there are two requirements:
The Vault server pods need to be restarted as directed by Hashicorp Vault documentation. Restart each of the standby server pods in turn, then restart the active server pod.
Ensure that sufficient hosts with worker function are available to run the Vault server pods on application cores.
See:
Restart the Vault Server pods¶
The Vault server pods do not restart automatically. If the pods are to be re-labelled to switch execution from platform to application cores, or vice-versa, then the pods need to be restarted.
Under Kubernetes, the pods are restarted using the kubectl delete pod command. See the Hashicorp Vault documentation for the recommended procedure for restarting server pods in an HA configuration: https://support.hashicorp.com/hc/en-us/articles/23744227055635-How-to-safely-restart-a-Vault-cluster-running-on-Kubernetes.
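A minimal illustration of the restart, assuming the Vault application runs in the vault namespace and uses the standard Vault Helm chart labels (pod names are placeholders; restart the standby server pods first, then the active one, per the guidance above):
$ kubectl -n vault get pods -l app.kubernetes.io/name=vault   # identify the server pods
$ kubectl -n vault delete pod <standby-server-pod>            # repeat for each standby server pod
$ kubectl -n vault delete pod <active-server-pod>             # restart the active server pod last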
Ensure that sufficient hosts are available to run the server pods on application cores¶
A standard cluster with fewer than 3 worker nodes does not support Vault HA on the application cores. In this configuration (fewer than three cluster hosts with worker function):
When setting label app.starlingx.io/component=application with the Vault app already applied in HA configuration (3 Vault server pods), ensure that there are 3 nodes with worker function to support the HA configuration.
When applying Vault for the first time with app.starlingx.io/component set to "application", ensure that the server replicas are also set to 1 for a non-HA configuration. The replicas for the Vault server are overridden both for the Vault Helm chart and the Vault manager Helm chart:
cat <<EOF > vault_overrides.yaml
server:
  extraLabels:
    app.starlingx.io/component: application
  ha:
    replicas: 1
injector:
  extraLabels:
    app.starlingx.io/component: application
EOF
cat <<EOF > vault-manager_overrides.yaml
manager:
  extraLabels:
    app.starlingx.io/component: application
server:
  ha:
    replicas: 1
EOF
$ system helm-override-update vault vault vault --values vault_overrides.yaml
$ system helm-override-update vault vault-manager vault --values vault-manager_overrides.yaml
Component Based Upgrade and Update - VIM Orchestration¶
VIM Patch Orchestration in StarlingX 10.0 has been updated to interwork with the new underlying Unified Software Management APIs.
As before, VIM Patch Orchestration automates the patching of software across all hosts of a Cloud configuration. All Cloud configurations are supported: AIO-SX, AIO-DX, AIO-DX with worker nodes, Standard configuration with controller storage, and Standard configuration with dedicated storage.
Note
This includes the automation of both applying a Patch and removing a Patch.
See
Subcloud Remote Install, Upgrade and Prestaging Adaptation¶
StarlingX 10.0 supports a software management upgrade/update process that does not require re-installation. The procedure for upgrading a system is simplified since the existing filesystem and associated release configuration remain intact in the version-controlled paths (e.g. /opt/platform/config/<version>). In addition, the /var and /etc directories are retained, which means updates can be applied directly as part of the software migration procedure. This eliminates the need to perform a backup and restore procedure for AIO-SX based systems. In addition, the rollback procedure can revert to the existing versioned or saved configuration if an error occurs and the system must be reverted to the older software release.
With this change, prestaging for an upgrade involves populating a new OSTree deployment directory in preparation for an atomic upgrade and pulling new container image versions into the local container registry. Since the system is not reinstalled, there is no requirement to save container images to a protected partition during the prestaging process; the new container images can be populated in the local container registry directly.
See: Prestage a Subcloud
Update Default Certificate Configuration on Installation¶
You can configure default certificates during install for both standalone and Distributed Cloud systems.
New bootstrap overrides for system-local-ca (Platform Issuer)
You can customize the Platform Issuer (system-local-ca) used to sign the platform certificates with an external Intermediate CA from bootstrap, using the new bootstrap overrides.
See: Platform Issuer
Note
It is recommended to configure these overrides. If they are not configured, system-local-ca will be configured using a local auto-generated Kubernetes Root CA.
REST API / Horizon GUI and Docker Registry certificates are issued during bootstrap
The certificates for StarlingX REST APIs / Horizon GUI access and the Local Docker Registry will be automatically issued by system-local-ca during bootstrap. They will be anchored to the system-local-ca Root CA public certificate, so only this certificate needs to be added to the user's list of trusted CAs.
HTTPS enabled by default for StarlingX REST API access
The system is now configured by default with HTTPS enabled for access to the StarlingX API and the Horizon GUI. The certificate used to secure this access is anchored to the system-local-ca Root CA public certificate.
Playbook to update system-local-ca and re-sign the platform certificates renamed
The migrate_platform_certificates_to_certmanager.yml playbook is renamed to update_platform_certificates.yml.
External certificates provided in bootstrap overrides can now be provided as base64 strings, such that they can be securely stored with Ansible Vault
The following bootstrap overrides for certificate data can be provided as the certificate / key converted into a single-line base64 string, instead of as the filepath for the certificate / key:
ssl_ca_cert
k8s_root_ca_cert and k8s_root_ca_key
etcd_root_ca_cert and etcd_root_ca_key
system_root_ca_cert, system_local_ca_cert and system_local_ca_key
Note
You can secure the certificate data in an encrypted bootstrap overrides file using Ansible Vault.
The base64 string can be obtained using the base64 -w0 <cert_file> command. The string can be included in the overrides YAML file (secured via Ansible Vault), and then the insecurely managed cert_file can be removed from the system.
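For illustration, an override supplied as a base64 string might look as follows in the (Ansible Vault encrypted) bootstrap overrides file; the file names are placeholders:
$ base64 -w0 my-ca.crt        # produces the single-line string
# in the bootstrap overrides file (e.g. localhost.yml, secured with Ansible Vault):
ssl_ca_cert: "<single-line base64 output of the command above>"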
Dell CSI Driver Support - Test with Dell PowerStore¶
StarlingX 10.0 adds a new system application that provides Kubernetes CSM/CSI support for Dell Storage Platforms. With this application, the user can communicate with Dell PowerScale, PowerMax, PowerFlex, PowerStore, and Unity XT Storage Platforms to provision PVCs and use them in Kubernetes stateful applications.
See: Dell Storage File System Provisioner for details on installation and configurations.
O-RAN O2 IMS and DMS Interface Compliancy Update¶
With the new updates in Infrastructure Management Services (IMS) and Deployment Management Services (DMS) for the O-RAN O2 J-release, OAuth2 and mTLS are mandatory options. The implementation is fully compliant with the latest O-RAN specifications: O2 IMS interface R003-v05.00 and O2 DMS interface K8s profile R003-v04.00. Kubernetes Secrets are no longer required.
The services implemented include:
O2 API with mTLS enabled
O2 API supported OAuth2.0
Compliance with O2 IMS and DMS specs
See: O-RAN O2 Application
Configure Liveness Probes for PTP Notification Pods¶
Helm overrides can be used to configure liveness probes for ptp-notification containers.
Intel QAT and GPU Plugins¶
The QAT and GPU applications provide a set of plugins developed by Intel to facilitate the use of Intel hardware features in Kubernetes clusters. These plugins are designed to enable and optimize the use of Intel-specific hardware capabilities in a Kubernetes environment.
Intel GPU plugin enables Kubernetes clusters to utilize Intel GPUs for hardware acceleration of various workloads.
Intel® QuickAssist Technology (Intel® QAT) accelerates cryptographic workloads by offloading the data to hardware capable of optimizing those functions.
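As a hedged illustration of how a workload consumes these plugins once enabled, a pod can request the extended resources that the Intel device plugins advertise; the resource names below (qat.intel.com/generic, gpu.intel.com/i915) are the upstream defaults and should be confirmed for your plugin configuration:
cat <<EOF > intel-plugin-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: intel-plugin-demo
spec:
  containers:
  - name: demo
    image: busybox
    command: ["sleep", "3600"]
    resources:
      limits:
        qat.intel.com/generic: 1   # assumed QAT resource name (upstream default)
        gpu.intel.com/i915: 1      # assumed GPU resource name (upstream default)
EOF
$ kubectl apply -f intel-plugin-demo.yaml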
The following QAT and GPU plugins are supported in StarlingX 10.0.
See:
Support for Sapphire Rapids Integrated QAT¶
Intel 4th generation Xeon Scalable Processor (Sapphire Rapids) support has been introduced in StarlingX 10.0.
Drivers for QAT Gen 4 Intel Xeon Gold Scalable processor (Sapphire Rapids)
Intel Xeon Gold 6428N
Sapphire Rapids Data Streaming Accelerator Support¶
Intel® DSA is a high-performance data copy and transformation accelerator integrated into Intel® processors starting with 4th Generation Intel® Xeon® processors. It is targeted for optimizing streaming data movement and transformation operations common with applications for high-performance storage, networking, persistent memory, and various data processing applications.
DPDK Private Mode Support¶
For the purpose of enabling and using needVhostNet, SR-IOV needs to be configured on a worker host.
SR-IOV FEC Operator Support¶
FEC Operator 2.9.0 has been adopted based on Intel recommendations, offering features for the various Intel hardware accelerators used in field deployments.
See: Configure Intel Wireless FEC Accelerators using SR-IOV FEC operator
Support for Advanced VMs on Stx Platform with KubeVirt¶
The KubeVirt system application kubevirt-app-1.1.0 in StarlingX includes: KubeVirt, Containerized Data Importer (CDI) v1.58.0, and the Virtctl client tool. StarlingX 10.0 provides enhancements for this application; the documentation describes the KubeVirt architecture, gives steps to install KubeVirt, and provides examples for effective implementation in your environment.
See:
Support Harbor Registry (Harbor System Application)¶
Harbor registry is integrated as a System Application. End users can use Harbor, running on StarlingX, for holding and managing their container images. The Harbor registry is currently not used by the platform.
Harbor is an open-source registry that secures artifacts with policies and role-based access control, ensures images are scanned and free from vulnerabilities, and signs images as trusted. Harbor has evolved into a complete OCI-compliant cloud-native artifact registry.
With Harbor v2.0, users can manage images, manifest lists, Helm charts, CNABs, OPAs, and other artifacts that adhere to the OCI image specification. It also allows pulling, pushing, deleting, tagging, replicating, and scanning these kinds of artifacts. Signing images and manifest lists is also possible now.
Note
When using local LDAP for authentication of the Harbor system application, you cannot use local LDAP groups for authorization; use only individual local LDAP users for authorization.
Support for DTLS over SCTP¶
DTLS (Datagram Transport Layer Security) v1.2 is supported in StarlingX 10.0.
The SCTP module is now autoloaded by default.
The socket buffer size values have been increased:
Old values (in Bytes):
net.core.rmem_max=425984
net.core.wmem_max=212992
New Values (In Bytes):
net.core.rmem_max=10485760
net.core.wmem_max=10485760
To enable each SCTP socket association to have its own buffer space, the socket accounting policies have been updated as follows:
net.sctp.sndbuf_policy=1
net.sctp.rcvbuf_policy=1
Old value:
net.sctp.auth_enable=0
New value:
net.sctp.auth_enable=1
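These kernel settings can be verified on a running host with sysctl:
$ sysctl net.core.rmem_max net.core.wmem_max
$ sysctl net.sctp.sndbuf_policy net.sctp.rcvbuf_policy net.sctp.auth_enable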
New features in StarlingX 9.0¶
See: https://docs.starlingx.io/r/stx.9.0/releasenotes/index.html#release-notes
New features in StarlingX 8.0¶
See: https://docs.starlingx.io/r/stx.8.0/releasenotes/index.html#release-notes
New features in StarlingX 7.0¶
See: https://docs.starlingx.io/r/stx.7.0/releasenotes/index.html#new-features-and-enhancements