Configure Container-backed Remote CLIs¶
The StarlingX CLIs can be accessed from remote computers running Linux, macOS, and Windows.
About this task
This functionality is made available using a docker container with pre-installed CLIs and clients. The container’s image is pulled as required by the remote CLI/client configuration scripts.
Prerequisites
You must have Docker installed on the remote systems you connect from. For more information on installing Docker, see https://docs.docker.com/install/. For Windows remote workstations, Docker is only supported on Windows 10.
Note
You must be able to run docker commands using one of the following options:
Running the scripts using sudo
Adding the Linux user to the docker group
For more information, see https://docs.docker.com/engine/install/linux-postinstall/
For Windows remote workstations, you must run the following commands from a Cygwin terminal. See https://www.cygwin.com/ for more information about the Cygwin project.
For Windows remote workstations, you must also have winpty installed. Download the latest release tarball for Cygwin from https://github.com/rprichard/winpty/releases. After downloading the tarball, extract it to any location and add the bin folder of the extracted winpty directory to the Windows PATH variable.
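For example, the PATH change can also be made per-session from the Cygwin terminal itself. The extraction location /cygdrive/c/tools/winpty below is an assumption; use wherever you extracted the winpty tarball.

```shell
# Illustrative for a Cygwin terminal on Windows: make winpty's bin folder
# visible on PATH for this session. The path /cygdrive/c/tools/winpty is an
# assumed extraction location, not a required one.
export PATH="$PATH:/cygdrive/c/tools/winpty/bin"
echo "$PATH"
```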
You need the following from your StarlingX administrator: your WAD or Local LDAP username and password (to get a Kubernetes authentication token), your Keystone username and password (to log in to Horizon), and the OAM IP and, optionally, the Kubernetes CA certificate of the target StarlingX environment. If HTTPS has been enabled for the StarlingX REST API endpoints on the target StarlingX system, you also need the CA certificate that signed the StarlingX REST API endpoint certificate.
The following procedure helps you configure the Container-backed remote CLIs and clients for a non-admin user.
Procedure
Copy the remote client tarball file from a StarlingX mirror to the remote workstation, and extract its content.
The tarball is available at https://mirror.starlingx.windriver.com/mirror/starlingx/release/latest_release/debian/monolithic/outputs/remote-cli/.
You can extract the tarball contents anywhere on your client system.
$ cd $HOME
$ tar xvf stx-remote-clients-<version>.tgz
Download the user/tenant openrc file from the Horizon Web interface to the remote workstation.
Log in to Horizon as the user and tenant that you want to configure remote access for.
In this example, we use ‘user1’ user in the ‘tenant1’ tenant.
In Horizon, select Openstack RC file.
The file admin-openrc.sh downloads. Copy this file to the location of the extracted tarball.
Note
For a Distributed Cloud system, download the Openstack RC file from the Horizon interface of the target cloud.
Configure the verifying CA certificate for the remote CLI TLS connection.
In a running system, export the CA certificate using the following command:
sysadmin@controller-0:~$ kubectl get secret system-local-ca -n cert-manager -o=jsonpath='{.data.ca\.crt}' | base64 --decode > system-local-ca.crt

# Do not copy this over. It is just an example
sysadmin@controller-0:~$ cat system-local-ca.crt
-----BEGIN CERTIFICATE-----
MIIFDjCCAvagAwIBAgIUKOZEZ/F0khUrRlRX6hdvsPUsfVkwDQYJKoZIhvcNAQEL
BQAwFDESMBAGA1UEAwwJc3Rhcmxpbmd4MB4XDTI1MDYyNTExMzE0MFoXDTM1MDYy
MzExMzE0MFowFDESMBAGA1UEAwwJc3Rhcmxpbmd4MIICIjANBgkqhkiG9w0BAQEF
AAOCAg8AMIICCgKCAgEAwcmbAym7NCDKQCrWcK0dNEnv851QUZhA9QCIcmgXw2pG
EvU3JiEo0I1iZON2chH72cu1DTR8kkkUEjdbraKSB5ZfzrffKrA/enrbjs3eIgcz
dHg3d5DM7SfX0+Q/o49XysWEvpmDBpMjo2J3rUQw6o6S4/K4LJKGGVN1cs1tdRa0
BWjixegy9KgJFHdD75ruMv+ljQWnh8qbdieZJhmJa+Z9v5k61EDqtqihaxy5VTu+
gi2KZORCFUUhO1zr+heYZUt4qmBDr/MV0UBhUE22Y7k+/to1RFWEtNIAE8+/rQ57
xUNI3q6UNrebcfgEASv1NlIalbodjmBt8fUQm6FZv/BUr4O8l4iAqKXbthYHTSmk
F2W/GxN1vHyPGAvPX7R+3Tti5G5Ei6+m972knrpLO3dNXYAW9pGBWH4/UNmtc5xQ
GmdhK5U4GxvR8Av3LLFvoMWtgffKTstLSyscVzSNGwWIkT3SPLH4V9d3geIJXMtC
KXI+mL9IvIvEUuUDDlvSRG4AiF7J3CNBFqhtCQm5bYLx7x+57lsZ9ThQCynxYU1w
EpNxpAboXpF+NWXqTdqyT2wELLkxwVoJwB0SQBIMP4C88rAKOWV2ztAw8pVq3l+V
r+IGq0CI2Qgwwas7LHE+e/3GXPHmx/4ViIyR4oIiFpq+tUZh9KeplJRQQb4hReUC
AwEAAaNYMFYwFAYDVR0RBA0wC4IJc3Rhcmxpbmd4MAsGA1UdDwQEAwICpDASBgNV
HRMBAf8ECDAGAQH/AgEBMB0GA1UdDgQWBBSZ/duD7h3lQGhdo8Eg9XE9UxzYyzAN
BgkqhkiG9w0BAQsFAAOCAgEAlT+lomt//0ykqJ1fqDv8NMdnG0wOD82bQJ2zOxNB
gz26+7yYOEAWXVF7YORH3tW9Oeqs/mCirH2vnBGOev9j5ZhyenM37lN6TkF57Ziz
p5u7apaMaHsasOVGHX7BL7OYJe9cSKyFtBZiyo3mldCrApblktC+4zMHluIjKaZP
Z5okojzUoweTQj99h36XwKRVxG/5RwNdPjWateX3aTjQK2Nw7L0Qm8zafvhXOwyB
uPhu2N1BFSTXFLTykbOBFKku6hOkWM3NeJPOEmRYBwSC58cKHZkFNlksJkh6cJH1
tdOhb84xxocBXLPoqazxBcFB3BckVAwc8/jOWa39UG7lHCp9cTErf4CWgJ47NUJ+
eO1uY565ixVLmBtSOsmvkNAKOeJPLe9RZ0zfEJ2GW4ReBNpwirp0v4IY8fb0ZTp/
wYpAZcwVbHgCIrKwL7+AqDi3WoZ2F7WnG9nanXBHGjQEGmJJTDtckJPJAlrsBpf1
NfKu+uzW6CbQ2PVOThiOAxatPP7Vql/q0mYPfdrfF2LGDzNUavm0u1w9mf1VJpJC
aK7GVfT67yUF+r0pdOj3XbDTjqLewIQJsC4wB6h9tGt4vQGjrwnGW8rpm9FxzTzy
CmogyvYdTbN3RbnFOtu/V/z5K3uuoiNb42f5IvvnZv3T1Cl/e5H7ReulZqumeSu1
HD0=
-----END CERTIFICATE-----
Note
Contact your StarlingX administrator if you do not have access to a running system.
You can then manually copy this output to a new file in the working directory on the remote machine and save it as system-local-ca.crt, or use the scp command to copy the file over.
Then, add the following line to the bottom of admin-openrc.sh:
export OS_CACERT=<path_to_ca>
Where <path_to_ca> is the absolute path of the CA certificate, for example /home/path/system-local-ca.crt.
Create an empty user-kubeconfig file on the remote workstation. The contents will be set later.
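For instance, the export line can be appended from the shell. The file name matches the openrc file downloaded earlier; the certificate path below is illustrative.

```shell
# Sketch: append the CA path to the downloaded openrc file and verify.
# The certificate path is an example; use the absolute path on your workstation.
touch admin-openrc.sh    # stands in for the file downloaded from Horizon
echo 'export OS_CACERT=/home/user1/remote_cli_wd/system-local-ca.crt' >> admin-openrc.sh
tail -n 1 admin-openrc.sh
```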
$ touch user-kubeconfig
On the remote workstation, configure the client access.
Change to the location of the extracted tarball.
$ cd $HOME/stx-remote-clients-<version>/
Create a working directory that will be mounted by the container implementing the remote CLIs.
See the description of the configure_client.sh -w option below for more details.
$ mkdir -p $HOME/remote_cli_wd
Run the configure_client.sh script.
$ ./configure_client.sh -t platform -r admin-openrc.sh -k user-kubeconfig -w $HOME/remote_cli_wd
If you specify a repository that requires authentication (for example, via the -p option described below), you must remember to perform a docker login to that repository before using remote CLIs for the first time.
The options for configure_client.sh are:
- -t
The type of client configuration. The options are platform (for StarlingX CLI and clients) and openstack (for StarlingX OpenStack application CLI and clients).
The default value is platform.
- -r
The user/tenant RC file to use for openstack CLI commands.
The default value is admin-openrc.sh.
- -k
The kubernetes configuration file to use for kubectl and helm CLI commands.
The default value is temp-kubeconfig.
- -o
The remote CLI/client RC file generated by this script.
This RC file needs to be sourced in the shell to set up required environment variables and aliases before running any remote CLI commands.
For the platform client setup, the default is remote_client_platform.sh. For the openstack application client setup, the default is remote_client_app.sh.
- -w
The working directory that will be mounted by the container implementing the remote CLIs. When using the remote CLIs, any files passed as arguments to the remote CLI commands need to be in this directory in order for the container to access the files. The default value is the directory from which the configure_client.sh command was run.
- -p
Override the container image for the platform CLI and clients.
By default, the platform CLIs and clients container image is pulled from docker.io/starlingx/stx-platformclients.
For example, to use the container images from the WRS AWS ECR:
$ ./configure_client.sh -t platform -r admin-openrc.sh -k user-kubeconfig -w $HOME/remote_cli_wd -p https://hub.docker.com/layers/starlingx/stx-platformclients:stx.11.0-v1.0.1
If you specify repositories that require authentication, you must perform a docker login to that repository before using remote CLIs.
- -a
Override the OpenStack application image.
By default, the OpenStack CLIs and clients container image is pulled from docker.io/starlingx/stx-openstackclients.
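The -w constraint above is worth illustrating: because the container only mounts the working directory, a file must be placed there before a remote CLI command can read it. The file name stx-app.tgz and the commented-out system command are illustrative stand-ins.

```shell
# Sketch of the -w constraint: any file passed to a remote CLI command must
# live under the mounted working directory. stx-app.tgz is a stand-in name.
mkdir -p "$HOME/remote_cli_wd"
touch stx-app.tgz                      # stands in for a real application tarball
cp stx-app.tgz "$HOME/remote_cli_wd/"
cd "$HOME/remote_cli_wd"
# The containerized CLI can now read the file through the mount, e.g.:
# system application-upload stx-app.tgz
```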
The configure_client.sh command will generate a remote_client_platform.sh RC file. This RC file needs to be sourced in the shell to set up required environment variables and aliases before any remote CLI commands can be run.
Copy the file remote_client_platform.sh to $HOME/remote_cli_wd.
Update the content of the user-kubeconfig file using the kubectl command from the container. Use the OAM IP address and system-local-ca.crt from step 3. In the example below, the user is called user1; change it to your user name. If the OAM IP is IPv6, enclose the IP in brackets (example: [fd00::a14:803]).
$ cd $HOME/remote_cli_wd
$ source remote_client_platform.sh
$ kubectl config set-cluster wrcpcluster --server=https://<OAM_IP>:6443
$ kubectl config set clusters.wrcpcluster.certificate-authority-data $(base64 -w0 system-local-ca.crt)
$ kubectl config set-context user1@wrcpcluster --cluster=wrcpcluster --user user1
$ kubectl config use-context user1@wrcpcluster
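After these commands, the kubeconfig file has roughly the shape sketched below. The values are placeholders, and the user's authentication token is configured separately; this is only to show what the set-cluster, set-context, and use-context commands produce.

```shell
# Roughly what the kubeconfig contains after the commands above
# (placeholder values; user credentials/token are configured separately):
cat > user-kubeconfig.example <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: wrcpcluster
  cluster:
    server: https://<OAM_IP>:6443
    certificate-authority-data: <base64 of system-local-ca.crt>
contexts:
- name: user1@wrcpcluster
  context:
    cluster: wrcpcluster
    user: user1
current-context: user1@wrcpcluster
EOF
cat user-kubeconfig.example
```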
Postrequisites
After configuring the platform’s container-backed remote CLIs/clients, the remote platform CLIs can be used in any shell after sourcing the generated remote CLI/client RC file. This RC file sets up the required environment variables and aliases for the remote CLI commands.
Note
Consider adding this command to your .login or shell rc file, such that your shells will automatically be initialized with the environment variables and aliases for the remote CLI commands.
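For a bash login, that might look like the following. The path assumes remote_client_platform.sh was copied to the $HOME/remote_cli_wd working directory, as in the procedure above.

```shell
# Example: initialize every new shell with the remote CLI environment.
# The RC file location is an assumption based on the working directory used above.
echo 'source $HOME/remote_cli_wd/remote_client_platform.sh' >> "$HOME/.bashrc"
tail -n 1 "$HOME/.bashrc"
```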
See Using Container-backed Remote CLIs and Clients for details.