Directory: /lib/python3/dist-packages/sos/collector/transports/__pycache__/
Current file: /lib/python3/dist-packages/sos/collector/transports/__pycache__/oc.cpython-312.pyc
import json
import tempfile
import os

from sos.collector.transports import RemoteTransport
from sos.utilities import (is_executable, sos_get_command_output,
                           SoSTimeoutError)


class OCTransport(RemoteTransport):
    """This transport leverages the execution of commands via a locally
    available and configured ``oc`` binary for OCPv4 environments.

    The location of the oc binary MUST be in the $PATH used by the locally
    loaded SoS policy. Specifically this means that the binary cannot be in
    the running user's home directory, such as ~/.local/bin.

    OCPv4 clusters generally discourage the use of SSH, so this transport
    may be used to remove our use of SSH in favor of the environment
    provided method of connecting to nodes and executing commands via debug
    pods.

    The debug pod created will be a privileged pod that mounts the host's
    filesystem internally so that sos report collections reflect the host,
    and not the container in which it runs.

    This transport will execute within a temporary 'sos-collect-tmp'
    project created by the OCP cluster profile. The project will be removed
    at the end of execution.

    In the event of failures due to a misbehaving OCP API or oc binary, it
    is recommended to fallback to the control_persist transport by manually
    setting the --transport option.
    """

    name = 'oc'
    project = 'sos-collect-tmp'

    def run_oc(self, cmd, **kwargs):
        """Format and run a command with `oc` in the project defined for
        our execution
        """
        return sos_get_command_output(
            f"oc -n {self.project} {cmd}",
            **kwargs
        )

    @property
    def connected(self):
        up = self.run_oc(
            f"wait --timeout=0s --for=condition=ready pod/{self.pod_name}"
        )
        return up['status'] == 0

    def get_node_pod_config(self):
        """Based on our template for the debug container, add the
        node-specific items so that we can deploy one of these on each node
        we're collecting from
        """
        return {
            "kind": "Pod",
            "apiVersion": "v1",
            "metadata": {
                "name": "%s-sos-collector" % self.address.split('.')[0],
                "namespace": self.project
            },
            "priorityClassName": "system-cluster-critical",
            "spec": {
                "volumes": [
                    {
                        "name": "host",
                        "hostPath": {
                            "path": "/",
                            "type": "Directory"
                        }
                    },
                    {
                        "name": "run",
                        "hostPath": {
                            "path": "/run",
                            "type": "Directory"
                        }
                    },
                    {
                        "name": "varlog",
                        "hostPath": {
                            "path": "/var/log",
                            "type": "Directory"
                        }
                    },
                    {
                        "name": "machine-id",
                        "hostPath": {
                            "path": "/etc/machine-id",
                            "type": "File"
                        }
                    }
                ],
                "containers": [
                    {
                        "name": "sos-collector-tmp",
                        "image": "registry.redhat.io/rhel8/support-tools"
                                 if not self.opts.image else self.opts.image,
                        "command": [
                            "/bin/bash"
                        ],
                        "env": [
                            {
                                "name": "HOST",
                                "value": "/host"
                            }
                        ],
                        "resources": {},
                        "volumeMounts": [
                            {
                                "name": "host",
                                "mountPath": "/host"
                            },
                            {
                                "name": "run",
                                "mountPath": "/run"
                            },
                            {
                                "name": "varlog",
                                "mountPath": "/var/log"
                            },
                            {
                                "name": "machine-id",
                                "mountPath": "/etc/machine-id"
                            }
                        ],
                        "securityContext": {
                            "privileged": True,
                            "runAsUser": 0
                        },
                        "stdin": True,
                        "stdinOnce": True,
                        "tty": True
                    }
                ],
                "imagePullPolicy":
                    "Always" if self.opts.force_pull_image
                    else "IfNotPresent",
                "restartPolicy": "Never",
                "nodeName": self.address,
                "hostNetwork": True,
                "hostPID": True,
                "hostIPC": True
            }
        }

    # NOTE: the bytecode dump is truncated at this point. The method table
    # shows several further methods on this class (connection setup and
    # teardown, and remote command execution, which use the is_executable
    # and SoSTimeoutError imports above); their bodies are not recoverable
    # from this dump.
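
# Illustrative sketch, not recovered from the dump: the json and tempfile
# imports above suggest that the truncated connection logic serializes the
# dict returned by get_node_pod_config() and hands it to `oc create`. The
# helper name `create_debug_pod_sketch` and its standalone form are
# assumptions for illustration only; upstream performs this inside the
# transport's connection setup rather than as a free function.
def create_debug_pod_sketch(pod_config, project='sos-collect-tmp'):
    """Serialize a pod definition and create it with `oc` (hypothetical).

    `pod_config` stands in for the dict built by get_node_pod_config();
    the `oc -n <project> ...` invocation mirrors run_oc() above.
    """
    # Write the manifest to a temporary JSON file that `oc create -f` can
    # read; json and tempfile are imported at module level above.
    with tempfile.NamedTemporaryFile(mode='w', suffix='.json',
                                     delete=False) as fobj:
        json.dump(pod_config, fobj)
        fname = fobj.name
    # Run the creation command within the sos-collect-tmp project
    return sos_get_command_output(f"oc -n {project} create -f {fname}")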