%PDF- %PDF-
Mini Shell

Mini Shell

Direktori : /usr/lib/python3/dist-packages/pygments/__pycache__/
Upload File :
Create Path :
Current File : //usr/lib/python3/dist-packages/pygments/__pycache__/lexer.cpython-312.pyc

�

|�e���X�dZddlZddlZddlZddlmZmZddlmZddl	m
Z
mZmZm
Z
mZddlmZmZmZmZmZmZddlmZgd�Zej2d	�Zgd
�Zed��ZGd�d
e�ZGd�de��Z Gd�de �Z!Gd�de"�Z#Gd�d�Z$e$�Z%Gd�de&�Z'Gd�d�Z(d�Z)Gd�d�Z*e*�Z+d�Z,Gd�d �Z-Gd!�d"e�Z.Gd#�d$e�Z/Gd%�d&e e/��Z0Gd'�d(�Z1Gd)�d*e0�Z2d+�Z3Gd,�d-e/�Z4Gd.�d/e0e4��Z5y)0z�
    pygments.lexer
    ~~~~~~~~~~~~~~

    Base lexer classes.

    :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
�N)�
apply_filters�Filter)�get_filter_by_name)�Error�Text�Other�
Whitespace�
_TokenType)�get_bool_opt�get_int_opt�get_list_opt�make_analysator�Future�guess_decode)�	regex_opt)
�Lexer�
RegexLexer�ExtendedRegexLexer�DelegatingLexer�LexerContext�include�inherit�bygroups�using�this�default�words�line_rez.*?
))s�utf-8)s��zutf-32)s��zutf-32be)s��zutf-16)s��zutf-16bec��y)N��)�xs �0/usr/lib/python3/dist-packages/pygments/lexer.py�<lambda>r%"���c��eZdZdZd�Zy)�	LexerMetaz�
    This metaclass automagically converts ``analyse_text`` methods into
    static methods which always return float values.
    c�\�d|vrt|d�|d<tj||||�S)N�analyse_text)r�type�__new__)�mcs�name�bases�ds    r$r-zLexerMeta.__new__+s3���Q�� /��.�0A� B�A�n���|�|�C��u�a�0�0r'N)�__name__�
__module__�__qualname__�__doc__r-r"r'r$r)r)%s���
1r'r)c�\�eZdZdZdZgZgZgZgZdZ	dZ
dZd�Zd�Z
d�Zd�Zd�Zdd	�Zd
�Zy)ra�
    Lexer for a specific language.

    See also :doc:`lexerdevelopment`, a high-level guide to writing
    lexers.

    Lexer classes have attributes used for choosing the most appropriate
    lexer based on various criteria.

    .. autoattribute:: name
       :no-value:
    .. autoattribute:: aliases
       :no-value:
    .. autoattribute:: filenames
       :no-value:
    .. autoattribute:: alias_filenames
    .. autoattribute:: mimetypes
       :no-value:
    .. autoattribute:: priority

    Lexers included in Pygments should have an additional attribute:

    .. autoattribute:: url
       :no-value:

    Lexers included in Pygments may have additional attributes:

    .. autoattribute:: _example
       :no-value:

    You can pass options to the constructor. The basic options recognized
    by all lexers and processed by the base `Lexer` class are:

    ``stripnl``
        Strip leading and trailing newlines from the input (default: True).
    ``stripall``
        Strip all leading and trailing whitespace from the input
        (default: False).
    ``ensurenl``
        Make sure that the input ends with a newline (default: True).  This
        is required for some lexers that consume input linewise.

        .. versionadded:: 1.3

    ``tabsize``
        If given and greater than 0, expand tabs in the input (default: 0).
    ``encoding``
        If given, must be an encoding name. This encoding will be used to
        convert the input string to Unicode, if it is not already a Unicode
        string (default: ``'guess'``, which uses a simple UTF-8 / Locale /
        Latin1 detection.  Can also be ``'chardet'`` to use the chardet
        library, if it is installed.
    ``inencoding``
        Overrides the ``encoding`` if given.
    Nrc�l�||_t|dd�|_t|dd�|_t|dd�|_t|dd�|_|jdd	�|_|jd
�xs|j|_g|_	t|dd�D]}|j|��y
)a�
        This constructor takes arbitrary options as keyword arguments.
        Every subclass must first process its own options and then call
        the `Lexer` constructor, since it processes the basic
        options like `stripnl`.

        An example looks like this:

        .. sourcecode:: python

           def __init__(self, **options):
               self.compress = options.get('compress', '')
               Lexer.__init__(self, **options)

        As these options must all be specifiable as strings (due to the
        command line usage), there are various utility functions
        available to help with that, see `Utilities`_.
        �stripnlT�stripallF�ensurenl�tabsizer�encoding�guess�
inencoding�filtersr"N)�optionsrr8r9r:rr;�getr<r?r
�
add_filter)�selfr@�filter_s   r$�__init__zLexer.__init__�s���&���#�G�Y��=���$�W�j�%�@��
�$�W�j�$�?��
�"�7�I�q�9������J��8��
����L�1�B�T�]�]��
����#�G�Y��;�	%�G��O�O�G�$�	%r'c��|jr'd|jj�d|j�d�Sd|jjzS)Nz<pygments.lexers.z with �>z<pygments.lexers.%s>)r@�	__class__r2�rCs r$�__repr__zLexer.__repr__�s@���<�<�59�^�^�5L�5L�59�\�\�C�
C�*�D�N�N�,C�,C�C�Cr'c�r�t|t�st|fi|��}|jj	|�y)z8
        Add a new stream filter to this lexer.
        N)�
isinstancerrr?�append)rCrDr@s   r$rBzLexer.add_filter�s/���'�6�*�(��<�G�<�G������G�$r'c��y)a�
        A static method which is called for lexer guessing.

        It should analyse the text and return a float in the range
        from ``0.0`` to ``1.0``.  If it returns ``0.0``, the lexer
        will not be selected as the most probable one, if it returns
        ``1.0``, it will be selected immediately.  This is used by
        `guess_lexer`.

        The `LexerMeta` metaclass automatically wraps this function so
        that it works like a static method (no ``self`` or ``cls``
        parameter) and the return value is automatically converted to
        `float`. If the return value is an object that is boolean `False`
        it's the same as if the return values was ``0.0``.
        Nr")�texts r$r+zLexer.analyse_text�r&r'c���t|t�s�|jdk(rt|�\}}n�|jdk(r�	ddl}d}tD]6\}}|j|�s�|t|�dj|d�}n|�9|j|dd�}|j|jd�xsd	d�}|}nZ|j|j�}|jd
�r.|td
�d}n|jd
�r|td
�d}|jdd�}|jd
d�}|jr|j�}n|jr|jd�}|j dkDr|j#|j �}|j$r|j'd�s|dz
}|S#t
$r}td�|�d}~wwxYw)zVApply preprocessing such as decoding the input, removing BOM and normalizing newlines.r=�chardetrNzkTo enable chardet encoding guessing, please install the chardet library from http://chardet.feedparser.org/�replaceir<ruz
�
�
)rL�strr<rrQ�ImportError�
_encoding_map�
startswith�len�decode�detectrArRr9�stripr8r;�
expandtabsr:�endswith)	rCrO�_rQ�e�decoded�bomr<�encs	         r$�_preprocess_lexer_inputzLexer._preprocess_lexer_input�s����$��$��}�}��'�&�t�,���a����)�+�T�"���%2��M�C�����s�+�"&�s�3�x�y�/�"8�"8��9�"M����
�?�!�.�.��e�t��5�C�"�k�k�#�'�'�*�*=�*H��*3�5�G����{�{�4�=�=�1���?�?�8�,���H�
��/�D����x�(��C��M�N�+���|�|�F�D�)���|�|�D�$�'���=�=��:�:�<�D�
�\�\��:�:�d�#�D��<�<�!���?�?�4�<�<�0�D��=�=����t�!4��D�L�D����I#�T�%�'L�M�RS�T��T�s�G�	G"�G�G"c�x����j�����fd�}|�}|st|�j��}|S)ae
        This method is the basic interface of a lexer. It is called by
        the `highlight()` function. It must process the text and return an
        iterable of ``(tokentype, value)`` pairs from `text`.

        Normally, you don't need to override this method. The default
        implementation processes the options recognized by all lexers
        (`stripnl`, `stripall` and so on), and then yields all tokens
        from `get_tokens_unprocessed()`, with the ``index`` dropped.

        If `unfiltered` is set to `True`, the filtering mechanism is
        bypassed even if filters are defined.
        c3�N�K��j��D]\}}}||f���y�w�N)�get_tokens_unprocessed)r_�t�vrCrOs   ��r$�streamerz"Lexer.get_tokens.<locals>.streamer	s0������6�6�t�<�
���1�a���d�
�
�s�"%)rdrr?)rCrO�
unfilteredrk�streams``   r$�
get_tokenszLexer.get_tokens�s=����+�+�D�1��	�����"�6�4�<�<��>�F��
r'c��t�)aS
        This method should process the text and return an iterable of
        ``(index, tokentype, value)`` tuples where ``index`` is the starting
        position of the token within the input text.

        It must be overridden by subclasses. It is recommended to
        implement it as a generator to maximize effectiveness.
        )�NotImplementedError)rCrOs  r$rhzLexer.get_tokens_unprocesseds
��"�!r')F)r2r3r4r5r/�aliases�	filenames�alias_filenames�	mimetypes�priority�url�_examplerErJrBr+rdrnrhr"r'r$rr1se��6�r�D��G�
�I��O��I��H��C��H�%�<D�%��"-�^�0	"r'r)�	metaclassc� �eZdZdZefd�Zd�Zy)ra 
    This lexer takes two lexer as arguments. A root lexer and
    a language lexer. First everything is scanned using the language
    lexer, afterwards all ``Other`` tokens are lexed using the root
    lexer.

    The lexers from the ``template`` lexer package use this base lexer.
    c�r�|di|��|_|di|��|_||_tj|fi|��y�Nr")�
root_lexer�language_lexer�needlerrE)rC�_root_lexer�_language_lexer�_needler@s     r$rEzDelegatingLexer.__init__'s9��%�0��0���-�8��8������
���t�'�w�'r'c�l�d}g}g}|jj|�D]N\}}}||jur&|r|jt	|�|f�g}||z
}�;|j|||f��P|r|jt	|�|f�t||jj|��S)N�)r}rhr~rMrY�
do_insertionsr|)rCrO�buffered�
insertions�
lng_buffer�irirjs        r$rhz&DelegatingLexer.get_tokens_unprocessed-s������
��
��*�*�A�A�$�G�	-�G�A�q�!��D�K�K����%�%�s�8�}�j�&A�B�!#�J��A�
���!�!�1�a��)�,�	-�����s�8�}�j�9�:��Z�!�_�_�C�C�H�M�O�	Or'N)r2r3r4r5rrErhr"r'r$rrs���>C�(�Or'rc��eZdZdZy)rzI
    Indicates that a state should include rules from another state.
    N�r2r3r4r5r"r'r$rrDs���	r'rc��eZdZdZd�Zy)�_inheritzC
    Indicates the a state should inherit from its superclass.
    c��y)Nrr"rIs r$rJz_inherit.__repr__Os��r'N)r2r3r4r5rJr"r'r$r�r�Ks���r'r�c��eZdZdZd�Zd�Zy)�combinedz:
    Indicates a state combined from multiple states.
    c�.�tj||�Srg)�tupler-)�cls�argss  r$r-zcombined.__new__Zs���}�}�S�$�'�'r'c��yrgr")rCr�s  r$rEzcombined.__init__]s��r'N)r2r3r4r5r-rEr"r'r$r�r�Us���(�
r'r�c�:�eZdZdZd�Zd	d�Zd	d�Zd	d�Zd�Zd�Z	y)
�_PseudoMatchz:
    A pseudo match object constructed from a string.
    c� �||_||_yrg)�_text�_start)rC�startrOs   r$rEz_PseudoMatch.__init__gs����
���r'Nc��|jSrg)r��rC�args  r$r�z_PseudoMatch.startks���{�{�r'c�F�|jt|j�zSrg)r�rYr�r�s  r$�endz_PseudoMatch.endns���{�{�S����_�,�,r'c�4�|rtd��|jS)Nz
No such group)�
IndexErrorr�r�s  r$�groupz_PseudoMatch.groupqs����_�-�-��z�z�r'c��|jfSrg)r�rIs r$�groupsz_PseudoMatch.groupsvs���
�
�}�r'c��iSrgr"rIs r$�	groupdictz_PseudoMatch.groupdictys���	r'rg)
r2r3r4r5rEr�r�r�r�r�r"r'r$r�r�bs%�����-��
�r'r�c���d�fd�	}|S)zL
    Callback that yields multiple actions for each group in the match.
    c
3��K�t��D]�\}}|��	t|�tur1|j|dz�}|s�1|j	|dz�||f���K|j|dz�}|��b|r|j	|dz�|_||t
|j	|dz�|�|�D]	}|s�|�����|r|j�|_yy�w)N�)�	enumerater,r
r�r��posr�r�)�lexer�match�ctxr��action�data�itemr�s       �r$�callbackzbygroups.<locals>.callback�s������"�4��	'�I�A�v��~���f���+��{�{�1�q�5�)����+�+�a�!�e�,�f�d�:�:��{�{�1�q�5�)���#��"'�+�+�a�!�e�"4��� &�u�'3�E�K�K��A��4F��'M�s�!T�'���"&�J�'�	'� ��i�i�k�C�G��s�<C�0C�1AC�8!Crgr")r�r�s` r$rr}s���"�&�Or'c��eZdZdZy)�_ThiszX
    Special singleton used for indicating the caller class.
    Used by ``using``.
    Nr�r"r'r$r�r��s��r'r�c�����i�d�vr4�jd�}t|ttf�r|�d<nd|f�d<�tur	d��fd�	}|Sd���fd�	}|S)a�
    Callback that processes the match with a different lexer.

    The keyword arguments are forwarded to the lexer, except `state` which
    is handled separately.

    `state` specifies the state that the new lexer will start in, and can
    be an enumerable such as ('root', 'inline', 'string') or a simple
    string which is assumed to be on top of the root state.

    Note: For that to work, `_other` must not be an `ExtendedRegexLexer`.
    �state�stack�rootc3�*�K��	r.�	j|j�|jdi�	��}n|}|j�}|j|j�fi���D]\}}}||z||f���|r|j
�|_yy�wr{)�updater@rHr�rhr�r�r�)
r�r�r��lx�sr�rirj�	gt_kwargs�kwargss
        ��r$r�zusing.<locals>.callback�s��������
�
�e�m�m�,�$�U�_�_�.�v�.�������
�A�4�2�4�4�U�[�[�]�P�i�P�
"���1�a��!�e�Q��k�!�
"���)�)�+����s�BBc3��K��
j|j��di�
��}|j�}|j|j	�fi�	��D]\}}}||z||f���|r|j�|_yy�wr{)r�r@r�rhr�r�r�)r�r�r�r�r�r�rirj�_otherr�r�s        ���r$r�zusing.<locals>.callback�s�������M�M�%�-�-�(��!�&�!�B����
�A�4�2�4�4�U�[�[�]�P�i�P�
"���1�a��!�e�Q��k�!�
"���)�)�+����s�BBrg)�poprL�listr�r)r�r�r�r�r�s``  @r$rr�se����I��&���J�J�w����a�$���'�!"�I�g��"(�!��I�g��
��~�
	&�2�O�		&��Or'c��eZdZdZd�Zy)rz�
    Indicates a state or state action (e.g. #pop) to apply.
    For example default('#pop') is equivalent to ('', Token, '#pop')
    Note that state tuples may be used as well.

    .. versionadded:: 2.0
    c��||_yrg)r�)rCr�s  r$rEzdefault.__init__�s	����
r'N)r2r3r4r5rEr"r'r$rr�s���r'rc��eZdZdZdd�Zd�Zy)rz�
    Indicates a list of literal words that is transformed into an optimized
    regex that matches any of the words.

    .. versionadded:: 2.0
    c�.�||_||_||_yrg)r�prefix�suffix)rCrr�r�s    r$rEzwords.__init__�s����
������r'c�Z�t|j|j|j��S)N�r�r�)rrr�r�rIs r$rAz	words.get�s������D�K�K����L�Lr'N)r�r�)r2r3r4r5rErAr"r'r$rr�s����
Mr'rc�<�eZdZdZd�Zd�Zd�Zd�Zd
d�Zd�Z	d	�Z
y)�RegexLexerMetazw
    Metaclass for RegexLexer, creates the self._tokens attribute from
    self.tokens on the first instantiation.
    c��t|t�r|j�}tj||�j
S)zBPreprocess the regular expression component of a token definition.)rLrrA�re�compiler�)r��regex�rflagsr�s    r$�_process_regexzRegexLexerMeta._process_regex�s.���e�V�$��I�I�K�E��z�z�%��(�.�.�.r'c�R�t|�tust|�s
Jd|����|S)z5Preprocess the token component of a token definition.z0token type must be simple type or callable, not )r,r
�callable)r��tokens  r$�_process_tokenzRegexLexerMeta._process_token�s.���E�{�j�(�H�U�O�	L�DI�K�	L�;��r'c���t|t�r5|dk(ry||vr|fS|dk(r|S|dddk(rt|dd�SJd|z��t|t�rfd|jz}|xjd	z
c_g}|D]3}||k7s
Jd
|z��|j|j
|||���5|||<|fSt|t�r|D]}||vr�|dvr�
Jd|z��|SJd
|z��)z=Preprocess the state transition action of a token definition.�#pop����#pushN�z#pop:zunknown new state %rz_tmp_%dr�zcircular state ref %r)r�r�zunknown new state zunknown new state def %r)rLrU�intr��_tmpname�extend�_process_stater�)r��	new_state�unprocessed�	processed�	tmp_state�itokens�istates       r$�_process_new_statez!RegexLexerMeta._process_new_statesK���i��%��F�"���k�)�!�|�#��g�%� � ��2�A��'�)��I�a�b�M�*�*�*�@�4�y�@�@�u�
�	�8�
,�!�C�L�L�0�I��L�L�A��L��G�#�
F����*�L�,C�f�,L�L�*����s�1�1�+�2;�V� E�F�
F�$+�I�i� ��<��
�	�5�
)�#�
2���+�-��"3�3�2�(�6�1�2�4�
2���@�4�y�@�@�5r'c��t|�tus
Jd|z��|ddk7s
Jd|z��||vr||Sgx}||<|j}||D�]?}t|t�r;||k7s
Jd|z��|j|j
||t|����Ot|t�r�`t|t�rO|j|j||�}|jtjd�jd|f���t|�tus
Jd|z��	|j!|d||�}|j'|d
�}
t)|�dk(rd}n|j|d||�}|j||
|f���B|S#t"$r }	t%d	|d�d
|�d|�d|	���|	�d}	~	wwxYw)z%Preprocess a single state definition.zwrong state name %rr�#zinvalid state name %rzcircular state reference %rr�Nzwrong rule def %rzuncompilable regex z
 in state z of z: r��)r,rU�flagsrLrr�r�r�rr�r�rMr�r�r�r�r��	Exception�
ValueErrorr�rY)r�r�r�r��tokensr��tdefr��rex�errr�s           r$r�zRegexLexerMeta._process_state#s����E�{�c�!�@�#8�5�#@�@�!��Q�x�3��?� 7�%� ?�?���I���U�#�#�$&�&���5�!�������&�!	3�D��$��(��u�}�K�&C�e�&K�K�}��
�
�c�0�0��i�14�T��<�=���$��)���$��(��2�2�4�:�:�{�I�V�	��
�
�r�z�z�"�~�3�3�T�9�E�F����:��&�B�(;�d�(B�B�&�
F��(�(��a��&�%�@��
�&�&�t�A�w�/�E��4�y�A�~� �	��2�2�4��7�3>�	�K�	�
�M�M�3��y�1�2�C!	3�D�
���
F� �"&�q�'�5�#�s�"<�=�BE�F��
F�s�*F�	F=�F8�8F=Nc��ix}|j|<|xs|j|}t|�D]}|j|||��|S)z-Preprocess a dictionary of token definitions.)�_all_tokensr�r�r�)r�r/�	tokendefsr�r�s     r$�process_tokendefzRegexLexerMeta.process_tokendefOsS��,.�.�	�C�O�O�D�)��1����D�!1�	��)�_�	<�E����y�)�U�;�	<��r'c��i}i}|jD]�}|jjdi�}|j�D]t\}}|j|�}|�!|||<	|j	t
�}|||<�:|j|d�}|��O||||dz	|j	t
�}	||	z||<�v��|S#t$rY��wxYw#t$rY��wxYw)a
        Merge tokens from superclasses in MRO order, returning a single tokendef
        dictionary.

        Any state that is not defined by a subclass will be inherited
        automatically.  States that *are* defined by subclasses will, by
        default, override that state in the superclass.  If a subclass wishes to
        inherit definitions from a superclass, it can use the special value
        "inherit", which will cause the superclass' state definition to be
        included at that point in the state.
        r�Nr�)�__mro__�__dict__rA�items�indexrr�r�)
r�r��inheritable�c�toksr�r��curitems�inherit_ndx�new_inh_ndxs
          r$�
get_tokendefszRegexLexerMeta.get_tokendefsWs���������	C�A��:�:�>�>�(�B�/�D� $�
�
��
C���u�!�:�:�e�,���#�
%*�F�5�M�!�&+�k�k�'�&:��*5�K��&��)�o�o�e�T�:���&��7<���[��]�3�C�#(�+�+�g�"6�K�*5�{�)B�K��&�9
C�	C�B�
��)&�!� �!��"����s$�B;�C
�;	C�C�
	C�Cc���d|jvrLi|_d|_t|d�r
|jrn%|jd|j
��|_tj|g|��i|��S)z:Instantiate cls after preprocessing its token definitions.�_tokensr�token_variantsr�)
r�r�r��hasattrrr�rrr,�__call__)r�r��kwdss   r$rzRegexLexerMeta.__call__�sh���C�L�L�(� �C�O��C�L��s�,�-�#�2D�2D��!�2�2�2�s�7H�7H�7J�K����}�}�S�0�4�0�4�0�0r'rg)r2r3r4r5r�r�r�r�r�rrr"r'r$r�r��s.���
/��!A�F*�X�/�b1r'r�c�4�eZdZdZej
ZiZdd�Zy)rz�
    Base for simple stateful regular expression-based lexers.
    Simplifies the lexing process so that you need only
    provide a list of states and regular expressions.
    c#�:K�d}|j}t|�}||d}	|D�]&\}}}	|||�}
|
s�|�8t|�tur|||
j	�f��n|||
�Ed{���|
j�}|	��t
|	t�rX|	D]R}|dk(r t|�dkDs�|j��(|dk(r|j|d��B|j|��TnWt
|	t�r#t|	�t|�k\r|dd�=n*||	d�=n$|	dk(r|j|d�n
Jd|	z��||d}n7	||dk(rd	g}|d	}|tdf��|dz
}��P|t||f��|dz
}��d7��#t$rYywxYw�w)
z~
        Split ``text`` into (tokentype, text) pairs.

        ``stack`` is the initial stack (default: ``['root']``)
        rr�r�Nr�r��wrong state def: %rrSr�)rr�r,r
r�r�rLr�rYr�rMr��absr	rr�)rCrOr�r�r��
statestack�statetokens�rexmatchr�r��mr�s            r$rhz!RegexLexer.get_tokens_unprocessed�s��������L�L�	��%�[�
��
�2��/���/:�0
�+��&�)��T�3�'����)���<�:�5�"%�v�q�w�w�y�"8�8�'-�d�A��6�6��%�%�'�C� �,�%�i��7�)2�=��#(�F�?�'*�:���':�(2���(8�%*�g�%5�$.�$5�$5�j��n�$E�$.�$5�$5�e�$<�=�(�	�3�7� #�9�~��Z��@�$.�q�r�N�$.�y�z�$:�&�'�1�&�-�-�j��n�=�K�*?�)�*K�K�5�&/�
�2��&?���C0
�J��C�y�D�(�&,�X�
�&/��&7��!�:�t�3�3��q��� ��u�d�3�i�/�/��1�H�C�_�7��P"����sM�8F�5F�0F	�1>F�0B!F� F�2F�4F�F�	F�F�F�FN�)r�)	r2r3r4r5r��	MULTILINEr�r�rhr"r'r$rr�s���
�L�L�E�0�F�;r'rc��eZdZdZdd�Zd�Zy)rz9
    A helper object that holds lexer position data.
    Nc�`�||_||_|xst|�|_|xsdg|_y)Nr�)rOr�rYr�r�)rCrOr�r�r�s     r$rEzLexerContext.__init__�s.����	�����#�#�d�)����&�v�h��
r'c�V�d|j�d|j�d|j�d�S)Nz
LexerContext(z, �))rOr�r�rIs r$rJzLexerContext.__repr__s���I�I�t�x�x����-�	-r'�NN)r2r3r4r5rErJr"r'r$rr�s���'�-r'rc��eZdZdZdd�Zy)rzE
    A RegexLexer that uses a context object to store its state.
    Nc#�K�|j}|st|d�}|d}n |}||jd}|j}	|D�]�\}}}|||j|j
�}	|	s�)|�lt
|�tur5|j||	j�f��|	j�|_n&|||	|�Ed{���|s||jd}|��5t|t�r�|D]�}
|
dk(r4t|j�dkDs�!|jj��<|
dk(r)|jj|jd��j|jj|
���n�t|t�rAt|�t|j�k\r|jdd�=nH|j|d�=n8|dk(r)|jj|jd�n
Jd|z��||jd}n�	|j|j
k\ry||jd	k(r9dg|_|d}|jt d	f��|xjdz
c_��;|jt"||jf��|xjdz
c_��s7���#t$$rYywxYw�w)
z
        Split ``text`` into (tokentype, text) pairs.
        If ``context`` is given, use this lexer context instead.
        rr�r�r�Nr�r�r	rS)rrr�rOr�r�r,r
r�rLr�rYr�rMr�r
rrr�)rCrO�contextr�r�rr
r�r�rr�s           r$rhz)ExtendedRegexLexer.get_tokens_unprocessedsu����
�L�L�	���t�Q�'�C�#�F�+�K��C�#�C�I�I�b�M�2�K��8�8�D��/:�2
�+��&�)��T�3�7�7�C�G�G�4����)���<�:�5�"%�'�'�6�1�7�7�9�"<�<�&'�e�e�g�C�G�'-�d�A�s�';�;�;�#,�.7��	�	�"�
�.F�� �,�%�i��7�)2�<��#(�F�?�'*�3�9�9�~��'9�(+�	�	�
�
��%*�g�%5�$'�I�I�$4�$4�S�Y�Y�r�]�$C�$'�I�I�$4�$4�U�$;�<�(�	�3�7�"�9�~��S�Y�Y��?�$'�I�I�a�b�M�$'�I�I�i�j�$9�&�'�1��I�I�,�,�S�Y�Y�r�]�;�K�*?�)�*K�K�5�&/��	�	�"�
�&>���G2
�J
��w�w�#�'�'�)���C�G�G�}��,�%+�H��	�&/��&7��!�g�g�t�T�1�1����1��� ��'�'�5�$�s�w�w�-�7�7��G�G�q�L�G�c�<��R"����s^�A,K	�/AK	�J7�A
K	�DK	�J:�2K	�3A	J:�<K	�>7J:�5K	�:	K�K	�K�K	r)r2r3r4r5rhr"r'r$rrs
���@r'rc#�K�t|�}	t|�\}}d}d}|D]�\}}}|�|}d}	|rx|t|�z|k\rg||	||z
}
|
r|||
f��|t|
�z
}|D]\}}}
|||
f��|t|
�z
}�||z
}		t|�\}}|r|t|�z|k\r�g|	t|�ks��||||	df��|t|�|	z
z
}��|r9|xsd}|D]\}}}|||f��|t|�z
}�	t|�\}}|r�8yy#t$r|Ed{���7YywxYw#t$rd}Y��wxYw#t$rd}YywxYw�w)ag
    Helper for lexers which must combine the results of several
    sublexers.

    ``insertions`` is a list of ``(index, itokens)`` pairs.
    Each ``itokens`` iterable should be inserted at position
    ``index`` into the token stream given by the ``tokens``
    argument.

    The result is a combined token stream.

    TODO: clean up the code here.
    NTrF)�iter�next�
StopIterationrY)r�r�r�r��realpos�insleftr�rirj�oldi�tmpval�it_index�it_token�it_value�ps               r$r�r�Os������j�!�J���j�)���w��G��G��%���1�a��?��G����!�c�!�f�*��-��t�E�A�I�&�F���q�&�(�(��3�v�;�&��07�
)�,��(�H��x��1�1��3�x�=�(��
)��1�9�D�
�!%�j�!1���w��!�c�!�f�*��-��#�a�&�=��1�a���h�&�&��s�1�v��}�$�G�+%�0��,�Q���	�G�A�q�!��1�a�-���s�1�v��G�	�	�!�*�-�N�E�7���E��������4!�
����
�� �	��G��	�s��E�D�A*E�D,�E�*E�9AE�?D=�
E�E�D)� D#�!D)�&E�(D)�)E�,D:�7E�9D:�:E�=E�E�
E�Ec��eZdZdZd�Zy)�ProfilingRegexLexerMetaz>Metaclass for ProfilingRegexLexer, collects regex timing info.c�������t|t�r-t|j|j|j���n|�tj�|��tjf����fd�	}|S)Nr�c����jdj�
�	fddg�}tj�}�j|||�}tj�}|dxxdz
cc<|dxx||z
z
cc<|S)Nr�rr!r�)�
_prof_data�
setdefault�timer�)rOr��endpos�info�t0�res�t1r��compiledr�r�s       ����r$�
match_funcz:ProfilingRegexLexerMeta._process_regex.<locals>.match_func�sr����>�>�"�%�0�0�%����3�x�H�D�����B��.�.��s�F�3�C�����B���G�q�L�G���G�r�B�w��G��Jr')	rLrrr�r�r�r��sys�maxsize)r�r�r�r�r2r1r�s`  ` @@r$r�z&ProfilingRegexLexerMeta._process_regex�sZ����e�U�#��E�K�K����#(�<�<�1�C��C��:�:�c�6�*��),���	�	��r'N)r2r3r4r5r�r"r'r$r&r&�s
��H�r'r&c� �eZdZdZgZdZdd�Zy)�ProfilingRegexLexerzFDrop-in replacement for RegexLexer that does profiling of its regexes.�c#�J�K��jjji�tj	�||�Ed{����jjj�}t
d�|j�D��fd�d��}td�|D��}t�td�jjt|�|fz�td�tdd	z�td
�|D]}td|z��td�y7�ݭw)Nc3�K�|]H\\}}\}}|t|�jd�jdd�dd|d|zd|z|zf���Jy�w)zu'z\\�\N�Ai�)�reprr\rR)�.0r��r�nris     r$�	<genexpr>z=ProfilingRegexLexer.get_tokens_unprocessed.<locals>.<genexpr>�sa����@�+�F�Q��F�Q���4��7�=�=��/�7�7���E�c�r�J��4�!�8�T�A�X��\�3�@�s�AAc�"��|�jSrg)�_prof_sort_index)r#rCs �r$r%z<ProfilingRegexLexer.get_tokens_unprocessed.<locals>.<lambda>�s���A�d�&;�&;�$<�r'T)�key�reversec3�&K�|]	}|d���y�w)�Nr")r=r#s  r$r@z=ProfilingRegexLexer.get_tokens_unprocessed.<locals>.<genexpr>�s����+���!��+�s�z2Profiling result for %s lexing %d chars in %.3f mszn==============================================================================================================z$%-20s %-64s ncalls  tottime  percall)r�r�zn--------------------------------------------------------------------------------------------------------------z%-20s %-65s %5d %8.4f %8.4f)rHr)rMrrhr��sortedr��sum�printr2rY)rCrOr��rawdatar��	sum_totalr1s`      r$rhz*ProfilingRegexLexer.get_tokens_unprocessed�s���������!�!�(�(��,��4�4�T�4��G�G�G��.�.�+�+�/�/�1���@�/6�}�}��@�=�"�	$��
�+�d�+�+�	�
��
�B��~�~�&�&��D�	�9�=�>�	?�
�i��
�4�7I�I�J�
�i���	5�A��/�!�3�4�	5�
�i��#	H�s�AD#�D!�CD#Nr)r2r3r4r5r)rBrhr"r'r$r6r6�s��P��J���r'r6)6r5r�r3r+�pygments.filterrr�pygments.filtersr�pygments.tokenrrrr	r
�
pygments.utilrrr
rrr�pygments.regexoptr�__all__r�rrW�staticmethod�_default_analyser,r)rrrUrr�rr�r�r�rr�rrrrr�rrrr�r&r6r"r'r$�<module>rTsJ���
�
��1�/�E�E�*�*�'�*���"�*�*�W�
��,�
� �
�.��	1��	1�i"�i�i"�XO�e�O�N	�c�	����*��

�u�

���6�4��
�w��/�d	�	�
M�F�
M� e1�Y�e1�P^��.�^�B
-�
-� E��E�P=�@�n��,�*�0G�r'

Zerion Mini Shell 1.0