Jul  2 07:05:52.048389 kernel: Linux version 6.1.96-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20230826 p7) 13.2.1 20230826, GNU ld (Gentoo 2.40 p5) 2.40.0) #1 SMP PREEMPT_DYNAMIC Mon Jul  1 23:29:55 -00 2024
Jul  2 07:05:52.048414 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=5c215d2523556d4992ba36684815e8e6fad1d468795f4ed0868a855d0b76a607
Jul  2 07:05:52.048423 kernel: BIOS-provided physical RAM map:
Jul  2 07:05:52.048431 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul  2 07:05:52.048437 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jul  2 07:05:52.048443 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Jul  2 07:05:52.048451 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20
Jul  2 07:05:52.048461 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved
Jul  2 07:05:52.048467 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jul  2 07:05:52.048475 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jul  2 07:05:52.048482 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jul  2 07:05:52.048488 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jul  2 07:05:52.048493 kernel: printk: bootconsole [earlyser0] enabled
Jul  2 07:05:52.048502 kernel: NX (Execute Disable) protection: active
Jul  2 07:05:52.048512 kernel: efi: EFI v2.70 by Microsoft
Jul  2 07:05:52.048518 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c8a98 
Jul  2 07:05:52.048528 kernel: SMBIOS 3.1.0 present.
Jul  2 07:05:52.048534 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Jul  2 07:05:52.048541 kernel: Hypervisor detected: Microsoft Hyper-V
Jul  2 07:05:52.048550 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Jul  2 07:05:52.048557 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0
Jul  2 07:05:52.048566 kernel: Hyper-V: Nested features: 0x1e0101
Jul  2 07:05:52.048572 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jul  2 07:05:52.048578 kernel: Hyper-V: Using hypercall for remote TLB flush
Jul  2 07:05:52.048589 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jul  2 07:05:52.048596 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Jul  2 07:05:52.048603 kernel: tsc: Detected 2593.905 MHz processor
Jul  2 07:05:52.048609 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul  2 07:05:52.048616 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul  2 07:05:52.048623 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Jul  2 07:05:52.048629 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jul  2 07:05:52.048635 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Jul  2 07:05:52.048642 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Jul  2 07:05:52.048650 kernel: Using GB pages for direct mapping
Jul  2 07:05:52.048657 kernel: Secure boot disabled
Jul  2 07:05:52.048663 kernel: ACPI: Early table checksum verification disabled
Jul  2 07:05:52.048669 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jul  2 07:05:52.048676 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul  2 07:05:52.048682 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul  2 07:05:52.048689 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01   00000001 MSFT 05000000)
Jul  2 07:05:52.048703 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jul  2 07:05:52.048712 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul  2 07:05:52.048720 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul  2 07:05:52.048730 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul  2 07:05:52.048737 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul  2 07:05:52.048745 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul  2 07:05:52.048754 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul  2 07:05:52.048764 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul  2 07:05:52.048775 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jul  2 07:05:52.048782 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Jul  2 07:05:52.048791 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jul  2 07:05:52.048800 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jul  2 07:05:52.048807 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jul  2 07:05:52.048818 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jul  2 07:05:52.048825 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Jul  2 07:05:52.048836 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Jul  2 07:05:52.048844 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jul  2 07:05:52.048851 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Jul  2 07:05:52.048861 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jul  2 07:05:52.048868 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jul  2 07:05:52.048885 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jul  2 07:05:52.048893 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Jul  2 07:05:52.048902 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Jul  2 07:05:52.048910 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jul  2 07:05:52.048921 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jul  2 07:05:52.048930 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jul  2 07:05:52.048937 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jul  2 07:05:52.048947 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jul  2 07:05:52.048954 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jul  2 07:05:52.048962 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jul  2 07:05:52.048972 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jul  2 07:05:52.048979 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jul  2 07:05:52.048987 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Jul  2 07:05:52.048998 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Jul  2 07:05:52.049005 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Jul  2 07:05:52.049015 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Jul  2 07:05:52.049025 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Jul  2 07:05:52.049033 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Jul  2 07:05:52.049040 kernel: Zone ranges:
Jul  2 07:05:52.049050 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jul  2 07:05:52.049057 kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jul  2 07:05:52.049065 kernel:   Normal   [mem 0x0000000100000000-0x00000002bfffffff]
Jul  2 07:05:52.049077 kernel: Movable zone start for each node
Jul  2 07:05:52.049084 kernel: Early memory node ranges
Jul  2 07:05:52.049094 kernel:   node   0: [mem 0x0000000000001000-0x000000000009ffff]
Jul  2 07:05:52.049101 kernel:   node   0: [mem 0x0000000000100000-0x000000003ff40fff]
Jul  2 07:05:52.049109 kernel:   node   0: [mem 0x000000003ffff000-0x000000003fffffff]
Jul  2 07:05:52.049118 kernel:   node   0: [mem 0x0000000100000000-0x00000002bfffffff]
Jul  2 07:05:52.049125 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jul  2 07:05:52.049134 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul  2 07:05:52.049142 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jul  2 07:05:52.049152 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Jul  2 07:05:52.049161 kernel: ACPI: PM-Timer IO Port: 0x408
Jul  2 07:05:52.049168 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Jul  2 07:05:52.049177 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Jul  2 07:05:52.049186 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul  2 07:05:52.049193 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul  2 07:05:52.049203 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jul  2 07:05:52.049211 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jul  2 07:05:52.049218 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jul  2 07:05:52.049230 kernel: Booting paravirtualized kernel on Hyper-V
Jul  2 07:05:52.049237 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul  2 07:05:52.049247 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul  2 07:05:52.049255 kernel: percpu: Embedded 57 pages/cpu s194792 r8192 d30488 u1048576
Jul  2 07:05:52.049262 kernel: pcpu-alloc: s194792 r8192 d30488 u1048576 alloc=1*2097152
Jul  2 07:05:52.049271 kernel: pcpu-alloc: [0] 0 1 
Jul  2 07:05:52.049278 kernel: Hyper-V: PV spinlocks enabled
Jul  2 07:05:52.049286 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul  2 07:05:52.049296 kernel: Fallback order for Node 0: 0 
Jul  2 07:05:52.049305 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2062618
Jul  2 07:05:52.049314 kernel: Policy zone: Normal
Jul  2 07:05:52.049324 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=5c215d2523556d4992ba36684815e8e6fad1d468795f4ed0868a855d0b76a607
Jul  2 07:05:52.049332 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul  2 07:05:52.049341 kernel: random: crng init done
Jul  2 07:05:52.049348 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jul  2 07:05:52.049357 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul  2 07:05:52.049366 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul  2 07:05:52.049378 kernel: software IO TLB: area num 2.
Jul  2 07:05:52.049396 kernel: Memory: 8072932K/8387460K available (12293K kernel code, 2301K rwdata, 19992K rodata, 47156K init, 4308K bss, 314268K reserved, 0K cma-reserved)
Jul  2 07:05:52.049410 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul  2 07:05:52.049425 kernel: ftrace: allocating 36081 entries in 141 pages
Jul  2 07:05:52.049439 kernel: ftrace: allocated 141 pages with 4 groups
Jul  2 07:05:52.049452 kernel: Dynamic Preempt: voluntary
Jul  2 07:05:52.049466 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul  2 07:05:52.049481 kernel: rcu:         RCU event tracing is enabled.
Jul  2 07:05:52.049494 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul  2 07:05:52.049508 kernel:         Trampoline variant of Tasks RCU enabled.
Jul  2 07:05:52.049523 kernel:         Rude variant of Tasks RCU enabled.
Jul  2 07:05:52.049543 kernel:         Tracing variant of Tasks RCU enabled.
Jul  2 07:05:52.049556 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul  2 07:05:52.049570 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul  2 07:05:52.049585 kernel: Using NULL legacy PIC
Jul  2 07:05:52.049600 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jul  2 07:05:52.049620 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul  2 07:05:52.049637 kernel: Console: colour dummy device 80x25
Jul  2 07:05:52.049651 kernel: printk: console [tty1] enabled
Jul  2 07:05:52.049665 kernel: printk: console [ttyS0] enabled
Jul  2 07:05:52.049679 kernel: printk: bootconsole [earlyser0] disabled
Jul  2 07:05:52.049693 kernel: ACPI: Core revision 20220331
Jul  2 07:05:52.049707 kernel: Failed to register legacy timer interrupt
Jul  2 07:05:52.049721 kernel: APIC: Switch to symmetric I/O mode setup
Jul  2 07:05:52.049735 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jul  2 07:05:52.049748 kernel: Hyper-V: Using IPI hypercalls
Jul  2 07:05:52.049768 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905)
Jul  2 07:05:52.049784 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jul  2 07:05:52.049800 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jul  2 07:05:52.049815 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul  2 07:05:52.049829 kernel: Spectre V2 : Mitigation: Retpolines
Jul  2 07:05:52.049843 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jul  2 07:05:52.049858 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jul  2 07:05:52.049873 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jul  2 07:05:52.049899 kernel: RETBleed: Vulnerable
Jul  2 07:05:52.049918 kernel: Speculative Store Bypass: Vulnerable
Jul  2 07:05:52.049932 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Jul  2 07:05:52.049947 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jul  2 07:05:52.049961 kernel: GDS: Unknown: Dependent on hypervisor status
Jul  2 07:05:52.049975 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul  2 07:05:52.049989 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul  2 07:05:52.050004 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul  2 07:05:52.050018 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jul  2 07:05:52.050033 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jul  2 07:05:52.050048 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jul  2 07:05:52.050061 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jul  2 07:05:52.050081 kernel: x86/fpu: xstate_offset[5]:  832, xstate_sizes[5]:   64
Jul  2 07:05:52.050095 kernel: x86/fpu: xstate_offset[6]:  896, xstate_sizes[6]:  512
Jul  2 07:05:52.050109 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jul  2 07:05:52.050121 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Jul  2 07:05:52.050132 kernel: Freeing SMP alternatives memory: 32K
Jul  2 07:05:52.050143 kernel: pid_max: default: 32768 minimum: 301
Jul  2 07:05:52.050153 kernel: LSM: Security Framework initializing
Jul  2 07:05:52.050164 kernel: SELinux:  Initializing.
Jul  2 07:05:52.050174 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul  2 07:05:52.050185 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul  2 07:05:52.050197 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jul  2 07:05:52.050209 kernel: cblist_init_generic: Setting adjustable number of callback queues.
Jul  2 07:05:52.050224 kernel: cblist_init_generic: Setting shift to 1 and lim to 1.
Jul  2 07:05:52.050232 kernel: cblist_init_generic: Setting adjustable number of callback queues.
Jul  2 07:05:52.050239 kernel: cblist_init_generic: Setting shift to 1 and lim to 1.
Jul  2 07:05:52.050247 kernel: cblist_init_generic: Setting adjustable number of callback queues.
Jul  2 07:05:52.050254 kernel: cblist_init_generic: Setting shift to 1 and lim to 1.
Jul  2 07:05:52.050262 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jul  2 07:05:52.050269 kernel: signal: max sigframe size: 3632
Jul  2 07:05:52.050277 kernel: rcu: Hierarchical SRCU implementation.
Jul  2 07:05:52.050285 kernel: rcu:         Max phase no-delay instances is 400.
Jul  2 07:05:52.050292 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul  2 07:05:52.050302 kernel: smp: Bringing up secondary CPUs ...
Jul  2 07:05:52.050310 kernel: x86: Booting SMP configuration:
Jul  2 07:05:52.050319 kernel: .... node  #0, CPUs:      #1
Jul  2 07:05:52.050330 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Jul  2 07:05:52.050340 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jul  2 07:05:52.050349 kernel: smp: Brought up 1 node, 2 CPUs
Jul  2 07:05:52.050358 kernel: smpboot: Max logical packages: 1
Jul  2 07:05:52.050368 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Jul  2 07:05:52.050381 kernel: devtmpfs: initialized
Jul  2 07:05:52.050389 kernel: x86/mm: Memory block size: 128MB
Jul  2 07:05:52.050399 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jul  2 07:05:52.050409 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul  2 07:05:52.050418 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul  2 07:05:52.050428 kernel: pinctrl core: initialized pinctrl subsystem
Jul  2 07:05:52.050438 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul  2 07:05:52.050446 kernel: audit: initializing netlink subsys (disabled)
Jul  2 07:05:52.050453 kernel: audit: type=2000 audit(1719903950.031:1): state=initialized audit_enabled=0 res=1
Jul  2 07:05:52.050463 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul  2 07:05:52.050471 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul  2 07:05:52.050478 kernel: cpuidle: using governor menu
Jul  2 07:05:52.050488 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul  2 07:05:52.050496 kernel: dca service started, version 1.12.1
Jul  2 07:05:52.050505 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Jul  2 07:05:52.050513 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul  2 07:05:52.050520 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul  2 07:05:52.050528 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul  2 07:05:52.050538 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul  2 07:05:52.050545 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul  2 07:05:52.050553 kernel: ACPI: Added _OSI(Module Device)
Jul  2 07:05:52.050560 kernel: ACPI: Added _OSI(Processor Device)
Jul  2 07:05:52.050568 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul  2 07:05:52.050575 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul  2 07:05:52.050583 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul  2 07:05:52.050591 kernel: ACPI: Interpreter enabled
Jul  2 07:05:52.050600 kernel: ACPI: PM: (supports S0 S5)
Jul  2 07:05:52.050610 kernel: ACPI: Using IOAPIC for interrupt routing
Jul  2 07:05:52.050618 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul  2 07:05:52.050628 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jul  2 07:05:52.050637 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jul  2 07:05:52.050645 kernel: iommu: Default domain type: Translated 
Jul  2 07:05:52.050652 kernel: iommu: DMA domain TLB invalidation policy: lazy mode 
Jul  2 07:05:52.050662 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul  2 07:05:52.050670 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jul  2 07:05:52.050678 kernel: PTP clock support registered
Jul  2 07:05:52.050691 kernel: Registered efivars operations
Jul  2 07:05:52.050698 kernel: PCI: Using ACPI for IRQ routing
Jul  2 07:05:52.050708 kernel: PCI: System does not support PCI
Jul  2 07:05:52.050716 kernel: vgaarb: loaded
Jul  2 07:05:52.050724 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Jul  2 07:05:52.050731 kernel: VFS: Disk quotas dquot_6.6.0
Jul  2 07:05:52.050739 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul  2 07:05:52.050746 kernel: pnp: PnP ACPI init
Jul  2 07:05:52.050755 kernel: pnp: PnP ACPI: found 3 devices
Jul  2 07:05:52.050767 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul  2 07:05:52.050775 kernel: NET: Registered PF_INET protocol family
Jul  2 07:05:52.050786 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul  2 07:05:52.050794 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jul  2 07:05:52.050801 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul  2 07:05:52.050811 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul  2 07:05:52.050820 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jul  2 07:05:52.050827 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jul  2 07:05:52.050838 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jul  2 07:05:52.050848 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jul  2 07:05:52.050857 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul  2 07:05:52.050867 kernel: NET: Registered PF_XDP protocol family
Jul  2 07:05:52.050884 kernel: PCI: CLS 0 bytes, default 64
Jul  2 07:05:52.050894 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jul  2 07:05:52.050901 kernel: software IO TLB: mapped [mem 0x000000003ad36000-0x000000003ed36000] (64MB)
Jul  2 07:05:52.050911 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jul  2 07:05:52.050920 kernel: Initialise system trusted keyrings
Jul  2 07:05:52.050927 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jul  2 07:05:52.050941 kernel: Key type asymmetric registered
Jul  2 07:05:52.050948 kernel: Asymmetric key parser 'x509' registered
Jul  2 07:05:52.050959 kernel: alg: self-tests for CTR-KDF (hmac(sha256)) passed
Jul  2 07:05:52.050968 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul  2 07:05:52.050978 kernel: io scheduler mq-deadline registered
Jul  2 07:05:52.050987 kernel: io scheduler kyber registered
Jul  2 07:05:52.050996 kernel: io scheduler bfq registered
Jul  2 07:05:52.051006 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul  2 07:05:52.051014 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul  2 07:05:52.051026 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul  2 07:05:52.051034 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jul  2 07:05:52.051044 kernel: i8042: PNP: No PS/2 controller found.
Jul  2 07:05:52.051190 kernel: rtc_cmos 00:02: registered as rtc0
Jul  2 07:05:52.051272 kernel: rtc_cmos 00:02: setting system clock to 2024-07-02T07:05:51 UTC (1719903951)
Jul  2 07:05:52.051351 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jul  2 07:05:52.051362 kernel: fail to initialize ptp_kvm
Jul  2 07:05:52.051373 kernel: intel_pstate: CPU model not supported
Jul  2 07:05:52.051383 kernel: efifb: probing for efifb
Jul  2 07:05:52.051391 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jul  2 07:05:52.051399 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jul  2 07:05:52.051410 kernel: efifb: scrolling: redraw
Jul  2 07:05:52.051418 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul  2 07:05:52.051429 kernel: Console: switching to colour frame buffer device 128x48
Jul  2 07:05:52.051438 kernel: fb0: EFI VGA frame buffer device
Jul  2 07:05:52.051447 kernel: pstore: Registered efi as persistent store backend
Jul  2 07:05:52.051460 kernel: NET: Registered PF_INET6 protocol family
Jul  2 07:05:52.051468 kernel: Segment Routing with IPv6
Jul  2 07:05:52.051478 kernel: In-situ OAM (IOAM) with IPv6
Jul  2 07:05:52.051488 kernel: NET: Registered PF_PACKET protocol family
Jul  2 07:05:52.051496 kernel: Key type dns_resolver registered
Jul  2 07:05:52.051504 kernel: IPI shorthand broadcast: enabled
Jul  2 07:05:52.051515 kernel: sched_clock: Marking stable (957292100, 26540100)->(1211241400, -227409200)
Jul  2 07:05:52.051523 kernel: registered taskstats version 1
Jul  2 07:05:52.051531 kernel: Loading compiled-in X.509 certificates
Jul  2 07:05:52.051541 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.1.96-flatcar: ad4c54fcfdf0a10b17828c4377e868762dc43797'
Jul  2 07:05:52.051551 kernel: Key type .fscrypt registered
Jul  2 07:05:52.051561 kernel: Key type fscrypt-provisioning registered
Jul  2 07:05:52.051569 kernel: pstore: Using crash dump compression: deflate
Jul  2 07:05:52.051577 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul  2 07:05:52.051588 kernel: ima: Allocated hash algorithm: sha1
Jul  2 07:05:52.051596 kernel: ima: No architecture policies found
Jul  2 07:05:52.051603 kernel: clk: Disabling unused clocks
Jul  2 07:05:52.051614 kernel: Freeing unused kernel image (initmem) memory: 47156K
Jul  2 07:05:52.051623 kernel: Write protecting the kernel read-only data: 34816k
Jul  2 07:05:52.051633 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Jul  2 07:05:52.051642 kernel: Freeing unused kernel image (rodata/data gap) memory: 488K
Jul  2 07:05:52.051650 kernel: Run /init as init process
Jul  2 07:05:52.051661 kernel:   with arguments:
Jul  2 07:05:52.051669 kernel:     /init
Jul  2 07:05:52.051676 kernel:   with environment:
Jul  2 07:05:52.051687 kernel:     HOME=/
Jul  2 07:05:52.051694 kernel:     TERM=linux
Jul  2 07:05:52.051702 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Jul  2 07:05:52.051717 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul  2 07:05:52.051728 systemd[1]: Detected virtualization microsoft.
Jul  2 07:05:52.051739 systemd[1]: Detected architecture x86-64.
Jul  2 07:05:52.051747 systemd[1]: Running in initrd.
Jul  2 07:05:52.051757 systemd[1]: No hostname configured, using default hostname.
Jul  2 07:05:52.051765 systemd[1]: Hostname set to <localhost>.
Jul  2 07:05:52.051774 systemd[1]: Initializing machine ID from random generator.
Jul  2 07:05:52.051787 systemd[1]: Queued start job for default target initrd.target.
Jul  2 07:05:52.051795 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul  2 07:05:52.051805 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul  2 07:05:52.051814 systemd[1]: Reached target paths.target - Path Units.
Jul  2 07:05:52.051822 systemd[1]: Reached target slices.target - Slice Units.
Jul  2 07:05:52.051833 systemd[1]: Reached target swap.target - Swaps.
Jul  2 07:05:52.051841 systemd[1]: Reached target timers.target - Timer Units.
Jul  2 07:05:52.051854 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul  2 07:05:52.051864 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul  2 07:05:52.051885 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jul  2 07:05:52.051896 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul  2 07:05:52.051907 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul  2 07:05:52.051916 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul  2 07:05:52.051926 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul  2 07:05:52.051937 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul  2 07:05:52.051948 systemd[1]: Reached target sockets.target - Socket Units.
Jul  2 07:05:52.051958 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul  2 07:05:52.051968 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul  2 07:05:52.051976 systemd[1]: Starting systemd-fsck-usr.service...
Jul  2 07:05:52.051988 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul  2 07:05:52.051996 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul  2 07:05:52.052005 systemd[1]: Starting systemd-vconsole-setup.service - Setup Virtual Console...
Jul  2 07:05:52.052020 systemd-journald[178]: Journal started
Jul  2 07:05:52.052069 systemd-journald[178]: Runtime Journal (/run/log/journal/2445773fb43c43d9b5bb539833a8db8c) is 8.0M, max 158.8M, 150.8M free.
Jul  2 07:05:52.060897 systemd[1]: Started systemd-journald.service - Journal Service.
Jul  2 07:05:52.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:52.073527 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul  2 07:05:52.076977 kernel: audit: type=1130 audit(1719903952.063:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:52.080005 systemd-modules-load[179]: Inserted module 'overlay'
Jul  2 07:05:52.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:52.083344 systemd[1]: Finished systemd-fsck-usr.service.
Jul  2 07:05:52.095650 kernel: audit: type=1130 audit(1719903952.082:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:52.098480 systemd[1]: Finished systemd-vconsole-setup.service - Setup Virtual Console.
Jul  2 07:05:52.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:52.115365 kernel: audit: type=1130 audit(1719903952.097:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:52.133454 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul  2 07:05:52.133544 kernel: audit: type=1130 audit(1719903952.118:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:52.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:52.142336 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul  2 07:05:52.149621 kernel: Bridge firewalling registered
Jul  2 07:05:52.143527 systemd-modules-load[179]: Inserted module 'br_netfilter'
Jul  2 07:05:52.153062 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul  2 07:05:52.165322 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul  2 07:05:52.180017 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul  2 07:05:52.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:52.197744 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul  2 07:05:52.214941 kernel: audit: type=1130 audit(1719903952.179:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:52.214979 kernel: audit: type=1130 audit(1719903952.201:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:52.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:52.212086 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul  2 07:05:52.221174 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul  2 07:05:52.236527 kernel: audit: type=1130 audit(1719903952.223:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:52.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:52.224000 audit: BPF prog-id=6 op=LOAD
Jul  2 07:05:52.242082 kernel: audit: type=1334 audit(1719903952.224:9): prog-id=6 op=LOAD
Jul  2 07:05:52.242136 kernel: SCSI subsystem initialized
Jul  2 07:05:52.239430 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul  2 07:05:52.249312 dracut-cmdline[199]: dracut-dracut-053
Jul  2 07:05:52.257760 dracut-cmdline[199]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=5c215d2523556d4992ba36684815e8e6fad1d468795f4ed0868a855d0b76a607
Jul  2 07:05:52.295903 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul  2 07:05:52.300442 kernel: device-mapper: uevent: version 1.0.3
Jul  2 07:05:52.305912 kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com
Jul  2 07:05:52.310530 systemd-resolved[204]: Positive Trust Anchors:
Jul  2 07:05:52.310684 systemd-resolved[204]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul  2 07:05:52.310736 systemd-resolved[204]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul  2 07:05:52.336020 systemd-resolved[204]: Defaulting to hostname 'linux'.
Jul  2 07:05:52.338484 systemd-modules-load[179]: Inserted module 'dm_multipath'
Jul  2 07:05:52.339403 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul  2 07:05:52.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:52.356913 kernel: audit: type=1130 audit(1719903952.346:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:52.358142 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul  2 07:05:52.365127 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul  2 07:05:52.379507 kernel: Loading iSCSI transport class v2.0-870.
Jul  2 07:05:52.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:52.370908 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul  2 07:05:52.378357 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul  2 07:05:52.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:52.401905 kernel: iscsi: registered transport (tcp)
Jul  2 07:05:52.427248 kernel: iscsi: registered transport (qla4xxx)
Jul  2 07:05:52.427337 kernel: QLogic iSCSI HBA Driver
Jul  2 07:05:52.462583 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul  2 07:05:52.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:52.471331 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul  2 07:05:52.537916 kernel: raid6: avx512x4 gen() 18254 MB/s
Jul  2 07:05:52.556931 kernel: raid6: avx512x2 gen() 18316 MB/s
Jul  2 07:05:52.575915 kernel: raid6: avx512x1 gen() 17403 MB/s
Jul  2 07:05:52.595942 kernel: raid6: avx2x4   gen() 17182 MB/s
Jul  2 07:05:52.614926 kernel: raid6: avx2x2   gen() 17025 MB/s
Jul  2 07:05:52.635898 kernel: raid6: avx2x1   gen() 13406 MB/s
Jul  2 07:05:52.635989 kernel: raid6: using algorithm avx512x2 gen() 18316 MB/s
Jul  2 07:05:52.658505 kernel: raid6: .... xor() 27199 MB/s, rmw enabled
Jul  2 07:05:52.658598 kernel: raid6: using avx512x2 recovery algorithm
Jul  2 07:05:52.665911 kernel: xor: automatically using best checksumming function   avx       
Jul  2 07:05:52.815906 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Jul  2 07:05:52.826596 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul  2 07:05:52.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:52.830000 audit: BPF prog-id=7 op=LOAD
Jul  2 07:05:52.830000 audit: BPF prog-id=8 op=LOAD
Jul  2 07:05:52.838179 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul  2 07:05:52.863414 systemd-udevd[383]: Using default interface naming scheme 'v252'.
Jul  2 07:05:52.871904 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul  2 07:05:52.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:52.894062 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul  2 07:05:52.912838 dracut-pre-trigger[397]: rd.md=0: removing MD RAID activation
Jul  2 07:05:52.949940 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul  2 07:05:52.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:52.964114 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul  2 07:05:53.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:53.012126 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul  2 07:05:53.063901 kernel: cryptd: max_cpu_qlen set to 1000
Jul  2 07:05:53.101497 kernel: hv_vmbus: Vmbus version:5.2
Jul  2 07:05:53.111897 kernel: hv_vmbus: registering driver hyperv_keyboard
Jul  2 07:05:53.136916 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jul  2 07:05:53.136982 kernel: AVX2 version of gcm_enc/dec engaged.
Jul  2 07:05:53.141149 kernel: AES CTR mode by8 optimization enabled
Jul  2 07:05:53.141198 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul  2 07:05:53.149894 kernel: hv_vmbus: registering driver hid_hyperv
Jul  2 07:05:53.157125 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jul  2 07:05:53.157189 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on 
Jul  2 07:05:53.163903 kernel: hv_vmbus: registering driver hv_netvsc
Jul  2 07:05:53.168899 kernel: hv_vmbus: registering driver hv_storvsc
Jul  2 07:05:53.173956 kernel: scsi host0: storvsc_host_t
Jul  2 07:05:53.178911 kernel: scsi host1: storvsc_host_t
Jul  2 07:05:53.184984 kernel: scsi 1:0:0:0: Direct-Access     Msft     Virtual Disk     1.0  PQ: 0 ANSI: 5
Jul  2 07:05:53.190901 kernel: scsi 1:0:0:2: CD-ROM            Msft     Virtual DVD-ROM  1.0  PQ: 0 ANSI: 0
Jul  2 07:05:53.211018 kernel: sr 1:0:0:2: [sr0] scsi-1 drive
Jul  2 07:05:53.218267 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul  2 07:05:53.218290 kernel: sr 1:0:0:2: Attached scsi CD-ROM sr0
Jul  2 07:05:53.234720 kernel: sd 1:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jul  2 07:05:53.249532 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks
Jul  2 07:05:53.249727 kernel: sd 1:0:0:0: [sda] Write Protect is off
Jul  2 07:05:53.249911 kernel: sd 1:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jul  2 07:05:53.250083 kernel: sd 1:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jul  2 07:05:53.250247 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul  2 07:05:53.250269 kernel: sd 1:0:0:0: [sda] Attached SCSI disk
Jul  2 07:05:53.291905 kernel: hv_netvsc 000d3aba-9c54-000d-3aba-9c54000d3aba eth0: VF slot 1 added
Jul  2 07:05:53.299904 kernel: hv_vmbus: registering driver hv_pci
Jul  2 07:05:53.308614 kernel: hv_pci ae5a1a51-9e6e-48df-8458-3111074e1bcb: PCI VMBus probing: Using version 0x10004
Jul  2 07:05:53.363100 kernel: hv_pci ae5a1a51-9e6e-48df-8458-3111074e1bcb: PCI host bridge to bus 9e6e:00
Jul  2 07:05:53.363298 kernel: pci_bus 9e6e:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Jul  2 07:05:53.363475 kernel: pci_bus 9e6e:00: No busn resource found for root bus, will use [bus 00-ff]
Jul  2 07:05:53.363615 kernel: pci 9e6e:00:02.0: [15b3:1016] type 00 class 0x020000
Jul  2 07:05:53.363784 kernel: pci 9e6e:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jul  2 07:05:53.363967 kernel: pci 9e6e:00:02.0: enabling Extended Tags
Jul  2 07:05:53.364126 kernel: pci 9e6e:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 9e6e:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Jul  2 07:05:53.364278 kernel: pci_bus 9e6e:00: busn_res: [bus 00-ff] end is updated to 00
Jul  2 07:05:53.364419 kernel: pci 9e6e:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jul  2 07:05:53.535287 kernel: mlx5_core 9e6e:00:02.0: enabling device (0000 -> 0002)
Jul  2 07:05:53.779410 kernel: mlx5_core 9e6e:00:02.0: firmware version: 14.30.1284
Jul  2 07:05:53.779596 kernel: mlx5_core 9e6e:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
Jul  2 07:05:53.779749 kernel: mlx5_core 9e6e:00:02.0: Supported tc offload range - chains: 1, prios: 1
Jul  2 07:05:53.779917 kernel: hv_netvsc 000d3aba-9c54-000d-3aba-9c54000d3aba eth0: VF registering: eth1
Jul  2 07:05:53.780252 kernel: mlx5_core 9e6e:00:02.0 eth1: joined to eth0
Jul  2 07:05:53.795952 kernel: mlx5_core 9e6e:00:02.0 enP40558s1: renamed from eth1
Jul  2 07:05:53.832532 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jul  2 07:05:53.856920 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (432)
Jul  2 07:05:53.872093 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jul  2 07:05:54.098066 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jul  2 07:05:54.126908 kernel: BTRFS: device fsid 1fca1e64-eeea-4360-9664-a9b6b3a60b6f devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (428)
Jul  2 07:05:54.142348 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jul  2 07:05:54.145906 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jul  2 07:05:54.168323 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul  2 07:05:54.184906 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul  2 07:05:54.193911 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul  2 07:05:55.197916 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul  2 07:05:55.199419 disk-uuid[568]: The operation has completed successfully.
Jul  2 07:05:55.284301 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul  2 07:05:55.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:55.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:55.284433 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul  2 07:05:55.316141 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul  2 07:05:55.324162 sh[653]: Success
Jul  2 07:05:55.356903 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jul  2 07:05:55.560231 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul  2 07:05:55.570616 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul  2 07:05:55.575594 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul  2 07:05:55.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:55.590412 kernel: BTRFS info (device dm-0): first mount of filesystem 1fca1e64-eeea-4360-9664-a9b6b3a60b6f
Jul  2 07:05:55.590503 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul  2 07:05:55.594133 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul  2 07:05:55.597064 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul  2 07:05:55.599802 kernel: BTRFS info (device dm-0): using free space tree
Jul  2 07:05:55.921366 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul  2 07:05:55.924301 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul  2 07:05:55.938124 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul  2 07:05:55.942109 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul  2 07:05:55.965274 kernel: BTRFS info (device sda6): first mount of filesystem f7c77bfb-d479-47f3-a34e-515c95184b74
Jul  2 07:05:55.965341 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul  2 07:05:55.968622 kernel: BTRFS info (device sda6): using free space tree
Jul  2 07:05:56.020795 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul  2 07:05:56.027430 kernel: BTRFS info (device sda6): last unmount of filesystem f7c77bfb-d479-47f3-a34e-515c95184b74
Jul  2 07:05:56.033146 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul  2 07:05:56.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:56.040000 audit: BPF prog-id=9 op=LOAD
Jul  2 07:05:56.048118 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul  2 07:05:56.051641 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul  2 07:05:56.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:56.058736 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul  2 07:05:56.079570 systemd-networkd[834]: lo: Link UP
Jul  2 07:05:56.079580 systemd-networkd[834]: lo: Gained carrier
Jul  2 07:05:56.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:56.080236 systemd-networkd[834]: Enumeration completed
Jul  2 07:05:56.080331 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul  2 07:05:56.084736 systemd-networkd[834]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul  2 07:05:56.084740 systemd-networkd[834]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul  2 07:05:56.085680 systemd[1]: Reached target network.target - Network.
Jul  2 07:05:56.109373 systemd[1]: Starting iscsiuio.service - iSCSI UserSpace I/O driver...
Jul  2 07:05:56.115285 systemd[1]: Started iscsiuio.service - iSCSI UserSpace I/O driver.
Jul  2 07:05:56.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:56.122239 systemd[1]: Starting iscsid.service - Open-iSCSI...
Jul  2 07:05:56.129908 iscsid[840]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Jul  2 07:05:56.129908 iscsid[840]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Jul  2 07:05:56.129908 iscsid[840]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Jul  2 07:05:56.129908 iscsid[840]: If using hardware iscsi like qla4xxx this message can be ignored.
Jul  2 07:05:56.129908 iscsid[840]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Jul  2 07:05:56.129908 iscsid[840]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Jul  2 07:05:56.130747 systemd[1]: Started iscsid.service - Open-iSCSI.
Jul  2 07:05:56.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:56.167013 kernel: kauditd_printk_skb: 17 callbacks suppressed
Jul  2 07:05:56.167043 kernel: audit: type=1130 audit(1719903956.162:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:56.182415 kernel: mlx5_core 9e6e:00:02.0 enP40558s1: Link up
Jul  2 07:05:56.179446 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul  2 07:05:56.191604 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul  2 07:05:56.212756 kernel: audit: type=1130 audit(1719903956.194:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:56.212796 kernel: hv_netvsc 000d3aba-9c54-000d-3aba-9c54000d3aba eth0: Data path switched to VF: enP40558s1
Jul  2 07:05:56.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:56.203495 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul  2 07:05:56.204280 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul  2 07:05:56.204754 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul  2 07:05:56.230252 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Jul  2 07:05:56.223786 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul  2 07:05:56.227796 systemd-networkd[834]: enP40558s1: Link UP
Jul  2 07:05:56.227918 systemd-networkd[834]: eth0: Link UP
Jul  2 07:05:56.228071 systemd-networkd[834]: eth0: Gained carrier
Jul  2 07:05:56.228081 systemd-networkd[834]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul  2 07:05:56.237747 systemd-networkd[834]: enP40558s1: Gained carrier
Jul  2 07:05:56.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:56.240605 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul  2 07:05:56.268666 kernel: audit: type=1130 audit(1719903956.248:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:56.286966 systemd-networkd[834]: eth0: DHCPv4 address 10.200.8.44/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jul  2 07:05:57.038079 ignition[835]: Ignition 2.15.0
Jul  2 07:05:57.038095 ignition[835]: Stage: fetch-offline
Jul  2 07:05:57.039742 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul  2 07:05:57.038146 ignition[835]: no configs at "/usr/lib/ignition/base.d"
Jul  2 07:05:57.038157 ignition[835]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul  2 07:05:57.038272 ignition[835]: parsed url from cmdline: ""
Jul  2 07:05:57.038277 ignition[835]: no config URL provided
Jul  2 07:05:57.038285 ignition[835]: reading system config file "/usr/lib/ignition/user.ign"
Jul  2 07:05:57.038295 ignition[835]: no config at "/usr/lib/ignition/user.ign"
Jul  2 07:05:57.038303 ignition[835]: failed to fetch config: resource requires networking
Jul  2 07:05:57.038456 ignition[835]: Ignition finished successfully
Jul  2 07:05:57.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:57.073904 kernel: audit: type=1130 audit(1719903957.064:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:57.074321 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul  2 07:05:57.088128 ignition[859]: Ignition 2.15.0
Jul  2 07:05:57.088144 ignition[859]: Stage: fetch
Jul  2 07:05:57.088285 ignition[859]: no configs at "/usr/lib/ignition/base.d"
Jul  2 07:05:57.088298 ignition[859]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul  2 07:05:57.088418 ignition[859]: parsed url from cmdline: ""
Jul  2 07:05:57.088423 ignition[859]: no config URL provided
Jul  2 07:05:57.088429 ignition[859]: reading system config file "/usr/lib/ignition/user.ign"
Jul  2 07:05:57.088439 ignition[859]: no config at "/usr/lib/ignition/user.ign"
Jul  2 07:05:57.088471 ignition[859]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jul  2 07:05:57.174357 ignition[859]: GET result: OK
Jul  2 07:05:57.174503 ignition[859]: config has been read from IMDS userdata
Jul  2 07:05:57.174543 ignition[859]: parsing config with SHA512: 73cbcf6c1f9847f9f7b2b049eb4647f52d3a308fa90ee39f6f2e431aff8539986aabca32cfa5a28e8d8897e3c58330eadbae5016bc78b31d3aafe28a5ef8fe28
Jul  2 07:05:57.183013 unknown[859]: fetched base config from "system"
Jul  2 07:05:57.183030 unknown[859]: fetched base config from "system"
Jul  2 07:05:57.184196 ignition[859]: fetch: fetch complete
Jul  2 07:05:57.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:57.183041 unknown[859]: fetched user config from "azure"
Jul  2 07:05:57.202199 kernel: audit: type=1130 audit(1719903957.190:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:57.184206 ignition[859]: fetch: fetch passed
Jul  2 07:05:57.186040 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul  2 07:05:57.184274 ignition[859]: Ignition finished successfully
Jul  2 07:05:57.204049 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul  2 07:05:57.223960 ignition[866]: Ignition 2.15.0
Jul  2 07:05:57.223971 ignition[866]: Stage: kargs
Jul  2 07:05:57.226367 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul  2 07:05:57.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:57.224114 ignition[866]: no configs at "/usr/lib/ignition/base.d"
Jul  2 07:05:57.246675 kernel: audit: type=1130 audit(1719903957.229:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:57.224129 ignition[866]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul  2 07:05:57.225133 ignition[866]: kargs: kargs passed
Jul  2 07:05:57.249755 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul  2 07:05:57.225188 ignition[866]: Ignition finished successfully
Jul  2 07:05:57.270001 ignition[872]: Ignition 2.15.0
Jul  2 07:05:57.270070 ignition[872]: Stage: disks
Jul  2 07:05:57.270212 ignition[872]: no configs at "/usr/lib/ignition/base.d"
Jul  2 07:05:57.270228 ignition[872]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul  2 07:05:57.278597 ignition[872]: disks: disks passed
Jul  2 07:05:57.280386 ignition[872]: Ignition finished successfully
Jul  2 07:05:57.283307 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul  2 07:05:57.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:57.295350 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul  2 07:05:57.299455 kernel: audit: type=1130 audit(1719903957.285:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:57.304838 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul  2 07:05:57.304958 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul  2 07:05:57.305417 systemd[1]: Reached target sysinit.target - System Initialization.
Jul  2 07:05:57.305826 systemd[1]: Reached target basic.target - Basic System.
Jul  2 07:05:57.326302 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul  2 07:05:57.384489 systemd-fsck[880]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jul  2 07:05:57.392721 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul  2 07:05:57.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:57.411948 kernel: audit: type=1130 audit(1719903957.396:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:57.413294 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul  2 07:05:57.502894 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Quota mode: none.
Jul  2 07:05:57.503226 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul  2 07:05:57.507795 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul  2 07:05:57.546033 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul  2 07:05:57.552092 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul  2 07:05:57.570904 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (889)
Jul  2 07:05:57.570968 kernel: BTRFS info (device sda6): first mount of filesystem f7c77bfb-d479-47f3-a34e-515c95184b74
Jul  2 07:05:57.574894 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul  2 07:05:57.579810 kernel: BTRFS info (device sda6): using free space tree
Jul  2 07:05:57.580245 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul  2 07:05:57.586684 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul  2 07:05:57.586736 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul  2 07:05:57.599314 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul  2 07:05:57.604409 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul  2 07:05:57.612276 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul  2 07:05:58.119193 systemd-networkd[834]: eth0: Gained IPv6LL
Jul  2 07:05:58.267076 initrd-setup-root[914]: cut: /sysroot/etc/passwd: No such file or directory
Jul  2 07:05:58.315028 initrd-setup-root[924]: cut: /sysroot/etc/group: No such file or directory
Jul  2 07:05:58.324618 coreos-metadata[891]: Jul 02 07:05:58.324 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul  2 07:05:58.331572 coreos-metadata[891]: Jul 02 07:05:58.331 INFO Fetch successful
Jul  2 07:05:58.335383 coreos-metadata[891]: Jul 02 07:05:58.331 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jul  2 07:05:58.345897 coreos-metadata[891]: Jul 02 07:05:58.345 INFO Fetch successful
Jul  2 07:05:58.349294 coreos-metadata[891]: Jul 02 07:05:58.346 INFO wrote hostname ci-3815.2.5-a-b9d6671d68 to /sysroot/etc/hostname
Jul  2 07:05:58.356660 initrd-setup-root[931]: cut: /sysroot/etc/shadow: No such file or directory
Jul  2 07:05:58.352987 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul  2 07:05:58.376524 kernel: audit: type=1130 audit(1719903958.362:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:58.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:58.378743 initrd-setup-root[938]: cut: /sysroot/etc/gshadow: No such file or directory
Jul  2 07:05:59.088322 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul  2 07:05:59.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:59.108903 kernel: audit: type=1130 audit(1719903959.096:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:59.109185 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul  2 07:05:59.113619 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul  2 07:05:59.132655 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul  2 07:05:59.142430 kernel: BTRFS info (device sda6): last unmount of filesystem f7c77bfb-d479-47f3-a34e-515c95184b74
Jul  2 07:05:59.165744 ignition[1005]: INFO     : Ignition 2.15.0
Jul  2 07:05:59.169116 ignition[1005]: INFO     : Stage: mount
Jul  2 07:05:59.173559 ignition[1005]: INFO     : no configs at "/usr/lib/ignition/base.d"
Jul  2 07:05:59.173559 ignition[1005]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul  2 07:05:59.173559 ignition[1005]: INFO     : mount: mount passed
Jul  2 07:05:59.173559 ignition[1005]: INFO     : Ignition finished successfully
Jul  2 07:05:59.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:59.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:05:59.171297 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul  2 07:05:59.180003 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul  2 07:05:59.209112 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul  2 07:05:59.223081 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul  2 07:05:59.259905 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1014)
Jul  2 07:05:59.264904 kernel: BTRFS info (device sda6): first mount of filesystem f7c77bfb-d479-47f3-a34e-515c95184b74
Jul  2 07:05:59.264967 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul  2 07:05:59.269275 kernel: BTRFS info (device sda6): using free space tree
Jul  2 07:05:59.273332 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul  2 07:05:59.295278 ignition[1032]: INFO     : Ignition 2.15.0
Jul  2 07:05:59.295278 ignition[1032]: INFO     : Stage: files
Jul  2 07:05:59.299914 ignition[1032]: INFO     : no configs at "/usr/lib/ignition/base.d"
Jul  2 07:05:59.299914 ignition[1032]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul  2 07:05:59.299914 ignition[1032]: DEBUG    : files: compiled without relabeling support, skipping
Jul  2 07:05:59.299914 ignition[1032]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Jul  2 07:05:59.299914 ignition[1032]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul  2 07:05:59.428515 ignition[1032]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul  2 07:05:59.435090 ignition[1032]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Jul  2 07:05:59.435090 ignition[1032]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul  2 07:05:59.428948 unknown[1032]: wrote ssh authorized keys file for user: core
Jul  2 07:05:59.450887 ignition[1032]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul  2 07:05:59.461594 ignition[1032]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul  2 07:05:59.525828 ignition[1032]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul  2 07:05:59.635633 ignition[1032]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul  2 07:05:59.643506 ignition[1032]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/home/core/install.sh"
Jul  2 07:05:59.643506 ignition[1032]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul  2 07:05:59.643506 ignition[1032]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/home/core/nginx.yaml"
Jul  2 07:05:59.643506 ignition[1032]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul  2 07:05:59.643506 ignition[1032]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Jul  2 07:05:59.643506 ignition[1032]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul  2 07:05:59.643506 ignition[1032]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul  2 07:05:59.643506 ignition[1032]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul  2 07:05:59.643506 ignition[1032]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Jul  2 07:05:59.643506 ignition[1032]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul  2 07:05:59.643506 ignition[1032]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jul  2 07:05:59.643506 ignition[1032]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jul  2 07:05:59.643506 ignition[1032]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jul  2 07:05:59.643506 ignition[1032]: INFO     : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jul  2 07:06:00.211251 ignition[1032]: INFO     : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul  2 07:06:00.610155 ignition[1032]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jul  2 07:06:00.610155 ignition[1032]: INFO     : files: op(b): [started]  processing unit "prepare-helm.service"
Jul  2 07:06:00.626797 ignition[1032]: INFO     : files: op(b): op(c): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul  2 07:06:00.637551 ignition[1032]: INFO     : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul  2 07:06:00.637551 ignition[1032]: INFO     : files: op(b): [finished] processing unit "prepare-helm.service"
Jul  2 07:06:00.637551 ignition[1032]: INFO     : files: op(d): [started]  setting preset to enabled for "prepare-helm.service"
Jul  2 07:06:00.637551 ignition[1032]: INFO     : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jul  2 07:06:00.637551 ignition[1032]: INFO     : files: createResultFile: createFiles: op(e): [started]  writing file "/sysroot/etc/.ignition-result.json"
Jul  2 07:06:00.637551 ignition[1032]: INFO     : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul  2 07:06:00.637551 ignition[1032]: INFO     : files: files passed
Jul  2 07:06:00.637551 ignition[1032]: INFO     : Ignition finished successfully
Jul  2 07:06:00.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:00.628731 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul  2 07:06:00.660212 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul  2 07:06:00.674638 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul  2 07:06:00.719609 initrd-setup-root-after-ignition[1062]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul  2 07:06:00.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:00.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:00.723638 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul  2 07:06:00.723771 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul  2 07:06:00.741416 initrd-setup-root-after-ignition[1058]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul  2 07:06:00.741416 initrd-setup-root-after-ignition[1058]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul  2 07:06:00.753491 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul  2 07:06:00.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:00.755834 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul  2 07:06:00.760997 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul  2 07:06:00.794048 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul  2 07:06:00.794179 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul  2 07:06:00.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:00.804000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:00.805518 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul  2 07:06:00.813927 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul  2 07:06:00.816008 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul  2 07:06:00.828679 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul  2 07:06:00.847058 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul  2 07:06:00.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:00.858145 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul  2 07:06:00.875460 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul  2 07:06:00.875799 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul  2 07:06:00.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:00.876046 systemd[1]: Stopped target timers.target - Timer Units.
Jul  2 07:06:00.878271 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul  2 07:06:00.878386 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul  2 07:06:00.896412 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul  2 07:06:00.904673 systemd[1]: Stopped target basic.target - Basic System.
Jul  2 07:06:00.923542 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul  2 07:06:00.930189 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul  2 07:06:00.932999 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul  2 07:06:00.935900 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul  2 07:06:00.954311 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul  2 07:06:00.963649 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul  2 07:06:00.972367 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul  2 07:06:00.982259 systemd[1]: Stopped target local-fs-pre.target - Preparation for Local File Systems.
Jul  2 07:06:00.987752 systemd[1]: Stopped target swap.target - Swaps.
Jul  2 07:06:00.993473 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul  2 07:06:00.993653 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul  2 07:06:01.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:01.006461 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul  2 07:06:01.013007 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul  2 07:06:01.014202 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul  2 07:06:01.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:01.029238 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul  2 07:06:01.031388 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul  2 07:06:01.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:01.039283 systemd[1]: ignition-files.service: Deactivated successfully.
Jul  2 07:06:01.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:01.041238 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul  2 07:06:01.048683 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul  2 07:06:01.049495 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul  2 07:06:01.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:01.072584 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul  2 07:06:01.080449 systemd[1]: Stopping iscsid.service - Open-iSCSI...
Jul  2 07:06:01.083098 iscsid[840]: iscsid shutting down.
Jul  2 07:06:01.089673 ignition[1076]: INFO     : Ignition 2.15.0
Jul  2 07:06:01.089673 ignition[1076]: INFO     : Stage: umount
Jul  2 07:06:01.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:01.092294 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul  2 07:06:01.110719 ignition[1076]: INFO     : no configs at "/usr/lib/ignition/base.d"
Jul  2 07:06:01.110719 ignition[1076]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul  2 07:06:01.110719 ignition[1076]: INFO     : umount: umount passed
Jul  2 07:06:01.110719 ignition[1076]: INFO     : Ignition finished successfully
Jul  2 07:06:01.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:01.095050 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul  2 07:06:01.095238 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul  2 07:06:01.102712 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul  2 07:06:01.102831 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul  2 07:06:01.129624 systemd[1]: iscsid.service: Deactivated successfully.
Jul  2 07:06:01.129764 systemd[1]: Stopped iscsid.service - Open-iSCSI.
Jul  2 07:06:01.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:01.142468 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul  2 07:06:01.142585 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul  2 07:06:01.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:01.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:01.148281 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul  2 07:06:01.148397 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul  2 07:06:01.153411 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul  2 07:06:01.153472 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul  2 07:06:01.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:01.167190 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul  2 07:06:01.178114 kernel: kauditd_printk_skb: 20 callbacks suppressed
Jul  2 07:06:01.178142 kernel: audit: type=1131 audit(1719903961.166:58): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:01.167278 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul  2 07:06:01.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:01.185855 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul  2 07:06:01.197043 kernel: audit: type=1131 audit(1719903961.184:59): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:01.185954 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul  2 07:06:01.199000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:01.200202 systemd[1]: Stopped target paths.target - Path Units.
Jul  2 07:06:01.214380 kernel: audit: type=1131 audit(1719903961.199:60): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:01.214428 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul  2 07:06:01.217347 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul  2 07:06:01.257130 kernel: audit: type=1131 audit(1719903961.241:61): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:01.241000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:01.228484 systemd[1]: Stopped target slices.target - Slice Units.
Jul  2 07:06:01.233466 systemd[1]: Stopped target sockets.target - Socket Units.
Jul  2 07:06:01.236483 systemd[1]: iscsid.socket: Deactivated successfully.
Jul  2 07:06:01.237505 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul  2 07:06:01.239466 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul  2 07:06:01.240237 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul  2 07:06:01.258009 systemd[1]: Stopping iscsiuio.service - iSCSI UserSpace I/O driver...
Jul  2 07:06:01.286593 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul  2 07:06:01.287801 systemd[1]: iscsiuio.service: Deactivated successfully.
Jul  2 07:06:01.287900 systemd[1]: Stopped iscsiuio.service - iSCSI UserSpace I/O driver.
Jul  2 07:06:01.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:01.298705 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul  2 07:06:01.317844 kernel: audit: type=1131 audit(1719903961.293:62): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:01.298815 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul  2 07:06:01.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:01.325285 systemd[1]: Stopped target network.target - Network.
Jul  2 07:06:01.353014 kernel: audit: type=1130 audit(1719903961.322:63): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:01.353052 kernel: audit: type=1131 audit(1719903961.322:64): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:01.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:01.352989 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul  2 07:06:01.353064 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul  2 07:06:01.359660 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul  2 07:06:01.366253 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul  2 07:06:01.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:01.379975 systemd-networkd[834]: eth0: DHCPv6 lease lost
Jul  2 07:06:01.407396 kernel: audit: type=1131 audit(1719903961.383:65): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:01.407448 kernel: audit: type=1334 audit(1719903961.383:66): prog-id=6 op=UNLOAD
Jul  2 07:06:01.383000 audit: BPF prog-id=6 op=UNLOAD
Jul  2 07:06:01.380127 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul  2 07:06:01.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:01.380256 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul  2 07:06:01.424414 kernel: audit: type=1131 audit(1719903961.406:67): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:01.385430 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul  2 07:06:01.385492 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul  2 07:06:01.428666 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul  2 07:06:01.429085 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul  2 07:06:01.429548 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul  2 07:06:01.448000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:01.449203 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul  2 07:06:01.449252 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul  2 07:06:01.451000 audit: BPF prog-id=9 op=UNLOAD
Jul  2 07:06:01.462358 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul  2 07:06:01.468869 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul  2 07:06:01.469014 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul  2 07:06:01.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:01.477306 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul  2 07:06:01.477368 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul  2 07:06:01.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:01.493036 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul  2 07:06:01.493114 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul  2 07:06:01.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:01.511325 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul  2 07:06:01.520020 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul  2 07:06:01.529550 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul  2 07:06:01.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:01.529786 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul  2 07:06:01.536622 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul  2 07:06:01.536672 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul  2 07:06:01.547187 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul  2 07:06:01.547363 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul  2 07:06:01.567033 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul  2 07:06:01.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:01.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:01.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:01.567120 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul  2 07:06:01.593067 kernel: hv_netvsc 000d3aba-9c54-000d-3aba-9c54000d3aba eth0: Data path switched from VF: enP40558s1
Jul  2 07:06:01.571083 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul  2 07:06:01.571145 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul  2 07:06:01.576198 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul  2 07:06:01.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:01.577573 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul  2 07:06:01.600854 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul  2 07:06:01.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:01.614400 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul  2 07:06:01.614487 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul  2 07:06:01.649000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:01.653000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:01.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:01.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:01.625594 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul  2 07:06:01.626553 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul  2 07:06:01.640479 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul  2 07:06:01.640549 systemd[1]: Stopped systemd-vconsole-setup.service - Setup Virtual Console.
Jul  2 07:06:01.651058 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul  2 07:06:01.651627 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul  2 07:06:01.651748 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul  2 07:06:01.655604 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul  2 07:06:01.655971 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul  2 07:06:02.039336 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul  2 07:06:02.046000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:02.039488 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul  2 07:06:02.048705 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul  2 07:06:02.052973 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul  2 07:06:02.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:02.053045 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul  2 07:06:02.073183 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul  2 07:06:02.087252 systemd[1]: Switching root.
Jul  2 07:06:02.115528 systemd-journald[178]: Journal stopped
Jul  2 07:06:06.131711 systemd-journald[178]: Received SIGTERM from PID 1 (systemd).
Jul  2 07:06:06.131738 kernel: SELinux:  Permission cmd in class io_uring not defined in policy.
Jul  2 07:06:06.131751 kernel: SELinux: the above unknown classes and permissions will be allowed
Jul  2 07:06:06.131761 kernel: SELinux:  policy capability network_peer_controls=1
Jul  2 07:06:06.131769 kernel: SELinux:  policy capability open_perms=1
Jul  2 07:06:06.131780 kernel: SELinux:  policy capability extended_socket_class=1
Jul  2 07:06:06.131789 kernel: SELinux:  policy capability always_check_network=0
Jul  2 07:06:06.131802 kernel: SELinux:  policy capability cgroup_seclabel=1
Jul  2 07:06:06.131811 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jul  2 07:06:06.131819 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Jul  2 07:06:06.131830 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Jul  2 07:06:06.131839 systemd[1]: Successfully loaded SELinux policy in 260.292ms.
Jul  2 07:06:06.131851 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.724ms.
Jul  2 07:06:06.131862 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul  2 07:06:06.131886 systemd[1]: Detected virtualization microsoft.
Jul  2 07:06:06.131896 systemd[1]: Detected architecture x86-64.
Jul  2 07:06:06.131908 systemd[1]: Detected first boot.
Jul  2 07:06:06.131917 systemd[1]: Hostname set to <ci-3815.2.5-a-b9d6671d68>.
Jul  2 07:06:06.131930 systemd[1]: Initializing machine ID from random generator.
Jul  2 07:06:06.131943 systemd[1]: Populated /etc with preset unit settings.
Jul  2 07:06:06.131954 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul  2 07:06:06.131963 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul  2 07:06:06.131975 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul  2 07:06:06.131985 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul  2 07:06:06.131997 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul  2 07:06:06.132007 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul  2 07:06:06.132022 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul  2 07:06:06.132031 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul  2 07:06:06.132043 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul  2 07:06:06.132053 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul  2 07:06:06.132062 systemd[1]: Created slice user.slice - User and Session Slice.
Jul  2 07:06:06.132074 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul  2 07:06:06.132084 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul  2 07:06:06.132096 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul  2 07:06:06.132108 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul  2 07:06:06.132124 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul  2 07:06:06.132134 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul  2 07:06:06.132146 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul  2 07:06:06.132155 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul  2 07:06:06.132167 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul  2 07:06:06.132180 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul  2 07:06:06.132192 systemd[1]: Reached target slices.target - Slice Units.
Jul  2 07:06:06.132204 systemd[1]: Reached target swap.target - Swaps.
Jul  2 07:06:06.132215 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul  2 07:06:06.132227 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul  2 07:06:06.132236 systemd[1]: Listening on systemd-initctl.socket - initctl Compatibility Named Pipe.
Jul  2 07:06:06.132249 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul  2 07:06:06.132259 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul  2 07:06:06.132271 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul  2 07:06:06.132281 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul  2 07:06:06.132295 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul  2 07:06:06.132306 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul  2 07:06:06.132318 systemd[1]: Mounting media.mount - External Media Directory...
Jul  2 07:06:06.132328 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul  2 07:06:06.132339 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul  2 07:06:06.132354 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul  2 07:06:06.132367 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul  2 07:06:06.132378 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul  2 07:06:06.132391 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul  2 07:06:06.132401 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul  2 07:06:06.132414 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul  2 07:06:06.132424 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul  2 07:06:06.132436 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul  2 07:06:06.132449 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul  2 07:06:06.132462 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul  2 07:06:06.132472 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul  2 07:06:06.132484 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul  2 07:06:06.132495 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul  2 07:06:06.132507 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul  2 07:06:06.132518 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul  2 07:06:06.132528 systemd[1]: Stopped systemd-fsck-usr.service.
Jul  2 07:06:06.132540 systemd[1]: Stopped systemd-journald.service - Journal Service.
Jul  2 07:06:06.132553 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul  2 07:06:06.132565 kernel: fuse: init (API version 7.37)
Jul  2 07:06:06.132574 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul  2 07:06:06.132587 kernel: loop: module loaded
Jul  2 07:06:06.132596 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul  2 07:06:06.132610 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul  2 07:06:06.132621 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul  2 07:06:06.132633 systemd[1]: verity-setup.service: Deactivated successfully.
Jul  2 07:06:06.132646 systemd[1]: Stopped verity-setup.service.
Jul  2 07:06:06.132658 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul  2 07:06:06.132672 systemd-journald[1208]: Journal started
Jul  2 07:06:06.132714 systemd-journald[1208]: Runtime Journal (/run/log/journal/690ea1beff094d318077cd829f7a1468) is 8.0M, max 158.8M, 150.8M free.
Jul  2 07:06:03.510000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul  2 07:06:03.914000 audit: BPF prog-id=10 op=LOAD
Jul  2 07:06:03.914000 audit: BPF prog-id=10 op=UNLOAD
Jul  2 07:06:03.914000 audit: BPF prog-id=11 op=LOAD
Jul  2 07:06:03.914000 audit: BPF prog-id=11 op=UNLOAD
Jul  2 07:06:05.629000 audit: BPF prog-id=12 op=LOAD
Jul  2 07:06:05.629000 audit: BPF prog-id=3 op=UNLOAD
Jul  2 07:06:05.630000 audit: BPF prog-id=13 op=LOAD
Jul  2 07:06:05.630000 audit: BPF prog-id=14 op=LOAD
Jul  2 07:06:05.630000 audit: BPF prog-id=4 op=UNLOAD
Jul  2 07:06:05.630000 audit: BPF prog-id=5 op=UNLOAD
Jul  2 07:06:05.630000 audit: BPF prog-id=15 op=LOAD
Jul  2 07:06:05.630000 audit: BPF prog-id=12 op=UNLOAD
Jul  2 07:06:05.630000 audit: BPF prog-id=16 op=LOAD
Jul  2 07:06:05.630000 audit: BPF prog-id=17 op=LOAD
Jul  2 07:06:05.630000 audit: BPF prog-id=13 op=UNLOAD
Jul  2 07:06:05.630000 audit: BPF prog-id=14 op=UNLOAD
Jul  2 07:06:05.631000 audit: BPF prog-id=18 op=LOAD
Jul  2 07:06:05.631000 audit: BPF prog-id=15 op=UNLOAD
Jul  2 07:06:05.631000 audit: BPF prog-id=19 op=LOAD
Jul  2 07:06:05.631000 audit: BPF prog-id=20 op=LOAD
Jul  2 07:06:05.631000 audit: BPF prog-id=16 op=UNLOAD
Jul  2 07:06:05.631000 audit: BPF prog-id=17 op=UNLOAD
Jul  2 07:06:05.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:05.643000 audit: BPF prog-id=18 op=UNLOAD
Jul  2 07:06:05.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:05.644000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:06.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:06.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:06.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:06.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:06.042000 audit: BPF prog-id=21 op=LOAD
Jul  2 07:06:06.042000 audit: BPF prog-id=22 op=LOAD
Jul  2 07:06:06.042000 audit: BPF prog-id=23 op=LOAD
Jul  2 07:06:06.042000 audit: BPF prog-id=19 op=UNLOAD
Jul  2 07:06:06.042000 audit: BPF prog-id=20 op=UNLOAD
Jul  2 07:06:06.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:06.128000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Jul  2 07:06:06.128000 audit[1208]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffe8333c1e0 a2=4000 a3=7ffe8333c27c items=0 ppid=1 pid=1208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:06:06.128000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Jul  2 07:06:05.618715 systemd[1]: Queued start job for default target multi-user.target.
Jul  2 07:06:05.618728 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jul  2 07:06:05.633068 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul  2 07:06:06.145198 systemd[1]: Started systemd-journald.service - Journal Service.
Jul  2 07:06:06.146110 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul  2 07:06:06.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:06.149615 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul  2 07:06:06.152923 systemd[1]: Mounted media.mount - External Media Directory.
Jul  2 07:06:06.155802 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul  2 07:06:06.160139 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul  2 07:06:06.165719 kernel: ACPI: bus type drm_connector registered
Jul  2 07:06:06.165642 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul  2 07:06:06.168482 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul  2 07:06:06.185543 kernel: kauditd_printk_skb: 58 callbacks suppressed
Jul  2 07:06:06.185605 kernel: audit: type=1130 audit(1719903966.170:124): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:06.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:06.172001 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul  2 07:06:06.185057 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul  2 07:06:06.185258 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul  2 07:06:06.199927 kernel: audit: type=1130 audit(1719903966.183:125): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:06.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:06.188741 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul  2 07:06:06.188955 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul  2 07:06:06.199555 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul  2 07:06:06.199733 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul  2 07:06:06.203018 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul  2 07:06:06.203185 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul  2 07:06:06.206393 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul  2 07:06:06.206560 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul  2 07:06:06.220459 kernel: audit: type=1130 audit(1719903966.187:126): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:06.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:06.220084 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul  2 07:06:06.220271 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul  2 07:06:06.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:06.230222 kernel: audit: type=1131 audit(1719903966.187:127): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:06.230794 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul  2 07:06:06.239884 kernel: audit: type=1130 audit(1719903966.199:128): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:06.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:06.242354 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul  2 07:06:06.199000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:06.252959 kernel: audit: type=1131 audit(1719903966.199:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:06.252823 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul  2 07:06:06.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:06.273338 kernel: audit: type=1130 audit(1719903966.201:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:06.273438 kernel: audit: type=1131 audit(1719903966.201:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:06.201000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:06.274550 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul  2 07:06:06.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:06.292427 kernel: audit: type=1130 audit(1719903966.205:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:06.292481 kernel: audit: type=1131 audit(1719903966.205:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:06.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:06.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:06.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:06.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:06.229000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:06.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:06.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:06.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:06.303114 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul  2 07:06:06.312938 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul  2 07:06:06.316145 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul  2 07:06:06.318683 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul  2 07:06:06.324026 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul  2 07:06:06.330667 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul  2 07:06:06.337094 systemd[1]: Starting systemd-random-seed.service - Load/Save Random Seed...
Jul  2 07:06:06.340116 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul  2 07:06:06.342544 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul  2 07:06:06.347626 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul  2 07:06:06.352999 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul  2 07:06:06.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:06.358393 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul  2 07:06:06.361912 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul  2 07:06:06.365605 systemd-journald[1208]: Time spent on flushing to /var/log/journal/690ea1beff094d318077cd829f7a1468 is 32.469ms for 1111 entries.
Jul  2 07:06:06.365605 systemd-journald[1208]: System Journal (/var/log/journal/690ea1beff094d318077cd829f7a1468) is 8.0M, max 2.6G, 2.6G free.
Jul  2 07:06:06.415826 systemd-journald[1208]: Received client request to flush runtime journal.
Jul  2 07:06:06.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:06.370125 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul  2 07:06:06.417171 udevadm[1225]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jul  2 07:06:06.374438 systemd[1]: Finished systemd-random-seed.service - Load/Save Random Seed.
Jul  2 07:06:06.377949 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul  2 07:06:06.417384 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul  2 07:06:06.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:06.448541 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul  2 07:06:06.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:06.534377 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul  2 07:06:06.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:06.542100 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul  2 07:06:06.656063 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul  2 07:06:06.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:07.660557 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul  2 07:06:07.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:07.669000 audit: BPF prog-id=24 op=LOAD
Jul  2 07:06:07.669000 audit: BPF prog-id=25 op=LOAD
Jul  2 07:06:07.669000 audit: BPF prog-id=7 op=UNLOAD
Jul  2 07:06:07.669000 audit: BPF prog-id=8 op=UNLOAD
Jul  2 07:06:07.677197 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul  2 07:06:07.712690 systemd-udevd[1230]: Using default interface naming scheme 'v252'.
Jul  2 07:06:07.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:07.892000 audit: BPF prog-id=26 op=LOAD
Jul  2 07:06:07.888276 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul  2 07:06:07.898122 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul  2 07:06:07.970918 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul  2 07:06:07.979923 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1248)
Jul  2 07:06:07.980000 audit: BPF prog-id=27 op=LOAD
Jul  2 07:06:07.981000 audit: BPF prog-id=28 op=LOAD
Jul  2 07:06:07.981000 audit: BPF prog-id=29 op=LOAD
Jul  2 07:06:07.986105 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul  2 07:06:08.059337 kernel: mousedev: PS/2 mouse device common for all mice
Jul  2 07:06:08.084225 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul  2 07:06:08.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:08.094934 kernel: hv_vmbus: registering driver hyperv_fb
Jul  2 07:06:08.109866 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jul  2 07:06:08.110175 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jul  2 07:06:08.117730 kernel: Console: switching to colour dummy device 80x25
Jul  2 07:06:08.121502 kernel: Console: switching to colour frame buffer device 128x48
Jul  2 07:06:08.147793 kernel: hv_utils: Registering HyperV Utility Driver
Jul  2 07:06:08.147901 kernel: hv_vmbus: registering driver hv_utils
Jul  2 07:06:08.161904 kernel: hv_vmbus: registering driver hv_balloon
Jul  2 07:06:08.166954 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jul  2 07:06:08.176354 kernel: hv_utils: Shutdown IC version 3.2
Jul  2 07:06:08.176459 kernel: hv_utils: Heartbeat IC version 3.0
Jul  2 07:06:08.176505 kernel: hv_utils: TimeSync IC version 4.0
Jul  2 07:06:08.554886 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1252)
Jul  2 07:06:08.623013 systemd-networkd[1236]: lo: Link UP
Jul  2 07:06:08.623385 systemd-networkd[1236]: lo: Gained carrier
Jul  2 07:06:08.624572 systemd-networkd[1236]: Enumeration completed
Jul  2 07:06:08.625101 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul  2 07:06:08.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:08.633465 systemd-networkd[1236]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul  2 07:06:08.633615 systemd-networkd[1236]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul  2 07:06:08.635097 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul  2 07:06:08.685710 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jul  2 07:06:08.708890 kernel: mlx5_core 9e6e:00:02.0 enP40558s1: Link up
Jul  2 07:06:08.729884 kernel: hv_netvsc 000d3aba-9c54-000d-3aba-9c54000d3aba eth0: Data path switched to VF: enP40558s1
Jul  2 07:06:08.730507 systemd-networkd[1236]: enP40558s1: Link UP
Jul  2 07:06:08.730799 systemd-networkd[1236]: eth0: Link UP
Jul  2 07:06:08.730887 systemd-networkd[1236]: eth0: Gained carrier
Jul  2 07:06:08.730965 systemd-networkd[1236]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul  2 07:06:08.736181 systemd-networkd[1236]: enP40558s1: Gained carrier
Jul  2 07:06:08.763892 kernel: KVM: vmx: using Hyper-V Enlightened VMCS
Jul  2 07:06:08.773007 systemd-networkd[1236]: eth0: DHCPv4 address 10.200.8.44/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jul  2 07:06:08.934287 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul  2 07:06:08.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:08.945204 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul  2 07:06:09.002910 lvm[1312]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul  2 07:06:09.068283 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul  2 07:06:09.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:09.072511 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul  2 07:06:09.088182 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul  2 07:06:09.096548 lvm[1313]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul  2 07:06:09.123219 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul  2 07:06:09.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:09.127181 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul  2 07:06:09.131549 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul  2 07:06:09.131597 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul  2 07:06:09.134949 systemd[1]: Reached target machines.target - Containers.
Jul  2 07:06:09.153197 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul  2 07:06:09.156732 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul  2 07:06:09.156843 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul  2 07:06:09.158990 systemd[1]: Starting systemd-boot-update.service - Automatic Boot Loader Update...
Jul  2 07:06:09.164692 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul  2 07:06:09.171016 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul  2 07:06:09.178638 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul  2 07:06:09.194997 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1315 (bootctl)
Jul  2 07:06:09.201104 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM...
Jul  2 07:06:09.314890 kernel: loop0: detected capacity change from 0 to 55568
Jul  2 07:06:09.340822 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul  2 07:06:09.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:09.343117 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul  2 07:06:09.350546 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul  2 07:06:09.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:09.645892 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul  2 07:06:09.715895 kernel: loop1: detected capacity change from 0 to 139360
Jul  2 07:06:10.116890 kernel: loop2: detected capacity change from 0 to 210664
Jul  2 07:06:10.145901 kernel: loop3: detected capacity change from 0 to 80600
Jul  2 07:06:10.232900 systemd-fsck[1322]: fsck.fat 4.2 (2021-01-31)
Jul  2 07:06:10.232900 systemd-fsck[1322]: /dev/sda1: 808 files, 120378/258078 clusters
Jul  2 07:06:10.235540 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM.
Jul  2 07:06:10.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:10.246858 systemd[1]: Mounting boot.mount - Boot partition...
Jul  2 07:06:10.260562 systemd[1]: Mounted boot.mount - Boot partition.
Jul  2 07:06:10.277473 systemd[1]: Finished systemd-boot-update.service - Automatic Boot Loader Update.
Jul  2 07:06:10.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:10.518903 kernel: loop4: detected capacity change from 0 to 55568
Jul  2 07:06:10.525897 kernel: loop5: detected capacity change from 0 to 139360
Jul  2 07:06:10.539020 kernel: loop6: detected capacity change from 0 to 210664
Jul  2 07:06:10.546907 kernel: loop7: detected capacity change from 0 to 80600
Jul  2 07:06:10.555257 (sd-sysext)[1332]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Jul  2 07:06:10.555818 (sd-sysext)[1332]: Merged extensions into '/usr'.
Jul  2 07:06:10.557681 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul  2 07:06:10.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:10.565194 systemd[1]: Starting ensure-sysext.service...
Jul  2 07:06:10.569912 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul  2 07:06:10.594576 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Jul  2 07:06:10.609800 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul  2 07:06:10.610567 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul  2 07:06:10.613037 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul  2 07:06:10.616678 systemd[1]: Reloading.
Jul  2 07:06:10.788032 systemd-networkd[1236]: eth0: Gained IPv6LL
Jul  2 07:06:10.832839 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul  2 07:06:10.908000 audit: BPF prog-id=30 op=LOAD
Jul  2 07:06:10.908000 audit: BPF prog-id=26 op=UNLOAD
Jul  2 07:06:10.909000 audit: BPF prog-id=31 op=LOAD
Jul  2 07:06:10.909000 audit: BPF prog-id=21 op=UNLOAD
Jul  2 07:06:10.909000 audit: BPF prog-id=32 op=LOAD
Jul  2 07:06:10.910000 audit: BPF prog-id=33 op=LOAD
Jul  2 07:06:10.910000 audit: BPF prog-id=22 op=UNLOAD
Jul  2 07:06:10.910000 audit: BPF prog-id=23 op=UNLOAD
Jul  2 07:06:10.910000 audit: BPF prog-id=34 op=LOAD
Jul  2 07:06:10.911000 audit: BPF prog-id=27 op=UNLOAD
Jul  2 07:06:10.911000 audit: BPF prog-id=35 op=LOAD
Jul  2 07:06:10.911000 audit: BPF prog-id=36 op=LOAD
Jul  2 07:06:10.911000 audit: BPF prog-id=28 op=UNLOAD
Jul  2 07:06:10.911000 audit: BPF prog-id=29 op=UNLOAD
Jul  2 07:06:10.911000 audit: BPF prog-id=37 op=LOAD
Jul  2 07:06:10.911000 audit: BPF prog-id=38 op=LOAD
Jul  2 07:06:10.911000 audit: BPF prog-id=24 op=UNLOAD
Jul  2 07:06:10.912000 audit: BPF prog-id=25 op=UNLOAD
Jul  2 07:06:10.918233 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul  2 07:06:10.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:10.927111 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul  2 07:06:10.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:10.940150 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul  2 07:06:10.970212 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul  2 07:06:10.977448 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul  2 07:06:10.982000 audit: BPF prog-id=39 op=LOAD
Jul  2 07:06:10.987178 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul  2 07:06:10.991000 audit: BPF prog-id=40 op=LOAD
Jul  2 07:06:11.003179 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul  2 07:06:11.018139 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul  2 07:06:11.031613 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul  2 07:06:11.031997 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul  2 07:06:11.032000 audit[1422]: SYSTEM_BOOT pid=1422 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:11.039999 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul  2 07:06:11.051389 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul  2 07:06:11.069443 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul  2 07:06:11.072971 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul  2 07:06:11.073235 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul  2 07:06:11.073429 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul  2 07:06:11.075167 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul  2 07:06:11.075376 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul  2 07:06:11.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:11.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:11.086522 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul  2 07:06:11.087018 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul  2 07:06:11.100264 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul  2 07:06:11.104053 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul  2 07:06:11.104333 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul  2 07:06:11.104552 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul  2 07:06:11.106157 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul  2 07:06:11.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:11.115612 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul  2 07:06:11.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:11.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:11.116929 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul  2 07:06:11.121245 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul  2 07:06:11.121425 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul  2 07:06:11.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:11.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:11.129835 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul  2 07:06:11.133464 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul  2 07:06:11.133919 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul  2 07:06:11.139496 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul  2 07:06:11.145037 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul  2 07:06:11.158511 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul  2 07:06:11.162188 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul  2 07:06:11.162467 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul  2 07:06:11.162740 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul  2 07:06:11.164390 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul  2 07:06:11.164622 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul  2 07:06:11.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:11.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:11.169002 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul  2 07:06:11.169216 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul  2 07:06:11.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:11.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:11.176662 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul  2 07:06:11.176879 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul  2 07:06:11.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:11.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:11.185311 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul  2 07:06:11.185530 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul  2 07:06:11.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:11.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:11.190780 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul  2 07:06:11.190979 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul  2 07:06:11.194312 systemd[1]: Finished ensure-sysext.service.
Jul  2 07:06:11.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:11.212772 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul  2 07:06:11.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:11.233337 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul  2 07:06:11.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:06:11.237311 systemd[1]: Reached target time-set.target - System Time Set.
Jul  2 07:06:11.264537 systemd-resolved[1417]: Positive Trust Anchors:
Jul  2 07:06:11.264559 systemd-resolved[1417]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul  2 07:06:11.264596 systemd-resolved[1417]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul  2 07:06:11.280121 augenrules[1440]: No rules
Jul  2 07:06:11.279000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Jul  2 07:06:11.279000 audit[1440]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff6fcf1f00 a2=420 a3=0 items=0 ppid=1413 pid=1440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:06:11.279000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jul  2 07:06:11.281049 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul  2 07:06:11.295570 systemd-resolved[1417]: Using system hostname 'ci-3815.2.5-a-b9d6671d68'.
Jul  2 07:06:11.297580 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul  2 07:06:11.301202 systemd[1]: Reached target network.target - Network.
Jul  2 07:06:11.306741 systemd[1]: Reached target network-online.target - Network is Online.
Jul  2 07:06:11.310351 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul  2 07:06:11.317965 systemd-timesyncd[1418]: Contacted time server 193.1.12.167:123 (0.flatcar.pool.ntp.org).
Jul  2 07:06:11.318122 systemd-timesyncd[1418]: Initial clock synchronization to Tue 2024-07-02 07:06:11.318297 UTC.
Jul  2 07:06:12.109551 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul  2 07:06:12.113067 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul  2 07:06:14.191538 ldconfig[1314]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul  2 07:06:14.205281 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul  2 07:06:14.212159 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul  2 07:06:14.224896 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul  2 07:06:14.228349 systemd[1]: Reached target sysinit.target - System Initialization.
Jul  2 07:06:14.231405 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul  2 07:06:14.234736 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul  2 07:06:14.237993 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul  2 07:06:14.243235 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul  2 07:06:14.246462 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul  2 07:06:14.249527 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul  2 07:06:14.249579 systemd[1]: Reached target paths.target - Path Units.
Jul  2 07:06:14.251968 systemd[1]: Reached target timers.target - Timer Units.
Jul  2 07:06:14.255431 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul  2 07:06:14.260108 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul  2 07:06:14.277915 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul  2 07:06:14.281398 systemd[1]: systemd-pcrphase-sysinit.service - TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul  2 07:06:14.281984 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul  2 07:06:14.285077 systemd[1]: Reached target sockets.target - Socket Units.
Jul  2 07:06:14.287880 systemd[1]: Reached target basic.target - Basic System.
Jul  2 07:06:14.291083 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul  2 07:06:14.291117 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul  2 07:06:14.301099 systemd[1]: Starting containerd.service - containerd container runtime...
Jul  2 07:06:14.306714 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul  2 07:06:14.311388 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul  2 07:06:14.315805 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul  2 07:06:14.320816 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul  2 07:06:14.322525 jq[1454]: false
Jul  2 07:06:14.324422 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul  2 07:06:14.326950 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul  2 07:06:14.334212 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul  2 07:06:14.338803 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul  2 07:06:14.344235 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul  2 07:06:14.349156 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul  2 07:06:14.354088 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul  2 07:06:14.361512 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul  2 07:06:14.365221 systemd[1]: systemd-pcrphase.service - TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul  2 07:06:14.365356 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul  2 07:06:14.366085 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul  2 07:06:14.368021 systemd[1]: Starting update-engine.service - Update Engine...
Jul  2 07:06:14.379767 extend-filesystems[1455]: Found loop4
Jul  2 07:06:14.398109 extend-filesystems[1455]: Found loop5
Jul  2 07:06:14.398109 extend-filesystems[1455]: Found loop6
Jul  2 07:06:14.398109 extend-filesystems[1455]: Found loop7
Jul  2 07:06:14.398109 extend-filesystems[1455]: Found sda
Jul  2 07:06:14.398109 extend-filesystems[1455]: Found sda1
Jul  2 07:06:14.398109 extend-filesystems[1455]: Found sda2
Jul  2 07:06:14.398109 extend-filesystems[1455]: Found sda3
Jul  2 07:06:14.398109 extend-filesystems[1455]: Found usr
Jul  2 07:06:14.398109 extend-filesystems[1455]: Found sda4
Jul  2 07:06:14.398109 extend-filesystems[1455]: Found sda6
Jul  2 07:06:14.398109 extend-filesystems[1455]: Found sda7
Jul  2 07:06:14.398109 extend-filesystems[1455]: Found sda9
Jul  2 07:06:14.398109 extend-filesystems[1455]: Checking size of /dev/sda9
Jul  2 07:06:14.381047 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul  2 07:06:14.388592 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul  2 07:06:14.388920 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul  2 07:06:14.391000 systemd[1]: motdgen.service: Deactivated successfully.
Jul  2 07:06:14.391309 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul  2 07:06:14.416064 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul  2 07:06:14.416351 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul  2 07:06:14.442806 jq[1472]: true
Jul  2 07:06:14.472765 jq[1483]: true
Jul  2 07:06:14.481447 update_engine[1468]: I0702 07:06:14.481350  1468 main.cc:92] Flatcar Update Engine starting
Jul  2 07:06:14.495421 extend-filesystems[1455]: Old size kept for /dev/sda9
Jul  2 07:06:14.495421 extend-filesystems[1455]: Found sr0
Jul  2 07:06:14.493079 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul  2 07:06:14.493326 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul  2 07:06:14.506410 tar[1479]: linux-amd64/helm
Jul  2 07:06:14.509211 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul  2 07:06:14.567307 systemd-logind[1465]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul  2 07:06:14.572783 systemd-logind[1465]: New seat seat0.
Jul  2 07:06:14.593408 dbus-daemon[1453]: [system] SELinux support is enabled
Jul  2 07:06:14.593660 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul  2 07:06:14.600079 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul  2 07:06:14.600116 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul  2 07:06:14.603793 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul  2 07:06:14.603821 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul  2 07:06:14.615396 update_engine[1468]: I0702 07:06:14.615337  1468 update_check_scheduler.cc:74] Next update check in 7m44s
Jul  2 07:06:14.615583 systemd[1]: Started update-engine.service - Update Engine.
Jul  2 07:06:14.626183 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul  2 07:06:14.631690 systemd[1]: Started systemd-logind.service - User Login Management.
Jul  2 07:06:14.694515 bash[1504]: Updated "/home/core/.ssh/authorized_keys"
Jul  2 07:06:14.695330 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul  2 07:06:14.701458 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul  2 07:06:14.985824 coreos-metadata[1450]: Jul 02 07:06:14.985 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul  2 07:06:14.993058 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1511)
Jul  2 07:06:14.995857 coreos-metadata[1450]: Jul 02 07:06:14.995 INFO Fetch successful
Jul  2 07:06:14.995857 coreos-metadata[1450]: Jul 02 07:06:14.995 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jul  2 07:06:15.001515 coreos-metadata[1450]: Jul 02 07:06:15.001 INFO Fetch successful
Jul  2 07:06:15.001515 coreos-metadata[1450]: Jul 02 07:06:15.001 INFO Fetching http://168.63.129.16/machine/e64eb067-ebc0-4ed1-97d9-2e5f50bfa36b/7f0555b3%2D3aed%2D4738%2Dbb53%2D9f6c0f431206.%5Fci%2D3815.2.5%2Da%2Db9d6671d68?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jul  2 07:06:15.003547 coreos-metadata[1450]: Jul 02 07:06:15.003 INFO Fetch successful
Jul  2 07:06:15.003547 coreos-metadata[1450]: Jul 02 07:06:15.003 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jul  2 07:06:15.017894 coreos-metadata[1450]: Jul 02 07:06:15.017 INFO Fetch successful
Jul  2 07:06:15.038055 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jul  2 07:06:15.042329 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul  2 07:06:15.088590 locksmithd[1507]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul  2 07:06:15.690458 tar[1479]: linux-amd64/LICENSE
Jul  2 07:06:15.692031 tar[1479]: linux-amd64/README.md
Jul  2 07:06:15.703632 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul  2 07:06:15.711702 containerd[1481]: time="2024-07-02T07:06:15.711601317Z" level=info msg="starting containerd" revision=99b8088b873ba42b788f29ccd0dc26ebb6952f1e version=v1.7.13
Jul  2 07:06:15.774771 containerd[1481]: time="2024-07-02T07:06:15.774664743Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul  2 07:06:15.775029 containerd[1481]: time="2024-07-02T07:06:15.774996251Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul  2 07:06:15.777465 containerd[1481]: time="2024-07-02T07:06:15.777419610Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.1.96-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul  2 07:06:15.777626 containerd[1481]: time="2024-07-02T07:06:15.777609314Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul  2 07:06:15.778051 containerd[1481]: time="2024-07-02T07:06:15.778023624Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul  2 07:06:15.778170 containerd[1481]: time="2024-07-02T07:06:15.778150027Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul  2 07:06:15.778382 containerd[1481]: time="2024-07-02T07:06:15.778361432Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul  2 07:06:15.778557 containerd[1481]: time="2024-07-02T07:06:15.778527936Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul  2 07:06:15.778669 containerd[1481]: time="2024-07-02T07:06:15.778653139Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul  2 07:06:15.778840 containerd[1481]: time="2024-07-02T07:06:15.778824444Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul  2 07:06:15.779190 containerd[1481]: time="2024-07-02T07:06:15.779170152Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul  2 07:06:15.779284 containerd[1481]: time="2024-07-02T07:06:15.779267654Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul  2 07:06:15.779369 containerd[1481]: time="2024-07-02T07:06:15.779355656Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul  2 07:06:15.779622 containerd[1481]: time="2024-07-02T07:06:15.779603062Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul  2 07:06:15.779704 containerd[1481]: time="2024-07-02T07:06:15.779691065Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul  2 07:06:15.779828 containerd[1481]: time="2024-07-02T07:06:15.779812468Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul  2 07:06:15.779931 containerd[1481]: time="2024-07-02T07:06:15.779919270Z" level=info msg="metadata content store policy set" policy=shared
Jul  2 07:06:15.912899 containerd[1481]: time="2024-07-02T07:06:15.909500606Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul  2 07:06:15.912899 containerd[1481]: time="2024-07-02T07:06:15.909571508Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul  2 07:06:15.912899 containerd[1481]: time="2024-07-02T07:06:15.909591708Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul  2 07:06:15.912899 containerd[1481]: time="2024-07-02T07:06:15.909714011Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul  2 07:06:15.912899 containerd[1481]: time="2024-07-02T07:06:15.909743512Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul  2 07:06:15.912899 containerd[1481]: time="2024-07-02T07:06:15.909760012Z" level=info msg="NRI interface is disabled by configuration."
Jul  2 07:06:15.912899 containerd[1481]: time="2024-07-02T07:06:15.909778513Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul  2 07:06:15.912899 containerd[1481]: time="2024-07-02T07:06:15.909985018Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul  2 07:06:15.912899 containerd[1481]: time="2024-07-02T07:06:15.910005518Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul  2 07:06:15.912899 containerd[1481]: time="2024-07-02T07:06:15.910025919Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul  2 07:06:15.912899 containerd[1481]: time="2024-07-02T07:06:15.910044919Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul  2 07:06:15.912899 containerd[1481]: time="2024-07-02T07:06:15.910072520Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul  2 07:06:15.912899 containerd[1481]: time="2024-07-02T07:06:15.910099020Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul  2 07:06:15.912899 containerd[1481]: time="2024-07-02T07:06:15.910117321Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul  2 07:06:15.913534 containerd[1481]: time="2024-07-02T07:06:15.910135121Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul  2 07:06:15.913534 containerd[1481]: time="2024-07-02T07:06:15.910154322Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul  2 07:06:15.913534 containerd[1481]: time="2024-07-02T07:06:15.910174422Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul  2 07:06:15.913534 containerd[1481]: time="2024-07-02T07:06:15.910191523Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul  2 07:06:15.913534 containerd[1481]: time="2024-07-02T07:06:15.910208323Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul  2 07:06:15.913534 containerd[1481]: time="2024-07-02T07:06:15.910314026Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul  2 07:06:15.913534 containerd[1481]: time="2024-07-02T07:06:15.910634833Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul  2 07:06:15.913534 containerd[1481]: time="2024-07-02T07:06:15.910664334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul  2 07:06:15.913534 containerd[1481]: time="2024-07-02T07:06:15.910682935Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul  2 07:06:15.913534 containerd[1481]: time="2024-07-02T07:06:15.910712335Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul  2 07:06:15.913534 containerd[1481]: time="2024-07-02T07:06:15.910787237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul  2 07:06:15.913534 containerd[1481]: time="2024-07-02T07:06:15.910804037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul  2 07:06:15.913534 containerd[1481]: time="2024-07-02T07:06:15.910820838Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul  2 07:06:15.913534 containerd[1481]: time="2024-07-02T07:06:15.910836938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul  2 07:06:15.914047 containerd[1481]: time="2024-07-02T07:06:15.910855239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul  2 07:06:15.914047 containerd[1481]: time="2024-07-02T07:06:15.911035943Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul  2 07:06:15.914047 containerd[1481]: time="2024-07-02T07:06:15.911068344Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul  2 07:06:15.914047 containerd[1481]: time="2024-07-02T07:06:15.911093044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul  2 07:06:15.914047 containerd[1481]: time="2024-07-02T07:06:15.911124445Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul  2 07:06:15.914047 containerd[1481]: time="2024-07-02T07:06:15.911438853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul  2 07:06:15.914047 containerd[1481]: time="2024-07-02T07:06:15.911471354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul  2 07:06:15.914047 containerd[1481]: time="2024-07-02T07:06:15.911494154Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul  2 07:06:15.914047 containerd[1481]: time="2024-07-02T07:06:15.911516455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul  2 07:06:15.914047 containerd[1481]: time="2024-07-02T07:06:15.911660158Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul  2 07:06:15.914047 containerd[1481]: time="2024-07-02T07:06:15.911688559Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul  2 07:06:15.914047 containerd[1481]: time="2024-07-02T07:06:15.911711559Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul  2 07:06:15.914047 containerd[1481]: time="2024-07-02T07:06:15.911734360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul  2 07:06:15.914505 containerd[1481]: time="2024-07-02T07:06:15.912344575Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul  2 07:06:15.914505 containerd[1481]: time="2024-07-02T07:06:15.912509679Z" level=info msg="Connect containerd service"
Jul  2 07:06:15.914505 containerd[1481]: time="2024-07-02T07:06:15.912566380Z" level=info msg="using legacy CRI server"
Jul  2 07:06:15.914505 containerd[1481]: time="2024-07-02T07:06:15.912625682Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul  2 07:06:15.920269 containerd[1481]: time="2024-07-02T07:06:15.920207565Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul  2 07:06:15.922571 containerd[1481]: time="2024-07-02T07:06:15.922535121Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
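The CNI load failure above is expected at this stage of boot: the CRI plugin scans /etc/cni/net.d for a network config, and on a Kubernetes node that file is normally installed later by whichever CNI add-on the cluster deploys. As a purely illustrative sketch (the file name, network name and subnet are assumptions, not values taken from this host), a minimal bridge conflist of the kind the plugin would accept could be generated like this:

import json
import pathlib

# Sketch only: a minimal CNI conflist of the shape the containerd CRI plugin
# looks for in /etc/cni/net.d. A real node gets this from its CNI add-on.
conflist = {
    "cniVersion": "1.0.0",
    "name": "containerd-net",                               # illustrative name
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "ranges": [[{"subnet": "10.88.0.0/16"}]],   # illustrative subnet
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

path = pathlib.Path("/etc/cni/net.d/10-containerd-net.conflist")
path.parent.mkdir(parents=True, exist_ok=True)              # requires root on a real host
path.write_text(json.dumps(conflist, indent=2))
print("wrote", path)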
Jul  2 07:06:15.924043 containerd[1481]: time="2024-07-02T07:06:15.923949456Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul  2 07:06:15.925663 containerd[1481]: time="2024-07-02T07:06:15.925635996Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Jul  2 07:06:15.927266 containerd[1481]: time="2024-07-02T07:06:15.924018957Z" level=info msg="Start subscribing containerd event"
Jul  2 07:06:15.927266 containerd[1481]: time="2024-07-02T07:06:15.926400515Z" level=info msg="Start recovering state"
Jul  2 07:06:15.927266 containerd[1481]: time="2024-07-02T07:06:15.926496717Z" level=info msg="Start event monitor"
Jul  2 07:06:15.927266 containerd[1481]: time="2024-07-02T07:06:15.926512518Z" level=info msg="Start snapshots syncer"
Jul  2 07:06:15.927266 containerd[1481]: time="2024-07-02T07:06:15.926524718Z" level=info msg="Start cni network conf syncer for default"
Jul  2 07:06:15.927266 containerd[1481]: time="2024-07-02T07:06:15.926535218Z" level=info msg="Start streaming server"
Jul  2 07:06:15.927266 containerd[1481]: time="2024-07-02T07:06:15.926312713Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul  2 07:06:15.927266 containerd[1481]: time="2024-07-02T07:06:15.926800425Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
Jul  2 07:06:15.927266 containerd[1481]: time="2024-07-02T07:06:15.927127132Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul  2 07:06:15.927266 containerd[1481]: time="2024-07-02T07:06:15.927175934Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul  2 07:06:15.927366 systemd[1]: Started containerd.service - containerd container runtime.
Jul  2 07:06:15.931378 containerd[1481]: time="2024-07-02T07:06:15.931347235Z" level=info msg="containerd successfully booted in 0.224340s"
Jul  2 07:06:16.159022 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul  2 07:06:16.763098 kubelet[1570]: E0702 07:06:16.763045    1570 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul  2 07:06:16.765675 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul  2 07:06:16.765839 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul  2 07:06:16.766156 systemd[1]: kubelet.service: Consumed 1.048s CPU time.
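The kubelet exits here simply because /var/lib/kubelet/config.yaml does not exist yet; on an image like this the file is normally written later by the cluster bootstrap tooling (for example kubeadm), so systemd keeps retrying the unit until it appears, which is the restart loop visible further down in this log. As a sketch of what the kubelet is waiting for, assuming the standard KubeletConfiguration schema, a minimal file could be produced as below; the single cgroupDriver field is an assumption chosen to match the SystemdCgroup:true runc option in the containerd configuration above, and a real bootstrap writes a much fuller file.

import pathlib

# Sketch only: minimal KubeletConfiguration of the kind expected at
# /var/lib/kubelet/config.yaml. Real clusters generate this via kubeadm or
# equivalent tooling; cgroupDriver=systemd is an assumption matching the
# containerd runc SystemdCgroup=true option seen earlier in this log.
minimal_config = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
"""

target = pathlib.Path("/var/lib/kubelet/config.yaml")
target.parent.mkdir(parents=True, exist_ok=True)   # requires root on a real host
target.write_text(minimal_config)
print("wrote", target)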
Jul  2 07:06:17.278359 sshd_keygen[1478]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul  2 07:06:17.300176 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul  2 07:06:17.311474 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul  2 07:06:17.316693 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jul  2 07:06:17.326474 systemd[1]: issuegen.service: Deactivated successfully.
Jul  2 07:06:17.326701 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul  2 07:06:17.333587 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Jul  2 07:06:17.344410 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul  2 07:06:17.357187 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul  2 07:06:17.364513 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul  2 07:06:17.370273 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul  2 07:06:17.374819 systemd[1]: Reached target getty.target - Login Prompts.
Jul  2 07:06:17.377848 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul  2 07:06:17.389460 systemd[1]: Starting systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP...
Jul  2 07:06:17.402081 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jul  2 07:06:17.402290 systemd[1]: Finished systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP.
Jul  2 07:06:17.406183 systemd[1]: Startup finished in 726ms (firmware) + 29.890s (loader) + 1.107s (kernel) + 11.538s (initrd) + 13.837s (userspace) = 57.100s.
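A small arithmetic note on the line above: the per-phase times are each rounded to the millisecond for display, so they sum to 57.098 s rather than the reported 57.100 s; the difference is most likely display rounding (systemd totals the raw microsecond timestamps), not a missing phase.

# Quick check of the boot-time breakdown printed above; the ~2 ms difference
# from the reported 57.100 s total is consistent with rounding of the phases.
parts = [0.726, 29.890, 1.107, 11.538, 13.837]  # firmware, loader, kernel, initrd, userspace
print(round(sum(parts), 3))                     # 57.098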
Jul  2 07:06:17.839209 login[1594]: pam_lastlog(login:session): file /var/log/lastlog is locked/write
Jul  2 07:06:17.840073 login[1593]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jul  2 07:06:17.849649 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul  2 07:06:17.859411 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul  2 07:06:17.863525 systemd-logind[1465]: New session 1 of user core.
Jul  2 07:06:17.875018 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul  2 07:06:17.882288 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul  2 07:06:17.885247 (systemd)[1597]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul  2 07:06:18.256658 systemd[1597]: Queued start job for default target default.target.
Jul  2 07:06:18.264512 systemd[1597]: Reached target paths.target - Paths.
Jul  2 07:06:18.264542 systemd[1597]: Reached target sockets.target - Sockets.
Jul  2 07:06:18.264558 systemd[1597]: Reached target timers.target - Timers.
Jul  2 07:06:18.264571 systemd[1597]: Reached target basic.target - Basic System.
Jul  2 07:06:18.264643 systemd[1597]: Reached target default.target - Main User Target.
Jul  2 07:06:18.264682 systemd[1597]: Startup finished in 371ms.
Jul  2 07:06:18.264727 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul  2 07:06:18.266740 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul  2 07:06:18.841203 login[1594]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jul  2 07:06:18.846712 systemd-logind[1465]: New session 2 of user core.
Jul  2 07:06:18.857149 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul  2 07:06:19.537803 waagent[1591]: 2024-07-02T07:06:19.537680Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1
Jul  2 07:06:19.545913 waagent[1591]: 2024-07-02T07:06:19.539321Z INFO Daemon Daemon OS: flatcar 3815.2.5
Jul  2 07:06:19.545913 waagent[1591]: 2024-07-02T07:06:19.540355Z INFO Daemon Daemon Python: 3.11.6
Jul  2 07:06:19.545913 waagent[1591]: 2024-07-02T07:06:19.541097Z INFO Daemon Daemon Run daemon
Jul  2 07:06:19.545913 waagent[1591]: 2024-07-02T07:06:19.541431Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3815.2.5'
Jul  2 07:06:19.545913 waagent[1591]: 2024-07-02T07:06:19.542320Z INFO Daemon Daemon Using waagent for provisioning
Jul  2 07:06:19.545913 waagent[1591]: 2024-07-02T07:06:19.542947Z INFO Daemon Daemon Activate resource disk
Jul  2 07:06:19.545913 waagent[1591]: 2024-07-02T07:06:19.543801Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Jul  2 07:06:19.571907 waagent[1591]: 2024-07-02T07:06:19.548245Z INFO Daemon Daemon Found device: None
Jul  2 07:06:19.571907 waagent[1591]: 2024-07-02T07:06:19.548959Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Jul  2 07:06:19.571907 waagent[1591]: 2024-07-02T07:06:19.549875Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Jul  2 07:06:19.571907 waagent[1591]: 2024-07-02T07:06:19.550766Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jul  2 07:06:19.571907 waagent[1591]: 2024-07-02T07:06:19.551841Z INFO Daemon Daemon Running default provisioning handler
Jul  2 07:06:19.581167 waagent[1591]: 2024-07-02T07:06:19.581078Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Jul  2 07:06:19.588268 waagent[1591]: 2024-07-02T07:06:19.588184Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Jul  2 07:06:19.589033 waagent[1591]: 2024-07-02T07:06:19.588966Z INFO Daemon Daemon cloud-init is enabled: False
Jul  2 07:06:19.589199 waagent[1591]: 2024-07-02T07:06:19.589149Z INFO Daemon Daemon Copying ovf-env.xml
Jul  2 07:06:21.480946 waagent[1591]: 2024-07-02T07:06:21.480705Z INFO Daemon Daemon Successfully mounted dvd
Jul  2 07:06:21.930813 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Jul  2 07:06:22.321245 waagent[1591]: 2024-07-02T07:06:22.321077Z INFO Daemon Daemon Detect protocol endpoint
Jul  2 07:06:22.324381 waagent[1591]: 2024-07-02T07:06:22.324272Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jul  2 07:06:22.327818 waagent[1591]: 2024-07-02T07:06:22.327727Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Jul  2 07:06:22.332104 waagent[1591]: 2024-07-02T07:06:22.332002Z INFO Daemon Daemon Test for route to 168.63.129.16
Jul  2 07:06:22.335476 waagent[1591]: 2024-07-02T07:06:22.335380Z INFO Daemon Daemon Route to 168.63.129.16 exists
Jul  2 07:06:22.338971 waagent[1591]: 2024-07-02T07:06:22.338880Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Jul  2 07:06:22.354146 waagent[1591]: 2024-07-02T07:06:22.354075Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Jul  2 07:06:22.362825 waagent[1591]: 2024-07-02T07:06:22.355752Z INFO Daemon Daemon Wire protocol version:2012-11-30
Jul  2 07:06:22.362825 waagent[1591]: 2024-07-02T07:06:22.356533Z INFO Daemon Daemon Server preferred version:2015-04-05
Jul  2 07:06:22.987850 waagent[1591]: 2024-07-02T07:06:22.987733Z INFO Daemon Daemon Initializing goal state during protocol detection
Jul  2 07:06:22.992288 waagent[1591]: 2024-07-02T07:06:22.992196Z INFO Daemon Daemon Forcing an update of the goal state.
Jul  2 07:06:22.999216 waagent[1591]: 2024-07-02T07:06:22.999152Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jul  2 07:06:23.019642 waagent[1591]: 2024-07-02T07:06:23.019553Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.151
Jul  2 07:06:23.024026 waagent[1591]: 2024-07-02T07:06:23.023954Z INFO Daemon
Jul  2 07:06:23.026147 waagent[1591]: 2024-07-02T07:06:23.026075Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 76362a36-0414-4bde-8b7a-dacdf30efd62 eTag: 5706786953323734736 source: Fabric]
Jul  2 07:06:23.044245 waagent[1591]: 2024-07-02T07:06:23.028588Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Jul  2 07:06:23.044245 waagent[1591]: 2024-07-02T07:06:23.033941Z INFO Daemon
Jul  2 07:06:23.044245 waagent[1591]: 2024-07-02T07:06:23.035273Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Jul  2 07:06:23.044245 waagent[1591]: 2024-07-02T07:06:23.040793Z INFO Daemon Daemon Downloading artifacts profile blob
Jul  2 07:06:23.134363 waagent[1591]: 2024-07-02T07:06:23.134263Z INFO Daemon Downloaded certificate {'thumbprint': 'AC78DA8AEC6F456CEF226ACD2B2B5BDEFE6AFD1A', 'hasPrivateKey': True}
Jul  2 07:06:23.141097 waagent[1591]: 2024-07-02T07:06:23.141009Z INFO Daemon Downloaded certificate {'thumbprint': '990A8A1FF984F8663C303C46D4CEC172972DC12E', 'hasPrivateKey': False}
Jul  2 07:06:23.147214 waagent[1591]: 2024-07-02T07:06:23.147129Z INFO Daemon Fetch goal state completed
Jul  2 07:06:23.161227 waagent[1591]: 2024-07-02T07:06:23.161157Z INFO Daemon Daemon Starting provisioning
Jul  2 07:06:23.169714 waagent[1591]: 2024-07-02T07:06:23.164059Z INFO Daemon Daemon Handle ovf-env.xml.
Jul  2 07:06:23.169714 waagent[1591]: 2024-07-02T07:06:23.165679Z INFO Daemon Daemon Set hostname [ci-3815.2.5-a-b9d6671d68]
Jul  2 07:06:23.199796 waagent[1591]: 2024-07-02T07:06:23.199703Z INFO Daemon Daemon Publish hostname [ci-3815.2.5-a-b9d6671d68]
Jul  2 07:06:23.207993 waagent[1591]: 2024-07-02T07:06:23.202431Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Jul  2 07:06:23.208226 waagent[1591]: 2024-07-02T07:06:23.208035Z INFO Daemon Daemon Primary interface is [eth0]
Jul  2 07:06:23.276755 systemd-networkd[1236]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul  2 07:06:23.276765 systemd-networkd[1236]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul  2 07:06:23.276817 systemd-networkd[1236]: eth0: DHCP lease lost
Jul  2 07:06:23.278359 waagent[1591]: 2024-07-02T07:06:23.278268Z INFO Daemon Daemon Create user account if not exists
Jul  2 07:06:23.296897 waagent[1591]: 2024-07-02T07:06:23.279936Z INFO Daemon Daemon User core already exists, skip useradd
Jul  2 07:06:23.296897 waagent[1591]: 2024-07-02T07:06:23.281003Z INFO Daemon Daemon Configure sudoer
Jul  2 07:06:23.296897 waagent[1591]: 2024-07-02T07:06:23.281834Z INFO Daemon Daemon Configure sshd
Jul  2 07:06:23.296897 waagent[1591]: 2024-07-02T07:06:23.282272Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Jul  2 07:06:23.296897 waagent[1591]: 2024-07-02T07:06:23.282719Z INFO Daemon Daemon Deploy ssh public key.
Jul  2 07:06:23.300035 systemd-networkd[1236]: eth0: DHCPv6 lease lost
Jul  2 07:06:23.328958 systemd-networkd[1236]: eth0: DHCPv4 address 10.200.8.44/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jul  2 07:06:24.707372 waagent[1591]: 2024-07-02T07:06:24.707288Z INFO Daemon Daemon Provisioning complete
Jul  2 07:06:24.727689 waagent[1591]: 2024-07-02T07:06:24.727603Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Jul  2 07:06:24.732720 waagent[1591]: 2024-07-02T07:06:24.731992Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Jul  2 07:06:24.734817 waagent[1591]: 2024-07-02T07:06:24.734725Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent
Jul  2 07:06:24.874039 waagent[1644]: 2024-07-02T07:06:24.873913Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Jul  2 07:06:24.874512 waagent[1644]: 2024-07-02T07:06:24.874125Z INFO ExtHandler ExtHandler OS: flatcar 3815.2.5
Jul  2 07:06:24.874512 waagent[1644]: 2024-07-02T07:06:24.874213Z INFO ExtHandler ExtHandler Python: 3.11.6
Jul  2 07:06:26.214492 waagent[1644]: 2024-07-02T07:06:26.214384Z INFO ExtHandler ExtHandler Distro: flatcar-3815.2.5; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.6; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Jul  2 07:06:26.215070 waagent[1644]: 2024-07-02T07:06:26.214713Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jul  2 07:06:26.215070 waagent[1644]: 2024-07-02T07:06:26.214834Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Jul  2 07:06:26.223039 waagent[1644]: 2024-07-02T07:06:26.222962Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jul  2 07:06:26.230271 waagent[1644]: 2024-07-02T07:06:26.230211Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.151
Jul  2 07:06:26.230805 waagent[1644]: 2024-07-02T07:06:26.230751Z INFO ExtHandler
Jul  2 07:06:26.230961 waagent[1644]: 2024-07-02T07:06:26.230852Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: b18e9604-dae2-480a-b842-0a5e07f6f6e0 eTag: 5706786953323734736 source: Fabric]
Jul  2 07:06:26.231258 waagent[1644]: 2024-07-02T07:06:26.231211Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Jul  2 07:06:26.348246 waagent[1644]: 2024-07-02T07:06:26.348088Z INFO ExtHandler
Jul  2 07:06:26.348643 waagent[1644]: 2024-07-02T07:06:26.348577Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Jul  2 07:06:26.353710 waagent[1644]: 2024-07-02T07:06:26.353657Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Jul  2 07:06:27.016824 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul  2 07:06:27.017143 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul  2 07:06:27.017215 systemd[1]: kubelet.service: Consumed 1.048s CPU time.
Jul  2 07:06:27.024364 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul  2 07:06:27.485517 waagent[1644]: 2024-07-02T07:06:27.485330Z INFO ExtHandler Downloaded certificate {'thumbprint': 'AC78DA8AEC6F456CEF226ACD2B2B5BDEFE6AFD1A', 'hasPrivateKey': True}
Jul  2 07:06:27.487247 waagent[1644]: 2024-07-02T07:06:27.487159Z INFO ExtHandler Downloaded certificate {'thumbprint': '990A8A1FF984F8663C303C46D4CEC172972DC12E', 'hasPrivateKey': False}
Jul  2 07:06:27.488229 waagent[1644]: 2024-07-02T07:06:27.488168Z INFO ExtHandler Fetch goal state completed
Jul  2 07:06:27.506446 waagent[1644]: 2024-07-02T07:06:27.506356Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1644
Jul  2 07:06:27.506624 waagent[1644]: 2024-07-02T07:06:27.506584Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Jul  2 07:06:27.508334 waagent[1644]: 2024-07-02T07:06:27.508271Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3815.2.5', '', 'Flatcar Container Linux by Kinvolk']
Jul  2 07:06:27.508761 waagent[1644]: 2024-07-02T07:06:27.508713Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Jul  2 07:06:31.236556 waagent[1644]: 2024-07-02T07:06:31.236489Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Jul  2 07:06:31.237005 waagent[1644]: 2024-07-02T07:06:31.236785Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Jul  2 07:06:31.245482 waagent[1644]: 2024-07-02T07:06:31.245436Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Jul  2 07:06:31.254304 systemd[1]: Reloading.
Jul  2 07:06:31.468355 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul  2 07:06:31.552097 waagent[1644]: 2024-07-02T07:06:31.551934Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service
Jul  2 07:06:31.562426 systemd[1]: Reloading.
Jul  2 07:06:31.764016 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul  2 07:06:32.560536 waagent[1644]: 2024-07-02T07:06:31.843954Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Jul  2 07:06:32.560536 waagent[1644]: 2024-07-02T07:06:32.558899Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Jul  2 07:06:32.586775 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul  2 07:06:32.747923 kubelet[1822]: E0702 07:06:32.747847    1822 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul  2 07:06:32.751324 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul  2 07:06:32.751501 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul  2 07:06:33.946378 waagent[1644]: 2024-07-02T07:06:33.946282Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Jul  2 07:06:33.947120 waagent[1644]: 2024-07-02T07:06:33.947051Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True]
Jul  2 07:06:33.947924 waagent[1644]: 2024-07-02T07:06:33.947853Z INFO ExtHandler ExtHandler Starting env monitor service.
Jul  2 07:06:33.948080 waagent[1644]: 2024-07-02T07:06:33.948019Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jul  2 07:06:33.948507 waagent[1644]: 2024-07-02T07:06:33.948462Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Jul  2 07:06:33.948587 waagent[1644]: 2024-07-02T07:06:33.948534Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Jul  2 07:06:33.948825 waagent[1644]: 2024-07-02T07:06:33.948778Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Jul  2 07:06:33.949456 waagent[1644]: 2024-07-02T07:06:33.949404Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Jul  2 07:06:33.949538 waagent[1644]: 2024-07-02T07:06:33.949474Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jul  2 07:06:33.949611 waagent[1644]: 2024-07-02T07:06:33.949571Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Jul  2 07:06:33.949817 waagent[1644]: 2024-07-02T07:06:33.949750Z INFO EnvHandler ExtHandler Configure routes
Jul  2 07:06:33.949914 waagent[1644]: 2024-07-02T07:06:33.949825Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Jul  2 07:06:33.950141 waagent[1644]: 2024-07-02T07:06:33.950086Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Jul  2 07:06:33.950141 waagent[1644]: Iface        Destination        Gateway         Flags        RefCnt        Use        Metric        Mask                MTU        Window        IRTT
Jul  2 07:06:33.950141 waagent[1644]: eth0        00000000        0108C80A        0003        0        0        1024        00000000        0        0        0
Jul  2 07:06:33.950141 waagent[1644]: eth0        0008C80A        00000000        0001        0        0        1024        00FFFFFF        0        0        0
Jul  2 07:06:33.950141 waagent[1644]: eth0        0108C80A        00000000        0005        0        0        1024        FFFFFFFF        0        0        0
Jul  2 07:06:33.950141 waagent[1644]: eth0        10813FA8        0108C80A        0007        0        0        1024        FFFFFFFF        0        0        0
Jul  2 07:06:33.950141 waagent[1644]: eth0        FEA9FEA9        0108C80A        0007        0        0        1024        FFFFFFFF        0        0        0
Jul  2 07:06:33.950594 waagent[1644]: 2024-07-02T07:06:33.950531Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Jul  2 07:06:33.950707 waagent[1644]: 2024-07-02T07:06:33.950669Z INFO EnvHandler ExtHandler Gateway:None
Jul  2 07:06:33.950955 waagent[1644]: 2024-07-02T07:06:33.950910Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Jul  2 07:06:33.951508 waagent[1644]: 2024-07-02T07:06:33.951464Z INFO EnvHandler ExtHandler Routes:None
Jul  2 07:06:33.952499 waagent[1644]: 2024-07-02T07:06:33.952455Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
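The routing table that MonitorHandler dumps above is the raw /proc/net/route format, in which addresses are little-endian hexadecimal. Decoding them ties the table back to earlier lines in this log: the default gateway 0108C80A is 10.200.8.1 (the DHCP gateway acquired at 07:06:23), 10813FA8 is the wireserver 168.63.129.16, and FEA9FEA9 is the instance metadata endpoint 169.254.169.254. A short decoding sketch:

import socket
import struct

def decode(hex_addr: str) -> str:
    """Decode a little-endian hex IPv4 address as found in /proc/net/route."""
    return socket.inet_ntoa(struct.pack("<I", int(hex_addr, 16)))

print(decode("0108C80A"))  # 10.200.8.1      (default gateway)
print(decode("0008C80A"))  # 10.200.8.0      (eth0 subnet; mask 00FFFFFF = 255.255.255.0)
print(decode("10813FA8"))  # 168.63.129.16   (Azure wireserver)
print(decode("FEA9FEA9"))  # 169.254.169.254 (instance metadata service)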
Jul  2 07:06:33.962491 waagent[1644]: 2024-07-02T07:06:33.962445Z INFO ExtHandler ExtHandler
Jul  2 07:06:33.962694 waagent[1644]: 2024-07-02T07:06:33.962664Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 163fabeb-151a-4f19-ac3d-420fd46a3e26 correlation d248f174-2687-42f0-9e59-cddfebe2a0c7 created: 2024-07-02T07:05:10.395264Z]
Jul  2 07:06:33.963166 waagent[1644]: 2024-07-02T07:06:33.963127Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Jul  2 07:06:33.963794 waagent[1644]: 2024-07-02T07:06:33.963760Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms]
Jul  2 07:06:34.004630 waagent[1644]: 2024-07-02T07:06:34.004558Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: F2B22F6F-C435-4F45-88B7-348AF1E980D2;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0]
Jul  2 07:06:34.059964 waagent[1644]: 2024-07-02T07:06:34.059812Z INFO MonitorHandler ExtHandler Network interfaces:
Jul  2 07:06:34.059964 waagent[1644]: Executing ['ip', '-a', '-o', 'link']:
Jul  2 07:06:34.059964 waagent[1644]: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Jul  2 07:06:34.059964 waagent[1644]: 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\    link/ether 00:0d:3a:ba:9c:54 brd ff:ff:ff:ff:ff:ff
Jul  2 07:06:34.059964 waagent[1644]: 3: enP40558s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\    link/ether 00:0d:3a:ba:9c:54 brd ff:ff:ff:ff:ff:ff\    altname enP40558p0s2
Jul  2 07:06:34.059964 waagent[1644]: Executing ['ip', '-4', '-a', '-o', 'address']:
Jul  2 07:06:34.059964 waagent[1644]: 1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
Jul  2 07:06:34.059964 waagent[1644]: 2: eth0    inet 10.200.8.44/24 metric 1024 brd 10.200.8.255 scope global eth0\       valid_lft forever preferred_lft forever
Jul  2 07:06:34.059964 waagent[1644]: Executing ['ip', '-6', '-a', '-o', 'address']:
Jul  2 07:06:34.059964 waagent[1644]: 1: lo    inet6 ::1/128 scope host \       valid_lft forever preferred_lft forever
Jul  2 07:06:34.059964 waagent[1644]: 2: eth0    inet6 fe80::20d:3aff:feba:9c54/64 scope link proto kernel_ll \       valid_lft forever preferred_lft forever
Jul  2 07:06:34.075369 waagent[1644]: 2024-07-02T07:06:34.075301Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules:
Jul  2 07:06:34.075369 waagent[1644]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jul  2 07:06:34.075369 waagent[1644]:     pkts      bytes target     prot opt in     out     source               destination
Jul  2 07:06:34.075369 waagent[1644]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jul  2 07:06:34.075369 waagent[1644]:     pkts      bytes target     prot opt in     out     source               destination
Jul  2 07:06:34.075369 waagent[1644]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jul  2 07:06:34.075369 waagent[1644]:     pkts      bytes target     prot opt in     out     source               destination
Jul  2 07:06:34.075369 waagent[1644]:        0        0 ACCEPT     tcp  --  *      *       0.0.0.0/0            168.63.129.16        tcp dpt:53
Jul  2 07:06:34.075369 waagent[1644]:       10      932 ACCEPT     tcp  --  *      *       0.0.0.0/0            168.63.129.16        owner UID match 0
Jul  2 07:06:34.075369 waagent[1644]:        0        0 DROP       tcp  --  *      *       0.0.0.0/0            168.63.129.16        ctstate INVALID,NEW
Jul  2 07:06:34.079738 waagent[1644]: 2024-07-02T07:06:34.079682Z INFO EnvHandler ExtHandler Current Firewall rules:
Jul  2 07:06:34.079738 waagent[1644]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jul  2 07:06:34.079738 waagent[1644]:     pkts      bytes target     prot opt in     out     source               destination
Jul  2 07:06:34.079738 waagent[1644]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jul  2 07:06:34.079738 waagent[1644]:     pkts      bytes target     prot opt in     out     source               destination
Jul  2 07:06:34.079738 waagent[1644]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jul  2 07:06:34.079738 waagent[1644]:     pkts      bytes target     prot opt in     out     source               destination
Jul  2 07:06:34.079738 waagent[1644]:        0        0 ACCEPT     tcp  --  *      *       0.0.0.0/0            168.63.129.16        tcp dpt:53
Jul  2 07:06:34.079738 waagent[1644]:       13     1465 ACCEPT     tcp  --  *      *       0.0.0.0/0            168.63.129.16        owner UID match 0
Jul  2 07:06:34.079738 waagent[1644]:        0        0 DROP       tcp  --  *      *       0.0.0.0/0            168.63.129.16        ctstate INVALID,NEW
Jul  2 07:06:34.080264 waagent[1644]: 2024-07-02T07:06:34.080022Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
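The firewall rules the EnvHandler reports above implement waagent's protection of the wireserver endpoint: outbound TCP to 168.63.129.16 is allowed for DNS (dpt:53) and for processes owned by UID 0, any other new connection to that address is dropped, and everything else falls through to the chain's ACCEPT policy. A purely illustrative restatement of that evaluation order (not the agent's own code):

WIRESERVER = "168.63.129.16"

def wireserver_verdict(dst: str, dport: int, uid: int, new_conn: bool) -> str:
    """Apply the three waagent OUTPUT rules shown above, in order."""
    if dst != WIRESERVER:
        return "no rule matched; chain policy ACCEPT applies"
    if dport == 53:
        return "ACCEPT (DNS to the wireserver)"
    if uid == 0:
        return "ACCEPT (owner UID match 0, i.e. root/waagent traffic)"
    if new_conn:
        return "DROP (unprivileged processes may not open new connections to the wireserver)"
    return "no rule matched; chain policy ACCEPT applies"

print(wireserver_verdict(WIRESERVER, 80, uid=500, new_conn=True))  # DROP ...
print(wireserver_verdict(WIRESERVER, 80, uid=0,   new_conn=True))  # ACCEPT ...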
Jul  2 07:06:43.002563 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul  2 07:06:43.002853 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul  2 07:06:43.010381 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul  2 07:06:43.598971 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul  2 07:06:43.735814 kubelet[1862]: E0702 07:06:43.735752    1862 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul  2 07:06:43.738017 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul  2 07:06:43.738192 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul  2 07:06:53.876544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul  2 07:06:53.876836 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul  2 07:06:53.884650 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul  2 07:06:53.993504 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul  2 07:06:54.581357 kubelet[1873]: E0702 07:06:54.581297    1873 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul  2 07:06:54.583393 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul  2 07:06:54.583725 systemd[1]: kubelet.service: Failed with result 'exit-code'.
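The kubelet failure/restart pairs in this log repeat on a roughly 10-11 second cadence (each "Scheduled restart job" follows the previous failure by about that much), which is consistent with a unit-level restart policy with a delay of around ten seconds; the actual Restart=/RestartSec= settings are not shown here, so treat that value as inferred from the timestamps rather than read from the unit file. A quick check of one interval:

from datetime import datetime

# Interval between the restart-counter-2 failure and the counter-3 reschedule,
# taken from the timestamps above (both fall on the same day, so the date is omitted).
failed    = datetime.strptime("07:06:43.738192", "%H:%M:%S.%f")
restarted = datetime.strptime("07:06:53.876544", "%H:%M:%S.%f")
print((restarted - failed).total_seconds())   # ~10.14 s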
Jul  2 07:06:56.592247 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Jul  2 07:06:59.956121 update_engine[1468]: I0702 07:06:59.956018  1468 update_attempter.cc:509] Updating boot flags...
Jul  2 07:07:00.039911 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1891)
Jul  2 07:07:04.626516 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jul  2 07:07:04.626849 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul  2 07:07:04.633416 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul  2 07:07:04.961557 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul  2 07:07:05.007580 kubelet[1922]: E0702 07:07:05.007521    1922 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul  2 07:07:05.009699 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul  2 07:07:05.009891 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul  2 07:07:15.126621 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jul  2 07:07:15.127017 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul  2 07:07:15.134525 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul  2 07:07:15.234478 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul  2 07:07:15.770416 kubelet[1935]: E0702 07:07:15.770362    1935 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul  2 07:07:15.772544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul  2 07:07:15.772670 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul  2 07:07:17.291363 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul  2 07:07:17.298593 systemd[1]: Started sshd@0-10.200.8.44:22-10.200.16.10:49222.service - OpenSSH per-connection server daemon (10.200.16.10:49222).
Jul  2 07:07:19.461252 sshd[1942]: Accepted publickey for core from 10.200.16.10 port 49222 ssh2: RSA SHA256:Vdpuwv5gh2GKoqfNfevDgb5gK9TYVezD5lrsriYemy4
Jul  2 07:07:19.463013 sshd[1942]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul  2 07:07:19.467383 systemd-logind[1465]: New session 3 of user core.
Jul  2 07:07:19.474108 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul  2 07:07:20.033669 systemd[1]: Started sshd@1-10.200.8.44:22-10.200.16.10:33304.service - OpenSSH per-connection server daemon (10.200.16.10:33304).
Jul  2 07:07:20.683455 sshd[1947]: Accepted publickey for core from 10.200.16.10 port 33304 ssh2: RSA SHA256:Vdpuwv5gh2GKoqfNfevDgb5gK9TYVezD5lrsriYemy4
Jul  2 07:07:20.685089 sshd[1947]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul  2 07:07:20.690720 systemd-logind[1465]: New session 4 of user core.
Jul  2 07:07:20.693086 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul  2 07:07:21.150296 sshd[1947]: pam_unix(sshd:session): session closed for user core
Jul  2 07:07:21.153986 systemd[1]: sshd@1-10.200.8.44:22-10.200.16.10:33304.service: Deactivated successfully.
Jul  2 07:07:21.154924 systemd[1]: session-4.scope: Deactivated successfully.
Jul  2 07:07:21.155638 systemd-logind[1465]: Session 4 logged out. Waiting for processes to exit.
Jul  2 07:07:21.156503 systemd-logind[1465]: Removed session 4.
Jul  2 07:07:21.271616 systemd[1]: Started sshd@2-10.200.8.44:22-10.200.16.10:33316.service - OpenSSH per-connection server daemon (10.200.16.10:33316).
Jul  2 07:07:21.914100 sshd[1953]: Accepted publickey for core from 10.200.16.10 port 33316 ssh2: RSA SHA256:Vdpuwv5gh2GKoqfNfevDgb5gK9TYVezD5lrsriYemy4
Jul  2 07:07:21.915892 sshd[1953]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul  2 07:07:21.920984 systemd-logind[1465]: New session 5 of user core.
Jul  2 07:07:21.929121 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul  2 07:07:22.369064 sshd[1953]: pam_unix(sshd:session): session closed for user core
Jul  2 07:07:22.373156 systemd[1]: sshd@2-10.200.8.44:22-10.200.16.10:33316.service: Deactivated successfully.
Jul  2 07:07:22.374441 systemd[1]: session-5.scope: Deactivated successfully.
Jul  2 07:07:22.375307 systemd-logind[1465]: Session 5 logged out. Waiting for processes to exit.
Jul  2 07:07:22.376292 systemd-logind[1465]: Removed session 5.
Jul  2 07:07:22.494552 systemd[1]: Started sshd@3-10.200.8.44:22-10.200.16.10:33320.service - OpenSSH per-connection server daemon (10.200.16.10:33320).
Jul  2 07:07:23.132989 sshd[1959]: Accepted publickey for core from 10.200.16.10 port 33320 ssh2: RSA SHA256:Vdpuwv5gh2GKoqfNfevDgb5gK9TYVezD5lrsriYemy4
Jul  2 07:07:23.134536 sshd[1959]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul  2 07:07:23.139288 systemd-logind[1465]: New session 6 of user core.
Jul  2 07:07:23.146108 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul  2 07:07:23.590757 sshd[1959]: pam_unix(sshd:session): session closed for user core
Jul  2 07:07:23.594318 systemd[1]: sshd@3-10.200.8.44:22-10.200.16.10:33320.service: Deactivated successfully.
Jul  2 07:07:23.595427 systemd[1]: session-6.scope: Deactivated successfully.
Jul  2 07:07:23.596311 systemd-logind[1465]: Session 6 logged out. Waiting for processes to exit.
Jul  2 07:07:23.597304 systemd-logind[1465]: Removed session 6.
Jul  2 07:07:23.722599 systemd[1]: Started sshd@4-10.200.8.44:22-10.200.16.10:33332.service - OpenSSH per-connection server daemon (10.200.16.10:33332).
Jul  2 07:07:24.362925 sshd[1965]: Accepted publickey for core from 10.200.16.10 port 33332 ssh2: RSA SHA256:Vdpuwv5gh2GKoqfNfevDgb5gK9TYVezD5lrsriYemy4
Jul  2 07:07:24.364492 sshd[1965]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul  2 07:07:24.369616 systemd-logind[1465]: New session 7 of user core.
Jul  2 07:07:24.377121 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul  2 07:07:24.807759 sudo[1968]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul  2 07:07:24.808146 sudo[1968]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul  2 07:07:24.836139 sudo[1968]: pam_unix(sudo:session): session closed for user root
Jul  2 07:07:24.941756 sshd[1965]: pam_unix(sshd:session): session closed for user core
Jul  2 07:07:24.945656 systemd[1]: sshd@4-10.200.8.44:22-10.200.16.10:33332.service: Deactivated successfully.
Jul  2 07:07:24.946655 systemd[1]: session-7.scope: Deactivated successfully.
Jul  2 07:07:24.947402 systemd-logind[1465]: Session 7 logged out. Waiting for processes to exit.
Jul  2 07:07:24.948304 systemd-logind[1465]: Removed session 7.
Jul  2 07:07:25.060375 systemd[1]: Started sshd@5-10.200.8.44:22-10.200.16.10:33348.service - OpenSSH per-connection server daemon (10.200.16.10:33348).
Jul  2 07:07:25.703287 sshd[1972]: Accepted publickey for core from 10.200.16.10 port 33348 ssh2: RSA SHA256:Vdpuwv5gh2GKoqfNfevDgb5gK9TYVezD5lrsriYemy4
Jul  2 07:07:25.705300 sshd[1972]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul  2 07:07:25.711441 systemd-logind[1465]: New session 8 of user core.
Jul  2 07:07:25.718087 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul  2 07:07:25.876478 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Jul  2 07:07:25.876758 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul  2 07:07:25.882581 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul  2 07:07:25.995247 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul  2 07:07:26.060479 sudo[1986]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul  2 07:07:26.061151 sudo[1986]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul  2 07:07:26.064636 sudo[1986]: pam_unix(sudo:session): session closed for user root
Jul  2 07:07:26.070290 sudo[1985]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jul  2 07:07:26.070620 sudo[1985]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul  2 07:07:26.090505 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jul  2 07:07:26.091000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Jul  2 07:07:26.092969 auditctl[1989]: No rules
Jul  2 07:07:26.096988 kernel: kauditd_printk_skb: 77 callbacks suppressed
Jul  2 07:07:26.098527 kernel: audit: type=1305 audit(1719904046.091:209): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Jul  2 07:07:26.093463 systemd[1]: audit-rules.service: Deactivated successfully.
Jul  2 07:07:26.093629 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jul  2 07:07:26.105147 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul  2 07:07:26.091000 audit[1989]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc124094a0 a2=420 a3=0 items=0 ppid=1 pid=1989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:07:26.124100 kernel: audit: type=1300 audit(1719904046.091:209): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc124094a0 a2=420 a3=0 items=0 ppid=1 pid=1989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:07:26.124275 kernel: audit: type=1327 audit(1719904046.091:209): proctitle=2F7362696E2F617564697463746C002D44
Jul  2 07:07:26.091000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44
Jul  2 07:07:26.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:07:26.132151 kernel: audit: type=1131 audit(1719904046.092:210): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:07:26.538526 kubelet[1979]: E0702 07:07:26.538465    1979 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul  2 07:07:26.540525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul  2 07:07:26.540649 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul  2 07:07:26.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jul  2 07:07:26.548946 kernel: audit: type=1131 audit(1719904046.540:211): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jul  2 07:07:27.070875 augenrules[2006]: No rules
Jul  2 07:07:27.071607 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul  2 07:07:27.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:07:27.078944 sudo[1985]: pam_unix(sudo:session): session closed for user root
Jul  2 07:07:27.089455 kernel: audit: type=1130 audit(1719904047.071:212): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:07:27.090514 kernel: audit: type=1106 audit(1719904047.078:213): pid=1985 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jul  2 07:07:27.078000 audit[1985]: USER_END pid=1985 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jul  2 07:07:27.078000 audit[1985]: CRED_DISP pid=1985 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jul  2 07:07:27.109012 kernel: audit: type=1104 audit(1719904047.078:214): pid=1985 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jul  2 07:07:27.183000 audit[1972]: USER_END pid=1972 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:07:27.183428 sshd[1972]: pam_unix(sshd:session): session closed for user core
Jul  2 07:07:27.191353 systemd[1]: sshd@5-10.200.8.44:22-10.200.16.10:33348.service: Deactivated successfully.
Jul  2 07:07:27.192300 systemd[1]: session-8.scope: Deactivated successfully.
Jul  2 07:07:27.193298 systemd-logind[1465]: Session 8 logged out. Waiting for processes to exit.
Jul  2 07:07:27.194110 systemd-logind[1465]: Removed session 8.
Jul  2 07:07:27.184000 audit[1972]: CRED_DISP pid=1972 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:07:27.203892 kernel: audit: type=1106 audit(1719904047.183:215): pid=1972 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:07:27.203967 kernel: audit: type=1104 audit(1719904047.184:216): pid=1972 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:07:27.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.8.44:22-10.200.16.10:33348 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:07:27.303912 systemd[1]: Started sshd@6-10.200.8.44:22-10.200.16.10:33358.service - OpenSSH per-connection server daemon (10.200.16.10:33358).
Jul  2 07:07:27.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.44:22-10.200.16.10:33358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:07:27.945000 audit[2012]: USER_ACCT pid=2012 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:07:27.947066 sshd[2012]: Accepted publickey for core from 10.200.16.10 port 33358 ssh2: RSA SHA256:Vdpuwv5gh2GKoqfNfevDgb5gK9TYVezD5lrsriYemy4
Jul  2 07:07:27.947000 audit[2012]: CRED_ACQ pid=2012 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:07:27.947000 audit[2012]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe719928a0 a2=3 a3=7fbdfd347480 items=0 ppid=1 pid=2012 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:07:27.947000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
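The audit PROCTITLE fields are the process command line, hex-encoded with NUL bytes separating arguments. Decoding the two values seen above gives "/sbin/auditctl -D" (the auditctl run that produced the earlier "No rules" message) and "sshd: core [priv]" (the privileged sshd process for this login):

def decode_proctitle(hex_value: str) -> str:
    """Decode an audit PROCTITLE field (hex, NUL-separated arguments)."""
    return bytes.fromhex(hex_value).replace(b"\x00", b" ").decode()

print(decode_proctitle("2F7362696E2F617564697463746C002D44"))   # /sbin/auditctl -D
print(decode_proctitle("737368643A20636F7265205B707269765D"))   # sshd: core [priv]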
Jul  2 07:07:27.948805 sshd[2012]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul  2 07:07:27.954169 systemd-logind[1465]: New session 9 of user core.
Jul  2 07:07:27.963096 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul  2 07:07:27.966000 audit[2012]: USER_START pid=2012 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:07:27.968000 audit[2014]: CRED_ACQ pid=2014 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:07:28.311000 audit[2015]: USER_ACCT pid=2015 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jul  2 07:07:28.311000 audit[2015]: CRED_REFR pid=2015 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jul  2 07:07:28.312443 sudo[2015]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul  2 07:07:28.312795 sudo[2015]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul  2 07:07:28.314000 audit[2015]: USER_START pid=2015 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jul  2 07:07:30.574566 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul  2 07:07:33.126953 dockerd[2024]: time="2024-07-02T07:07:33.126881989Z" level=info msg="Starting up"
Jul  2 07:07:33.272745 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1322584438-merged.mount: Deactivated successfully.
Jul  2 07:07:36.480297 systemd[1]: var-lib-docker-metacopy\x2dcheck2813594087-merged.mount: Deactivated successfully.
Jul  2 07:07:36.626385 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Jul  2 07:07:36.672769 kernel: kauditd_printk_skb: 12 callbacks suppressed
Jul  2 07:07:36.672908 kernel: audit: type=1130 audit(1719904056.624:227): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:07:36.672961 kernel: audit: type=1131 audit(1719904056.624:228): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:07:36.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:07:36.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:07:36.626649 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul  2 07:07:36.652498 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul  2 07:07:36.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:07:36.775472 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul  2 07:07:36.793891 kernel: audit: type=1130 audit(1719904056.774:229): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:07:37.371488 kubelet[2037]: E0702 07:07:37.371419    2037 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul  2 07:07:40.231939 kernel: audit: type=1131 audit(1719904057.372:230): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jul  2 07:07:37.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jul  2 07:07:37.373581 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul  2 07:07:37.373707 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul  2 07:07:40.713639 dockerd[2024]: time="2024-07-02T07:07:40.713577712Z" level=info msg="Loading containers: start."
Jul  2 07:07:40.810000 audit[2063]: NETFILTER_CFG table=nat:5 family=2 entries=2 op=nft_register_chain pid=2063 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:07:40.841010 kernel: audit: type=1325 audit(1719904060.810:231): table=nat:5 family=2 entries=2 op=nft_register_chain pid=2063 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:07:40.841167 kernel: audit: type=1300 audit(1719904060.810:231): arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd290b3a80 a2=0 a3=7f7a1a930e90 items=0 ppid=2024 pid=2063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:07:40.810000 audit[2063]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd290b3a80 a2=0 a3=7f7a1a930e90 items=0 ppid=2024 pid=2063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:07:40.810000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
Jul  2 07:07:40.847985 kernel: audit: type=1327 audit(1719904060.810:231): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
Jul  2 07:07:40.820000 audit[2065]: NETFILTER_CFG table=filter:6 family=2 entries=2 op=nft_register_chain pid=2065 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:07:40.854717 kernel: audit: type=1325 audit(1719904060.820:232): table=filter:6 family=2 entries=2 op=nft_register_chain pid=2065 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:07:40.820000 audit[2065]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd7102b4a0 a2=0 a3=7fa5049b3e90 items=0 ppid=2024 pid=2065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:07:40.867751 kernel: audit: type=1300 audit(1719904060.820:232): arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd7102b4a0 a2=0 a3=7fa5049b3e90 items=0 ppid=2024 pid=2065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:07:40.820000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
Jul  2 07:07:40.877888 kernel: audit: type=1327 audit(1719904060.820:232): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
Jul  2 07:07:40.825000 audit[2067]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2067 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:07:40.825000 audit[2067]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff487d8f60 a2=0 a3=7f21e1f07e90 items=0 ppid=2024 pid=2067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:07:40.825000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31
Jul  2 07:07:40.830000 audit[2069]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2069 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:07:40.830000 audit[2069]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe45ec6d40 a2=0 a3=7f54f9067e90 items=0 ppid=2024 pid=2069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:07:40.830000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32
Jul  2 07:07:40.835000 audit[2071]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=2071 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:07:40.835000 audit[2071]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffea6ca6590 a2=0 a3=7f84ae6b8e90 items=0 ppid=2024 pid=2071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:07:40.835000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E
Jul  2 07:07:40.839000 audit[2073]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_rule pid=2073 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:07:40.839000 audit[2073]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe44e6cb20 a2=0 a3=7fda64264e90 items=0 ppid=2024 pid=2073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:07:40.839000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E
Jul  2 07:07:40.877000 audit[2075]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=2075 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:07:40.877000 audit[2075]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcba40b400 a2=0 a3=7f30803fde90 items=0 ppid=2024 pid=2075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:07:40.877000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552
Jul  2 07:07:40.879000 audit[2077]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2077 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:07:40.879000 audit[2077]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffdf12d7120 a2=0 a3=7fdccb0d6e90 items=0 ppid=2024 pid=2077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:07:40.879000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E
Jul  2 07:07:40.881000 audit[2079]: NETFILTER_CFG table=filter:13 family=2 entries=2 op=nft_register_chain pid=2079 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:07:40.881000 audit[2079]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffe8f5d3c50 a2=0 a3=7fd6eef9ce90 items=0 ppid=2024 pid=2079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:07:40.881000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Jul  2 07:07:40.965000 audit[2083]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_unregister_rule pid=2083 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:07:40.965000 audit[2083]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffe5a4db270 a2=0 a3=7f720a387e90 items=0 ppid=2024 pid=2083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:07:40.965000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
Jul  2 07:07:40.967000 audit[2084]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2084 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:07:40.967000 audit[2084]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fffcef76400 a2=0 a3=7faf4897fe90 items=0 ppid=2024 pid=2084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:07:40.967000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Jul  2 07:07:41.325896 kernel: Initializing XFRM netlink socket
Jul  2 07:07:41.407000 audit[2092]: NETFILTER_CFG table=nat:16 family=2 entries=2 op=nft_register_chain pid=2092 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:07:41.407000 audit[2092]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7fff362597f0 a2=0 a3=7fee9cbf2e90 items=0 ppid=2024 pid=2092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:07:41.407000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445
Jul  2 07:07:41.423000 audit[2095]: NETFILTER_CFG table=nat:17 family=2 entries=1 op=nft_register_rule pid=2095 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:07:41.423000 audit[2095]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffda341e7c0 a2=0 a3=7ff72bc54e90 items=0 ppid=2024 pid=2095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:07:41.423000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E
Jul  2 07:07:41.430000 audit[2099]: NETFILTER_CFG table=filter:18 family=2 entries=1 op=nft_register_rule pid=2099 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:07:41.430000 audit[2099]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fffc08eff20 a2=0 a3=7f37ad2bde90 items=0 ppid=2024 pid=2099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:07:41.430000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054
Jul  2 07:07:41.432000 audit[2101]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=2101 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:07:41.432000 audit[2101]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffd5d0eb820 a2=0 a3=7f153a787e90 items=0 ppid=2024 pid=2101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:07:41.432000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054
Jul  2 07:07:41.434000 audit[2103]: NETFILTER_CFG table=nat:20 family=2 entries=2 op=nft_register_chain pid=2103 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:07:41.434000 audit[2103]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7fff136a8680 a2=0 a3=7fb00d7ace90 items=0 ppid=2024 pid=2103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:07:41.434000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552
Jul  2 07:07:41.437000 audit[2105]: NETFILTER_CFG table=nat:21 family=2 entries=2 op=nft_register_chain pid=2105 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:07:41.437000 audit[2105]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffd3ab94640 a2=0 a3=7fe687da1e90 items=0 ppid=2024 pid=2105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:07:41.437000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38
Jul  2 07:07:41.439000 audit[2107]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=2107 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:07:41.439000 audit[2107]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7fff2b6f59c0 a2=0 a3=7fe322792e90 items=0 ppid=2024 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:07:41.439000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552
Jul  2 07:07:41.442000 audit[2109]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=2109 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:07:41.442000 audit[2109]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffdf40d86d0 a2=0 a3=7fef3490de90 items=0 ppid=2024 pid=2109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:07:41.442000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054
Jul  2 07:07:41.444000 audit[2111]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_register_rule pid=2111 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:07:41.444000 audit[2111]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffd796f64d0 a2=0 a3=7f4f0c5aae90 items=0 ppid=2024 pid=2111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:07:41.444000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31
Jul  2 07:07:41.447000 audit[2113]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=2113 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:07:41.447000 audit[2113]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffc366620f0 a2=0 a3=7f4be0976e90 items=0 ppid=2024 pid=2113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:07:41.447000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32
Jul  2 07:07:41.449000 audit[2115]: NETFILTER_CFG table=filter:26 family=2 entries=1 op=nft_register_rule pid=2115 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:07:41.449000 audit[2115]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd71881240 a2=0 a3=7f31d3872e90 items=0 ppid=2024 pid=2115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:07:41.449000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50
Jul  2 07:07:41.450997 systemd-networkd[1236]: docker0: Link UP
Jul  2 07:07:41.515000 audit[2119]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_unregister_rule pid=2119 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:07:41.515000 audit[2119]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc8d251310 a2=0 a3=7f5171e2ce90 items=0 ppid=2024 pid=2119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:07:41.515000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
Jul  2 07:07:41.516000 audit[2120]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_rule pid=2120 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:07:41.516000 audit[2120]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fffc5147580 a2=0 a3=7ff505127e90 items=0 ppid=2024 pid=2120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:07:41.516000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Jul  2 07:07:41.518247 dockerd[2024]: time="2024-07-02T07:07:41.518196145Z" level=info msg="Loading containers: done."
Jul  2 07:07:42.193414 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck503390616-merged.mount: Deactivated successfully.
Jul  2 07:07:45.117105 dockerd[2024]: time="2024-07-02T07:07:45.117030266Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul  2 07:07:45.117697 dockerd[2024]: time="2024-07-02T07:07:45.117359872Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Jul  2 07:07:45.117697 dockerd[2024]: time="2024-07-02T07:07:45.117535475Z" level=info msg="Daemon has completed initialization"
Jul  2 07:07:45.297635 dockerd[2024]: time="2024-07-02T07:07:45.297543978Z" level=info msg="API listen on /run/docker.sock"
Jul  2 07:07:45.301430 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul  2 07:07:45.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:07:45.304273 kernel: kauditd_printk_skb: 66 callbacks suppressed
Jul  2 07:07:45.304373 kernel: audit: type=1130 audit(1719904065.300:255): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:07:47.376395 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Jul  2 07:07:47.376674 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul  2 07:07:47.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:07:47.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:07:47.396335 kernel: audit: type=1130 audit(1719904067.375:256): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:07:47.396503 kernel: audit: type=1131 audit(1719904067.375:257): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:07:47.403628 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul  2 07:07:47.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:07:47.570154 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul  2 07:07:47.579948 kernel: audit: type=1130 audit(1719904067.569:258): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:07:47.614354 kubelet[2165]: E0702 07:07:47.614311    2165 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul  2 07:07:47.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jul  2 07:07:47.616341 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul  2 07:07:47.616465 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul  2 07:07:47.626028 kernel: audit: type=1131 audit(1719904067.615:259): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jul  2 07:07:51.609458 containerd[1481]: time="2024-07-02T07:07:51.609407561Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\""
Jul  2 07:07:54.915715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2073932691.mount: Deactivated successfully.
Jul  2 07:07:57.626426 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Jul  2 07:07:57.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:07:57.626705 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul  2 07:07:57.625000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:07:57.644747 kernel: audit: type=1130 audit(1719904077.625:260): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:07:57.644907 kernel: audit: type=1131 audit(1719904077.625:261): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:07:57.647424 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul  2 07:07:57.741526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul  2 07:07:57.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:07:57.752949 kernel: audit: type=1130 audit(1719904077.741:262): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:07:58.287100 kubelet[2185]: E0702 07:07:58.287044    2185 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul  2 07:07:58.289012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul  2 07:07:58.289184 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul  2 07:07:58.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jul  2 07:07:58.298883 kernel: audit: type=1131 audit(1719904078.288:263): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jul  2 07:08:05.520435 containerd[1481]: time="2024-07-02T07:08:05.520366235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.2\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:08:05.524965 containerd[1481]: time="2024-07-02T07:08:05.524888688Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.2: active requests=0, bytes read=32771809"
Jul  2 07:08:05.533516 containerd[1481]: time="2024-07-02T07:08:05.533456889Z" level=info msg="ImageCreate event name:\"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:08:05.542196 containerd[1481]: time="2024-07-02T07:08:05.542135091Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.30.2\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:08:05.563106 containerd[1481]: time="2024-07-02T07:08:05.563043038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:08:05.564664 containerd[1481]: time="2024-07-02T07:08:05.564605856Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.2\" with image id \"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\", size \"32768601\" in 13.955147294s"
Jul  2 07:08:05.564889 containerd[1481]: time="2024-07-02T07:08:05.564845559Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\" returns image reference \"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\""
Jul  2 07:08:05.593699 containerd[1481]: time="2024-07-02T07:08:05.593648899Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\""
Jul  2 07:08:07.432742 containerd[1481]: time="2024-07-02T07:08:07.432662545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.2\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:08:07.440273 containerd[1481]: time="2024-07-02T07:08:07.440196831Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.2: active requests=0, bytes read=29588682"
Jul  2 07:08:07.448443 containerd[1481]: time="2024-07-02T07:08:07.448378723Z" level=info msg="ImageCreate event name:\"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:08:07.453217 containerd[1481]: time="2024-07-02T07:08:07.453151177Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.30.2\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:08:07.462095 containerd[1481]: time="2024-07-02T07:08:07.462038378Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:08:07.463373 containerd[1481]: time="2024-07-02T07:08:07.463316693Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.2\" with image id \"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\", size \"31138657\" in 1.869614293s"
Jul  2 07:08:07.465950 containerd[1481]: time="2024-07-02T07:08:07.463377093Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\" returns image reference \"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\""
Jul  2 07:08:07.488139 containerd[1481]: time="2024-07-02T07:08:07.488080873Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\""
Jul  2 07:08:08.376451 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Jul  2 07:08:08.389415 kernel: audit: type=1130 audit(1719904088.375:264): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:08:08.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:08:08.376726 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul  2 07:08:08.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:08:08.400888 kernel: audit: type=1131 audit(1719904088.375:265): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:08:08.401432 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul  2 07:08:08.566056 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul  2 07:08:08.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:08:08.585886 kernel: audit: type=1130 audit(1719904088.565:266): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:08:08.673121 kubelet[2258]: E0702 07:08:08.672556    2258 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul  2 07:08:08.675804 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul  2 07:08:08.676022 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul  2 07:08:08.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jul  2 07:08:08.690894 kernel: audit: type=1131 audit(1719904088.675:267): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jul  2 07:08:09.146846 containerd[1481]: time="2024-07-02T07:08:09.146769190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.2\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:08:09.150819 containerd[1481]: time="2024-07-02T07:08:09.150755234Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.2: active requests=0, bytes read=17778128"
Jul  2 07:08:09.155853 containerd[1481]: time="2024-07-02T07:08:09.155805289Z" level=info msg="ImageCreate event name:\"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:08:09.163012 containerd[1481]: time="2024-07-02T07:08:09.162966967Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.30.2\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:08:09.169680 containerd[1481]: time="2024-07-02T07:08:09.169631539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:08:09.170733 containerd[1481]: time="2024-07-02T07:08:09.170684451Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.2\" with image id \"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\", size \"19328121\" in 1.682548677s"
Jul  2 07:08:09.170930 containerd[1481]: time="2024-07-02T07:08:09.170903053Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\" returns image reference \"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\""
Jul  2 07:08:09.194596 containerd[1481]: time="2024-07-02T07:08:09.194548511Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\""
Jul  2 07:08:13.753507 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3311573916.mount: Deactivated successfully.
Jul  2 07:08:14.766620 containerd[1481]: time="2024-07-02T07:08:14.766555159Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.2\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:08:14.814161 containerd[1481]: time="2024-07-02T07:08:14.814061532Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.2: active requests=0, bytes read=29035446"
Jul  2 07:08:15.113345 containerd[1481]: time="2024-07-02T07:08:15.113034587Z" level=info msg="ImageCreate event name:\"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:08:15.158996 containerd[1481]: time="2024-07-02T07:08:15.158932735Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.30.2\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:08:15.163292 containerd[1481]: time="2024-07-02T07:08:15.163230577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:08:15.164346 containerd[1481]: time="2024-07-02T07:08:15.164297388Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.2\" with image id \"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\", repo tag \"registry.k8s.io/kube-proxy:v1.30.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\", size \"29034457\" in 5.969699277s"
Jul  2 07:08:15.164527 containerd[1481]: time="2024-07-02T07:08:15.164499890Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\" returns image reference \"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\""
Jul  2 07:08:15.187790 containerd[1481]: time="2024-07-02T07:08:15.187742117Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jul  2 07:08:18.876425 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
Jul  2 07:08:18.876761 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul  2 07:08:18.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:08:18.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:08:18.894907 kernel: audit: type=1130 audit(1719904098.875:268): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:08:18.895027 kernel: audit: type=1131 audit(1719904098.875:269): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:08:18.902422 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul  2 07:08:19.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:08:19.001772 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul  2 07:08:19.012065 kernel: audit: type=1130 audit(1719904099.001:270): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:08:19.047679 kubelet[2286]: E0702 07:08:19.047623    2286 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul  2 07:08:19.049604 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul  2 07:08:19.049775 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul  2 07:08:19.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jul  2 07:08:19.062889 kernel: audit: type=1131 audit(1719904099.049:271): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jul  2 07:08:22.625583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2564796349.mount: Deactivated successfully.
Jul  2 07:08:29.126577 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
Jul  2 07:08:29.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:08:29.126967 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul  2 07:08:29.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:08:29.148318 kernel: audit: type=1130 audit(1719904109.126:272): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:08:29.148481 kernel: audit: type=1131 audit(1719904109.126:273): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:08:29.151434 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul  2 07:08:29.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:08:29.603747 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul  2 07:08:29.621897 kernel: audit: type=1130 audit(1719904109.603:274): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:08:29.667018 kubelet[2304]: E0702 07:08:29.666969    2304 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul  2 07:08:29.669052 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul  2 07:08:29.669226 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul  2 07:08:29.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jul  2 07:08:29.918752 kernel: audit: type=1131 audit(1719904109.668:275): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jul  2 07:08:33.946475 containerd[1481]: time="2024-07-02T07:08:33.946405290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:08:33.954131 containerd[1481]: time="2024-07-02T07:08:33.954053847Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769"
Jul  2 07:08:33.960429 containerd[1481]: time="2024-07-02T07:08:33.960371194Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:08:33.978108 containerd[1481]: time="2024-07-02T07:08:33.978043925Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:08:33.986116 containerd[1481]: time="2024-07-02T07:08:33.986060985Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:08:33.987535 containerd[1481]: time="2024-07-02T07:08:33.987471195Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 18.799685378s"
Jul  2 07:08:33.987726 containerd[1481]: time="2024-07-02T07:08:33.987699397Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jul  2 07:08:34.011388 containerd[1481]: time="2024-07-02T07:08:34.011344872Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jul  2 07:08:34.749444 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3578001546.mount: Deactivated successfully.
Jul  2 07:08:34.782208 containerd[1481]: time="2024-07-02T07:08:34.782153922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:08:34.785361 containerd[1481]: time="2024-07-02T07:08:34.785293145Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298"
Jul  2 07:08:34.791887 containerd[1481]: time="2024-07-02T07:08:34.791812993Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:08:34.797192 containerd[1481]: time="2024-07-02T07:08:34.797145432Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.9\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:08:34.800880 containerd[1481]: time="2024-07-02T07:08:34.800817159Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:08:34.801636 containerd[1481]: time="2024-07-02T07:08:34.801588865Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 790.003392ms"
Jul  2 07:08:34.801793 containerd[1481]: time="2024-07-02T07:08:34.801639665Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jul  2 07:08:34.823565 containerd[1481]: time="2024-07-02T07:08:34.823524426Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Jul  2 07:08:35.510962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2467863109.mount: Deactivated successfully.
Jul  2 07:08:38.248561 containerd[1481]: time="2024-07-02T07:08:38.248484407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:08:38.251357 containerd[1481]: time="2024-07-02T07:08:38.251281527Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579"
Jul  2 07:08:38.255384 containerd[1481]: time="2024-07-02T07:08:38.255329255Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:08:38.259485 containerd[1481]: time="2024-07-02T07:08:38.259435884Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.5.12-0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:08:38.265005 containerd[1481]: time="2024-07-02T07:08:38.264956822Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:08:38.266162 containerd[1481]: time="2024-07-02T07:08:38.266098330Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.442528904s"
Jul  2 07:08:38.266346 containerd[1481]: time="2024-07-02T07:08:38.266320432Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Jul  2 07:08:39.876528 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13.
Jul  2 07:08:39.876812 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul  2 07:08:39.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:08:39.895961 kernel: audit: type=1130 audit(1719904119.875:276): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:08:39.896122 kernel: audit: type=1131 audit(1719904119.875:277): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:08:39.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:08:39.897419 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul  2 07:08:40.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:08:40.046498 kernel: audit: type=1130 audit(1719904120.034:278): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:08:40.034463 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul  2 07:08:40.582379 kubelet[2467]: E0702 07:08:40.582322    2467 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul  2 07:08:40.584624 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul  2 07:08:40.584799 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul  2 07:08:40.584000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jul  2 07:08:40.595944 kernel: audit: type=1131 audit(1719904120.584:279): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
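The run.go error above is the usual symptom of kubelet.service being started before kubeadm has written /var/lib/kubelet/config.yaml, so systemd keeps restarting the unit (the counter is already at 13) until the file shows up. A hypothetical wait loop for that file, purely as an illustration of the condition the unit is stuck on (this is not kubelet or kubeadm code):

package main

// Hypothetical helper: poll until /var/lib/kubelet/config.yaml exists, i.e.
// until kubeadm has produced the kubelet configuration the failing unit above
// is complaining about. Illustration only.
import (
	"fmt"
	"os"
	"time"
)

func main() {
	const path = "/var/lib/kubelet/config.yaml"
	for {
		if _, err := os.Stat(path); err == nil {
			fmt.Println(path, "exists; kubelet should now start cleanly")
			return
		}
		time.Sleep(time.Second)
	}
}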
Jul  2 07:08:40.871848 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul  2 07:08:40.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:08:40.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:08:40.890614 kernel: audit: type=1130 audit(1719904120.871:280): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:08:40.890733 kernel: audit: type=1131 audit(1719904120.871:281): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:08:40.893425 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul  2 07:08:40.913784 systemd[1]: Reloading.
Jul  2 07:08:41.137022 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul  2 07:08:41.229000 audit: BPF prog-id=72 op=LOAD
Jul  2 07:08:41.242892 kernel: audit: type=1334 audit(1719904121.229:282): prog-id=72 op=LOAD
Jul  2 07:08:41.243008 kernel: audit: type=1334 audit(1719904121.229:283): prog-id=58 op=UNLOAD
Jul  2 07:08:41.243034 kernel: audit: type=1334 audit(1719904121.235:284): prog-id=73 op=LOAD
Jul  2 07:08:41.229000 audit: BPF prog-id=58 op=UNLOAD
Jul  2 07:08:41.235000 audit: BPF prog-id=73 op=LOAD
Jul  2 07:08:41.235000 audit: BPF prog-id=59 op=UNLOAD
Jul  2 07:08:41.247581 kernel: audit: type=1334 audit(1719904121.235:285): prog-id=59 op=UNLOAD
Jul  2 07:08:41.235000 audit: BPF prog-id=74 op=LOAD
Jul  2 07:08:41.235000 audit: BPF prog-id=75 op=LOAD
Jul  2 07:08:41.235000 audit: BPF prog-id=60 op=UNLOAD
Jul  2 07:08:41.235000 audit: BPF prog-id=61 op=UNLOAD
Jul  2 07:08:41.237000 audit: BPF prog-id=76 op=LOAD
Jul  2 07:08:41.237000 audit: BPF prog-id=62 op=UNLOAD
Jul  2 07:08:41.237000 audit: BPF prog-id=77 op=LOAD
Jul  2 07:08:41.237000 audit: BPF prog-id=63 op=UNLOAD
Jul  2 07:08:41.237000 audit: BPF prog-id=78 op=LOAD
Jul  2 07:08:41.237000 audit: BPF prog-id=79 op=LOAD
Jul  2 07:08:41.237000 audit: BPF prog-id=64 op=UNLOAD
Jul  2 07:08:41.237000 audit: BPF prog-id=65 op=UNLOAD
Jul  2 07:08:41.240000 audit: BPF prog-id=80 op=LOAD
Jul  2 07:08:41.240000 audit: BPF prog-id=66 op=UNLOAD
Jul  2 07:08:41.240000 audit: BPF prog-id=81 op=LOAD
Jul  2 07:08:41.240000 audit: BPF prog-id=82 op=LOAD
Jul  2 07:08:41.240000 audit: BPF prog-id=67 op=UNLOAD
Jul  2 07:08:41.240000 audit: BPF prog-id=68 op=UNLOAD
Jul  2 07:08:41.241000 audit: BPF prog-id=83 op=LOAD
Jul  2 07:08:41.241000 audit: BPF prog-id=84 op=LOAD
Jul  2 07:08:41.241000 audit: BPF prog-id=69 op=UNLOAD
Jul  2 07:08:41.241000 audit: BPF prog-id=70 op=UNLOAD
Jul  2 07:08:41.241000 audit: BPF prog-id=85 op=LOAD
Jul  2 07:08:41.241000 audit: BPF prog-id=71 op=UNLOAD
Jul  2 07:08:41.430315 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul  2 07:08:41.430446 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul  2 07:08:41.430782 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul  2 07:08:41.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jul  2 07:08:41.436612 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul  2 07:08:46.852840 kernel: kauditd_printk_skb: 25 callbacks suppressed
Jul  2 07:08:46.853009 kernel: audit: type=1130 audit(1719904126.848:311): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:08:46.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:08:46.848927 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul  2 07:08:46.901301 kubelet[2561]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul  2 07:08:46.901301 kubelet[2561]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul  2 07:08:46.901301 kubelet[2561]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul  2 07:08:46.901888 kubelet[2561]: I0702 07:08:46.901383    2561 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul  2 07:08:47.347272 kubelet[2561]: I0702 07:08:47.347221    2561 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jul  2 07:08:47.347272 kubelet[2561]: I0702 07:08:47.347256    2561 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul  2 07:08:47.347582 kubelet[2561]: I0702 07:08:47.347561    2561 server.go:927] "Client rotation is on, will bootstrap in background"
Jul  2 07:08:47.362131 kubelet[2561]: I0702 07:08:47.362053    2561 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul  2 07:08:47.362682 kubelet[2561]: E0702 07:08:47.362653    2561 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.44:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.44:6443: connect: connection refused
Jul  2 07:08:47.372903 kubelet[2561]: I0702 07:08:47.372872    2561 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Jul  2 07:08:47.374182 kubelet[2561]: I0702 07:08:47.374126    2561 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul  2 07:08:47.374412 kubelet[2561]: I0702 07:08:47.374179    2561 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3815.2.5-a-b9d6671d68","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jul  2 07:08:47.374798 kubelet[2561]: I0702 07:08:47.374779    2561 topology_manager.go:138] "Creating topology manager with none policy"
Jul  2 07:08:47.374900 kubelet[2561]: I0702 07:08:47.374804    2561 container_manager_linux.go:301] "Creating device plugin manager"
Jul  2 07:08:47.374996 kubelet[2561]: I0702 07:08:47.374979    2561 state_mem.go:36] "Initialized new in-memory state store"
Jul  2 07:08:47.375748 kubelet[2561]: I0702 07:08:47.375731    2561 kubelet.go:400] "Attempting to sync node with API server"
Jul  2 07:08:47.375846 kubelet[2561]: I0702 07:08:47.375755    2561 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul  2 07:08:47.375846 kubelet[2561]: I0702 07:08:47.375788    2561 kubelet.go:312] "Adding apiserver pod source"
Jul  2 07:08:47.375846 kubelet[2561]: I0702 07:08:47.375819    2561 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul  2 07:08:47.380290 kubelet[2561]: I0702 07:08:47.380263    2561 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1"
Jul  2 07:08:47.381823 kubelet[2561]: I0702 07:08:47.381794    2561 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul  2 07:08:47.381939 kubelet[2561]: W0702 07:08:47.381903    2561 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul  2 07:08:47.382619 kubelet[2561]: I0702 07:08:47.382598    2561 server.go:1264] "Started kubelet"
Jul  2 07:08:47.382820 kubelet[2561]: W0702 07:08:47.382764    2561 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.44:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.44:6443: connect: connection refused
Jul  2 07:08:47.382903 kubelet[2561]: E0702 07:08:47.382838    2561 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.44:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.44:6443: connect: connection refused
Jul  2 07:08:47.382996 kubelet[2561]: W0702 07:08:47.382945    2561 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815.2.5-a-b9d6671d68&limit=500&resourceVersion=0": dial tcp 10.200.8.44:6443: connect: connection refused
Jul  2 07:08:47.383051 kubelet[2561]: E0702 07:08:47.383020    2561 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815.2.5-a-b9d6671d68&limit=500&resourceVersion=0": dial tcp 10.200.8.44:6443: connect: connection refused
Jul  2 07:08:47.393614 kubelet[2561]: E0702 07:08:47.393497    2561 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.44:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.44:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3815.2.5-a-b9d6671d68.17de53b1005dd1bb  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3815.2.5-a-b9d6671d68,UID:ci-3815.2.5-a-b9d6671d68,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3815.2.5-a-b9d6671d68,},FirstTimestamp:2024-07-02 07:08:47.382573499 +0000 UTC m=+0.518033191,LastTimestamp:2024-07-02 07:08:47.382573499 +0000 UTC m=+0.518033191,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3815.2.5-a-b9d6671d68,}"
Jul  2 07:08:47.393801 kubelet[2561]: I0702 07:08:47.393626    2561 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul  2 07:08:47.395743 kubelet[2561]: I0702 07:08:47.395713    2561 server.go:455] "Adding debug handlers to kubelet server"
Jul  2 07:08:47.396049 kubelet[2561]: I0702 07:08:47.395991    2561 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul  2 07:08:47.396511 kubelet[2561]: I0702 07:08:47.396491    2561 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul  2 07:08:47.399996 kubelet[2561]: I0702 07:08:47.399314    2561 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul  2 07:08:47.401085 kubelet[2561]: I0702 07:08:47.401068    2561 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jul  2 07:08:47.401562 kubelet[2561]: I0702 07:08:47.401547    2561 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jul  2 07:08:47.401685 kubelet[2561]: I0702 07:08:47.401677    2561 reconciler.go:26] "Reconciler: start to sync state"
Jul  2 07:08:47.403176 kubelet[2561]: W0702 07:08:47.403131    2561 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.44:6443: connect: connection refused
Jul  2 07:08:47.403309 kubelet[2561]: E0702 07:08:47.403294    2561 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.44:6443: connect: connection refused
Jul  2 07:08:47.404585 kubelet[2561]: E0702 07:08:47.404562    2561 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.5-a-b9d6671d68?timeout=10s\": dial tcp 10.200.8.44:6443: connect: connection refused" interval="200ms"
Jul  2 07:08:47.405443 kubelet[2561]: I0702 07:08:47.405429    2561 factory.go:221] Registration of the containerd container factory successfully
Jul  2 07:08:47.405523 kubelet[2561]: I0702 07:08:47.405517    2561 factory.go:221] Registration of the systemd container factory successfully
Jul  2 07:08:47.405634 kubelet[2561]: I0702 07:08:47.405624    2561 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
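cAdvisor registers one container factory per runtime it can reach: the containerd and systemd factories succeed above, while the crio factory is rejected only because /var/run/crio/crio.sock does not exist on this containerd-based node. A small, hypothetical probe of the two unix sockets named in this log, just to show what that check amounts to:

package main

// Hypothetical probe: dial the runtime sockets mentioned in the log above and
// report which of them accept a connection. Paths are taken from the log.
import (
	"fmt"
	"net"
	"time"
)

func main() {
	for _, sock := range []string{
		"/run/containerd/containerd.sock",
		"/var/run/crio/crio.sock",
	} {
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Printf("%s: not reachable (%v)\n", sock, err)
			continue
		}
		conn.Close()
		fmt.Printf("%s: reachable\n", sock)
	}
}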
Jul  2 07:08:47.410000 audit[2571]: NETFILTER_CFG table=mangle:29 family=2 entries=2 op=nft_register_chain pid=2571 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:08:47.419956 kernel: audit: type=1325 audit(1719904127.410:312): table=mangle:29 family=2 entries=2 op=nft_register_chain pid=2571 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:08:47.410000 audit[2571]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff3a62d360 a2=0 a3=7f349d1fce90 items=0 ppid=2561 pid=2571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:08:47.433016 kernel: audit: type=1300 audit(1719904127.410:312): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff3a62d360 a2=0 a3=7f349d1fce90 items=0 ppid=2561 pid=2571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:08:47.410000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Jul  2 07:08:47.433000 audit[2574]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_chain pid=2574 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:08:47.445619 kernel: audit: type=1327 audit(1719904127.410:312): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Jul  2 07:08:47.445695 kernel: audit: type=1325 audit(1719904127.433:313): table=filter:30 family=2 entries=1 op=nft_register_chain pid=2574 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:08:47.433000 audit[2574]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcef3edff0 a2=0 a3=7f5e73c7be90 items=0 ppid=2561 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:08:47.456890 kernel: audit: type=1300 audit(1719904127.433:313): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcef3edff0 a2=0 a3=7f5e73c7be90 items=0 ppid=2561 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:08:47.433000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572
Jul  2 07:08:47.463908 kernel: audit: type=1327 audit(1719904127.433:313): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572
Jul  2 07:08:47.435000 audit[2577]: NETFILTER_CFG table=filter:31 family=2 entries=2 op=nft_register_chain pid=2577 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:08:47.464882 kernel: audit: type=1325 audit(1719904127.435:314): table=filter:31 family=2 entries=2 op=nft_register_chain pid=2577 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:08:47.435000 audit[2577]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd099f6ac0 a2=0 a3=7f3ca37f1e90 items=0 ppid=2561 pid=2577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:08:47.435000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Jul  2 07:08:47.488372 kernel: audit: type=1300 audit(1719904127.435:314): arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd099f6ac0 a2=0 a3=7f3ca37f1e90 items=0 ppid=2561 pid=2577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:08:47.488445 kernel: audit: type=1327 audit(1719904127.435:314): proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Jul  2 07:08:47.437000 audit[2579]: NETFILTER_CFG table=filter:32 family=2 entries=2 op=nft_register_chain pid=2579 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:08:47.437000 audit[2579]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff05a81df0 a2=0 a3=7fb24af3be90 items=0 ppid=2561 pid=2579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:08:47.437000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C
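Each NETFILTER_CFG record above is followed by a PROCTITLE line whose value is the hex-encoded, NUL-separated argv of the iptables call the kubelet made while setting up its KUBE-IPTABLES-HINT and KUBE-FIREWALL chains. A short sketch for decoding such a proctitle value back into a readable command line (the constant is copied verbatim from the first PROCTITLE record above):

package main

// Sketch: decode an audit PROCTITLE hex blob into the command line it encodes.
// argv elements are separated by NUL bytes in the decoded data.
import (
	"encoding/hex"
	"fmt"
	"log"
	"strings"
)

func main() {
	const proctitle = "69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65"
	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		log.Fatal(err)
	}
	args := strings.Split(strings.Trim(string(raw), "\x00"), "\x00")
	fmt.Println(strings.Join(args, " "))
	// Prints: iptables -w 5 -W 100000 -N KUBE-IPTABLES-HINT -t mangle
}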
Jul  2 07:08:47.576506 kubelet[2561]: I0702 07:08:47.576472    2561 kubelet_node_status.go:73] "Attempting to register node" node="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:08:47.577482 kubelet[2561]: E0702 07:08:47.577450    2561 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.44:6443/api/v1/nodes\": dial tcp 10.200.8.44:6443: connect: connection refused" node="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:08:47.577641 kubelet[2561]: I0702 07:08:47.577496    2561 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul  2 07:08:47.577641 kubelet[2561]: I0702 07:08:47.577554    2561 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul  2 07:08:47.577641 kubelet[2561]: I0702 07:08:47.577590    2561 state_mem.go:36] "Initialized new in-memory state store"
Jul  2 07:08:47.606037 kubelet[2561]: E0702 07:08:47.605846    2561 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.5-a-b9d6671d68?timeout=10s\": dial tcp 10.200.8.44:6443: connect: connection refused" interval="400ms"
Jul  2 07:08:47.780349 kubelet[2561]: I0702 07:08:47.780308    2561 kubelet_node_status.go:73] "Attempting to register node" node="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:08:47.780799 kubelet[2561]: E0702 07:08:47.780764    2561 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.44:6443/api/v1/nodes\": dial tcp 10.200.8.44:6443: connect: connection refused" node="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:08:48.007061 kubelet[2561]: E0702 07:08:48.006998    2561 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.5-a-b9d6671d68?timeout=10s\": dial tcp 10.200.8.44:6443: connect: connection refused" interval="800ms"
Jul  2 07:08:48.183612 kubelet[2561]: I0702 07:08:48.183575    2561 kubelet_node_status.go:73] "Attempting to register node" node="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:08:48.184037 kubelet[2561]: E0702 07:08:48.184002    2561 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.44:6443/api/v1/nodes\": dial tcp 10.200.8.44:6443: connect: connection refused" node="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:08:48.307518 kubelet[2561]: W0702 07:08:48.307358    2561 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.44:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.44:6443: connect: connection refused
Jul  2 07:08:48.307518 kubelet[2561]: E0702 07:08:48.307433    2561 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.44:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.44:6443: connect: connection refused
Jul  2 07:08:49.125600 kubelet[2561]: W0702 07:08:48.575297    2561 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815.2.5-a-b9d6671d68&limit=500&resourceVersion=0": dial tcp 10.200.8.44:6443: connect: connection refused
Jul  2 07:08:49.125600 kubelet[2561]: E0702 07:08:48.575405    2561 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815.2.5-a-b9d6671d68&limit=500&resourceVersion=0": dial tcp 10.200.8.44:6443: connect: connection refused
Jul  2 07:08:49.125600 kubelet[2561]: E0702 07:08:48.808127    2561 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.5-a-b9d6671d68?timeout=10s\": dial tcp 10.200.8.44:6443: connect: connection refused" interval="1.6s"
Jul  2 07:08:49.125600 kubelet[2561]: W0702 07:08:48.963991    2561 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.44:6443: connect: connection refused
Jul  2 07:08:49.125600 kubelet[2561]: E0702 07:08:48.964036    2561 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.44:6443: connect: connection refused
Jul  2 07:08:49.125600 kubelet[2561]: I0702 07:08:48.987200    2561 kubelet_node_status.go:73] "Attempting to register node" node="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:08:49.125600 kubelet[2561]: E0702 07:08:48.987577    2561 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.44:6443/api/v1/nodes\": dial tcp 10.200.8.44:6443: connect: connection refused" node="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:08:49.264000 audit[2584]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=2584 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:08:49.264000 audit[2584]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffeb5da9910 a2=0 a3=7f03c72bae90 items=0 ppid=2561 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:08:49.264000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38
Jul  2 07:08:49.266345 kubelet[2561]: I0702 07:08:49.266234    2561 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul  2 07:08:49.266000 audit[2585]: NETFILTER_CFG table=mangle:34 family=10 entries=2 op=nft_register_chain pid=2585 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul  2 07:08:49.266000 audit[2585]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fffb502df20 a2=0 a3=7f2cebcf5e90 items=0 ppid=2561 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:08:49.266000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Jul  2 07:08:49.269477 kubelet[2561]: I0702 07:08:49.269449    2561 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul  2 07:08:49.269623 kubelet[2561]: I0702 07:08:49.269610    2561 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul  2 07:08:49.268000 audit[2586]: NETFILTER_CFG table=mangle:35 family=2 entries=1 op=nft_register_chain pid=2586 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:08:49.268000 audit[2586]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd1d311190 a2=0 a3=7f0f51db8e90 items=0 ppid=2561 pid=2586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:08:49.268000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65
Jul  2 07:08:49.270000 audit[2588]: NETFILTER_CFG table=mangle:36 family=10 entries=1 op=nft_register_chain pid=2588 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul  2 07:08:49.270000 audit[2588]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc4c1213c0 a2=0 a3=7f900055ae90 items=0 ppid=2561 pid=2588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:08:49.270000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65
Jul  2 07:08:49.273327 kubelet[2561]: I0702 07:08:49.270134    2561 kubelet.go:2337] "Starting kubelet main sync loop"
Jul  2 07:08:49.273327 kubelet[2561]: E0702 07:08:49.270183    2561 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul  2 07:08:49.273327 kubelet[2561]: W0702 07:08:49.270812    2561 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.44:6443: connect: connection refused
Jul  2 07:08:49.273327 kubelet[2561]: E0702 07:08:49.270925    2561 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.44:6443: connect: connection refused
Jul  2 07:08:49.273000 audit[2589]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_chain pid=2589 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:08:49.273000 audit[2589]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffced21ea10 a2=0 a3=7f0875bbce90 items=0 ppid=2561 pid=2589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:08:49.273000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174
Jul  2 07:08:49.274000 audit[2590]: NETFILTER_CFG table=nat:38 family=10 entries=2 op=nft_register_chain pid=2590 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul  2 07:08:49.274000 audit[2590]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7fff8d156550 a2=0 a3=7f726daf4e90 items=0 ppid=2561 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:08:49.274000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174
Jul  2 07:08:49.276648 kubelet[2561]: I0702 07:08:49.276627    2561 policy_none.go:49] "None policy: Start"
Jul  2 07:08:49.277533 kubelet[2561]: I0702 07:08:49.277516    2561 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul  2 07:08:49.276000 audit[2591]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_chain pid=2591 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:08:49.276000 audit[2591]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcd4761460 a2=0 a3=7ff0c7897e90 items=0 ppid=2561 pid=2591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:08:49.276000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572
Jul  2 07:08:49.277974 kubelet[2561]: I0702 07:08:49.277952    2561 state_mem.go:35] "Initializing new in-memory state store"
Jul  2 07:08:49.278000 audit[2592]: NETFILTER_CFG table=filter:40 family=10 entries=2 op=nft_register_chain pid=2592 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul  2 07:08:49.278000 audit[2592]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc85753930 a2=0 a3=7f3cbc4dfe90 items=0 ppid=2561 pid=2592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:08:49.278000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572
Jul  2 07:08:49.370997 kubelet[2561]: E0702 07:08:49.370943    2561 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul  2 07:08:49.415043 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul  2 07:08:49.427540 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul  2 07:08:49.430632 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
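With the systemd cgroup driver ("CgroupDriver":"systemd" in the node config above), the kubelet asks systemd to create kubepods.slice plus per-QoS child slices, which is what the three "Created slice" lines show: Guaranteed pods land directly under kubepods.slice, Burstable and BestEffort pods under their own sub-slices. A hypothetical check that those sub-slices exist in the unified cgroup hierarchy (the /sys/fs/cgroup layout is an assumption about a cgroup v2 host, not something printed in this log):

package main

// Hypothetical check: list the QoS sub-slices under kubepods.slice in a
// cgroup v2 unified hierarchy. The path layout is assumed, not read from the log.
import (
	"fmt"
	"log"
	"path/filepath"
)

func main() {
	matches, err := filepath.Glob("/sys/fs/cgroup/kubepods.slice/kubepods-*.slice")
	if err != nil {
		log.Fatal(err)
	}
	for _, m := range matches {
		fmt.Println(m)
	}
}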
Jul  2 07:08:49.437659 kubelet[2561]: I0702 07:08:49.437624    2561 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul  2 07:08:49.437934 kubelet[2561]: I0702 07:08:49.437888    2561 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul  2 07:08:49.438042 kubelet[2561]: I0702 07:08:49.438032    2561 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul  2 07:08:49.442152 kubelet[2561]: E0702 07:08:49.442126    2561 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3815.2.5-a-b9d6671d68\" not found"
Jul  2 07:08:49.464258 kubelet[2561]: E0702 07:08:49.464214    2561 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.44:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.44:6443: connect: connection refused
Jul  2 07:08:49.572113 kubelet[2561]: I0702 07:08:49.572029    2561 topology_manager.go:215] "Topology Admit Handler" podUID="2e08ab65cc4317b0bb8c99bd4d52a10b" podNamespace="kube-system" podName="kube-apiserver-ci-3815.2.5-a-b9d6671d68"
Jul  2 07:08:49.574254 kubelet[2561]: I0702 07:08:49.574203    2561 topology_manager.go:215] "Topology Admit Handler" podUID="c4c423135a13d65348033bff1ba62872" podNamespace="kube-system" podName="kube-controller-manager-ci-3815.2.5-a-b9d6671d68"
Jul  2 07:08:49.576295 kubelet[2561]: I0702 07:08:49.576130    2561 topology_manager.go:215] "Topology Admit Handler" podUID="fda5fd723536f947d0002b9b05d98d8d" podNamespace="kube-system" podName="kube-scheduler-ci-3815.2.5-a-b9d6671d68"
Jul  2 07:08:49.584670 systemd[1]: Created slice kubepods-burstable-pod2e08ab65cc4317b0bb8c99bd4d52a10b.slice - libcontainer container kubepods-burstable-pod2e08ab65cc4317b0bb8c99bd4d52a10b.slice.
Jul  2 07:08:49.595711 systemd[1]: Created slice kubepods-burstable-podc4c423135a13d65348033bff1ba62872.slice - libcontainer container kubepods-burstable-podc4c423135a13d65348033bff1ba62872.slice.
Jul  2 07:08:49.604775 systemd[1]: Created slice kubepods-burstable-podfda5fd723536f947d0002b9b05d98d8d.slice - libcontainer container kubepods-burstable-podfda5fd723536f947d0002b9b05d98d8d.slice.
Jul  2 07:08:49.615728 kubelet[2561]: I0702 07:08:49.615687    2561 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2e08ab65cc4317b0bb8c99bd4d52a10b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3815.2.5-a-b9d6671d68\" (UID: \"2e08ab65cc4317b0bb8c99bd4d52a10b\") " pod="kube-system/kube-apiserver-ci-3815.2.5-a-b9d6671d68"
Jul  2 07:08:49.615728 kubelet[2561]: I0702 07:08:49.615732    2561 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c4c423135a13d65348033bff1ba62872-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3815.2.5-a-b9d6671d68\" (UID: \"c4c423135a13d65348033bff1ba62872\") " pod="kube-system/kube-controller-manager-ci-3815.2.5-a-b9d6671d68"
Jul  2 07:08:49.615995 kubelet[2561]: I0702 07:08:49.615760    2561 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c4c423135a13d65348033bff1ba62872-k8s-certs\") pod \"kube-controller-manager-ci-3815.2.5-a-b9d6671d68\" (UID: \"c4c423135a13d65348033bff1ba62872\") " pod="kube-system/kube-controller-manager-ci-3815.2.5-a-b9d6671d68"
Jul  2 07:08:49.615995 kubelet[2561]: I0702 07:08:49.615780    2561 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4c423135a13d65348033bff1ba62872-kubeconfig\") pod \"kube-controller-manager-ci-3815.2.5-a-b9d6671d68\" (UID: \"c4c423135a13d65348033bff1ba62872\") " pod="kube-system/kube-controller-manager-ci-3815.2.5-a-b9d6671d68"
Jul  2 07:08:49.615995 kubelet[2561]: I0702 07:08:49.615801    2561 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fda5fd723536f947d0002b9b05d98d8d-kubeconfig\") pod \"kube-scheduler-ci-3815.2.5-a-b9d6671d68\" (UID: \"fda5fd723536f947d0002b9b05d98d8d\") " pod="kube-system/kube-scheduler-ci-3815.2.5-a-b9d6671d68"
Jul  2 07:08:49.615995 kubelet[2561]: I0702 07:08:49.615833    2561 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2e08ab65cc4317b0bb8c99bd4d52a10b-ca-certs\") pod \"kube-apiserver-ci-3815.2.5-a-b9d6671d68\" (UID: \"2e08ab65cc4317b0bb8c99bd4d52a10b\") " pod="kube-system/kube-apiserver-ci-3815.2.5-a-b9d6671d68"
Jul  2 07:08:49.615995 kubelet[2561]: I0702 07:08:49.615855    2561 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2e08ab65cc4317b0bb8c99bd4d52a10b-k8s-certs\") pod \"kube-apiserver-ci-3815.2.5-a-b9d6671d68\" (UID: \"2e08ab65cc4317b0bb8c99bd4d52a10b\") " pod="kube-system/kube-apiserver-ci-3815.2.5-a-b9d6671d68"
Jul  2 07:08:49.616148 kubelet[2561]: I0702 07:08:49.615892    2561 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c4c423135a13d65348033bff1ba62872-ca-certs\") pod \"kube-controller-manager-ci-3815.2.5-a-b9d6671d68\" (UID: \"c4c423135a13d65348033bff1ba62872\") " pod="kube-system/kube-controller-manager-ci-3815.2.5-a-b9d6671d68"
Jul  2 07:08:49.616148 kubelet[2561]: I0702 07:08:49.615914    2561 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c4c423135a13d65348033bff1ba62872-flexvolume-dir\") pod \"kube-controller-manager-ci-3815.2.5-a-b9d6671d68\" (UID: \"c4c423135a13d65348033bff1ba62872\") " pod="kube-system/kube-controller-manager-ci-3815.2.5-a-b9d6671d68"
Jul  2 07:08:49.894648 containerd[1481]: time="2024-07-02T07:08:49.894576243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3815.2.5-a-b9d6671d68,Uid:2e08ab65cc4317b0bb8c99bd4d52a10b,Namespace:kube-system,Attempt:0,}"
Jul  2 07:08:49.904357 containerd[1481]: time="2024-07-02T07:08:49.904309804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3815.2.5-a-b9d6671d68,Uid:c4c423135a13d65348033bff1ba62872,Namespace:kube-system,Attempt:0,}"
Jul  2 07:08:49.908306 containerd[1481]: time="2024-07-02T07:08:49.908263828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3815.2.5-a-b9d6671d68,Uid:fda5fd723536f947d0002b9b05d98d8d,Namespace:kube-system,Attempt:0,}"
Jul  2 07:08:50.409548 kubelet[2561]: E0702 07:08:50.409489    2561 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.5-a-b9d6671d68?timeout=10s\": dial tcp 10.200.8.44:6443: connect: connection refused" interval="3.2s"
Jul  2 07:08:50.589649 kubelet[2561]: I0702 07:08:50.589606    2561 kubelet_node_status.go:73] "Attempting to register node" node="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:08:50.590124 kubelet[2561]: E0702 07:08:50.590079    2561 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.44:6443/api/v1/nodes\": dial tcp 10.200.8.44:6443: connect: connection refused" node="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:08:50.669218 kubelet[2561]: W0702 07:08:50.669099    2561 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.44:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.44:6443: connect: connection refused
Jul  2 07:08:50.669218 kubelet[2561]: E0702 07:08:50.669146    2561 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.44:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.44:6443: connect: connection refused
Jul  2 07:08:50.748349 kubelet[2561]: W0702 07:08:50.748297    2561 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.44:6443: connect: connection refused
Jul  2 07:08:50.748349 kubelet[2561]: E0702 07:08:50.748353    2561 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.44:6443: connect: connection refused
Jul  2 07:08:50.917499 kubelet[2561]: W0702 07:08:50.917451    2561 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.44:6443: connect: connection refused
Jul  2 07:08:50.917499 kubelet[2561]: E0702 07:08:50.917503    2561 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.44:6443: connect: connection refused
Jul  2 07:08:50.980440 kubelet[2561]: W0702 07:08:50.980317    2561 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815.2.5-a-b9d6671d68&limit=500&resourceVersion=0": dial tcp 10.200.8.44:6443: connect: connection refused
Jul  2 07:08:50.980440 kubelet[2561]: E0702 07:08:50.980365    2561 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815.2.5-a-b9d6671d68&limit=500&resourceVersion=0": dial tcp 10.200.8.44:6443: connect: connection refused
Jul  2 07:08:52.030095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1638948583.mount: Deactivated successfully.
Jul  2 07:08:52.361148 containerd[1481]: time="2024-07-02T07:08:52.360640439Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Jul  2 07:08:52.406436 containerd[1481]: time="2024-07-02T07:08:52.406347215Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Jul  2 07:08:52.411256 containerd[1481]: time="2024-07-02T07:08:52.411178344Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064"
Jul  2 07:08:52.471331 containerd[1481]: time="2024-07-02T07:08:52.471256408Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Jul  2 07:08:52.511176 containerd[1481]: time="2024-07-02T07:08:52.511084549Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul  2 07:08:52.561953 containerd[1481]: time="2024-07-02T07:08:52.561848356Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Jul  2 07:08:52.565465 containerd[1481]: time="2024-07-02T07:08:52.565398577Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul  2 07:08:52.620215 containerd[1481]: time="2024-07-02T07:08:52.619360903Z" level=info msg="ImageUpdate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Jul  2 07:08:52.670815 containerd[1481]: time="2024-07-02T07:08:52.670738814Z" level=info msg="ImageUpdate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Jul  2 07:08:52.677251 containerd[1481]: time="2024-07-02T07:08:52.677185753Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Jul  2 07:08:52.708173 containerd[1481]: time="2024-07-02T07:08:52.708085040Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Jul  2 07:08:52.772628 containerd[1481]: time="2024-07-02T07:08:52.772539930Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Jul  2 07:08:52.788211 containerd[1481]: time="2024-07-02T07:08:52.788129524Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Jul  2 07:08:52.789559 containerd[1481]: time="2024-07-02T07:08:52.789498832Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.894775488s"
Jul  2 07:08:52.829802 containerd[1481]: time="2024-07-02T07:08:52.829730676Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Jul  2 07:08:52.830707 containerd[1481]: time="2024-07-02T07:08:52.830646881Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.922285652s"
Jul  2 07:08:52.840565 kubelet[2561]: W0702 07:08:52.840520    2561 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.44:6443: connect: connection refused
Jul  2 07:08:52.840950 kubelet[2561]: E0702 07:08:52.840574    2561 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.44:6443: connect: connection refused
Jul  2 07:08:52.866192 containerd[1481]: time="2024-07-02T07:08:52.866119496Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Jul  2 07:08:52.867272 containerd[1481]: time="2024-07-02T07:08:52.867202102Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.962770798s"
Jul  2 07:08:53.266069 kubelet[2561]: E0702 07:08:53.265934    2561 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.44:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.44:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3815.2.5-a-b9d6671d68.17de53b1005dd1bb  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3815.2.5-a-b9d6671d68,UID:ci-3815.2.5-a-b9d6671d68,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3815.2.5-a-b9d6671d68,},FirstTimestamp:2024-07-02 07:08:47.382573499 +0000 UTC m=+0.518033191,LastTimestamp:2024-07-02 07:08:47.382573499 +0000 UTC m=+0.518033191,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3815.2.5-a-b9d6671d68,}"
Jul  2 07:08:53.566025 kubelet[2561]: E0702 07:08:53.565888    2561 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.44:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.44:6443: connect: connection refused
Jul  2 07:08:53.611030 kubelet[2561]: E0702 07:08:53.610981    2561 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.5-a-b9d6671d68?timeout=10s\": dial tcp 10.200.8.44:6443: connect: connection refused" interval="6.4s"
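The "Failed to ensure lease exists, will retry" interval has doubled on every attempt so far: 200ms, 400ms, 800ms, 1.6s, 3.2s and now 6.4s, i.e. an exponential backoff while the API server at 10.200.8.44:6443 stays unreachable. A minimal sketch of that doubling pattern, under the assumption of a plain factor-2 backoff (the kubelet's actual backoff parameters and cap are not visible in this log):

package main

// Illustrative only: reproduce the doubling retry intervals visible in the
// controller.go lines above. Start value, factor and cap are assumptions.
import (
	"fmt"
	"time"
)

func main() {
	interval := 200 * time.Millisecond
	const factor = 2
	maxInterval := 7 * time.Second // assumed cap, chosen only for the sketch

	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d: next lease retry in %v\n", attempt, interval)
		interval *= factor
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}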
Jul  2 07:08:53.793029 kubelet[2561]: I0702 07:08:53.792983    2561 kubelet_node_status.go:73] "Attempting to register node" node="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:08:53.793481 kubelet[2561]: E0702 07:08:53.793433    2561 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.44:6443/api/v1/nodes\": dial tcp 10.200.8.44:6443: connect: connection refused" node="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:08:54.346887 containerd[1481]: time="2024-07-02T07:08:54.346753163Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul  2 07:08:54.347458 containerd[1481]: time="2024-07-02T07:08:54.346748863Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul  2 07:08:54.347458 containerd[1481]: time="2024-07-02T07:08:54.346816764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul  2 07:08:54.347458 containerd[1481]: time="2024-07-02T07:08:54.346844364Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul  2 07:08:54.347458 containerd[1481]: time="2024-07-02T07:08:54.346883464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul  2 07:08:54.347813 containerd[1481]: time="2024-07-02T07:08:54.347754669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul  2 07:08:54.348104 containerd[1481]: time="2024-07-02T07:08:54.348036671Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul  2 07:08:54.348289 containerd[1481]: time="2024-07-02T07:08:54.348238472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul  2 07:08:54.349702 containerd[1481]: time="2024-07-02T07:08:54.349617980Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul  2 07:08:54.349832 containerd[1481]: time="2024-07-02T07:08:54.349680981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul  2 07:08:54.349832 containerd[1481]: time="2024-07-02T07:08:54.349706381Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul  2 07:08:54.349832 containerd[1481]: time="2024-07-02T07:08:54.349725281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul  2 07:08:54.386125 systemd[1]: Started cri-containerd-9a6dce96b39b7b9a05659536e2727f802596aeab92b9bca2d7f6b98b3494a7fd.scope - libcontainer container 9a6dce96b39b7b9a05659536e2727f802596aeab92b9bca2d7f6b98b3494a7fd.
Jul  2 07:08:54.411071 systemd[1]: Started cri-containerd-b399876a18704254306fc724f3d6093aaf32ba0ef28159be06da7afe4340e565.scope - libcontainer container b399876a18704254306fc724f3d6093aaf32ba0ef28159be06da7afe4340e565.
Jul  2 07:08:54.412000 audit: BPF prog-id=86 op=LOAD
Jul  2 07:08:54.418036 kernel: kauditd_printk_skb: 27 callbacks suppressed
Jul  2 07:08:54.418162 kernel: audit: type=1334 audit(1719904134.412:324): prog-id=86 op=LOAD
Jul  2 07:08:54.415000 audit: BPF prog-id=87 op=LOAD
Jul  2 07:08:54.415000 audit[2651]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000139988 a2=78 a3=0 items=0 ppid=2623 pid=2651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:08:54.454665 kernel: audit: type=1334 audit(1719904134.415:325): prog-id=87 op=LOAD
Jul  2 07:08:54.454842 kernel: audit: type=1300 audit(1719904134.415:325): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000139988 a2=78 a3=0 items=0 ppid=2623 pid=2651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:08:54.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961366463653936623339623762396130353635393533366532373237
Jul  2 07:08:54.475661 kernel: audit: type=1327 audit(1719904134.415:325): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961366463653936623339623762396130353635393533366532373237
Jul  2 07:08:54.415000 audit: BPF prog-id=88 op=LOAD
Jul  2 07:08:54.482012 kernel: audit: type=1334 audit(1719904134.415:326): prog-id=88 op=LOAD
Jul  2 07:08:54.480540 systemd[1]: Started cri-containerd-f5b73d4fa99452f3b9be8a257ccf7afa6b5cd51446dd6c53a03d2aae7e8d2be7.scope - libcontainer container f5b73d4fa99452f3b9be8a257ccf7afa6b5cd51446dd6c53a03d2aae7e8d2be7.
Jul  2 07:08:54.415000 audit[2651]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000139720 a2=78 a3=0 items=0 ppid=2623 pid=2651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:08:54.524639 kernel: audit: type=1300 audit(1719904134.415:326): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000139720 a2=78 a3=0 items=0 ppid=2623 pid=2651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:08:54.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961366463653936623339623762396130353635393533366532373237
Jul  2 07:08:54.562927 kernel: audit: type=1327 audit(1719904134.415:326): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961366463653936623339623762396130353635393533366532373237
Jul  2 07:08:54.415000 audit: BPF prog-id=88 op=UNLOAD
Jul  2 07:08:54.589397 kernel: audit: type=1334 audit(1719904134.415:327): prog-id=88 op=UNLOAD
Jul  2 07:08:54.598305 kernel: audit: type=1334 audit(1719904134.415:328): prog-id=87 op=UNLOAD
Jul  2 07:08:54.598448 kernel: audit: type=1334 audit(1719904134.415:329): prog-id=89 op=LOAD
Jul  2 07:08:54.415000 audit: BPF prog-id=87 op=UNLOAD
Jul  2 07:08:54.415000 audit: BPF prog-id=89 op=LOAD
Jul  2 07:08:54.415000 audit[2651]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000139be0 a2=78 a3=0 items=0 ppid=2623 pid=2651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:08:54.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961366463653936623339623762396130353635393533366532373237
Jul  2 07:08:54.483000 audit: BPF prog-id=90 op=LOAD
Jul  2 07:08:54.488000 audit: BPF prog-id=91 op=LOAD
Jul  2 07:08:54.488000 audit[2657]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000139988 a2=78 a3=0 items=0 ppid=2624 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:08:54.488000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233393938373661313837303432353433303666633732346633643630
Jul  2 07:08:54.488000 audit: BPF prog-id=92 op=LOAD
Jul  2 07:08:54.488000 audit[2657]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000139720 a2=78 a3=0 items=0 ppid=2624 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:08:54.488000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233393938373661313837303432353433303666633732346633643630
Jul  2 07:08:54.488000 audit: BPF prog-id=92 op=UNLOAD
Jul  2 07:08:54.488000 audit: BPF prog-id=91 op=UNLOAD
Jul  2 07:08:54.488000 audit: BPF prog-id=93 op=LOAD
Jul  2 07:08:54.488000 audit[2657]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000139be0 a2=78 a3=0 items=0 ppid=2624 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:08:54.488000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233393938373661313837303432353433303666633732346633643630
Jul  2 07:08:54.530000 audit: BPF prog-id=94 op=LOAD
Jul  2 07:08:54.532000 audit: BPF prog-id=95 op=LOAD
Jul  2 07:08:54.532000 audit[2653]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001b9988 a2=78 a3=0 items=0 ppid=2622 pid=2653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:08:54.532000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635623733643466613939343532663362396265386132353763636637
Jul  2 07:08:54.532000 audit: BPF prog-id=96 op=LOAD
Jul  2 07:08:54.532000 audit[2653]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001b9720 a2=78 a3=0 items=0 ppid=2622 pid=2653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:08:54.532000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635623733643466613939343532663362396265386132353763636637
Jul  2 07:08:54.532000 audit: BPF prog-id=96 op=UNLOAD
Jul  2 07:08:54.532000 audit: BPF prog-id=95 op=UNLOAD
Jul  2 07:08:54.532000 audit: BPF prog-id=97 op=LOAD
Jul  2 07:08:54.532000 audit[2653]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001b9be0 a2=78 a3=0 items=0 ppid=2622 pid=2653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:08:54.532000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635623733643466613939343532663362396265386132353763636637
Jul  2 07:08:54.600252 containerd[1481]: time="2024-07-02T07:08:54.592017722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3815.2.5-a-b9d6671d68,Uid:fda5fd723536f947d0002b9b05d98d8d,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a6dce96b39b7b9a05659536e2727f802596aeab92b9bca2d7f6b98b3494a7fd\""
Jul  2 07:08:54.604931 containerd[1481]: time="2024-07-02T07:08:54.604833698Z" level=info msg="CreateContainer within sandbox \"9a6dce96b39b7b9a05659536e2727f802596aeab92b9bca2d7f6b98b3494a7fd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul  2 07:08:54.618960 containerd[1481]: time="2024-07-02T07:08:54.618857181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3815.2.5-a-b9d6671d68,Uid:c4c423135a13d65348033bff1ba62872,Namespace:kube-system,Attempt:0,} returns sandbox id \"f5b73d4fa99452f3b9be8a257ccf7afa6b5cd51446dd6c53a03d2aae7e8d2be7\""
Jul  2 07:08:54.622595 containerd[1481]: time="2024-07-02T07:08:54.619102883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3815.2.5-a-b9d6671d68,Uid:2e08ab65cc4317b0bb8c99bd4d52a10b,Namespace:kube-system,Attempt:0,} returns sandbox id \"b399876a18704254306fc724f3d6093aaf32ba0ef28159be06da7afe4340e565\""
Jul  2 07:08:54.623925 containerd[1481]: time="2024-07-02T07:08:54.623882011Z" level=info msg="CreateContainer within sandbox \"b399876a18704254306fc724f3d6093aaf32ba0ef28159be06da7afe4340e565\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul  2 07:08:54.624958 containerd[1481]: time="2024-07-02T07:08:54.624921317Z" level=info msg="CreateContainer within sandbox \"f5b73d4fa99452f3b9be8a257ccf7afa6b5cd51446dd6c53a03d2aae7e8d2be7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul  2 07:08:54.723333 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1151727851.mount: Deactivated successfully.
Jul  2 07:08:55.028916 kubelet[2561]: W0702 07:08:55.028830    2561 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.44:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.44:6443: connect: connection refused
Jul  2 07:08:55.028916 kubelet[2561]: E0702 07:08:55.028917    2561 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.44:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.44:6443: connect: connection refused
Jul  2 07:08:55.118665 containerd[1481]: time="2024-07-02T07:08:55.118558147Z" level=info msg="CreateContainer within sandbox \"9a6dce96b39b7b9a05659536e2727f802596aeab92b9bca2d7f6b98b3494a7fd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"10f946679b246cee83e075c791b0b27ab9f34e73e27f9d3fa4cf3916246b0be0\""
Jul  2 07:08:55.120195 containerd[1481]: time="2024-07-02T07:08:55.120122556Z" level=info msg="StartContainer for \"10f946679b246cee83e075c791b0b27ab9f34e73e27f9d3fa4cf3916246b0be0\""
Jul  2 07:08:55.145090 systemd[1]: Started cri-containerd-10f946679b246cee83e075c791b0b27ab9f34e73e27f9d3fa4cf3916246b0be0.scope - libcontainer container 10f946679b246cee83e075c791b0b27ab9f34e73e27f9d3fa4cf3916246b0be0.
Jul  2 07:08:55.155000 audit: BPF prog-id=98 op=LOAD
Jul  2 07:08:55.156000 audit: BPF prog-id=99 op=LOAD
Jul  2 07:08:55.156000 audit[2733]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133988 a2=78 a3=0 items=0 ppid=2623 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:08:55.156000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130663934363637396232343663656538336530373563373931623062
Jul  2 07:08:55.156000 audit: BPF prog-id=100 op=LOAD
Jul  2 07:08:55.156000 audit[2733]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000133720 a2=78 a3=0 items=0 ppid=2623 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:08:55.156000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130663934363637396232343663656538336530373563373931623062
Jul  2 07:08:55.156000 audit: BPF prog-id=100 op=UNLOAD
Jul  2 07:08:55.156000 audit: BPF prog-id=99 op=UNLOAD
Jul  2 07:08:55.156000 audit: BPF prog-id=101 op=LOAD
Jul  2 07:08:55.156000 audit[2733]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133be0 a2=78 a3=0 items=0 ppid=2623 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:08:55.156000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130663934363637396232343663656538336530373563373931623062
Jul  2 07:08:55.372622 kubelet[2561]: W0702 07:08:55.372434    2561 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.44:6443: connect: connection refused
Jul  2 07:08:55.372622 kubelet[2561]: E0702 07:08:55.372526    2561 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.44:6443: connect: connection refused
Jul  2 07:08:56.016039 containerd[1481]: time="2024-07-02T07:08:56.015828937Z" level=info msg="StartContainer for \"10f946679b246cee83e075c791b0b27ab9f34e73e27f9d3fa4cf3916246b0be0\" returns successfully"
Jul  2 07:08:56.043447 kubelet[2561]: W0702 07:08:56.043363    2561 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.44:6443: connect: connection refused
Jul  2 07:08:56.043447 kubelet[2561]: E0702 07:08:56.043446    2561 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.44:6443: connect: connection refused
Jul  2 07:08:57.092976 kubelet[2561]: W0702 07:08:57.092890    2561 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815.2.5-a-b9d6671d68&limit=500&resourceVersion=0": dial tcp 10.200.8.44:6443: connect: connection refused
Jul  2 07:08:57.092976 kubelet[2561]: E0702 07:08:57.092978    2561 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815.2.5-a-b9d6671d68&limit=500&resourceVersion=0": dial tcp 10.200.8.44:6443: connect: connection refused
Jul  2 07:08:57.319694 containerd[1481]: time="2024-07-02T07:08:57.319583049Z" level=info msg="CreateContainer within sandbox \"b399876a18704254306fc724f3d6093aaf32ba0ef28159be06da7afe4340e565\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"39630c01b5b92b27d957943452ca6bcfafb4c9755777f77d047222dd677cfa27\""
Jul  2 07:08:57.320926 containerd[1481]: time="2024-07-02T07:08:57.320851056Z" level=info msg="StartContainer for \"39630c01b5b92b27d957943452ca6bcfafb4c9755777f77d047222dd677cfa27\""
Jul  2 07:08:57.359112 systemd[1]: Started cri-containerd-39630c01b5b92b27d957943452ca6bcfafb4c9755777f77d047222dd677cfa27.scope - libcontainer container 39630c01b5b92b27d957943452ca6bcfafb4c9755777f77d047222dd677cfa27.
Jul  2 07:08:57.374000 audit: BPF prog-id=102 op=LOAD
Jul  2 07:08:57.375447 containerd[1481]: time="2024-07-02T07:08:57.375396273Z" level=info msg="CreateContainer within sandbox \"f5b73d4fa99452f3b9be8a257ccf7afa6b5cd51446dd6c53a03d2aae7e8d2be7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3f427d8bab6de81e63ed9b3f388b44ee15de21a82c9d2a05582fb489ae833697\""
Jul  2 07:08:57.375000 audit: BPF prog-id=103 op=LOAD
Jul  2 07:08:57.375000 audit[2771]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2624 pid=2771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:08:57.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339363330633031623562393262323764393537393433343532636136
Jul  2 07:08:57.375000 audit: BPF prog-id=104 op=LOAD
Jul  2 07:08:57.375000 audit[2771]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2624 pid=2771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:08:57.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339363330633031623562393262323764393537393433343532636136
Jul  2 07:08:57.375000 audit: BPF prog-id=104 op=UNLOAD
Jul  2 07:08:57.375000 audit: BPF prog-id=103 op=UNLOAD
Jul  2 07:08:57.375000 audit: BPF prog-id=105 op=LOAD
Jul  2 07:08:57.375000 audit[2771]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2624 pid=2771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:08:57.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339363330633031623562393262323764393537393433343532636136
Jul  2 07:08:57.376953 containerd[1481]: time="2024-07-02T07:08:57.376800181Z" level=info msg="StartContainer for \"3f427d8bab6de81e63ed9b3f388b44ee15de21a82c9d2a05582fb489ae833697\""
Jul  2 07:08:57.409045 systemd[1]: Started cri-containerd-3f427d8bab6de81e63ed9b3f388b44ee15de21a82c9d2a05582fb489ae833697.scope - libcontainer container 3f427d8bab6de81e63ed9b3f388b44ee15de21a82c9d2a05582fb489ae833697.
Jul  2 07:08:57.424000 audit: BPF prog-id=106 op=LOAD
Jul  2 07:08:57.425000 audit: BPF prog-id=107 op=LOAD
Jul  2 07:08:57.425000 audit[2796]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=2622 pid=2796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:08:57.425000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366343237643862616236646538316536336564396233663338386234
Jul  2 07:08:57.425000 audit: BPF prog-id=108 op=LOAD
Jul  2 07:08:57.425000 audit[2796]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=2622 pid=2796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:08:57.425000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366343237643862616236646538316536336564396233663338386234
Jul  2 07:08:57.425000 audit: BPF prog-id=108 op=UNLOAD
Jul  2 07:08:57.425000 audit: BPF prog-id=107 op=UNLOAD
Jul  2 07:08:57.425000 audit: BPF prog-id=109 op=LOAD
Jul  2 07:08:57.425000 audit[2796]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=2622 pid=2796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:08:57.425000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366343237643862616236646538316536336564396233663338386234
Jul  2 07:08:57.459126 containerd[1481]: time="2024-07-02T07:08:57.459065658Z" level=info msg="StartContainer for \"39630c01b5b92b27d957943452ca6bcfafb4c9755777f77d047222dd677cfa27\" returns successfully"
Jul  2 07:08:57.484376 containerd[1481]: time="2024-07-02T07:08:57.484313705Z" level=info msg="StartContainer for \"3f427d8bab6de81e63ed9b3f388b44ee15de21a82c9d2a05582fb489ae833697\" returns successfully"
Jul  2 07:08:59.043000 audit[2807]: AVC avc:  denied  { watch } for  pid=2807 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=5730580 scontext=system_u:system_r:container_t:s0:c853,c1005 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:08:59.043000 audit[2807]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=8 a1=c000a6c030 a2=fc6 a3=0 items=0 ppid=2622 pid=2807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c853,c1005 key=(null)
Jul  2 07:08:59.043000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269
Jul  2 07:08:59.044000 audit[2807]: AVC avc:  denied  { watch } for  pid=2807 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=5730574 scontext=system_u:system_r:container_t:s0:c853,c1005 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:08:59.044000 audit[2807]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=8 a1=c000b63380 a2=fc6 a3=0 items=0 ppid=2622 pid=2807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c853,c1005 key=(null)
Jul  2 07:08:59.044000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269
Jul  2 07:08:59.234000 audit[2781]: AVC avc:  denied  { watch } for  pid=2781 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=5730580 scontext=system_u:system_r:container_t:s0:c108,c388 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:08:59.234000 audit[2781]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=42 a1=c00407d0e0 a2=fc6 a3=0 items=0 ppid=2624 pid=2781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c108,c388 key=(null)
Jul  2 07:08:59.234000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3434002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265
Jul  2 07:08:59.234000 audit[2781]: AVC avc:  denied  { watch } for  pid=2781 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=5730574 scontext=system_u:system_r:container_t:s0:c108,c388 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:08:59.234000 audit[2781]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=42 a1=c004e19220 a2=fc6 a3=0 items=0 ppid=2624 pid=2781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c108,c388 key=(null)
Jul  2 07:08:59.234000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3434002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265
Jul  2 07:08:59.235000 audit[2781]: AVC avc:  denied  { watch } for  pid=2781 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=5730576 scontext=system_u:system_r:container_t:s0:c108,c388 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:08:59.235000 audit[2781]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=42 a1=c00407d200 a2=fc6 a3=0 items=0 ppid=2624 pid=2781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c108,c388 key=(null)
Jul  2 07:08:59.235000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3434002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265
Jul  2 07:08:59.238000 audit[2781]: AVC avc:  denied  { watch } for  pid=2781 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=5730574 scontext=system_u:system_r:container_t:s0:c108,c388 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:08:59.238000 audit[2781]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=4c a1=c004e19500 a2=fc6 a3=0 items=0 ppid=2624 pid=2781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c108,c388 key=(null)
Jul  2 07:08:59.238000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3434002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265
Jul  2 07:08:59.238000 audit[2781]: AVC avc:  denied  { watch } for  pid=2781 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=5730580 scontext=system_u:system_r:container_t:s0:c108,c388 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:08:59.238000 audit[2781]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=4c a1=c00407dcb0 a2=fc6 a3=0 items=0 ppid=2624 pid=2781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c108,c388 key=(null)
Jul  2 07:08:59.238000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3434002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265
Jul  2 07:08:59.240000 audit[2781]: AVC avc:  denied  { watch } for  pid=2781 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=5730582 scontext=system_u:system_r:container_t:s0:c108,c388 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:08:59.240000 audit[2781]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=53 a1=c00445f020 a2=fc6 a3=0 items=0 ppid=2624 pid=2781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c108,c388 key=(null)
Jul  2 07:08:59.240000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3434002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265
Jul  2 07:08:59.443160 kubelet[2561]: E0702 07:08:59.443116    2561 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3815.2.5-a-b9d6671d68\" not found"
Jul  2 07:08:59.722922 kubelet[2561]: E0702 07:08:59.722702    2561 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-3815.2.5-a-b9d6671d68" not found
Jul  2 07:09:00.015206 kubelet[2561]: E0702 07:09:00.015088    2561 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3815.2.5-a-b9d6671d68\" not found" node="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:09:00.084614 kubelet[2561]: E0702 07:09:00.084576    2561 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-3815.2.5-a-b9d6671d68" not found
Jul  2 07:09:00.195622 kubelet[2561]: I0702 07:09:00.195580    2561 kubelet_node_status.go:73] "Attempting to register node" node="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:09:00.203760 kubelet[2561]: I0702 07:09:00.203711    2561 kubelet_node_status.go:76] "Successfully registered node" node="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:09:00.215101 kubelet[2561]: E0702 07:09:00.215059    2561 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-b9d6671d68\" not found"
Jul  2 07:09:00.315570 kubelet[2561]: E0702 07:09:00.315400    2561 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-b9d6671d68\" not found"
Jul  2 07:09:00.416613 kubelet[2561]: E0702 07:09:00.416553    2561 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-b9d6671d68\" not found"
Jul  2 07:09:00.517408 kubelet[2561]: E0702 07:09:00.517363    2561 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-b9d6671d68\" not found"
Jul  2 07:09:00.618154 kubelet[2561]: E0702 07:09:00.618025    2561 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-b9d6671d68\" not found"
Jul  2 07:09:00.718738 kubelet[2561]: E0702 07:09:00.718686    2561 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-b9d6671d68\" not found"
Jul  2 07:09:00.819586 kubelet[2561]: E0702 07:09:00.819532    2561 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-b9d6671d68\" not found"
Jul  2 07:09:00.919763 kubelet[2561]: E0702 07:09:00.919701    2561 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-b9d6671d68\" not found"
Jul  2 07:09:01.020375 kubelet[2561]: E0702 07:09:01.020313    2561 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-b9d6671d68\" not found"
Jul  2 07:09:01.120935 kubelet[2561]: E0702 07:09:01.120882    2561 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-b9d6671d68\" not found"
Jul  2 07:09:01.221771 kubelet[2561]: E0702 07:09:01.221625    2561 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-b9d6671d68\" not found"
Jul  2 07:09:01.322489 kubelet[2561]: E0702 07:09:01.322433    2561 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-b9d6671d68\" not found"
Jul  2 07:09:01.423654 kubelet[2561]: E0702 07:09:01.423595    2561 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-b9d6671d68\" not found"
Jul  2 07:09:01.450301 systemd[1]: Reloading.
Jul  2 07:09:01.524614 kubelet[2561]: E0702 07:09:01.524472    2561 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-b9d6671d68\" not found"
Jul  2 07:09:01.625070 kubelet[2561]: E0702 07:09:01.625027    2561 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-b9d6671d68\" not found"
Jul  2 07:09:01.653529 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul  2 07:09:01.725736 kubelet[2561]: E0702 07:09:01.725688    2561 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-b9d6671d68\" not found"
Jul  2 07:09:01.748555 kernel: kauditd_printk_skb: 86 callbacks suppressed
Jul  2 07:09:01.748727 kernel: audit: type=1334 audit(1719904141.737:368): prog-id=110 op=LOAD
Jul  2 07:09:01.737000 audit: BPF prog-id=110 op=LOAD
Jul  2 07:09:01.753328 kernel: audit: type=1334 audit(1719904141.738:369): prog-id=90 op=UNLOAD
Jul  2 07:09:01.738000 audit: BPF prog-id=90 op=UNLOAD
Jul  2 07:09:01.738000 audit: BPF prog-id=111 op=LOAD
Jul  2 07:09:01.757940 kernel: audit: type=1334 audit(1719904141.738:370): prog-id=111 op=LOAD
Jul  2 07:09:01.738000 audit: BPF prog-id=72 op=UNLOAD
Jul  2 07:09:01.769407 kernel: audit: type=1334 audit(1719904141.738:371): prog-id=72 op=UNLOAD
Jul  2 07:09:01.769521 kernel: audit: type=1334 audit(1719904141.741:372): prog-id=112 op=LOAD
Jul  2 07:09:01.741000 audit: BPF prog-id=112 op=LOAD
Jul  2 07:09:01.741000 audit: BPF prog-id=73 op=UNLOAD
Jul  2 07:09:01.741000 audit: BPF prog-id=113 op=LOAD
Jul  2 07:09:01.775628 kernel: audit: type=1334 audit(1719904141.741:373): prog-id=73 op=UNLOAD
Jul  2 07:09:01.775719 kernel: audit: type=1334 audit(1719904141.741:374): prog-id=113 op=LOAD
Jul  2 07:09:01.741000 audit: BPF prog-id=114 op=LOAD
Jul  2 07:09:01.778891 kernel: audit: type=1334 audit(1719904141.741:375): prog-id=114 op=LOAD
Jul  2 07:09:01.741000 audit: BPF prog-id=74 op=UNLOAD
Jul  2 07:09:01.780418 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul  2 07:09:01.781213 kubelet[2561]: E0702 07:09:01.781081    2561 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-3815.2.5-a-b9d6671d68.17de53b1005dd1bb  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3815.2.5-a-b9d6671d68,UID:ci-3815.2.5-a-b9d6671d68,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3815.2.5-a-b9d6671d68,},FirstTimestamp:2024-07-02 07:08:47.382573499 +0000 UTC m=+0.518033191,LastTimestamp:2024-07-02 07:08:47.382573499 +0000 UTC m=+0.518033191,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3815.2.5-a-b9d6671d68,}"
Jul  2 07:09:01.783121 kernel: audit: type=1334 audit(1719904141.741:376): prog-id=74 op=UNLOAD
Jul  2 07:09:01.786275 kernel: audit: type=1334 audit(1719904141.741:377): prog-id=75 op=UNLOAD
Jul  2 07:09:01.741000 audit: BPF prog-id=75 op=UNLOAD
Jul  2 07:09:01.742000 audit: BPF prog-id=115 op=LOAD
Jul  2 07:09:01.742000 audit: BPF prog-id=76 op=UNLOAD
Jul  2 07:09:01.743000 audit: BPF prog-id=116 op=LOAD
Jul  2 07:09:01.743000 audit: BPF prog-id=77 op=UNLOAD
Jul  2 07:09:01.743000 audit: BPF prog-id=117 op=LOAD
Jul  2 07:09:01.743000 audit: BPF prog-id=118 op=LOAD
Jul  2 07:09:01.743000 audit: BPF prog-id=78 op=UNLOAD
Jul  2 07:09:01.743000 audit: BPF prog-id=79 op=UNLOAD
Jul  2 07:09:01.744000 audit: BPF prog-id=119 op=LOAD
Jul  2 07:09:01.744000 audit: BPF prog-id=98 op=UNLOAD
Jul  2 07:09:01.746000 audit: BPF prog-id=120 op=LOAD
Jul  2 07:09:01.746000 audit: BPF prog-id=80 op=UNLOAD
Jul  2 07:09:01.746000 audit: BPF prog-id=121 op=LOAD
Jul  2 07:09:01.746000 audit: BPF prog-id=122 op=LOAD
Jul  2 07:09:01.746000 audit: BPF prog-id=81 op=UNLOAD
Jul  2 07:09:01.746000 audit: BPF prog-id=82 op=UNLOAD
Jul  2 07:09:01.747000 audit: BPF prog-id=123 op=LOAD
Jul  2 07:09:01.747000 audit: BPF prog-id=124 op=LOAD
Jul  2 07:09:01.747000 audit: BPF prog-id=83 op=UNLOAD
Jul  2 07:09:01.747000 audit: BPF prog-id=84 op=UNLOAD
Jul  2 07:09:01.747000 audit: BPF prog-id=125 op=LOAD
Jul  2 07:09:01.747000 audit: BPF prog-id=85 op=UNLOAD
Jul  2 07:09:01.748000 audit: BPF prog-id=126 op=LOAD
Jul  2 07:09:01.748000 audit: BPF prog-id=102 op=UNLOAD
Jul  2 07:09:01.749000 audit: BPF prog-id=127 op=LOAD
Jul  2 07:09:01.749000 audit: BPF prog-id=106 op=UNLOAD
Jul  2 07:09:01.749000 audit: BPF prog-id=128 op=LOAD
Jul  2 07:09:01.749000 audit: BPF prog-id=86 op=UNLOAD
Jul  2 07:09:01.751000 audit: BPF prog-id=129 op=LOAD
Jul  2 07:09:01.751000 audit: BPF prog-id=94 op=UNLOAD
Jul  2 07:09:01.799308 systemd[1]: kubelet.service: Deactivated successfully.
Jul  2 07:09:01.799558 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul  2 07:09:01.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:09:01.808582 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul  2 07:09:02.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:09:02.000975 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul  2 07:09:02.048278 kubelet[2922]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul  2 07:09:02.048278 kubelet[2922]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul  2 07:09:02.048278 kubelet[2922]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul  2 07:09:02.048957 kubelet[2922]: I0702 07:09:02.048905    2922 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul  2 07:09:02.054303 kubelet[2922]: I0702 07:09:02.054267    2922 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jul  2 07:09:02.054493 kubelet[2922]: I0702 07:09:02.054481    2922 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul  2 07:09:02.054764 kubelet[2922]: I0702 07:09:02.054749    2922 server.go:927] "Client rotation is on, will bootstrap in background"
Jul  2 07:09:02.056150 kubelet[2922]: I0702 07:09:02.056125    2922 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul  2 07:09:02.057395 kubelet[2922]: I0702 07:09:02.057369    2922 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul  2 07:09:02.064589 kubelet[2922]: I0702 07:09:02.064551    2922 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Jul  2 07:09:02.064924 kubelet[2922]: I0702 07:09:02.064893    2922 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul  2 07:09:02.065132 kubelet[2922]: I0702 07:09:02.064921    2922 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3815.2.5-a-b9d6671d68","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jul  2 07:09:02.065274 kubelet[2922]: I0702 07:09:02.065157    2922 topology_manager.go:138] "Creating topology manager with none policy"
Jul  2 07:09:02.065274 kubelet[2922]: I0702 07:09:02.065171    2922 container_manager_linux.go:301] "Creating device plugin manager"
Jul  2 07:09:02.065274 kubelet[2922]: I0702 07:09:02.065232    2922 state_mem.go:36] "Initialized new in-memory state store"
Jul  2 07:09:02.065394 kubelet[2922]: I0702 07:09:02.065365    2922 kubelet.go:400] "Attempting to sync node with API server"
Jul  2 07:09:02.065394 kubelet[2922]: I0702 07:09:02.065380    2922 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul  2 07:09:02.066954 kubelet[2922]: I0702 07:09:02.066933    2922 kubelet.go:312] "Adding apiserver pod source"
Jul  2 07:09:02.067091 kubelet[2922]: I0702 07:09:02.067081    2922 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul  2 07:09:02.074402 kubelet[2922]: I0702 07:09:02.074373    2922 apiserver.go:52] "Watching apiserver"
Jul  2 07:09:02.076771 kubelet[2922]: I0702 07:09:02.076743    2922 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1"
Jul  2 07:09:02.077796 kubelet[2922]: I0702 07:09:02.076996    2922 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul  2 07:09:02.077796 kubelet[2922]: I0702 07:09:02.077543    2922 server.go:1264] "Started kubelet"
Jul  2 07:09:02.081140 kubelet[2922]: I0702 07:09:02.081067    2922 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul  2 07:09:02.085420 kubelet[2922]: I0702 07:09:02.085380    2922 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul  2 07:09:02.087661 kubelet[2922]: I0702 07:09:02.087640    2922 server.go:455] "Adding debug handlers to kubelet server"
Jul  2 07:09:02.089400 kubelet[2922]: I0702 07:09:02.089346    2922 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul  2 07:09:02.089741 kubelet[2922]: I0702 07:09:02.089725    2922 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul  2 07:09:02.091904 kubelet[2922]: I0702 07:09:02.091885    2922 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jul  2 07:09:02.094229 kubelet[2922]: I0702 07:09:02.092329    2922 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jul  2 07:09:02.094229 kubelet[2922]: I0702 07:09:02.092484    2922 reconciler.go:26] "Reconciler: start to sync state"
Jul  2 07:09:02.098138 kubelet[2922]: I0702 07:09:02.098099    2922 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul  2 07:09:02.100381 kubelet[2922]: I0702 07:09:02.100361    2922 factory.go:221] Registration of the containerd container factory successfully
Jul  2 07:09:02.100505 kubelet[2922]: I0702 07:09:02.100494    2922 factory.go:221] Registration of the systemd container factory successfully
Jul  2 07:09:02.101203 kubelet[2922]: I0702 07:09:02.101173    2922 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul  2 07:09:02.102518 kubelet[2922]: I0702 07:09:02.102494    2922 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul  2 07:09:02.102604 kubelet[2922]: I0702 07:09:02.102534    2922 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul  2 07:09:02.102604 kubelet[2922]: I0702 07:09:02.102555    2922 kubelet.go:2337] "Starting kubelet main sync loop"
Jul  2 07:09:02.102690 kubelet[2922]: E0702 07:09:02.102601    2922 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul  2 07:09:02.124806 kubelet[2922]: E0702 07:09:02.124766    2922 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul  2 07:09:02.155424 kubelet[2922]: I0702 07:09:02.155394    2922 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul  2 07:09:02.155424 kubelet[2922]: I0702 07:09:02.155422    2922 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul  2 07:09:02.155696 kubelet[2922]: I0702 07:09:02.155444    2922 state_mem.go:36] "Initialized new in-memory state store"
Jul  2 07:09:02.155696 kubelet[2922]: I0702 07:09:02.155658    2922 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul  2 07:09:02.155696 kubelet[2922]: I0702 07:09:02.155673    2922 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul  2 07:09:02.155696 kubelet[2922]: I0702 07:09:02.155696    2922 policy_none.go:49] "None policy: Start"
Jul  2 07:09:02.156360 kubelet[2922]: I0702 07:09:02.156339    2922 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul  2 07:09:02.156466 kubelet[2922]: I0702 07:09:02.156364    2922 state_mem.go:35] "Initializing new in-memory state store"
Jul  2 07:09:02.156555 kubelet[2922]: I0702 07:09:02.156539    2922 state_mem.go:75] "Updated machine memory state"
Jul  2 07:09:02.160696 kubelet[2922]: I0702 07:09:02.160665    2922 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul  2 07:09:02.160913 kubelet[2922]: I0702 07:09:02.160853    2922 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul  2 07:09:02.161013 kubelet[2922]: I0702 07:09:02.161000    2922 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul  2 07:09:02.194774 kubelet[2922]: I0702 07:09:02.194746    2922 kubelet_node_status.go:73] "Attempting to register node" node="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:09:02.203094 kubelet[2922]: I0702 07:09:02.202981    2922 topology_manager.go:215] "Topology Admit Handler" podUID="2e08ab65cc4317b0bb8c99bd4d52a10b" podNamespace="kube-system" podName="kube-apiserver-ci-3815.2.5-a-b9d6671d68"
Jul  2 07:09:02.205234 kubelet[2922]: I0702 07:09:02.205201    2922 topology_manager.go:215] "Topology Admit Handler" podUID="c4c423135a13d65348033bff1ba62872" podNamespace="kube-system" podName="kube-controller-manager-ci-3815.2.5-a-b9d6671d68"
Jul  2 07:09:02.205541 kubelet[2922]: I0702 07:09:02.205515    2922 topology_manager.go:215] "Topology Admit Handler" podUID="fda5fd723536f947d0002b9b05d98d8d" podNamespace="kube-system" podName="kube-scheduler-ci-3815.2.5-a-b9d6671d68"
Jul  2 07:09:02.460142 kubelet[2922]: I0702 07:09:02.205429    2922 kubelet_node_status.go:112] "Node was previously registered" node="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:09:02.460142 kubelet[2922]: I0702 07:09:02.459050    2922 kubelet_node_status.go:76] "Successfully registered node" node="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:09:02.461477 kubelet[2922]: I0702 07:09:02.461447    2922 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jul  2 07:09:02.473099 kubelet[2922]: W0702 07:09:02.473061    2922 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul  2 07:09:02.473663 kubelet[2922]: W0702 07:09:02.473628    2922 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul  2 07:09:02.491022 kubelet[2922]: W0702 07:09:02.490969    2922 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul  2 07:09:02.562362 kubelet[2922]: I0702 07:09:02.562314    2922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c4c423135a13d65348033bff1ba62872-ca-certs\") pod \"kube-controller-manager-ci-3815.2.5-a-b9d6671d68\" (UID: \"c4c423135a13d65348033bff1ba62872\") " pod="kube-system/kube-controller-manager-ci-3815.2.5-a-b9d6671d68"
Jul  2 07:09:02.562613 kubelet[2922]: I0702 07:09:02.562589    2922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4c423135a13d65348033bff1ba62872-kubeconfig\") pod \"kube-controller-manager-ci-3815.2.5-a-b9d6671d68\" (UID: \"c4c423135a13d65348033bff1ba62872\") " pod="kube-system/kube-controller-manager-ci-3815.2.5-a-b9d6671d68"
Jul  2 07:09:02.562705 kubelet[2922]: I0702 07:09:02.562625    2922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c4c423135a13d65348033bff1ba62872-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3815.2.5-a-b9d6671d68\" (UID: \"c4c423135a13d65348033bff1ba62872\") " pod="kube-system/kube-controller-manager-ci-3815.2.5-a-b9d6671d68"
Jul  2 07:09:02.562705 kubelet[2922]: I0702 07:09:02.562657    2922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2e08ab65cc4317b0bb8c99bd4d52a10b-ca-certs\") pod \"kube-apiserver-ci-3815.2.5-a-b9d6671d68\" (UID: \"2e08ab65cc4317b0bb8c99bd4d52a10b\") " pod="kube-system/kube-apiserver-ci-3815.2.5-a-b9d6671d68"
Jul  2 07:09:02.562705 kubelet[2922]: I0702 07:09:02.562682    2922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2e08ab65cc4317b0bb8c99bd4d52a10b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3815.2.5-a-b9d6671d68\" (UID: \"2e08ab65cc4317b0bb8c99bd4d52a10b\") " pod="kube-system/kube-apiserver-ci-3815.2.5-a-b9d6671d68"
Jul  2 07:09:02.562841 kubelet[2922]: I0702 07:09:02.562707    2922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c4c423135a13d65348033bff1ba62872-k8s-certs\") pod \"kube-controller-manager-ci-3815.2.5-a-b9d6671d68\" (UID: \"c4c423135a13d65348033bff1ba62872\") " pod="kube-system/kube-controller-manager-ci-3815.2.5-a-b9d6671d68"
Jul  2 07:09:02.562841 kubelet[2922]: I0702 07:09:02.562731    2922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fda5fd723536f947d0002b9b05d98d8d-kubeconfig\") pod \"kube-scheduler-ci-3815.2.5-a-b9d6671d68\" (UID: \"fda5fd723536f947d0002b9b05d98d8d\") " pod="kube-system/kube-scheduler-ci-3815.2.5-a-b9d6671d68"
Jul  2 07:09:02.562841 kubelet[2922]: I0702 07:09:02.562755    2922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2e08ab65cc4317b0bb8c99bd4d52a10b-k8s-certs\") pod \"kube-apiserver-ci-3815.2.5-a-b9d6671d68\" (UID: \"2e08ab65cc4317b0bb8c99bd4d52a10b\") " pod="kube-system/kube-apiserver-ci-3815.2.5-a-b9d6671d68"
Jul  2 07:09:02.562841 kubelet[2922]: I0702 07:09:02.562782    2922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c4c423135a13d65348033bff1ba62872-flexvolume-dir\") pod \"kube-controller-manager-ci-3815.2.5-a-b9d6671d68\" (UID: \"c4c423135a13d65348033bff1ba62872\") " pod="kube-system/kube-controller-manager-ci-3815.2.5-a-b9d6671d68"
Jul  2 07:09:03.120888 kubelet[2922]: I0702 07:09:03.120795    2922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3815.2.5-a-b9d6671d68" podStartSLOduration=1.12077345 podStartE2EDuration="1.12077345s" podCreationTimestamp="2024-07-02 07:09:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:09:03.107964279 +0000 UTC m=+1.101033558" watchObservedRunningTime="2024-07-02 07:09:03.12077345 +0000 UTC m=+1.113842729"
Jul  2 07:09:03.121466 kubelet[2922]: I0702 07:09:03.120940    2922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3815.2.5-a-b9d6671d68" podStartSLOduration=1.120930751 podStartE2EDuration="1.120930751s" podCreationTimestamp="2024-07-02 07:09:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:09:03.119144841 +0000 UTC m=+1.112214120" watchObservedRunningTime="2024-07-02 07:09:03.120930751 +0000 UTC m=+1.114000030"
Jul  2 07:09:03.155054 kubelet[2922]: I0702 07:09:03.154988    2922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3815.2.5-a-b9d6671d68" podStartSLOduration=1.15496684 podStartE2EDuration="1.15496684s" podCreationTimestamp="2024-07-02 07:09:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:09:03.129185297 +0000 UTC m=+1.122254676" watchObservedRunningTime="2024-07-02 07:09:03.15496684 +0000 UTC m=+1.148036219"
Jul  2 07:09:06.036000 audit[2807]: AVC avc:  denied  { watch } for  pid=2807 comm="kube-controller" path="/opt/libexec/kubernetes/kubelet-plugins/volume/exec" dev="sda9" ino=5209638 scontext=system_u:system_r:container_t:s0:c853,c1005 tcontext=system_u:object_r:usr_t:s0 tclass=dir permissive=0
Jul  2 07:09:06.036000 audit[2807]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c00137b0c0 a2=fc6 a3=0 items=0 ppid=2622 pid=2807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c853,c1005 key=(null)
Jul  2 07:09:06.036000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269
Jul  2 07:09:07.548000 audit[2015]: USER_END pid=2015 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jul  2 07:09:07.551401 kernel: kauditd_printk_skb: 35 callbacks suppressed
Jul  2 07:09:07.551483 kernel: audit: type=1106 audit(1719904147.548:411): pid=2015 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jul  2 07:09:07.548954 sudo[2015]: pam_unix(sudo:session): session closed for user root
Jul  2 07:09:07.554000 audit[2015]: CRED_DISP pid=2015 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jul  2 07:09:07.579145 kernel: audit: type=1104 audit(1719904147.554:412): pid=2015 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jul  2 07:09:07.659128 sshd[2012]: pam_unix(sshd:session): session closed for user core
Jul  2 07:09:07.660000 audit[2012]: USER_END pid=2012 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:09:07.664743 systemd[1]: sshd@6-10.200.8.44:22-10.200.16.10:33358.service: Deactivated successfully.
Jul  2 07:09:07.665627 systemd[1]: session-9.scope: Deactivated successfully.
Jul  2 07:09:07.665789 systemd[1]: session-9.scope: Consumed 4.589s CPU time.
Jul  2 07:09:07.667573 systemd-logind[1465]: Session 9 logged out. Waiting for processes to exit.
Jul  2 07:09:07.668889 systemd-logind[1465]: Removed session 9.
Jul  2 07:09:07.660000 audit[2012]: CRED_DISP pid=2012 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:09:07.696363 kernel: audit: type=1106 audit(1719904147.660:413): pid=2012 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:09:07.696524 kernel: audit: type=1104 audit(1719904147.660:414): pid=2012 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:09:07.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.44:22-10.200.16.10:33358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:09:07.707887 kernel: audit: type=1131 audit(1719904147.660:415): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.44:22-10.200.16.10:33358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:09:16.580000 audit[2807]: AVC avc:  denied  { watch } for  pid=2807 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=5730574 scontext=system_u:system_r:container_t:s0:c853,c1005 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:09:16.580000 audit[2807]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c0010e3440 a2=fc6 a3=0 items=0 ppid=2622 pid=2807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c853,c1005 key=(null)
Jul  2 07:09:16.623028 kernel: audit: type=1400 audit(1719904156.580:416): avc:  denied  { watch } for  pid=2807 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=5730574 scontext=system_u:system_r:container_t:s0:c853,c1005 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:09:16.623189 kernel: audit: type=1300 audit(1719904156.580:416): arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c0010e3440 a2=fc6 a3=0 items=0 ppid=2622 pid=2807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c853,c1005 key=(null)
Jul  2 07:09:16.580000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269
Jul  2 07:09:16.636650 kernel: audit: type=1327 audit(1719904156.580:416): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269
Jul  2 07:09:16.586000 audit[2807]: AVC avc:  denied  { watch } for  pid=2807 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=5730574 scontext=system_u:system_r:container_t:s0:c853,c1005 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:09:16.648229 kernel: audit: type=1400 audit(1719904156.586:417): avc:  denied  { watch } for  pid=2807 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=5730574 scontext=system_u:system_r:container_t:s0:c853,c1005 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:09:16.586000 audit[2807]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c0010e3780 a2=fc6 a3=0 items=0 ppid=2622 pid=2807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c853,c1005 key=(null)
Jul  2 07:09:16.663784 kernel: audit: type=1300 audit(1719904156.586:417): arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c0010e3780 a2=fc6 a3=0 items=0 ppid=2622 pid=2807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c853,c1005 key=(null)
Jul  2 07:09:16.586000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269
Jul  2 07:09:16.586000 audit[2807]: AVC avc:  denied  { watch } for  pid=2807 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=5730574 scontext=system_u:system_r:container_t:s0:c853,c1005 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:09:16.688211 kernel: audit: type=1327 audit(1719904156.586:417): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269
Jul  2 07:09:16.688353 kernel: audit: type=1400 audit(1719904156.586:418): avc:  denied  { watch } for  pid=2807 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=5730574 scontext=system_u:system_r:container_t:s0:c853,c1005 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:09:16.586000 audit[2807]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c0010e39e0 a2=fc6 a3=0 items=0 ppid=2622 pid=2807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c853,c1005 key=(null)
Jul  2 07:09:16.703693 kernel: audit: type=1300 audit(1719904156.586:418): arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c0010e39e0 a2=fc6 a3=0 items=0 ppid=2622 pid=2807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c853,c1005 key=(null)
Jul  2 07:09:16.586000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269
Jul  2 07:09:16.716493 kernel: audit: type=1327 audit(1719904156.586:418): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269
Jul  2 07:09:16.586000 audit[2807]: AVC avc:  denied  { watch } for  pid=2807 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=5730574 scontext=system_u:system_r:container_t:s0:c853,c1005 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:09:16.730750 kernel: audit: type=1400 audit(1719904156.586:419): avc:  denied  { watch } for  pid=2807 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=5730574 scontext=system_u:system_r:container_t:s0:c853,c1005 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:09:16.586000 audit[2807]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c0010e3a00 a2=fc6 a3=0 items=0 ppid=2622 pid=2807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c853,c1005 key=(null)
Jul  2 07:09:16.586000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269
Jul  2 07:09:17.137076 kubelet[2922]: I0702 07:09:17.137036    2922 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul  2 07:09:17.137640 containerd[1481]: time="2024-07-02T07:09:17.137596336Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul  2 07:09:17.138014 kubelet[2922]: I0702 07:09:17.137879    2922 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul  2 07:09:17.807219 kubelet[2922]: I0702 07:09:17.807170    2922 topology_manager.go:215] "Topology Admit Handler" podUID="1665be3e-9b65-40b9-a837-6db60d0b0d2f" podNamespace="kube-system" podName="kube-proxy-75q67"
Jul  2 07:09:17.814351 systemd[1]: Created slice kubepods-besteffort-pod1665be3e_9b65_40b9_a837_6db60d0b0d2f.slice - libcontainer container kubepods-besteffort-pod1665be3e_9b65_40b9_a837_6db60d0b0d2f.slice.
Jul  2 07:09:17.975052 kubelet[2922]: I0702 07:09:17.975009    2922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1665be3e-9b65-40b9-a837-6db60d0b0d2f-lib-modules\") pod \"kube-proxy-75q67\" (UID: \"1665be3e-9b65-40b9-a837-6db60d0b0d2f\") " pod="kube-system/kube-proxy-75q67"
Jul  2 07:09:17.975308 kubelet[2922]: I0702 07:09:17.975282    2922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hb9n\" (UniqueName: \"kubernetes.io/projected/1665be3e-9b65-40b9-a837-6db60d0b0d2f-kube-api-access-9hb9n\") pod \"kube-proxy-75q67\" (UID: \"1665be3e-9b65-40b9-a837-6db60d0b0d2f\") " pod="kube-system/kube-proxy-75q67"
Jul  2 07:09:17.975413 kubelet[2922]: I0702 07:09:17.975324    2922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1665be3e-9b65-40b9-a837-6db60d0b0d2f-kube-proxy\") pod \"kube-proxy-75q67\" (UID: \"1665be3e-9b65-40b9-a837-6db60d0b0d2f\") " pod="kube-system/kube-proxy-75q67"
Jul  2 07:09:17.975413 kubelet[2922]: I0702 07:09:17.975408    2922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1665be3e-9b65-40b9-a837-6db60d0b0d2f-xtables-lock\") pod \"kube-proxy-75q67\" (UID: \"1665be3e-9b65-40b9-a837-6db60d0b0d2f\") " pod="kube-system/kube-proxy-75q67"
Jul  2 07:09:18.123560 containerd[1481]: time="2024-07-02T07:09:18.123412697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-75q67,Uid:1665be3e-9b65-40b9-a837-6db60d0b0d2f,Namespace:kube-system,Attempt:0,}"
Jul  2 07:09:18.153343 kubelet[2922]: I0702 07:09:18.153160    2922 topology_manager.go:215] "Topology Admit Handler" podUID="dfb06764-044b-42b8-96de-03ed7db96937" podNamespace="tigera-operator" podName="tigera-operator-76ff79f7fd-kl8t7"
Jul  2 07:09:18.164221 systemd[1]: Created slice kubepods-besteffort-poddfb06764_044b_42b8_96de_03ed7db96937.slice - libcontainer container kubepods-besteffort-poddfb06764_044b_42b8_96de_03ed7db96937.slice.
Jul  2 07:09:18.177528 kubelet[2922]: I0702 07:09:18.177484    2922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dfb06764-044b-42b8-96de-03ed7db96937-var-lib-calico\") pod \"tigera-operator-76ff79f7fd-kl8t7\" (UID: \"dfb06764-044b-42b8-96de-03ed7db96937\") " pod="tigera-operator/tigera-operator-76ff79f7fd-kl8t7"
Jul  2 07:09:18.177929 kubelet[2922]: I0702 07:09:18.177835    2922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p96s\" (UniqueName: \"kubernetes.io/projected/dfb06764-044b-42b8-96de-03ed7db96937-kube-api-access-9p96s\") pod \"tigera-operator-76ff79f7fd-kl8t7\" (UID: \"dfb06764-044b-42b8-96de-03ed7db96937\") " pod="tigera-operator/tigera-operator-76ff79f7fd-kl8t7"
Jul  2 07:09:18.192478 containerd[1481]: time="2024-07-02T07:09:18.192365249Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul  2 07:09:18.192478 containerd[1481]: time="2024-07-02T07:09:18.192437750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul  2 07:09:18.193081 containerd[1481]: time="2024-07-02T07:09:18.192939952Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul  2 07:09:18.193172 containerd[1481]: time="2024-07-02T07:09:18.193124353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul  2 07:09:18.225063 systemd[1]: Started cri-containerd-15c7c4bcc4ec481bdd6df12144c42c018befc0cf915f2b389003229027c4af2d.scope - libcontainer container 15c7c4bcc4ec481bdd6df12144c42c018befc0cf915f2b389003229027c4af2d.
Jul  2 07:09:18.235000 audit: BPF prog-id=130 op=LOAD
Jul  2 07:09:18.236000 audit: BPF prog-id=131 op=LOAD
Jul  2 07:09:18.236000 audit[3018]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00019d988 a2=78 a3=0 items=0 ppid=3008 pid=3018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.236000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135633763346263633465633438316264643664663132313434633432
Jul  2 07:09:18.236000 audit: BPF prog-id=132 op=LOAD
Jul  2 07:09:18.236000 audit[3018]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00019d720 a2=78 a3=0 items=0 ppid=3008 pid=3018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.236000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135633763346263633465633438316264643664663132313434633432
Jul  2 07:09:18.236000 audit: BPF prog-id=132 op=UNLOAD
Jul  2 07:09:18.236000 audit: BPF prog-id=131 op=UNLOAD
Jul  2 07:09:18.236000 audit: BPF prog-id=133 op=LOAD
Jul  2 07:09:18.236000 audit[3018]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00019dbe0 a2=78 a3=0 items=0 ppid=3008 pid=3018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.236000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135633763346263633465633438316264643664663132313434633432
Jul  2 07:09:18.252784 containerd[1481]: time="2024-07-02T07:09:18.252737458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-75q67,Uid:1665be3e-9b65-40b9-a837-6db60d0b0d2f,Namespace:kube-system,Attempt:0,} returns sandbox id \"15c7c4bcc4ec481bdd6df12144c42c018befc0cf915f2b389003229027c4af2d\""
Jul  2 07:09:18.258072 containerd[1481]: time="2024-07-02T07:09:18.258029585Z" level=info msg="CreateContainer within sandbox \"15c7c4bcc4ec481bdd6df12144c42c018befc0cf915f2b389003229027c4af2d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul  2 07:09:18.309413 containerd[1481]: time="2024-07-02T07:09:18.309333147Z" level=info msg="CreateContainer within sandbox \"15c7c4bcc4ec481bdd6df12144c42c018befc0cf915f2b389003229027c4af2d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f0e8dd9bb24c666b33203d1305d6d97c6c3ef3dbd00bb3b27e4498051cd2d312\""
Jul  2 07:09:18.310817 containerd[1481]: time="2024-07-02T07:09:18.310139952Z" level=info msg="StartContainer for \"f0e8dd9bb24c666b33203d1305d6d97c6c3ef3dbd00bb3b27e4498051cd2d312\""
Jul  2 07:09:18.340064 systemd[1]: Started cri-containerd-f0e8dd9bb24c666b33203d1305d6d97c6c3ef3dbd00bb3b27e4498051cd2d312.scope - libcontainer container f0e8dd9bb24c666b33203d1305d6d97c6c3ef3dbd00bb3b27e4498051cd2d312.
Jul  2 07:09:18.351000 audit: BPF prog-id=134 op=LOAD
Jul  2 07:09:18.351000 audit[3049]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=3008 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.351000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630653864643962623234633636366233333230336431333035643664
Jul  2 07:09:18.351000 audit: BPF prog-id=135 op=LOAD
Jul  2 07:09:18.351000 audit[3049]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=3008 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.351000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630653864643962623234633636366233333230336431333035643664
Jul  2 07:09:18.351000 audit: BPF prog-id=135 op=UNLOAD
Jul  2 07:09:18.351000 audit: BPF prog-id=134 op=UNLOAD
Jul  2 07:09:18.351000 audit: BPF prog-id=136 op=LOAD
Jul  2 07:09:18.351000 audit[3049]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=3008 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.351000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630653864643962623234633636366233333230336431333035643664
Jul  2 07:09:18.378335 containerd[1481]: time="2024-07-02T07:09:18.378265100Z" level=info msg="StartContainer for \"f0e8dd9bb24c666b33203d1305d6d97c6c3ef3dbd00bb3b27e4498051cd2d312\" returns successfully"
Jul  2 07:09:18.446000 audit[3097]: NETFILTER_CFG table=mangle:41 family=2 entries=1 op=nft_register_chain pid=3097 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:09:18.446000 audit[3097]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd45e28220 a2=0 a3=7ffd45e2820c items=0 ppid=3058 pid=3097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.446000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65
Jul  2 07:09:18.449000 audit[3098]: NETFILTER_CFG table=nat:42 family=2 entries=1 op=nft_register_chain pid=3098 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:09:18.449000 audit[3098]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff8e43ec90 a2=0 a3=7fff8e43ec7c items=0 ppid=3058 pid=3098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.449000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174
Jul  2 07:09:18.451000 audit[3099]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=3099 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:09:18.451000 audit[3099]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffb91d0ce0 a2=0 a3=7fffb91d0ccc items=0 ppid=3058 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.451000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572
Jul  2 07:09:18.454000 audit[3100]: NETFILTER_CFG table=mangle:44 family=10 entries=1 op=nft_register_chain pid=3100 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul  2 07:09:18.454000 audit[3100]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd6cf77c30 a2=0 a3=7ffd6cf77c1c items=0 ppid=3058 pid=3100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.454000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65
Jul  2 07:09:18.456000 audit[3101]: NETFILTER_CFG table=nat:45 family=10 entries=1 op=nft_register_chain pid=3101 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul  2 07:09:18.456000 audit[3101]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc4a115630 a2=0 a3=7ffc4a11561c items=0 ppid=3058 pid=3101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.456000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174
Jul  2 07:09:18.458000 audit[3102]: NETFILTER_CFG table=filter:46 family=10 entries=1 op=nft_register_chain pid=3102 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul  2 07:09:18.458000 audit[3102]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff190d9c30 a2=0 a3=7fff190d9c1c items=0 ppid=3058 pid=3102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.458000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572
Jul  2 07:09:18.478409 containerd[1481]: time="2024-07-02T07:09:18.478361912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-kl8t7,Uid:dfb06764-044b-42b8-96de-03ed7db96937,Namespace:tigera-operator,Attempt:0,}"
Jul  2 07:09:18.535504 containerd[1481]: time="2024-07-02T07:09:18.535399903Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul  2 07:09:18.536548 containerd[1481]: time="2024-07-02T07:09:18.536479509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul  2 07:09:18.536548 containerd[1481]: time="2024-07-02T07:09:18.536508809Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul  2 07:09:18.536774 containerd[1481]: time="2024-07-02T07:09:18.536732510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul  2 07:09:18.549000 audit[3130]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=3130 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:09:18.549000 audit[3130]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fffe8026330 a2=0 a3=7fffe802631c items=0 ppid=3058 pid=3130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.549000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572
Jul  2 07:09:18.557000 audit[3132]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=3132 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:09:18.557000 audit[3132]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc738290f0 a2=0 a3=7ffc738290dc items=0 ppid=3058 pid=3132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.557000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365
Jul  2 07:09:18.559142 systemd[1]: Started cri-containerd-1b0345ea9d40c2f091cf3f6429b440438e832c2e441d7ec62dbf99999bb8513c.scope - libcontainer container 1b0345ea9d40c2f091cf3f6429b440438e832c2e441d7ec62dbf99999bb8513c.
Jul  2 07:09:18.563000 audit[3135]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_rule pid=3135 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:09:18.563000 audit[3135]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffec9479900 a2=0 a3=7ffec94798ec items=0 ppid=3058 pid=3135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.563000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669
Jul  2 07:09:18.566000 audit[3136]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_chain pid=3136 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:09:18.566000 audit[3136]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd7e698dd0 a2=0 a3=7ffd7e698dbc items=0 ppid=3058 pid=3136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.566000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572
Jul  2 07:09:18.573000 audit[3145]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=3145 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:09:18.573000 audit[3145]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc7cb66e00 a2=0 a3=7ffc7cb66dec items=0 ppid=3058 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.573000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453
Jul  2 07:09:18.577000 audit[3146]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=3146 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:09:18.577000 audit[3146]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcdf9d20a0 a2=0 a3=7ffcdf9d208c items=0 ppid=3058 pid=3146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.577000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572
Jul  2 07:09:18.582000 audit[3148]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=3148 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:09:18.582000 audit[3148]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdd1d3fbd0 a2=0 a3=7ffdd1d3fbbc items=0 ppid=3058 pid=3148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.582000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D
Jul  2 07:09:18.583000 audit: BPF prog-id=137 op=LOAD
Jul  2 07:09:18.584000 audit: BPF prog-id=138 op=LOAD
Jul  2 07:09:18.584000 audit[3121]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133988 a2=78 a3=0 items=0 ppid=3111 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.584000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162303334356561396434306332663039316366336636343239623434
Jul  2 07:09:18.584000 audit: BPF prog-id=139 op=LOAD
Jul  2 07:09:18.584000 audit[3121]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000133720 a2=78 a3=0 items=0 ppid=3111 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.584000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162303334356561396434306332663039316366336636343239623434
Jul  2 07:09:18.584000 audit: BPF prog-id=139 op=UNLOAD
Jul  2 07:09:18.584000 audit: BPF prog-id=138 op=UNLOAD
Jul  2 07:09:18.584000 audit: BPF prog-id=140 op=LOAD
Jul  2 07:09:18.584000 audit[3121]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133be0 a2=78 a3=0 items=0 ppid=3111 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.584000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162303334356561396434306332663039316366336636343239623434
Jul  2 07:09:18.592000 audit[3151]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_rule pid=3151 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:09:18.592000 audit[3151]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe05eb3530 a2=0 a3=7ffe05eb351c items=0 ppid=3058 pid=3151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.592000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53
Jul  2 07:09:18.595000 audit[3152]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_chain pid=3152 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:09:18.595000 audit[3152]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd58bb17b0 a2=0 a3=7ffd58bb179c items=0 ppid=3058 pid=3152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.595000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572
Jul  2 07:09:18.615000 audit[3154]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=3154 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:09:18.615000 audit[3154]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd32f3c530 a2=0 a3=7ffd32f3c51c items=0 ppid=3058 pid=3154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.615000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244
Jul  2 07:09:18.618000 audit[3155]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_chain pid=3155 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:09:18.618000 audit[3155]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffedbcb0830 a2=0 a3=7ffedbcb081c items=0 ppid=3058 pid=3155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.618000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572
Jul  2 07:09:18.627000 audit[3163]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_rule pid=3163 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:09:18.627000 audit[3163]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffc7976c70 a2=0 a3=7fffc7976c5c items=0 ppid=3058 pid=3163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.627000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A
Jul  2 07:09:18.633442 containerd[1481]: time="2024-07-02T07:09:18.633389004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-kl8t7,Uid:dfb06764-044b-42b8-96de-03ed7db96937,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1b0345ea9d40c2f091cf3f6429b440438e832c2e441d7ec62dbf99999bb8513c\""
Jul  2 07:09:18.636491 containerd[1481]: time="2024-07-02T07:09:18.636449320Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\""
Jul  2 07:09:18.638000 audit[3166]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_rule pid=3166 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:09:18.638000 audit[3166]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffec32af8e0 a2=0 a3=7ffec32af8cc items=0 ppid=3058 pid=3166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.638000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A
Jul  2 07:09:18.644000 audit[3169]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_rule pid=3169 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:09:18.644000 audit[3169]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdf60f5940 a2=0 a3=7ffdf60f592c items=0 ppid=3058 pid=3169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.644000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D
Jul  2 07:09:18.646000 audit[3170]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=3170 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:09:18.646000 audit[3170]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffca84f7940 a2=0 a3=7ffca84f792c items=0 ppid=3058 pid=3170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.646000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174
Jul  2 07:09:18.649000 audit[3172]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=3172 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:09:18.649000 audit[3172]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe11bce4a0 a2=0 a3=7ffe11bce48c items=0 ppid=3058 pid=3172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.649000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553
Jul  2 07:09:18.653000 audit[3175]: NETFILTER_CFG table=nat:63 family=2 entries=1 op=nft_register_rule pid=3175 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:09:18.653000 audit[3175]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcba89daa0 a2=0 a3=7ffcba89da8c items=0 ppid=3058 pid=3175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.653000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553
Jul  2 07:09:18.654000 audit[3176]: NETFILTER_CFG table=nat:64 family=2 entries=1 op=nft_register_chain pid=3176 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:09:18.654000 audit[3176]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff354fa160 a2=0 a3=7fff354fa14c items=0 ppid=3058 pid=3176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.654000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174
Jul  2 07:09:18.657000 audit[3178]: NETFILTER_CFG table=nat:65 family=2 entries=1 op=nft_register_rule pid=3178 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul  2 07:09:18.657000 audit[3178]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffc2892d1a0 a2=0 a3=7ffc2892d18c items=0 ppid=3058 pid=3178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.657000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47
Jul  2 07:09:18.718000 audit[3184]: NETFILTER_CFG table=filter:66 family=2 entries=8 op=nft_register_rule pid=3184 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul  2 07:09:18.718000 audit[3184]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffeeac5ec10 a2=0 a3=7ffeeac5ebfc items=0 ppid=3058 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.718000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul  2 07:09:18.737000 audit[3184]: NETFILTER_CFG table=nat:67 family=2 entries=14 op=nft_register_chain pid=3184 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul  2 07:09:18.737000 audit[3184]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffeeac5ec10 a2=0 a3=7ffeeac5ebfc items=0 ppid=3058 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.737000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul  2 07:09:18.739000 audit[3190]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=3190 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul  2 07:09:18.739000 audit[3190]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe1810b7e0 a2=0 a3=7ffe1810b7cc items=0 ppid=3058 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.739000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572
Jul  2 07:09:18.742000 audit[3192]: NETFILTER_CFG table=filter:69 family=10 entries=2 op=nft_register_chain pid=3192 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul  2 07:09:18.742000 audit[3192]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe4d171f90 a2=0 a3=7ffe4d171f7c items=0 ppid=3058 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.742000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963
Jul  2 07:09:18.747000 audit[3195]: NETFILTER_CFG table=filter:70 family=10 entries=2 op=nft_register_chain pid=3195 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul  2 07:09:18.747000 audit[3195]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff462433e0 a2=0 a3=7fff462433cc items=0 ppid=3058 pid=3195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.747000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276
Jul  2 07:09:18.748000 audit[3196]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_chain pid=3196 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul  2 07:09:18.748000 audit[3196]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffef7161010 a2=0 a3=7ffef7160ffc items=0 ppid=3058 pid=3196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.748000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572
Jul  2 07:09:18.751000 audit[3198]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_rule pid=3198 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul  2 07:09:18.751000 audit[3198]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe6fea1ec0 a2=0 a3=7ffe6fea1eac items=0 ppid=3058 pid=3198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.751000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453
Jul  2 07:09:18.753000 audit[3199]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=3199 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul  2 07:09:18.753000 audit[3199]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc9aaa8240 a2=0 a3=7ffc9aaa822c items=0 ppid=3058 pid=3199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.753000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572
Jul  2 07:09:18.756000 audit[3201]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=3201 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul  2 07:09:18.756000 audit[3201]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe872fc670 a2=0 a3=7ffe872fc65c items=0 ppid=3058 pid=3201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.756000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245
Jul  2 07:09:18.760000 audit[3204]: NETFILTER_CFG table=filter:75 family=10 entries=2 op=nft_register_chain pid=3204 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul  2 07:09:18.760000 audit[3204]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffc1a0b7c20 a2=0 a3=7ffc1a0b7c0c items=0 ppid=3058 pid=3204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.760000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D
Jul  2 07:09:18.761000 audit[3205]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_chain pid=3205 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul  2 07:09:18.761000 audit[3205]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe2f7409c0 a2=0 a3=7ffe2f7409ac items=0 ppid=3058 pid=3205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.761000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572
Jul  2 07:09:18.764000 audit[3207]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=3207 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul  2 07:09:18.764000 audit[3207]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff47828540 a2=0 a3=7fff4782852c items=0 ppid=3058 pid=3207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.764000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244
Jul  2 07:09:18.766000 audit[3208]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_chain pid=3208 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul  2 07:09:18.766000 audit[3208]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcd8cfc010 a2=0 a3=7ffcd8cfbffc items=0 ppid=3058 pid=3208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.766000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572
Jul  2 07:09:18.769000 audit[3210]: NETFILTER_CFG table=filter:79 family=10 entries=1 op=nft_register_rule pid=3210 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul  2 07:09:18.769000 audit[3210]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffdab3ff80 a2=0 a3=7fffdab3ff6c items=0 ppid=3058 pid=3210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.769000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A
Jul  2 07:09:18.773000 audit[3213]: NETFILTER_CFG table=filter:80 family=10 entries=1 op=nft_register_rule pid=3213 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul  2 07:09:18.773000 audit[3213]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc3fd6abb0 a2=0 a3=7ffc3fd6ab9c items=0 ppid=3058 pid=3213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.773000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D
Jul  2 07:09:18.777000 audit[3216]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_rule pid=3216 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul  2 07:09:18.777000 audit[3216]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc970db900 a2=0 a3=7ffc970db8ec items=0 ppid=3058 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.777000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C
Jul  2 07:09:18.778000 audit[3217]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=3217 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul  2 07:09:18.778000 audit[3217]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd634e5340 a2=0 a3=7ffd634e532c items=0 ppid=3058 pid=3217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.778000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174
Jul  2 07:09:18.781000 audit[3219]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=3219 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul  2 07:09:18.781000 audit[3219]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffde9b631c0 a2=0 a3=7ffde9b631ac items=0 ppid=3058 pid=3219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.781000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553
Jul  2 07:09:18.785000 audit[3222]: NETFILTER_CFG table=nat:84 family=10 entries=2 op=nft_register_chain pid=3222 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul  2 07:09:18.785000 audit[3222]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffeddf826b0 a2=0 a3=7ffeddf8269c items=0 ppid=3058 pid=3222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.785000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553
Jul  2 07:09:18.787000 audit[3223]: NETFILTER_CFG table=nat:85 family=10 entries=1 op=nft_register_chain pid=3223 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul  2 07:09:18.787000 audit[3223]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff15d02e70 a2=0 a3=7fff15d02e5c items=0 ppid=3058 pid=3223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.787000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174
Jul  2 07:09:18.789000 audit[3225]: NETFILTER_CFG table=nat:86 family=10 entries=2 op=nft_register_chain pid=3225 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul  2 07:09:18.789000 audit[3225]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffdaec894d0 a2=0 a3=7ffdaec894bc items=0 ppid=3058 pid=3225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.789000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47
Jul  2 07:09:18.791000 audit[3226]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=3226 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul  2 07:09:18.791000 audit[3226]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffbaa43880 a2=0 a3=7fffbaa4386c items=0 ppid=3058 pid=3226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.791000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572
Jul  2 07:09:18.793000 audit[3228]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=3228 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul  2 07:09:18.793000 audit[3228]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff5bcb3990 a2=0 a3=7fff5bcb397c items=0 ppid=3058 pid=3228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.793000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Jul  2 07:09:18.797000 audit[3231]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_rule pid=3231 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul  2 07:09:18.797000 audit[3231]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc7d98a460 a2=0 a3=7ffc7d98a44c items=0 ppid=3058 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.797000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Jul  2 07:09:18.803000 audit[3233]: NETFILTER_CFG table=filter:90 family=10 entries=3 op=nft_register_rule pid=3233 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto"
Jul  2 07:09:18.803000 audit[3233]: SYSCALL arch=c000003e syscall=46 success=yes exit=2004 a0=3 a1=7ffc7b94cd40 a2=0 a3=7ffc7b94cd2c items=0 ppid=3058 pid=3233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.803000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul  2 07:09:18.804000 audit[3233]: NETFILTER_CFG table=nat:91 family=10 entries=7 op=nft_register_chain pid=3233 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto"
Jul  2 07:09:18.804000 audit[3233]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffc7b94cd40 a2=0 a3=7ffc7b94cd2c items=0 ppid=3058 pid=3233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:18.804000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
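Note: the proctitle= fields in the audit records above are hex-encoded, NUL-separated argv strings. A minimal decode sketch (Python 3, standard library only; the sample value is copied verbatim from the ip6tables-restore record above, and `ausearch -i` performs the same translation):

    # Decode an audit PROCTITLE hex string back into the command line it records.
    proctitle = "6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"
    argv = bytes.fromhex(proctitle).split(b"\x00")   # argv entries are NUL-separated
    print(" ".join(arg.decode() for arg in argv))
    # -> ip6tables-restore -w 5 -W 100000 --noflush --counters

The same decode applies to every PROCTITLE line in this section (the earlier ones resolve to the individual ip6tables chain/rule commands issued by kube-proxy).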
Jul  2 07:09:19.093820 systemd[1]: run-containerd-runc-k8s.io-15c7c4bcc4ec481bdd6df12144c42c018befc0cf915f2b389003229027c4af2d-runc.UWYXND.mount: Deactivated successfully.
Jul  2 07:09:21.168258 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3196738109.mount: Deactivated successfully.
Jul  2 07:09:22.113503 kubelet[2922]: I0702 07:09:22.113435    2922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-75q67" podStartSLOduration=5.113398053 podStartE2EDuration="5.113398053s" podCreationTimestamp="2024-07-02 07:09:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:09:19.185800124 +0000 UTC m=+17.178869403" watchObservedRunningTime="2024-07-02 07:09:22.113398053 +0000 UTC m=+20.106467432"
Jul  2 07:09:22.269539 containerd[1481]: time="2024-07-02T07:09:22.269478138Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:09:22.303927 containerd[1481]: time="2024-07-02T07:09:22.303806110Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076100"
Jul  2 07:09:22.308007 containerd[1481]: time="2024-07-02T07:09:22.307948531Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:09:22.363447 containerd[1481]: time="2024-07-02T07:09:22.363389010Z" level=info msg="ImageUpdate event name:\"quay.io/tigera/operator:v1.34.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:09:22.410746 containerd[1481]: time="2024-07-02T07:09:22.410659947Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:09:22.412109 containerd[1481]: time="2024-07-02T07:09:22.412050054Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 3.775376433s"
Jul  2 07:09:22.412235 containerd[1481]: time="2024-07-02T07:09:22.412114154Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\""
Jul  2 07:09:22.415737 containerd[1481]: time="2024-07-02T07:09:22.415687072Z" level=info msg="CreateContainer within sandbox \"1b0345ea9d40c2f091cf3f6429b440438e832c2e441d7ec62dbf99999bb8513c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul  2 07:09:22.576925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3370584963.mount: Deactivated successfully.
Jul  2 07:09:22.678052 containerd[1481]: time="2024-07-02T07:09:22.677389188Z" level=info msg="CreateContainer within sandbox \"1b0345ea9d40c2f091cf3f6429b440438e832c2e441d7ec62dbf99999bb8513c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9e3c9890d80a3a2def383f663f19cb8408221bf90b15c8551ffbdd43ace8a392\""
Jul  2 07:09:22.679071 containerd[1481]: time="2024-07-02T07:09:22.679007896Z" level=info msg="StartContainer for \"9e3c9890d80a3a2def383f663f19cb8408221bf90b15c8551ffbdd43ace8a392\""
Jul  2 07:09:22.710058 systemd[1]: Started cri-containerd-9e3c9890d80a3a2def383f663f19cb8408221bf90b15c8551ffbdd43ace8a392.scope - libcontainer container 9e3c9890d80a3a2def383f663f19cb8408221bf90b15c8551ffbdd43ace8a392.
Jul  2 07:09:22.725908 kernel: kauditd_printk_skb: 190 callbacks suppressed
Jul  2 07:09:22.726023 kernel: audit: type=1334 audit(1719904162.720:488): prog-id=141 op=LOAD
Jul  2 07:09:22.720000 audit: BPF prog-id=141 op=LOAD
Jul  2 07:09:22.724000 audit: BPF prog-id=142 op=LOAD
Jul  2 07:09:22.746934 kernel: audit: type=1334 audit(1719904162.724:489): prog-id=142 op=LOAD
Jul  2 07:09:22.747066 kernel: audit: type=1300 audit(1719904162.724:489): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=3111 pid=3252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:22.724000 audit[3252]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=3111 pid=3252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:22.748154 kernel: audit: type=1327 audit(1719904162.724:489): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965336339383930643830613361326465663338336636363366313963
Jul  2 07:09:22.724000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965336339383930643830613361326465663338336636363366313963
Jul  2 07:09:22.724000 audit: BPF prog-id=143 op=LOAD
Jul  2 07:09:22.766309 kernel: audit: type=1334 audit(1719904162.724:490): prog-id=143 op=LOAD
Jul  2 07:09:22.724000 audit[3252]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=3111 pid=3252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:22.776430 kernel: audit: type=1300 audit(1719904162.724:490): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=3111 pid=3252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:22.724000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965336339383930643830613361326465663338336636363366313963
Jul  2 07:09:22.787756 kernel: audit: type=1327 audit(1719904162.724:490): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965336339383930643830613361326465663338336636363366313963
Jul  2 07:09:22.724000 audit: BPF prog-id=143 op=UNLOAD
Jul  2 07:09:22.793336 kernel: audit: type=1334 audit(1719904162.724:491): prog-id=143 op=UNLOAD
Jul  2 07:09:22.798077 kernel: audit: type=1334 audit(1719904162.724:492): prog-id=142 op=UNLOAD
Jul  2 07:09:22.798211 kernel: audit: type=1334 audit(1719904162.724:493): prog-id=144 op=LOAD
Jul  2 07:09:22.724000 audit: BPF prog-id=142 op=UNLOAD
Jul  2 07:09:22.724000 audit: BPF prog-id=144 op=LOAD
Jul  2 07:09:22.724000 audit[3252]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=3111 pid=3252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:22.724000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965336339383930643830613361326465663338336636363366313963
Jul  2 07:09:22.805600 containerd[1481]: time="2024-07-02T07:09:22.805548232Z" level=info msg="StartContainer for \"9e3c9890d80a3a2def383f663f19cb8408221bf90b15c8551ffbdd43ace8a392\" returns successfully"
Jul  2 07:09:25.865000 audit[3284]: NETFILTER_CFG table=filter:92 family=2 entries=15 op=nft_register_rule pid=3284 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul  2 07:09:25.865000 audit[3284]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffedea68e20 a2=0 a3=7ffedea68e0c items=0 ppid=3058 pid=3284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:25.865000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul  2 07:09:25.866000 audit[3284]: NETFILTER_CFG table=nat:93 family=2 entries=12 op=nft_register_rule pid=3284 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul  2 07:09:25.866000 audit[3284]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffedea68e20 a2=0 a3=0 items=0 ppid=3058 pid=3284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:25.866000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul  2 07:09:25.873000 audit[3286]: NETFILTER_CFG table=filter:94 family=2 entries=16 op=nft_register_rule pid=3286 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul  2 07:09:25.873000 audit[3286]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fff69183e10 a2=0 a3=7fff69183dfc items=0 ppid=3058 pid=3286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:25.873000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul  2 07:09:25.873000 audit[3286]: NETFILTER_CFG table=nat:95 family=2 entries=12 op=nft_register_rule pid=3286 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul  2 07:09:25.873000 audit[3286]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff69183e10 a2=0 a3=0 items=0 ppid=3058 pid=3286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:25.873000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul  2 07:09:26.095083 kubelet[2922]: I0702 07:09:26.094996    2922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76ff79f7fd-kl8t7" podStartSLOduration=4.316891913 podStartE2EDuration="8.09496826s" podCreationTimestamp="2024-07-02 07:09:18 +0000 UTC" firstStartedPulling="2024-07-02 07:09:18.635485215 +0000 UTC m=+16.628554494" lastFinishedPulling="2024-07-02 07:09:22.413561462 +0000 UTC m=+20.406630841" observedRunningTime="2024-07-02 07:09:23.188340952 +0000 UTC m=+21.181410231" watchObservedRunningTime="2024-07-02 07:09:26.09496826 +0000 UTC m=+24.088037639"
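The pod_startup_latency_tracker values in the line above are internally consistent under the assumption that podStartSLOduration is the end-to-end startup time minus the image-pull window; a quick arithmetic check using the timestamps and monotonic (m=+) offsets printed in the log:

    # Sanity check of the tigera-operator startup-latency line above (all values copied from the log).
    e2e  = 26.09496826 - 18.0              # watchObservedRunningTime 07:09:26.09496826 - podCreationTimestamp 07:09:18
    pull = 20.406630841 - 16.628554494     # lastFinishedPulling m=+20.406630841 - firstStartedPulling m=+16.628554494
    slo  = e2e - pull                      # startup time excluding the image pull
    print(round(e2e, 9), round(slo, 9))    # -> 8.09496826 4.316891913, matching podStartE2EDuration and podStartSLOduration

For the kube-proxy line earlier (07:09:22.113), the pull timestamps are the zero value, so the two durations coincide at 5.113398053s.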
Jul  2 07:09:26.095712 kubelet[2922]: I0702 07:09:26.095242    2922 topology_manager.go:215] "Topology Admit Handler" podUID="5d459649-f716-4671-935a-9a514459d667" podNamespace="calico-system" podName="calico-typha-6d766dcf56-nrd4z"
Jul  2 07:09:26.106022 systemd[1]: Created slice kubepods-besteffort-pod5d459649_f716_4671_935a_9a514459d667.slice - libcontainer container kubepods-besteffort-pod5d459649_f716_4671_935a_9a514459d667.slice.
Jul  2 07:09:26.139527 kubelet[2922]: I0702 07:09:26.139467    2922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d459649-f716-4671-935a-9a514459d667-tigera-ca-bundle\") pod \"calico-typha-6d766dcf56-nrd4z\" (UID: \"5d459649-f716-4671-935a-9a514459d667\") " pod="calico-system/calico-typha-6d766dcf56-nrd4z"
Jul  2 07:09:26.139527 kubelet[2922]: I0702 07:09:26.139522    2922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5d459649-f716-4671-935a-9a514459d667-typha-certs\") pod \"calico-typha-6d766dcf56-nrd4z\" (UID: \"5d459649-f716-4671-935a-9a514459d667\") " pod="calico-system/calico-typha-6d766dcf56-nrd4z"
Jul  2 07:09:26.139768 kubelet[2922]: I0702 07:09:26.139557    2922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx4nv\" (UniqueName: \"kubernetes.io/projected/5d459649-f716-4671-935a-9a514459d667-kube-api-access-fx4nv\") pod \"calico-typha-6d766dcf56-nrd4z\" (UID: \"5d459649-f716-4671-935a-9a514459d667\") " pod="calico-system/calico-typha-6d766dcf56-nrd4z"
Jul  2 07:09:26.394699 kubelet[2922]: I0702 07:09:26.394541    2922 topology_manager.go:215] "Topology Admit Handler" podUID="f17e5129-b2c1-40ca-965a-c6047abe01c8" podNamespace="calico-system" podName="calico-node-54hx8"
Jul  2 07:09:26.402377 systemd[1]: Created slice kubepods-besteffort-podf17e5129_b2c1_40ca_965a_c6047abe01c8.slice - libcontainer container kubepods-besteffort-podf17e5129_b2c1_40ca_965a_c6047abe01c8.slice.
Jul  2 07:09:26.428382 containerd[1481]: time="2024-07-02T07:09:26.428306704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d766dcf56-nrd4z,Uid:5d459649-f716-4671-935a-9a514459d667,Namespace:calico-system,Attempt:0,}"
Jul  2 07:09:26.442046 kubelet[2922]: I0702 07:09:26.441992    2922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f17e5129-b2c1-40ca-965a-c6047abe01c8-cni-bin-dir\") pod \"calico-node-54hx8\" (UID: \"f17e5129-b2c1-40ca-965a-c6047abe01c8\") " pod="calico-system/calico-node-54hx8"
Jul  2 07:09:26.442046 kubelet[2922]: I0702 07:09:26.442046    2922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f17e5129-b2c1-40ca-965a-c6047abe01c8-cni-log-dir\") pod \"calico-node-54hx8\" (UID: \"f17e5129-b2c1-40ca-965a-c6047abe01c8\") " pod="calico-system/calico-node-54hx8"
Jul  2 07:09:26.442330 kubelet[2922]: I0702 07:09:26.442072    2922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f17e5129-b2c1-40ca-965a-c6047abe01c8-var-run-calico\") pod \"calico-node-54hx8\" (UID: \"f17e5129-b2c1-40ca-965a-c6047abe01c8\") " pod="calico-system/calico-node-54hx8"
Jul  2 07:09:26.442330 kubelet[2922]: I0702 07:09:26.442096    2922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f17e5129-b2c1-40ca-965a-c6047abe01c8-lib-modules\") pod \"calico-node-54hx8\" (UID: \"f17e5129-b2c1-40ca-965a-c6047abe01c8\") " pod="calico-system/calico-node-54hx8"
Jul  2 07:09:26.442330 kubelet[2922]: I0702 07:09:26.442118    2922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f17e5129-b2c1-40ca-965a-c6047abe01c8-tigera-ca-bundle\") pod \"calico-node-54hx8\" (UID: \"f17e5129-b2c1-40ca-965a-c6047abe01c8\") " pod="calico-system/calico-node-54hx8"
Jul  2 07:09:26.442330 kubelet[2922]: I0702 07:09:26.442141    2922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f17e5129-b2c1-40ca-965a-c6047abe01c8-xtables-lock\") pod \"calico-node-54hx8\" (UID: \"f17e5129-b2c1-40ca-965a-c6047abe01c8\") " pod="calico-system/calico-node-54hx8"
Jul  2 07:09:26.442330 kubelet[2922]: I0702 07:09:26.442161    2922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f17e5129-b2c1-40ca-965a-c6047abe01c8-policysync\") pod \"calico-node-54hx8\" (UID: \"f17e5129-b2c1-40ca-965a-c6047abe01c8\") " pod="calico-system/calico-node-54hx8"
Jul  2 07:09:26.442523 kubelet[2922]: I0702 07:09:26.442180    2922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f17e5129-b2c1-40ca-965a-c6047abe01c8-node-certs\") pod \"calico-node-54hx8\" (UID: \"f17e5129-b2c1-40ca-965a-c6047abe01c8\") " pod="calico-system/calico-node-54hx8"
Jul  2 07:09:26.442523 kubelet[2922]: I0702 07:09:26.442217    2922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f17e5129-b2c1-40ca-965a-c6047abe01c8-flexvol-driver-host\") pod \"calico-node-54hx8\" (UID: \"f17e5129-b2c1-40ca-965a-c6047abe01c8\") " pod="calico-system/calico-node-54hx8"
Jul  2 07:09:26.442523 kubelet[2922]: I0702 07:09:26.442241    2922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddjtf\" (UniqueName: \"kubernetes.io/projected/f17e5129-b2c1-40ca-965a-c6047abe01c8-kube-api-access-ddjtf\") pod \"calico-node-54hx8\" (UID: \"f17e5129-b2c1-40ca-965a-c6047abe01c8\") " pod="calico-system/calico-node-54hx8"
Jul  2 07:09:26.442523 kubelet[2922]: I0702 07:09:26.442265    2922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f17e5129-b2c1-40ca-965a-c6047abe01c8-var-lib-calico\") pod \"calico-node-54hx8\" (UID: \"f17e5129-b2c1-40ca-965a-c6047abe01c8\") " pod="calico-system/calico-node-54hx8"
Jul  2 07:09:26.442523 kubelet[2922]: I0702 07:09:26.442286    2922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f17e5129-b2c1-40ca-965a-c6047abe01c8-cni-net-dir\") pod \"calico-node-54hx8\" (UID: \"f17e5129-b2c1-40ca-965a-c6047abe01c8\") " pod="calico-system/calico-node-54hx8"
Jul  2 07:09:26.544716 kubelet[2922]: E0702 07:09:26.544684    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.544975 kubelet[2922]: W0702 07:09:26.544955    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.545126 kubelet[2922]: E0702 07:09:26.545109    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.545440 kubelet[2922]: E0702 07:09:26.545416    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.545440 kubelet[2922]: W0702 07:09:26.545437    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.545599 kubelet[2922]: E0702 07:09:26.545470    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.545718 kubelet[2922]: E0702 07:09:26.545701    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.545797 kubelet[2922]: W0702 07:09:26.545719    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.545898 kubelet[2922]: E0702 07:09:26.545811    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.546039 kubelet[2922]: E0702 07:09:26.546020    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.546039 kubelet[2922]: W0702 07:09:26.546039    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.546185 kubelet[2922]: E0702 07:09:26.546126    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.546277 kubelet[2922]: E0702 07:09:26.546260    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.546351 kubelet[2922]: W0702 07:09:26.546276    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.546414 kubelet[2922]: E0702 07:09:26.546361    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.546559 kubelet[2922]: E0702 07:09:26.546501    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.546559 kubelet[2922]: W0702 07:09:26.546513    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.546687 kubelet[2922]: E0702 07:09:26.546598    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.546760 kubelet[2922]: E0702 07:09:26.546737    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.546760 kubelet[2922]: W0702 07:09:26.546753    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.546854 kubelet[2922]: E0702 07:09:26.546836    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.547042 kubelet[2922]: E0702 07:09:26.546976    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.547042 kubelet[2922]: W0702 07:09:26.546989    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.547163 kubelet[2922]: E0702 07:09:26.547101    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.547302 kubelet[2922]: E0702 07:09:26.547229    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.547302 kubelet[2922]: W0702 07:09:26.547241    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.547424 kubelet[2922]: E0702 07:09:26.547330    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.547493 kubelet[2922]: E0702 07:09:26.547473    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.547548 kubelet[2922]: W0702 07:09:26.547491    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.547619 kubelet[2922]: E0702 07:09:26.547577    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.547756 kubelet[2922]: E0702 07:09:26.547702    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.547756 kubelet[2922]: W0702 07:09:26.547713    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.547888 kubelet[2922]: E0702 07:09:26.547791    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.547947 kubelet[2922]: E0702 07:09:26.547941    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.547996 kubelet[2922]: W0702 07:09:26.547950    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.548043 kubelet[2922]: E0702 07:09:26.548028    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.548194 kubelet[2922]: E0702 07:09:26.548151    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.548194 kubelet[2922]: W0702 07:09:26.548162    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.548332 kubelet[2922]: E0702 07:09:26.548250    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.548388 kubelet[2922]: E0702 07:09:26.548372    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.548388 kubelet[2922]: W0702 07:09:26.548380    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.548482 kubelet[2922]: E0702 07:09:26.548455    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.548632 kubelet[2922]: E0702 07:09:26.548573    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.548632 kubelet[2922]: W0702 07:09:26.548583    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.548756 kubelet[2922]: E0702 07:09:26.548657    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.548809 kubelet[2922]: E0702 07:09:26.548780    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.548809 kubelet[2922]: W0702 07:09:26.548788    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.548932 kubelet[2922]: E0702 07:09:26.548883    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.549559 kubelet[2922]: E0702 07:09:26.549013    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.549559 kubelet[2922]: W0702 07:09:26.549023    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.549559 kubelet[2922]: E0702 07:09:26.549125    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.549559 kubelet[2922]: E0702 07:09:26.549257    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.549559 kubelet[2922]: W0702 07:09:26.549265    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.549559 kubelet[2922]: E0702 07:09:26.549336    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.549559 kubelet[2922]: E0702 07:09:26.549468    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.549559 kubelet[2922]: W0702 07:09:26.549479    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.549559 kubelet[2922]: E0702 07:09:26.549558    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.551998 kubelet[2922]: E0702 07:09:26.551978    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.551998 kubelet[2922]: W0702 07:09:26.551996    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.552157 kubelet[2922]: E0702 07:09:26.552099    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.552322 kubelet[2922]: E0702 07:09:26.552303    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.552322 kubelet[2922]: W0702 07:09:26.552321    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.552440 kubelet[2922]: E0702 07:09:26.552409    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.552598 kubelet[2922]: E0702 07:09:26.552575    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.552598 kubelet[2922]: W0702 07:09:26.552591    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.552731 kubelet[2922]: E0702 07:09:26.552678    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.552828 kubelet[2922]: E0702 07:09:26.552813    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.552913 kubelet[2922]: W0702 07:09:26.552828    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.553024 kubelet[2922]: E0702 07:09:26.553004    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.553259 kubelet[2922]: E0702 07:09:26.553240    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.553259 kubelet[2922]: W0702 07:09:26.553258    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.553376 kubelet[2922]: E0702 07:09:26.553367    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.553532 kubelet[2922]: E0702 07:09:26.553515    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.553532 kubelet[2922]: W0702 07:09:26.553531    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.553660 kubelet[2922]: E0702 07:09:26.553643    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.553802 kubelet[2922]: E0702 07:09:26.553794    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.553878 kubelet[2922]: W0702 07:09:26.553804    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.554002 kubelet[2922]: E0702 07:09:26.553967    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.554074 kubelet[2922]: E0702 07:09:26.554002    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.554074 kubelet[2922]: W0702 07:09:26.554011    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.554160 kubelet[2922]: E0702 07:09:26.554113    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.554602 kubelet[2922]: E0702 07:09:26.554577    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.554602 kubelet[2922]: W0702 07:09:26.554597    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.554854 kubelet[2922]: E0702 07:09:26.554811    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.554854 kubelet[2922]: W0702 07:09:26.554823    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.554854 kubelet[2922]: E0702 07:09:26.554836    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.554854 kubelet[2922]: E0702 07:09:26.554858    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.585436 kubelet[2922]: E0702 07:09:26.578065    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.585436 kubelet[2922]: W0702 07:09:26.578091    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.585436 kubelet[2922]: E0702 07:09:26.578117    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.589445 kubelet[2922]: E0702 07:09:26.589402    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.589445 kubelet[2922]: W0702 07:09:26.589438    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.589683 kubelet[2922]: E0702 07:09:26.589468    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
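The repeated FlexVolume warnings above come from the kubelet re-probing a driver directory whose `uds` executable is not present yet; in this Calico deployment the calico-node pod mounts a flexvol-driver-host host path (see the volume list above), and its init container typically installs that binary, after which the probe errors stop. A hypothetical on-node check, assuming shell access to the node (the path is taken verbatim from the log):

    # Check whether the FlexVolume driver the kubelet is probing has been installed yet.
    import os
    driver = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"
    print(driver, "->", "present" if os.path.isfile(driver) else "missing")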
Jul  2 07:09:26.602937 kubelet[2922]: I0702 07:09:26.602890    2922 topology_manager.go:215] "Topology Admit Handler" podUID="df40d4db-1fad-4103-96f4-e2848ac4f551" podNamespace="calico-system" podName="csi-node-driver-zhl5r"
Jul  2 07:09:26.603617 kubelet[2922]: E0702 07:09:26.603579    2922 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zhl5r" podUID="df40d4db-1fad-4103-96f4-e2848ac4f551"
Jul  2 07:09:26.636294 kubelet[2922]: E0702 07:09:26.636259    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.636571 kubelet[2922]: W0702 07:09:26.636544    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.636712 kubelet[2922]: E0702 07:09:26.636695    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.637122 kubelet[2922]: E0702 07:09:26.637104    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.637256 kubelet[2922]: W0702 07:09:26.637241    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.637349 kubelet[2922]: E0702 07:09:26.637337    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.639348 kubelet[2922]: E0702 07:09:26.639322    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.639482 kubelet[2922]: W0702 07:09:26.639469    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.639568 kubelet[2922]: E0702 07:09:26.639557    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.639902 kubelet[2922]: E0702 07:09:26.639886    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.640015 kubelet[2922]: W0702 07:09:26.640004    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.640108 kubelet[2922]: E0702 07:09:26.640097    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.640437 kubelet[2922]: E0702 07:09:26.640423    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.640538 kubelet[2922]: W0702 07:09:26.640527    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.640623 kubelet[2922]: E0702 07:09:26.640612    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.640932 kubelet[2922]: E0702 07:09:26.640918    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.641048 kubelet[2922]: W0702 07:09:26.641036    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.641133 kubelet[2922]: E0702 07:09:26.641122    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.641440 kubelet[2922]: E0702 07:09:26.641415    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.641540 kubelet[2922]: W0702 07:09:26.641529    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.641622 kubelet[2922]: E0702 07:09:26.641611    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.642930 kubelet[2922]: E0702 07:09:26.642915    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.643051 kubelet[2922]: W0702 07:09:26.643038    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.643132 kubelet[2922]: E0702 07:09:26.643122    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.643450 kubelet[2922]: E0702 07:09:26.643437    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.643545 kubelet[2922]: W0702 07:09:26.643534    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.643625 kubelet[2922]: E0702 07:09:26.643606    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.643977 kubelet[2922]: E0702 07:09:26.643964    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.644082 kubelet[2922]: W0702 07:09:26.644071    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.644164 kubelet[2922]: E0702 07:09:26.644153    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.644922 kubelet[2922]: E0702 07:09:26.644829    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.645028 kubelet[2922]: W0702 07:09:26.645013    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.645109 kubelet[2922]: E0702 07:09:26.645099    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.647943 kubelet[2922]: E0702 07:09:26.647923    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.648055 kubelet[2922]: W0702 07:09:26.648039    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.648133 kubelet[2922]: E0702 07:09:26.648122    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.648449 kubelet[2922]: E0702 07:09:26.648436    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.648540 kubelet[2922]: W0702 07:09:26.648528    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.648638 kubelet[2922]: E0702 07:09:26.648626    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.648930 kubelet[2922]: E0702 07:09:26.648917    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.649022 kubelet[2922]: W0702 07:09:26.649008    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.649114 kubelet[2922]: E0702 07:09:26.649103    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.649372 kubelet[2922]: E0702 07:09:26.649360    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.649465 kubelet[2922]: W0702 07:09:26.649453    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.649541 kubelet[2922]: E0702 07:09:26.649529    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.649798 kubelet[2922]: E0702 07:09:26.649787    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.649935 kubelet[2922]: W0702 07:09:26.649920    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.650018 kubelet[2922]: E0702 07:09:26.650006    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.650314 kubelet[2922]: E0702 07:09:26.650302    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.650407 kubelet[2922]: W0702 07:09:26.650396    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.650542 kubelet[2922]: E0702 07:09:26.650530    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.650797 kubelet[2922]: E0702 07:09:26.650785    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.650902 kubelet[2922]: W0702 07:09:26.650888    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.650995 kubelet[2922]: E0702 07:09:26.650983    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.651275 kubelet[2922]: E0702 07:09:26.651261    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.651371 kubelet[2922]: W0702 07:09:26.651359    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.651452 kubelet[2922]: E0702 07:09:26.651440    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.651714 kubelet[2922]: E0702 07:09:26.651703    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.651804 kubelet[2922]: W0702 07:09:26.651792    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.651926 kubelet[2922]: E0702 07:09:26.651913    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.652299 kubelet[2922]: E0702 07:09:26.652286    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.652404 kubelet[2922]: W0702 07:09:26.652391    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.652478 kubelet[2922]: E0702 07:09:26.652467    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.652564 kubelet[2922]: I0702 07:09:26.652552    2922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/df40d4db-1fad-4103-96f4-e2848ac4f551-varrun\") pod \"csi-node-driver-zhl5r\" (UID: \"df40d4db-1fad-4103-96f4-e2848ac4f551\") " pod="calico-system/csi-node-driver-zhl5r"
Jul  2 07:09:26.652932 kubelet[2922]: E0702 07:09:26.652920    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.653028 kubelet[2922]: W0702 07:09:26.653017    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.653105 kubelet[2922]: E0702 07:09:26.653093    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.654950 kubelet[2922]: E0702 07:09:26.654933    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.655051 kubelet[2922]: W0702 07:09:26.655039    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.655127 kubelet[2922]: E0702 07:09:26.655117    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.655316 kubelet[2922]: I0702 07:09:26.655300    2922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/df40d4db-1fad-4103-96f4-e2848ac4f551-socket-dir\") pod \"csi-node-driver-zhl5r\" (UID: \"df40d4db-1fad-4103-96f4-e2848ac4f551\") " pod="calico-system/csi-node-driver-zhl5r"
Jul  2 07:09:26.655446 kubelet[2922]: E0702 07:09:26.655422    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.655525 kubelet[2922]: W0702 07:09:26.655515    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.655590 kubelet[2922]: E0702 07:09:26.655580    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.655931 kubelet[2922]: E0702 07:09:26.655918    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.656027 kubelet[2922]: W0702 07:09:26.656016    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.656102 kubelet[2922]: E0702 07:09:26.656092    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.656309 kubelet[2922]: I0702 07:09:26.656293    2922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/df40d4db-1fad-4103-96f4-e2848ac4f551-registration-dir\") pod \"csi-node-driver-zhl5r\" (UID: \"df40d4db-1fad-4103-96f4-e2848ac4f551\") " pod="calico-system/csi-node-driver-zhl5r"
Jul  2 07:09:26.656454 kubelet[2922]: E0702 07:09:26.656413    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.656532 kubelet[2922]: W0702 07:09:26.656521    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.656603 kubelet[2922]: E0702 07:09:26.656593    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.656995 kubelet[2922]: E0702 07:09:26.656980    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.657094 kubelet[2922]: W0702 07:09:26.657082    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.657168 kubelet[2922]: E0702 07:09:26.657159    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.658949 kubelet[2922]: E0702 07:09:26.658934    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.659049 kubelet[2922]: W0702 07:09:26.659038    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.659120 kubelet[2922]: E0702 07:09:26.659110    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.659297 kubelet[2922]: I0702 07:09:26.659282    2922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/df40d4db-1fad-4103-96f4-e2848ac4f551-kubelet-dir\") pod \"csi-node-driver-zhl5r\" (UID: \"df40d4db-1fad-4103-96f4-e2848ac4f551\") " pod="calico-system/csi-node-driver-zhl5r"
Jul  2 07:09:26.659428 kubelet[2922]: E0702 07:09:26.659407    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.659492 kubelet[2922]: W0702 07:09:26.659482    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.659560 kubelet[2922]: E0702 07:09:26.659549    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.659848 kubelet[2922]: E0702 07:09:26.659834    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.659994 kubelet[2922]: W0702 07:09:26.659979    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.660079 kubelet[2922]: E0702 07:09:26.660064    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.660333 kubelet[2922]: E0702 07:09:26.660320    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.660417 kubelet[2922]: W0702 07:09:26.660407    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.660488 kubelet[2922]: E0702 07:09:26.660478    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.660658 kubelet[2922]: I0702 07:09:26.660644    2922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcmk7\" (UniqueName: \"kubernetes.io/projected/df40d4db-1fad-4103-96f4-e2848ac4f551-kube-api-access-mcmk7\") pod \"csi-node-driver-zhl5r\" (UID: \"df40d4db-1fad-4103-96f4-e2848ac4f551\") " pod="calico-system/csi-node-driver-zhl5r"
Jul  2 07:09:26.660779 kubelet[2922]: E0702 07:09:26.660757    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.660851 kubelet[2922]: W0702 07:09:26.660841    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.660943 kubelet[2922]: E0702 07:09:26.660931    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.661218 kubelet[2922]: E0702 07:09:26.661206    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.661321 kubelet[2922]: W0702 07:09:26.661310    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.661397 kubelet[2922]: E0702 07:09:26.661386    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.668058 kubelet[2922]: E0702 07:09:26.668021    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.668323 kubelet[2922]: W0702 07:09:26.668299    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.668455 kubelet[2922]: E0702 07:09:26.668439    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.669105 kubelet[2922]: E0702 07:09:26.669079    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.669239 kubelet[2922]: W0702 07:09:26.669224    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.669350 kubelet[2922]: E0702 07:09:26.669336    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.702030 containerd[1481]: time="2024-07-02T07:09:26.701906634Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul  2 07:09:26.702307 containerd[1481]: time="2024-07-02T07:09:26.702277032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul  2 07:09:26.702425 containerd[1481]: time="2024-07-02T07:09:26.702404231Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul  2 07:09:26.702530 containerd[1481]: time="2024-07-02T07:09:26.702511630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul  2 07:09:26.706657 containerd[1481]: time="2024-07-02T07:09:26.706608003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-54hx8,Uid:f17e5129-b2c1-40ca-965a-c6047abe01c8,Namespace:calico-system,Attempt:0,}"
Jul  2 07:09:26.734673 systemd[1]: Started cri-containerd-1973007232f15e00dd65d7bfdecc840d71904a3621fa1968dd94f669c0ed8f77.scope - libcontainer container 1973007232f15e00dd65d7bfdecc840d71904a3621fa1968dd94f669c0ed8f77.
Jul  2 07:09:26.751000 audit: BPF prog-id=145 op=LOAD
Jul  2 07:09:26.751000 audit: BPF prog-id=146 op=LOAD
Jul  2 07:09:26.751000 audit[3375]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=3365 pid=3375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:26.751000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139373330303732333266313565303064643635643762666465636338
Jul  2 07:09:26.751000 audit: BPF prog-id=147 op=LOAD
Jul  2 07:09:26.751000 audit[3375]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=3365 pid=3375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:26.751000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139373330303732333266313565303064643635643762666465636338
Jul  2 07:09:26.751000 audit: BPF prog-id=147 op=UNLOAD
Jul  2 07:09:26.751000 audit: BPF prog-id=146 op=UNLOAD
Jul  2 07:09:26.751000 audit: BPF prog-id=148 op=LOAD
Jul  2 07:09:26.751000 audit[3375]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=3365 pid=3375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:26.751000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139373330303732333266313565303064643635643762666465636338
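The audit PROCTITLE records above carry the runc command line hex-encoded with NUL-separated arguments, and the value is cut off mid-container-ID, so only a prefix of the full command survives. A small Go helper (an illustration only, not part of any tooling in this log) decodes such a payload back into readable argv:

package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// decodeProctitle turns an audit PROCTITLE hex payload into the original
// command line; auditd stores the argv NUL-separated.
func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	return strings.Join(strings.Split(string(raw), "\x00"), " "), nil
}

func main() {
	// Leading portion of the PROCTITLE value logged above.
	const sample = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"

	cmdline, err := decodeProctitle(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(cmdline) // runc --root /run/containerd/runc/k8s.io
}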
Jul  2 07:09:26.763926 kubelet[2922]: E0702 07:09:26.762222    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.763926 kubelet[2922]: W0702 07:09:26.762256    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.763926 kubelet[2922]: E0702 07:09:26.762285    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.763926 kubelet[2922]: E0702 07:09:26.762632    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.763926 kubelet[2922]: W0702 07:09:26.762645    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.763926 kubelet[2922]: E0702 07:09:26.762664    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.763926 kubelet[2922]: E0702 07:09:26.762974    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.763926 kubelet[2922]: W0702 07:09:26.762986    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.763926 kubelet[2922]: E0702 07:09:26.763011    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.763926 kubelet[2922]: E0702 07:09:26.763276    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.764433 kubelet[2922]: W0702 07:09:26.763309    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.764433 kubelet[2922]: E0702 07:09:26.763332    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.764433 kubelet[2922]: E0702 07:09:26.763627    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.764433 kubelet[2922]: W0702 07:09:26.763639    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.764433 kubelet[2922]: E0702 07:09:26.763661    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.765508 kubelet[2922]: E0702 07:09:26.764773    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.765508 kubelet[2922]: W0702 07:09:26.764798    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.765508 kubelet[2922]: E0702 07:09:26.764910    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.765508 kubelet[2922]: E0702 07:09:26.765101    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.765508 kubelet[2922]: W0702 07:09:26.765112    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.765508 kubelet[2922]: E0702 07:09:26.765216    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.765508 kubelet[2922]: E0702 07:09:26.765371    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.765508 kubelet[2922]: W0702 07:09:26.765380    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.765508 kubelet[2922]: E0702 07:09:26.765478    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.766505 kubelet[2922]: E0702 07:09:26.766149    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.766505 kubelet[2922]: W0702 07:09:26.766170    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.766505 kubelet[2922]: E0702 07:09:26.766267    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.766505 kubelet[2922]: E0702 07:09:26.766418    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.766505 kubelet[2922]: W0702 07:09:26.766427    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.767066 kubelet[2922]: E0702 07:09:26.766804    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.767066 kubelet[2922]: E0702 07:09:26.766977    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.767066 kubelet[2922]: W0702 07:09:26.766987    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.767383 kubelet[2922]: E0702 07:09:26.767268    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.767612 kubelet[2922]: E0702 07:09:26.767517    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.767612 kubelet[2922]: W0702 07:09:26.767530    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.767858 kubelet[2922]: E0702 07:09:26.767743    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.768068 kubelet[2922]: E0702 07:09:26.768055    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.768148 kubelet[2922]: W0702 07:09:26.768136    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.768300 kubelet[2922]: E0702 07:09:26.768287    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.768520 kubelet[2922]: E0702 07:09:26.768508    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.768613 kubelet[2922]: W0702 07:09:26.768600    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.768763 kubelet[2922]: E0702 07:09:26.768750    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.769001 kubelet[2922]: E0702 07:09:26.768988    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.769093 kubelet[2922]: W0702 07:09:26.769072    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.769272 kubelet[2922]: E0702 07:09:26.769257    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.769502 kubelet[2922]: E0702 07:09:26.769490    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.769590 kubelet[2922]: W0702 07:09:26.769579    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.769749 kubelet[2922]: E0702 07:09:26.769731    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.770024 kubelet[2922]: E0702 07:09:26.770011    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.770124 kubelet[2922]: W0702 07:09:26.770111    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.770273 kubelet[2922]: E0702 07:09:26.770260    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.770647 kubelet[2922]: E0702 07:09:26.770632    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.770750 kubelet[2922]: W0702 07:09:26.770738    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.771041 kubelet[2922]: E0702 07:09:26.771023    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.771331 kubelet[2922]: E0702 07:09:26.771317    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.771424 kubelet[2922]: W0702 07:09:26.771412    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.771580 kubelet[2922]: E0702 07:09:26.771568    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.771796 kubelet[2922]: E0702 07:09:26.771786    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.771989 kubelet[2922]: W0702 07:09:26.771975    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.772207 kubelet[2922]: E0702 07:09:26.772193    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.772467 kubelet[2922]: E0702 07:09:26.772454    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.772571 kubelet[2922]: W0702 07:09:26.772557    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.772770 kubelet[2922]: E0702 07:09:26.772756    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.773038 kubelet[2922]: E0702 07:09:26.773025    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.773153 kubelet[2922]: W0702 07:09:26.773137    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.773354 kubelet[2922]: E0702 07:09:26.773340    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.773587 kubelet[2922]: E0702 07:09:26.773575    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.773670 kubelet[2922]: W0702 07:09:26.773659    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.773844 kubelet[2922]: E0702 07:09:26.773828    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.774145 kubelet[2922]: E0702 07:09:26.774132    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.774238 kubelet[2922]: W0702 07:09:26.774226    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.774388 kubelet[2922]: E0702 07:09:26.774376    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.774590 kubelet[2922]: E0702 07:09:26.774580    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.774663 kubelet[2922]: W0702 07:09:26.774652    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.774735 kubelet[2922]: E0702 07:09:26.774724    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.805028 containerd[1481]: time="2024-07-02T07:09:26.804976567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d766dcf56-nrd4z,Uid:5d459649-f716-4671-935a-9a514459d667,Namespace:calico-system,Attempt:0,} returns sandbox id \"1973007232f15e00dd65d7bfdecc840d71904a3621fa1968dd94f669c0ed8f77\""
Jul  2 07:09:26.807638 containerd[1481]: time="2024-07-02T07:09:26.807593850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\""
Jul  2 07:09:26.864740 kubelet[2922]: E0702 07:09:26.864696    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.864740 kubelet[2922]: W0702 07:09:26.864721    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.864740 kubelet[2922]: E0702 07:09:26.864749    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.889000 audit[3424]: NETFILTER_CFG table=filter:96 family=2 entries=16 op=nft_register_rule pid=3424 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul  2 07:09:26.889000 audit[3424]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fff270d5340 a2=0 a3=7fff270d532c items=0 ppid=3058 pid=3424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:26.889000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul  2 07:09:26.890000 audit[3424]: NETFILTER_CFG table=nat:97 family=2 entries=12 op=nft_register_rule pid=3424 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul  2 07:09:26.890000 audit[3424]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff270d5340 a2=0 a3=0 items=0 ppid=3058 pid=3424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:26.890000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul  2 07:09:26.912879 kubelet[2922]: E0702 07:09:26.912803    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:26.913102 kubelet[2922]: W0702 07:09:26.912858    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:26.913102 kubelet[2922]: E0702 07:09:26.912950    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:26.934467 containerd[1481]: time="2024-07-02T07:09:26.934233131Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul  2 07:09:26.934467 containerd[1481]: time="2024-07-02T07:09:26.934292231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul  2 07:09:26.934467 containerd[1481]: time="2024-07-02T07:09:26.934312130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul  2 07:09:26.934467 containerd[1481]: time="2024-07-02T07:09:26.934325830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul  2 07:09:26.955054 systemd[1]: Started cri-containerd-d6eeb102a5597ef38bfec7c4bb08344b5dd23eb471d528f693b08b2700bac57e.scope - libcontainer container d6eeb102a5597ef38bfec7c4bb08344b5dd23eb471d528f693b08b2700bac57e.
Jul  2 07:09:26.967000 audit: BPF prog-id=149 op=LOAD
Jul  2 07:09:26.968000 audit: BPF prog-id=150 op=LOAD
Jul  2 07:09:26.968000 audit[3445]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=3435 pid=3445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:26.968000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436656562313032613535393765663338626665633763346262303833
Jul  2 07:09:26.968000 audit: BPF prog-id=151 op=LOAD
Jul  2 07:09:26.968000 audit[3445]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=3435 pid=3445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:26.968000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436656562313032613535393765663338626665633763346262303833
Jul  2 07:09:26.968000 audit: BPF prog-id=151 op=UNLOAD
Jul  2 07:09:26.968000 audit: BPF prog-id=150 op=UNLOAD
Jul  2 07:09:26.968000 audit: BPF prog-id=152 op=LOAD
Jul  2 07:09:26.968000 audit[3445]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=3435 pid=3445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:26.968000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436656562313032613535393765663338626665633763346262303833
Jul  2 07:09:26.988104 containerd[1481]: time="2024-07-02T07:09:26.988038283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-54hx8,Uid:f17e5129-b2c1-40ca-965a-c6047abe01c8,Namespace:calico-system,Attempt:0,} returns sandbox id \"d6eeb102a5597ef38bfec7c4bb08344b5dd23eb471d528f693b08b2700bac57e\""
Jul  2 07:09:28.104740 kubelet[2922]: E0702 07:09:28.104677    2922 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zhl5r" podUID="df40d4db-1fad-4103-96f4-e2848ac4f551"
Jul  2 07:09:30.105142 kubelet[2922]: E0702 07:09:30.105077    2922 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zhl5r" podUID="df40d4db-1fad-4103-96f4-e2848ac4f551"
Jul  2 07:09:31.657058 containerd[1481]: time="2024-07-02T07:09:31.656997634Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:09:31.660169 containerd[1481]: time="2024-07-02T07:09:31.660097816Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030"
Jul  2 07:09:31.705648 containerd[1481]: time="2024-07-02T07:09:31.705564055Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:09:31.766895 containerd[1481]: time="2024-07-02T07:09:31.766798704Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:09:31.773484 containerd[1481]: time="2024-07-02T07:09:31.773425365Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:09:31.774406 containerd[1481]: time="2024-07-02T07:09:31.774354560Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 4.966318813s"
Jul  2 07:09:31.774567 containerd[1481]: time="2024-07-02T07:09:31.774410160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\""
Jul  2 07:09:31.788703 containerd[1481]: time="2024-07-02T07:09:31.788657178Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\""
Jul  2 07:09:31.795489 containerd[1481]: time="2024-07-02T07:09:31.795451239Z" level=info msg="CreateContainer within sandbox \"1973007232f15e00dd65d7bfdecc840d71904a3621fa1968dd94f669c0ed8f77\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jul  2 07:09:32.103344 kubelet[2922]: E0702 07:09:32.103199    2922 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zhl5r" podUID="df40d4db-1fad-4103-96f4-e2848ac4f551"
Jul  2 07:09:32.120253 containerd[1481]: time="2024-07-02T07:09:32.120191291Z" level=info msg="CreateContainer within sandbox \"1973007232f15e00dd65d7bfdecc840d71904a3621fa1968dd94f669c0ed8f77\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"061074220961434263a727c624346fe64d90786a275c7dcc38c5f58f90bb2603\""
Jul  2 07:09:32.121217 containerd[1481]: time="2024-07-02T07:09:32.121168086Z" level=info msg="StartContainer for \"061074220961434263a727c624346fe64d90786a275c7dcc38c5f58f90bb2603\""
Jul  2 07:09:32.166145 systemd[1]: Started cri-containerd-061074220961434263a727c624346fe64d90786a275c7dcc38c5f58f90bb2603.scope - libcontainer container 061074220961434263a727c624346fe64d90786a275c7dcc38c5f58f90bb2603.
Jul  2 07:09:32.186465 kernel: kauditd_printk_skb: 44 callbacks suppressed
Jul  2 07:09:32.187396 kernel: audit: type=1334 audit(1719904172.180:512): prog-id=153 op=LOAD
Jul  2 07:09:32.180000 audit: BPF prog-id=153 op=LOAD
Jul  2 07:09:32.188000 audit: BPF prog-id=154 op=LOAD
Jul  2 07:09:32.193619 kernel: audit: type=1334 audit(1719904172.188:513): prog-id=154 op=LOAD
Jul  2 07:09:32.220960 kernel: audit: type=1300 audit(1719904172.188:513): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=3365 pid=3480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:32.188000 audit[3480]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=3365 pid=3480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:32.188000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036313037343232303936313433343236336137323763363234333436
Jul  2 07:09:32.241946 kernel: audit: type=1327 audit(1719904172.188:513): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036313037343232303936313433343236336137323763363234333436
Jul  2 07:09:32.188000 audit: BPF prog-id=155 op=LOAD
Jul  2 07:09:32.249890 kernel: audit: type=1334 audit(1719904172.188:514): prog-id=155 op=LOAD
Jul  2 07:09:32.188000 audit[3480]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=3365 pid=3480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:32.275906 kernel: audit: type=1300 audit(1719904172.188:514): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=3365 pid=3480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:32.188000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036313037343232303936313433343236336137323763363234333436
Jul  2 07:09:32.300030 kernel: audit: type=1327 audit(1719904172.188:514): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036313037343232303936313433343236336137323763363234333436
Jul  2 07:09:32.188000 audit: BPF prog-id=155 op=UNLOAD
Jul  2 07:09:32.310088 kernel: audit: type=1334 audit(1719904172.188:515): prog-id=155 op=UNLOAD
Jul  2 07:09:32.310320 kernel: audit: type=1334 audit(1719904172.188:516): prog-id=154 op=UNLOAD
Jul  2 07:09:32.188000 audit: BPF prog-id=154 op=UNLOAD
Jul  2 07:09:32.188000 audit: BPF prog-id=156 op=LOAD
Jul  2 07:09:32.321494 kernel: audit: type=1334 audit(1719904172.188:517): prog-id=156 op=LOAD
Jul  2 07:09:32.188000 audit[3480]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=3365 pid=3480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:32.188000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036313037343232303936313433343236336137323763363234333436
Jul  2 07:09:32.363122 containerd[1481]: time="2024-07-02T07:09:32.362967331Z" level=info msg="StartContainer for \"061074220961434263a727c624346fe64d90786a275c7dcc38c5f58f90bb2603\" returns successfully"
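The containerd lines from "RunPodSandbox ... returns sandbox id" through "StartContainer ... returns successfully" are containerd servicing the kubelet's CRI gRPC calls for the calico-typha pod: create sandbox, pull image, create container, start container. A rough sketch of that call sequence is below; the socket path, the k8s.io/cri-api client usage, and the heavily trimmed sandbox/container configs are assumptions for illustration, not the kubelet's actual code.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Assumed CRI endpoint for containerd on this host.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	img := runtimeapi.NewImageServiceClient(conn)

	// 1. RunPodSandbox: mirrors the calico-typha sandbox creation logged above.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "calico-typha-6d766dcf56-nrd4z",
			Uid:       "5d459649-f716-4671-935a-9a514459d667",
			Namespace: "calico-system",
			Attempt:   0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		panic(err)
	}

	// 2. PullImage: the typha image pull that takes roughly five seconds in the log.
	image := &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/calico/typha:v3.28.0"}
	if _, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{Image: image}); err != nil {
		panic(err)
	}

	// 3. CreateContainer and 4. StartContainer inside that sandbox.
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "calico-typha"},
			Image:    image,
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		panic(err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		panic(err)
	}
	fmt.Println("started container", ctr.ContainerId, "in sandbox", sb.PodSandboxId)
}

The audit BPF LOAD/UNLOAD and NETFILTER_CFG records interleaved with these steps are side effects of runc setting up the container (seccomp/cgroup programs) and of kube-proxy restoring iptables rules; they are not errors.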
Jul  2 07:09:33.298588 kubelet[2922]: E0702 07:09:33.298536    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:33.298588 kubelet[2922]: W0702 07:09:33.298575    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:33.299324 kubelet[2922]: E0702 07:09:33.298603    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:33.299324 kubelet[2922]: E0702 07:09:33.298883    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:33.299324 kubelet[2922]: W0702 07:09:33.298897    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:33.299324 kubelet[2922]: E0702 07:09:33.298914    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul  2 07:09:33.299324 kubelet[2922]: E0702 07:09:33.299119    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
[... the same three FlexVolume messages (driver-call.go:149, plugins.go:730, driver-call.go:262) repeat back-to-back through 07:09:33.317318, differing only in their timestamps ...]
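The burst above is the kubelet's FlexVolume prober scanning /opt/libexec/kubernetes/kubelet-plugins/volume/exec/: it finds a nodeagent~uds driver directory but no runnable uds binary, so the `init` call produces no output and the JSON decode fails. A minimal Go sketch of that failure mode (not the kubelet's actual code; only the driver path is taken from the log):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus mirrors the JSON a FlexVolume driver is expected to print for
// "init", e.g. {"status":"Success","capabilities":{"attach":false}}.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	// Driver path copied from the log; on this node the binary does not exist,
	// so the call fails and out stays empty.
	driver := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

	out, err := exec.Command(driver, "init").Output()
	if err != nil {
		fmt.Println("driver call failed:", err)
	}

	var st driverStatus
	// Unmarshalling the empty output is what yields "unexpected end of JSON input".
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("failed to unmarshal init output:", err)
	}
}
```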
Jul  2 07:09:34.107545 kubelet[2922]: E0702 07:09:34.107480    2922 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zhl5r" podUID="df40d4db-1fad-4103-96f4-e2848ac4f551"
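The "cni plugin not initialized" errors for csi-node-driver-zhl5r mean the container runtime has not yet found any CNI network configuration; Calico's install-cni container, started later in this log, is what eventually writes one. A rough Go sketch of that check, assuming the conventional /etc/cni/net.d location (not a path quoted in this log):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Conventional CNI config directory; the runtime reports NetworkReady=false
	// until a .conf/.conflist file shows up here (Calico's install-cni writes one).
	confDir := "/etc/cni/net.d"

	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Println("cannot read", confDir+":", err)
		return
	}

	var confs []string
	for _, e := range entries {
		name := e.Name()
		if strings.HasSuffix(name, ".conf") || strings.HasSuffix(name, ".conflist") || strings.HasSuffix(name, ".json") {
			confs = append(confs, filepath.Join(confDir, name))
		}
	}
	if len(confs) == 0 {
		fmt.Println("no CNI network config found; the runtime stays NetworkPluginNotReady")
		return
	}
	fmt.Println("CNI configs:", confs)
}
```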
Jul  2 07:09:34.256375 kubelet[2922]: I0702 07:09:34.256341    2922 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul  2 07:09:34.311252 kubelet[2922]: E0702 07:09:34.311215    2922 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul  2 07:09:34.311252 kubelet[2922]: W0702 07:09:34.311242    2922 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul  2 07:09:34.311954 kubelet[2922]: E0702 07:09:34.311269    2922 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the same three FlexVolume messages repeat back-to-back through 07:09:34.323438, differing only in their timestamps ...]
Jul  2 07:09:34.656215 containerd[1481]: time="2024-07-02T07:09:34.656146698Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:09:34.659946 containerd[1481]: time="2024-07-02T07:09:34.659847778Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568"
Jul  2 07:09:34.720116 containerd[1481]: time="2024-07-02T07:09:34.720052857Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:09:34.769577 containerd[1481]: time="2024-07-02T07:09:34.769524593Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:09:34.814092 containerd[1481]: time="2024-07-02T07:09:34.814031856Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:09:34.814920 containerd[1481]: time="2024-07-02T07:09:34.814844852Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 3.026124774s"
Jul  2 07:09:34.815071 containerd[1481]: time="2024-07-02T07:09:34.814927251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\""
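The pull summary above shows the pod2daemon-flexvol image fetched in about 3.03 s, with roughly 5.1 MB read during the pull and an image size of roughly 6.6 MB. For reference, a hedged sketch of issuing the same pull through containerd's Go client in the k8s.io namespace; the socket path and client module are standard containerd conventions, not values taken from this log:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the default containerd socket (assumed path).
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images on this host live in the "k8s.io" namespace, as the
	// runc --root path in the audit records below suggests.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name(), "target", img.Target().Digest)
}
```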
Jul  2 07:09:34.818425 containerd[1481]: time="2024-07-02T07:09:34.818252533Z" level=info msg="CreateContainer within sandbox \"d6eeb102a5597ef38bfec7c4bb08344b5dd23eb471d528f693b08b2700bac57e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jul  2 07:09:34.968561 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount169197855.mount: Deactivated successfully.
Jul  2 07:09:35.116151 containerd[1481]: time="2024-07-02T07:09:35.116087761Z" level=info msg="CreateContainer within sandbox \"d6eeb102a5597ef38bfec7c4bb08344b5dd23eb471d528f693b08b2700bac57e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a6982fb849a2e231f86d7cdb6785439c9671e9342551a531f3c523de5b7fb8cf\""
Jul  2 07:09:35.117172 containerd[1481]: time="2024-07-02T07:09:35.117132055Z" level=info msg="StartContainer for \"a6982fb849a2e231f86d7cdb6785439c9671e9342551a531f3c523de5b7fb8cf\""
Jul  2 07:09:35.169659 systemd[1]: Started cri-containerd-a6982fb849a2e231f86d7cdb6785439c9671e9342551a531f3c523de5b7fb8cf.scope - libcontainer container a6982fb849a2e231f86d7cdb6785439c9671e9342551a531f3c523de5b7fb8cf.
Jul  2 07:09:35.210000 audit: BPF prog-id=157 op=LOAD
Jul  2 07:09:35.210000 audit[3590]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=3435 pid=3590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:35.210000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136393832666238343961326532333166383664376364623637383534
Jul  2 07:09:35.210000 audit: BPF prog-id=158 op=LOAD
Jul  2 07:09:35.210000 audit[3590]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=3435 pid=3590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:35.210000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136393832666238343961326532333166383664376364623637383534
Jul  2 07:09:35.210000 audit: BPF prog-id=158 op=UNLOAD
Jul  2 07:09:35.211000 audit: BPF prog-id=157 op=UNLOAD
Jul  2 07:09:35.211000 audit: BPF prog-id=159 op=LOAD
Jul  2 07:09:35.211000 audit[3590]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=3435 pid=3590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:35.211000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136393832666238343961326532333166383664376364623637383534
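The audit PROCTITLE records above (and the iptables-restore ones further down) carry the process command line as hex, with NUL bytes separating arguments; decoded, the value above is the runc invocation that started the flexvol-driver container. A small standalone Go helper for that decoding (not part of any tool shown in this log):

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// decodeProctitle turns an audit PROCTITLE hex string back into a command
// line; argv elements are NUL-separated in the raw value.
func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	return strings.ReplaceAll(string(raw), "\x00", " "), nil
}

func main() {
	// First few fields of the PROCTITLE value logged above (truncated).
	sample := "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
	cmd, err := decodeProctitle(sample)
	if err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Println(cmd) // runc --root /run/containerd/runc/k8s.io
}
```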
Jul  2 07:09:35.241009 containerd[1481]: time="2024-07-02T07:09:35.240800712Z" level=info msg="StartContainer for \"a6982fb849a2e231f86d7cdb6785439c9671e9342551a531f3c523de5b7fb8cf\" returns successfully"
Jul  2 07:09:35.252833 systemd[1]: cri-containerd-a6982fb849a2e231f86d7cdb6785439c9671e9342551a531f3c523de5b7fb8cf.scope: Deactivated successfully.
Jul  2 07:09:35.254000 audit: BPF prog-id=159 op=UNLOAD
Jul  2 07:09:35.292906 kubelet[2922]: I0702 07:09:35.292820    2922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6d766dcf56-nrd4z" podStartSLOduration=4.323821345 podStartE2EDuration="9.292785542s" podCreationTimestamp="2024-07-02 07:09:26 +0000 UTC" firstStartedPulling="2024-07-02 07:09:26.806654956 +0000 UTC m=+24.799724235" lastFinishedPulling="2024-07-02 07:09:31.775619153 +0000 UTC m=+29.768688432" observedRunningTime="2024-07-02 07:09:33.268893392 +0000 UTC m=+31.261962771" watchObservedRunningTime="2024-07-02 07:09:35.292785542 +0000 UTC m=+33.285854821"
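The startup-latency line above records two figures for calico-typha-6d766dcf56-nrd4z: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that same span minus the image-pull window (lastFinishedPulling minus firstStartedPulling), as the arithmetic below reproduces from the timestamps in the message. The parse layout is Go's default time.Time formatting, assumed to match the kubelet's output here:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	// Timestamps copied from the log line above.
	created := parse("2024-07-02 07:09:26 +0000 UTC")
	firstPull := parse("2024-07-02 07:09:26.806654956 +0000 UTC")
	lastPull := parse("2024-07-02 07:09:31.775619153 +0000 UTC")
	running := parse("2024-07-02 07:09:35.292785542 +0000 UTC")

	e2e := running.Sub(created)          // 9.292785542s = podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 4.323821345s = podStartSLOduration (pull time excluded)

	fmt.Println("podStartE2EDuration:", e2e)
	fmt.Println("podStartSLOduration:", slo)
}
```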
Jul  2 07:09:35.961914 systemd[1]: run-containerd-runc-k8s.io-a6982fb849a2e231f86d7cdb6785439c9671e9342551a531f3c523de5b7fb8cf-runc.baDTgl.mount: Deactivated successfully.
Jul  2 07:09:35.962031 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a6982fb849a2e231f86d7cdb6785439c9671e9342551a531f3c523de5b7fb8cf-rootfs.mount: Deactivated successfully.
Jul  2 07:09:36.103677 kubelet[2922]: E0702 07:09:36.103631    2922 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zhl5r" podUID="df40d4db-1fad-4103-96f4-e2848ac4f551"
Jul  2 07:09:37.885909 containerd[1481]: time="2024-07-02T07:09:37.885818720Z" level=info msg="shim disconnected" id=a6982fb849a2e231f86d7cdb6785439c9671e9342551a531f3c523de5b7fb8cf namespace=k8s.io
Jul  2 07:09:37.886421 containerd[1481]: time="2024-07-02T07:09:37.886384917Z" level=warning msg="cleaning up after shim disconnected" id=a6982fb849a2e231f86d7cdb6785439c9671e9342551a531f3c523de5b7fb8cf namespace=k8s.io
Jul  2 07:09:37.886421 containerd[1481]: time="2024-07-02T07:09:37.886415517Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul  2 07:09:38.103651 kubelet[2922]: E0702 07:09:38.103584    2922 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zhl5r" podUID="df40d4db-1fad-4103-96f4-e2848ac4f551"
Jul  2 07:09:38.269330 containerd[1481]: time="2024-07-02T07:09:38.268838462Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\""
Jul  2 07:09:40.104335 kubelet[2922]: E0702 07:09:40.104284    2922 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zhl5r" podUID="df40d4db-1fad-4103-96f4-e2848ac4f551"
Jul  2 07:09:42.103459 kubelet[2922]: E0702 07:09:42.103314    2922 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zhl5r" podUID="df40d4db-1fad-4103-96f4-e2848ac4f551"
Jul  2 07:09:42.642077 kubelet[2922]: I0702 07:09:42.641271    2922 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul  2 07:09:42.714000 audit[3651]: NETFILTER_CFG table=filter:98 family=2 entries=15 op=nft_register_rule pid=3651 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul  2 07:09:42.726911 kernel: kauditd_printk_skb: 14 callbacks suppressed
Jul  2 07:09:42.727083 kernel: audit: type=1325 audit(1719904182.714:524): table=filter:98 family=2 entries=15 op=nft_register_rule pid=3651 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul  2 07:09:42.714000 audit[3651]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffe7bac8740 a2=0 a3=7ffe7bac872c items=0 ppid=3058 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:42.750131 kernel: audit: type=1300 audit(1719904182.714:524): arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffe7bac8740 a2=0 a3=7ffe7bac872c items=0 ppid=3058 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:42.750264 kernel: audit: type=1327 audit(1719904182.714:524): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul  2 07:09:42.714000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul  2 07:09:42.741000 audit[3651]: NETFILTER_CFG table=nat:99 family=2 entries=19 op=nft_register_chain pid=3651 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul  2 07:09:42.750894 kernel: audit: type=1325 audit(1719904182.741:525): table=nat:99 family=2 entries=19 op=nft_register_chain pid=3651 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul  2 07:09:42.741000 audit[3651]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffe7bac8740 a2=0 a3=7ffe7bac872c items=0 ppid=3058 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:42.741000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul  2 07:09:42.776344 kernel: audit: type=1300 audit(1719904182.741:525): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffe7bac8740 a2=0 a3=7ffe7bac872c items=0 ppid=3058 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:42.776525 kernel: audit: type=1327 audit(1719904182.741:525): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul  2 07:09:42.832538 containerd[1481]: time="2024-07-02T07:09:42.832489335Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:09:42.835596 containerd[1481]: time="2024-07-02T07:09:42.835530922Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850"
Jul  2 07:09:42.842046 containerd[1481]: time="2024-07-02T07:09:42.841992294Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:09:42.849408 containerd[1481]: time="2024-07-02T07:09:42.849351062Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:09:42.852747 containerd[1481]: time="2024-07-02T07:09:42.852689548Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:09:42.853614 containerd[1481]: time="2024-07-02T07:09:42.853566644Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 4.584655482s"
Jul  2 07:09:42.853794 containerd[1481]: time="2024-07-02T07:09:42.853766643Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\""
Jul  2 07:09:42.858013 containerd[1481]: time="2024-07-02T07:09:42.857960925Z" level=info msg="CreateContainer within sandbox \"d6eeb102a5597ef38bfec7c4bb08344b5dd23eb471d528f693b08b2700bac57e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jul  2 07:09:42.895544 containerd[1481]: time="2024-07-02T07:09:42.894785866Z" level=info msg="CreateContainer within sandbox \"d6eeb102a5597ef38bfec7c4bb08344b5dd23eb471d528f693b08b2700bac57e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"42581b8529a43dffaa309cef32065490c1ecb2c348aa2da9cf0baee78face8e6\""
Jul  2 07:09:42.897297 containerd[1481]: time="2024-07-02T07:09:42.896148860Z" level=info msg="StartContainer for \"42581b8529a43dffaa309cef32065490c1ecb2c348aa2da9cf0baee78face8e6\""
Jul  2 07:09:42.934125 systemd[1]: run-containerd-runc-k8s.io-42581b8529a43dffaa309cef32065490c1ecb2c348aa2da9cf0baee78face8e6-runc.pngz6M.mount: Deactivated successfully.
Jul  2 07:09:42.943075 systemd[1]: Started cri-containerd-42581b8529a43dffaa309cef32065490c1ecb2c348aa2da9cf0baee78face8e6.scope - libcontainer container 42581b8529a43dffaa309cef32065490c1ecb2c348aa2da9cf0baee78face8e6.
Jul  2 07:09:42.957000 audit: BPF prog-id=160 op=LOAD
Jul  2 07:09:42.957000 audit[3664]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=3435 pid=3664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:42.975142 kernel: audit: type=1334 audit(1719904182.957:526): prog-id=160 op=LOAD
Jul  2 07:09:42.975296 kernel: audit: type=1300 audit(1719904182.957:526): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=3435 pid=3664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:42.980906 kernel: audit: type=1327 audit(1719904182.957:526): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432353831623835323961343364666661613330396365663332303635
Jul  2 07:09:42.957000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432353831623835323961343364666661613330396365663332303635
Jul  2 07:09:42.994889 kernel: audit: type=1334 audit(1719904182.957:527): prog-id=161 op=LOAD
Jul  2 07:09:42.957000 audit: BPF prog-id=161 op=LOAD
Jul  2 07:09:42.957000 audit[3664]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=3435 pid=3664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:42.957000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432353831623835323961343364666661613330396365663332303635
Jul  2 07:09:42.959000 audit: BPF prog-id=161 op=UNLOAD
Jul  2 07:09:42.959000 audit: BPF prog-id=160 op=UNLOAD
Jul  2 07:09:42.959000 audit: BPF prog-id=162 op=LOAD
Jul  2 07:09:42.959000 audit[3664]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=3435 pid=3664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:09:42.959000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432353831623835323961343364666661613330396365663332303635
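The BPF prog-id LOAD/UNLOAD records around the container start come from runc calling bpf(2) (syscall=321 is bpf on x86_64, and the SYSCALL records show comm="runc") while it sets up the container's cgroup, most likely attaching the device-controller program. As a rough log-reading aid only (the function and sample data below are illustrative, not part of any tool), pairing the LOAD and UNLOAD records shows which program IDs are still attached at any point in the excerpt:

```python
import re

# Track which eBPF program IDs from "BPF prog-id=N op=LOAD/UNLOAD" audit
# records remain loaded at the end of a journal excerpt. This only reads
# log text; it does not query the kernel.
BPF_RE = re.compile(r"BPF prog-id=(\d+) op=(LOAD|UNLOAD)")

def outstanding_prog_ids(lines):
    loaded = set()
    for line in lines:
        m = BPF_RE.search(line)
        if not m:
            continue
        prog_id, op = int(m.group(1)), m.group(2)
        if op == "LOAD":
            loaded.add(prog_id)
        else:
            loaded.discard(prog_id)
    return sorted(loaded)

sample = [
    "audit: BPF prog-id=160 op=LOAD",
    "audit: BPF prog-id=161 op=LOAD",
    "audit: BPF prog-id=161 op=UNLOAD",
    "audit: BPF prog-id=160 op=UNLOAD",
    "audit: BPF prog-id=162 op=LOAD",
]
print(outstanding_prog_ids(sample))  # -> [162]
```

In this excerpt prog-id=162 stays loaded until the install-cni container's scope is deactivated, at which point the matching "BPF prog-id=162 op=UNLOAD" record appears.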
Jul  2 07:09:43.010106 containerd[1481]: time="2024-07-02T07:09:43.010048668Z" level=info msg="StartContainer for \"42581b8529a43dffaa309cef32065490c1ecb2c348aa2da9cf0baee78face8e6\" returns successfully"
Jul  2 07:09:44.103558 kubelet[2922]: E0702 07:09:44.103498    2922 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zhl5r" podUID="df40d4db-1fad-4103-96f4-e2848ac4f551"
Jul  2 07:09:44.401339 systemd[1]: cri-containerd-42581b8529a43dffaa309cef32065490c1ecb2c348aa2da9cf0baee78face8e6.scope: Deactivated successfully.
Jul  2 07:09:44.403000 audit: BPF prog-id=162 op=UNLOAD
Jul  2 07:09:44.436824 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-42581b8529a43dffaa309cef32065490c1ecb2c348aa2da9cf0baee78face8e6-rootfs.mount: Deactivated successfully.
Jul  2 07:09:44.487904 kubelet[2922]: I0702 07:09:44.487655    2922 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jul  2 07:09:45.313311 kubelet[2922]: I0702 07:09:44.515240    2922 topology_manager.go:215] "Topology Admit Handler" podUID="8de6fb26-aba2-46d0-b934-35c3682baf1f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-87tzr"
Jul  2 07:09:45.313311 kubelet[2922]: I0702 07:09:44.528313    2922 topology_manager.go:215] "Topology Admit Handler" podUID="0b47e120-86a6-4239-83a3-6d30cbbde07c" podNamespace="calico-system" podName="calico-kube-controllers-7c9974ff94-rnszf"
Jul  2 07:09:45.313311 kubelet[2922]: I0702 07:09:44.531480    2922 topology_manager.go:215] "Topology Admit Handler" podUID="a1d12bc4-6d1b-42f2-bbef-a982b18d5205" podNamespace="kube-system" podName="coredns-7db6d8ff4d-cql52"
Jul  2 07:09:45.313311 kubelet[2922]: I0702 07:09:44.700244    2922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7fkq\" (UniqueName: \"kubernetes.io/projected/8de6fb26-aba2-46d0-b934-35c3682baf1f-kube-api-access-k7fkq\") pod \"coredns-7db6d8ff4d-87tzr\" (UID: \"8de6fb26-aba2-46d0-b934-35c3682baf1f\") " pod="kube-system/coredns-7db6d8ff4d-87tzr"
Jul  2 07:09:45.313311 kubelet[2922]: I0702 07:09:44.700294    2922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b47e120-86a6-4239-83a3-6d30cbbde07c-tigera-ca-bundle\") pod \"calico-kube-controllers-7c9974ff94-rnszf\" (UID: \"0b47e120-86a6-4239-83a3-6d30cbbde07c\") " pod="calico-system/calico-kube-controllers-7c9974ff94-rnszf"
Jul  2 07:09:45.313311 kubelet[2922]: I0702 07:09:44.700324    2922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsm5g\" (UniqueName: \"kubernetes.io/projected/0b47e120-86a6-4239-83a3-6d30cbbde07c-kube-api-access-lsm5g\") pod \"calico-kube-controllers-7c9974ff94-rnszf\" (UID: \"0b47e120-86a6-4239-83a3-6d30cbbde07c\") " pod="calico-system/calico-kube-controllers-7c9974ff94-rnszf"
Jul  2 07:09:44.524269 systemd[1]: Created slice kubepods-burstable-pod8de6fb26_aba2_46d0_b934_35c3682baf1f.slice - libcontainer container kubepods-burstable-pod8de6fb26_aba2_46d0_b934_35c3682baf1f.slice.
Jul  2 07:09:45.314116 kubelet[2922]: I0702 07:09:44.700345    2922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a1d12bc4-6d1b-42f2-bbef-a982b18d5205-config-volume\") pod \"coredns-7db6d8ff4d-cql52\" (UID: \"a1d12bc4-6d1b-42f2-bbef-a982b18d5205\") " pod="kube-system/coredns-7db6d8ff4d-cql52"
Jul  2 07:09:45.314116 kubelet[2922]: I0702 07:09:44.700404    2922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8de6fb26-aba2-46d0-b934-35c3682baf1f-config-volume\") pod \"coredns-7db6d8ff4d-87tzr\" (UID: \"8de6fb26-aba2-46d0-b934-35c3682baf1f\") " pod="kube-system/coredns-7db6d8ff4d-87tzr"
Jul  2 07:09:45.314116 kubelet[2922]: I0702 07:09:44.700426    2922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbw5k\" (UniqueName: \"kubernetes.io/projected/a1d12bc4-6d1b-42f2-bbef-a982b18d5205-kube-api-access-dbw5k\") pod \"coredns-7db6d8ff4d-cql52\" (UID: \"a1d12bc4-6d1b-42f2-bbef-a982b18d5205\") " pod="kube-system/coredns-7db6d8ff4d-cql52"
Jul  2 07:09:44.535567 systemd[1]: Created slice kubepods-besteffort-pod0b47e120_86a6_4239_83a3_6d30cbbde07c.slice - libcontainer container kubepods-besteffort-pod0b47e120_86a6_4239_83a3_6d30cbbde07c.slice.
Jul  2 07:09:44.543245 systemd[1]: Created slice kubepods-burstable-poda1d12bc4_6d1b_42f2_bbef_a982b18d5205.slice - libcontainer container kubepods-burstable-poda1d12bc4_6d1b_42f2_bbef_a982b18d5205.slice.
Jul  2 07:09:46.018836 containerd[1481]: time="2024-07-02T07:09:46.018778546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c9974ff94-rnszf,Uid:0b47e120-86a6-4239-83a3-6d30cbbde07c,Namespace:calico-system,Attempt:0,}"
Jul  2 07:09:46.019625 containerd[1481]: time="2024-07-02T07:09:46.019421144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cql52,Uid:a1d12bc4-6d1b-42f2-bbef-a982b18d5205,Namespace:kube-system,Attempt:0,}"
Jul  2 07:09:46.019625 containerd[1481]: time="2024-07-02T07:09:46.019482743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-87tzr,Uid:8de6fb26-aba2-46d0-b934-35c3682baf1f,Namespace:kube-system,Attempt:0,}"
Jul  2 07:09:46.110174 systemd[1]: Created slice kubepods-besteffort-poddf40d4db_1fad_4103_96f4_e2848ac4f551.slice - libcontainer container kubepods-besteffort-poddf40d4db_1fad_4103_96f4_e2848ac4f551.slice.
Jul  2 07:09:46.116475 containerd[1481]: time="2024-07-02T07:09:46.116421668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zhl5r,Uid:df40d4db-1fad-4103-96f4-e2848ac4f551,Namespace:calico-system,Attempt:0,}"
Jul  2 07:09:51.662656 containerd[1481]: time="2024-07-02T07:09:51.662568425Z" level=info msg="shim disconnected" id=42581b8529a43dffaa309cef32065490c1ecb2c348aa2da9cf0baee78face8e6 namespace=k8s.io
Jul  2 07:09:51.662656 containerd[1481]: time="2024-07-02T07:09:51.662644124Z" level=warning msg="cleaning up after shim disconnected" id=42581b8529a43dffaa309cef32065490c1ecb2c348aa2da9cf0baee78face8e6 namespace=k8s.io
Jul  2 07:09:51.662656 containerd[1481]: time="2024-07-02T07:09:51.662658124Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul  2 07:09:52.302917 containerd[1481]: time="2024-07-02T07:09:52.302517717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\""
Jul  2 07:09:53.030053 containerd[1481]: time="2024-07-02T07:09:53.029969664Z" level=error msg="Failed to destroy network for sandbox \"9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul  2 07:09:53.030724 containerd[1481]: time="2024-07-02T07:09:53.030677861Z" level=error msg="encountered an error cleaning up failed sandbox \"9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul  2 07:09:53.030845 containerd[1481]: time="2024-07-02T07:09:53.030755561Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-87tzr,Uid:8de6fb26-aba2-46d0-b934-35c3682baf1f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul  2 07:09:53.033282 kubelet[2922]: E0702 07:09:53.031075    2922 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul  2 07:09:53.033282 kubelet[2922]: E0702 07:09:53.031157    2922 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-87tzr"
Jul  2 07:09:53.033282 kubelet[2922]: E0702 07:09:53.031187    2922 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-87tzr"
Jul  2 07:09:53.034828 kubelet[2922]: E0702 07:09:53.031250    2922 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-87tzr_kube-system(8de6fb26-aba2-46d0-b934-35c3682baf1f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-87tzr_kube-system(8de6fb26-aba2-46d0-b934-35c3682baf1f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-87tzr" podUID="8de6fb26-aba2-46d0-b934-35c3682baf1f"
Jul  2 07:09:53.083740 containerd[1481]: time="2024-07-02T07:09:53.083651995Z" level=error msg="Failed to destroy network for sandbox \"76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul  2 07:09:53.084295 containerd[1481]: time="2024-07-02T07:09:53.084248593Z" level=error msg="encountered an error cleaning up failed sandbox \"76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul  2 07:09:53.084424 containerd[1481]: time="2024-07-02T07:09:53.084330093Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zhl5r,Uid:df40d4db-1fad-4103-96f4-e2848ac4f551,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul  2 07:09:53.084648 kubelet[2922]: E0702 07:09:53.084601    2922 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul  2 07:09:53.084750 kubelet[2922]: E0702 07:09:53.084663    2922 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zhl5r"
Jul  2 07:09:53.084750 kubelet[2922]: E0702 07:09:53.084692    2922 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zhl5r"
Jul  2 07:09:53.084840 kubelet[2922]: E0702 07:09:53.084751    2922 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zhl5r_calico-system(df40d4db-1fad-4103-96f4-e2848ac4f551)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zhl5r_calico-system(df40d4db-1fad-4103-96f4-e2848ac4f551)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zhl5r" podUID="df40d4db-1fad-4103-96f4-e2848ac4f551"
Jul  2 07:09:53.190244 containerd[1481]: time="2024-07-02T07:09:53.190147261Z" level=error msg="Failed to destroy network for sandbox \"96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul  2 07:09:53.190845 containerd[1481]: time="2024-07-02T07:09:53.190794859Z" level=error msg="encountered an error cleaning up failed sandbox \"96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul  2 07:09:53.191088 containerd[1481]: time="2024-07-02T07:09:53.191050058Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c9974ff94-rnszf,Uid:0b47e120-86a6-4239-83a3-6d30cbbde07c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul  2 07:09:53.191527 kubelet[2922]: E0702 07:09:53.191484    2922 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul  2 07:09:53.191632 kubelet[2922]: E0702 07:09:53.191553    2922 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c9974ff94-rnszf"
Jul  2 07:09:53.191632 kubelet[2922]: E0702 07:09:53.191579    2922 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c9974ff94-rnszf"
Jul  2 07:09:53.191730 kubelet[2922]: E0702 07:09:53.191636    2922 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7c9974ff94-rnszf_calico-system(0b47e120-86a6-4239-83a3-6d30cbbde07c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7c9974ff94-rnszf_calico-system(0b47e120-86a6-4239-83a3-6d30cbbde07c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c9974ff94-rnszf" podUID="0b47e120-86a6-4239-83a3-6d30cbbde07c"
Jul  2 07:09:53.236635 containerd[1481]: time="2024-07-02T07:09:53.236567515Z" level=error msg="Failed to destroy network for sandbox \"fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul  2 07:09:53.237024 containerd[1481]: time="2024-07-02T07:09:53.236982914Z" level=error msg="encountered an error cleaning up failed sandbox \"fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul  2 07:09:53.237146 containerd[1481]: time="2024-07-02T07:09:53.237057513Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cql52,Uid:a1d12bc4-6d1b-42f2-bbef-a982b18d5205,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul  2 07:09:53.237401 kubelet[2922]: E0702 07:09:53.237363    2922 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul  2 07:09:53.237507 kubelet[2922]: E0702 07:09:53.237439    2922 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-cql52"
Jul  2 07:09:53.237507 kubelet[2922]: E0702 07:09:53.237484    2922 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-cql52"
Jul  2 07:09:53.237896 kubelet[2922]: E0702 07:09:53.237548    2922 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-cql52_kube-system(a1d12bc4-6d1b-42f2-bbef-a982b18d5205)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-cql52_kube-system(a1d12bc4-6d1b-42f2-bbef-a982b18d5205)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-cql52" podUID="a1d12bc4-6d1b-42f2-bbef-a982b18d5205"
Jul  2 07:09:53.310161 kubelet[2922]: I0702 07:09:53.306302    2922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056"
Jul  2 07:09:53.310473 containerd[1481]: time="2024-07-02T07:09:53.309390086Z" level=info msg="StopPodSandbox for \"76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056\""
Jul  2 07:09:53.310473 containerd[1481]: time="2024-07-02T07:09:53.310368883Z" level=info msg="Ensure that sandbox 76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056 in task-service has been cleanup successfully"
Jul  2 07:09:53.315282 kubelet[2922]: I0702 07:09:53.313992    2922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b"
Jul  2 07:09:53.318353 containerd[1481]: time="2024-07-02T07:09:53.318293558Z" level=info msg="StopPodSandbox for \"fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b\""
Jul  2 07:09:53.319499 containerd[1481]: time="2024-07-02T07:09:53.319456555Z" level=info msg="Ensure that sandbox fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b in task-service has been cleanup successfully"
Jul  2 07:09:53.323765 kubelet[2922]: I0702 07:09:53.323735    2922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289"
Jul  2 07:09:53.326595 containerd[1481]: time="2024-07-02T07:09:53.326514333Z" level=info msg="StopPodSandbox for \"9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289\""
Jul  2 07:09:53.327694 containerd[1481]: time="2024-07-02T07:09:53.327659929Z" level=info msg="Ensure that sandbox 9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289 in task-service has been cleanup successfully"
Jul  2 07:09:53.329123 kubelet[2922]: I0702 07:09:53.328545    2922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7"
Jul  2 07:09:53.329954 containerd[1481]: time="2024-07-02T07:09:53.329632223Z" level=info msg="StopPodSandbox for \"96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7\""
Jul  2 07:09:53.330537 containerd[1481]: time="2024-07-02T07:09:53.330511420Z" level=info msg="Ensure that sandbox 96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7 in task-service has been cleanup successfully"
Jul  2 07:09:53.403125 containerd[1481]: time="2024-07-02T07:09:53.403055592Z" level=error msg="StopPodSandbox for \"76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056\" failed" error="failed to destroy network for sandbox \"76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul  2 07:09:53.403886 kubelet[2922]: E0702 07:09:53.403583    2922 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056"
Jul  2 07:09:53.403886 kubelet[2922]: E0702 07:09:53.403666    2922 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056"}
Jul  2 07:09:53.403886 kubelet[2922]: E0702 07:09:53.403757    2922 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"df40d4db-1fad-4103-96f4-e2848ac4f551\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jul  2 07:09:53.403886 kubelet[2922]: E0702 07:09:53.403789    2922 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"df40d4db-1fad-4103-96f4-e2848ac4f551\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zhl5r" podUID="df40d4db-1fad-4103-96f4-e2848ac4f551"
Jul  2 07:09:53.424526 containerd[1481]: time="2024-07-02T07:09:53.424438925Z" level=error msg="StopPodSandbox for \"fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b\" failed" error="failed to destroy network for sandbox \"fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul  2 07:09:53.424808 kubelet[2922]: E0702 07:09:53.424748    2922 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b"
Jul  2 07:09:53.425035 kubelet[2922]: E0702 07:09:53.424833    2922 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b"}
Jul  2 07:09:53.425035 kubelet[2922]: E0702 07:09:53.424968    2922 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a1d12bc4-6d1b-42f2-bbef-a982b18d5205\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jul  2 07:09:53.425199 kubelet[2922]: E0702 07:09:53.425019    2922 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a1d12bc4-6d1b-42f2-bbef-a982b18d5205\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-cql52" podUID="a1d12bc4-6d1b-42f2-bbef-a982b18d5205"
Jul  2 07:09:53.434237 containerd[1481]: time="2024-07-02T07:09:53.434165895Z" level=error msg="StopPodSandbox for \"9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289\" failed" error="failed to destroy network for sandbox \"9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul  2 07:09:53.435089 kubelet[2922]: E0702 07:09:53.434765    2922 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289"
Jul  2 07:09:53.435089 kubelet[2922]: E0702 07:09:53.434877    2922 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289"}
Jul  2 07:09:53.435089 kubelet[2922]: E0702 07:09:53.434924    2922 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8de6fb26-aba2-46d0-b934-35c3682baf1f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jul  2 07:09:53.435089 kubelet[2922]: E0702 07:09:53.435026    2922 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8de6fb26-aba2-46d0-b934-35c3682baf1f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-87tzr" podUID="8de6fb26-aba2-46d0-b934-35c3682baf1f"
Jul  2 07:09:53.448007 containerd[1481]: time="2024-07-02T07:09:53.447938851Z" level=error msg="StopPodSandbox for \"96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7\" failed" error="failed to destroy network for sandbox \"96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul  2 07:09:53.448287 kubelet[2922]: E0702 07:09:53.448238    2922 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7"
Jul  2 07:09:53.448398 kubelet[2922]: E0702 07:09:53.448302    2922 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7"}
Jul  2 07:09:53.448398 kubelet[2922]: E0702 07:09:53.448347    2922 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0b47e120-86a6-4239-83a3-6d30cbbde07c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jul  2 07:09:53.448398 kubelet[2922]: E0702 07:09:53.448376    2922 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0b47e120-86a6-4239-83a3-6d30cbbde07c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c9974ff94-rnszf" podUID="0b47e120-86a6-4239-83a3-6d30cbbde07c"
Jul  2 07:09:53.763189 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7-shm.mount: Deactivated successfully.
Jul  2 07:09:53.763315 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056-shm.mount: Deactivated successfully.
Jul  2 07:09:53.763403 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289-shm.mount: Deactivated successfully.
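Every RunPodSandbox and StopPodSandbox failure in the burst above carries the same root cause in its message: the Calico CNI plugin cannot stat /var/lib/calico/nodename, a file that calico-node only writes once it is running. The coredns, calico-kube-controllers and csi-node-driver pods therefore sit in kubelet's retry loop until the calico/node image pull further down completes and the node container starts. As a hypothetical triage helper (the function name and regex are mine, not kubelet output), the stuck workloads can be tallied straight from the journal text:

```python
import re
from collections import Counter

# Count kubelet "Error syncing pod" records that blame the missing
# /var/lib/calico/nodename file, grouped by the pod="..." field, to see
# which workloads are blocked waiting for calico-node.
POD_RE = re.compile(r'Error syncing pod.*?/var/lib/calico/nodename.*?pod="([^"]+)"')

def blocked_pods(journal_lines):
    counts = Counter()
    for line in journal_lines:
        m = POD_RE.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts
```

Run over this excerpt it would report kube-system/coredns-7db6d8ff4d-87tzr, kube-system/coredns-7db6d8ff4d-cql52, calico-system/calico-kube-controllers-7c9974ff94-rnszf and calico-system/csi-node-driver-zhl5r, each blocked for the same reason.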
Jul  2 07:09:59.044000 audit[2807]: AVC avc:  denied  { watch } for  pid=2807 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=5730574 scontext=system_u:system_r:container_t:s0:c853,c1005 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:09:59.049944 kernel: kauditd_printk_skb: 8 callbacks suppressed
Jul  2 07:09:59.050075 kernel: audit: type=1400 audit(1719904199.044:533): avc:  denied  { watch } for  pid=2807 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=5730574 scontext=system_u:system_r:container_t:s0:c853,c1005 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:09:59.044000 audit[2807]: AVC avc:  denied  { watch } for  pid=2807 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=5730580 scontext=system_u:system_r:container_t:s0:c853,c1005 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:09:59.044000 audit[2807]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c00113c870 a2=fc6 a3=0 items=0 ppid=2622 pid=2807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c853,c1005 key=(null)
Jul  2 07:09:59.084920 kernel: audit: type=1400 audit(1719904199.044:532): avc:  denied  { watch } for  pid=2807 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=5730580 scontext=system_u:system_r:container_t:s0:c853,c1005 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:09:59.085108 kernel: audit: type=1300 audit(1719904199.044:532): arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c00113c870 a2=fc6 a3=0 items=0 ppid=2622 pid=2807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c853,c1005 key=(null)
Jul  2 07:09:59.044000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269
Jul  2 07:09:59.098488 kernel: audit: type=1327 audit(1719904199.044:532): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269
Jul  2 07:09:59.044000 audit[2807]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c002484560 a2=fc6 a3=0 items=0 ppid=2622 pid=2807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c853,c1005 key=(null)
Jul  2 07:09:59.044000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269
Jul  2 07:09:59.134390 kernel: audit: type=1300 audit(1719904199.044:533): arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c002484560 a2=fc6 a3=0 items=0 ppid=2622 pid=2807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c853,c1005 key=(null)
Jul  2 07:09:59.134593 kernel: audit: type=1327 audit(1719904199.044:533): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269
Jul  2 07:09:59.234000 audit[2781]: AVC avc:  denied  { watch } for  pid=2781 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=5730580 scontext=system_u:system_r:container_t:s0:c108,c388 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:09:59.239000 audit[2781]: AVC avc:  denied  { watch } for  pid=2781 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=5730582 scontext=system_u:system_r:container_t:s0:c108,c388 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:09:59.273887 kernel: audit: type=1400 audit(1719904199.234:534): avc:  denied  { watch } for  pid=2781 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=5730580 scontext=system_u:system_r:container_t:s0:c108,c388 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:09:59.274070 kernel: audit: type=1400 audit(1719904199.239:535): avc:  denied  { watch } for  pid=2781 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=5730582 scontext=system_u:system_r:container_t:s0:c108,c388 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:09:59.239000 audit[2781]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=61 a1=c006184510 a2=fc6 a3=0 items=0 ppid=2624 pid=2781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c108,c388 key=(null)
Jul  2 07:09:59.290899 kernel: audit: type=1300 audit(1719904199.239:535): arch=c000003e syscall=254 success=no exit=-13 a0=61 a1=c006184510 a2=fc6 a3=0 items=0 ppid=2624 pid=2781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c108,c388 key=(null)
Jul  2 07:09:59.239000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3434002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265
Jul  2 07:09:59.304436 kernel: audit: type=1327 audit(1719904199.239:535): proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3434002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265
Jul  2 07:09:59.244000 audit[2781]: AVC avc:  denied  { watch } for  pid=2781 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=5730574 scontext=system_u:system_r:container_t:s0:c108,c388 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:09:59.244000 audit[2781]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=61 a1=c006435700 a2=fc6 a3=0 items=0 ppid=2624 pid=2781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c108,c388 key=(null)
Jul  2 07:09:59.244000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3434002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265
Jul  2 07:09:59.244000 audit[2781]: AVC avc:  denied  { watch } for  pid=2781 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=5730576 scontext=system_u:system_r:container_t:s0:c108,c388 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:09:59.244000 audit[2781]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=61 a1=c006184720 a2=fc6 a3=0 items=0 ppid=2624 pid=2781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c108,c388 key=(null)
Jul  2 07:09:59.244000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3434002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265
Jul  2 07:09:59.244000 audit[2781]: AVC avc:  denied  { watch } for  pid=2781 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=5730574 scontext=system_u:system_r:container_t:s0:c108,c388 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:09:59.244000 audit[2781]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=61 a1=c006435720 a2=fc6 a3=0 items=0 ppid=2624 pid=2781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c108,c388 key=(null)
Jul  2 07:09:59.244000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3434002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265
Jul  2 07:09:59.244000 audit[2781]: AVC avc:  denied  { watch } for  pid=2781 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=5730580 scontext=system_u:system_r:container_t:s0:c108,c388 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:09:59.244000 audit[2781]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=61 a1=c006184840 a2=fc6 a3=0 items=0 ppid=2624 pid=2781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c108,c388 key=(null)
Jul  2 07:09:59.244000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3434002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265
Jul  2 07:09:59.234000 audit[2781]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=60 a1=c00617a990 a2=fc6 a3=0 items=0 ppid=2624 pid=2781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c108,c388 key=(null)
Jul  2 07:09:59.234000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3434002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265
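These AVC records show kube-controller-manager and kube-apiserver, confined as container_t, being denied inotify watches on the host's /etc/kubernetes/pki certificates (etc_t), with permissive=0 and exit=-13 (EACCES), so the watch calls really did fail. The arch=c000003e field marks the records as x86_64, under which syscall 254 is inotify_add_watch and syscall 321 (in the earlier runc records) is bpf. A minimal lookup covering just the numbers seen in this excerpt:

```python
# Name the audit syscall numbers that appear in this excerpt.
# arch=c000003e is AUDIT_ARCH_X86_64, so x86_64 numbering applies;
# this map is deliberately partial, not a full syscall table.
X86_64_SYSCALLS = {
    254: "inotify_add_watch",  # the denied { watch } AVCs on /etc/kubernetes/pki/*
    321: "bpf",                # the runc BPF prog-id LOAD/UNLOAD records
}

for nr in (254, 321):
    print(nr, X86_64_SYSCALLS.get(nr, "unknown"))
```

Whether the denied watches matter operationally depends on how each component falls back to re-reading the certificates, which this log alone does not show.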
Jul  2 07:10:03.773296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4187719607.mount: Deactivated successfully.
Jul  2 07:10:04.063460 containerd[1481]: time="2024-07-02T07:10:04.063293540Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:10:04.109603 containerd[1481]: time="2024-07-02T07:10:04.109519841Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750"
Jul  2 07:10:04.112929 containerd[1481]: time="2024-07-02T07:10:04.112880534Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:10:04.159056 containerd[1481]: time="2024-07-02T07:10:04.158992835Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:10:04.206976 containerd[1481]: time="2024-07-02T07:10:04.206857132Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:10:04.214268 containerd[1481]: time="2024-07-02T07:10:04.214208017Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 11.9116326s"
Jul  2 07:10:04.214579 containerd[1481]: time="2024-07-02T07:10:04.214472816Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\""
Jul  2 07:10:04.233004 containerd[1481]: time="2024-07-02T07:10:04.232956677Z" level=info msg="CreateContainer within sandbox \"d6eeb102a5597ef38bfec7c4bb08344b5dd23eb471d528f693b08b2700bac57e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Jul  2 07:10:04.425278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3818575637.mount: Deactivated successfully.
Jul  2 07:10:04.569930 containerd[1481]: time="2024-07-02T07:10:04.569833056Z" level=info msg="CreateContainer within sandbox \"d6eeb102a5597ef38bfec7c4bb08344b5dd23eb471d528f693b08b2700bac57e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f4bd51971e086154ff5280855d676c5ad0804a2d58ad10b6d6c77ed523b47fdc\""
Jul  2 07:10:04.572315 containerd[1481]: time="2024-07-02T07:10:04.570824753Z" level=info msg="StartContainer for \"f4bd51971e086154ff5280855d676c5ad0804a2d58ad10b6d6c77ed523b47fdc\""
Jul  2 07:10:04.604047 systemd[1]: Started cri-containerd-f4bd51971e086154ff5280855d676c5ad0804a2d58ad10b6d6c77ed523b47fdc.scope - libcontainer container f4bd51971e086154ff5280855d676c5ad0804a2d58ad10b6d6c77ed523b47fdc.
Jul  2 07:10:04.623602 kernel: kauditd_printk_skb: 14 callbacks suppressed
Jul  2 07:10:04.623758 kernel: audit: type=1334 audit(1719904204.620:540): prog-id=163 op=LOAD
Jul  2 07:10:04.620000 audit: BPF prog-id=163 op=LOAD
Jul  2 07:10:04.620000 audit[3945]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000133988 a2=78 a3=0 items=0 ppid=3435 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:04.639932 kernel: audit: type=1300 audit(1719904204.620:540): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000133988 a2=78 a3=0 items=0 ppid=3435 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:04.620000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634626435313937316530383631353466663532383038353564363736
Jul  2 07:10:04.650053 kernel: audit: type=1327 audit(1719904204.620:540): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634626435313937316530383631353466663532383038353564363736
Jul  2 07:10:04.651763 containerd[1481]: time="2024-07-02T07:10:04.651705380Z" level=info msg="StartContainer for \"f4bd51971e086154ff5280855d676c5ad0804a2d58ad10b6d6c77ed523b47fdc\" returns successfully"
Jul  2 07:10:04.620000 audit: BPF prog-id=164 op=LOAD
Jul  2 07:10:04.673734 kernel: audit: type=1334 audit(1719904204.620:541): prog-id=164 op=LOAD
Jul  2 07:10:04.673926 kernel: audit: type=1300 audit(1719904204.620:541): arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000133720 a2=78 a3=0 items=0 ppid=3435 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:04.620000 audit[3945]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000133720 a2=78 a3=0 items=0 ppid=3435 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:04.620000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634626435313937316530383631353466663532383038353564363736
Jul  2 07:10:04.690888 kernel: audit: type=1327 audit(1719904204.620:541): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634626435313937316530383631353466663532383038353564363736
Jul  2 07:10:04.691039 kernel: audit: type=1334 audit(1719904204.620:542): prog-id=164 op=UNLOAD
Jul  2 07:10:04.620000 audit: BPF prog-id=164 op=UNLOAD
Jul  2 07:10:04.620000 audit: BPF prog-id=163 op=UNLOAD
Jul  2 07:10:04.694908 kernel: audit: type=1334 audit(1719904204.620:543): prog-id=163 op=UNLOAD
Jul  2 07:10:04.620000 audit: BPF prog-id=165 op=LOAD
Jul  2 07:10:04.620000 audit[3945]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000133be0 a2=78 a3=0 items=0 ppid=3435 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:04.708504 kernel: audit: type=1334 audit(1719904204.620:544): prog-id=165 op=LOAD
Jul  2 07:10:04.708589 kernel: audit: type=1300 audit(1719904204.620:544): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000133be0 a2=78 a3=0 items=0 ppid=3435 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:04.620000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634626435313937316530383631353466663532383038353564363736
Jul  2 07:10:04.939338 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Jul  2 07:10:04.939562 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Jul  2 07:10:05.104083 containerd[1481]: time="2024-07-02T07:10:05.103928321Z" level=info msg="StopPodSandbox for \"9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289\""
Jul  2 07:10:05.234487 containerd[1481]: 2024-07-02 07:10:05.180 [INFO][4004] k8s.go 608: Cleaning up netns ContainerID="9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289"
Jul  2 07:10:05.234487 containerd[1481]: 2024-07-02 07:10:05.180 [INFO][4004] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289" iface="eth0" netns="/var/run/netns/cni-562a3248-a209-2597-7a65-d6eca6435a7b"
Jul  2 07:10:05.234487 containerd[1481]: 2024-07-02 07:10:05.180 [INFO][4004] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289" iface="eth0" netns="/var/run/netns/cni-562a3248-a209-2597-7a65-d6eca6435a7b"
Jul  2 07:10:05.234487 containerd[1481]: 2024-07-02 07:10:05.181 [INFO][4004] dataplane_linux.go 568: Workload's veth was already gone.  Nothing to do. ContainerID="9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289" iface="eth0" netns="/var/run/netns/cni-562a3248-a209-2597-7a65-d6eca6435a7b"
Jul  2 07:10:05.234487 containerd[1481]: 2024-07-02 07:10:05.181 [INFO][4004] k8s.go 615: Releasing IP address(es) ContainerID="9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289"
Jul  2 07:10:05.234487 containerd[1481]: 2024-07-02 07:10:05.181 [INFO][4004] utils.go 188: Calico CNI releasing IP address ContainerID="9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289"
Jul  2 07:10:05.234487 containerd[1481]: 2024-07-02 07:10:05.221 [INFO][4010] ipam_plugin.go 411: Releasing address using handleID ContainerID="9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289" HandleID="k8s-pod-network.9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289" Workload="ci--3815.2.5--a--b9d6671d68-k8s-coredns--7db6d8ff4d--87tzr-eth0"
Jul  2 07:10:05.234487 containerd[1481]: 2024-07-02 07:10:05.221 [INFO][4010] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul  2 07:10:05.234487 containerd[1481]: 2024-07-02 07:10:05.221 [INFO][4010] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul  2 07:10:05.234487 containerd[1481]: 2024-07-02 07:10:05.227 [WARNING][4010] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289" HandleID="k8s-pod-network.9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289" Workload="ci--3815.2.5--a--b9d6671d68-k8s-coredns--7db6d8ff4d--87tzr-eth0"
Jul  2 07:10:05.234487 containerd[1481]: 2024-07-02 07:10:05.227 [INFO][4010] ipam_plugin.go 439: Releasing address using workloadID ContainerID="9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289" HandleID="k8s-pod-network.9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289" Workload="ci--3815.2.5--a--b9d6671d68-k8s-coredns--7db6d8ff4d--87tzr-eth0"
Jul  2 07:10:05.234487 containerd[1481]: 2024-07-02 07:10:05.231 [INFO][4010] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul  2 07:10:05.234487 containerd[1481]: 2024-07-02 07:10:05.232 [INFO][4004] k8s.go 621: Teardown processing complete. ContainerID="9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289"
Jul  2 07:10:05.240543 systemd[1]: run-netns-cni\x2d562a3248\x2da209\x2d2597\x2d7a65\x2dd6eca6435a7b.mount: Deactivated successfully.
Jul  2 07:10:05.241465 containerd[1481]: time="2024-07-02T07:10:05.241395738Z" level=info msg="TearDown network for sandbox \"9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289\" successfully"
Jul  2 07:10:05.241787 containerd[1481]: time="2024-07-02T07:10:05.241711338Z" level=info msg="StopPodSandbox for \"9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289\" returns successfully"
Jul  2 07:10:05.243250 containerd[1481]: time="2024-07-02T07:10:05.243211234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-87tzr,Uid:8de6fb26-aba2-46d0-b934-35c3682baf1f,Namespace:kube-system,Attempt:1,}"
Jul  2 07:10:05.386076 kubelet[2922]: I0702 07:10:05.385359    2922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-54hx8" podStartSLOduration=2.159895596 podStartE2EDuration="39.385326342s" podCreationTimestamp="2024-07-02 07:09:26 +0000 UTC" firstStartedPulling="2024-07-02 07:09:26.990459467 +0000 UTC m=+24.983528746" lastFinishedPulling="2024-07-02 07:10:04.215890213 +0000 UTC m=+62.208959492" observedRunningTime="2024-07-02 07:10:05.384895743 +0000 UTC m=+63.377965022" watchObservedRunningTime="2024-07-02 07:10:05.385326342 +0000 UTC m=+63.378395621"
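
The durations in this pod_startup_latency_tracker line reconcile against its own timestamps: the E2E figure is the watch-observed running time minus pod creation, and the SLO figure is that value minus the image-pull window. This is arithmetic on the values printed in the line; treating it as kubelet's exact internal definition is an assumption.

    # Timestamps from the line above, as seconds after 07:09:26 UTC (pod creation).
    from decimal import Decimal as D
    pull_started  = D("0.990459467")     # firstStartedPulling  07:09:26.990459467
    pull_finished = D("38.215890213")    # lastFinishedPulling  07:10:04.215890213
    running_seen  = D("39.385326342")    # watchObservedRunningTime 07:10:05.385326342

    e2e  = running_seen                  # 39.385326342 -> podStartE2EDuration
    pull = pull_finished - pull_started  # 37.225430746 (image pull window)
    slo  = e2e - pull                    # 2.159895596  -> podStartSLOduration
    print(e2e, pull, slo)
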
Jul  2 07:10:05.513559 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Jul  2 07:10:05.513686 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali4406b4dd0ab: link becomes ready
Jul  2 07:10:05.509563 systemd-networkd[1236]: cali4406b4dd0ab: Link UP
Jul  2 07:10:05.514173 systemd-networkd[1236]: cali4406b4dd0ab: Gained carrier
Jul  2 07:10:05.531614 containerd[1481]: 2024-07-02 07:10:05.338 [INFO][4018] utils.go 100: File /var/lib/calico/mtu does not exist
Jul  2 07:10:05.531614 containerd[1481]: 2024-07-02 07:10:05.354 [INFO][4018] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815.2.5--a--b9d6671d68-k8s-coredns--7db6d8ff4d--87tzr-eth0 coredns-7db6d8ff4d- kube-system  8de6fb26-aba2-46d0-b934-35c3682baf1f 709 0 2024-07-02 07:09:18 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s  ci-3815.2.5-a-b9d6671d68  coredns-7db6d8ff4d-87tzr eth0 coredns [] []   [kns.kube-system ksa.kube-system.coredns] cali4406b4dd0ab  [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="eca1160c96881e42ab93aba1064386933cb9891887cdd5339c183d10e1b1d1ad" Namespace="kube-system" Pod="coredns-7db6d8ff4d-87tzr" WorkloadEndpoint="ci--3815.2.5--a--b9d6671d68-k8s-coredns--7db6d8ff4d--87tzr-"
Jul  2 07:10:05.531614 containerd[1481]: 2024-07-02 07:10:05.354 [INFO][4018] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="eca1160c96881e42ab93aba1064386933cb9891887cdd5339c183d10e1b1d1ad" Namespace="kube-system" Pod="coredns-7db6d8ff4d-87tzr" WorkloadEndpoint="ci--3815.2.5--a--b9d6671d68-k8s-coredns--7db6d8ff4d--87tzr-eth0"
Jul  2 07:10:05.531614 containerd[1481]: 2024-07-02 07:10:05.427 [INFO][4039] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eca1160c96881e42ab93aba1064386933cb9891887cdd5339c183d10e1b1d1ad" HandleID="k8s-pod-network.eca1160c96881e42ab93aba1064386933cb9891887cdd5339c183d10e1b1d1ad" Workload="ci--3815.2.5--a--b9d6671d68-k8s-coredns--7db6d8ff4d--87tzr-eth0"
Jul  2 07:10:05.531614 containerd[1481]: 2024-07-02 07:10:05.439 [INFO][4039] ipam_plugin.go 264: Auto assigning IP ContainerID="eca1160c96881e42ab93aba1064386933cb9891887cdd5339c183d10e1b1d1ad" HandleID="k8s-pod-network.eca1160c96881e42ab93aba1064386933cb9891887cdd5339c183d10e1b1d1ad" Workload="ci--3815.2.5--a--b9d6671d68-k8s-coredns--7db6d8ff4d--87tzr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001fc450), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3815.2.5-a-b9d6671d68", "pod":"coredns-7db6d8ff4d-87tzr", "timestamp":"2024-07-02 07:10:05.427944854 +0000 UTC"}, Hostname:"ci-3815.2.5-a-b9d6671d68", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul  2 07:10:05.531614 containerd[1481]: 2024-07-02 07:10:05.439 [INFO][4039] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul  2 07:10:05.531614 containerd[1481]: 2024-07-02 07:10:05.439 [INFO][4039] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul  2 07:10:05.531614 containerd[1481]: 2024-07-02 07:10:05.439 [INFO][4039] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815.2.5-a-b9d6671d68'
Jul  2 07:10:05.531614 containerd[1481]: 2024-07-02 07:10:05.441 [INFO][4039] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.eca1160c96881e42ab93aba1064386933cb9891887cdd5339c183d10e1b1d1ad" host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:05.531614 containerd[1481]: 2024-07-02 07:10:05.446 [INFO][4039] ipam.go 372: Looking up existing affinities for host host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:05.531614 containerd[1481]: 2024-07-02 07:10:05.450 [INFO][4039] ipam.go 489: Trying affinity for 192.168.60.128/26 host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:05.531614 containerd[1481]: 2024-07-02 07:10:05.459 [INFO][4039] ipam.go 155: Attempting to load block cidr=192.168.60.128/26 host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:05.531614 containerd[1481]: 2024-07-02 07:10:05.463 [INFO][4039] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.60.128/26 host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:05.531614 containerd[1481]: 2024-07-02 07:10:05.463 [INFO][4039] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.60.128/26 handle="k8s-pod-network.eca1160c96881e42ab93aba1064386933cb9891887cdd5339c183d10e1b1d1ad" host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:05.531614 containerd[1481]: 2024-07-02 07:10:05.467 [INFO][4039] ipam.go 1685: Creating new handle: k8s-pod-network.eca1160c96881e42ab93aba1064386933cb9891887cdd5339c183d10e1b1d1ad
Jul  2 07:10:05.531614 containerd[1481]: 2024-07-02 07:10:05.471 [INFO][4039] ipam.go 1203: Writing block in order to claim IPs block=192.168.60.128/26 handle="k8s-pod-network.eca1160c96881e42ab93aba1064386933cb9891887cdd5339c183d10e1b1d1ad" host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:05.531614 containerd[1481]: 2024-07-02 07:10:05.477 [INFO][4039] ipam.go 1216: Successfully claimed IPs: [192.168.60.129/26] block=192.168.60.128/26 handle="k8s-pod-network.eca1160c96881e42ab93aba1064386933cb9891887cdd5339c183d10e1b1d1ad" host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:05.531614 containerd[1481]: 2024-07-02 07:10:05.477 [INFO][4039] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.60.129/26] handle="k8s-pod-network.eca1160c96881e42ab93aba1064386933cb9891887cdd5339c183d10e1b1d1ad" host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:05.531614 containerd[1481]: 2024-07-02 07:10:05.477 [INFO][4039] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul  2 07:10:05.531614 containerd[1481]: 2024-07-02 07:10:05.477 [INFO][4039] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.60.129/26] IPv6=[] ContainerID="eca1160c96881e42ab93aba1064386933cb9891887cdd5339c183d10e1b1d1ad" HandleID="k8s-pod-network.eca1160c96881e42ab93aba1064386933cb9891887cdd5339c183d10e1b1d1ad" Workload="ci--3815.2.5--a--b9d6671d68-k8s-coredns--7db6d8ff4d--87tzr-eth0"
Jul  2 07:10:05.532853 containerd[1481]: 2024-07-02 07:10:05.480 [INFO][4018] k8s.go 386: Populated endpoint ContainerID="eca1160c96881e42ab93aba1064386933cb9891887cdd5339c183d10e1b1d1ad" Namespace="kube-system" Pod="coredns-7db6d8ff4d-87tzr" WorkloadEndpoint="ci--3815.2.5--a--b9d6671d68-k8s-coredns--7db6d8ff4d--87tzr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.5--a--b9d6671d68-k8s-coredns--7db6d8ff4d--87tzr-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8de6fb26-aba2-46d0-b934-35c3682baf1f", ResourceVersion:"709", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 9, 18, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.5-a-b9d6671d68", ContainerID:"", Pod:"coredns-7db6d8ff4d-87tzr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4406b4dd0ab", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul  2 07:10:05.532853 containerd[1481]: 2024-07-02 07:10:05.480 [INFO][4018] k8s.go 387: Calico CNI using IPs: [192.168.60.129/32] ContainerID="eca1160c96881e42ab93aba1064386933cb9891887cdd5339c183d10e1b1d1ad" Namespace="kube-system" Pod="coredns-7db6d8ff4d-87tzr" WorkloadEndpoint="ci--3815.2.5--a--b9d6671d68-k8s-coredns--7db6d8ff4d--87tzr-eth0"
Jul  2 07:10:05.532853 containerd[1481]: 2024-07-02 07:10:05.480 [INFO][4018] dataplane_linux.go 68: Setting the host side veth name to cali4406b4dd0ab ContainerID="eca1160c96881e42ab93aba1064386933cb9891887cdd5339c183d10e1b1d1ad" Namespace="kube-system" Pod="coredns-7db6d8ff4d-87tzr" WorkloadEndpoint="ci--3815.2.5--a--b9d6671d68-k8s-coredns--7db6d8ff4d--87tzr-eth0"
Jul  2 07:10:05.532853 containerd[1481]: 2024-07-02 07:10:05.515 [INFO][4018] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="eca1160c96881e42ab93aba1064386933cb9891887cdd5339c183d10e1b1d1ad" Namespace="kube-system" Pod="coredns-7db6d8ff4d-87tzr" WorkloadEndpoint="ci--3815.2.5--a--b9d6671d68-k8s-coredns--7db6d8ff4d--87tzr-eth0"
Jul  2 07:10:05.532853 containerd[1481]: 2024-07-02 07:10:05.515 [INFO][4018] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="eca1160c96881e42ab93aba1064386933cb9891887cdd5339c183d10e1b1d1ad" Namespace="kube-system" Pod="coredns-7db6d8ff4d-87tzr" WorkloadEndpoint="ci--3815.2.5--a--b9d6671d68-k8s-coredns--7db6d8ff4d--87tzr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.5--a--b9d6671d68-k8s-coredns--7db6d8ff4d--87tzr-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8de6fb26-aba2-46d0-b934-35c3682baf1f", ResourceVersion:"709", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 9, 18, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.5-a-b9d6671d68", ContainerID:"eca1160c96881e42ab93aba1064386933cb9891887cdd5339c183d10e1b1d1ad", Pod:"coredns-7db6d8ff4d-87tzr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4406b4dd0ab", MAC:"ca:a9:c9:dd:fa:a3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul  2 07:10:05.532853 containerd[1481]: 2024-07-02 07:10:05.528 [INFO][4018] k8s.go 500: Wrote updated endpoint to datastore ContainerID="eca1160c96881e42ab93aba1064386933cb9891887cdd5339c183d10e1b1d1ad" Namespace="kube-system" Pod="coredns-7db6d8ff4d-87tzr" WorkloadEndpoint="ci--3815.2.5--a--b9d6671d68-k8s-coredns--7db6d8ff4d--87tzr-eth0"
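
The IPAM lines above take the host-wide lock, confirm this node's affinity for block 192.168.60.128/26, and hand the coredns pod 192.168.60.129/26. A quick containment check with the standard library (illustration only):

    import ipaddress

    block = ipaddress.ip_network("192.168.60.128/26")
    addr  = ipaddress.ip_address("192.168.60.129")

    print(addr in block)           # True: the assigned IP sits in the affine block
    print(block.num_addresses)     # 64 addresses per /26 block
    print(block[0], block[-1])     # 192.168.60.128 192.168.60.191
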
Jul  2 07:10:05.558322 containerd[1481]: time="2024-07-02T07:10:05.558209386Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul  2 07:10:05.558555 containerd[1481]: time="2024-07-02T07:10:05.558288086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul  2 07:10:05.558555 containerd[1481]: time="2024-07-02T07:10:05.558316886Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul  2 07:10:05.558555 containerd[1481]: time="2024-07-02T07:10:05.558337986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul  2 07:10:05.581064 systemd[1]: Started cri-containerd-eca1160c96881e42ab93aba1064386933cb9891887cdd5339c183d10e1b1d1ad.scope - libcontainer container eca1160c96881e42ab93aba1064386933cb9891887cdd5339c183d10e1b1d1ad.
Jul  2 07:10:05.591000 audit: BPF prog-id=166 op=LOAD
Jul  2 07:10:05.592000 audit: BPF prog-id=167 op=LOAD
Jul  2 07:10:05.592000 audit[4093]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133988 a2=78 a3=0 items=0 ppid=4084 pid=4093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:05.592000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563613131363063393638383165343261623933616261313036343338
Jul  2 07:10:05.592000 audit: BPF prog-id=168 op=LOAD
Jul  2 07:10:05.592000 audit[4093]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000133720 a2=78 a3=0 items=0 ppid=4084 pid=4093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:05.592000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563613131363063393638383165343261623933616261313036343338
Jul  2 07:10:05.592000 audit: BPF prog-id=168 op=UNLOAD
Jul  2 07:10:05.592000 audit: BPF prog-id=167 op=UNLOAD
Jul  2 07:10:05.592000 audit: BPF prog-id=169 op=LOAD
Jul  2 07:10:05.592000 audit[4093]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133be0 a2=78 a3=0 items=0 ppid=4084 pid=4093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:05.592000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563613131363063393638383165343261623933616261313036343338
Jul  2 07:10:05.624347 containerd[1481]: time="2024-07-02T07:10:05.624304450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-87tzr,Uid:8de6fb26-aba2-46d0-b934-35c3682baf1f,Namespace:kube-system,Attempt:1,} returns sandbox id \"eca1160c96881e42ab93aba1064386933cb9891887cdd5339c183d10e1b1d1ad\""
Jul  2 07:10:05.628244 containerd[1481]: time="2024-07-02T07:10:05.628192442Z" level=info msg="CreateContainer within sandbox \"eca1160c96881e42ab93aba1064386933cb9891887cdd5339c183d10e1b1d1ad\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul  2 07:10:05.667942 containerd[1481]: time="2024-07-02T07:10:05.667779961Z" level=info msg="CreateContainer within sandbox \"eca1160c96881e42ab93aba1064386933cb9891887cdd5339c183d10e1b1d1ad\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a5ff498a7812c2f1cf660dd952251f415a931a49feb4dd83c05344e4558b90f1\""
Jul  2 07:10:05.670407 containerd[1481]: time="2024-07-02T07:10:05.669980856Z" level=info msg="StartContainer for \"a5ff498a7812c2f1cf660dd952251f415a931a49feb4dd83c05344e4558b90f1\""
Jul  2 07:10:05.698077 systemd[1]: Started cri-containerd-a5ff498a7812c2f1cf660dd952251f415a931a49feb4dd83c05344e4558b90f1.scope - libcontainer container a5ff498a7812c2f1cf660dd952251f415a931a49feb4dd83c05344e4558b90f1.
Jul  2 07:10:05.710000 audit: BPF prog-id=170 op=LOAD
Jul  2 07:10:05.710000 audit: BPF prog-id=171 op=LOAD
Jul  2 07:10:05.710000 audit[4126]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133988 a2=78 a3=0 items=0 ppid=4084 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:05.710000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135666634393861373831326332663163663636306464393532323531
Jul  2 07:10:05.710000 audit: BPF prog-id=172 op=LOAD
Jul  2 07:10:05.710000 audit[4126]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000133720 a2=78 a3=0 items=0 ppid=4084 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:05.710000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135666634393861373831326332663163663636306464393532323531
Jul  2 07:10:05.710000 audit: BPF prog-id=172 op=UNLOAD
Jul  2 07:10:05.710000 audit: BPF prog-id=171 op=UNLOAD
Jul  2 07:10:05.710000 audit: BPF prog-id=173 op=LOAD
Jul  2 07:10:05.710000 audit[4126]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133be0 a2=78 a3=0 items=0 ppid=4084 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:05.710000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135666634393861373831326332663163663636306464393532323531
Jul  2 07:10:05.738853 containerd[1481]: time="2024-07-02T07:10:05.738797215Z" level=info msg="StartContainer for \"a5ff498a7812c2f1cf660dd952251f415a931a49feb4dd83c05344e4558b90f1\" returns successfully"
Jul  2 07:10:05.775931 systemd[1]: run-containerd-runc-k8s.io-f4bd51971e086154ff5280855d676c5ad0804a2d58ad10b6d6c77ed523b47fdc-runc.2PUob6.mount: Deactivated successfully.
Jul  2 07:10:06.413455 systemd[1]: run-containerd-runc-k8s.io-f4bd51971e086154ff5280855d676c5ad0804a2d58ad10b6d6c77ed523b47fdc-runc.FqJG8M.mount: Deactivated successfully.
Jul  2 07:10:06.432621 kubelet[2922]: I0702 07:10:06.432547    2922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-87tzr" podStartSLOduration=48.432523922 podStartE2EDuration="48.432523922s" podCreationTimestamp="2024-07-02 07:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:10:06.392671701 +0000 UTC m=+64.385740980" watchObservedRunningTime="2024-07-02 07:10:06.432523922 +0000 UTC m=+64.425593401"
Jul  2 07:10:06.528000 audit[4205]: NETFILTER_CFG table=filter:100 family=2 entries=14 op=nft_register_rule pid=4205 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul  2 07:10:06.528000 audit[4205]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7fffec1af3b0 a2=0 a3=7fffec1af39c items=0 ppid=3058 pid=4205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:06.528000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul  2 07:10:06.529000 audit[4205]: NETFILTER_CFG table=nat:101 family=2 entries=14 op=nft_register_rule pid=4205 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul  2 07:10:06.529000 audit[4205]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fffec1af3b0 a2=0 a3=0 items=0 ppid=3058 pid=4205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:06.529000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul  2 07:10:06.543000 audit[4211]: NETFILTER_CFG table=filter:102 family=2 entries=11 op=nft_register_rule pid=4211 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul  2 07:10:06.543000 audit[4211]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7fff1f9380c0 a2=0 a3=7fff1f9380ac items=0 ppid=3058 pid=4211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:06.543000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul  2 07:10:06.552000 audit[4211]: NETFILTER_CFG table=nat:103 family=2 entries=35 op=nft_register_chain pid=4211 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul  2 07:10:06.552000 audit[4211]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fff1f9380c0 a2=0 a3=7fff1f9380ac items=0 ppid=3058 pid=4211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:06.552000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
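
The same PROCTITLE decoding applied to the records above recovers the restore invocation behind these NETFILTER_CFG events (value copied verbatim from the audit record; sketch only):

    raw = bytes.fromhex(
        "69707461626C65732D726573746F7265002D770035002D5700313030303030"
        "002D2D6E6F666C757368002D2D636F756E74657273"
    )
    print([p.decode() for p in raw.split(b"\x00")])
    # ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']
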
Jul  2 07:10:06.602000 audit[4232]: AVC avc:  denied  { write } for  pid=4232 comm="tee" name="fd" dev="proc" ino=33076 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Jul  2 07:10:06.602000 audit[4232]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff368e5a0c a2=241 a3=1b6 items=1 ppid=4163 pid=4232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:06.602000 audit: CWD cwd="/etc/service/enabled/felix/log"
Jul  2 07:10:06.602000 audit: PATH item=0 name="/dev/fd/63" inode=32102 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul  2 07:10:06.602000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Jul  2 07:10:06.638000 audit[4249]: AVC avc:  denied  { write } for  pid=4249 comm="tee" name="fd" dev="proc" ino=32113 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Jul  2 07:10:06.638000 audit[4249]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc2636e9fd a2=241 a3=1b6 items=1 ppid=4174 pid=4249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:06.638000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log"
Jul  2 07:10:06.638000 audit: PATH item=0 name="/dev/fd/63" inode=33079 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul  2 07:10:06.638000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Jul  2 07:10:06.645000 audit[4215]: AVC avc:  denied  { write } for  pid=4215 comm="tee" name="fd" dev="proc" ino=33090 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Jul  2 07:10:06.649000 audit[4244]: AVC avc:  denied  { write } for  pid=4244 comm="tee" name="fd" dev="proc" ino=33093 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Jul  2 07:10:06.649000 audit[4244]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd5a894a0d a2=241 a3=1b6 items=1 ppid=4168 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:06.652000 audit[4260]: AVC avc:  denied  { write } for  pid=4260 comm="tee" name="fd" dev="proc" ino=32117 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Jul  2 07:10:06.652000 audit[4260]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd8e607a0e a2=241 a3=1b6 items=1 ppid=4173 pid=4260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:06.652000 audit: CWD cwd="/etc/service/enabled/cni/log"
Jul  2 07:10:06.652000 audit: PATH item=0 name="/dev/fd/63" inode=33087 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul  2 07:10:06.652000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Jul  2 07:10:06.649000 audit: CWD cwd="/etc/service/enabled/bird/log"
Jul  2 07:10:06.649000 audit: PATH item=0 name="/dev/fd/63" inode=33069 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul  2 07:10:06.649000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Jul  2 07:10:06.645000 audit[4215]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdd69b0a0c a2=241 a3=1b6 items=1 ppid=4171 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:06.657000 audit[4256]: AVC avc:  denied  { write } for  pid=4256 comm="tee" name="fd" dev="proc" ino=32121 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Jul  2 07:10:06.657000 audit[4256]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffe514e9fc a2=241 a3=1b6 items=1 ppid=4164 pid=4256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:06.657000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log"
Jul  2 07:10:06.657000 audit: PATH item=0 name="/dev/fd/63" inode=33084 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul  2 07:10:06.659000 audit[4222]: AVC avc:  denied  { write } for  pid=4222 comm="tee" name="fd" dev="proc" ino=33098 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Jul  2 07:10:06.645000 audit: CWD cwd="/etc/service/enabled/confd/log"
Jul  2 07:10:06.645000 audit: PATH item=0 name="/dev/fd/63" inode=32094 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul  2 07:10:06.645000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Jul  2 07:10:06.657000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Jul  2 07:10:06.659000 audit[4222]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc801f3a0c a2=241 a3=1b6 items=1 ppid=4180 pid=4222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:06.659000 audit: CWD cwd="/etc/service/enabled/bird6/log"
Jul  2 07:10:06.659000 audit: PATH item=0 name="/dev/fd/63" inode=32099 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul  2 07:10:06.659000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Jul  2 07:10:06.692174 systemd-networkd[1236]: cali4406b4dd0ab: Gained IPv6LL
Jul  2 07:10:07.081000 audit: BPF prog-id=174 op=LOAD
Jul  2 07:10:07.081000 audit[4327]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdd69cc240 a2=70 a3=7fb47ccda000 items=0 ppid=4165 pid=4327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:07.081000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Jul  2 07:10:07.081000 audit: BPF prog-id=174 op=UNLOAD
Jul  2 07:10:07.081000 audit: BPF prog-id=175 op=LOAD
Jul  2 07:10:07.081000 audit[4327]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdd69cc240 a2=70 a3=6f items=0 ppid=4165 pid=4327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:07.081000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Jul  2 07:10:07.081000 audit: BPF prog-id=175 op=UNLOAD
Jul  2 07:10:07.081000 audit: BPF prog-id=176 op=LOAD
Jul  2 07:10:07.081000 audit[4327]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffdd69cc1d0 a2=70 a3=7ffdd69cc240 items=0 ppid=4165 pid=4327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:07.081000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Jul  2 07:10:07.081000 audit: BPF prog-id=176 op=UNLOAD
Jul  2 07:10:07.091000 audit: BPF prog-id=177 op=LOAD
Jul  2 07:10:07.091000 audit[4327]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdd69cc200 a2=70 a3=0 items=0 ppid=4165 pid=4327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:07.091000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
Jul  2 07:10:07.098510 systemd-networkd[1236]: vxlan.calico: Link UP
Jul  2 07:10:07.098527 systemd-networkd[1236]: vxlan.calico: Gained carrier
Jul  2 07:10:07.108307 containerd[1481]: time="2024-07-02T07:10:07.108249695Z" level=info msg="StopPodSandbox for \"96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7\""
Jul  2 07:10:07.127000 audit: BPF prog-id=177 op=UNLOAD
Jul  2 07:10:07.241219 containerd[1481]: 2024-07-02 07:10:07.188 [INFO][4349] k8s.go 608: Cleaning up netns ContainerID="96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7"
Jul  2 07:10:07.241219 containerd[1481]: 2024-07-02 07:10:07.189 [INFO][4349] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7" iface="eth0" netns="/var/run/netns/cni-07f1dc5e-cbe4-c3bb-d2c5-5e2ed8676041"
Jul  2 07:10:07.241219 containerd[1481]: 2024-07-02 07:10:07.189 [INFO][4349] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7" iface="eth0" netns="/var/run/netns/cni-07f1dc5e-cbe4-c3bb-d2c5-5e2ed8676041"
Jul  2 07:10:07.241219 containerd[1481]: 2024-07-02 07:10:07.189 [INFO][4349] dataplane_linux.go 568: Workload's veth was already gone.  Nothing to do. ContainerID="96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7" iface="eth0" netns="/var/run/netns/cni-07f1dc5e-cbe4-c3bb-d2c5-5e2ed8676041"
Jul  2 07:10:07.241219 containerd[1481]: 2024-07-02 07:10:07.189 [INFO][4349] k8s.go 615: Releasing IP address(es) ContainerID="96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7"
Jul  2 07:10:07.241219 containerd[1481]: 2024-07-02 07:10:07.191 [INFO][4349] utils.go 188: Calico CNI releasing IP address ContainerID="96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7"
Jul  2 07:10:07.241219 containerd[1481]: 2024-07-02 07:10:07.222 [INFO][4365] ipam_plugin.go 411: Releasing address using handleID ContainerID="96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7" HandleID="k8s-pod-network.96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7" Workload="ci--3815.2.5--a--b9d6671d68-k8s-calico--kube--controllers--7c9974ff94--rnszf-eth0"
Jul  2 07:10:07.241219 containerd[1481]: 2024-07-02 07:10:07.222 [INFO][4365] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul  2 07:10:07.241219 containerd[1481]: 2024-07-02 07:10:07.223 [INFO][4365] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul  2 07:10:07.241219 containerd[1481]: 2024-07-02 07:10:07.228 [WARNING][4365] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7" HandleID="k8s-pod-network.96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7" Workload="ci--3815.2.5--a--b9d6671d68-k8s-calico--kube--controllers--7c9974ff94--rnszf-eth0"
Jul  2 07:10:07.241219 containerd[1481]: 2024-07-02 07:10:07.228 [INFO][4365] ipam_plugin.go 439: Releasing address using workloadID ContainerID="96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7" HandleID="k8s-pod-network.96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7" Workload="ci--3815.2.5--a--b9d6671d68-k8s-calico--kube--controllers--7c9974ff94--rnszf-eth0"
Jul  2 07:10:07.241219 containerd[1481]: 2024-07-02 07:10:07.232 [INFO][4365] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul  2 07:10:07.241219 containerd[1481]: 2024-07-02 07:10:07.239 [INFO][4349] k8s.go 621: Teardown processing complete. ContainerID="96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7"
Jul  2 07:10:07.245268 systemd[1]: run-netns-cni\x2d07f1dc5e\x2dcbe4\x2dc3bb\x2dd2c5\x2d5e2ed8676041.mount: Deactivated successfully.
Jul  2 07:10:07.246054 containerd[1481]: time="2024-07-02T07:10:07.246000034Z" level=info msg="TearDown network for sandbox \"96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7\" successfully"
Jul  2 07:10:07.246568 containerd[1481]: time="2024-07-02T07:10:07.246544733Z" level=info msg="StopPodSandbox for \"96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7\" returns successfully"
Jul  2 07:10:07.247628 containerd[1481]: time="2024-07-02T07:10:07.247597431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c9974ff94-rnszf,Uid:0b47e120-86a6-4239-83a3-6d30cbbde07c,Namespace:calico-system,Attempt:1,}"
Jul  2 07:10:07.284000 audit[4381]: NETFILTER_CFG table=nat:104 family=2 entries=15 op=nft_register_chain pid=4381 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re"
Jul  2 07:10:07.284000 audit[4381]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffdc7427e70 a2=0 a3=7ffdc7427e5c items=0 ppid=4165 pid=4381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:07.284000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030
Jul  2 07:10:07.302000 audit[4382]: NETFILTER_CFG table=raw:105 family=2 entries=19 op=nft_register_chain pid=4382 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re"
Jul  2 07:10:07.302000 audit[4382]: SYSCALL arch=c000003e syscall=46 success=yes exit=6992 a0=3 a1=7fff187f0860 a2=0 a3=7fff187f084c items=0 ppid=4165 pid=4382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:07.302000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030
Jul  2 07:10:07.305000 audit[4385]: NETFILTER_CFG table=filter:106 family=2 entries=69 op=nft_register_chain pid=4385 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re"
Jul  2 07:10:07.305000 audit[4385]: SYSCALL arch=c000003e syscall=46 success=yes exit=36404 a0=3 a1=7ffe32f27ca0 a2=0 a3=7ffe32f27c8c items=0 ppid=4165 pid=4385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:07.305000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030
Jul  2 07:10:07.306000 audit[4383]: NETFILTER_CFG table=mangle:107 family=2 entries=16 op=nft_register_chain pid=4383 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re"
Jul  2 07:10:07.306000 audit[4383]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fff07feed10 a2=0 a3=7fff07feecfc items=0 ppid=4165 pid=4383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:07.306000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030
Jul  2 07:10:07.439205 systemd-networkd[1236]: caliae2094b87c7: Link UP
Jul  2 07:10:07.440938 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): caliae2094b87c7: link becomes ready
Jul  2 07:10:07.440820 systemd-networkd[1236]: caliae2094b87c7: Gained carrier
Jul  2 07:10:07.458061 containerd[1481]: 2024-07-02 07:10:07.354 [INFO][4389] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815.2.5--a--b9d6671d68-k8s-calico--kube--controllers--7c9974ff94--rnszf-eth0 calico-kube-controllers-7c9974ff94- calico-system  0b47e120-86a6-4239-83a3-6d30cbbde07c 738 0 2024-07-02 07:09:26 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7c9974ff94 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s  ci-3815.2.5-a-b9d6671d68  calico-kube-controllers-7c9974ff94-rnszf eth0 calico-kube-controllers [] []   [kns.calico-system ksa.calico-system.calico-kube-controllers] caliae2094b87c7  [] []}} ContainerID="c18ca54b3eafb6fb771f8846195301fc241cfb0a952c5832048566bd2db6ba9a" Namespace="calico-system" Pod="calico-kube-controllers-7c9974ff94-rnszf" WorkloadEndpoint="ci--3815.2.5--a--b9d6671d68-k8s-calico--kube--controllers--7c9974ff94--rnszf-"
Jul  2 07:10:07.458061 containerd[1481]: 2024-07-02 07:10:07.354 [INFO][4389] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c18ca54b3eafb6fb771f8846195301fc241cfb0a952c5832048566bd2db6ba9a" Namespace="calico-system" Pod="calico-kube-controllers-7c9974ff94-rnszf" WorkloadEndpoint="ci--3815.2.5--a--b9d6671d68-k8s-calico--kube--controllers--7c9974ff94--rnszf-eth0"
Jul  2 07:10:07.458061 containerd[1481]: 2024-07-02 07:10:07.391 [INFO][4401] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c18ca54b3eafb6fb771f8846195301fc241cfb0a952c5832048566bd2db6ba9a" HandleID="k8s-pod-network.c18ca54b3eafb6fb771f8846195301fc241cfb0a952c5832048566bd2db6ba9a" Workload="ci--3815.2.5--a--b9d6671d68-k8s-calico--kube--controllers--7c9974ff94--rnszf-eth0"
Jul  2 07:10:07.458061 containerd[1481]: 2024-07-02 07:10:07.400 [INFO][4401] ipam_plugin.go 264: Auto assigning IP ContainerID="c18ca54b3eafb6fb771f8846195301fc241cfb0a952c5832048566bd2db6ba9a" HandleID="k8s-pod-network.c18ca54b3eafb6fb771f8846195301fc241cfb0a952c5832048566bd2db6ba9a" Workload="ci--3815.2.5--a--b9d6671d68-k8s-calico--kube--controllers--7c9974ff94--rnszf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000599720), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3815.2.5-a-b9d6671d68", "pod":"calico-kube-controllers-7c9974ff94-rnszf", "timestamp":"2024-07-02 07:10:07.391629358 +0000 UTC"}, Hostname:"ci-3815.2.5-a-b9d6671d68", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul  2 07:10:07.458061 containerd[1481]: 2024-07-02 07:10:07.401 [INFO][4401] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul  2 07:10:07.458061 containerd[1481]: 2024-07-02 07:10:07.401 [INFO][4401] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul  2 07:10:07.458061 containerd[1481]: 2024-07-02 07:10:07.401 [INFO][4401] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815.2.5-a-b9d6671d68'
Jul  2 07:10:07.458061 containerd[1481]: 2024-07-02 07:10:07.402 [INFO][4401] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c18ca54b3eafb6fb771f8846195301fc241cfb0a952c5832048566bd2db6ba9a" host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:07.458061 containerd[1481]: 2024-07-02 07:10:07.406 [INFO][4401] ipam.go 372: Looking up existing affinities for host host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:07.458061 containerd[1481]: 2024-07-02 07:10:07.410 [INFO][4401] ipam.go 489: Trying affinity for 192.168.60.128/26 host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:07.458061 containerd[1481]: 2024-07-02 07:10:07.412 [INFO][4401] ipam.go 155: Attempting to load block cidr=192.168.60.128/26 host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:07.458061 containerd[1481]: 2024-07-02 07:10:07.414 [INFO][4401] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.60.128/26 host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:07.458061 containerd[1481]: 2024-07-02 07:10:07.414 [INFO][4401] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.60.128/26 handle="k8s-pod-network.c18ca54b3eafb6fb771f8846195301fc241cfb0a952c5832048566bd2db6ba9a" host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:07.458061 containerd[1481]: 2024-07-02 07:10:07.416 [INFO][4401] ipam.go 1685: Creating new handle: k8s-pod-network.c18ca54b3eafb6fb771f8846195301fc241cfb0a952c5832048566bd2db6ba9a
Jul  2 07:10:07.458061 containerd[1481]: 2024-07-02 07:10:07.419 [INFO][4401] ipam.go 1203: Writing block in order to claim IPs block=192.168.60.128/26 handle="k8s-pod-network.c18ca54b3eafb6fb771f8846195301fc241cfb0a952c5832048566bd2db6ba9a" host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:07.458061 containerd[1481]: 2024-07-02 07:10:07.430 [INFO][4401] ipam.go 1216: Successfully claimed IPs: [192.168.60.130/26] block=192.168.60.128/26 handle="k8s-pod-network.c18ca54b3eafb6fb771f8846195301fc241cfb0a952c5832048566bd2db6ba9a" host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:07.458061 containerd[1481]: 2024-07-02 07:10:07.430 [INFO][4401] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.60.130/26] handle="k8s-pod-network.c18ca54b3eafb6fb771f8846195301fc241cfb0a952c5832048566bd2db6ba9a" host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:07.458061 containerd[1481]: 2024-07-02 07:10:07.430 [INFO][4401] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul  2 07:10:07.458061 containerd[1481]: 2024-07-02 07:10:07.430 [INFO][4401] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.60.130/26] IPv6=[] ContainerID="c18ca54b3eafb6fb771f8846195301fc241cfb0a952c5832048566bd2db6ba9a" HandleID="k8s-pod-network.c18ca54b3eafb6fb771f8846195301fc241cfb0a952c5832048566bd2db6ba9a" Workload="ci--3815.2.5--a--b9d6671d68-k8s-calico--kube--controllers--7c9974ff94--rnszf-eth0"
Jul  2 07:10:07.459094 containerd[1481]: 2024-07-02 07:10:07.432 [INFO][4389] k8s.go 386: Populated endpoint ContainerID="c18ca54b3eafb6fb771f8846195301fc241cfb0a952c5832048566bd2db6ba9a" Namespace="calico-system" Pod="calico-kube-controllers-7c9974ff94-rnszf" WorkloadEndpoint="ci--3815.2.5--a--b9d6671d68-k8s-calico--kube--controllers--7c9974ff94--rnszf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.5--a--b9d6671d68-k8s-calico--kube--controllers--7c9974ff94--rnszf-eth0", GenerateName:"calico-kube-controllers-7c9974ff94-", Namespace:"calico-system", SelfLink:"", UID:"0b47e120-86a6-4239-83a3-6d30cbbde07c", ResourceVersion:"738", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 9, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c9974ff94", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.5-a-b9d6671d68", ContainerID:"", Pod:"calico-kube-controllers-7c9974ff94-rnszf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.60.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliae2094b87c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul  2 07:10:07.459094 containerd[1481]: 2024-07-02 07:10:07.432 [INFO][4389] k8s.go 387: Calico CNI using IPs: [192.168.60.130/32] ContainerID="c18ca54b3eafb6fb771f8846195301fc241cfb0a952c5832048566bd2db6ba9a" Namespace="calico-system" Pod="calico-kube-controllers-7c9974ff94-rnszf" WorkloadEndpoint="ci--3815.2.5--a--b9d6671d68-k8s-calico--kube--controllers--7c9974ff94--rnszf-eth0"
Jul  2 07:10:07.459094 containerd[1481]: 2024-07-02 07:10:07.432 [INFO][4389] dataplane_linux.go 68: Setting the host side veth name to caliae2094b87c7 ContainerID="c18ca54b3eafb6fb771f8846195301fc241cfb0a952c5832048566bd2db6ba9a" Namespace="calico-system" Pod="calico-kube-controllers-7c9974ff94-rnszf" WorkloadEndpoint="ci--3815.2.5--a--b9d6671d68-k8s-calico--kube--controllers--7c9974ff94--rnszf-eth0"
Jul  2 07:10:07.459094 containerd[1481]: 2024-07-02 07:10:07.442 [INFO][4389] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="c18ca54b3eafb6fb771f8846195301fc241cfb0a952c5832048566bd2db6ba9a" Namespace="calico-system" Pod="calico-kube-controllers-7c9974ff94-rnszf" WorkloadEndpoint="ci--3815.2.5--a--b9d6671d68-k8s-calico--kube--controllers--7c9974ff94--rnszf-eth0"
Jul  2 07:10:07.459094 containerd[1481]: 2024-07-02 07:10:07.442 [INFO][4389] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c18ca54b3eafb6fb771f8846195301fc241cfb0a952c5832048566bd2db6ba9a" Namespace="calico-system" Pod="calico-kube-controllers-7c9974ff94-rnszf" WorkloadEndpoint="ci--3815.2.5--a--b9d6671d68-k8s-calico--kube--controllers--7c9974ff94--rnszf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.5--a--b9d6671d68-k8s-calico--kube--controllers--7c9974ff94--rnszf-eth0", GenerateName:"calico-kube-controllers-7c9974ff94-", Namespace:"calico-system", SelfLink:"", UID:"0b47e120-86a6-4239-83a3-6d30cbbde07c", ResourceVersion:"738", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 9, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c9974ff94", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.5-a-b9d6671d68", ContainerID:"c18ca54b3eafb6fb771f8846195301fc241cfb0a952c5832048566bd2db6ba9a", Pod:"calico-kube-controllers-7c9974ff94-rnszf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.60.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliae2094b87c7", MAC:"2e:05:52:1d:77:3d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul  2 07:10:07.459094 containerd[1481]: 2024-07-02 07:10:07.456 [INFO][4389] k8s.go 500: Wrote updated endpoint to datastore ContainerID="c18ca54b3eafb6fb771f8846195301fc241cfb0a952c5832048566bd2db6ba9a" Namespace="calico-system" Pod="calico-kube-controllers-7c9974ff94-rnszf" WorkloadEndpoint="ci--3815.2.5--a--b9d6671d68-k8s-calico--kube--controllers--7c9974ff94--rnszf-eth0"
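
This second CNI add draws the next address, 192.168.60.130, from the same affine block that served coredns. A toy sequential allocator reproduces the order seen in this log; Calico's real IPAM is more involved (block affinities, handles, and the host-wide lock seen above), so this is a sketch only:

    import ipaddress

    block = ipaddress.ip_network("192.168.60.128/26")
    allocated = set()

    def next_free(block, allocated):
        for host in block.hosts():        # .hosts() skips network/broadcast addresses
            if host not in allocated:
                allocated.add(host)
                return host
        raise RuntimeError("affine block exhausted")

    print(next_free(block, allocated))    # 192.168.60.129 (coredns-7db6d8ff4d-87tzr)
    print(next_free(block, allocated))    # 192.168.60.130 (calico-kube-controllers)
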
Jul  2 07:10:07.474000 audit[4421]: NETFILTER_CFG table=filter:108 family=2 entries=38 op=nft_register_chain pid=4421 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re"
Jul  2 07:10:07.474000 audit[4421]: SYSCALL arch=c000003e syscall=46 success=yes exit=20336 a0=3 a1=7ffd66de6860 a2=0 a3=7ffd66de684c items=0 ppid=4165 pid=4421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:07.474000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030
Jul  2 07:10:07.491897 containerd[1481]: time="2024-07-02T07:10:07.491773068Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul  2 07:10:07.492279 containerd[1481]: time="2024-07-02T07:10:07.492215567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul  2 07:10:07.492279 containerd[1481]: time="2024-07-02T07:10:07.492252967Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul  2 07:10:07.492438 containerd[1481]: time="2024-07-02T07:10:07.492402067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul  2 07:10:07.525130 systemd[1]: Started cri-containerd-c18ca54b3eafb6fb771f8846195301fc241cfb0a952c5832048566bd2db6ba9a.scope - libcontainer container c18ca54b3eafb6fb771f8846195301fc241cfb0a952c5832048566bd2db6ba9a.
Jul  2 07:10:07.543000 audit: BPF prog-id=178 op=LOAD
Jul  2 07:10:07.544000 audit: BPF prog-id=179 op=LOAD
Jul  2 07:10:07.544000 audit[4439]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000139988 a2=78 a3=0 items=0 ppid=4428 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:07.544000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331386361353462336561666236666237373166383834363139353330
Jul  2 07:10:07.544000 audit: BPF prog-id=180 op=LOAD
Jul  2 07:10:07.544000 audit[4439]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000139720 a2=78 a3=0 items=0 ppid=4428 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:07.544000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331386361353462336561666236666237373166383834363139353330
Jul  2 07:10:07.544000 audit: BPF prog-id=180 op=UNLOAD
Jul  2 07:10:07.544000 audit: BPF prog-id=179 op=UNLOAD
Jul  2 07:10:07.544000 audit: BPF prog-id=181 op=LOAD
Jul  2 07:10:07.544000 audit[4439]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000139be0 a2=78 a3=0 items=0 ppid=4428 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:07.544000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331386361353462336561666236666237373166383834363139353330
Jul  2 07:10:07.579316 containerd[1481]: time="2024-07-02T07:10:07.579253602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c9974ff94-rnszf,Uid:0b47e120-86a6-4239-83a3-6d30cbbde07c,Namespace:calico-system,Attempt:1,} returns sandbox id \"c18ca54b3eafb6fb771f8846195301fc241cfb0a952c5832048566bd2db6ba9a\""
Jul  2 07:10:07.581769 containerd[1481]: time="2024-07-02T07:10:07.581708998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\""
Jul  2 07:10:08.104702 containerd[1481]: time="2024-07-02T07:10:08.104648414Z" level=info msg="StopPodSandbox for \"fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b\""
Jul  2 07:10:08.105376 containerd[1481]: time="2024-07-02T07:10:08.105331813Z" level=info msg="StopPodSandbox for \"76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056\""
Jul  2 07:10:08.270573 containerd[1481]: 2024-07-02 07:10:08.193 [INFO][4496] k8s.go 608: Cleaning up netns ContainerID="fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b"
Jul  2 07:10:08.270573 containerd[1481]: 2024-07-02 07:10:08.194 [INFO][4496] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b" iface="eth0" netns="/var/run/netns/cni-7f8ae156-416f-23ee-aea2-6e73379f920d"
Jul  2 07:10:08.270573 containerd[1481]: 2024-07-02 07:10:08.194 [INFO][4496] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b" iface="eth0" netns="/var/run/netns/cni-7f8ae156-416f-23ee-aea2-6e73379f920d"
Jul  2 07:10:08.270573 containerd[1481]: 2024-07-02 07:10:08.194 [INFO][4496] dataplane_linux.go 568: Workload's veth was already gone.  Nothing to do. ContainerID="fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b" iface="eth0" netns="/var/run/netns/cni-7f8ae156-416f-23ee-aea2-6e73379f920d"
Jul  2 07:10:08.270573 containerd[1481]: 2024-07-02 07:10:08.194 [INFO][4496] k8s.go 615: Releasing IP address(es) ContainerID="fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b"
Jul  2 07:10:08.270573 containerd[1481]: 2024-07-02 07:10:08.194 [INFO][4496] utils.go 188: Calico CNI releasing IP address ContainerID="fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b"
Jul  2 07:10:08.270573 containerd[1481]: 2024-07-02 07:10:08.245 [INFO][4507] ipam_plugin.go 411: Releasing address using handleID ContainerID="fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b" HandleID="k8s-pod-network.fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b" Workload="ci--3815.2.5--a--b9d6671d68-k8s-coredns--7db6d8ff4d--cql52-eth0"
Jul  2 07:10:08.270573 containerd[1481]: 2024-07-02 07:10:08.246 [INFO][4507] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul  2 07:10:08.270573 containerd[1481]: 2024-07-02 07:10:08.246 [INFO][4507] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul  2 07:10:08.270573 containerd[1481]: 2024-07-02 07:10:08.256 [WARNING][4507] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b" HandleID="k8s-pod-network.fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b" Workload="ci--3815.2.5--a--b9d6671d68-k8s-coredns--7db6d8ff4d--cql52-eth0"
Jul  2 07:10:08.270573 containerd[1481]: 2024-07-02 07:10:08.257 [INFO][4507] ipam_plugin.go 439: Releasing address using workloadID ContainerID="fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b" HandleID="k8s-pod-network.fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b" Workload="ci--3815.2.5--a--b9d6671d68-k8s-coredns--7db6d8ff4d--cql52-eth0"
Jul  2 07:10:08.270573 containerd[1481]: 2024-07-02 07:10:08.261 [INFO][4507] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul  2 07:10:08.270573 containerd[1481]: 2024-07-02 07:10:08.263 [INFO][4496] k8s.go 621: Teardown processing complete. ContainerID="fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b"
Jul  2 07:10:08.270573 containerd[1481]: time="2024-07-02T07:10:08.270394713Z" level=info msg="TearDown network for sandbox \"fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b\" successfully"
Jul  2 07:10:08.270573 containerd[1481]: time="2024-07-02T07:10:08.270445813Z" level=info msg="StopPodSandbox for \"fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b\" returns successfully"
Jul  2 07:10:08.268409 systemd[1]: run-netns-cni\x2d7f8ae156\x2d416f\x2d23ee\x2daea2\x2d6e73379f920d.mount: Deactivated successfully.
Jul  2 07:10:08.271918 containerd[1481]: time="2024-07-02T07:10:08.271827010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cql52,Uid:a1d12bc4-6d1b-42f2-bbef-a982b18d5205,Namespace:kube-system,Attempt:1,}"
Jul  2 07:10:08.287579 containerd[1481]: 2024-07-02 07:10:08.209 [INFO][4495] k8s.go 608: Cleaning up netns ContainerID="76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056"
Jul  2 07:10:08.287579 containerd[1481]: 2024-07-02 07:10:08.210 [INFO][4495] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056" iface="eth0" netns="/var/run/netns/cni-b961a9a4-b734-1b64-f697-66ff8294a7bc"
Jul  2 07:10:08.287579 containerd[1481]: 2024-07-02 07:10:08.210 [INFO][4495] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056" iface="eth0" netns="/var/run/netns/cni-b961a9a4-b734-1b64-f697-66ff8294a7bc"
Jul  2 07:10:08.287579 containerd[1481]: 2024-07-02 07:10:08.210 [INFO][4495] dataplane_linux.go 568: Workload's veth was already gone.  Nothing to do. ContainerID="76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056" iface="eth0" netns="/var/run/netns/cni-b961a9a4-b734-1b64-f697-66ff8294a7bc"
Jul  2 07:10:08.287579 containerd[1481]: 2024-07-02 07:10:08.210 [INFO][4495] k8s.go 615: Releasing IP address(es) ContainerID="76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056"
Jul  2 07:10:08.287579 containerd[1481]: 2024-07-02 07:10:08.210 [INFO][4495] utils.go 188: Calico CNI releasing IP address ContainerID="76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056"
Jul  2 07:10:08.287579 containerd[1481]: 2024-07-02 07:10:08.262 [INFO][4511] ipam_plugin.go 411: Releasing address using handleID ContainerID="76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056" HandleID="k8s-pod-network.76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056" Workload="ci--3815.2.5--a--b9d6671d68-k8s-csi--node--driver--zhl5r-eth0"
Jul  2 07:10:08.287579 containerd[1481]: 2024-07-02 07:10:08.262 [INFO][4511] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul  2 07:10:08.287579 containerd[1481]: 2024-07-02 07:10:08.262 [INFO][4511] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul  2 07:10:08.287579 containerd[1481]: 2024-07-02 07:10:08.275 [WARNING][4511] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056" HandleID="k8s-pod-network.76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056" Workload="ci--3815.2.5--a--b9d6671d68-k8s-csi--node--driver--zhl5r-eth0"
Jul  2 07:10:08.287579 containerd[1481]: 2024-07-02 07:10:08.276 [INFO][4511] ipam_plugin.go 439: Releasing address using workloadID ContainerID="76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056" HandleID="k8s-pod-network.76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056" Workload="ci--3815.2.5--a--b9d6671d68-k8s-csi--node--driver--zhl5r-eth0"
Jul  2 07:10:08.287579 containerd[1481]: 2024-07-02 07:10:08.279 [INFO][4511] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul  2 07:10:08.287579 containerd[1481]: 2024-07-02 07:10:08.280 [INFO][4495] k8s.go 621: Teardown processing complete. ContainerID="76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056"
Jul  2 07:10:08.286196 systemd[1]: run-netns-cni\x2db961a9a4\x2db734\x2d1b64\x2df697\x2d66ff8294a7bc.mount: Deactivated successfully.
Jul  2 07:10:08.288607 containerd[1481]: time="2024-07-02T07:10:08.288090581Z" level=info msg="TearDown network for sandbox \"76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056\" successfully"
Jul  2 07:10:08.288607 containerd[1481]: time="2024-07-02T07:10:08.288139180Z" level=info msg="StopPodSandbox for \"76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056\" returns successfully"
Jul  2 07:10:08.289212 containerd[1481]: time="2024-07-02T07:10:08.289168979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zhl5r,Uid:df40d4db-1fad-4103-96f4-e2848ac4f551,Namespace:calico-system,Attempt:1,}"
Jul  2 07:10:08.570782 systemd-networkd[1236]: cali592e1486d9f: Link UP
Jul  2 07:10:08.577368 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Jul  2 07:10:08.578975 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali592e1486d9f: link becomes ready
Jul  2 07:10:08.582964 systemd-networkd[1236]: cali592e1486d9f: Gained carrier
Jul  2 07:10:08.618228 containerd[1481]: 2024-07-02 07:10:08.442 [INFO][4533] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815.2.5--a--b9d6671d68-k8s-csi--node--driver--zhl5r-eth0 csi-node-driver- calico-system  df40d4db-1fad-4103-96f4-e2848ac4f551 749 0 2024-07-02 07:09:26 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6cc9df58f4 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s  ci-3815.2.5-a-b9d6671d68  csi-node-driver-zhl5r eth0 default [] []   [kns.calico-system ksa.calico-system.default] cali592e1486d9f  [] []}} ContainerID="9a29274ff7c029eaa6d3c7bfb367c89d29e8add417ea3f5a4d6c51a4fa56bae4" Namespace="calico-system" Pod="csi-node-driver-zhl5r" WorkloadEndpoint="ci--3815.2.5--a--b9d6671d68-k8s-csi--node--driver--zhl5r-"
Jul  2 07:10:08.618228 containerd[1481]: 2024-07-02 07:10:08.442 [INFO][4533] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9a29274ff7c029eaa6d3c7bfb367c89d29e8add417ea3f5a4d6c51a4fa56bae4" Namespace="calico-system" Pod="csi-node-driver-zhl5r" WorkloadEndpoint="ci--3815.2.5--a--b9d6671d68-k8s-csi--node--driver--zhl5r-eth0"
Jul  2 07:10:08.618228 containerd[1481]: 2024-07-02 07:10:08.504 [INFO][4547] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9a29274ff7c029eaa6d3c7bfb367c89d29e8add417ea3f5a4d6c51a4fa56bae4" HandleID="k8s-pod-network.9a29274ff7c029eaa6d3c7bfb367c89d29e8add417ea3f5a4d6c51a4fa56bae4" Workload="ci--3815.2.5--a--b9d6671d68-k8s-csi--node--driver--zhl5r-eth0"
Jul  2 07:10:08.618228 containerd[1481]: 2024-07-02 07:10:08.519 [INFO][4547] ipam_plugin.go 264: Auto assigning IP ContainerID="9a29274ff7c029eaa6d3c7bfb367c89d29e8add417ea3f5a4d6c51a4fa56bae4" HandleID="k8s-pod-network.9a29274ff7c029eaa6d3c7bfb367c89d29e8add417ea3f5a4d6c51a4fa56bae4" Workload="ci--3815.2.5--a--b9d6671d68-k8s-csi--node--driver--zhl5r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00026fed0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3815.2.5-a-b9d6671d68", "pod":"csi-node-driver-zhl5r", "timestamp":"2024-07-02 07:10:08.504717887 +0000 UTC"}, Hostname:"ci-3815.2.5-a-b9d6671d68", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul  2 07:10:08.618228 containerd[1481]: 2024-07-02 07:10:08.519 [INFO][4547] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul  2 07:10:08.618228 containerd[1481]: 2024-07-02 07:10:08.519 [INFO][4547] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul  2 07:10:08.618228 containerd[1481]: 2024-07-02 07:10:08.519 [INFO][4547] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815.2.5-a-b9d6671d68'
Jul  2 07:10:08.618228 containerd[1481]: 2024-07-02 07:10:08.524 [INFO][4547] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9a29274ff7c029eaa6d3c7bfb367c89d29e8add417ea3f5a4d6c51a4fa56bae4" host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:08.618228 containerd[1481]: 2024-07-02 07:10:08.532 [INFO][4547] ipam.go 372: Looking up existing affinities for host host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:08.618228 containerd[1481]: 2024-07-02 07:10:08.545 [INFO][4547] ipam.go 489: Trying affinity for 192.168.60.128/26 host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:08.618228 containerd[1481]: 2024-07-02 07:10:08.547 [INFO][4547] ipam.go 155: Attempting to load block cidr=192.168.60.128/26 host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:08.618228 containerd[1481]: 2024-07-02 07:10:08.550 [INFO][4547] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.60.128/26 host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:08.618228 containerd[1481]: 2024-07-02 07:10:08.551 [INFO][4547] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.60.128/26 handle="k8s-pod-network.9a29274ff7c029eaa6d3c7bfb367c89d29e8add417ea3f5a4d6c51a4fa56bae4" host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:08.618228 containerd[1481]: 2024-07-02 07:10:08.553 [INFO][4547] ipam.go 1685: Creating new handle: k8s-pod-network.9a29274ff7c029eaa6d3c7bfb367c89d29e8add417ea3f5a4d6c51a4fa56bae4
Jul  2 07:10:08.618228 containerd[1481]: 2024-07-02 07:10:08.557 [INFO][4547] ipam.go 1203: Writing block in order to claim IPs block=192.168.60.128/26 handle="k8s-pod-network.9a29274ff7c029eaa6d3c7bfb367c89d29e8add417ea3f5a4d6c51a4fa56bae4" host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:08.618228 containerd[1481]: 2024-07-02 07:10:08.563 [INFO][4547] ipam.go 1216: Successfully claimed IPs: [192.168.60.131/26] block=192.168.60.128/26 handle="k8s-pod-network.9a29274ff7c029eaa6d3c7bfb367c89d29e8add417ea3f5a4d6c51a4fa56bae4" host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:08.618228 containerd[1481]: 2024-07-02 07:10:08.563 [INFO][4547] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.60.131/26] handle="k8s-pod-network.9a29274ff7c029eaa6d3c7bfb367c89d29e8add417ea3f5a4d6c51a4fa56bae4" host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:08.618228 containerd[1481]: 2024-07-02 07:10:08.564 [INFO][4547] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul  2 07:10:08.618228 containerd[1481]: 2024-07-02 07:10:08.564 [INFO][4547] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.60.131/26] IPv6=[] ContainerID="9a29274ff7c029eaa6d3c7bfb367c89d29e8add417ea3f5a4d6c51a4fa56bae4" HandleID="k8s-pod-network.9a29274ff7c029eaa6d3c7bfb367c89d29e8add417ea3f5a4d6c51a4fa56bae4" Workload="ci--3815.2.5--a--b9d6671d68-k8s-csi--node--driver--zhl5r-eth0"
Jul  2 07:10:08.618999 containerd[1481]: 2024-07-02 07:10:08.566 [INFO][4533] k8s.go 386: Populated endpoint ContainerID="9a29274ff7c029eaa6d3c7bfb367c89d29e8add417ea3f5a4d6c51a4fa56bae4" Namespace="calico-system" Pod="csi-node-driver-zhl5r" WorkloadEndpoint="ci--3815.2.5--a--b9d6671d68-k8s-csi--node--driver--zhl5r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.5--a--b9d6671d68-k8s-csi--node--driver--zhl5r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"df40d4db-1fad-4103-96f4-e2848ac4f551", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 9, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.5-a-b9d6671d68", ContainerID:"", Pod:"csi-node-driver-zhl5r", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.60.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali592e1486d9f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul  2 07:10:08.618999 containerd[1481]: 2024-07-02 07:10:08.566 [INFO][4533] k8s.go 387: Calico CNI using IPs: [192.168.60.131/32] ContainerID="9a29274ff7c029eaa6d3c7bfb367c89d29e8add417ea3f5a4d6c51a4fa56bae4" Namespace="calico-system" Pod="csi-node-driver-zhl5r" WorkloadEndpoint="ci--3815.2.5--a--b9d6671d68-k8s-csi--node--driver--zhl5r-eth0"
Jul  2 07:10:08.618999 containerd[1481]: 2024-07-02 07:10:08.566 [INFO][4533] dataplane_linux.go 68: Setting the host side veth name to cali592e1486d9f ContainerID="9a29274ff7c029eaa6d3c7bfb367c89d29e8add417ea3f5a4d6c51a4fa56bae4" Namespace="calico-system" Pod="csi-node-driver-zhl5r" WorkloadEndpoint="ci--3815.2.5--a--b9d6671d68-k8s-csi--node--driver--zhl5r-eth0"
Jul  2 07:10:08.618999 containerd[1481]: 2024-07-02 07:10:08.586 [INFO][4533] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="9a29274ff7c029eaa6d3c7bfb367c89d29e8add417ea3f5a4d6c51a4fa56bae4" Namespace="calico-system" Pod="csi-node-driver-zhl5r" WorkloadEndpoint="ci--3815.2.5--a--b9d6671d68-k8s-csi--node--driver--zhl5r-eth0"
Jul  2 07:10:08.618999 containerd[1481]: 2024-07-02 07:10:08.588 [INFO][4533] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9a29274ff7c029eaa6d3c7bfb367c89d29e8add417ea3f5a4d6c51a4fa56bae4" Namespace="calico-system" Pod="csi-node-driver-zhl5r" WorkloadEndpoint="ci--3815.2.5--a--b9d6671d68-k8s-csi--node--driver--zhl5r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.5--a--b9d6671d68-k8s-csi--node--driver--zhl5r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"df40d4db-1fad-4103-96f4-e2848ac4f551", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 9, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.5-a-b9d6671d68", ContainerID:"9a29274ff7c029eaa6d3c7bfb367c89d29e8add417ea3f5a4d6c51a4fa56bae4", Pod:"csi-node-driver-zhl5r", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.60.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali592e1486d9f", MAC:"da:6f:43:bf:a1:26", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul  2 07:10:08.618999 containerd[1481]: 2024-07-02 07:10:08.617 [INFO][4533] k8s.go 500: Wrote updated endpoint to datastore ContainerID="9a29274ff7c029eaa6d3c7bfb367c89d29e8add417ea3f5a4d6c51a4fa56bae4" Namespace="calico-system" Pod="csi-node-driver-zhl5r" WorkloadEndpoint="ci--3815.2.5--a--b9d6671d68-k8s-csi--node--driver--zhl5r-eth0"
Jul  2 07:10:08.662900 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali26d4784548c: link becomes ready
Jul  2 07:10:08.663458 systemd-networkd[1236]: cali26d4784548c: Link UP
Jul  2 07:10:08.664683 systemd-networkd[1236]: cali26d4784548c: Gained carrier
Jul  2 07:10:08.695204 containerd[1481]: 2024-07-02 07:10:08.444 [INFO][4522] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815.2.5--a--b9d6671d68-k8s-coredns--7db6d8ff4d--cql52-eth0 coredns-7db6d8ff4d- kube-system  a1d12bc4-6d1b-42f2-bbef-a982b18d5205 748 0 2024-07-02 07:09:18 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s  ci-3815.2.5-a-b9d6671d68  coredns-7db6d8ff4d-cql52 eth0 coredns [] []   [kns.kube-system ksa.kube-system.coredns] cali26d4784548c  [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="30324bb564744c820c3bbc7c3ea436ac80a8eb485b26d08782260a3dc19397d3" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cql52" WorkloadEndpoint="ci--3815.2.5--a--b9d6671d68-k8s-coredns--7db6d8ff4d--cql52-"
Jul  2 07:10:08.695204 containerd[1481]: 2024-07-02 07:10:08.444 [INFO][4522] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="30324bb564744c820c3bbc7c3ea436ac80a8eb485b26d08782260a3dc19397d3" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cql52" WorkloadEndpoint="ci--3815.2.5--a--b9d6671d68-k8s-coredns--7db6d8ff4d--cql52-eth0"
Jul  2 07:10:08.695204 containerd[1481]: 2024-07-02 07:10:08.507 [INFO][4548] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="30324bb564744c820c3bbc7c3ea436ac80a8eb485b26d08782260a3dc19397d3" HandleID="k8s-pod-network.30324bb564744c820c3bbc7c3ea436ac80a8eb485b26d08782260a3dc19397d3" Workload="ci--3815.2.5--a--b9d6671d68-k8s-coredns--7db6d8ff4d--cql52-eth0"
Jul  2 07:10:08.695204 containerd[1481]: 2024-07-02 07:10:08.519 [INFO][4548] ipam_plugin.go 264: Auto assigning IP ContainerID="30324bb564744c820c3bbc7c3ea436ac80a8eb485b26d08782260a3dc19397d3" HandleID="k8s-pod-network.30324bb564744c820c3bbc7c3ea436ac80a8eb485b26d08782260a3dc19397d3" Workload="ci--3815.2.5--a--b9d6671d68-k8s-coredns--7db6d8ff4d--cql52-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e5e10), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3815.2.5-a-b9d6671d68", "pod":"coredns-7db6d8ff4d-cql52", "timestamp":"2024-07-02 07:10:08.507658281 +0000 UTC"}, Hostname:"ci-3815.2.5-a-b9d6671d68", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul  2 07:10:08.695204 containerd[1481]: 2024-07-02 07:10:08.519 [INFO][4548] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul  2 07:10:08.695204 containerd[1481]: 2024-07-02 07:10:08.563 [INFO][4548] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul  2 07:10:08.695204 containerd[1481]: 2024-07-02 07:10:08.563 [INFO][4548] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815.2.5-a-b9d6671d68'
Jul  2 07:10:08.695204 containerd[1481]: 2024-07-02 07:10:08.582 [INFO][4548] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.30324bb564744c820c3bbc7c3ea436ac80a8eb485b26d08782260a3dc19397d3" host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:08.695204 containerd[1481]: 2024-07-02 07:10:08.591 [INFO][4548] ipam.go 372: Looking up existing affinities for host host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:08.695204 containerd[1481]: 2024-07-02 07:10:08.601 [INFO][4548] ipam.go 489: Trying affinity for 192.168.60.128/26 host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:08.695204 containerd[1481]: 2024-07-02 07:10:08.605 [INFO][4548] ipam.go 155: Attempting to load block cidr=192.168.60.128/26 host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:08.695204 containerd[1481]: 2024-07-02 07:10:08.615 [INFO][4548] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.60.128/26 host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:08.695204 containerd[1481]: 2024-07-02 07:10:08.615 [INFO][4548] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.60.128/26 handle="k8s-pod-network.30324bb564744c820c3bbc7c3ea436ac80a8eb485b26d08782260a3dc19397d3" host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:08.695204 containerd[1481]: 2024-07-02 07:10:08.621 [INFO][4548] ipam.go 1685: Creating new handle: k8s-pod-network.30324bb564744c820c3bbc7c3ea436ac80a8eb485b26d08782260a3dc19397d3
Jul  2 07:10:08.695204 containerd[1481]: 2024-07-02 07:10:08.631 [INFO][4548] ipam.go 1203: Writing block in order to claim IPs block=192.168.60.128/26 handle="k8s-pod-network.30324bb564744c820c3bbc7c3ea436ac80a8eb485b26d08782260a3dc19397d3" host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:08.695204 containerd[1481]: 2024-07-02 07:10:08.642 [INFO][4548] ipam.go 1216: Successfully claimed IPs: [192.168.60.132/26] block=192.168.60.128/26 handle="k8s-pod-network.30324bb564744c820c3bbc7c3ea436ac80a8eb485b26d08782260a3dc19397d3" host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:08.695204 containerd[1481]: 2024-07-02 07:10:08.642 [INFO][4548] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.60.132/26] handle="k8s-pod-network.30324bb564744c820c3bbc7c3ea436ac80a8eb485b26d08782260a3dc19397d3" host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:08.695204 containerd[1481]: 2024-07-02 07:10:08.643 [INFO][4548] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul  2 07:10:08.695204 containerd[1481]: 2024-07-02 07:10:08.643 [INFO][4548] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.60.132/26] IPv6=[] ContainerID="30324bb564744c820c3bbc7c3ea436ac80a8eb485b26d08782260a3dc19397d3" HandleID="k8s-pod-network.30324bb564744c820c3bbc7c3ea436ac80a8eb485b26d08782260a3dc19397d3" Workload="ci--3815.2.5--a--b9d6671d68-k8s-coredns--7db6d8ff4d--cql52-eth0"
Jul  2 07:10:08.696295 containerd[1481]: 2024-07-02 07:10:08.645 [INFO][4522] k8s.go 386: Populated endpoint ContainerID="30324bb564744c820c3bbc7c3ea436ac80a8eb485b26d08782260a3dc19397d3" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cql52" WorkloadEndpoint="ci--3815.2.5--a--b9d6671d68-k8s-coredns--7db6d8ff4d--cql52-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.5--a--b9d6671d68-k8s-coredns--7db6d8ff4d--cql52-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a1d12bc4-6d1b-42f2-bbef-a982b18d5205", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 9, 18, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.5-a-b9d6671d68", ContainerID:"", Pod:"coredns-7db6d8ff4d-cql52", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali26d4784548c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul  2 07:10:08.696295 containerd[1481]: 2024-07-02 07:10:08.645 [INFO][4522] k8s.go 387: Calico CNI using IPs: [192.168.60.132/32] ContainerID="30324bb564744c820c3bbc7c3ea436ac80a8eb485b26d08782260a3dc19397d3" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cql52" WorkloadEndpoint="ci--3815.2.5--a--b9d6671d68-k8s-coredns--7db6d8ff4d--cql52-eth0"
Jul  2 07:10:08.696295 containerd[1481]: 2024-07-02 07:10:08.645 [INFO][4522] dataplane_linux.go 68: Setting the host side veth name to cali26d4784548c ContainerID="30324bb564744c820c3bbc7c3ea436ac80a8eb485b26d08782260a3dc19397d3" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cql52" WorkloadEndpoint="ci--3815.2.5--a--b9d6671d68-k8s-coredns--7db6d8ff4d--cql52-eth0"
Jul  2 07:10:08.696295 containerd[1481]: 2024-07-02 07:10:08.666 [INFO][4522] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="30324bb564744c820c3bbc7c3ea436ac80a8eb485b26d08782260a3dc19397d3" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cql52" WorkloadEndpoint="ci--3815.2.5--a--b9d6671d68-k8s-coredns--7db6d8ff4d--cql52-eth0"
Jul  2 07:10:08.696295 containerd[1481]: 2024-07-02 07:10:08.666 [INFO][4522] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="30324bb564744c820c3bbc7c3ea436ac80a8eb485b26d08782260a3dc19397d3" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cql52" WorkloadEndpoint="ci--3815.2.5--a--b9d6671d68-k8s-coredns--7db6d8ff4d--cql52-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.5--a--b9d6671d68-k8s-coredns--7db6d8ff4d--cql52-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a1d12bc4-6d1b-42f2-bbef-a982b18d5205", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 9, 18, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.5-a-b9d6671d68", ContainerID:"30324bb564744c820c3bbc7c3ea436ac80a8eb485b26d08782260a3dc19397d3", Pod:"coredns-7db6d8ff4d-cql52", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali26d4784548c", MAC:"4e:96:3b:bc:64:c2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul  2 07:10:08.696295 containerd[1481]: 2024-07-02 07:10:08.693 [INFO][4522] k8s.go 500: Wrote updated endpoint to datastore ContainerID="30324bb564744c820c3bbc7c3ea436ac80a8eb485b26d08782260a3dc19397d3" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cql52" WorkloadEndpoint="ci--3815.2.5--a--b9d6671d68-k8s-coredns--7db6d8ff4d--cql52-eth0"
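The /32 addresses handed out so far (192.168.60.130 for calico-kube-controllers, 192.168.60.131 for csi-node-driver, 192.168.60.132 for coredns) all come from the host-affine block 192.168.60.128/26 that ipam.go loads above. A quick sanity check of that containment, sketched in Python with only the standard library (illustrative, not part of the log):

    import ipaddress

    # Host-affine IPAM block reported above and the /32s assigned to the three pods.
    block = ipaddress.ip_network("192.168.60.128/26")
    assigned = ["192.168.60.130", "192.168.60.131", "192.168.60.132"]
    print(all(ipaddress.ip_address(a) in block for a in assigned))  # True
    print(block.num_addresses)  # 64 addresses available per affinity block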
Jul  2 07:10:08.708127 containerd[1481]: time="2024-07-02T07:10:08.705521522Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul  2 07:10:08.708127 containerd[1481]: time="2024-07-02T07:10:08.705602322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul  2 07:10:08.708127 containerd[1481]: time="2024-07-02T07:10:08.705629822Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul  2 07:10:08.708127 containerd[1481]: time="2024-07-02T07:10:08.705649422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul  2 07:10:08.735292 systemd[1]: Started cri-containerd-9a29274ff7c029eaa6d3c7bfb367c89d29e8add417ea3f5a4d6c51a4fa56bae4.scope - libcontainer container 9a29274ff7c029eaa6d3c7bfb367c89d29e8add417ea3f5a4d6c51a4fa56bae4.
Jul  2 07:10:08.738000 audit[4601]: NETFILTER_CFG table=filter:109 family=2 entries=38 op=nft_register_chain pid=4601 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re"
Jul  2 07:10:08.738000 audit[4601]: SYSCALL arch=c000003e syscall=46 success=yes exit=19828 a0=3 a1=7ffe3aafa1a0 a2=0 a3=7ffe3aafa18c items=0 ppid=4165 pid=4601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:08.738000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030
Jul  2 07:10:08.762000 audit: BPF prog-id=182 op=LOAD
Jul  2 07:10:08.763000 audit: BPF prog-id=183 op=LOAD
Jul  2 07:10:08.763000 audit[4600]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=4585 pid=4600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:08.763000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961323932373466663763303239656161366433633762666233363763
Jul  2 07:10:08.763000 audit: BPF prog-id=184 op=LOAD
Jul  2 07:10:08.763000 audit[4600]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=4585 pid=4600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:08.763000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961323932373466663763303239656161366433633762666233363763
Jul  2 07:10:08.764000 audit: BPF prog-id=184 op=UNLOAD
Jul  2 07:10:08.764000 audit: BPF prog-id=183 op=UNLOAD
Jul  2 07:10:08.764000 audit: BPF prog-id=185 op=LOAD
Jul  2 07:10:08.764000 audit[4600]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=4585 pid=4600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:08.764000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961323932373466663763303239656161366433633762666233363763
Jul  2 07:10:08.776304 containerd[1481]: time="2024-07-02T07:10:08.776172193Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul  2 07:10:08.776473 containerd[1481]: time="2024-07-02T07:10:08.776340293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul  2 07:10:08.776473 containerd[1481]: time="2024-07-02T07:10:08.776385893Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul  2 07:10:08.776473 containerd[1481]: time="2024-07-02T07:10:08.776419093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul  2 07:10:08.804687 systemd[1]: Started cri-containerd-30324bb564744c820c3bbc7c3ea436ac80a8eb485b26d08782260a3dc19397d3.scope - libcontainer container 30324bb564744c820c3bbc7c3ea436ac80a8eb485b26d08782260a3dc19397d3.
Jul  2 07:10:08.812931 containerd[1481]: time="2024-07-02T07:10:08.812878727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zhl5r,Uid:df40d4db-1fad-4103-96f4-e2848ac4f551,Namespace:calico-system,Attempt:1,} returns sandbox id \"9a29274ff7c029eaa6d3c7bfb367c89d29e8add417ea3f5a4d6c51a4fa56bae4\""
Jul  2 07:10:08.819000 audit: BPF prog-id=186 op=LOAD
Jul  2 07:10:08.820000 audit: BPF prog-id=187 op=LOAD
Jul  2 07:10:08.820000 audit[4641]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=4631 pid=4641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:08.820000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330333234626235363437343463383230633362626337633365613433
Jul  2 07:10:08.820000 audit: BPF prog-id=188 op=LOAD
Jul  2 07:10:08.820000 audit[4641]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=4631 pid=4641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:08.820000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330333234626235363437343463383230633362626337633365613433
Jul  2 07:10:08.820000 audit: BPF prog-id=188 op=UNLOAD
Jul  2 07:10:08.820000 audit: BPF prog-id=187 op=UNLOAD
Jul  2 07:10:08.820000 audit: BPF prog-id=189 op=LOAD
Jul  2 07:10:08.820000 audit[4641]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=4631 pid=4641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:08.820000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330333234626235363437343463383230633362626337633365613433
Jul  2 07:10:08.827000 audit[4656]: NETFILTER_CFG table=filter:110 family=2 entries=38 op=nft_register_chain pid=4656 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re"
Jul  2 07:10:08.827000 audit[4656]: SYSCALL arch=c000003e syscall=46 success=yes exit=19408 a0=3 a1=7ffe1e664710 a2=0 a3=7ffe1e6646fc items=0 ppid=4165 pid=4656 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:08.827000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030
Jul  2 07:10:08.857335 containerd[1481]: time="2024-07-02T07:10:08.857291146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cql52,Uid:a1d12bc4-6d1b-42f2-bbef-a982b18d5205,Namespace:kube-system,Attempt:1,} returns sandbox id \"30324bb564744c820c3bbc7c3ea436ac80a8eb485b26d08782260a3dc19397d3\""
Jul  2 07:10:08.861293 containerd[1481]: time="2024-07-02T07:10:08.861239439Z" level=info msg="CreateContainer within sandbox \"30324bb564744c820c3bbc7c3ea436ac80a8eb485b26d08782260a3dc19397d3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul  2 07:10:08.868725 systemd-networkd[1236]: caliae2094b87c7: Gained IPv6LL
Jul  2 07:10:08.906360 containerd[1481]: time="2024-07-02T07:10:08.906297957Z" level=info msg="CreateContainer within sandbox \"30324bb564744c820c3bbc7c3ea436ac80a8eb485b26d08782260a3dc19397d3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"981883b3dd33d200a8b64a329414d0caf9b7b66b2d56c5205e382a93ef1e6aa6\""
Jul  2 07:10:08.908202 containerd[1481]: time="2024-07-02T07:10:08.908160653Z" level=info msg="StartContainer for \"981883b3dd33d200a8b64a329414d0caf9b7b66b2d56c5205e382a93ef1e6aa6\""
Jul  2 07:10:08.937154 systemd[1]: Started cri-containerd-981883b3dd33d200a8b64a329414d0caf9b7b66b2d56c5205e382a93ef1e6aa6.scope - libcontainer container 981883b3dd33d200a8b64a329414d0caf9b7b66b2d56c5205e382a93ef1e6aa6.
Jul  2 07:10:08.948000 audit: BPF prog-id=190 op=LOAD
Jul  2 07:10:08.949000 audit: BPF prog-id=191 op=LOAD
Jul  2 07:10:08.949000 audit[4680]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=4631 pid=4680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:08.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938313838336233646433336432303061386236346133323934313464
Jul  2 07:10:08.949000 audit: BPF prog-id=192 op=LOAD
Jul  2 07:10:08.949000 audit[4680]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=4631 pid=4680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:08.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938313838336233646433336432303061386236346133323934313464
Jul  2 07:10:08.949000 audit: BPF prog-id=192 op=UNLOAD
Jul  2 07:10:08.949000 audit: BPF prog-id=191 op=UNLOAD
Jul  2 07:10:08.949000 audit: BPF prog-id=193 op=LOAD
Jul  2 07:10:08.949000 audit[4680]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=4631 pid=4680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:08.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938313838336233646433336432303061386236346133323934313464
Jul  2 07:10:08.972719 containerd[1481]: time="2024-07-02T07:10:08.972673436Z" level=info msg="StartContainer for \"981883b3dd33d200a8b64a329414d0caf9b7b66b2d56c5205e382a93ef1e6aa6\" returns successfully"
Jul  2 07:10:09.124206 systemd-networkd[1236]: vxlan.calico: Gained IPv6LL
Jul  2 07:10:09.416284 kubelet[2922]: I0702 07:10:09.416213    2922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-cql52" podStartSLOduration=51.416188362 podStartE2EDuration="51.416188362s" podCreationTimestamp="2024-07-02 07:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:10:09.407208478 +0000 UTC m=+67.400277757" watchObservedRunningTime="2024-07-02 07:10:09.416188362 +0000 UTC m=+67.409257641"
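In the record above, podStartE2EDuration (51.416188362s) is watchObservedRunningTime minus podCreationTimestamp, and the zero-valued firstStartedPulling/lastFinishedPulling fields suggest the coredns image did not need to be pulled. A re-check of that arithmetic in Python (illustrative, timestamps truncated to microseconds, not part of the log):

    from datetime import datetime, timezone

    created  = datetime(2024, 7, 2, 7, 9, 18, tzinfo=timezone.utc)          # podCreationTimestamp
    observed = datetime(2024, 7, 2, 7, 10, 9, 416188, tzinfo=timezone.utc)  # watchObservedRunningTime
    print((observed - created).total_seconds())  # 51.416188, matching the reported E2E duration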
Jul  2 07:10:09.459000 audit[4711]: NETFILTER_CFG table=filter:111 family=2 entries=8 op=nft_register_rule pid=4711 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul  2 07:10:09.459000 audit[4711]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffc1e4b50d0 a2=0 a3=7ffc1e4b50bc items=0 ppid=3058 pid=4711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:09.459000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul  2 07:10:09.460000 audit[4711]: NETFILTER_CFG table=nat:112 family=2 entries=44 op=nft_register_rule pid=4711 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul  2 07:10:09.460000 audit[4711]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc1e4b50d0 a2=0 a3=7ffc1e4b50bc items=0 ppid=3058 pid=4711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:09.460000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul  2 07:10:09.469000 audit[4713]: NETFILTER_CFG table=filter:113 family=2 entries=8 op=nft_register_rule pid=4713 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul  2 07:10:09.469000 audit[4713]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffdc516f040 a2=0 a3=7ffdc516f02c items=0 ppid=3058 pid=4713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:09.469000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul  2 07:10:09.475000 audit[4713]: NETFILTER_CFG table=nat:114 family=2 entries=56 op=nft_register_chain pid=4713 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul  2 07:10:09.475000 audit[4713]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffdc516f040 a2=0 a3=7ffdc516f02c items=0 ppid=3058 pid=4713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:09.475000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul  2 07:10:10.148148 systemd-networkd[1236]: cali26d4784548c: Gained IPv6LL
Jul  2 07:10:10.212185 systemd-networkd[1236]: cali592e1486d9f: Gained IPv6LL
Jul  2 07:10:10.794590 containerd[1481]: time="2024-07-02T07:10:10.794518924Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:10:10.797752 containerd[1481]: time="2024-07-02T07:10:10.797677119Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793"
Jul  2 07:10:10.805360 containerd[1481]: time="2024-07-02T07:10:10.805307006Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:10:10.810729 containerd[1481]: time="2024-07-02T07:10:10.810667298Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:10:10.816143 containerd[1481]: time="2024-07-02T07:10:10.816080289Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:10:10.816975 containerd[1481]: time="2024-07-02T07:10:10.816922387Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 3.23489989s"
Jul  2 07:10:10.817773 containerd[1481]: time="2024-07-02T07:10:10.817736986Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\""
Jul  2 07:10:10.822889 containerd[1481]: time="2024-07-02T07:10:10.819798282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\""
Jul  2 07:10:10.842261 containerd[1481]: time="2024-07-02T07:10:10.842044645Z" level=info msg="CreateContainer within sandbox \"c18ca54b3eafb6fb771f8846195301fc241cfb0a952c5832048566bd2db6ba9a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Jul  2 07:10:10.895460 containerd[1481]: time="2024-07-02T07:10:10.895393857Z" level=info msg="CreateContainer within sandbox \"c18ca54b3eafb6fb771f8846195301fc241cfb0a952c5832048566bd2db6ba9a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a899ddb955bd1e73007bf66cdb2a1064e53eb4fa246cee1e59ea2dc85a142fed\""
Jul  2 07:10:10.896196 containerd[1481]: time="2024-07-02T07:10:10.896158255Z" level=info msg="StartContainer for \"a899ddb955bd1e73007bf66cdb2a1064e53eb4fa246cee1e59ea2dc85a142fed\""
Jul  2 07:10:10.945110 systemd[1]: Started cri-containerd-a899ddb955bd1e73007bf66cdb2a1064e53eb4fa246cee1e59ea2dc85a142fed.scope - libcontainer container a899ddb955bd1e73007bf66cdb2a1064e53eb4fa246cee1e59ea2dc85a142fed.
Jul  2 07:10:10.963973 kernel: kauditd_printk_skb: 169 callbacks suppressed
Jul  2 07:10:10.964125 kernel: audit: type=1334 audit(1719904210.960:611): prog-id=194 op=LOAD
Jul  2 07:10:10.960000 audit: BPF prog-id=194 op=LOAD
Jul  2 07:10:10.972853 kernel: audit: type=1334 audit(1719904210.963:612): prog-id=195 op=LOAD
Jul  2 07:10:10.963000 audit: BPF prog-id=195 op=LOAD
Jul  2 07:10:10.963000 audit[4729]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000139988 a2=78 a3=0 items=0 ppid=4428 pid=4729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:10.963000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138393964646239353562643165373330303762663636636462326131
Jul  2 07:10:10.998493 kernel: audit: type=1300 audit(1719904210.963:612): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000139988 a2=78 a3=0 items=0 ppid=4428 pid=4729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:10.998700 kernel: audit: type=1327 audit(1719904210.963:612): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138393964646239353562643165373330303762663636636462326131
Jul  2 07:10:10.963000 audit: BPF prog-id=196 op=LOAD
Jul  2 07:10:11.033924 kernel: audit: type=1334 audit(1719904210.963:613): prog-id=196 op=LOAD
Jul  2 07:10:11.034092 kernel: audit: type=1300 audit(1719904210.963:613): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000139720 a2=78 a3=0 items=0 ppid=4428 pid=4729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:10.963000 audit[4729]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000139720 a2=78 a3=0 items=0 ppid=4428 pid=4729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:10.963000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138393964646239353562643165373330303762663636636462326131
Jul  2 07:10:10.963000 audit: BPF prog-id=196 op=UNLOAD
Jul  2 07:10:11.078965 kernel: audit: type=1327 audit(1719904210.963:613): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138393964646239353562643165373330303762663636636462326131
Jul  2 07:10:11.079105 kernel: audit: type=1334 audit(1719904210.963:614): prog-id=196 op=UNLOAD
Jul  2 07:10:11.079136 containerd[1481]: time="2024-07-02T07:10:11.078591558Z" level=info msg="StartContainer for \"a899ddb955bd1e73007bf66cdb2a1064e53eb4fa246cee1e59ea2dc85a142fed\" returns successfully"
Jul  2 07:10:11.084351 kernel: audit: type=1334 audit(1719904210.963:615): prog-id=195 op=UNLOAD
Jul  2 07:10:10.963000 audit: BPF prog-id=195 op=UNLOAD
Jul  2 07:10:11.087977 kernel: audit: type=1334 audit(1719904210.963:616): prog-id=197 op=LOAD
Jul  2 07:10:10.963000 audit: BPF prog-id=197 op=LOAD
Jul  2 07:10:10.963000 audit[4729]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000139be0 a2=78 a3=0 items=0 ppid=4428 pid=4729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:10.963000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138393964646239353562643165373330303762663636636462326131
Jul  2 07:10:11.498682 kubelet[2922]: I0702 07:10:11.498591    2922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7c9974ff94-rnszf" podStartSLOduration=42.260880705 podStartE2EDuration="45.49856189s" podCreationTimestamp="2024-07-02 07:09:26 +0000 UTC" firstStartedPulling="2024-07-02 07:10:07.581173399 +0000 UTC m=+65.574242678" lastFinishedPulling="2024-07-02 07:10:10.818854584 +0000 UTC m=+68.811923863" observedRunningTime="2024-07-02 07:10:11.427779403 +0000 UTC m=+69.420848682" watchObservedRunningTime="2024-07-02 07:10:11.49856189 +0000 UTC m=+69.491631169"
Jul  2 07:10:12.787789 containerd[1481]: time="2024-07-02T07:10:12.787729701Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:10:12.793092 containerd[1481]: time="2024-07-02T07:10:12.793016592Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062"
Jul  2 07:10:12.803053 containerd[1481]: time="2024-07-02T07:10:12.802989177Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:10:12.812430 containerd[1481]: time="2024-07-02T07:10:12.812362763Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:10:12.822510 containerd[1481]: time="2024-07-02T07:10:12.822457848Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:10:12.823448 containerd[1481]: time="2024-07-02T07:10:12.823400646Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 2.003559264s"
Jul  2 07:10:12.823646 containerd[1481]: time="2024-07-02T07:10:12.823619346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\""
Jul  2 07:10:12.829505 containerd[1481]: time="2024-07-02T07:10:12.829457837Z" level=info msg="CreateContainer within sandbox \"9a29274ff7c029eaa6d3c7bfb367c89d29e8add417ea3f5a4d6c51a4fa56bae4\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Jul  2 07:10:12.954183 containerd[1481]: time="2024-07-02T07:10:12.954117248Z" level=info msg="CreateContainer within sandbox \"9a29274ff7c029eaa6d3c7bfb367c89d29e8add417ea3f5a4d6c51a4fa56bae4\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b6989f58cba8451a09b82151d3965de241877051ac47a06e82febdfca2b4ae27\""
Jul  2 07:10:12.955240 containerd[1481]: time="2024-07-02T07:10:12.955196147Z" level=info msg="StartContainer for \"b6989f58cba8451a09b82151d3965de241877051ac47a06e82febdfca2b4ae27\""
Jul  2 07:10:13.002094 systemd[1]: Started cri-containerd-b6989f58cba8451a09b82151d3965de241877051ac47a06e82febdfca2b4ae27.scope - libcontainer container b6989f58cba8451a09b82151d3965de241877051ac47a06e82febdfca2b4ae27.
Jul  2 07:10:13.006314 systemd[1]: run-containerd-runc-k8s.io-b6989f58cba8451a09b82151d3965de241877051ac47a06e82febdfca2b4ae27-runc.IloTlm.mount: Deactivated successfully.
Jul  2 07:10:13.022000 audit: BPF prog-id=198 op=LOAD
Jul  2 07:10:13.022000 audit[4795]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000133988 a2=78 a3=0 items=0 ppid=4585 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:13.022000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236393839663538636261383435316130396238323135316433393635
Jul  2 07:10:13.022000 audit: BPF prog-id=199 op=LOAD
Jul  2 07:10:13.022000 audit[4795]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000133720 a2=78 a3=0 items=0 ppid=4585 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:13.022000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236393839663538636261383435316130396238323135316433393635
Jul  2 07:10:13.022000 audit: BPF prog-id=199 op=UNLOAD
Jul  2 07:10:13.022000 audit: BPF prog-id=198 op=UNLOAD
Jul  2 07:10:13.022000 audit: BPF prog-id=200 op=LOAD
Jul  2 07:10:13.022000 audit[4795]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000133be0 a2=78 a3=0 items=0 ppid=4585 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:13.022000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236393839663538636261383435316130396238323135316433393635
Jul  2 07:10:13.048714 containerd[1481]: time="2024-07-02T07:10:13.048601509Z" level=info msg="StartContainer for \"b6989f58cba8451a09b82151d3965de241877051ac47a06e82febdfca2b4ae27\" returns successfully"
Jul  2 07:10:13.052295 containerd[1481]: time="2024-07-02T07:10:13.052255004Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\""
Jul  2 07:10:15.400317 containerd[1481]: time="2024-07-02T07:10:15.400256046Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:10:15.402585 containerd[1481]: time="2024-07-02T07:10:15.402507543Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655"
Jul  2 07:10:15.406683 containerd[1481]: time="2024-07-02T07:10:15.406638538Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:10:15.411622 containerd[1481]: time="2024-07-02T07:10:15.411569432Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:10:15.416537 containerd[1481]: time="2024-07-02T07:10:15.416487125Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:10:15.417577 containerd[1481]: time="2024-07-02T07:10:15.417525724Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 2.364946221s"
Jul  2 07:10:15.417715 containerd[1481]: time="2024-07-02T07:10:15.417583224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\""
Jul  2 07:10:15.422115 containerd[1481]: time="2024-07-02T07:10:15.422066218Z" level=info msg="CreateContainer within sandbox \"9a29274ff7c029eaa6d3c7bfb367c89d29e8add417ea3f5a4d6c51a4fa56bae4\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jul  2 07:10:15.466778 containerd[1481]: time="2024-07-02T07:10:15.466712360Z" level=info msg="CreateContainer within sandbox \"9a29274ff7c029eaa6d3c7bfb367c89d29e8add417ea3f5a4d6c51a4fa56bae4\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"343ed2ef3449010d94d00e905251aa4e58a85ecdf20fe9f67f7efdeb6796b453\""
Jul  2 07:10:15.467578 containerd[1481]: time="2024-07-02T07:10:15.467409059Z" level=info msg="StartContainer for \"343ed2ef3449010d94d00e905251aa4e58a85ecdf20fe9f67f7efdeb6796b453\""
Jul  2 07:10:15.532094 systemd[1]: Started cri-containerd-343ed2ef3449010d94d00e905251aa4e58a85ecdf20fe9f67f7efdeb6796b453.scope - libcontainer container 343ed2ef3449010d94d00e905251aa4e58a85ecdf20fe9f67f7efdeb6796b453.
Jul  2 07:10:15.550000 audit: BPF prog-id=201 op=LOAD
Jul  2 07:10:15.550000 audit[4836]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=4585 pid=4836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:15.550000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334336564326566333434393031306439346430306539303532353161
Jul  2 07:10:15.550000 audit: BPF prog-id=202 op=LOAD
Jul  2 07:10:15.550000 audit[4836]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=4585 pid=4836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:15.550000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334336564326566333434393031306439346430306539303532353161
Jul  2 07:10:15.550000 audit: BPF prog-id=202 op=UNLOAD
Jul  2 07:10:15.551000 audit: BPF prog-id=201 op=UNLOAD
Jul  2 07:10:15.551000 audit: BPF prog-id=203 op=LOAD
Jul  2 07:10:15.551000 audit[4836]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=4585 pid=4836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:15.551000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334336564326566333434393031306439346430306539303532353161
Jul  2 07:10:15.577461 containerd[1481]: time="2024-07-02T07:10:15.577321316Z" level=info msg="StartContainer for \"343ed2ef3449010d94d00e905251aa4e58a85ecdf20fe9f67f7efdeb6796b453\" returns successfully"
Jul  2 07:10:16.215226 kubelet[2922]: I0702 07:10:16.215177    2922 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jul  2 07:10:16.215226 kubelet[2922]: I0702 07:10:16.215230    2922 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jul  2 07:10:16.582000 audit[2807]: AVC avc:  denied  { watch } for  pid=2807 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=5730574 scontext=system_u:system_r:container_t:s0:c853,c1005 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:10:16.586804 kernel: kauditd_printk_skb: 24 callbacks suppressed
Jul  2 07:10:16.587309 kernel: audit: type=1400 audit(1719904216.582:627): avc:  denied  { watch } for  pid=2807 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=5730574 scontext=system_u:system_r:container_t:s0:c853,c1005 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:10:16.582000 audit[2807]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c00158f360 a2=fc6 a3=0 items=0 ppid=2622 pid=2807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c853,c1005 key=(null)
Jul  2 07:10:16.624512 kernel: audit: type=1300 audit(1719904216.582:627): arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c00158f360 a2=fc6 a3=0 items=0 ppid=2622 pid=2807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c853,c1005 key=(null)
Jul  2 07:10:16.582000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269
Jul  2 07:10:16.638343 kernel: audit: type=1327 audit(1719904216.582:627): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269
Jul  2 07:10:16.589000 audit[2807]: AVC avc:  denied  { watch } for  pid=2807 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=5730574 scontext=system_u:system_r:container_t:s0:c853,c1005 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:10:16.652207 kernel: audit: type=1400 audit(1719904216.589:628): avc:  denied  { watch } for  pid=2807 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=5730574 scontext=system_u:system_r:container_t:s0:c853,c1005 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:10:16.589000 audit[2807]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000daa1c0 a2=fc6 a3=0 items=0 ppid=2622 pid=2807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c853,c1005 key=(null)
Jul  2 07:10:16.668373 kernel: audit: type=1300 audit(1719904216.589:628): arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000daa1c0 a2=fc6 a3=0 items=0 ppid=2622 pid=2807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c853,c1005 key=(null)
Jul  2 07:10:16.589000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269
Jul  2 07:10:16.682164 kernel: audit: type=1327 audit(1719904216.589:628): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269
Jul  2 07:10:16.589000 audit[2807]: AVC avc:  denied  { watch } for  pid=2807 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=5730574 scontext=system_u:system_r:container_t:s0:c853,c1005 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:10:16.694278 kernel: audit: type=1400 audit(1719904216.589:629): avc:  denied  { watch } for  pid=2807 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=5730574 scontext=system_u:system_r:container_t:s0:c853,c1005 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:10:16.589000 audit[2807]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000daa1e0 a2=fc6 a3=0 items=0 ppid=2622 pid=2807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c853,c1005 key=(null)
Jul  2 07:10:16.709420 kernel: audit: type=1300 audit(1719904216.589:629): arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000daa1e0 a2=fc6 a3=0 items=0 ppid=2622 pid=2807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c853,c1005 key=(null)
Jul  2 07:10:16.589000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269
Jul  2 07:10:16.725932 kernel: audit: type=1327 audit(1719904216.589:629): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269
Jul  2 07:10:16.726318 kernel: audit: type=1400 audit(1719904216.589:630): avc:  denied  { watch } for  pid=2807 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=5730574 scontext=system_u:system_r:container_t:s0:c853,c1005 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:10:16.589000 audit[2807]: AVC avc:  denied  { watch } for  pid=2807 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=5730574 scontext=system_u:system_r:container_t:s0:c853,c1005 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:10:16.589000 audit[2807]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000daa6e0 a2=fc6 a3=0 items=0 ppid=2622 pid=2807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c853,c1005 key=(null)
Jul  2 07:10:16.589000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269
Jul  2 07:10:34.845704 systemd[1]: run-containerd-runc-k8s.io-f4bd51971e086154ff5280855d676c5ad0804a2d58ad10b6d6c77ed523b47fdc-runc.TOe3TJ.mount: Deactivated successfully.
Jul  2 07:10:34.925968 kubelet[2922]: I0702 07:10:34.925210    2922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-zhl5r" podStartSLOduration=62.322258813 podStartE2EDuration="1m8.925172714s" podCreationTimestamp="2024-07-02 07:09:26 +0000 UTC" firstStartedPulling="2024-07-02 07:10:08.81648672 +0000 UTC m=+66.809555999" lastFinishedPulling="2024-07-02 07:10:15.419400621 +0000 UTC m=+73.412469900" observedRunningTime="2024-07-02 07:10:16.431506336 +0000 UTC m=+74.424575615" watchObservedRunningTime="2024-07-02 07:10:34.925172714 +0000 UTC m=+92.918242093"
Jul  2 07:10:35.646285 kubelet[2922]: I0702 07:10:35.646226    2922 topology_manager.go:215] "Topology Admit Handler" podUID="5bcb2308-7f3c-4a21-8148-2499b2536265" podNamespace="calico-apiserver" podName="calico-apiserver-5dfb88d9d9-92klj"
Jul  2 07:10:35.669950 kubelet[2922]: I0702 07:10:35.666793    2922 topology_manager.go:215] "Topology Admit Handler" podUID="44f0361a-2ca7-4667-a3d2-9398f20173f2" podNamespace="calico-apiserver" podName="calico-apiserver-5dfb88d9d9-9vtl7"
Jul  2 07:10:35.672946 systemd[1]: Created slice kubepods-besteffort-pod5bcb2308_7f3c_4a21_8148_2499b2536265.slice - libcontainer container kubepods-besteffort-pod5bcb2308_7f3c_4a21_8148_2499b2536265.slice.
Jul  2 07:10:35.684883 kernel: kauditd_printk_skb: 2 callbacks suppressed
Jul  2 07:10:35.685032 kernel: audit: type=1325 audit(1719904235.681:631): table=filter:115 family=2 entries=9 op=nft_register_rule pid=4938 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul  2 07:10:35.681000 audit[4938]: NETFILTER_CFG table=filter:115 family=2 entries=9 op=nft_register_rule pid=4938 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul  2 07:10:35.703648 systemd[1]: Created slice kubepods-besteffort-pod44f0361a_2ca7_4667_a3d2_9398f20173f2.slice - libcontainer container kubepods-besteffort-pod44f0361a_2ca7_4667_a3d2_9398f20173f2.slice.
Jul  2 07:10:35.681000 audit[4938]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffdaa50a600 a2=0 a3=7ffdaa50a5ec items=0 ppid=3058 pid=4938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:35.733895 kernel: audit: type=1300 audit(1719904235.681:631): arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffdaa50a600 a2=0 a3=7ffdaa50a5ec items=0 ppid=3058 pid=4938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:35.681000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul  2 07:10:35.741884 kernel: audit: type=1327 audit(1719904235.681:631): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul  2 07:10:35.727000 audit[4938]: NETFILTER_CFG table=nat:116 family=2 entries=20 op=nft_register_rule pid=4938 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul  2 07:10:35.753007 kernel: audit: type=1325 audit(1719904235.727:632): table=nat:116 family=2 entries=20 op=nft_register_rule pid=4938 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul  2 07:10:35.727000 audit[4938]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffdaa50a600 a2=0 a3=7ffdaa50a5ec items=0 ppid=3058 pid=4938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:35.769895 kernel: audit: type=1300 audit(1719904235.727:632): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffdaa50a600 a2=0 a3=7ffdaa50a5ec items=0 ppid=3058 pid=4938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:35.727000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul  2 07:10:35.767000 audit[4940]: NETFILTER_CFG table=filter:117 family=2 entries=10 op=nft_register_rule pid=4940 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul  2 07:10:35.782745 kernel: audit: type=1327 audit(1719904235.727:632): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul  2 07:10:35.782901 kernel: audit: type=1325 audit(1719904235.767:633): table=filter:117 family=2 entries=10 op=nft_register_rule pid=4940 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul  2 07:10:35.767000 audit[4940]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffc70d217b0 a2=0 a3=7ffc70d2179c items=0 ppid=3058 pid=4940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:35.794772 kernel: audit: type=1300 audit(1719904235.767:633): arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffc70d217b0 a2=0 a3=7ffc70d2179c items=0 ppid=3058 pid=4940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:35.767000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul  2 07:10:35.770000 audit[4940]: NETFILTER_CFG table=nat:118 family=2 entries=20 op=nft_register_rule pid=4940 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul  2 07:10:35.812381 kernel: audit: type=1327 audit(1719904235.767:633): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul  2 07:10:35.812531 kernel: audit: type=1325 audit(1719904235.770:634): table=nat:118 family=2 entries=20 op=nft_register_rule pid=4940 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul  2 07:10:35.770000 audit[4940]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc70d217b0 a2=0 a3=7ffc70d2179c items=0 ppid=3058 pid=4940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:35.770000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul  2 07:10:35.844446 kubelet[2922]: I0702 07:10:35.844386    2922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc7h9\" (UniqueName: \"kubernetes.io/projected/44f0361a-2ca7-4667-a3d2-9398f20173f2-kube-api-access-mc7h9\") pod \"calico-apiserver-5dfb88d9d9-9vtl7\" (UID: \"44f0361a-2ca7-4667-a3d2-9398f20173f2\") " pod="calico-apiserver/calico-apiserver-5dfb88d9d9-9vtl7"
Jul  2 07:10:35.844872 kubelet[2922]: I0702 07:10:35.844826    2922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5bcb2308-7f3c-4a21-8148-2499b2536265-calico-apiserver-certs\") pod \"calico-apiserver-5dfb88d9d9-92klj\" (UID: \"5bcb2308-7f3c-4a21-8148-2499b2536265\") " pod="calico-apiserver/calico-apiserver-5dfb88d9d9-92klj"
Jul  2 07:10:35.845050 kubelet[2922]: I0702 07:10:35.845034    2922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8276\" (UniqueName: \"kubernetes.io/projected/5bcb2308-7f3c-4a21-8148-2499b2536265-kube-api-access-h8276\") pod \"calico-apiserver-5dfb88d9d9-92klj\" (UID: \"5bcb2308-7f3c-4a21-8148-2499b2536265\") " pod="calico-apiserver/calico-apiserver-5dfb88d9d9-92klj"
Jul  2 07:10:35.845181 kubelet[2922]: I0702 07:10:35.845167    2922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/44f0361a-2ca7-4667-a3d2-9398f20173f2-calico-apiserver-certs\") pod \"calico-apiserver-5dfb88d9d9-9vtl7\" (UID: \"44f0361a-2ca7-4667-a3d2-9398f20173f2\") " pod="calico-apiserver/calico-apiserver-5dfb88d9d9-9vtl7"
Jul  2 07:10:36.009706 containerd[1481]: time="2024-07-02T07:10:36.008992103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dfb88d9d9-9vtl7,Uid:44f0361a-2ca7-4667-a3d2-9398f20173f2,Namespace:calico-apiserver,Attempt:0,}"
Jul  2 07:10:36.192607 systemd-networkd[1236]: calie3390ea012b: Link UP
Jul  2 07:10:36.201020 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Jul  2 07:10:36.201151 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calie3390ea012b: link becomes ready
Jul  2 07:10:36.201628 systemd-networkd[1236]: calie3390ea012b: Gained carrier
Jul  2 07:10:36.217471 containerd[1481]: 2024-07-02 07:10:36.111 [INFO][4945] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815.2.5--a--b9d6671d68-k8s-calico--apiserver--5dfb88d9d9--9vtl7-eth0 calico-apiserver-5dfb88d9d9- calico-apiserver  44f0361a-2ca7-4667-a3d2-9398f20173f2 894 0 2024-07-02 07:10:35 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5dfb88d9d9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s  ci-3815.2.5-a-b9d6671d68  calico-apiserver-5dfb88d9d9-9vtl7 eth0 calico-apiserver [] []   [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie3390ea012b  [] []}} ContainerID="eb77ff7d8a932737929af0eb698518a93cd36682729c1d287bf6df795182eb38" Namespace="calico-apiserver" Pod="calico-apiserver-5dfb88d9d9-9vtl7" WorkloadEndpoint="ci--3815.2.5--a--b9d6671d68-k8s-calico--apiserver--5dfb88d9d9--9vtl7-"
Jul  2 07:10:36.217471 containerd[1481]: 2024-07-02 07:10:36.111 [INFO][4945] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="eb77ff7d8a932737929af0eb698518a93cd36682729c1d287bf6df795182eb38" Namespace="calico-apiserver" Pod="calico-apiserver-5dfb88d9d9-9vtl7" WorkloadEndpoint="ci--3815.2.5--a--b9d6671d68-k8s-calico--apiserver--5dfb88d9d9--9vtl7-eth0"
Jul  2 07:10:36.217471 containerd[1481]: 2024-07-02 07:10:36.147 [INFO][4957] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eb77ff7d8a932737929af0eb698518a93cd36682729c1d287bf6df795182eb38" HandleID="k8s-pod-network.eb77ff7d8a932737929af0eb698518a93cd36682729c1d287bf6df795182eb38" Workload="ci--3815.2.5--a--b9d6671d68-k8s-calico--apiserver--5dfb88d9d9--9vtl7-eth0"
Jul  2 07:10:36.217471 containerd[1481]: 2024-07-02 07:10:36.155 [INFO][4957] ipam_plugin.go 264: Auto assigning IP ContainerID="eb77ff7d8a932737929af0eb698518a93cd36682729c1d287bf6df795182eb38" HandleID="k8s-pod-network.eb77ff7d8a932737929af0eb698518a93cd36682729c1d287bf6df795182eb38" Workload="ci--3815.2.5--a--b9d6671d68-k8s-calico--apiserver--5dfb88d9d9--9vtl7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003662d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3815.2.5-a-b9d6671d68", "pod":"calico-apiserver-5dfb88d9d9-9vtl7", "timestamp":"2024-07-02 07:10:36.147485897 +0000 UTC"}, Hostname:"ci-3815.2.5-a-b9d6671d68", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul  2 07:10:36.217471 containerd[1481]: 2024-07-02 07:10:36.155 [INFO][4957] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul  2 07:10:36.217471 containerd[1481]: 2024-07-02 07:10:36.155 [INFO][4957] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul  2 07:10:36.217471 containerd[1481]: 2024-07-02 07:10:36.155 [INFO][4957] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815.2.5-a-b9d6671d68'
Jul  2 07:10:36.217471 containerd[1481]: 2024-07-02 07:10:36.157 [INFO][4957] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.eb77ff7d8a932737929af0eb698518a93cd36682729c1d287bf6df795182eb38" host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:36.217471 containerd[1481]: 2024-07-02 07:10:36.160 [INFO][4957] ipam.go 372: Looking up existing affinities for host host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:36.217471 containerd[1481]: 2024-07-02 07:10:36.164 [INFO][4957] ipam.go 489: Trying affinity for 192.168.60.128/26 host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:36.217471 containerd[1481]: 2024-07-02 07:10:36.165 [INFO][4957] ipam.go 155: Attempting to load block cidr=192.168.60.128/26 host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:36.217471 containerd[1481]: 2024-07-02 07:10:36.173 [INFO][4957] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.60.128/26 host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:36.217471 containerd[1481]: 2024-07-02 07:10:36.173 [INFO][4957] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.60.128/26 handle="k8s-pod-network.eb77ff7d8a932737929af0eb698518a93cd36682729c1d287bf6df795182eb38" host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:36.217471 containerd[1481]: 2024-07-02 07:10:36.175 [INFO][4957] ipam.go 1685: Creating new handle: k8s-pod-network.eb77ff7d8a932737929af0eb698518a93cd36682729c1d287bf6df795182eb38
Jul  2 07:10:36.217471 containerd[1481]: 2024-07-02 07:10:36.178 [INFO][4957] ipam.go 1203: Writing block in order to claim IPs block=192.168.60.128/26 handle="k8s-pod-network.eb77ff7d8a932737929af0eb698518a93cd36682729c1d287bf6df795182eb38" host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:36.217471 containerd[1481]: 2024-07-02 07:10:36.187 [INFO][4957] ipam.go 1216: Successfully claimed IPs: [192.168.60.133/26] block=192.168.60.128/26 handle="k8s-pod-network.eb77ff7d8a932737929af0eb698518a93cd36682729c1d287bf6df795182eb38" host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:36.217471 containerd[1481]: 2024-07-02 07:10:36.187 [INFO][4957] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.60.133/26] handle="k8s-pod-network.eb77ff7d8a932737929af0eb698518a93cd36682729c1d287bf6df795182eb38" host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:36.217471 containerd[1481]: 2024-07-02 07:10:36.187 [INFO][4957] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul  2 07:10:36.217471 containerd[1481]: 2024-07-02 07:10:36.187 [INFO][4957] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.60.133/26] IPv6=[] ContainerID="eb77ff7d8a932737929af0eb698518a93cd36682729c1d287bf6df795182eb38" HandleID="k8s-pod-network.eb77ff7d8a932737929af0eb698518a93cd36682729c1d287bf6df795182eb38" Workload="ci--3815.2.5--a--b9d6671d68-k8s-calico--apiserver--5dfb88d9d9--9vtl7-eth0"
Jul  2 07:10:36.218555 containerd[1481]: 2024-07-02 07:10:36.189 [INFO][4945] k8s.go 386: Populated endpoint ContainerID="eb77ff7d8a932737929af0eb698518a93cd36682729c1d287bf6df795182eb38" Namespace="calico-apiserver" Pod="calico-apiserver-5dfb88d9d9-9vtl7" WorkloadEndpoint="ci--3815.2.5--a--b9d6671d68-k8s-calico--apiserver--5dfb88d9d9--9vtl7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.5--a--b9d6671d68-k8s-calico--apiserver--5dfb88d9d9--9vtl7-eth0", GenerateName:"calico-apiserver-5dfb88d9d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"44f0361a-2ca7-4667-a3d2-9398f20173f2", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 10, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dfb88d9d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.5-a-b9d6671d68", ContainerID:"", Pod:"calico-apiserver-5dfb88d9d9-9vtl7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie3390ea012b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul  2 07:10:36.218555 containerd[1481]: 2024-07-02 07:10:36.190 [INFO][4945] k8s.go 387: Calico CNI using IPs: [192.168.60.133/32] ContainerID="eb77ff7d8a932737929af0eb698518a93cd36682729c1d287bf6df795182eb38" Namespace="calico-apiserver" Pod="calico-apiserver-5dfb88d9d9-9vtl7" WorkloadEndpoint="ci--3815.2.5--a--b9d6671d68-k8s-calico--apiserver--5dfb88d9d9--9vtl7-eth0"
Jul  2 07:10:36.218555 containerd[1481]: 2024-07-02 07:10:36.190 [INFO][4945] dataplane_linux.go 68: Setting the host side veth name to calie3390ea012b ContainerID="eb77ff7d8a932737929af0eb698518a93cd36682729c1d287bf6df795182eb38" Namespace="calico-apiserver" Pod="calico-apiserver-5dfb88d9d9-9vtl7" WorkloadEndpoint="ci--3815.2.5--a--b9d6671d68-k8s-calico--apiserver--5dfb88d9d9--9vtl7-eth0"
Jul  2 07:10:36.218555 containerd[1481]: 2024-07-02 07:10:36.203 [INFO][4945] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="eb77ff7d8a932737929af0eb698518a93cd36682729c1d287bf6df795182eb38" Namespace="calico-apiserver" Pod="calico-apiserver-5dfb88d9d9-9vtl7" WorkloadEndpoint="ci--3815.2.5--a--b9d6671d68-k8s-calico--apiserver--5dfb88d9d9--9vtl7-eth0"
Jul  2 07:10:36.218555 containerd[1481]: 2024-07-02 07:10:36.203 [INFO][4945] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="eb77ff7d8a932737929af0eb698518a93cd36682729c1d287bf6df795182eb38" Namespace="calico-apiserver" Pod="calico-apiserver-5dfb88d9d9-9vtl7" WorkloadEndpoint="ci--3815.2.5--a--b9d6671d68-k8s-calico--apiserver--5dfb88d9d9--9vtl7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.5--a--b9d6671d68-k8s-calico--apiserver--5dfb88d9d9--9vtl7-eth0", GenerateName:"calico-apiserver-5dfb88d9d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"44f0361a-2ca7-4667-a3d2-9398f20173f2", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 10, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dfb88d9d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.5-a-b9d6671d68", ContainerID:"eb77ff7d8a932737929af0eb698518a93cd36682729c1d287bf6df795182eb38", Pod:"calico-apiserver-5dfb88d9d9-9vtl7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie3390ea012b", MAC:"76:e9:e9:9b:fa:a1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul  2 07:10:36.218555 containerd[1481]: 2024-07-02 07:10:36.215 [INFO][4945] k8s.go 500: Wrote updated endpoint to datastore ContainerID="eb77ff7d8a932737929af0eb698518a93cd36682729c1d287bf6df795182eb38" Namespace="calico-apiserver" Pod="calico-apiserver-5dfb88d9d9-9vtl7" WorkloadEndpoint="ci--3815.2.5--a--b9d6671d68-k8s-calico--apiserver--5dfb88d9d9--9vtl7-eth0"
Jul  2 07:10:36.260000 audit[4985]: NETFILTER_CFG table=filter:119 family=2 entries=55 op=nft_register_chain pid=4985 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re"
Jul  2 07:10:36.260000 audit[4985]: SYSCALL arch=c000003e syscall=46 success=yes exit=27464 a0=3 a1=7ffd12683cb0 a2=0 a3=7ffd12683c9c items=0 ppid=4165 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:36.260000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030
Jul  2 07:10:36.266106 containerd[1481]: time="2024-07-02T07:10:36.265997991Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul  2 07:10:36.266253 containerd[1481]: time="2024-07-02T07:10:36.266146291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul  2 07:10:36.266253 containerd[1481]: time="2024-07-02T07:10:36.266187191Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul  2 07:10:36.266253 containerd[1481]: time="2024-07-02T07:10:36.266216991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul  2 07:10:36.281399 containerd[1481]: time="2024-07-02T07:10:36.281343290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dfb88d9d9-92klj,Uid:5bcb2308-7f3c-4a21-8148-2499b2536265,Namespace:calico-apiserver,Attempt:0,}"
Jul  2 07:10:36.293124 systemd[1]: Started cri-containerd-eb77ff7d8a932737929af0eb698518a93cd36682729c1d287bf6df795182eb38.scope - libcontainer container eb77ff7d8a932737929af0eb698518a93cd36682729c1d287bf6df795182eb38.
Jul  2 07:10:36.311000 audit: BPF prog-id=204 op=LOAD
Jul  2 07:10:36.312000 audit: BPF prog-id=205 op=LOAD
Jul  2 07:10:36.312000 audit[4997]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=4986 pid=4997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:36.312000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562373766663764386139333237333739323961663065623639383531
Jul  2 07:10:36.312000 audit: BPF prog-id=206 op=LOAD
Jul  2 07:10:36.312000 audit[4997]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=4986 pid=4997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:36.312000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562373766663764386139333237333739323961663065623639383531
Jul  2 07:10:36.313000 audit: BPF prog-id=206 op=UNLOAD
Jul  2 07:10:36.313000 audit: BPF prog-id=205 op=UNLOAD
Jul  2 07:10:36.314000 audit: BPF prog-id=207 op=LOAD
Jul  2 07:10:36.314000 audit[4997]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=4986 pid=4997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:36.314000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562373766663764386139333237333739323961663065623639383531
Jul  2 07:10:36.377581 containerd[1481]: time="2024-07-02T07:10:36.377522085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dfb88d9d9-9vtl7,Uid:44f0361a-2ca7-4667-a3d2-9398f20173f2,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"eb77ff7d8a932737929af0eb698518a93cd36682729c1d287bf6df795182eb38\""
Jul  2 07:10:36.381647 containerd[1481]: time="2024-07-02T07:10:36.381598285Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\""
Jul  2 07:10:36.504064 systemd-networkd[1236]: cali0080ee67813: Link UP
Jul  2 07:10:36.511224 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali0080ee67813: link becomes ready
Jul  2 07:10:36.510515 systemd-networkd[1236]: cali0080ee67813: Gained carrier
Jul  2 07:10:36.527414 containerd[1481]: 2024-07-02 07:10:36.398 [INFO][5014] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815.2.5--a--b9d6671d68-k8s-calico--apiserver--5dfb88d9d9--92klj-eth0 calico-apiserver-5dfb88d9d9- calico-apiserver  5bcb2308-7f3c-4a21-8148-2499b2536265 892 0 2024-07-02 07:10:35 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5dfb88d9d9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s  ci-3815.2.5-a-b9d6671d68  calico-apiserver-5dfb88d9d9-92klj eth0 calico-apiserver [] []   [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0080ee67813  [] []}} ContainerID="dd4b5f6c1de72065dc71ec8ed16c2b1b168c0f54bbbeb433b2848131b1b28f60" Namespace="calico-apiserver" Pod="calico-apiserver-5dfb88d9d9-92klj" WorkloadEndpoint="ci--3815.2.5--a--b9d6671d68-k8s-calico--apiserver--5dfb88d9d9--92klj-"
Jul  2 07:10:36.527414 containerd[1481]: 2024-07-02 07:10:36.398 [INFO][5014] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="dd4b5f6c1de72065dc71ec8ed16c2b1b168c0f54bbbeb433b2848131b1b28f60" Namespace="calico-apiserver" Pod="calico-apiserver-5dfb88d9d9-92klj" WorkloadEndpoint="ci--3815.2.5--a--b9d6671d68-k8s-calico--apiserver--5dfb88d9d9--92klj-eth0"
Jul  2 07:10:36.527414 containerd[1481]: 2024-07-02 07:10:36.458 [INFO][5030] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dd4b5f6c1de72065dc71ec8ed16c2b1b168c0f54bbbeb433b2848131b1b28f60" HandleID="k8s-pod-network.dd4b5f6c1de72065dc71ec8ed16c2b1b168c0f54bbbeb433b2848131b1b28f60" Workload="ci--3815.2.5--a--b9d6671d68-k8s-calico--apiserver--5dfb88d9d9--92klj-eth0"
Jul  2 07:10:36.527414 containerd[1481]: 2024-07-02 07:10:36.471 [INFO][5030] ipam_plugin.go 264: Auto assigning IP ContainerID="dd4b5f6c1de72065dc71ec8ed16c2b1b168c0f54bbbeb433b2848131b1b28f60" HandleID="k8s-pod-network.dd4b5f6c1de72065dc71ec8ed16c2b1b168c0f54bbbeb433b2848131b1b28f60" Workload="ci--3815.2.5--a--b9d6671d68-k8s-calico--apiserver--5dfb88d9d9--92klj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002edde0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3815.2.5-a-b9d6671d68", "pod":"calico-apiserver-5dfb88d9d9-92klj", "timestamp":"2024-07-02 07:10:36.458770281 +0000 UTC"}, Hostname:"ci-3815.2.5-a-b9d6671d68", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul  2 07:10:36.527414 containerd[1481]: 2024-07-02 07:10:36.472 [INFO][5030] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul  2 07:10:36.527414 containerd[1481]: 2024-07-02 07:10:36.472 [INFO][5030] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul  2 07:10:36.527414 containerd[1481]: 2024-07-02 07:10:36.472 [INFO][5030] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815.2.5-a-b9d6671d68'
Jul  2 07:10:36.527414 containerd[1481]: 2024-07-02 07:10:36.474 [INFO][5030] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.dd4b5f6c1de72065dc71ec8ed16c2b1b168c0f54bbbeb433b2848131b1b28f60" host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:36.527414 containerd[1481]: 2024-07-02 07:10:36.477 [INFO][5030] ipam.go 372: Looking up existing affinities for host host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:36.527414 containerd[1481]: 2024-07-02 07:10:36.481 [INFO][5030] ipam.go 489: Trying affinity for 192.168.60.128/26 host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:36.527414 containerd[1481]: 2024-07-02 07:10:36.483 [INFO][5030] ipam.go 155: Attempting to load block cidr=192.168.60.128/26 host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:36.527414 containerd[1481]: 2024-07-02 07:10:36.485 [INFO][5030] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.60.128/26 host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:36.527414 containerd[1481]: 2024-07-02 07:10:36.485 [INFO][5030] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.60.128/26 handle="k8s-pod-network.dd4b5f6c1de72065dc71ec8ed16c2b1b168c0f54bbbeb433b2848131b1b28f60" host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:36.527414 containerd[1481]: 2024-07-02 07:10:36.486 [INFO][5030] ipam.go 1685: Creating new handle: k8s-pod-network.dd4b5f6c1de72065dc71ec8ed16c2b1b168c0f54bbbeb433b2848131b1b28f60
Jul  2 07:10:36.527414 containerd[1481]: 2024-07-02 07:10:36.490 [INFO][5030] ipam.go 1203: Writing block in order to claim IPs block=192.168.60.128/26 handle="k8s-pod-network.dd4b5f6c1de72065dc71ec8ed16c2b1b168c0f54bbbeb433b2848131b1b28f60" host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:36.527414 containerd[1481]: 2024-07-02 07:10:36.497 [INFO][5030] ipam.go 1216: Successfully claimed IPs: [192.168.60.134/26] block=192.168.60.128/26 handle="k8s-pod-network.dd4b5f6c1de72065dc71ec8ed16c2b1b168c0f54bbbeb433b2848131b1b28f60" host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:36.527414 containerd[1481]: 2024-07-02 07:10:36.497 [INFO][5030] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.60.134/26] handle="k8s-pod-network.dd4b5f6c1de72065dc71ec8ed16c2b1b168c0f54bbbeb433b2848131b1b28f60" host="ci-3815.2.5-a-b9d6671d68"
Jul  2 07:10:36.527414 containerd[1481]: 2024-07-02 07:10:36.498 [INFO][5030] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul  2 07:10:36.527414 containerd[1481]: 2024-07-02 07:10:36.498 [INFO][5030] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.60.134/26] IPv6=[] ContainerID="dd4b5f6c1de72065dc71ec8ed16c2b1b168c0f54bbbeb433b2848131b1b28f60" HandleID="k8s-pod-network.dd4b5f6c1de72065dc71ec8ed16c2b1b168c0f54bbbeb433b2848131b1b28f60" Workload="ci--3815.2.5--a--b9d6671d68-k8s-calico--apiserver--5dfb88d9d9--92klj-eth0"
Jul  2 07:10:36.528187 containerd[1481]: 2024-07-02 07:10:36.499 [INFO][5014] k8s.go 386: Populated endpoint ContainerID="dd4b5f6c1de72065dc71ec8ed16c2b1b168c0f54bbbeb433b2848131b1b28f60" Namespace="calico-apiserver" Pod="calico-apiserver-5dfb88d9d9-92klj" WorkloadEndpoint="ci--3815.2.5--a--b9d6671d68-k8s-calico--apiserver--5dfb88d9d9--92klj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.5--a--b9d6671d68-k8s-calico--apiserver--5dfb88d9d9--92klj-eth0", GenerateName:"calico-apiserver-5dfb88d9d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"5bcb2308-7f3c-4a21-8148-2499b2536265", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 10, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dfb88d9d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.5-a-b9d6671d68", ContainerID:"", Pod:"calico-apiserver-5dfb88d9d9-92klj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0080ee67813", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul  2 07:10:36.528187 containerd[1481]: 2024-07-02 07:10:36.500 [INFO][5014] k8s.go 387: Calico CNI using IPs: [192.168.60.134/32] ContainerID="dd4b5f6c1de72065dc71ec8ed16c2b1b168c0f54bbbeb433b2848131b1b28f60" Namespace="calico-apiserver" Pod="calico-apiserver-5dfb88d9d9-92klj" WorkloadEndpoint="ci--3815.2.5--a--b9d6671d68-k8s-calico--apiserver--5dfb88d9d9--92klj-eth0"
Jul  2 07:10:36.528187 containerd[1481]: 2024-07-02 07:10:36.500 [INFO][5014] dataplane_linux.go 68: Setting the host side veth name to cali0080ee67813 ContainerID="dd4b5f6c1de72065dc71ec8ed16c2b1b168c0f54bbbeb433b2848131b1b28f60" Namespace="calico-apiserver" Pod="calico-apiserver-5dfb88d9d9-92klj" WorkloadEndpoint="ci--3815.2.5--a--b9d6671d68-k8s-calico--apiserver--5dfb88d9d9--92klj-eth0"
Jul  2 07:10:36.528187 containerd[1481]: 2024-07-02 07:10:36.511 [INFO][5014] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="dd4b5f6c1de72065dc71ec8ed16c2b1b168c0f54bbbeb433b2848131b1b28f60" Namespace="calico-apiserver" Pod="calico-apiserver-5dfb88d9d9-92klj" WorkloadEndpoint="ci--3815.2.5--a--b9d6671d68-k8s-calico--apiserver--5dfb88d9d9--92klj-eth0"
Jul  2 07:10:36.528187 containerd[1481]: 2024-07-02 07:10:36.512 [INFO][5014] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="dd4b5f6c1de72065dc71ec8ed16c2b1b168c0f54bbbeb433b2848131b1b28f60" Namespace="calico-apiserver" Pod="calico-apiserver-5dfb88d9d9-92klj" WorkloadEndpoint="ci--3815.2.5--a--b9d6671d68-k8s-calico--apiserver--5dfb88d9d9--92klj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.5--a--b9d6671d68-k8s-calico--apiserver--5dfb88d9d9--92klj-eth0", GenerateName:"calico-apiserver-5dfb88d9d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"5bcb2308-7f3c-4a21-8148-2499b2536265", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 10, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dfb88d9d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.5-a-b9d6671d68", ContainerID:"dd4b5f6c1de72065dc71ec8ed16c2b1b168c0f54bbbeb433b2848131b1b28f60", Pod:"calico-apiserver-5dfb88d9d9-92klj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0080ee67813", MAC:"3e:02:ea:8e:97:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul  2 07:10:36.528187 containerd[1481]: 2024-07-02 07:10:36.525 [INFO][5014] k8s.go 500: Wrote updated endpoint to datastore ContainerID="dd4b5f6c1de72065dc71ec8ed16c2b1b168c0f54bbbeb433b2848131b1b28f60" Namespace="calico-apiserver" Pod="calico-apiserver-5dfb88d9d9-92klj" WorkloadEndpoint="ci--3815.2.5--a--b9d6671d68-k8s-calico--apiserver--5dfb88d9d9--92klj-eth0"
Jul  2 07:10:36.572952 containerd[1481]: time="2024-07-02T07:10:36.571991176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul  2 07:10:36.572952 containerd[1481]: time="2024-07-02T07:10:36.572073076Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul  2 07:10:36.572952 containerd[1481]: time="2024-07-02T07:10:36.572104476Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul  2 07:10:36.572952 containerd[1481]: time="2024-07-02T07:10:36.572125676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul  2 07:10:36.576000 audit[5067]: NETFILTER_CFG table=filter:120 family=2 entries=49 op=nft_register_chain pid=5067 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re"
Jul  2 07:10:36.576000 audit[5067]: SYSCALL arch=c000003e syscall=46 success=yes exit=24300 a0=3 a1=7fff94535ab0 a2=0 a3=7fff94535a9c items=0 ppid=4165 pid=5067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:36.576000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030
Jul  2 07:10:36.596125 systemd[1]: Started cri-containerd-dd4b5f6c1de72065dc71ec8ed16c2b1b168c0f54bbbeb433b2848131b1b28f60.scope - libcontainer container dd4b5f6c1de72065dc71ec8ed16c2b1b168c0f54bbbeb433b2848131b1b28f60.
Jul  2 07:10:36.616000 audit: BPF prog-id=208 op=LOAD
Jul  2 07:10:36.617000 audit: BPF prog-id=209 op=LOAD
Jul  2 07:10:36.617000 audit[5068]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001b1988 a2=78 a3=0 items=0 ppid=5056 pid=5068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:36.617000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464346235663663316465373230363564633731656338656431366332
Jul  2 07:10:36.617000 audit: BPF prog-id=210 op=LOAD
Jul  2 07:10:36.617000 audit[5068]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001b1720 a2=78 a3=0 items=0 ppid=5056 pid=5068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:36.617000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464346235663663316465373230363564633731656338656431366332
Jul  2 07:10:36.617000 audit: BPF prog-id=210 op=UNLOAD
Jul  2 07:10:36.617000 audit: BPF prog-id=209 op=UNLOAD
Jul  2 07:10:36.617000 audit: BPF prog-id=211 op=LOAD
Jul  2 07:10:36.617000 audit[5068]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001b1be0 a2=78 a3=0 items=0 ppid=5056 pid=5068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:36.617000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464346235663663316465373230363564633731656338656431366332
Jul  2 07:10:36.652203 containerd[1481]: time="2024-07-02T07:10:36.652146872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dfb88d9d9-92klj,Uid:5bcb2308-7f3c-4a21-8148-2499b2536265,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"dd4b5f6c1de72065dc71ec8ed16c2b1b168c0f54bbbeb433b2848131b1b28f60\""
Jul  2 07:10:37.412262 systemd-networkd[1236]: calie3390ea012b: Gained IPv6LL
Jul  2 07:10:38.373166 systemd-networkd[1236]: cali0080ee67813: Gained IPv6LL
Jul  2 07:10:41.284571 containerd[1481]: time="2024-07-02T07:10:41.284507607Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:10:41.291284 containerd[1481]: time="2024-07-02T07:10:41.291197308Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260"
Jul  2 07:10:41.301319 containerd[1481]: time="2024-07-02T07:10:41.301262910Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:10:41.305323 containerd[1481]: time="2024-07-02T07:10:41.305268611Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:10:41.309689 containerd[1481]: time="2024-07-02T07:10:41.309639412Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:10:41.310366 containerd[1481]: time="2024-07-02T07:10:41.310319412Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 4.928666627s"
Jul  2 07:10:41.310496 containerd[1481]: time="2024-07-02T07:10:41.310373112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\""
Jul  2 07:10:41.313645 containerd[1481]: time="2024-07-02T07:10:41.312349612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\""
Jul  2 07:10:41.313992 containerd[1481]: time="2024-07-02T07:10:41.313958213Z" level=info msg="CreateContainer within sandbox \"eb77ff7d8a932737929af0eb698518a93cd36682729c1d287bf6df795182eb38\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul  2 07:10:41.358075 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount730925192.mount: Deactivated successfully.
Jul  2 07:10:41.377985 containerd[1481]: time="2024-07-02T07:10:41.377931725Z" level=info msg="CreateContainer within sandbox \"eb77ff7d8a932737929af0eb698518a93cd36682729c1d287bf6df795182eb38\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"964a68b8969e7188c9fd9e38bd7abb78bf20a0629f05a150af0f155d080f9c43\""
Jul  2 07:10:41.378981 containerd[1481]: time="2024-07-02T07:10:41.378938725Z" level=info msg="StartContainer for \"964a68b8969e7188c9fd9e38bd7abb78bf20a0629f05a150af0f155d080f9c43\""
Jul  2 07:10:41.421130 systemd[1]: Started cri-containerd-964a68b8969e7188c9fd9e38bd7abb78bf20a0629f05a150af0f155d080f9c43.scope - libcontainer container 964a68b8969e7188c9fd9e38bd7abb78bf20a0629f05a150af0f155d080f9c43.
Jul  2 07:10:41.440000 audit: BPF prog-id=212 op=LOAD
Jul  2 07:10:41.443690 kernel: kauditd_printk_skb: 32 callbacks suppressed
Jul  2 07:10:41.443827 kernel: audit: type=1334 audit(1719904241.440:649): prog-id=212 op=LOAD
Jul  2 07:10:41.448000 audit: BPF prog-id=213 op=LOAD
Jul  2 07:10:41.448000 audit[5115]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00019d988 a2=78 a3=0 items=0 ppid=4986 pid=5115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:41.468280 kernel: audit: type=1334 audit(1719904241.448:650): prog-id=213 op=LOAD
Jul  2 07:10:41.468441 kernel: audit: type=1300 audit(1719904241.448:650): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00019d988 a2=78 a3=0 items=0 ppid=4986 pid=5115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:41.448000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936346136386238393639653731383863396664396533386264376162
Jul  2 07:10:41.448000 audit: BPF prog-id=214 op=LOAD
Jul  2 07:10:41.486635 kernel: audit: type=1327 audit(1719904241.448:650): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936346136386238393639653731383863396664396533386264376162
Jul  2 07:10:41.486789 kernel: audit: type=1334 audit(1719904241.448:651): prog-id=214 op=LOAD
Jul  2 07:10:41.448000 audit[5115]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00019d720 a2=78 a3=0 items=0 ppid=4986 pid=5115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:41.504891 kernel: audit: type=1300 audit(1719904241.448:651): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00019d720 a2=78 a3=0 items=0 ppid=4986 pid=5115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:41.505042 kernel: audit: type=1327 audit(1719904241.448:651): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936346136386238393639653731383863396664396533386264376162
Jul  2 07:10:41.448000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936346136386238393639653731383863396664396533386264376162
Jul  2 07:10:41.448000 audit: BPF prog-id=214 op=UNLOAD
Jul  2 07:10:41.521883 kernel: audit: type=1334 audit(1719904241.448:652): prog-id=214 op=UNLOAD
Jul  2 07:10:41.448000 audit: BPF prog-id=213 op=UNLOAD
Jul  2 07:10:41.448000 audit: BPF prog-id=215 op=LOAD
Jul  2 07:10:41.529885 kernel: audit: type=1334 audit(1719904241.448:653): prog-id=213 op=UNLOAD
Jul  2 07:10:41.529938 kernel: audit: type=1334 audit(1719904241.448:654): prog-id=215 op=LOAD
Jul  2 07:10:41.448000 audit[5115]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00019dbe0 a2=78 a3=0 items=0 ppid=4986 pid=5115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:41.448000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936346136386238393639653731383863396664396533386264376162
Jul  2 07:10:41.549803 containerd[1481]: time="2024-07-02T07:10:41.549658758Z" level=info msg="StartContainer for \"964a68b8969e7188c9fd9e38bd7abb78bf20a0629f05a150af0f155d080f9c43\" returns successfully"
Jul  2 07:10:41.792027 containerd[1481]: time="2024-07-02T07:10:41.791968605Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:10:41.796264 containerd[1481]: time="2024-07-02T07:10:41.796193606Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=77"
Jul  2 07:10:41.813246 containerd[1481]: time="2024-07-02T07:10:41.813087409Z" level=info msg="ImageUpdate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:10:41.816598 containerd[1481]: time="2024-07-02T07:10:41.816551510Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:10:41.821997 containerd[1481]: time="2024-07-02T07:10:41.821949011Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jul  2 07:10:41.823966 containerd[1481]: time="2024-07-02T07:10:41.823900911Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 511.510199ms"
Jul  2 07:10:41.824120 containerd[1481]: time="2024-07-02T07:10:41.823969911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\""
Jul  2 07:10:41.830821 containerd[1481]: time="2024-07-02T07:10:41.830735512Z" level=info msg="CreateContainer within sandbox \"dd4b5f6c1de72065dc71ec8ed16c2b1b168c0f54bbbeb433b2848131b1b28f60\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul  2 07:10:41.898851 containerd[1481]: time="2024-07-02T07:10:41.898788025Z" level=info msg="CreateContainer within sandbox \"dd4b5f6c1de72065dc71ec8ed16c2b1b168c0f54bbbeb433b2848131b1b28f60\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b303a3af59f17fd8cc44f83116680eb72e1cc9f07f36e82ea7fcd1cada577bfa\""
Jul  2 07:10:41.900301 containerd[1481]: time="2024-07-02T07:10:41.900261626Z" level=info msg="StartContainer for \"b303a3af59f17fd8cc44f83116680eb72e1cc9f07f36e82ea7fcd1cada577bfa\""
Jul  2 07:10:41.936081 systemd[1]: Started cri-containerd-b303a3af59f17fd8cc44f83116680eb72e1cc9f07f36e82ea7fcd1cada577bfa.scope - libcontainer container b303a3af59f17fd8cc44f83116680eb72e1cc9f07f36e82ea7fcd1cada577bfa.
Jul  2 07:10:41.959000 audit: BPF prog-id=216 op=LOAD
Jul  2 07:10:41.960000 audit: BPF prog-id=217 op=LOAD
Jul  2 07:10:41.960000 audit[5157]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=5056 pid=5157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:41.960000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233303361336166353966313766643863633434663833313136363830
Jul  2 07:10:41.961000 audit: BPF prog-id=218 op=LOAD
Jul  2 07:10:41.961000 audit[5157]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=5056 pid=5157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:41.961000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233303361336166353966313766643863633434663833313136363830
Jul  2 07:10:41.961000 audit: BPF prog-id=218 op=UNLOAD
Jul  2 07:10:41.961000 audit: BPF prog-id=217 op=UNLOAD
Jul  2 07:10:41.961000 audit: BPF prog-id=219 op=LOAD
Jul  2 07:10:41.961000 audit[5157]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=5056 pid=5157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:41.961000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233303361336166353966313766643863633434663833313136363830
Jul  2 07:10:42.016397 containerd[1481]: time="2024-07-02T07:10:42.016335449Z" level=info msg="StartContainer for \"b303a3af59f17fd8cc44f83116680eb72e1cc9f07f36e82ea7fcd1cada577bfa\" returns successfully"
Jul  2 07:10:42.352180 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3127639784.mount: Deactivated successfully.
Jul  2 07:10:42.542277 kubelet[2922]: I0702 07:10:42.542209    2922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5dfb88d9d9-9vtl7" podStartSLOduration=2.609904648 podStartE2EDuration="7.542182175s" podCreationTimestamp="2024-07-02 07:10:35 +0000 UTC" firstStartedPulling="2024-07-02 07:10:36.379312785 +0000 UTC m=+94.372382064" lastFinishedPulling="2024-07-02 07:10:41.311590312 +0000 UTC m=+99.304659591" observedRunningTime="2024-07-02 07:10:42.52122007 +0000 UTC m=+100.514289349" watchObservedRunningTime="2024-07-02 07:10:42.542182175 +0000 UTC m=+100.535251454"
Jul  2 07:10:42.574828 kubelet[2922]: I0702 07:10:42.574747    2922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5dfb88d9d9-92klj" podStartSLOduration=2.4024670439999998 podStartE2EDuration="7.574723883s" podCreationTimestamp="2024-07-02 07:10:35 +0000 UTC" firstStartedPulling="2024-07-02 07:10:36.653828772 +0000 UTC m=+94.646898051" lastFinishedPulling="2024-07-02 07:10:41.826085511 +0000 UTC m=+99.819154890" observedRunningTime="2024-07-02 07:10:42.561002179 +0000 UTC m=+100.554071458" watchObservedRunningTime="2024-07-02 07:10:42.574723883 +0000 UTC m=+100.567793162"
Jul  2 07:10:42.583000 audit[5187]: NETFILTER_CFG table=filter:121 family=2 entries=10 op=nft_register_rule pid=5187 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul  2 07:10:42.583000 audit[5187]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffc454d6630 a2=0 a3=7ffc454d661c items=0 ppid=3058 pid=5187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:42.583000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul  2 07:10:42.595000 audit[5187]: NETFILTER_CFG table=nat:122 family=2 entries=20 op=nft_register_rule pid=5187 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul  2 07:10:42.595000 audit[5187]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc454d6630 a2=0 a3=7ffc454d661c items=0 ppid=3058 pid=5187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:42.595000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul  2 07:10:42.613000 audit[5189]: NETFILTER_CFG table=filter:123 family=2 entries=9 op=nft_register_rule pid=5189 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul  2 07:10:42.613000 audit[5189]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7fff625e0ec0 a2=0 a3=7fff625e0eac items=0 ppid=3058 pid=5189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:42.613000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul  2 07:10:42.616000 audit[5189]: NETFILTER_CFG table=nat:124 family=2 entries=27 op=nft_register_chain pid=5189 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul  2 07:10:42.616000 audit[5189]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7fff625e0ec0 a2=0 a3=7fff625e0eac items=0 ppid=3058 pid=5189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:42.616000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul  2 07:10:43.667000 audit[5191]: NETFILTER_CFG table=filter:125 family=2 entries=8 op=nft_register_rule pid=5191 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul  2 07:10:43.667000 audit[5191]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffe83899140 a2=0 a3=7ffe8389912c items=0 ppid=3058 pid=5191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:43.667000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul  2 07:10:43.670000 audit[5191]: NETFILTER_CFG table=nat:126 family=2 entries=34 op=nft_register_chain pid=5191 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul  2 07:10:43.670000 audit[5191]: SYSCALL arch=c000003e syscall=46 success=yes exit=11236 a0=3 a1=7ffe83899140 a2=0 a3=7ffe8389912c items=0 ppid=3058 pid=5191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:10:43.670000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul  2 07:10:59.051632 kernel: kauditd_printk_skb: 32 callbacks suppressed
Jul  2 07:10:59.053896 kernel: audit: type=1400 audit(1719904259.045:667): avc:  denied  { watch } for  pid=2807 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=5730580 scontext=system_u:system_r:container_t:s0:c853,c1005 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:10:59.045000 audit[2807]: AVC avc:  denied  { watch } for  pid=2807 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=5730580 scontext=system_u:system_r:container_t:s0:c853,c1005 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:10:59.049000 audit[2807]: AVC avc:  denied  { watch } for  pid=2807 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=5730574 scontext=system_u:system_r:container_t:s0:c853,c1005 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:10:59.075660 kernel: audit: type=1400 audit(1719904259.049:668): avc:  denied  { watch } for  pid=2807 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=5730574 scontext=system_u:system_r:container_t:s0:c853,c1005 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:10:59.049000 audit[2807]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c0020162e0 a2=fc6 a3=0 items=0 ppid=2622 pid=2807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c853,c1005 key=(null)
Jul  2 07:10:59.099976 kernel: audit: type=1300 audit(1719904259.049:668): arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c0020162e0 a2=fc6 a3=0 items=0 ppid=2622 pid=2807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c853,c1005 key=(null)
Jul  2 07:10:59.049000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269
Jul  2 07:10:59.045000 audit[2807]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c002a3f380 a2=fc6 a3=0 items=0 ppid=2622 pid=2807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c853,c1005 key=(null)
Jul  2 07:10:59.138481 kernel: audit: type=1327 audit(1719904259.049:668): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269
Jul  2 07:10:59.138696 kernel: audit: type=1300 audit(1719904259.045:667): arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c002a3f380 a2=fc6 a3=0 items=0 ppid=2622 pid=2807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c853,c1005 key=(null)
Jul  2 07:10:59.045000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269
Jul  2 07:10:59.152838 kernel: audit: type=1327 audit(1719904259.045:667): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269
Jul  2 07:10:59.247000 audit[2781]: AVC avc:  denied  { watch } for  pid=2781 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=5730580 scontext=system_u:system_r:container_t:s0:c108,c388 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:10:59.247000 audit[2781]: AVC avc:  denied  { watch } for  pid=2781 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=5730574 scontext=system_u:system_r:container_t:s0:c108,c388 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:10:59.275288 kernel: audit: type=1400 audit(1719904259.247:669): avc:  denied  { watch } for  pid=2781 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=5730580 scontext=system_u:system_r:container_t:s0:c108,c388 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:10:59.275499 kernel: audit: type=1400 audit(1719904259.247:670): avc:  denied  { watch } for  pid=2781 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=5730574 scontext=system_u:system_r:container_t:s0:c108,c388 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:10:59.247000 audit[2781]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6d a1=c00971ece0 a2=fc6 a3=0 items=0 ppid=2624 pid=2781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c108,c388 key=(null)
Jul  2 07:10:59.289937 kernel: audit: type=1300 audit(1719904259.247:670): arch=c000003e syscall=254 success=no exit=-13 a0=6d a1=c00971ece0 a2=fc6 a3=0 items=0 ppid=2624 pid=2781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c108,c388 key=(null)
Jul  2 07:10:59.247000 audit[2781]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6e a1=c0042614a0 a2=fc6 a3=0 items=0 ppid=2624 pid=2781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c108,c388 key=(null)
Jul  2 07:10:59.304771 kernel: audit: type=1300 audit(1719904259.247:669): arch=c000003e syscall=254 success=no exit=-13 a0=6e a1=c0042614a0 a2=fc6 a3=0 items=0 ppid=2624 pid=2781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c108,c388 key=(null)
Jul  2 07:10:59.247000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3434002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265
Jul  2 07:10:59.247000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3434002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265
Jul  2 07:10:59.247000 audit[2781]: AVC avc:  denied  { watch } for  pid=2781 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=5730582 scontext=system_u:system_r:container_t:s0:c108,c388 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:10:59.247000 audit[2781]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6d a1=c00a6a72c0 a2=fc6 a3=0 items=0 ppid=2624 pid=2781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c108,c388 key=(null)
Jul  2 07:10:59.247000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3434002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265
Jul  2 07:10:59.248000 audit[2781]: AVC avc:  denied  { watch } for  pid=2781 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=5730576 scontext=system_u:system_r:container_t:s0:c108,c388 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:10:59.248000 audit[2781]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6d a1=c0042615c0 a2=fc6 a3=0 items=0 ppid=2624 pid=2781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c108,c388 key=(null)
Jul  2 07:10:59.248000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3434002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265
Jul  2 07:10:59.248000 audit[2781]: AVC avc:  denied  { watch } for  pid=2781 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=5730574 scontext=system_u:system_r:container_t:s0:c108,c388 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:10:59.248000 audit[2781]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6d a1=c004e186c0 a2=fc6 a3=0 items=0 ppid=2624 pid=2781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c108,c388 key=(null)
Jul  2 07:10:59.248000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3434002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265
Jul  2 07:10:59.263000 audit[2781]: AVC avc:  denied  { watch } for  pid=2781 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=5730580 scontext=system_u:system_r:container_t:s0:c108,c388 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:10:59.263000 audit[2781]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6d a1=c00a6a7800 a2=fc6 a3=0 items=0 ppid=2624 pid=2781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c108,c388 key=(null)
Jul  2 07:10:59.263000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3434002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265
Jul  2 07:11:02.116964 containerd[1481]: time="2024-07-02T07:11:02.116528214Z" level=info msg="StopPodSandbox for \"76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056\""
Jul  2 07:11:02.211883 containerd[1481]: 2024-07-02 07:11:02.175 [WARNING][5266] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.5--a--b9d6671d68-k8s-csi--node--driver--zhl5r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"df40d4db-1fad-4103-96f4-e2848ac4f551", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 9, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.5-a-b9d6671d68", ContainerID:"9a29274ff7c029eaa6d3c7bfb367c89d29e8add417ea3f5a4d6c51a4fa56bae4", Pod:"csi-node-driver-zhl5r", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.60.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali592e1486d9f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul  2 07:11:02.211883 containerd[1481]: 2024-07-02 07:11:02.176 [INFO][5266] k8s.go 608: Cleaning up netns ContainerID="76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056"
Jul  2 07:11:02.211883 containerd[1481]: 2024-07-02 07:11:02.176 [INFO][5266] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056" iface="eth0" netns=""
Jul  2 07:11:02.211883 containerd[1481]: 2024-07-02 07:11:02.176 [INFO][5266] k8s.go 615: Releasing IP address(es) ContainerID="76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056"
Jul  2 07:11:02.211883 containerd[1481]: 2024-07-02 07:11:02.176 [INFO][5266] utils.go 188: Calico CNI releasing IP address ContainerID="76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056"
Jul  2 07:11:02.211883 containerd[1481]: 2024-07-02 07:11:02.202 [INFO][5272] ipam_plugin.go 411: Releasing address using handleID ContainerID="76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056" HandleID="k8s-pod-network.76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056" Workload="ci--3815.2.5--a--b9d6671d68-k8s-csi--node--driver--zhl5r-eth0"
Jul  2 07:11:02.211883 containerd[1481]: 2024-07-02 07:11:02.202 [INFO][5272] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul  2 07:11:02.211883 containerd[1481]: 2024-07-02 07:11:02.202 [INFO][5272] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul  2 07:11:02.211883 containerd[1481]: 2024-07-02 07:11:02.208 [WARNING][5272] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056" HandleID="k8s-pod-network.76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056" Workload="ci--3815.2.5--a--b9d6671d68-k8s-csi--node--driver--zhl5r-eth0"
Jul  2 07:11:02.211883 containerd[1481]: 2024-07-02 07:11:02.208 [INFO][5272] ipam_plugin.go 439: Releasing address using workloadID ContainerID="76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056" HandleID="k8s-pod-network.76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056" Workload="ci--3815.2.5--a--b9d6671d68-k8s-csi--node--driver--zhl5r-eth0"
Jul  2 07:11:02.211883 containerd[1481]: 2024-07-02 07:11:02.209 [INFO][5272] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul  2 07:11:02.211883 containerd[1481]: 2024-07-02 07:11:02.210 [INFO][5266] k8s.go 621: Teardown processing complete. ContainerID="76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056"
Jul  2 07:11:02.212500 containerd[1481]: time="2024-07-02T07:11:02.212448513Z" level=info msg="TearDown network for sandbox \"76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056\" successfully"
Jul  2 07:11:02.212601 containerd[1481]: time="2024-07-02T07:11:02.212583213Z" level=info msg="StopPodSandbox for \"76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056\" returns successfully"
Jul  2 07:11:02.213280 containerd[1481]: time="2024-07-02T07:11:02.213244514Z" level=info msg="RemovePodSandbox for \"76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056\""
Jul  2 07:11:02.213402 containerd[1481]: time="2024-07-02T07:11:02.213281114Z" level=info msg="Forcibly stopping sandbox \"76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056\""
Jul  2 07:11:02.287372 containerd[1481]: 2024-07-02 07:11:02.251 [WARNING][5290] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.5--a--b9d6671d68-k8s-csi--node--driver--zhl5r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"df40d4db-1fad-4103-96f4-e2848ac4f551", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 9, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.5-a-b9d6671d68", ContainerID:"9a29274ff7c029eaa6d3c7bfb367c89d29e8add417ea3f5a4d6c51a4fa56bae4", Pod:"csi-node-driver-zhl5r", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.60.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali592e1486d9f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul  2 07:11:02.287372 containerd[1481]: 2024-07-02 07:11:02.252 [INFO][5290] k8s.go 608: Cleaning up netns ContainerID="76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056"
Jul  2 07:11:02.287372 containerd[1481]: 2024-07-02 07:11:02.252 [INFO][5290] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056" iface="eth0" netns=""
Jul  2 07:11:02.287372 containerd[1481]: 2024-07-02 07:11:02.252 [INFO][5290] k8s.go 615: Releasing IP address(es) ContainerID="76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056"
Jul  2 07:11:02.287372 containerd[1481]: 2024-07-02 07:11:02.252 [INFO][5290] utils.go 188: Calico CNI releasing IP address ContainerID="76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056"
Jul  2 07:11:02.287372 containerd[1481]: 2024-07-02 07:11:02.276 [INFO][5296] ipam_plugin.go 411: Releasing address using handleID ContainerID="76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056" HandleID="k8s-pod-network.76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056" Workload="ci--3815.2.5--a--b9d6671d68-k8s-csi--node--driver--zhl5r-eth0"
Jul  2 07:11:02.287372 containerd[1481]: 2024-07-02 07:11:02.276 [INFO][5296] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul  2 07:11:02.287372 containerd[1481]: 2024-07-02 07:11:02.276 [INFO][5296] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul  2 07:11:02.287372 containerd[1481]: 2024-07-02 07:11:02.281 [WARNING][5296] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056" HandleID="k8s-pod-network.76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056" Workload="ci--3815.2.5--a--b9d6671d68-k8s-csi--node--driver--zhl5r-eth0"
Jul  2 07:11:02.287372 containerd[1481]: 2024-07-02 07:11:02.281 [INFO][5296] ipam_plugin.go 439: Releasing address using workloadID ContainerID="76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056" HandleID="k8s-pod-network.76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056" Workload="ci--3815.2.5--a--b9d6671d68-k8s-csi--node--driver--zhl5r-eth0"
Jul  2 07:11:02.287372 containerd[1481]: 2024-07-02 07:11:02.284 [INFO][5296] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul  2 07:11:02.287372 containerd[1481]: 2024-07-02 07:11:02.286 [INFO][5290] k8s.go 621: Teardown processing complete. ContainerID="76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056"
Jul  2 07:11:02.288125 containerd[1481]: time="2024-07-02T07:11:02.287429989Z" level=info msg="TearDown network for sandbox \"76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056\" successfully"
Jul  2 07:11:02.307576 containerd[1481]: time="2024-07-02T07:11:02.307522710Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul  2 07:11:02.307769 containerd[1481]: time="2024-07-02T07:11:02.307613710Z" level=info msg="RemovePodSandbox \"76032985c3da1316f9d7b1f89e9cca3300048ca9ddc78ddeee09a179f757b056\" returns successfully"
Jul  2 07:11:02.308315 containerd[1481]: time="2024-07-02T07:11:02.308253511Z" level=info msg="StopPodSandbox for \"96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7\""
Jul  2 07:11:02.381400 containerd[1481]: 2024-07-02 07:11:02.347 [WARNING][5318] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.5--a--b9d6671d68-k8s-calico--kube--controllers--7c9974ff94--rnszf-eth0", GenerateName:"calico-kube-controllers-7c9974ff94-", Namespace:"calico-system", SelfLink:"", UID:"0b47e120-86a6-4239-83a3-6d30cbbde07c", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 9, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c9974ff94", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.5-a-b9d6671d68", ContainerID:"c18ca54b3eafb6fb771f8846195301fc241cfb0a952c5832048566bd2db6ba9a", Pod:"calico-kube-controllers-7c9974ff94-rnszf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.60.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliae2094b87c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul  2 07:11:02.381400 containerd[1481]: 2024-07-02 07:11:02.347 [INFO][5318] k8s.go 608: Cleaning up netns ContainerID="96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7"
Jul  2 07:11:02.381400 containerd[1481]: 2024-07-02 07:11:02.347 [INFO][5318] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7" iface="eth0" netns=""
Jul  2 07:11:02.381400 containerd[1481]: 2024-07-02 07:11:02.347 [INFO][5318] k8s.go 615: Releasing IP address(es) ContainerID="96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7"
Jul  2 07:11:02.381400 containerd[1481]: 2024-07-02 07:11:02.347 [INFO][5318] utils.go 188: Calico CNI releasing IP address ContainerID="96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7"
Jul  2 07:11:02.381400 containerd[1481]: 2024-07-02 07:11:02.370 [INFO][5324] ipam_plugin.go 411: Releasing address using handleID ContainerID="96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7" HandleID="k8s-pod-network.96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7" Workload="ci--3815.2.5--a--b9d6671d68-k8s-calico--kube--controllers--7c9974ff94--rnszf-eth0"
Jul  2 07:11:02.381400 containerd[1481]: 2024-07-02 07:11:02.370 [INFO][5324] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul  2 07:11:02.381400 containerd[1481]: 2024-07-02 07:11:02.370 [INFO][5324] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul  2 07:11:02.381400 containerd[1481]: 2024-07-02 07:11:02.375 [WARNING][5324] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7" HandleID="k8s-pod-network.96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7" Workload="ci--3815.2.5--a--b9d6671d68-k8s-calico--kube--controllers--7c9974ff94--rnszf-eth0"
Jul  2 07:11:02.381400 containerd[1481]: 2024-07-02 07:11:02.375 [INFO][5324] ipam_plugin.go 439: Releasing address using workloadID ContainerID="96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7" HandleID="k8s-pod-network.96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7" Workload="ci--3815.2.5--a--b9d6671d68-k8s-calico--kube--controllers--7c9974ff94--rnszf-eth0"
Jul  2 07:11:02.381400 containerd[1481]: 2024-07-02 07:11:02.379 [INFO][5324] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul  2 07:11:02.381400 containerd[1481]: 2024-07-02 07:11:02.380 [INFO][5318] k8s.go 621: Teardown processing complete. ContainerID="96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7"
Jul  2 07:11:02.382136 containerd[1481]: time="2024-07-02T07:11:02.381446286Z" level=info msg="TearDown network for sandbox \"96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7\" successfully"
Jul  2 07:11:02.382136 containerd[1481]: time="2024-07-02T07:11:02.381489586Z" level=info msg="StopPodSandbox for \"96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7\" returns successfully"
Jul  2 07:11:02.382228 containerd[1481]: time="2024-07-02T07:11:02.382158686Z" level=info msg="RemovePodSandbox for \"96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7\""
Jul  2 07:11:02.382273 containerd[1481]: time="2024-07-02T07:11:02.382216587Z" level=info msg="Forcibly stopping sandbox \"96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7\""
Jul  2 07:11:02.453944 containerd[1481]: 2024-07-02 07:11:02.420 [WARNING][5343] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.5--a--b9d6671d68-k8s-calico--kube--controllers--7c9974ff94--rnszf-eth0", GenerateName:"calico-kube-controllers-7c9974ff94-", Namespace:"calico-system", SelfLink:"", UID:"0b47e120-86a6-4239-83a3-6d30cbbde07c", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 9, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c9974ff94", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.5-a-b9d6671d68", ContainerID:"c18ca54b3eafb6fb771f8846195301fc241cfb0a952c5832048566bd2db6ba9a", Pod:"calico-kube-controllers-7c9974ff94-rnszf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.60.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliae2094b87c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul  2 07:11:02.453944 containerd[1481]: 2024-07-02 07:11:02.420 [INFO][5343] k8s.go 608: Cleaning up netns ContainerID="96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7"
Jul  2 07:11:02.453944 containerd[1481]: 2024-07-02 07:11:02.420 [INFO][5343] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7" iface="eth0" netns=""
Jul  2 07:11:02.453944 containerd[1481]: 2024-07-02 07:11:02.420 [INFO][5343] k8s.go 615: Releasing IP address(es) ContainerID="96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7"
Jul  2 07:11:02.453944 containerd[1481]: 2024-07-02 07:11:02.420 [INFO][5343] utils.go 188: Calico CNI releasing IP address ContainerID="96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7"
Jul  2 07:11:02.453944 containerd[1481]: 2024-07-02 07:11:02.445 [INFO][5350] ipam_plugin.go 411: Releasing address using handleID ContainerID="96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7" HandleID="k8s-pod-network.96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7" Workload="ci--3815.2.5--a--b9d6671d68-k8s-calico--kube--controllers--7c9974ff94--rnszf-eth0"
Jul  2 07:11:02.453944 containerd[1481]: 2024-07-02 07:11:02.445 [INFO][5350] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul  2 07:11:02.453944 containerd[1481]: 2024-07-02 07:11:02.445 [INFO][5350] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul  2 07:11:02.453944 containerd[1481]: 2024-07-02 07:11:02.450 [WARNING][5350] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7" HandleID="k8s-pod-network.96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7" Workload="ci--3815.2.5--a--b9d6671d68-k8s-calico--kube--controllers--7c9974ff94--rnszf-eth0"
Jul  2 07:11:02.453944 containerd[1481]: 2024-07-02 07:11:02.450 [INFO][5350] ipam_plugin.go 439: Releasing address using workloadID ContainerID="96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7" HandleID="k8s-pod-network.96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7" Workload="ci--3815.2.5--a--b9d6671d68-k8s-calico--kube--controllers--7c9974ff94--rnszf-eth0"
Jul  2 07:11:02.453944 containerd[1481]: 2024-07-02 07:11:02.451 [INFO][5350] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul  2 07:11:02.453944 containerd[1481]: 2024-07-02 07:11:02.452 [INFO][5343] k8s.go 621: Teardown processing complete. ContainerID="96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7"
Jul  2 07:11:02.454658 containerd[1481]: time="2024-07-02T07:11:02.453992160Z" level=info msg="TearDown network for sandbox \"96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7\" successfully"
Jul  2 07:11:02.468068 containerd[1481]: time="2024-07-02T07:11:02.467995174Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul  2 07:11:02.468301 containerd[1481]: time="2024-07-02T07:11:02.468107675Z" level=info msg="RemovePodSandbox \"96e05315d139e6d4ef0f67f12d2e447491189111ad97b122964ecce0dc22dce7\" returns successfully"
Jul  2 07:11:02.468922 containerd[1481]: time="2024-07-02T07:11:02.468856475Z" level=info msg="StopPodSandbox for \"fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b\""
Jul  2 07:11:02.559090 containerd[1481]: 2024-07-02 07:11:02.504 [WARNING][5369] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.5--a--b9d6671d68-k8s-coredns--7db6d8ff4d--cql52-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a1d12bc4-6d1b-42f2-bbef-a982b18d5205", ResourceVersion:"764", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 9, 18, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.5-a-b9d6671d68", ContainerID:"30324bb564744c820c3bbc7c3ea436ac80a8eb485b26d08782260a3dc19397d3", Pod:"coredns-7db6d8ff4d-cql52", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali26d4784548c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul  2 07:11:02.559090 containerd[1481]: 2024-07-02 07:11:02.504 [INFO][5369] k8s.go 608: Cleaning up netns ContainerID="fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b"
Jul  2 07:11:02.559090 containerd[1481]: 2024-07-02 07:11:02.504 [INFO][5369] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b" iface="eth0" netns=""
Jul  2 07:11:02.559090 containerd[1481]: 2024-07-02 07:11:02.504 [INFO][5369] k8s.go 615: Releasing IP address(es) ContainerID="fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b"
Jul  2 07:11:02.559090 containerd[1481]: 2024-07-02 07:11:02.504 [INFO][5369] utils.go 188: Calico CNI releasing IP address ContainerID="fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b"
Jul  2 07:11:02.559090 containerd[1481]: 2024-07-02 07:11:02.528 [INFO][5375] ipam_plugin.go 411: Releasing address using handleID ContainerID="fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b" HandleID="k8s-pod-network.fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b" Workload="ci--3815.2.5--a--b9d6671d68-k8s-coredns--7db6d8ff4d--cql52-eth0"
Jul  2 07:11:02.559090 containerd[1481]: 2024-07-02 07:11:02.528 [INFO][5375] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul  2 07:11:02.559090 containerd[1481]: 2024-07-02 07:11:02.529 [INFO][5375] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul  2 07:11:02.559090 containerd[1481]: 2024-07-02 07:11:02.549 [WARNING][5375] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b" HandleID="k8s-pod-network.fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b" Workload="ci--3815.2.5--a--b9d6671d68-k8s-coredns--7db6d8ff4d--cql52-eth0"
Jul  2 07:11:02.559090 containerd[1481]: 2024-07-02 07:11:02.550 [INFO][5375] ipam_plugin.go 439: Releasing address using workloadID ContainerID="fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b" HandleID="k8s-pod-network.fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b" Workload="ci--3815.2.5--a--b9d6671d68-k8s-coredns--7db6d8ff4d--cql52-eth0"
Jul  2 07:11:02.559090 containerd[1481]: 2024-07-02 07:11:02.556 [INFO][5375] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul  2 07:11:02.559090 containerd[1481]: 2024-07-02 07:11:02.557 [INFO][5369] k8s.go 621: Teardown processing complete. ContainerID="fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b"
Jul  2 07:11:02.559799 containerd[1481]: time="2024-07-02T07:11:02.559139068Z" level=info msg="TearDown network for sandbox \"fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b\" successfully"
Jul  2 07:11:02.559799 containerd[1481]: time="2024-07-02T07:11:02.559175968Z" level=info msg="StopPodSandbox for \"fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b\" returns successfully"
Jul  2 07:11:02.559799 containerd[1481]: time="2024-07-02T07:11:02.559718468Z" level=info msg="RemovePodSandbox for \"fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b\""
Jul  2 07:11:02.559799 containerd[1481]: time="2024-07-02T07:11:02.559760168Z" level=info msg="Forcibly stopping sandbox \"fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b\""
Jul  2 07:11:02.725269 containerd[1481]: 2024-07-02 07:11:02.663 [WARNING][5394] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.5--a--b9d6671d68-k8s-coredns--7db6d8ff4d--cql52-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a1d12bc4-6d1b-42f2-bbef-a982b18d5205", ResourceVersion:"764", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 9, 18, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.5-a-b9d6671d68", ContainerID:"30324bb564744c820c3bbc7c3ea436ac80a8eb485b26d08782260a3dc19397d3", Pod:"coredns-7db6d8ff4d-cql52", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali26d4784548c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul  2 07:11:02.725269 containerd[1481]: 2024-07-02 07:11:02.664 [INFO][5394] k8s.go 608: Cleaning up netns ContainerID="fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b"
Jul  2 07:11:02.725269 containerd[1481]: 2024-07-02 07:11:02.664 [INFO][5394] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b" iface="eth0" netns=""
Jul  2 07:11:02.725269 containerd[1481]: 2024-07-02 07:11:02.664 [INFO][5394] k8s.go 615: Releasing IP address(es) ContainerID="fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b"
Jul  2 07:11:02.725269 containerd[1481]: 2024-07-02 07:11:02.664 [INFO][5394] utils.go 188: Calico CNI releasing IP address ContainerID="fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b"
Jul  2 07:11:02.725269 containerd[1481]: 2024-07-02 07:11:02.708 [INFO][5400] ipam_plugin.go 411: Releasing address using handleID ContainerID="fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b" HandleID="k8s-pod-network.fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b" Workload="ci--3815.2.5--a--b9d6671d68-k8s-coredns--7db6d8ff4d--cql52-eth0"
Jul  2 07:11:02.725269 containerd[1481]: 2024-07-02 07:11:02.708 [INFO][5400] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul  2 07:11:02.725269 containerd[1481]: 2024-07-02 07:11:02.708 [INFO][5400] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul  2 07:11:02.725269 containerd[1481]: 2024-07-02 07:11:02.719 [WARNING][5400] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b" HandleID="k8s-pod-network.fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b" Workload="ci--3815.2.5--a--b9d6671d68-k8s-coredns--7db6d8ff4d--cql52-eth0"
Jul  2 07:11:02.725269 containerd[1481]: 2024-07-02 07:11:02.719 [INFO][5400] ipam_plugin.go 439: Releasing address using workloadID ContainerID="fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b" HandleID="k8s-pod-network.fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b" Workload="ci--3815.2.5--a--b9d6671d68-k8s-coredns--7db6d8ff4d--cql52-eth0"
Jul  2 07:11:02.725269 containerd[1481]: 2024-07-02 07:11:02.721 [INFO][5400] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul  2 07:11:02.725269 containerd[1481]: 2024-07-02 07:11:02.723 [INFO][5394] k8s.go 621: Teardown processing complete. ContainerID="fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b"
Jul  2 07:11:02.726180 containerd[1481]: time="2024-07-02T07:11:02.726123739Z" level=info msg="TearDown network for sandbox \"fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b\" successfully"
Jul  2 07:11:02.734034 containerd[1481]: time="2024-07-02T07:11:02.733986547Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b\": an error occurred when trying to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul  2 07:11:02.734313 containerd[1481]: time="2024-07-02T07:11:02.734273847Z" level=info msg="RemovePodSandbox \"fef86916f00ebe150a80cfb4f9f0c1162f1f8ab5b43ada2f73635a5c3916877b\" returns successfully"
Jul  2 07:11:02.735034 containerd[1481]: time="2024-07-02T07:11:02.734995548Z" level=info msg="StopPodSandbox for \"9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289\""
Jul  2 07:11:03.002008 containerd[1481]: 2024-07-02 07:11:02.919 [WARNING][5419] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.5--a--b9d6671d68-k8s-coredns--7db6d8ff4d--87tzr-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8de6fb26-aba2-46d0-b934-35c3682baf1f", ResourceVersion:"731", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 9, 18, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.5-a-b9d6671d68", ContainerID:"eca1160c96881e42ab93aba1064386933cb9891887cdd5339c183d10e1b1d1ad", Pod:"coredns-7db6d8ff4d-87tzr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4406b4dd0ab", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul  2 07:11:03.002008 containerd[1481]: 2024-07-02 07:11:02.920 [INFO][5419] k8s.go 608: Cleaning up netns ContainerID="9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289"
Jul  2 07:11:03.002008 containerd[1481]: 2024-07-02 07:11:02.920 [INFO][5419] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289" iface="eth0" netns=""
Jul  2 07:11:03.002008 containerd[1481]: 2024-07-02 07:11:02.920 [INFO][5419] k8s.go 615: Releasing IP address(es) ContainerID="9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289"
Jul  2 07:11:03.002008 containerd[1481]: 2024-07-02 07:11:02.920 [INFO][5419] utils.go 188: Calico CNI releasing IP address ContainerID="9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289"
Jul  2 07:11:03.002008 containerd[1481]: 2024-07-02 07:11:02.983 [INFO][5429] ipam_plugin.go 411: Releasing address using handleID ContainerID="9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289" HandleID="k8s-pod-network.9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289" Workload="ci--3815.2.5--a--b9d6671d68-k8s-coredns--7db6d8ff4d--87tzr-eth0"
Jul  2 07:11:03.002008 containerd[1481]: 2024-07-02 07:11:02.984 [INFO][5429] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul  2 07:11:03.002008 containerd[1481]: 2024-07-02 07:11:02.984 [INFO][5429] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul  2 07:11:03.002008 containerd[1481]: 2024-07-02 07:11:02.992 [WARNING][5429] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289" HandleID="k8s-pod-network.9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289" Workload="ci--3815.2.5--a--b9d6671d68-k8s-coredns--7db6d8ff4d--87tzr-eth0"
Jul  2 07:11:03.002008 containerd[1481]: 2024-07-02 07:11:02.993 [INFO][5429] ipam_plugin.go 439: Releasing address using workloadID ContainerID="9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289" HandleID="k8s-pod-network.9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289" Workload="ci--3815.2.5--a--b9d6671d68-k8s-coredns--7db6d8ff4d--87tzr-eth0"
Jul  2 07:11:03.002008 containerd[1481]: 2024-07-02 07:11:02.995 [INFO][5429] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul  2 07:11:03.002008 containerd[1481]: 2024-07-02 07:11:02.997 [INFO][5419] k8s.go 621: Teardown processing complete. ContainerID="9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289"
Jul  2 07:11:03.002008 containerd[1481]: time="2024-07-02T07:11:02.999045518Z" level=info msg="TearDown network for sandbox \"9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289\" successfully"
Jul  2 07:11:03.002008 containerd[1481]: time="2024-07-02T07:11:02.999113418Z" level=info msg="StopPodSandbox for \"9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289\" returns successfully"
Jul  2 07:11:03.003110 containerd[1481]: time="2024-07-02T07:11:03.002114921Z" level=info msg="RemovePodSandbox for \"9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289\""
Jul  2 07:11:03.003110 containerd[1481]: time="2024-07-02T07:11:03.002164721Z" level=info msg="Forcibly stopping sandbox \"9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289\""
Jul  2 07:11:03.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.8.44:22-10.200.16.10:34788 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:11:03.144774 systemd[1]: Started sshd@7-10.200.8.44:22-10.200.16.10:34788.service - OpenSSH per-connection server daemon (10.200.16.10:34788).
Jul  2 07:11:03.177150 containerd[1481]: 2024-07-02 07:11:03.083 [WARNING][5450] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.5--a--b9d6671d68-k8s-coredns--7db6d8ff4d--87tzr-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8de6fb26-aba2-46d0-b934-35c3682baf1f", ResourceVersion:"731", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 9, 18, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.5-a-b9d6671d68", ContainerID:"eca1160c96881e42ab93aba1064386933cb9891887cdd5339c183d10e1b1d1ad", Pod:"coredns-7db6d8ff4d-87tzr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4406b4dd0ab", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul  2 07:11:03.177150 containerd[1481]: 2024-07-02 07:11:03.084 [INFO][5450] k8s.go 608: Cleaning up netns ContainerID="9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289"
Jul  2 07:11:03.177150 containerd[1481]: 2024-07-02 07:11:03.084 [INFO][5450] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289" iface="eth0" netns=""
Jul  2 07:11:03.177150 containerd[1481]: 2024-07-02 07:11:03.084 [INFO][5450] k8s.go 615: Releasing IP address(es) ContainerID="9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289"
Jul  2 07:11:03.177150 containerd[1481]: 2024-07-02 07:11:03.084 [INFO][5450] utils.go 188: Calico CNI releasing IP address ContainerID="9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289"
Jul  2 07:11:03.177150 containerd[1481]: 2024-07-02 07:11:03.161 [INFO][5456] ipam_plugin.go 411: Releasing address using handleID ContainerID="9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289" HandleID="k8s-pod-network.9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289" Workload="ci--3815.2.5--a--b9d6671d68-k8s-coredns--7db6d8ff4d--87tzr-eth0"
Jul  2 07:11:03.177150 containerd[1481]: 2024-07-02 07:11:03.162 [INFO][5456] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul  2 07:11:03.177150 containerd[1481]: 2024-07-02 07:11:03.162 [INFO][5456] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul  2 07:11:03.177150 containerd[1481]: 2024-07-02 07:11:03.171 [WARNING][5456] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289" HandleID="k8s-pod-network.9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289" Workload="ci--3815.2.5--a--b9d6671d68-k8s-coredns--7db6d8ff4d--87tzr-eth0"
Jul  2 07:11:03.177150 containerd[1481]: 2024-07-02 07:11:03.171 [INFO][5456] ipam_plugin.go 439: Releasing address using workloadID ContainerID="9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289" HandleID="k8s-pod-network.9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289" Workload="ci--3815.2.5--a--b9d6671d68-k8s-coredns--7db6d8ff4d--87tzr-eth0"
Jul  2 07:11:03.177150 containerd[1481]: 2024-07-02 07:11:03.174 [INFO][5456] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul  2 07:11:03.177150 containerd[1481]: 2024-07-02 07:11:03.175 [INFO][5450] k8s.go 621: Teardown processing complete. ContainerID="9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289"
Jul  2 07:11:03.178209 containerd[1481]: time="2024-07-02T07:11:03.178157707Z" level=info msg="TearDown network for sandbox \"9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289\" successfully"
Jul  2 07:11:03.189222 containerd[1481]: time="2024-07-02T07:11:03.189178319Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289\": an error occurred when trying to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul  2 07:11:03.189536 containerd[1481]: time="2024-07-02T07:11:03.189507719Z" level=info msg="RemovePodSandbox \"9ed3c4605c99c9e5af486213653284bd3e13e7a4a553fe7cee69ded910123289\" returns successfully"
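The containerd lines above repeat one Calico CNI teardown sequence per sandbox, always in the same order: k8s.go 615 (release IP addresses), ipam_plugin.go 352/367 (acquire the host-wide IPAM lock), 428 (address already gone, ignored), 373 (release the lock), then k8s.go 621 (teardown complete). Below is a minimal sketch, written only as an annotation of this journal, for grouping those lines by the number in the second bracket pair ([5369], [5375], ...) so each sandbox's ordering can be read at a glance; the regular expressions and the grouping key are assumptions read off the observed line format, not part of Calico or containerd.

    # A minimal sketch for reading the Calico CNI teardown lines above; it is an
    # annotation of this journal, not a Calico or containerd tool. The regex
    # mirrors the observed format "[LEVEL][n] file.go line: message".
    import re
    import sys
    from collections import defaultdict

    CNI_LINE = re.compile(
        r'\[(?P<level>INFO|WARNING)\]\[(?P<invocation>\d+)\] '
        r'(?P<src>\S+\.go) (?P<srcline>\d+): (?P<msg>.+)'
    )
    CONTAINER_ID = re.compile(r'ContainerID="(?P<cid>[0-9a-f]{64})"')

    def group_by_invocation(lines):
        """Map each bracketed invocation number to its ordered (file, line, msg) events."""
        groups = defaultdict(list)
        for line in lines:
            m = CNI_LINE.search(line)
            if m:
                groups[m.group("invocation")].append(
                    (m.group("src"), m.group("srcline"), m.group("msg"))
                )
        return groups

    if __name__ == "__main__":
        for invocation, events in group_by_invocation(sys.stdin).items():
            cids = {c.group("cid")[:12] for _, _, msg in events
                    if (c := CONTAINER_ID.search(msg))}
            print(f"[{invocation}] sandbox={','.join(sorted(cids)) or '?'}")
            for src, srcline, msg in events:
                print(f"    {src}:{srcline}  {msg[:80]}")

Fed this journal on standard input, each group would show one sandbox's sequence from "Releasing IP address(es)" through "Released host-wide IPAM lock" to "Teardown processing complete".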
Jul  2 07:11:03.787000 audit[5461]: USER_ACCT pid=5461 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:03.788654 sshd[5461]: Accepted publickey for core from 10.200.16.10 port 34788 ssh2: RSA SHA256:Vdpuwv5gh2GKoqfNfevDgb5gK9TYVezD5lrsriYemy4
Jul  2 07:11:03.789000 audit[5461]: CRED_ACQ pid=5461 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:03.789000 audit[5461]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdb17e9f90 a2=3 a3=7f9e44746480 items=0 ppid=1 pid=5461 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:11:03.789000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul  2 07:11:03.791029 sshd[5461]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul  2 07:11:03.796500 systemd-logind[1465]: New session 10 of user core.
Jul  2 07:11:03.801091 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul  2 07:11:03.805000 audit[5461]: USER_START pid=5461 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:03.807000 audit[5465]: CRED_ACQ pid=5465 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:04.328293 sshd[5461]: pam_unix(sshd:session): session closed for user core
Jul  2 07:11:04.334151 kernel: kauditd_printk_skb: 22 callbacks suppressed
Jul  2 07:11:04.334262 kernel: audit: type=1106 audit(1719904264.329:681): pid=5461 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:04.329000 audit[5461]: USER_END pid=5461 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:04.332167 systemd[1]: sshd@7-10.200.8.44:22-10.200.16.10:34788.service: Deactivated successfully.
Jul  2 07:11:04.333232 systemd[1]: session-10.scope: Deactivated successfully.
Jul  2 07:11:04.335135 systemd-logind[1465]: Session 10 logged out. Waiting for processes to exit.
Jul  2 07:11:04.336156 systemd-logind[1465]: Removed session 10.
Jul  2 07:11:04.329000 audit[5461]: CRED_DISP pid=5461 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:04.354505 kernel: audit: type=1104 audit(1719904264.329:682): pid=5461 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:04.331000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.8.44:22-10.200.16.10:34788 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:11:04.364415 kernel: audit: type=1131 audit(1719904264.331:683): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.8.44:22-10.200.16.10:34788 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:11:04.845528 systemd[1]: run-containerd-runc-k8s.io-f4bd51971e086154ff5280855d676c5ad0804a2d58ad10b6d6c77ed523b47fdc-runc.oJbFAp.mount: Deactivated successfully.
Jul  2 07:11:09.446854 systemd[1]: Started sshd@8-10.200.8.44:22-10.200.16.10:60894.service - OpenSSH per-connection server daemon (10.200.16.10:60894).
Jul  2 07:11:09.462155 kernel: audit: type=1130 audit(1719904269.447:684): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.8.44:22-10.200.16.10:60894 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:11:09.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.8.44:22-10.200.16.10:60894 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:11:10.089000 audit[5500]: USER_ACCT pid=5500 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:10.090503 sshd[5500]: Accepted publickey for core from 10.200.16.10 port 60894 ssh2: RSA SHA256:Vdpuwv5gh2GKoqfNfevDgb5gK9TYVezD5lrsriYemy4
Jul  2 07:11:10.092635 sshd[5500]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul  2 07:11:10.089000 audit[5500]: CRED_ACQ pid=5500 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:10.115158 kernel: audit: type=1101 audit(1719904270.089:685): pid=5500 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:10.115342 kernel: audit: type=1103 audit(1719904270.089:686): pid=5500 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:10.124943 kernel: audit: type=1006 audit(1719904270.089:687): pid=5500 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1
Jul  2 07:11:10.089000 audit[5500]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdb8d3d2d0 a2=3 a3=7f4477b52480 items=0 ppid=1 pid=5500 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:11:10.126321 systemd-logind[1465]: New session 11 of user core.
Jul  2 07:11:10.143309 kernel: audit: type=1300 audit(1719904270.089:687): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdb8d3d2d0 a2=3 a3=7f4477b52480 items=0 ppid=1 pid=5500 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:11:10.143361 kernel: audit: type=1327 audit(1719904270.089:687): proctitle=737368643A20636F7265205B707269765D
Jul  2 07:11:10.089000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul  2 07:11:10.143194 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul  2 07:11:10.149000 audit[5500]: USER_START pid=5500 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:10.161000 audit[5505]: CRED_ACQ pid=5505 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:10.172588 kernel: audit: type=1105 audit(1719904270.149:688): pid=5500 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:10.172726 kernel: audit: type=1103 audit(1719904270.161:689): pid=5505 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:10.611308 sshd[5500]: pam_unix(sshd:session): session closed for user core
Jul  2 07:11:10.612000 audit[5500]: USER_END pid=5500 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:10.625821 systemd[1]: sshd@8-10.200.8.44:22-10.200.16.10:60894.service: Deactivated successfully.
Jul  2 07:11:10.626211 kernel: audit: type=1106 audit(1719904270.612:690): pid=5500 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:10.626919 systemd[1]: session-11.scope: Deactivated successfully.
Jul  2 07:11:10.612000 audit[5500]: CRED_DISP pid=5500 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:10.629936 systemd-logind[1465]: Session 11 logged out. Waiting for processes to exit.
Jul  2 07:11:10.630973 systemd-logind[1465]: Removed session 11.
Jul  2 07:11:10.625000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.8.44:22-10.200.16.10:60894 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:11:10.638343 kernel: audit: type=1104 audit(1719904270.612:691): pid=5500 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:15.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.8.44:22-10.200.16.10:60906 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:11:15.729785 systemd[1]: Started sshd@9-10.200.8.44:22-10.200.16.10:60906.service - OpenSSH per-connection server daemon (10.200.16.10:60906).
Jul  2 07:11:15.732373 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jul  2 07:11:15.732502 kernel: audit: type=1130 audit(1719904275.729:693): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.8.44:22-10.200.16.10:60906 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:11:16.376000 audit[5521]: USER_ACCT pid=5521 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:16.379579 sshd[5521]: Accepted publickey for core from 10.200.16.10 port 60906 ssh2: RSA SHA256:Vdpuwv5gh2GKoqfNfevDgb5gK9TYVezD5lrsriYemy4
Jul  2 07:11:16.379404 sshd[5521]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul  2 07:11:16.386081 systemd-logind[1465]: New session 12 of user core.
Jul  2 07:11:16.414777 kernel: audit: type=1101 audit(1719904276.376:694): pid=5521 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:16.414818 kernel: audit: type=1103 audit(1719904276.376:695): pid=5521 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:16.414843 kernel: audit: type=1006 audit(1719904276.376:696): pid=5521 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1
Jul  2 07:11:16.414884 kernel: audit: type=1300 audit(1719904276.376:696): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd0f86ed10 a2=3 a3=7fce25183480 items=0 ppid=1 pid=5521 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:11:16.376000 audit[5521]: CRED_ACQ pid=5521 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:16.376000 audit[5521]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd0f86ed10 a2=3 a3=7fce25183480 items=0 ppid=1 pid=5521 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:11:16.406237 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul  2 07:11:16.376000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul  2 07:11:16.411000 audit[5521]: USER_START pid=5521 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:16.430796 kernel: audit: type=1327 audit(1719904276.376:696): proctitle=737368643A20636F7265205B707269765D
Jul  2 07:11:16.430936 kernel: audit: type=1105 audit(1719904276.411:697): pid=5521 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:16.411000 audit[5541]: CRED_ACQ pid=5541 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:16.440671 kernel: audit: type=1103 audit(1719904276.411:698): pid=5541 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:16.588000 audit[2807]: AVC avc:  denied  { watch } for  pid=2807 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=5730574 scontext=system_u:system_r:container_t:s0:c853,c1005 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:11:16.588000 audit[2807]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c0032bdec0 a2=fc6 a3=0 items=0 ppid=2622 pid=2807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c853,c1005 key=(null)
Jul  2 07:11:16.611297 kernel: audit: type=1400 audit(1719904276.588:699): avc:  denied  { watch } for  pid=2807 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=5730574 scontext=system_u:system_r:container_t:s0:c853,c1005 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:11:16.611395 kernel: audit: type=1300 audit(1719904276.588:699): arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c0032bdec0 a2=fc6 a3=0 items=0 ppid=2622 pid=2807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c853,c1005 key=(null)
Jul  2 07:11:16.588000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269
Jul  2 07:11:16.592000 audit[2807]: AVC avc:  denied  { watch } for  pid=2807 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=5730574 scontext=system_u:system_r:container_t:s0:c853,c1005 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:11:16.592000 audit[2807]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c00304dae0 a2=fc6 a3=0 items=0 ppid=2622 pid=2807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c853,c1005 key=(null)
Jul  2 07:11:16.592000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269
Jul  2 07:11:16.592000 audit[2807]: AVC avc:  denied  { watch } for  pid=2807 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=5730574 scontext=system_u:system_r:container_t:s0:c853,c1005 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:11:16.592000 audit[2807]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c0032bdee0 a2=fc6 a3=0 items=0 ppid=2622 pid=2807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c853,c1005 key=(null)
Jul  2 07:11:16.592000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269
Jul  2 07:11:16.592000 audit[2807]: AVC avc:  denied  { watch } for  pid=2807 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=5730574 scontext=system_u:system_r:container_t:s0:c853,c1005 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:11:16.592000 audit[2807]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c0032bdf00 a2=fc6 a3=0 items=0 ppid=2622 pid=2807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c853,c1005 key=(null)
Jul  2 07:11:16.592000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269
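The audit PROCTITLE records above carry the process command line hex-encoded, with NUL bytes separating the arguments, and the kernel caps the audited length, which is why the kube-controller-manager entry ends mid-flag at "--authori". The sshd value 737368643A20636F7265205B707269765D decodes to "sshd: core [priv]", and the longer value above decodes to the controller-manager invocation with --allocate-node-cidrs=true and --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf. Below is a minimal decoding sketch, added as an annotation of this journal; the field-matching regex is an assumption about the record format, not part of auditd.

    # A minimal sketch that decodes the hex-encoded proctitle field of audit
    # PROCTITLE records such as the ones above; an annotation, not an auditd tool.
    import re
    import sys

    PROCTITLE = re.compile(r'proctitle=(?P<hex>[0-9A-Fa-f]+)')

    def decode_proctitle(record):
        """Return the decoded command line of a PROCTITLE record, or None."""
        m = PROCTITLE.search(record)
        if not m:
            return None
        h = m.group("hex")
        if len(h) % 2:          # guard against an oddly truncated field
            h = h[:-1]
        raw = bytes.fromhex(h)
        # NUL separates argv entries; the kernel caps the audited length, so long
        # command lines (like kube-controller-manager's) come out truncated.
        return " ".join(arg.decode(errors="replace") for arg in raw.split(b"\x00"))

    if __name__ == "__main__":
        # The sshd records above decode to 'sshd: core [priv]'.
        print(decode_proctitle("proctitle=737368643A20636F7265205B707269765D"))
        for record in sys.stdin:
            decoded = decode_proctitle(record)
            if decoded:
                print(decoded)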
Jul  2 07:11:16.896331 sshd[5521]: pam_unix(sshd:session): session closed for user core
Jul  2 07:11:16.896000 audit[5521]: USER_END pid=5521 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:16.897000 audit[5521]: CRED_DISP pid=5521 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:16.900244 systemd[1]: sshd@9-10.200.8.44:22-10.200.16.10:60906.service: Deactivated successfully.
Jul  2 07:11:16.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.8.44:22-10.200.16.10:60906 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:11:16.901319 systemd[1]: session-12.scope: Deactivated successfully.
Jul  2 07:11:16.901931 systemd-logind[1465]: Session 12 logged out. Waiting for processes to exit.
Jul  2 07:11:16.903342 systemd-logind[1465]: Removed session 12.
Jul  2 07:11:22.017784 systemd[1]: Started sshd@10-10.200.8.44:22-10.200.16.10:52138.service - OpenSSH per-connection server daemon (10.200.16.10:52138).
Jul  2 07:11:22.020908 kernel: kauditd_printk_skb: 13 callbacks suppressed
Jul  2 07:11:22.021029 kernel: audit: type=1130 audit(1719904282.017:706): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.8.44:22-10.200.16.10:52138 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:11:22.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.8.44:22-10.200.16.10:52138 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:11:22.676000 audit[5560]: USER_ACCT pid=5560 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:22.677687 sshd[5560]: Accepted publickey for core from 10.200.16.10 port 52138 ssh2: RSA SHA256:Vdpuwv5gh2GKoqfNfevDgb5gK9TYVezD5lrsriYemy4
Jul  2 07:11:22.680058 sshd[5560]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul  2 07:11:22.690005 kernel: audit: type=1101 audit(1719904282.676:707): pid=5560 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:22.676000 audit[5560]: CRED_ACQ pid=5560 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:22.694584 systemd-logind[1465]: New session 13 of user core.
Jul  2 07:11:22.710725 kernel: audit: type=1103 audit(1719904282.676:708): pid=5560 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:22.711667 kernel: audit: type=1006 audit(1719904282.676:709): pid=5560 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1
Jul  2 07:11:22.709910 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul  2 07:11:22.676000 audit[5560]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeefdcc8d0 a2=3 a3=7fd447426480 items=0 ppid=1 pid=5560 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:11:22.734885 kernel: audit: type=1300 audit(1719904282.676:709): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeefdcc8d0 a2=3 a3=7fd447426480 items=0 ppid=1 pid=5560 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:11:22.735018 kernel: audit: type=1327 audit(1719904282.676:709): proctitle=737368643A20636F7265205B707269765D
Jul  2 07:11:22.676000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul  2 07:11:22.717000 audit[5560]: USER_START pid=5560 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:22.747748 kernel: audit: type=1105 audit(1719904282.717:710): pid=5560 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:22.720000 audit[5562]: CRED_ACQ pid=5562 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:22.759909 kernel: audit: type=1103 audit(1719904282.720:711): pid=5562 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:23.210165 sshd[5560]: pam_unix(sshd:session): session closed for user core
Jul  2 07:11:23.211000 audit[5560]: USER_END pid=5560 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:23.214892 systemd[1]: sshd@10-10.200.8.44:22-10.200.16.10:52138.service: Deactivated successfully.
Jul  2 07:11:23.215764 systemd[1]: session-13.scope: Deactivated successfully.
Jul  2 07:11:23.217119 systemd-logind[1465]: Session 13 logged out. Waiting for processes to exit.
Jul  2 07:11:23.218103 systemd-logind[1465]: Removed session 13.
Jul  2 07:11:23.212000 audit[5560]: CRED_DISP pid=5560 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:23.236053 kernel: audit: type=1106 audit(1719904283.211:712): pid=5560 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:23.236155 kernel: audit: type=1104 audit(1719904283.212:713): pid=5560 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:23.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.8.44:22-10.200.16.10:52138 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:11:23.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.8.44:22-10.200.16.10:52144 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:11:23.336491 systemd[1]: Started sshd@11-10.200.8.44:22-10.200.16.10:52144.service - OpenSSH per-connection server daemon (10.200.16.10:52144).
Jul  2 07:11:23.982000 audit[5572]: USER_ACCT pid=5572 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:23.983686 sshd[5572]: Accepted publickey for core from 10.200.16.10 port 52144 ssh2: RSA SHA256:Vdpuwv5gh2GKoqfNfevDgb5gK9TYVezD5lrsriYemy4
Jul  2 07:11:23.984000 audit[5572]: CRED_ACQ pid=5572 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:23.984000 audit[5572]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd58a466d0 a2=3 a3=7f89422e0480 items=0 ppid=1 pid=5572 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:11:23.984000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul  2 07:11:23.985561 sshd[5572]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul  2 07:11:23.991202 systemd-logind[1465]: New session 14 of user core.
Jul  2 07:11:23.997085 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul  2 07:11:24.001000 audit[5572]: USER_START pid=5572 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:24.003000 audit[5574]: CRED_ACQ pid=5574 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:24.610370 sshd[5572]: pam_unix(sshd:session): session closed for user core
Jul  2 07:11:24.611000 audit[5572]: USER_END pid=5572 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:24.612000 audit[5572]: CRED_DISP pid=5572 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:24.614808 systemd[1]: sshd@11-10.200.8.44:22-10.200.16.10:52144.service: Deactivated successfully.
Jul  2 07:11:24.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.8.44:22-10.200.16.10:52144 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:11:24.615879 systemd[1]: session-14.scope: Deactivated successfully.
Jul  2 07:11:24.616663 systemd-logind[1465]: Session 14 logged out. Waiting for processes to exit.
Jul  2 07:11:24.617633 systemd-logind[1465]: Removed session 14.
Jul  2 07:11:24.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.8.44:22-10.200.16.10:52160 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:11:24.725569 systemd[1]: Started sshd@12-10.200.8.44:22-10.200.16.10:52160.service - OpenSSH per-connection server daemon (10.200.16.10:52160).
Jul  2 07:11:25.363000 audit[5583]: USER_ACCT pid=5583 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:25.364301 sshd[5583]: Accepted publickey for core from 10.200.16.10 port 52160 ssh2: RSA SHA256:Vdpuwv5gh2GKoqfNfevDgb5gK9TYVezD5lrsriYemy4
Jul  2 07:11:25.365000 audit[5583]: CRED_ACQ pid=5583 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:25.365000 audit[5583]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff1abfb650 a2=3 a3=7fb161b37480 items=0 ppid=1 pid=5583 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:11:25.365000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul  2 07:11:25.366433 sshd[5583]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul  2 07:11:25.371937 systemd-logind[1465]: New session 15 of user core.
Jul  2 07:11:25.377111 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul  2 07:11:25.381000 audit[5583]: USER_START pid=5583 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:25.383000 audit[5585]: CRED_ACQ pid=5585 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:25.888416 sshd[5583]: pam_unix(sshd:session): session closed for user core
Jul  2 07:11:25.889000 audit[5583]: USER_END pid=5583 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:25.889000 audit[5583]: CRED_DISP pid=5583 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:25.892460 systemd[1]: sshd@12-10.200.8.44:22-10.200.16.10:52160.service: Deactivated successfully.
Jul  2 07:11:25.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.8.44:22-10.200.16.10:52160 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:11:25.893906 systemd[1]: session-15.scope: Deactivated successfully.
Jul  2 07:11:25.894909 systemd-logind[1465]: Session 15 logged out. Waiting for processes to exit.
Jul  2 07:11:25.896067 systemd-logind[1465]: Removed session 15.
Jul  2 07:11:31.023984 systemd[1]: Started sshd@13-10.200.8.44:22-10.200.16.10:60970.service - OpenSSH per-connection server daemon (10.200.16.10:60970).
Jul  2 07:11:31.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.8.44:22-10.200.16.10:60970 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:11:31.035652 kernel: kauditd_printk_skb: 23 callbacks suppressed
Jul  2 07:11:31.035766 kernel: audit: type=1130 audit(1719904291.023:733): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.8.44:22-10.200.16.10:60970 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:11:31.667000 audit[5601]: USER_ACCT pid=5601 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:31.673592 sshd[5601]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul  2 07:11:31.702940 kernel: audit: type=1101 audit(1719904291.667:734): pid=5601 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:31.702995 kernel: audit: type=1103 audit(1719904291.667:735): pid=5601 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:31.703039 kernel: audit: type=1006 audit(1719904291.667:736): pid=5601 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1
Jul  2 07:11:31.667000 audit[5601]: CRED_ACQ pid=5601 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:31.703156 sshd[5601]: Accepted publickey for core from 10.200.16.10 port 60970 ssh2: RSA SHA256:Vdpuwv5gh2GKoqfNfevDgb5gK9TYVezD5lrsriYemy4
Jul  2 07:11:31.684820 systemd-logind[1465]: New session 16 of user core.
Jul  2 07:11:31.702337 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul  2 07:11:31.667000 audit[5601]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc19067b40 a2=3 a3=7f5144e18480 items=0 ppid=1 pid=5601 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:11:31.715931 kernel: audit: type=1300 audit(1719904291.667:736): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc19067b40 a2=3 a3=7f5144e18480 items=0 ppid=1 pid=5601 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:11:31.667000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul  2 07:11:31.721898 kernel: audit: type=1327 audit(1719904291.667:736): proctitle=737368643A20636F7265205B707269765D
Jul  2 07:11:31.722007 kernel: audit: type=1105 audit(1719904291.703:737): pid=5601 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:31.703000 audit[5601]: USER_START pid=5601 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:31.720000 audit[5606]: CRED_ACQ pid=5606 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:31.742869 kernel: audit: type=1103 audit(1719904291.720:738): pid=5606 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:32.199623 sshd[5601]: pam_unix(sshd:session): session closed for user core
Jul  2 07:11:32.200000 audit[5601]: USER_END pid=5601 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:32.203230 systemd[1]: sshd@13-10.200.8.44:22-10.200.16.10:60970.service: Deactivated successfully.
Jul  2 07:11:32.204123 systemd[1]: session-16.scope: Deactivated successfully.
Jul  2 07:11:32.205762 systemd-logind[1465]: Session 16 logged out. Waiting for processes to exit.
Jul  2 07:11:32.206844 systemd-logind[1465]: Removed session 16.
Jul  2 07:11:32.200000 audit[5601]: CRED_DISP pid=5601 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:32.223917 kernel: audit: type=1106 audit(1719904292.200:739): pid=5601 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:32.224053 kernel: audit: type=1104 audit(1719904292.200:740): pid=5601 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:32.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.8.44:22-10.200.16.10:60970 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:11:34.845740 systemd[1]: run-containerd-runc-k8s.io-f4bd51971e086154ff5280855d676c5ad0804a2d58ad10b6d6c77ed523b47fdc-runc.593d0I.mount: Deactivated successfully.
Jul  2 07:11:37.321061 systemd[1]: Started sshd@14-10.200.8.44:22-10.200.16.10:60980.service - OpenSSH per-connection server daemon (10.200.16.10:60980).
Jul  2 07:11:37.336601 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jul  2 07:11:37.336726 kernel: audit: type=1130 audit(1719904297.321:742): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.8.44:22-10.200.16.10:60980 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:11:37.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.8.44:22-10.200.16.10:60980 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:11:37.967000 audit[5646]: USER_ACCT pid=5646 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:37.970078 sshd[5646]: Accepted publickey for core from 10.200.16.10 port 60980 ssh2: RSA SHA256:Vdpuwv5gh2GKoqfNfevDgb5gK9TYVezD5lrsriYemy4
Jul  2 07:11:37.971032 sshd[5646]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul  2 07:11:37.977080 systemd-logind[1465]: New session 17 of user core.
Jul  2 07:11:37.991199 kernel: audit: type=1101 audit(1719904297.967:743): pid=5646 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:37.991256 kernel: audit: type=1103 audit(1719904297.969:744): pid=5646 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:37.969000 audit[5646]: CRED_ACQ pid=5646 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:37.990346 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul  2 07:11:37.996821 kernel: audit: type=1006 audit(1719904297.969:745): pid=5646 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1
Jul  2 07:11:37.969000 audit[5646]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe4e8a900 a2=3 a3=7fc16a764480 items=0 ppid=1 pid=5646 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:11:37.969000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul  2 07:11:38.012416 kernel: audit: type=1300 audit(1719904297.969:745): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe4e8a900 a2=3 a3=7fc16a764480 items=0 ppid=1 pid=5646 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:11:38.012512 kernel: audit: type=1327 audit(1719904297.969:745): proctitle=737368643A20636F7265205B707269765D
Jul  2 07:11:37.998000 audit[5646]: USER_START pid=5646 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:38.026422 kernel: audit: type=1105 audit(1719904297.998:746): pid=5646 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:38.000000 audit[5648]: CRED_ACQ pid=5648 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:38.036541 kernel: audit: type=1103 audit(1719904298.000:747): pid=5648 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:38.503219 sshd[5646]: pam_unix(sshd:session): session closed for user core
Jul  2 07:11:38.503000 audit[5646]: USER_END pid=5646 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:38.507887 systemd[1]: sshd@14-10.200.8.44:22-10.200.16.10:60980.service: Deactivated successfully.
Jul  2 07:11:38.508761 systemd[1]: session-17.scope: Deactivated successfully.
Jul  2 07:11:38.511441 systemd-logind[1465]: Session 17 logged out. Waiting for processes to exit.
Jul  2 07:11:38.512491 systemd-logind[1465]: Removed session 17.
Jul  2 07:11:38.505000 audit[5646]: CRED_DISP pid=5646 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:38.526159 kernel: audit: type=1106 audit(1719904298.503:748): pid=5646 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:38.526272 kernel: audit: type=1104 audit(1719904298.505:749): pid=5646 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:38.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.8.44:22-10.200.16.10:60980 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:11:43.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.8.44:22-10.200.16.10:42130 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:11:43.630261 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jul  2 07:11:43.630344 kernel: audit: type=1130 audit(1719904303.627:751): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.8.44:22-10.200.16.10:42130 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:11:43.627716 systemd[1]: Started sshd@15-10.200.8.44:22-10.200.16.10:42130.service - OpenSSH per-connection server daemon (10.200.16.10:42130).
Jul  2 07:11:44.272000 audit[5663]: USER_ACCT pid=5663 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:44.273332 sshd[5663]: Accepted publickey for core from 10.200.16.10 port 42130 ssh2: RSA SHA256:Vdpuwv5gh2GKoqfNfevDgb5gK9TYVezD5lrsriYemy4
Jul  2 07:11:44.275692 sshd[5663]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul  2 07:11:44.285236 systemd-logind[1465]: New session 18 of user core.
Jul  2 07:11:44.302725 kernel: audit: type=1101 audit(1719904304.272:752): pid=5663 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:44.302762 kernel: audit: type=1103 audit(1719904304.272:753): pid=5663 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:44.302786 kernel: audit: type=1006 audit(1719904304.272:754): pid=5663 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1
Jul  2 07:11:44.302808 kernel: audit: type=1300 audit(1719904304.272:754): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffca59b8240 a2=3 a3=7f0131e5e480 items=0 ppid=1 pid=5663 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:11:44.272000 audit[5663]: CRED_ACQ pid=5663 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:44.272000 audit[5663]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffca59b8240 a2=3 a3=7f0131e5e480 items=0 ppid=1 pid=5663 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:11:44.302141 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul  2 07:11:44.272000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul  2 07:11:44.315785 kernel: audit: type=1327 audit(1719904304.272:754): proctitle=737368643A20636F7265205B707269765D
Jul  2 07:11:44.307000 audit[5663]: USER_START pid=5663 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:44.330988 kernel: audit: type=1105 audit(1719904304.307:755): pid=5663 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:44.307000 audit[5665]: CRED_ACQ pid=5665 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:44.340571 kernel: audit: type=1103 audit(1719904304.307:756): pid=5665 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:44.795112 sshd[5663]: pam_unix(sshd:session): session closed for user core
Jul  2 07:11:44.796000 audit[5663]: USER_END pid=5663 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:44.799134 systemd-logind[1465]: Session 18 logged out. Waiting for processes to exit.
Jul  2 07:11:44.801621 systemd[1]: sshd@15-10.200.8.44:22-10.200.16.10:42130.service: Deactivated successfully.
Jul  2 07:11:44.802456 systemd[1]: session-18.scope: Deactivated successfully.
Jul  2 07:11:44.803854 systemd-logind[1465]: Removed session 18.
Jul  2 07:11:44.796000 audit[5663]: CRED_DISP pid=5663 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:44.817620 kernel: audit: type=1106 audit(1719904304.796:757): pid=5663 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:44.817754 kernel: audit: type=1104 audit(1719904304.796:758): pid=5663 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:44.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.8.44:22-10.200.16.10:42130 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:11:49.935720 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jul  2 07:11:49.935936 kernel: audit: type=1130 audit(1719904309.915:760): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.8.44:22-10.200.16.10:35586 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:11:49.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.8.44:22-10.200.16.10:35586 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:11:49.916285 systemd[1]: Started sshd@16-10.200.8.44:22-10.200.16.10:35586.service - OpenSSH per-connection server daemon (10.200.16.10:35586).
Jul  2 07:11:50.576000 audit[5708]: USER_ACCT pid=5708 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:50.585145 sshd[5708]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul  2 07:11:50.586382 sshd[5708]: Accepted publickey for core from 10.200.16.10 port 35586 ssh2: RSA SHA256:Vdpuwv5gh2GKoqfNfevDgb5gK9TYVezD5lrsriYemy4
Jul  2 07:11:50.583000 audit[5708]: CRED_ACQ pid=5708 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:50.602736 systemd-logind[1465]: New session 19 of user core.
Jul  2 07:11:50.625683 kernel: audit: type=1101 audit(1719904310.576:761): pid=5708 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:50.625725 kernel: audit: type=1103 audit(1719904310.583:762): pid=5708 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:50.625748 kernel: audit: type=1006 audit(1719904310.583:763): pid=5708 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1
Jul  2 07:11:50.625770 kernel: audit: type=1300 audit(1719904310.583:763): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc337b8140 a2=3 a3=7f68fd602480 items=0 ppid=1 pid=5708 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:11:50.583000 audit[5708]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc337b8140 a2=3 a3=7f68fd602480 items=0 ppid=1 pid=5708 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:11:50.619276 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul  2 07:11:50.583000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul  2 07:11:50.643949 kernel: audit: type=1327 audit(1719904310.583:763): proctitle=737368643A20636F7265205B707269765D
Jul  2 07:11:50.656050 kernel: audit: type=1105 audit(1719904310.632:764): pid=5708 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:50.632000 audit[5708]: USER_START pid=5708 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:50.658042 kernel: audit: type=1103 audit(1719904310.636:765): pid=5710 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:50.636000 audit[5710]: CRED_ACQ pid=5710 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:51.104002 sshd[5708]: pam_unix(sshd:session): session closed for user core
Jul  2 07:11:51.104000 audit[5708]: USER_END pid=5708 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:51.107796 systemd-logind[1465]: Session 19 logged out. Waiting for processes to exit.
Jul  2 07:11:51.109415 systemd[1]: sshd@16-10.200.8.44:22-10.200.16.10:35586.service: Deactivated successfully.
Jul  2 07:11:51.110281 systemd[1]: session-19.scope: Deactivated successfully.
Jul  2 07:11:51.111984 systemd-logind[1465]: Removed session 19.
Jul  2 07:11:51.105000 audit[5708]: CRED_DISP pid=5708 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:51.129644 kernel: audit: type=1106 audit(1719904311.104:766): pid=5708 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:51.129752 kernel: audit: type=1104 audit(1719904311.105:767): pid=5708 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:51.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.8.44:22-10.200.16.10:35586 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:11:51.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.8.44:22-10.200.16.10:35598 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:11:51.228773 systemd[1]: Started sshd@17-10.200.8.44:22-10.200.16.10:35598.service - OpenSSH per-connection server daemon (10.200.16.10:35598).
Jul  2 07:11:51.880692 sshd[5720]: Accepted publickey for core from 10.200.16.10 port 35598 ssh2: RSA SHA256:Vdpuwv5gh2GKoqfNfevDgb5gK9TYVezD5lrsriYemy4
Jul  2 07:11:51.879000 audit[5720]: USER_ACCT pid=5720 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:51.881000 audit[5720]: CRED_ACQ pid=5720 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:51.881000 audit[5720]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe310bb180 a2=3 a3=7f01ca89a480 items=0 ppid=1 pid=5720 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:11:51.881000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul  2 07:11:51.885088 sshd[5720]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul  2 07:11:51.890719 systemd-logind[1465]: New session 20 of user core.
Jul  2 07:11:51.893092 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul  2 07:11:51.897000 audit[5720]: USER_START pid=5720 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:51.899000 audit[5722]: CRED_ACQ pid=5722 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:52.473473 sshd[5720]: pam_unix(sshd:session): session closed for user core
Jul  2 07:11:52.474000 audit[5720]: USER_END pid=5720 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:52.474000 audit[5720]: CRED_DISP pid=5720 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:52.477042 systemd[1]: sshd@17-10.200.8.44:22-10.200.16.10:35598.service: Deactivated successfully.
Jul  2 07:11:52.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.8.44:22-10.200.16.10:35598 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:11:52.478118 systemd[1]: session-20.scope: Deactivated successfully.
Jul  2 07:11:52.478882 systemd-logind[1465]: Session 20 logged out. Waiting for processes to exit.
Jul  2 07:11:52.479692 systemd-logind[1465]: Removed session 20.
Jul  2 07:11:52.591485 systemd[1]: Started sshd@18-10.200.8.44:22-10.200.16.10:35602.service - OpenSSH per-connection server daemon (10.200.16.10:35602).
Jul  2 07:11:52.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.8.44:22-10.200.16.10:35602 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:11:53.239000 audit[5729]: USER_ACCT pid=5729 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:53.240958 sshd[5729]: Accepted publickey for core from 10.200.16.10 port 35602 ssh2: RSA SHA256:Vdpuwv5gh2GKoqfNfevDgb5gK9TYVezD5lrsriYemy4
Jul  2 07:11:53.241000 audit[5729]: CRED_ACQ pid=5729 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:53.241000 audit[5729]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffed49ceab0 a2=3 a3=7f70f910b480 items=0 ppid=1 pid=5729 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:11:53.241000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul  2 07:11:53.242728 sshd[5729]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul  2 07:11:53.248508 systemd-logind[1465]: New session 21 of user core.
Jul  2 07:11:53.254096 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul  2 07:11:53.258000 audit[5729]: USER_START pid=5729 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:53.260000 audit[5731]: CRED_ACQ pid=5731 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:55.309000 audit[5746]: NETFILTER_CFG table=filter:127 family=2 entries=20 op=nft_register_rule pid=5746 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul  2 07:11:55.312589 kernel: kauditd_printk_skb: 20 callbacks suppressed
Jul  2 07:11:55.312714 kernel: audit: type=1325 audit(1719904315.309:784): table=filter:127 family=2 entries=20 op=nft_register_rule pid=5746 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul  2 07:11:55.309000 audit[5746]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffc55896710 a2=0 a3=7ffc558966fc items=0 ppid=3058 pid=5746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:11:55.309000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul  2 07:11:55.339223 kernel: audit: type=1300 audit(1719904315.309:784): arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffc55896710 a2=0 a3=7ffc558966fc items=0 ppid=3058 pid=5746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:11:55.339384 kernel: audit: type=1327 audit(1719904315.309:784): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul  2 07:11:55.343000 audit[5746]: NETFILTER_CFG table=nat:128 family=2 entries=22 op=nft_register_rule pid=5746 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul  2 07:11:55.352895 kernel: audit: type=1325 audit(1719904315.343:785): table=nat:128 family=2 entries=22 op=nft_register_rule pid=5746 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul  2 07:11:55.343000 audit[5746]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffc55896710 a2=0 a3=0 items=0 ppid=3058 pid=5746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:11:55.366895 kernel: audit: type=1300 audit(1719904315.343:785): arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffc55896710 a2=0 a3=0 items=0 ppid=3058 pid=5746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:11:55.343000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul  2 07:11:55.374880 kernel: audit: type=1327 audit(1719904315.343:785): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul  2 07:11:55.376000 audit[5748]: NETFILTER_CFG table=filter:129 family=2 entries=32 op=nft_register_rule pid=5748 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul  2 07:11:55.376000 audit[5748]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffc7359eb40 a2=0 a3=7ffc7359eb2c items=0 ppid=3058 pid=5748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:11:55.396204 kernel: audit: type=1325 audit(1719904315.376:786): table=filter:129 family=2 entries=32 op=nft_register_rule pid=5748 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul  2 07:11:55.396323 kernel: audit: type=1300 audit(1719904315.376:786): arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffc7359eb40 a2=0 a3=7ffc7359eb2c items=0 ppid=3058 pid=5748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:11:55.376000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul  2 07:11:55.403574 kernel: audit: type=1327 audit(1719904315.376:786): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul  2 07:11:55.389000 audit[5748]: NETFILTER_CFG table=nat:130 family=2 entries=22 op=nft_register_rule pid=5748 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul  2 07:11:55.410294 sshd[5729]: pam_unix(sshd:session): session closed for user core
Jul  2 07:11:55.389000 audit[5748]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffc7359eb40 a2=0 a3=0 items=0 ppid=3058 pid=5748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:11:55.389000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul  2 07:11:55.410000 audit[5729]: USER_END pid=5729 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:55.411000 audit[5729]: CRED_DISP pid=5729 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:55.413880 kernel: audit: type=1325 audit(1719904315.389:787): table=nat:130 family=2 entries=22 op=nft_register_rule pid=5748 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul  2 07:11:55.414964 systemd-logind[1465]: Session 21 logged out. Waiting for processes to exit.
Jul  2 07:11:55.416309 systemd[1]: sshd@18-10.200.8.44:22-10.200.16.10:35602.service: Deactivated successfully.
Jul  2 07:11:55.415000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.8.44:22-10.200.16.10:35602 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:11:55.417418 systemd[1]: session-21.scope: Deactivated successfully.
Jul  2 07:11:55.418350 systemd-logind[1465]: Removed session 21.
Jul  2 07:11:55.529894 systemd[1]: Started sshd@19-10.200.8.44:22-10.200.16.10:35618.service - OpenSSH per-connection server daemon (10.200.16.10:35618).
Jul  2 07:11:55.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.8.44:22-10.200.16.10:35618 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:11:56.174000 audit[5751]: USER_ACCT pid=5751 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:56.175610 sshd[5751]: Accepted publickey for core from 10.200.16.10 port 35618 ssh2: RSA SHA256:Vdpuwv5gh2GKoqfNfevDgb5gK9TYVezD5lrsriYemy4
Jul  2 07:11:56.176000 audit[5751]: CRED_ACQ pid=5751 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:56.176000 audit[5751]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeac0a97f0 a2=3 a3=7fd41dbd8480 items=0 ppid=1 pid=5751 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:11:56.176000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul  2 07:11:56.177363 sshd[5751]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul  2 07:11:56.182777 systemd-logind[1465]: New session 22 of user core.
Jul  2 07:11:56.185066 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul  2 07:11:56.189000 audit[5751]: USER_START pid=5751 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:56.191000 audit[5753]: CRED_ACQ pid=5753 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:56.818517 sshd[5751]: pam_unix(sshd:session): session closed for user core
Jul  2 07:11:56.819000 audit[5751]: USER_END pid=5751 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:56.819000 audit[5751]: CRED_DISP pid=5751 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:56.822351 systemd[1]: sshd@19-10.200.8.44:22-10.200.16.10:35618.service: Deactivated successfully.
Jul  2 07:11:56.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.8.44:22-10.200.16.10:35618 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:11:56.823519 systemd[1]: session-22.scope: Deactivated successfully.
Jul  2 07:11:56.824430 systemd-logind[1465]: Session 22 logged out. Waiting for processes to exit.
Jul  2 07:11:56.825474 systemd-logind[1465]: Removed session 22.
Jul  2 07:11:56.939509 systemd[1]: Started sshd@20-10.200.8.44:22-10.200.16.10:35630.service - OpenSSH per-connection server daemon (10.200.16.10:35630).
Jul  2 07:11:56.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.8.44:22-10.200.16.10:35630 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:11:57.146700 systemd[1]: run-containerd-runc-k8s.io-a899ddb955bd1e73007bf66cdb2a1064e53eb4fa246cee1e59ea2dc85a142fed-runc.0DUn5i.mount: Deactivated successfully.
Jul  2 07:11:57.588000 audit[5763]: USER_ACCT pid=5763 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:57.589524 sshd[5763]: Accepted publickey for core from 10.200.16.10 port 35630 ssh2: RSA SHA256:Vdpuwv5gh2GKoqfNfevDgb5gK9TYVezD5lrsriYemy4
Jul  2 07:11:57.590000 audit[5763]: CRED_ACQ pid=5763 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:57.590000 audit[5763]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe2f3eed60 a2=3 a3=7f65e003a480 items=0 ppid=1 pid=5763 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:11:57.590000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul  2 07:11:57.591555 sshd[5763]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul  2 07:11:57.597648 systemd-logind[1465]: New session 23 of user core.
Jul  2 07:11:57.603326 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul  2 07:11:57.608000 audit[5763]: USER_START pid=5763 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:57.610000 audit[5785]: CRED_ACQ pid=5785 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:58.111553 sshd[5763]: pam_unix(sshd:session): session closed for user core
Jul  2 07:11:58.112000 audit[5763]: USER_END pid=5763 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:58.112000 audit[5763]: CRED_DISP pid=5763 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:11:58.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.8.44:22-10.200.16.10:35630 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:11:58.115225 systemd-logind[1465]: Session 23 logged out. Waiting for processes to exit.
Jul  2 07:11:58.115502 systemd[1]: sshd@20-10.200.8.44:22-10.200.16.10:35630.service: Deactivated successfully.
Jul  2 07:11:58.116632 systemd[1]: session-23.scope: Deactivated successfully.
Jul  2 07:11:58.117656 systemd-logind[1465]: Removed session 23.
Jul  2 07:11:59.049000 audit[2807]: AVC avc:  denied  { watch } for  pid=2807 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=5730574 scontext=system_u:system_r:container_t:s0:c853,c1005 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:11:59.049000 audit[2807]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c001cd42e0 a2=fc6 a3=0 items=0 ppid=2622 pid=2807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c853,c1005 key=(null)
Jul  2 07:11:59.049000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269
Jul  2 07:11:59.053000 audit[2807]: AVC avc:  denied  { watch } for  pid=2807 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=5730580 scontext=system_u:system_r:container_t:s0:c853,c1005 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:11:59.053000 audit[2807]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c0011d1fb0 a2=fc6 a3=0 items=0 ppid=2622 pid=2807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c853,c1005 key=(null)
Jul  2 07:11:59.053000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269
Jul  2 07:11:59.249000 audit[2781]: AVC avc:  denied  { watch } for  pid=2781 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=5730574 scontext=system_u:system_r:container_t:s0:c108,c388 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:11:59.249000 audit[2781]: AVC avc:  denied  { watch } for  pid=2781 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=5730574 scontext=system_u:system_r:container_t:s0:c108,c388 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:11:59.249000 audit[2781]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=62 a1=c006348520 a2=fc6 a3=0 items=0 ppid=2624 pid=2781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c108,c388 key=(null)
Jul  2 07:11:59.249000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3434002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265
Jul  2 07:11:59.249000 audit[2781]: AVC avc:  denied  { watch } for  pid=2781 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=5730580 scontext=system_u:system_r:container_t:s0:c108,c388 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:11:59.249000 audit[2781]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=62 a1=c0119126f0 a2=fc6 a3=0 items=0 ppid=2624 pid=2781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c108,c388 key=(null)
Jul  2 07:11:59.249000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3434002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265
Jul  2 07:11:59.249000 audit[2781]: AVC avc:  denied  { watch } for  pid=2781 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=5730582 scontext=system_u:system_r:container_t:s0:c108,c388 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:11:59.249000 audit[2781]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=62 a1=c011912900 a2=fc6 a3=0 items=0 ppid=2624 pid=2781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c108,c388 key=(null)
Jul  2 07:11:59.249000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3434002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265
Jul  2 07:11:59.249000 audit[2781]: AVC avc:  denied  { watch } for  pid=2781 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=5730576 scontext=system_u:system_r:container_t:s0:c108,c388 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:11:59.249000 audit[2781]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=62 a1=c011912930 a2=fc6 a3=0 items=0 ppid=2624 pid=2781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c108,c388 key=(null)
Jul  2 07:11:59.249000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3434002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265
Jul  2 07:11:59.249000 audit[2781]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=61 a1=c006677e80 a2=fc6 a3=0 items=0 ppid=2624 pid=2781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c108,c388 key=(null)
Jul  2 07:11:59.249000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3434002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265
Jul  2 07:11:59.266000 audit[2781]: AVC avc:  denied  { watch } for  pid=2781 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=5730580 scontext=system_u:system_r:container_t:s0:c108,c388 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:11:59.266000 audit[2781]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=61 a1=c011912bd0 a2=fc6 a3=0 items=0 ppid=2624 pid=2781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c108,c388 key=(null)
Jul  2 07:11:59.266000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3434002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265
Jul  2 07:12:03.252053 kernel: kauditd_printk_skb: 51 callbacks suppressed
Jul  2 07:12:03.252223 kernel: audit: type=1130 audit(1719904323.229:817): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.8.44:22-10.200.16.10:34490 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:12:03.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.8.44:22-10.200.16.10:34490 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:12:03.230509 systemd[1]: Started sshd@21-10.200.8.44:22-10.200.16.10:34490.service - OpenSSH per-connection server daemon (10.200.16.10:34490).
Jul  2 07:12:03.876000 audit[5797]: USER_ACCT pid=5797 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:03.877410 sshd[5797]: Accepted publickey for core from 10.200.16.10 port 34490 ssh2: RSA SHA256:Vdpuwv5gh2GKoqfNfevDgb5gK9TYVezD5lrsriYemy4
Jul  2 07:12:03.879437 sshd[5797]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul  2 07:12:03.886595 systemd-logind[1465]: New session 24 of user core.
Jul  2 07:12:03.900845 kernel: audit: type=1101 audit(1719904323.876:818): pid=5797 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:03.900905 kernel: audit: type=1103 audit(1719904323.878:819): pid=5797 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:03.878000 audit[5797]: CRED_ACQ pid=5797 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:03.900267 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul  2 07:12:03.915471 kernel: audit: type=1006 audit(1719904323.878:820): pid=5797 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1
Jul  2 07:12:03.878000 audit[5797]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe31b08990 a2=3 a3=7ff5b412a480 items=0 ppid=1 pid=5797 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:12:03.926180 kernel: audit: type=1300 audit(1719904323.878:820): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe31b08990 a2=3 a3=7ff5b412a480 items=0 ppid=1 pid=5797 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:12:03.878000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul  2 07:12:03.931056 kernel: audit: type=1327 audit(1719904323.878:820): proctitle=737368643A20636F7265205B707269765D
Jul  2 07:12:03.906000 audit[5797]: USER_START pid=5797 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:03.941450 kernel: audit: type=1105 audit(1719904323.906:821): pid=5797 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:03.908000 audit[5799]: CRED_ACQ pid=5799 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:03.951395 kernel: audit: type=1103 audit(1719904323.908:822): pid=5799 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:04.397150 sshd[5797]: pam_unix(sshd:session): session closed for user core
Jul  2 07:12:04.397000 audit[5797]: USER_END pid=5797 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:04.403144 systemd[1]: sshd@21-10.200.8.44:22-10.200.16.10:34490.service: Deactivated successfully.
Jul  2 07:12:04.403942 systemd[1]: session-24.scope: Deactivated successfully.
Jul  2 07:12:04.406617 systemd-logind[1465]: Session 24 logged out. Waiting for processes to exit.
Jul  2 07:12:04.407641 systemd-logind[1465]: Removed session 24.
Jul  2 07:12:04.397000 audit[5797]: CRED_DISP pid=5797 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:04.424463 kernel: audit: type=1106 audit(1719904324.397:823): pid=5797 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:04.424587 kernel: audit: type=1104 audit(1719904324.397:824): pid=5797 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:04.402000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.8.44:22-10.200.16.10:34490 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:12:04.843146 systemd[1]: run-containerd-runc-k8s.io-f4bd51971e086154ff5280855d676c5ad0804a2d58ad10b6d6c77ed523b47fdc-runc.NCB0Mn.mount: Deactivated successfully.
Jul  2 07:12:09.525879 systemd[1]: Started sshd@22-10.200.8.44:22-10.200.16.10:35114.service - OpenSSH per-connection server daemon (10.200.16.10:35114).
Jul  2 07:12:09.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.8.44:22-10.200.16.10:35114 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:12:09.536791 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jul  2 07:12:09.536885 kernel: audit: type=1130 audit(1719904329.524:826): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.8.44:22-10.200.16.10:35114 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:12:10.174000 audit[5836]: USER_ACCT pid=5836 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:10.177988 sshd[5836]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul  2 07:12:10.179576 sshd[5836]: Accepted publickey for core from 10.200.16.10 port 35114 ssh2: RSA SHA256:Vdpuwv5gh2GKoqfNfevDgb5gK9TYVezD5lrsriYemy4
Jul  2 07:12:10.185060 systemd-logind[1465]: New session 25 of user core.
Jul  2 07:12:10.208274 kernel: audit: type=1101 audit(1719904330.174:827): pid=5836 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:10.208333 kernel: audit: type=1103 audit(1719904330.175:828): pid=5836 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:10.208359 kernel: audit: type=1006 audit(1719904330.175:829): pid=5836 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1
Jul  2 07:12:10.208384 kernel: audit: type=1300 audit(1719904330.175:829): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffea2391300 a2=3 a3=7ff79e56b480 items=0 ppid=1 pid=5836 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:12:10.175000 audit[5836]: CRED_ACQ pid=5836 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:10.175000 audit[5836]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffea2391300 a2=3 a3=7ff79e56b480 items=0 ppid=1 pid=5836 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:12:10.207346 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul  2 07:12:10.175000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul  2 07:12:10.224909 kernel: audit: type=1327 audit(1719904330.175:829): proctitle=737368643A20636F7265205B707269765D
Jul  2 07:12:10.213000 audit[5836]: USER_START pid=5836 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:10.240342 kernel: audit: type=1105 audit(1719904330.213:830): pid=5836 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:10.215000 audit[5838]: CRED_ACQ pid=5838 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:10.255926 kernel: audit: type=1103 audit(1719904330.215:831): pid=5838 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:10.713645 sshd[5836]: pam_unix(sshd:session): session closed for user core
Jul  2 07:12:10.713000 audit[5836]: USER_END pid=5836 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:10.717623 systemd-logind[1465]: Session 25 logged out. Waiting for processes to exit.
Jul  2 07:12:10.718900 systemd[1]: sshd@22-10.200.8.44:22-10.200.16.10:35114.service: Deactivated successfully.
Jul  2 07:12:10.719702 systemd[1]: session-25.scope: Deactivated successfully.
Jul  2 07:12:10.721098 systemd-logind[1465]: Removed session 25.
Jul  2 07:12:10.713000 audit[5836]: CRED_DISP pid=5836 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:10.727888 kernel: audit: type=1106 audit(1719904330.713:832): pid=5836 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:10.727928 kernel: audit: type=1104 audit(1719904330.713:833): pid=5836 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:10.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.8.44:22-10.200.16.10:35114 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:12:15.549000 audit[5853]: NETFILTER_CFG table=filter:131 family=2 entries=20 op=nft_register_rule pid=5853 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul  2 07:12:15.553394 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jul  2 07:12:15.553504 kernel: audit: type=1325 audit(1719904335.549:835): table=filter:131 family=2 entries=20 op=nft_register_rule pid=5853 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul  2 07:12:15.564887 kernel: audit: type=1300 audit(1719904335.549:835): arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffc380c0fe0 a2=0 a3=7ffc380c0fcc items=0 ppid=3058 pid=5853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:12:15.549000 audit[5853]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffc380c0fe0 a2=0 a3=7ffc380c0fcc items=0 ppid=3058 pid=5853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:12:15.549000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul  2 07:12:15.552000 audit[5853]: NETFILTER_CFG table=nat:132 family=2 entries=106 op=nft_register_chain pid=5853 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul  2 07:12:15.588794 kernel: audit: type=1327 audit(1719904335.549:835): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul  2 07:12:15.588945 kernel: audit: type=1325 audit(1719904335.552:836): table=nat:132 family=2 entries=106 op=nft_register_chain pid=5853 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul  2 07:12:15.552000 audit[5853]: SYSCALL arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7ffc380c0fe0 a2=0 a3=7ffc380c0fcc items=0 ppid=3058 pid=5853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:12:15.603515 kernel: audit: type=1300 audit(1719904335.552:836): arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7ffc380c0fe0 a2=0 a3=7ffc380c0fcc items=0 ppid=3058 pid=5853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:12:15.552000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul  2 07:12:15.610899 kernel: audit: type=1327 audit(1719904335.552:836): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul  2 07:12:15.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.8.44:22-10.200.16.10:35124 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:12:15.838903 systemd[1]: Started sshd@23-10.200.8.44:22-10.200.16.10:35124.service - OpenSSH per-connection server daemon (10.200.16.10:35124).
Jul  2 07:12:15.851944 kernel: audit: type=1130 audit(1719904335.837:837): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.8.44:22-10.200.16.10:35124 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:12:16.046547 systemd[1]: run-containerd-runc-k8s.io-a899ddb955bd1e73007bf66cdb2a1064e53eb4fa246cee1e59ea2dc85a142fed-runc.iY5xLE.mount: Deactivated successfully.
Jul  2 07:12:16.478000 audit[5856]: USER_ACCT pid=5856 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:16.486679 sshd[5856]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul  2 07:12:16.487347 sshd[5856]: Accepted publickey for core from 10.200.16.10 port 35124 ssh2: RSA SHA256:Vdpuwv5gh2GKoqfNfevDgb5gK9TYVezD5lrsriYemy4
Jul  2 07:12:16.500728 systemd-logind[1465]: New session 26 of user core.
Jul  2 07:12:16.522148 kernel: audit: type=1101 audit(1719904336.478:838): pid=5856 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:16.522216 kernel: audit: type=1103 audit(1719904336.484:839): pid=5856 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:16.522242 kernel: audit: type=1006 audit(1719904336.484:840): pid=5856 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1
Jul  2 07:12:16.484000 audit[5856]: CRED_ACQ pid=5856 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:16.484000 audit[5856]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcc64b68c0 a2=3 a3=7f54fda54480 items=0 ppid=1 pid=5856 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:12:16.484000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul  2 07:12:16.521274 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul  2 07:12:16.529000 audit[5856]: USER_START pid=5856 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:16.531000 audit[5877]: CRED_ACQ pid=5877 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:16.588000 audit[2807]: AVC avc:  denied  { watch } for  pid=2807 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=5730574 scontext=system_u:system_r:container_t:s0:c853,c1005 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:12:16.588000 audit[2807]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c002016de0 a2=fc6 a3=0 items=0 ppid=2622 pid=2807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c853,c1005 key=(null)
Jul  2 07:12:16.588000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269
Jul  2 07:12:16.592000 audit[2807]: AVC avc:  denied  { watch } for  pid=2807 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=5730574 scontext=system_u:system_r:container_t:s0:c853,c1005 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:12:16.592000 audit[2807]: AVC avc:  denied  { watch } for  pid=2807 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=5730574 scontext=system_u:system_r:container_t:s0:c853,c1005 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:12:16.592000 audit[2807]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c002016e20 a2=fc6 a3=0 items=0 ppid=2622 pid=2807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c853,c1005 key=(null)
Jul  2 07:12:16.592000 audit[2807]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c002af0280 a2=fc6 a3=0 items=0 ppid=2622 pid=2807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c853,c1005 key=(null)
Jul  2 07:12:16.592000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269
Jul  2 07:12:16.592000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269
Jul  2 07:12:16.592000 audit[2807]: AVC avc:  denied  { watch } for  pid=2807 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=5730574 scontext=system_u:system_r:container_t:s0:c853,c1005 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:12:16.592000 audit[2807]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c002af02a0 a2=fc6 a3=0 items=0 ppid=2622 pid=2807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c853,c1005 key=(null)
Jul  2 07:12:16.592000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269
Jul  2 07:12:16.999164 sshd[5856]: pam_unix(sshd:session): session closed for user core
Jul  2 07:12:16.999000 audit[5856]: USER_END pid=5856 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:16.999000 audit[5856]: CRED_DISP pid=5856 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:17.003192 systemd[1]: sshd@23-10.200.8.44:22-10.200.16.10:35124.service: Deactivated successfully.
Jul  2 07:12:17.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.8.44:22-10.200.16.10:35124 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:12:17.004374 systemd[1]: session-26.scope: Deactivated successfully.
Jul  2 07:12:17.005267 systemd-logind[1465]: Session 26 logged out. Waiting for processes to exit.
Jul  2 07:12:17.006183 systemd-logind[1465]: Removed session 26.
Jul  2 07:12:22.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.8.44:22-10.200.16.10:60428 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:12:22.124049 kernel: kauditd_printk_skb: 19 callbacks suppressed
Jul  2 07:12:22.124138 kernel: audit: type=1130 audit(1719904342.119:850): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.8.44:22-10.200.16.10:60428 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:12:22.121505 systemd[1]: Started sshd@24-10.200.8.44:22-10.200.16.10:60428.service - OpenSSH per-connection server daemon (10.200.16.10:60428).
Jul  2 07:12:22.785000 audit[5889]: USER_ACCT pid=5889 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:22.789551 sshd[5889]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul  2 07:12:22.790380 sshd[5889]: Accepted publickey for core from 10.200.16.10 port 60428 ssh2: RSA SHA256:Vdpuwv5gh2GKoqfNfevDgb5gK9TYVezD5lrsriYemy4
Jul  2 07:12:22.796225 systemd-logind[1465]: New session 27 of user core.
Jul  2 07:12:22.813324 kernel: audit: type=1101 audit(1719904342.785:851): pid=5889 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:22.813396 kernel: audit: type=1103 audit(1719904342.785:852): pid=5889 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:22.785000 audit[5889]: CRED_ACQ pid=5889 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:22.812358 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul  2 07:12:22.820618 kernel: audit: type=1006 audit(1719904342.785:853): pid=5889 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1
Jul  2 07:12:22.820716 kernel: audit: type=1300 audit(1719904342.785:853): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc777f4300 a2=3 a3=7f21a03b9480 items=0 ppid=1 pid=5889 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:12:22.785000 audit[5889]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc777f4300 a2=3 a3=7f21a03b9480 items=0 ppid=1 pid=5889 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:12:22.785000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul  2 07:12:22.834167 kernel: audit: type=1327 audit(1719904342.785:853): proctitle=737368643A20636F7265205B707269765D
Jul  2 07:12:22.813000 audit[5889]: USER_START pid=5889 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:22.845107 kernel: audit: type=1105 audit(1719904342.813:854): pid=5889 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:22.819000 audit[5891]: CRED_ACQ pid=5891 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:22.854958 kernel: audit: type=1103 audit(1719904342.819:855): pid=5891 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:23.307772 sshd[5889]: pam_unix(sshd:session): session closed for user core
Jul  2 07:12:23.308000 audit[5889]: USER_END pid=5889 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:23.317129 systemd[1]: sshd@24-10.200.8.44:22-10.200.16.10:60428.service: Deactivated successfully.
Jul  2 07:12:23.317912 systemd[1]: session-27.scope: Deactivated successfully.
Jul  2 07:12:23.319366 systemd-logind[1465]: Session 27 logged out. Waiting for processes to exit.
Jul  2 07:12:23.325458 systemd-logind[1465]: Removed session 27.
Jul  2 07:12:23.310000 audit[5889]: CRED_DISP pid=5889 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:23.340013 kernel: audit: type=1106 audit(1719904343.308:856): pid=5889 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:23.340137 kernel: audit: type=1104 audit(1719904343.310:857): pid=5889 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:23.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.8.44:22-10.200.16.10:60428 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:12:28.427951 systemd[1]: Started sshd@25-10.200.8.44:22-10.200.16.10:51700.service - OpenSSH per-connection server daemon (10.200.16.10:51700).
Jul  2 07:12:28.439351 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jul  2 07:12:28.439469 kernel: audit: type=1130 audit(1719904348.426:859): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.8.44:22-10.200.16.10:51700 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:12:28.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.8.44:22-10.200.16.10:51700 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:12:29.071000 audit[5906]: USER_ACCT pid=5906 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:29.084623 systemd-logind[1465]: New session 28 of user core.
Jul  2 07:12:29.117137 kernel: audit: type=1101 audit(1719904349.071:860): pid=5906 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:29.117176 kernel: audit: type=1103 audit(1719904349.074:861): pid=5906 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:29.117203 kernel: audit: type=1006 audit(1719904349.074:862): pid=5906 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1
Jul  2 07:12:29.117227 kernel: audit: type=1300 audit(1719904349.074:862): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcdda59050 a2=3 a3=7fd2b0452480 items=0 ppid=1 pid=5906 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:12:29.074000 audit[5906]: CRED_ACQ pid=5906 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:29.074000 audit[5906]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcdda59050 a2=3 a3=7fd2b0452480 items=0 ppid=1 pid=5906 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:12:29.117425 sshd[5906]: Accepted publickey for core from 10.200.16.10 port 51700 ssh2: RSA SHA256:Vdpuwv5gh2GKoqfNfevDgb5gK9TYVezD5lrsriYemy4
Jul  2 07:12:29.076768 sshd[5906]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul  2 07:12:29.111271 systemd[1]: Started session-28.scope - Session 28 of User core.
Jul  2 07:12:29.074000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul  2 07:12:29.125650 kernel: audit: type=1327 audit(1719904349.074:862): proctitle=737368643A20636F7265205B707269765D
Jul  2 07:12:29.118000 audit[5906]: USER_START pid=5906 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:29.136061 kernel: audit: type=1105 audit(1719904349.118:863): pid=5906 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:29.120000 audit[5908]: CRED_ACQ pid=5908 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:29.145786 kernel: audit: type=1103 audit(1719904349.120:864): pid=5908 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:29.592914 sshd[5906]: pam_unix(sshd:session): session closed for user core
Jul  2 07:12:29.592000 audit[5906]: USER_END pid=5906 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:29.596634 systemd-logind[1465]: Session 28 logged out. Waiting for processes to exit.
Jul  2 07:12:29.598174 systemd[1]: sshd@25-10.200.8.44:22-10.200.16.10:51700.service: Deactivated successfully.
Jul  2 07:12:29.599033 systemd[1]: session-28.scope: Deactivated successfully.
Jul  2 07:12:29.600664 systemd-logind[1465]: Removed session 28.
Jul  2 07:12:29.592000 audit[5906]: CRED_DISP pid=5906 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:29.616122 kernel: audit: type=1106 audit(1719904349.592:865): pid=5906 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:29.616248 kernel: audit: type=1104 audit(1719904349.592:866): pid=5906 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:29.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.8.44:22-10.200.16.10:51700 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:12:34.715081 systemd[1]: Started sshd@26-10.200.8.44:22-10.200.16.10:51702.service - OpenSSH per-connection server daemon (10.200.16.10:51702).
Jul  2 07:12:34.726797 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jul  2 07:12:34.726919 kernel: audit: type=1130 audit(1719904354.714:868): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.200.8.44:22-10.200.16.10:51702 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:12:34.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.200.8.44:22-10.200.16.10:51702 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:12:35.362000 audit[5918]: USER_ACCT pid=5918 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:35.364913 sshd[5918]: Accepted publickey for core from 10.200.16.10 port 51702 ssh2: RSA SHA256:Vdpuwv5gh2GKoqfNfevDgb5gK9TYVezD5lrsriYemy4
Jul  2 07:12:35.367015 sshd[5918]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul  2 07:12:35.372993 systemd-logind[1465]: New session 29 of user core.
Jul  2 07:12:35.395584 kernel: audit: type=1101 audit(1719904355.362:869): pid=5918 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:35.395636 kernel: audit: type=1103 audit(1719904355.364:870): pid=5918 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:35.395663 kernel: audit: type=1006 audit(1719904355.364:871): pid=5918 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1
Jul  2 07:12:35.395687 kernel: audit: type=1300 audit(1719904355.364:871): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcbade0830 a2=3 a3=7f7f66ba4480 items=0 ppid=1 pid=5918 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:12:35.364000 audit[5918]: CRED_ACQ pid=5918 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:35.364000 audit[5918]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcbade0830 a2=3 a3=7f7f66ba4480 items=0 ppid=1 pid=5918 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:12:35.393335 systemd[1]: Started session-29.scope - Session 29 of User core.
Jul  2 07:12:35.364000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul  2 07:12:35.408450 kernel: audit: type=1327 audit(1719904355.364:871): proctitle=737368643A20636F7265205B707269765D
Jul  2 07:12:35.399000 audit[5918]: USER_START pid=5918 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:35.419097 kernel: audit: type=1105 audit(1719904355.399:872): pid=5918 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:35.404000 audit[5941]: CRED_ACQ pid=5941 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:35.428742 kernel: audit: type=1103 audit(1719904355.404:873): pid=5941 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:35.893181 sshd[5918]: pam_unix(sshd:session): session closed for user core
Jul  2 07:12:35.893000 audit[5918]: USER_END pid=5918 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:35.902005 systemd[1]: sshd@26-10.200.8.44:22-10.200.16.10:51702.service: Deactivated successfully.
Jul  2 07:12:35.902897 systemd[1]: session-29.scope: Deactivated successfully.
Jul  2 07:12:35.904235 systemd-logind[1465]: Session 29 logged out. Waiting for processes to exit.
Jul  2 07:12:35.905261 systemd-logind[1465]: Removed session 29.
Jul  2 07:12:35.893000 audit[5918]: CRED_DISP pid=5918 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:35.916233 kernel: audit: type=1106 audit(1719904355.893:874): pid=5918 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:35.916320 kernel: audit: type=1104 audit(1719904355.893:875): pid=5918 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:35.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.200.8.44:22-10.200.16.10:51702 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:12:41.010493 systemd[1]: Started sshd@27-10.200.8.44:22-10.200.16.10:43586.service - OpenSSH per-connection server daemon (10.200.16.10:43586).
Jul  2 07:12:41.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.200.8.44:22-10.200.16.10:43586 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:12:41.014214 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jul  2 07:12:41.014308 kernel: audit: type=1130 audit(1719904361.008:877): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.200.8.44:22-10.200.16.10:43586 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:12:41.656000 audit[5956]: USER_ACCT pid=5956 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:41.660171 sshd[5956]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul  2 07:12:41.662342 sshd[5956]: Accepted publickey for core from 10.200.16.10 port 43586 ssh2: RSA SHA256:Vdpuwv5gh2GKoqfNfevDgb5gK9TYVezD5lrsriYemy4
Jul  2 07:12:41.669314 systemd-logind[1465]: New session 30 of user core.
Jul  2 07:12:41.704808 kernel: audit: type=1101 audit(1719904361.656:878): pid=5956 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:41.704896 kernel: audit: type=1103 audit(1719904361.657:879): pid=5956 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:41.704934 kernel: audit: type=1006 audit(1719904361.657:880): pid=5956 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=30 res=1
Jul  2 07:12:41.704964 kernel: audit: type=1300 audit(1719904361.657:880): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe9353690 a2=3 a3=7f7db3609480 items=0 ppid=1 pid=5956 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:12:41.657000 audit[5956]: CRED_ACQ pid=5956 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:41.657000 audit[5956]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe9353690 a2=3 a3=7f7db3609480 items=0 ppid=1 pid=5956 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:12:41.704700 systemd[1]: Started session-30.scope - Session 30 of User core.
Jul  2 07:12:41.657000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul  2 07:12:41.710970 kernel: audit: type=1327 audit(1719904361.657:880): proctitle=737368643A20636F7265205B707269765D
Jul  2 07:12:41.712000 audit[5956]: USER_START pid=5956 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:41.730888 kernel: audit: type=1105 audit(1719904361.712:881): pid=5956 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:41.731007 kernel: audit: type=1103 audit(1719904361.718:882): pid=5958 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:41.718000 audit[5958]: CRED_ACQ pid=5958 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:42.183448 sshd[5956]: pam_unix(sshd:session): session closed for user core
Jul  2 07:12:42.183000 audit[5956]: USER_END pid=5956 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:42.187909 systemd[1]: sshd@27-10.200.8.44:22-10.200.16.10:43586.service: Deactivated successfully.
Jul  2 07:12:42.188699 systemd[1]: session-30.scope: Deactivated successfully.
Jul  2 07:12:42.196518 systemd-logind[1465]: Session 30 logged out. Waiting for processes to exit.
Jul  2 07:12:42.197442 systemd-logind[1465]: Removed session 30.
Jul  2 07:12:42.213958 kernel: audit: type=1106 audit(1719904362.183:883): pid=5956 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:42.184000 audit[5956]: CRED_DISP pid=5956 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:42.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.200.8.44:22-10.200.16.10:43586 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  2 07:12:42.244898 kernel: audit: type=1104 audit(1719904362.184:884): pid=5956 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jul  2 07:12:46.049946 systemd[1]: run-containerd-runc-k8s.io-a899ddb955bd1e73007bf66cdb2a1064e53eb4fa246cee1e59ea2dc85a142fed-runc.QKZ1wd.mount: Deactivated successfully.
Jul  2 07:12:56.416366 systemd[1]: cri-containerd-3f427d8bab6de81e63ed9b3f388b44ee15de21a82c9d2a05582fb489ae833697.scope: Deactivated successfully.
Jul  2 07:12:56.448405 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jul  2 07:12:56.448497 kernel: audit: type=1334 audit(1719904376.420:886): prog-id=109 op=UNLOAD
Jul  2 07:12:56.448526 kernel: audit: type=1334 audit(1719904376.420:887): prog-id=127 op=UNLOAD
Jul  2 07:12:56.420000 audit: BPF prog-id=109 op=UNLOAD
Jul  2 07:12:56.420000 audit: BPF prog-id=127 op=UNLOAD
Jul  2 07:12:56.416702 systemd[1]: cri-containerd-3f427d8bab6de81e63ed9b3f388b44ee15de21a82c9d2a05582fb489ae833697.scope: Consumed 4.015s CPU time.
Jul  2 07:12:56.466661 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f427d8bab6de81e63ed9b3f388b44ee15de21a82c9d2a05582fb489ae833697-rootfs.mount: Deactivated successfully.
Jul  2 07:12:56.468785 containerd[1481]: time="2024-07-02T07:12:56.468711352Z" level=info msg="shim disconnected" id=3f427d8bab6de81e63ed9b3f388b44ee15de21a82c9d2a05582fb489ae833697 namespace=k8s.io
Jul  2 07:12:56.468785 containerd[1481]: time="2024-07-02T07:12:56.468781252Z" level=warning msg="cleaning up after shim disconnected" id=3f427d8bab6de81e63ed9b3f388b44ee15de21a82c9d2a05582fb489ae833697 namespace=k8s.io
Jul  2 07:12:56.469373 containerd[1481]: time="2024-07-02T07:12:56.468795452Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul  2 07:12:56.846839 kubelet[2922]: I0702 07:12:56.846085    2922 scope.go:117] "RemoveContainer" containerID="3f427d8bab6de81e63ed9b3f388b44ee15de21a82c9d2a05582fb489ae833697"
Jul  2 07:12:56.849881 containerd[1481]: time="2024-07-02T07:12:56.849829316Z" level=info msg="CreateContainer within sandbox \"f5b73d4fa99452f3b9be8a257ccf7afa6b5cd51446dd6c53a03d2aae7e8d2be7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jul  2 07:12:56.979885 containerd[1481]: time="2024-07-02T07:12:56.979812778Z" level=info msg="CreateContainer within sandbox \"f5b73d4fa99452f3b9be8a257ccf7afa6b5cd51446dd6c53a03d2aae7e8d2be7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"9c0ec5bd0fd681862a9b6734f58555782b075d5eb9726d9a7a2144bf5b2d6080\""
Jul  2 07:12:56.980594 containerd[1481]: time="2024-07-02T07:12:56.980557081Z" level=info msg="StartContainer for \"9c0ec5bd0fd681862a9b6734f58555782b075d5eb9726d9a7a2144bf5b2d6080\""
Jul  2 07:12:57.019098 systemd[1]: Started cri-containerd-9c0ec5bd0fd681862a9b6734f58555782b075d5eb9726d9a7a2144bf5b2d6080.scope - libcontainer container 9c0ec5bd0fd681862a9b6734f58555782b075d5eb9726d9a7a2144bf5b2d6080.
Jul  2 07:12:57.036000 audit: BPF prog-id=220 op=LOAD
Jul  2 07:12:57.041892 kernel: audit: type=1334 audit(1719904377.036:888): prog-id=220 op=LOAD
Jul  2 07:12:57.043536 kernel: audit: type=1334 audit(1719904377.039:889): prog-id=221 op=LOAD
Jul  2 07:12:57.039000 audit: BPF prog-id=221 op=LOAD
Jul  2 07:12:57.039000 audit[6036]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2622 pid=6036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:12:57.060521 kernel: audit: type=1300 audit(1719904377.039:889): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2622 pid=6036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:12:57.039000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963306563356264306664363831383632613962363733346635383535
Jul  2 07:12:57.075660 kernel: audit: type=1327 audit(1719904377.039:889): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963306563356264306664363831383632613962363733346635383535
Jul  2 07:12:57.039000 audit: BPF prog-id=222 op=LOAD
Jul  2 07:12:57.081989 kernel: audit: type=1334 audit(1719904377.039:890): prog-id=222 op=LOAD
Jul  2 07:12:57.039000 audit[6036]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2622 pid=6036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:12:57.097035 kernel: audit: type=1300 audit(1719904377.039:890): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2622 pid=6036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:12:57.039000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963306563356264306664363831383632613962363733346635383535
Jul  2 07:12:57.111901 kernel: audit: type=1327 audit(1719904377.039:890): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963306563356264306664363831383632613962363733346635383535
Jul  2 07:12:57.112114 kernel: audit: type=1334 audit(1719904377.039:891): prog-id=222 op=UNLOAD
Jul  2 07:12:57.039000 audit: BPF prog-id=222 op=UNLOAD
Jul  2 07:12:57.039000 audit: BPF prog-id=221 op=UNLOAD
Jul  2 07:12:57.039000 audit: BPF prog-id=223 op=LOAD
Jul  2 07:12:57.039000 audit[6036]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2622 pid=6036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:12:57.039000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963306563356264306664363831383632613962363733346635383535
Jul  2 07:12:57.147801 containerd[1481]: time="2024-07-02T07:12:57.147746348Z" level=info msg="StartContainer for \"9c0ec5bd0fd681862a9b6734f58555782b075d5eb9726d9a7a2144bf5b2d6080\" returns successfully"
Jul  2 07:12:57.864657 systemd[1]: cri-containerd-9e3c9890d80a3a2def383f663f19cb8408221bf90b15c8551ffbdd43ace8a392.scope: Deactivated successfully.
Jul  2 07:12:57.865005 systemd[1]: cri-containerd-9e3c9890d80a3a2def383f663f19cb8408221bf90b15c8551ffbdd43ace8a392.scope: Consumed 6.520s CPU time.
Jul  2 07:12:57.864000 audit: BPF prog-id=141 op=UNLOAD
Jul  2 07:12:57.868000 audit: BPF prog-id=144 op=UNLOAD
Jul  2 07:12:57.904000 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e3c9890d80a3a2def383f663f19cb8408221bf90b15c8551ffbdd43ace8a392-rootfs.mount: Deactivated successfully.
Jul  2 07:12:57.907157 containerd[1481]: time="2024-07-02T07:12:57.907085272Z" level=info msg="shim disconnected" id=9e3c9890d80a3a2def383f663f19cb8408221bf90b15c8551ffbdd43ace8a392 namespace=k8s.io
Jul  2 07:12:57.907157 containerd[1481]: time="2024-07-02T07:12:57.907156872Z" level=warning msg="cleaning up after shim disconnected" id=9e3c9890d80a3a2def383f663f19cb8408221bf90b15c8551ffbdd43ace8a392 namespace=k8s.io
Jul  2 07:12:57.907630 containerd[1481]: time="2024-07-02T07:12:57.907169672Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul  2 07:12:58.078000 audit[6045]: AVC avc:  denied  { watch } for  pid=6045 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=5730580 scontext=system_u:system_r:container_t:s0:c853,c1005 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:12:58.078000 audit[6045]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=7 a1=c000580cc0 a2=fc6 a3=0 items=0 ppid=2622 pid=6045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c853,c1005 key=(null)
Jul  2 07:12:58.078000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269
Jul  2 07:12:58.078000 audit[6045]: AVC avc:  denied  { watch } for  pid=6045 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=5730574 scontext=system_u:system_r:container_t:s0:c853,c1005 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:12:58.078000 audit[6045]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=7 a1=c00053c380 a2=fc6 a3=0 items=0 ppid=2622 pid=6045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c853,c1005 key=(null)
Jul  2 07:12:58.078000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269
Jul  2 07:12:58.852703 kubelet[2922]: I0702 07:12:58.852659    2922 scope.go:117] "RemoveContainer" containerID="9e3c9890d80a3a2def383f663f19cb8408221bf90b15c8551ffbdd43ace8a392"
Jul  2 07:12:58.855543 containerd[1481]: time="2024-07-02T07:12:58.855350929Z" level=info msg="CreateContainer within sandbox \"1b0345ea9d40c2f091cf3f6429b440438e832c2e441d7ec62dbf99999bb8513c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jul  2 07:12:58.915411 containerd[1481]: time="2024-07-02T07:12:58.915346997Z" level=info msg="CreateContainer within sandbox \"1b0345ea9d40c2f091cf3f6429b440438e832c2e441d7ec62dbf99999bb8513c\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"b1322e25784ffb181f8650c4b28afb3f28f188cf8351178d1203b05c792c1a5b\""
Jul  2 07:12:58.916363 containerd[1481]: time="2024-07-02T07:12:58.916322900Z" level=info msg="StartContainer for \"b1322e25784ffb181f8650c4b28afb3f28f188cf8351178d1203b05c792c1a5b\""
Jul  2 07:12:58.957814 systemd[1]: run-containerd-runc-k8s.io-b1322e25784ffb181f8650c4b28afb3f28f188cf8351178d1203b05c792c1a5b-runc.hCtIG3.mount: Deactivated successfully.
Jul  2 07:12:58.966133 systemd[1]: Started cri-containerd-b1322e25784ffb181f8650c4b28afb3f28f188cf8351178d1203b05c792c1a5b.scope - libcontainer container b1322e25784ffb181f8650c4b28afb3f28f188cf8351178d1203b05c792c1a5b.
Jul  2 07:12:58.986000 audit: BPF prog-id=224 op=LOAD
Jul  2 07:12:58.987000 audit: BPF prog-id=225 op=LOAD
Jul  2 07:12:58.987000 audit[6115]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=3111 pid=6115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:12:58.987000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231333232653235373834666662313831663836353063346232386166
Jul  2 07:12:58.987000 audit: BPF prog-id=226 op=LOAD
Jul  2 07:12:58.987000 audit[6115]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=3111 pid=6115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:12:58.987000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231333232653235373834666662313831663836353063346232386166
Jul  2 07:12:58.987000 audit: BPF prog-id=226 op=UNLOAD
Jul  2 07:12:58.987000 audit: BPF prog-id=225 op=UNLOAD
Jul  2 07:12:58.987000 audit: BPF prog-id=227 op=LOAD
Jul  2 07:12:58.987000 audit[6115]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=3111 pid=6115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:12:58.987000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231333232653235373834666662313831663836353063346232386166
Jul  2 07:12:59.011434 containerd[1481]: time="2024-07-02T07:12:59.011377566Z" level=info msg="StartContainer for \"b1322e25784ffb181f8650c4b28afb3f28f188cf8351178d1203b05c792c1a5b\" returns successfully"
Jul  2 07:12:59.250000 audit[2781]: AVC avc:  denied  { watch } for  pid=2781 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=5730576 scontext=system_u:system_r:container_t:s0:c108,c388 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:12:59.250000 audit[2781]: AVC avc:  denied  { watch } for  pid=2781 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=5730574 scontext=system_u:system_r:container_t:s0:c108,c388 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:12:59.250000 audit[2781]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=63 a1=c009e70160 a2=fc6 a3=0 items=0 ppid=2624 pid=2781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c108,c388 key=(null)
Jul  2 07:12:59.250000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3434002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265
Jul  2 07:12:59.250000 audit[2781]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=62 a1=c0111e0cc0 a2=fc6 a3=0 items=0 ppid=2624 pid=2781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c108,c388 key=(null)
Jul  2 07:12:59.250000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3434002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265
Jul  2 07:12:59.250000 audit[2781]: AVC avc:  denied  { watch } for  pid=2781 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=5730580 scontext=system_u:system_r:container_t:s0:c108,c388 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:12:59.250000 audit[2781]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=62 a1=c0111e0cf0 a2=fc6 a3=0 items=0 ppid=2624 pid=2781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c108,c388 key=(null)
Jul  2 07:12:59.250000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3434002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265
Jul  2 07:12:59.250000 audit[2781]: AVC avc:  denied  { watch } for  pid=2781 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=5730582 scontext=system_u:system_r:container_t:s0:c108,c388 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:12:59.250000 audit[2781]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=62 a1=c0111e0d50 a2=fc6 a3=0 items=0 ppid=2624 pid=2781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c108,c388 key=(null)
Jul  2 07:12:59.250000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3434002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265
Jul  2 07:12:59.251000 audit[2781]: AVC avc:  denied  { watch } for  pid=2781 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=5730574 scontext=system_u:system_r:container_t:s0:c108,c388 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:12:59.251000 audit[2781]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=63 a1=c009e70180 a2=fc6 a3=0 items=0 ppid=2624 pid=2781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c108,c388 key=(null)
Jul  2 07:12:59.251000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3434002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265
Jul  2 07:12:59.267000 audit[2781]: AVC avc:  denied  { watch } for  pid=2781 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=5730580 scontext=system_u:system_r:container_t:s0:c108,c388 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jul  2 07:12:59.267000 audit[2781]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=62 a1=c0111e0f90 a2=fc6 a3=0 items=0 ppid=2624 pid=2781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c108,c388 key=(null)
Jul  2 07:12:59.267000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E382E3434002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265
Jul  2 07:13:01.422561 systemd[1]: cri-containerd-10f946679b246cee83e075c791b0b27ab9f34e73e27f9d3fa4cf3916246b0be0.scope: Deactivated successfully.
Jul  2 07:13:01.422957 systemd[1]: cri-containerd-10f946679b246cee83e075c791b0b27ab9f34e73e27f9d3fa4cf3916246b0be0.scope: Consumed 1.320s CPU time.
Jul  2 07:13:01.429815 kernel: kauditd_printk_skb: 42 callbacks suppressed
Jul  2 07:13:01.430002 kernel: audit: type=1334 audit(1719904381.425:910): prog-id=101 op=UNLOAD
Jul  2 07:13:01.425000 audit: BPF prog-id=101 op=UNLOAD
Jul  2 07:13:01.437104 kernel: audit: type=1334 audit(1719904381.425:911): prog-id=119 op=UNLOAD
Jul  2 07:13:01.425000 audit: BPF prog-id=119 op=UNLOAD
Jul  2 07:13:01.462856 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10f946679b246cee83e075c791b0b27ab9f34e73e27f9d3fa4cf3916246b0be0-rootfs.mount: Deactivated successfully.
Jul  2 07:13:01.464941 containerd[1481]: time="2024-07-02T07:13:01.464792365Z" level=info msg="shim disconnected" id=10f946679b246cee83e075c791b0b27ab9f34e73e27f9d3fa4cf3916246b0be0 namespace=k8s.io
Jul  2 07:13:01.464941 containerd[1481]: time="2024-07-02T07:13:01.464858366Z" level=warning msg="cleaning up after shim disconnected" id=10f946679b246cee83e075c791b0b27ab9f34e73e27f9d3fa4cf3916246b0be0 namespace=k8s.io
Jul  2 07:13:01.464941 containerd[1481]: time="2024-07-02T07:13:01.464887766Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul  2 07:13:01.671153 kubelet[2922]: E0702 07:13:01.670801    2922 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.44:35558->10.200.8.26:2379: read: connection timed out"
Jul  2 07:13:01.863075 kubelet[2922]: I0702 07:13:01.862309    2922 scope.go:117] "RemoveContainer" containerID="10f946679b246cee83e075c791b0b27ab9f34e73e27f9d3fa4cf3916246b0be0"
Jul  2 07:13:01.865056 containerd[1481]: time="2024-07-02T07:13:01.865005693Z" level=info msg="CreateContainer within sandbox \"9a6dce96b39b7b9a05659536e2727f802596aeab92b9bca2d7f6b98b3494a7fd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jul  2 07:13:01.900547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3906385888.mount: Deactivated successfully.
Jul  2 07:13:01.913114 containerd[1481]: time="2024-07-02T07:13:01.913065429Z" level=info msg="CreateContainer within sandbox \"9a6dce96b39b7b9a05659536e2727f802596aeab92b9bca2d7f6b98b3494a7fd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"f70aebbf8d8b004a89ad860f64e682958d1569bcbca225a4fba50b65f14db6fa\""
Jul  2 07:13:01.913700 containerd[1481]: time="2024-07-02T07:13:01.913655130Z" level=info msg="StartContainer for \"f70aebbf8d8b004a89ad860f64e682958d1569bcbca225a4fba50b65f14db6fa\""
Jul  2 07:13:01.945081 systemd[1]: Started cri-containerd-f70aebbf8d8b004a89ad860f64e682958d1569bcbca225a4fba50b65f14db6fa.scope - libcontainer container f70aebbf8d8b004a89ad860f64e682958d1569bcbca225a4fba50b65f14db6fa.
Jul  2 07:13:01.956000 audit: BPF prog-id=228 op=LOAD
Jul  2 07:13:01.963967 kernel: audit: type=1334 audit(1719904381.956:912): prog-id=228 op=LOAD
Jul  2 07:13:01.964127 kernel: audit: type=1334 audit(1719904381.956:913): prog-id=229 op=LOAD
Jul  2 07:13:01.956000 audit: BPF prog-id=229 op=LOAD
Jul  2 07:13:01.975911 kernel: audit: type=1300 audit(1719904381.956:913): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133988 a2=78 a3=0 items=0 ppid=2623 pid=6177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:13:01.956000 audit[6177]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133988 a2=78 a3=0 items=0 ppid=2623 pid=6177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:13:01.956000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637306165626266386438623030346138396164383630663634653638
Jul  2 07:13:01.986751 kernel: audit: type=1327 audit(1719904381.956:913): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637306165626266386438623030346138396164383630663634653638
Jul  2 07:13:01.956000 audit: BPF prog-id=230 op=LOAD
Jul  2 07:13:01.956000 audit[6177]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000133720 a2=78 a3=0 items=0 ppid=2623 pid=6177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:13:02.003894 kernel: audit: type=1334 audit(1719904381.956:914): prog-id=230 op=LOAD
Jul  2 07:13:02.015411 kernel: audit: type=1300 audit(1719904381.956:914): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000133720 a2=78 a3=0 items=0 ppid=2623 pid=6177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:13:02.015480 kernel: audit: type=1327 audit(1719904381.956:914): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637306165626266386438623030346138396164383630663634653638
Jul  2 07:13:01.956000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637306165626266386438623030346138396164383630663634653638
Jul  2 07:13:02.016574 containerd[1481]: time="2024-07-02T07:13:02.015636318Z" level=info msg="StartContainer for \"f70aebbf8d8b004a89ad860f64e682958d1569bcbca225a4fba50b65f14db6fa\" returns successfully"
Jul  2 07:13:01.956000 audit: BPF prog-id=230 op=UNLOAD
Jul  2 07:13:01.956000 audit: BPF prog-id=229 op=UNLOAD
Jul  2 07:13:02.019913 kernel: audit: type=1334 audit(1719904381.956:915): prog-id=230 op=UNLOAD
Jul  2 07:13:01.956000 audit: BPF prog-id=231 op=LOAD
Jul  2 07:13:01.956000 audit[6177]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133be0 a2=78 a3=0 items=0 ppid=2623 pid=6177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul  2 07:13:01.956000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637306165626266386438623030346138396164383630663634653638
Jul  2 07:13:02.565389 kubelet[2922]: E0702 07:13:02.565215    2922 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.44:35372->10.200.8.26:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-3815.2.5-a-b9d6671d68.17de53e9fa49d3d5  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-3815.2.5-a-b9d6671d68,UID:2e08ab65cc4317b0bb8c99bd4d52a10b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-3815.2.5-a-b9d6671d68,},FirstTimestamp:2024-07-02 07:12:52.093735893 +0000 UTC m=+230.086805172,LastTimestamp:2024-07-02 07:12:52.093735893 +0000 UTC m=+230.086805172,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3815.2.5-a-b9d6671d68,}"
Jul  2 07:13:07.301152 kubelet[2922]: I0702 07:13:07.301101    2922 status_manager.go:853] "Failed to get status for pod" podUID="c4c423135a13d65348033bff1ba62872" pod="kube-system/kube-controller-manager-ci-3815.2.5-a-b9d6671d68" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.44:35484->10.200.8.26:2379: read: connection timed out"
Jul  2 07:13:10.506000 audit: BPF prog-id=224 op=UNLOAD
Jul  2 07:13:10.507117 systemd[1]: cri-containerd-b1322e25784ffb181f8650c4b28afb3f28f188cf8351178d1203b05c792c1a5b.scope: Deactivated successfully.
Jul  2 07:13:10.515163 kernel: kauditd_printk_skb: 4 callbacks suppressed
Jul  2 07:13:10.515296 kernel: audit: type=1334 audit(1719904390.506:918): prog-id=224 op=UNLOAD
Jul  2 07:13:10.524000 audit: BPF prog-id=227 op=UNLOAD
Jul  2 07:13:10.531919 kernel: audit: type=1334 audit(1719904390.524:919): prog-id=227 op=UNLOAD
Jul  2 07:13:10.554606 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b1322e25784ffb181f8650c4b28afb3f28f188cf8351178d1203b05c792c1a5b-rootfs.mount: Deactivated successfully.
Jul  2 07:13:10.633940 containerd[1481]: time="2024-07-02T07:13:10.633726310Z" level=info msg="shim disconnected" id=b1322e25784ffb181f8650c4b28afb3f28f188cf8351178d1203b05c792c1a5b namespace=k8s.io
Jul  2 07:13:10.633940 containerd[1481]: time="2024-07-02T07:13:10.633834610Z" level=warning msg="cleaning up after shim disconnected" id=b1322e25784ffb181f8650c4b28afb3f28f188cf8351178d1203b05c792c1a5b namespace=k8s.io
Jul  2 07:13:10.633940 containerd[1481]: time="2024-07-02T07:13:10.633849610Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul  2 07:13:10.885578 kubelet[2922]: I0702 07:13:10.885453    2922 scope.go:117] "RemoveContainer" containerID="9e3c9890d80a3a2def383f663f19cb8408221bf90b15c8551ffbdd43ace8a392"
Jul  2 07:13:10.886313 kubelet[2922]: I0702 07:13:10.886015    2922 scope.go:117] "RemoveContainer" containerID="b1322e25784ffb181f8650c4b28afb3f28f188cf8351178d1203b05c792c1a5b"
Jul  2 07:13:10.886497 kubelet[2922]: E0702 07:13:10.886458    2922 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-76ff79f7fd-kl8t7_tigera-operator(dfb06764-044b-42b8-96de-03ed7db96937)\"" pod="tigera-operator/tigera-operator-76ff79f7fd-kl8t7" podUID="dfb06764-044b-42b8-96de-03ed7db96937"
Jul  2 07:13:10.887748 containerd[1481]: time="2024-07-02T07:13:10.887712137Z" level=info msg="RemoveContainer for \"9e3c9890d80a3a2def383f663f19cb8408221bf90b15c8551ffbdd43ace8a392\""
Jul  2 07:13:10.898150 containerd[1481]: time="2024-07-02T07:13:10.898095566Z" level=info msg="RemoveContainer for \"9e3c9890d80a3a2def383f663f19cb8408221bf90b15c8551ffbdd43ace8a392\" returns successfully"
Jul  2 07:13:11.671717 kubelet[2922]: E0702 07:13:11.671647    2922 controller.go:195] "Failed to update lease" err="Put \"https://10.200.8.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.5-a-b9d6671d68?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jul  2 07:13:16.046313 systemd[1]: run-containerd-runc-k8s.io-a899ddb955bd1e73007bf66cdb2a1064e53eb4fa246cee1e59ea2dc85a142fed-runc.tITJbB.mount: Deactivated successfully.
Jul  2 07:13:18.319903 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.331788 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.345019 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.358443 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.370311 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.384963 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.385336 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.399191 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.399555 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.407362 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.414244 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.419321 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.424131 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.429458 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.431578 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.438893 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.443651 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.453546 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.454140 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.454379 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.466830 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.467242 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.478026 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.478422 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.487429 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.487839 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.496887 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.499140 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.507894 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.514068 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.514224 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.522163 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.522596 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.536356 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.538824 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.555284 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.555710 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.571415 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.571825 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.571997 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.579325 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.579768 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.587313 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.587693 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.595411 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.599363 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.599518 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.609678 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.610186 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.616990 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.625532 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.625931 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.626085 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.633106 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.633474 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.640825 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.641204 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.648688 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.652888 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.653050 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.660236 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.660578 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.667947 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.668350 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.675739 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.676115 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.684195 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.684542 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.692059 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.692390 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.707153 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.707538 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.714889 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.715314 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.724892 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.725355 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.736989 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.737364 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.744587 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.744969 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.752114 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.752462 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.763629 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.764054 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.764199 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.771365 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.775341 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.775551 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.783905 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.784282 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.794209 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.794574 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.802985 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.803351 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.812983 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.813357 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.822332 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.825096 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.832505 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.832882 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.844617 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.849927 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.858339 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.858486 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.866393 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.866740 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.875467 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.882280 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.889430 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.889660 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.901512 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.906324 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.906489 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.906623 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.913827 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.914210 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.928262 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.928687 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.935928 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.936332 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.948449 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.948848 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.959684 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.965011 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.969179 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.969334 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.976981 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.977352 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.985024 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.985457 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.993426 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:18.993783 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.006218 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.006574 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.013661 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.014080 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.023783 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.024225 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.033613 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.038249 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.039254 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.047490 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.048697 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.057982 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.058326 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.067829 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.071953 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.079035 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.084035 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.087580 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.093677 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.094101 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.103112 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.109703 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.110150 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.119723 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.120168 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.129007 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.129913 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.139063 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.143791 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.144349 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.153252 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.160342 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.160542 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.172990 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.173352 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.182380 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.182743 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.192196 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.197373 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.197966 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.207294 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.211428 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.217743 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.218170 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.226835 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.227225 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.236582 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.242591 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.242751 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.251172 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.261306 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.264305 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.264468 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.270992 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.271367 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.281304 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.281673 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.291262 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.292230 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.300771 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.301255 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.337036 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.347445 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.347932 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.361912 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.362335 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.362467 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.372445 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.372942 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.382143 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.387616 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.387989 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.397728 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.408052 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.408453 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.408616 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.417390 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.417961 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.427029 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.427387 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul  2 07:13:19.441543 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
[... 263 further identical hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb tag#132 "cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001" entries, timestamps 07:13:19.441975 through 07:13:20.702194 ...]
Jul  2 07:13:20.702549 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001