May 17 00:33:25.055120 kernel: Linux version 5.15.182-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri May 16 23:09:52 -00 2025 May 17 00:33:25.055145 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0 May 17 00:33:25.055156 kernel: BIOS-provided physical RAM map: May 17 00:33:25.055161 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable May 17 00:33:25.055169 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved May 17 00:33:25.055176 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable May 17 00:33:25.055188 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved May 17 00:33:25.055193 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data May 17 00:33:25.055201 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS May 17 00:33:25.055208 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable May 17 00:33:25.055216 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable May 17 00:33:25.055223 kernel: printk: bootconsole [earlyser0] enabled May 17 00:33:25.055228 kernel: NX (Execute Disable) protection: active May 17 00:33:25.055234 kernel: efi: EFI v2.70 by Microsoft May 17 00:33:25.055247 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c7a98 RNG=0x3ffd1018 May 17 00:33:25.055257 kernel: random: crng init done May 17 00:33:25.055263 kernel: SMBIOS 3.1.0 present. 
May 17 00:33:25.055271 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 May 17 00:33:25.055279 kernel: Hypervisor detected: Microsoft Hyper-V May 17 00:33:25.055287 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 May 17 00:33:25.055295 kernel: Hyper-V Host Build:20348-10.0-1-0.1827 May 17 00:33:25.055301 kernel: Hyper-V: Nested features: 0x1e0101 May 17 00:33:25.055312 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 May 17 00:33:25.055318 kernel: Hyper-V: Using hypercall for remote TLB flush May 17 00:33:25.055328 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns May 17 00:33:25.055334 kernel: tsc: Marking TSC unstable due to running on Hyper-V May 17 00:33:25.055343 kernel: tsc: Detected 2593.905 MHz processor May 17 00:33:25.055351 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 17 00:33:25.055360 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 17 00:33:25.055367 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 May 17 00:33:25.055374 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 17 00:33:25.055383 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved May 17 00:33:25.055395 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 May 17 00:33:25.055401 kernel: Using GB pages for direct mapping May 17 00:33:25.055408 kernel: Secure boot disabled May 17 00:33:25.055417 kernel: ACPI: Early table checksum verification disabled May 17 00:33:25.055426 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) May 17 00:33:25.055433 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 17 00:33:25.055439 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) May 17 00:33:25.055449 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 
00000001 MSFT 05000000) May 17 00:33:25.055465 kernel: ACPI: FACS 0x000000003FFFE000 000040 May 17 00:33:25.055472 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 17 00:33:25.055481 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) May 17 00:33:25.055489 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 17 00:33:25.055499 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) May 17 00:33:25.055506 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) May 17 00:33:25.055518 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 17 00:33:25.055525 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 17 00:33:25.055535 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] May 17 00:33:25.055542 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] May 17 00:33:25.055551 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] May 17 00:33:25.055559 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] May 17 00:33:25.055569 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] May 17 00:33:25.062619 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] May 17 00:33:25.062649 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] May 17 00:33:25.062662 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] May 17 00:33:25.062676 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] May 17 00:33:25.062689 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] May 17 00:33:25.062701 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 May 17 00:33:25.062713 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 May 17 00:33:25.062727 
kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug May 17 00:33:25.062739 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug May 17 00:33:25.062752 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug May 17 00:33:25.062767 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug May 17 00:33:25.062779 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug May 17 00:33:25.062793 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug May 17 00:33:25.062804 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug May 17 00:33:25.062816 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug May 17 00:33:25.062829 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug May 17 00:33:25.062840 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug May 17 00:33:25.062856 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug May 17 00:33:25.062868 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug May 17 00:33:25.062883 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug May 17 00:33:25.062893 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug May 17 00:33:25.062904 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug May 17 00:33:25.062916 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug May 17 00:33:25.062928 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] May 17 00:33:25.062940 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] May 17 00:33:25.062952 kernel: Zone ranges: May 17 00:33:25.062965 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 17 00:33:25.062976 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] May 17 00:33:25.062991 
kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] May 17 00:33:25.063003 kernel: Movable zone start for each node May 17 00:33:25.063016 kernel: Early memory node ranges May 17 00:33:25.063028 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] May 17 00:33:25.063039 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] May 17 00:33:25.063049 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] May 17 00:33:25.063061 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] May 17 00:33:25.063073 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] May 17 00:33:25.063085 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 17 00:33:25.063099 kernel: On node 0, zone DMA: 96 pages in unavailable ranges May 17 00:33:25.063111 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges May 17 00:33:25.063124 kernel: ACPI: PM-Timer IO Port: 0x408 May 17 00:33:25.063136 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) May 17 00:33:25.063147 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 May 17 00:33:25.063159 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 17 00:33:25.063169 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 17 00:33:25.063180 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 May 17 00:33:25.063191 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs May 17 00:33:25.063205 kernel: [mem 0x40000000-0xffffffff] available for PCI devices May 17 00:33:25.063217 kernel: Booting paravirtualized kernel on Hyper-V May 17 00:33:25.063229 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 17 00:33:25.063241 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 May 17 00:33:25.063252 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 May 17 00:33:25.063264 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 May 
17 00:33:25.063279 kernel: pcpu-alloc: [0] 0 1 May 17 00:33:25.063289 kernel: Hyper-V: PV spinlocks enabled May 17 00:33:25.063301 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 17 00:33:25.063316 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 May 17 00:33:25.063327 kernel: Policy zone: Normal May 17 00:33:25.063342 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0 May 17 00:33:25.063361 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 17 00:33:25.063373 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) May 17 00:33:25.063384 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 17 00:33:25.063395 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 17 00:33:25.063406 kernel: Memory: 8071680K/8387460K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47472K init, 4108K bss, 315520K reserved, 0K cma-reserved) May 17 00:33:25.063421 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 17 00:33:25.063432 kernel: ftrace: allocating 34585 entries in 136 pages May 17 00:33:25.063454 kernel: ftrace: allocated 136 pages with 2 groups May 17 00:33:25.063468 kernel: rcu: Hierarchical RCU implementation. May 17 00:33:25.063483 kernel: rcu: RCU event tracing is enabled. May 17 00:33:25.063497 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 17 00:33:25.063511 kernel: Rude variant of Tasks RCU enabled. 
May 17 00:33:25.063523 kernel: Tracing variant of Tasks RCU enabled. May 17 00:33:25.063534 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 17 00:33:25.063545 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 17 00:33:25.063556 kernel: Using NULL legacy PIC May 17 00:33:25.063571 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 May 17 00:33:25.063607 kernel: Console: colour dummy device 80x25 May 17 00:33:25.063619 kernel: printk: console [tty1] enabled May 17 00:33:25.063630 kernel: printk: console [ttyS0] enabled May 17 00:33:25.063641 kernel: printk: bootconsole [earlyser0] disabled May 17 00:33:25.063657 kernel: ACPI: Core revision 20210730 May 17 00:33:25.063669 kernel: Failed to register legacy timer interrupt May 17 00:33:25.063682 kernel: APIC: Switch to symmetric I/O mode setup May 17 00:33:25.063694 kernel: Hyper-V: Using IPI hypercalls May 17 00:33:25.063706 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905) May 17 00:33:25.063719 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 May 17 00:33:25.063731 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 May 17 00:33:25.063744 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 17 00:33:25.063756 kernel: Spectre V2 : Mitigation: Retpolines May 17 00:33:25.063768 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 17 00:33:25.063783 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
May 17 00:33:25.063796 kernel: RETBleed: Vulnerable May 17 00:33:25.063809 kernel: Speculative Store Bypass: Vulnerable May 17 00:33:25.063824 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode May 17 00:33:25.063838 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode May 17 00:33:25.063852 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 17 00:33:25.063865 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 17 00:33:25.063879 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 17 00:33:25.063892 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' May 17 00:33:25.063906 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' May 17 00:33:25.063922 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' May 17 00:33:25.063935 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 17 00:33:25.063948 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 May 17 00:33:25.063961 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 May 17 00:33:25.063974 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 May 17 00:33:25.063987 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. May 17 00:33:25.064000 kernel: Freeing SMP alternatives memory: 32K May 17 00:33:25.064014 kernel: pid_max: default: 32768 minimum: 301 May 17 00:33:25.064027 kernel: LSM: Security Framework initializing May 17 00:33:25.064040 kernel: SELinux: Initializing. 
May 17 00:33:25.064053 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) May 17 00:33:25.064067 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) May 17 00:33:25.064083 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) May 17 00:33:25.064097 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. May 17 00:33:25.064111 kernel: signal: max sigframe size: 3632 May 17 00:33:25.064124 kernel: rcu: Hierarchical SRCU implementation. May 17 00:33:25.064138 kernel: NMI watchdog: Perf NMI watchdog permanently disabled May 17 00:33:25.064151 kernel: smp: Bringing up secondary CPUs ... May 17 00:33:25.064165 kernel: x86: Booting SMP configuration: May 17 00:33:25.064178 kernel: .... node #0, CPUs: #1 May 17 00:33:25.064192 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. May 17 00:33:25.064210 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
May 17 00:33:25.064223 kernel: smp: Brought up 1 node, 2 CPUs May 17 00:33:25.064237 kernel: smpboot: Max logical packages: 1 May 17 00:33:25.064250 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) May 17 00:33:25.064263 kernel: devtmpfs: initialized May 17 00:33:25.064277 kernel: x86/mm: Memory block size: 128MB May 17 00:33:25.064291 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) May 17 00:33:25.064305 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 17 00:33:25.064318 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 17 00:33:25.064335 kernel: pinctrl core: initialized pinctrl subsystem May 17 00:33:25.064349 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 17 00:33:25.064362 kernel: audit: initializing netlink subsys (disabled) May 17 00:33:25.064375 kernel: audit: type=2000 audit(1747442003.023:1): state=initialized audit_enabled=0 res=1 May 17 00:33:25.064388 kernel: thermal_sys: Registered thermal governor 'step_wise' May 17 00:33:25.064402 kernel: thermal_sys: Registered thermal governor 'user_space' May 17 00:33:25.064416 kernel: cpuidle: using governor menu May 17 00:33:25.064428 kernel: ACPI: bus type PCI registered May 17 00:33:25.064442 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 17 00:33:25.064459 kernel: dca service started, version 1.12.1 May 17 00:33:25.064472 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 17 00:33:25.064486 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages May 17 00:33:25.064499 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 17 00:33:25.064513 kernel: ACPI: Added _OSI(Module Device) May 17 00:33:25.064526 kernel: ACPI: Added _OSI(Processor Device) May 17 00:33:25.064539 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 17 00:33:25.064553 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 17 00:33:25.064567 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 17 00:33:25.064601 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 17 00:33:25.064615 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 17 00:33:25.064629 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 17 00:33:25.064642 kernel: ACPI: Interpreter enabled May 17 00:33:25.064656 kernel: ACPI: PM: (supports S0 S5) May 17 00:33:25.064669 kernel: ACPI: Using IOAPIC for interrupt routing May 17 00:33:25.064682 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 17 00:33:25.064696 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F May 17 00:33:25.064709 kernel: iommu: Default domain type: Translated May 17 00:33:25.064726 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 17 00:33:25.064739 kernel: vgaarb: loaded May 17 00:33:25.064752 kernel: pps_core: LinuxPPS API ver. 1 registered May 17 00:33:25.064766 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 17 00:33:25.064780 kernel: PTP clock support registered May 17 00:33:25.064794 kernel: Registered efivars operations May 17 00:33:25.064807 kernel: PCI: Using ACPI for IRQ routing May 17 00:33:25.064820 kernel: PCI: System does not support PCI May 17 00:33:25.064833 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page May 17 00:33:25.064849 kernel: VFS: Disk quotas dquot_6.6.0 May 17 00:33:25.064862 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 17 00:33:25.064876 kernel: pnp: PnP ACPI init May 17 00:33:25.064890 kernel: pnp: PnP ACPI: found 3 devices May 17 00:33:25.064903 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 17 00:33:25.064917 kernel: NET: Registered PF_INET protocol family May 17 00:33:25.064930 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) May 17 00:33:25.064944 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) May 17 00:33:25.064958 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 17 00:33:25.064974 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) May 17 00:33:25.064988 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) May 17 00:33:25.065001 kernel: TCP: Hash tables configured (established 65536 bind 65536) May 17 00:33:25.065015 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) May 17 00:33:25.065028 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) May 17 00:33:25.065041 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 17 00:33:25.065055 kernel: NET: Registered PF_XDP protocol family May 17 00:33:25.065069 kernel: PCI: CLS 0 bytes, default 64 May 17 00:33:25.065082 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) May 17 00:33:25.065097 kernel: software IO TLB: 
mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB) May 17 00:33:25.065109 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer May 17 00:33:25.065120 kernel: Initialise system trusted keyrings May 17 00:33:25.065132 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 May 17 00:33:25.065144 kernel: Key type asymmetric registered May 17 00:33:25.065156 kernel: Asymmetric key parser 'x509' registered May 17 00:33:25.065168 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 17 00:33:25.065180 kernel: io scheduler mq-deadline registered May 17 00:33:25.065192 kernel: io scheduler kyber registered May 17 00:33:25.065207 kernel: io scheduler bfq registered May 17 00:33:25.065220 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 17 00:33:25.065233 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 17 00:33:25.065246 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 17 00:33:25.065260 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A May 17 00:33:25.065273 kernel: i8042: PNP: No PS/2 controller found. 
May 17 00:33:25.065492 kernel: rtc_cmos 00:02: registered as rtc0 May 17 00:33:25.065657 kernel: rtc_cmos 00:02: setting system clock to 2025-05-17T00:33:24 UTC (1747442004) May 17 00:33:25.065776 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram May 17 00:33:25.065792 kernel: intel_pstate: CPU model not supported May 17 00:33:25.065804 kernel: efifb: probing for efifb May 17 00:33:25.065817 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k May 17 00:33:25.065829 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 May 17 00:33:25.065841 kernel: efifb: scrolling: redraw May 17 00:33:25.065853 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 17 00:33:25.065866 kernel: Console: switching to colour frame buffer device 128x48 May 17 00:33:25.065881 kernel: fb0: EFI VGA frame buffer device May 17 00:33:25.065893 kernel: pstore: Registered efi as persistent store backend May 17 00:33:25.065906 kernel: NET: Registered PF_INET6 protocol family May 17 00:33:25.065918 kernel: Segment Routing with IPv6 May 17 00:33:25.065930 kernel: In-situ OAM (IOAM) with IPv6 May 17 00:33:25.065943 kernel: NET: Registered PF_PACKET protocol family May 17 00:33:25.065955 kernel: Key type dns_resolver registered May 17 00:33:25.065968 kernel: IPI shorthand broadcast: enabled May 17 00:33:25.065980 kernel: sched_clock: Marking stable (778002700, 20334000)->(985424300, -187087600) May 17 00:33:25.065993 kernel: registered taskstats version 1 May 17 00:33:25.066008 kernel: Loading compiled-in X.509 certificates May 17 00:33:25.066021 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.182-flatcar: 01ca23caa8e5879327538f9287e5164b3e97ac0c' May 17 00:33:25.066033 kernel: Key type .fscrypt registered May 17 00:33:25.066046 kernel: Key type fscrypt-provisioning registered May 17 00:33:25.066058 kernel: pstore: Using crash dump compression: deflate May 17 00:33:25.066070 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 17 00:33:25.066083 kernel: ima: Allocated hash algorithm: sha1 May 17 00:33:25.066095 kernel: ima: No architecture policies found May 17 00:33:25.066110 kernel: clk: Disabling unused clocks May 17 00:33:25.066122 kernel: Freeing unused kernel image (initmem) memory: 47472K May 17 00:33:25.066135 kernel: Write protecting the kernel read-only data: 28672k May 17 00:33:25.066148 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K May 17 00:33:25.066161 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K May 17 00:33:25.066174 kernel: Run /init as init process May 17 00:33:25.066186 kernel: with arguments: May 17 00:33:25.066200 kernel: /init May 17 00:33:25.066212 kernel: with environment: May 17 00:33:25.066227 kernel: HOME=/ May 17 00:33:25.066239 kernel: TERM=linux May 17 00:33:25.066252 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 17 00:33:25.066268 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 17 00:33:25.066284 systemd[1]: Detected virtualization microsoft. May 17 00:33:25.066298 systemd[1]: Detected architecture x86-64. May 17 00:33:25.066311 systemd[1]: Running in initrd. May 17 00:33:25.066324 systemd[1]: No hostname configured, using default hostname. May 17 00:33:25.066340 systemd[1]: Hostname set to . May 17 00:33:25.066355 systemd[1]: Initializing machine ID from random generator. May 17 00:33:25.066368 systemd[1]: Queued start job for default target initrd.target. May 17 00:33:25.066381 systemd[1]: Started systemd-ask-password-console.path. May 17 00:33:25.066394 systemd[1]: Reached target cryptsetup.target. May 17 00:33:25.066408 systemd[1]: Reached target paths.target. 
May 17 00:33:25.066421 systemd[1]: Reached target slices.target. May 17 00:33:25.066435 systemd[1]: Reached target swap.target. May 17 00:33:25.066451 systemd[1]: Reached target timers.target. May 17 00:33:25.066466 systemd[1]: Listening on iscsid.socket. May 17 00:33:25.066480 systemd[1]: Listening on iscsiuio.socket. May 17 00:33:25.066494 systemd[1]: Listening on systemd-journald-audit.socket. May 17 00:33:25.066507 systemd[1]: Listening on systemd-journald-dev-log.socket. May 17 00:33:25.066521 systemd[1]: Listening on systemd-journald.socket. May 17 00:33:25.066535 systemd[1]: Listening on systemd-networkd.socket. May 17 00:33:25.066549 systemd[1]: Listening on systemd-udevd-control.socket. May 17 00:33:25.066565 systemd[1]: Listening on systemd-udevd-kernel.socket. May 17 00:33:25.066594 systemd[1]: Reached target sockets.target. May 17 00:33:25.066608 systemd[1]: Starting kmod-static-nodes.service... May 17 00:33:25.066622 systemd[1]: Finished network-cleanup.service. May 17 00:33:25.066637 systemd[1]: Starting systemd-fsck-usr.service... May 17 00:33:25.066650 systemd[1]: Starting systemd-journald.service... May 17 00:33:25.066664 systemd[1]: Starting systemd-modules-load.service... May 17 00:33:25.066678 systemd[1]: Starting systemd-resolved.service... May 17 00:33:25.066693 systemd[1]: Starting systemd-vconsole-setup.service... May 17 00:33:25.066716 systemd-journald[183]: Journal started May 17 00:33:25.066786 systemd-journald[183]: Runtime Journal (/run/log/journal/59fbf5dedcd04d7896f16c10f3a0e9c5) is 8.0M, max 159.0M, 151.0M free. May 17 00:33:25.059161 systemd-modules-load[184]: Inserted module 'overlay' May 17 00:33:25.080155 systemd[1]: Started systemd-journald.service. May 17 00:33:25.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:33:25.093115 kernel: audit: type=1130 audit(1747442005.080:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:25.081131 systemd[1]: Finished kmod-static-nodes.service. May 17 00:33:25.093378 systemd[1]: Finished systemd-fsck-usr.service. May 17 00:33:25.105148 systemd[1]: Finished systemd-vconsole-setup.service. May 17 00:33:25.128538 kernel: audit: type=1130 audit(1747442005.092:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:25.128589 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 17 00:33:25.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:25.130198 systemd[1]: Starting dracut-cmdline-ask.service... May 17 00:33:25.137411 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 17 00:33:25.151357 systemd-resolved[185]: Positive Trust Anchors: May 17 00:33:25.155466 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:33:25.161567 kernel: Bridge firewalling registered May 17 00:33:25.155690 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 17 00:33:25.158439 systemd-modules-load[184]: Inserted module 'br_netfilter' May 17 00:33:25.163856 systemd[1]: Finished dracut-cmdline-ask.service. May 17 00:33:25.168837 systemd[1]: Starting dracut-cmdline.service... 
May 17 00:33:25.174524 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 17 00:33:25.209599 kernel: SCSI subsystem initialized May 17 00:33:25.209629 kernel: audit: type=1130 audit(1747442005.104:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:25.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:25.209703 dracut-cmdline[200]: dracut-dracut-053 May 17 00:33:25.209703 dracut-cmdline[200]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0 May 17 00:33:25.191834 systemd-resolved[185]: Defaulting to hostname 'linux'. May 17 00:33:25.192745 systemd[1]: Started systemd-resolved.service. May 17 00:33:25.207416 systemd[1]: Reached target nss-lookup.target. May 17 00:33:25.249905 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
May 17 00:33:25.249935 kernel: audit: type=1130 audit(1747442005.120:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:25.249949 kernel: device-mapper: uevent: version 1.0.3 May 17 00:33:25.249959 kernel: audit: type=1130 audit(1747442005.158:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:25.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:25.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:25.265017 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 17 00:33:25.265076 kernel: audit: type=1130 audit(1747442005.167:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:25.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:25.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:33:25.286608 kernel: audit: type=1130 audit(1747442005.206:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:25.286943 systemd-modules-load[184]: Inserted module 'dm_multipath' May 17 00:33:25.290040 systemd[1]: Finished systemd-modules-load.service. May 17 00:33:25.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:25.295832 systemd[1]: Starting systemd-sysctl.service... May 17 00:33:25.312716 kernel: audit: type=1130 audit(1747442005.294:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:25.316767 systemd[1]: Finished systemd-sysctl.service. May 17 00:33:25.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:25.332695 kernel: audit: type=1130 audit(1747442005.320:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:25.348604 kernel: Loading iSCSI transport class v2.0-870. May 17 00:33:25.368604 kernel: iscsi: registered transport (tcp) May 17 00:33:25.395355 kernel: iscsi: registered transport (qla4xxx) May 17 00:33:25.395446 kernel: QLogic iSCSI HBA Driver May 17 00:33:25.426783 systemd[1]: Finished dracut-cmdline.service. May 17 00:33:25.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' May 17 00:33:25.431728 systemd[1]: Starting dracut-pre-udev.service... May 17 00:33:25.483605 kernel: raid6: avx512x4 gen() 18372 MB/s May 17 00:33:25.503590 kernel: raid6: avx512x4 xor() 7944 MB/s May 17 00:33:25.523592 kernel: raid6: avx512x2 gen() 18418 MB/s May 17 00:33:25.543594 kernel: raid6: avx512x2 xor() 29504 MB/s May 17 00:33:25.563588 kernel: raid6: avx512x1 gen() 18437 MB/s May 17 00:33:25.583592 kernel: raid6: avx512x1 xor() 26721 MB/s May 17 00:33:25.604596 kernel: raid6: avx2x4 gen() 18436 MB/s May 17 00:33:25.624589 kernel: raid6: avx2x4 xor() 7609 MB/s May 17 00:33:25.644589 kernel: raid6: avx2x2 gen() 18375 MB/s May 17 00:33:25.664592 kernel: raid6: avx2x2 xor() 22144 MB/s May 17 00:33:25.684588 kernel: raid6: avx2x1 gen() 13883 MB/s May 17 00:33:25.704587 kernel: raid6: avx2x1 xor() 19295 MB/s May 17 00:33:25.724591 kernel: raid6: sse2x4 gen() 11633 MB/s May 17 00:33:25.743592 kernel: raid6: sse2x4 xor() 7316 MB/s May 17 00:33:25.763598 kernel: raid6: sse2x2 gen() 12889 MB/s May 17 00:33:25.783590 kernel: raid6: sse2x2 xor() 7694 MB/s May 17 00:33:25.802588 kernel: raid6: sse2x1 gen() 11663 MB/s May 17 00:33:25.824638 kernel: raid6: sse2x1 xor() 5886 MB/s May 17 00:33:25.824658 kernel: raid6: using algorithm avx512x1 gen() 18437 MB/s May 17 00:33:25.824668 kernel: raid6: .... xor() 26721 MB/s, rmw enabled May 17 00:33:25.831034 kernel: raid6: using avx512x2 recovery algorithm May 17 00:33:25.846606 kernel: xor: automatically using best checksumming function avx May 17 00:33:25.942603 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no May 17 00:33:25.950626 systemd[1]: Finished dracut-pre-udev.service. May 17 00:33:25.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:33:25.953000 audit: BPF prog-id=7 op=LOAD May 17 00:33:25.954000 audit: BPF prog-id=8 op=LOAD May 17 00:33:25.955222 systemd[1]: Starting systemd-udevd.service... May 17 00:33:25.970996 systemd-udevd[384]: Using default interface naming scheme 'v252'. May 17 00:33:25.979330 systemd[1]: Started systemd-udevd.service. May 17 00:33:25.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:25.987820 systemd[1]: Starting dracut-pre-trigger.service... May 17 00:33:26.003406 dracut-pre-trigger[396]: rd.md=0: removing MD RAID activation May 17 00:33:26.033230 systemd[1]: Finished dracut-pre-trigger.service. May 17 00:33:26.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:26.038297 systemd[1]: Starting systemd-udev-trigger.service... May 17 00:33:26.075348 systemd[1]: Finished systemd-udev-trigger.service. May 17 00:33:26.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:26.128599 kernel: cryptd: max_cpu_qlen set to 1000 May 17 00:33:26.136601 kernel: hv_vmbus: Vmbus version:5.2 May 17 00:33:26.160325 kernel: hv_vmbus: registering driver hyperv_keyboard May 17 00:33:26.169596 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 May 17 00:33:26.169645 kernel: hid: raw HID events driver (C) Jiri Kosina May 17 00:33:26.180601 kernel: hv_vmbus: registering driver hid_hyperv May 17 00:33:26.186627 kernel: AVX2 version of gcm_enc/dec engaged. 
May 17 00:33:26.187595 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 May 17 00:33:26.187639 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on May 17 00:33:26.201603 kernel: hv_vmbus: registering driver hv_storvsc May 17 00:33:26.210702 kernel: AES CTR mode by8 optimization enabled May 17 00:33:26.222281 kernel: scsi host0: storvsc_host_t May 17 00:33:26.222516 kernel: scsi host1: storvsc_host_t May 17 00:33:26.222545 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 May 17 00:33:26.237387 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 May 17 00:33:26.237485 kernel: hv_vmbus: registering driver hv_netvsc May 17 00:33:26.268090 kernel: sr 0:0:0:2: [sr0] scsi-1 drive May 17 00:33:26.280416 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 17 00:33:26.280440 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) May 17 00:33:26.295644 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks May 17 00:33:26.295774 kernel: sd 0:0:0:0: [sda] Write Protect is off May 17 00:33:26.295874 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 May 17 00:33:26.295976 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 May 17 00:33:26.296076 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA May 17 00:33:26.296173 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:33:26.296184 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 17 00:33:26.390318 kernel: hv_netvsc 6045bdfc-3616-6045-bdfc-36166045bdfc eth0: VF slot 1 added May 17 00:33:26.399596 kernel: hv_vmbus: registering driver hv_pci May 17 00:33:26.408328 kernel: hv_pci 0ab01128-8c78-471c-849f-524eeb62b6b5: PCI VMBus probing: Using version 0x10004 May 17 00:33:26.495105 kernel: hv_pci 0ab01128-8c78-471c-849f-524eeb62b6b5: PCI host bridge to bus 8c78:00 May 17 00:33:26.495295 kernel: pci_bus 8c78:00: root bus resource [mem 
0xfe0000000-0xfe00fffff window] May 17 00:33:26.495466 kernel: pci_bus 8c78:00: No busn resource found for root bus, will use [bus 00-ff] May 17 00:33:26.495664 kernel: pci 8c78:00:02.0: [15b3:1016] type 00 class 0x020000 May 17 00:33:26.495841 kernel: pci 8c78:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] May 17 00:33:26.496001 kernel: pci 8c78:00:02.0: enabling Extended Tags May 17 00:33:26.496167 kernel: pci 8c78:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 8c78:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) May 17 00:33:26.496330 kernel: pci_bus 8c78:00: busn_res: [bus 00-ff] end is updated to 00 May 17 00:33:26.496472 kernel: pci 8c78:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] May 17 00:33:26.588600 kernel: mlx5_core 8c78:00:02.0: firmware version: 14.30.5000 May 17 00:33:26.849287 kernel: mlx5_core 8c78:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) May 17 00:33:26.849495 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (435) May 17 00:33:26.849515 kernel: mlx5_core 8c78:00:02.0: Supported tc offload range - chains: 1, prios: 1 May 17 00:33:26.849705 kernel: mlx5_core 8c78:00:02.0: mlx5e_tc_post_act_init:40:(pid 203): firmware level support is missing May 17 00:33:26.849873 kernel: hv_netvsc 6045bdfc-3616-6045-bdfc-36166045bdfc eth0: VF registering: eth1 May 17 00:33:26.850030 kernel: mlx5_core 8c78:00:02.0 eth1: joined to eth0 May 17 00:33:26.811083 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 17 00:33:26.818432 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 17 00:33:26.860756 kernel: mlx5_core 8c78:00:02.0 enP35960s1: renamed from eth1 May 17 00:33:26.930836 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 17 00:33:26.957819 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. 
May 17 00:33:26.960696 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 17 00:33:26.967754 systemd[1]: Starting disk-uuid.service... May 17 00:33:26.985602 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:33:26.994605 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:33:27.997608 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:33:27.998213 disk-uuid[555]: The operation has completed successfully. May 17 00:33:28.068034 systemd[1]: disk-uuid.service: Deactivated successfully. May 17 00:33:28.068142 systemd[1]: Finished disk-uuid.service. May 17 00:33:28.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:28.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:28.083007 systemd[1]: Starting verity-setup.service... May 17 00:33:28.118606 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" May 17 00:33:28.380106 systemd[1]: Found device dev-mapper-usr.device. May 17 00:33:28.386116 systemd[1]: Mounting sysusr-usr.mount... May 17 00:33:28.389993 systemd[1]: Finished verity-setup.service. May 17 00:33:28.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:28.465491 systemd[1]: Mounted sysusr-usr.mount. May 17 00:33:28.470474 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 17 00:33:28.467412 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 17 00:33:28.468250 systemd[1]: Starting ignition-setup.service... 
May 17 00:33:28.478059 systemd[1]: Starting parse-ip-for-networkd.service... May 17 00:33:28.496299 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:33:28.496332 kernel: BTRFS info (device sda6): using free space tree May 17 00:33:28.496350 kernel: BTRFS info (device sda6): has skinny extents May 17 00:33:28.551484 systemd[1]: Finished parse-ip-for-networkd.service. May 17 00:33:28.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:28.555000 audit: BPF prog-id=9 op=LOAD May 17 00:33:28.556935 systemd[1]: Starting systemd-networkd.service... May 17 00:33:28.584787 systemd-networkd[793]: lo: Link UP May 17 00:33:28.584799 systemd-networkd[793]: lo: Gained carrier May 17 00:33:28.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:28.585365 systemd-networkd[793]: Enumeration completed May 17 00:33:28.585450 systemd[1]: Started systemd-networkd.service. May 17 00:33:28.588412 systemd-networkd[793]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:33:28.588776 systemd[1]: Reached target network.target. May 17 00:33:28.594266 systemd[1]: Starting iscsiuio.service... May 17 00:33:28.610185 systemd[1]: Started iscsiuio.service. May 17 00:33:28.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:28.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:33:28.613426 systemd[1]: Starting iscsid.service... May 17 00:33:28.626975 iscsid[802]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 17 00:33:28.626975 iscsid[802]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log May 17 00:33:28.626975 iscsid[802]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. May 17 00:33:28.626975 iscsid[802]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 17 00:33:28.626975 iscsid[802]: If using hardware iscsi like qla4xxx this message can be ignored. May 17 00:33:28.626975 iscsid[802]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 17 00:33:28.626975 iscsid[802]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 17 00:33:28.660388 kernel: mlx5_core 8c78:00:02.0 enP35960s1: Link up May 17 00:33:28.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:28.619750 systemd[1]: Started iscsid.service. May 17 00:33:28.624454 systemd[1]: Starting dracut-initqueue.service... May 17 00:33:28.631274 systemd[1]: mnt-oem.mount: Deactivated successfully. May 17 00:33:28.639595 systemd[1]: Finished dracut-initqueue.service. May 17 00:33:28.644934 systemd[1]: Reached target remote-fs-pre.target. May 17 00:33:28.654774 systemd[1]: Reached target remote-cryptsetup.target. May 17 00:33:28.659307 systemd[1]: Reached target remote-fs.target. May 17 00:33:28.681387 systemd[1]: Starting dracut-pre-mount.service...
May 17 00:33:28.691642 kernel: hv_netvsc 6045bdfc-3616-6045-bdfc-36166045bdfc eth0: Data path switched to VF: enP35960s1 May 17 00:33:28.692804 systemd[1]: Finished dracut-pre-mount.service. May 17 00:33:28.700001 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 00:33:28.697010 systemd-networkd[793]: enP35960s1: Link UP May 17 00:33:28.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:28.697223 systemd-networkd[793]: eth0: Link UP May 17 00:33:28.697714 systemd-networkd[793]: eth0: Gained carrier May 17 00:33:28.705803 systemd-networkd[793]: enP35960s1: Gained carrier May 17 00:33:28.728204 systemd[1]: Finished ignition-setup.service. May 17 00:33:28.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:28.733213 systemd[1]: Starting ignition-fetch-offline.service... May 17 00:33:28.743655 systemd-networkd[793]: eth0: DHCPv4 address 10.200.4.4/24, gateway 10.200.4.1 acquired from 168.63.129.16 May 17 00:33:30.763840 systemd-networkd[793]: eth0: Gained IPv6LL May 17 00:33:32.124535 ignition[821]: Ignition 2.14.0 May 17 00:33:32.124552 ignition[821]: Stage: fetch-offline May 17 00:33:32.124704 ignition[821]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:33:32.124757 ignition[821]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:33:32.243200 ignition[821]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:33:32.243385 ignition[821]: parsed url from cmdline: "" May 17 00:33:32.244814 systemd[1]: Finished ignition-fetch-offline.service. 
May 17 00:33:32.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:32.243389 ignition[821]: no config URL provided May 17 00:33:32.268313 kernel: kauditd_printk_skb: 18 callbacks suppressed May 17 00:33:32.268346 kernel: audit: type=1130 audit(1747442012.248:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:32.249873 systemd[1]: Starting ignition-fetch.service... May 17 00:33:32.243395 ignition[821]: reading system config file "/usr/lib/ignition/user.ign" May 17 00:33:32.243404 ignition[821]: no config at "/usr/lib/ignition/user.ign" May 17 00:33:32.243410 ignition[821]: failed to fetch config: resource requires networking May 17 00:33:32.243750 ignition[821]: Ignition finished successfully May 17 00:33:32.285264 ignition[827]: Ignition 2.14.0 May 17 00:33:32.285281 ignition[827]: Stage: fetch May 17 00:33:32.285418 ignition[827]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:33:32.285443 ignition[827]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:33:32.309051 ignition[827]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:33:32.309250 ignition[827]: parsed url from cmdline: "" May 17 00:33:32.309254 ignition[827]: no config URL provided May 17 00:33:32.309260 ignition[827]: reading system config file "/usr/lib/ignition/user.ign" May 17 00:33:32.309269 ignition[827]: no config at "/usr/lib/ignition/user.ign" May 17 00:33:32.309308 ignition[827]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 May 17 00:33:32.386212 ignition[827]: 
GET result: OK May 17 00:33:32.386380 ignition[827]: config has been read from IMDS userdata May 17 00:33:32.386427 ignition[827]: parsing config with SHA512: adc13ae72d8001498e5ac72a385f5fb41cf999ed77b6cb5111f1db37721846d1624a17ea45844fa34ed29a9013577319ae88497a05f054ffc3f58303f9494e01 May 17 00:33:32.390981 unknown[827]: fetched base config from "system" May 17 00:33:32.391208 unknown[827]: fetched base config from "system" May 17 00:33:32.392004 ignition[827]: fetch: fetch complete May 17 00:33:32.391218 unknown[827]: fetched user config from "azure" May 17 00:33:32.392012 ignition[827]: fetch: fetch passed May 17 00:33:32.392064 ignition[827]: Ignition finished successfully May 17 00:33:32.402313 systemd[1]: Finished ignition-fetch.service. May 17 00:33:32.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:32.415247 systemd[1]: Starting ignition-kargs.service... May 17 00:33:32.417598 kernel: audit: type=1130 audit(1747442012.403:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:32.428864 ignition[833]: Ignition 2.14.0 May 17 00:33:32.428874 ignition[833]: Stage: kargs May 17 00:33:32.429013 ignition[833]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:33:32.429050 ignition[833]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:33:32.437831 ignition[833]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:33:32.442822 ignition[833]: kargs: kargs passed May 17 00:33:32.442887 ignition[833]: Ignition finished successfully May 17 00:33:32.447850 systemd[1]: Finished ignition-kargs.service. 
May 17 00:33:32.451772 systemd[1]: Starting ignition-disks.service... May 17 00:33:32.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:32.461548 ignition[839]: Ignition 2.14.0 May 17 00:33:32.469759 kernel: audit: type=1130 audit(1747442012.450:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:32.461826 ignition[839]: Stage: disks May 17 00:33:32.461961 ignition[839]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:33:32.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:32.470700 systemd[1]: Finished ignition-disks.service. May 17 00:33:32.488424 kernel: audit: type=1130 audit(1747442012.471:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:32.461994 ignition[839]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:33:32.473342 systemd[1]: Reached target initrd-root-device.target. May 17 00:33:32.466658 ignition[839]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:33:32.488446 systemd[1]: Reached target local-fs-pre.target. May 17 00:33:32.469760 ignition[839]: disks: disks passed May 17 00:33:32.492442 systemd[1]: Reached target local-fs.target. May 17 00:33:32.469845 ignition[839]: Ignition finished successfully May 17 00:33:32.502816 systemd[1]: Reached target sysinit.target. 
May 17 00:33:32.506377 systemd[1]: Reached target basic.target. May 17 00:33:32.511064 systemd[1]: Starting systemd-fsck-root.service... May 17 00:33:32.579429 systemd-fsck[847]: ROOT: clean, 619/7326000 files, 481079/7359488 blocks May 17 00:33:32.589987 systemd[1]: Finished systemd-fsck-root.service. May 17 00:33:32.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:32.594967 systemd[1]: Mounting sysroot.mount... May 17 00:33:32.609556 kernel: audit: type=1130 audit(1747442012.593:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:32.621623 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 17 00:33:32.621869 systemd[1]: Mounted sysroot.mount. May 17 00:33:32.627514 systemd[1]: Reached target initrd-root-fs.target. May 17 00:33:32.660957 systemd[1]: Mounting sysroot-usr.mount... May 17 00:33:32.667363 systemd[1]: Starting flatcar-metadata-hostname.service... May 17 00:33:32.672216 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 17 00:33:32.672258 systemd[1]: Reached target ignition-diskful.target. May 17 00:33:32.679167 systemd[1]: Mounted sysroot-usr.mount. May 17 00:33:32.730045 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 17 00:33:32.735049 systemd[1]: Starting initrd-setup-root.service... 
May 17 00:33:32.750605 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (858) May 17 00:33:32.750652 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:33:32.754691 initrd-setup-root[863]: cut: /sysroot/etc/passwd: No such file or directory May 17 00:33:32.763935 kernel: BTRFS info (device sda6): using free space tree May 17 00:33:32.763958 kernel: BTRFS info (device sda6): has skinny extents May 17 00:33:32.766912 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 17 00:33:32.773908 initrd-setup-root[889]: cut: /sysroot/etc/group: No such file or directory May 17 00:33:32.793478 initrd-setup-root[897]: cut: /sysroot/etc/shadow: No such file or directory May 17 00:33:32.798511 initrd-setup-root[905]: cut: /sysroot/etc/gshadow: No such file or directory May 17 00:33:33.299239 systemd[1]: Finished initrd-setup-root.service. May 17 00:33:33.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:33.302755 systemd[1]: Starting ignition-mount.service... May 17 00:33:33.322532 kernel: audit: type=1130 audit(1747442013.301:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:33.319026 systemd[1]: Starting sysroot-boot.service... May 17 00:33:33.327500 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. May 17 00:33:33.327702 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. May 17 00:33:33.351409 ignition[926]: INFO : Ignition 2.14.0 May 17 00:33:33.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:33:33.351600 systemd[1]: Finished sysroot-boot.service. May 17 00:33:33.368985 kernel: audit: type=1130 audit(1747442013.353:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:33.369019 ignition[926]: INFO : Stage: mount May 17 00:33:33.369019 ignition[926]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:33:33.369019 ignition[926]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:33:33.369019 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:33:33.381293 ignition[926]: INFO : mount: mount passed May 17 00:33:33.381293 ignition[926]: INFO : Ignition finished successfully May 17 00:33:33.384833 systemd[1]: Finished ignition-mount.service. May 17 00:33:33.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:33.397594 kernel: audit: type=1130 audit(1747442013.386:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:33:33.930615 coreos-metadata[857]: May 17 00:33:33.930 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 May 17 00:33:33.946905 coreos-metadata[857]: May 17 00:33:33.946 INFO Fetch successful May 17 00:33:33.981688 coreos-metadata[857]: May 17 00:33:33.981 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 May 17 00:33:33.991246 coreos-metadata[857]: May 17 00:33:33.991 INFO Fetch successful May 17 00:33:34.006064 coreos-metadata[857]: May 17 00:33:34.005 INFO wrote hostname ci-3510.3.7-n-21508f608f to /sysroot/etc/hostname May 17 00:33:34.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:34.007883 systemd[1]: Finished flatcar-metadata-hostname.service. May 17 00:33:34.026244 kernel: audit: type=1130 audit(1747442014.010:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:34.012550 systemd[1]: Starting ignition-files.service... May 17 00:33:34.029693 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 17 00:33:34.042598 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (937) May 17 00:33:34.042635 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:33:34.049826 kernel: BTRFS info (device sda6): using free space tree May 17 00:33:34.049849 kernel: BTRFS info (device sda6): has skinny extents May 17 00:33:34.057439 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
May 17 00:33:34.071846 ignition[956]: INFO : Ignition 2.14.0 May 17 00:33:34.071846 ignition[956]: INFO : Stage: files May 17 00:33:34.075273 ignition[956]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:33:34.075273 ignition[956]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:33:34.087601 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:33:34.103621 ignition[956]: DEBUG : files: compiled without relabeling support, skipping May 17 00:33:34.106702 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 17 00:33:34.106702 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 17 00:33:34.170556 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 17 00:33:34.175334 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 17 00:33:34.184235 unknown[956]: wrote ssh authorized keys file for user: core May 17 00:33:34.186786 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 17 00:33:34.189936 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" May 17 00:33:34.193686 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" May 17 00:33:34.197645 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 17 00:33:34.202010 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 17 00:33:34.262716 ignition[956]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): GET result: OK May 17 00:33:34.372318 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 17 00:33:34.378796 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 17 00:33:34.382947 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 17 00:33:34.910005 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK May 17 00:33:34.959719 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 17 00:33:34.964115 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" May 17 00:33:34.964115 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" May 17 00:33:34.964115 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" May 17 00:33:34.964115 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" May 17 00:33:34.964115 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:33:34.964115 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:33:34.964115 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:33:34.964115 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: 
op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:33:34.964115 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:33:34.964115 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:33:34.964115 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:33:34.964115 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:33:34.964115 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/waagent.service" May 17 00:33:34.964115 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(c): oem config not found in "/usr/share/oem", looking on oem partition May 17 00:33:35.024370 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1940932748" May 17 00:33:35.024370 ignition[956]: CRITICAL : files: createFilesystemsFiles: createFiles: op(c): op(d): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1940932748": device or resource busy May 17 00:33:35.024370 ignition[956]: ERROR : files: createFilesystemsFiles: createFiles: op(c): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1940932748", trying btrfs: device or resource busy May 17 00:33:35.024370 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(e): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1940932748" May 17 00:33:35.024370 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(c): 
op(e): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1940932748" May 17 00:33:35.024370 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(f): [started] unmounting "/mnt/oem1940932748" May 17 00:33:35.024370 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(f): [finished] unmounting "/mnt/oem1940932748" May 17 00:33:35.024370 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" May 17 00:33:35.024370 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" May 17 00:33:35.024370 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(10): oem config not found in "/usr/share/oem", looking on oem partition May 17 00:33:35.024370 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1228389870" May 17 00:33:35.024370 ignition[956]: CRITICAL : files: createFilesystemsFiles: createFiles: op(10): op(11): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1228389870": device or resource busy May 17 00:33:35.024370 ignition[956]: ERROR : files: createFilesystemsFiles: createFiles: op(10): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1228389870", trying btrfs: device or resource busy May 17 00:33:35.024370 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1228389870" May 17 00:33:34.974004 systemd[1]: mnt-oem1940932748.mount: Deactivated successfully. 
May 17 00:33:35.091528 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1228389870" May 17 00:33:35.091528 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [started] unmounting "/mnt/oem1228389870" May 17 00:33:35.091528 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [finished] unmounting "/mnt/oem1228389870" May 17 00:33:35.091528 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" May 17 00:33:35.091528 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(14): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:33:35.091528 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(14): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 May 17 00:33:34.996673 systemd[1]: mnt-oem1228389870.mount: Deactivated successfully. 
May 17 00:33:35.747249 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(14): GET result: OK May 17 00:33:35.928664 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(14): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:33:35.928664 ignition[956]: INFO : files: op(15): [started] processing unit "waagent.service" May 17 00:33:35.928664 ignition[956]: INFO : files: op(15): [finished] processing unit "waagent.service" May 17 00:33:35.928664 ignition[956]: INFO : files: op(16): [started] processing unit "nvidia.service" May 17 00:33:35.928664 ignition[956]: INFO : files: op(16): [finished] processing unit "nvidia.service" May 17 00:33:35.948608 ignition[956]: INFO : files: op(17): [started] processing unit "containerd.service" May 17 00:33:35.948608 ignition[956]: INFO : files: op(17): op(18): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 17 00:33:35.956670 ignition[956]: INFO : files: op(17): op(18): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 17 00:33:35.956670 ignition[956]: INFO : files: op(17): [finished] processing unit "containerd.service" May 17 00:33:35.956670 ignition[956]: INFO : files: op(19): [started] processing unit "prepare-helm.service" May 17 00:33:35.956670 ignition[956]: INFO : files: op(19): op(1a): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:33:35.973051 ignition[956]: INFO : files: op(19): op(1a): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:33:35.973051 ignition[956]: INFO : files: op(19): [finished] processing unit "prepare-helm.service" May 17 00:33:35.973051 ignition[956]: INFO : files: op(1b): [started] setting preset to enabled for 
"waagent.service" May 17 00:33:35.985369 ignition[956]: INFO : files: op(1b): [finished] setting preset to enabled for "waagent.service" May 17 00:33:35.985369 ignition[956]: INFO : files: op(1c): [started] setting preset to enabled for "nvidia.service" May 17 00:33:35.985369 ignition[956]: INFO : files: op(1c): [finished] setting preset to enabled for "nvidia.service" May 17 00:33:35.985369 ignition[956]: INFO : files: op(1d): [started] setting preset to enabled for "prepare-helm.service" May 17 00:33:35.997319 ignition[956]: INFO : files: op(1d): [finished] setting preset to enabled for "prepare-helm.service" May 17 00:33:36.000783 ignition[956]: INFO : files: createResultFile: createFiles: op(1e): [started] writing file "/sysroot/etc/.ignition-result.json" May 17 00:33:36.004730 ignition[956]: INFO : files: createResultFile: createFiles: op(1e): [finished] writing file "/sysroot/etc/.ignition-result.json" May 17 00:33:36.008653 ignition[956]: INFO : files: files passed May 17 00:33:36.011104 ignition[956]: INFO : Ignition finished successfully May 17 00:33:36.013274 systemd[1]: Finished ignition-files.service. May 17 00:33:36.038239 kernel: audit: type=1130 audit(1747442016.018:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:36.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:36.036237 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 17 00:33:36.041441 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 17 00:33:36.042359 systemd[1]: Starting ignition-quench.service... 
May 17 00:33:36.055870 systemd[1]: ignition-quench.service: Deactivated successfully. May 17 00:33:36.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:36.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:36.056639 systemd[1]: Finished ignition-quench.service. May 17 00:33:36.066935 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:33:36.071839 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 17 00:33:36.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:36.074190 systemd[1]: Reached target ignition-complete.target. May 17 00:33:36.083329 systemd[1]: Starting initrd-parse-etc.service... May 17 00:33:36.104034 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 17 00:33:36.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:36.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:36.104168 systemd[1]: Finished initrd-parse-etc.service. May 17 00:33:36.111104 systemd[1]: Reached target initrd-fs.target. May 17 00:33:36.112917 systemd[1]: Reached target initrd.target. 
May 17 00:33:36.116512 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 17 00:33:36.117534 systemd[1]: Starting dracut-pre-pivot.service... May 17 00:33:36.131869 systemd[1]: Finished dracut-pre-pivot.service. May 17 00:33:36.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:36.135220 systemd[1]: Starting initrd-cleanup.service... May 17 00:33:36.146594 systemd[1]: Stopped target nss-lookup.target. May 17 00:33:36.150415 systemd[1]: Stopped target remote-cryptsetup.target. May 17 00:33:36.152414 systemd[1]: Stopped target timers.target. May 17 00:33:36.155970 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 17 00:33:36.159000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:36.156092 systemd[1]: Stopped dracut-pre-pivot.service. May 17 00:33:36.159634 systemd[1]: Stopped target initrd.target. May 17 00:33:36.163372 systemd[1]: Stopped target basic.target. May 17 00:33:36.165028 systemd[1]: Stopped target ignition-complete.target. May 17 00:33:36.168604 systemd[1]: Stopped target ignition-diskful.target. May 17 00:33:36.172344 systemd[1]: Stopped target initrd-root-device.target. May 17 00:33:36.175995 systemd[1]: Stopped target remote-fs.target. May 17 00:33:36.177869 systemd[1]: Stopped target remote-fs-pre.target. May 17 00:33:36.181732 systemd[1]: Stopped target sysinit.target. May 17 00:33:36.185298 systemd[1]: Stopped target local-fs.target. May 17 00:33:36.199000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:33:36.189054 systemd[1]: Stopped target local-fs-pre.target. May 17 00:33:36.192721 systemd[1]: Stopped target swap.target. May 17 00:33:36.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:36.196065 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 17 00:33:36.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:36.196252 systemd[1]: Stopped dracut-pre-mount.service. May 17 00:33:36.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:36.222000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:36.199870 systemd[1]: Stopped target cryptsetup.target. May 17 00:33:36.203531 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 17 00:33:36.230369 iscsid[802]: iscsid shutting down. May 17 00:33:36.203702 systemd[1]: Stopped dracut-initqueue.service. May 17 00:33:36.207269 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 17 00:33:36.207414 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 17 00:33:36.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:33:36.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:36.250950 ignition[994]: INFO : Ignition 2.14.0 May 17 00:33:36.250950 ignition[994]: INFO : Stage: umount May 17 00:33:36.250950 ignition[994]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:33:36.250950 ignition[994]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:33:36.212636 systemd[1]: ignition-files.service: Deactivated successfully. May 17 00:33:36.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:36.266414 ignition[994]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:33:36.266414 ignition[994]: INFO : umount: umount passed May 17 00:33:36.266414 ignition[994]: INFO : Ignition finished successfully May 17 00:33:36.266000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:36.214437 systemd[1]: Stopped ignition-files.service. May 17 00:33:36.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:36.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:33:36.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:36.218987 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 17 00:33:36.219132 systemd[1]: Stopped flatcar-metadata-hostname.service. May 17 00:33:36.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:36.223954 systemd[1]: Stopping ignition-mount.service... May 17 00:33:36.226573 systemd[1]: Stopping iscsid.service... May 17 00:33:36.238091 systemd[1]: Stopping sysroot-boot.service... May 17 00:33:36.242050 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 17 00:33:36.242235 systemd[1]: Stopped systemd-udev-trigger.service. May 17 00:33:36.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:36.244431 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 17 00:33:36.244558 systemd[1]: Stopped dracut-pre-trigger.service. May 17 00:33:36.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:36.248592 systemd[1]: iscsid.service: Deactivated successfully. May 17 00:33:36.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:33:36.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:36.248703 systemd[1]: Stopped iscsid.service. May 17 00:33:36.263198 systemd[1]: ignition-mount.service: Deactivated successfully. May 17 00:33:36.263281 systemd[1]: Stopped ignition-mount.service. May 17 00:33:36.266729 systemd[1]: ignition-disks.service: Deactivated successfully. May 17 00:33:36.266827 systemd[1]: Stopped ignition-disks.service. May 17 00:33:36.276495 systemd[1]: ignition-kargs.service: Deactivated successfully. May 17 00:33:36.276545 systemd[1]: Stopped ignition-kargs.service. May 17 00:33:36.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:36.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:36.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:36.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:36.353000 audit: BPF prog-id=6 op=UNLOAD May 17 00:33:36.278232 systemd[1]: ignition-fetch.service: Deactivated successfully. May 17 00:33:36.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:33:36.278276 systemd[1]: Stopped ignition-fetch.service. May 17 00:33:36.279995 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 17 00:33:36.280041 systemd[1]: Stopped ignition-fetch-offline.service. May 17 00:33:36.286705 systemd[1]: Stopped target paths.target. May 17 00:33:36.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:36.288267 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 17 00:33:36.292635 systemd[1]: Stopped systemd-ask-password-console.path. May 17 00:33:36.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:36.381000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:36.295664 systemd[1]: Stopped target slices.target. May 17 00:33:36.299573 systemd[1]: Stopped target sockets.target. May 17 00:33:36.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:36.301347 systemd[1]: iscsid.socket: Deactivated successfully. May 17 00:33:36.301395 systemd[1]: Closed iscsid.socket. May 17 00:33:36.304581 systemd[1]: ignition-setup.service: Deactivated successfully. May 17 00:33:36.304625 systemd[1]: Stopped ignition-setup.service. May 17 00:33:36.308564 systemd[1]: Stopping iscsiuio.service... May 17 00:33:36.311144 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 17 00:33:36.311651 systemd[1]: iscsiuio.service: Deactivated successfully. 
May 17 00:33:36.412382 kernel: hv_netvsc 6045bdfc-3616-6045-bdfc-36166045bdfc eth0: Data path switched from VF: enP35960s1 May 17 00:33:36.402000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:36.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:36.311749 systemd[1]: Stopped iscsiuio.service. May 17 00:33:36.313751 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 17 00:33:36.313831 systemd[1]: Finished initrd-cleanup.service. May 17 00:33:36.318038 systemd[1]: Stopped target network.target. May 17 00:33:36.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:36.321401 systemd[1]: iscsiuio.socket: Deactivated successfully. May 17 00:33:36.321445 systemd[1]: Closed iscsiuio.socket. May 17 00:33:36.325411 systemd[1]: Stopping systemd-networkd.service... May 17 00:33:36.329250 systemd[1]: Stopping systemd-resolved.service... May 17 00:33:36.335626 systemd-networkd[793]: eth0: DHCPv6 lease lost May 17 00:33:36.423000 audit: BPF prog-id=9 op=UNLOAD May 17 00:33:36.337193 systemd[1]: systemd-networkd.service: Deactivated successfully. May 17 00:33:36.337301 systemd[1]: Stopped systemd-networkd.service. May 17 00:33:36.340637 systemd[1]: systemd-resolved.service: Deactivated successfully. May 17 00:33:36.340736 systemd[1]: Stopped systemd-resolved.service. May 17 00:33:36.343287 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 17 00:33:36.343327 systemd[1]: Closed systemd-networkd.socket. 
May 17 00:33:36.347894 systemd[1]: Stopping network-cleanup.service... May 17 00:33:36.349361 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 17 00:33:36.349406 systemd[1]: Stopped parse-ip-for-networkd.service. May 17 00:33:36.351279 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:33:36.351331 systemd[1]: Stopped systemd-sysctl.service. May 17 00:33:36.353320 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 17 00:33:36.353376 systemd[1]: Stopped systemd-modules-load.service. May 17 00:33:36.357476 systemd[1]: Stopping systemd-udevd.service... May 17 00:33:36.362344 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 17 00:33:36.366447 systemd[1]: systemd-udevd.service: Deactivated successfully. May 17 00:33:36.366623 systemd[1]: Stopped systemd-udevd.service. May 17 00:33:36.369934 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 17 00:33:36.369973 systemd[1]: Closed systemd-udevd-control.socket. May 17 00:33:36.373975 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 17 00:33:36.374021 systemd[1]: Closed systemd-udevd-kernel.socket. May 17 00:33:36.376124 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 17 00:33:36.376170 systemd[1]: Stopped dracut-pre-udev.service. May 17 00:33:36.379848 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 17 00:33:36.379898 systemd[1]: Stopped dracut-cmdline.service. May 17 00:33:36.381643 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:33:36.381688 systemd[1]: Stopped dracut-cmdline-ask.service. May 17 00:33:36.388019 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 17 00:33:36.398163 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 17 00:33:36.398224 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. 
May 17 00:33:36.408422 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 17 00:33:36.408468 systemd[1]: Stopped kmod-static-nodes.service. May 17 00:33:36.412447 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:33:36.412499 systemd[1]: Stopped systemd-vconsole-setup.service. May 17 00:33:36.468222 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 17 00:33:36.483132 systemd[1]: network-cleanup.service: Deactivated successfully. May 17 00:33:36.493307 systemd[1]: Stopped network-cleanup.service. May 17 00:33:36.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:36.503380 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 17 00:33:36.505721 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 17 00:33:36.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:36.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:37.026405 systemd[1]: sysroot-boot.service: Deactivated successfully. May 17 00:33:37.026542 systemd[1]: Stopped sysroot-boot.service. May 17 00:33:37.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:37.032662 systemd[1]: Reached target initrd-switch-root.target. May 17 00:33:37.036858 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
May 17 00:33:37.036926 systemd[1]: Stopped initrd-setup-root.service. May 17 00:33:37.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:37.043757 systemd[1]: Starting initrd-switch-root.service... May 17 00:33:37.119895 systemd[1]: Switching root. May 17 00:33:37.120000 audit: BPF prog-id=8 op=UNLOAD May 17 00:33:37.120000 audit: BPF prog-id=7 op=UNLOAD May 17 00:33:37.125000 audit: BPF prog-id=5 op=UNLOAD May 17 00:33:37.125000 audit: BPF prog-id=4 op=UNLOAD May 17 00:33:37.125000 audit: BPF prog-id=3 op=UNLOAD May 17 00:33:37.143080 systemd-journald[183]: Journal stopped May 17 00:33:52.536747 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). May 17 00:33:52.536783 kernel: SELinux: Class mctp_socket not defined in policy. May 17 00:33:52.536797 kernel: SELinux: Class anon_inode not defined in policy. May 17 00:33:52.536808 kernel: SELinux: the above unknown classes and permissions will be allowed May 17 00:33:52.536816 kernel: SELinux: policy capability network_peer_controls=1 May 17 00:33:52.536826 kernel: SELinux: policy capability open_perms=1 May 17 00:33:52.536840 kernel: SELinux: policy capability extended_socket_class=1 May 17 00:33:52.536850 kernel: SELinux: policy capability always_check_network=0 May 17 00:33:52.536860 kernel: SELinux: policy capability cgroup_seclabel=1 May 17 00:33:52.536869 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 17 00:33:52.536880 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 17 00:33:52.536889 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 17 00:33:52.536900 kernel: kauditd_printk_skb: 49 callbacks suppressed May 17 00:33:52.536911 kernel: audit: type=1403 audit(1747442020.332:88): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 17 00:33:52.536924 systemd[1]: Successfully loaded SELinux policy in 
277.416ms. May 17 00:33:52.536937 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 32.388ms. May 17 00:33:52.536950 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 17 00:33:52.536961 systemd[1]: Detected virtualization microsoft. May 17 00:33:52.536975 systemd[1]: Detected architecture x86-64. May 17 00:33:52.536985 systemd[1]: Detected first boot. May 17 00:33:52.536997 systemd[1]: Hostname set to . May 17 00:33:52.537008 systemd[1]: Initializing machine ID from random generator. May 17 00:33:52.537018 kernel: audit: type=1400 audit(1747442021.149:89): avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 17 00:33:52.537031 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
May 17 00:33:52.537042 kernel: audit: type=1400 audit(1747442022.560:90): avc: denied { associate } for pid=1045 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 17 00:33:52.537055 kernel: audit: type=1300 audit(1747442022.560:90): arch=c000003e syscall=188 success=yes exit=0 a0=c00018a5c2 a1=c00018e7b0 a2=c00019c680 a3=32 items=0 ppid=1028 pid=1045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:33:52.537067 kernel: audit: type=1327 audit(1747442022.560:90): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 17 00:33:52.537079 kernel: audit: type=1400 audit(1747442022.566:91): avc: denied { associate } for pid=1045 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 May 17 00:33:52.537088 kernel: audit: type=1300 audit(1747442022.566:91): arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00018a699 a2=1ed a3=0 items=2 ppid=1028 pid=1045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:33:52.537100 kernel: audit: type=1307 audit(1747442022.566:91): cwd="/" May 17 00:33:52.537111 kernel: audit: type=1302 audit(1747442022.566:91): item=0 name=(null) inode=2 dev=00:29 mode=040755 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:33:52.537122 kernel: audit: type=1302 audit(1747442022.566:91): item=1 name=(null) inode=3 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:33:52.537135 systemd[1]: Populated /etc with preset unit settings. May 17 00:33:52.537147 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:33:52.537157 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:33:52.537170 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:33:52.537182 systemd[1]: Queued start job for default target multi-user.target. May 17 00:33:52.537192 systemd[1]: Unnecessary job was removed for dev-sda6.device. May 17 00:33:52.537208 systemd[1]: Created slice system-addon\x2dconfig.slice. May 17 00:33:52.537219 systemd[1]: Created slice system-addon\x2drun.slice. May 17 00:33:52.537234 systemd[1]: Created slice system-getty.slice. May 17 00:33:52.537246 systemd[1]: Created slice system-modprobe.slice. May 17 00:33:52.537256 systemd[1]: Created slice system-serial\x2dgetty.slice. May 17 00:33:52.537268 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 17 00:33:52.537281 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 17 00:33:52.537290 systemd[1]: Created slice user.slice. May 17 00:33:52.537304 systemd[1]: Started systemd-ask-password-console.path. May 17 00:33:52.537317 systemd[1]: Started systemd-ask-password-wall.path. 
May 17 00:33:52.537327 systemd[1]: Set up automount boot.automount. May 17 00:33:52.537339 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 17 00:33:52.537351 systemd[1]: Reached target integritysetup.target. May 17 00:33:52.537361 systemd[1]: Reached target remote-cryptsetup.target. May 17 00:33:52.537372 systemd[1]: Reached target remote-fs.target. May 17 00:33:52.537386 systemd[1]: Reached target slices.target. May 17 00:33:52.537396 systemd[1]: Reached target swap.target. May 17 00:33:52.537411 systemd[1]: Reached target torcx.target. May 17 00:33:52.537422 systemd[1]: Reached target veritysetup.target. May 17 00:33:52.537434 systemd[1]: Listening on systemd-coredump.socket. May 17 00:33:52.537445 systemd[1]: Listening on systemd-initctl.socket. May 17 00:33:52.537456 kernel: kauditd_printk_skb: 1 callbacks suppressed May 17 00:33:52.537465 kernel: audit: type=1400 audit(1747442032.186:92): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 17 00:33:52.537474 systemd[1]: Listening on systemd-journald-audit.socket. May 17 00:33:52.537486 kernel: audit: type=1335 audit(1747442032.186:93): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 May 17 00:33:52.537495 systemd[1]: Listening on systemd-journald-dev-log.socket. May 17 00:33:52.537505 systemd[1]: Listening on systemd-journald.socket. May 17 00:33:52.537514 systemd[1]: Listening on systemd-networkd.socket. May 17 00:33:52.537523 systemd[1]: Listening on systemd-udevd-control.socket. May 17 00:33:52.537533 systemd[1]: Listening on systemd-udevd-kernel.socket. May 17 00:33:52.537544 systemd[1]: Listening on systemd-userdbd.socket. May 17 00:33:52.537554 systemd[1]: Mounting dev-hugepages.mount... 
May 17 00:33:52.537563 systemd[1]: Mounting dev-mqueue.mount... May 17 00:33:52.537573 systemd[1]: Mounting media.mount... May 17 00:33:52.537593 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:33:52.537604 systemd[1]: Mounting sys-kernel-debug.mount... May 17 00:33:52.537617 systemd[1]: Mounting sys-kernel-tracing.mount... May 17 00:33:52.537632 systemd[1]: Mounting tmp.mount... May 17 00:33:52.537642 systemd[1]: Starting flatcar-tmpfiles.service... May 17 00:33:52.537655 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:33:52.537667 systemd[1]: Starting kmod-static-nodes.service... May 17 00:33:52.537677 systemd[1]: Starting modprobe@configfs.service... May 17 00:33:52.537689 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:33:52.537702 systemd[1]: Starting modprobe@drm.service... May 17 00:33:52.537712 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:33:52.537724 systemd[1]: Starting modprobe@fuse.service... May 17 00:33:52.537738 systemd[1]: Starting modprobe@loop.service... May 17 00:33:52.537751 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 17 00:33:52.537763 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. May 17 00:33:52.537773 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) May 17 00:33:52.537786 systemd[1]: Starting systemd-journald.service... May 17 00:33:52.537798 systemd[1]: Starting systemd-modules-load.service... May 17 00:33:52.537808 systemd[1]: Starting systemd-network-generator.service... May 17 00:33:52.537821 systemd[1]: Starting systemd-remount-fs.service... May 17 00:33:52.537835 systemd[1]: Starting systemd-udev-trigger.service... 
May 17 00:33:52.537845 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:33:52.537857 systemd[1]: Mounted dev-hugepages.mount. May 17 00:33:52.537870 kernel: loop: module loaded May 17 00:33:52.537880 systemd[1]: Mounted dev-mqueue.mount. May 17 00:33:52.537892 systemd[1]: Mounted media.mount. May 17 00:33:52.537904 systemd[1]: Mounted sys-kernel-debug.mount. May 17 00:33:52.537916 systemd[1]: Mounted sys-kernel-tracing.mount. May 17 00:33:52.537927 systemd[1]: Mounted tmp.mount. May 17 00:33:52.537941 systemd[1]: Finished kmod-static-nodes.service. May 17 00:33:52.537954 kernel: audit: type=1130 audit(1747442032.476:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:52.537966 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 17 00:33:52.537976 systemd[1]: Finished modprobe@configfs.service. May 17 00:33:52.537988 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:33:52.538001 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:33:52.538011 kernel: audit: type=1130 audit(1747442032.503:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:52.538022 kernel: fuse: init (API version 7.34) May 17 00:33:52.538036 systemd[1]: Finished flatcar-tmpfiles.service. May 17 00:33:52.538046 kernel: audit: type=1131 audit(1747442032.503:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:33:52.538062 systemd-journald[1144]: Journal started May 17 00:33:52.538114 systemd-journald[1144]: Runtime Journal (/run/log/journal/bc746718f0324877864441ef459af11f) is 8.0M, max 159.0M, 151.0M free. May 17 00:33:52.186000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 May 17 00:33:52.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:52.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:52.503000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:52.551223 systemd[1]: Started systemd-journald.service. May 17 00:33:52.553110 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:33:52.553678 systemd[1]: Finished modprobe@drm.service. May 17 00:33:52.556202 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:33:52.556480 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:33:52.559057 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 17 00:33:52.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:52.569721 systemd[1]: Finished modprobe@fuse.service. 
May 17 00:33:52.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:52.584038 kernel: audit: type=1130 audit(1747442032.524:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:52.584075 kernel: audit: type=1131 audit(1747442032.524:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:52.528000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 17 00:33:52.591922 kernel: audit: type=1305 audit(1747442032.528:99): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 17 00:33:52.609707 kernel: audit: type=1300 audit(1747442032.528:99): arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7fff3a7b8640 a2=4000 a3=7fff3a7b86dc items=0 ppid=1 pid=1144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:33:52.528000 audit[1144]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7fff3a7b8640 a2=4000 a3=7fff3a7b86dc items=0 ppid=1 pid=1144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:33:52.615156 kernel: audit: type=1327 audit(1747442032.528:99): proctitle="/usr/lib/systemd/systemd-journald" May 17 00:33:52.528000 audit: 
PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 17 00:33:52.611096 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:33:52.611267 systemd[1]: Finished modprobe@loop.service. May 17 00:33:52.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:52.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:52.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:52.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:52.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:52.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:52.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:33:52.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:52.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:52.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:52.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:52.617739 systemd[1]: Finished systemd-modules-load.service. May 17 00:33:52.620281 systemd[1]: Finished systemd-network-generator.service. May 17 00:33:52.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:52.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:52.623817 systemd[1]: Finished systemd-remount-fs.service. May 17 00:33:52.626438 systemd[1]: Finished systemd-udev-trigger.service. May 17 00:33:52.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:33:52.628990 systemd[1]: Reached target network-pre.target. May 17 00:33:52.631815 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 17 00:33:52.635153 systemd[1]: Mounting sys-kernel-config.mount... May 17 00:33:52.637201 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 17 00:33:52.652920 systemd[1]: Starting systemd-hwdb-update.service... May 17 00:33:52.656155 systemd[1]: Starting systemd-journal-flush.service... May 17 00:33:52.658177 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:33:52.659391 systemd[1]: Starting systemd-random-seed.service... May 17 00:33:52.661497 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:33:52.662819 systemd[1]: Starting systemd-sysctl.service... May 17 00:33:52.665946 systemd[1]: Starting systemd-sysusers.service... May 17 00:33:52.668980 systemd[1]: Starting systemd-udev-settle.service... May 17 00:33:52.674921 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 17 00:33:52.677163 systemd[1]: Mounted sys-kernel-config.mount. May 17 00:33:52.687858 udevadm[1198]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 17 00:33:52.701919 systemd-journald[1144]: Time spent on flushing to /var/log/journal/bc746718f0324877864441ef459af11f is 22.745ms for 1102 entries. May 17 00:33:52.701919 systemd-journald[1144]: System Journal (/var/log/journal/bc746718f0324877864441ef459af11f) is 8.0M, max 2.6G, 2.6G free. May 17 00:33:52.783166 systemd-journald[1144]: Received client request to flush runtime journal. May 17 00:33:52.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' May 17 00:33:52.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:52.712563 systemd[1]: Finished systemd-random-seed.service. May 17 00:33:52.714558 systemd[1]: Reached target first-boot-complete.target. May 17 00:33:52.728509 systemd[1]: Finished systemd-sysctl.service. May 17 00:33:52.784242 systemd[1]: Finished systemd-journal-flush.service. May 17 00:33:52.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:53.304487 systemd[1]: Finished systemd-sysusers.service. May 17 00:33:53.308602 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 17 00:33:53.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:53.761314 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 17 00:33:53.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:53.900761 systemd[1]: Finished systemd-hwdb-update.service. May 17 00:33:53.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:53.904636 systemd[1]: Starting systemd-udevd.service... May 17 00:33:53.923751 systemd-udevd[1209]: Using default interface naming scheme 'v252'. 
May 17 00:33:54.110388 systemd[1]: Started systemd-udevd.service. May 17 00:33:54.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:54.115213 systemd[1]: Starting systemd-networkd.service... May 17 00:33:54.154285 systemd[1]: Found device dev-ttyS0.device. May 17 00:33:54.186072 systemd[1]: Starting systemd-userdbd.service... May 17 00:33:54.218598 kernel: mousedev: PS/2 mouse device common for all mice May 17 00:33:54.241000 audit[1210]: AVC avc: denied { confidentiality } for pid=1210 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 17 00:33:54.250599 kernel: hv_vmbus: registering driver hv_balloon May 17 00:33:54.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:54.256370 systemd[1]: Started systemd-userdbd.service. May 17 00:33:54.331560 kernel: hv_utils: Registering HyperV Utility Driver May 17 00:33:54.331698 kernel: hv_vmbus: registering driver hv_utils May 17 00:33:54.339618 kernel: hv_vmbus: registering driver hyperv_fb May 17 00:33:54.363047 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 May 17 00:33:54.363158 kernel: hyperv_fb: Synthvid Version major 3, minor 5 May 17 00:33:54.363233 kernel: hv_utils: Heartbeat IC version 3.0 May 17 00:33:54.369605 kernel: hv_utils: Shutdown IC version 3.2 May 17 00:33:54.369699 kernel: hv_utils: TimeSync IC version 4.0 May 17 00:33:54.346150 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 May 17 00:33:54.411422 systemd-journald[1144]: Time jumped backwards, rotating. 
May 17 00:33:54.411518 kernel: Console: switching to colour dummy device 80x25 May 17 00:33:54.411541 kernel: Console: switching to colour frame buffer device 128x48 May 17 00:33:54.241000 audit[1210]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=56531b360070 a1=f884 a2=7fa3e997cbc5 a3=5 items=12 ppid=1209 pid=1210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:33:54.241000 audit: CWD cwd="/" May 17 00:33:54.241000 audit: PATH item=0 name=(null) inode=237 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:33:54.241000 audit: PATH item=1 name=(null) inode=14774 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:33:54.241000 audit: PATH item=2 name=(null) inode=14774 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:33:54.241000 audit: PATH item=3 name=(null) inode=14775 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:33:54.241000 audit: PATH item=4 name=(null) inode=14774 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:33:54.241000 audit: PATH item=5 name=(null) inode=14776 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:33:54.241000 audit: PATH item=6 name=(null) inode=14774 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:33:54.241000 audit: PATH item=7 name=(null) inode=14777 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:33:54.241000 audit: PATH item=8 name=(null) inode=14774 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:33:54.241000 audit: PATH item=9 name=(null) inode=14778 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:33:54.241000 audit: PATH item=10 name=(null) inode=14774 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:33:54.241000 audit: PATH item=11 name=(null) inode=14779 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:33:54.241000 audit: PROCTITLE proctitle="(udev-worker)" May 17 00:33:54.531059 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 17 00:33:54.599150 kernel: KVM: vmx: using Hyper-V Enlightened VMCS May 17 00:33:54.631524 systemd[1]: Finished systemd-udev-settle.service. May 17 00:33:54.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:54.635644 systemd[1]: Starting lvm2-activation-early.service... 
May 17 00:33:54.699535 systemd-networkd[1215]: lo: Link UP May 17 00:33:54.699547 systemd-networkd[1215]: lo: Gained carrier May 17 00:33:54.700116 systemd-networkd[1215]: Enumeration completed May 17 00:33:54.700330 systemd[1]: Started systemd-networkd.service. May 17 00:33:54.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:54.704929 systemd[1]: Starting systemd-networkd-wait-online.service... May 17 00:33:54.729789 systemd-networkd[1215]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:33:54.785153 kernel: mlx5_core 8c78:00:02.0 enP35960s1: Link up May 17 00:33:54.806222 kernel: hv_netvsc 6045bdfc-3616-6045-bdfc-36166045bdfc eth0: Data path switched to VF: enP35960s1 May 17 00:33:54.807153 systemd-networkd[1215]: enP35960s1: Link UP May 17 00:33:54.807634 systemd-networkd[1215]: eth0: Link UP May 17 00:33:54.807793 systemd-networkd[1215]: eth0: Gained carrier May 17 00:33:54.812422 systemd-networkd[1215]: enP35960s1: Gained carrier May 17 00:33:54.840261 systemd-networkd[1215]: eth0: DHCPv4 address 10.200.4.4/24, gateway 10.200.4.1 acquired from 168.63.129.16 May 17 00:33:54.962189 lvm[1289]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:33:54.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:54.986416 systemd[1]: Finished lvm2-activation-early.service. May 17 00:33:54.989338 systemd[1]: Reached target cryptsetup.target. May 17 00:33:54.992980 systemd[1]: Starting lvm2-activation.service... May 17 00:33:54.997599 lvm[1292]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
May 17 00:33:55.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:55.016329 systemd[1]: Finished lvm2-activation.service. May 17 00:33:55.018758 systemd[1]: Reached target local-fs-pre.target. May 17 00:33:55.021141 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 17 00:33:55.021180 systemd[1]: Reached target local-fs.target. May 17 00:33:55.023242 systemd[1]: Reached target machines.target. May 17 00:33:55.026373 systemd[1]: Starting ldconfig.service... May 17 00:33:55.042156 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:33:55.042222 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:33:55.043337 systemd[1]: Starting systemd-boot-update.service... May 17 00:33:55.046252 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 17 00:33:55.049652 systemd[1]: Starting systemd-machine-id-commit.service... May 17 00:33:55.052905 systemd[1]: Starting systemd-sysext.service... May 17 00:33:55.548250 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1295 (bootctl) May 17 00:33:55.550740 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 17 00:33:55.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:55.564925 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
May 17 00:33:55.575323 systemd[1]: Unmounting usr-share-oem.mount... May 17 00:33:55.580704 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 17 00:33:55.581018 systemd[1]: Unmounted usr-share-oem.mount. May 17 00:33:55.665157 kernel: loop0: detected capacity change from 0 to 221472 May 17 00:33:55.687820 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 17 00:33:55.688701 systemd[1]: Finished systemd-machine-id-commit.service. May 17 00:33:55.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:55.729155 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 17 00:33:55.740151 kernel: loop1: detected capacity change from 0 to 221472 May 17 00:33:55.753421 (sd-sysext)[1311]: Using extensions 'kubernetes'. May 17 00:33:55.753848 (sd-sysext)[1311]: Merged extensions into '/usr'. May 17 00:33:55.771439 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:33:55.773475 systemd[1]: Mounting usr-share-oem.mount... May 17 00:33:55.775806 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:33:55.777853 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:33:55.781380 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:33:55.789308 systemd[1]: Starting modprobe@loop.service... May 17 00:33:55.793804 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:33:55.794004 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
May 17 00:33:55.794264 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:33:55.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:55.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:55.800497 systemd[1]: Mounted usr-share-oem.mount. May 17 00:33:55.802970 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:33:55.803185 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:33:55.805786 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:33:55.805976 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:33:55.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:55.808000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:55.808770 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:33:55.809013 systemd[1]: Finished modprobe@loop.service. May 17 00:33:55.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:55.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:55.811705 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:33:55.811854 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:33:55.813382 systemd[1]: Finished systemd-sysext.service. May 17 00:33:55.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:55.818317 systemd[1]: Starting ensure-sysext.service... May 17 00:33:55.821628 systemd[1]: Starting systemd-tmpfiles-setup.service... May 17 00:33:55.829903 systemd[1]: Reloading. May 17 00:33:55.875505 systemd-tmpfiles[1325]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 17 00:33:55.892575 systemd-tmpfiles[1325]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 17 00:33:55.894538 /usr/lib/systemd/system-generators/torcx-generator[1345]: time="2025-05-17T00:33:55Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:33:55.905238 /usr/lib/systemd/system-generators/torcx-generator[1345]: time="2025-05-17T00:33:55Z" level=info msg="torcx already run" May 17 00:33:55.910107 systemd-tmpfiles[1325]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 17 00:33:55.995919 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:33:55.995942 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:33:56.014081 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:33:56.078320 systemd-networkd[1215]: eth0: Gained IPv6LL May 17 00:33:56.088434 systemd[1]: Finished systemd-networkd-wait-online.service. May 17 00:33:56.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:56.096700 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:33:56.096991 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:33:56.098319 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:33:56.102462 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:33:56.106769 systemd[1]: Starting modprobe@loop.service... May 17 00:33:56.109407 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:33:56.109700 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:33:56.110016 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). 
May 17 00:33:56.111681 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:33:56.111872 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:33:56.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:56.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:56.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:56.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:56.114841 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:33:56.115043 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:33:56.115503 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:33:56.115647 systemd[1]: Finished modprobe@loop.service. May 17 00:33:56.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:56.117000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:56.117680 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:33:56.117773 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:33:56.121494 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:33:56.122528 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:33:56.124657 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:33:56.126881 systemd[1]: Starting modprobe@drm.service... May 17 00:33:56.128448 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:33:56.129927 systemd[1]: Starting modprobe@loop.service... May 17 00:33:56.130304 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:33:56.130592 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:33:56.130924 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:33:56.132603 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:33:56.132920 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:33:56.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:56.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:56.136451 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:33:56.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:56.138984 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:33:56.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:56.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:56.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:56.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:56.141540 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:33:56.141734 systemd[1]: Finished modprobe@drm.service. May 17 00:33:56.142214 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:33:56.142384 systemd[1]: Finished modprobe@loop.service. May 17 00:33:56.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:33:56.143503 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:33:56.143593 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:33:56.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:56.144747 systemd[1]: Finished ensure-sysext.service. May 17 00:33:56.266632 systemd-fsck[1307]: fsck.fat 4.2 (2021-01-31) May 17 00:33:56.266632 systemd-fsck[1307]: /dev/sda1: 790 files, 120726/258078 clusters May 17 00:33:56.268523 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 17 00:33:56.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:56.275321 systemd[1]: Mounting boot.mount... May 17 00:33:56.294119 systemd[1]: Mounted boot.mount. May 17 00:33:56.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:56.307570 systemd[1]: Finished systemd-boot-update.service. May 17 00:33:57.145660 systemd[1]: Finished systemd-tmpfiles-setup.service. May 17 00:33:57.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:57.149934 systemd[1]: Starting audit-rules.service...
May 17 00:33:57.153456 systemd[1]: Starting clean-ca-certificates.service... May 17 00:33:57.157667 systemd[1]: Starting systemd-journal-catalog-update.service... May 17 00:33:57.162186 systemd[1]: Starting systemd-resolved.service... May 17 00:33:57.166289 systemd[1]: Starting systemd-timesyncd.service... May 17 00:33:57.172000 systemd[1]: Starting systemd-update-utmp.service... May 17 00:33:57.174945 systemd[1]: Finished clean-ca-certificates.service. May 17 00:33:57.182355 kernel: kauditd_printk_skb: 70 callbacks suppressed May 17 00:33:57.182412 kernel: audit: type=1130 audit(1747442037.177:155): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:57.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:57.182205 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:33:57.188496 systemd[1]: Finished systemd-update-utmp.service. May 17 00:33:57.200448 kernel: audit: type=1127 audit(1747442037.185:156): pid=1449 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 17 00:33:57.185000 audit[1449]: SYSTEM_BOOT pid=1449 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
May 17 00:33:57.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:57.213257 kernel: audit: type=1130 audit(1747442037.200:157): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:57.367788 systemd[1]: Started systemd-timesyncd.service. May 17 00:33:57.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:57.370710 systemd[1]: Reached target time-set.target. May 17 00:33:57.382226 kernel: audit: type=1130 audit(1747442037.370:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:57.384139 systemd-resolved[1447]: Positive Trust Anchors: May 17 00:33:57.384153 systemd-resolved[1447]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:33:57.384194 systemd-resolved[1447]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 17 00:33:57.425822 systemd[1]: Finished systemd-journal-catalog-update.service.
May 17 00:33:57.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:57.439150 kernel: audit: type=1130 audit(1747442037.428:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:57.462000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 17 00:33:57.463537 systemd[1]: Finished audit-rules.service. May 17 00:33:57.468351 augenrules[1465]: No rules May 17 00:33:57.484808 kernel: audit: type=1305 audit(1747442037.462:160): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 17 00:33:57.484890 kernel: audit: type=1300 audit(1747442037.462:160): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe9fe1a7c0 a2=420 a3=0 items=0 ppid=1442 pid=1465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:33:57.484915 kernel: audit: type=1327 audit(1747442037.462:160): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 17 00:33:57.462000 audit[1465]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe9fe1a7c0 a2=420 a3=0 items=0 ppid=1442 pid=1465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:33:57.462000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 17 00:33:57.485251 systemd-timesyncd[1448]: Contacted time server 185.83.169.27:123 (0.flatcar.pool.ntp.org).
May 17 00:33:57.485307 systemd-timesyncd[1448]: Initial clock synchronization to Sat 2025-05-17 00:33:57.485124 UTC. May 17 00:33:57.518962 systemd-resolved[1447]: Using system hostname 'ci-3510.3.7-n-21508f608f'. May 17 00:33:57.521262 systemd[1]: Started systemd-resolved.service. May 17 00:33:57.523811 systemd[1]: Reached target network.target. May 17 00:33:57.526293 systemd[1]: Reached target network-online.target. May 17 00:33:57.528484 systemd[1]: Reached target nss-lookup.target. May 17 00:34:02.589291 ldconfig[1294]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 17 00:34:02.597554 systemd[1]: Finished ldconfig.service. May 17 00:34:02.601458 systemd[1]: Starting systemd-update-done.service... May 17 00:34:02.609151 systemd[1]: Finished systemd-update-done.service. May 17 00:34:02.611551 systemd[1]: Reached target sysinit.target. May 17 00:34:02.613445 systemd[1]: Started motdgen.path. May 17 00:34:02.614972 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 17 00:34:02.617540 systemd[1]: Started logrotate.timer. May 17 00:34:02.619280 systemd[1]: Started mdadm.timer. May 17 00:34:02.620948 systemd[1]: Started systemd-tmpfiles-clean.timer. May 17 00:34:02.622831 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 17 00:34:02.622869 systemd[1]: Reached target paths.target. May 17 00:34:02.624490 systemd[1]: Reached target timers.target. May 17 00:34:02.626523 systemd[1]: Listening on dbus.socket. May 17 00:34:02.629300 systemd[1]: Starting docker.socket... May 17 00:34:02.653983 systemd[1]: Listening on sshd.socket.
May 17 00:34:02.656186 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:34:02.656783 systemd[1]: Listening on docker.socket. May 17 00:34:02.658959 systemd[1]: Reached target sockets.target. May 17 00:34:02.661187 systemd[1]: Reached target basic.target. May 17 00:34:02.663437 systemd[1]: System is tainted: cgroupsv1 May 17 00:34:02.663498 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 17 00:34:02.663531 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 17 00:34:02.664656 systemd[1]: Starting containerd.service... May 17 00:34:02.668087 systemd[1]: Starting dbus.service... May 17 00:34:02.671538 systemd[1]: Starting enable-oem-cloudinit.service... May 17 00:34:02.674557 systemd[1]: Starting extend-filesystems.service... May 17 00:34:02.676744 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 17 00:34:02.678721 systemd[1]: Starting kubelet.service... May 17 00:34:02.682234 systemd[1]: Starting motdgen.service... May 17 00:34:02.685833 systemd[1]: Started nvidia.service. May 17 00:34:02.689669 systemd[1]: Starting prepare-helm.service... May 17 00:34:02.694191 systemd[1]: Starting ssh-key-proc-cmdline.service... May 17 00:34:02.697979 systemd[1]: Starting sshd-keygen.service... May 17 00:34:02.702628 systemd[1]: Starting systemd-logind.service... May 17 00:34:02.704791 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:34:02.704897 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
May 17 00:34:02.706504 systemd[1]: Starting update-engine.service... May 17 00:34:02.710405 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 17 00:34:02.728253 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 17 00:34:02.728568 systemd[1]: Finished ssh-key-proc-cmdline.service. May 17 00:34:02.763696 extend-filesystems[1481]: Found loop1 May 17 00:34:02.766446 extend-filesystems[1481]: Found sda May 17 00:34:02.766446 extend-filesystems[1481]: Found sda1 May 17 00:34:02.766446 extend-filesystems[1481]: Found sda2 May 17 00:34:02.766446 extend-filesystems[1481]: Found sda3 May 17 00:34:02.766446 extend-filesystems[1481]: Found usr May 17 00:34:02.766446 extend-filesystems[1481]: Found sda4 May 17 00:34:02.766446 extend-filesystems[1481]: Found sda6 May 17 00:34:02.766446 extend-filesystems[1481]: Found sda7 May 17 00:34:02.766446 extend-filesystems[1481]: Found sda9 May 17 00:34:02.766446 extend-filesystems[1481]: Checking size of /dev/sda9 May 17 00:34:02.832940 jq[1480]: false May 17 00:34:02.835098 jq[1496]: true May 17 00:34:02.838205 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 17 00:34:02.838583 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 17 00:34:02.849106 systemd[1]: motdgen.service: Deactivated successfully. May 17 00:34:02.849405 systemd[1]: Finished motdgen.service. May 17 00:34:02.870349 jq[1518]: true May 17 00:34:02.888833 systemd-logind[1494]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 17 00:34:02.897664 systemd-logind[1494]: New seat seat0. May 17 00:34:02.900152 tar[1499]: linux-amd64/helm May 17 00:34:02.942490 extend-filesystems[1481]: Old size kept for /dev/sda9 May 17 00:34:02.945262 extend-filesystems[1481]: Found sr0 May 17 00:34:02.947793 systemd[1]: extend-filesystems.service: Deactivated successfully. May 17 00:34:02.948116 systemd[1]: Finished extend-filesystems.service. 
May 17 00:34:02.994617 dbus-daemon[1479]: [system] SELinux support is enabled May 17 00:34:02.994853 systemd[1]: Started dbus.service. May 17 00:34:02.999853 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 17 00:34:02.999884 systemd[1]: Reached target system-config.target. May 17 00:34:03.005753 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 17 00:34:03.005775 systemd[1]: Reached target user-config.target. May 17 00:34:03.015991 systemd[1]: Started systemd-logind.service. May 17 00:34:03.018630 dbus-daemon[1479]: [system] Successfully activated service 'org.freedesktop.systemd1' May 17 00:34:03.027986 env[1513]: time="2025-05-17T00:34:03.027929840Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 17 00:34:03.070546 systemd[1]: nvidia.service: Deactivated successfully. May 17 00:34:03.120430 bash[1537]: Updated "/home/core/.ssh/authorized_keys" May 17 00:34:03.133966 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 17 00:34:03.186812 env[1513]: time="2025-05-17T00:34:03.186757684Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 17 00:34:03.186959 env[1513]: time="2025-05-17T00:34:03.186937284Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 17 00:34:03.189471 env[1513]: time="2025-05-17T00:34:03.189419088Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.182-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 17 00:34:03.189471 env[1513]: time="2025-05-17T00:34:03.189470788Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 17 00:34:03.190844 env[1513]: time="2025-05-17T00:34:03.190806590Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:34:03.190844 env[1513]: time="2025-05-17T00:34:03.190843190Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 17 00:34:03.190970 env[1513]: time="2025-05-17T00:34:03.190860990Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 17 00:34:03.190970 env[1513]: time="2025-05-17T00:34:03.190873490Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 17 00:34:03.191039 env[1513]: time="2025-05-17T00:34:03.190977590Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 17 00:34:03.191445 env[1513]: time="2025-05-17T00:34:03.191418691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 17 00:34:03.192232 env[1513]: time="2025-05-17T00:34:03.192101992Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 17 00:34:03.192232 env[1513]: time="2025-05-17T00:34:03.192152692Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 17 00:34:03.192343 env[1513]: time="2025-05-17T00:34:03.192235892Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 17 00:34:03.192343 env[1513]: time="2025-05-17T00:34:03.192252192Z" level=info msg="metadata content store policy set" policy=shared May 17 00:34:03.213204 env[1513]: time="2025-05-17T00:34:03.210675720Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 17 00:34:03.213204 env[1513]: time="2025-05-17T00:34:03.210726221Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 17 00:34:03.213204 env[1513]: time="2025-05-17T00:34:03.210746021Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 17 00:34:03.213204 env[1513]: time="2025-05-17T00:34:03.210796821Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 17 00:34:03.213204 env[1513]: time="2025-05-17T00:34:03.210818821Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 17 00:34:03.213204 env[1513]: time="2025-05-17T00:34:03.210838721Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 17 00:34:03.213204 env[1513]: time="2025-05-17T00:34:03.210856521Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 17 00:34:03.213204 env[1513]: time="2025-05-17T00:34:03.210877021Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 17 00:34:03.213204 env[1513]: time="2025-05-17T00:34:03.210895121Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 17 00:34:03.213204 env[1513]: time="2025-05-17T00:34:03.210915721Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 17 00:34:03.213204 env[1513]: time="2025-05-17T00:34:03.210933521Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 17 00:34:03.213204 env[1513]: time="2025-05-17T00:34:03.210950421Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 17 00:34:03.213204 env[1513]: time="2025-05-17T00:34:03.211076121Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 17 00:34:03.213204 env[1513]: time="2025-05-17T00:34:03.211231121Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 17 00:34:03.213728 env[1513]: time="2025-05-17T00:34:03.211659522Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 17 00:34:03.213728 env[1513]: time="2025-05-17T00:34:03.211693122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 17 00:34:03.213728 env[1513]: time="2025-05-17T00:34:03.211714022Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 17 00:34:03.213728 env[1513]: time="2025-05-17T00:34:03.211785822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 17 00:34:03.213728 env[1513]: time="2025-05-17T00:34:03.211808122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 17 00:34:03.213728 env[1513]: time="2025-05-17T00:34:03.211826622Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 17 00:34:03.213728 env[1513]: time="2025-05-17T00:34:03.211842322Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 17 00:34:03.213728 env[1513]: time="2025-05-17T00:34:03.211861522Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 17 00:34:03.213728 env[1513]: time="2025-05-17T00:34:03.211878922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 17 00:34:03.213728 env[1513]: time="2025-05-17T00:34:03.211895722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 17 00:34:03.213728 env[1513]: time="2025-05-17T00:34:03.211911922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 17 00:34:03.213728 env[1513]: time="2025-05-17T00:34:03.211930522Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 17 00:34:03.213728 env[1513]: time="2025-05-17T00:34:03.212090723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 17 00:34:03.213728 env[1513]: time="2025-05-17T00:34:03.212109723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 17 00:34:03.213728 env[1513]: time="2025-05-17T00:34:03.212127423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..."
type=io.containerd.grpc.v1 May 17 00:34:03.214233 env[1513]: time="2025-05-17T00:34:03.213198824Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 17 00:34:03.214233 env[1513]: time="2025-05-17T00:34:03.213223624Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 17 00:34:03.214233 env[1513]: time="2025-05-17T00:34:03.213240224Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 17 00:34:03.214233 env[1513]: time="2025-05-17T00:34:03.213316424Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 17 00:34:03.214233 env[1513]: time="2025-05-17T00:34:03.213387925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 17 00:34:03.215112 env[1513]: time="2025-05-17T00:34:03.213767125Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin 
NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 17 00:34:03.215112 env[1513]: time="2025-05-17T00:34:03.214693027Z" level=info msg="Connect containerd service" May 17 00:34:03.215112 env[1513]: time="2025-05-17T00:34:03.214739427Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 00:34:03.356349 env[1513]: time="2025-05-17T00:34:03.216621630Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:34:03.356349 env[1513]: time="2025-05-17T00:34:03.217911832Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 17 00:34:03.356349 env[1513]: time="2025-05-17T00:34:03.217968432Z" level=info msg=serving... 
address=/run/containerd/containerd.sock May 17 00:34:03.356349 env[1513]: time="2025-05-17T00:34:03.218035232Z" level=info msg="containerd successfully booted in 0.200429s" May 17 00:34:03.356349 env[1513]: time="2025-05-17T00:34:03.226365744Z" level=info msg="Start subscribing containerd event" May 17 00:34:03.356349 env[1513]: time="2025-05-17T00:34:03.226446145Z" level=info msg="Start recovering state" May 17 00:34:03.356349 env[1513]: time="2025-05-17T00:34:03.226568845Z" level=info msg="Start event monitor" May 17 00:34:03.356349 env[1513]: time="2025-05-17T00:34:03.226593245Z" level=info msg="Start snapshots syncer" May 17 00:34:03.356349 env[1513]: time="2025-05-17T00:34:03.226606445Z" level=info msg="Start cni network conf syncer for default" May 17 00:34:03.356349 env[1513]: time="2025-05-17T00:34:03.226628345Z" level=info msg="Start streaming server" May 17 00:34:03.218180 systemd[1]: Started containerd.service. May 17 00:34:03.667091 update_engine[1495]: I0517 00:34:03.666467 1495 main.cc:92] Flatcar Update Engine starting May 17 00:34:03.711694 systemd[1]: Started update-engine.service. May 17 00:34:03.716748 systemd[1]: Started locksmithd.service. May 17 00:34:03.720593 update_engine[1495]: I0517 00:34:03.719429 1495 update_check_scheduler.cc:74] Next update check in 11m20s May 17 00:34:03.770125 tar[1499]: linux-amd64/LICENSE May 17 00:34:03.770125 tar[1499]: linux-amd64/README.md May 17 00:34:03.775833 systemd[1]: Finished prepare-helm.service. May 17 00:34:04.365306 systemd[1]: Started kubelet.service. 
May 17 00:34:05.095160 kubelet[1603]: E0517 00:34:05.094416 1603 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:34:05.096614 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:34:05.096824 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:34:05.150888 locksmithd[1591]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 17 00:34:05.511975 sshd_keygen[1504]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 17 00:34:05.532143 systemd[1]: Finished sshd-keygen.service. May 17 00:34:05.536330 systemd[1]: Starting issuegen.service... May 17 00:34:05.539762 systemd[1]: Started waagent.service. May 17 00:34:05.545661 systemd[1]: issuegen.service: Deactivated successfully. May 17 00:34:05.545934 systemd[1]: Finished issuegen.service. May 17 00:34:05.549975 systemd[1]: Starting systemd-user-sessions.service... May 17 00:34:05.568296 systemd[1]: Finished systemd-user-sessions.service. May 17 00:34:05.572456 systemd[1]: Started getty@tty1.service. May 17 00:34:05.576630 systemd[1]: Started serial-getty@ttyS0.service. May 17 00:34:05.579320 systemd[1]: Reached target getty.target. May 17 00:34:05.581359 systemd[1]: Reached target multi-user.target. May 17 00:34:05.584942 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 17 00:34:05.592991 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 17 00:34:05.593295 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 17 00:34:05.596705 systemd[1]: Startup finished in 902ms (firmware) + 28.065s (loader) + 16.092s (kernel) + 25.764s (userspace) = 1min 10.824s. 
May 17 00:34:06.149685 login[1628]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 17 00:34:06.151699 login[1629]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 17 00:34:06.197706 systemd[1]: Created slice user-500.slice. May 17 00:34:06.199095 systemd[1]: Starting user-runtime-dir@500.service... May 17 00:34:06.202403 systemd-logind[1494]: New session 1 of user core. May 17 00:34:06.207321 systemd-logind[1494]: New session 2 of user core. May 17 00:34:06.213839 systemd[1]: Finished user-runtime-dir@500.service. May 17 00:34:06.215647 systemd[1]: Starting user@500.service... May 17 00:34:06.223319 (systemd)[1635]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 00:34:06.419746 systemd[1635]: Queued start job for default target default.target. May 17 00:34:06.420495 systemd[1635]: Reached target paths.target. May 17 00:34:06.420521 systemd[1635]: Reached target sockets.target. May 17 00:34:06.420538 systemd[1635]: Reached target timers.target. May 17 00:34:06.420553 systemd[1635]: Reached target basic.target. May 17 00:34:06.420718 systemd[1]: Started user@500.service. May 17 00:34:06.421195 systemd[1635]: Reached target default.target. May 17 00:34:06.421350 systemd[1635]: Startup finished in 190ms. May 17 00:34:06.421753 systemd[1]: Started session-1.scope. May 17 00:34:06.422325 systemd[1]: Started session-2.scope. 
May 17 00:34:11.172850 waagent[1621]: 2025-05-17T00:34:11.172738Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 May 17 00:34:11.176933 waagent[1621]: 2025-05-17T00:34:11.176857Z INFO Daemon Daemon OS: flatcar 3510.3.7 May 17 00:34:11.179407 waagent[1621]: 2025-05-17T00:34:11.179343Z INFO Daemon Daemon Python: 3.9.16 May 17 00:34:11.181838 waagent[1621]: 2025-05-17T00:34:11.181766Z INFO Daemon Daemon Run daemon May 17 00:34:11.184238 waagent[1621]: 2025-05-17T00:34:11.184179Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.7' May 17 00:34:11.197034 waagent[1621]: 2025-05-17T00:34:11.196920Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. May 17 00:34:11.204582 waagent[1621]: 2025-05-17T00:34:11.204475Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' May 17 00:34:11.209008 waagent[1621]: 2025-05-17T00:34:11.208937Z INFO Daemon Daemon cloud-init is enabled: False May 17 00:34:11.218206 waagent[1621]: 2025-05-17T00:34:11.209207Z INFO Daemon Daemon Using waagent for provisioning May 17 00:34:11.218206 waagent[1621]: 2025-05-17T00:34:11.210778Z INFO Daemon Daemon Activate resource disk May 17 00:34:11.218206 waagent[1621]: 2025-05-17T00:34:11.211379Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb May 17 00:34:11.219081 waagent[1621]: 2025-05-17T00:34:11.219020Z INFO Daemon Daemon Found device: None May 17 00:34:11.226610 waagent[1621]: 2025-05-17T00:34:11.219366Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology May 17 00:34:11.226610 waagent[1621]: 2025-05-17T00:34:11.220232Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, 
duration=0 May 17 00:34:11.226610 waagent[1621]: 2025-05-17T00:34:11.221835Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 17 00:34:11.226610 waagent[1621]: 2025-05-17T00:34:11.222726Z INFO Daemon Daemon Running default provisioning handler May 17 00:34:11.245538 waagent[1621]: 2025-05-17T00:34:11.232388Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. May 17 00:34:11.245538 waagent[1621]: 2025-05-17T00:34:11.235243Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' May 17 00:34:11.245538 waagent[1621]: 2025-05-17T00:34:11.236003Z INFO Daemon Daemon cloud-init is enabled: False May 17 00:34:11.245538 waagent[1621]: 2025-05-17T00:34:11.237180Z INFO Daemon Daemon Copying ovf-env.xml May 17 00:34:11.315341 waagent[1621]: 2025-05-17T00:34:11.315164Z INFO Daemon Daemon Successfully mounted dvd May 17 00:34:11.387573 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. May 17 00:34:11.404664 waagent[1621]: 2025-05-17T00:34:11.404541Z INFO Daemon Daemon Detect protocol endpoint May 17 00:34:11.419309 waagent[1621]: 2025-05-17T00:34:11.405083Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 17 00:34:11.419309 waagent[1621]: 2025-05-17T00:34:11.406079Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler May 17 00:34:11.419309 waagent[1621]: 2025-05-17T00:34:11.406904Z INFO Daemon Daemon Test for route to 168.63.129.16 May 17 00:34:11.419309 waagent[1621]: 2025-05-17T00:34:11.407924Z INFO Daemon Daemon Route to 168.63.129.16 exists May 17 00:34:11.419309 waagent[1621]: 2025-05-17T00:34:11.408719Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 May 17 00:34:11.521368 waagent[1621]: 2025-05-17T00:34:11.521229Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 May 17 00:34:11.528742 waagent[1621]: 2025-05-17T00:34:11.522155Z INFO Daemon Daemon Wire protocol version:2012-11-30 May 17 00:34:11.528742 waagent[1621]: 2025-05-17T00:34:11.523105Z INFO Daemon Daemon Server preferred version:2015-04-05 May 17 00:34:11.774031 waagent[1621]: 2025-05-17T00:34:11.773829Z INFO Daemon Daemon Initializing goal state during protocol detection May 17 00:34:11.785596 waagent[1621]: 2025-05-17T00:34:11.785517Z INFO Daemon Daemon Forcing an update of the goal state.. May 17 00:34:11.788359 waagent[1621]: 2025-05-17T00:34:11.788293Z INFO Daemon Daemon Fetching goal state [incarnation 1] May 17 00:34:11.854671 waagent[1621]: 2025-05-17T00:34:11.854560Z INFO Daemon Daemon Found private key matching thumbprint 0E8CD958E3B2F114419E54EE4CADE2A156EFCD8E May 17 00:34:11.861295 waagent[1621]: 2025-05-17T00:34:11.855084Z INFO Daemon Daemon Fetch goal state completed May 17 00:34:11.880949 waagent[1621]: 2025-05-17T00:34:11.880884Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: e71504b6-1960-4bd8-8aa5-f81bd9eb4009 New eTag: 7069152617147801853] May 17 00:34:11.887763 waagent[1621]: 2025-05-17T00:34:11.881549Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob May 17 00:34:11.890717 waagent[1621]: 2025-05-17T00:34:11.890641Z INFO Daemon Daemon Starting provisioning May 17 00:34:11.892832 waagent[1621]: 2025-05-17T00:34:11.891030Z INFO Daemon Daemon Handle ovf-env.xml. 
May 17 00:34:11.892832 waagent[1621]: 2025-05-17T00:34:11.892035Z INFO Daemon Daemon Set hostname [ci-3510.3.7-n-21508f608f] May 17 00:34:11.911236 waagent[1621]: 2025-05-17T00:34:11.911113Z INFO Daemon Daemon Publish hostname [ci-3510.3.7-n-21508f608f] May 17 00:34:11.917977 waagent[1621]: 2025-05-17T00:34:11.911805Z INFO Daemon Daemon Examine /proc/net/route for primary interface May 17 00:34:11.917977 waagent[1621]: 2025-05-17T00:34:11.912732Z INFO Daemon Daemon Primary interface is [eth0] May 17 00:34:11.926089 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. May 17 00:34:11.926431 systemd[1]: Stopped systemd-networkd-wait-online.service. May 17 00:34:11.926499 systemd[1]: Stopping systemd-networkd-wait-online.service... May 17 00:34:11.926790 systemd[1]: Stopping systemd-networkd.service... May 17 00:34:11.931180 systemd-networkd[1215]: eth0: DHCPv6 lease lost May 17 00:34:11.932586 systemd[1]: systemd-networkd.service: Deactivated successfully. May 17 00:34:11.932917 systemd[1]: Stopped systemd-networkd.service. May 17 00:34:11.935737 systemd[1]: Starting systemd-networkd.service... May 17 00:34:11.972789 systemd-networkd[1679]: enP35960s1: Link UP May 17 00:34:11.972799 systemd-networkd[1679]: enP35960s1: Gained carrier May 17 00:34:11.974260 systemd-networkd[1679]: eth0: Link UP May 17 00:34:11.974269 systemd-networkd[1679]: eth0: Gained carrier May 17 00:34:11.974696 systemd-networkd[1679]: lo: Link UP May 17 00:34:11.974705 systemd-networkd[1679]: lo: Gained carrier May 17 00:34:11.975005 systemd-networkd[1679]: eth0: Gained IPv6LL May 17 00:34:11.975303 systemd-networkd[1679]: Enumeration completed May 17 00:34:11.975426 systemd[1]: Started systemd-networkd.service. 
May 17 00:34:11.979727 waagent[1621]: 2025-05-17T00:34:11.976928Z INFO Daemon Daemon Create user account if not exists May 17 00:34:11.979727 waagent[1621]: 2025-05-17T00:34:11.977638Z INFO Daemon Daemon User core already exists, skip useradd May 17 00:34:11.979727 waagent[1621]: 2025-05-17T00:34:11.978681Z INFO Daemon Daemon Configure sudoer May 17 00:34:11.980076 waagent[1621]: 2025-05-17T00:34:11.980021Z INFO Daemon Daemon Configure sshd May 17 00:34:11.981018 waagent[1621]: 2025-05-17T00:34:11.980968Z INFO Daemon Daemon Deploy ssh public key. May 17 00:34:11.986307 systemd-networkd[1679]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:34:11.987661 systemd[1]: Starting systemd-networkd-wait-online.service... May 17 00:34:12.019241 systemd-networkd[1679]: eth0: DHCPv4 address 10.200.4.4/24, gateway 10.200.4.1 acquired from 168.63.129.16 May 17 00:34:12.022035 systemd[1]: Finished systemd-networkd-wait-online.service. May 17 00:34:13.078275 waagent[1621]: 2025-05-17T00:34:13.078126Z INFO Daemon Daemon Provisioning complete May 17 00:34:13.092512 waagent[1621]: 2025-05-17T00:34:13.092426Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping May 17 00:34:13.098495 waagent[1621]: 2025-05-17T00:34:13.092884Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
May 17 00:34:13.098495 waagent[1621]: 2025-05-17T00:34:13.094448Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent May 17 00:34:13.358779 waagent[1686]: 2025-05-17T00:34:13.358618Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent May 17 00:34:13.359507 waagent[1686]: 2025-05-17T00:34:13.359444Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 17 00:34:13.359652 waagent[1686]: 2025-05-17T00:34:13.359598Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 May 17 00:34:13.370421 waagent[1686]: 2025-05-17T00:34:13.370348Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. May 17 00:34:13.370580 waagent[1686]: 2025-05-17T00:34:13.370527Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] May 17 00:34:13.419754 waagent[1686]: 2025-05-17T00:34:13.419631Z INFO ExtHandler ExtHandler Found private key matching thumbprint 0E8CD958E3B2F114419E54EE4CADE2A156EFCD8E May 17 00:34:13.420050 waagent[1686]: 2025-05-17T00:34:13.419992Z INFO ExtHandler ExtHandler Fetch goal state completed May 17 00:34:13.432140 waagent[1686]: 2025-05-17T00:34:13.432069Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 019a3bf3-2c11-4a84-9ce5-302e4baa30e6 New eTag: 7069152617147801853] May 17 00:34:13.432634 waagent[1686]: 2025-05-17T00:34:13.432576Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob May 17 00:34:13.515416 waagent[1686]: 2025-05-17T00:34:13.515259Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.7; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; May 17 00:34:13.524617 waagent[1686]: 2025-05-17T00:34:13.524538Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1686 May 17 00:34:13.527884 waagent[1686]: 2025-05-17T00:34:13.527820Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', 
'3510.3.7', '', 'Flatcar Container Linux by Kinvolk'] May 17 00:34:13.529030 waagent[1686]: 2025-05-17T00:34:13.528974Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules May 17 00:34:13.613659 waagent[1686]: 2025-05-17T00:34:13.613538Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service May 17 00:34:13.614009 waagent[1686]: 2025-05-17T00:34:13.613948Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup May 17 00:34:13.621891 waagent[1686]: 2025-05-17T00:34:13.621836Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now May 17 00:34:13.622374 waagent[1686]: 2025-05-17T00:34:13.622315Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' May 17 00:34:13.623418 waagent[1686]: 2025-05-17T00:34:13.623355Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] May 17 00:34:13.624647 waagent[1686]: 2025-05-17T00:34:13.624589Z INFO ExtHandler ExtHandler Starting env monitor service. May 17 00:34:13.625461 waagent[1686]: 2025-05-17T00:34:13.625405Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
May 17 00:34:13.625616 waagent[1686]: 2025-05-17T00:34:13.625565Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 17 00:34:13.625903 waagent[1686]: 2025-05-17T00:34:13.625851Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 17 00:34:13.626285 waagent[1686]: 2025-05-17T00:34:13.626234Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 May 17 00:34:13.626626 waagent[1686]: 2025-05-17T00:34:13.626575Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 May 17 00:34:13.627049 waagent[1686]: 2025-05-17T00:34:13.626996Z INFO EnvHandler ExtHandler Configure routes May 17 00:34:13.627356 waagent[1686]: 2025-05-17T00:34:13.627307Z INFO EnvHandler ExtHandler Gateway:None May 17 00:34:13.627698 waagent[1686]: 2025-05-17T00:34:13.627644Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread May 17 00:34:13.627840 waagent[1686]: 2025-05-17T00:34:13.627779Z INFO EnvHandler ExtHandler Routes:None May 17 00:34:13.628318 waagent[1686]: 2025-05-17T00:34:13.628263Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. May 17 00:34:13.628513 waagent[1686]: 2025-05-17T00:34:13.628449Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
May 17 00:34:13.629978 waagent[1686]: 2025-05-17T00:34:13.629926Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: May 17 00:34:13.629978 waagent[1686]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT May 17 00:34:13.629978 waagent[1686]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 May 17 00:34:13.629978 waagent[1686]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 May 17 00:34:13.629978 waagent[1686]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 May 17 00:34:13.629978 waagent[1686]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 17 00:34:13.629978 waagent[1686]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 17 00:34:13.632794 waagent[1686]: 2025-05-17T00:34:13.632695Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True May 17 00:34:13.634953 waagent[1686]: 2025-05-17T00:34:13.634671Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. May 17 00:34:13.638209 waagent[1686]: 2025-05-17T00:34:13.638143Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread May 17 00:34:13.650891 waagent[1686]: 2025-05-17T00:34:13.650825Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) May 17 00:34:13.652714 waagent[1686]: 2025-05-17T00:34:13.652663Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required May 17 00:34:13.653532 waagent[1686]: 2025-05-17T00:34:13.653476Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. 
Error: 'NoneType' object has no attribute 'getheaders' May 17 00:34:13.665854 waagent[1686]: 2025-05-17T00:34:13.665796Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1679' May 17 00:34:13.690907 waagent[1686]: 2025-05-17T00:34:13.690825Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. May 17 00:34:13.739149 waagent[1686]: 2025-05-17T00:34:13.738999Z INFO MonitorHandler ExtHandler Network interfaces: May 17 00:34:13.739149 waagent[1686]: Executing ['ip', '-a', '-o', 'link']: May 17 00:34:13.739149 waagent[1686]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 May 17 00:34:13.739149 waagent[1686]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:fc:36:16 brd ff:ff:ff:ff:ff:ff May 17 00:34:13.739149 waagent[1686]: 3: enP35960s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:fc:36:16 brd ff:ff:ff:ff:ff:ff\ altname enP35960p0s2 May 17 00:34:13.739149 waagent[1686]: Executing ['ip', '-4', '-a', '-o', 'address']: May 17 00:34:13.739149 waagent[1686]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever May 17 00:34:13.739149 waagent[1686]: 2: eth0 inet 10.200.4.4/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever May 17 00:34:13.739149 waagent[1686]: Executing ['ip', '-6', '-a', '-o', 'address']: May 17 00:34:13.739149 waagent[1686]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever May 17 00:34:13.739149 waagent[1686]: 2: eth0 inet6 fe80::6245:bdff:fefc:3616/64 scope link \ valid_lft forever preferred_lft forever May 17 00:34:13.876237 waagent[1686]: 2025-05-17T00:34:13.874828Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.13.1.1 -- exiting May 17 00:34:14.098419 
waagent[1621]: 2025-05-17T00:34:14.098260Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running May 17 00:34:14.103170 waagent[1621]: 2025-05-17T00:34:14.103092Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.13.1.1 to be the latest agent May 17 00:34:15.154709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 17 00:34:15.154982 systemd[1]: Stopped kubelet.service. May 17 00:34:15.157032 systemd[1]: Starting kubelet.service... May 17 00:34:15.192202 waagent[1714]: 2025-05-17T00:34:15.192073Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.13.1.1) May 17 00:34:15.193942 waagent[1714]: 2025-05-17T00:34:15.193861Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.7 May 17 00:34:15.194106 waagent[1714]: 2025-05-17T00:34:15.194050Z INFO ExtHandler ExtHandler Python: 3.9.16 May 17 00:34:15.194289 waagent[1714]: 2025-05-17T00:34:15.194233Z INFO ExtHandler ExtHandler CPU Arch: x86_64 May 17 00:34:15.213069 waagent[1714]: 2025-05-17T00:34:15.212661Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.7; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: x86_64; systemd: True; systemd_version: systemd 252 (252); LISDrivers: Absent; logrotate: logrotate 3.20.1; May 17 00:34:15.213558 waagent[1714]: 2025-05-17T00:34:15.213497Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 17 00:34:15.213714 waagent[1714]: 2025-05-17T00:34:15.213666Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 May 17 00:34:15.213933 waagent[1714]: 2025-05-17T00:34:15.213883Z INFO ExtHandler ExtHandler Initializing the goal state... 
May 17 00:34:15.226437 waagent[1714]: 2025-05-17T00:34:15.226369Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] May 17 00:34:15.233669 waagent[1714]: 2025-05-17T00:34:15.233610Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.166 May 17 00:34:15.234542 waagent[1714]: 2025-05-17T00:34:15.234483Z INFO ExtHandler May 17 00:34:15.234690 waagent[1714]: 2025-05-17T00:34:15.234641Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 5e0e9ab1-7bde-445a-9a76-1d0f6cdcd876 eTag: 7069152617147801853 source: Fabric] May 17 00:34:15.235385 waagent[1714]: 2025-05-17T00:34:15.235329Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. May 17 00:34:15.256571 waagent[1714]: 2025-05-17T00:34:15.256423Z INFO ExtHandler May 17 00:34:15.256880 waagent[1714]: 2025-05-17T00:34:15.256792Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] May 17 00:34:15.264849 waagent[1714]: 2025-05-17T00:34:15.264782Z INFO ExtHandler ExtHandler Downloading artifacts profile blob May 17 00:34:15.265447 waagent[1714]: 2025-05-17T00:34:15.265381Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required May 17 00:34:15.285578 waagent[1714]: 2025-05-17T00:34:15.285517Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. May 17 00:34:15.461214 waagent[1714]: 2025-05-17T00:34:15.457096Z INFO ExtHandler Downloaded certificate {'thumbprint': '0E8CD958E3B2F114419E54EE4CADE2A156EFCD8E', 'hasPrivateKey': True} May 17 00:34:15.461214 waagent[1714]: 2025-05-17T00:34:15.459281Z INFO ExtHandler Fetch goal state from WireServer completed May 17 00:34:15.461214 waagent[1714]: 2025-05-17T00:34:15.460707Z INFO ExtHandler ExtHandler Goal state initialization completed. 
May 17 00:34:15.484454 waagent[1714]: 2025-05-17T00:34:15.484336Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024) May 17 00:34:15.499159 waagent[1714]: 2025-05-17T00:34:15.497382Z INFO ExtHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules May 17 00:34:15.503405 waagent[1714]: 2025-05-17T00:34:15.503286Z INFO ExtHandler ExtHandler Did not find a legacy firewall rule: ['iptables', '-w', '-t', 'security', '-C', 'OUTPUT', '-d', '168.63.129.16', '-p', 'tcp', '-m', 'conntrack', '--ctstate', 'INVALID,NEW', '-j', 'ACCEPT'] May 17 00:34:15.503784 waagent[1714]: 2025-05-17T00:34:15.503721Z INFO ExtHandler ExtHandler Checking state of the firewall May 17 00:34:15.509917 systemd[1]: Started kubelet.service. May 17 00:34:15.921105 kubelet[1737]: E0517 00:34:15.920768 1737 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:34:15.924934 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:34:15.925162 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
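[Editor's note] The kubelet failure above (repeated throughout this log) is simply a missing /var/lib/kubelet/config.yaml; on a node that has not yet been joined to a cluster the file does not exist, and kubelet exits until the bootstrap tooling writes it. Purely as an illustration of the file's shape (hypothetical content, not what this node would eventually receive), a minimal kubelet config looks like:

```yaml
# Hypothetical minimal /var/lib/kubelet/config.yaml -- illustrative only;
# in practice this file is written by the cluster bootstrap (e.g. kubeadm join).
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
cgroupDriver: cgroupfs
staticPodPath: /etc/kubernetes/manifests
```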
May 17 00:34:16.008978 waagent[1714]: 2025-05-17T00:34:16.008858Z INFO ExtHandler ExtHandler Created firewall rules for Azure Fabric: May 17 00:34:16.008978 waagent[1714]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) May 17 00:34:16.008978 waagent[1714]: pkts bytes target prot opt in out source destination May 17 00:34:16.008978 waagent[1714]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) May 17 00:34:16.008978 waagent[1714]: pkts bytes target prot opt in out source destination May 17 00:34:16.008978 waagent[1714]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) May 17 00:34:16.008978 waagent[1714]: pkts bytes target prot opt in out source destination May 17 00:34:16.008978 waagent[1714]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 May 17 00:34:16.008978 waagent[1714]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 May 17 00:34:16.008978 waagent[1714]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW May 17 00:34:16.010057 waagent[1714]: 2025-05-17T00:34:16.009992Z INFO ExtHandler ExtHandler Setting up persistent firewall rules May 17 00:34:16.012600 waagent[1714]: 2025-05-17T00:34:16.012503Z INFO ExtHandler ExtHandler The firewalld service is not present on the system May 17 00:34:16.012831 waagent[1714]: 2025-05-17T00:34:16.012781Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service May 17 00:34:16.013170 waagent[1714]: 2025-05-17T00:34:16.013101Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup May 17 00:34:16.021029 waagent[1714]: 2025-05-17T00:34:16.020976Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now May 17 00:34:16.021503 waagent[1714]: 2025-05-17T00:34:16.021447Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' May 17 00:34:16.028694 waagent[1714]: 2025-05-17T00:34:16.028628Z INFO ExtHandler ExtHandler WALinuxAgent-2.13.1.1 running as process 1714 May 17 00:34:16.031532 waagent[1714]: 2025-05-17T00:34:16.031474Z INFO ExtHandler ExtHandler [CGI] Cgroups is not currently supported on ['flatcar', '3510.3.7', '', 'Flatcar Container Linux by Kinvolk'] May 17 00:34:16.032246 waagent[1714]: 2025-05-17T00:34:16.032191Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case cgroup usage went from enabled to disabled May 17 00:34:16.033017 waagent[1714]: 2025-05-17T00:34:16.032961Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False May 17 00:34:16.035448 waagent[1714]: 2025-05-17T00:34:16.035382Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] May 17 00:34:16.036655 waagent[1714]: 2025-05-17T00:34:16.036598Z INFO ExtHandler ExtHandler Starting env monitor service. May 17 00:34:16.037224 waagent[1714]: 2025-05-17T00:34:16.037171Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 17 00:34:16.037624 waagent[1714]: 2025-05-17T00:34:16.037573Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
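[Editor's note] The three OUTPUT-chain rules waagent dumps above (allow DNS to 168.63.129.16, allow root-owned traffic to it, drop new connections from anyone else) can be reproduced by hand. A sketch of equivalent iptables invocations, assuming the same wire-server address and the security table that waagent's legacy-rule check uses (requires root; illustrative only):

```shell
# Sketch of the waagent firewall rules above (run as root; illustrative only).
# Allow DNS queries to the Azure wire server...
iptables -w -t security -A OUTPUT -d 168.63.129.16 -p tcp --dport 53 -j ACCEPT
# ...allow traffic owned by root (UID 0), which covers waagent itself...
iptables -w -t security -A OUTPUT -d 168.63.129.16 -p tcp -m owner --uid-owner 0 -j ACCEPT
# ...and drop new/invalid connections to it from every other process.
iptables -w -t security -A OUTPUT -d 168.63.129.16 -p tcp -m conntrack --ctstate INVALID,NEW -j DROP
```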
May 17 00:34:16.037805 waagent[1714]: 2025-05-17T00:34:16.037739Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 May 17 00:34:16.038431 waagent[1714]: 2025-05-17T00:34:16.038376Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread May 17 00:34:16.038901 waagent[1714]: 2025-05-17T00:34:16.038826Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. May 17 00:34:16.038975 waagent[1714]: 2025-05-17T00:34:16.038921Z INFO ExtHandler ExtHandler Start Extension Telemetry service. May 17 00:34:16.039533 waagent[1714]: 2025-05-17T00:34:16.039482Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 17 00:34:16.040221 waagent[1714]: 2025-05-17T00:34:16.040165Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True May 17 00:34:16.040291 waagent[1714]: 2025-05-17T00:34:16.040237Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: May 17 00:34:16.040291 waagent[1714]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT May 17 00:34:16.040291 waagent[1714]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 May 17 00:34:16.040291 waagent[1714]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 May 17 00:34:16.040291 waagent[1714]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 May 17 00:34:16.040291 waagent[1714]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 17 00:34:16.040291 waagent[1714]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 17 00:34:16.042475 waagent[1714]: 2025-05-17T00:34:16.042336Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 May 17 00:34:16.043059 waagent[1714]: 2025-05-17T00:34:16.042996Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread May 17 00:34:16.043292 waagent[1714]: 2025-05-17T00:34:16.043228Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. 
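[Editor's note] The routing table above prints addresses as little-endian hex, exactly as /proc/net/route does. A small helper (illustrative, not waagent code) decodes the fields; note that destination 10813FA8 is the wire server 168.63.129.16 seen earlier in this log, and FEA9FEA9 is the link-local metadata endpoint 169.254.169.254:

```python
import socket
import struct

def route_hex_to_ip(field: str) -> str:
    """Decode one little-endian hex field from /proc/net/route into dotted quad."""
    return socket.inet_ntoa(struct.pack("<L", int(field, 16)))

# Fields taken from the MonitorHandler routing table dump above.
print(route_hex_to_ip("0104C80A"))  # gateway -> 10.200.4.1
print(route_hex_to_ip("10813FA8"))  # host route -> 168.63.129.16 (wire server)
print(route_hex_to_ip("FEA9FEA9"))  # host route -> 169.254.169.254 (IMDS)
print(route_hex_to_ip("00FFFFFF"))  # netmask -> 255.255.255.0
```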
May 17 00:34:16.047911 waagent[1714]: 2025-05-17T00:34:16.047790Z INFO EnvHandler ExtHandler Configure routes May 17 00:34:16.051827 waagent[1714]: 2025-05-17T00:34:16.051753Z INFO EnvHandler ExtHandler Gateway:None May 17 00:34:16.055466 waagent[1714]: 2025-05-17T00:34:16.055404Z INFO MonitorHandler ExtHandler Network interfaces: May 17 00:34:16.055466 waagent[1714]: Executing ['ip', '-a', '-o', 'link']: May 17 00:34:16.055466 waagent[1714]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 May 17 00:34:16.055466 waagent[1714]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:fc:36:16 brd ff:ff:ff:ff:ff:ff May 17 00:34:16.055466 waagent[1714]: 3: enP35960s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:fc:36:16 brd ff:ff:ff:ff:ff:ff\ altname enP35960p0s2 May 17 00:34:16.055466 waagent[1714]: Executing ['ip', '-4', '-a', '-o', 'address']: May 17 00:34:16.055466 waagent[1714]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever May 17 00:34:16.055466 waagent[1714]: 2: eth0 inet 10.200.4.4/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever May 17 00:34:16.055466 waagent[1714]: Executing ['ip', '-6', '-a', '-o', 'address']: May 17 00:34:16.055466 waagent[1714]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever May 17 00:34:16.055466 waagent[1714]: 2: eth0 inet6 fe80::6245:bdff:fefc:3616/64 scope link \ valid_lft forever preferred_lft forever May 17 00:34:16.056023 waagent[1714]: 2025-05-17T00:34:16.055970Z INFO EnvHandler ExtHandler Routes:None May 17 00:34:16.065308 waagent[1714]: 2025-05-17T00:34:16.065239Z INFO ExtHandler ExtHandler Downloading agent manifest May 17 00:34:16.100161 waagent[1714]: 
2025-05-17T00:34:16.099013Z INFO ExtHandler ExtHandler May 17 00:34:16.100161 waagent[1714]: 2025-05-17T00:34:16.099224Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: eb6b6a43-c529-4001-a8ca-b3b70d05e2d6 correlation 773b6019-7b91-4c27-825e-02d75a126261 created: 2025-05-17T00:32:44.054474Z] May 17 00:34:16.100326 waagent[1714]: 2025-05-17T00:34:16.100175Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. May 17 00:34:16.110666 waagent[1714]: 2025-05-17T00:34:16.110586Z INFO EnvHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules May 17 00:34:16.111228 waagent[1714]: 2025-05-17T00:34:16.111173Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 12 ms] May 17 00:34:16.130883 waagent[1714]: 2025-05-17T00:34:16.130824Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 May 17 00:34:16.142330 waagent[1714]: 2025-05-17T00:34:16.142255Z INFO ExtHandler ExtHandler Looking for existing remote access users. May 17 00:34:16.145758 waagent[1714]: 2025-05-17T00:34:16.145604Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.13.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 770C8FA4-D6EF-4865-A350-6EB4CD8370E8;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;] May 17 00:34:26.154657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 17 00:34:26.154961 systemd[1]: Stopped kubelet.service. May 17 00:34:26.157093 systemd[1]: Starting kubelet.service... May 17 00:34:26.495932 systemd[1]: Started kubelet.service. 
May 17 00:34:26.909471 kubelet[1779]: E0517 00:34:26.909407 1779 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:34:26.911201 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:34:26.911404 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:34:31.398406 systemd[1]: Created slice system-sshd.slice. May 17 00:34:31.400335 systemd[1]: Started sshd@0-10.200.4.4:22-10.200.16.10:39068.service. May 17 00:34:32.183881 sshd[1786]: Accepted publickey for core from 10.200.16.10 port 39068 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:34:32.185738 sshd[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:34:32.191478 systemd[1]: Started session-3.scope. May 17 00:34:32.192398 systemd-logind[1494]: New session 3 of user core. May 17 00:34:32.704627 systemd[1]: Started sshd@1-10.200.4.4:22-10.200.16.10:39080.service. May 17 00:34:33.290777 sshd[1791]: Accepted publickey for core from 10.200.16.10 port 39080 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:34:33.293216 sshd[1791]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:34:33.298312 systemd[1]: Started session-4.scope. May 17 00:34:33.299437 systemd-logind[1494]: New session 4 of user core. May 17 00:34:33.713657 sshd[1791]: pam_unix(sshd:session): session closed for user core May 17 00:34:33.717048 systemd[1]: sshd@1-10.200.4.4:22-10.200.16.10:39080.service: Deactivated successfully. May 17 00:34:33.719397 systemd[1]: session-4.scope: Deactivated successfully. May 17 00:34:33.720037 systemd-logind[1494]: Session 4 logged out. Waiting for processes to exit. 
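[Editor's note] The "Scheduled restart job, restart counter is at N" lines come from kubelet.service's restart policy: after each exit-code failure, systemd queues the unit to start again. A hypothetical drop-in showing the directives involved (the actual values in the Flatcar-shipped unit may differ):

```ini
# Hypothetical /etc/systemd/system/kubelet.service.d/10-restart.conf
# (illustrative only; this node's real unit settings may differ).
[Service]
Restart=always
RestartSec=10
```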
May 17 00:34:33.720962 systemd-logind[1494]: Removed session 4. May 17 00:34:33.811809 systemd[1]: Started sshd@2-10.200.4.4:22-10.200.16.10:39084.service. May 17 00:34:34.395823 sshd[1798]: Accepted publickey for core from 10.200.16.10 port 39084 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:34:34.397524 sshd[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:34:34.403237 systemd[1]: Started session-5.scope. May 17 00:34:34.403480 systemd-logind[1494]: New session 5 of user core. May 17 00:34:34.813978 sshd[1798]: pam_unix(sshd:session): session closed for user core May 17 00:34:34.817242 systemd[1]: sshd@2-10.200.4.4:22-10.200.16.10:39084.service: Deactivated successfully. May 17 00:34:34.818616 systemd-logind[1494]: Session 5 logged out. Waiting for processes to exit. May 17 00:34:34.818729 systemd[1]: session-5.scope: Deactivated successfully. May 17 00:34:34.820556 systemd-logind[1494]: Removed session 5. May 17 00:34:34.911119 systemd[1]: Started sshd@3-10.200.4.4:22-10.200.16.10:39096.service. May 17 00:34:35.496578 sshd[1805]: Accepted publickey for core from 10.200.16.10 port 39096 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:34:35.498283 sshd[1805]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:34:35.504093 systemd[1]: Started session-6.scope. May 17 00:34:35.504493 systemd-logind[1494]: New session 6 of user core. May 17 00:34:35.919240 sshd[1805]: pam_unix(sshd:session): session closed for user core May 17 00:34:35.922479 systemd[1]: sshd@3-10.200.4.4:22-10.200.16.10:39096.service: Deactivated successfully. May 17 00:34:35.923879 systemd-logind[1494]: Session 6 logged out. Waiting for processes to exit. May 17 00:34:35.923995 systemd[1]: session-6.scope: Deactivated successfully. May 17 00:34:35.925366 systemd-logind[1494]: Removed session 6. 
May 17 00:34:36.017075 systemd[1]: Started sshd@4-10.200.4.4:22-10.200.16.10:39098.service. May 17 00:34:36.772935 sshd[1812]: Accepted publickey for core from 10.200.16.10 port 39098 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:34:36.774622 sshd[1812]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:34:36.780404 systemd[1]: Started session-7.scope. May 17 00:34:36.780655 systemd-logind[1494]: New session 7 of user core. May 17 00:34:37.154597 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 17 00:34:37.154850 systemd[1]: Stopped kubelet.service. May 17 00:34:37.156677 systemd[1]: Starting kubelet.service... May 17 00:34:38.124799 sudo[1816]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 00:34:38.125191 sudo[1816]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 17 00:34:38.207017 systemd[1]: Starting docker.service... May 17 00:34:38.275468 env[1829]: time="2025-05-17T00:34:38.275422514Z" level=info msg="Starting up" May 17 00:34:38.277927 env[1829]: time="2025-05-17T00:34:38.277900514Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 17 00:34:38.277927 env[1829]: time="2025-05-17T00:34:38.277919014Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 17 00:34:38.278092 env[1829]: time="2025-05-17T00:34:38.277940814Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 17 00:34:38.278092 env[1829]: time="2025-05-17T00:34:38.277954014Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 17 00:34:38.279806 env[1829]: time="2025-05-17T00:34:38.279779714Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 17 00:34:38.279806 env[1829]: time="2025-05-17T00:34:38.279797514Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" 
module=grpc May 17 00:34:38.279941 env[1829]: time="2025-05-17T00:34:38.279816114Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 17 00:34:38.279941 env[1829]: time="2025-05-17T00:34:38.279827114Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 17 00:34:38.301610 systemd[1]: Started kubelet.service. May 17 00:34:38.382746 kubelet[1840]: E0517 00:34:38.382245 1840 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:34:38.384258 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:34:38.384470 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:34:38.440454 env[1829]: time="2025-05-17T00:34:38.440414240Z" level=warning msg="Your kernel does not support cgroup blkio weight" May 17 00:34:38.440454 env[1829]: time="2025-05-17T00:34:38.440441040Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" May 17 00:34:38.440730 env[1829]: time="2025-05-17T00:34:38.440667040Z" level=info msg="Loading containers: start." May 17 00:34:38.571160 kernel: Initializing XFRM netlink socket May 17 00:34:38.594645 env[1829]: time="2025-05-17T00:34:38.594597165Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" May 17 00:34:38.707431 systemd-networkd[1679]: docker0: Link UP May 17 00:34:38.728297 env[1829]: time="2025-05-17T00:34:38.728254286Z" level=info msg="Loading containers: done." 
May 17 00:34:38.756433 env[1829]: time="2025-05-17T00:34:38.756389991Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 17 00:34:38.756691 env[1829]: time="2025-05-17T00:34:38.756666691Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 17 00:34:38.756818 env[1829]: time="2025-05-17T00:34:38.756798191Z" level=info msg="Daemon has completed initialization" May 17 00:34:38.787784 systemd[1]: Started docker.service. May 17 00:34:38.792580 env[1829]: time="2025-05-17T00:34:38.792533496Z" level=info msg="API listen on /run/docker.sock" May 17 00:34:40.191203 env[1513]: time="2025-05-17T00:34:40.191141707Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\"" May 17 00:34:41.041961 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3320773776.mount: Deactivated successfully. May 17 00:34:42.463166 kernel: hv_balloon: Max. 
dynamic memory size: 8192 MB May 17 00:34:42.695119 env[1513]: time="2025-05-17T00:34:42.695059538Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:34:42.700522 env[1513]: time="2025-05-17T00:34:42.700471739Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:34:42.704779 env[1513]: time="2025-05-17T00:34:42.704741140Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:34:42.711876 env[1513]: time="2025-05-17T00:34:42.711835740Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:34:42.712537 env[1513]: time="2025-05-17T00:34:42.712504341Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\"" May 17 00:34:42.713244 env[1513]: time="2025-05-17T00:34:42.713161541Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\"" May 17 00:34:44.340272 env[1513]: time="2025-05-17T00:34:44.340215929Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:34:44.347788 env[1513]: time="2025-05-17T00:34:44.347738830Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:34:44.351942 env[1513]: time="2025-05-17T00:34:44.351898530Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:34:44.356611 env[1513]: time="2025-05-17T00:34:44.356573231Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:34:44.357305 env[1513]: time="2025-05-17T00:34:44.357270831Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\"" May 17 00:34:44.358483 env[1513]: time="2025-05-17T00:34:44.358455331Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\"" May 17 00:34:45.696883 env[1513]: time="2025-05-17T00:34:45.696829072Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:34:45.701880 env[1513]: time="2025-05-17T00:34:45.701842272Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:34:45.705876 env[1513]: time="2025-05-17T00:34:45.705848173Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:34:45.709803 env[1513]: time="2025-05-17T00:34:45.709771573Z" level=info 
msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:34:45.710425 env[1513]: time="2025-05-17T00:34:45.710395973Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\"" May 17 00:34:45.711079 env[1513]: time="2025-05-17T00:34:45.711051173Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\"" May 17 00:34:46.099279 env[1513]: time="2025-05-17T00:34:46.099061412Z" level=error msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" failed" error="failed to pull and unpack image \"registry.k8s.io/kube-proxy:v1.31.9\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \"https://prod-registry-k8s-io-eu-west-1.s3.dualstack.eu-west-1.amazonaws.com/containers/images/sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\": dial tcp: lookup prod-registry-k8s-io-eu-west-1.s3.dualstack.eu-west-1.amazonaws.com: no such host" May 17 00:34:46.099992 env[1513]: time="2025-05-17T00:34:46.099963712Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\"" May 17 00:34:47.392167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2842821789.mount: Deactivated successfully. 
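[Editor's note] The kube-proxy pull above first fails with a transient DNS error ("no such host") and then succeeds on the immediate re-pull. Containerd handles this internally, but the general retry-with-backoff pattern for transient pull failures can be sketched like this (generic illustration, not containerd's actual code):

```python
import time

def pull_with_retry(pull, attempts=3, base_delay=0.01):
    """Call `pull` until it succeeds, backing off exponentially between
    transient failures such as the DNS lookup error in the log above."""
    for i in range(attempts):
        try:
            return pull()
        except OSError:
            if i == attempts - 1:
                raise  # exhausted: surface the last error
            time.sleep(base_delay * 2 ** i)

# Simulated flaky pull: fails once with a DNS-style error, then succeeds.
calls = {"n": 0}
def flaky_pull():
    calls["n"] += 1
    if calls["n"] == 1:
        raise OSError("dial tcp: lookup registry: no such host")
    return "sha256:11a47a71ed3e"

print(pull_with_retry(flaky_pull))  # succeeds on the second attempt
```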
May 17 00:34:48.016976 env[1513]: time="2025-05-17T00:34:48.016915689Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:34:48.022540 env[1513]: time="2025-05-17T00:34:48.022489790Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:34:48.026366 env[1513]: time="2025-05-17T00:34:48.026337090Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:34:48.030348 env[1513]: time="2025-05-17T00:34:48.030315390Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:34:48.030750 env[1513]: time="2025-05-17T00:34:48.030720090Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\"" May 17 00:34:48.031267 env[1513]: time="2025-05-17T00:34:48.031240390Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 17 00:34:48.404745 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 17 00:34:48.405039 systemd[1]: Stopped kubelet.service. May 17 00:34:48.407222 systemd[1]: Starting kubelet.service... May 17 00:34:48.505965 systemd[1]: Started kubelet.service. 
May 17 00:34:48.543948 kubelet[1965]: E0517 00:34:48.543875 1965 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:34:48.545439 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:34:48.545649 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:34:49.147099 update_engine[1495]: I0517 00:34:48.751230 1495 update_attempter.cc:509] Updating boot flags... May 17 00:34:49.356805 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3085951265.mount: Deactivated successfully. May 17 00:34:50.587654 env[1513]: time="2025-05-17T00:34:50.587596894Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:34:50.594872 env[1513]: time="2025-05-17T00:34:50.594819894Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:34:50.598750 env[1513]: time="2025-05-17T00:34:50.598709295Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:34:50.603972 env[1513]: time="2025-05-17T00:34:50.603939995Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:34:50.604668 env[1513]: time="2025-05-17T00:34:50.604634595Z" level=info 
msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 17 00:34:50.605267 env[1513]: time="2025-05-17T00:34:50.605240995Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 17 00:34:51.170908 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2382883201.mount: Deactivated successfully. May 17 00:34:51.188271 env[1513]: time="2025-05-17T00:34:51.188232337Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:34:51.194087 env[1513]: time="2025-05-17T00:34:51.194052238Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:34:51.198452 env[1513]: time="2025-05-17T00:34:51.198418138Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:34:51.205783 env[1513]: time="2025-05-17T00:34:51.205751939Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:34:51.206255 env[1513]: time="2025-05-17T00:34:51.206225939Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 17 00:34:51.206906 env[1513]: time="2025-05-17T00:34:51.206878839Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 17 00:34:51.874568 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3290846338.mount: Deactivated successfully. 
May 17 00:34:54.371206 env[1513]: time="2025-05-17T00:34:54.371157240Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:34:54.377326 env[1513]: time="2025-05-17T00:34:54.377288841Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:34:54.382077 env[1513]: time="2025-05-17T00:34:54.382043041Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:34:54.385638 env[1513]: time="2025-05-17T00:34:54.385605741Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:34:54.386375 env[1513]: time="2025-05-17T00:34:54.386346241Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 17 00:34:57.428085 systemd[1]: Stopped kubelet.service. May 17 00:34:57.430761 systemd[1]: Starting kubelet.service... May 17 00:34:57.465633 systemd[1]: Reloading. 
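[Editor's note] Each pull above resolves a tag reference (e.g. registry.k8s.io/etcd:3.5.15-0) to a digest reference (registry.k8s.io/etcd@sha256:...). Splitting the two forms can be sketched as below (simplified; real OCI reference parsing also handles registry ports with paths and default tags):

```python
def parse_image_ref(ref):
    """Split an image reference into (repository, tag, digest).
    Tag or digest is None when absent. Simplified illustrative sketch."""
    digest = None
    if "@" in ref:
        ref, digest = ref.split("@", 1)
    repo, sep, tag = ref.rpartition(":")
    if not sep or "/" in tag:  # no ':' at all, or it belonged to a registry port
        repo, tag = ref, None
    return repo, tag, digest

print(parse_image_ref("registry.k8s.io/etcd:3.5.15-0"))
print(parse_image_ref("registry.k8s.io/etcd@sha256:a6dc63e6e8cf"))
```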
May 17 00:34:57.557617 /usr/lib/systemd/system-generators/torcx-generator[2058]: time="2025-05-17T00:34:57Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:34:57.557658 /usr/lib/systemd/system-generators/torcx-generator[2058]: time="2025-05-17T00:34:57Z" level=info msg="torcx already run" May 17 00:34:57.661356 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:34:57.661382 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:34:57.684007 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:34:57.778125 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 17 00:34:57.778252 systemd[1]: kubelet.service: Failed with result 'signal'. May 17 00:34:57.778608 systemd[1]: Stopped kubelet.service. May 17 00:34:57.782407 systemd[1]: Starting kubelet.service... May 17 00:34:58.085214 systemd[1]: Started kubelet.service. May 17 00:34:58.831616 kubelet[2136]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:34:58.831616 kubelet[2136]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
May 17 00:34:58.831616 kubelet[2136]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:34:58.832162 kubelet[2136]: I0517 00:34:58.831683 2136 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:34:59.294927 kubelet[2136]: I0517 00:34:59.294884 2136 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 00:34:59.295124 kubelet[2136]: I0517 00:34:59.295111 2136 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:34:59.295787 kubelet[2136]: I0517 00:34:59.295763 2136 server.go:934] "Client rotation is on, will bootstrap in background" May 17 00:34:59.313639 kubelet[2136]: E0517 00:34:59.313607 2136 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.4.4:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.4:6443: connect: connection refused" logger="UnhandledError" May 17 00:34:59.314571 kubelet[2136]: I0517 00:34:59.314544 2136 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:34:59.320006 kubelet[2136]: E0517 00:34:59.319971 2136 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:34:59.320125 kubelet[2136]: I0517 00:34:59.320105 2136 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
May 17 00:34:59.324737 kubelet[2136]: I0517 00:34:59.324709 2136 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 17 00:34:59.325783 kubelet[2136]: I0517 00:34:59.325759 2136 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:34:59.325943 kubelet[2136]: I0517 00:34:59.325905 2136 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:34:59.326118 kubelet[2136]: I0517 00:34:59.325940 2136 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.7-n-21508f608f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","Experimenta
lMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} May 17 00:34:59.326268 kubelet[2136]: I0517 00:34:59.326167 2136 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:34:59.326268 kubelet[2136]: I0517 00:34:59.326181 2136 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:34:59.326358 kubelet[2136]: I0517 00:34:59.326296 2136 state_mem.go:36] "Initialized new in-memory state store" May 17 00:34:59.330708 kubelet[2136]: I0517 00:34:59.330690 2136 kubelet.go:408] "Attempting to sync node with API server" May 17 00:34:59.330779 kubelet[2136]: I0517 00:34:59.330718 2136 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:34:59.330779 kubelet[2136]: I0517 00:34:59.330767 2136 kubelet.go:314] "Adding apiserver pod source" May 17 00:34:59.330863 kubelet[2136]: I0517 00:34:59.330790 2136 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:34:59.339563 kubelet[2136]: I0517 00:34:59.339539 2136 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 17 00:34:59.340056 kubelet[2136]: I0517 00:34:59.340031 2136 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:34:59.340166 kubelet[2136]: W0517 00:34:59.340089 2136 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 17 00:34:59.347088 kubelet[2136]: I0517 00:34:59.347064 2136 server.go:1274] "Started kubelet" May 17 00:34:59.347908 kubelet[2136]: W0517 00:34:59.347222 2136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-21508f608f&limit=500&resourceVersion=0": dial tcp 10.200.4.4:6443: connect: connection refused May 17 00:34:59.347908 kubelet[2136]: E0517 00:34:59.347296 2136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.4.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-21508f608f&limit=500&resourceVersion=0\": dial tcp 10.200.4.4:6443: connect: connection refused" logger="UnhandledError" May 17 00:34:59.364268 kubelet[2136]: E0517 00:34:59.363024 2136 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.4:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.4:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.7-n-21508f608f.18402954bd24a309 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.7-n-21508f608f,UID:ci-3510.3.7-n-21508f608f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.7-n-21508f608f,},FirstTimestamp:2025-05-17 00:34:59.347038985 +0000 UTC m=+1.253508755,LastTimestamp:2025-05-17 00:34:59.347038985 +0000 UTC m=+1.253508755,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.7-n-21508f608f,}" May 17 00:34:59.365349 kubelet[2136]: W0517 00:34:59.365313 2136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.200.4.4:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.4.4:6443: connect: connection refused May 17 00:34:59.365478 kubelet[2136]: E0517 00:34:59.365449 2136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.4.4:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.4:6443: connect: connection refused" logger="UnhandledError" May 17 00:34:59.365653 kubelet[2136]: I0517 00:34:59.365632 2136 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:34:59.366017 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). May 17 00:34:59.366623 kubelet[2136]: I0517 00:34:59.366591 2136 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:34:59.367080 kubelet[2136]: I0517 00:34:59.367064 2136 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:34:59.368044 kubelet[2136]: I0517 00:34:59.368012 2136 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:34:59.368888 kubelet[2136]: I0517 00:34:59.368870 2136 server.go:449] "Adding debug handlers to kubelet server" May 17 00:34:59.373730 kubelet[2136]: I0517 00:34:59.367120 2136 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:34:59.375641 kubelet[2136]: I0517 00:34:59.375619 2136 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 00:34:59.375772 kubelet[2136]: I0517 00:34:59.375755 2136 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:34:59.375833 kubelet[2136]: I0517 00:34:59.375808 2136 reconciler.go:26] "Reconciler: start to 
sync state" May 17 00:34:59.376253 kubelet[2136]: W0517 00:34:59.376181 2136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.4:6443: connect: connection refused May 17 00:34:59.376335 kubelet[2136]: E0517 00:34:59.376267 2136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.4.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.4:6443: connect: connection refused" logger="UnhandledError" May 17 00:34:59.376497 kubelet[2136]: I0517 00:34:59.376474 2136 factory.go:221] Registration of the systemd container factory successfully May 17 00:34:59.376559 kubelet[2136]: I0517 00:34:59.376546 2136 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:34:59.377789 kubelet[2136]: E0517 00:34:59.377656 2136 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-21508f608f\" not found" May 17 00:34:59.377789 kubelet[2136]: E0517 00:34:59.377750 2136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-21508f608f?timeout=10s\": dial tcp 10.200.4.4:6443: connect: connection refused" interval="200ms" May 17 00:34:59.377925 kubelet[2136]: E0517 00:34:59.377866 2136 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:34:59.378076 kubelet[2136]: I0517 00:34:59.378055 2136 factory.go:221] Registration of the containerd container factory successfully May 17 00:34:59.438972 kubelet[2136]: I0517 00:34:59.438941 2136 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:34:59.439108 kubelet[2136]: I0517 00:34:59.439098 2136 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:34:59.439195 kubelet[2136]: I0517 00:34:59.439187 2136 state_mem.go:36] "Initialized new in-memory state store" May 17 00:34:59.443737 kubelet[2136]: I0517 00:34:59.443722 2136 policy_none.go:49] "None policy: Start" May 17 00:34:59.444424 kubelet[2136]: I0517 00:34:59.444405 2136 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:34:59.444512 kubelet[2136]: I0517 00:34:59.444448 2136 state_mem.go:35] "Initializing new in-memory state store" May 17 00:34:59.451365 kubelet[2136]: I0517 00:34:59.451340 2136 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:34:59.451496 kubelet[2136]: I0517 00:34:59.451480 2136 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:34:59.451549 kubelet[2136]: I0517 00:34:59.451496 2136 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:34:59.452586 kubelet[2136]: I0517 00:34:59.452561 2136 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:34:59.455672 kubelet[2136]: E0517 00:34:59.455649 2136 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.7-n-21508f608f\" not found" May 17 00:34:59.470430 kubelet[2136]: I0517 00:34:59.470374 2136 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" May 17 00:34:59.471769 kubelet[2136]: I0517 00:34:59.471745 2136 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 17 00:34:59.471769 kubelet[2136]: I0517 00:34:59.471767 2136 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:34:59.471900 kubelet[2136]: I0517 00:34:59.471789 2136 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:34:59.471900 kubelet[2136]: E0517 00:34:59.471834 2136 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" May 17 00:34:59.472620 kubelet[2136]: W0517 00:34:59.472594 2136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.4:6443: connect: connection refused May 17 00:34:59.472766 kubelet[2136]: E0517 00:34:59.472745 2136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.4.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.4:6443: connect: connection refused" logger="UnhandledError" May 17 00:34:59.554180 kubelet[2136]: I0517 00:34:59.553674 2136 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-21508f608f" May 17 00:34:59.555322 kubelet[2136]: E0517 00:34:59.555274 2136 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.4.4:6443/api/v1/nodes\": dial tcp 10.200.4.4:6443: connect: connection refused" node="ci-3510.3.7-n-21508f608f" May 17 00:34:59.578422 kubelet[2136]: I0517 00:34:59.578391 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4cbca76e78d29ccb887def274acf7442-kubeconfig\") pod 
\"kube-scheduler-ci-3510.3.7-n-21508f608f\" (UID: \"4cbca76e78d29ccb887def274acf7442\") " pod="kube-system/kube-scheduler-ci-3510.3.7-n-21508f608f" May 17 00:34:59.578559 kubelet[2136]: I0517 00:34:59.578438 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/daa8b1e4f51b362a09ab8cb7fe704e67-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.7-n-21508f608f\" (UID: \"daa8b1e4f51b362a09ab8cb7fe704e67\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-21508f608f" May 17 00:34:59.578559 kubelet[2136]: I0517 00:34:59.578468 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/daa8b1e4f51b362a09ab8cb7fe704e67-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-21508f608f\" (UID: \"daa8b1e4f51b362a09ab8cb7fe704e67\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-21508f608f" May 17 00:34:59.578559 kubelet[2136]: I0517 00:34:59.578488 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/daa8b1e4f51b362a09ab8cb7fe704e67-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.7-n-21508f608f\" (UID: \"daa8b1e4f51b362a09ab8cb7fe704e67\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-21508f608f" May 17 00:34:59.578559 kubelet[2136]: I0517 00:34:59.578516 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/daa8b1e4f51b362a09ab8cb7fe704e67-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.7-n-21508f608f\" (UID: \"daa8b1e4f51b362a09ab8cb7fe704e67\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-21508f608f" May 17 00:34:59.578559 kubelet[2136]: I0517 00:34:59.578540 2136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6106bfcd90a40bdf1845b4f10aec8ec0-ca-certs\") pod \"kube-apiserver-ci-3510.3.7-n-21508f608f\" (UID: \"6106bfcd90a40bdf1845b4f10aec8ec0\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-21508f608f" May 17 00:34:59.578760 kubelet[2136]: I0517 00:34:59.578564 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6106bfcd90a40bdf1845b4f10aec8ec0-k8s-certs\") pod \"kube-apiserver-ci-3510.3.7-n-21508f608f\" (UID: \"6106bfcd90a40bdf1845b4f10aec8ec0\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-21508f608f" May 17 00:34:59.578760 kubelet[2136]: I0517 00:34:59.578594 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6106bfcd90a40bdf1845b4f10aec8ec0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.7-n-21508f608f\" (UID: \"6106bfcd90a40bdf1845b4f10aec8ec0\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-21508f608f" May 17 00:34:59.578760 kubelet[2136]: I0517 00:34:59.578621 2136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/daa8b1e4f51b362a09ab8cb7fe704e67-ca-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-21508f608f\" (UID: \"daa8b1e4f51b362a09ab8cb7fe704e67\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-21508f608f" May 17 00:34:59.579393 kubelet[2136]: E0517 00:34:59.579360 2136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-21508f608f?timeout=10s\": dial tcp 10.200.4.4:6443: connect: connection refused" interval="400ms" May 17 00:34:59.757495 kubelet[2136]: I0517 00:34:59.757464 
2136 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-21508f608f" May 17 00:34:59.757887 kubelet[2136]: E0517 00:34:59.757848 2136 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.4.4:6443/api/v1/nodes\": dial tcp 10.200.4.4:6443: connect: connection refused" node="ci-3510.3.7-n-21508f608f" May 17 00:34:59.883089 env[1513]: time="2025-05-17T00:34:59.882342307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.7-n-21508f608f,Uid:6106bfcd90a40bdf1845b4f10aec8ec0,Namespace:kube-system,Attempt:0,}" May 17 00:34:59.891179 env[1513]: time="2025-05-17T00:34:59.891010008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.7-n-21508f608f,Uid:daa8b1e4f51b362a09ab8cb7fe704e67,Namespace:kube-system,Attempt:0,}" May 17 00:34:59.891179 env[1513]: time="2025-05-17T00:34:59.891009708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.7-n-21508f608f,Uid:4cbca76e78d29ccb887def274acf7442,Namespace:kube-system,Attempt:0,}" May 17 00:34:59.980277 kubelet[2136]: E0517 00:34:59.980233 2136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-21508f608f?timeout=10s\": dial tcp 10.200.4.4:6443: connect: connection refused" interval="800ms" May 17 00:35:00.159792 kubelet[2136]: I0517 00:35:00.159759 2136 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-21508f608f" May 17 00:35:00.160168 kubelet[2136]: E0517 00:35:00.160119 2136 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.4.4:6443/api/v1/nodes\": dial tcp 10.200.4.4:6443: connect: connection refused" node="ci-3510.3.7-n-21508f608f" May 17 00:35:00.181988 kubelet[2136]: W0517 00:35:00.181914 2136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: 
failed to list *v1.Service: Get "https://10.200.4.4:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.4.4:6443: connect: connection refused May 17 00:35:00.182180 kubelet[2136]: E0517 00:35:00.181997 2136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.4.4:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.4:6443: connect: connection refused" logger="UnhandledError" May 17 00:35:00.303004 kubelet[2136]: W0517 00:35:00.302933 2136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.4:6443: connect: connection refused May 17 00:35:00.303205 kubelet[2136]: E0517 00:35:00.303016 2136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.4.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.4:6443: connect: connection refused" logger="UnhandledError" May 17 00:35:00.626007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount782471163.mount: Deactivated successfully. 
May 17 00:35:00.667317 env[1513]: time="2025-05-17T00:35:00.667262238Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:00.670249 env[1513]: time="2025-05-17T00:35:00.670210938Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:00.681270 env[1513]: time="2025-05-17T00:35:00.681225938Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:00.683657 env[1513]: time="2025-05-17T00:35:00.683624539Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:00.687521 env[1513]: time="2025-05-17T00:35:00.687485039Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:00.689457 env[1513]: time="2025-05-17T00:35:00.689422639Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:00.693066 env[1513]: time="2025-05-17T00:35:00.693031639Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:00.695384 env[1513]: time="2025-05-17T00:35:00.695353339Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" May 17 00:35:00.706185 env[1513]: time="2025-05-17T00:35:00.706126139Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:00.711520 env[1513]: time="2025-05-17T00:35:00.711477840Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:00.731787 env[1513]: time="2025-05-17T00:35:00.731740340Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:00.753045 env[1513]: time="2025-05-17T00:35:00.752998741Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:00.762881 kubelet[2136]: W0517 00:35:00.762809 2136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-21508f608f&limit=500&resourceVersion=0": dial tcp 10.200.4.4:6443: connect: connection refused May 17 00:35:00.763043 kubelet[2136]: E0517 00:35:00.762895 2136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.4.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-21508f608f&limit=500&resourceVersion=0\": dial tcp 10.200.4.4:6443: connect: connection refused" logger="UnhandledError" May 17 00:35:00.781007 kubelet[2136]: E0517 00:35:00.780958 2136 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://10.200.4.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-21508f608f?timeout=10s\": dial tcp 10.200.4.4:6443: connect: connection refused" interval="1.6s" May 17 00:35:00.791188 env[1513]: time="2025-05-17T00:35:00.787874443Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:35:00.791188 env[1513]: time="2025-05-17T00:35:00.787961143Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:35:00.791188 env[1513]: time="2025-05-17T00:35:00.787990343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:35:00.791188 env[1513]: time="2025-05-17T00:35:00.788194843Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ccb5fbdd8b88a5b6bef51d6d2b68615f4d1ea2fc85e2e2225578e6382410d57f pid=2176 runtime=io.containerd.runc.v2 May 17 00:35:00.818405 env[1513]: time="2025-05-17T00:35:00.818290544Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:35:00.818735 env[1513]: time="2025-05-17T00:35:00.818694144Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:35:00.818923 env[1513]: time="2025-05-17T00:35:00.818895444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:35:00.819698 env[1513]: time="2025-05-17T00:35:00.819639444Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7c3e69c819d2f3d2618f6de544b3bec8e4cf29b09ee565175d598d98851d8658 pid=2197 runtime=io.containerd.runc.v2
May 17 00:35:00.849767 env[1513]: time="2025-05-17T00:35:00.849692445Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:35:00.850005 env[1513]: time="2025-05-17T00:35:00.849978745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:35:00.850108 env[1513]: time="2025-05-17T00:35:00.850089445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:35:00.850435 env[1513]: time="2025-05-17T00:35:00.850383845Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1860d42111ce35d08e436f784a1766a80f0052329b9ddcaeb20ae905584ff1ff pid=2239 runtime=io.containerd.runc.v2
May 17 00:35:00.894690 env[1513]: time="2025-05-17T00:35:00.894582247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.7-n-21508f608f,Uid:daa8b1e4f51b362a09ab8cb7fe704e67,Namespace:kube-system,Attempt:0,} returns sandbox id \"ccb5fbdd8b88a5b6bef51d6d2b68615f4d1ea2fc85e2e2225578e6382410d57f\""
May 17 00:35:00.899162 env[1513]: time="2025-05-17T00:35:00.899102347Z" level=info msg="CreateContainer within sandbox \"ccb5fbdd8b88a5b6bef51d6d2b68615f4d1ea2fc85e2e2225578e6382410d57f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 17 00:35:00.922808 env[1513]: time="2025-05-17T00:35:00.922765148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.7-n-21508f608f,Uid:6106bfcd90a40bdf1845b4f10aec8ec0,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c3e69c819d2f3d2618f6de544b3bec8e4cf29b09ee565175d598d98851d8658\""
May 17 00:35:00.926369 env[1513]: time="2025-05-17T00:35:00.926322948Z" level=info msg="CreateContainer within sandbox \"7c3e69c819d2f3d2618f6de544b3bec8e4cf29b09ee565175d598d98851d8658\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 17 00:35:00.934182 kubelet[2136]: W0517 00:35:00.934100 2136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.4:6443: connect: connection refused
May 17 00:35:00.934309 kubelet[2136]: E0517 00:35:00.934210 2136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.4.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.4:6443: connect: connection refused" logger="UnhandledError"
May 17 00:35:00.946735 env[1513]: time="2025-05-17T00:35:00.946691749Z" level=info msg="CreateContainer within sandbox \"ccb5fbdd8b88a5b6bef51d6d2b68615f4d1ea2fc85e2e2225578e6382410d57f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d8573a456a593f4869665c1af84327096666b7006c0b5a5c1820deee1a377022\""
May 17 00:35:00.948681 env[1513]: time="2025-05-17T00:35:00.948641549Z" level=info msg="StartContainer for \"d8573a456a593f4869665c1af84327096666b7006c0b5a5c1820deee1a377022\""
May 17 00:35:00.961176 env[1513]: time="2025-05-17T00:35:00.961075249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.7-n-21508f608f,Uid:4cbca76e78d29ccb887def274acf7442,Namespace:kube-system,Attempt:0,} returns sandbox id \"1860d42111ce35d08e436f784a1766a80f0052329b9ddcaeb20ae905584ff1ff\""
May 17 00:35:00.962358 kubelet[2136]: I0517 00:35:00.962332 2136 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-21508f608f"
May 17 00:35:00.962720 kubelet[2136]: E0517 00:35:00.962682 2136 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.4.4:6443/api/v1/nodes\": dial tcp 10.200.4.4:6443: connect: connection refused" node="ci-3510.3.7-n-21508f608f"
May 17 00:35:00.964998 env[1513]: time="2025-05-17T00:35:00.964959849Z" level=info msg="CreateContainer within sandbox \"1860d42111ce35d08e436f784a1766a80f0052329b9ddcaeb20ae905584ff1ff\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 17 00:35:00.978322 env[1513]: time="2025-05-17T00:35:00.978260250Z" level=info msg="CreateContainer within sandbox \"7c3e69c819d2f3d2618f6de544b3bec8e4cf29b09ee565175d598d98851d8658\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4d0fdb662e2f6be0d04b7dc6782ce6d248f8ec3aa3e5a2e85559bac8ce727a4e\""
May 17 00:35:00.979097 env[1513]: time="2025-05-17T00:35:00.979063350Z" level=info msg="StartContainer for \"4d0fdb662e2f6be0d04b7dc6782ce6d248f8ec3aa3e5a2e85559bac8ce727a4e\""
May 17 00:35:01.017689 env[1513]: time="2025-05-17T00:35:01.017636351Z" level=info msg="CreateContainer within sandbox \"1860d42111ce35d08e436f784a1766a80f0052329b9ddcaeb20ae905584ff1ff\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d00ca7f91ef7bf8e75f470cab8d3a23935fbb3b7de9f1d1238079f1de4f613d1\""
May 17 00:35:01.018384 env[1513]: time="2025-05-17T00:35:01.018357551Z" level=info msg="StartContainer for \"d00ca7f91ef7bf8e75f470cab8d3a23935fbb3b7de9f1d1238079f1de4f613d1\""
May 17 00:35:01.062373 env[1513]: time="2025-05-17T00:35:01.062325353Z" level=info msg="StartContainer for \"d8573a456a593f4869665c1af84327096666b7006c0b5a5c1820deee1a377022\" returns successfully"
May 17 00:35:01.100897 env[1513]: time="2025-05-17T00:35:01.100839254Z" level=info msg="StartContainer for \"4d0fdb662e2f6be0d04b7dc6782ce6d248f8ec3aa3e5a2e85559bac8ce727a4e\" returns successfully"
May 17 00:35:01.226860 env[1513]: time="2025-05-17T00:35:01.226808159Z" level=info msg="StartContainer for \"d00ca7f91ef7bf8e75f470cab8d3a23935fbb3b7de9f1d1238079f1de4f613d1\" returns successfully"
May 17 00:35:02.564681 kubelet[2136]: I0517 00:35:02.564656 2136 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-21508f608f"
May 17 00:35:03.192889 kubelet[2136]: E0517 00:35:03.192850 2136 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.7-n-21508f608f\" not found" node="ci-3510.3.7-n-21508f608f"
May 17 00:35:03.303874 kubelet[2136]: E0517 00:35:03.303763 2136 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-3510.3.7-n-21508f608f.18402954bd24a309 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.7-n-21508f608f,UID:ci-3510.3.7-n-21508f608f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.7-n-21508f608f,},FirstTimestamp:2025-05-17 00:34:59.347038985 +0000 UTC m=+1.253508755,LastTimestamp:2025-05-17 00:34:59.347038985 +0000 UTC m=+1.253508755,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.7-n-21508f608f,}"
May 17 00:35:03.352440 kubelet[2136]: I0517 00:35:03.352402 2136 apiserver.go:52] "Watching apiserver"
May 17 00:35:03.359145 kubelet[2136]: E0517 00:35:03.359040 2136 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-3510.3.7-n-21508f608f.18402954bebdb03a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.7-n-21508f608f,UID:ci-3510.3.7-n-21508f608f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.,Source:EventSource{Component:kubelet,Host:ci-3510.3.7-n-21508f608f,},FirstTimestamp:2025-05-17 00:34:59.373846586 +0000 UTC m=+1.280316356,LastTimestamp:2025-05-17 00:34:59.373846586 +0000 UTC m=+1.280316356,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.7-n-21508f608f,}"
May 17 00:35:03.366228 kubelet[2136]: I0517 00:35:03.366196 2136 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.7-n-21508f608f"
May 17 00:35:03.366228 kubelet[2136]: E0517 00:35:03.366233 2136 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-3510.3.7-n-21508f608f\": node \"ci-3510.3.7-n-21508f608f\" not found"
May 17 00:35:03.378357 kubelet[2136]: I0517 00:35:03.378315 2136 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
May 17 00:35:03.556744 kubelet[2136]: E0517 00:35:03.556628 2136 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.7-n-21508f608f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.7-n-21508f608f"
May 17 00:35:03.557605 kubelet[2136]: E0517 00:35:03.557570 2136 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.7-n-21508f608f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.7-n-21508f608f"
May 17 00:35:04.199828 kubelet[2136]: W0517 00:35:04.199783 2136 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 17 00:35:06.245262 systemd[1]: Reloading.
May 17 00:35:06.329313 /usr/lib/systemd/system-generators/torcx-generator[2422]: time="2025-05-17T00:35:06Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 17 00:35:06.329346 /usr/lib/systemd/system-generators/torcx-generator[2422]: time="2025-05-17T00:35:06Z" level=info msg="torcx already run"
May 17 00:35:06.425915 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 17 00:35:06.425935 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 17 00:35:06.444107 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:35:06.539717 systemd[1]: Stopping kubelet.service...
May 17 00:35:06.563898 systemd[1]: kubelet.service: Deactivated successfully.
May 17 00:35:06.564300 systemd[1]: Stopped kubelet.service.
May 17 00:35:06.566926 systemd[1]: Starting kubelet.service...
May 17 00:35:06.863955 systemd[1]: Started kubelet.service.
May 17 00:35:07.349446 kubelet[2500]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 17 00:35:07.349446 kubelet[2500]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 17 00:35:07.349446 kubelet[2500]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 17 00:35:07.350025 kubelet[2500]: I0517 00:35:07.349550 2500 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 17 00:35:07.360068 kubelet[2500]: I0517 00:35:07.360038 2500 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
May 17 00:35:07.360248 kubelet[2500]: I0517 00:35:07.360233 2500 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 17 00:35:07.360913 kubelet[2500]: I0517 00:35:07.360884 2500 server.go:934] "Client rotation is on, will bootstrap in background"
May 17 00:35:07.362040 kubelet[2500]: I0517 00:35:07.362007 2500 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 17 00:35:07.363777 kubelet[2500]: I0517 00:35:07.363732 2500 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 17 00:35:07.371314 kubelet[2500]: E0517 00:35:07.371273 2500 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 17 00:35:07.371314 kubelet[2500]: I0517 00:35:07.371315 2500 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 17 00:35:07.375050 kubelet[2500]: I0517 00:35:07.375029 2500 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 17 00:35:07.375439 kubelet[2500]: I0517 00:35:07.375418 2500 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 17 00:35:07.375584 kubelet[2500]: I0517 00:35:07.375549 2500 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 17 00:35:07.375779 kubelet[2500]: I0517 00:35:07.375584 2500 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.7-n-21508f608f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
May 17 00:35:07.375965 kubelet[2500]: I0517 00:35:07.375790 2500 topology_manager.go:138] "Creating topology manager with none policy"
May 17 00:35:07.375965 kubelet[2500]: I0517 00:35:07.375805 2500 container_manager_linux.go:300] "Creating device plugin manager"
May 17 00:35:07.375965 kubelet[2500]: I0517 00:35:07.375836 2500 state_mem.go:36] "Initialized new in-memory state store"
May 17 00:35:07.375965 kubelet[2500]: I0517 00:35:07.375944 2500 kubelet.go:408] "Attempting to sync node with API server"
May 17 00:35:07.375965 kubelet[2500]: I0517 00:35:07.375960 2500 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 17 00:35:07.376304 kubelet[2500]: I0517 00:35:07.375996 2500 kubelet.go:314] "Adding apiserver pod source"
May 17 00:35:07.376304 kubelet[2500]: I0517 00:35:07.376009 2500 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 17 00:35:07.380825 kubelet[2500]: I0517 00:35:07.380413 2500 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
May 17 00:35:07.381052 kubelet[2500]: I0517 00:35:07.380869 2500 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 17 00:35:07.381392 kubelet[2500]: I0517 00:35:07.381372 2500 server.go:1274] "Started kubelet"
May 17 00:35:07.386179 kubelet[2500]: I0517 00:35:07.383684 2500 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 17 00:35:07.394994 kubelet[2500]: I0517 00:35:07.394554 2500 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 17 00:35:07.395697 kubelet[2500]: I0517 00:35:07.395677 2500 server.go:449] "Adding debug handlers to kubelet server"
May 17 00:35:07.397265 kubelet[2500]: I0517 00:35:07.396815 2500 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 17 00:35:07.397265 kubelet[2500]: I0517 00:35:07.397013 2500 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 17 00:35:07.402027 kubelet[2500]: I0517 00:35:07.400744 2500 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 17 00:35:07.406957 kubelet[2500]: I0517 00:35:07.404614 2500 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 17 00:35:07.406957 kubelet[2500]: E0517 00:35:07.404828 2500 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-21508f608f\" not found"
May 17 00:35:07.406957 kubelet[2500]: I0517 00:35:07.406517 2500 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
May 17 00:35:07.406957 kubelet[2500]: I0517 00:35:07.406636 2500 reconciler.go:26] "Reconciler: start to sync state"
May 17 00:35:07.414909 kubelet[2500]: I0517 00:35:07.413693 2500 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 17 00:35:07.415070 kubelet[2500]: I0517 00:35:07.415050 2500 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 17 00:35:07.415156 kubelet[2500]: I0517 00:35:07.415083 2500 status_manager.go:217] "Starting to sync pod status with apiserver"
May 17 00:35:07.415156 kubelet[2500]: I0517 00:35:07.415102 2500 kubelet.go:2321] "Starting kubelet main sync loop"
May 17 00:35:07.415241 kubelet[2500]: E0517 00:35:07.415161 2500 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 17 00:35:07.431127 kubelet[2500]: I0517 00:35:07.427710 2500 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 17 00:35:07.431250 kubelet[2500]: I0517 00:35:07.431193 2500 factory.go:221] Registration of the containerd container factory successfully
May 17 00:35:07.431250 kubelet[2500]: I0517 00:35:07.431206 2500 factory.go:221] Registration of the systemd container factory successfully
May 17 00:35:07.437688 kubelet[2500]: E0517 00:35:07.437665 2500 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 17 00:35:07.468983 sudo[2530]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
May 17 00:35:07.470857 sudo[2530]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
May 17 00:35:07.502244 kubelet[2500]: I0517 00:35:07.502225 2500 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 17 00:35:07.502394 kubelet[2500]: I0517 00:35:07.502383 2500 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 17 00:35:07.502459 kubelet[2500]: I0517 00:35:07.502453 2500 state_mem.go:36] "Initialized new in-memory state store"
May 17 00:35:07.502686 kubelet[2500]: I0517 00:35:07.502657 2500 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 17 00:35:07.502780 kubelet[2500]: I0517 00:35:07.502760 2500 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 17 00:35:07.502833 kubelet[2500]: I0517 00:35:07.502828 2500 policy_none.go:49] "None policy: Start"
May 17 00:35:07.503800 kubelet[2500]: I0517 00:35:07.503775 2500 memory_manager.go:170] "Starting memorymanager" policy="None"
May 17 00:35:07.503800 kubelet[2500]: I0517 00:35:07.503804 2500 state_mem.go:35] "Initializing new in-memory state store"
May 17 00:35:07.504009 kubelet[2500]: I0517 00:35:07.503969 2500 state_mem.go:75] "Updated machine memory state"
May 17 00:35:07.507704 kubelet[2500]: I0517 00:35:07.505408 2500 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 17 00:35:07.507911 kubelet[2500]: I0517 00:35:07.507881 2500 eviction_manager.go:189] "Eviction manager: starting control loop"
May 17 00:35:07.508075 kubelet[2500]: I0517 00:35:07.507897 2500 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 17 00:35:07.511151 kubelet[2500]: I0517 00:35:07.511112 2500 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 17 00:35:07.530763 kubelet[2500]: W0517 00:35:07.530744 2500 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 17 00:35:07.532181 kubelet[2500]: W0517 00:35:07.532157 2500 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 17 00:35:07.532345 kubelet[2500]: E0517 00:35:07.532327 2500 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.7-n-21508f608f\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.7-n-21508f608f"
May 17 00:35:07.534734 kubelet[2500]: W0517 00:35:07.534719 2500 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 17 00:35:07.608329 kubelet[2500]: I0517 00:35:07.608235 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6106bfcd90a40bdf1845b4f10aec8ec0-ca-certs\") pod \"kube-apiserver-ci-3510.3.7-n-21508f608f\" (UID: \"6106bfcd90a40bdf1845b4f10aec8ec0\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-21508f608f"
May 17 00:35:07.608584 kubelet[2500]: I0517 00:35:07.608553 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6106bfcd90a40bdf1845b4f10aec8ec0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.7-n-21508f608f\" (UID: \"6106bfcd90a40bdf1845b4f10aec8ec0\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-21508f608f"
May 17 00:35:07.608684 kubelet[2500]: I0517 00:35:07.608672 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/daa8b1e4f51b362a09ab8cb7fe704e67-ca-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-21508f608f\" (UID: \"daa8b1e4f51b362a09ab8cb7fe704e67\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-21508f608f"
May 17 00:35:07.608759 kubelet[2500]: I0517 00:35:07.608749 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/daa8b1e4f51b362a09ab8cb7fe704e67-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-21508f608f\" (UID: \"daa8b1e4f51b362a09ab8cb7fe704e67\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-21508f608f"
May 17 00:35:07.608840 kubelet[2500]: I0517 00:35:07.608831 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/daa8b1e4f51b362a09ab8cb7fe704e67-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.7-n-21508f608f\" (UID: \"daa8b1e4f51b362a09ab8cb7fe704e67\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-21508f608f"
May 17 00:35:07.608919 kubelet[2500]: I0517 00:35:07.608909 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/daa8b1e4f51b362a09ab8cb7fe704e67-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.7-n-21508f608f\" (UID: \"daa8b1e4f51b362a09ab8cb7fe704e67\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-21508f608f"
May 17 00:35:07.608995 kubelet[2500]: I0517 00:35:07.608986 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4cbca76e78d29ccb887def274acf7442-kubeconfig\") pod \"kube-scheduler-ci-3510.3.7-n-21508f608f\" (UID: \"4cbca76e78d29ccb887def274acf7442\") " pod="kube-system/kube-scheduler-ci-3510.3.7-n-21508f608f"
May 17 00:35:07.609069 kubelet[2500]: I0517 00:35:07.609060 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6106bfcd90a40bdf1845b4f10aec8ec0-k8s-certs\") pod \"kube-apiserver-ci-3510.3.7-n-21508f608f\" (UID: \"6106bfcd90a40bdf1845b4f10aec8ec0\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-21508f608f"
May 17 00:35:07.609150 kubelet[2500]: I0517 00:35:07.609116 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/daa8b1e4f51b362a09ab8cb7fe704e67-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.7-n-21508f608f\" (UID: \"daa8b1e4f51b362a09ab8cb7fe704e67\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-21508f608f"
May 17 00:35:07.650161 kubelet[2500]: I0517 00:35:07.650124 2500 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-21508f608f"
May 17 00:35:07.663967 kubelet[2500]: I0517 00:35:07.663937 2500 kubelet_node_status.go:111] "Node was previously registered" node="ci-3510.3.7-n-21508f608f"
May 17 00:35:07.664221 kubelet[2500]: I0517 00:35:07.664204 2500 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.7-n-21508f608f"
May 17 00:35:08.053889 sudo[2530]: pam_unix(sudo:session): session closed for user root
May 17 00:35:08.380433 kubelet[2500]: I0517 00:35:08.380326 2500 apiserver.go:52] "Watching apiserver"
May 17 00:35:08.407496 kubelet[2500]: I0517 00:35:08.407468 2500 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
May 17 00:35:08.467167 kubelet[2500]: W0517 00:35:08.467142 2500 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 17 00:35:08.467419 kubelet[2500]: E0517 00:35:08.467403 2500 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.7-n-21508f608f\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.7-n-21508f608f"
May 17 00:35:08.505450 kubelet[2500]: I0517 00:35:08.505397 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.7-n-21508f608f" podStartSLOduration=1.5053654760000001 podStartE2EDuration="1.505365476s" podCreationTimestamp="2025-05-17 00:35:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:35:08.490507274 +0000 UTC m=+1.613811412" watchObservedRunningTime="2025-05-17 00:35:08.505365476 +0000 UTC m=+1.628669514"
May 17 00:35:08.505788 kubelet[2500]: I0517 00:35:08.505760 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.7-n-21508f608f" podStartSLOduration=1.5057497180000001 podStartE2EDuration="1.505749718s" podCreationTimestamp="2025-05-17 00:35:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:35:08.503711198 +0000 UTC m=+1.627015236" watchObservedRunningTime="2025-05-17 00:35:08.505749718 +0000 UTC m=+1.629053756"
May 17 00:35:08.538216 kubelet[2500]: I0517 00:35:08.538165 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.7-n-21508f608f" podStartSLOduration=4.538145211 podStartE2EDuration="4.538145211s" podCreationTimestamp="2025-05-17 00:35:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:35:08.525794179 +0000 UTC m=+1.649098217" watchObservedRunningTime="2025-05-17 00:35:08.538145211 +0000 UTC m=+1.661449349"
May 17 00:35:09.701760 sudo[1816]: pam_unix(sudo:session): session closed for user root
May 17 00:35:09.795433 sshd[1812]: pam_unix(sshd:session): session closed for user core
May 17 00:35:09.798240 systemd[1]: sshd@4-10.200.4.4:22-10.200.16.10:39098.service: Deactivated successfully.
May 17 00:35:09.800091 systemd[1]: session-7.scope: Deactivated successfully.
May 17 00:35:09.800636 systemd-logind[1494]: Session 7 logged out. Waiting for processes to exit.
May 17 00:35:09.802047 systemd-logind[1494]: Removed session 7.
May 17 00:35:11.379551 kubelet[2500]: I0517 00:35:11.379521 2500 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 17 00:35:11.384392 env[1513]: time="2025-05-17T00:35:11.384336384Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 17 00:35:11.385881 kubelet[2500]: I0517 00:35:11.385307 2500 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 17 00:35:12.340229 kubelet[2500]: I0517 00:35:12.340189 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6f048c2e-11cf-4d09-a6bb-19da01f1b299-cilium-cgroup\") pod \"cilium-rvsvt\" (UID: \"6f048c2e-11cf-4d09-a6bb-19da01f1b299\") " pod="kube-system/cilium-rvsvt"
May 17 00:35:12.340464 kubelet[2500]: I0517 00:35:12.340440 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f048c2e-11cf-4d09-a6bb-19da01f1b299-lib-modules\") pod \"cilium-rvsvt\" (UID: \"6f048c2e-11cf-4d09-a6bb-19da01f1b299\") " pod="kube-system/cilium-rvsvt"
May 17 00:35:12.340545 kubelet[2500]: I0517 00:35:12.340475 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6f048c2e-11cf-4d09-a6bb-19da01f1b299-clustermesh-secrets\") pod \"cilium-rvsvt\" (UID: \"6f048c2e-11cf-4d09-a6bb-19da01f1b299\") " pod="kube-system/cilium-rvsvt"
May 17 00:35:12.340545 kubelet[2500]: I0517 00:35:12.340499 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f1b44aed-878e-4ff6-8fde-64c6ada10766-kube-proxy\") pod \"kube-proxy-7wxg2\" (UID: \"f1b44aed-878e-4ff6-8fde-64c6ada10766\") " pod="kube-system/kube-proxy-7wxg2"
May 17 00:35:12.340545 kubelet[2500]: I0517 00:35:12.340522 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1b44aed-878e-4ff6-8fde-64c6ada10766-xtables-lock\") pod \"kube-proxy-7wxg2\" (UID: \"f1b44aed-878e-4ff6-8fde-64c6ada10766\") " pod="kube-system/kube-proxy-7wxg2"
May 17 00:35:12.340680 kubelet[2500]: I0517 00:35:12.340546 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6f048c2e-11cf-4d09-a6bb-19da01f1b299-host-proc-sys-kernel\") pod \"cilium-rvsvt\" (UID: \"6f048c2e-11cf-4d09-a6bb-19da01f1b299\") " pod="kube-system/cilium-rvsvt"
May 17 00:35:12.340680 kubelet[2500]: I0517 00:35:12.340568 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6f048c2e-11cf-4d09-a6bb-19da01f1b299-hubble-tls\") pod \"cilium-rvsvt\" (UID: \"6f048c2e-11cf-4d09-a6bb-19da01f1b299\") " pod="kube-system/cilium-rvsvt"
May 17 00:35:12.340680 kubelet[2500]: I0517 00:35:12.340592 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6f048c2e-11cf-4d09-a6bb-19da01f1b299-cni-path\") pod \"cilium-rvsvt\" (UID: \"6f048c2e-11cf-4d09-a6bb-19da01f1b299\") " pod="kube-system/cilium-rvsvt"
May 17 00:35:12.340680 kubelet[2500]: I0517 00:35:12.340614 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f048c2e-11cf-4d09-a6bb-19da01f1b299-xtables-lock\") pod \"cilium-rvsvt\" (UID: \"6f048c2e-11cf-4d09-a6bb-19da01f1b299\") " pod="kube-system/cilium-rvsvt"
May 17 00:35:12.340680 kubelet[2500]: I0517 00:35:12.340635 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6f048c2e-11cf-4d09-a6bb-19da01f1b299-host-proc-sys-net\") pod \"cilium-rvsvt\" (UID: \"6f048c2e-11cf-4d09-a6bb-19da01f1b299\") " pod="kube-system/cilium-rvsvt"
May 17 00:35:12.340680 kubelet[2500]: I0517 00:35:12.340660 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t622l\" (UniqueName: \"kubernetes.io/projected/6f048c2e-11cf-4d09-a6bb-19da01f1b299-kube-api-access-t622l\") pod \"cilium-rvsvt\" (UID: \"6f048c2e-11cf-4d09-a6bb-19da01f1b299\") " pod="kube-system/cilium-rvsvt"
May 17 00:35:12.340917 kubelet[2500]: I0517 00:35:12.340684 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6f048c2e-11cf-4d09-a6bb-19da01f1b299-hostproc\") pod \"cilium-rvsvt\" (UID: \"6f048c2e-11cf-4d09-a6bb-19da01f1b299\") " pod="kube-system/cilium-rvsvt"
May 17 00:35:12.340917 kubelet[2500]: I0517 00:35:12.340705 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6f048c2e-11cf-4d09-a6bb-19da01f1b299-cilium-run\") pod \"cilium-rvsvt\" (UID: \"6f048c2e-11cf-4d09-a6bb-19da01f1b299\") " pod="kube-system/cilium-rvsvt"
May 17 00:35:12.340917 kubelet[2500]: I0517 00:35:12.340729 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6f048c2e-11cf-4d09-a6bb-19da01f1b299-bpf-maps\") pod \"cilium-rvsvt\" (UID: \"6f048c2e-11cf-4d09-a6bb-19da01f1b299\") " pod="kube-system/cilium-rvsvt"
May 17 00:35:12.340917 kubelet[2500]: I0517 00:35:12.340763 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1b44aed-878e-4ff6-8fde-64c6ada10766-lib-modules\") pod \"kube-proxy-7wxg2\" (UID: \"f1b44aed-878e-4ff6-8fde-64c6ada10766\") " pod="kube-system/kube-proxy-7wxg2"
May 17 00:35:12.340917 kubelet[2500]: I0517 00:35:12.340786 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8qx2\" (UniqueName: \"kubernetes.io/projected/f1b44aed-878e-4ff6-8fde-64c6ada10766-kube-api-access-d8qx2\") pod \"kube-proxy-7wxg2\" (UID: \"f1b44aed-878e-4ff6-8fde-64c6ada10766\") " pod="kube-system/kube-proxy-7wxg2"
May 17 00:35:12.340917 kubelet[2500]: I0517 00:35:12.340809 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6f048c2e-11cf-4d09-a6bb-19da01f1b299-etc-cni-netd\") pod \"cilium-rvsvt\" (UID: \"6f048c2e-11cf-4d09-a6bb-19da01f1b299\") " pod="kube-system/cilium-rvsvt"
May 17 00:35:12.341104 kubelet[2500]: I0517 00:35:12.340835 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6f048c2e-11cf-4d09-a6bb-19da01f1b299-cilium-config-path\") pod \"cilium-rvsvt\" (UID: \"6f048c2e-11cf-4d09-a6bb-19da01f1b299\") " pod="kube-system/cilium-rvsvt"
May 17 00:35:12.441704 kubelet[2500]: I0517 00:35:12.441660 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5tkn\" (UniqueName: \"kubernetes.io/projected/536cbde9-bfd7-49f1-9c86-6667b712d7aa-kube-api-access-n5tkn\") pod \"cilium-operator-5d85765b45-sjnvf\" (UID: \"536cbde9-bfd7-49f1-9c86-6667b712d7aa\") " pod="kube-system/cilium-operator-5d85765b45-sjnvf"
May 17 00:35:12.442294 kubelet[2500]: I0517 00:35:12.441865 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/536cbde9-bfd7-49f1-9c86-6667b712d7aa-cilium-config-path\") pod \"cilium-operator-5d85765b45-sjnvf\" (UID: \"536cbde9-bfd7-49f1-9c86-6667b712d7aa\") " pod="kube-system/cilium-operator-5d85765b45-sjnvf"
May 17 00:35:12.446172 kubelet[2500]: I0517 00:35:12.446116 2500 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
May 17 00:35:12.566059 env[1513]: time="2025-05-17T00:35:12.566017155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7wxg2,Uid:f1b44aed-878e-4ff6-8fde-64c6ada10766,Namespace:kube-system,Attempt:0,}"
May 17 00:35:12.577575 env[1513]: time="2025-05-17T00:35:12.577538568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rvsvt,Uid:6f048c2e-11cf-4d09-a6bb-19da01f1b299,Namespace:kube-system,Attempt:0,}"
May 17 00:35:12.610732 env[1513]: time="2025-05-17T00:35:12.610610465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:35:12.610903 env[1513]: time="2025-05-17T00:35:12.610644969Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:35:12.610903 env[1513]: time="2025-05-17T00:35:12.610674272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:35:12.611844 env[1513]: time="2025-05-17T00:35:12.611801681Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/59c2150b64a1826f54d44f5b3b7f435528ee6aeaed02bc058fc66f78b4d898e4 pid=2581 runtime=io.containerd.runc.v2 May 17 00:35:12.630461 env[1513]: time="2025-05-17T00:35:12.630408179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:35:12.630612 env[1513]: time="2025-05-17T00:35:12.630468285Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:35:12.630612 env[1513]: time="2025-05-17T00:35:12.630512889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:35:12.630754 env[1513]: time="2025-05-17T00:35:12.630685106Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a09cefa691b3b69594f7eb4e5537379da59b76f3d0a391af7d57661ea62579f4 pid=2609 runtime=io.containerd.runc.v2 May 17 00:35:12.662545 env[1513]: time="2025-05-17T00:35:12.662499481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-sjnvf,Uid:536cbde9-bfd7-49f1-9c86-6667b712d7aa,Namespace:kube-system,Attempt:0,}" May 17 00:35:12.679856 env[1513]: time="2025-05-17T00:35:12.679810955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rvsvt,Uid:6f048c2e-11cf-4d09-a6bb-19da01f1b299,Namespace:kube-system,Attempt:0,} returns sandbox id \"a09cefa691b3b69594f7eb4e5537379da59b76f3d0a391af7d57661ea62579f4\"" May 17 00:35:12.682828 env[1513]: time="2025-05-17T00:35:12.682668831Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 
17 00:35:12.685522 env[1513]: time="2025-05-17T00:35:12.684046164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7wxg2,Uid:f1b44aed-878e-4ff6-8fde-64c6ada10766,Namespace:kube-system,Attempt:0,} returns sandbox id \"59c2150b64a1826f54d44f5b3b7f435528ee6aeaed02bc058fc66f78b4d898e4\"" May 17 00:35:12.687095 env[1513]: time="2025-05-17T00:35:12.686668118Z" level=info msg="CreateContainer within sandbox \"59c2150b64a1826f54d44f5b3b7f435528ee6aeaed02bc058fc66f78b4d898e4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 17 00:35:12.712883 env[1513]: time="2025-05-17T00:35:12.712834147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:35:12.713038 env[1513]: time="2025-05-17T00:35:12.713019765Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:35:12.713164 env[1513]: time="2025-05-17T00:35:12.713105073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:35:12.713391 env[1513]: time="2025-05-17T00:35:12.713360898Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e599a8c9071ccd1e0c66bbcbdad3bbd497105ee70a135d6431c1aad96abc783f pid=2664 runtime=io.containerd.runc.v2 May 17 00:35:12.746875 env[1513]: time="2025-05-17T00:35:12.746827033Z" level=info msg="CreateContainer within sandbox \"59c2150b64a1826f54d44f5b3b7f435528ee6aeaed02bc058fc66f78b4d898e4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8f12a30bfffb677bfb26a06bd0d9c263c1060d58b1ca9a07e8ddbb8f42dfb281\"" May 17 00:35:12.747881 env[1513]: time="2025-05-17T00:35:12.747849332Z" level=info msg="StartContainer for \"8f12a30bfffb677bfb26a06bd0d9c263c1060d58b1ca9a07e8ddbb8f42dfb281\"" May 17 00:35:12.793828 env[1513]: time="2025-05-17T00:35:12.793770371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-sjnvf,Uid:536cbde9-bfd7-49f1-9c86-6667b712d7aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"e599a8c9071ccd1e0c66bbcbdad3bbd497105ee70a135d6431c1aad96abc783f\"" May 17 00:35:12.814803 env[1513]: time="2025-05-17T00:35:12.814732397Z" level=info msg="StartContainer for \"8f12a30bfffb677bfb26a06bd0d9c263c1060d58b1ca9a07e8ddbb8f42dfb281\" returns successfully" May 17 00:35:13.491255 kubelet[2500]: I0517 00:35:13.491209 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7wxg2" podStartSLOduration=1.4911907229999999 podStartE2EDuration="1.491190723s" podCreationTimestamp="2025-05-17 00:35:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:35:13.481558717 +0000 UTC m=+6.604862755" watchObservedRunningTime="2025-05-17 00:35:13.491190723 +0000 UTC m=+6.614494761" May 17 00:35:30.770956 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1375173578.mount: Deactivated successfully. May 17 00:35:33.436086 env[1513]: time="2025-05-17T00:35:33.436040363Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:33.440394 env[1513]: time="2025-05-17T00:35:33.440355107Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:33.444068 env[1513]: time="2025-05-17T00:35:33.444036315Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:33.444539 env[1513]: time="2025-05-17T00:35:33.444510542Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 17 00:35:33.446823 env[1513]: time="2025-05-17T00:35:33.445781914Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 17 00:35:33.447717 env[1513]: time="2025-05-17T00:35:33.447238796Z" level=info msg="CreateContainer within sandbox \"a09cefa691b3b69594f7eb4e5537379da59b76f3d0a391af7d57661ea62579f4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 00:35:33.478808 env[1513]: time="2025-05-17T00:35:33.478771579Z" level=info msg="CreateContainer within sandbox \"a09cefa691b3b69594f7eb4e5537379da59b76f3d0a391af7d57661ea62579f4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns 
container id \"7e4bd93a27332d0b8ea6a3835d4829b0ba08b509f2ea4da518619db9fd564183\"" May 17 00:35:33.480479 env[1513]: time="2025-05-17T00:35:33.480450674Z" level=info msg="StartContainer for \"7e4bd93a27332d0b8ea6a3835d4829b0ba08b509f2ea4da518619db9fd564183\"" May 17 00:35:33.535979 env[1513]: time="2025-05-17T00:35:33.535933010Z" level=info msg="StartContainer for \"7e4bd93a27332d0b8ea6a3835d4829b0ba08b509f2ea4da518619db9fd564183\" returns successfully" May 17 00:35:34.469421 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e4bd93a27332d0b8ea6a3835d4829b0ba08b509f2ea4da518619db9fd564183-rootfs.mount: Deactivated successfully. May 17 00:35:37.598183 env[1513]: time="2025-05-17T00:35:37.598089235Z" level=error msg="collecting metrics for 7e4bd93a27332d0b8ea6a3835d4829b0ba08b509f2ea4da518619db9fd564183" error="cgroups: cgroup deleted: unknown" May 17 00:35:37.809417 env[1513]: time="2025-05-17T00:35:37.809329508Z" level=info msg="shim disconnected" id=7e4bd93a27332d0b8ea6a3835d4829b0ba08b509f2ea4da518619db9fd564183 May 17 00:35:37.809417 env[1513]: time="2025-05-17T00:35:37.809405811Z" level=warning msg="cleaning up after shim disconnected" id=7e4bd93a27332d0b8ea6a3835d4829b0ba08b509f2ea4da518619db9fd564183 namespace=k8s.io May 17 00:35:37.809417 env[1513]: time="2025-05-17T00:35:37.809419212Z" level=info msg="cleaning up dead shim" May 17 00:35:37.818071 env[1513]: time="2025-05-17T00:35:37.818028755Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:35:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2912 runtime=io.containerd.runc.v2\n" May 17 00:35:38.540947 env[1513]: time="2025-05-17T00:35:38.540896232Z" level=info msg="CreateContainer within sandbox \"a09cefa691b3b69594f7eb4e5537379da59b76f3d0a391af7d57661ea62579f4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 17 00:35:38.579947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3049267438.mount: Deactivated successfully. 
May 17 00:35:38.594646 env[1513]: time="2025-05-17T00:35:38.594585733Z" level=info msg="CreateContainer within sandbox \"a09cefa691b3b69594f7eb4e5537379da59b76f3d0a391af7d57661ea62579f4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ad8896d7232f78f8290631fb4f4da73a1264f07696f0626842ed237513d40d23\"" May 17 00:35:38.595410 env[1513]: time="2025-05-17T00:35:38.595314569Z" level=info msg="StartContainer for \"ad8896d7232f78f8290631fb4f4da73a1264f07696f0626842ed237513d40d23\"" May 17 00:35:38.692984 env[1513]: time="2025-05-17T00:35:38.692940081Z" level=info msg="StartContainer for \"ad8896d7232f78f8290631fb4f4da73a1264f07696f0626842ed237513d40d23\" returns successfully" May 17 00:35:38.693826 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:35:38.694754 systemd[1]: Stopped systemd-sysctl.service. May 17 00:35:38.694925 systemd[1]: Stopping systemd-sysctl.service... May 17 00:35:38.698399 systemd[1]: Starting systemd-sysctl.service... May 17 00:35:38.714035 systemd[1]: Finished systemd-sysctl.service. 
May 17 00:35:38.826242 env[1513]: time="2025-05-17T00:35:38.825331641Z" level=info msg="shim disconnected" id=ad8896d7232f78f8290631fb4f4da73a1264f07696f0626842ed237513d40d23 May 17 00:35:38.826242 env[1513]: time="2025-05-17T00:35:38.825398144Z" level=warning msg="cleaning up after shim disconnected" id=ad8896d7232f78f8290631fb4f4da73a1264f07696f0626842ed237513d40d23 namespace=k8s.io May 17 00:35:38.826242 env[1513]: time="2025-05-17T00:35:38.825414045Z" level=info msg="cleaning up dead shim" May 17 00:35:38.834629 env[1513]: time="2025-05-17T00:35:38.834590506Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:35:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2977 runtime=io.containerd.runc.v2\n" May 17 00:35:39.537694 env[1513]: time="2025-05-17T00:35:39.537640567Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:39.544242 env[1513]: time="2025-05-17T00:35:39.544126686Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:39.544802 env[1513]: time="2025-05-17T00:35:39.544736316Z" level=info msg="CreateContainer within sandbox \"a09cefa691b3b69594f7eb4e5537379da59b76f3d0a391af7d57661ea62579f4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 17 00:35:39.557192 env[1513]: time="2025-05-17T00:35:39.548705111Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 17 00:35:39.557192 env[1513]: time="2025-05-17T00:35:39.549114731Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:39.557192 env[1513]: time="2025-05-17T00:35:39.552608703Z" level=info msg="CreateContainer within sandbox \"e599a8c9071ccd1e0c66bbcbdad3bbd497105ee70a135d6431c1aad96abc783f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 17 00:35:39.572794 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad8896d7232f78f8290631fb4f4da73a1264f07696f0626842ed237513d40d23-rootfs.mount: Deactivated successfully. May 17 00:35:39.583935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1179646286.mount: Deactivated successfully. May 17 00:35:39.596777 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount47323466.mount: Deactivated successfully. May 17 00:35:39.623487 env[1513]: time="2025-05-17T00:35:39.623441186Z" level=info msg="CreateContainer within sandbox \"a09cefa691b3b69594f7eb4e5537379da59b76f3d0a391af7d57661ea62579f4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2e690c473932d77a1205ed2280ff3f600967eca0e1009535895a96cf437f4699\"" May 17 00:35:39.625306 env[1513]: time="2025-05-17T00:35:39.625270076Z" level=info msg="StartContainer for \"2e690c473932d77a1205ed2280ff3f600967eca0e1009535895a96cf437f4699\"" May 17 00:35:39.632346 env[1513]: time="2025-05-17T00:35:39.632309622Z" level=info msg="CreateContainer within sandbox \"e599a8c9071ccd1e0c66bbcbdad3bbd497105ee70a135d6431c1aad96abc783f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9513bbfa32132fd51d4336a46302aad302cdd4ed0c40375eddef3fc30a372515\"" May 17 00:35:39.632870 env[1513]: time="2025-05-17T00:35:39.632840348Z" level=info msg="StartContainer for \"9513bbfa32132fd51d4336a46302aad302cdd4ed0c40375eddef3fc30a372515\"" May 17 00:35:39.712365 env[1513]: time="2025-05-17T00:35:39.712320757Z" 
level=info msg="StartContainer for \"2e690c473932d77a1205ed2280ff3f600967eca0e1009535895a96cf437f4699\" returns successfully" May 17 00:35:39.732150 env[1513]: time="2025-05-17T00:35:39.732070428Z" level=info msg="StartContainer for \"9513bbfa32132fd51d4336a46302aad302cdd4ed0c40375eddef3fc30a372515\" returns successfully" May 17 00:35:40.203362 env[1513]: time="2025-05-17T00:35:40.203302287Z" level=info msg="shim disconnected" id=2e690c473932d77a1205ed2280ff3f600967eca0e1009535895a96cf437f4699 May 17 00:35:40.203688 env[1513]: time="2025-05-17T00:35:40.203664404Z" level=warning msg="cleaning up after shim disconnected" id=2e690c473932d77a1205ed2280ff3f600967eca0e1009535895a96cf437f4699 namespace=k8s.io May 17 00:35:40.203795 env[1513]: time="2025-05-17T00:35:40.203781010Z" level=info msg="cleaning up dead shim" May 17 00:35:40.216714 env[1513]: time="2025-05-17T00:35:40.216487321Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:35:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3070 runtime=io.containerd.runc.v2\n" May 17 00:35:40.562411 env[1513]: time="2025-05-17T00:35:40.562289947Z" level=info msg="CreateContainer within sandbox \"a09cefa691b3b69594f7eb4e5537379da59b76f3d0a391af7d57661ea62579f4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 17 00:35:40.622417 env[1513]: time="2025-05-17T00:35:40.622365736Z" level=info msg="CreateContainer within sandbox \"a09cefa691b3b69594f7eb4e5537379da59b76f3d0a391af7d57661ea62579f4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"957ebd1ac893050b6d187dda7ecf3cb85d2d5539ad92fe5fe0689fd634430f9f\"" May 17 00:35:40.623491 env[1513]: time="2025-05-17T00:35:40.623459889Z" level=info msg="StartContainer for \"957ebd1ac893050b6d187dda7ecf3cb85d2d5539ad92fe5fe0689fd634430f9f\"" May 17 00:35:40.768482 kubelet[2500]: I0517 00:35:40.768417 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/cilium-operator-5d85765b45-sjnvf" podStartSLOduration=2.01263634 podStartE2EDuration="28.768380157s" podCreationTimestamp="2025-05-17 00:35:12 +0000 UTC" firstStartedPulling="2025-05-17 00:35:12.795099499 +0000 UTC m=+5.918403637" lastFinishedPulling="2025-05-17 00:35:39.550843416 +0000 UTC m=+32.674147454" observedRunningTime="2025-05-17 00:35:40.625957309 +0000 UTC m=+33.749261447" watchObservedRunningTime="2025-05-17 00:35:40.768380157 +0000 UTC m=+33.891684195" May 17 00:35:40.796395 env[1513]: time="2025-05-17T00:35:40.796340801Z" level=info msg="StartContainer for \"957ebd1ac893050b6d187dda7ecf3cb85d2d5539ad92fe5fe0689fd634430f9f\" returns successfully" May 17 00:35:40.850106 env[1513]: time="2025-05-17T00:35:40.849979080Z" level=info msg="shim disconnected" id=957ebd1ac893050b6d187dda7ecf3cb85d2d5539ad92fe5fe0689fd634430f9f May 17 00:35:40.850343 env[1513]: time="2025-05-17T00:35:40.850239093Z" level=warning msg="cleaning up after shim disconnected" id=957ebd1ac893050b6d187dda7ecf3cb85d2d5539ad92fe5fe0689fd634430f9f namespace=k8s.io May 17 00:35:40.850343 env[1513]: time="2025-05-17T00:35:40.850269494Z" level=info msg="cleaning up dead shim" May 17 00:35:40.871272 env[1513]: time="2025-05-17T00:35:40.871218601Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:35:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3124 runtime=io.containerd.runc.v2\n" May 17 00:35:41.566201 env[1513]: time="2025-05-17T00:35:41.565678594Z" level=info msg="CreateContainer within sandbox \"a09cefa691b3b69594f7eb4e5537379da59b76f3d0a391af7d57661ea62579f4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 17 00:35:41.571724 systemd[1]: run-containerd-runc-k8s.io-957ebd1ac893050b6d187dda7ecf3cb85d2d5539ad92fe5fe0689fd634430f9f-runc.lkLBFV.mount: Deactivated successfully. 
May 17 00:35:41.571912 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-957ebd1ac893050b6d187dda7ecf3cb85d2d5539ad92fe5fe0689fd634430f9f-rootfs.mount: Deactivated successfully. May 17 00:35:41.604255 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount226346734.mount: Deactivated successfully. May 17 00:35:41.616821 env[1513]: time="2025-05-17T00:35:41.616768997Z" level=info msg="CreateContainer within sandbox \"a09cefa691b3b69594f7eb4e5537379da59b76f3d0a391af7d57661ea62579f4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"47be8541bbafac32fd3be61fd7e814b1f6daef034518c6fb5afc73716ac4aa4e\"" May 17 00:35:41.618631 env[1513]: time="2025-05-17T00:35:41.617509132Z" level=info msg="StartContainer for \"47be8541bbafac32fd3be61fd7e814b1f6daef034518c6fb5afc73716ac4aa4e\"" May 17 00:35:41.673722 env[1513]: time="2025-05-17T00:35:41.673671172Z" level=info msg="StartContainer for \"47be8541bbafac32fd3be61fd7e814b1f6daef034518c6fb5afc73716ac4aa4e\" returns successfully" May 17 00:35:41.782056 kubelet[2500]: I0517 00:35:41.782029 2500 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 17 00:35:41.943147 kubelet[2500]: I0517 00:35:41.943090 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6pst\" (UniqueName: \"kubernetes.io/projected/a24e2c01-fcac-4292-bc42-1d9cbe81b5e2-kube-api-access-q6pst\") pod \"coredns-7c65d6cfc9-nzgqs\" (UID: \"a24e2c01-fcac-4292-bc42-1d9cbe81b5e2\") " pod="kube-system/coredns-7c65d6cfc9-nzgqs" May 17 00:35:41.943147 kubelet[2500]: I0517 00:35:41.943152 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a24e2c01-fcac-4292-bc42-1d9cbe81b5e2-config-volume\") pod \"coredns-7c65d6cfc9-nzgqs\" (UID: \"a24e2c01-fcac-4292-bc42-1d9cbe81b5e2\") " pod="kube-system/coredns-7c65d6cfc9-nzgqs" May 17 
00:35:41.943383 kubelet[2500]: I0517 00:35:41.943181 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17434d44-673e-495b-9960-1d5e57c596c4-config-volume\") pod \"coredns-7c65d6cfc9-59wvb\" (UID: \"17434d44-673e-495b-9960-1d5e57c596c4\") " pod="kube-system/coredns-7c65d6cfc9-59wvb" May 17 00:35:41.943383 kubelet[2500]: I0517 00:35:41.943212 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb6vw\" (UniqueName: \"kubernetes.io/projected/17434d44-673e-495b-9960-1d5e57c596c4-kube-api-access-pb6vw\") pod \"coredns-7c65d6cfc9-59wvb\" (UID: \"17434d44-673e-495b-9960-1d5e57c596c4\") " pod="kube-system/coredns-7c65d6cfc9-59wvb" May 17 00:35:42.140732 env[1513]: time="2025-05-17T00:35:42.140669089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nzgqs,Uid:a24e2c01-fcac-4292-bc42-1d9cbe81b5e2,Namespace:kube-system,Attempt:0,}" May 17 00:35:42.142840 env[1513]: time="2025-05-17T00:35:42.142800787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-59wvb,Uid:17434d44-673e-495b-9960-1d5e57c596c4,Namespace:kube-system,Attempt:0,}" May 17 00:35:42.598347 kubelet[2500]: I0517 00:35:42.598275 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rvsvt" podStartSLOduration=9.834550687 podStartE2EDuration="30.598255436s" podCreationTimestamp="2025-05-17 00:35:12 +0000 UTC" firstStartedPulling="2025-05-17 00:35:12.681877454 +0000 UTC m=+5.805181492" lastFinishedPulling="2025-05-17 00:35:33.445582103 +0000 UTC m=+26.568886241" observedRunningTime="2025-05-17 00:35:42.597082482 +0000 UTC m=+35.720386620" watchObservedRunningTime="2025-05-17 00:35:42.598255436 +0000 UTC m=+35.721559474" May 17 00:35:44.027613 systemd-networkd[1679]: cilium_host: Link UP May 17 00:35:44.027741 systemd-networkd[1679]: cilium_net: Link 
UP May 17 00:35:44.027744 systemd-networkd[1679]: cilium_net: Gained carrier May 17 00:35:44.031342 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 17 00:35:44.031050 systemd-networkd[1679]: cilium_host: Gained carrier May 17 00:35:44.262109 systemd-networkd[1679]: cilium_vxlan: Link UP May 17 00:35:44.262118 systemd-networkd[1679]: cilium_vxlan: Gained carrier May 17 00:35:44.471282 systemd-networkd[1679]: cilium_host: Gained IPv6LL May 17 00:35:44.492161 kernel: NET: Registered PF_ALG protocol family May 17 00:35:44.814263 systemd-networkd[1679]: cilium_net: Gained IPv6LL May 17 00:35:45.232317 systemd-networkd[1679]: lxc_health: Link UP May 17 00:35:45.252028 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 17 00:35:45.251538 systemd-networkd[1679]: lxc_health: Gained carrier May 17 00:35:45.725951 systemd-networkd[1679]: lxc4f9296daca7d: Link UP May 17 00:35:45.733193 kernel: eth0: renamed from tmp674ba May 17 00:35:45.745986 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc4f9296daca7d: link becomes ready May 17 00:35:45.745265 systemd-networkd[1679]: lxc4f9296daca7d: Gained carrier May 17 00:35:45.751517 systemd-networkd[1679]: lxc19903d2c1c8c: Link UP May 17 00:35:45.760182 kernel: eth0: renamed from tmp8dfdf May 17 00:35:45.774223 systemd-networkd[1679]: cilium_vxlan: Gained IPv6LL May 17 00:35:45.778327 systemd-networkd[1679]: lxc19903d2c1c8c: Gained carrier May 17 00:35:45.783235 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc19903d2c1c8c: link becomes ready May 17 00:35:46.798327 systemd-networkd[1679]: lxc_health: Gained IPv6LL May 17 00:35:47.246309 systemd-networkd[1679]: lxc19903d2c1c8c: Gained IPv6LL May 17 00:35:47.566317 systemd-networkd[1679]: lxc4f9296daca7d: Gained IPv6LL May 17 00:35:49.319643 env[1513]: time="2025-05-17T00:35:49.319563435Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:35:49.320231 env[1513]: time="2025-05-17T00:35:49.319656038Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:35:49.320231 env[1513]: time="2025-05-17T00:35:49.319684240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:35:49.320231 env[1513]: time="2025-05-17T00:35:49.319852746Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/674ba075b87e1322a9432fa129ee3ab3990cae49d4408ceeadbc2e4a06773849 pid=3665 runtime=io.containerd.runc.v2 May 17 00:35:49.394399 env[1513]: time="2025-05-17T00:35:49.392072710Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:35:49.394399 env[1513]: time="2025-05-17T00:35:49.392125212Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:35:49.394399 env[1513]: time="2025-05-17T00:35:49.392170014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:35:49.394399 env[1513]: time="2025-05-17T00:35:49.392391823Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8dfdf8834b5a2d30feebf409f1aae26f616efbe082c94db51c48ff564f6aff01 pid=3697 runtime=io.containerd.runc.v2 May 17 00:35:49.452094 systemd[1]: run-containerd-runc-k8s.io-8dfdf8834b5a2d30feebf409f1aae26f616efbe082c94db51c48ff564f6aff01-runc.YTAaSm.mount: Deactivated successfully. 
May 17 00:35:49.481726 env[1513]: time="2025-05-17T00:35:49.481673763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-59wvb,Uid:17434d44-673e-495b-9960-1d5e57c596c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"674ba075b87e1322a9432fa129ee3ab3990cae49d4408ceeadbc2e4a06773849\"" May 17 00:35:49.489755 env[1513]: time="2025-05-17T00:35:49.489705081Z" level=info msg="CreateContainer within sandbox \"674ba075b87e1322a9432fa129ee3ab3990cae49d4408ceeadbc2e4a06773849\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:35:49.541467 env[1513]: time="2025-05-17T00:35:49.540888511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nzgqs,Uid:a24e2c01-fcac-4292-bc42-1d9cbe81b5e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"8dfdf8834b5a2d30feebf409f1aae26f616efbe082c94db51c48ff564f6aff01\"" May 17 00:35:49.542753 env[1513]: time="2025-05-17T00:35:49.542719884Z" level=info msg="CreateContainer within sandbox \"674ba075b87e1322a9432fa129ee3ab3990cae49d4408ceeadbc2e4a06773849\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"212d254e7d7a3a6bef9fc1c48d3e588f7fc7327c9efa02388c072a81ce36736f\"" May 17 00:35:49.543223 env[1513]: time="2025-05-17T00:35:49.543198503Z" level=info msg="StartContainer for \"212d254e7d7a3a6bef9fc1c48d3e588f7fc7327c9efa02388c072a81ce36736f\"" May 17 00:35:49.545291 env[1513]: time="2025-05-17T00:35:49.545255384Z" level=info msg="CreateContainer within sandbox \"8dfdf8834b5a2d30feebf409f1aae26f616efbe082c94db51c48ff564f6aff01\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:35:49.583922 env[1513]: time="2025-05-17T00:35:49.583309893Z" level=info msg="CreateContainer within sandbox \"8dfdf8834b5a2d30feebf409f1aae26f616efbe082c94db51c48ff564f6aff01\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4e611d47f14bcf5ba62cd7aa500323768e4ffe8aafdd8d3cf55a8b8f1b24ca3a\"" May 17 00:35:49.586254 env[1513]: 
time="2025-05-17T00:35:49.584759351Z" level=info msg="StartContainer for \"4e611d47f14bcf5ba62cd7aa500323768e4ffe8aafdd8d3cf55a8b8f1b24ca3a\"" May 17 00:35:49.602529 env[1513]: time="2025-05-17T00:35:49.602481453Z" level=info msg="StartContainer for \"212d254e7d7a3a6bef9fc1c48d3e588f7fc7327c9efa02388c072a81ce36736f\" returns successfully" May 17 00:35:49.659981 env[1513]: time="2025-05-17T00:35:49.659860529Z" level=info msg="StartContainer for \"4e611d47f14bcf5ba62cd7aa500323768e4ffe8aafdd8d3cf55a8b8f1b24ca3a\" returns successfully" May 17 00:35:50.606744 kubelet[2500]: I0517 00:35:50.606655 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-59wvb" podStartSLOduration=38.606636289 podStartE2EDuration="38.606636289s" podCreationTimestamp="2025-05-17 00:35:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:35:50.605300737 +0000 UTC m=+43.728604875" watchObservedRunningTime="2025-05-17 00:35:50.606636289 +0000 UTC m=+43.729940427" May 17 00:37:11.047785 systemd[1]: Started sshd@5-10.200.4.4:22-10.200.16.10:38340.service. May 17 00:37:11.634178 sshd[3837]: Accepted publickey for core from 10.200.16.10 port 38340 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:37:11.635657 sshd[3837]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:37:11.641042 systemd-logind[1494]: New session 8 of user core. May 17 00:37:11.641853 systemd[1]: Started session-8.scope. May 17 00:37:12.156254 sshd[3837]: pam_unix(sshd:session): session closed for user core May 17 00:37:12.159339 systemd[1]: sshd@5-10.200.4.4:22-10.200.16.10:38340.service: Deactivated successfully. May 17 00:37:12.160585 systemd[1]: session-8.scope: Deactivated successfully. May 17 00:37:12.161006 systemd-logind[1494]: Session 8 logged out. Waiting for processes to exit. 
May 17 00:37:12.161949 systemd-logind[1494]: Removed session 8. May 17 00:37:17.254069 systemd[1]: Started sshd@6-10.200.4.4:22-10.200.16.10:38356.service. May 17 00:37:17.842278 sshd[3852]: Accepted publickey for core from 10.200.16.10 port 38356 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:37:17.844071 sshd[3852]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:37:17.849472 systemd[1]: Started session-9.scope. May 17 00:37:17.850206 systemd-logind[1494]: New session 9 of user core. May 17 00:37:18.343866 sshd[3852]: pam_unix(sshd:session): session closed for user core May 17 00:37:18.346844 systemd[1]: sshd@6-10.200.4.4:22-10.200.16.10:38356.service: Deactivated successfully. May 17 00:37:18.348763 systemd[1]: session-9.scope: Deactivated successfully. May 17 00:37:18.349228 systemd-logind[1494]: Session 9 logged out. Waiting for processes to exit. May 17 00:37:18.350554 systemd-logind[1494]: Removed session 9. May 17 00:37:23.441421 systemd[1]: Started sshd@7-10.200.4.4:22-10.200.16.10:54550.service. May 17 00:37:24.031294 sshd[3866]: Accepted publickey for core from 10.200.16.10 port 54550 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:37:24.032708 sshd[3866]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:37:24.037812 systemd[1]: Started session-10.scope. May 17 00:37:24.038582 systemd-logind[1494]: New session 10 of user core. May 17 00:37:24.512744 sshd[3866]: pam_unix(sshd:session): session closed for user core May 17 00:37:24.516118 systemd[1]: sshd@7-10.200.4.4:22-10.200.16.10:54550.service: Deactivated successfully. May 17 00:37:24.517585 systemd-logind[1494]: Session 10 logged out. Waiting for processes to exit. May 17 00:37:24.517708 systemd[1]: session-10.scope: Deactivated successfully. May 17 00:37:24.519546 systemd-logind[1494]: Removed session 10. 
May 17 00:37:29.611474 systemd[1]: Started sshd@8-10.200.4.4:22-10.200.16.10:51082.service. May 17 00:37:30.205980 sshd[3879]: Accepted publickey for core from 10.200.16.10 port 51082 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:37:30.207561 sshd[3879]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:37:30.213208 systemd[1]: Started session-11.scope. May 17 00:37:30.214306 systemd-logind[1494]: New session 11 of user core. May 17 00:37:30.675674 sshd[3879]: pam_unix(sshd:session): session closed for user core May 17 00:37:30.679002 systemd[1]: sshd@8-10.200.4.4:22-10.200.16.10:51082.service: Deactivated successfully. May 17 00:37:30.681347 systemd[1]: session-11.scope: Deactivated successfully. May 17 00:37:30.681857 systemd-logind[1494]: Session 11 logged out. Waiting for processes to exit. May 17 00:37:30.684773 systemd-logind[1494]: Removed session 11. May 17 00:37:35.773725 systemd[1]: Started sshd@9-10.200.4.4:22-10.200.16.10:51086.service. May 17 00:37:36.361543 sshd[3892]: Accepted publickey for core from 10.200.16.10 port 51086 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:37:36.363005 sshd[3892]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:37:36.368211 systemd[1]: Started session-12.scope. May 17 00:37:36.368931 systemd-logind[1494]: New session 12 of user core. May 17 00:37:36.852046 sshd[3892]: pam_unix(sshd:session): session closed for user core May 17 00:37:36.855477 systemd[1]: sshd@9-10.200.4.4:22-10.200.16.10:51086.service: Deactivated successfully. May 17 00:37:36.856744 systemd[1]: session-12.scope: Deactivated successfully. May 17 00:37:36.858879 systemd-logind[1494]: Session 12 logged out. Waiting for processes to exit. May 17 00:37:36.860298 systemd-logind[1494]: Removed session 12. May 17 00:37:36.949407 systemd[1]: Started sshd@10-10.200.4.4:22-10.200.16.10:51096.service. 
May 17 00:37:37.535705 sshd[3906]: Accepted publickey for core from 10.200.16.10 port 51096 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:37:37.537495 sshd[3906]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:37:37.542489 systemd[1]: Started session-13.scope. May 17 00:37:37.543358 systemd-logind[1494]: New session 13 of user core. May 17 00:37:38.049668 sshd[3906]: pam_unix(sshd:session): session closed for user core May 17 00:37:38.052559 systemd[1]: sshd@10-10.200.4.4:22-10.200.16.10:51096.service: Deactivated successfully. May 17 00:37:38.054087 systemd[1]: session-13.scope: Deactivated successfully. May 17 00:37:38.054524 systemd-logind[1494]: Session 13 logged out. Waiting for processes to exit. May 17 00:37:38.055556 systemd-logind[1494]: Removed session 13. May 17 00:37:38.146666 systemd[1]: Started sshd@11-10.200.4.4:22-10.200.16.10:51106.service. May 17 00:37:38.747232 sshd[3917]: Accepted publickey for core from 10.200.16.10 port 51106 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:37:38.748837 sshd[3917]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:37:38.754108 systemd[1]: Started session-14.scope. May 17 00:37:38.754590 systemd-logind[1494]: New session 14 of user core. May 17 00:37:39.237807 sshd[3917]: pam_unix(sshd:session): session closed for user core May 17 00:37:39.241603 systemd[1]: sshd@11-10.200.4.4:22-10.200.16.10:51106.service: Deactivated successfully. May 17 00:37:39.242286 systemd-logind[1494]: Session 14 logged out. Waiting for processes to exit. May 17 00:37:39.243266 systemd[1]: session-14.scope: Deactivated successfully. May 17 00:37:39.244092 systemd-logind[1494]: Removed session 14. May 17 00:37:44.335739 systemd[1]: Started sshd@12-10.200.4.4:22-10.200.16.10:50572.service. 
May 17 00:37:44.926226 sshd[3931]: Accepted publickey for core from 10.200.16.10 port 50572 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:37:44.927784 sshd[3931]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:37:44.932658 systemd[1]: Started session-15.scope. May 17 00:37:44.933410 systemd-logind[1494]: New session 15 of user core. May 17 00:37:45.408146 sshd[3931]: pam_unix(sshd:session): session closed for user core May 17 00:37:45.411227 systemd[1]: sshd@12-10.200.4.4:22-10.200.16.10:50572.service: Deactivated successfully. May 17 00:37:45.412475 systemd[1]: session-15.scope: Deactivated successfully. May 17 00:37:45.413962 systemd-logind[1494]: Session 15 logged out. Waiting for processes to exit. May 17 00:37:45.415545 systemd-logind[1494]: Removed session 15. May 17 00:37:50.506468 systemd[1]: Started sshd@13-10.200.4.4:22-10.200.16.10:50416.service. May 17 00:37:51.100350 sshd[3944]: Accepted publickey for core from 10.200.16.10 port 50416 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:37:51.101785 sshd[3944]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:37:51.106937 systemd[1]: Started session-16.scope. May 17 00:37:51.107690 systemd-logind[1494]: New session 16 of user core. May 17 00:37:51.592764 sshd[3944]: pam_unix(sshd:session): session closed for user core May 17 00:37:51.595821 systemd[1]: sshd@13-10.200.4.4:22-10.200.16.10:50416.service: Deactivated successfully. May 17 00:37:51.597575 systemd[1]: session-16.scope: Deactivated successfully. May 17 00:37:51.598365 systemd-logind[1494]: Session 16 logged out. Waiting for processes to exit. May 17 00:37:51.599463 systemd-logind[1494]: Removed session 16. May 17 00:37:51.689525 systemd[1]: Started sshd@14-10.200.4.4:22-10.200.16.10:50426.service. 
May 17 00:37:52.279456 sshd[3957]: Accepted publickey for core from 10.200.16.10 port 50426 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:37:52.280977 sshd[3957]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:37:52.286247 systemd[1]: Started session-17.scope. May 17 00:37:52.286978 systemd-logind[1494]: New session 17 of user core. May 17 00:37:52.796362 sshd[3957]: pam_unix(sshd:session): session closed for user core May 17 00:37:52.800041 systemd[1]: sshd@14-10.200.4.4:22-10.200.16.10:50426.service: Deactivated successfully. May 17 00:37:52.802058 systemd[1]: session-17.scope: Deactivated successfully. May 17 00:37:52.802866 systemd-logind[1494]: Session 17 logged out. Waiting for processes to exit. May 17 00:37:52.804030 systemd-logind[1494]: Removed session 17. May 17 00:37:52.893151 systemd[1]: Started sshd@15-10.200.4.4:22-10.200.16.10:50438.service. May 17 00:37:53.479543 sshd[3968]: Accepted publickey for core from 10.200.16.10 port 50438 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:37:53.480943 sshd[3968]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:37:53.485994 systemd[1]: Started session-18.scope. May 17 00:37:53.486736 systemd-logind[1494]: New session 18 of user core. May 17 00:37:55.506669 sshd[3968]: pam_unix(sshd:session): session closed for user core May 17 00:37:55.509687 systemd[1]: sshd@15-10.200.4.4:22-10.200.16.10:50438.service: Deactivated successfully. May 17 00:37:55.510914 systemd[1]: session-18.scope: Deactivated successfully. May 17 00:37:55.510945 systemd-logind[1494]: Session 18 logged out. Waiting for processes to exit. May 17 00:37:55.512579 systemd-logind[1494]: Removed session 18. May 17 00:37:55.603987 systemd[1]: Started sshd@16-10.200.4.4:22-10.200.16.10:50452.service. 
May 17 00:37:56.188623 sshd[3987]: Accepted publickey for core from 10.200.16.10 port 50452 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:37:56.190112 sshd[3987]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:37:56.195230 systemd[1]: Started session-19.scope. May 17 00:37:56.195639 systemd-logind[1494]: New session 19 of user core. May 17 00:37:56.769035 sshd[3987]: pam_unix(sshd:session): session closed for user core May 17 00:37:56.772384 systemd[1]: sshd@16-10.200.4.4:22-10.200.16.10:50452.service: Deactivated successfully. May 17 00:37:56.773919 systemd[1]: session-19.scope: Deactivated successfully. May 17 00:37:56.774839 systemd-logind[1494]: Session 19 logged out. Waiting for processes to exit. May 17 00:37:56.777148 systemd-logind[1494]: Removed session 19. May 17 00:37:56.866153 systemd[1]: Started sshd@17-10.200.4.4:22-10.200.16.10:50462.service. May 17 00:37:57.454766 sshd[3998]: Accepted publickey for core from 10.200.16.10 port 50462 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:37:57.456356 sshd[3998]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:37:57.461762 systemd[1]: Started session-20.scope. May 17 00:37:57.462043 systemd-logind[1494]: New session 20 of user core. May 17 00:37:57.937779 sshd[3998]: pam_unix(sshd:session): session closed for user core May 17 00:37:57.940488 systemd[1]: sshd@17-10.200.4.4:22-10.200.16.10:50462.service: Deactivated successfully. May 17 00:37:57.941420 systemd[1]: session-20.scope: Deactivated successfully. May 17 00:37:57.942451 systemd-logind[1494]: Session 20 logged out. Waiting for processes to exit. May 17 00:37:57.943855 systemd-logind[1494]: Removed session 20. May 17 00:38:03.036856 systemd[1]: Started sshd@18-10.200.4.4:22-10.200.16.10:46750.service. 
May 17 00:38:03.633117 sshd[4014]: Accepted publickey for core from 10.200.16.10 port 46750 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:38:03.634605 sshd[4014]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:38:03.638453 systemd-logind[1494]: New session 21 of user core. May 17 00:38:03.640680 systemd[1]: Started session-21.scope. May 17 00:38:04.105190 sshd[4014]: pam_unix(sshd:session): session closed for user core May 17 00:38:04.108211 systemd[1]: sshd@18-10.200.4.4:22-10.200.16.10:46750.service: Deactivated successfully. May 17 00:38:04.110052 systemd[1]: session-21.scope: Deactivated successfully. May 17 00:38:04.110664 systemd-logind[1494]: Session 21 logged out. Waiting for processes to exit. May 17 00:38:04.111667 systemd-logind[1494]: Removed session 21. May 17 00:38:09.203836 systemd[1]: Started sshd@19-10.200.4.4:22-10.200.16.10:53310.service. May 17 00:38:09.797089 sshd[4030]: Accepted publickey for core from 10.200.16.10 port 53310 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:38:09.799104 sshd[4030]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:38:09.805196 systemd[1]: Started session-22.scope. May 17 00:38:09.805912 systemd-logind[1494]: New session 22 of user core. May 17 00:38:10.277762 sshd[4030]: pam_unix(sshd:session): session closed for user core May 17 00:38:10.281165 systemd[1]: sshd@19-10.200.4.4:22-10.200.16.10:53310.service: Deactivated successfully. May 17 00:38:10.283203 systemd[1]: session-22.scope: Deactivated successfully. May 17 00:38:10.284190 systemd-logind[1494]: Session 22 logged out. Waiting for processes to exit. May 17 00:38:10.285550 systemd-logind[1494]: Removed session 22. May 17 00:38:15.375012 systemd[1]: Started sshd@20-10.200.4.4:22-10.200.16.10:53316.service. 
May 17 00:38:15.959275 sshd[4045]: Accepted publickey for core from 10.200.16.10 port 53316 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:38:15.960736 sshd[4045]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:38:15.967279 systemd-logind[1494]: New session 23 of user core. May 17 00:38:15.967975 systemd[1]: Started session-23.scope. May 17 00:38:16.435314 sshd[4045]: pam_unix(sshd:session): session closed for user core May 17 00:38:16.438217 systemd[1]: sshd@20-10.200.4.4:22-10.200.16.10:53316.service: Deactivated successfully. May 17 00:38:16.439557 systemd-logind[1494]: Session 23 logged out. Waiting for processes to exit. May 17 00:38:16.439646 systemd[1]: session-23.scope: Deactivated successfully. May 17 00:38:16.440859 systemd-logind[1494]: Removed session 23. May 17 00:38:16.532852 systemd[1]: Started sshd@21-10.200.4.4:22-10.200.16.10:53320.service. May 17 00:38:17.121008 sshd[4059]: Accepted publickey for core from 10.200.16.10 port 53320 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:38:17.122723 sshd[4059]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:38:17.128291 systemd[1]: Started session-24.scope. May 17 00:38:17.128536 systemd-logind[1494]: New session 24 of user core. 
May 17 00:38:18.760877 kubelet[2500]: I0517 00:38:18.760803 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-nzgqs" podStartSLOduration=186.760781387 podStartE2EDuration="3m6.760781387s" podCreationTimestamp="2025-05-17 00:35:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:35:50.639329459 +0000 UTC m=+43.762633497" watchObservedRunningTime="2025-05-17 00:38:18.760781387 +0000 UTC m=+191.884085425" May 17 00:38:18.775204 env[1513]: time="2025-05-17T00:38:18.775153675Z" level=info msg="StopContainer for \"9513bbfa32132fd51d4336a46302aad302cdd4ed0c40375eddef3fc30a372515\" with timeout 30 (s)" May 17 00:38:18.775952 env[1513]: time="2025-05-17T00:38:18.775908469Z" level=info msg="Stop container \"9513bbfa32132fd51d4336a46302aad302cdd4ed0c40375eddef3fc30a372515\" with signal terminated" May 17 00:38:18.797176 systemd[1]: run-containerd-runc-k8s.io-47be8541bbafac32fd3be61fd7e814b1f6daef034518c6fb5afc73716ac4aa4e-runc.fzZaTZ.mount: Deactivated successfully. May 17 00:38:18.825958 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9513bbfa32132fd51d4336a46302aad302cdd4ed0c40375eddef3fc30a372515-rootfs.mount: Deactivated successfully. 
May 17 00:38:18.832282 env[1513]: time="2025-05-17T00:38:18.829198554Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:38:18.835874 env[1513]: time="2025-05-17T00:38:18.835844002Z" level=info msg="StopContainer for \"47be8541bbafac32fd3be61fd7e814b1f6daef034518c6fb5afc73716ac4aa4e\" with timeout 2 (s)" May 17 00:38:18.836333 env[1513]: time="2025-05-17T00:38:18.836271499Z" level=info msg="Stop container \"47be8541bbafac32fd3be61fd7e814b1f6daef034518c6fb5afc73716ac4aa4e\" with signal terminated" May 17 00:38:18.843295 systemd-networkd[1679]: lxc_health: Link DOWN May 17 00:38:18.843302 systemd-networkd[1679]: lxc_health: Lost carrier May 17 00:38:18.874430 env[1513]: time="2025-05-17T00:38:18.874384502Z" level=info msg="shim disconnected" id=9513bbfa32132fd51d4336a46302aad302cdd4ed0c40375eddef3fc30a372515 May 17 00:38:18.874680 env[1513]: time="2025-05-17T00:38:18.874661900Z" level=warning msg="cleaning up after shim disconnected" id=9513bbfa32132fd51d4336a46302aad302cdd4ed0c40375eddef3fc30a372515 namespace=k8s.io May 17 00:38:18.874762 env[1513]: time="2025-05-17T00:38:18.874751899Z" level=info msg="cleaning up dead shim" May 17 00:38:18.887047 env[1513]: time="2025-05-17T00:38:18.886996004Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:38:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4125 runtime=io.containerd.runc.v2\n" May 17 00:38:18.891600 env[1513]: time="2025-05-17T00:38:18.891567468Z" level=info msg="StopContainer for \"9513bbfa32132fd51d4336a46302aad302cdd4ed0c40375eddef3fc30a372515\" returns successfully" May 17 00:38:18.891903 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47be8541bbafac32fd3be61fd7e814b1f6daef034518c6fb5afc73716ac4aa4e-rootfs.mount: Deactivated successfully. 
May 17 00:38:18.892516 env[1513]: time="2025-05-17T00:38:18.892490361Z" level=info msg="StopPodSandbox for \"e599a8c9071ccd1e0c66bbcbdad3bbd497105ee70a135d6431c1aad96abc783f\"" May 17 00:38:18.892685 env[1513]: time="2025-05-17T00:38:18.892662560Z" level=info msg="Container to stop \"9513bbfa32132fd51d4336a46302aad302cdd4ed0c40375eddef3fc30a372515\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:38:18.913606 env[1513]: time="2025-05-17T00:38:18.913559497Z" level=info msg="shim disconnected" id=47be8541bbafac32fd3be61fd7e814b1f6daef034518c6fb5afc73716ac4aa4e May 17 00:38:18.913854 env[1513]: time="2025-05-17T00:38:18.913831595Z" level=warning msg="cleaning up after shim disconnected" id=47be8541bbafac32fd3be61fd7e814b1f6daef034518c6fb5afc73716ac4aa4e namespace=k8s.io May 17 00:38:18.913968 env[1513]: time="2025-05-17T00:38:18.913949794Z" level=info msg="cleaning up dead shim" May 17 00:38:18.927902 env[1513]: time="2025-05-17T00:38:18.927854186Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:38:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4160 runtime=io.containerd.runc.v2\n" May 17 00:38:18.935038 env[1513]: time="2025-05-17T00:38:18.935002630Z" level=info msg="StopContainer for \"47be8541bbafac32fd3be61fd7e814b1f6daef034518c6fb5afc73716ac4aa4e\" returns successfully" May 17 00:38:18.935680 env[1513]: time="2025-05-17T00:38:18.935645325Z" level=info msg="shim disconnected" id=e599a8c9071ccd1e0c66bbcbdad3bbd497105ee70a135d6431c1aad96abc783f May 17 00:38:18.935781 env[1513]: time="2025-05-17T00:38:18.935688825Z" level=warning msg="cleaning up after shim disconnected" id=e599a8c9071ccd1e0c66bbcbdad3bbd497105ee70a135d6431c1aad96abc783f namespace=k8s.io May 17 00:38:18.935781 env[1513]: time="2025-05-17T00:38:18.935700325Z" level=info msg="cleaning up dead shim" May 17 00:38:18.936974 env[1513]: time="2025-05-17T00:38:18.936941615Z" level=info msg="StopPodSandbox for 
\"a09cefa691b3b69594f7eb4e5537379da59b76f3d0a391af7d57661ea62579f4\"" May 17 00:38:18.937162 env[1513]: time="2025-05-17T00:38:18.937120214Z" level=info msg="Container to stop \"957ebd1ac893050b6d187dda7ecf3cb85d2d5539ad92fe5fe0689fd634430f9f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:38:18.937268 env[1513]: time="2025-05-17T00:38:18.937247913Z" level=info msg="Container to stop \"2e690c473932d77a1205ed2280ff3f600967eca0e1009535895a96cf437f4699\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:38:18.937354 env[1513]: time="2025-05-17T00:38:18.937336112Z" level=info msg="Container to stop \"47be8541bbafac32fd3be61fd7e814b1f6daef034518c6fb5afc73716ac4aa4e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:38:18.937438 env[1513]: time="2025-05-17T00:38:18.937419811Z" level=info msg="Container to stop \"7e4bd93a27332d0b8ea6a3835d4829b0ba08b509f2ea4da518619db9fd564183\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:38:18.937545 env[1513]: time="2025-05-17T00:38:18.937526511Z" level=info msg="Container to stop \"ad8896d7232f78f8290631fb4f4da73a1264f07696f0626842ed237513d40d23\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:38:18.946401 env[1513]: time="2025-05-17T00:38:18.946365542Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:38:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4179 runtime=io.containerd.runc.v2\n" May 17 00:38:18.946696 env[1513]: time="2025-05-17T00:38:18.946666239Z" level=info msg="TearDown network for sandbox \"e599a8c9071ccd1e0c66bbcbdad3bbd497105ee70a135d6431c1aad96abc783f\" successfully" May 17 00:38:18.946696 env[1513]: time="2025-05-17T00:38:18.946691939Z" level=info msg="StopPodSandbox for \"e599a8c9071ccd1e0c66bbcbdad3bbd497105ee70a135d6431c1aad96abc783f\" returns successfully" May 17 00:38:18.983861 env[1513]: 
time="2025-05-17T00:38:18.983808150Z" level=info msg="shim disconnected" id=a09cefa691b3b69594f7eb4e5537379da59b76f3d0a391af7d57661ea62579f4 May 17 00:38:18.984105 env[1513]: time="2025-05-17T00:38:18.984084648Z" level=warning msg="cleaning up after shim disconnected" id=a09cefa691b3b69594f7eb4e5537379da59b76f3d0a391af7d57661ea62579f4 namespace=k8s.io May 17 00:38:18.984230 env[1513]: time="2025-05-17T00:38:18.984211947Z" level=info msg="cleaning up dead shim" May 17 00:38:18.991721 env[1513]: time="2025-05-17T00:38:18.991681589Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:38:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4211 runtime=io.containerd.runc.v2\n" May 17 00:38:18.992015 env[1513]: time="2025-05-17T00:38:18.991982086Z" level=info msg="TearDown network for sandbox \"a09cefa691b3b69594f7eb4e5537379da59b76f3d0a391af7d57661ea62579f4\" successfully" May 17 00:38:18.992107 env[1513]: time="2025-05-17T00:38:18.992014286Z" level=info msg="StopPodSandbox for \"a09cefa691b3b69594f7eb4e5537379da59b76f3d0a391af7d57661ea62579f4\" returns successfully" May 17 00:38:19.092655 kubelet[2500]: I0517 00:38:19.091253 2500 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/536cbde9-bfd7-49f1-9c86-6667b712d7aa-cilium-config-path\") pod \"536cbde9-bfd7-49f1-9c86-6667b712d7aa\" (UID: \"536cbde9-bfd7-49f1-9c86-6667b712d7aa\") " May 17 00:38:19.092655 kubelet[2500]: I0517 00:38:19.091311 2500 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5tkn\" (UniqueName: \"kubernetes.io/projected/536cbde9-bfd7-49f1-9c86-6667b712d7aa-kube-api-access-n5tkn\") pod \"536cbde9-bfd7-49f1-9c86-6667b712d7aa\" (UID: \"536cbde9-bfd7-49f1-9c86-6667b712d7aa\") " May 17 00:38:19.096249 kubelet[2500]: I0517 00:38:19.096212 2500 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/536cbde9-bfd7-49f1-9c86-6667b712d7aa-kube-api-access-n5tkn" (OuterVolumeSpecName: "kube-api-access-n5tkn") pod "536cbde9-bfd7-49f1-9c86-6667b712d7aa" (UID: "536cbde9-bfd7-49f1-9c86-6667b712d7aa"). InnerVolumeSpecName "kube-api-access-n5tkn". PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:38:19.096689 kubelet[2500]: I0517 00:38:19.096663 2500 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/536cbde9-bfd7-49f1-9c86-6667b712d7aa-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "536cbde9-bfd7-49f1-9c86-6667b712d7aa" (UID: "536cbde9-bfd7-49f1-9c86-6667b712d7aa"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 17 00:38:19.191771 kubelet[2500]: I0517 00:38:19.191723 2500 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6f048c2e-11cf-4d09-a6bb-19da01f1b299-cilium-config-path\") pod \"6f048c2e-11cf-4d09-a6bb-19da01f1b299\" (UID: \"6f048c2e-11cf-4d09-a6bb-19da01f1b299\") " May 17 00:38:19.191771 kubelet[2500]: I0517 00:38:19.191771 2500 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6f048c2e-11cf-4d09-a6bb-19da01f1b299-cni-path\") pod \"6f048c2e-11cf-4d09-a6bb-19da01f1b299\" (UID: \"6f048c2e-11cf-4d09-a6bb-19da01f1b299\") " May 17 00:38:19.192252 kubelet[2500]: I0517 00:38:19.191798 2500 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6f048c2e-11cf-4d09-a6bb-19da01f1b299-cilium-run\") pod \"6f048c2e-11cf-4d09-a6bb-19da01f1b299\" (UID: \"6f048c2e-11cf-4d09-a6bb-19da01f1b299\") " May 17 00:38:19.192252 kubelet[2500]: I0517 00:38:19.191824 2500 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/6f048c2e-11cf-4d09-a6bb-19da01f1b299-clustermesh-secrets\") pod \"6f048c2e-11cf-4d09-a6bb-19da01f1b299\" (UID: \"6f048c2e-11cf-4d09-a6bb-19da01f1b299\") " May 17 00:38:19.192252 kubelet[2500]: I0517 00:38:19.191844 2500 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6f048c2e-11cf-4d09-a6bb-19da01f1b299-hubble-tls\") pod \"6f048c2e-11cf-4d09-a6bb-19da01f1b299\" (UID: \"6f048c2e-11cf-4d09-a6bb-19da01f1b299\") " May 17 00:38:19.192252 kubelet[2500]: I0517 00:38:19.191997 2500 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t622l\" (UniqueName: \"kubernetes.io/projected/6f048c2e-11cf-4d09-a6bb-19da01f1b299-kube-api-access-t622l\") pod \"6f048c2e-11cf-4d09-a6bb-19da01f1b299\" (UID: \"6f048c2e-11cf-4d09-a6bb-19da01f1b299\") " May 17 00:38:19.192252 kubelet[2500]: I0517 00:38:19.192058 2500 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6f048c2e-11cf-4d09-a6bb-19da01f1b299-cilium-cgroup\") pod \"6f048c2e-11cf-4d09-a6bb-19da01f1b299\" (UID: \"6f048c2e-11cf-4d09-a6bb-19da01f1b299\") " May 17 00:38:19.192252 kubelet[2500]: I0517 00:38:19.192085 2500 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6f048c2e-11cf-4d09-a6bb-19da01f1b299-hostproc\") pod \"6f048c2e-11cf-4d09-a6bb-19da01f1b299\" (UID: \"6f048c2e-11cf-4d09-a6bb-19da01f1b299\") " May 17 00:38:19.192584 kubelet[2500]: I0517 00:38:19.192111 2500 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6f048c2e-11cf-4d09-a6bb-19da01f1b299-etc-cni-netd\") pod \"6f048c2e-11cf-4d09-a6bb-19da01f1b299\" (UID: \"6f048c2e-11cf-4d09-a6bb-19da01f1b299\") " May 17 00:38:19.192584 kubelet[2500]: I0517 00:38:19.192183 2500 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f048c2e-11cf-4d09-a6bb-19da01f1b299-xtables-lock\") pod \"6f048c2e-11cf-4d09-a6bb-19da01f1b299\" (UID: \"6f048c2e-11cf-4d09-a6bb-19da01f1b299\") " May 17 00:38:19.192584 kubelet[2500]: I0517 00:38:19.192215 2500 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f048c2e-11cf-4d09-a6bb-19da01f1b299-lib-modules\") pod \"6f048c2e-11cf-4d09-a6bb-19da01f1b299\" (UID: \"6f048c2e-11cf-4d09-a6bb-19da01f1b299\") " May 17 00:38:19.192584 kubelet[2500]: I0517 00:38:19.192243 2500 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6f048c2e-11cf-4d09-a6bb-19da01f1b299-host-proc-sys-net\") pod \"6f048c2e-11cf-4d09-a6bb-19da01f1b299\" (UID: \"6f048c2e-11cf-4d09-a6bb-19da01f1b299\") " May 17 00:38:19.192584 kubelet[2500]: I0517 00:38:19.192271 2500 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6f048c2e-11cf-4d09-a6bb-19da01f1b299-host-proc-sys-kernel\") pod \"6f048c2e-11cf-4d09-a6bb-19da01f1b299\" (UID: \"6f048c2e-11cf-4d09-a6bb-19da01f1b299\") " May 17 00:38:19.192584 kubelet[2500]: I0517 00:38:19.192296 2500 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6f048c2e-11cf-4d09-a6bb-19da01f1b299-bpf-maps\") pod \"6f048c2e-11cf-4d09-a6bb-19da01f1b299\" (UID: \"6f048c2e-11cf-4d09-a6bb-19da01f1b299\") " May 17 00:38:19.192899 kubelet[2500]: I0517 00:38:19.192358 2500 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5tkn\" (UniqueName: \"kubernetes.io/projected/536cbde9-bfd7-49f1-9c86-6667b712d7aa-kube-api-access-n5tkn\") on node \"ci-3510.3.7-n-21508f608f\" DevicePath \"\"" May 17 
00:38:19.192899 kubelet[2500]: I0517 00:38:19.192380 2500 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/536cbde9-bfd7-49f1-9c86-6667b712d7aa-cilium-config-path\") on node \"ci-3510.3.7-n-21508f608f\" DevicePath \"\"" May 17 00:38:19.192899 kubelet[2500]: I0517 00:38:19.192438 2500 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f048c2e-11cf-4d09-a6bb-19da01f1b299-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6f048c2e-11cf-4d09-a6bb-19da01f1b299" (UID: "6f048c2e-11cf-4d09-a6bb-19da01f1b299"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:38:19.193146 kubelet[2500]: I0517 00:38:19.193102 2500 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f048c2e-11cf-4d09-a6bb-19da01f1b299-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6f048c2e-11cf-4d09-a6bb-19da01f1b299" (UID: "6f048c2e-11cf-4d09-a6bb-19da01f1b299"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:38:19.193297 kubelet[2500]: I0517 00:38:19.193276 2500 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f048c2e-11cf-4d09-a6bb-19da01f1b299-cni-path" (OuterVolumeSpecName: "cni-path") pod "6f048c2e-11cf-4d09-a6bb-19da01f1b299" (UID: "6f048c2e-11cf-4d09-a6bb-19da01f1b299"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:38:19.193466 kubelet[2500]: I0517 00:38:19.193412 2500 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f048c2e-11cf-4d09-a6bb-19da01f1b299-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6f048c2e-11cf-4d09-a6bb-19da01f1b299" (UID: "6f048c2e-11cf-4d09-a6bb-19da01f1b299"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:38:19.195863 kubelet[2500]: I0517 00:38:19.195824 2500 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f048c2e-11cf-4d09-a6bb-19da01f1b299-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6f048c2e-11cf-4d09-a6bb-19da01f1b299" (UID: "6f048c2e-11cf-4d09-a6bb-19da01f1b299"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 17 00:38:19.196002 kubelet[2500]: I0517 00:38:19.195914 2500 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f048c2e-11cf-4d09-a6bb-19da01f1b299-hostproc" (OuterVolumeSpecName: "hostproc") pod "6f048c2e-11cf-4d09-a6bb-19da01f1b299" (UID: "6f048c2e-11cf-4d09-a6bb-19da01f1b299"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:38:19.196002 kubelet[2500]: I0517 00:38:19.195947 2500 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f048c2e-11cf-4d09-a6bb-19da01f1b299-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6f048c2e-11cf-4d09-a6bb-19da01f1b299" (UID: "6f048c2e-11cf-4d09-a6bb-19da01f1b299"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:38:19.196002 kubelet[2500]: I0517 00:38:19.195973 2500 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f048c2e-11cf-4d09-a6bb-19da01f1b299-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6f048c2e-11cf-4d09-a6bb-19da01f1b299" (UID: "6f048c2e-11cf-4d09-a6bb-19da01f1b299"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:38:19.196002 kubelet[2500]: I0517 00:38:19.195996 2500 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f048c2e-11cf-4d09-a6bb-19da01f1b299-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6f048c2e-11cf-4d09-a6bb-19da01f1b299" (UID: "6f048c2e-11cf-4d09-a6bb-19da01f1b299"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:38:19.196264 kubelet[2500]: I0517 00:38:19.196021 2500 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f048c2e-11cf-4d09-a6bb-19da01f1b299-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6f048c2e-11cf-4d09-a6bb-19da01f1b299" (UID: "6f048c2e-11cf-4d09-a6bb-19da01f1b299"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:38:19.196264 kubelet[2500]: I0517 00:38:19.196046 2500 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f048c2e-11cf-4d09-a6bb-19da01f1b299-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6f048c2e-11cf-4d09-a6bb-19da01f1b299" (UID: "6f048c2e-11cf-4d09-a6bb-19da01f1b299"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:38:19.199630 kubelet[2500]: I0517 00:38:19.199590 2500 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f048c2e-11cf-4d09-a6bb-19da01f1b299-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6f048c2e-11cf-4d09-a6bb-19da01f1b299" (UID: "6f048c2e-11cf-4d09-a6bb-19da01f1b299"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 17 00:38:19.199730 kubelet[2500]: I0517 00:38:19.199709 2500 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f048c2e-11cf-4d09-a6bb-19da01f1b299-kube-api-access-t622l" (OuterVolumeSpecName: "kube-api-access-t622l") pod "6f048c2e-11cf-4d09-a6bb-19da01f1b299" (UID: "6f048c2e-11cf-4d09-a6bb-19da01f1b299"). InnerVolumeSpecName "kube-api-access-t622l". PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:38:19.201957 kubelet[2500]: I0517 00:38:19.201933 2500 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f048c2e-11cf-4d09-a6bb-19da01f1b299-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6f048c2e-11cf-4d09-a6bb-19da01f1b299" (UID: "6f048c2e-11cf-4d09-a6bb-19da01f1b299"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:38:19.293252 kubelet[2500]: I0517 00:38:19.293213 2500 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f048c2e-11cf-4d09-a6bb-19da01f1b299-xtables-lock\") on node \"ci-3510.3.7-n-21508f608f\" DevicePath \"\"" May 17 00:38:19.293466 kubelet[2500]: I0517 00:38:19.293447 2500 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f048c2e-11cf-4d09-a6bb-19da01f1b299-lib-modules\") on node \"ci-3510.3.7-n-21508f608f\" DevicePath \"\"" May 17 00:38:19.293552 kubelet[2500]: I0517 00:38:19.293467 2500 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6f048c2e-11cf-4d09-a6bb-19da01f1b299-host-proc-sys-net\") on node \"ci-3510.3.7-n-21508f608f\" DevicePath \"\"" May 17 00:38:19.293552 kubelet[2500]: I0517 00:38:19.293481 2500 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/6f048c2e-11cf-4d09-a6bb-19da01f1b299-host-proc-sys-kernel\") on node \"ci-3510.3.7-n-21508f608f\" DevicePath \"\"" May 17 00:38:19.293552 kubelet[2500]: I0517 00:38:19.293503 2500 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6f048c2e-11cf-4d09-a6bb-19da01f1b299-bpf-maps\") on node \"ci-3510.3.7-n-21508f608f\" DevicePath \"\"" May 17 00:38:19.293552 kubelet[2500]: I0517 00:38:19.293519 2500 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6f048c2e-11cf-4d09-a6bb-19da01f1b299-cilium-config-path\") on node \"ci-3510.3.7-n-21508f608f\" DevicePath \"\"" May 17 00:38:19.293552 kubelet[2500]: I0517 00:38:19.293531 2500 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6f048c2e-11cf-4d09-a6bb-19da01f1b299-cni-path\") on node \"ci-3510.3.7-n-21508f608f\" DevicePath \"\"" May 17 00:38:19.293552 kubelet[2500]: I0517 00:38:19.293546 2500 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6f048c2e-11cf-4d09-a6bb-19da01f1b299-cilium-run\") on node \"ci-3510.3.7-n-21508f608f\" DevicePath \"\"" May 17 00:38:19.293751 kubelet[2500]: I0517 00:38:19.293558 2500 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6f048c2e-11cf-4d09-a6bb-19da01f1b299-clustermesh-secrets\") on node \"ci-3510.3.7-n-21508f608f\" DevicePath \"\"" May 17 00:38:19.293751 kubelet[2500]: I0517 00:38:19.293570 2500 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6f048c2e-11cf-4d09-a6bb-19da01f1b299-hubble-tls\") on node \"ci-3510.3.7-n-21508f608f\" DevicePath \"\"" May 17 00:38:19.293751 kubelet[2500]: I0517 00:38:19.293583 2500 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t622l\" (UniqueName: 
\"kubernetes.io/projected/6f048c2e-11cf-4d09-a6bb-19da01f1b299-kube-api-access-t622l\") on node \"ci-3510.3.7-n-21508f608f\" DevicePath \"\"" May 17 00:38:19.293751 kubelet[2500]: I0517 00:38:19.293595 2500 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6f048c2e-11cf-4d09-a6bb-19da01f1b299-cilium-cgroup\") on node \"ci-3510.3.7-n-21508f608f\" DevicePath \"\"" May 17 00:38:19.293751 kubelet[2500]: I0517 00:38:19.293607 2500 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6f048c2e-11cf-4d09-a6bb-19da01f1b299-hostproc\") on node \"ci-3510.3.7-n-21508f608f\" DevicePath \"\"" May 17 00:38:19.293751 kubelet[2500]: I0517 00:38:19.293618 2500 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6f048c2e-11cf-4d09-a6bb-19da01f1b299-etc-cni-netd\") on node \"ci-3510.3.7-n-21508f608f\" DevicePath \"\"" May 17 00:38:19.786234 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e599a8c9071ccd1e0c66bbcbdad3bbd497105ee70a135d6431c1aad96abc783f-rootfs.mount: Deactivated successfully. May 17 00:38:19.786451 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e599a8c9071ccd1e0c66bbcbdad3bbd497105ee70a135d6431c1aad96abc783f-shm.mount: Deactivated successfully. May 17 00:38:19.786613 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a09cefa691b3b69594f7eb4e5537379da59b76f3d0a391af7d57661ea62579f4-rootfs.mount: Deactivated successfully. May 17 00:38:19.786771 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a09cefa691b3b69594f7eb4e5537379da59b76f3d0a391af7d57661ea62579f4-shm.mount: Deactivated successfully. May 17 00:38:19.786942 systemd[1]: var-lib-kubelet-pods-536cbde9\x2dbfd7\x2d49f1\x2d9c86\x2d6667b712d7aa-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dn5tkn.mount: Deactivated successfully. 
May 17 00:38:19.787104 systemd[1]: var-lib-kubelet-pods-6f048c2e\x2d11cf\x2d4d09\x2da6bb\x2d19da01f1b299-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt622l.mount: Deactivated successfully. May 17 00:38:19.787285 systemd[1]: var-lib-kubelet-pods-6f048c2e\x2d11cf\x2d4d09\x2da6bb\x2d19da01f1b299-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 17 00:38:19.787434 systemd[1]: var-lib-kubelet-pods-6f048c2e\x2d11cf\x2d4d09\x2da6bb\x2d19da01f1b299-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 17 00:38:19.889653 kubelet[2500]: I0517 00:38:19.889612 2500 scope.go:117] "RemoveContainer" containerID="9513bbfa32132fd51d4336a46302aad302cdd4ed0c40375eddef3fc30a372515" May 17 00:38:19.892726 env[1513]: time="2025-05-17T00:38:19.891717304Z" level=info msg="RemoveContainer for \"9513bbfa32132fd51d4336a46302aad302cdd4ed0c40375eddef3fc30a372515\"" May 17 00:38:19.905253 env[1513]: time="2025-05-17T00:38:19.905214602Z" level=info msg="RemoveContainer for \"9513bbfa32132fd51d4336a46302aad302cdd4ed0c40375eddef3fc30a372515\" returns successfully" May 17 00:38:19.905491 kubelet[2500]: I0517 00:38:19.905471 2500 scope.go:117] "RemoveContainer" containerID="9513bbfa32132fd51d4336a46302aad302cdd4ed0c40375eddef3fc30a372515" May 17 00:38:19.905728 env[1513]: time="2025-05-17T00:38:19.905663198Z" level=error msg="ContainerStatus for \"9513bbfa32132fd51d4336a46302aad302cdd4ed0c40375eddef3fc30a372515\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9513bbfa32132fd51d4336a46302aad302cdd4ed0c40375eddef3fc30a372515\": not found" May 17 00:38:19.905853 kubelet[2500]: E0517 00:38:19.905824 2500 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9513bbfa32132fd51d4336a46302aad302cdd4ed0c40375eddef3fc30a372515\": not found" 
containerID="9513bbfa32132fd51d4336a46302aad302cdd4ed0c40375eddef3fc30a372515" May 17 00:38:19.907628 kubelet[2500]: I0517 00:38:19.905910 2500 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9513bbfa32132fd51d4336a46302aad302cdd4ed0c40375eddef3fc30a372515"} err="failed to get container status \"9513bbfa32132fd51d4336a46302aad302cdd4ed0c40375eddef3fc30a372515\": rpc error: code = NotFound desc = an error occurred when try to find container \"9513bbfa32132fd51d4336a46302aad302cdd4ed0c40375eddef3fc30a372515\": not found" May 17 00:38:19.907628 kubelet[2500]: I0517 00:38:19.906010 2500 scope.go:117] "RemoveContainer" containerID="47be8541bbafac32fd3be61fd7e814b1f6daef034518c6fb5afc73716ac4aa4e" May 17 00:38:19.908773 env[1513]: time="2025-05-17T00:38:19.908652976Z" level=info msg="RemoveContainer for \"47be8541bbafac32fd3be61fd7e814b1f6daef034518c6fb5afc73716ac4aa4e\"" May 17 00:38:19.916546 env[1513]: time="2025-05-17T00:38:19.916506017Z" level=info msg="RemoveContainer for \"47be8541bbafac32fd3be61fd7e814b1f6daef034518c6fb5afc73716ac4aa4e\" returns successfully" May 17 00:38:19.916750 kubelet[2500]: I0517 00:38:19.916716 2500 scope.go:117] "RemoveContainer" containerID="957ebd1ac893050b6d187dda7ecf3cb85d2d5539ad92fe5fe0689fd634430f9f" May 17 00:38:19.917709 env[1513]: time="2025-05-17T00:38:19.917679608Z" level=info msg="RemoveContainer for \"957ebd1ac893050b6d187dda7ecf3cb85d2d5539ad92fe5fe0689fd634430f9f\"" May 17 00:38:19.927244 env[1513]: time="2025-05-17T00:38:19.927212336Z" level=info msg="RemoveContainer for \"957ebd1ac893050b6d187dda7ecf3cb85d2d5539ad92fe5fe0689fd634430f9f\" returns successfully" May 17 00:38:19.930668 kubelet[2500]: I0517 00:38:19.930649 2500 scope.go:117] "RemoveContainer" containerID="2e690c473932d77a1205ed2280ff3f600967eca0e1009535895a96cf437f4699" May 17 00:38:19.932718 env[1513]: time="2025-05-17T00:38:19.932684195Z" level=info msg="RemoveContainer for 
\"2e690c473932d77a1205ed2280ff3f600967eca0e1009535895a96cf437f4699\"" May 17 00:38:19.944548 env[1513]: time="2025-05-17T00:38:19.944511306Z" level=info msg="RemoveContainer for \"2e690c473932d77a1205ed2280ff3f600967eca0e1009535895a96cf437f4699\" returns successfully" May 17 00:38:19.944698 kubelet[2500]: I0517 00:38:19.944667 2500 scope.go:117] "RemoveContainer" containerID="ad8896d7232f78f8290631fb4f4da73a1264f07696f0626842ed237513d40d23" May 17 00:38:19.945607 env[1513]: time="2025-05-17T00:38:19.945579898Z" level=info msg="RemoveContainer for \"ad8896d7232f78f8290631fb4f4da73a1264f07696f0626842ed237513d40d23\"" May 17 00:38:19.970211 env[1513]: time="2025-05-17T00:38:19.970173312Z" level=info msg="RemoveContainer for \"ad8896d7232f78f8290631fb4f4da73a1264f07696f0626842ed237513d40d23\" returns successfully" May 17 00:38:19.970515 kubelet[2500]: I0517 00:38:19.970439 2500 scope.go:117] "RemoveContainer" containerID="7e4bd93a27332d0b8ea6a3835d4829b0ba08b509f2ea4da518619db9fd564183" May 17 00:38:19.971662 env[1513]: time="2025-05-17T00:38:19.971633001Z" level=info msg="RemoveContainer for \"7e4bd93a27332d0b8ea6a3835d4829b0ba08b509f2ea4da518619db9fd564183\"" May 17 00:38:19.980828 env[1513]: time="2025-05-17T00:38:19.980793632Z" level=info msg="RemoveContainer for \"7e4bd93a27332d0b8ea6a3835d4829b0ba08b509f2ea4da518619db9fd564183\" returns successfully" May 17 00:38:19.981007 kubelet[2500]: I0517 00:38:19.980986 2500 scope.go:117] "RemoveContainer" containerID="47be8541bbafac32fd3be61fd7e814b1f6daef034518c6fb5afc73716ac4aa4e" May 17 00:38:19.981306 env[1513]: time="2025-05-17T00:38:19.981225629Z" level=error msg="ContainerStatus for \"47be8541bbafac32fd3be61fd7e814b1f6daef034518c6fb5afc73716ac4aa4e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"47be8541bbafac32fd3be61fd7e814b1f6daef034518c6fb5afc73716ac4aa4e\": not found" May 17 00:38:19.981472 kubelet[2500]: E0517 00:38:19.981451 2500 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"47be8541bbafac32fd3be61fd7e814b1f6daef034518c6fb5afc73716ac4aa4e\": not found" containerID="47be8541bbafac32fd3be61fd7e814b1f6daef034518c6fb5afc73716ac4aa4e" May 17 00:38:19.981555 kubelet[2500]: I0517 00:38:19.981495 2500 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"47be8541bbafac32fd3be61fd7e814b1f6daef034518c6fb5afc73716ac4aa4e"} err="failed to get container status \"47be8541bbafac32fd3be61fd7e814b1f6daef034518c6fb5afc73716ac4aa4e\": rpc error: code = NotFound desc = an error occurred when try to find container \"47be8541bbafac32fd3be61fd7e814b1f6daef034518c6fb5afc73716ac4aa4e\": not found" May 17 00:38:19.981555 kubelet[2500]: I0517 00:38:19.981523 2500 scope.go:117] "RemoveContainer" containerID="957ebd1ac893050b6d187dda7ecf3cb85d2d5539ad92fe5fe0689fd634430f9f" May 17 00:38:19.981771 env[1513]: time="2025-05-17T00:38:19.981719825Z" level=error msg="ContainerStatus for \"957ebd1ac893050b6d187dda7ecf3cb85d2d5539ad92fe5fe0689fd634430f9f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"957ebd1ac893050b6d187dda7ecf3cb85d2d5539ad92fe5fe0689fd634430f9f\": not found" May 17 00:38:19.981897 kubelet[2500]: E0517 00:38:19.981873 2500 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"957ebd1ac893050b6d187dda7ecf3cb85d2d5539ad92fe5fe0689fd634430f9f\": not found" containerID="957ebd1ac893050b6d187dda7ecf3cb85d2d5539ad92fe5fe0689fd634430f9f" May 17 00:38:19.981973 kubelet[2500]: I0517 00:38:19.981902 2500 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"957ebd1ac893050b6d187dda7ecf3cb85d2d5539ad92fe5fe0689fd634430f9f"} err="failed to get container status 
\"957ebd1ac893050b6d187dda7ecf3cb85d2d5539ad92fe5fe0689fd634430f9f\": rpc error: code = NotFound desc = an error occurred when try to find container \"957ebd1ac893050b6d187dda7ecf3cb85d2d5539ad92fe5fe0689fd634430f9f\": not found" May 17 00:38:19.981973 kubelet[2500]: I0517 00:38:19.981928 2500 scope.go:117] "RemoveContainer" containerID="2e690c473932d77a1205ed2280ff3f600967eca0e1009535895a96cf437f4699" May 17 00:38:19.982156 env[1513]: time="2025-05-17T00:38:19.982098322Z" level=error msg="ContainerStatus for \"2e690c473932d77a1205ed2280ff3f600967eca0e1009535895a96cf437f4699\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2e690c473932d77a1205ed2280ff3f600967eca0e1009535895a96cf437f4699\": not found" May 17 00:38:19.982276 kubelet[2500]: E0517 00:38:19.982254 2500 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2e690c473932d77a1205ed2280ff3f600967eca0e1009535895a96cf437f4699\": not found" containerID="2e690c473932d77a1205ed2280ff3f600967eca0e1009535895a96cf437f4699" May 17 00:38:19.982351 kubelet[2500]: I0517 00:38:19.982280 2500 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2e690c473932d77a1205ed2280ff3f600967eca0e1009535895a96cf437f4699"} err="failed to get container status \"2e690c473932d77a1205ed2280ff3f600967eca0e1009535895a96cf437f4699\": rpc error: code = NotFound desc = an error occurred when try to find container \"2e690c473932d77a1205ed2280ff3f600967eca0e1009535895a96cf437f4699\": not found" May 17 00:38:19.982351 kubelet[2500]: I0517 00:38:19.982300 2500 scope.go:117] "RemoveContainer" containerID="ad8896d7232f78f8290631fb4f4da73a1264f07696f0626842ed237513d40d23" May 17 00:38:19.982528 env[1513]: time="2025-05-17T00:38:19.982482419Z" level=error msg="ContainerStatus for \"ad8896d7232f78f8290631fb4f4da73a1264f07696f0626842ed237513d40d23\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"ad8896d7232f78f8290631fb4f4da73a1264f07696f0626842ed237513d40d23\": not found" May 17 00:38:19.982631 kubelet[2500]: E0517 00:38:19.982615 2500 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ad8896d7232f78f8290631fb4f4da73a1264f07696f0626842ed237513d40d23\": not found" containerID="ad8896d7232f78f8290631fb4f4da73a1264f07696f0626842ed237513d40d23" May 17 00:38:19.982685 kubelet[2500]: I0517 00:38:19.982638 2500 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ad8896d7232f78f8290631fb4f4da73a1264f07696f0626842ed237513d40d23"} err="failed to get container status \"ad8896d7232f78f8290631fb4f4da73a1264f07696f0626842ed237513d40d23\": rpc error: code = NotFound desc = an error occurred when try to find container \"ad8896d7232f78f8290631fb4f4da73a1264f07696f0626842ed237513d40d23\": not found" May 17 00:38:19.982685 kubelet[2500]: I0517 00:38:19.982657 2500 scope.go:117] "RemoveContainer" containerID="7e4bd93a27332d0b8ea6a3835d4829b0ba08b509f2ea4da518619db9fd564183" May 17 00:38:19.982863 env[1513]: time="2025-05-17T00:38:19.982816217Z" level=error msg="ContainerStatus for \"7e4bd93a27332d0b8ea6a3835d4829b0ba08b509f2ea4da518619db9fd564183\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7e4bd93a27332d0b8ea6a3835d4829b0ba08b509f2ea4da518619db9fd564183\": not found" May 17 00:38:19.982962 kubelet[2500]: E0517 00:38:19.982938 2500 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7e4bd93a27332d0b8ea6a3835d4829b0ba08b509f2ea4da518619db9fd564183\": not found" containerID="7e4bd93a27332d0b8ea6a3835d4829b0ba08b509f2ea4da518619db9fd564183" May 17 00:38:19.983040 kubelet[2500]: I0517 00:38:19.982964 2500 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7e4bd93a27332d0b8ea6a3835d4829b0ba08b509f2ea4da518619db9fd564183"} err="failed to get container status \"7e4bd93a27332d0b8ea6a3835d4829b0ba08b509f2ea4da518619db9fd564183\": rpc error: code = NotFound desc = an error occurred when try to find container \"7e4bd93a27332d0b8ea6a3835d4829b0ba08b509f2ea4da518619db9fd564183\": not found" May 17 00:38:20.819516 sshd[4059]: pam_unix(sshd:session): session closed for user core May 17 00:38:20.822675 systemd[1]: sshd@21-10.200.4.4:22-10.200.16.10:53320.service: Deactivated successfully. May 17 00:38:20.823850 systemd[1]: session-24.scope: Deactivated successfully. May 17 00:38:20.825723 systemd-logind[1494]: Session 24 logged out. Waiting for processes to exit. May 17 00:38:20.827242 systemd-logind[1494]: Removed session 24. May 17 00:38:20.917259 systemd[1]: Started sshd@22-10.200.4.4:22-10.200.16.10:41554.service. May 17 00:38:21.417585 kubelet[2500]: I0517 00:38:21.417542 2500 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="536cbde9-bfd7-49f1-9c86-6667b712d7aa" path="/var/lib/kubelet/pods/536cbde9-bfd7-49f1-9c86-6667b712d7aa/volumes" May 17 00:38:21.418074 kubelet[2500]: I0517 00:38:21.418060 2500 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f048c2e-11cf-4d09-a6bb-19da01f1b299" path="/var/lib/kubelet/pods/6f048c2e-11cf-4d09-a6bb-19da01f1b299/volumes" May 17 00:38:21.510661 sshd[4229]: Accepted publickey for core from 10.200.16.10 port 41554 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:38:21.512314 sshd[4229]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:38:21.517379 systemd[1]: Started session-25.scope. May 17 00:38:21.518304 systemd-logind[1494]: New session 25 of user core. 
May 17 00:38:22.375587 kubelet[2500]: E0517 00:38:22.375534 2500 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6f048c2e-11cf-4d09-a6bb-19da01f1b299" containerName="cilium-agent" May 17 00:38:22.375587 kubelet[2500]: E0517 00:38:22.375577 2500 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6f048c2e-11cf-4d09-a6bb-19da01f1b299" containerName="mount-cgroup" May 17 00:38:22.375587 kubelet[2500]: E0517 00:38:22.375587 2500 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6f048c2e-11cf-4d09-a6bb-19da01f1b299" containerName="apply-sysctl-overwrites" May 17 00:38:22.375587 kubelet[2500]: E0517 00:38:22.375595 2500 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6f048c2e-11cf-4d09-a6bb-19da01f1b299" containerName="mount-bpf-fs" May 17 00:38:22.375587 kubelet[2500]: E0517 00:38:22.375602 2500 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="536cbde9-bfd7-49f1-9c86-6667b712d7aa" containerName="cilium-operator" May 17 00:38:22.375934 kubelet[2500]: E0517 00:38:22.375613 2500 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6f048c2e-11cf-4d09-a6bb-19da01f1b299" containerName="clean-cilium-state" May 17 00:38:22.375934 kubelet[2500]: I0517 00:38:22.375645 2500 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f048c2e-11cf-4d09-a6bb-19da01f1b299" containerName="cilium-agent" May 17 00:38:22.375934 kubelet[2500]: I0517 00:38:22.375653 2500 memory_manager.go:354] "RemoveStaleState removing state" podUID="536cbde9-bfd7-49f1-9c86-6667b712d7aa" containerName="cilium-operator" May 17 00:38:22.455912 sshd[4229]: pam_unix(sshd:session): session closed for user core May 17 00:38:22.458961 systemd[1]: sshd@22-10.200.4.4:22-10.200.16.10:41554.service: Deactivated successfully. May 17 00:38:22.460267 systemd[1]: session-25.scope: Deactivated successfully. May 17 00:38:22.460305 systemd-logind[1494]: Session 25 logged out. Waiting for processes to exit. 
May 17 00:38:22.461592 systemd-logind[1494]: Removed session 25. May 17 00:38:22.511459 kubelet[2500]: I0517 00:38:22.511376 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/09d00bed-edea-46d3-ac4f-7ffbc880ae73-cilium-ipsec-secrets\") pod \"cilium-tkkgk\" (UID: \"09d00bed-edea-46d3-ac4f-7ffbc880ae73\") " pod="kube-system/cilium-tkkgk" May 17 00:38:22.512057 kubelet[2500]: I0517 00:38:22.512029 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/09d00bed-edea-46d3-ac4f-7ffbc880ae73-cilium-config-path\") pod \"cilium-tkkgk\" (UID: \"09d00bed-edea-46d3-ac4f-7ffbc880ae73\") " pod="kube-system/cilium-tkkgk" May 17 00:38:22.512196 kubelet[2500]: I0517 00:38:22.512171 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/09d00bed-edea-46d3-ac4f-7ffbc880ae73-clustermesh-secrets\") pod \"cilium-tkkgk\" (UID: \"09d00bed-edea-46d3-ac4f-7ffbc880ae73\") " pod="kube-system/cilium-tkkgk" May 17 00:38:22.512316 kubelet[2500]: I0517 00:38:22.512207 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/09d00bed-edea-46d3-ac4f-7ffbc880ae73-cni-path\") pod \"cilium-tkkgk\" (UID: \"09d00bed-edea-46d3-ac4f-7ffbc880ae73\") " pod="kube-system/cilium-tkkgk" May 17 00:38:22.512316 kubelet[2500]: I0517 00:38:22.512240 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/09d00bed-edea-46d3-ac4f-7ffbc880ae73-hostproc\") pod \"cilium-tkkgk\" (UID: \"09d00bed-edea-46d3-ac4f-7ffbc880ae73\") " pod="kube-system/cilium-tkkgk" May 17 00:38:22.512316 kubelet[2500]: I0517 00:38:22.512270 
2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/09d00bed-edea-46d3-ac4f-7ffbc880ae73-lib-modules\") pod \"cilium-tkkgk\" (UID: \"09d00bed-edea-46d3-ac4f-7ffbc880ae73\") " pod="kube-system/cilium-tkkgk" May 17 00:38:22.512316 kubelet[2500]: I0517 00:38:22.512297 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/09d00bed-edea-46d3-ac4f-7ffbc880ae73-hubble-tls\") pod \"cilium-tkkgk\" (UID: \"09d00bed-edea-46d3-ac4f-7ffbc880ae73\") " pod="kube-system/cilium-tkkgk" May 17 00:38:22.512533 kubelet[2500]: I0517 00:38:22.512325 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/09d00bed-edea-46d3-ac4f-7ffbc880ae73-bpf-maps\") pod \"cilium-tkkgk\" (UID: \"09d00bed-edea-46d3-ac4f-7ffbc880ae73\") " pod="kube-system/cilium-tkkgk" May 17 00:38:22.512533 kubelet[2500]: I0517 00:38:22.512353 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/09d00bed-edea-46d3-ac4f-7ffbc880ae73-cilium-cgroup\") pod \"cilium-tkkgk\" (UID: \"09d00bed-edea-46d3-ac4f-7ffbc880ae73\") " pod="kube-system/cilium-tkkgk" May 17 00:38:22.512533 kubelet[2500]: I0517 00:38:22.512382 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/09d00bed-edea-46d3-ac4f-7ffbc880ae73-host-proc-sys-net\") pod \"cilium-tkkgk\" (UID: \"09d00bed-edea-46d3-ac4f-7ffbc880ae73\") " pod="kube-system/cilium-tkkgk" May 17 00:38:22.512533 kubelet[2500]: I0517 00:38:22.512409 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/09d00bed-edea-46d3-ac4f-7ffbc880ae73-host-proc-sys-kernel\") pod \"cilium-tkkgk\" (UID: \"09d00bed-edea-46d3-ac4f-7ffbc880ae73\") " pod="kube-system/cilium-tkkgk" May 17 00:38:22.512533 kubelet[2500]: I0517 00:38:22.512437 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/09d00bed-edea-46d3-ac4f-7ffbc880ae73-cilium-run\") pod \"cilium-tkkgk\" (UID: \"09d00bed-edea-46d3-ac4f-7ffbc880ae73\") " pod="kube-system/cilium-tkkgk" May 17 00:38:22.512533 kubelet[2500]: I0517 00:38:22.512466 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/09d00bed-edea-46d3-ac4f-7ffbc880ae73-etc-cni-netd\") pod \"cilium-tkkgk\" (UID: \"09d00bed-edea-46d3-ac4f-7ffbc880ae73\") " pod="kube-system/cilium-tkkgk" May 17 00:38:22.512857 kubelet[2500]: I0517 00:38:22.512495 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/09d00bed-edea-46d3-ac4f-7ffbc880ae73-xtables-lock\") pod \"cilium-tkkgk\" (UID: \"09d00bed-edea-46d3-ac4f-7ffbc880ae73\") " pod="kube-system/cilium-tkkgk" May 17 00:38:22.512857 kubelet[2500]: I0517 00:38:22.512526 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm7sq\" (UniqueName: \"kubernetes.io/projected/09d00bed-edea-46d3-ac4f-7ffbc880ae73-kube-api-access-rm7sq\") pod \"cilium-tkkgk\" (UID: \"09d00bed-edea-46d3-ac4f-7ffbc880ae73\") " pod="kube-system/cilium-tkkgk" May 17 00:38:22.552624 systemd[1]: Started sshd@23-10.200.4.4:22-10.200.16.10:41570.service. 
May 17 00:38:22.562484 kubelet[2500]: E0517 00:38:22.562452 2500 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 17 00:38:22.680616 env[1513]: time="2025-05-17T00:38:22.680562220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tkkgk,Uid:09d00bed-edea-46d3-ac4f-7ffbc880ae73,Namespace:kube-system,Attempt:0,}" May 17 00:38:22.714804 env[1513]: time="2025-05-17T00:38:22.714713788Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:38:22.715116 env[1513]: time="2025-05-17T00:38:22.715070585Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:38:22.715376 env[1513]: time="2025-05-17T00:38:22.715303584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:38:22.715768 env[1513]: time="2025-05-17T00:38:22.715731481Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0ce17852f29cf688a8c512fb25e442a1dd35138fc0ec871dd68112aaa6b22101 pid=4255 runtime=io.containerd.runc.v2 May 17 00:38:22.761611 env[1513]: time="2025-05-17T00:38:22.761223171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tkkgk,Uid:09d00bed-edea-46d3-ac4f-7ffbc880ae73,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ce17852f29cf688a8c512fb25e442a1dd35138fc0ec871dd68112aaa6b22101\"" May 17 00:38:22.765245 env[1513]: time="2025-05-17T00:38:22.765202844Z" level=info msg="CreateContainer within sandbox \"0ce17852f29cf688a8c512fb25e442a1dd35138fc0ec871dd68112aaa6b22101\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 00:38:22.799514 env[1513]: time="2025-05-17T00:38:22.799454911Z" level=info msg="CreateContainer within sandbox \"0ce17852f29cf688a8c512fb25e442a1dd35138fc0ec871dd68112aaa6b22101\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"51e2a01a7c7bed7800cc4083f9faa5e2453518fec8b623c8420f41dbf88a2881\"" May 17 00:38:22.802577 env[1513]: time="2025-05-17T00:38:22.800772302Z" level=info msg="StartContainer for \"51e2a01a7c7bed7800cc4083f9faa5e2453518fec8b623c8420f41dbf88a2881\"" May 17 00:38:22.852541 env[1513]: time="2025-05-17T00:38:22.851095160Z" level=info msg="StartContainer for \"51e2a01a7c7bed7800cc4083f9faa5e2453518fec8b623c8420f41dbf88a2881\" returns successfully" May 17 00:38:22.918787 env[1513]: time="2025-05-17T00:38:22.918735499Z" level=info msg="shim disconnected" id=51e2a01a7c7bed7800cc4083f9faa5e2453518fec8b623c8420f41dbf88a2881 May 17 00:38:22.918787 env[1513]: time="2025-05-17T00:38:22.918782899Z" level=warning msg="cleaning up after shim disconnected" id=51e2a01a7c7bed7800cc4083f9faa5e2453518fec8b623c8420f41dbf88a2881 
namespace=k8s.io May 17 00:38:22.918787 env[1513]: time="2025-05-17T00:38:22.918793899Z" level=info msg="cleaning up dead shim" May 17 00:38:22.932535 env[1513]: time="2025-05-17T00:38:22.932424506Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:38:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4338 runtime=io.containerd.runc.v2\n" May 17 00:38:23.138310 sshd[4240]: Accepted publickey for core from 10.200.16.10 port 41570 ssh2: RSA SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:38:23.139784 sshd[4240]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:38:23.144993 systemd[1]: Started session-26.scope. May 17 00:38:23.145800 systemd-logind[1494]: New session 26 of user core. May 17 00:38:23.415916 kubelet[2500]: E0517 00:38:23.415592 2500 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-59wvb" podUID="17434d44-673e-495b-9960-1d5e57c596c4" May 17 00:38:23.633261 sshd[4240]: pam_unix(sshd:session): session closed for user core May 17 00:38:23.636423 systemd[1]: sshd@23-10.200.4.4:22-10.200.16.10:41570.service: Deactivated successfully. May 17 00:38:23.637291 systemd-logind[1494]: Session 26 logged out. Waiting for processes to exit. May 17 00:38:23.638733 systemd[1]: session-26.scope: Deactivated successfully. May 17 00:38:23.639683 systemd-logind[1494]: Removed session 26. May 17 00:38:23.732614 systemd[1]: Started sshd@24-10.200.4.4:22-10.200.16.10:41578.service. 
May 17 00:38:23.917884 env[1513]: time="2025-05-17T00:38:23.907504784Z" level=info msg="StopPodSandbox for \"0ce17852f29cf688a8c512fb25e442a1dd35138fc0ec871dd68112aaa6b22101\"" May 17 00:38:23.917884 env[1513]: time="2025-05-17T00:38:23.907585083Z" level=info msg="Container to stop \"51e2a01a7c7bed7800cc4083f9faa5e2453518fec8b623c8420f41dbf88a2881\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:38:23.915633 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0ce17852f29cf688a8c512fb25e442a1dd35138fc0ec871dd68112aaa6b22101-shm.mount: Deactivated successfully. May 17 00:38:23.954734 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ce17852f29cf688a8c512fb25e442a1dd35138fc0ec871dd68112aaa6b22101-rootfs.mount: Deactivated successfully. May 17 00:38:23.972730 env[1513]: time="2025-05-17T00:38:23.972673456Z" level=info msg="shim disconnected" id=0ce17852f29cf688a8c512fb25e442a1dd35138fc0ec871dd68112aaa6b22101 May 17 00:38:23.972941 env[1513]: time="2025-05-17T00:38:23.972732355Z" level=warning msg="cleaning up after shim disconnected" id=0ce17852f29cf688a8c512fb25e442a1dd35138fc0ec871dd68112aaa6b22101 namespace=k8s.io May 17 00:38:23.972941 env[1513]: time="2025-05-17T00:38:23.972744155Z" level=info msg="cleaning up dead shim" May 17 00:38:23.980666 env[1513]: time="2025-05-17T00:38:23.980627503Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:38:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4384 runtime=io.containerd.runc.v2\n" May 17 00:38:23.980977 env[1513]: time="2025-05-17T00:38:23.980944701Z" level=info msg="TearDown network for sandbox \"0ce17852f29cf688a8c512fb25e442a1dd35138fc0ec871dd68112aaa6b22101\" successfully" May 17 00:38:23.980977 env[1513]: time="2025-05-17T00:38:23.980970401Z" level=info msg="StopPodSandbox for \"0ce17852f29cf688a8c512fb25e442a1dd35138fc0ec871dd68112aaa6b22101\" returns successfully" May 17 00:38:24.028781 kubelet[2500]: I0517 00:38:24.028664 
2500 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/09d00bed-edea-46d3-ac4f-7ffbc880ae73-host-proc-sys-net\") pod \"09d00bed-edea-46d3-ac4f-7ffbc880ae73\" (UID: \"09d00bed-edea-46d3-ac4f-7ffbc880ae73\") " May 17 00:38:24.029448 kubelet[2500]: I0517 00:38:24.029419 2500 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/09d00bed-edea-46d3-ac4f-7ffbc880ae73-cilium-ipsec-secrets\") pod \"09d00bed-edea-46d3-ac4f-7ffbc880ae73\" (UID: \"09d00bed-edea-46d3-ac4f-7ffbc880ae73\") " May 17 00:38:24.029593 kubelet[2500]: I0517 00:38:24.029572 2500 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/09d00bed-edea-46d3-ac4f-7ffbc880ae73-bpf-maps\") pod \"09d00bed-edea-46d3-ac4f-7ffbc880ae73\" (UID: \"09d00bed-edea-46d3-ac4f-7ffbc880ae73\") " May 17 00:38:24.029722 kubelet[2500]: I0517 00:38:24.029705 2500 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/09d00bed-edea-46d3-ac4f-7ffbc880ae73-hostproc\") pod \"09d00bed-edea-46d3-ac4f-7ffbc880ae73\" (UID: \"09d00bed-edea-46d3-ac4f-7ffbc880ae73\") " May 17 00:38:24.029840 kubelet[2500]: I0517 00:38:24.029820 2500 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/09d00bed-edea-46d3-ac4f-7ffbc880ae73-cilium-cgroup\") pod \"09d00bed-edea-46d3-ac4f-7ffbc880ae73\" (UID: \"09d00bed-edea-46d3-ac4f-7ffbc880ae73\") " May 17 00:38:24.029960 kubelet[2500]: I0517 00:38:24.029937 2500 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/09d00bed-edea-46d3-ac4f-7ffbc880ae73-cni-path\") pod \"09d00bed-edea-46d3-ac4f-7ffbc880ae73\" (UID: 
\"09d00bed-edea-46d3-ac4f-7ffbc880ae73\") " May 17 00:38:24.030084 kubelet[2500]: I0517 00:38:24.030067 2500 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/09d00bed-edea-46d3-ac4f-7ffbc880ae73-clustermesh-secrets\") pod \"09d00bed-edea-46d3-ac4f-7ffbc880ae73\" (UID: \"09d00bed-edea-46d3-ac4f-7ffbc880ae73\") " May 17 00:38:24.030562 kubelet[2500]: I0517 00:38:24.030532 2500 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/09d00bed-edea-46d3-ac4f-7ffbc880ae73-etc-cni-netd\") pod \"09d00bed-edea-46d3-ac4f-7ffbc880ae73\" (UID: \"09d00bed-edea-46d3-ac4f-7ffbc880ae73\") " May 17 00:38:24.030672 kubelet[2500]: I0517 00:38:24.030602 2500 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/09d00bed-edea-46d3-ac4f-7ffbc880ae73-hubble-tls\") pod \"09d00bed-edea-46d3-ac4f-7ffbc880ae73\" (UID: \"09d00bed-edea-46d3-ac4f-7ffbc880ae73\") " May 17 00:38:24.030672 kubelet[2500]: I0517 00:38:24.030632 2500 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/09d00bed-edea-46d3-ac4f-7ffbc880ae73-host-proc-sys-kernel\") pod \"09d00bed-edea-46d3-ac4f-7ffbc880ae73\" (UID: \"09d00bed-edea-46d3-ac4f-7ffbc880ae73\") " May 17 00:38:24.030793 kubelet[2500]: I0517 00:38:24.030676 2500 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/09d00bed-edea-46d3-ac4f-7ffbc880ae73-xtables-lock\") pod \"09d00bed-edea-46d3-ac4f-7ffbc880ae73\" (UID: \"09d00bed-edea-46d3-ac4f-7ffbc880ae73\") " May 17 00:38:24.030793 kubelet[2500]: I0517 00:38:24.030709 2500 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rm7sq\" (UniqueName: 
\"kubernetes.io/projected/09d00bed-edea-46d3-ac4f-7ffbc880ae73-kube-api-access-rm7sq\") pod \"09d00bed-edea-46d3-ac4f-7ffbc880ae73\" (UID: \"09d00bed-edea-46d3-ac4f-7ffbc880ae73\") " May 17 00:38:24.030793 kubelet[2500]: I0517 00:38:24.030757 2500 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/09d00bed-edea-46d3-ac4f-7ffbc880ae73-cilium-config-path\") pod \"09d00bed-edea-46d3-ac4f-7ffbc880ae73\" (UID: \"09d00bed-edea-46d3-ac4f-7ffbc880ae73\") " May 17 00:38:24.030793 kubelet[2500]: I0517 00:38:24.030790 2500 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/09d00bed-edea-46d3-ac4f-7ffbc880ae73-cilium-run\") pod \"09d00bed-edea-46d3-ac4f-7ffbc880ae73\" (UID: \"09d00bed-edea-46d3-ac4f-7ffbc880ae73\") " May 17 00:38:24.031477 kubelet[2500]: I0517 00:38:24.028722 2500 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09d00bed-edea-46d3-ac4f-7ffbc880ae73-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "09d00bed-edea-46d3-ac4f-7ffbc880ae73" (UID: "09d00bed-edea-46d3-ac4f-7ffbc880ae73"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:38:24.031577 kubelet[2500]: I0517 00:38:24.031519 2500 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09d00bed-edea-46d3-ac4f-7ffbc880ae73-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "09d00bed-edea-46d3-ac4f-7ffbc880ae73" (UID: "09d00bed-edea-46d3-ac4f-7ffbc880ae73"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:38:24.031643 kubelet[2500]: I0517 00:38:24.031573 2500 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09d00bed-edea-46d3-ac4f-7ffbc880ae73-hostproc" (OuterVolumeSpecName: "hostproc") pod "09d00bed-edea-46d3-ac4f-7ffbc880ae73" (UID: "09d00bed-edea-46d3-ac4f-7ffbc880ae73"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:38:24.031643 kubelet[2500]: I0517 00:38:24.031614 2500 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09d00bed-edea-46d3-ac4f-7ffbc880ae73-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "09d00bed-edea-46d3-ac4f-7ffbc880ae73" (UID: "09d00bed-edea-46d3-ac4f-7ffbc880ae73"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:38:24.031754 kubelet[2500]: I0517 00:38:24.031642 2500 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09d00bed-edea-46d3-ac4f-7ffbc880ae73-cni-path" (OuterVolumeSpecName: "cni-path") pod "09d00bed-edea-46d3-ac4f-7ffbc880ae73" (UID: "09d00bed-edea-46d3-ac4f-7ffbc880ae73"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:38:24.031878 kubelet[2500]: I0517 00:38:24.031847 2500 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/09d00bed-edea-46d3-ac4f-7ffbc880ae73-lib-modules\") pod \"09d00bed-edea-46d3-ac4f-7ffbc880ae73\" (UID: \"09d00bed-edea-46d3-ac4f-7ffbc880ae73\") " May 17 00:38:24.031968 kubelet[2500]: I0517 00:38:24.031950 2500 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/09d00bed-edea-46d3-ac4f-7ffbc880ae73-bpf-maps\") on node \"ci-3510.3.7-n-21508f608f\" DevicePath \"\"" May 17 00:38:24.032031 kubelet[2500]: I0517 00:38:24.031979 2500 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/09d00bed-edea-46d3-ac4f-7ffbc880ae73-host-proc-sys-net\") on node \"ci-3510.3.7-n-21508f608f\" DevicePath \"\"" May 17 00:38:24.032031 kubelet[2500]: I0517 00:38:24.031998 2500 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/09d00bed-edea-46d3-ac4f-7ffbc880ae73-hostproc\") on node \"ci-3510.3.7-n-21508f608f\" DevicePath \"\"" May 17 00:38:24.032031 kubelet[2500]: I0517 00:38:24.032024 2500 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/09d00bed-edea-46d3-ac4f-7ffbc880ae73-cilium-cgroup\") on node \"ci-3510.3.7-n-21508f608f\" DevicePath \"\"" May 17 00:38:24.032218 kubelet[2500]: I0517 00:38:24.032043 2500 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/09d00bed-edea-46d3-ac4f-7ffbc880ae73-cni-path\") on node \"ci-3510.3.7-n-21508f608f\" DevicePath \"\"" May 17 00:38:24.037828 systemd[1]: var-lib-kubelet-pods-09d00bed\x2dedea\x2d46d3\x2dac4f\x2d7ffbc880ae73-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
May 17 00:38:24.044645 systemd[1]: var-lib-kubelet-pods-09d00bed\x2dedea\x2d46d3\x2dac4f\x2d7ffbc880ae73-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 17 00:38:24.044915 kubelet[2500]: I0517 00:38:24.044878 2500 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09d00bed-edea-46d3-ac4f-7ffbc880ae73-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "09d00bed-edea-46d3-ac4f-7ffbc880ae73" (UID: "09d00bed-edea-46d3-ac4f-7ffbc880ae73"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:38:24.048539 kubelet[2500]: I0517 00:38:24.048509 2500 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09d00bed-edea-46d3-ac4f-7ffbc880ae73-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "09d00bed-edea-46d3-ac4f-7ffbc880ae73" (UID: "09d00bed-edea-46d3-ac4f-7ffbc880ae73"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:38:24.048849 kubelet[2500]: I0517 00:38:24.048828 2500 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09d00bed-edea-46d3-ac4f-7ffbc880ae73-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "09d00bed-edea-46d3-ac4f-7ffbc880ae73" (UID: "09d00bed-edea-46d3-ac4f-7ffbc880ae73"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 17 00:38:24.048964 kubelet[2500]: I0517 00:38:24.048951 2500 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09d00bed-edea-46d3-ac4f-7ffbc880ae73-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "09d00bed-edea-46d3-ac4f-7ffbc880ae73" (UID: "09d00bed-edea-46d3-ac4f-7ffbc880ae73"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:38:24.049049 kubelet[2500]: I0517 00:38:24.049037 2500 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09d00bed-edea-46d3-ac4f-7ffbc880ae73-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "09d00bed-edea-46d3-ac4f-7ffbc880ae73" (UID: "09d00bed-edea-46d3-ac4f-7ffbc880ae73"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:38:24.049261 kubelet[2500]: I0517 00:38:24.049243 2500 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09d00bed-edea-46d3-ac4f-7ffbc880ae73-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "09d00bed-edea-46d3-ac4f-7ffbc880ae73" (UID: "09d00bed-edea-46d3-ac4f-7ffbc880ae73"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 17 00:38:24.049396 kubelet[2500]: I0517 00:38:24.049381 2500 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09d00bed-edea-46d3-ac4f-7ffbc880ae73-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "09d00bed-edea-46d3-ac4f-7ffbc880ae73" (UID: "09d00bed-edea-46d3-ac4f-7ffbc880ae73"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 17 00:38:24.049496 kubelet[2500]: I0517 00:38:24.049477 2500 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09d00bed-edea-46d3-ac4f-7ffbc880ae73-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "09d00bed-edea-46d3-ac4f-7ffbc880ae73" (UID: "09d00bed-edea-46d3-ac4f-7ffbc880ae73"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:38:24.058223 kubelet[2500]: I0517 00:38:24.058195 2500 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09d00bed-edea-46d3-ac4f-7ffbc880ae73-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "09d00bed-edea-46d3-ac4f-7ffbc880ae73" (UID: "09d00bed-edea-46d3-ac4f-7ffbc880ae73"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:38:24.058434 kubelet[2500]: I0517 00:38:24.058412 2500 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09d00bed-edea-46d3-ac4f-7ffbc880ae73-kube-api-access-rm7sq" (OuterVolumeSpecName: "kube-api-access-rm7sq") pod "09d00bed-edea-46d3-ac4f-7ffbc880ae73" (UID: "09d00bed-edea-46d3-ac4f-7ffbc880ae73"). InnerVolumeSpecName "kube-api-access-rm7sq". PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:38:24.132239 kubelet[2500]: I0517 00:38:24.132199 2500 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rm7sq\" (UniqueName: \"kubernetes.io/projected/09d00bed-edea-46d3-ac4f-7ffbc880ae73-kube-api-access-rm7sq\") on node \"ci-3510.3.7-n-21508f608f\" DevicePath \"\"" May 17 00:38:24.132239 kubelet[2500]: I0517 00:38:24.132232 2500 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/09d00bed-edea-46d3-ac4f-7ffbc880ae73-cilium-config-path\") on node \"ci-3510.3.7-n-21508f608f\" DevicePath \"\"" May 17 00:38:24.132239 kubelet[2500]: I0517 00:38:24.132246 2500 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/09d00bed-edea-46d3-ac4f-7ffbc880ae73-cilium-run\") on node \"ci-3510.3.7-n-21508f608f\" DevicePath \"\"" May 17 00:38:24.132469 kubelet[2500]: I0517 00:38:24.132259 2500 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/09d00bed-edea-46d3-ac4f-7ffbc880ae73-lib-modules\") on node \"ci-3510.3.7-n-21508f608f\" DevicePath \"\"" May 17 00:38:24.132469 kubelet[2500]: I0517 00:38:24.132270 2500 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/09d00bed-edea-46d3-ac4f-7ffbc880ae73-cilium-ipsec-secrets\") on node \"ci-3510.3.7-n-21508f608f\" DevicePath \"\"" May 17 00:38:24.132469 kubelet[2500]: I0517 00:38:24.132281 2500 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/09d00bed-edea-46d3-ac4f-7ffbc880ae73-clustermesh-secrets\") on node \"ci-3510.3.7-n-21508f608f\" DevicePath \"\"" May 17 00:38:24.132469 kubelet[2500]: I0517 00:38:24.132292 2500 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/09d00bed-edea-46d3-ac4f-7ffbc880ae73-etc-cni-netd\") on node \"ci-3510.3.7-n-21508f608f\" DevicePath \"\"" May 17 00:38:24.132469 kubelet[2500]: I0517 00:38:24.132302 2500 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/09d00bed-edea-46d3-ac4f-7ffbc880ae73-hubble-tls\") on node \"ci-3510.3.7-n-21508f608f\" DevicePath \"\"" May 17 00:38:24.132469 kubelet[2500]: I0517 00:38:24.132312 2500 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/09d00bed-edea-46d3-ac4f-7ffbc880ae73-host-proc-sys-kernel\") on node \"ci-3510.3.7-n-21508f608f\" DevicePath \"\"" May 17 00:38:24.132469 kubelet[2500]: I0517 00:38:24.132322 2500 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/09d00bed-edea-46d3-ac4f-7ffbc880ae73-xtables-lock\") on node \"ci-3510.3.7-n-21508f608f\" DevicePath \"\"" May 17 00:38:24.326926 sshd[4361]: Accepted publickey for core from 10.200.16.10 port 41578 ssh2: RSA 
SHA256:07CXe8ueQ4fNlYAl4hK7sSS8EcVy/wqg6UxAP3bqsIw May 17 00:38:24.326674 sshd[4361]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:38:24.332257 systemd[1]: Started session-27.scope. May 17 00:38:24.332545 systemd-logind[1494]: New session 27 of user core. May 17 00:38:24.628448 systemd[1]: var-lib-kubelet-pods-09d00bed\x2dedea\x2d46d3\x2dac4f\x2d7ffbc880ae73-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 17 00:38:24.628861 systemd[1]: var-lib-kubelet-pods-09d00bed\x2dedea\x2d46d3\x2dac4f\x2d7ffbc880ae73-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drm7sq.mount: Deactivated successfully. May 17 00:38:24.910326 kubelet[2500]: I0517 00:38:24.910297 2500 scope.go:117] "RemoveContainer" containerID="51e2a01a7c7bed7800cc4083f9faa5e2453518fec8b623c8420f41dbf88a2881" May 17 00:38:24.915699 env[1513]: time="2025-05-17T00:38:24.915445275Z" level=info msg="RemoveContainer for \"51e2a01a7c7bed7800cc4083f9faa5e2453518fec8b623c8420f41dbf88a2881\"" May 17 00:38:24.928110 env[1513]: time="2025-05-17T00:38:24.928062595Z" level=info msg="RemoveContainer for \"51e2a01a7c7bed7800cc4083f9faa5e2453518fec8b623c8420f41dbf88a2881\" returns successfully" May 17 00:38:24.955370 kubelet[2500]: E0517 00:38:24.955328 2500 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="09d00bed-edea-46d3-ac4f-7ffbc880ae73" containerName="mount-cgroup" May 17 00:38:24.955660 kubelet[2500]: I0517 00:38:24.955628 2500 memory_manager.go:354] "RemoveStaleState removing state" podUID="09d00bed-edea-46d3-ac4f-7ffbc880ae73" containerName="mount-cgroup" May 17 00:38:25.035742 kubelet[2500]: I0517 00:38:25.035691 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bf4262b9-82b2-4a35-b605-dd2a1025e013-cilium-cgroup\") pod \"cilium-trmct\" (UID: \"bf4262b9-82b2-4a35-b605-dd2a1025e013\") " 
pod="kube-system/cilium-trmct" May 17 00:38:25.036378 kubelet[2500]: I0517 00:38:25.036341 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bf4262b9-82b2-4a35-b605-dd2a1025e013-xtables-lock\") pod \"cilium-trmct\" (UID: \"bf4262b9-82b2-4a35-b605-dd2a1025e013\") " pod="kube-system/cilium-trmct" May 17 00:38:25.036495 kubelet[2500]: I0517 00:38:25.036481 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bf4262b9-82b2-4a35-b605-dd2a1025e013-hostproc\") pod \"cilium-trmct\" (UID: \"bf4262b9-82b2-4a35-b605-dd2a1025e013\") " pod="kube-system/cilium-trmct" May 17 00:38:25.036595 kubelet[2500]: I0517 00:38:25.036583 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bf4262b9-82b2-4a35-b605-dd2a1025e013-host-proc-sys-net\") pod \"cilium-trmct\" (UID: \"bf4262b9-82b2-4a35-b605-dd2a1025e013\") " pod="kube-system/cilium-trmct" May 17 00:38:25.036694 kubelet[2500]: I0517 00:38:25.036680 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8fv5\" (UniqueName: \"kubernetes.io/projected/bf4262b9-82b2-4a35-b605-dd2a1025e013-kube-api-access-q8fv5\") pod \"cilium-trmct\" (UID: \"bf4262b9-82b2-4a35-b605-dd2a1025e013\") " pod="kube-system/cilium-trmct" May 17 00:38:25.036786 kubelet[2500]: I0517 00:38:25.036775 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bf4262b9-82b2-4a35-b605-dd2a1025e013-bpf-maps\") pod \"cilium-trmct\" (UID: \"bf4262b9-82b2-4a35-b605-dd2a1025e013\") " pod="kube-system/cilium-trmct" May 17 00:38:25.036880 kubelet[2500]: I0517 00:38:25.036868 2500 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bf4262b9-82b2-4a35-b605-dd2a1025e013-cilium-config-path\") pod \"cilium-trmct\" (UID: \"bf4262b9-82b2-4a35-b605-dd2a1025e013\") " pod="kube-system/cilium-trmct" May 17 00:38:25.036970 kubelet[2500]: I0517 00:38:25.036959 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bf4262b9-82b2-4a35-b605-dd2a1025e013-cni-path\") pod \"cilium-trmct\" (UID: \"bf4262b9-82b2-4a35-b605-dd2a1025e013\") " pod="kube-system/cilium-trmct" May 17 00:38:25.037066 kubelet[2500]: I0517 00:38:25.037052 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bf4262b9-82b2-4a35-b605-dd2a1025e013-cilium-ipsec-secrets\") pod \"cilium-trmct\" (UID: \"bf4262b9-82b2-4a35-b605-dd2a1025e013\") " pod="kube-system/cilium-trmct" May 17 00:38:25.037168 kubelet[2500]: I0517 00:38:25.037155 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bf4262b9-82b2-4a35-b605-dd2a1025e013-clustermesh-secrets\") pod \"cilium-trmct\" (UID: \"bf4262b9-82b2-4a35-b605-dd2a1025e013\") " pod="kube-system/cilium-trmct" May 17 00:38:25.037267 kubelet[2500]: I0517 00:38:25.037255 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bf4262b9-82b2-4a35-b605-dd2a1025e013-etc-cni-netd\") pod \"cilium-trmct\" (UID: \"bf4262b9-82b2-4a35-b605-dd2a1025e013\") " pod="kube-system/cilium-trmct" May 17 00:38:25.037363 kubelet[2500]: I0517 00:38:25.037352 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/bf4262b9-82b2-4a35-b605-dd2a1025e013-lib-modules\") pod \"cilium-trmct\" (UID: \"bf4262b9-82b2-4a35-b605-dd2a1025e013\") " pod="kube-system/cilium-trmct" May 17 00:38:25.037519 kubelet[2500]: I0517 00:38:25.037506 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bf4262b9-82b2-4a35-b605-dd2a1025e013-cilium-run\") pod \"cilium-trmct\" (UID: \"bf4262b9-82b2-4a35-b605-dd2a1025e013\") " pod="kube-system/cilium-trmct" May 17 00:38:25.037617 kubelet[2500]: I0517 00:38:25.037604 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bf4262b9-82b2-4a35-b605-dd2a1025e013-host-proc-sys-kernel\") pod \"cilium-trmct\" (UID: \"bf4262b9-82b2-4a35-b605-dd2a1025e013\") " pod="kube-system/cilium-trmct" May 17 00:38:25.037713 kubelet[2500]: I0517 00:38:25.037699 2500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bf4262b9-82b2-4a35-b605-dd2a1025e013-hubble-tls\") pod \"cilium-trmct\" (UID: \"bf4262b9-82b2-4a35-b605-dd2a1025e013\") " pod="kube-system/cilium-trmct" May 17 00:38:25.262311 env[1513]: time="2025-05-17T00:38:25.262179037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-trmct,Uid:bf4262b9-82b2-4a35-b605-dd2a1025e013,Namespace:kube-system,Attempt:0,}" May 17 00:38:25.295279 env[1513]: time="2025-05-17T00:38:25.295212035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:38:25.295494 env[1513]: time="2025-05-17T00:38:25.295251235Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:38:25.295494 env[1513]: time="2025-05-17T00:38:25.295265135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:38:25.295494 env[1513]: time="2025-05-17T00:38:25.295417334Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e3f51bd1e934f6c178f6094bd3057bcfa49a43e12f32cdc5fbbfabf1d72fd29f pid=4422 runtime=io.containerd.runc.v2 May 17 00:38:25.330343 env[1513]: time="2025-05-17T00:38:25.330295621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-trmct,Uid:bf4262b9-82b2-4a35-b605-dd2a1025e013,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3f51bd1e934f6c178f6094bd3057bcfa49a43e12f32cdc5fbbfabf1d72fd29f\"" May 17 00:38:25.333728 env[1513]: time="2025-05-17T00:38:25.333683600Z" level=info msg="CreateContainer within sandbox \"e3f51bd1e934f6c178f6094bd3057bcfa49a43e12f32cdc5fbbfabf1d72fd29f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 00:38:25.372476 env[1513]: time="2025-05-17T00:38:25.372424763Z" level=info msg="CreateContainer within sandbox \"e3f51bd1e934f6c178f6094bd3057bcfa49a43e12f32cdc5fbbfabf1d72fd29f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6c0bb346f75668751744222be78c7d483ea67721ce78c876c2a3610ee98c5429\"" May 17 00:38:25.373287 env[1513]: time="2025-05-17T00:38:25.373185359Z" level=info msg="StartContainer for \"6c0bb346f75668751744222be78c7d483ea67721ce78c876c2a3610ee98c5429\"" May 17 00:38:25.418121 kubelet[2500]: E0517 00:38:25.416250 2500 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-59wvb" podUID="17434d44-673e-495b-9960-1d5e57c596c4" 
May 17 00:38:25.426991 kubelet[2500]: I0517 00:38:25.426955 2500 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09d00bed-edea-46d3-ac4f-7ffbc880ae73" path="/var/lib/kubelet/pods/09d00bed-edea-46d3-ac4f-7ffbc880ae73/volumes"
May 17 00:38:25.440404 env[1513]: time="2025-05-17T00:38:25.440348348Z" level=info msg="StartContainer for \"6c0bb346f75668751744222be78c7d483ea67721ce78c876c2a3610ee98c5429\" returns successfully"
May 17 00:38:25.509424 env[1513]: time="2025-05-17T00:38:25.509365127Z" level=info msg="shim disconnected" id=6c0bb346f75668751744222be78c7d483ea67721ce78c876c2a3610ee98c5429
May 17 00:38:25.509424 env[1513]: time="2025-05-17T00:38:25.509422027Z" level=warning msg="cleaning up after shim disconnected" id=6c0bb346f75668751744222be78c7d483ea67721ce78c876c2a3610ee98c5429 namespace=k8s.io
May 17 00:38:25.509723 env[1513]: time="2025-05-17T00:38:25.509433326Z" level=info msg="cleaning up dead shim"
May 17 00:38:25.516933 env[1513]: time="2025-05-17T00:38:25.516815181Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:38:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4504 runtime=io.containerd.runc.v2\n"
May 17 00:38:25.916598 env[1513]: time="2025-05-17T00:38:25.916540840Z" level=info msg="CreateContainer within sandbox \"e3f51bd1e934f6c178f6094bd3057bcfa49a43e12f32cdc5fbbfabf1d72fd29f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 17 00:38:25.956924 env[1513]: time="2025-05-17T00:38:25.956870893Z" level=info msg="CreateContainer within sandbox \"e3f51bd1e934f6c178f6094bd3057bcfa49a43e12f32cdc5fbbfabf1d72fd29f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"775e444644f007ab9bce5bdbb63c5da55ef5f7d5806db724ba972db172af9ed5\""
May 17 00:38:25.958305 env[1513]: time="2025-05-17T00:38:25.958267585Z" level=info msg="StartContainer for \"775e444644f007ab9bce5bdbb63c5da55ef5f7d5806db724ba972db172af9ed5\""
May 17 00:38:26.040791 env[1513]: time="2025-05-17T00:38:26.040733290Z" level=info msg="StartContainer for \"775e444644f007ab9bce5bdbb63c5da55ef5f7d5806db724ba972db172af9ed5\" returns successfully"
May 17 00:38:26.075301 env[1513]: time="2025-05-17T00:38:26.075245987Z" level=info msg="shim disconnected" id=775e444644f007ab9bce5bdbb63c5da55ef5f7d5806db724ba972db172af9ed5
May 17 00:38:26.075301 env[1513]: time="2025-05-17T00:38:26.075294786Z" level=warning msg="cleaning up after shim disconnected" id=775e444644f007ab9bce5bdbb63c5da55ef5f7d5806db724ba972db172af9ed5 namespace=k8s.io
May 17 00:38:26.075301 env[1513]: time="2025-05-17T00:38:26.075306286Z" level=info msg="cleaning up dead shim"
May 17 00:38:26.082816 env[1513]: time="2025-05-17T00:38:26.082763843Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:38:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4568 runtime=io.containerd.runc.v2\n"
May 17 00:38:26.629063 systemd[1]: run-containerd-runc-k8s.io-775e444644f007ab9bce5bdbb63c5da55ef5f7d5806db724ba972db172af9ed5-runc.neDfC4.mount: Deactivated successfully.
May 17 00:38:26.629315 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-775e444644f007ab9bce5bdbb63c5da55ef5f7d5806db724ba972db172af9ed5-rootfs.mount: Deactivated successfully.
May 17 00:38:26.925374 env[1513]: time="2025-05-17T00:38:26.925327285Z" level=info msg="CreateContainer within sandbox \"e3f51bd1e934f6c178f6094bd3057bcfa49a43e12f32cdc5fbbfabf1d72fd29f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 17 00:38:26.959002 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount390566147.mount: Deactivated successfully.
May 17 00:38:26.971950 env[1513]: time="2025-05-17T00:38:26.971900311Z" level=info msg="CreateContainer within sandbox \"e3f51bd1e934f6c178f6094bd3057bcfa49a43e12f32cdc5fbbfabf1d72fd29f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d8141c8ae4d30ce49ce5d4bfd8c205499f1965864b4f60586574f96306b88524\""
May 17 00:38:26.974790 env[1513]: time="2025-05-17T00:38:26.974758994Z" level=info msg="StartContainer for \"d8141c8ae4d30ce49ce5d4bfd8c205499f1965864b4f60586574f96306b88524\""
May 17 00:38:27.036104 env[1513]: time="2025-05-17T00:38:27.036051742Z" level=info msg="StartContainer for \"d8141c8ae4d30ce49ce5d4bfd8c205499f1965864b4f60586574f96306b88524\" returns successfully"
May 17 00:38:27.071094 env[1513]: time="2025-05-17T00:38:27.071039543Z" level=info msg="shim disconnected" id=d8141c8ae4d30ce49ce5d4bfd8c205499f1965864b4f60586574f96306b88524
May 17 00:38:27.071094 env[1513]: time="2025-05-17T00:38:27.071095143Z" level=warning msg="cleaning up after shim disconnected" id=d8141c8ae4d30ce49ce5d4bfd8c205499f1965864b4f60586574f96306b88524 namespace=k8s.io
May 17 00:38:27.071428 env[1513]: time="2025-05-17T00:38:27.071105943Z" level=info msg="cleaning up dead shim"
May 17 00:38:27.079084 env[1513]: time="2025-05-17T00:38:27.079036298Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:38:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4626 runtime=io.containerd.runc.v2\n"
May 17 00:38:27.416005 kubelet[2500]: E0517 00:38:27.415956 2500 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-59wvb" podUID="17434d44-673e-495b-9960-1d5e57c596c4"
May 17 00:38:27.563862 kubelet[2500]: E0517 00:38:27.563786 2500 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 17 00:38:27.628401 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8141c8ae4d30ce49ce5d4bfd8c205499f1965864b4f60586574f96306b88524-rootfs.mount: Deactivated successfully.
May 17 00:38:27.929007 env[1513]: time="2025-05-17T00:38:27.928035091Z" level=info msg="CreateContainer within sandbox \"e3f51bd1e934f6c178f6094bd3057bcfa49a43e12f32cdc5fbbfabf1d72fd29f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 17 00:38:27.970339 env[1513]: time="2025-05-17T00:38:27.970289352Z" level=info msg="CreateContainer within sandbox \"e3f51bd1e934f6c178f6094bd3057bcfa49a43e12f32cdc5fbbfabf1d72fd29f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"13167b75d7a05e01b614874c2dfb247573dae2d2a69f39d5bb58bb3ac57994c3\""
May 17 00:38:27.972068 env[1513]: time="2025-05-17T00:38:27.971049748Z" level=info msg="StartContainer for \"13167b75d7a05e01b614874c2dfb247573dae2d2a69f39d5bb58bb3ac57994c3\""
May 17 00:38:28.023348 env[1513]: time="2025-05-17T00:38:28.023298157Z" level=info msg="StartContainer for \"13167b75d7a05e01b614874c2dfb247573dae2d2a69f39d5bb58bb3ac57994c3\" returns successfully"
May 17 00:38:28.052715 env[1513]: time="2025-05-17T00:38:28.052658297Z" level=info msg="shim disconnected" id=13167b75d7a05e01b614874c2dfb247573dae2d2a69f39d5bb58bb3ac57994c3
May 17 00:38:28.052715 env[1513]: time="2025-05-17T00:38:28.052714096Z" level=warning msg="cleaning up after shim disconnected" id=13167b75d7a05e01b614874c2dfb247573dae2d2a69f39d5bb58bb3ac57994c3 namespace=k8s.io
May 17 00:38:28.052995 env[1513]: time="2025-05-17T00:38:28.052725196Z" level=info msg="cleaning up dead shim"
May 17 00:38:28.060336 env[1513]: time="2025-05-17T00:38:28.060292455Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:38:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4685 runtime=io.containerd.runc.v2\n"
May 17 00:38:28.628740 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13167b75d7a05e01b614874c2dfb247573dae2d2a69f39d5bb58bb3ac57994c3-rootfs.mount: Deactivated successfully.
May 17 00:38:28.932751 env[1513]: time="2025-05-17T00:38:28.932697706Z" level=info msg="CreateContainer within sandbox \"e3f51bd1e934f6c178f6094bd3057bcfa49a43e12f32cdc5fbbfabf1d72fd29f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 17 00:38:28.976845 env[1513]: time="2025-05-17T00:38:28.976794266Z" level=info msg="CreateContainer within sandbox \"e3f51bd1e934f6c178f6094bd3057bcfa49a43e12f32cdc5fbbfabf1d72fd29f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fe59772c8ecc5031de8d80bc49e89d48fab6e7786f670dc17b2a4486f6a049a7\""
May 17 00:38:28.977803 env[1513]: time="2025-05-17T00:38:28.977768561Z" level=info msg="StartContainer for \"fe59772c8ecc5031de8d80bc49e89d48fab6e7786f670dc17b2a4486f6a049a7\""
May 17 00:38:29.041106 env[1513]: time="2025-05-17T00:38:29.041030925Z" level=info msg="StartContainer for \"fe59772c8ecc5031de8d80bc49e89d48fab6e7786f670dc17b2a4486f6a049a7\" returns successfully"
May 17 00:38:29.389167 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 17 00:38:29.416419 kubelet[2500]: E0517 00:38:29.415793 2500 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-59wvb" podUID="17434d44-673e-495b-9960-1d5e57c596c4"
May 17 00:38:29.628995 systemd[1]: run-containerd-runc-k8s.io-fe59772c8ecc5031de8d80bc49e89d48fab6e7786f670dc17b2a4486f6a049a7-runc.1Smsuc.mount: Deactivated successfully.
May 17 00:38:30.821121 systemd[1]: run-containerd-runc-k8s.io-fe59772c8ecc5031de8d80bc49e89d48fab6e7786f670dc17b2a4486f6a049a7-runc.K12lH5.mount: Deactivated successfully.
May 17 00:38:31.416275 kubelet[2500]: E0517 00:38:31.416228 2500 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-59wvb" podUID="17434d44-673e-495b-9960-1d5e57c596c4"
May 17 00:38:32.153496 systemd-networkd[1679]: lxc_health: Link UP
May 17 00:38:32.166157 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 17 00:38:32.163565 systemd-networkd[1679]: lxc_health: Gained carrier
May 17 00:38:32.172162 kubelet[2500]: I0517 00:38:32.168714 2500 setters.go:600] "Node became not ready" node="ci-3510.3.7-n-21508f608f" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-17T00:38:32Z","lastTransitionTime":"2025-05-17T00:38:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 17 00:38:33.001878 systemd[1]: run-containerd-runc-k8s.io-fe59772c8ecc5031de8d80bc49e89d48fab6e7786f670dc17b2a4486f6a049a7-runc.ssSMy9.mount: Deactivated successfully.
May 17 00:38:33.325486 kubelet[2500]: I0517 00:38:33.325327 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-trmct" podStartSLOduration=9.325305544999999 podStartE2EDuration="9.325305545s" podCreationTimestamp="2025-05-17 00:38:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:38:29.955401644 +0000 UTC m=+203.078705682" watchObservedRunningTime="2025-05-17 00:38:33.325305545 +0000 UTC m=+206.448609683"
May 17 00:38:33.582397 systemd-networkd[1679]: lxc_health: Gained IPv6LL
May 17 00:38:39.644682 sshd[4361]: pam_unix(sshd:session): session closed for user core
May 17 00:38:39.648090 systemd[1]: sshd@24-10.200.4.4:22-10.200.16.10:41578.service: Deactivated successfully.
May 17 00:38:39.649241 systemd[1]: session-27.scope: Deactivated successfully.
May 17 00:38:39.650015 systemd-logind[1494]: Session 27 logged out. Waiting for processes to exit.
May 17 00:38:39.651771 systemd-logind[1494]: Removed session 27.